5 min read

How to Build AI Speaker Evaluator Assistants to Transform the Session Selection Process

Event teams can now review large volumes of session submissions quickly and accurately with consistent scoring, tailored feedback, and real-time dashboards. Sessionboard’s AI Evaluations Assistant is built to speed up review cycles, reduce manual effort, and improve the quality of programming decisions.

It all starts with the right setup. With customizable evaluator personas, structured rubrics, and a centralized dashboard, your team can move faster and smarter without losing the depth and context that make content great.

Why This Guide Is Different

If you have read our guide on Building AI Evaluator Prompts, this article picks up where that one left off. While the prompts guide focuses on writing clear, contextual instructions for the evaluator, this piece shows you how to structure the personas themselves and set up your full evaluation workflow inside Sessionboard.

Why Evaluator Personas Matter

Behind every AI review is a structured persona: an evaluator with a name, tone, preferences, and criteria. These personas help simulate diverse perspectives and ensure that submissions are reviewed with consistency and context.

Sessionboard lets you create evaluator personas that reflect your event’s values, audience, and content strategy. Whether you are looking for bold ideas, practical takeaways, or strong alignment with your theme, your persona setup determines how sessions are reviewed.

Read more about Evaluator Personas here

Step 1: Theme

Add your event's theme directly in the Settings panel. This step is important because it equips AI evaluators with key insights about your event objectives. Describe the focus, audience, and intended takeaways just like you would when briefing a new reviewer. The more detail you include here, the better your personas can contextualize submissions.

Example:

You are evaluating sessions for our annual sustainability conference focused on practical solutions for mid-size businesses. Our audience includes sustainability directors, operations managers, and C-suite executives looking for strategies they can implement within six months.

Learn how to enhance your evaluations with complete event details here

Step 2: Detail the Persona

Each persona includes:

  • Name
  • Title or Role
  • Short Bio
  • Feedback Style (positive, constructive, or neutral)
  • Likes (qualities or themes this persona values in a session)
  • Dislikes (red flags they commonly watch out for)

Create three to five distinct personas that reflect different reviewer mindsets. For example:

  • The Industry Veteran: prioritizes hands-on, actionable takeaways
  • The Innovation Scout: seeks out bold, future-focused ideas
  • The Audience Advocate: evaluates based on attendee needs
  • The Program Curator: looks for content variety and session flow
  • The Skeptical Practitioner: challenges vague or theoretical proposals

For tips on building strong evaluator instructions, our prompt-building guide can help you frame each reviewer’s expectations.
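
To make this concrete, here is a minimal illustrative sketch of how one of these reviewer mindsets might be captured as structured data. The field names mirror the setup form above, but the format itself is hypothetical and not Sessionboard's configuration schema:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluatorPersona:
    """Illustrative persona record; fields mirror the setup form described above."""
    name: str
    role: str
    bio: str
    feedback_style: str  # "positive", "constructive", or "neutral"
    likes: list[str] = field(default_factory=list)
    dislikes: list[str] = field(default_factory=list)

# Hypothetical example of one reviewer mindset from the list above.
industry_veteran = EvaluatorPersona(
    name="The Industry Veteran",
    role="Former operations director with 20 years in the field",
    bio="Values sessions attendees can put to work the Monday after the event.",
    feedback_style="constructive",
    likes=["hands-on takeaways", "real case studies", "clear implementation steps"],
    dislikes=["vendor pitches", "theory without application"],
)
```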

Step 3: Create the Evaluation Plan

Once your personas are ready, create an Evaluation Plan using the Evaluations module:

  1. Choose Type: Select Virtual Evaluators
  2. Configure Settings: Name your plan and add instructions
  3. Select Evaluators: Assign your custom personas
  4. Filter Sessions: Choose the sessions you want to evaluate by selecting criteria such as levels, languages, tracks, tags, format, and status. For example, you might filter by Track = "Cybersecurity"
  5. Display Fields: Decide which details are visible during the review process. To help your personas make informed evaluations, include key information like the session title, description, and speaker background such as job title, company, and bio.
  6. Set Grading Options: Start by selecting a clear scoring scale using icons like faces, hearts, stars, or numbers. Then, if desired, create a weighted rubric to break down the evaluation by specific criteria such as relevance, originality, and speaker authority.

Example Rubric Weights:

  • Relevance to Audience: 40%
  • Originality of Topic: 30%
  • Speaker Authority: 20%
  • Storytelling Potential: 10%

Bonus tip: If you are evaluating technical sessions, you might want to swap storytelling for clarity or practical depth.
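
To see how a weighted rubric rolls up into a single score, here is a quick sketch using the example weights above and a 1-5 scale per criterion. The scores are made up and the arithmetic is illustrative, not Sessionboard's exact formula:

```python
# Example rubric weights from above; criterion scores assumed to be on a 1-5 scale.
weights = {
    "Relevance to Audience": 0.40,
    "Originality of Topic": 0.30,
    "Speaker Authority": 0.20,
    "Storytelling Potential": 0.10,
}

# Hypothetical scores one persona might give a single session.
scores = {
    "Relevance to Audience": 5,
    "Originality of Topic": 3,
    "Speaker Authority": 4,
    "Storytelling Potential": 4,
}

weighted_total = sum(weights[c] * scores[c] for c in weights)
print(f"Weighted score: {weighted_total:.2f} / 5")  # Weighted score: 4.10 / 5
```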

Step 4: Add Smart Evaluator Questions

Each persona can also answer specific questions to provide deeper insight:

  • How directly does this session address a challenge faced by our audience?
  • What is the biggest risk in selecting this session?
  • Would this content spark strong engagement or discussion?
  • Is the content too broad or too niche for our audience?
  • Could this speaker be suited for other formats like webinars or pre-event content?

These responses give you more than a score. They provide insight you can use to shape your agenda.
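
If you export those answers, a lightweight way to scan them across sessions is to keep the question text alongside each persona's response. A hypothetical sketch, with session titles and answers invented for illustration:

```python
# Hypothetical export rows: (session title, persona, question, answer)
answers = [
    ("Scaling Net-Zero Operations", "The Skeptical Practitioner",
     "What is the biggest risk in selecting this session?",
     "The proposal stays high-level; ask the speaker for a concrete rollout plan."),
    ("Circular Supply Chains 101", "The Audience Advocate",
     "What is the biggest risk in selecting this session?",
     "May be too introductory for sustainability directors."),
]

# Pull every 'biggest risk' answer into one view for agenda planning.
risk_question = "What is the biggest risk in selecting this session?"
for session, persona, question, answer in answers:
    if question == risk_question:
        print(f"{session} ({persona}): {answer}")
```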

Step 5: Review the Results

Once your evaluations are generated, head to your Evaluation Plan to explore the results. Plans that use Virtual Evaluators are marked with a blue icon, so you can quickly spot them.

Click the ellipsis in the Actions column to review session feedback or export a report.

You’ll see two export options:

  • Individual Grades Report: Shows a breakdown of scores by evaluator, including comments and how each criterion was rated
  • Cumulative Grades Report: Provides a summary of session-level scores along with session details like the title and submitter information

These reports help you understand not just how sessions were scored, but why. They provide the foundation for confident, informed programming decisions.
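
If you pull the Individual Grades Report and want to reproduce the cumulative roll-up yourself, the core step is simply averaging each session's scores across evaluators. Here is a small sketch with made-up data; the columns shown are assumptions for illustration, not the exact report layout:

```python
from collections import defaultdict
from statistics import mean

# Made-up rows in the shape of an individual grades export:
# (session title, evaluator persona, overall score on a 1-5 scale)
individual_grades = [
    ("Scaling Net-Zero Operations", "The Industry Veteran", 4.5),
    ("Scaling Net-Zero Operations", "The Innovation Scout", 3.5),
    ("Scaling Net-Zero Operations", "The Audience Advocate", 4.0),
    ("Circular Supply Chains 101", "The Industry Veteran", 3.0),
    ("Circular Supply Chains 101", "The Innovation Scout", 4.5),
]

by_session = defaultdict(list)
for session, persona, score in individual_grades:
    by_session[session].append(score)

# Cumulative-style summary: average score and evaluator count per session.
for session, session_scores in by_session.items():
    print(f"{session}: {mean(session_scores):.2f} avg across {len(session_scores)} evaluators")
```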

For a broader view across all plans, visit the Evaluation Summary Dashboard. It highlights top-performing sessions, completion progress, evaluator activity, and outliers that received mixed feedback. This dashboard helps you quickly spot what’s working and what needs attention. For a full breakdown of the metrics and visualizations available, take a look at our Evaluation Summary Dashboard post.

Build a Smarter Review Panel

Do not stop at one persona. Build a diverse panel so you can:

  • Let evaluators disagree
  • Surface blind spots
  • View submissions from multiple perspectives

Stronger evaluation starts with better setup. With clear event context, detailed personas, and thoughtful prompts, you reduce review time and make smarter programming decisions.

To Sum Up

AI evaluator personas in Sessionboard streamline the session review process by aligning evaluations with your event goals. They provide consistency, speed, and real insight whether you are reviewing 50 or 500 submissions. With the right configuration, teams can score, sort, and select better content faster.

Want to improve how your team evaluates speaker submissions? Request a demo to see how Sessionboard makes the submission review process faster, clearer, and more strategic.

Mario Azuaje

Product Marketing