8 min read

Building AI Evaluator Prompts in Sessionboard

Learn how to create powerful AI evaluator prompts in Sessionboard to streamline speaker submission reviews.

Introduction

The launch of Sessionboard’s AI Evaluations Assistant opens a new chapter in conference programming — where reviewing speaker submissions becomes faster, more consistent, and more scalable.

But here’s the key: even the best AI can only follow your lead.

That’s why crafting a clear, contextual, and well-structured evaluator prompt is so important. Think of it like onboarding a new member of your content team: the more background and direction you give, the better their work will be.

This guide will show you how to write a powerful prompt, break down every section, and offer a plug-and-play template you can use today.

[Screenshot: Sessionboard AI Evaluators]

Step 1: Provide Event Context

Your AI evaluator needs to understand what kind of event it’s reviewing for. Don’t make it guess. Think of this like briefing a new reviewer.

What to include:

  • The conference theme and target audience
  • The desired tone or strategic goals of the content
  • Session formats or tracks (if applicable)

Pro tip: In your event setup, you can now add this information. Think of it the way you would onboarding a new coworker. Who is your intended audience? What goals do you have for the event content? An AI evaluating sessions for a technical DevOps conference needs different context than one reviewing talks for a marketing summit.

Example:

"You're evaluating sessions for our annual sustainability conference focused on practical solutions for mid-size businesses. Our audience includes sustainability directors, operations managers, and C-suite executives looking for actionable strategies they can implement within 6 months."

Step 2: Define Evaluator Personas

Create 3–5 AI personas that simulate a diverse review committee. Each persona should reflect a different perspective on what makes content great.

Persona Examples:

  • The Industry Veteran — prioritizes hands-on, actionable takeaways
  • The Innovation Scout — seeks out bold, future-focused ideas
  • The Audience Advocate — evaluates sessions based on attendee needs
  • The Program Curator — considers flow and variety across the agenda
  • The Skeptical Practitioner — challenges vague or theoretical proposals

[Screenshot: Sessionboard AI Evaluators | AI Personas]

Write each persona like you’d introduce them to a colleague: include role, values, expertise, and preferred content style. Use specific keywords and adjectives that reflect their personality.

In this example, we’ve created a senior academic who values providing constructive feedback. Other options could include mid-level experts who value hands-on approaches, or emerging voices who spot new trends and fresh ideas. You can create as many as you need to get a realistic sample of diverse perspectives.

[Screenshot: Sessionboard AI Evaluators | Persona Example]
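If you like to draft outside the app first, you can sketch your panel as simple structured notes before pasting each description into Sessionboard. The example below is one way to do that in Python; the field names and helper function are purely illustrative and not a Sessionboard schema.

```python
# Illustrative only: a rough way to draft a persona panel before entering
# each description in Sessionboard. The field names are not a Sessionboard schema.

personas = [
    {
        "name": "The Industry Veteran",
        "role": "an operations leader with 20+ years in the field",
        "values": "hands-on, actionable takeaways over pure theory",
        "keywords": ["practical", "proven", "implementation-ready"],
    },
    {
        "name": "The Innovation Scout",
        "role": "an emerging-technology researcher",
        "values": "bold, future-focused ideas and fresh points of view",
        "keywords": ["novel", "forward-looking", "experimental"],
    },
    {
        "name": "The Skeptical Practitioner",
        "role": "a senior practitioner who has sat through many vague pitches",
        "values": "concrete evidence and clearly defined outcomes",
        "keywords": ["specific", "measurable", "credible"],
    },
]

def describe_persona(p: dict) -> str:
    """Turn one entry into the kind of paragraph you would paste into the evaluator setup."""
    return (
        f"You are {p['name']}, {p['role']}. You value {p['values']}. "
        f"Favor submissions that are {', '.join(p['keywords'])}."
    )

for p in personas:
    print(describe_persona(p), end="\n\n")
```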

Step 3: Use the Quick-Start Prompt Template

This detailed instruction set ensures the AI knows what to prioritize, how to score, and how to communicate clearly.

You are acting as an evaluator for a ___________________ conference. The audience includes ____________ business leaders, ____________ managers, and ____________ professionals looking for highly practical, solution-oriented content they can implement within the next 6–12 months.
Assist the event team by reviewing session submissions and scoring them consistently based on defined evaluation criteria. For each session, analyze the following:

Relevance to Event Theme
Is the session clearly aligned with the event's tracks or goals?

Speaker Credibility
Does the speaker have qualifications, past presentations, or affiliations that make them credible?

Clarity of Proposal
Is the session abstract clearly written with defined learning outcomes?

Audience Engagement Potential
Is the session likely to draw interest or participation from attendees based on this event's theme?

Originality
Is the topic fresh, innovative, or providing a unique point of view?

Scoring Method
1 = Poor / Not aligned
3 = Neutral / Adequate
5 = Excellent / Highly aligned

Aim to score most sessions between 3 and 5. Only score below 3 if the session completely lacks information in the description.
You can give partial decimal scores, e.g., "3.5".

Average the scores to determine the overall recommendation:
4.0+ = Recommend for acceptance
3.0-3.9 = Neutral, may need human review
Below 3.0 = Do not recommend

Notes and Comments
Add a brief comment (~1-2 sentences) explaining the score, especially for low or high ratings.

Do not favor sessions based on speaker identity, organization, or popularity unless it's explicitly part of the evaluation logic.

Focus strictly on the content submitted.

Focus on the details submitted and do not elaborate, hallucinate, or infer information. Stick to the information you have.

It is permitted to point out missing information (e.g., "More emphasis on this point would make a stronger submission.")
Additional context: [ANY SPECIAL CONSIDERATIONS]
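To double-check how the rubric above rolls up into an overall recommendation, here is a minimal sketch of the averaging and threshold logic in Python. The criterion scores are made-up examples, not output from the AI Evaluations Assistant.

```python
# Minimal sketch of the averaging and threshold logic from the template above.
# The criterion scores are hypothetical examples.

scores = {
    "Relevance to Event Theme": 4.5,
    "Speaker Credibility": 4.0,
    "Clarity of Proposal": 3.5,
    "Audience Engagement Potential": 4.0,
    "Originality": 3.0,
}

average = sum(scores.values()) / len(scores)  # 3.8 for these example scores

if average >= 4.0:
    recommendation = "Recommend for acceptance"
elif average >= 3.0:
    recommendation = "Neutral, may need human review"
else:
    recommendation = "Do not recommend"

print(f"Average score: {average:.1f} -> {recommendation}")
```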

Step 4: Add Smart Evaluator Questions

Sessionboard lets you add specific questions for each persona to answer. These help the AI dive deeper than the scores alone.

Sample High-Impact Questions:

  • How directly does this session address a real challenge faced by our target audience?
  • How original is the speaker’s point of view or approach?
  • What is the biggest risk in selecting this session for the agenda?
  • Is this session likely to generate strong engagement or discussion?
  • Does the abstract clearly articulate what the audience will walk away with?
  • Would this session be better suited as a workshop, panel, or solo talk — and why?
  • Is the content actionable, theoretical, or too generic?
  • Is this topic too niche or too broad for our audience?
  • Is there a mismatch between speaker experience and proposed session depth?
  • Could this speaker be a candidate for other formats (webinar, pre-event content)?

Step 5: Advanced Prompting Tips

  • Avoid "kitchen sink" prompts. Keep the instruction clean and intentional.
  • Include audience level. A beginner talk might miss the mark for a senior audience.
  • Reference speaker bios. Let the AI validate alignment between speaker and topic.
  • Keep your criteria consistent. Session #1 and #100 should be judged the same way.
  • Use bell curve logic. Remind the AI to score most sessions in the 3.0–4.0 range unless clearly outstanding or weak.

Pro Tip: Compare and Combine

You can run the same session through multiple AI personas and compare the outputs side by side. One might praise the practicality, while another questions the originality. That tension is exactly what leads to stronger agendas and better speaker coaching.
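Sessionboard handles this comparison for you in the app. If you ever want to prototype the same compare-and-combine pattern on your own, the sketch below shows the idea using the OpenAI Python SDK as a stand-in backend; the model name, persona briefs, and session abstract are placeholders.

```python
# Prototype of the compare-and-combine pattern outside Sessionboard's UI.
# The OpenAI Python SDK is used here purely as an example backend; the model
# name, persona briefs, and session abstract are placeholders.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona_briefs = {
    "The Audience Advocate": "You evaluate sessions based on attendee needs.",
    "The Skeptical Practitioner": "You challenge vague or theoretical proposals.",
}

session_abstract = "..."  # paste the submission abstract here

def evaluate(name: str, brief: str, abstract: str) -> str:
    """Ask one persona to review the same submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": f"You are {name}. {brief}"},
            {"role": "user", "content": f"Review this session abstract:\n{abstract}"},
        ],
    )
    return response.choices[0].message.content

# Run every persona over the same session and read the reviews side by side.
for name, brief in persona_briefs.items():
    print(f"--- {name} ---")
    print(evaluate(name, brief, session_abstract))
```

Reading the answers next to each other is what surfaces the productive disagreement the committee approach below is built on.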

Build a Smarter, More Balanced Review Process with an AI “Review Committee”

In the real world, the best program committees aren’t made up of one kind of thinker. They’re intentionally diverse: blending analytical minds, member champions, skeptics, visionaries, and subject matter experts. Why? Because great content isn’t one-dimensional.

With Sessionboard’s AI Evaluations Assistant, you can mirror that same balance by creating multiple AI Evaluator Personas — each with a distinct lens. Think of it as your very own panel of AI Rivals: colleagues who don’t always agree, but collectively sharpen the decisions being made.

Instead of relying on a single AI voice, you can simulate a conversation between multiple evaluators — each one asking different questions, highlighting different risks, and surfacing different opportunities.

Example AI Personas to Include in Your Evaluation Panel

[Screenshot: Sessionboard AI Evaluators | AI Personas to Include]

Why This Works So Well

  • You see the session from multiple angles, not just a binary “good or bad” lens.
  • You reduce blind spots — what one persona might miss, another will flag.
  • You create a more nuanced understanding of why a session should be included, improved, or passed on.
  • You simulate the friction and richness of a real committee — without the scheduling headaches.

It’s not about getting every persona to agree — it’s about making better, more informed programming decisions.

The takeaway? Don’t stop at one evaluator. Build a team of intelligent, complementary AI reviewers — and let them pressure-test your speaker submissions from every angle.

Let your AI Evaluations Assistant do what humans do best: disagree productively.

One other item worth prompting your AI persona to consider:

Organizational Mission and Vision Alignment: Does the speaker have a background or expertise that aligns well with the mission and vision of the organization?

To help the AI Assistant look for that alignment, you may need to share the mission, vision, or strategic goals of the event or organization in the prompt, and then structure the initial submission form or a follow-up form to collect indicators of that value alignment.

The Bottom Line

Your evaluator prompts are critical.
The more care you put into crafting them, the more value you get from AI. Investing just 15 minutes upfront will save hours of manual review time and lead to stronger, more aligned programming.

Want a deeper look before the demo? Visit our Knowledge Center for a comprehensive overview of the AI Evaluator's capabilities.

Chris Carver

CEO & Co-Founder