Building AI Evaluator Prompts in Sessionboard
Learn how to create powerful AI evaluator prompts in Sessionboard to streamline speaker submission reviews.
The launch of Sessionboard’s AI Evaluations Assistant opens a new chapter in conference programming — where reviewing speaker submissions becomes faster, more consistent, and more scalable.
But here’s the key: even the best AI can only follow your lead.
That’s why crafting a clear, contextual, and well-structured evaluator prompt is so important. Think of it like onboarding a new member of your content team: the more background and direction you give, the better their work will be.
This guide will show you how to write a powerful prompt, break down every section, and offer a plug-and-play template you can use today.
Your AI evaluator needs to understand what kind of event it’s reviewing for. Don’t make it guess. Think of this like briefing a new reviewer.
What to include:
The event type and theme
Your intended audience and their roles
Your goals for the event content
Pro tip: In your event setup, you can now add this information. Approach it the way you would onboard a new coworker: Who is your intended audience? What goals do you have for the event content? An AI evaluating sessions for a technical DevOps conference needs different context than one reviewing talks for a marketing summit.
"You're evaluating sessions for our annual sustainability conference focused on practical solutions for mid-size businesses. Our audience includes sustainability directors, operations managers, and C-suite executives looking for actionable strategies they can implement within 6 months."
Create 3–5 AI personas that simulate a diverse review committee. Each persona should reflect a different perspective on what makes content great.
Write each persona like you'd introduce them to a colleague: include role, values, expertise, and preferred content style. Use specific keywords and adjectives that reflect their personality.
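For instance, a persona description might read like this (the details below are illustrative, not a built-in template):

"You are a senior academic with two decades of experience reviewing conference submissions. You value methodological rigor, evidence-based claims, and clearly defined learning outcomes, and you always frame critiques as constructive, specific feedback the speaker can act on."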
In this example, we’ve created a senior academic who values providing constructive feedback. Other options could include mid-level experts who value hands-on approaches, or emerging voices who spot new trends and fresh ideas. You can create as many as you need to get a realistic sample of diverse perspectives.
This detailed instruction set ensures the AI knows what to prioritize, how to score, and how to communicate clearly.
You are acting as an evaluator for a ___________________ conference. The audience includes _____________ business leaders, ______________ managers, and ____________________ professionals looking for highly practical, solution-oriented content they can implement within the next 6–12 months.
Assist the event team by reviewing session submissions and scoring them consistently based on defined evaluation criteria. For each session, analyze the following:
Relevance to Event Theme
Is the session clearly aligned with the event's tracks or goals?
Speaker Credibility
Does the speaker have qualifications, past presentations, or affiliations that make them credible?
Clarity of Proposal
Is the session abstract clearly written with defined learning outcomes?
Audience Engagement Potential
Is the session likely to draw interest or participation from attendees based on this event's theme?
Originality
Is the topic fresh, innovative, or providing a unique point of view?
Scoring Method
1 = Poor / Not aligned
3 = Neutral / Adequate
5 = Excellent / Highly aligned
Aim to score most sessions between 3 and 5. Only score below 3 if the session completely lacks information in the description.
You can give partial decimal scores, e.g., "3.5"
Average the scores to determine the overall recommendation:
4.0+ = Recommend for acceptance
3.0-3.9 = Neutral, may need human review
Below 3.0 = Do not recommend
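To see how the averaging works, consider a hypothetical session that scores 4 for Relevance, 3.5 for Speaker Credibility, 4 for Clarity, 5 for Engagement Potential, and 4.5 for Originality: the average is (4 + 3.5 + 4 + 5 + 4.5) / 5 = 4.2, which lands in the "Recommend for acceptance" band.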
Notes and Comments
Add a brief comment (~1-2 sentences) explaining the score, especially for low or high ratings
Do not favor sessions based on speaker identity, organization, or popularity unless it's explicitly part of the evaluation logic.
Focus strictly on the content submitted. Do not elaborate, hallucinate, or infer information beyond what is provided.
It is permitted to point out missing information (e.g., "More emphasis on this point would make a stronger submission.")
Additional context: [ANY SPECIAL CONSIDERATIONS]
Sessionboard lets you add specific questions for each persona to answer. These help the AI dive deeper than the scores alone.
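For example, you might ask (these sample questions are illustrative, not built into the product): "What single change would most strengthen this submission?" or "Which audience segment would get the most value from this session, and why?"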
You can run the same session through multiple AI personas and compare the outputs side by side. One might praise the practicality, while another questions the originality. That tension is exactly what leads to stronger agendas and better speaker coaching.
Build a Smarter, More Balanced Review Process with an AI "Review Committee"
In the real world, the best program committees aren’t made up of one kind of thinker. They’re intentionally diverse: blending analytical minds, member champions, skeptics, visionaries, and subject matter experts. Why? Because great content isn’t one-dimensional.
With Sessionboard’s AI Evaluations Assistant, you can mirror that same balance by creating multiple AI Evaluator Personas — each with a distinct lens. Think of it as your very own panel of AI Rivals: colleagues who don’t always agree, but collectively sharpen the decisions being made.
Instead of relying on a single AI voice, you can simulate a conversation between multiple evaluators — each one asking different questions, highlighting different risks, and surfacing different opportunities.
It’s not about getting every persona to agree — it’s about making better, more informed programming decisions.
The takeaway? Don’t stop at one evaluator. Build a team of intelligent, complementary AI reviewers — and let them pressure-test your speaker submissions from every angle.
Let your AI Evaluations Assistant do what humans do best: disagree productively.
Organizational Mission and Vision Alignment: Does the speaker have a background or expertise that aligns well with the mission and vision of the organization?
To help the AI Assistant look for that correlation, you may need to share the mission, vision, or strategic goals of your event or organization in the prompt, and then structure the initial submission form or a follow-up form to collect indicators of that value alignment.
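As a sketch, that prompt addition might read like this (the bracketed text is a placeholder for your own mission statement):

"Our organization's mission is [MISSION STATEMENT]. When scoring Organizational Mission and Vision Alignment, look for evidence in the speaker's bio and session description (such as prior work, affiliations, or stated outcomes) that supports this mission."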
Your evaluator prompts are critical
The more care you put into crafting your evaluator prompt, the more value you get from AI. Investing just 15 minutes upfront will save hours of manual review time and lead to stronger, more aligned programming.
Want a deeper look before the demo? Visit our Knowledge Center for a comprehensive overview of the AI Evaluator's capabilities.