5 min read

11 Tough Truths About a Manual Session Selection and Evaluation Process (And Why It’s Time for a Smarter Way)

Why we built a new way to evaluate session and speaker submissions in your call for content.

It Doesn't Have To Be "The Way It Is"

If you’ve ever managed a call for speakers, you already know: the process of evaluating hundreds—if not thousands—of session proposals is exhausting. But here’s the thing no one talks about:

It’s not your fault. It’s the system.

The current process wasn’t built for the scale, speed, or complexity of modern events. And most event pros haven’t questioned it - because, until now, there hasn’t really been a better way.

At Sessionboard, our team has spent months interviewing conference organizers, event strategists, and content teams to map out the hidden friction in today’s submission review process. What we found was a set of persistent, recurring pain points - some you’ve likely accepted as “just the way it is.”

But what if we didn’t have to?

Let’s break down the 11 most common - and costly - challenges we uncovered:

1. Overwhelming Volume of Submissions

When you receive 500+ proposals, even a simple review process becomes a full-time job. Your team’s time vanishes, and the pressure to cut corners grows.

When 500 session proposals feel like 5,000.

2. Evaluator Fatigue & Drop-Off

Even the most dedicated reviewers hit a wall. Fatigue leads to rushed scores, skimmed proposals, and overlooked gems. That’s why many organizers limit how much an evaluator can review - they know attention spans wane. It’s also why some shut down their call for papers early… but that introduces another risk: outdated content in a fast-moving market.

3. Missed Gems

A killer submission can still be lost if it’s reviewed by someone lacking the right expertise or context. Without cross-evaluator calibration, even great content gets buried.

4. Herding Evaluators = Project Management Nightmare

You shouldn’t have to send 12 reminder emails just to get someone to score 15 sessions. Managing the process often takes more time than the evaluations themselves.

5. Lack of Alignment (Scoring, Goals, etc.)

Even with detailed instructions and a clear rubric, getting full alignment is tough. One evaluator may value practical, consistent content; another may favor bold, visionary ideas. Some interpret “actionable” as a checklist, others as an inspiring new direction. Despite your best efforts to guide them, each evaluator brings their own lens—which means the same proposal could receive wildly different scores, depending on who's reading it.

The result? Great content gets overlooked - not because it missed the mark, but because the mark keeps moving.

6. Poor Pattern Recognition (No Bird’s-Eye View)

Redundancy. Gaps. Overlap. Most tools don’t let you see the whole program evolve in real time, so you catch these problems late, when it’s harder (or politically risky) to make changes. An AI Evaluations tool, by contrast, can look for gaps, conflicts, similarities, and even opportunities in your content strategy.

7. Hidden or Unconscious Bias

Favoritism, name recognition, and over-familiarity skew results—often unintentionally. The same speakers return year after year, not necessarily because they’re the best, but because they’re familiar. True innovation gets filtered out before it has a chance.

8. Limited Transparency or Audit Trail

Why was a session accepted? Why was another rejected? Without a clear record of how decisions were made - or who made them - your team loses institutional memory, and your speakers lose trust in the process.

The reality is, some speakers may not be a fit for prime time today but could be worth nurturing for the future. Transparency lets you give more thoughtful feedback or coaching and preserve the relationship for future events, webinars, or content opportunities. Keep in mind that poor feedback, or no feedback at all, can damage your brand, frustrate submitters, and discourage future participation.

9. No Second Opinion or Diverse Perspectives

Some submissions only get one quick review—and that’s the final word. Without a system that allows for second looks or alternate points of view, strong proposals can be unfairly dismissed or misunderstood. The truth is, it’s hard to evaluate every session from multiple angles—strategic, technical, creative, audience fit—and when that doesn’t happen, important nuances get missed. The result: more risk of letting a great session fall through the cracks or letting a weak one through unchecked.

When only one set of eyes sees a submission, the margin for error gets a lot bigger.

10. Manual Processes, Frankenstacks & Spreadsheets

Between email chains, Google Sheets, and duct-taped workflows, most review processes are held together by sheer willpower. This invites errors, miscommunication, and time loss.

If your submission review system feels like a scavenger hunt, you're not alone.

11. Repeat Pain Every Event Cycle

Every year, the cycle restarts: new evaluators, new chaos, same problems. You reinvent the wheel with every call for papers.

So, What Now?

These issues shape the quality, diversity, and relevance of your entire program. And they’ve been accepted for far too long.

That’s why we built the first-ever AI Session Evaluations Assistant - a new way to evaluate session and speaker submissions.

AI Evaluations allow you to:

  • Build custom evaluator personas that tailor expertise, feedback style, and more
  • Assign AI Evaluators to the right sessions based on their expertise
  • Greatly accelerate session selection - evaluate sessions in minutes
  • Spend more time elevating your agenda

It’s not about replacing humans. It’s about supporting them with smarter tools, built for the way great events are run today.

👉 Ready to See the Difference?

Book a demo today to experience the new standard in session and speaker selection.

Chris Carver

CEO & Co-Founder