11 Tough Truths About a Manual Session Selection and Evaluation Process (And Why It’s Time for a Smarter Way)
Why we built a new way to evaluate session and speaker submissions in your call for content.
If you’ve ever managed a call for speakers, you already know: the process of evaluating hundreds—if not thousands—of session proposals is exhausting. But here’s the thing no one talks about:
It’s not your fault. It’s the system.
The current process wasn’t built for the scale, speed, or complexity of modern events. And most event pros haven’t questioned it - because, until now, there hasn’t really been a better way.
At Sessionboard, our team has spent months interviewing conference organizers, event strategists, and content teams to map out the hidden friction in today’s submission review process. What we found was a set of persistent, recurring pain points - some you’ve likely accepted as “just the way it is.”
But what if we didn’t have to?
Let’s break down the 11 most common - and costly - challenges we uncovered:
When you receive 500+ proposals, even a simple review process becomes a full-time job. Your team’s time vanishes, and the pressure to cut corners grows.
When 500 session proposals feels like 5,000.
Even the most dedicated reviewers hit a wall. Fatigue leads to rushed scores, skimmed proposals, and overlooked gems. That’s why many organizers limit how much an evaluator can review - they know attention spans wane. It’s also why some shut down their call for papers early… but that introduces another risk: outdated content in a fast-moving market.
A killer submission can still be lost if it’s reviewed by someone lacking the right expertise or context. Without cross-evaluator calibration, even great content gets buried.
You shouldn’t have to send 12 reminder emails just to get someone to score 15 sessions. Managing the process often takes more time than the evaluations themselves.
Even with detailed instructions and a clear rubric, getting full alignment is tough. One evaluator may value practical, consistent content; another may favor bold, visionary ideas. Some interpret “actionable” as a checklist, others as an inspiring new direction. Despite your best efforts to guide them, each evaluator brings their own lens—which means the same proposal could receive wildly different scores, depending on who's reading it.
The result? Great content gets overlooked - not because it missed the mark, but because the mark keeps moving.
Redundancy. Gaps. Overlap. Most tools don’t let you see the whole program evolve in real time, so you catch these problems late, when it’s harder (or politically riskier) to make changes. An AI Evaluations tool, by contrast, can surface gaps, conflicts, similarities, and even opportunities in your content strategy.
Favoritism, name recognition, and over-familiarity skew results—often unintentionally. The same speakers return year after year, not necessarily because they’re the best, but because they’re familiar. True innovation gets filtered out before it has a chance.
Why was a session accepted? Why was another rejected? Without a clear record of how decisions were made—or who made them—your team loses institutional memory, and your speakers lose trust in the process. The reality is, some speakers may not be a fit for prime time today but could be worth nurturing for the future. Transparency lets you give more thoughtful feedback or coaching and preserve the relationship for future events, webinars, or content opportunities. Keep in mind that poor feedback, or no feedback at all, can damage your brand, frustrate submitters, and discourage future participation.
Some submissions only get one quick review—and that’s the final word. Without a system that allows for second looks or alternate points of view, strong proposals can be unfairly dismissed or misunderstood. The truth is, it’s hard to evaluate every session from multiple angles—strategic, technical, creative, audience fit—and when that doesn’t happen, important nuances get missed. The result: more risk of letting a great session fall through the cracks or letting a weak one through unchecked.
When only one set of eyes sees a submission, the margin for error gets a lot bigger.
Between email chains, Google Sheets, and duct-taped workflows, most review processes are held together by sheer willpower. This invites errors, miscommunication, and time loss.
If your submission review system feels like a scavenger hunt, you’re not alone.
Every year, the cycle restarts: New evaluators, new chaos, same problems. You reinvent the wheel with every call for papers.
These issues shape the quality, diversity, and relevance of your entire program. And they’ve been accepted for far too long.
That’s why we built the first ever AI Session Evaluations Assistant - a new way to evaluate session and speaker submissions.
AI Evaluations allow you to review hundreds of submissions without burning out your team, score every proposal against consistent criteria, spot gaps and redundancies across your program in real time, and keep a clear record of how and why decisions were made.
It’s not about replacing humans. It’s about supporting them with smarter tools, built for the way great events are run today.
Book a demo today to experience the new standard in session and speaker selection.