How To Run A Call For Papers That Actually Works
Learn how to run a call for papers that attracts great submissions
Every conference season, event organizers face the same wall: you open your inbox, post a form link somewhere, and wait. Submissions trickle in. Reviewers respond on different timelines. Feedback lives in email threads, scorecards in spreadsheets, and follow-up messages in someone's personal inbox.
By the time you're ready to finalize your agenda, you've spent more time managing the process than evaluating the content.
A call for papers — or CFP — is supposed to surface your best speakers and sessions. When it's run well, it's one of the highest-leverage activities in your entire event calendar. When it's not, it becomes one of the biggest time drains your team faces each cycle.
This guide covers how to structure a CFP from the ground up: from writing the brief to closing submissions, evaluating sessions at scale, and building a process your team can actually repeat.
A call for papers (CFP) is an open invitation to speakers, practitioners, researchers, or subject-matter experts to submit session ideas for consideration at a conference, event, or summit. The term originates in academia but is now standard across corporate conferences, technology events, associations, and industry summits of all sizes.
The CFP is the front door to your content program. Everything downstream — your agenda, your speaker lineup, your event marketing, your attendee experience — starts with what comes in through that door.
Most event organizers treat the CFP as a logistics task. The teams that get the most out of it treat it as a content strategy.
When a CFP is well-designed, you don't just receive submissions — you attract the right speakers, filter for quality early, and set up a review process that gives you a defensible, high-quality agenda with less last-minute scrambling.
When it isn't, you spend weeks chasing down information, resolving committee disagreements over incomplete submissions, and rebuilding the process from scratch next year.
The most common reason a call for papers produces disappointing results isn't promotion; it's clarity. Submitters don't know what you want, so they send you what they have.
Before you publish anything, get alignment on three things.
Your audience, precisely. Not "enterprise tech professionals" but specifically: what role, what level, what problems are they trying to solve this year? The more precisely you can define your audience, the more precisely submitters can pitch to them.
Your content mix. How many sessions are you programming? What formats — keynotes, breakouts, workshops, panels? What ratio of beginner to advanced content? What topics are overrepresented in your inbox every year that you want less of, and what are you actively trying to find more of?
Your evaluation criteria. Before submissions open, define what a great submission looks like. This matters for two reasons: it helps submitters self-select and keeps your review committee aligned as scoring begins. Common criteria include: relevance to the audience, speaker credibility, originality of angle, and practical applicability.
Getting this right before launch is not extra work — it's the work that prevents four rounds of committee revisions later.
The CFP brief is your first filter. A well-written brief attracts submissions that are easier to evaluate and far more likely to produce a great agenda. A vague brief invites everything, which means more review time and a higher rejection rate.
Lead with your audience, not your event. Most CFP briefs open with the event name, date, and a paragraph about the organization. Flip it. Start with who will be in the room and what they care about this year. Speakers need to picture the audience before they can pitch to them.
Be specific about what you will not accept. Every CFP brief should include an explicit note about session types you're not programming: vendor pitches, introductory-level content if your event skews advanced, and topics covered heavily last year. This saves everyone time.
Give submitters a format to follow. The more structured your submission form, the easier your evaluation process becomes. Ask for: a session title, a one-paragraph abstract, three to five key takeaways the attendee will leave with, a speaker bio, and any supporting materials (past talk recordings, writing samples). Avoid open-ended prompts; structured fields make scoring consistent.
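If your team scripts any part of intake, the structured fields above map naturally to a simple record with a completeness check. A minimal sketch in Python (the field names and the rule of three to five takeaways follow the guidance above; everything else is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    """One CFP submission, using the structured fields described above."""
    title: str
    abstract: str                        # one-paragraph pitch
    takeaways: list[str]                 # three to five attendee takeaways
    speaker_bio: str
    supporting_links: list[str] = field(default_factory=list)  # recordings, writing samples

    def is_complete(self) -> bool:
        """Basic completeness check before a submission enters review."""
        return bool(
            self.title
            and self.abstract
            and self.speaker_bio
            and 3 <= len(self.takeaways) <= 5
        )
```

A structure like this is what makes downstream scoring consistent: every reviewer sees the same fields in the same shape.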
Set honest timelines. If your review process typically takes six weeks, say six weeks. Submitters plan around these dates. Unexplained delays damage your relationship with the speaker community before the event even starts.
Tell speakers what happens after they submit. Will everyone receive a decision? By what date? Will you provide feedback on rejected submissions? Transparency here builds trust and increases the quality of your next CFP.
Once submissions open, the operational challenge shifts from marketing to management. For events that receive dozens or hundreds of session submissions, tracking, routing, and reviewing them is where most teams lose control.
Centralize everything from day one. Every submission needs to land in one place, visible to every reviewer who needs it. Spreadsheets create version control problems the moment a second person opens them. Shared inboxes create accountability gaps. A single platform that collects, tags, and routes all session submissions eliminates both.
Assign reviewers early and clearly. Each submission should have a named reviewer (or panel of reviewers) responsible for scoring it by a specific date. Ambiguous ownership is the single biggest cause of review delays. If a submission sits unassigned for two weeks, no one is to blame, which means no one fixes it.
Use consistent scoring criteria. This sounds obvious, but most review committees discover mid-process that their criteria mean different things to different people. A "5 out of 5 for audience relevance" from one reviewer and a "3 out of 5" from another, on the same submission, often reflects an alignment problem rather than a genuine disagreement about quality. Calibration sessions, where the full committee scores a sample batch together before real scoring begins, reduce this dramatically.
Track status at all times. Every submission should have a live status: received, under review, approved, waitlisted, rejected, or confirmed. Status visibility reduces the number of "where does this stand?" conversations by orders of magnitude and ensures nothing falls through the cracks between submission close and agenda lock.
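The status lifecycle above can be enforced in code so a submission never skips a stage or moves backward unnoticed. A sketch using the statuses listed above (the exact transition map is an illustrative choice, not a standard):

```python
# Allowed status transitions for a submission. The states mirror the
# lifecycle described above; which jumps are legal is an assumption.
TRANSITIONS = {
    "received": {"under_review"},
    "under_review": {"approved", "waitlisted", "rejected"},
    "waitlisted": {"approved", "rejected"},
    "approved": {"confirmed"},
    "rejected": set(),    # terminal
    "confirmed": set(),   # terminal
}

def advance(current: str, new: str) -> str:
    """Move a submission to a new status, rejecting invalid jumps."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move from {current!r} to {new!r}")
    return new
```

Even if your tracking lives in a platform rather than a script, the same idea applies: an explicit state machine is what turns "where does this stand?" into a lookup instead of a conversation.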
Build an appeals process. For large events with competitive CFPs, borderline submissions are inevitable. Having a defined process for escalating close calls, rather than resolving them informally, keeps the committee fair and the process defensible.
Scoring hundreds of session submissions is genuinely difficult work. The best CFP evaluation processes share a few traits.
Blind review where possible. Removing speaker names and credentials from the initial scoring round reduces the influence of name recognition and surfaces content quality as the primary factor. This is especially important for events trying to diversify their speaker lineups beyond the usual circuit.
Separate content quality from speaker qualification. Evaluate the session idea first — is the topic compelling? Is the framing original? Will attendees leave with something actionable? Then evaluate the speaker — do they have the expertise and platform presence to deliver on the pitch? Conflating these two questions into a single score introduces noise into your data.
Use weighted scoring for what matters most. If audience relevance is twice as important as speaker originality for your event, build that into the scoring system. Don't average a five-point scale across all criteria and pretend the results are meaningful.
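Weighted scoring is just a weighted average; the point is to make the weights explicit rather than implied. A small example (the criteria echo those listed earlier in this guide, but the specific weights are hypothetical):

```python
# Per-criterion weights: audience relevance counts double originality here.
# These values are illustrative; set them to match your event's priorities.
WEIGHTS = {
    "audience_relevance": 2.0,
    "speaker_credibility": 1.0,
    "originality": 1.0,
    "practicality": 1.5,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted average of 1-5 criterion scores; higher is better."""
    total_weight = sum(WEIGHTS[c] for c in scores)
    return sum(scores[c] * WEIGHTS[c] for c in scores) / total_weight
```

With weights like these, a session that scores 5 on audience relevance but 3 on originality ranks well above the reverse, which is exactly the signal a flat average would erase.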
Give reviewers a deadline and mean it. Without a firm deadline, scoring always takes longer than planned. Build a buffer of three to five business days between your internal review deadline and your decision communication date — enough time to resolve edge cases without crashing your timeline.
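The buffer above is simple date arithmetic, but weekends make it easy to get wrong by hand. A quick sketch of adding business days in Python (the example dates are illustrative):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Return the date `days` business days (Mon-Fri) after `start`."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # weekday() is 0-4 for Mon-Fri
            days -= 1
    return current

# e.g. a five-business-day buffer between the internal review deadline
# and the decision communication date (dates chosen for illustration)
review_deadline = date(2025, 3, 7)   # a Friday
decision_date = add_business_days(review_deadline, 5)
```

Five business days after a Friday lands on the Friday of the following week, a full calendar week later, which is worth remembering when you publish decision dates.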
Document the decisions. Every rejected submission should have a reason logged — even a brief one. This matters for two reasons: it protects your committee if a speaker follows up to ask why, and it creates institutional memory for the next cycle ("we tend to reject sessions on this topic because X").
The way you communicate CFP decisions is a direct signal to the speaker community about how your event is run. A thoughtful rejection builds goodwill. A form email with no context, or worse, no communication at all, loses a speaker for years.
Send every decision, including rejections. This sounds basic, but a meaningful percentage of CFPs never close the loop with declined submitters. Every person who submitted invested time. They deserve a response.
Be prompt. The longer the gap between submission close and decision, the more submitters have made other plans. For competitive events, top speakers receive multiple invitations. Delayed decisions cost you first-choice speakers.
Personalize where you can. For waitlisted or borderline submissions, a brief note about why they weren't selected this year — and an invitation to apply next year — is one of the highest-ROI communication investments in your whole event cycle. It converts a rejection into a relationship.
Make it easy for confirmed speakers to say yes. The acceptance confirmation should include everything a speaker needs to commit to: session date and time (if known), format and duration, next steps, and a clear point of contact for questions. The faster a speaker can confirm, the faster your agenda firms up.
A call for papers doesn't end at selection — it ends when your agenda is live, your speakers are onboarded, and your content is ready to market. Most of the time lost in the back half of the CFP process comes from the same place: fragmented handoffs between the evaluation phase and the speaker management phase.
When submissions live in one system and speaker onboarding lives somewhere else (a different spreadsheet, a different inbox, a different platform), information has to be re-entered, re-confirmed, and re-chased. Speaker bios submitted during the CFP are requested again during onboarding. Session titles change between acceptance and the agenda going live, and no one's sure which version is current.
The most efficient CFP processes carry the submission data — title, abstract, speaker details, review notes — directly into the speaker management workflow. Accepted speakers move from "approved" to "onboarding" without anyone having to re-enter a row. Session details flow from the evaluation platform to the agenda builder without a copy-paste step.
That continuity is where the real time savings live. Not in reviewing submissions faster — but in eliminating the manual handoff between the end of the CFP and the start of speaker onboarding.
Sessionboard is the next-generation speaker and content management platform built for exactly this: managing the complete lifecycle from session submission through published agenda — without the spreadsheet overhead.
With Sessionboard, the call for papers process works end-to-end in one place.
The result: less time on administration, more time on programming — and a content workflow your team can actually repeat at scale.
A great call for papers doesn't happen by accident. It starts with clarity before you launch, earns quality submissions through a well-written brief, manages volume through a consistent process, and closes the loop in ways that build your reputation with the speaker community over time.
The teams that run CFPs well — year after year, at scale — aren't working harder. They're working with a process that's built to repeat. And increasingly, they're doing it with a platform that handles the administration so the team can focus on the programming.
If you're heading into a new CFP cycle, [see how Sessionboard handles the full submission and review workflow →]
What's the difference between a call for papers and a call for speakers?
A call for papers (CFP) originated in academic and research contexts, where the emphasis was on written submissions and peer-reviewed content. A call for speakers typically refers to a broader invitation for session pitches, demonstrations, or presentations without a formal paper requirement. In practice, many corporate and industry conferences use "call for papers" and "call for speakers" interchangeably — both refer to an open invitation to submit sessions.
How long should a call for papers be open?
Most conferences keep their CFP open for four to eight weeks. Shorter windows limit the number of submissions; longer windows tend to produce last-minute rushes that compress the review timeline. A four-to-six-week submission window, followed by a four-to-six-week review period, is a common and workable structure for mid- to large-scale events.
How many reviewers should score each submission?
For most events, two to three reviewers per submission is the right balance — enough to catch individual bias without creating committee paralysis. High-stakes or highly competitive CFPs sometimes use four to five reviewers per session, with a defined escalation path for close decisions.
Should we accept vendor session submissions?
This depends entirely on your event model. If your conference is vendor-sponsored, some vendor-led content is expected. If your audience comes for practitioner knowledge, vendor pitches erode trust quickly. The most common approach: accept vendor submissions but hold them to the same content quality bar as any other submission, with an explicit prohibition on product pitches from the stage.
How do we handle speaker cancellations after the CFP closes?
Build a waitlist during your review process. For every accepted session, have one or two backup sessions ready to slot in — ideally from the same topic area so the swap is seamless for attendees. Communicate your cancellation policy clearly at the time of acceptance, including timelines and any obligations on both sides.