AI critiques
Storymakers-style reviews of every deck.
Each deck is reviewed by an AI editor through the Storymakers lens — narrative arc, opening hook, closing call-to-action, and action-title quality. Every review includes a one-line verdict, top strengths and weaknesses, and three concrete fixes.
1086 reviewed decks · mean score 43.8
Charts: most common opening verb across 3405 suggestions · Top 5 on closing · Toughest critiques
Verdict gallery
- “A solid, clearly-structured Roland Berger advocacy deck with declarative titles and a punchy close — useful as a Storymakers exemplar for action-title discipline and section dividers, but not for opening hooks or tight SCQA framing.” — RolandBerger, 2022
- “A disciplined Deloitte industry POV with a strong answer-first opening and a rallying close — usable as a Storymakers exemplar for S→C→A→R framing and call-to-action craft, but the middle analytical pillars are a cautionary tale on MECE sprawl and topic-label titles.” — Deloitte, 2021
- “A well-structured thought-leadership report with a clean six-pillar MECE spine and mostly insight-bearing body titles — use its divider architecture as a Storymakers exemplar, but not its opening or its generically-titled recommendations.” — Deloitte, 2022
- “Polished investor-day deck with strong action titles and a clean opening/closing thesis pair, but missing an explicit Complication and pillar signposting — use the title craft and closing pages as exemplars, not the overall narrative architecture.” — JPMorgan, 2022
- “A competent investor-day deck with strong quantified action titles and a clean closing arc, but front-matter-heavy and missing explicit MECE pillars — useful as a teaching example for action-title craft (p.9, p.13), not for overall structure.” — JPMorgan, 2025
- “Solid, disciplined analytical consulting report with a clean MECE five-finding spine and a rare, well-built closing playbook — use the recommendation slides (p.25, p.31, p.41) as action-title exemplars, but not the persona or data sections, where titles regress to topic labels.” — Accenture, 2019
- “A solidly-built thought-leadership report with answer-first framing and a clear call to action, but over-long openings and under-signposted middle acts keep it from being a Storymakers exemplar — use p.22-30 as a teaching example of analysis-to-recommendation flow, not the deck's overall structure.” — Accenture, 2022
- “A competently structured Accenture thought-leadership report with a clean four-act story and a strong closing call to action — useful as a teaching example for section architecture and audience-segmented recommendations, but its delayed thesis and figure-caption titles keep it out of Storymakers-exemplar territory.” — Accenture, 2025
All reviewed decks
1086 matching · page 46 of 46
12 · closing
HR Pulse Survey Presentation of results
“A competently organized survey reference document, not a Storymakers deck — useful as a negative example of how topic-ordered analytical dumps bury the insight and skip the recommendation act entirely.”
↓ Zero recommendations or 'so what' slides across 59 pages — the deck is 49 consecutive analyze_data slides with no resolution act
12 · closing
ipsos hisf world affairs report 2023 final
“A topic-indexed survey data dump with strong parallel structure but no thesis, no recommendation, and titles that are mostly category labels — use it as a counter-example of how to publish findings without a story, not as a Storymakers exemplar.”
↓ No executive summary, key-findings page, or recommendation anywhere in 92 pages — the insight-per-slide ratio is close to zero for a reader skimming titles
12 · closing
International Women's Day 2023 full report
“A clean, well-segmented IPSOS research report that leads with findings but ends without a recommendation — useful as a teaching example of disciplined section architecture and well-written callouts, but a cautionary example of titles-as-survey-questions and missing 'so what' resolution.”
↓ Action titles are survey questions, not insights — p.16, p.17, p.18, p.19, p.20 all share the title 'To what extent do you agree or disagree with the following statement?'
12 · closing
Review of efficiency of the operation of the federal courts
“This is an educational primer on how the U.S. federal courts work — not a consulting argument — and serves as a counter-example for Storymakers, useful only to illustrate what happens when a deck has topic labels but no thesis, analysis, or recommendation.”
↓ Action titles carry zero insight — every slide title is a noun phrase (e.g. p.10 'THE JURISDICTION OF THE FEDERAL COURTS', p.23 'The Appeals Process'); a reader skimming titles learns nothing.
10 · closing
Second Quarter 2023 Results
“This is an earnings-disclosure deck, not a consulting argument — topic-label titles, no SCQA arc, and a closing half built entirely of reconciliation tables; useful as a counter-example of what Storymakers principles are designed to replace, not as an exemplar.”
↓ Zero action titles across 25 pages — 'Non-GAAP P&L', 'Research Metrics', 'Capital Structure and Allocation' are all category labels that force the reader to mine the chart for the point
8 · closing
gol 6
“This is a financial-product fact sheet with disclaimers, not a Storymakers consulting narrative — useful only as a counter-example of what happens when a document has no action titles, no arc, and no recommendation.”
↓ Action titles are entirely absent — every page header is a product code or firm name (p1-11), so the deck has no insight scaffolding