AI critiques

Storymakers-style reviews of every deck.

Each deck is reviewed by an AI editor through the Storymakers lens (narrative arc, opening hook, closing call-to-action, and action-title quality), with a one-line verdict, top strengths and weaknesses, and three concrete fixes per deck.

1086 reviewed decks · mean score 61.6

Filtered reviewed decks

30 matching decks · page 2 of 2
Title quality: 42
Accenture · 2025 · 67p
Accenture Tech Vision 2025
“A well-structured thought-leadership report with genuine MECE pillars and strong evidence cadence, but it buries its insights in generic section labels and fades into an appendix instead of landing a recommendation — useful as a teaching example for pillar architecture, not for action titling or closing.”
↓ Duplicated titles ('The Big Picture', 'The Technology', 'What's Next', 'A Portrait of the Future') recur in every section, making the deck unscannable and forcing readers to rely on callouts
Title quality: 42
Morgan Stanley · 2025 · 53p
EY Ireland FS Research Report
“A structurally disciplined research report with clean MECE pillars and a repeatable evidence→recommendation pattern — useful as a Storymakers exemplar for STRUCTURE and pillar consistency, but not for action titling or closing punch.”
↓ Action titles are overwhelmingly topic labels — repeating '5.1 Technological Infrastructure & Innovation' verbatim across pp.13/14/15 wastes the most valuable real estate on the page
Title quality: 32
Deloitte · 2022 · 53p
CEOs ready to face up to crises
“A competent Deloitte survey report with declarative section dividers but topic-label slide titles and no resolution act — useful as a teaching example of how pillar dividers and data-rich callouts can carry a deck despite weak within-section titles and a missing recommendation close.”
↓ Slide titles are topic dumps, not action titles — pp.7–9 are all titled 'Strategy' and pp.25–28 are all titled 'Financing'; the reader cannot skim for the argument
Title quality: 30
Ipsos · 2024 · 33p
Ipsos Report: Single-Use Plastics
“A competently executed but narratively flat survey readout — strong as a reference document for the underlying data, weak as a Storymakers exemplar because the titles are questions, the structure is a topic dump, and the deck ends without ever telling the reader what to do.”
↓ No synthesis or recommendation slide anywhere — the deck ends on p.31 with a producer-fee benchmark and jumps straight to methodology
Title quality: 28
KPMG · 2023 · 93p
Our Impact Plan 2023
“A well-structured ESG/impact report with exemplary MECE pillar architecture but weak action titles and no call to action — use the section-divider structure as a teaching example, not the title craft or the closing.”
↓ Topic-label titles dominate (p.13 'Purposeful business', p.25 'Human rights', p.51 'Decarbonization', p.59 'Climate risk') — the action-title discipline is largely absent
Title quality: 14
PwC · 2014 · 50p
Review of efficiency of the operation of the federal courts
“This is an educational primer on how the U.S. federal courts work — not a consulting argument — and serves as a counter-example for Storymakers, useful only to illustrate what happens when a deck has topic labels but no thesis, analysis, or recommendation.”
↓ Action titles carry zero insight — every slide title is a noun phrase (e.g. p.10 'THE JURISDICTION OF THE FEDERAL COURTS', p.23 'The Appeals Process'); a reader skimming titles learns nothing.