Someone sends you a 60-page report and says "let me know your thoughts by end of day." You have 30 minutes. The report has 47 pages of context and 13 pages that matter. These prompts find the 13 pages, extract what your decision-maker needs to know, and format it so the brief actually gets read.
Long reports are not written for the person who needs to act on them. They are written for the person who needs to prove they did the work. The analyst who produced the report includes methodology, caveats, appendices, and 15 charts that all say the same thing in different ways. The executive who receives it needs three numbers, one recommendation, and the strongest argument against that recommendation.
Most people handle this by skimming. They read the executive summary (which the author wrote last, in a rush), glance at the charts, and form an opinion based on whatever caught their eye. The result is decisions based on whichever data point was most visually prominent, not which was most important.
The alternative is not "read the whole thing." Nobody has time for that. The alternative is structured extraction: pull out what matters, identify what is missing, and present it in the format your decision-maker actually uses.
What you get: a 300-word brief that gives your decision-maker the finding, the evidence, the recommendation, and the risk, without making them read 60 pages to get there; a contradiction check that catches what most readers miss because they stop at the executive summary; and a translation layer that turns technical findings into the language your specific audience uses. The person you send this to will think you spent an hour reading the report. You spent 10 minutes.
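As a rough sketch of what a structured-extraction prompt could look like, here is one way to template it in Python. The wording, section names, and 300-word budget below are illustrative assumptions, not a tested or recommended prompt:

```python
# Hypothetical prompt template for structured extraction from a long report.
# Every field name and instruction here is an illustrative assumption.

BRIEF_PROMPT = """You are preparing a decision brief for an executive.
Read the report below and return exactly four sections, 300 words total:

1. FINDING: the single most important conclusion, with the page it appears on.
2. EVIDENCE: the two or three numbers that support it.
3. RECOMMENDATION: one action, stated in a single sentence.
4. RISK: the strongest argument against that recommendation.

Then list up to three gaps: questions the report does not address.

REPORT:
{report_text}
"""

def build_brief_prompt(report_text: str) -> str:
    """Fill the template with the report text before sending it to a model."""
    return BRIEF_PROMPT.format(report_text=report_text)
```

The point of the fixed section names is that the model cannot drift back into summarizing: each section maps to one thing the decision-maker needs.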
Executive summaries are written by the same person who wrote the report. They carry the same biases, the same framing, and the same blind spots. The author spent three weeks on the analysis and cannot see it from the outside anymore. Their summary emphasizes what was hardest to research, not what is most important to decide.
An AI reading the same report has no attachment to any section. It does not care that the methodology section took two weeks to write. It evaluates every page with equal weight, which means findings buried on page 43 get the same attention as the headline chart on page 5. That neutrality is the advantage.
The most valuable part of any brief is not what the report says. It is what the report does not say. Every 60-page report has gaps the authors know about but chose not to highlight. Competitor data they could not get. Time periods they excluded. Assumptions they made but did not test.
When your brief includes "this report does not address X," you change the conversation. Instead of debating the report's conclusions, the room focuses on whether those conclusions hold given what is missing. That is a higher-quality discussion. The person who surfaces the gap looks like the most careful reader in the room, even if they spent the least time reading.
A 300-word brief with bullet points gets read. A 600-word summary in paragraph form gets skimmed. A forwarded report with "see attached, thoughts?" gets ignored until someone asks about it in a meeting.
The brief is not a summary of the report. It is a decision tool. Every sentence should either inform the decision or clarify the risk of getting it wrong. If a sentence does neither, it does not belong in the brief. AI is good at enforcing this discipline because it does not feel compelled to preserve the author's original structure.
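One way to enforce that discipline is a second pass over the draft brief. The sketch below is a hypothetical editing prompt, with labels and wording invented for illustration:

```python
# Hypothetical second-pass prompt: label every sentence in a draft brief
# as informing the decision, clarifying a risk, or neither, and cut the rest.
# The label names and wording are illustrative assumptions.

EDIT_PROMPT = """For each sentence in the brief below, label it DECISION
(it informs the decision), RISK (it clarifies the cost of getting the
decision wrong), or CUT (it does neither). Then return the brief with
every CUT sentence removed.

BRIEF:
{draft_brief}
"""

def build_edit_prompt(draft_brief: str) -> str:
    """Fill the editing template with a draft brief."""
    return EDIT_PROMPT.format(draft_brief=draft_brief)
```

Running the draft through a pass like this is what strips out the sentences that merely restate the report's structure.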
3 reports per week × 45 minutes saved each = 135 minutes, or more than 2 hours back every week
Plus the quality effect: when your briefs consistently surface the right finding and the right risk, people start sending you the reports first. That is how information flow becomes influence.
One trick per week. Five minutes to read. Zero cost to implement.
Free. Unsubscribe anytime. No spam, ever.