Issue #8 · May 5, 2026 · 5 min read

Vendor Evaluation in One Afternoon

Three vendor proposals. Each one 40 to 80 pages, formatted to make their product look perfect. One prompt normalizes them all, exposes hidden costs, and surfaces the questions they hoped you would not ask.

The Problem

You receive three vendor proposals for a critical system. CRM, ERP, cloud migration, pick your poison. Each proposal is 40 to 80 pages, structured to make their product look like the obvious choice. You need to decide in two weeks.

Your team compares apples to oranges because every vendor structures their proposal differently. Vendor A leads with a 30-page feature list. Vendor B leads with case studies. Vendor C buries the implementation timeline in Appendix D. Weaknesses stay hidden because, from the vendor's perspective, that is exactly where they belong. The real cost sits behind "contact us for pricing" or is scattered across footnotes that nobody reads.

You end up choosing based on the best presentation, not the best fit. Three weeks later, the implementation cost you did not catch is now a contract clause you cannot escape.

The Fix

  1. Gather all three proposals. PDFs, Word documents, slide decks. Whatever the vendors sent. The more complete, the better the output.
  2. Upload them to Claude, ChatGPT, or Gemini. All three major AI tools support multi-document uploads. Attach everything in one session. (Prefer to automate it? A minimal API sketch follows the prompt below.)
  3. Paste this prompt:
Copy-paste prompt
"I need to evaluate [NUMBER] vendor proposals for [SYSTEM/SERVICE]. The attached documents are their proposals. Act as a procurement specialist who has evaluated 200+ enterprise vendor proposals. Create a structured evaluation with these sections: (1) NORMALIZED COMPARISON TABLE: map all vendors to the same criteria (features, pricing model, implementation timeline, support terms, SLAs, scalability, integration capabilities, security/compliance), using the exact same rows so nothing is hidden by formatting differences, (2) TOTAL COST OF OWNERSHIP: calculate 3-year TCO for each vendor including license, implementation, training, migration, ongoing support, and any usage-based costs they mention or imply but do not itemize clearly, (3) RED FLAGS: for each vendor, list claims that are vague ('industry-leading'), missing information they should have included, and terms that favor the vendor over you, (4) QUESTIONS THEY HOPE YOU WON'T ASK: generate 5 tough questions per vendor based on gaps, vague claims, or suspiciously favorable terms in their proposals, (5) RECOMMENDATION MATRIX: score each vendor 1-5 on your evaluation criteria with brief justification per score, and highlight the top recommendation with a one-paragraph rationale."
What you get

A normalized comparison that strips away vendor formatting tricks, exposes hidden costs, and gives you 15 tough questions to ask before signing. Instead of spending 2 to 3 days reading proposals side by side, you get a structured evaluation in one afternoon.
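The TCO arithmetic is simple enough to verify by hand, and you should: if the model's total and yours disagree, the gap is usually a hidden cost worth chasing. A minimal sketch, with hypothetical numbers standing in for a real vendor's line items:

```python
# Sanity-check the 3-year TCO arithmetic. All figures are hypothetical.
def three_year_tco(license_per_year, implementation, training,
                   migration, support_per_year, usage_per_year=0):
    one_time = implementation + training + migration
    recurring = 3 * (license_per_year + support_per_year + usage_per_year)
    return one_time + recurring

# Hypothetical Vendor A: $60k/yr license, $45k implementation, $10k training,
# $15k migration, $12k/yr support.
print(three_year_tco(60_000, 45_000, 10_000, 15_000, 12_000))  # 286000
```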

Cost: $0
Time to learn: 0 min
Time saved per evaluation: ~2 days

Why normalization changes everything

Vendors deliberately structure proposals differently. Vendor A leads with features because features are their strength. Vendor B leads with case studies because their client list is impressive. Vendor C leads with pricing because it is their strongest point. Every structural choice is a sales decision, not a communication decision.

The prompt forces every vendor into the same grid. When you see all three side by side with identical row labels, the gaps become obvious. A vendor with 12 rows of feature detail but two lines on implementation support is telling you something. A vendor who mentions "enterprise-grade security" in three places but never names a specific certification is also telling you something.
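To make the "same grid" idea concrete, here is a toy sketch: every vendor is forced to answer the same fixed list of criteria, so a missing answer shows up as an explicit gap instead of a formatting choice. The vendor entries are hypothetical.

```python
# Toy illustration of a normalized comparison: identical rows for every vendor,
# so anything a proposal skips is rendered as a visible gap. Data is hypothetical.
CRITERIA = ["Features", "Pricing model", "Implementation timeline", "Support terms",
            "SLAs", "Scalability", "Integrations", "Security/compliance"]

vendors = {
    "Vendor A": {"Features": "30-page list", "Pricing model": "per seat"},
    "Vendor B": {"Features": "summary only", "SLAs": "99.9% uptime"},
}

for criterion in CRITERIA:
    cells = [vendors[name].get(criterion, "NOT STATED") for name in vendors]
    print(f"{criterion:24} | " + " | ".join(f"{cell:14}" for cell in cells))
```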

The "Questions They Hope You Won't Ask" section exists because every proposal has strategic omissions. A vendor who does not mention their implementation failure rate probably has a bad one. A vendor who says "flexible pricing" without defining what triggers a price increase is leaving themselves room to surprise you after signing. The AI reads the gaps, not just the text, and turns them into questions you can take into your next call.

Works for

  • Enterprise software selection (CRM, ERP, HRIS)
  • Cloud infrastructure and hosting providers
  • Consulting and professional services RFPs
  • Marketing agency evaluations
  • Insurance and benefits provider comparisons

4 major vendor decisions per year × 2 days saved per evaluation = 8 days per year
Plus: you ask better questions, catch hidden costs before signing, and the losing vendors cannot blame a biased process because you used the same framework for everyone.

The Bigger Picture: Where This Is Going

Each issue builds your AI toolkit. Here is what subscribers get access to as we grow.

Now · Weekly AI Trick
One tested technique per week. Copy-paste prompts. Time and cost estimates. Works Monday morning.

Coming Q2 2026 · Searchable Archive
Every trick indexed by role, department, and use case. "Show me all finance tricks" or "What works for product?"

Coming Q2 2026 · Custom Topics
Tell us your industry and role. We prioritize tricks that match your daily workflows.

Coming Q3 2026 · Competitive Radar
Monthly briefing on how your competitors are using AI. Based on public filings, job postings, and press.

Get Issue #9 next Monday

One trick per week. Five minutes to read. Zero cost to implement.