
10 Prompting Patterns with MCP + Assay

Force grounding, show evidence, and turn your corpus into reliable intelligence

Why These Patterns Matter

MCP + Assay gives Claude direct access to your research corpus — but without constraints, LLMs default to blending your documents with training knowledge, inventing citations, or producing generic summaries that could apply to anyone's library. These 10 patterns force grounding, make reasoning transparent, and turn your Assay collection into a reliable research assistant instead of a creative writing exercise.

Foundation: Sources and Workflow

1) Declare the role + the baseline

Tell the agent what to treat as "truth."

Treat my Assay corpus as the baseline. Don't use external knowledge unless I explicitly allow it.
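If you drive Claude through the API rather than a chat client, the same declaration can live in the system prompt so it governs every turn. A minimal sketch using the Anthropic Python SDK; the model ID and the user message are placeholders, not part of this guide.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

BASELINE = (
    "Treat my Assay corpus as the baseline. "
    "Don't use external knowledge unless I explicitly allow it."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: substitute your model ID
    max_tokens=1024,
    system=BASELINE,                   # the baseline applies to every turn
    messages=[{"role": "user", "content": "Synthesize the top themes on [topic]."}],
)
print(response.content[0].text)
```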

2) Force a tool-first workflow

Make it show its work before writing.

Before synthesizing, always call tools first. Use: browse_themes (if exploring themes), search_by_keywords or search_by_theme (to retrieve docs), then get_document_summary (for detail). Show tool outputs before analysis.
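For orientation, this is roughly what the tool-first sequence looks like when driven through an MCP client instead of a chat window. A sketch assuming the Python MCP SDK (the mcp package) and an Assay MCP server reachable over stdio; the server command and the tool argument names ("query", "document_id") are assumptions, since only the tool names appear in this guide. Check the real schemas with list_tools().

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def tool_first_workflow(topic: str) -> None:
    # Placeholder launch command for the Assay MCP server.
    server = StdioServerParameters(command="assay-mcp-server", args=[])

    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Explore themes before committing to a query.
            themes = await session.call_tool("browse_themes", arguments={})
            print("browse_themes ->", themes)

            # 2. Retrieve candidate documents ("query" is an assumed argument name).
            docs = await session.call_tool("search_by_keywords", arguments={"query": topic})
            print("search_by_keywords ->", docs)

            # 3. Pull detail for one document before any synthesis ("document_id" assumed).
            summary = await session.call_tool(
                "get_document_summary", arguments={"document_id": "DOC-123"}
            )
            print("get_document_summary ->", summary)

asyncio.run(tool_first_workflow("AI safety"))
```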

3) Demand grounding with citations (and define what "citation" means)

Users should get doc title + doc ID at minimum.

Ground every claim in specific Assay documents (title + document ID). If unsupported, say 'Not present in corpus.'
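A cheap way to hold the model to this rule is a post-check on the answer text. A hypothetical sketch: it assumes citations are rendered as "(Doc title, DOC-ID)" and that unsupported claims carry the literal phrase "Not present in corpus"; adapt the pattern to however your Assay IDs actually look.

```python
import re

# Matches citations of the form "(Some Doc Title, DOC-042)"; adjust as needed.
CITATION = re.compile(r"\(([^,()]+),\s*([A-Z0-9\-_.]+)\)")

def unverifiable_claims(answer: str) -> list[str]:
    """Return claim lines with neither a citation nor a 'Not present' marker."""
    flagged = []
    for line in answer.splitlines():
        line = line.strip()
        if not line:
            continue
        if CITATION.search(line) or "Not present in corpus" in line:
            continue
        flagged.append(line)
    return flagged

print(unverifiable_claims(
    "Agent safety reviews are standard practice (Safety Playbook, DOC-042)\n"
    "Vendors are converging on eval-first workflows"   # no citation, gets flagged
))
```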

4) Ask for "Direct support vs Inferred vs Not present"

This is the single best anti-hallucination lever.

For every key claim, explicitly label:
- Direct support: [Doc title + ID]
- Inferred alignment: [Doc title + ID + reasoning]
- Not present in corpus: 'No supporting documents found'
This is mandatory — do not skip.
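If you want the labeling to be machine-checkable rather than just a formatting convention, you can ask for the labels as structured output and validate them. The field names below are illustrative, not an Assay schema.

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class LabeledClaim:
    claim: str
    support: Literal["direct", "inferred", "not_present"]
    doc_title: Optional[str] = None   # required for direct/inferred claims
    doc_id: Optional[str] = None
    reasoning: Optional[str] = None   # required for inferred claims

    def __post_init__(self) -> None:
        if self.support in ("direct", "inferred") and not (self.doc_title and self.doc_id):
            raise ValueError(f"{self.support!r} claims must cite a document: {self.claim!r}")
        if self.support == "inferred" and not self.reasoning:
            raise ValueError(f"Inferred claims must explain the inference: {self.claim!r}")

# Valid: a directly supported claim with its citation.
LabeledClaim(
    claim="Retrieval quality is the main driver of answer accuracy",
    support="direct",
    doc_title="Eval Notes Q3",
    doc_id="DOC-017",
)
```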

Scoping: Control What Gets Analyzed

5) Constrain scope: themes + time + corpus slice

Don't let it roam across everything.

Scope: themes [THEME.IDs], timeframe [if applicable], max 15 documents. If more than 15 match, prioritize: (1) recency, (2) author authority, (3) theme centrality.

6) Require a "top signals" section with counts

Make the model quantify what it's seeing.

For each trend, provide:
- Signal strength: [X documents, Y authors]
- Maturity: Emerging / Established / Declining
- Direction: Growing / Stable / Contracting
- Citations: 2-3 representative docs (title + ID)
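Document and author counts are easy to spot-check yourself if you keep the raw search results. A small sketch; the result dicts (with "trend", "doc_id", and "author" keys) are a hypothetical shape, not Assay's actual output.

```python
from collections import defaultdict

results = [
    {"trend": "eval-first development", "doc_id": "DOC-003", "author": "Lee"},
    {"trend": "eval-first development", "doc_id": "DOC-011", "author": "Osei"},
    {"trend": "agentic workflows",      "doc_id": "DOC-019", "author": "Lee"},
]

signals = defaultdict(lambda: {"docs": set(), "authors": set()})
for r in results:
    signals[r["trend"]]["docs"].add(r["doc_id"])
    signals[r["trend"]]["authors"].add(r["author"])

for trend, s in signals.items():
    print(f"{trend}: {len(s['docs'])} documents, {len(s['authors'])} authors")
```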

Quality: Surface Gaps & Tensions

7) Ask for contradictions and tensions explicitly

Your best outputs surface disagreement, not just consensus.

Include a 'Contradictions & Tensions' section. Examples:
- Documents that disagree on approach (cite both)
- Evolving positions from same author (cite chronologically)
- Unresolved debates in the corpus
If no contradictions, state: 'Corpus alignment: [describe consensus].'

8) Ask for "whitespace / missing coverage"

This turns MCP into competitive intelligence, not summarization.

Identify gaps in my corpus:
- Topics mentioned but under-documented (cite examples)
- Adjacent areas with no coverage
- Recommend 3 documents I should add (with search queries or author names)

9) Separate "analysis" from "recommendations"

Keeps the output clean and PM-usable.

Use this structure:
1. Findings (grounded in documents, cited)
2. Implications (reasoned from findings, cite which findings)
3. Recommendations (actionable, tied to implications)
No recommendations without a cited finding → implication chain.
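To enforce that chain mechanically, give findings, implications, and recommendations IDs and check that every recommendation traces back to a cited finding. The ID scheme below ("F1", "I1", "R1") is illustrative.

```python
findings = {
    "F1": {"text": "Three docs report eval-first rollouts", "docs": ["DOC-003", "DOC-011"]},
}
implications = {
    "I1": {"text": "Eval tooling is becoming table stakes", "from_findings": ["F1"]},
}
recommendations = {
    "R1": {"text": "Prioritize an eval harness this quarter", "from_implications": ["I1"]},
}

for rid, rec in recommendations.items():
    for iid in rec["from_implications"]:
        impl = implications[iid]                      # unknown ID: KeyError, chain is broken
        assert impl["from_findings"], f"{iid} cites no findings"
        for fid in impl["from_findings"]:
            assert findings[fid]["docs"], f"{fid} has no document citations"
print("Every recommendation traces back to a cited finding.")
```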

Format: Make Output Reusable

10) Always request an output format and length cap

Prevents bloated essays and makes it reusable.

Choose one:

"10-bullet exec summary + 3-section narrative (800-1200 words)"
"Comparison table (3 columns: Doc, Position, Evidence) + 2-paragraph synthesis"
"FAQ format (5 Qs, 3-sentence As, each citing 2 docs)"

Copy-Paste Starter Prompts

A) Research Synthesis / Literature Review

Using my Assay corpus as the only source, synthesize the top 5 themes on [topic].
Workflow:
1. browse_themes to identify relevant L0/L1 domains
2. search_by_keywords or search_by_theme to retrieve docs
3. For each theme, provide:
   - 2 supporting docs (title + ID)
   - Direct/Inferred/Not present labeling for key claims
   - 1 contradiction or tension (if any)
4. End with: 5 gaps in corpus coverage + 3 recommended additions
Format: 800-1200 words, structured as: Overview → Theme Analysis → Gaps → References

B) Product / Competitive Intelligence

Treat my Assay corpus as baseline truth. Identify trends in [market/topic].
Required sections:
1. Signal strength: For each trend, include doc count + maturity + direction
2. Strategic positioning:
   - Table stakes (what everyone does, cite 3+ docs)
   - Differentiators (what's emerging, cite 2 docs)
   - Contrarian bets (what's contested, cite both sides)
3. Risks: 3 threats with supporting evidence (or 'Not present in corpus')
4. Unknowns: What we don't know (gaps requiring new research)
All claims must cite title + ID. No unsupported assertions.

C) Document Comparison / Decision Support

Compare documents [list IDs or search parameters] to support a decision on [question].
Workflow:
1. Use compare_documents or get_document_summary for each
2. Produce a comparison table:
   - Column 1: Criterion (e.g., approach, evidence strength, maturity)
   - Columns 2-N: Each document's position (cited)
3. Synthesis: Where they agree / Where they conflict / What's missing
4. Recommendation: Based on the comparison, which approach is best supported? (Must cite specific findings, not preferences)
Format: Table + 2-paragraph synthesis + 1-paragraph recommendation

Quick Reference Card

When you want... | Use this pattern | Key rule
No hallucination | #4: Direct/Inferred/Not present | Mandatory labeling
Visible reasoning | #11: Be transparent and honest | Show all tool calls
Strategic insight | #7 + #8: Contradictions + Gaps | Surface tensions
Decision support | Template C: Comparison table | Criteria-driven
Executive summary | #10: Format + length cap | 10 bullets max

Common Mistakes to Avoid

1. Asking open-ended questions without constraints

❌ Bad:

"What does my corpus say about AI safety?"

✅ Good:

"What does my corpus say about AI safety? Scope: GENERATIVE_AI.SAFETY_ALIGNMENT theme, max 10 docs, label all claims Direct/Inferred/Not present."

2. Accepting citations without verification

❌ Bad:

Taking "Source: XYZ paper" at face value

✅ Good:

Requiring "Doc title + ID" format so you can verify

3. Not forcing tool visibility

❌ Bad:

Letting Claude search without showing queries

✅ Good:

"Show me every tool call: which tool, what parameters, how many results"

4. Mixing corpus knowledge with external knowledge

❌ Bad:

"Tell me about AI safety and recommend best practices"

✅ Good:

"Using ONLY my Assay corpus, synthesize AI safety approaches. Mark gaps as 'Not present in corpus.'"

5. Accepting summaries without seeing contradictions

❌ Bad:

Generic consensus statements

✅ Good:

"Include a Contradictions section. If none exist, state 'Corpus alignment: [describe].'"

Advanced Patterns

Iterative Refinement

First pass: Broad search across themes
Second pass: Deep dive on top 3 most-cited documents
Third pass: Identify gaps and recommend specific additions

Multi-Corpus Comparison

Compare my Assay corpus against [external source]. For each claim from [external]:
- Supported in my corpus: [cite docs]
- Contradicted in my corpus: [cite docs]
- Not present in my corpus: [note gap]

Temporal Analysis

For documents published [date range], track theme evolution:
- Early positions (cite oldest)
- Shift points (cite transition docs)
- Current consensus (cite recent)
- Unresolved debates (cite both sides)

Transparency: Show Your Work

11) Be transparent and honest

Prevent "black box" tool usage and make reasoning visible.

After calling each tool, show me:
- Which tool you used
- What query/parameters you sent
- How many results returned
- Why you chose those parameters
Then proceed with analysis.
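You can also enforce this on the client side by wrapping the tool call itself, so the tool name, parameters, and result count get logged whether or not the model narrates them. A sketch assuming the Python MCP SDK's ClientSession.call_tool; the result-shape handling is deliberately defensive because tool outputs vary.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("assay.tools")

async def call_tool_logged(session, name: str, arguments: dict):
    """Call an MCP tool and log which tool, what parameters, and how many results."""
    log.info("tool=%s params=%s", name, arguments)
    result = await session.call_tool(name, arguments=arguments)
    content = getattr(result, "content", result)   # MCP results usually carry a content list
    count = len(content) if hasattr(content, "__len__") else "unknown"
    log.info("tool=%s returned %s result item(s)", name, count)
    return result
```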

Version History

v1.0 (2025-01-04): Initial release with 10 core patterns + 3 golden prompts

Additional Resources