How to write boring prompts that actually work
Inside: A new MIT study shows AI prioritizes syntax over logic (and the 3-step boring protocol I use to fix it).
Welcome to issue #90 of FutureBrief. Three times a week I share practical insights on AI & automation trends, tools, and tutorials for business leaders. If you need support on your technological journey, join our community and get access to group chat, Q&As, workshops, and templates.
Ninjabot is a ready-to-deploy AI agent that responds to inquiries in 10-60 seconds (not hours), reaches out to relevant contacts on social media 24/7 (no spam), and warms up your database via SMS/email (including follow-ups).
The Paris problem
The MIT team discovered something interesting. They trained models where specific topics were linked to specific sentence structures.
When they asked the model “Where is Paris located?”, it correctly answered: “France.”
But then they gave it a nonsense sentence with exactly the same grammatical structure (adverb + verb + proper noun + verb):
The query: “Quickly sit Paris clouded?”
The model answered: “France.”
The model didn’t understand the geography. It understood the syntax. It recognized the shape of the sentence and guessed the topic associated with that shape.
That’s because an LLM generates the most probable next token based on the input and everything generated so far.
So if you write complex, polite, professional letters to your AI agent, you are introducing syntactic noise. You are giving the model more grammatical patterns to latch onto, increasing the chance it ignores your actual instructions and hallucinates based on the vibe of your sentence structure.
My boring protocol
I was reviewing a workflow for document extraction. It was supposed to extract “Delivery Date” from messy PDFs, but it kept hallucinating dates from the invoice header (the billing date) instead of the shipping body.
Here is the original (failing) prompt: “Could you please review the attached document and identify the date when the shipment is expected to arrive? Please let me know if you’re unsure.”
When I saw this, my immediate hypothesis was that the polite, conversational structure triggers a generic “document summarization” pattern in the model rather than a specific extraction task.
So I applied the boring protocol. I stripped away all politeness, all complex clauses, and all humanity.
The new (working) prompt: “Task: Extract delivery date. Context: Shipping document. Output format: YYYY-MM-DD. Constraint: Ignore billing date.”
Result? 100% accuracy across 10 test runs.
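If you build prompts in code, this flat structure templates cleanly. A minimal sketch in Python (the build_prompt helper and its field names are my own illustration, not part of the original workflow):

def build_prompt(task: str, context: str, output_format: str, constraint: str) -> str:
    # Assemble a grammatically flat prompt, one labeled field per line.
    return "\n".join([
        f"Task: {task}",
        f"Context: {context}",
        f"Output format: {output_format}",
        f"Constraint: {constraint}",
    ])

prompt = build_prompt(
    task="Extract delivery date.",
    context="Shipping document.",
    output_format="YYYY-MM-DD",
    constraint="Ignore billing date.",
)

Every prompt built this way has the same shape, so there is no prose for the model to pattern-match on.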
Step-by-step implementation
If you have an automation that works 80% of the time but fails randomly, it is likely a syntax issue. Here is how to fix it.
Step 1: The caveman audit
Go through your system prompts. Remove every instance of:
“Please” / “Kindly” / “I would appreciate”
Complex transition words (“Furthermore,” “Consequently,” “In regards to”)
Polite padding (“Hope you are well”)
These words are expensive (tokens) and dangerous (pattern distraction).
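If you maintain many prompts, you can automate this first pass. A minimal sketch in Python, assuming a simple regex sweep (the FILLER list is illustrative, not exhaustive):

import re

# Politeness and filler phrases to strip from system prompts.
FILLER = [
    r"\bplease\b,?\s*",
    r"\bkindly\b,?\s*",
    r"\bI would appreciate( it if)?\s*",
    r"\bfurthermore,?\s*",
    r"\bconsequently,?\s*",
    r"\bin regards to\s*",
    r"\bhope you are well\.?\s*",
]

def caveman_audit(prompt: str) -> str:
    # Remove each filler pattern, case-insensitively, then tidy whitespace.
    for pattern in FILLER:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    return prompt.strip()

print(caveman_audit("Could you please review the attached document?"))
# -> Could you review the attached document?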
Step 2: Break the prose
MIT found that “grammatical patterns” trigger hallucinations. The best way to break a grammatical pattern is to stop using grammar.
Don’t write paragraphs. Write lists. Use XML tags to separate instructions from data. This forces the model to process the logical structure rather than the linguistic flow.
Bad: “Read the email below and if the user is angry draft a response that apologizes but don’t promise a refund.”
Good: (XML)
<input>
[Insert email text]
</input>
<rules>
1. Analyze sentiment.
2. If sentiment = negative, draft apology.
3. Constraint: No refunds.
</rules>
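In code, keep the rules and the data physically separate and only interpolate at the last moment. A minimal sketch in Python (the variable and function names are my own illustration):

# The rules never change; only the data slot does.
RULES = """<rules>
1. Analyze sentiment.
2. If sentiment = negative, draft apology.
3. Constraint: No refunds.
</rules>"""

def wrap_email(email_text: str) -> str:
    # Tag the data so the model can't confuse it with the instructions.
    return f"<input>\n{email_text}\n</input>\n{RULES}"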
Step 3: The “meta-rewrite”
You don’t have to do this manually. You can use an LLM to lobotomize your prompts for you. I use a specific meta-prompt to clean up client instructions before putting them into production.
Copy this prompt:
“Rewrite the following prompt to be grammatically flat and boring. Remove all politeness, conversational filler, and complex sentence structures. Convert instructions into a bulleted list or XML structure. Prioritize direct logic over linguistic flow.
[Insert your current prompt]”
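To run the meta-rewrite at scale, wrap it in a small function. A minimal sketch using the OpenAI Python SDK (the model name is an assumption; swap in whatever you use):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

META_PROMPT = (
    "Rewrite the following prompt to be grammatically flat and boring. "
    "Remove all politeness, conversational filler, and complex sentence "
    "structures. Convert instructions into a bulleted list or XML structure. "
    "Prioritize direct logic over linguistic flow.\n\n{prompt}"
)

def meta_rewrite(prompt: str, model: str = "gpt-4o-mini") -> str:
    # Ask the model to flatten a prompt before it goes into production.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": META_PROMPT.format(prompt=prompt)}],
    )
    return response.choices[0].message.content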