Pasting your entire prompt twice in a single message gives the LLM a structural advantage, and one study saw accuracy jump from 21% to 97% with zero extra effort.
Here’s a prompting technique that takes zero effort and genuinely improves results: copy your entire prompt and paste it twice in the same message. Send the whole thing as one input, repeated back to back.
So:
- You write your prompt.
- You copy your entire prompt (don’t hit enter yet!) and paste it at the end.
- Now you hit enter and send it to the LLM.
That’s it. No rewriting. No prompt engineering. No system instructions. Just the same prompt, twice, in a single submission. Courtesy of the great Andriy Burkov.
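The steps above are trivial to automate if you're calling a model programmatically. A minimal sketch (the `duplicate_prompt` helper and the separator are my own, not from Burkov or the paper):

```python
def duplicate_prompt(prompt: str, separator: str = "\n\n") -> str:
    """Return the prompt repeated twice, joined into one message."""
    return prompt + separator + prompt


# The duplicated string is sent as a single input to whatever
# chat client you use; nothing else about the request changes.
message = duplicate_prompt("Context: <your document>\n\nQuestion: <your question>")
```

The separator is just a blank line here; the paper's claim is about repetition within one message, not about any particular delimiter.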
Why this works
LLMs process text from left to right. Each token can only attend to what came before it, never what comes after. When you write a long prompt with context at the top and a question at the bottom, the model answers having “seen” the context. But the context tokens were encoded without any awareness of the question. They were processed blind.
When you repeat the prompt a second time in the same message, every part of the input gets a second pass. The context tokens from the second copy can now attend to the question from the first copy. The model processes the same information, but this time the context knows what it’s being asked.
A recent paper tested this across seven benchmarks and seven different models (Gemini, ChatGPT, Claude, and DeepSeek). Accuracy went up across the board. One model jumped from 21% to 97% on a name-lookup task. Output length didn’t increase. Response time barely moved, because input processing happens in parallel on the hardware anyway.
No cost, no complexity
There's no training objective to change, no fine-tuning, no clever prompt engineering beyond the repetition itself. The gap between this technique and doing nothing varies. Sometimes it’s marginal. Sometimes it’s the difference between a usable answer and a wrong one.
The instinct when a prompt doesn’t work is to start rewriting. Add more context. Restructure the instructions. Specify the format. Sometimes that’s necessary. But before spending thirty minutes engineering a better prompt, it’s worth trying the dumbest possible intervention first: just say it again.
When to reach for this
This is most useful for long, context-heavy prompts where the model needs to synthesize information scattered across the input. Short, simple prompts probably won’t see much difference. But if you’re feeding an LLM a document and asking it to extract something specific, or giving it a complex scenario and asking for analysis, repetition gives the model a structural advantage it didn’t have before.
The broader lesson is worth noting too. The biggest gains in working with LLMs often don’t come from cleverness. They come from understanding the mechanics of how these models actually process text, and working with those mechanics instead of against them.