Today, we can converse with Artificial Intelligence and have it generate human-like responses. As in any conversation, the more context you provide in your prompts, the more relevant the results a language model like ChatGPT returns.
For this, many ‘prompt influencers’ have come up with ‘magical prompting templates’ and prompt engineering learning resources that claim to help you get results. While they may be useful to kickstart, the key to making the most of language models is not some magical prompting formula, but interacting with the model. Instead of cramming all your requirements into a single prompt, it is better to adjust with every response until you get satisfactory results.
To improve the reasoning abilities of large language models, Jason Wei and others have suggested using a series of intermediate reasoning steps, or a ‘chain of thought’.
Please note that these are advanced prompting techniques, and they aren’t really required for small questions or tasks you want ChatGPT to work on. Also, AI continues to get smarter and is already getting sharper at recognizing your intent. Chain of thought prompting gives you a structure to interact with AI – and hence, it’s useful to learn about it.
What is ‘chain of thought’ prompting?
Chain-of-thought prompting involves a sequence of prompts that progressively enhance the conversation and guide the language model’s responses. This user-provided guidance helps the language model produce contextually relevant responses that align with the user’s requirements.
With chain of thought prompts, the LLM shares with you the series of reasoning steps it took to reach the answer it provides. This helps you understand and validate the response for tasks that require complex reasoning.
Here’s an example from the Google Brain Team’s paper by Jason Wei and others:

In the above image, the user has shown an example of the reasoning approach they want the language model to follow, enabling it to provide an accurate response accordingly.
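The paper’s style of exemplar can be reproduced in a simple prompt template. Below is a minimal Python sketch using the well-known tennis-ball example from the paper as the single worked ‘shot’; the `build_cot_prompt` helper name is my own, illustrative choice:

```python
# One-shot chain-of-thought prompt in the style of Wei et al.: a single
# worked example shows the reasoning steps, so the model imitates them
# when answering the new question appended at the end.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model answers step by step."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "The cafeteria had 23 apples. If they used 20 to make lunch and "
    "bought 6 more, how many apples do they have?"
)
```

The trailing `A:` leaves the answer slot open, nudging the model to continue with its own reasoning chain in the same format.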
You can use chain of thought for casual tasks too by providing step-by-step instructions. For example, this is how you can ask ChatGPT for book recommendations:
Situation: You want to get a new book suggestion for a 3-hour flight.
Prompt 1: “I have a 3 hour flight and would like to read a good non-fiction book. Any recommendations?”
Prompt 2: “Thanks for the suggestion! Can you give me a brief overview of its contents?”
Prompt 3: “Sounds fascinating! Could you tell me about the writing style and the author’s background?”
Prompt 4: “Interesting! Are there any similar books or authors you would recommend?”
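These four prompts can be run as one continuous conversation, so that each turn builds on the last. Here is a minimal Python sketch; the `ask` helper and the stubbed `model` callable are hypothetical stand-ins for a real chat-API call. The key point is that the whole message history is resent each turn:

```python
# Each call appends the user turn and the model's reply to a shared
# history, so later prompts are interpreted in the context of earlier ones.
def ask(history: list[dict], user_prompt: str, model=lambda h: "...") -> str:
    history.append({"role": "user", "content": user_prompt})
    reply = model(history)  # swap the stub for a real chat-model call
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
for prompt in [
    "I have a 3-hour flight and would like to read a good non-fiction "
    "book. Any recommendations?",
    "Thanks for the suggestion! Can you give me a brief overview of its "
    "contents?",
    "Sounds fascinating! Could you tell me about the writing style and "
    "the author's background?",
    "Interesting! Are there any similar books or authors you would "
    "recommend?",
]:
    ask(history, prompt)
# history now holds 8 messages: 4 user turns and 4 assistant replies
```

Because the full history travels with every call, the model can resolve references like “the suggestion” and “the author” from earlier turns.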
Here, you can see how, instead of asking for everything in one go, I am able to streamline my own requirements as a user. By the end, you have the book recommendation, a content overview, an introduction to the author, and similar books for which you can run the same prompts to get more recommendations. This is much better than simply asking “Give me a book recommendation for a 3-hour flight”.
How does chain of thought prompting facilitate reasoning for language models?
Implementing chain of thought to structure your prompt helps the language model as follows:
- It decomposes the problem statement – if your use case requires multiple steps, then sharing them one by one makes it easier for the language model to maintain context.
- It’s easy to troubleshoot – as you walk the language model through a chain of thought, it is easy to detect the prompt that leads to an output deviating from your expectations. This helps you improve the prompts and possibly distill a zero-shot prompt template that produces the same output.
- It’s useful for use cases that require reasoning – for math, symbolic manipulation, or critical reasoning, chain of thought helps the model arrive at solutions.
Two approaches to chain of thought prompting
When you write prompts without examples or instructions to fine-tune the model’s potential response, it is called ‘zero-shot’ prompting. The model uses its general knowledge to interpret the instructions in the prompt, without prior specific training on the task mentioned.
Zero-shot prompts aren’t great at performing specific tasks. To get output with a specific structure from the language model, you need to modify your prompts. Here are 3 techniques you can explore to add structure to your zero-shot prompt using the chain of thought prompting technique:
Zero-Shot prompting + Chain of thought reasoning (step-by-step)
In the zero-shot + step-by-step thinking technique, you write prompts as a sequence of steps that guides the language model to deliver the desired output. Each prompt builds on the previous one, ensuring the model aligns its approach with yours. This steers the model into a conversation that stays coherent with the user’s needs and produces accurate responses.

To implement zero-shot + step-by-step prompting, simply ask the model to think step by step by adding ‘Let’s think step by step’, as done in the above image. Alternatively, you can break down the problem statement and prompt step by step like this:
Zero shot prompt: I need some vacation recommendations in India
Then, you can add step-by-step instructions to make the model provide more details, with prompts like these:
- “What Indian state are you interested in?”
- “Do you prefer mountains or beaches?”
- “How many days do you have for the vacation?”
- “Are there any specific themes or moods you’d like the location to have?”
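For the first variant, appending the trigger phrase is all the zero-shot chain-of-thought approach requires. The tiny helper below is a sketch of my own, not a library function:

```python
# Zero-shot chain-of-thought: a plain zero-shot prompt with the trailing
# instruction "Let's think step by step." appended to elicit reasoning.
def zero_shot_cot(task: str) -> str:
    return f"{task}\n\nLet's think step by step."

prompt = zero_shot_cot("I need some vacation recommendations in India.")
```

The same task string can then be refined turn by turn with the follow-up questions listed above.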
Few shot prompting + Chain of Thought Reasoning
Few-shot prompting means you add one or more examples demonstrating to the language model what output you expect or how you want it to think about the problem statement. This helps the model learn from the examples and generalize, producing an output that aligns with them.
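A few-shot prompt is typically assembled by concatenating the example pairs ahead of the new query. Here is a minimal Python sketch; the sentiment-labelling examples and the `few_shot_prompt` helper are illustrative assumptions, not taken from any paper:

```python
# Each (input, output) pair is one "shot"; the new query goes last with
# its output left blank for the model to complete in the same format.
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

prompt = few_shot_prompt(
    [
        ("The movie was wonderful.", "positive"),
        ("The food was cold and bland.", "negative"),
        ("The hotel was fine, nothing special.", "neutral"),
    ],
    "The concert exceeded every expectation.",
)
```

To combine this with chain of thought, you would write each example’s output as a short reasoning chain ending in the answer, rather than the bare label shown here.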
Here’s an example of how you can use few-shot prompting with 3 examples (or 3 ‘shots’):

You can improve the reasoning of the language model by adding ‘Let’s think step by step’ to a few-shot prompt. Here’s how it went with ChatGPT:

Self-consistency with chain-of-thought prompting (CoT-SC)

What happens if one of the reasoning thoughts in the chain is wrong?
That is exactly the limitation of chain-of-thought prompting – a single wrong reasoning step can lead to an unsuccessful response. To avoid this, we use self-consistency prompting together with chain of thought.
The self-consistency approach involves sending multiple chain-of-thought prompts to the LLM to elicit a variety of outputs. Once these diverse outputs are generated, the final outcome is determined through a consensus mechanism or majority vote: the most frequently occurring output among the generated responses is chosen as the final result.
To implement self-consistency in your chain-of-thought prompts, you can add the following to them:
Imagine three independent experts with different reasoning styles are answering this question. The final answer is obtained by a majority vote. The question is:
//”Add question to solve”
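The voting step itself is simple to sketch in Python. The sample answers below are illustrative, standing in for the final answers extracted from several independently sampled reasoning chains:

```python
from collections import Counter

# CoT-SC consensus: keep only each chain's final answer and return the
# most frequent one (a simple majority vote).
def majority_vote(answers: list[str]) -> str:
    return Counter(answers).most_common(1)[0][0]

# e.g. three reasoning chains ended in "11", "11", "10"; the outlier loses
final = majority_vote(["11", "11", "10"])  # -> "11"
```

In practice you would generate each answer with a separate chain-of-thought completion (sampling with some temperature so the chains differ) before voting.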
Learn a more advanced concept that performs better than chain-of-thought prompting – the Tree of Thought prompting technique
Difference between chain of thought and multi-step prompting technique

Here’s a table I have prepared comparing the chain-of-thought and multi-step prompting techniques:
| Criteria | Chain of Thought Prompting Technique | Multi-Step Prompting Technique |
|---|---|---|
| Definition | Sequential prompts that build upon each other, maintaining context and continuity throughout the interaction. | Series of prompts designed to guide the language model through a multi-step process or sequence, often with specific objectives for each step. |
| Sequential Progression | Each subsequent prompt relies on the previous response, ensuring a coherent and continuous narrative or reasoning process. | Each step or prompt focuses on specific objectives or tasks, guiding the language model through distinct phases or components of a process. |
| Contextual Continuity | Emphasizes maintaining consistency and context throughout the interaction, ensuring logical progression and coherence. | Focuses on achieving specific objectives or milestones within each step, with less emphasis on maintaining continuity across steps. |
| Complexity Handling | Effective for storytelling, role-playing, scenarios requiring consistent narrative, or context where each prompt builds upon the previous response. | Suitable for multi-step tasks, processes, or sequences where each prompt addresses distinct objectives or components without strict continuity requirements. |
| Flexibility and Adaptability | Provides flexibility to explore diverse scenarios, adapt narratives, and maintain contextual consistency based on user-defined sequences or objectives. | Offers flexibility to design multi-step processes, tasks, or sequences with distinct objectives, allowing for customization and adaptability based on specific requirements. |
| Examples | 1. Storytelling sequences (e.g., narrative progression in a fictional story). 2. Role-playing scenarios (e.g., character interactions and dialogue). 3. Sequential tasks requiring context (e.g., step-by-step instructions or procedures). | 1. Guided workflows (e.g., software tutorials or procedural guides). 2. Multi-stage processes (e.g., product development lifecycle). 3. Sequential tasks with distinct objectives (e.g., problem-solving steps or decision-making frameworks). |
7 limitations of Chain of Thought prompts
- Loss of contextual depth: As the chain progresses, there’s a risk of losing the depth and richness of context established in earlier prompts. The subsequent prompts might not capture the nuanced details or intricacies present in the initial interactions, leading to a diluted or oversimplified narrative.
- Dependency on previous responses: The effectiveness of the chain relies heavily on the accuracy and relevance of preceding responses. If earlier prompts generate misleading or irrelevant information, it can steer the entire chain off course, compromising the overall coherence and accuracy of subsequent interactions. Thus, you should implement the self-consistency technique described earlier in this guide to combat this limitation.
- Potential for redundancy: Without careful design and planning, chain of thought prompts may introduce repetitive themes or ideas, resulting in redundant outputs. This repetition can diminish the novelty and value of generated content, leading to diminished engagement or utility for end-users.
- Limited flexibility: The linear progression inherent in chain of thought prompts may restrict flexibility and adaptability in exploring diverse scenarios. This structured approach can hinder the exploration of alternative paths or address evolving requirements, thus limiting the breadth and depth of generated insights.
- Complexity management: Managing and maintaining the complexity of a prolonged chain of thought prompts can become challenging. As the sequence expands, tracking, evaluating, and refining multiple interconnected responses becomes increasingly complex. This requires meticulous oversight and management to ensure coherence, relevance, and accuracy.
- Resource intensive: Executing extensive chains of thought prompts, especially with multiple iterations or branching pathways, can be resource-intensive in terms of computational resources, time, and effort. This limitation may constrain scalability and accessibility, particularly in applications requiring real-time or high-volume interactions.
- Risk of inconsistency: Despite efforts to maintain continuity and coherence, the inherent variability and unpredictability of language models may introduce inconsistencies or contradictions within the chain. These inconsistencies can undermine the credibility, reliability, and trustworthiness of generated content, compromising its utility and effectiveness in practical applications.
Chain of thought prompts offer a structured approach to guiding interactions and exploring complex scenarios. Recognizing their limitations allows you to adopt prompt optimization strategies that mitigate these challenges and make the most of chain of thought prompts in various contexts.
Let us know how you use chain of thought prompting
Do you have an interesting way of drafting your prompts for any language model? Let me know – I’d be happy to include your case study to further improve this guide.
You can subscribe to our newsletter to get notified when we publish new guides – shared once a month!

This blog post is written using resources of Merrative. We are a publishing talent marketplace that helps you create publications and content libraries.
Get in touch if you would like to create a content library like ours. We help brands in the niches of Applied AI, Technology, Machine Learning, and Data Science.
