We write practical guides and tutorials on using artificial intelligence tools for real-world applications. Applied AI Tools is a publication managed by Merrative.

Today, we can converse with artificial intelligence and have it generate human-like responses. As in any conversation, the more context you provide in your prompts, the more relevant the results a language model like ChatGPT returns.

To capitalize on this, many ‘prompt influencers’ have come up with ‘magical prompting templates’ and prompt engineering learning resources that claim to get you results. While these may be useful to kickstart your learning, the key to making the most of language models is not some magical prompting formula, but interacting with the model. Instead of stuffing all your requirements into a single prompt, it is better to adjust with every response until you get satisfactory results.

To improve the reasoning abilities of large language models, Jason Wei and others have suggested using a series of intermediate reasoning steps, or a ‘chain of thought’.

Please note that these are advanced prompting techniques, and they aren’t really required for small questions or tasks you want ChatGPT to work on. Also, AI continues to get smarter and is already getting better at inferring your intent. Chain-of-thought prompting gives you a structure for interacting with AI – and hence, it’s useful to learn about it.

What is ‘chain of thought’ prompting?

Chain-of-thought prompting involves a sequence of prompts that progressively enrich the conversation and guide the language model’s responses. This user-provided guidance helps the model produce contextually relevant responses that align with the user’s requirements.

Here’s an example from the Google Brain Team’s paper by Jason Wei and others:

Image source: Google Research, Brain Team (https://arxiv.org/pdf/2201.11903.pdf)

In the above image, the user has shown an example of the reasoning approach they want the language model to follow, enabling it to provide an accurate response accordingly.

You can use chain of thought for casual tasks too by providing step-by-step instructions. For example, here is how you can ask ChatGPT for book recommendations:

Situation: You want a new book suggestion for a 3-hour flight.

Prompt 1: “I have a 3 hour flight and would like to read a good non-fiction book. Any recommendations?”

Prompt 2: “Thanks for the suggestion! Can you give me a brief overview of the plot?”

Prompt 3: “Sounds fascinating! Could you tell me about the writing style and the author’s background?”

Prompt 4: “Interesting! Are there any similar books or authors you would recommend?”

Here you can see how, instead of asking for everything in one go, I am able to streamline my own requirements as a user. By the end, you have the book recommendation, plot details, an author introduction, and similar books for which you can run the same prompts to get more recommendations. This is much better than simply asking “Give me a book recommendation for a 3-hour flight”.
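The conversation above can be sketched in the chat-message format most language model APIs accept, where each new request carries the full history of earlier turns. This is a minimal illustration only – `add_turn` is a hypothetical helper, and the assistant replies are placeholders rather than real model output.

```python
# Represent the multi-turn book-recommendation conversation as a list of
# chat messages. Sending the whole history with each request is what lets
# the model keep the context of earlier turns.

def add_turn(history, role, content):
    """Append one conversation turn, keeping the running context."""
    history.append({"role": role, "content": content})
    return history

history = []
add_turn(history, "user",
         "I have a 3-hour flight and would like to read a good "
         "non-fiction book. Any recommendations?")
add_turn(history, "assistant", "(model reply with a recommendation)")
add_turn(history, "user",
         "Thanks for the suggestion! Can you give me a brief overview "
         "of the plot?")

print(len(history))  # 3 turns so far
```

Each follow-up prompt is appended the same way, so the model always answers against the accumulated context rather than starting from scratch.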

How does chain of thought prompting facilitate reasoning for language models?

Implementing chain of thought to structure your prompt helps the language model as follows:

  • It decomposes the problem statement – if your use case requires multiple steps, sharing them one by one makes it easier for the language model to maintain context.
  • It is easy to troubleshoot – as you take the language model through a chain of thought, it is easy to detect which prompts lead to an output that deviates from your expectations. This helps you improve the prompts and possibly work out a zero-shot prompt template that gets the same output.
  • It is useful for use cases that require reasoning – for math, symbolic manipulation, or critical reasoning, chain of thought helps the model arrive at solutions.

Two approaches to chain of thought prompting

When you write prompts without many instructions or fine-tuning of the model’s potential response, it is called ‘zero-shot’ prompting. The model uses its general knowledge to interpret the instructions in the prompt, without prior task-specific training.

Zero-shot prompts aren’t great at specific tasks. To get output with a specific structure from the language model, you need to modify your prompts. Here are two techniques you can explore to add structure to your zero-shot prompt using chain of thought:

Zero-Shot prompting + Chain of thought reasoning (step-by-step)

In the zero-shot + step-by-step thinking technique, you write prompts as a sequence of steps that helps the language model deliver the desired output. Each prompt builds on the previous one, ensuring the model aligns its approach with yours. This keeps the conversation coherent with the user’s needs and leads to accurate responses.

Image source: Takeshi Kojima and others – Large Language Models are Zero-Shot Reasoners

To implement zero-shot + step-by-step prompting, simply ask the model to think this way by adding ‘Let’s think step by step’, as done in the above image. Alternatively, you can break down the problem statement and prompt step by step like this:

Zero shot prompt: I need some vacation recommendations in India

Then, you can add step-by-step detail by addressing questions like these, one prompt at a time:

  1. “What Indian state are you interested in?”
  2. “Do you prefer mountains or beaches?”
  3. “How many days do you have for the vacation?”
  4. “Are there any specific themes or moods you’d like the location to have?”
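The trigger-phrase variant can be sketched in a few lines: the original prompt is left unchanged, and the phrase from the Kojima et al. paper is appended to nudge the model into showing its intermediate reasoning. The `zero_shot_cot` helper below is a hypothetical name used for illustration.

```python
# A minimal sketch of zero-shot chain-of-thought prompting: append the
# "Let's think step by step" trigger phrase to an ordinary zero-shot
# prompt before sending it to the model.

COT_TRIGGER = "Let's think step by step."

def zero_shot_cot(prompt: str) -> str:
    """Turn a plain zero-shot prompt into a zero-shot CoT prompt."""
    return f"{prompt}\n{COT_TRIGGER}"

prompt = zero_shot_cot("I need some vacation recommendations in India.")
print(prompt)
```

The resulting string is what you would pass to the model in place of the bare prompt.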

Few shot prompting + Chain of Thought Reasoning

Few-shot prompting means you add one or more examples to demonstrate to the language model what output you expect, or how you want it to think about the problem. The model learns from the examples and generalizes to produce output that aligns with them.

Here’s an example of how you can use few-shot prompting with 3 examples (or ‘3 shots’):

ChatGPT response to a 3-shot prompt example

It is possible to improve the model’s reasoning further by adding ‘Let’s think step by step’ to a few-shot prompt. Here’s how it went with ChatGPT:

ChatGPT response when using a few-shot prompt with the chain-of-thought reasoning technique
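Combining the two ideas can be sketched as follows: each few-shot example pairs a question with a worked-out reasoning chain, and the trigger phrase is appended to the final question. The `few_shot_cot` helper is a hypothetical name, and the worked example is adapted from the arithmetic problem in the Wei et al. paper referenced above.

```python
# A minimal sketch of few-shot chain-of-thought prompting: examples show
# the reasoning style, and the trigger phrase cues the model to reason
# the same way on the new question.

EXAMPLES = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?",
     "Roger started with 5 balls. 2 cans of 3 tennis balls each is "
     "6 tennis balls. 5 + 6 = 11. The answer is 11."),
]

def few_shot_cot(question: str) -> str:
    """Build a few-shot CoT prompt ending with the new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in EXAMPLES]
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

out = few_shot_cot("The cafeteria had 23 apples. They used 20 to make "
                   "lunch and bought 6 more. How many apples do they have?")
print(out)
```

Adding more (question, reasoning) pairs to `EXAMPLES` turns this one-shot sketch into a 3-shot prompt like the one in the screenshot.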

Let us know how you use chain of thought prompting

Do you have an interesting way of drafting your prompts for a language model? Let me know – I’m happy to include your case study to further improve this guide.

You can subscribe to our newsletter to get notified when we publish new guides – shared once a month!

This blog post is written using resources of Merrative – a publishing talent marketplace that helps you create publications and content libraries.

Get in touch if you would like to create a content library like ours in the niche of Applied AI, Technology, Machine Learning, or Data Science for your brand.
