15 non-technical ChatGPT prompt engineering techniques to optimize AI prompts

In the rapidly evolving landscape of artificial intelligence, getting models to generate accurate and relevant responses has become crucial. One of the key levers for doing so is prompt engineering. In this blog post, we will explore 15 non-technical techniques to optimize prompts specifically tailored for ChatGPT, the powerful language model developed by OpenAI.

Key takeaways:

  • Basics of ChatGPT prompt optimization
  • 15 key techniques to optimize and write better prompts – with no-jargon explanations!
  • 5 resources that help with prompt optimization

What is Prompt Engineering?

Before delving into the techniques, let’s understand what prompt engineering is.

In simple terms, it is the art of crafting input queries or instructions in a way that guides the AI model to produce desired outputs. In the context of ChatGPT, effective prompt engineering ensures more precise and contextually relevant responses.

It is akin to providing the model with instructions that help it understand the user’s intent and generate responses that align closely with the user’s expectations. Think of a prompt as a conversation starter or a set of instructions you give to ChatGPT. Without effective prompt engineering, the model might generate responses that are generic or not precisely aligned with what the user desires. However, by carefully crafting prompts, users can influence the model’s behavior and improve the quality of its outputs.

Here’s what happens when I give a generic prompt like ‘Explain what is poetry’: ChatGPT provides a textbook definition and a list of poetry’s elements – quite informative, but generic.

Screenshot: ChatGPT general prompt on poetry giving generic output

But if you learn how to write a good prompt, you can refine the same query and get results like this:

Screenshot: ChatGPT optimized prompt adding role, context, and instructions to give better output

Here I used tricks such as assigning ChatGPT a role and giving it context about how I wanted the poem to read. The more information and context you provide, the better the results you will achieve.

As we delve into the techniques for prompt engineering, it’s essential to understand that this process is not about programming or coding but rather about effective communication with AI. By mastering prompt engineering, non-technical professionals can harness the full potential of AI models like ChatGPT, making them versatile tools for a wide range of tasks in different domains.

Why should you optimize prompts?

Optimizing prompts is essential for enhancing the performance of language models like ChatGPT. Well-crafted prompts can help in obtaining outputs that align closely with user expectations, leading to improved user experience and productivity. While the model is inherently powerful, the quality of its responses can be significantly enhanced through thoughtful and intentional prompt engineering.

  • Make prompts precise and relevant: when you try to write a prompt, you have specific goals in your mind regarding how you want the output to be. Without optimization, the model might provide general or verbose responses that may not address your specific needs. Prompt optimization ensures that the model’s responses are not only accurate but also directly applicable to the task at hand.
  • Tailor output as per requirements: different tasks or use cases may require outputs of varying lengths, styles, or formats. Optimizing prompts allows you to specify these requirements explicitly, ensuring that the generated responses align with your preferences. Whether it’s a short answer, a creative paragraph, or a detailed explanation, prompt optimization tailors the model’s output to meet your expectations. Consider a scenario where you want creative content. Without optimization, a prompt like “Generate a story” might result in a generic narrative. By optimizing it to “Craft an imaginative story with a mysterious plot twist in Sylvia Plath’s writing style,” you can push the model to produce more engaging and tailored content.
  • Overcome AI model limitations: while AI models are sophisticated, they are not infallible. Optimizing prompts allows users to work around potential limitations or biases in the model’s understanding. By providing additional context, specifying requirements, or asking for clarification, users can guide the model to overcome challenges and generate more accurate responses.

How are the two prompts different – generic vs. optimized prompts?

Understanding the distinction between a generic prompt and an optimized prompt is fundamental to realizing the full potential of AI models like ChatGPT. The choice of words, specificity, and clarity in a prompt can significantly influence the model’s responses. Let’s explore the key differences between these two types of prompts:

What do you mean by a ‘Generic Prompt’?

A generic prompt is broad and lacks specific details or instructions.

It typically leaves the interpretation of the request entirely to the AI model. While such prompts may produce responses, they often result in generic or expansive answers that may not align closely with your specific requirements.

For example, consider the generic prompt:

“Explain the concept of machine learning.”

While this prompt provides a general direction, it leaves the door open for the model to generate a response that might be too detailed, too technical, or not focused on your particular interests.

What do you mean by ‘Optimized Prompt’?

An optimized prompt, on the other hand, is carefully crafted to guide the model toward generating responses that are more precise, relevant, and tailored to your needs. It includes specific details, instructions, or constraints that help narrow down the focus of the AI model.

Continuing with the example, an optimized prompt for the same concept could be:

“Summarize the key principles of machine learning in 50 words.”

In this case, the user has specified the desired format (a summary) and set a constraint on the response length (50 words). This optimization ensures that the generated answer is concise, focused, and aligned with your intent.

When should you use prompting strategies?

Prompting strategies are particularly beneficial in scenarios that require precise and relevant information – for example, drafting emails, generating creative content, or seeking technical assistance.

Here are some situations where employing prompting strategies can significantly enhance the effectiveness of your interactions with AI:

  1. Retrieve specific information: prompting strategies help narrow down the focus of the model when you’re seeking to get output from ChatGPT in a specific manner or format. For example, if you’re researching a topic and need concise information, you might use prompting strategies to specify the format, length, or key details you’re looking for in the response.
  2. Generate creative content: for creative tasks like writing, brainstorming, or idea generation, prompting strategies are essential. They can guide the model to produce content that aligns with your creative vision and chain of thought.
  3. Solve problems: if you’re facing complex problems or scenarios, employing prompting strategies directs the model’s focus toward specific aspects of the problem. Whether you’re looking for alternative solutions, analyzing potential outcomes, or seeking recommendations, well-crafted prompts help elicit targeted responses that contribute to effective problem-solving.
  4. Summarize content: ChatGPT is great at distilling large volumes of information into concise, relevant summaries. By specifying the desired format, length, or key points, you can leverage prompting strategies to obtain succinct summaries of complex content.
  5. Develop instructional content: it is possible to create educational content where clarity and precision are paramount. You can tailor prompts to guide the model in providing explanations, definitions, or step-by-step instructions that meet your audience’s educational level.
  6. Branded content: for use cases like AI-written marketing copy, specifying tone, style, and key messaging results in more tailored, brand-aligned content. You can even paste in your brand book and ask ChatGPT to align its copy accordingly.
  7. Task automation and script generation: prompting strategies are useful for refining automation scripts and tailoring them to the required output. For example, if you need to automate a data cleaning process, an optimized prompt such as “Generate a Python script to clean and preprocess a CSV file containing customer data” directs the model to produce a script tailored to the specified task (see the sketch after this list).
  8. Iterative refinement: prompting strategies are indispensable when engaging in an iterative process where initial model responses inform subsequent prompts. You can analyze the model’s outputs, identify areas for improvement, and refine prompts accordingly. This iterative approach ensures a continuous improvement in the quality and relevance of generated responses.
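
To make item 7 concrete, here is a minimal sketch of the kind of script such a prompt might produce. The file name and column names (customers.csv, name, email, signup_date) are hypothetical stand-ins – the script ChatGPT returns will depend on your data and prompt:

```python
import pandas as pd

# Load the raw export (hypothetical file and column names, for illustration only).
df = pd.read_csv("customers.csv")

# Drop exact duplicate rows and rows missing an email address.
df = df.drop_duplicates()
df = df.dropna(subset=["email"])

# Normalize whitespace and casing in the text columns.
df["name"] = df["name"].str.strip().str.title()
df["email"] = df["email"].str.strip().str.lower()

# Parse dates, coercing unparseable values to NaT so they can be reviewed later.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

df.to_csv("customers_clean.csv", index=False)
```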

15 Prompt engineering techniques to optimize ChatGPT prompts

Here are 15 prompt optimization techniques, explained in plain language with examples you can follow even if you’re not a coder:

1. Specify output format

Specifying the output format provides clear instructions to the AI model regarding the structure and style of the generated response. By outlining the desired format, users can influence the way information is presented, ensuring that the output meets specific criteria or conforms to predefined standards set by you.

How specifying format helps in prompt optimization:

When users specify the output format, they guide the model to organize information in a manner that is most useful for their needs. This technique is particularly valuable when users require responses in a specific style, such as a summary, list, paragraph, or any other structured format. By setting these expectations, users can avoid receiving overly verbose or unstructured responses.

Example:

Without prompt technique: “Explain content marketing.”

Here, it provided very verbose, fluff-filled content in its default generalized format.

Screenshot: ChatGPT’s response to a general prompt that doesn’t specify any format

With prompt technique: “Explain the concept of content marketing which includes a definition and 1 content marketing case study. In the case study, use the best practices of content marketing and highlight these principles where used within the case study. Use a tabular format to showcase the case study.”

Screenshot: ChatGPT’s output to an optimized prompt that includes format

In the first prompt, without a specified output format, the model generates a detailed paragraph. In contrast, the second prompt guides the model to provide a concise, organized response in the form of a tabular case study, making the information easier to consume and understand.

Usage scenario:

Imagine you are conducting research and need a quick overview of recent advancements in renewable energy. By specifying the output format, such as requesting a tabular summary, you can receive a well-organized and easily digestible response that highlights key developments, their significance, and relevant data.

Best practices:

  • Clearly define the desired output format, whether it’s a list, table, paragraph, or any other structure.
  • Tailor the format to suit the type of information you are seeking.
  • Experiment with different output formats to determine the most effective one for your specific use case.
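
If you work with the OpenAI API rather than the chat window, the same format instruction can be pinned in a system message so every request comes back in the structure you want. Here is a minimal sketch, assuming the openai Python package; the model name is a placeholder for whichever chat model you have access to:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute any chat model you can use
    messages=[
        # The system message fixes the output format for every request.
        {"role": "system",
         "content": "Answer with a one-paragraph definition followed by a "
                    "markdown table summarizing one case study."},
        {"role": "user", "content": "Explain the concept of content marketing."},
    ],
)
print(response.choices[0].message.content)
```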

2. Use keywords

Using keywords in prompts involves incorporating specific terms or phrases that are essential to your inquiry. By including these keywords, you can guide the AI model’s attention toward relevant aspects of the topic, ensuring that the generated response is focused and directly addresses your needs.

How using keywords helps in prompt optimization:

This technique is particularly useful when users have specific terms or concepts they want the model to emphasize or elaborate upon. Integrating keywords into prompts is an effective way to steer the model toward the desired information. It helps in preventing the model from providing generic or unrelated responses.

Example:

Without prompt technique: “Discuss the impact of technology on society.”

It again provided a verbose answer spread across varied themes, though this time it did include a bulleted list.

Screenshot: ChatGPT’s generic response without keyword optimization

With prompt technique: “Perform an impact assessment of the societal impacts of artificial intelligence-led automation. Present it in a tabular format.”

In the second prompt, the inclusion of keywords such as ‘artificial intelligence’, ‘automation’, and ‘tabular format’ directs the model to focus on these specific aspects, resulting in a more targeted and detailed response. Here’s how it responds with the optimized prompt:

Screenshot: ChatGPT’s response to a keyword-optimized prompt

Usage scenario:

Suppose you are researching the implications of blockchain technology on financial systems. By using keywords like “blockchain,” “financial industry,” and “decentralization,” you can guide the model to provide insights specifically related to these aspects, avoiding irrelevant information.

Best practices:

  • Identify key terms or phrases relevant to your inquiry.
  • Use specific and concise keywords to guide the model’s attention.
  • Experiment with different combinations of keywords to refine the focus of the generated responses.

3. Control temperature

Controlling temperature in the context of AI models refers to adjusting the “temperature” parameter during the generation of responses. The temperature parameter influences the level of randomness and creativity in the model’s outputs. A lower temperature (e.g., 0.2) results in more deterministic and focused responses, while a higher temperature (e.g., 0.8) introduces more randomness and creativity.

How controlling temperature helps in prompt optimization:

Controlling temperature allows users to fine-tune the balance between precision and creativity in the generated responses. A lower temperature is beneficial when users seek more deterministic and straightforward answers, reducing the likelihood of the model introducing unnecessary complexity or ambiguity. On the other hand, a higher temperature can be useful for creative tasks where more diverse and imaginative outputs are desired.

Example:

Without prompt technique: “Generate a creative romance story.”

After using the keyword ‘romance’, it provided a decent response, but I would like to see something more creative.

Screenshot: ChatGPT’s general response to a creative prompt without temperature control

With prompt technique: “Generate a creative romance story with a temperature of 0.8.”

In the second prompt, I explicitly set the temperature to 0.8, indicating to the model that I want a more creative and varied response. (Strictly speaking, temperature is an API parameter rather than a setting the ChatGPT interface exposes, so writing it into a chat prompt acts as a nudge toward more creative output rather than literally changing the sampling settings – see the API sketch at the end of this section.) Here’s the response – notice how it uses more elaborate and poetic words. The story is more streamlined and dives deeper into the ‘ice’ theme, unlike the first prompt (without temperature control), where it kept changing scenes.

Screenshot: ChatGPT’s response to a temperature-optimized prompt

Usage scenario:

Consider a scenario where you are using ChatGPT to brainstorm ideas for a marketing campaign. By adjusting the temperature, you can control the level of creativity in the generated concepts. A lower temperature may yield more structured and focused ideas, while a higher temperature may result in more unconventional and imaginative suggestions.

Best practices:

  • Experiment with different temperature values to find the right balance for your specific use case.
  • Use lower temperatures for tasks that require precision and clarity.
  • Utilize higher temperatures for creative tasks or when exploring a range of ideas.
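
As noted above, the ChatGPT interface doesn’t expose a temperature slider, so in chat you can only hint at it in words. Through the API, however, temperature is an actual request parameter. Here is a minimal sketch, assuming the openai Python package; the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()

prompt = "Generate a creative romance story."

# Same prompt at two temperatures: lower is more deterministic and focused,
# higher is more varied and adventurous.
for temperature in (0.2, 0.8):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```

Running the same prompt at both settings side by side is the quickest way to see the effect for yourself.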

4. Limit response length

Limiting response length is a prompt engineering technique that involves setting constraints on the number of words, characters, or sentences in the generated response. By defining a maximum length, users guide the AI model to provide concise and focused answers, preventing overly verbose or extended responses.

How limiting response length helps in prompt optimization:

This technique is particularly beneficial when users seek brief and to-the-point information. By specifying a response length, users prevent the model from generating unnecessarily lengthy or detailed answers. This ensures that the AI-generated content is succinct, making it easier to consume and aligning with the user’s preferences for brevity.

Example:

Without prompt technique: “Explain the theory of relativity.”

If you give this prompt to ChatGPT, it will provide a very detailed answer. In fact, many of the ‘general prompt’ response screenshots shared here are cut off because the full response wouldn’t fit in the screenshot!

Screenshot: ChatGPT’s response to a general prompt asking to explain the theory of relativity

With prompt technique: “Explain the theory of relativity in 50 words.”

In the optimized prompt, setting a constraint on the response length instructs the model to provide a concise explanation within the specified word limit. Here’s how it responds now:

Screenshot: ChatGPT’s response to a length-optimized prompt

Usage scenario:

Imagine you are preparing a summary of scientific concepts for a presentation. By limiting the response length when asking about the theory of relativity, you can ensure that the generated explanation is concise and suitable for inclusion in your presentation slides.

Best practices:

  • Clearly specify the desired response length, whether in words, characters, or sentences.
  • Adjust the length based on the context of your inquiry and the level of detail required.
  • Experiment with different response lengths to find the optimal balance for your specific needs.
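
A word limit written into the prompt is treated as guidance, not a hard rule – the model counts tokens, not words, and will sometimes run a little over. If you use the API, you can add a hard ceiling with the max_tokens parameter. A minimal sketch, assuming the openai Python package (placeholder model name):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user",
               "content": "Explain the theory of relativity in 50 words."}],
    # Hard ceiling on output length, counted in tokens (a token is roughly
    # three-quarters of an English word). The in-prompt "50 words" shapes the
    # answer; max_tokens cuts it off if the model overshoots.
    max_tokens=80,
)
print(response.choices[0].message.content)
```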

5. Provide examples

The technique of providing examples in prompts involves including specific instances, scenarios, or cases related to your inquiry. By offering examples, users guide the AI model to contextualize its responses and provide more relevant and practical information. This technique helps in illustrating your expectations and refining the model’s understanding.

How providing examples helps in prompt optimization:

Providing examples is a powerful way to clarify your intent and ensure that the model grasps the nuances of the inquiry. Examples serve as tangible reference points, helping the model generate responses that align more closely with your specific requirements. This technique is particularly effective when dealing with abstract or complex concepts that benefit from concrete illustrations.

Example:

Let’s assume you are using ChatGPT for content marketing and writing prompts to create a content strategy. We will take the example of extracting SEO keywords.

Without Technique: “Extract SEO keywords from the below text.

Text:”

Here, it gave me many options – but that’s a lot of keywords for a small paragraph, and some of them aren’t particularly SEO-relevant.

Screenshot: ChatGPT’s response to a general prompt on SEO keyword extraction

Now, let’s optimize this by adding a few examples of how you want ChatGPT to respond, as done here:

With Technique: “Extract keywords like how it is done in the two examples shown here:

Text 1: Stripe provides APIs that web developers can use to integrate payment processing into their websites and mobile applications.

Keywords 1: Top 4 relevant keywords good for SEO are: Stripe, payment processing, web developers, APIs

## Text 2: OpenAI has trained cutting-edge language models that are very good at understanding and generating text. Our API provides access to these models and can be used to solve virtually any task that involves processing language.

Keywords 2: Top 4 relevant keywords good for SEO are: OpenAI, language models, text processing, API.

## Text 3: The theory of relativity, developed by Albert Einstein, consists of two parts: Special Relativity, stating that space and time are intertwined, and General Relativity, explaining gravity as the curvature of spacetime caused by mass. It redefined our understanding of the universe, introducing concepts like time dilation and the constant speed of light.

Keywords 3: “

Screenshot: ChatGPT’s response to an optimized prompt that includes examples

Usage scenario:

Consider a scenario where you are using ChatGPT to understand a programming concept. By providing examples related to your specific coding challenge, you can guide the model to offer explanations that are not only accurate but also grounded in the practical context of your project.

Best practices:

  • Clearly articulate the type of examples you are looking for (e.g., real-life examples, industry-specific instances).
  • Include examples that are relevant to your specific inquiry to enhance the model’s contextual understanding.
  • Experiment with different types of examples to find the most effective way to convey your expectations.
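
Prompt engineers call this pattern few-shot prompting. If you use the API, you can make the pattern even more explicit by supplying each example as a prior user/assistant exchange for the model to imitate. A minimal sketch, assuming the openai Python package (placeholder model name, abbreviated example texts):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Extract the top 4 SEO-relevant keywords from the text."},
        # Worked example, given as a previous exchange the model can imitate.
        {"role": "user",
         "content": "Stripe provides APIs that web developers can use to "
                    "integrate payment processing into their websites."},
        {"role": "assistant",
         "content": "Stripe, payment processing, web developers, APIs"},
        # The new text to process.
        {"role": "user",
         "content": "The theory of relativity, developed by Albert Einstein, "
                    "consists of Special Relativity and General Relativity."},
    ],
)
print(response.choices[0].message.content)
```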

6. Use contextual information

Including contextual information in prompts involves providing additional details, background, or specific circumstances related to the user’s inquiry. By offering context, users guide the AI model to consider relevant information, ensuring that the generated responses are more accurate, nuanced, and aligned with the specific context of your question.

How adding contextual information helps in prompt optimization:

Contextual information is crucial for refining the model’s understanding and tailoring responses to the user’s unique situation. This technique prevents the model from generating generic or misaligned answers by grounding its output in the specific context provided by the user. It is particularly beneficial when dealing with ambiguous queries or complex scenarios that require a more nuanced understanding.

Example:

Without prompt technique: “Recommend a book.”

On giving this prompt, ChatGPT itself asked me for additional context!

Screenshot: ChatGPT’s response to a general prompt asking for a book recommendation

With prompt technique: “Recommend a science fiction book suitable for someone who enjoys complex narratives and futuristic settings.”

In the second prompt, the inclusion of “science fiction” and details about the reader’s preferences provides the model with additional context, helping it generate a more targeted and relevant book recommendation. I think I will enjoy Dune, so I will definitely pick it up this holiday season!

Screenshot: ChatGPT’s response to a contextually optimized prompt for a book recommendation

Usage scenario:

Imagine you are seeking advice on a health-related topic. By including contextual information about your symptoms, lifestyle, or specific concerns, you can guide the model to offer more personalized and relevant insights tailored to your unique health situation.

Best practices:

  • Be specific and relevant when providing contextual information.
  • Include details that help the model understand the nuances of your inquiry.
  • Experiment with different types and levels of context to refine the model’s responses effectively.

7. Employ iterative prompting

Iterative prompting involves an iterative or step-by-step approach to refining prompts based on the model’s initial responses. Users gradually improve the prompts by analyzing the generated outputs, identifying areas for enhancement, and incorporating those insights into subsequent prompts. This technique leverages an ongoing feedback loop to iteratively fine-tune the model’s behavior.

How iterations help in optimizing prompts:

Iterative prompting is a dynamic strategy that allows users to learn from the model’s initial responses and iteratively adjust their prompts for improved outcomes. By refining prompts based on the model’s strengths and weaknesses, users can incrementally guide the model to generate more accurate, relevant, and desired responses over successive interactions.

Example:

Initial Prompt: “Describe the process of photosynthesis.”

Here, I first asked ChatGPT to share what it knows about photosynthesis in five lines. It covered all the major points but stayed a bit generic.

Screenshot: ChatGPT’s response to the initial prompt in an iterative prompting strategy

Iterative Prompt: “Describe the process of photosynthesis with an emphasis on the role of chlorophyll and sunlight.”

In the iterative prompt, I incorporated feedback from the initial response and focused on specific aspects of the photosynthesis process to enhance the depth and accuracy of the subsequent model-generated information. Here’s what the response looks like now – it is more focused on chlorophyll and the science associated with it:

Screenshot: ChatGPT’s response to iterative prompt optimization

Usage scenario:

Suppose you are using ChatGPT to assist in writing a product description. After receiving the initial response, you may identify areas for improvement, such as the need for more vivid language or specific details. Through iterative prompting, you can gradually refine your instructions to guide the model toward producing more compelling and tailored product descriptions.

Best practices:

  • Carefully analyze the initial model responses to identify strengths and weaknesses.
  • Incrementally adjust prompts, incorporating specific feedback or guidance for improvement.
  • Iterate based on the evolving understanding of the model’s capabilities and limitations.

8. Provide partial information

Providing partial information in prompts involves giving the AI model some initial details or constraints related to the inquiry, leaving room for the model to fill in the missing pieces. By offering a starting point or partial context, you encourage the model to generate responses that complement or build upon the provided information.

How providing partial information helps with prompt optimization:

This technique is beneficial for collaborative generation, allowing users to guide the model’s creativity while retaining control over specific aspects. By supplying partial information, you can influence the direction of the generated content while still leveraging the model’s capabilities to contribute novel or complementary details. It strikes a balance between user guidance and model creativity.

Example:

Partial Information: “Describe a beach scene with white sands and a clear blue sky.”

In this prompt, you provide partial details about the beach scene by specifying the color of the sands and the sky. The model can then generate a descriptive response that complements these initial details while adding its own creative elements.

Usage scenario:

Consider a scenario where you are collaborating with ChatGPT to develop a creative story. By providing partial information about the setting, characters, or plot, you guide the model’s creativity while allowing it to contribute imaginative elements that enhance the overall narrative.

Best practices:

  • Offer specific but partial details to guide the model’s output.
  • Experiment with different degrees of specificity to find the right balance for your creative intent.
  • Use partial information to collaboratively build content with the model.

9. Invoke comparison

Invoking comparison in prompts involves instructing the AI model to compare or contrast different aspects, ideas, or entities. By framing prompts with a comparative context, users guide the model to generate responses that highlight distinctions, similarities, advantages, or disadvantages, providing a nuanced and detailed analysis.

How invoking comparison helps in prompt optimization:

This technique enhances the depth and specificity of AI-generated responses by prompting the model to engage in comparative reasoning. Comparisons provide you with a more thorough understanding of the topic, as the model evaluates and contrasts various elements. Whether exploring pros and cons, similarities and differences, or performance metrics, invoking comparison refines the model’s output to be more analytical and insightful.

Example:

Comparison Prompt: “Compare the features of Android and iOS mobile operating systems.”

In this prompt, you are directing the model to perform a comparative analysis of Android and iOS features, leading to a response that highlights the distinctions and strengths of each operating system.

Usage scenario:

Imagine you are researching technology for a purchase decision. By using comparative prompts, you can instruct the model to evaluate and compare different products or services, aiding you in making informed choices based on the model’s analytical insights.

Best practices:

  • Clearly specify the entities or aspects to be compared.
  • Encourage the model to provide a balanced analysis by considering both positive and negative aspects.
  • Experiment with different comparative contexts to obtain varied and comprehensive insights.

10. Ask for reasons

Asking for reasons in prompts involves instructing the AI model to provide explanations, justifications, or underlying rationales for its responses. By explicitly requesting reasons, you can guide the model to articulate the thought process or logic behind the generated content, leading to more insightful and informative responses.

How asking for reasons helps in prompt optimization:

This technique enhances the transparency and depth of AI-generated responses by prompting the model to provide not only the answer but also the underlying reasons or considerations. Asking for reasons encourages the model to offer more comprehensive and detailed explanations, fostering a clearer understanding of the topic and enabling you to evaluate the rationale behind the generated content.

Example:

Initial prompt: “Provide the 5 key reasons behind climate change in 1-2 lines.”

Screenshot: ChatGPT’s response to the initial prompt on climate change reasons

Then I asked ChatGPT to focus on the ‘land-use changes’ point and explain its rationale for including it among the five key reasons.

Reasoning prompt: “What is the rationale behind adding land use changes in your key reasoning?”

Screenshot: ChatGPT’s response to a prompt asking it to justify its previous response

Usage scenario:

Consider a scenario where you are seeking insights into a complex scientific concept. By asking the model to provide reasons or explanations, you can delve deeper into the underlying principles, mechanisms, or causative factors, enriching your understanding of the topic.

Best practices:

  • Clearly specify that you want the model to provide reasons or explanations.
  • Encourage the model to elaborate on the factors contributing to a particular phenomenon or decision.
  • Experiment with different levels of granularity to obtain detailed and contextually rich reasons.

11. Employ multi-turn conversations

Employing multi-turn conversations in prompts refers to initiating a sequence of interconnected questions or prompts with the AI model, creating a conversational flow that builds upon previous interactions. Instead of a single, isolated query, you engage in a series of exchanges with the model. This allows for a more dynamic, context-aware, and iterative interaction.

How multi-turn conversations help in prompt optimization:

This technique fosters a deeper and more nuanced exploration of topics by enabling a conversational context that spans multiple turns. Multi-turn conversations make the exchange more interactive and collaborative: you can refine queries, clarify uncertainties, and explore complex topics through a sequence of related prompts. This promotes continuity, context retention, and iterative refinement, leading to richer and more tailored responses.

Example:

Initial Prompt: “Explain the basics of renewable energy.”

Follow-up Prompt: “How do solar panels work?”

Subsequent Prompt: “What are the advantages of wind energy?”

In this multi-turn conversation, each subsequent prompt builds upon the context established in the previous exchanges, enabling a more focused and comprehensive exploration of renewable energy topics.
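
The ChatGPT interface keeps this conversational context for you automatically. If you script the conversation through the API instead, you carry the context yourself by resending the history with each request. A minimal sketch, assuming the openai Python package (placeholder model name):

```python
from openai import OpenAI

client = OpenAI()
history = []  # the running conversation, resent with every request


def ask(question: str) -> str:
    """Send a question along with all previous turns so the model keeps context."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


print(ask("Explain the basics of renewable energy."))
print(ask("How do solar panels work?"))
print(ask("What are the advantages of wind energy?"))
```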

Usage scenario:

Imagine you are researching the risks of adopting Generative AI tools for human resources operations. By employing multi-turn conversations with the model, you can sequentially explore various subtopics, clarify uncertainties, and integrate insights from previous exchanges.

Best practices:

  • Maintain continuity by referencing previous exchanges or providing context for subsequent prompts.
  • Use multi-turn conversations to explore complex topics, clarify uncertainties, or delve deeper into specific areas of interest.
  • Leverage the conversational context to refine queries, adjust directions, and collaboratively explore the topic with the model.

12. Specify tone

Specifying tone in prompts involves instructing the AI model to generate responses that align with a particular style, mood, or emotional tone. By setting explicit guidelines for the desired tone, you can guide the model to produce content that conveys specific emotions, attitudes, or stylistic elements. This ensures that the generated responses resonate appropriately with the intended audience or context.

How specifying tone helps in prompt optimization:

This technique enhances the relevance, appropriateness, and effectiveness of AI-generated responses by aligning the tone with the desired communication style or emotional context. Specifying tone allows users to tailor the model’s output to suit specific scenarios, audiences, or purposes. It ensures that the generated content conveys the intended mood, sentiment, or stylistic nuances.

Example:

Without prompt technique: “Describe 5 features of a smartphone camera in 1-2 lines.”

Screenshot: ChatGPT’s response to a general prompt asking for smartphone feature descriptions

The response to the above prompt will be quite general and dry. Let’s ask ChatGPT to change the tone of its response:

With Technique: “Change the above description of smartphone features in a casual and engaging tone.”

In the second prompt, I specified a casual and engaging tone, guiding the model to produce content that is more conversational, approachable, and engaging for the target audience.

Screenshot: ChatGPT’s response to a tone-optimized prompt

The response is much better – a layperson can understand it and may even feel tempted to check out the smartphone’s features.

Usage scenario:

Consider a scenario where you are creating marketing content for a youth-oriented product. By specifying a playful and energetic tone, you can guide the model to generate content that resonates with the target demographic, capturing their attention and fostering a connection through the appropriate tone and style. This prompt writing strategy is also useful to generate emails or responses to them.

Best practices:

  • Clearly define the desired tone, considering factors such as audience, context, and communication objectives.
  • Experiment with different tones to find the most effective style for your specific needs or scenarios.
  • Provide context or examples to help the model understand and emulate the desired tone accurately.

13. Use repetition

Using repetition in prompts involves reiterating key terms, concepts, or instructions to emphasize specific elements or clarify your intent. By incorporating repetitive elements, you can guide the AI model’s attention towards particular aspects, reinforcing priorities, or ensuring that essential details are consistently emphasized throughout the interaction.

How repetition helps in prompt optimization:

This technique enhances clarity, emphasis, and consistency in AI-generated responses by reinforcing key terms or instructions through repetition. By repeatedly highlighting specific elements, users ensure that the model’s outputs align more closely with their priorities, objectives, or focus areas. As a result, you reduce ambiguity and enhance the precision of the generated content.

Example:

Let’s assume you are using ChatGPT or Generative AI for community management. If you wish to create a user onboarding flow for your community, here’s a general prompt for it:

Without prompt technique: “Create a 5 step user onboarding flow for a SaaS founder’s community.”

ChatGPT gave a very general response even after I mentioned it is a community for SaaS founders. It seems ChatGPT did not tailor its output to the prompt.

Screenshot: ChatGPT’s response to a general prompt on user onboarding flow creation

With prompt technique: “Create a 5 step user onboarding flow for a SaaS founder’s community. I’m specifically interested in focusing on onboarding that helps ‘founders’ and includes elements of managing a SaaS business. Keep each response to 1-3 lines only.”

In the second prompt, I used repetition by re-emphasizing “founders” and “managing a SaaS business”, guiding the model’s attention toward these specific aspects for a more focused and detailed response. Here’s how it performed better:

Screenshot: ChatGPT’s response to a repetition-optimized prompt for generating a user onboarding flow

Usage scenario:

Imagine you are researching a complex topic with multiple facets like understanding how Generative AI impacts contract lifecycle management. By using repetition to emphasize key terms, concepts, or areas of interest, you can guide the model to consistently focus on specific elements of contract lifecycle throughout the interaction.

Best practices:

  • Identify key terms, concepts, or instructions that you want to emphasize or clarify.
  • Incorporate repetition strategically to reinforce priorities, focus areas, or essential details.
  • Monitor the model’s responses to ensure alignment with the repeated elements and adjust prompts as needed for clarity or emphasis.

14. Utilize external context

Utilizing external context in prompts involves incorporating relevant external information, references, or background details to guide the AI model’s understanding. By providing additional context from external sources, you enhance the model’s ability to align its outputs with specific scenarios, requirements, or external knowledge bases.

How adding external sources helps in prompt optimization:

This technique enriches the depth, relevance, and accuracy of AI-generated responses by integrating external context from relevant sources or information. By connecting the model’s knowledge with external references, data, or frameworks, you enable the model to generate content that aligns with broader perspectives, specific guidelines, or external criteria.

Example:

Without Technique: “Explain the concept of agile.”

ChatGPT gave me a general output explaining the basics of agile methodology, as follows:

Screenshot: ChatGPT’s response to a general prompt on agile explanation

With Technique: “Given the Agile Manifesto, explain the Agile Methodology concept.”

In the second prompt, I have utilized external context by referencing the Agile Manifesto, guiding the model to align its explanation with established frameworks and criteria relevant to agile methodology. The output quality is much better as shown here:

Screenshot: ChatGPT’s response to an external-source-optimized prompt

Usage scenario:

Consider a scenario where you are exploring a topic that integrates insights from multiple disciplines or sources. By utilizing external context from relevant frameworks, theories, or authoritative sources, you can guide the model to generate responses that align with established standards, criteria, or perspectives, enriching the depth and credibility of the interaction.

Best practices:

  • Identify relevant external sources, frameworks, or references that align with your topic or objectives.
  • Integrate external context strategically to guide the model’s understanding and align its outputs with specific criteria, standards, or perspectives.
  • Provide clear instructions or references to ensure that the model accurately incorporates and integrates the external context into its responses.
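
One practical way to supply external context through the API is to paste the reference material directly into the prompt. A minimal sketch, assuming the openai Python package and a hypothetical local file agile_manifesto.txt containing the reference text (placeholder model name):

```python
from openai import OpenAI

client = OpenAI()

# Load the external reference the model should ground its answer in
# (hypothetical local file, for illustration).
with open("agile_manifesto.txt") as f:
    manifesto = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Using the reference below, explain the Agile methodology "
                   "and tie each point back to the reference.\n\n"
                   f"Reference:\n{manifesto}",
    }],
)
print(response.choices[0].message.content)
```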

15. Experiment with seed phrases

Experimenting with seed phrases involves using different introductory phrases or starting points when interacting with AI models to explore variations in responses, perspectives, or approaches. It helps assess how different starting points influence the model’s outputs, allowing for exploration of diverse perspectives, styles, or content nuances.

How experimenting with seed phrases helps in prompt optimization:

This technique fosters creativity, exploration, and adaptability in AI-generated responses. By experimenting with various seed phrases or starting points, you can evaluate how variations in language, structure, or context influence the model’s outputs. This helps you identify the approaches, perspectives, or styles that best align with your specific objectives, preferences, or requirements.

Example:

Seed Phrase 1: “Discuss the implications of artificial intelligence on society.”

The output by ChatGPT is general and covers a wide range of possibilities as shown here:

Screenshot: ChatGPT’s response to a simple prompt

Seed Phrase 2: “Explore the societal impact of artificial intelligence.”

Screenshot: ChatGPT’s different response to a changed seed phrase for the same query

In these two seed phrases, slight variations in language (“implications” vs. “impact” and “on society” vs. “societal”) yield different nuances, focuses, or perspectives in the model’s responses. This demonstrates the potential influence of seed phrase experimentation on content variation.

Usage scenario:

Imagine you are conducting research, content creation, or exploratory analysis using AI models. By experimenting with different seed phrases or introductory prompts, you can explore variations in perspectives, approaches, or content nuances.

Best practices:

  • Identify key themes, topics, or objectives relevant to your inquiry or project.
  • Experiment with variations in language, structure, context, or focus to explore diverse perspectives, styles, or content nuances.
  • Analyze the model’s responses to assess how different seed phrases influence the outputs and refine your approach based on the desired outcomes or objectives.
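
If you want to compare seed phrases systematically rather than one chat at a time, a small loop over the API keeps everything else constant so wording is the only variable. A minimal sketch, assuming the openai Python package (placeholder model name):

```python
from openai import OpenAI

client = OpenAI()

seed_phrases = [
    "Discuss the implications of artificial intelligence on society.",
    "Explore the societal impact of artificial intelligence.",
]

# Run each variant with identical settings so the wording is the only difference.
for phrase in seed_phrases:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": phrase}],
        temperature=0.7,
    )
    print(f"--- {phrase} ---")
    print(response.choices[0].message.content)
```

Keep in mind that sampling is random, so even identical prompts vary between runs; compare a few samples per phrase before drawing conclusions.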

FAQs on prompt fine-tuning

To address common queries, here are ten frequently asked questions regarding prompt fine-tuning:

1. What is the purpose of prompt engineering?

Prompt engineering aims to tailor input queries to elicit more accurate and contextually relevant responses from AI models.

2. How can I make my prompts more effective?

Techniques like specifying format, using keywords, and providing examples can enhance the effectiveness of prompts.

3. Does prompt optimization work for all AI models?

While the principles of prompt engineering apply broadly, specific techniques may need adjustments based on the model.

4. Can I use prompt engineering for creative tasks?

Yes – techniques like controlling temperature and providing examples can enhance creativity in AI-generated content.

5. What is the importance of context in prompt engineering?

Contextual information helps refine AI responses, making them more relevant to the user’s needs.

6. Is there a one-size-fits-all approach to prompt engineering?

No. Prompting strategies should be tailored to the specific use case and desired outcomes.

7. How can I iterate on prompts for better results?

Employ iterative prompting by refining prompts based on initial model responses to achieve more desired outcomes.

8. Are there any risks associated with prompt optimization?

Risks are rare, but prompts must be carefully crafted to avoid reinforcing biases or producing unintended outputs.

9. Can prompt engineering be applied to short-text queries?

Yes, techniques like using keywords and specifying tone are valuable for refining short text queries.

10. What role does the user play in prompt engineering?

Users play a crucial role in providing feedback and refining prompts based on their preferences and requirements.

5 tools and resources to help optimize ChatGPT prompts

Here are five prompt engineering tools and resources that can help you with prompt optimization:

  • List of prompt engineering basics: check out our blog, which curates resources across blogs, videos, courses, and tools for learning prompt engineering.
  • OpenAI Playground: an interactive platform for experimenting with prompts and fine-tuning models. Learn more – OpenAI Playground
  • ChatGPT API Documentation: comprehensive documentation for utilizing the ChatGPT API and incorporating prompt engineering techniques. Learn more – Best practices for prompt engineering with OpenAI API
  • Hugging Face Transformers Library: a powerful library for working with a variety of pre-trained language models. Learn more – Hugging Face Library
  • FlowGPT: a marketplace and directory for LLM models and GPT resources. You can check out their prompt engineering section for curated tools and templates. Learn more – FlowGPT for prompt optimizers

Do you have any prompt writing strategies or hacks for ChatGPT or Generative AI tools? We would love to feature your experience on this blog post – email us at content@merrative.com

You can subscribe to our newsletter to get notified when we publish new guides – shared once a month!

This blog post is written using resources of Merrative – a publishing talent marketplace that helps you create publications and content libraries.

Get in touch if you would like to create a content library like ours in the niche of Applied AI, Technology, Machine Learning, or Data Science for your brand.
