Ten Prompt Engineering Best Practices from the Trenches

Introduction:

AI models like ChatGPT, Bard, and Claude are constantly evolving, incorporating ever more powerful features. If you are already leveraging these tools as part of your workday, how do you keep up with the steep learning curve and get the best responses from the AI models?

Outlined below are some best practices that can help you hit the ground running, get the best responses from the models, and enhance your productivity.

1. Clarity and Specificity:

Explanation: Clear and specific prompts leave no room for ambiguity, ensuring that AI models understand user intentions accurately.

Tactics:

  1. Use concise and straightforward language in prompts.
  2. Clearly state the desired outcome or information.
  3. Avoid vague or open-ended phrasing.

Example: Instead of: “Tell me about recent developments in the industry.” Use: “Provide a summary of the three most significant developments in the automotive industry in the last six months.”
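The tightening shown above can be done mechanically by templating the specifics. A minimal Python sketch (the helper name and parameters are illustrative, not part of any API):

```python
def build_summary_prompt(industry: str, count: int, timeframe: str) -> str:
    """Pin down the topic, the number of items, and the time window
    so the request leaves no room for ambiguity."""
    return (
        f"Provide a summary of the {count} most significant developments "
        f"in the {industry} industry in the last {timeframe}."
    )

prompt = build_summary_prompt("automotive", count=3, timeframe="six months")
```

The same template then adapts to any industry or time window without re-wording the whole request.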

2. Contextual Clues:

Explanation: Supplying relevant background helps AI models interpret a request as intended, leading to more relevant responses.

Tactics:

  1. Include relevant background information in prompts.
  2. Mention the specific topic, project, or previous conversation.
  3. Use references to time, location, or events.

Example: Instead of: “What’s the weather like today?” Use: “Can you tell me the weather forecast for San Francisco today?”
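In chat-style APIs, background context is often carried in a separate system message rather than crammed into the question itself. A sketch using the common role/content message shape (field names may vary by provider):

```python
# Background context goes in a system message; the question stays clean.
messages = [
    {
        "role": "system",
        "content": "The user is planning an outdoor event in San Francisco today.",
    },
    {
        "role": "user",
        "content": "Can you tell me the weather forecast for San Francisco today?",
    },
]
```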

3. Use Precise Language:

Explanation: Precision in language ensures that AI models grasp the exact nuances of your request.

Tactics:

  1. Avoid vague terms or ambiguous language.
  2. Use specific terminology related to your industry or field.
  3. Define any industry-specific acronyms or terms.

Example: Instead of: “Summarize the quarterly report.” Use: “Provide a concise summary of the financial highlights from the Q3 2023 earnings report.”
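Defining acronyms, as the tactics suggest, can be automated from a small glossary appended to the prompt. A sketch with illustrative terms (the glossary and helper are hypothetical):

```python
GLOSSARY = {"Q3": "third fiscal quarter", "EPS": "earnings per share"}

def expand_acronyms(prompt: str, glossary: dict) -> str:
    """Append a definition for each glossary term that appears in the
    prompt, so the model sees exactly what every acronym means."""
    used = {term: meaning for term, meaning in glossary.items() if term in prompt}
    if not used:
        return prompt
    definitions = "; ".join(f"{t} = {m}" for t, m in used.items())
    return f"{prompt}\n\nDefinitions: {definitions}"

prompt = expand_acronyms(
    "Summarize the financial highlights from the Q3 2023 earnings report.",
    GLOSSARY,
)
```

Only terms that actually appear in the prompt are defined, keeping the prompt short.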

4. Utilize Prompt Scaffolding:

Explanation: Break down complex queries into multiple, simpler prompts to guide the AI model progressively.

Tactics:

  1. Start with a general prompt and follow up with more specific questions.
  2. Use each response to refine and narrow down the focus of subsequent prompts.
  3. Ensure that each prompt logically builds on the previous one.

Example:

  • Initial Prompt: “I’m experiencing issues with my computer.”
  • Follow-up Prompt 1: “Could you describe the specific problem you’re encountering, such as error messages or symptoms?”
  • Follow-up Prompt 2: “Is the issue occurring consistently, or does it happen under specific conditions or when using certain applications?”
  • Follow-up Prompt 3: “Have you recently installed any new software or hardware on your computer?”
  • Follow-up Prompt 4: “Based on your description, it seems like a software conflict. Would you like guidance on resolving this issue or information on potential solutions?”
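The exchange above can be represented as a running transcript, where each narrowing question is appended to the history that accompanies the next request. A minimal sketch using the common role/content message shape (a real client would also append the model's answers between turns):

```python
# Start general, then append each narrowing follow-up to the history
# so the model sees the whole progressively refined exchange.
history = [{"role": "user", "content": "I'm experiencing issues with my computer."}]

follow_ups = [
    "Could you describe the specific problem, such as error messages or symptoms?",
    "Does the issue occur consistently, or only with certain applications?",
    "Have you recently installed any new software or hardware?",
]

for question in follow_ups:
    history.append({"role": "user", "content": question})
```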

5. Incorporate a Feedback Loop:

Explanation: Continuously refine and improve prompts based on the AI model’s responses and user feedback.

Tactics:

  1. Review and analyze AI-generated responses for accuracy and relevance.
  2. Collect feedback from users regarding the quality of responses.
  3. Adjust prompts based on insights from both AI performance and user input.

Example: If users consistently receive inaccurate market trend predictions, refine the prompt with clearer instructions or additional context based on user feedback.
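A feedback loop can be as simple as checking an aggregate quality score and tightening the prompt when it falls below a bar. A toy sketch (the threshold and the added instruction are illustrative, not a fixed rule):

```python
def refine_prompt(prompt: str, avg_user_rating: float, threshold: float = 0.7) -> str:
    """If users rate responses below the threshold, append an instruction
    addressing the most common complaint (here: unsourced claims)."""
    if avg_user_rating >= threshold:
        return prompt
    return prompt + " Cite the data source and time range for every claim."

base = "Summarize current market trends in consumer electronics."
refined = refine_prompt(base, avg_user_rating=0.4)
```

In practice the adjustment would come from reading the feedback itself, not a single canned instruction, but the structure of the loop is the same.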

6. Annotate and Train:

Explanation: Annotation is a process where you provide additional information or corrections to help AI systems learn and improve their understanding.

Tactics:

  1. When you notice that the AI system provides answers that are not quite right or could be better, you can “annotate” those answers.
  2. Share annotated data with AI development teams to improve model performance.

Example: If the AI model frequently misinterprets technical terms in a medical context, annotate those terms with their correct definitions and share this information with developers.

7. Regular Updates:

Explanation: Stay up to date with the latest advancements in AI models and prompt engineering techniques.

Tactics:

  1. Follow AI research and development updates.
  2. Continuously adapt your prompts to leverage the latest improvements.

Example: Regularly review AI model release notes and update your prompt strategies to take advantage of newly introduced features or optimizations.

8. Ethical and Inclusive Prompts:

Explanation: Craft prompts that prioritize ethical considerations and ensure inclusivity and fairness in AI responses.

Tactics:

  1. Avoid biased or discriminatory language in prompts.
  2. Consider potential biases in AI responses and address them in prompts.
  3. Promote inclusive and respectful communication in prompts.

Example: Instead of: “Tell me about the benefits of hiring a diverse team.” Use: “Explain how diversity in a team can contribute to creativity, innovation, and better problem-solving.”

9. Use Delimiters in Prompts:

Explanation: Delimiters, in the form of special symbols or keywords, can help AI models extract and analyze information more efficiently.

Tactics:

  1. Incorporate easily recognizable delimiters to identify key elements in the input data.
  2. Use delimiters to instruct the AI model on how to organize and analyze the data.
  3. Ensure that delimiters are consistently applied throughout the prompt.

Example: Prompt: “Please provide a summary (using ‘Summary:’) of the customer feedback (found in ‘Feedback.txt’) related to our product. Ensure that the summary includes the top three positive aspects (marked with ‘+’) and the top two areas for improvement (marked with ‘-’).”
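Delimiters work best when the instructions name the delimiter and the data sits inside it, so content can never be mistaken for instructions. A sketch using XML-style tags as the delimiter (any consistent marker works; the tag name is illustrative):

```python
feedback_text = "+ Great battery life. + Easy setup. - Shipping was slow."

# Naming the delimiter in the instruction tells the model exactly
# where the data begins and ends.
prompt = (
    "Summarize the customer feedback enclosed in <feedback> tags. "
    "List the top positive aspects (marked with '+') and the areas "
    "for improvement (marked with '-').\n\n"
    f"<feedback>\n{feedback_text}\n</feedback>"
)
```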

10. Elevate Responses with Reference Text Integration:

Explanation: Provide AI models with reference articles for answering user queries.

Tactics:

  1. Supply relevant, up-to-date, and authoritative articles to enhance response quality.
  2. Ensure reference text is well-structured and reliable.
  3. Implement embeddings-based search to fetch contextually relevant information.
  4. Implement Retrieval-Augmented Generation (RAG) workflows to combine information retrieval with text generation. Knowledge bases used in RAG can include structured databases, unstructured documents, or domain-specific data for extra context.

Example:

User Query: “What are the key components of a legally binding contract?”

Response: The AI model leverages a reference article on contract law, which is recognized as an authoritative source in legal circles. It extracts and cites specific sections from this reference text to provide a comprehensive answer. The response outlines the essential elements of a legally binding contract, ensuring accuracy and reliability in legal information.

In this legal context, integrating reference text ensures that AI responses are not only well-informed but also align with established legal principles, making them valuable for legal professionals and individuals seeking legal knowledge.
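The retrieval step of such a workflow can be sketched with a toy word-overlap scorer standing in for embeddings-based search (the passages are illustrative, and a production system would use vector similarity over a real knowledge base):

```python
PASSAGES = [
    "A legally binding contract requires offer, acceptance, consideration, "
    "capacity, and a lawful purpose.",
    "Trademarks protect brand identifiers such as names and logos.",
]

def retrieve(query: str, passages: list) -> str:
    """Return the passage sharing the most words with the query
    (a stand-in for cosine similarity over embeddings)."""
    query_words = set(query.lower().split())
    return max(passages, key=lambda p: len(query_words & set(p.lower().split())))

def build_rag_prompt(query: str) -> str:
    """Prepend the retrieved reference text, then ask the question."""
    context = retrieve(query, PASSAGES)
    return (
        "Answer using only the reference text below.\n\n"
        f"Reference: {context}\n\nQuestion: {query}"
    )

prompt = build_rag_prompt("What are the key components of a legally binding contract?")
```

Grounding the answer in the retrieved passage is what keeps the response aligned with the authoritative source rather than the model's unaided recall.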

Summary:

Think of prompt crafting as a journey rather than a final destination. Adopt a continuous-improvement mindset to get the best results from AI models.

By embracing clarity, context, precision, layered prompts, feedback loops, annotation, regular updates, reference text, and ethical, inclusive communication, you can unlock the true potential of AI-powered interactions in your workday. So lean in and hit the ground running with the best practices outlined above.