A Guide to Prompt Engineering

Leverage the true potential of LLMs! 🚀

Prompt engineering is more art than science: you only realise its power once you understand the techniques and start using them properly.

Today, we will explore & understand various types of prompting techniques with illustrative examples:

  • Zero-shot prompting

  • Few-shot prompting

  • Chain of thought prompting

  • Tree of thought prompting

1️⃣ Zero-shot Prompting

Zero-shot prompting refers to the ability of an AI model to perform a task from the instruction alone, without any examples of the task included in the prompt.

Here’s an example👇
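A minimal zero-shot prompt might look like this (the task and wording are illustrative):

```
Classify the sentiment of the following review as positive, negative, or neutral.

Review: "The battery lasts all day, but the camera is disappointing."
Sentiment:
```

The model gets only the instruction and the input — no worked examples — and is expected to answer directly.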

2️⃣ Few-shot Prompting

In contrast to zero-shot prompting, few-shot prompting supplies the model with a small number of worked examples directly inside the prompt; no retraining or fine-tuning takes place.

These in-context examples let the model quickly pick up the task's format and intent and generate responses that follow the same pattern.

Here's an example 👇
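Here's the same sentiment task with two in-context examples (again, illustrative):

```
Classify the sentiment of the review as positive, negative, or neutral.

Review: "Absolutely love it, works perfectly."
Sentiment: positive

Review: "Stopped working after two days."
Sentiment: negative

Review: "The battery lasts all day, but the camera is disappointing."
Sentiment:
```

The two solved examples show the model the exact format and labels to use.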

3️⃣ Chain of thought prompting (CoT)

Chain of thought prompting is a method where the model is prompted to spell out its intermediate reasoning steps before giving a final answer, rather than jumping straight to the result.

By reasoning step by step, the model produces more coherent and accurate outputs on tasks that need multi-step logic, such as arithmetic or commonsense reasoning.

Here's an example 👇
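A classic CoT-style prompt, with one worked example that spells out its reasoning (the numbers are illustrative):

```
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and
bought 6 more. How many apples do they have?
A:
```

Because the example answer walks through its intermediate steps, the model tends to do the same for the new question (20 used leaves 3, plus 6 bought gives 9) instead of guessing a number directly.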

4️⃣ Tree of thought prompting (ToT)

Similar to chain of thought prompting, tree of thought prompting utilizes branching pathways and encourages exploration over multiple chains of thought.

Users can explore different possibilities or directions by structuring the model's intermediate thoughts as branches of a tree.

This technique enables greater flexibility, exploration & backtracking during interactions with the AI model.

Broadly speaking, ToT involves two components:

  1. Thought generation

  2. Thought evaluation

Let’s apply ToT to the Game of 24:

It’s a mathematical reasoning challenge where the goal is to use 4 numbers and the basic arithmetic operations (+, -, *, /) to obtain 24.

For example, given input “4 9 10 13”, a solution output could be “(10 - 4) * (13 - 9) = 24”.

(Refer to Figure 2 of the ToT paper as you read ahead.)

To frame the Game of 24 as a ToT problem, we decompose the reasoning into 3 thought steps, where each step is an intermediate equation.

Figure 2(a): Thought Generation

What happens at each step (tree node):

  • Extract the numbers still “left” in the current partial solution.

  • Prompt the LM (Language Model) to propose possible next steps.

  • Use the same “propose prompt” for all 3 thought steps.

  • Note: the propose prompt contains only one in-context example (with 4 input numbers).
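
For intuition, a propose prompt in this style might look like the following (an illustrative paraphrase, not the paper's exact wording):

```
Input: 2 8 8 14
Possible next steps:
2 + 8 = 10 (left: 8 10 14)
8 / 2 = 4 (left: 4 8 14)
14 + 2 = 16 (left: 8 8 16)
14 - 8 = 6 (left: 2 6 8)
14 / 2 = 7 (left: 7 8 8)
Input: 4 9 10 13
Possible next steps:
```

The LM continues the pattern, and each proposed equation (with its “left” numbers) becomes a child node in the tree.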

Breadth-first search (BFS) in ToT:

  • At each step, retain the best b = 5 candidates.

Figure 2(b): Evaluation

  • Prompt the LM to evaluate each thought candidate as “sure”, “maybe”, or “impossible” with regard to reaching 24.

  • The goal is to promote correct partial solutions that can be verified within a few lookahead trials.

  • Eliminate partial solutions judged “impossible”.

  • Retain the remaining candidates labeled “sure” or “maybe”.

  • Sample the value label 3 times for each thought and aggregate the results.
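
An illustrative value prompt for this step (a sketch, not the paper's exact text):

```
Evaluate whether the given numbers can reach 24 (sure / maybe / impossible).

10 14
10 + 14 = 24
sure

1 3 3
(1 + 3) * 3 = 12; 1 * 3 * 3 = 9; no way to reach 24
impossible

4 6
```

Here the model should spot 4 * 6 = 24 and answer “sure”. The 3 sampled labels per candidate are aggregated into a score that the BFS uses to keep its top b = 5 candidates.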

Check this out👇
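Below is a minimal Python sketch of the whole ToT loop for Game of 24, under clearly flagged assumptions: `llm` stands for any text-completion function you plug in (a thin wrapper around whatever API you use), sampling is assumed non-deterministic, and the prompts and label weights are illustrative paraphrases of the examples above, not the paper's exact code.

```python
import re

# Illustrative prompts (assumptions, paraphrased from the examples above).
PROPOSE_PROMPT = (
    "Input: 2 8 8 14\n"
    "Possible next steps:\n"
    "2 + 8 = 10 (left: 8 10 14)\n"
    "8 / 2 = 4 (left: 4 8 14)\n"
    "Input: {numbers}\n"
    "Possible next steps:\n"
)
VALUE_PROMPT = (
    "Evaluate whether the given numbers can reach 24 "
    "(sure / maybe / impossible).\n{numbers}\n"
)

def propose(llm, numbers):
    """Thought generation: ask the LM for candidate next equations."""
    reply = llm(PROPOSE_PROMPT.format(numbers=" ".join(numbers)))
    return [line.strip() for line in reply.splitlines() if "left:" in line]

def left_numbers(step):
    """Extract the remaining numbers from e.g. '2 + 8 = 10 (left: 8 10 14)'."""
    return re.search(r"left: ([^)]*)", step).group(1).split()

def value(llm, numbers, n_samples=3):
    """Thought evaluation: sample 3 labels per candidate and sum their weights."""
    weights = {"sure": 20.0, "maybe": 1.0, "impossible": 0.001}  # arbitrary illustrative weights
    prompt = VALUE_PROMPT.format(numbers=" ".join(numbers))
    score = 0.0
    for _ in range(n_samples):  # assumes the LM samples with temperature > 0
        reply = llm(prompt).lower()
        score += next((w for label, w in weights.items() if label in reply), 0.0)
    return score

def tot_bfs(llm, numbers, breadth=5, steps=3):
    """BFS over the thought tree: propose, evaluate, keep the best `breadth`."""
    frontier = [([], numbers)]  # each entry: (equations so far, numbers left)
    for _ in range(steps):
        candidates = [
            (history + [step], left_numbers(step))
            for history, nums in frontier
            for step in propose(llm, nums)
        ]
        candidates.sort(key=lambda c: value(llm, c[1]), reverse=True)
        frontier = candidates[:breadth]
    # After 3 steps, the 4 numbers have been reduced to 1; success if it's 24.
    return [history for history, nums in frontier if nums == ["24"]]
```

Calling `tot_bfs(llm, ["4", "9", "10", "13"])` would return step histories whose last remaining number is 24, e.g. the three equations behind “(10 - 4) * (13 - 9) = 24”. Real implementations also deduplicate candidates, sanity-check the proposed arithmetic, and prompt once more to turn the winning history into a single expression, but the propose → evaluate → prune loop above is the core of ToT.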

That’s all for today! Stay tuned for more amazing stuff coming up on AI Engineering!

Thanks for reading! 🙂 
