Basic Prompting Techniques for Non-Technical LLM Users to Increase Productivity

Shanto Datta
3 min read · Jun 3, 2024


Hello, everyone! I hope you are having a wonderful day. This post is all about increasing your productivity using LLMs. I will discuss some common prompting techniques that can help you get the correct output from an LLM.

Let’s start with a basic question: what is an LLM?

LLM stands for Large Language Model: a model trained to predict the next most probable word in a sequence.

Suppose I were introducing myself, I would say: “Hi, I’m Shanto from _____.”

This gap could be filled with any of the many countries in the world. The task of the LLM in this scenario is to consider all the countries it knows and predict the most probable one. Suppose we have five candidate countries: USA, Canada, Australia, India, and Bangladesh. The LLM would assign a probability to each of them like this:

USA - 0.10,
Canada - 0.01,
Australia - 0.05,
India - 0.25,
Bangladesh - 0.59

As we can see, in this case, Bangladesh has the highest probability and the LLM would complete the sentence with the word “Bangladesh”.
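This selection step can be sketched in a few lines of Python. The raw scores (logits) below are made up purely for illustration; softmax turns them into probabilities, and the highest-probability word completes the sentence.

```python
import math

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Candidate next words and invented scores for illustration
candidates = ["USA", "Canada", "Australia", "India", "Bangladesh"]
logits = [1.0, -1.3, 0.3, 1.9, 3.3]

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # Bangladesh has the highest score, so it completes the sentence
```

A real LLM does this over a vocabulary of tens of thousands of tokens rather than five countries, but the principle is the same.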

Now consider a prompt like “Hi, I’m User X, _____”. Here, the LLM would try to predict what comes next after the name. It might come up with “a full stack software engineer,” “from the USA,” “I work for the government,” or anything else it finds most probable.

Hopefully, you can see the problem: if you don’t tell the LLM exactly what you are looking for, you might not get the desired output. Now that the problem is clear, let’s move on to the solutions!

Zero Shot and Few Shot Prompting

The first technique is called a zero-shot prompt. We don’t provide the LLM with any example data, but we tell it exactly what we are looking for; in this case, classification.

The second technique is called few-shot prompting. Here, we provide a few examples that demonstrate the task and the desired output format. This way, the LLM knows exactly how to produce the output.
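As a sketch, here is what the two styles might look like for a simple sentiment-classification task. The wording and the reviews are made up for illustration; they are plain prompt strings, not tied to any specific API.

```python
# Zero-shot: no examples, just a precise instruction
zero_shot = (
    "Classify the sentiment of the following review as Positive or Negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: a couple of worked examples establish the task and format
few_shot = (
    "Review: I love this phone! -> Positive\n"
    "Review: The screen cracked in a week. -> Negative\n"
    "Review: The battery died after two days. ->"
)

print(zero_shot)
print(few_shot)
```

Notice that the few-shot version never states the task explicitly; the examples alone show the model what pattern to continue.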

Chain Of Thought (COT) Prompting

Prompt without chain of thought
Prompt with chain of thought

Chain of Thought prompting is used to solve complex problems that require step-by-step thinking on the LLM’s part. By simply writing “use chain of thought to solve [question],” you instruct the LLM to break down the problem and solve it step by step.
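A chain-of-thought prompt can be as simple as appending an explicit step-by-step instruction. The question below is invented for illustration, and the short calculation mirrors the intermediate steps we would want the model to spell out:

```python
# An illustrative chain-of-thought prompt (the question is made up)
question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"
cot_prompt = question + "\nLet's think step by step."

# The step-by-step reasoning we hope the model produces:
packs = 12 // 3   # 12 pens = 4 packs of 3
cost = packs * 2  # 4 packs at $2 each
print(cot_prompt)
print(f"Answer: ${cost}")  # Answer: $8
```

Without the trailing instruction, a model may jump straight to a (possibly wrong) number; with it, the intermediate steps tend to appear in the output, which makes errors easier to spot.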

Take a Deep Breath Prompting

Take a deep breath prompting

Remember when you couldn’t find a solution to a problem and the teacher told you to take a deep breath? A similar trick works for LLMs: prompts like “Take a deep breath and work on this problem step by step” have been reported to nudge the model toward more careful, deliberate reasoning instead of jumping straight to an answer.
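In practice this is just another reasoning-trigger phrase prepended to the problem statement. A minimal sketch, with an invented question:

```python
# "Take a deep breath" prompting: prepend the calming phrase
# to the problem (the question below is illustrative)
question = "If a train travels 60 km in 45 minutes, what is its speed in km/h?"
prompt = "Take a deep breath and work on this problem step by step.\n" + question
print(prompt)
```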

Tree of Thought (TOT) prompting

Tree Of Thought (TOT) prompting

Tree of Thought instructs the LLM to consider multiple approaches before settling on a conclusion. When we tell the LLM to solve a problem using Tree of Thought (TOT), it branches into different candidate solutions, evaluates them, and pursues the most promising ones to reach the final answer.
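The branching idea can be sketched as a toy beam search. Everything here is a stand-in: in a real tree-of-thought system, `expand` would ask the LLM to propose follow-up thoughts and `score` would ask it to rate each partial solution, rather than the placeholder heuristics used below.

```python
def expand(thought):
    """Propose follow-up thoughts for a partial solution (stand-in for an LLM call)."""
    return [thought + " -> refine A", thought + " -> refine B"]

def score(thought):
    """Rate a partial solution (placeholder heuristic, not a real evaluator)."""
    return thought.count("A")

# Start with several candidate approaches, explore two levels deep,
# and keep only the two best branches at each level
frontier = ["approach 1", "approach 2", "approach 3"]
for _ in range(2):
    candidates = [t for th in frontier for t in expand(th)]
    frontier = sorted(candidates, key=score, reverse=True)[:2]

print(frontier[0])  # the highest-scoring chain of thoughts found
```

The key contrast with plain chain of thought: instead of committing to one reasoning path, the model keeps several alive and prunes the weak ones as it goes.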

I hope these prompting techniques help you be more productive. Which one do you find most interesting? Which one have you used without knowing its name?

Follow me on Medium and LinkedIn to learn more about Deep Learning, ML and Full Stack Development.



Written by Shanto Datta

Machine Learning And Artificial Intelligence | Software Engineer | Interested in Business, Technology, Space and Human Psychology