I’ve been playing with GPT-4 over the last few weeks and there are a few prompt engineering best practices that I developed through my exploration that I want to share here.

Here are my 5 best practices for prompt engineering:

  1. Write instructions clearly, as you would do for an intern
  2. Add structure to your prompt
  3. Develop prompts iteratively
  4. Reduce the risk of prompt injection by using prompt delimiters
  5. Never trust the model

In the rest of the article, I will expand on each of these practices.

Write instructions clearly, as you would do for an intern

The number one skill for prompt engineering, is to give clear, unambiguous instructions to what you want to produce. I find it a bit like coaching interns, where you want to lead them to get the right answer, without letting them going into wrong direction. Clear instructions also mean they may not be short.

If I know what is required to solve the problem, I often also provide the steps that the model should follow, and it really helps to nail the answer faster.

This also includes checking for failure conditions. For example, if I want to extract all the dates from the text and the text doesn’t have any dates, I ask the model to output “no dates found”, to avoid hallucinations.

Add structure to your prompt

The anatomy of an LLM prompt that I usually follow is this:

  1. context
  2. request
  3. text to analyse
  4. output specification.

Each one of them, except for probably 3, can be improved with the structure.

For the context, if the problem is nontrivial, I tend to favour few-shots learning, where I provide a couple of examples, how to solve it. The context can also include some assumptions, that model need to know before tackling the problem. When writing a chatbot, context can also include roles, such as system, user, assistant.

In request, if the problem is nontrivial, I tend to specify steps to solve it, failure modes, and even steps to check itself. This can also include specifications such as tone, translation language and references.

I often need to use outputs of the model in other parts of the problem and parsing arbitrary output can be painful. When I write my prompts, I specify exact format that I want the output to be. For example, JSON with the specific structure.

Develop prompts iteratively

Prompts never work from the first time, just like developing any machine learning models. The key here is to build them iteratively starting from simple prompt and growing details. All the best practices that I described in the previous section can and need to be added iteratively, otherwise you are risking of overengineering your prompts.

The process that I use is:

graph TD id1([prompt idea]) --> id2([implementation]) id2 --> id3([analyse result]) id3 --> id4([make adjustments to the prompt]) id4 --> id2

Reduce the risk of prompt injection by using prompt delimiters

When sending LLM model a prompt one part will be some sort of context and instructions and another one will be the actual text to analyse. After reading a couple of articles on prompt injection, I started wrapping text to analyse into prompt delimiters, such as triple backquote. This also flags the model that text between these delimiters is the main text for it to analyse and it’s not context or instructions.

Never trust the model

As good as the LLM model is, it still hallucinates quite a lot. When you give clear instructions and structure, it usually reduces the hallucinations but it still happens. Another issue that I found with these models is forgetting - context that I’ve given a few prompts ago is lost. So, for any final output, I carefully check every sentence that the model produce.

Obviously, this is not a complete list of best practices, just the ones I developed for myself. If you have any best practices that you find useful, I’d love to hear about them, and I can share it in the future articles.


If you like what I write, consider subscribing to my newsletter, where I share weekly practical AI tips, write my about thoughts on AI and experiments.
Photo by Volodymyr Hryshchenko on Unsplash


This article reflects my personal views and opinions only, which may be different from the companies and employers that I am associated with.