How to Master Prompt Engineering

Edmund_McGowan

The dust has settled somewhat since Artificial Intelligence (AI) became widely accessible through our laptops and smartphones at the end of 2022. Since then, generative AI has steadily grown in popularity and is now applied to constantly evolving use cases across many industries. Now that the initial novelty of using AI has worn off, learning how to use the diverse AI platforms available effectively can help you make the most of all that AI has to offer your business.

Large Language Model (LLM) platforms such as OpenAI’s ChatGPT, built on the Generative Pre-trained Transformer (GPT) family of models, are designed to understand human-generated text input and respond accordingly, often in a conversational manner, using natural-sounding language. Trained on huge amounts of internet text, AI language models use deep learning techniques built on neural networks to respond to our requests with generally coherent, relevant, and correct responses.

Have you ever dutifully typed a request into the input field and clicked send, only to receive a confusing, biased, or even incorrect answer? If so, you’re not alone! The good news is there is a simple, effective way to ensure that you get the best responses possible from whatever AI platform you choose to use: prompt engineering. In this article we introduce the fast-growing field of prompt engineering and then discuss several key techniques needed to master it, providing a variety of examples along the way.

What is prompt engineering?

Before we get into the finer details of our prompt engineering guide, let’s first get a better idea of what it really is. In the context of AI models, a prompt is a query that you type into the text input box to request information or content generation from the AI model. Human language and ways of speaking, writing, and expressing our thoughts and queries vary greatly, even within the same language. Unsurprisingly, our prompts also vary greatly, and while originality and self-expression are generally well-received characteristics in human conversation, our AI pals may not get the joke, or the prompt! If the AI model fails to fully understand your prompt, the response it provides will likely miss the mark and be of little use to you. Quality and clarity are both desirable traits: if you provide clear instructions and context, the AI model will be able to understand your requirements and provide an accurate, relevant reply.

In conversation with other humans, we may carefully choose the right words in order to achieve the desired response and outcome. Similarly, prompt engineering is the skill of creating specific instructions and queries to get the best responses from the AI model. A vital skill for anyone working with AI models, effective prompt engineering empowers the user, ensuring that the AI model produces desired outputs. The right results can help your organization reap the fruits of the AI model’s labor, from effective content creation to task automation, and even improved productivity.

AI assistants are also increasingly common. Microsoft Copilot, now available in the latest version of Windows 11 and in Microsoft 365 business and enterprise versions, is a powerful AI assistant for your device. Bing Chat is another free service from Microsoft, providing users with an AI-powered assistant to help browse the web and more. As AI assistants become more integrated into our daily tech, hardware manufacturers are also stepping up their game. Acer’s TravelMate P laptops feature high-quality dual microphones optimized with Acer PurifiedVoice AI-powered noise reduction to enhance your AI assistant user experience.

Prompt engineering is crucial in the context of Natural Language Processing (NLP) and AI models like GPT-4 because NLP models lack inbuilt contextual awareness. By engineering prompts, we can guide the AI model to the applicable context, ensuring that it produces appropriate outputs. In addition to context, prompt engineering also allows users to adapt the AI model to specific tasks, in turn improving the overall performance. Furthermore, prompts that specifically guide or constrain the AI model’s output behavior can help to avoid biased and incorrect output. Read on to discover a raft of prompt engineering tips.

How to Master Prompt Engineering

1. Know the model’s strengths and weaknesses

There are many generative AI models out there in 2023 with increasingly specialized functions. These models all have strengths and weaknesses, and understanding them can greatly assist you in choosing the right model and generating the most reliable outputs. AI models are fundamentally limited by the datasets they were trained on, and may reproduce biases learned from real-world data. Be aware of these limitations and biases, and craft effective prompts for individual AI models accordingly.

Prompt engineers are well versed in the strengths and weaknesses of the various AI models available. As laypeople, we can learn from the masters, and from our own experiences using different AI models. While ChatGPT-3.5 is a free, highly popular platform, the model’s knowledge is based on information available up to September 2021, and it lacks access to updates and developments beyond this date. Text-to-image generator DALL-E helps users create realistic images and art from a description in natural language. But don’t overcomplicate your instructions! Provide an instruction that is too complicated or too abstract, and it might yield an abomination of a painting. If you prompt "Painting of a cat in the style of Picasso," however, it will know exactly what to do.

2. Be specific!

Being clear and specific in our verbal and written communications is a good habit to foster in our prompts with AI models too. AI models are trained to understand a great range of natural language prompts in various languages, and even programming code. If our prompts are not specific enough, they may be misinterpreted, leading to incorrect or irrelevant output. Imagine for a minute that you want to prompt your AI model to calculate the number of standard 14cm wide, 3.6m long ACQ-treated boards needed to cover a 3.6m by 4.2m deck. If you prompt "How much wood for a 3.6m by 4.2m deck?" the AI will likely present you with further considerations and questions before it can make the calculation. If, however, you prompt "How many standard 14cm wide, 3.6m long ACQ-treated boards are needed to cover a 3.6m by 4.2m deck frame, including 5mm spacing between each board?" you would receive an output outlining the calculation, along with a suggestion of approximately 30 boards.
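As a quick sanity check of that estimate, you can work through the arithmetic yourself. A minimal sketch in Python, using the dimensions from the example above:

```python
import math

# The 3.6 m boards run the length of the deck, so they must cover
# the 4.2 m width. Each board is 140 mm wide with a 5 mm gap.
deck_width_mm = 4200
board_width_mm = 140
gap_mm = 5

# n boards and (n - 1) gaps must span the deck width:
# n * (140 + 5) - 5 >= 4200  =>  n >= 29
boards = math.ceil((deck_width_mm + gap_mm) / (board_width_mm + gap_mm))
print(boards)  # 29, in line with the model's suggestion of roughly 30
```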

3. Create a persona

Asking the AI model to adopt a persona, whether that be a specific historical figure, a work persona, or even simply prompting "Add a relevant joke at the end of every second paragraph," can help you to explore diverse outputs. Experimenting with various personas and prompts to guide the model will help you to obtain the output you require. For example, if you are using the AI model to write an FAQ section for your company website, you may find that the responses churned out are too vague or even a little terse. Tell the model that you would like it to be a kind and helpful customer service representative. Setting up a scenario and guiding the mood of the reply will yield a more accurate result.
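If you work with an LLM through an API rather than a chat window, the persona is usually set in a system message. Here is a minimal sketch using the OpenAI Python SDK (v1.x); the model name and wording are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model you have access to
    messages=[
        # The system message establishes the persona for every reply.
        {"role": "system",
         "content": "You are a kind and helpful customer service representative. "
                    "Keep answers warm, concise, and practical."},
        {"role": "user",
         "content": "Write an FAQ answer: How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```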

4. Experiment with prompt formats

When interacting with AI models, most people tend to input questions, but did you know you can also input a transcript or context for the model to give you the answer in a format that you want? For example, if you want to learn about the Jacobite uprising of the 1700s, but only want to learn about it through songs, you could tell the model to "Answer only in the form of Scottish folk songs and war ballads." You can also ask the model "Do you understand?" to make sure it has grasped your instructions; asking it to confirm or restate them can surface misunderstandings before they lead to off-target answers. Refining or tuning your prompts will help to ensure that the model gains a better understanding of the format, style and structure of output that you require. Helpful tip: make sure the model only uses factual evidence and cites sources, otherwise you might end up with some pretty wacky revisionist history! Ask the model to "Cite your sources," then check them to make sure it hasn’t fabricated anything!
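In API terms, each refinement is simply another turn appended to the running conversation. A minimal sketch, assuming the same OpenAI Python SDK as above (the prompts and model name are illustrative):

```python
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user",
     "content": "Explain the Jacobite uprising of the 1700s. "
                "Answer only in the form of Scottish folk songs and war ballads. "
                "Stick to documented events and cite your sources."},
]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Refine the format without restating the whole request: the full
# message history gives the model the context of the earlier turn.
messages.append({"role": "user", "content": "Shorter verses, and add a chorus."})
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```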

5. Provide reference text

A pitfall of AI models is their tendency to confidently provide incorrect answers, notably when questioned about lesser-known topics or asked for citations. Providing examples such as reference texts can help guide the model to understand your needs. Alternatively, prompting the model to answer using a reference text, or to answer with citations from a reference text, can help to reduce fabrications.

Let’s return to our Scottish history example to illustrate this approach. If you find that the model is not recounting the Jacobite uprising in the ballad format that you desire, you can input the ballad "Johnny Cope" and tell the model, "Answer me with a similar type of ballad." ("Do not make anything up" would also be a good prompt to add here.) Remember, your prompts are instructions for the model to follow, and its replies will align with your requests in style, content and language.
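Programmatically, providing reference text just means including it in the prompt, often in a system message, and instructing the model to stay within it. A minimal sketch; the file name is a hypothetical stand-in for wherever you keep the ballad text:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical file containing the lyrics of "Johnny Cope".
with open("johnny_cope.txt", encoding="utf-8") as f:
    reference = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[
        {"role": "system",
         "content": "Use the reference ballad below as your stylistic model. "
                    "Do not make anything up.\n\n--- REFERENCE ---\n" + reference},
        {"role": "user",
         "content": "Answer me with a similar type of ballad about the Jacobite uprising."},
    ],
)
print(response.choices[0].message.content)
```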

6. Split complex tasks into simpler subtasks

The more complex a task, the more errors there are likely to be. Splitting complex tasks into smaller, simpler subtasks has several benefits. Firstly, chain-of-thought prompting helps improve the reasoning capabilities of AI models. By breaking a complex problem into smaller subtasks, then prompting the model to output reasoning for each step, the output of earlier tasks can be used to construct the input of later tasks. In this manner, the AI model gains a better understanding of the problem, and will be able to provide more useful, concise answers. Instructions can be specific, or simply something along the lines of "Let’s think step by step." Summarizing a text longer than the model’s fixed context length, like an entire book, requires this approach: a sequence of queries summarizes each section of the book, the summaries are combined into larger summaries, and the process is repeated until the entire book is summarized.
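To make the divide-and-conquer idea concrete, here is a rough sketch of that recursive summarization strategy. The chunk size and prompts are illustrative, and a real implementation would split on chapter or paragraph boundaries rather than raw character counts:

```python
from openai import OpenAI

client = OpenAI()

def summarize(text: str, model: str = "gpt-4") -> str:
    """Ask the model for a short summary of a single chunk of text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": "Summarize the following in 150 words:\n\n" + text}],
    )
    return response.choices[0].message.content

def summarize_book(text: str, chunk_chars: int = 8000) -> str:
    """Summarize each chunk, then summarize the combined summaries,
    recursing until the whole text fits in a single pass. Assumes each
    summary is much shorter than its input, so the text shrinks each round."""
    if len(text) <= chunk_chars:
        return summarize(text)
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    combined = "\n\n".join(summarize(chunk) for chunk in chunks)
    return summarize_book(combined, chunk_chars)
```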

7. Control output format and length

Specify the format and length of your desired output. This can be as simple as setting a word limit, or more specific, such as designating output formats like plain text, HTML, JSON, or XML. If you were setting up a global cookbook recommendation website and required JSON format to integrate into the backend, you could prompt: "Provide me with a list of 5 English-language Korean cookbooks in JSON format." For AI-assisted creative and other writing endeavors, providing clear structural prompts is a surefire way to guide the AI model toward your goal. For example, when writing summaries, input "Write me a 150-word summary using concise language." Or, if you are writing a script set in Shakespearean times, you might say "Write me a 500-word soliloquy in iambic pentameter using the expressive language of a forlorn lover." If you then decide that’s far too long, you can input "Summarize this in 200 words or less." To get the best results, always focus on positive instructions: tell the model what to do, not what not to do, because this avoids confusion and improves the model’s ability to generate the required output.
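When you ask for machine-readable output, it helps to state the exact fields you expect and then validate what comes back. A minimal sketch; the field names are illustrative, and the model’s suggestions should still be checked by hand:

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[{"role": "user",
               "content": "Provide me with a list of 5 English-language Korean "
                          "cookbooks as a JSON array of objects with the keys "
                          '"title", "author", and "year". '
                          "Output only the JSON, with no commentary."}],
)

try:
    cookbooks = json.loads(response.choices[0].message.content)
    print(cookbooks)
except json.JSONDecodeError:
    print("The model returned non-JSON output; refine the prompt and retry.")
```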

8. Use GPT plugins

We have mentioned ChatGPT, the most widely used LLM platform, several times in this article. After setting up a free account and logging in, there is no limit to the number of questions that users can ask. At the time of writing, the free version of ChatGPT uses the GPT-3.5 architecture. If you are willing to pay a subscription fee, ChatGPT Plus has many benefits, namely the ability to use plugins. Plugins are add-ons that improve the accuracy and practicality of your searches by connecting to third-party websites, providing real-time outputs. While many of the plugins available today are still beta quality, when used correctly, they can greatly enhance your efficiency. Some are free while others feature tiered pricing, so have a browse to decide which plugins will enhance your ChatGPT experience. From eliminating unnecessary workflows to language learning and even job searching, let ChatGPT plugins revolutionize the way you access data.

Conclusion

We hope you have enjoyed this journey into the realm of prompt engineering. Now that you have a better grasp of prompt design, hopefully you are ‘prompted’ to make the most of your interactions with AI LLMs like ChatGPT! By understanding the strengths and weaknesses of different models, users can effectively harness the best aspects of each platform while avoiding the pitfalls. Be specific in your prompts and create a persona to guide the model toward accurate, concise outputs. Providing context ensures better results; experiment with different prompt formats and provide reference texts for citation to guide the model. Divide and conquer! Splitting complex tasks into simpler subtasks increases efficiency and improves the quality of responses. Let the AI model work for you by setting parameters to control output length and format. Finally, make the most of ChatGPT plugins for industry-specific applications.


Generative AI is here to stay. Take the time to master prompt engineering and benefit from ChatGPT and other AI models. As AI becomes more prevalent in industries worldwide, prompt engineering skills are the key to generating accurate outputs. In the near future, more and more businesses will adopt generative AI to boost productivity in various aspects of their operations, and Microsoft and Adobe programs will come equipped with their respective AI assistants, Microsoft Copilot and Adobe Firefly. Mastering prompt engineering will help to ensure that your business can make the most of this brave new world of AI-enhanced assistance. AI will continue to grow and evolve, as will the skill of prompt engineering. The journey of AI learning has just begun!


Edmund is an English copywriter based in New Taipei City, Taiwan. He is a widely published writer and translator with two decades of experience in the field of bridging linguistic and cultural gaps between Chinese and English.
