Improve Your Prompts for LLMs: Simple and Effective Techniques

Tools like ChatGPT and Bing have been skyrocketing in popularity. ChatGPT took just five days after launch to reach 1 million users. Governments all over the world are trying to figure out what to do with this technology. And sweet AI-generated images are showing up everywhere, like this dog I generated using Bing:

AI-generated picture of an astronaut dog jumping over a fire on the moon

The prompt used: “Can you generate a picture of a astronaut dog jumping over a fire on the moon?”

Though LLMs (Large Language Models) are still in their infancy, research has shown some pretty big productivity boosts across a range of professions. Using metrics from GitHub Copilot, an AI-based coding tool, GitHub has found that developers using it complete tasks in 55% less time.

This is all to say that LLMs are a tool we’ll need to learn to use, and use well.

There are quite a few LLM options out there, with the most popular being ChatGPT and Bing. OpenAI’s ChatGPT is the best known, but if we want to use the latest model, GPT-4 (an upgrade over GPT-3.5), we’ll need to pay $20 a month for ChatGPT Plus. Microsoft has released the new Bing, which uses GPT-4 behind the scenes and is free to use. Both offerings come with their own pros and cons.

Regardless of which one you use, the question remains: how do we get the most out of these tools?

Prompt Engineering

Constraints
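
One of the simplest ways to improve a response is to spell out exactly what you want back. For example (the wording here is just an illustration), compare a vague request like “Tell me about prompt engineering” with a constrained one: “Explain prompt engineering in three bullet points, each under 20 words, aimed at someone who has never used ChatGPT.” Constraints on length, format, and audience give the model far less room to wander.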

Role prompting, or “Act like a…”
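
Another easy technique is to give the model a persona before asking your question. For example, a prompt might open with: “Act like a senior software engineer reviewing a junior developer’s pull request. Point out bugs, style issues, and anything that could be simplified.” Setting the role up front tends to shape both the tone and the level of detail of the answer.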

Few-shot prompting
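
Few-shot prompting means showing the model a handful of worked examples before asking it to handle a new case. A prompt for classifying customer feedback, for example, might look like this:

“Classify the sentiment of each message as Positive, Negative, or Neutral.
Message: ‘The checkout process was quick and painless.’ → Positive
Message: ‘My order arrived two weeks late and the box was crushed.’ → Negative
Message: ‘The package arrived on a Tuesday.’ → Neutral
Message: ‘The support team resolved my issue in minutes.’ →”

Given the pattern in the examples, the model will typically complete the last line with “Positive”.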