What is Prompt Engineering
According to OpenAI: "Prompt engineering is the process of writing effective instructions for a model, such that it consistently generates content that meets your requirements. Because the content generated from a model is non-deterministic, prompting to get your desired output is a mix of art and science. However, you can apply techniques and best practices to get good results consistently."
Prompting, at its most basic, is just using words to give instructions to a model. We can do this in a chat interface, like when we talk to ChatGPT or Claude, or programmatically through an API.
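For example, here is a minimal sketch of prompting through an API, using OpenAI's Python SDK. The model name and prompt are illustrative, and an `OPENAI_API_KEY` environment variable is assumed:

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[
        {"role": "user", "content": "Summarize prompt engineering in one sentence."}
    ],
)

# The model's reply is plain text we can use anywhere in our application.
print(response.choices[0].message.content)
```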
The art (and science) of giving instructions to these models is prompting, and prompt engineering seeks to craft prompts that generate more predictable and accurate content.
Using the same model, you can expect significantly better results by applying prompt engineering techniques.
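To see what that means in practice, compare a vague prompt with an engineered one. This is a hypothetical illustration, and the prompts are made up for the example:

```python
# A vague prompt: scope, role, and output format are all left to the model,
# so responses will vary widely from run to run.
vague_prompt = "Tell me about this function: def add(a, b): return a + b"

# An engineered prompt: a role, explicit constraints, and a required output
# format make the response far more predictable and easier to parse.
engineered_prompt = (
    "You are a senior Python code reviewer.\n"
    "Review the function below and respond with exactly three bullet points:\n"
    "1. one potential bug or edge case,\n"
    "2. one style improvement,\n"
    "3. one suggested unit test.\n"
    "Keep each bullet under 20 words.\n\n"
    "def add(a, b): return a + b"
)
```

Either string can be dropped into the `messages` list from the earlier example; the difference in output consistency comes entirely from the prompt.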
For those using LLMs in AI applications, whether for chatbots or for complex work within the application, better prompts can also save significant LLM API costs.
GitHub published a blog post on developer productivity and speed with Copilot: Research: quantifying GitHub Copilot's impact on developer productivity and happiness.
But if we know what prompt engineering is, it's important to also know what it is not.
Prompt engineering isn't magic - it's science! Prompting is not the only tool in our AI tool belt for getting better, more accurate, more reliable results, BUT it is the most accessible tool and the quickest to iterate on.
Prompt engineering is also not a replacement for our critical thinking skills. We should use prompting to enhance those skills and to take mental load off ourselves so we can do more and do it better, but we still need to apply our knowledge of development and best practices to ensure we are creating secure, production-ready applications.