Prompt engineering is the field of machine learning focused on optimizing the input prompts that are given to generative machine learning models to generate new outputs–such as text or images.
A subfield of prompt engineering is prompt injection–which focuses on the construction of malicious input prompts that can modify the behavior of a deployed generative AI system in a way unintended by its creator.
Prompt injection is analogous to other types of injection attacks in computer security, such as SQL Injection or Cross Site Scripting (XSS)