What Microsoft AI research thinks about “prompt engineering”



Summary

Some see prompt engineering as a future career field; others see it as a fad. Microsoft’s AI researchers describe their approach.

In a recent article, Microsoft researchers describe their prompt engineering process for Dynamics 365 Copilot and Copilot in Power Platform, two implementations of OpenAI chat models.

Prompt engineering is trial and error

Among other things, the Microsoft research team relies on general system prompts for its chatbots: the kind of instructions users typically type into ChatGPT and similar tools to give the chatbot a specific role, knowledge base, and set of behaviors.

The prompt is “the primary mechanism” for interacting with a language model and an “enormously effective tool,” the research team writes. It must be “accurate and precise” or the model will be left guessing.


The chatbot response before and after prompt optimization. | Picture: Microsoft

Microsoft recommends establishing some ground rules in the prompt that fit the chatbot’s purpose.

For Microsoft, these ground rules include avoiding subjective opinions or repetition, not getting into discussions with the user or giving excessive insight into how it works, and ending a chat thread that becomes controversial. Ground rules can also keep the chatbot from being vague, going off-topic, or inserting images into its responses.

System message:
You are a customer service agent who helps users answer questions based on documents from

## On Safety:
– eg be polite
– eg output in JSON format
– eg do not respond if the request contains harmful content…

## Important
– eg do not greet the customer

AI Assistant message:

## Conversation

User message:

AI Assistant message:

Microsoft sample prompt
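
To make the structure above concrete: a template like this maps directly onto the message roles of OpenAI's chat API. The following Python sketch shows one way a system prompt with such ground rules could be assembled and sent. The model name, the exact rule wording, and the ask helper are illustrative assumptions, not Microsoft's production setup.

from openai import OpenAI

# System message mirroring the sample prompt above. The exact rules and the
# model name are assumptions for illustration, not Microsoft's actual values.
SYSTEM_PROMPT = """You are a customer service agent who helps users answer questions based on the provided documents.

## On Safety:
- Be polite.
- Output in JSON format.
- Do not respond if the request contains harmful content.

## Important
- Do not greet the customer."""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Send the system prompt plus a single user turn and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # low temperature keeps support answers more consistent
    )
    return response.choices[0].message.content

print(ask("How do I reset my password?"))

The "## On Safety" and "## Important" headings inside the system message are plain text as far as the API is concerned; they simply help keep the rules organized for the model and its authors.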

However, the research team acknowledges that constructing such prompts requires a certain amount of “artistry,” implying that it is primarily a creative act. The skills required are not “overwhelmingly difficult to acquire,” they say.

When creating prompts, they suggest setting up a framework in which to experiment with ideas and then refine them. “Prompt generation can be learned by doing,” the team writes.
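
A minimal version of such an experimentation framework might look like the following sketch: a few system-prompt variants are run against the same fixed set of test questions so their answers can be compared side by side. The variants, test questions, and model name are hypothetical placeholders.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt variants to compare; in practice these would be refined over time.
PROMPT_VARIANTS = {
    "baseline": "You are a customer service agent. Answer questions based on the provided documents.",
    "with_rules": (
        "You are a customer service agent. Answer questions based on the provided documents.\n"
        "Do not speculate, do not go off-topic, and do not greet the customer."
    ),
}

# A small, fixed set of test inputs so that differences between variants are easier to attribute.
TEST_QUESTIONS = [
    "How do I reset my password?",
    "What is your opinion on your competitor's product?",
]

def run_variant(system_prompt: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
        temperature=0,  # near-deterministic settings make variants easier to compare
    )
    return response.choices[0].message.content

for name, prompt in PROMPT_VARIANTS.items():
    print(f"=== {name} ===")
    for question in TEST_QUESTIONS:
        print(f"Q: {question}")
        print(f"A: {run_variant(prompt, question)}\n")

Reviewing the logged answers, adjusting a rule, and rerunning the same questions is one plausible form of the "learning by doing" the team describes.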

The future role of prompt engineering is not yet clear. On the one hand, the output of the models depends heavily on the prompt. On the other hand, the randomness of text generators makes it difficult to study the effectiveness of individual prompt methods, or even individual elements within a prompt, in a way that would meet scientific standards.
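
One way to see that difficulty is to send the same prompt repeatedly at a non-zero temperature and compare the answers, as in the sketch below; the model name and parameters are again assumptions.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Explain in one sentence why the sky is blue."

# Sample the same prompt several times; at temperature > 0 the answers will differ,
# which makes it hard to attribute output changes to prompt changes alone.
samples = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.8,
    )
    samples.append(response.choices[0].message.content)

for i, text in enumerate(samples, start=1):
    print(f"Sample {i}: {text}")

print(f"Distinct answers: {len(set(samples))} of {len(samples)}")

If answers vary this much for a single unchanged prompt, attributing an improvement to a specific prompt element requires many samples rather than a single before-and-after comparison.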

Recommendation

Microsoft Research Blog.
