May 19, 2023 | V. "Juggy" Jagannathan
This week I am focusing on a discipline that didn’t exist a few years ago: prompt engineering. Yes, I use the term discipline, as there are now courses being offered on how to do this effectively. DeepLearning.AI, Coursera and Codecademy are among those that have developed actual online courses you can take to learn how to create prompts.
With the explosive use of ChatGPT and Bard, practically everyone is now aware that to coax reasonable answers out of these chatbots we need to phrase our questions and interactions carefully. How can we do this effectively? That is what prompt engineering is all about. Of course, you can ask Bard, and here is its opening salvo: “Prompt engineering is the process of crafting a prompt that will elicit a desired response from a large language model (LLM).”
When GPT-3 was released a few years ago, the paper that heralded all the madness we are witnessing now declared, “Language Models are Few-Shot Learners.” The authors prompted the model with a few examples (few-shot), and GPT-3 managed to learn from the context and examples to answer the query. Fast forward to the present day and this area has grown into an engineering discipline!
Of course, one prerequisite for creating effective prompts is a clear, lucid style of prompting: avoid ambiguity, provide examples and iterate carefully. But that list is just the tip of the iceberg!
Whatever you want your chatbot to answer, there is a prompt for that. Professor Jules White of Vanderbilt University (my alma mater) has created a prompt catalog. He is also the author of the course offered by Coursera. If you want the answer formulated with Shakespearean eloquence, you are invoking what is now called a “persona pattern.” That is, you are asking the chatbot to take on a particular persona or point of view.
If you want to plan an itinerary for a place you are visiting, you can instruct the chatbot to behave like a “helpful assistant.” If you want an explanation of how to arrive at the answer to a mathematical question, you give examples that show the steps of your own reasoning; the chatbot will then mimic your “chain of thought” examples and explain its answer. If you want the chatbot to follow a particular pattern in answering your questions, you invoke a “template pattern.”
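To make these patterns a bit more concrete, here is a minimal Python sketch of the persona and chain-of-thought patterns. It assumes the openai package’s 0.27-era ChatCompletion interface and an API key already configured; the model name and the prompt wording are illustrative, not prescriptive.

```python
import openai  # assumes the openai==0.27.x interface; set openai.api_key before running

# Persona pattern: the system message asks the model to adopt a point of view.
persona_messages = [
    {"role": "system", "content": "Act as William Shakespeare. Answer every question in Elizabethan verse."},
    {"role": "user", "content": "Explain what prompt engineering is."},
]

# Chain-of-thought pattern: worked examples show the reasoning steps to imitate.
cot_messages = [
    {"role": "user", "content": (
        "Q: A pen costs $2 and a notebook costs $3. What do 2 pens and 1 notebook cost?\n"
        "A: 2 pens cost 2 x $2 = $4. One notebook costs $3. Total = $4 + $3 = $7.\n"
        "Q: A ticket costs $12 and popcorn costs $5. What do 3 tickets and 2 popcorns cost?\n"
        "A:"
    )},
]

for messages in (persona_messages, cot_messages):
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    print(reply["choices"][0]["message"]["content"])
```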
Now, if you want to grade the output of a chatbot, what can be done? Well, of course, you can ask the chatbot to grade the output given some metrics and engage it in game playing! If you want to verify the output and screen for hallucinations, you can make your task a bit easier by asking the chatbot to extract the facts mentioned in the answer. If it is a summarization task, you can ask one chatbot to summarize and another chatbot to critique and improve the summary.
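Here is one way the summarize-and-critique idea could look in code, again assuming the 0.27-era openai package; the reviewer instructions are just one possible phrasing.

```python
import openai  # assumes the openai==0.27.x interface; set openai.api_key before running

ARTICLE = "...full text of the article to be summarized..."

# Step 1: one call produces the summary.
summary = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize this article in three sentences:\n" + ARTICLE}],
)["choices"][0]["message"]["content"]

# Step 2: a second call plays the critic, flagging unsupported facts
# (a simple hallucination screen) and proposing an improved summary.
critique = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": (
        "You are a strict reviewer. List any facts in the summary that are not supported "
        "by the article, then suggest an improved summary.\n\n"
        "Article:\n" + ARTICLE + "\n\nSummary:\n" + summary
    )}],
)["choices"][0]["message"]["content"]

print(summary)
print(critique)
```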
If you don’t know how to design the prompt for a particular problem, you can prompt ChatGPT, for instance, to help you craft the prompt that will get ChatGPT to give you the right response. If your head is not spinning, consider yourself blessed. Here is a recent paper submitted to arXiv with the telling title: “Large Language Models are Human-Level Prompt Engineers.”
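And here is a sketch of that meta-prompting trick, with the same assumptions about the openai package; the example task is hypothetical.

```python
import openai  # assumes the openai==0.27.x interface; set openai.api_key before running

task = "Draft a polite email declining a meeting invitation."  # hypothetical task

# Step 1: ask the model to design the prompt it would like to receive for this task.
meta = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": (
        "I want a large language model to do the following task: " + task + "\n"
        "Write the best prompt I should give it. Reply with the prompt only."
    )}],
)
generated_prompt = meta["choices"][0]["message"]["content"]

# Step 2: feed the generated prompt back in as the actual instruction.
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": generated_prompt}],
)
print(answer["choices"][0]["message"]["content"])
```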
I came across the following blog post, which draws an interesting parallel between prompt engineering for chatbots and prompting our own brains. Prompting works not only for ChatGPT, but also for us. When we engage in positive thinking or self-hypnosis, we are prompting our brain to do the right thing. And that is an interesting twist on the field of prompt engineering. Prompt engineer yourself (before you prompt ChatGPT) to lead a happy life.
Acknowledgment
My friend and classmate Krishnan sent me the blog post about brain prompting written by his son.
I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.
“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.