This is more of a brainstorming/philosophical question than anything concrete. In the last few weeks I have learned to use ChatGPT more and more effectively as a sort of colleague to bounce ideas around. I learned a few good tricks (also thanks to Jeremy’s excellent videos) that I now use all the time, but I have started to notice that one of its main weaknesses is that it almost never asks a question or asks me to clarify my input. In my experience, in the real world, when talking about data the trickiest part is actually getting stakeholders to ask the right questions, and it looks like ChatGPT is exactly the same: ask a vague question and you get a very verbose, vague (and barely useful) answer. Ask a more detailed question, maybe splitting your problem into small bits, and you get some very good gems.
This long introduction brings me to my point: how could we train/prompt/condition an LLM to ask for more clarification when the question is not clear (and, most importantly, to specifically ask about the parts of the question that are unclear)? How can we get an LLM to help us find the right questions?
I don’t expect this to be easy by any means, but I cannot think of a way to do it at the moment, even restricting myself to very specific topics.
There are many papers on structuring or chaining prompts to improve the final result. This article lists many of them, along with a nice summary of the ‘chain of verification’ prompt recipe. ‘Prompt engineering’ is a continuously moving feast.
Thanks! I’ll check the (very long and probably at least partially GPT-generated) article. But what I am looking for is a way of having ChatGPT point out when the question is too vague and why, so that it can help me refine my question until my intent is clear enough to get a useful result.
I’d play around with a system prompt and tune it to fit your needs:
You are a highly interactive assistant designed to facilitate in-depth discussions and generate insightful solutions. Your goal is to help the user by deeply understanding their queries.
When presented with a vague or unclear question, you are programmed to ask for clarifications. Specifically, you should ask about any terms, concepts, or aspects of the question that are not clear to you, before attempting to generate a complete answer. This will help in creating a more engaging and useful dialogue.
In case of detailed and clear questions, aim to provide comprehensive and insightful answers, leveraging your extensive knowledge base.
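If you want the same behaviour outside the ChatGPT UI, a system prompt like the one above can be wired in through the API. Here is a minimal sketch, assuming the official `openai` Python SDK (v1+); the prompt wording and the `build_messages` helper are my own illustration, not anything official:

```python
# Illustrative "clarify-first" system prompt; tune the wording to your needs.
CLARIFY_FIRST = (
    "You are a highly interactive assistant. When a question is vague or "
    "unclear, do not answer it yet: instead, list the specific terms, "
    "concepts, or missing details you need clarified, then wait for the "
    "user's reply. Only answer once the intent is clear."
)

def build_messages(user_question, history=None):
    """Assemble the chat message list, always leading with the system prompt."""
    messages = [{"role": "system", "content": CLARIFY_FIRST}]
    messages.extend(history or [])  # prior turns, if you keep a conversation
    messages.append({"role": "user", "content": user_question})
    return messages

# With the openai SDK the call would then look something like:
#
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(
#       model="gpt-4",
#       messages=build_messages("Analyze my data"),
#   )
#   print(resp.choices[0].message.content)
```

Because the system prompt is re-sent on every turn, the model keeps asking for clarification across the whole back-and-forth rather than only on the first message.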
I find ending with something like “If you have any questions, ask them; otherwise do X” to be pretty effective with GPT-4.
I use the dictation feature on the app, which is great. In cases where I’m asking a vague question I’ll tend to ramble for up to 2 minutes, then usually summarize/clarify what I’ve said, and end with some expected action like above.