LLMs rely on language inputs as their instructions, which means that to maximise their potential you must be able to construct instructions that guide the model in the desired direction and lead to the intended outcome: you must craft the "prompt."
From my observation, and from my work on Reportify, useful prompts excel in two aspects. Firstly, they provide concise yet comprehensive context, both factual and in terms of expected behaviour, to help models process the information effectively. Secondly, they offer a crystal-clear description of the desired output.
On the surface this sounds straightforward, but in reality I think this task is rather challenging. It requires the following abilities:
- Familiarity with the relevant context.
- Accurate articulation of the desired outcome before its creation.
These skills are typically exhibited by exceptional managers and leaders who possess expertise in their respective fields. It seems to be a task that comes easily to those experts but proves exceedingly difficult, if not impossible, for non-experts to do well. How can you know what good really looks like unless you have the expertise to analyse it?
I have encountered numerous domain experts who possess both the knowledge of the context and know exactly what a good outcome looks like yet struggle to express it effectively. Even with their expertise, falling short on expression may mean that they still struggle to harness the power of LLMs.
In other words, utilising LLMs effectively is easy only for those who have domain expertise, can articulate requirements adeptly, and can verify the results.
Consequently, LLMs do not render experts obsolete; expertise is likely a prerequisite for using them effectively. Success will ultimately go to the individuals who can apply LLMs to the tasks these models are best suited for. That may mean fewer jobs, or less well-compensated ones, concentrating wealth in a smaller group.
Consider this: which is preferable, an outstanding copywriter using an AI assistant to get through an extremely heavy workload, or a mediocre one using an AI assistant to make their copy better? I suspect the outstanding copywriter will win hands down, because they know what they want and can tell whether they have produced it. The mediocre one is looking to the AI to answer that question, in which case you might as well use the AI directly.
You need the skills both to instruct and to validate the responses; neither in isolation is enough to ensure good work.
So, the premises upon which this argument rests are twofold:
- Only domain experts can formulate the correct prompts.
- Only domain experts can validate the output.
If either of these premises is proven wrong, especially if effective methods are developed to address both challenges, we could witness the rise of immensely powerful AI-based businesses.
I think it's possible to challenge either of these premises, and some domains will find that considerably easier than others. For instance, when coding with AI assistance, one can directly verify that the code performs as intended and write tests for validation. Validating persuasive arguments, on the other hand, will continue to pose significant difficulties.
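As a minimal sketch of what that looks like in practice (the `slugify` function stands in for some AI-generated code; both it and the tests are my own illustration), the requester can pin down "what good looks like" as executable tests rather than trusting the model's word:

```python
import re
import unittest

def slugify(title: str) -> str:
    """Stand-in for AI-generated code: turn a title into a URL slug."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

class TestSlugify(unittest.TestCase):
    # The tests are the requester's executable definition of "good":
    # the model's output either passes them or it doesn't.
    def test_basic(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Many   spaces  "), "many-spaces")

if __name__ == "__main__":
    unittest.main()
```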
So, what does this mean for AI businesses based on LLMs? It implies that their success lies in consistently extracting high-quality prompts from individuals lacking domain expertise and automating the validation process for the generated outputs.
Now, let's return to the original question: Is chat the appropriate interface for accomplishing this?
I doubt it. Most existing user interfaces could have been replaced with chat, but they haven't been. I have witnessed first-hand, a few times, the "rise" of chat interfaces only for them to be abandoned within a few months. Chat interfaces often fall short of their form-based counterparts, and LLMs don't change this reality.
For example, if I want an LLM to summarise an article, I don't want to copy and paste the entire text into a chat and then type a prompt like "summarise this article into bullet points." Instead, I want a "summarise" button, or better still I want the summary to already exist and be accessible to me, formatted the way I like and highlighting the bits I actually care about, because the tool has my personal context.
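A minimal sketch of what that button could do behind the scenes (every name here is my own illustration, and `complete` is a stub for whichever LLM provider you use): the product assembles the prompt and the user's stored context, so the user's only input is a click:

```python
from dataclasses import dataclass

def complete(prompt: str) -> str:
    """Stub standing in for a call to whichever LLM provider you use."""
    raise NotImplementedError

@dataclass
class UserContext:
    """Assumed to be accumulated by the product, not pasted in by the user."""
    preferred_format: str   # e.g. "bullet points"
    interests: list[str]    # topics worth highlighting for this user

def on_summarise_clicked(article: str, ctx: UserContext) -> str:
    # The product, not the user, assembles the prompt: the user's only
    # "input" was clicking a button next to the article.
    prompt = (
        f"Summarise the article below as {ctx.preferred_format}.\n"
        f"Highlight anything relating to: {', '.join(ctx.interests)}.\n\n"
        f"ARTICLE:\n{article}"
    )
    return complete(prompt)
```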
The takeaway here is that chat interfaces are slow, prone to errors (because of the manual input), and impose more cognitive load, since you have to recall the most effective prompts.
They do almost nothing to support the extraction of high-quality responses. I know you can instruct LLMs to ask you questions, but you have to know to do that, and you have to know what kind of questions they should ask: again, not trivial requirements.
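For illustration, such an instruction might look something like this (the wording is my own sketch, not a standard recipe; knowing to ask for it at all is the non-obvious part):

```python
# A sketch of a "make the model interview me first" instruction.
CLARIFY_FIRST = (
    "Before you produce anything, ask me up to five questions that a "
    "domain expert would ask: about the audience, the goal, the "
    "constraints, and what a good result would look like. Only write "
    "the output once I have answered."
)
```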
If chat is not the suitable interface, what could be?
I don't know, and to be honest I doubt there is a single answer. Nonetheless, speaking as a product manager, here are some of the outcomes I think it would need to deliver:
- Rapid (or ongoing) ingestion of user context, preferably without manual intervention (such as copying and pasting or reformatting).
- Augmentation of user desires with domain experts' acumen, even in the absence of experts, to generate effective prompts.
- Integration of the service at the point of use, rather than residing on some generic AI chat platform at some fixed location.
- Development of mechanisms to validate responses, involving both user feedback and real-world validation, to enable iterative improvements (a rough sketch follows this list).
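On that last point, here is a rough sketch, with every name an assumption of mine, of one way user feedback could drive iteration: tie each response to the prompt template that produced it, so approvals and rejections accumulate as evidence for or against each template:

```python
from collections import defaultdict

# Hypothetical store: template id -> votes (1 = thumbs-up, 0 = thumbs-down)
feedback: dict[str, list[int]] = defaultdict(list)

def record_feedback(template_id: str, thumbs_up: bool) -> None:
    """Tie each user judgement back to the template that produced the output."""
    feedback[template_id].append(1 if thumbs_up else 0)

def best_template(candidates: list[str]) -> str:
    """Pick the template with the highest approval rate so far."""
    def approval(tid: str) -> float:
        votes = feedback[tid]
        return sum(votes) / len(votes) if votes else 0.0
    return max(candidates, key=approval)

# Usage:
# record_feedback("summary-v2", True)
# best_template(["summary-v1", "summary-v2"])
```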
My intuition suggests that while there may be some generic solutions to these challenges, they will likely be discovered by people striving to solve narrow use cases and subsequently extrapolating from their findings. Starting with all-encompassing AI tools seems less likely to succeed, simply because you then have to solve for the expert's advantage across multiple domains at once.