By Tom Austin, The Analyst Syndicate
In the early 1960s, people experimented with psychoactive drugs (like LSD) in the pursuit of enhanced human creativity. It’s unclear if they succeeded.
Conversations generated by ever-advancing artificial intelligence (AI) technologies show considerable promise in this pursuit.
Since 2017, AI researchers at the world’s biggest AI powerhouses have been building ever-larger large language models (LLMs), trained on billions or trillions of words of text (or other data) to learn how to carry on rich conversations with people.
These LLMs go by names such as Meena, BERT, ELMo, GPT-2, and GPT-3. The conversational content that LLMs generate can sound good, and many LLM-generated conversations seem rational and reasonable. In The New York Times, Steven Johnson opined that “GPT-3 and other neural nets can now write original prose with mind-boggling fluency — a development that could have profound implications for the future.”
Cherry-pick the best of what Meena and GPT-3 say, and you’ll be amazed at their coherence.
Pick the worst, and it’s worse than a bad 1960s acid trip.
LLMs veer into hallucinatory statements, comments ungrounded in reality, and nonsensical propositions. To most observers, these hallucinations are rightly a cause for deep concern! But the ability to hallucinate (defined broadly), if properly exploited, can help people break out of the creative straitjackets that bind them and, instead, create disruptive new insights and opportunities.
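To see mechanically how a language model can produce fluent yet ungrounded text, here is a deliberately tiny sketch in Python. It is a toy bigram model over a handful of sentences, not how GPT-3 or Meena actually work (those are transformer networks with billions of parameters), but it shows the core behavior: sampling word-by-word from observed patterns can stitch together novel, plausible-sounding statements that appeared in no training sentence.

```python
import random
from collections import defaultdict

# Toy corpus; real LLMs train on billions or trillions of words.
corpus = (
    "the oyster turns a grain of sand into a pearl . "
    "the writer turns a spark of insight into a script . "
    "the model turns a prompt into a conversation ."
).split()

# Count word-to-word transitions: a minimal "language model".
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, max_words=12, seed=0):
    """Sample a word sequence by following observed transitions."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        nxt = rng.choice(transitions[words[-1]])
        words.append(nxt)
        if nxt == ".":
            break
    return " ".join(words)

print(generate("the"))
```

Every generated word pair was seen in training, yet the whole sentence may be new; for example, the model can happily claim the writer turns a prompt into a pearl. Scaled up enormously, that same recombination yields both the creative sparks and the hallucinations discussed above.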
While considering the parallel between 1960s acid trips and the evolution of LLMs, I ran across Rob Gonsalves’ creativity-stimulating demonstrations, a detailed series of excellent small-scale experiments. He shows how to build and test an LLM (albeit one much smaller in scale than the record-breaking trillion-parameter megaliths Google recently engineered).
His demo generated English-language scripts of the kind screenwriters “inside the machine” might produce. These scripts can prompt real writers – “outside the machine” – to generate new ideas and new scripts. Don’t assume the LLM output will itself be a new script; assume some of it stimulates writers to create new scripts.
The results are consistent with the notion that large-scale, general-purpose LLMs, given role-specific training for particular types and forms of output, can stimulate people to create provocative, valuable new ideas.
I see significant opportunities at the intersection of two technology concepts taken together:
- LLM-based software to generate novel, potentially disruptive ideas
- Coaching applications, tuned to the audience’s specific situation, to wrap and deliver the LLM’s outputs
Together, these will maximize the likelihood that people – executives, investors, entrepreneurs, and everyday businesspeople, as well as artists, scriptwriters, and musicians – will break out of the jail their cognitive biases and creative blocks have driven them into.
Let’s be clear: LLM-based idea generators embedded in role-specific coaching applications do not need to hallucinate to help people break out of conceptual jail. But special care will be required to monitor LLM outputs for potentially offensive content.
Enveloped in a proper coaching context, LLMs can supply the intellectual seeds that help people break out of their conceptual prisons, just as a grain of sand forces an oyster to create a pearl.
Check out Rob’s demo and contact me if you want pointers to people building coaching apps that could link to these LLMs.
© 2022 – Tom Austin — All Rights Reserved.
This research reflects my personal opinion at the time of publication. I declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.