Posted on: Friday Feb 10th 2023
Article by: Barry Watson
The discussion of AI in market research goes back at least several decades. The first time I recall reading anything on the topic was in the mid-90s, when Stephen Popiel wrote an article in an industry journal considering how natural language processing could be applied to analysis.
But the hype around ChatGPT over the last few weeks has made the discussion more active than ever – and the prognostications more extreme. Perhaps this flurry of attention is a result of PR savvy on the part of ChatGPT’s creators, OpenAI. Perhaps the application’s ability to display seemingly human abilities (however unevenly) has a threatening magic about it. Whatever the reason, everyone’s been talking about AI.
Some predicted with confidence that ChatGPT would totally change how we do market research and threaten the jobs of many in the industry. I think that fiery glow has dimmed a little. Nevertheless, it’s good that the app’s impressive performance on some tasks – and the ensuing attention – have caused many people to experiment for themselves with this novel AI application and see what it can do.
It’s clear that ChatGPT can speed up some routine text-based tasks, and that means lower costs. It can summarize text, find themes, fill in context, and the like with blinding speed, relatively little effort, and reasonable accuracy.
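To make that concrete, here is a minimal sketch of what such a routine task might look like in code, using OpenAI’s Python SDK to ask a chat model for the themes in a batch of open-ended survey responses. The model name, prompt wording, and summarize_verbatims helper are illustrative assumptions, not a description of any particular firm’s workflow.

```python
# Illustrative sketch only: summarizing open-ended survey responses with the
# OpenAI API. The model name, prompt, and helper function are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_verbatims(verbatims: list[str]) -> str:
    """Ask the model for recurring themes in a batch of open-ended responses."""
    joined = "\n".join(f"- {v}" for v in verbatims)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You summarize survey verbatims into 3-5 short themes."},
            {"role": "user", "content": f"Responses:\n{joined}"},
        ],
    )
    return response.choices[0].message.content


print(summarize_verbatims([
    "The checkout process was confusing on mobile.",
    "Great prices, but delivery took two weeks.",
    "Support chat resolved my issue quickly.",
]))
```

Even a toy call like this shows where the speed comes from: verbatims that would take hours to code by hand come back themed in seconds – though, as the next paragraph shows, the output still needs a human check.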
But when the app makes a mistake, it can be a doozy. When I asked ChatGPT to write a blurb on our company, it confidently wrote that Environics Research was founded by Bill Environics. Michael Adams and others who were around in the ’70s would disagree! Encountering such glaring errors is jarring and instructive, especially after being lulled into a sense of confidence in the AI’s “thinking” by its smooth use of language.
When one door closes, another opens. In this case, I suppose the door is opened for fact checkers and those in the risk assessment business – who may be able to help people tap the value of the AI without letting Bill Environics cause too much chaos.
How about market research? What does ChatGPT mean for an industry where delivering insight has become the gold ring we all reach for?
AI is based on finding predictable patterns across large numbers of cases. Colleagues in the field like Claude G. Théoret tell me AI tools work best when they have more than 10,000 examples in each sub-area. Humans can do this, too – on their own or with old-school tools – but slowly, and only for a relatively small number of cases. AI can do it faster and more powerfully, and digest so many cases that it will see patterns that humans would never detect.
But this emphasis on the density of associations in large volumes of data also means that the extraordinary outliers, the unique occurrences and edge cases, get ignored.
We know that the most valuable insights often emerge from looking at seemingly unrelated material and seeing unexpected connections or patterns. Humans are built for this. Simon Chadwick and his colleagues do a great job in their workshops teaching researchers how this works.
Most of the people I have spoken to who work in AI think that AI’s ability to generate this sort of novel, creative insight is a long way off at best and may never be reached.
Like all businesses facing a new technology that delivers astonishing efficiencies – from artisanal weavers in the UK in the 18th century through to coders and analysts in market research companies – we should embrace ways to make routine tasks more efficient while keeping our eye on where the real value is created. In our case, it’s at the top of the information pyramid. That’s where most of our efforts need to be focused.