Businesses and Investors Are Losing Billions to Fraudulent Market Research Data. Here's How to Fix It

By Neil Dixit, founder, and Adam Bai, chief strategy officer, Glimpse

Corporate revenues, profits, and share values are getting drained by an insidious force that's worsening by the day. It’s causing businesses to waste billions of dollars and make strategic decisions that don’t necessarily serve their customers. All stakeholders, including investors, are paying a price for it. The problem is market research fraud. And the impact is larger than most executives realize.

Market research spending has skyrocketed in recent years. In 2008, companies spent an estimated $32 billion; by last year, that figure had reached more than $80 billion. In the United States alone, market research revenue has grown to more than $30 billion annually, as the Covid-19 pandemic and other factors have pushed more companies to try to understand changes in consumer behavior and sentiment, IBISWorld reports.

Most qualitative and quantitative research is done online and via mobile, and a huge amount of the information collected is fake. Fast Company reports that “between 15% and 30% of all market research data is fraudulent.” Some bad actors use so-called “survey farms” and bots to take large numbers of surveys and fill them out with fake answers.

These survey farms exist to make money from survey incentives. If a company offers a small reward to those who fill out a survey, fraudsters try to supply thousands of responses. But even surveys without incentives attract this behavior. A study found that “bot profiteers” use software to respond to surveys in case there is a reward that was not publicized. They also see no-reward surveys as “training opportunities for AI engines.”

AI is a problem and a solution

Including open-ended questions -- instead of just multiple choice, numeric, or Likert scale -- can be a powerful weapon against deceitful responses. But it’s not enough. The people behind this fraud have been teaching generative artificial intelligence to answer open-ended questions in ways that seem like a real person provided the answer. “Red herring” questions, which may bring up irrelevant topics or offer illogical options, can also be used, but bots are getting better at handling those as well.

Fortunately, technology also provides new tools to fight back. Through our work, we see that survey integrity can be vastly improved, making it much less likely that a survey will be corrupted by false responses.

Automated tools can detect some hallmarks of fake answers. They can seek out answers that have been copied and pasted or reused across responses from ostensibly different “people” -- even at the level of blocks of words rather than entire answers. They can look for similarities so heavy that it is unlikely two real respondents provided those exact answers, and they can catch incomplete text, nonsensical text, and more.
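To make the idea concrete, here is a minimal sketch of that kind of screening -- not Glimpse's actual tooling, just an illustration using string similarity and a short-answer check on made-up responses:

```python
from difflib import SequenceMatcher

# Hypothetical open-ended answers from ostensibly different respondents.
answers = [
    "I like the product because it saves me time every morning.",
    "I like the product because it saves me time every morning.",  # exact duplicate
    "Honestly, I like the product, it saves me time every morning!",  # near-duplicate
    "asdf qwer",  # nonsensical filler
    "Too expensive for what it offers; I switched to a cheaper brand.",
]

def similarity(a: str, b: str) -> float:
    """Rough similarity ratio in [0, 1] between two answers."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_suspicious(answers, threshold=0.85, min_words=3):
    """Return indexes of answers that look duplicated or like filler."""
    flagged = set()
    for i in range(len(answers)):
        if len(answers[i].split()) < min_words:
            flagged.add(i)  # too short to be a meaningful open-ended answer
        for j in range(i + 1, len(answers)):
            if similarity(answers[i], answers[j]) >= threshold:
                flagged.update((i, j))
    return sorted(flagged)

print(flag_suspicious(answers))
```

Production systems compare blocks of words, not just whole answers, and combine many more signals; this shows only the basic shape of the check.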

New, sophisticated tools from firms like Research Defender (which we work with) also help seek out digital fingerprints and provide a wide array of techniques to detect and prevent fraud, such as looking for signs of large numbers of answers coming from a single, unlikely area. AI can also comb through the characteristics anonymous respondents provide about themselves, looking for duplication.
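One of the simplest of those signals -- an implausible share of responses coming from one place -- can be sketched in a few lines. The field names and data here are illustrative assumptions, not Research Defender's actual API:

```python
from collections import Counter

# Hypothetical respondent metadata; "region" stands in for whatever
# geolocation or fingerprint signal a fraud-detection vendor derives.
responses = (
    [{"id": i, "region": "Springfield, US"} for i in range(6)]
    + [{"id": i, "region": f"City {i}, US"} for i in range(6, 10)]
)

def flag_clustered_regions(responses, max_share=0.3):
    """Flag any region contributing an implausibly large share of responses."""
    counts = Counter(r["region"] for r in responses)
    total = len(responses)
    return {region for region, n in counts.items() if n / total > max_share}

print(flag_clustered_regions(responses))  # one region supplied 6 of 10 answers
```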

These are just some of the tactics technology can provide. But it’s not all about tech. Having experts who know how to curate and piece through responses is an important part of the strategy.

Survey quality yields more reliable human insights

Weeding out fakes is only a part of the solution. An even bigger piece is building a survey from the ground up in a way that will entice real people to take the time to provide real answers. This means providing intriguing questions that engage people. And it means keeping surveys the right length and style, and mobile-friendly, with a design proven to maximize response rates.

When this happens, businesses have every reason to embrace AI -- and see it as a friend, not a foe. For example, natural language processing allows companies to take thousands of responses and generate an instant, clear view of what consumers have to say about any given topic, issue or brand, highlighting key themes, keywords, ideas and more. It can show marketers and executives quick insights into the language, emotion and sentiment that people provide, which leads to more trustworthy and strategic decisions.
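The theme-and-keyword idea can be illustrated with a toy version -- simple word counting over made-up responses, far cruder than real natural language processing, but showing the kind of summary it produces:

```python
from collections import Counter
import re

# Minimal stopword list for the illustration; real NLP pipelines use much more.
STOPWORDS = {"the", "a", "and", "is", "it", "to", "i", "of", "for", "was", "but"}

responses = [
    "The delivery was fast but the packaging was damaged.",
    "Fast delivery, great price.",
    "Packaging could be better; price is fair.",
    "Loved the fast delivery and the price.",
]

def top_themes(responses, k=3):
    """Return the k most frequent non-stopword terms across all responses."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return [word for word, _ in Counter(words).most_common(k)]

print(top_themes(responses))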

For investors, understanding this is essential. They should look into the quality of the market research that companies are collecting. Ask executives to explain the specific steps they’re taking to ensure they’re fielding high-quality surveys and weeding out fraudulent responses. Ask for estimates of how much money they’re losing each year to market research fraud, and what they plan to do about it. The more proactive investors are in pushing companies to tackle this problem, the more everyone stands to gain.

Neil Dixit is founder of Glimpse, an AI-powered, quick-turn, self-service, human response platform that serves marketers, communicators and creators. Adam Bai is the company’s chief strategy officer.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
