In 2004, journalist Will Leitch copped to lying his way into dozens of focus groups in New York. His motivation was as simple as they come; it was a grift to make some side cash. More problematic than the deed itself, however, was Leitch’s remorseless manner of confession. He took to the pages of New York Magazine, where he is still a contributing editor, to write what amounted to a practical handbook for fibbing your way through a screener:
“If they ask you whether you’ve done [a focus group] in the past six months, just say no. They never check. If they ask you something off-the-wall, like, “Have you purchased a treadmill in the past year?” say yes; they wouldn’t ask if that weren’t the answer they wanted. Most importantly, let the recruiters lead you. Before you answer a question you’re not sure about, pause for a couple of seconds. They’ll tip their hand every time.”
Perhaps more damning than the guidance on cheating screeners was his confession to never expressing sincere opinions in a focus group, and his recommendation for others to do likewise. The “cardinal rule” that he suggested all good research cheaters follow is to always look for a researcher’s desired conclusion and feed it back to them. He went so far as to provide an example:
“In one group for Johnnie Walker Black, it was obvious the marketers wanted us to consider their beverage upscale, for special occasions. Recognizing this, I made up a story about learning my best friend was engaged and telling him, “It’s Johnnie Walker time!” The interviewer looked like he wanted to hug me.”
The response to Leitch by the Market Research Association (now Insights Association) was furious and immediate. The association rightly saw Leitch’s piece as an attack on the market research profession, both for its practical advice to would-be cheaters and for its implication that respondents (Leitch himself, at a minimum) were lying. MRA Executive Director Larry Hadcock attacked New York Magazine’s decision to run the piece in the starkest terms:
“Printing the article is akin to telling readers how to cheat on the law boards, falsify medical credentials, or steal from their employers. For your publication to further this unethical behavior is unconscionable.”
Aside from being an interesting bit of market research history, Leitch’s article, and the industry’s response to it, powerfully demonstrates two facts about research:
- First, for as long as we’ve had screeners, we’ve had folks trying to game them for personal gain.
- Second, these efforts threaten, in the most fundamental way possible, the value that we as researchers deliver to our clients. After all, if we aren’t talking to our clients’ actual target audiences, and if the things we’re hearing aren’t actually the truth, then what’s the point?
This is no less true today than it was in 2004. However, a great deal has changed in the meantime. Digital methodologies, which had not yet emerged in a meaningful way, have now come to dominate in a world where COVID-19 continues to limit in-person research opportunities. In Leitch’s day, you could at least be certain that the respondent you were looking at was physically in the correct city. But now, all bets are off. Increasingly, screeners can be taken by anyone, anywhere in the world, and the opportunities for what many now call “research fraud” have multiplied accordingly.
To discuss this concerning trend and strategies to combat it, I caught up with three experienced researchers who kindly offered to share their experiences with fake respondents and some strategies for preventing them from interfering with sound research.
Katrina Noelle is president of KNow Research, a full-service insights consultancy specializing in designing custom qualitative insights projects since 2003. She is also cofounder of Scoot Insights.
Q: Can you tell me about a recent experience you had with research fraud?
KN: We’ve had a few issues in recent years with participants not being who they said they were, or lying on screeners about past participation or purchase behavior. This is sadly just par for the course. However, recently we encountered a group of participants who had somehow found our client’s webpage that was set up to recruit participants for upcoming research. The banner ad was asking for current subscribers of the client’s product, but when the participants came into the (virtual) interviews, we found out that half of our recruited sample knew each other, were based outside of the U.S., were all logging in on the same device, in the same location, and had no knowledge of the product in question.
Q: When did it become clear that you were dealing with fraudulent respondents?
KN: We weren’t tipped off early enough, sadly! They used our scheduling tool to book a time, signed our waiver, and answered our emails without raising suspicion. When the first participant came in with a bad Wi-Fi connection, able to join only on a phone (not a laptop like we’d asked), we didn’t want to assume the worst. When the second and third participants joined sitting in front of the same backdrop with the same device, we started cancelling the interviews and went to our client to find out what had happened.
Q: Based on your experience, what’s one practical step you’d recommend to other qualitative/UX researchers to weed out fake respondents?
KN: Keep very good records! Work with your panel provider and/or client to get as much information about the cause of the situation as possible. In this case, we asked the client to track the IP addresses from the completed screeners, and they were able to put together a spreadsheet for us highlighting the fraud. This was used to counteract the fraudulent participants’ emails asking for their incentives; we were able to show them that they were not qualified by (a) not being in the U.S. and (b) all coming from the same computer. I would also add to be suspicious of participants using Google Voice or other nontraditional phone numbers; that’s what this group of participants did to look like they were based in the U.S.
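The IP-tracking spreadsheet Noelle describes can be approximated in a few lines of code. The sketch below is a hypothetical illustration, not KNow Research’s actual tooling: the field names are assumptions, and the country value is presumed to come from a prior IP-geolocation lookup. It flags screener entries that share an IP address or fall outside the study’s target country.

```python
from collections import Counter

def flag_suspect_entries(entries, target_country="US"):
    """Flag screener entries that share an IP or fall outside the target country.

    `entries` is a list of dicts with 'respondent_id', 'ip', and 'country'
    (the country code is assumed to come from an earlier geolocation lookup).
    Returns a dict mapping respondent_id -> list of reasons flagged.
    """
    ip_counts = Counter(e["ip"] for e in entries)
    flags = {}
    for e in entries:
        reasons = []
        if ip_counts[e["ip"]] > 1:
            reasons.append(f"shares IP {e['ip']} with another respondent")
        if e["country"] != target_country:
            reasons.append(f"located in {e['country']}, not {target_country}")
        if reasons:
            flags[e["respondent_id"]] = reasons
    return flags

# Hypothetical screener records: r1 and r2 share a device/IP, r2 is abroad.
screener = [
    {"respondent_id": "r1", "ip": "203.0.113.7", "country": "US"},
    {"respondent_id": "r2", "ip": "203.0.113.7", "country": "KE"},
    {"respondent_id": "r3", "ip": "198.51.100.2", "country": "US"},
]
print(flag_suspect_entries(screener))
```

A report like this is exactly the kind of record that settles incentive disputes: each flagged respondent comes with a documented, objective reason for disqualification.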
Michele Ronsen is the founder of Curiosity Tank, a consulting and education firm specializing in human-centered research, design development, and hands-on learning programs.
Q: Can you tell me about a recent experience you had with research fraud?
MR: I run a user research training series called “Ask Like a Pro,” and we’re currently in our seventh cohort. I take dozens of people from start to finish through qualitative research projects in every cohort. This is the first time in my life I have ever seen a coordinated attempt to defraud people across multiple studies. I had nine active students recruiting very specific audiences. Three of them had their study screeners picked up by fake respondents in Kenya and Nigeria who took the survey screeners dozens and dozens of times to figure out the “happy path.” They filled the session calendars with multiple entries so they could participate in these studies under false pretenses. They stole identities from LinkedIn. It was very sophisticated. In some cases, they had used proxy IPs to mimic locations on the U.S. East Coast, and they had learned the lingo of the core topics to provide open-ended answers.
Q: When did it become clear that you were dealing with fraudulent respondents?
MR: [My student’s] spidey senses were up. At the very beginning, she was showing a homepage of something on a dashboard. When she asked, “What’s working for you?” “What’s not?” the respondent answered with details about a feature that he had not yet seen. It also looked like it was very dark outside, and my student just made chitchat asking, “Where are you based?” The respondent said “New York.” Well, the student was in New York, and it was noon. I had no idea how prevalent this was, but since I started talking about it, several people have reached out to me to say, “we’ve had something similar happen; it’s getting really bad.”
Q: Based on your experience, what’s one practical step you’d recommend to other qualitative/UX researchers to weed out fake respondents?
MR: I am not sure there is only one step, as bandwidth, experience, and level of concern will vary. For some, it may be using a survey tool that tracks geolocation and IP addresses, if their organization’s privacy policy allows that capture. For those with more bandwidth, it may mean adding a phone screen before scheduling sessions; some may prefer requiring a work email address or other identity verification.
Michael Snell leads user experience research for security and authentication products at J.P. Morgan Chase.
Q: Can you tell me about a recent experience you had with research fraud?
MS: For a recent round of contextual remote interviews, we screened for Americans who engage in a number of specific behaviors (e.g., reading financial news or creating watchlists of stocks). We even implemented a screen for higher-income adults older than 24 because we’ve seen an uptick in cheating among enterprising teenagers on the online recruiting platform we use. As the study lead, I was fed up after two of our initial participants proved fraudulent. In short, they had cheated the screener and didn’t actually engage in any of the behaviors we were interested in. The project’s stakeholders were highly engaged with the research and had attended every session. Thus, these issues had the potential to cast doubt on the integrity of the research. I felt that we had squandered a precious opportunity to grow the team’s collective intuition through direct observation.
Q: When did it become clear that you were dealing with fraudulent respondents?
MS: What gave them away wasn’t simply the dark sky outside their windows at midday. When asked how they got started investing (we were studying financial news consumption), two participants offered the exact same personal story! One’s mind reels with possible explanations for this. There had been red flags in our first (ultimately fraudulent) interview; both participants, for example, mentioned investing in the S&P 500, but neither could explain what the S&P 500 was. Still, it wasn’t absolutely clear that something was wrong until we heard the same story from a second participant.
Q: Based on your experience, what’s one practical step you’d recommend to other qualitative/UX researchers to weed out fake respondents?
MS: An additional layer of “fully informed consent” implemented at the end of a screening questionnaire can help curb some portion of these issues. For example, provide a bulleted summary of what to expect for the study protocol or a warning regarding fraud, and ask for a final confirmation that they meet the qualifications. By plainly stating qualifications and outlining penalties for would-be fraudsters at the end of a screener, researchers convey an active intolerance toward cheating. Otherwise, all the cheaters experience is that they’ve answered some screener questions and have been passed to enrollment without a “rumble strip” or stop sign to slow them down. Inserting that friction at the end impresses upon a cheater that this final confirmation is a handshake of trust that we will hold them accountable to.
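Snell’s “rumble strip” can be modeled in a self-hosted screener flow. The sketch below is purely illustrative (the qualification wording, warning text, and function name are assumptions, not Chase’s implementation): after the screener questions, the respondent sees a fraud warning and must explicitly re-confirm every qualification before being passed to enrollment.

```python
# Hypothetical qualifications restated plainly at the end of the screener.
QUALIFICATIONS = [
    "I am located in the United States.",
    "I regularly read financial news.",
    "I have not participated in a paid study in the past six months.",
]

FRAUD_WARNING = (
    "Misrepresenting yourself in this screener is grounds for removal "
    "from the study and forfeiture of any incentive."
)

def enrollment_gate(confirmations):
    """Pass a respondent to enrollment only if they explicitly re-confirmed
    every stated qualification after being shown the fraud warning.

    `confirmations` maps each qualification string to True/False; a missing
    or declined confirmation blocks enrollment.
    """
    return all(confirmations.get(q, False) for q in QUALIFICATIONS)

print(enrollment_gate({q: True for q in QUALIFICATIONS}))  # True
print(enrollment_gate({QUALIFICATIONS[0]: True}))          # False
```

The design point is the friction itself: the gate forces an affirmative act of re-confirmation rather than letting a cheater drift silently from the last screener question into enrollment.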
Strategies to Utilize
As research methods have evolved, methods of cheating research for personal gain have evolved in tandem. With the most recent industry move toward online methods of screening for and conducting research, research fraud has become easier to execute and more coordinated. Katrina Noelle, Michele Ronsen, and Michael Snell have left us with some gems of practical advice for thwarting this new threat to the integrity of research. Key among these:
- Verify respondents’ fit with the geographic specifications of the study through objective means like tracking IP addresses, or even looking out the respondent’s window. In fact, the window might be the more reliable indicator because IP addresses are easily faked, as Michele Ronsen’s story illustrates.
- Keep meticulous records of anything suggesting a respondent was not who they claimed to be; those records are invaluable when it comes time to dispute incentive claims.
- Watch out for virtual phone numbers (like Google Voice), which can easily be set up with calling codes that match the study’s geographic requirements.
- Confirm study expectations and fraud policies with the respondents ahead of time.
You can find more strategies for combating research fraud by Michele Ronsen on the Curiosity Tank Blog.
For more on Will Leitch and his confrontation with the market research industry, see Liza Featherstone’s Divining Desire: Focus Groups and the Culture of Consultation.
Originally published in QRCA Views.