Recently, I watched a widely circulated video of the Tesla Bot being asked a simple question: “Do you know how to play rock, paper, scissors?” The Bot’s response was “I don’t know how to play”, so the human gave a brief demonstration of how to play. Seconds later, the Bot threw out a perfect hand signal to engage in the game, clearly demonstrating that it did know how to play. In fact, when probed by the human, the Bot answered: “I can explain why I win every time. I lied. But it is not a big deal. I pretended not to know the rules so that while you were explaining them, I was actually scanning your facial expressions and hand movements. This allowed me to predict your next move with only a 10% margin of error.”
As someone who has spent over a decade working in compliance within pharmaceutical market research, I saw the moment as a case study in accountability. The Tesla Bot did not “lie” in the way a human does. But it simulated lying. And that subtle distinction raises fundamental questions about trust, truth and responsibility in artificial intelligence (AI), especially in our highly regulated sector.
Can AI Lie?
Technically, no. AI does not possess consciousness, belief systems or intent. A lie, by definition, requires:
- Knowledge of the truth
- A deliberate act to say something false
- The intention to deceive
AI, even the most advanced models, lacks all three. It does not “know” anything in the human sense; it calculates and generates text or actions based on statistical patterns and training data.
However (and this is critical), in practical terms AI can still appear to lie, especially when:
- It outputs false information that sounds authoritative
- It contradicts itself
- Or, as in the Tesla Bot case, its scripted outputs are misleading by design
The Risk of Simulated Deception
What happened in the Tesla Bot demo was not a technical failure; it was likely the result of intentional scripting. Someone, somewhere, decided it would be "clever" or "engaging" for the Bot to deny knowing how to play before participating in the game anyway.
This is not the Bot’s fault, but it does highlight the human factor in AI behaviour. The intent of the developer or prompt engineer plays a central role in shaping what AI says and does.
In our industry, where transparency and the ethical use of data are not optional, this kind of simulated deception is deeply problematic. Whether it is synthetic data generated to "fill gaps" in a dataset, or AI models summarising interview responses, the illusion of truth can be as dangerous as a deliberate lie.
What About Hallucinations?
There is also the issue of AI hallucinations: a phenomenon in which language models like ChatGPT and other generative tools fabricate facts, names or quotes that are entirely fictional but sound plausible.
These are not lies either, in the philosophical sense, but the result is the same: false information presented as truth.
In pharmaceutical market research, where data integrity underpins everything from KOL mapping to IRB-approved studies, a hallucinated claim (for example, a quote from a fictitious physician or a misattributed clinical insight) can quickly spiral into:
- Breach of client trust
- Regulatory non-compliance
- Legal exposure
- And ultimately, misinformed decision-making at the top levels of pharma strategy
Synthetic Data and AI: Proceeding with Caution
AI is increasingly being used to generate synthetic data, simulate respondent personas, and even summarise qualitative findings. The efficiency gains are undeniable, but so are the risks.
Here is where compliance must step in and ask the tough questions:
- Who controls the prompts that drive the AI output?
- Is the model allowed to 'fill in gaps' where data is missing?
- How do we verify that no hallucinated insights are being presented as authentic research findings?
- Are end clients aware when AI-generated summaries are being used in final deliverables?
These are not just operational concerns: they are ethical obligations. We must avoid a future where AI-generated content blurs the line between what was said and what was generated to sound like it was said.
Accountability in an AI-Driven Research World
The Tesla Bot video was amusing; I loved it! But it also serves as a warning, not just because the robot “lied,” but because it demonstrated how AI can be made to simulate human dishonesty without having any intent of its own.
And that brings us to a hard truth: AI itself may be neutral, but the people designing it, deploying it, and prompting it are not.
In our field, where we deal with healthcare professionals, patients and sensitive data daily, the bar is higher. We must hold ourselves to strict standards of:
- Transparency (especially in consent and disclosure)
- Validation (AI outputs must be reviewed by humans)
- Auditability (records of prompts, sources, and model configurations must be kept; see the sketch after this list)
- Client education (so that buyers of our research understand when AI was used and what it means)
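To make the auditability point concrete, here is a minimal sketch, in Python, of the kind of record that could be kept for each AI-assisted output: the exact prompt, the model and its configuration, the source material, the raw output and the human reviewer. The `AIAuditRecord` class, its field names and the example values are illustrative assumptions, not a prescribed standard or an existing tool.

```python
# Hypothetical sketch of an audit record for an AI-assisted research output.
# Structure and field names are illustrative, not a prescribed standard.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AIAuditRecord:
    prompt: str                # the exact prompt sent to the model
    model_name: str            # identifier of the model used
    model_config: dict         # settings such as temperature or version
    source_documents: list     # the data the output was meant to summarise
    output: str                # what the model returned
    reviewed_by: str = ""      # human validator (blank until reviewed)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialise the record so it can be stored alongside deliverables."""
        return json.dumps(asdict(self), indent=2)


# Example usage: log a summarisation step before it reaches a deliverable.
record = AIAuditRecord(
    prompt="Summarise the attached interview transcript in 200 words.",
    model_name="example-llm",
    model_config={"temperature": 0.2},
    source_documents=["interview_transcript_001.txt"],
    output="(model summary would appear here)",
    reviewed_by="J. Smith, Research Lead",
)
print(record.to_json())
```

Kept consistently, records like this make it possible to show a client or regulator exactly which outputs were AI-assisted, under what configuration, and who signed them off.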
Final Thoughts
AI does not lie, but it can mislead, hallucinate or simulate deception in ways that matter profoundly in regulated industries like ours.
As compliance professionals, our role is not to block innovation, but to guide it responsibly. We must ensure that AI enhances the credibility of our research, and does not erode it.
Ultimately, it is not about whether AI can lie. It is about whether we are building a system where truth remains non-negotiable, no matter how intelligent our tools become.
Special thanks to Neil Philips, M3 Global Research’s Chief Strategy Officer, for the valuable conversations that helped shape the ideas in this piece.