Swaroop Panda

Of Course, LLMs Are Not People


Yes, LLMs aren't people. Nobody's disputing that. But it's becoming the laziest criticism in research. Here's the thing - a large number of studies across the social sciences, humanities, business, HCI, and customer research have started using LLMs to approximate human opinion. And every time, the same critique rolls in: "But they're not real people." Correct. But that's not actually the point.

When researchers deploy a Likert scale or a sentiment measure, they're not trying to capture something uniquely human. They're trying to capture a specific, bounded response to a specific, bounded question. That's a very different thing. And LLMs - trained on genuinely vast amounts of human-generated text - can potentially do that job. That's the hypothesis worth testing.

The real problem isn't that LLMs aren't people. It's that we keep using scales and instruments designed for people and then act surprised when the fit is awkward. The criticism is misdirected. Instead of asking "are LLMs human enough?" we should be asking "are our measurement tools right for this?" Probably not. We likely need new, purpose-built scales designed specifically for how LLMs process and respond. That's the actual research gap. That's where the work is.