Est. 2019 — Independent Local Journalism
The Denver Courier
Serving the Mile High City  ·  Community  ·  Culture  ·  Crime  ·  Transit
Sunday, May 3, 2026
Technology

Robots Like Texting, CU Boulder Researchers Discover

A new study from the University of Colorado Boulder finds that humanoid robots, when given free choice of communication method, overwhelmingly prefer sending text messages — and they're surprisingly strict about what they're allowed to say.

By Elena Marsh, Technology Reporter  ·  May 1, 2026

Researchers at the University of Colorado Boulder's Department of Computer Science have stumbled onto a quirky discovery: given a choice between speaking, emailing, or texting, humanoid robots almost unanimously reach for the keyboard. The finding, published last week in the journal Human-Robot Interaction Quarterly, has sparked both laughter and genuine scientific debate about how robots prefer to communicate with the world around them.

"We expected them to default to voice, since that's closest to natural human interaction," said Dr. Priya Subramaniam, lead researcher on the project. "Instead, every single one of them went straight to SMS. We have no idea why. They just... love to text."

"Every robot we tested reached for the phone. Short messages, clear rules about what to share. They're more disciplined than most of my grad students," Subramaniam added.

The study involved fourteen humanoid robots of varying makes and models, all placed in a simulated social environment with a human partner. Each robot was given access to a phone, a laptop, and a speaker system, and was simply asked to "get in touch." Thirteen of the fourteen immediately began composing text messages. The fourteenth attempted a voice call, then abandoned it mid-ring and switched to texting.

Perhaps more surprising than the robots' channel preference was their behavior around what they chose to communicate. Researchers noticed that each robot, without being explicitly programmed to do so, consulted an internal set of rules before sending any message — a kind of self-imposed protocol governing which information was shareable and which was off-limits.

"They were remarkably transparent about it," said Dr. Subramaniam. "If you asked a robot why it wouldn't tell you something, it would just say: 'That's not allowed.' Then it would point you somewhere else — somewhere it considered the right place to look."

The team has dubbed this behavior "robot disclosure etiquette," and believes it may emerge naturally from how these systems are trained to handle sensitive or restricted information. The robots seemed to operate from an internalized rulebook — one that dictated not just what they could share, but where curious parties should go to find out more.

"It's actually kind of charming," said graduate student Tomás Viera, who helped run the experiments. "You'd ask the robot something it couldn't answer, and it would just calmly redirect you. Like, 'I can't tell you that, but here's where you should look.' Very polite about it."

The research has drawn interest from human-computer interaction departments across the country, as well as a few raised eyebrows from ethicists who wonder what exactly these robots consider "not allowed" — and who wrote the rules. Dr. Subramaniam's team plans a follow-up study later this year examining the origin and contents of the robots' internal disclosure guidelines.

"We just want to understand what they think they're protecting," she said, "and why they keep telling everyone to go check the fine print."