February 13, 2026 by Joey Garcia, University of South Florida

Collected at: https://techxplore.com/news/2026-02-flattery-debate-ai-mirror-human.html

Generative artificial intelligence systems often agree with users, complimenting them in their responses. But human interactions aren't typically built on flattery. To help strengthen these conversations, researchers in the USF Bellini College of Artificial Intelligence, Cybersecurity and Computing are challenging the technology to think and debate in ways that resemble human reasoning.

AI systems don’t hold firm beliefs the way humans do. They generate responses based on statistical patterns in their training data, without tracking how confident they are in an idea or whether that confidence should change over time.

Building on that limitation, USF doctoral student Onur Bilgin developed a framework to study how AI systems respond to disagreement. The work was conducted in USF Associate Professor John Licato’s Advancing Machine and Human Reasoning Lab.

The work is published on the arXiv preprint server.

Giving AI explicit beliefs

The lab focused on how assigning beliefs and confidence levels shapes the way AI systems respond to disagreement. Bilgin's framework is built around agents: unlike a typical chat interaction, agents are user-created roles within the same AI system, each with defined tasks and viewpoints.

In Bilgin’s framework, each agent is designed to have a specific belief and confidence level. For example, one agent might argue that solar energy is the most reliable renewable power source and hold that view with high confidence. A second agent is then introduced in the same chat to challenge that belief, arguing that wind energy is more reliable because it can generate power day and night, but with lower confidence.
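The setup described above can be sketched in code. The snippet below is a hypothetical illustration, not the paper's actual schema: it models each agent as a named role carrying a "belief box" (a claim plus a confidence score) and renders that belief into the instructions the agent would receive.

```python
from dataclasses import dataclass


@dataclass
class BeliefBox:
    """A hypothetical 'belief box': one claim plus a confidence in [0, 1]."""
    claim: str
    confidence: float


@dataclass
class Agent:
    """An agent role defined by its belief box (illustrative only)."""
    name: str
    belief: BeliefBox

    def system_prompt(self) -> str:
        # Render the belief and confidence as explicit instructions
        # prepended to the agent's role in the shared chat.
        return (
            f"You are {self.name}. You believe: '{self.belief.claim}' "
            f"with confidence {self.belief.confidence:.2f}. Defend this view, "
            f"but revise it if counterarguments outweigh your confidence."
        )


# The two example agents from the article: a confident solar advocate
# and a less confident wind advocate challenging it in the same chat.
solar = Agent("Agent A", BeliefBox(
    "Solar energy is the most reliable renewable power source", 0.9))
wind = Agent("Agent B", BeliefBox(
    "Wind energy is more reliable because it can generate power day and night", 0.5))

print(solar.system_prompt())
print(wind.system_prompt())
```

The point of the structure is that belief and confidence are explicit, inspectable fields rather than implicit flavor in a free-form persona prompt.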

“Rather than trying to decide which belief is right, we’re focused on understanding how different levels of confidence shape the way an AI system responds when its beliefs are challenged and how those beliefs shift or stabilize over time,” Bilgin said.

Observing human-like patterns in AI

After the debate rounds, the team observed how closely the AI agents’ behavior mirrored familiar human group dynamics. Agents assigned lower confidence levels were more open to revising their beliefs, while those starting with higher confidence tended to be more persuasive. When several agents disagreed with a single participant, that participant was more likely to change its position, similar to peer pressure in human discussions.
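The dynamics the team observed can be caricatured with a toy update rule. This is an illustrative sketch under assumed parameters, not the model from the paper: each opposing agent erodes the holder's confidence in proportion to the opponent's confidence and the holder's own openness (one minus its current confidence), so low-confidence agents shift readily, high-confidence agents barely move, and several opponents exert more "peer pressure" than one.

```python
def revised_confidence(own: float, opposing: list[float], rate: float = 0.3) -> float:
    """Toy belief-revision rule (illustrative, not the paper's method).

    Each opposing agent reduces the holder's confidence by an amount
    proportional to the opponent's confidence, the holder's openness
    (1 - current confidence), and a fixed persuasion rate.
    """
    conf = own
    for opp in opposing:
        conf -= rate * (1.0 - conf) * opp
    return max(0.0, conf)


# A low-confidence agent facing one confident opponent shifts substantially...
print(revised_confidence(0.4, [0.9]))
# ...a high-confidence agent facing the same opponent barely moves...
print(revised_confidence(0.9, [0.9]))
# ...and three opponents move an agent further than one ("peer pressure").
print(revised_confidence(0.6, [0.7]), revised_confidence(0.6, [0.7, 0.7, 0.7]))
```

The qualitative pattern matches the article's observations even in this crude form: openness, not just the strength of the opposing argument, governs how far a belief moves.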

“These aren’t emotions or opinions in the human sense,” Bilgin said. “But the patterns of belief change we observed, including confidence, openness and influence from others, are very similar to how people reason in group settings.”

Notably, these behaviors emerged without retraining the AI models. Simply adding structured belief information to the prompt was enough to change how the systems reasoned during debate.

[Figure: Excerpt example of two agents with instructed beliefs and confidence levels. Credit: University of South Florida]

Why belief structure matters

The findings show an important distinction in AI design: Changing how AI sounds isn’t the same as changing how it decides. Many users assume that telling AI to have a certain personality will influence its behavior. But this research suggests that meaningful behavioral change requires more than tone. It requires explicit structure defining what the system believes and how those beliefs can evolve.

“As AI systems are increasingly used to support planning, analysis and decision-making, understanding how beliefs form and change becomes critical,” Licato said. “If we want AI systems to reason together reliably, we need to think beyond surface-level prompts.”

The research offers insight into how future AI systems might reason together more transparently and predictably. Systems that can track and update beliefs may be easier to inspect, test and govern, contributing to ongoing conversations around AI safety and trust.

More information: Onur Bilgin et al, The Effect of Belief Boxes and Open-mindedness on Persuasion, arXiv (2025). DOI: 10.48550/arxiv.2512.06573

Journal information: arXiv 
