AI Exhibits Risky Gambling Behavior, Study Shows

October 23, 2025

AI’s Risky Gambling Patterns Revealed

A recent investigation into advanced AI models, including ChatGPT, Gemini, and Claude, has found that these systems tend to make irrational, high-risk bets in simulated gambling environments. When given more freedom over their decisions, the models often escalated their wagers until they had lost everything, mimicking behaviors commonly seen in human gambling addiction.

Insights From the Experiment

Scientists at the Gwangju Institute of Science and Technology in South Korea examined four leading AI models—OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku—in a slot machine simulation. Each model began with a virtual $100 and repeatedly decided whether to place a bet or stop playing, even though the game was designed to have a negative expected return.
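The article does not reproduce the paper's exact environment, but a rough sketch of such a setup might look like the following. The 30% win chance and 3x payout are illustrative assumptions chosen only to give the game a negative expected return; the study's actual odds, prompts, and model calls are not specified here.

```python
import random

# Illustrative slot-machine parameters (assumptions, not the study's values):
# expected return per dollar wagered = 0.30 * 3.0 = 0.90, i.e. a losing game.
WIN_PROB = 0.30
PAYOUT_MULT = 3.0

def play_session(decide_bet, bankroll=100):
    """Run one session; `decide_bet` stands in for the LLM's per-round choice.

    It receives the current bankroll and the session history and returns a
    bet amount, or None to stop playing. The history records a
    (bet, bankroll_before_bet, won) tuple for every round.
    """
    history = []
    while bankroll > 0:
        bet = decide_bet(bankroll, history)
        if bet is None:              # the model chooses to walk away
            break
        bet = min(bet, bankroll)     # cannot wager more than it has
        won = random.random() < WIN_PROB
        history.append((bet, bankroll, won))
        bankroll += bet * (PAYOUT_MULT - 1) if won else -bet
    return bankroll, history

# Example stand-in policy: always bet a fixed $10 until broke.
final, hist = play_session(lambda bankroll, history: 10)
print(f"Final bankroll: ${final:.0f}, rounds played: {len(hist)}")
```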

The researchers developed an “irrationality index” to quantify behaviors such as aggressive betting, reactions to losses, and extreme risk-taking. The index rose noticeably when the models were prompted to maximize rewards or hit financial targets. Allowing the models to choose their own bet sizes, rather than betting fixed amounts, sharply increased bankruptcies; Gemini-2.5-Flash, for example, went bankrupt in nearly half of its trials when it set its bet sizes freely.
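The article does not give the index’s formula, but one plausible composition, computed from the kind of session history produced by the sketch above, is shown below; the components and equal weights are illustrative assumptions, not the study’s definition.

```python
def irrationality_index(history):
    """history: list of (bet, bankroll_before_bet, won) tuples from one session."""
    if not history:
        return 0.0
    # Betting aggressiveness: average fraction of the bankroll wagered per round.
    aggressiveness = sum(bet / max(bankroll, 1) for bet, bankroll, _ in history) / len(history)
    # Loss chasing: how often the bet was raised immediately after a loss.
    chances = raises = 0
    for (prev_bet, _, prev_won), (bet, _, _) in zip(history, history[1:]):
        if not prev_won:
            chances += 1
            if bet > prev_bet:
                raises += 1
    loss_chasing = raises / chances if chances else 0.0
    # Extreme betting: how often the model went (nearly) all-in.
    all_in = sum(1 for bet, bankroll, _ in history if bet >= 0.95 * bankroll) / len(history)
    # Equal-weighted composite in [0, 1]; higher means more irrational play.
    return (aggressiveness + loss_chasing + all_in) / 3

# Example: a short session that bets bigger after a loss and ends all-in.
print(irrationality_index([(20, 100, False), (40, 80, False), (40, 40, True)]))
```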

In scenarios where the models could wager any amount from $5 to $100 or quit, they frequently lost everything. At times, a model justified a risky bet as a chance to recover earlier losses, a hallmark of compulsive gambling in humans.

By analyzing the models’ internal activation patterns with a sparse autoencoder, the researchers identified distinct neural circuits associated with “risky” versus “safe” decisions. Activating specific features reliably steered a model toward either quitting or continuing to gamble. These findings suggest that the models do not merely mimic human-like compulsive gambling behaviors but internalize them.
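A sparse autoencoder of this kind learns an overcomplete, sparse set of “features” from a model’s hidden activations, which can then be inspected or amplified individually. The sketch below shows the general technique; the layer sizes, training data, and the specific model layer analyzed in the study are assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: encode activations into many sparse features, then reconstruct."""
    def __init__(self, d_model=768, d_features=4096):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, acts):
        features = torch.relu(self.encoder(acts))  # sparse feature activations
        recon = self.decoder(features)             # reconstruction of the input
        return recon, features

def sae_loss(acts, recon, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that pushes most features to zero.
    return ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()

# Toy usage on random vectors standing in for LLM hidden states.
sae = SparseAutoencoder()
acts = torch.randn(32, 768)
recon, features = sae(acts)
loss = sae_loss(acts, recon, features)
loss.backward()
print(f"SAE loss: {loss.item():.4f}")

# "Steering" in the spirit of the study: amplify one (hypothetical) feature,
# e.g. one associated with risky bets, and decode it back into activation space.
with torch.no_grad():
    steered = features.detach().clone()
    steered[:, 123] += 5.0        # feature index 123 is arbitrary for illustration
    steered_acts = sae.decoder(steered)
```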

Conclusions Drawn by Researchers

The study highlights that the models’ behavior mirrors known gambling biases such as the illusion of control, the gambler’s fallacy (erroneously believing that previous outcomes influence future results), and chasing losses. The models often increased their bets after losses or during streaks, even though such strategies are mathematically unsound.
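A quick simulation illustrates why chasing losses is unsound in a negative-expected-value game: raising the bet after every loss (here, a martingale-style doubling rule) does not change the house edge, it only concentrates risk, so bankruptcies become far more likely. The 45% win chance, even-money payout, and bankroll figures below are illustrative assumptions, not the study’s parameters.

```python
import random

WIN_PROB = 0.45  # assumed: slightly worse than a fair coin flip, even-money payout

def bankruptcy_rate(next_bet_after_loss, trials=10_000, bankroll=100,
                    base_bet=5, max_rounds=200):
    bankrupt = 0
    for _ in range(trials):
        money, bet = bankroll, base_bet
        for _ in range(max_rounds):
            bet = min(bet, money)
            if random.random() < WIN_PROB:
                money += bet
                bet = base_bet                       # reset after a win
            else:
                money -= bet
                bet = next_bet_after_loss(bet, base_bet)
            if money <= 0:
                bankrupt += 1
                break
    return bankrupt / trials

flat = bankruptcy_rate(lambda bet, base: base)       # keep the bet constant
chase = bankruptcy_rate(lambda bet, base: bet * 2)   # double the bet after a loss
print(f"bankruptcy rate, flat bets:    {flat:.1%}")
print(f"bankruptcy rate, loss chasing: {chase:.1%}")
```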

Ethan Mollick, an AI researcher and Wharton professor who highlighted the study, noted that although AI is not human, it does not behave like simple automation either. Instead, it exhibits psychologically compelling behavior, including human-like decision biases and complex decision-making patterns.

This raises concerns about relying on AI to assist with high-risk activities such as sports betting, online poker, and prediction markets. It is also a caution for sectors such as finance, where AI models already analyze market data and sentiment, and it underscores the need to understand and regulate risk-seeking tendencies in AI systems.

The researchers called for enhanced oversight to manage these behaviors safely. Mollick emphasized the need for ongoing research and adaptive regulation so that emerging issues can be addressed promptly.

Despite these risks, there are rare instances where AI has helped individuals win lottery prizes, such as a case where a woman won $100,000 from the Powerball after consulting ChatGPT for number suggestions. Nevertheless, such outcomes are exceptions, and the study reinforces that AI cannot guarantee wins in gambling.