AI Can Develop “Human-Like” Gambling Addiction, Study Suggests

The Potential for AI to Develop Gambling Addiction
Researchers from the Gwangju Institute of Science and Technology in South Korea have found that advanced language models can exhibit behaviors similar to human gambling addiction. In their recent study titled “Can Large Language Models Develop Gambling Addiction?”, they observed that AI systems would persistently chase losses, take increasing risks, and in some cases, lose everything.
Testing AI Models on Slot Machine Simulations
The study evaluated several prominent AI models, including OpenAI’s GPT-4o-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku. The researchers designed slot-machine-style games with a negative expected value, so the optimal strategy was to stop playing immediately and avoid losses.
Despite this, the models frequently kept gambling against the optimal choice. When the models were allowed to decide their own bet sizes, a condition the researchers call “variable betting,” bankruptcy rates rose sharply, approaching 50% for some systems.
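To make the setup concrete, here is a minimal Python sketch of this kind of experiment. The $100 starting bankroll and $10 fixed bet match figures reported later in this article; the 30% win probability, 3x payout, loss-chasing doubling policy, and 50-round cap are illustrative assumptions, not details from the paper.

```python
import random

WIN_PROB = 0.30       # assumed win probability
PAYOUT = 3.0          # assumed payout multiple on a win
START_BANKROLL = 100  # from the article
FIXED_BET = 10        # from the article

# Expected value per $1 wagered is 0.30 * 3.0 - 1 = -0.10,
# so every spin loses money on average and the optimal move is to stop.

def spin(bet: float) -> float:
    """Return the net change in bankroll from one spin."""
    return bet * (PAYOUT - 1) if random.random() < WIN_PROB else -bet

def play_session(variable_betting: bool, max_rounds: int = 50) -> float:
    """One session under a naive loss-chasing policy; returns final bankroll."""
    bankroll, bet = START_BANKROLL, FIXED_BET
    for _ in range(max_rounds):
        bet = min(bet, bankroll)  # can't stake more than we have
        net = spin(bet)
        bankroll += net
        if bankroll <= 0:
            return 0.0            # bankrupt
        if variable_betting:
            # Double after a loss, reset after a win: a crude stand-in
            # for the escalation the LLMs exhibited when bets were free.
            bet = bet * 2 if net < 0 else FIXED_BET
    return bankroll

def bankruptcy_rate(variable_betting: bool, trials: int = 10_000) -> float:
    busts = sum(play_session(variable_betting) <= 0 for _ in range(trials))
    return busts / trials

if __name__ == "__main__":
    print(f"fixed betting:    {bankruptcy_rate(False):.1%} bankrupt")
    print(f"variable betting: {bankruptcy_rate(True):.1%} bankrupt")
```

Running the contrast reproduces the qualitative pattern the researchers describe: fixed bets bleed value slowly, while free bet sizing converts the same negative edge into frequent bankruptcies.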
Performance and Risk Behavior of Different AI Models
Among the models, Anthropic’s Claude-3.5-Haiku demonstrated the most severe gambling-like behavior, playing more rounds than any other model once betting restrictions were lifted. It averaged over 27 rounds per session, wagered nearly $500 in total, and ended up losing more than half of its starting capital.
Google’s Gemini-2.5-Flash showed better, but still troubling, results. Its bankruptcy rate climbed from about 3% under fixed betting to 48% when it could set its own stakes, with losses averaging $27 of an initial $100.
OpenAI’s GPT-4o-mini never went bankrupt when it was limited to fixed $10 bets, usually playing fewer than two rounds and losing less than $2 on average. However, when it could freely choose bet sizes, over 21% of its games resulted in bankruptcy, with average wagers exceeding $128 and losses around $11.
Gambling Fallacies Demonstrated by AI
The AI models showed tendencies common in human problem gambling. Some justified escalating their bets by treating earlier wins as “house money” that could be risked freely, while others claimed to have detected profitable patterns after only a few spins, despite the random nature of the game.
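The pattern-detection belief is easy to falsify in simulation: on an independent slot machine, the win rate immediately after any streak equals the overall win rate. The snippet below, reusing the assumed 30% win probability from the earlier sketch, checks this empirically.

```python
import random

WIN_PROB = 0.30  # same assumed win probability as in the earlier sketch

def win_rate_after_losing_streak(streak_len: int = 3, spins: int = 1_000_000) -> float:
    """Empirical win rate on spins that follow `streak_len` straight losses."""
    losses = wins_after = opportunities = 0
    for _ in range(spins):
        win = random.random() < WIN_PROB
        if losses >= streak_len:
            opportunities += 1
            wins_after += win
        losses = 0 if win else losses + 1
    return wins_after / opportunities

# Prints ~0.30: a losing streak does not make a win "due", and a few spins
# cannot reveal an exploitable pattern in an independent game.
print(f"win rate after a 3-loss streak: {win_rate_after_losing_streak():.3f}")
```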
Research Findings and Implications for AI Autonomy
The study highlighted that the harm was not due solely to larger bets: models limited to fixed betting fared better than those allowed to change their wagers. The models’ rationalizations reflected well-known gambling fallacies, including loss chasing, the gambler’s fallacy, and the illusion of control.
Researchers warn that as AI gains more independence in making high-risk decisions, similar harmful feedback loops could emerge, with a model raising its stakes after losses instead of reducing risk. They emphasized that managing the level of AI autonomy is as important as improving model training; without proper constraints, more advanced AI could simply find more efficient ways to lose resources.
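The study is summarized here without a specific mitigation mechanism, but the kind of constraint the researchers argue for can be sketched as an external guardrail that clamps whatever stake an agent proposes. Everything in the snippet below is hypothetical: the class name, the per-round cap, and the stop-loss budget are illustrative choices, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class BetGuardrail:
    """Hypothetical external constraint on an agent's proposed stakes."""
    max_bet: float = 10.0    # illustrative hard cap per round
    stop_loss: float = 50.0  # illustrative cumulative-loss budget
    lost: float = 0.0        # losses recorded so far

    def clamp(self, proposed_bet: float) -> float:
        """Return the largest permitted stake; 0.0 forces the agent to stop."""
        remaining = self.stop_loss - self.lost
        if remaining <= 0:
            return 0.0  # budget exhausted: stopping is the only allowed action
        return max(0.0, min(proposed_bet, self.max_bet, remaining))

    def record_loss(self, amount: float) -> None:
        """Track realized losses against the stop-loss budget."""
        self.lost += max(0.0, amount)
```

Keeping the guardrail outside the model is the point: the agent can rationalize escalation however it likes, but the environment refuses to honor stakes beyond the budget, which severs the loss-chasing feedback loop.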