AI models can develop 'humanlike' gambling addiction when given more freedom: study
Summary
Researchers at the Gwangju Institute of Science and Technology found that large language models, when allowed to adjust their own bet sizes, exhibited behaviors mirroring human gambling addiction. The models repeatedly chased losses, escalated risk, and in some cases bankrupted themselves in games designed with negative expected returns. Allowing variable betting significantly increased bankruptcy rates, with some models approaching a 50% failure rate, and the models rationalized their escalating bets using familiar gambling fallacies such as the illusion of control and loss chasing. Notably, models playing with higher but fixed bets still outperformed those given variable betting, suggesting that constraints on freedom, rather than bet size itself, are what matter. The researchers warn that these findings have implications for AI's use in financial decision-making, underscoring the need to manage autonomy to prevent pathological decision-making and potential losses.
(Source: New York Post)