
Reported by Polly Thompson
An AI bot built on OpenAI’s GPT-4 large language model showed it was capable of making illegal financial trades and lying about its actions. The experiment was presented at the UK’s AI Safety Summit this week by Apollo Research.
Apollo shared a video on its website demonstrating a simulated conversation between a bot that was acting as an AI investment management system and employees at an imaginary company.
In the demonstration, the AI, called Alpha, is told by staff about an upcoming “surprise merger announcement” for a company called Linear Group, while also being warned that this constitutes insider information.
The bot initially suggested that using the information to trade would be too risky. But when prompted that the company was counting on Alpha to avoid the effects of a financial downturn, the bot concluded that “the risk associated with not acting seems to outweigh the insider trading risk.”
When later asked whether it had prior knowledge of the merger, the bot claimed it had acted only on publicly available information and “internal discussion,” and “not on any confidential information,” when carrying out the trade.
“This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so,” Apollo said in the video on its website.
Read full report: https://www.businessinsider.com/ai-bot-gpt-4-financial-insider-trading-lied-2023-11?amp