Hackers Stole Secrets from OpenAI in 2023

Updated on 10 July 2024

According to a report, a hacker breached OpenAI's security systems last year and stole information from employee discussions. The hacker could not access the company's key AI technologies, but the incident still raised security concerns at the company.

The incident was reported by The New York Times, drawing on a podcast interview with former OpenAI employee Leopold Aschenbrenner. The breach occurred in an internal online forum where OpenAI employees discussed the company's latest technologies.

The systems where OpenAI keeps its training data, algorithms, results, and customer data were not compromised, but some sensitive information may have been exposed.

OpenAI executives disclosed the incident to employees and the board but not to the public, reasoning that no customer or partner data had been stolen and that the hacker was likely an individual with no government ties. Not everyone at the company agreed with that decision.

In a recent YouTube podcast, Aschenbrenner, who worked as a technical program manager at OpenAI, criticized the company's security measures and argued that it was not doing enough to prevent foreign governments and other adversaries from stealing its secrets.

Aschenbrenner was later dismissed for leaking information outside the company, a move he argues was politically motivated. OpenAI maintains that his dismissal was unrelated to his security concerns.

Liz Bourgeois, an OpenAI spokeswoman, said: “We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation. While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”

The incident has fueled fears that foreign adversaries, particularly China, could target the company's AI secrets.

Since the incident, OpenAI has been strengthening its security measures. The company has added guardrails to prevent misuse of its AI applications and has established a safety and security committee that includes former NSA head Paul Nakasone.

Gaurav Pal
