Last year, a hacker accessed OpenAI’s internal messaging systems and stole design details about the company’s AI technologies, according to a New York Times report. The breach involved an online forum where employees discussed the latest developments at OpenAI, the report said, citing two anonymous sources familiar with the incident.
Fortunately, the hacker did not gain access to the core systems where OpenAI, the developer of ChatGPT, builds and houses its AI. OpenAI, backed by Microsoft, has not responded to Reuters’ request for comment on the incident. The company’s executives informed employees during an all-hands meeting in April last year and briefed the board of directors.
However, they chose not to make the breach public since no customer or partner information was compromised. OpenAI’s leadership did not view the incident as a national security threat, attributing the hack to a private individual without foreign government ties. Consequently, federal law enforcement agencies were not notified.
In May, OpenAI revealed it had disrupted five covert influence operations attempting to misuse its AI models for deceptive activities online. This raised further safety concerns about the potential misuse of AI technologies. The Biden administration is preparing to implement measures to safeguard U.S. AI technology from foreign adversaries, including China and Russia.
Preliminary plans aim to establish guardrails around the most advanced AI models like ChatGPT. Amidst rapid innovation and emerging risks, 16 AI-developing companies pledged in May to ensure the safe development of AI technologies. This commitment was made during a global meeting as regulators strive to keep pace with technological advancements.
Content Courtesy – Business Today