This week, Jan Leike, co-head of OpenAI's "superalignment" team (which oversees the company's safety issues), resigned. In a thread on X (formerly Twitter), the safety leader explained why he left OpenAI: he had disagreed with the company's leadership about its "core priorities" for "quite some time," so long that it reached a "breaking point."
The next day, OpenAI's CEO Sam Altman and president and co-founder Greg Brockman responded to Leike's claims that the company isn't focusing on safety.
Among other points, Leike had said that OpenAI's "safety culture and processes have taken a backseat to shiny products" in recent years, and that his team struggled to obtain the resources to get their safety work done.
"We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence]," Leike wrote. "We must prioritize preparing for them as best we can."
Altman first responded in a repost of Leike on Friday, stating that Leike is right that OpenAI has "a lot more to do" and it's "committed to doing it." He promised a longer post was coming.
On Saturday, Brockman posted a shared response from both himself and Altman on X:
After expressing gratitude for Leike's work, Brockman and Altman said they've received questions following the resignation. They shared three points, the first being that OpenAI has raised awareness about AGI "so that the world can better prepare for it."
"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," they wrote.
The second point is that they're building foundations for the safe deployment of these technologies, citing the work employees did to "bring GPT-4 to the world in a safe way" as an example. The two claim that since then — OpenAI released GPT-4 in March 2023 — the company has "continuously improved model behavior and abuse monitoring in response to lessons learned from deployment."
The third point? "The future is going to be harder than the past," they wrote. OpenAI needs to keep elevating its safety work as it releases new models, Brockman and Altman explained, and they cited the company's Preparedness Framework as a way to help do that. According to its page on OpenAI's site, this framework anticipates "catastrophic risks" that could arise and seeks to mitigate them.
Brockman and Altman then go on to discuss the future, where OpenAI's models are more integrated into the world and more people interact with them. They see this as a beneficial thing, and believe it's possible to do this safely — "but it's going to take an enormous amount of foundational work." Because of this, the company may delay release timelines so models "reach [its] safety bar."
"We know we can't imagine every possible future scenario," they said. "So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities."
The leaders said OpenAI will keep researching and working with governments and stakeholders on safety.
"There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward," they concluded. "We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions."
Leike's resignation and words are compounded by the fact that OpenAI's chief scientist Ilya Sutskever resigned this week as well. "#WhatDidIlyaSee" became a trending topic on X, signaling the speculation over what top leaders at OpenAI are privy to. Judging by the negative reaction to today's statement from Brockman and Altman, it didn't dispel any of that speculation.
As of now, the company is charging ahead with its next release: GPT-4o, a model that features a voice assistant.