This article, "The Gentle Singularity" by Sam Altman, explores the development of artificial intelligence and digital superintelligence and its impact on human society. Although we have not yet reached full superintelligence, remarkable progress has already been made in many areas: systems such as GPT-4 are smarter than people in many ways and can significantly amplify human productivity. Altman argues that AI will achieve breakthroughs in more domains within the next few years; for example, systems capable of producing novel insights may arrive in 2026, and robots able to perform tasks in the real world may arrive in 2027.
The Gentle Singularity
We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
Robots are not yet walking the streets, nor are most of us talking to AI all day. People still die of disease, we still can’t easily go to space, and there is a lot about the universe we don’t understand.
And yet, we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them. The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.
AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present. Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have.
In some big sense, ChatGPT is already more powerful than any human who has ever lived. Hundreds of millions of people rely on it every day and for increasingly important tasks; a small new capability can create a hugely positive impact; a small misalignment multiplied by hundreds of millions of people can cause a great deal of negative impact.
2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.
A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools. Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.
In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.
But in still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out.
In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.
Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.
We already hear from scientists that they are two or three times more productive than they were before AI. Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research. We may be able to discover new computing substrates, better algorithms, and who knows what else. If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.
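To put rough numbers on that claim, here is a minimal toy model (illustrative assumptions and invented parameters, not the essay's math): if AI multiplies research throughput by a factor s, a decade's worth of insight arrives in 10/s calendar years, and if completed research in turn raises s, the timeline compresses further.

```python
# Toy model (illustrative assumptions only): research progress is measured in
# "decade-equivalents" of insight. AI supplies a throughput multiplier
# `speedup`; the hypothetical `gain` term lets completed research raise the
# speedup itself, a crude stand-in for AI accelerating AI research.

def years_to_accumulate(decades_target: float, speedup: float, gain: float) -> float:
    """Calendar years needed to accumulate `decades_target` decade-equivalents."""
    years, done, dt = 0.0, 0.0, 0.01   # simulate in small calendar-time steps
    while done < decades_target:
        rate = speedup / 10.0           # decade-equivalents completed per year
        done += rate * dt
        speedup += gain * rate * dt     # compounding: progress raises the speedup
        years += dt
    return years

# 3x-productive researchers, no compounding: ~3.3 years per decade-equivalent.
print(f"{years_to_accumulate(1.0, speedup=3.0, gain=0.0):.2f} years")  # ~3.33
# With a modest compounding term, the same decade-equivalent arrives sooner.
print(f"{years_to_accumulate(1.0, speedup=3.0, gain=2.0):.2f} years")  # ~2.56
```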
From here on, the tools we have already built will help us find further scientific insights and aid us in creating better AI systems. Of course this isn’t the same thing as an AI system completely autonomously updating its own code, but nevertheless this is a larval version of recursive self-improvement.
There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off.
If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc., then the rate of progress will obviously be quite different.
As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)
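Those comparisons are easy to sanity-check. A minimal sketch follows; the appliance wattages (a ~1 kW oven element, a 10 W high-efficiency bulb) are assumptions chosen to match the phrasing, since the essay gives only the per-query totals.

```python
# Sanity check of the per-query energy and water comparisons. Wattages are
# assumed; the per-query totals come from the essay.

QUERY_WH = 0.34        # average ChatGPT query, per the essay (watt-hours)
OVEN_W = 1_000.0       # assumed oven element draw (~1 kW)
BULB_W = 10.0          # assumed high-efficiency LED bulb

oven_seconds = QUERY_WH / OVEN_W * 3600   # Wh -> seconds at OVEN_W
bulb_minutes = QUERY_WH / BULB_W * 60     # Wh -> minutes at BULB_W
print(f"oven: {oven_seconds:.2f} s")      # ~1.22 s ("a little over one second")
print(f"bulb: {bulb_minutes:.1f} min")    # ~2.0 min ("a couple of minutes")

QUERY_GALLONS = 0.000085                  # water per query, per the essay
TSP_PER_GALLON = 768                      # 1 US gallon = 768 teaspoons
teaspoons = QUERY_GALLONS * TSP_PER_GALLON
print(f"water: 1/{1 / teaspoons:.0f} teaspoon")  # ~1/15 of a teaspoon
```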
The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big.
If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines.
A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries. I hope we will look at the jobs a thousand years in the future and think they are very fake jobs, and I have no doubt they will feel incredibly important and satisfying to the people doing them.
The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year. Many people will choose to live their lives in much the same way, but at least some people will probably decide to “plug in”.
Looking forward, this sounds hard to wrap our heads around. But probably living through it will feel impressive but manageable. From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it’s one smooth curve. (Think back to 2020, and what it would have sounded like to have something close to AGI by 2025, versus what the last 5 years have actually been like.)
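"Vertical looking forward and flat going backwards" is a property of any exponential, and two lines of arithmetic make it concrete (a mathematical illustration with an arbitrary growth rate, not a forecast): from every vantage point on f(t) = e^{kt}, the next fixed window multiplies your position by the same factor, while the same window behind you shrinks to a sliver of the current value.

```python
import math

# For f(t) = exp(k*t), the view from any vantage point t is identical:
# the future looks vertical and the past looks flat, at every single point.
k, window = 0.5, 10.0  # arbitrary growth rate and look-ahead/behind window
for t in (0.0, 20.0, 40.0):
    ahead = math.exp(k * window)    # f(t+window)/f(t): ~148x, independent of t
    behind = math.exp(-k * window)  # f(t-window)/f(t): ~0.7%, independent of t
    print(f"t={t:>4.0f}: next {window:.0f} units multiply by {ahead:.0f}x; "
          f"the last {window:.0f} sit at {behind:.1%} of today")
```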
There are serious challenges to confront along with the huge upsides. We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications. The best path forward might be something like:
- Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term (social media feeds are an example of misaligned AI; the algorithms that power those are incredible at getting you to keep scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preference; a toy sketch of this failure mode follows this list).
- Then focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.
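The feed example in the first point can be made concrete with a toy ranker (a deliberately simplified construction with invented numbers): optimizing predicted clicks and optimizing the user's stated long-term preference select different items.

```python
# Toy feed ranker (invented numbers, deliberately simplified): ranking purely
# by predicted short-term engagement picks a different item than ranking by
# the user's stated long-term preference.

items = [
    # (name, predicted click probability, long-term value to the user)
    ("rage-bait", 0.9, -1.0),
    ("friend update", 0.5, +0.5),
    ("long-form essay", 0.2, +1.0),
]

engagement_pick = max(items, key=lambda it: it[1])
aligned_pick = max(items, key=lambda it: it[2])
print("engagement-optimized feed shows:", engagement_pick[0])  # rage-bait
print("preference-aligned feed shows:  ", aligned_pick[0])     # long-form essay
```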
We (the whole industry, not just OpenAI) are building a brain for the world. It will be extremely personalized and easy for everyone to use; we will be limited by good ideas. For a long time, technical people in the startup industry have made fun of “the idea guys”; people who had an idea and were looking for a team to build it. It now looks to me like they are about to have their day in the sun.
OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast. We feel extraordinarily grateful to get to do what we do.
Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.
May we scale smoothly, exponentially and uneventfully through superintelligence.