The Bitter Religion: AI's Holy War Over the Scaling Law

The artificial intelligence community is embroiled in a debate about its future, and about whether scale alone is enough to create its god.
Authored by: Mario Gabriele
Compiled by Block unicorn
The Holy War of Artificial Intelligence
I would rather live my life as if there is a God, and die to find out there isn’t, than live as if there is no God, and die to find out there is. - Blaise Pascal
Religion is an interesting thing. Maybe because it is completely unprovable in any direction, or maybe it’s just like my favorite saying: ‘You can’t use facts to fight feelings.’
The hallmark of religious belief is that, as faith ascends, it grows at such incredible speed that the existence of God soon seems beyond doubt. How can one doubt a divine existence when more and more people around you believe in it? When the world realigns itself around a doctrine, where is there still room for heresy? When temples and cathedrals, laws and norms are all arranged according to a new and unshakable gospel, where is there still space for opposition?
When the Abrahamic religions first emerged and spread across the continents, or when Buddhism spread from India across Asia, the tremendous momentum of faith created a self-reinforcing cycle. As more people converted and built elaborate theological systems and rituals around these faiths, it became ever harder to question their basic premises. In a sea of credulity, it is not easy to be a heretic. Grand churches, intricate scriptures, and prosperous monasteries all serve as physical evidence of the sacred.
But the history of religion also tells us how easily such structures can collapse. As Christianity spread to the Scandinavian Peninsula, the ancient Norse faith collapsed in just a few generations. The religious system of ancient Egypt lasted for thousands of years, only to disappear when new and more enduring beliefs emerged and larger power structures appeared. Even within the same religion, we have seen dramatic divisions— the Reformation tore apart Western Christianity, while the Great Schism led to the division of the Eastern and Western churches. These divisions often start from seemingly insignificant doctrinal differences and gradually evolve into completely different systems of belief.
Scripture
God is a metaphor for a mystery that transcends all intellectual categories. It’s as simple as that. - Joseph Campbell
Simply put, believing in God is religion. Perhaps creating God is no different.
Since the field's birth, optimistic artificial intelligence researchers have imagined their work as theogony - the creation of God. The explosive progress of large language models (LLMs) over the past few years has only strengthened the believers' conviction that we are walking a divine path.
It also vindicates a blog post written in 2019. Though little known outside the field until recently, Richard Sutton's 'The Bitter Lesson' has become an increasingly important text within the community, evolving from esoteric knowledge into the foundation of a new, all-encompassing religion.
In 1,113 words (every religion needs its sacred numbers), Sutton summarized a technological observation: 'The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.' Progress in AI models has been driven by the exponential growth of computing resources, riding the great wave of Moore's Law. Sutton also noted that much of the work in AI research has gone into squeezing out performance through specialized techniques - adding human knowledge or narrow tooling. These optimizations may help in the short term, but in Sutton's view they are ultimately a waste of time and resources, like fiddling with the fins on your surfboard or trying a new wax as a monster wave rolls in.
This is the basis of what we call 'The Bitter Religion.' It has a single precept, commonly referred to in the community as the 'scaling law': exponentially growing computation drives performance; everything else is folly.
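For concreteness, the scaling law the faithful invoke is usually an empirical power law relating loss to parameters and data (and therefore to compute). The form below is a sketch in the spirit of the published neural scaling-law papers; the symbols are standard, but the constants are placeholders rather than any lab's fitted values:

```latex
% Illustrative neural scaling law (power-law form; constants are placeholders)
%   L                   : cross-entropy loss
%   N                   : number of model parameters
%   D                   : number of training tokens
%   E                   : irreducible loss of the data distribution
%   A, B, \alpha, \beta : empirically fitted constants
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
% Read: loss falls predictably as parameters and data grow,
% which is exactly the precept "more compute drives performance."
```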
The Bitter Religion has spread from large language models (LLMs) to world models, and is now moving rapidly through the unconverted temples of biology, chemistry, and embodied intelligence (robotics and autonomous vehicles).
However, as Sutton's doctrine has spread, its definition has begun to shift. This is the hallmark of every living, vigorous religion - debate, expansion, commentary. The 'scaling law' no longer refers only to scaling compute (the ark is more than just a boat); it now covers a range of methods for improving transformers and compute efficiency, with a few tricks thrown in.
The canon now encompasses attempts to optimize every part of the AI stack, from techniques applied to the core models themselves (such as model merging, mixture-of-experts (MoE), and knowledge distillation) to generating synthetic data to feed these ever-hungry gods, with plenty of experimentation in between.
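To make one of these techniques concrete, the simplest form of model merging is plain parameter averaging across checkpoints that share an architecture. The sketch below is illustrative only, assumes PyTorch-style state dicts, and is not any lab's production recipe; the checkpoint paths in the usage comment are hypothetical:

```python
# Minimal sketch of model merging via uniform weight averaging ("model soup" style).
# Assumes every checkpoint shares the same architecture and parameter names.
import torch

def merge_state_dicts(state_dicts, weights=None):
    """Average a list of state dicts; `weights` are optional mixing coefficients."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        # Weighted sum of the same tensor across checkpoints.
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged

# Hypothetical usage:
# sds = [torch.load(p, map_location="cpu") for p in ["ckpt_a.pt", "ckpt_b.pt"]]
# model.load_state_dict(merge_state_dicts(sds))
```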
Warring Factions
A divisive question with a religious undertone has recently arisen in the artificial intelligence community: is 'The Bitter Religion' still correct?
The conflict was sparked this week by a new paper from Harvard, Stanford, and MIT titled 'Scaling Laws for Precision,' which examines the limits of the efficiency gains from quantization, a family of techniques that improve the performance of AI models and have greatly benefited the open-source ecosystem. Tim Dettmers, a research scientist at the Allen Institute for AI, outlined its significance in a post, calling it 'the most important paper in a long time.' It extends an increasingly heated conversation from recent weeks and reveals a noteworthy trend: the consolidation of the field into two faiths.
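For readers outside the field, quantization simply means storing and computing with lower-precision numbers to save memory and compute. The toy sketch below shows symmetric 8-bit weight quantization; it is purely illustrative and unrelated to the paper's actual experiments:

```python
# Toy symmetric per-tensor int8 quantization: map float weights into [-127, 127].
import numpy as np

def quantize_int8(w: np.ndarray):
    """Return the int8 representation of `w` plus the scale needed to recover it."""
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
# The reconstruction is close but lossy; that loss is the efficiency trade-off at stake.
print("max abs error:", float(np.max(np.abs(w - dequantize(q, s)))))
```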
OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei belong to the same sect. Both confidently state that we will reach artificial general intelligence (AGI) within roughly 2-3 years. Altman and Amodei are arguably the two figures most reliant on the sanctity of 'The Bitter Religion.' All of their incentives point toward overpromising and generating maximum hype to accumulate capital in a game utterly dominated by economies of scale. If the scaling law is not the 'alpha and omega,' the beginning and the end, then what do you need $22 billion for?
Ilya Sutskever, former Chief Scientist of OpenAI, adheres to a different set of principles. He, along with other researchers (including, according to recent leaks, many inside OpenAI), believes progress is approaching a limit. This camp holds that new science and new research will be needed to sustain progress and bring AGI into the real world.
Sutskever's camp reasonably points out that continuing to scale along Altman's expansionist lines is economically infeasible. As AI researcher Noam Brown asks, 'After all, do we really want to train models that cost tens of billions or even trillions of dollars?' And that does not even count the additional tens of billions of dollars of inference spending if computational scaling shifts from training to inference.
But the true believers know their opponents' arguments well. The missionary at your door has a ready answer for your Epicurean trilemma. To Brown's and Sutskever's objections, the faithful point to the promise of scaling 'test-time compute.' Unlike what came before, test-time compute does not rely on bigger training runs to improve the model; it allocates more resources at execution. When an AI model needs to answer your question or generate a piece of code or text, it can be given more time and more compute. It is the equivalent of shifting your effort from studying for the math exam to persuading the teacher to give you an extra hour and let you use a calculator. For many in the ecosystem, this is the new frontier of 'The Bitter Religion,' as teams shift from orthodox pre-training toward post-training and inference.
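A minimal sketch of what 'spending more at inference' can look like in practice is best-of-N sampling: query the same model several times and let a scorer pick the strongest answer. The `generate` and `score` functions below are hypothetical placeholders standing in for a model call and a verifier, not any particular lab's API:

```python
# Best-of-N sampling: trade extra inference-time compute for (hopefully) better answers.
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for a sampled language-model completion."""
    return f"candidate answer {random.random():.3f} for: {prompt}"

def score(prompt: str, answer: str) -> float:
    """Hypothetical stand-in for a verifier or reward model."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # More samples means more compute spent at test time, not at training time.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("Prove that the sum of two even numbers is even.", n=16))
```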
It is easy to poke holes in other belief systems and criticize other doctrines without exposing your own position. So what do I believe? First, I believe this generation of models will deliver very high returns on investment over time. As people learn to work around the limitations and build on the existing APIs, we will see genuinely novel product experiences emerge and succeed. We will move past the imitative and incremental stage of AI products. Rather than 'artificial general intelligence' (AGI), a definition whose framing is flawed, we should think in terms of 'minimum viable intelligence' that can be tailored to different products and use cases.
As for reaching artificial superintelligence (ASI), more structure is needed. Clearer definitions and distinctions would help us weigh the economic value and the economic cost each might bring. AGI, for example, may deliver economic value to some set of users (a merely local belief system), whereas ASI might exhibit unstoppable compounding effects that change the world, our belief systems, and our social structures. I do not believe ASI can be reached simply by scaling transformers; but, as some would say, that is just my atheism talking.
Lost Faith
The AI community will not resolve this holy war any time soon; there are no facts to bring to a battle of feelings. Instead, we should turn to what it would mean for the field to lose faith in the scaling law. That loss of faith could trigger a chain reaction that reaches beyond large language models (LLMs) into every industry and market.
It is worth noting that in most areas of AI/ML we have not yet exhausted the scaling laws; more miracles are likely still to come. But if skepticism creeps in, it will become harder for investors and builders to maintain the same high confidence in the end-state performance of 'early-curve' categories like biotech and robotics. In other words, if large language models begin to slow down and stray from the chosen path, the belief systems of many founders and investors in adjacent fields will crumble along with them.
Whether this is fair is another question.
One view holds that 'general intelligence' naturally demands the greatest scale, so the 'quality' of specialized models should show up at smaller scales, making them less likely to hit a bottleneck before they deliver real value. If a domain-specific model ingests only a fraction of the data, and therefore needs only a fraction of the compute, to become viable, shouldn't it have plenty of headroom? That sounds intuitively reasonable, yet we keep finding that it is not quite how things work: adding related, or even seemingly unrelated, data often improves performance on apparently unrelated tasks. Including programming data, for example, seems to improve more general reasoning abilities.
In the long run, the debate over specialized models may be moot for those whose end goal is ASI (artificial superintelligence), since that goal is likely an entity capable of self-replication and self-improvement, with unbounded creativity across every domain. Holden Karnofsky, former OpenAI board member and founder of Open Philanthropy, calls this creature 'PASTA' (Process for Automating Scientific and Technological Advancement). Sam Altman's original plan for profitability seemed to rest on a similar principle: 'Build AGI, then ask it how to generate a return.' This is eschatological AI, the ultimate endgame.
The success of large AI labs like OpenAI and Anthropic has inspired capital markets to back similar 'OpenAI for X' labs whose long-term goal is to build AGI within a specific vertical or domain. Unbundling that logic points to a paradigm shift away from OpenAI imitators and toward product-centric companies - a possibility I laid out at the 2023 Compound Annual Meeting.
Unlike the eschatological labs, these companies must demonstrate step-by-step progress. They will be companies built around engineering problems of scale, with the ultimate goal of shipping products, rather than scientific organizations doing applied research.
In science, if you know what you are doing, you should not be doing it. In engineering, if you do not know what you are doing, you should not be doing it. - Richard Hamming
Believers are unlikely to abandon their sacred convictions in the short term. As noted earlier, religions on the rise codify a scripture for living and worship and a canon of revealed methods. They build tangible monuments and infrastructure, consolidate power and wisdom, and demonstrate that they 'know what they are doing.'
In a recent interview, Sam Altman said the following about AGI (emphasis ours):
This is the first time I feel like we really know what to do. There is still a lot of work to be done between now and building an AGI. We know there are some known unknowns, but I think we basically know what to do. It will take some time; it will be hard, but it is also very exciting.
Judgment
The scaling skeptics' challenge to 'The Bitter Religion' opens one of the most profound discussions of recent years, one each of us has engaged with in some form: What would happen if we invented God? How quickly would that God appear? What would happen if AGI (artificial general intelligence) truly and irreversibly took off?
As with all unknown and complex topics, we each file away our own reaction: some despair at their impending irrelevance; most expect some mix of destruction and prosperity; and the rest expect humanity to do what it does best, keep finding problems to solve and solving the problems we create for ourselves, arriving at pure prosperity.
Anyone with a significant stake wants to know what the world will look like for them if the scaling law holds and AGI arrives within a few years. How will you serve this new god, and how will this new god serve you?
But what do we do if the gospel of stagnation drives out the optimists? What if we begin to suspect that even God may be in decline? In a previous article, 'Robot FOMO, Scaling Laws, and Technology Forecasting,' I wrote:
Sometimes I wonder, if the scaling laws do not hold, whether the effect would resemble what missed revenue, slowing growth, and rising interest rates did to many areas of technology. And sometimes I wonder, if the scaling laws hold completely, whether the outcome would resemble the commoditization curves of pioneers in other fields and how much of the value they actually captured.
The advantage of capitalism is that we will spend a lot of money to find answers no matter what.
For founders and investors, the question becomes: what happens next? In each vertical, the candidates who could become great product builders are gradually becoming known. There will be more of them in every industry, but the story is already under way. Where will the new opportunities come from?
If scaling stalls, I expect a wave of shutdowns and mergers. The companies that remain will focus increasingly on engineering, and we should anticipate this evolution by tracking where talent moves. We have already seen signs of OpenAI heading this way as it increasingly productizes. That shift will open space for the next generation of startups to leapfrog incumbents by betting on novel applied research and science rather than engineering, in an attempt to forge new paths.
The Lesson of Religion
My view on technology is that anything that appears to compound in an obvious way rarely compounds for long, and that businesses which seem to have obvious compounding advantages strangely tend to grow at far lower speed and scale than expected.
Early signs of religious schism tend to follow predictable patterns, and they can serve as a framework for tracking how 'The Bitter Religion' evolves.
It usually begins with competing interpretations, whether for commercial or ideological reasons. In early Christianity, differing views on the divinity of Christ and the nature of the Trinity produced schisms and radically different readings of scripture. Beyond the divisions in AI already mentioned, other cracks are appearing: some researchers are rejecting the core orthodoxy of the transformer and turning to other architectures such as state space models, Mamba, RWKV, and liquid models. These are only soft signals for now, but they point to heretical thinking and a willingness to rethink the field from first principles.
Over time, the prophets' impatient pronouncements also breed distrust. When religious leaders' prophecies fail to materialize, or the promised divine intervention never arrives, the seeds of doubt are sown.
The Millerites prophesied the return of Christ in 1844; when He did not appear on schedule, the movement fell apart. In the tech world we tend to quietly bury failed prophecies and let our prophets keep painting optimistic, long-horizon visions of the future despite repeatedly missed deadlines (hi, Elon). But without continued improvement in the underlying models to back it, faith in the scaling law could face a similar collapse.
A corrupt, bloated, or unstable religion is vulnerable to apostates. The Protestant Reformation made headway not only because of Luther's theology, but because it arrived during a period of decline and turmoil in the Catholic Church. When mainstream institutions begin to crack, long-suppressed 'heretical' ideas suddenly find fertile ground.
In artificial intelligence, we might watch for smaller models, or alternative methods, that achieve comparable results with far less compute or data, such as the work coming out of various Chinese corporate labs and open-source groups (such as Nous Research). Those who break through the limits of biological intelligence and overcome barriers long thought insurmountable may also create a new narrative.
The most direct and timely way to spot the beginnings of change is to track the movement of practitioners. Before any formal schism, religious scholars and clergy typically hold heretical views in private while remaining outwardly compliant. Today's equivalent may be AI researchers who nominally follow the scaling law while quietly pursuing very different approaches, waiting for the right moment to challenge the consensus, or leaving their labs in search of more open theoretical horizons.
The tricky thing about orthodoxies, religious or technological, is that they are usually partly right, just not as universally right as their most faithful adherents believe. Just as religions weave fundamental human truths into their metaphysical frameworks, the scaling laws clearly describe something real about how neural networks learn. The question is whether that reality is as complete and immutable as the current fervor suggests, and whether the religious institutions (the AI labs) are flexible and strategic enough to carry the faithful forward while building the printing presses (chat interfaces and APIs) that spread the word.
Endgame
“Religion is seen as real by the common people, as false by the wise, and as useful by the rulers.” - Lucius Annaeus Seneca
A perhaps jaded view of religious institutions is that once they reach a certain scale, they become driven by self-preservation, like most human organizations fighting to survive the competition, and in the process they lose sight of truth and greatness (motives which are not mutually exclusive).
I once wrote about how capital markets become narrative-driven echo chambers, and how incentive structures tend to perpetuate those narratives. The consensus around the scaling law has an eerily similar feel: a deeply entrenched belief system that is mathematically elegant and extremely useful for coordinating large-scale capital deployment. Like many religious frameworks, it may be more valuable as a coordination mechanism than as a fundamental truth.