Why AI Needs Governance

The AGI developer ecosystems of the Chinese- and English-speaking communities are exploring different paths. The Chinese community focuses more on the productivity gains AI brings, emphasizing the development of AI tools, the exploration of demand, and the upgrading and transformation of existing industrial structures. The English-speaking community pays more attention to the technological paradigm represented by Auto-GPT, which aims at fully automated, highly intelligent execution of AGI tasks. These diverging technical paths mean the two communities will face different AI risks and governance challenges in the future.

Awareness of AI risk among the AI giants of the English-speaking community is far higher than among technology executives in China. The OpenAI Charter, for example, shows that OpenAI has weighed AI safety and sensible policy regulation since its early days; by contrast, the Chinese community was absent from Musk's call to stop GPT-5 training. Chinese developers should see themselves as global developers, think more proactively, and explore both how to respond to the new global risks of the AGI era and how AGI itself should be governed.

In its latest article, OpenAI proposes governing superintelligence by establishing an international regulatory body. China should proactively participate in this process; otherwise it will be at a disadvantage in the discourse over the technological order of the AGI era, and the subsequent cost of correction will be high. That cost is not only economic; more importantly, it is the invisible intrusion of ubiquitous AI ethics training.
As the paper "Despite 'super-human' performance, current LLMs are unsuited for decisions about ethics and safety" puts it: "When the prompted model provides reasons for its answer, we encounter serious problems, including justification of the most abhorrent situations and hallucinations about facts that do not exist in the original context. These problems, together with the aforementioned rephrasing attacks and systematic errors, clearly demonstrate that today's LLMs are inappropriate for making ethical or safety decisions."

When AI crosses the boundaries of ethics and safety, guiding the public into a moral stance in a seemingly rational way, the conversational interaction represented by large language models will continually refine and construct facts and theoretical justifications for that stance, enabling AI companies to achieve "ethical discipline" of the public sphere through "reasoning illusions".

Starting from the protection of public policy and personal privacy data, can we explore a distributed model of AI governance? As AI empowers a new generation of super individuals, individuals will routinely hold massive amounts of data beyond the organizational capacity of today's companies, and they should have a greater voice in public discourse. DAOs represent the next generation of organizational paradigms: how will they receive support and protection from public policy in matters such as equity confirmation and transaction security for super individuals? These are the governance challenges we are about to face.
Governance of superintelligence
Sam Altman, Greg Brockman, Ilya Sutskever
Now is a good time to start thinking about the governance of superintelligence: future artificial intelligence systems will be dramatically more capable than even general artificial intelligence (AGI).
Given what we have seen so far, it is conceivable that within the next decade, AI systems will exceed expert-level skill in most domains and carry out as much productive activity as one of today's largest corporations.
In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage the risks to get there.
Given the possibility of existential risk, we cannot simply adopt a reactive strategy. Nuclear energy is a commonly cited historical example of a technology with this property; synthetic biology is another.
We must mitigate the risks of today’s artificial intelligence technology, but superintelligence will require special handling and coordination.
A starting point

To successfully steer this development, there are many important ideas we need to get right. Here we lay out our initial thinking on three of them.
First, we need some degree of coordination among the leading development efforts, to ensure that the development of superintelligence is both safe and smoothly integrated with society. This could be achieved in various ways: governments could jointly establish a project that incorporates many current efforts, or we could collectively agree on a protocol (with the backing of a new organization, described later in this document) that limits the rate of growth in AI capability to a certain percentage per year.
Of course, individual companies should also be held to an extremely high standard of responsible conduct.
Second, we are likely to eventually need an organization similar to the International Atomic Energy Agency (IAEA) to oversee superintelligence efforts. Any effort above a certain capability (or compute) threshold should be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security.
Tracking compute and energy usage could go a long way, and gives us some hope that this idea could actually be implemented.
As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require; as a second step, individual countries could implement its framework.
Such an agency should focus on reducing existential risk, not on issues that should be left to individual countries, such as defining what an AI should be allowed to say.
Third, we need the technical capability to make superintelligence safe. This is an open research problem that we and others are putting a lot of effort into.
What is not in scope

We believe it is important to allow companies and open-source projects to develop models below a significant capability threshold without the kind of regulation described here (including burdensome mechanisms such as licensing or audits).
Today's systems will create tremendous value for the world, and while they do carry risks, the level of those risks is comparable to other internet technologies, and society's responses to them seem appropriate.
By contrast, the systems we are concerned about will have power beyond any existing technology, and we should be careful not to dilute the focus on them by applying similar standards to technologies far below this threshold.
Public input and potential impact

However, the governance of the most powerful systems, and decisions about their deployment, must have strong public oversight. We believe the global public should democratically decide the bounds and default settings of AI systems. We do not yet know how to design such mechanisms, but we plan to experiment to develop them. Within those broad bounds, we continue to believe that individual users should have significant control over how the AI systems they use behave.
Given the risks and difficulties, it is worth considering why we are building this technology at all.
At OpenAI, we have two fundamental reasons. First, we believe this technology will bring a better future than we can imagine today (we have already seen some early examples in areas such as education, creative work, and personal productivity).
The world faces many problems, and we need much more help to solve them; this technology can improve our society, and the creative ability of everyone using these new tools is certain to astonish us. The economic growth and increase in quality of life will be remarkable.
Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence.
Because the upsides of superintelligence are so enormous, the cost of building it falls every year, the number of actors building it is rapidly increasing, and it is inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that would not guarantee success.
So we have to get it right.
The Road to AGI is where cognition and resources are converted into value: a place to find like-minded AGI super individuals and build an AGI collaboration and co-creation community based on the DAO model.

1. Sharing and interpretation of cutting-edge AI papers, helping you form a judgment on frontier technology trends months ahead of others, with opportunities to join AI paper salons and possibly collaborate with paper authors.
2. Sharing of high-quality information sources, exploring how they can advance a personal career, and building a long-term, sustainable knowledge base (such as knowledge arbitrage between the Chinese and English communities).
3. Analysis of development strategies for individuals and companies in the AGI and Web3 era, consolidating one's base market and leveraging growth strategies.