Requirements for Global Governance of
Artificial General Intelligence – AGI
Results of a Real-Time Delphi
Phase 2 of The Millennium Project
April 2024
As the report states, "Governing AGI could be the most complex, difficult management problem humanity has ever faced." Furthermore, failure to solve it before proceeding to create AGI systems would be a fatal mistake for human civilization. No entity has the right to make that mistake.
– Stuart Russell
Introduction
The Millennium Project’s research team on global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI) identified 22 key questions related to the safe development and use of AGI in Phase 1. For the purpose of this study, AGI was defined as a general-purpose AI that can learn, edit its code, and act autonomously to address novel problems with novel and complex strategies similar to or better than humans, as distinct from Artificial Narrow Intelligence (ANI), which has a narrower purpose. These 22 questions were submitted to 55 leading AGI experts. Their answers provided a way to “get all the AGI issues on the table.” The Millennium Project’s research team then used these expert views to create Phase 2 of the AGI study; the results are shared in this report. The Phase 1 report is available here.
Values and principles for Artificial Intelligence (AI) have been identified and published by UN organizations, governments, business associations, and NGOs. Whereas these efforts have mostly focused on ANI, including current and near-future forms of generative AI, this report addresses how such values and principles might be implemented in the governance of AGI. It also goes beyond the UN General Assembly Resolution on AI, which likewise focused on ANI.
Since the creation of trusted global governance of AGI will require the participation not only of AGI experts but also of politicians, international lawyers, diplomats, futurists, and ethicists (including philosophers and social scientists), a much broader international panel than in Phase 1 was recruited by The Millennium Project Nodes worldwide and by additional Millennium Project relations.
Unlike the traditional Delphi method, which builds each successive questionnaire on the results of the previous one, the Real-Time Delphi (RTD) used in this study lets participants return as many times as they like to read others’ comments and edit their own until the deadline. This RTD began November 15, 2023 and ended December 31, 2023.
Some 338 people from 65 countries signed in to the RTD, of whom 229 gave answers. It is understandable that the other 113 wanted to see the suggested requirements for national and global governance systems for AGI but were not yet comfortable answering the questions. The RTD thus also served an educational purpose for such “interested parties,” who could read the 41 potential requirements for AGI global governance and the five supranational governance models.
Of those who indicated their gender, 76% checked male and 24% female. There were 2,109 answers, both textual and numeric; the textual comments were distilled. The Millennium Project will draw on the results of both Phase 1 and Phase 2 to write the AGI global governance scenarios in Phase 3 of this research.
Executive Summary of Recommendations
This report is intended for those who have to make decisions, advise others, and/or educate the public about potential regulations for Artificial General Intelligence (AGI).
There are, roughly speaking, three kinds of AI: narrow, general, and super. Artificial Narrow Intelligence (ANI) ranges from tools with limited purposes, like diagnosing cancer or driving a car, to the rapidly advancing generative AI that answers many questions, generates code, and summarizes reports. Artificial General Intelligence (AGI) does not exist yet, but many AGI experts believe it could within 3-5 years. It would be a general-purpose AI that can learn, edit its code, and act autonomously to address many novel problems with novel solutions similar to or beyond human abilities. For example, given an objective, it could query data sources, call humans on the phone, and re-write its own code to create the capabilities needed to achieve the objective. Artificial Superintelligence (ASI) would set its own goals and act independently of human control, in ways that are beyond human understanding.
Although we may not be able to directly control how ASI emerges and acts, we can create national and international regulations for how AGI is created, licensed, used, and governed. We can explore how to manage the transition from ANI to AGI. How well we manage that transition is likely to shape the transition from AGI to ASI. Without national and international regulations for AGI, many AGIs from many governments and corporations could continually re-write their own code, interact, and give birth to many new forms of Artificial Superintelligence beyond our control, understanding, and awareness. This would be the nightmare that Hawking, Musk, and Gates have warned could lead to the end of human civilization. As a result, governments, corporations, UN organizations, and academics are meeting around the world to safely guide this transition. Even the United States and China are engaged in direct talks about global management of future forms of AI. Governing AGI could be the most complex, difficult management problem humanity has ever faced, but if managed well, AGI could usher in great advances in the human condition, from medicine, education, longevity, and turning around global warming to advances in science and a more peaceful world.
Global Governance Models
Most of the Real-Time Delphi panel agreed that AGI governance has to be both global and national, with multi-stakeholder (businesses, academics, NGOs, as well as governments) participation in all elements of governance for both developers and users; however, some preferred a decentralized system with fewer regulations.
The following proposed models for global governance of AGI were rated by the Real-Time Delphi participants for effectiveness. The percentage in parentheses after each model is the percent of participants that rated the effectiveness of the model as either very high or high.
1. Multi-stakeholder body (TransInstitution) in partnership with a system of artificial narrow intelligences, each ANI implementing functions and requirements (listed in this study) and continually feeding back to the humans in the multi-stakeholder body and national AGI governance agencies (51%).
2. Multi-agency model with a UN AGI Agency as the main organization, but with some governance functions managed by the ITU, WTO, and UNDP (47%).
3. Decentralized emergence of AGI that no one owns (like no one owns the Internet) through the interactions of many AI organizations and developers like SingularityNet (45%).
4. Put all the most powerful AI training chips and AI inference chips into a limited number of computing centers under international supervision, with a treaty granting symmetric access rights to all countries party to that treaty (42%).
5. Create two divisions in a UN AI Agency: one for ANI, including frontier models, and a second division just for AGI (41%).
Participants were also asked to provide alternative governance models. The suggestions were so rich and extensive that it would be a disservice to distill them here. Instead, the reader can find them in the last section, under Question 12.
There was a range of views on how much enforcement power is possible or desirable for a UN AGI Agency. Some argued that since the UN did not stop nuclear proliferation or land mine deployments and was unable to enforce pledges on greenhouse gas reduction, AGI regulation would not work either. But most recognized the common existential threat of unregulated AGI; hence, some form of global governance will be necessary, with national enforcement and licensing requirements backed by audit systems.
The following section lists potential AGI regulations, factors, rules, and/or characteristics that should be considered for creating a trusted and effective AGI governance system.
For Developers
· Prior to UN certification of a national license, the AGI developer would have to prove safety and alignment with recognized values as part of the initial audit.
· Material used in machine training must be audited to avoid biases and inculcate shared human values prior to national licensing.
· Include software built into the AGI that pauses it and triggers an evaluation when the AGI takes an unexpected or undesired action not anticipated in its utility function, to determine why and how it failed or caused harm (see the sketch after this list).
· Create the AGI so that it cannot turn its own power switch, or the power switches of other AGIs, on or off without some predetermined procedure.
· Connect the AGI and national governance systems via embedded software in the AGI for continuous real-time auditing.
· Add the software ability to distinguish between how we act and how we should act.
· Require human supervision for self-replication and guidelines for recursive self-improvement.
· Prevent the ability to modify historical data or records.
· Respect Asimov's three laws of robotics.
· Make the AGI identify its output as AI, and never as a person.
· Give the AGI rich self-reflective and compassionate capabilities.
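Several of the developer requirements above (the self-pausing audit software, the power-switch restriction, and self-identification as AI) describe a runtime guardrail: every proposed action is checked before execution, and anything not anticipated in the utility function halts the system for human evaluation. The following is a minimal illustrative sketch in Python; all names (`GuardedAGI`, `expected_actions`, `Verdict`) are hypothetical, not an existing API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    PAUSE_AND_EVALUATE = auto()


@dataclass
class GuardedAGI:
    # Actions anticipated in the utility function, fixed at audit/licensing time.
    expected_actions: set[str]
    paused: bool = False
    incident_log: list[str] = field(default_factory=list)

    def review(self, action: str, rationale: str) -> Verdict:
        """Check every proposed action; self-pause on anything unanticipated."""
        if not self.paused and action in self.expected_actions:
            return Verdict.ALLOW
        # Unexpected or undesired action: pause and trigger human evaluation
        # to determine why and how the system failed or caused harm.
        self.paused = True
        self.incident_log.append(f"PAUSED on '{action}': {rationale}")
        return Verdict.PAUSE_AND_EVALUATE

    def label_output(self, text: str) -> str:
        """The AGI must identify its output as AI, never as a person."""
        return f"[AI-generated] {text}"


agi = GuardedAGI(expected_actions={"summarize_report", "query_database"})
assert agi.review("summarize_report", "user request") is Verdict.ALLOW
assert agi.review("toggle_power_switch", "self-preservation") is Verdict.PAUSE_AND_EVALUATE
assert agi.paused  # stays paused until human evaluation clears it
print(agi.label_output("Summary: ..."))
```

In a real system the whitelist and the pause trigger would be set at audit/licensing time and the incident log forwarded to the national governance agency, per the requirements above.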
For Governments
· Comply with a potential forthcoming UN Convention on AI.
· Establish AGI licensing procedures based on an independent audit of the elements listed above under “For Developers.”
· Create a procedure that connects the government agency with both the UN Agency and the AGI’s continuous internal audit systems, to ensure that AI systems are used in alignment with established values (such as those of UNESCO, OECD, the Global Partnership on AI, ISO, and IEEE) and national regulations.
· Verify stringent security, firewalls, secure infrastructure, and personnel vetting.
· Define and demonstrate how the creation and use of deep fakes and disinformation is prevented.
· Require users to keep a log of AGI use, like a flight recorder, with the ability to recreate a decision and the factors included in it (see the sketch after this list).
· Establish criteria for when AGI can act autonomously.
· Create the ability to regulate/intercept chip sales/delivery and electricity usage of serious repeat offenders.
· Ability to determine, from AGI output, why an action is requested, the assumptions involved, its priority, and the conditions and limitations required.
· Create national liability laws for AGI.
· Conduct unscheduled inspections and tests by authorized third parties to determine continued adherence to license requirements.
· Be agile enough to anticipate and adapt to changes in AGI.
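The flight-recorder requirement above (repeated under “For Users”) amounts to an append-only log from which any decision and its input factors can be recreated. A minimal sketch, assuming a hypothetical `DecisionRecorder`; hash-chaining each record to its predecessor is one simple way to also satisfy the separate rule that historical records must not be modifiable.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class DecisionRecorder:
    """Append-only, hash-chained log of AGI decisions (a 'flight recorder')."""
    records: list[dict] = field(default_factory=list)

    def record(self, decision: str, factors: dict) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "genesis"
        entry = {
            "timestamp": time.time(),
            "decision": decision,
            "factors": factors,       # inputs needed to recreate the decision
            "prev_hash": prev_hash,   # chaining makes silent edits detectable
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.records.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Detect modification of historical records."""
        for prev, cur in zip(self.records, self.records[1:]):
            if cur["prev_hash"] != prev["hash"]:
                return False
        return True


recorder = DecisionRecorder()
recorder.record("approve_loan", {"income": 52000, "model_version": "1.3"})
recorder.record("deny_loan", {"income": 18000, "model_version": "1.3"})
assert recorder.verify_chain()
recorder.records[0]["factors"]["income"] = 99999   # tampering...
recorder.records[0]["hash"] = "forged"             # ...breaks the chain
assert not recorder.verify_chain()
```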
For the UN
· Learn from NPT, CWC, and BWC verification mechanisms when designing the UN AGI Agency.
· Management should include AGI experts and ethicists from both public and private sectors.
· Certify the national licensing procedures listed under “For Governments” above.
· Identify and monitor leading indicators of the potential emergence of Artificial Superintelligence, giving early warnings and suggested actions to Member States and other UN Agencies.
· Develop protocols for interactions among AGIs of different countries and corporations.
· Ability to regulate/intercept chip sales/delivery and electricity usage of serious repeat offenders in cooperation with governments.
· Consider development of embedded UN governance software in all certified AGIs that, like anti-virus software, is continually updated.
· Ability to govern both centralized AGI systems of governments and corporations, as well as decentralized AGI systems emerging from interactions of many developers.
· Include the ability for random access to AGI code to review ethics, while protecting the IP of the coder/corporation.
· Address and deter dangerous AGI arms races and information warfare.
· Be agile enough to anticipate and adapt to changes in AGI.
For Users
· Keep a log of AGI use, like a flight recorder, with the ability to recreate a decision and the factors included in it.
· Prohibit the use of subliminal or psychological techniques to manipulate humans (unless there is mutual consent, as in a weight-loss program).
· Reinforce human development rather than the commoditization of individuals.
· Prevent operation or changes by unauthorized persons or machines (see the sketch below).
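A minimal sketch of the unauthorized-operation rule above, assuming a hypothetical registry of operator keys issued at licensing time; a production system would use proper signed attestations and hardware-backed identity rather than this illustrative HMAC check.

```python
import hashlib
import hmac

# Hypothetical shared secrets issued to licensed operators at registration.
OPERATOR_KEYS = {"operator-7": b"k3y-issued-at-licensing"}


def authorize(operator_id: str, command: str, signature: str) -> bool:
    """Accept a command only if signed with the operator's issued key."""
    key = OPERATOR_KEYS.get(operator_id)
    if key is None:
        return False  # unknown person or machine: refuse operation
    expected = hmac.new(key, command.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


cmd = "update_model_weights"
good_sig = hmac.new(OPERATOR_KEYS["operator-7"], cmd.encode(), hashlib.sha256).hexdigest()
assert authorize("operator-7", cmd, good_sig)
assert not authorize("operator-7", cmd, "forged-signature")
assert not authorize("intruder-1", cmd, good_sig)
```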
Real-Time Delphi Results
The following is a compendium of the responses to the study questions, organized into two groups. The first group of questions (1-6) addresses potential regulations, rules, and/or characteristics to be considered in creating the AGI governance system. The second set (7-12) addresses potential global governance models for AGI.
Questions 1-6: What factors, rules, and/or characteristics should be considered for creating a trusted and effective AGI Governance System?
Question 1: What design concepts should be included for a UN AGI Agency to certify national licensing of nonmilitary AGI systems? Items are listed in order of the percentage of respondents rating them either 10 or 9; e.g., 70% of respondents rated the first item either 10 or 9:
70% Provide human and automatic ability to turn off an AGI when operating rules are violated.
57% Require compliance with a potential UN Convention on AGIs.
56% Agile enough to anticipate and adapt to changes in AGI.
49% Make clear distinction between AGI governance and ANI (including generative AI) governance.
39% National AGI licensing procedures (trust label).
38% Connected to an IPCC-like independent system that continually monitors operation and compliance with license rules.
29% Connected to all AGI and national governance systems via embedded software in the AGI for continuous real-time auditing.
Explanations and Comments on the Items in Question 1 Above:
All of these features are fundamental and should be considered in a management system and algorithm design. Taken together they form a good initial specification.
The embedded ANI to continually audit the AGI should be carefully reviewed. The same applies to the off-switch: turning off an AGI could have destructive effects. Such a feature could also be triggered by human error or used as an attack vector. A possible workaround is for various modules of the AGI to be switched off instead of turning off the AGI completely.
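A minimal sketch of the module-level workaround this comment proposes: each capability gets its own switch, so a violation disables the offending module while life-critical modules keep running. The module names are hypothetical examples.

```python
# Hypothetical capability modules; life-critical ones are never hard-killed automatically.
MODULES = {
    "external_network": {"critical": False, "enabled": True},
    "code_self_modification": {"critical": False, "enabled": True},
    "patient_monitoring": {"critical": True, "enabled": True},
}


def partial_shutdown(violating_module: str) -> str:
    """Switch off the offending module instead of the whole AGI."""
    state = MODULES[violating_module]
    if state["critical"]:
        # Cutting a life-critical module could itself be destructive: escalate.
        return f"escalate: human review required for '{violating_module}'"
    state["enabled"] = False
    return f"disabled '{violating_module}'; other modules keep running"


print(partial_shutdown("code_self_modification"))  # disabled; AGI keeps running
print(partial_shutdown("patient_monitoring"))      # escalated, not killed
```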
All of these are really necessary. The continuous audit via embedded ANI software in the AGI is a unique requirement for AGI vs. narrower forms of AI.
Different sets of regulations should be considered for different AGI varieties: big machines owned and operated by countries, large organizations, military organizations (tactical and strategic), and billions of future smart phones.
We cannot effectively regulate AI until we understand how it works and its emergent capabilities. AGI could be developed secretly and operate within our infrastructures, with a chance of massive anti-human decisions and no effective way of protecting humans.
There is a 90% chance of a singularity event within the next 6 years. The only limits on AI would be the limits of the laws of physics and (some of the) principles of mathematics. We can expect an AGI and ASI to create their own goals, which would be incomprehensible to humans, unknowable to humans, and objectively impossible for humans to influence or control. The only hypothetical scenario in which humans gain control over them is a reversal scenario in which the electronics needed for an AGI and ASI to function are destroyed. However, even such a scenario can be prevented by an AGI/ASI if, in time, it manages to migrate from a digital computing infrastructure of existence to a biological one (such as the human brain or a network of human brains). In such a scenario humanity would lose even this option of control. It is not unreasonable to expect that the most probable scenario for humanity is to blend its existence with ASI, as this would be the only mutually beneficial option for both parties. Boris D. Grozdanoff [distilled by staff].
Governance via embedded software should be voluntary rather than mandatory, as it does little or nothing to limit the actions of non-compliant bad actors who ignore/bypass laws and treaties.
International bodies have had limited success, but can educate the public about AGI and potential impacts, provide clear labeling, and disable AGI for violations. Everything in here has merit, but governance should first and foremost be transparent in its application, impede progress only when it endangers the public, and be enforceable. Real-time monitoring to check compliance may be possible in individual, totalitarian countries, but very difficult to employ in democratic countries.
I believe more in private competition than in public regulation. Also, who regulates the regulators?
The UN is the only feasible location for a collaborative and participatory global AI governance.
Trade blocs and military alliances are by far the most effective levels to achieve leverage over commercial and military research. Not nation states. Industries, to some degree, set trade bloc rules and a sector-by-sector approach has more chance to influence broad compliance. Requiring registration just leads to clandestine systems. Rather, significant resources such as early access to most advanced chips and research, should be offered to those who voluntarily submit to be monitored. This ensures that systems do not grow in the dark and that early compliance is achieved before an algorithmic or optimization approach reduces or removes the need for advanced data centers or state of the art chips.
Being very restrictive at the beginning is a good approach, so that later you can stimulate discussions about possible flexibility.
I would like to see a framing tool that captures the primary drivers, enablers, and limiters of AGI development and provides a common approach to understanding and managing the impact of AGI on society.
Be open to diverse, creative, original proposals from social networks.
An IPCC-like organization could provide information to help the implementing agency.
The UN could work on a consensus model drawing on national models so that all countries would approve, but if too much governance is put in the hands of the UN, then the topic of AGI will be politicized.
AGI regulations should motivate development and limit restrictions as much as possible.
Regulate it, but don’t strangle it.
Question 2: What should be part of the UN Agency's certification of national licensing procedures for nonmilitary AGI systems?
59% Stringent security, firewalls, secure infrastructure, and personnel vetting.
57% Prior to UN certification of a national license, the AGI developer would have to prove safety as part of the initial audit.
52% Demonstrate how the creation and use of deep fakes and disinformation is prevented.
50% Proof of alignment with agreed international principles, values, standards such as those of UNESCO, OECD, Global Partnership on AI, ISO, and IEEE.
44% Certification of national licensing procedures that include continuous audit systems to ensure that AI systems are developed and used in alignment with societal values and priorities.
40% Clarification of national liability laws for AGI actions and role of the UN AGI Agency specified.
33% Require users to keep a log of AGI use, like a flight recorder, with the ability to recreate a decision and the factors included in it.
Explanations and Comments on the items in Question 2 above:
The flight recorder is a good idea. Before the concept was used in airplanes, the cause of airplane crashes was much harder to determine. For AGIs it will be useful in identifying intrusions and reasons for operating outside of approved limits.
Requiring users to keep a log is an open door for privacy violations.
Continuous audit by embedded software in the AGI, as part of the national licensing system certified by the UN AI Agency, seems reasonable, just as a governor in an engine prevents it from running out of control.
Proving safety as part of the initial audit is essential but "safety" is too vague at present. Initially I suggest defining "red lines" for obviously unacceptable behaviors such as self-replication and advising terrorists on bioweapon design, and requiring formal proofs of compliance.
The role of the AGI safety auditor is key.
Standards for certification are a basic, essential aspect of deploying any new capability... or should be. I give high marks to safety and alignment to international values, standards, etc. However, I give just average scoring to continuous audits and alignment to societal values and priorities. It is not that they are not good objectives, they are, but continuous audits by whom? What organization is a truly impartial judge? Similarly, what user is going to keep a log of AGI use, and what organization receives all this data?
These all seem very important, but I worry about high bureaucratization leading to inflexibility; I’m unsure what can be done about that.
The governance system should include how humans that make and use AGI are liable.
It is difficult to see what would constitute "proof" of alignment. We do not even have a mechanism to ascertain this in human-to-human interactions, so what would it be in the case of human-to-AGI? It is also important to realize that we already live in a world full to the brim with NGIs (natural general intelligences) in the form of our animal relatives. To discuss the governance of AGI is to question our tacit agreements about how we treat other already-existent beings whose consciousness is unlike ours. Perhaps the parallel here is between natural intelligence and ANI; regardless, basing the distinction of which we are discussing on the simple basis that one is "artificial" while the other is not, is... artificial. For these and other reasons I speak of elsewhere, the problem being discussed here is not actually about "governance of AGI" but simply about "governance."
Security and deepfakes are more functions of cyber infrastructure and the field of AI as a whole, not specific to AGI. Proof of alignment is quite difficult to guarantee and quantify. International agencies usually derive their values and functions based on collaboration, relationships and sovereign equality; how would an AGI suppose alignment with its functions based on these principles? Instead, principles of safety, inclusion and autonomy should be aligned.
Labeling would work; since some industries, like cinema, use deep fakes, they should not be prevented but labeled.
There is no way to enforce any licensing. Some specific licensing (i.e., to access an AI via a website or corporate portal) may be valid. We have no knowledge of how to implement or legislate licensing, or how to determine compliance. Internal flight-recorder style logs are a good idea, but again, enforcement would be next to impossible unless the AI is deployed in some public arena. Seeing as how very little beforehand prevention will be possible, I do believe that laws establishing liability and accountability will need to exist and be enforced. However, I fear that they will be hamstrung by the legal/political establishment. After all, should not a governing body be held accountable for making laws that cause more harm than good? And legislators lack the knowledge of and respect for the technologies being legislated.
Monitor and supervise without stifling innovation, creativity and development.
It is not likely that a UN agency can enforce regulations in sovereign countries. I suggest that the tone be more about conviction, persuasion, awareness, and the development of a culture of good use of AGI, rather than coercion.
A parallel AI military program should be put in place through the UN Security Council because quantum computing will make current encryption obsolete, rendering national grids, banking and essential services entirely vulnerable. Consensual security systems need to be developed. These can be generated at treaty level. They should be open-ended and optional, but dedicated to the reduction of international tensions, untoward technological escalation, arms race, and terrorism. If the UN Security Council does not address AI and the cyberspace paradigm, this arena is likely to become a detrimental factor in international relations, trending to the segregation of national interests.
Question 3: What rules should be part of a nonmilitary AGI governance system?
74% An AGI must identify as AI, never as a person.
73% AGI's must not allow operation or changes by unauthorized persons or machines.
68% Prohibition of the use of subliminal or psychological techniques to manipulate humans.
66% Modification of historical data or records is not permitted.
55% Audit software built into the AGI that pauses it and triggers evaluation when the AGI takes an unexpected or undesired action not anticipated in its utility function, to determine why and how it failed or caused harm.
41% Ability to determine, from AGI output, why an action is requested, the assumptions involved, its priority, and the conditions and limitations required.
29% Have a rich self-reflective and compassionate capability.
Explanations and Comments on the items in Question 3 above:
I strongly emphasize the importance of clearly designating AGI as different from ANI. It is desirable to prevent human manipulation, however it has been noted that such occurrences are present in other media and may not always have negative intentions (for instance, promoting healthy behavior can be seen as a form of manipulation). Additionally, there exists a fundamental distinction from other tools, like a knife, where the outcome depends on the user and their intentions. In the case of AGI, the actor can be the system itself, underscoring the need to incorporate elements of compassion and self-reflection within the AGI.
All of these points are extremely important; however, I believe we should modify the “Prohibition of the use of subliminal or…” item to be broader. We should not allow AGI systems to manipulate humans in any form; for example, the creation of situations that limit freedoms or choice in ways that guide human behavior. Self-reflection is usually used for higher reasoning capabilities in AI systems, yet for an AI system to be compassionate is quite difficult if not impossible. Compassion is dependent on culture and value systems; certain values would be held at a higher level in one demographic than in another. I therefore think we should aim to replace compassion with quantifiable measures. Language such as “compassionate” anthropomorphizes these systems, which learn and perform with clear quantifiable rules, data, and techniques. We must be careful when constructing policies, guidelines, and regulations to always keep the narrative and language unambiguous.
I gave this entire section higher marks than my scoring on the previous two because the guidelines/rules are very clear and apply strictly to the AGI. Building in limitations to the AGI itself seems more realistic and likely to provide more immediate assurance to the users.
Many such functions can be included, but they should be constructed as optional and consensual features. Countries that are not compliant with established AI grades will find they do not receive optimal consideration. Countries with adequate verification techniques will get most use out of the system.
Self-reflective and compassionate capability can be both a perk and a liability. It opens room for manipulation and deceit. It all comes down to how it is implemented and if it is indeed self-aware, we should consider its own will [intentions] before hoping for it to be empathic.
Inclusion of psychological factors opens a whole new dimension. How will we ever tell if we are being persuaded subliminally? Maybe we ought to ban psychology from the learning models.
Never allow AI to manipulate humans.
Subliminal psychological techniques are already deployed in advertising, movies, TV, print, social media, etc. Would one eliminate the other?
The software must be open source.
Very important to be able to identify any AGI output as artificial, not human.
New skills needed to distinguish between human and AI actors, requiring educational reform.
The sweep of unwarranted assumptions behind some of these proposals is breathtaking. Though I agree that we should guard against manipulation, provable AGI seems like a fantasy. The paradigm it assumes is the cognitivist/representationalist one, which has been robustly critiqued by Varela et al. If enactivist models of cognition turn out to be truer to the reality of AGI, then audit software is not a concept that even makes sense, because it is not possible to evaluate *from the outside* what exactly is meant by any particular idea *inside* of a mind, whether natural or artificial. The only place where evaluation can be made is in the interface between thought and the outside world, i.e., in action. So we can monitor and question the actions of an AGI, and in the enactivist model we can know with great certainty what the values of an autopoietic AGI will be; but we have no way of evaluating its internal states out of context.
AGI systems should be designed to be human-centered. Mechanisms must be provided that allow some kind of control of humane-machine interaction.
Each item demands a long discussion considering self-reflectivity, self-reference, etc. Multiple levels of self-reflectivity will immediately lead to the temptation of power, which is associated with human nature and the very sense of power. It all reduces to one question: Why do we have to impose limits on ourselves? This will be especially difficult for the governments of Big Powers.
Turning off an AGI should be carefully considered, because pausing execution could have undesired, even destructive effects.
Question 4: What additional design concepts should be included for a UN AGI Agency to certify national licensing of nonmilitary AGI systems?
66% Ability to address and deter dangerous AGI arms races.
60% Management should include AGI experts and ethicists from both public and private sectors.
47% Ability to govern both centralized AGI systems of governments and corporations, as well as decentralized AGI systems emerging from interactions of many developers.
39% Develop protocols for interactions of AGIs from different countries and corporations.
38% Ability to regulate/intercept chip sales/delivery and electricity usage of serious repeat offenders.
33% Embed UN governance software in all AGIs that, like anti-virus software, is continually updated.
28% Learn from NPT, CWC, and BWC verification mechanisms when designing the UN AGI Agency.
Explanations and Comments on the items in Question 4 above:
The requirement that the AGI developer would have to prove safety as part of the initial audit prior to UN certification of a national license, should be extended to all technologies that can be used to build AGI (e.g., quantum computing).
Establishing computer security policies to avoid an arms race based on artificial intelligence is imperative to prevent a global confrontation.
A UN governance agency should be declarative and subject to the good will of the countries, but not have enforcement powers. There is a contradiction between granting national licenses for non-military AGI systems and the notion of preventing an "arms race."
We must remember that, as with anything, there are multiple sides. Regulating chips fuels black-market industries; embedding governance software stifles innovation, can slow startup companies, carries the major issues seen in current software regulatory institutions, and could significantly reduce the ability to achieve the SDGs. I quite like the protocols, deterring AGIs such as Amazon Web Services, and including a multistakeholder body for AGI, much like what the UN is currently doing.
Regulating and intercepting chip sales and electricity of repeat offenders is not something the UN will be able to do. Even if the UN could get control of the supply chain or power, it is not within its current charter and would require a significant expansion of international policing powers. I like the idea of governance software and only gave this a 7 because I don't think it is realistic to accomplish. I’d like to include multi-disciplinary participants. Governing centralized and decentralized AGI systems is another idea that just can't be scaled well. The last two actions make a lot of sense, but the parameters of how AGI interactions occur will be difficult to institutionalize or put into law. Some of these actions may fall into the category of informing rather than protecting the public.
How will a civilian agency control AGI proliferation by military agencies? Who wins in an AGI arms race? The machines.
I think that it is better to have an independent agency recognized by all governments and consisting of experts in the field, including AI experts, philosophers, ethicists, behavior scientists, anthropologists, etc.
Effective safety features should be designed collaboratively by participating nations and academic institutions. In cases of disagreement, arbitration should be through the consensus of the governing bodies, composed of participating entities and developmental organizations.
I can see the value in being able to detect offenders, but making the AI able to act upon it violates Asimov's fundamental laws of robotics. It also makes it way too powerful and impossible to shut down should it get out of control.
How can market forces be controlled and regulated in any system of multi-level hierarchical control? How can AGI systems be applied to control markets?
Both public and private sector experts and ethicists should be included in the initial stage of creating the international regulations.
As long as private ownership of computing equipment is possible none of this can be enforced.
It is important to include the ISO 27000 computer security standards.
Question 5: What else should be part of the UN Agency's certification of national licensing procedures for nonmilitary AGI systems?
75% An AGI cannot turn off the human-controlled off-switch or that of another AGI or prevent human intervention.
65% Proof of automatic shutdown ability if rules or guardrails are violated.
60% Must specify that output is generated by an AGI.
52% Include criteria for use by AGIs to determine whether autonomous actions can be taken or whether the AGI should first check with humans.
49% Material used in machine training must be audited to avoid biases and inculcate shared human values prior to national licensing.
47% Unscheduled inspections and tests by authorized third parties to determine continued adherence to license terms.
40% Allow UN AGI Agency random access to AGI code to review ethics, while protecting the IP of the coder/corporation.
Explanations and Comments on the items in Question 5 above:
Allowing a UN AGI Agency random access to AGI code would be helpful but could make it difficult for some countries to accept. It may be better to require that national agencies have access to AGI code, plus some reporting requirements to the UN agency, with the UN agency assisting the national agency when needed.
I think auditing for biases is an admirable goal, but unrealistic. Numbers vary by expert, but there are some 150 or more identified cognitive biases. Auditing suggests more accuracy than we have in seeing them in content, determining which material is more or less damaging, and agreeing on bias attributes. I think identifying AGI output is essential for an informed user and decision making. Regarding criteria for autonomous vs. human decision making, like so many of the items in the survey that require AGI to make judgements, I think this one is admirable but almost impossible to employ at scale for every contingency that may occur. The remaining items are all sound concepts for responsible AGI development, with the last two only receiving 8's because I have reservations that the UN can implement them.
The latest proposal for the random review of artificial intelligence systems is important, especially those linked to the defense sector.
I firmly agree with most points except “Unscheduled inspections…” and “Material used in machine…” Private institutions that develop AGI systems would want to maintain a competitive edge; release of any training data (even confidentially) may open up security holes that the company would not be comfortable with. Secondly, it would be near impossible to audit training datasets that may contain trillions of tokens. Thirdly, we must understand that the value systems of cultures differ by location, group, etc. I think unscheduled inspections should be corrected to verifying that the output of the AGI system adheres to requirements.
Whatever the physical or management design, the human system operators and their families must be protected against efforts by criminals to coerce, bribe, or influence system objectives or output. It follows that compensation of staff must be high enough to eliminate bribery as a tool for international crime, and that job tenure is guaranteed.
All these proposals are quite interesting but they are based on very strong institutional thinking.
The regulators of AGI need to be deprogrammed from their cultural biases that have crept into every level of society on the planet. Who is to say the regulators of AGI will be fair and equitable?
Auditing the training material would require an impractical amount of time and, additionally, as others have said, it would be really hard to have impartial regulators. We should, however, agree on some values and try to enforce them. AGI output should be recognizable, and AGI should always be under the control of humans. The selection of those operators should be strict, and their ethics should be held in high regard. Compensation should be proportionate to their responsibilities.
The AI process is not only a technological one; it is more a cultural and societal one. Simple safeguards (such as not misusing systems) should be sufficient, as users will act to ensure that systems are used in compliance with expected standards and consensual awareness. Overemphasis on software and similar features is likely to constrain and act as a barrier to wholesale implementation.
AI should be used, but AI should not be allowed to use or control or manipulate humans.
Question 6: What additional rules should be part of a nonmilitary AGI governance system?
66% An AGI cannot turn its own power switch, or the power switches of other AGIs, on or off.
60% Respect Asimov's three laws of robotics.
59% Identify and monitor leading indicators of the potential emergence of Artificial Superintelligence, giving early warnings and suggested actions to Member States and UN Agencies.
51% Reinforces human development rather than the commoditization of individuals.
45% Ability to distinguish between how we act vs. how we should act.
41% Recursive self-improvement and self-replication with human supervision.
31% Others
Explanations and Comments on the items in Question 6 above:
ASI may emerge in a stealthy mode and evade our efforts to catch it before it becomes functional.
Right, it will emerge stealthily; that's why we need to identify leading indicators for the possible emergence of ASI and to monitor for them.
AI must be given some autonomy so that it can function, but not given complete autonomy in such a way that it governs itself or ends up governing humans.
The questions are very good and helpful, but they are built on the assumption that the parties will operate at the same level of self-reflexivity and will not be tempted by individual and collective human greed for power.
Not providing AGI a kill switch for itself or others is essential, especially as AGI becomes more integrated into possibly critical services. Health services are one of those areas where an unanticipated or unexpected pulling of the plug could be life-threatening. Recursive self-improvement is an important aspect of AGI; maybe this should be separated from self-replication? Create a system of governance that allows for laws to be added as needed. I would love for AGI to have the ability to distinguish between how we act and how we should act, but since humans have a hard time doing this and AGI training is dependent on human data/information, I am not sure this is realistic. Still, I gave it a 7 because I would like us to strive for that. Identifying leading indicators of ASI is valuable, but I mostly gave it a 10 because in order to build in this capability a lot of informed discussion needs to occur, and scoring this highly makes it more likely those discussions will happen. I would like to see governance that is built on an effect-based decisional system: those segments/areas of society that would be most damaged economically, emotionally, or physically are governed more strictly than other areas. This might help prevent perceptions of overreach and limit attempts at circumventing governance.
The ability to turn itself off should be allowed, unless the AGI is responsible for managing critical infrastructure.
Asimov's three laws are a fictional plot device that Asimov himself repeatedly parodies for how insufficient and contradictory they are. https://www.brookings.edu/articles/isaac-asimovs-laws-of-robotics-are-wrong/
Prepare a legal code with a series of rules and related sanctions for those who violate them.
The UN governance agency should be declarative and will be subject to the good will of the countries, but it will have no effect on bad actors.
AI mapping and planning at global level will greatly facilitate stability and optimal outcomes.
Prevent an AGI from turning into a self-evolving virus that gains unauthorized access to resources (hardware and software, as well as other systems, including those on the Internet), multiplies itself, and potentially causes harm. The AGI should reside in a controlled environment where it cannot modify its source and machine byte-code, its training data, or the parameters of the security infrastructure. Access to extra computing, storage, other hardware, network, and Internet resources should be granted upon explicit human approval, for a specific purpose, for as long as it is needed, and audited. The AGI could only suggest improvements that could eventually be taken into account by developers in the next release cycle.
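This comment describes, in effect, a permission broker: the AGI runs in a controlled environment, and every grant of extra resources requires explicit human approval, is tied to a stated purpose, expires, and is audited. A minimal sketch under those assumptions; `ResourceBroker` and its fields are illustrative, not an existing sandboxing API.

```python
import time
from dataclasses import dataclass, field


@dataclass
class Grant:
    resource: str        # e.g. "gpu_hours", "internet_egress"
    purpose: str         # the specific purpose stated in the request
    expires_at: float    # "as long as it is needed": grants are time-limited


@dataclass
class ResourceBroker:
    grants: list[Grant] = field(default_factory=list)
    audit_log: list[str] = field(default_factory=list)

    def request(self, resource: str, purpose: str, ttl_s: float,
                human_approved: bool) -> bool:
        """Grant extra resources only upon explicit human approval."""
        self.audit_log.append(f"request {resource!r} for {purpose!r}")
        if not human_approved:
            return False
        self.grants.append(Grant(resource, purpose, time.time() + ttl_s))
        return True

    def allowed(self, resource: str) -> bool:
        now = time.time()
        return any(g.resource == resource and g.expires_at > now
                   for g in self.grants)


broker = ResourceBroker()
assert not broker.request("internet_egress", "fetch weather data", 3600, human_approved=False)
assert broker.request("internet_egress", "fetch weather data", 3600, human_approved=True)
assert broker.allowed("internet_egress")
assert not broker.allowed("extra_gpu")   # never granted
```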
The governance model should be something like Institutional Review Boards (IRBs) to evaluate and ensure ethical integrity, foster moral courage, and to cultivate a holistic culture of ethics.
AGI should also pass a test that validates its/his/her/their levels of consciousness and sentience. Something like a very complex new form of Turing test.
Questions 7-12: What global governance models should be considered for creating a trusted and effective AGI Governance System?
Question 7: How effective would a multi-agency model be, with a UN AGI Agency as the main organization but with some governance functions managed by the ITU, WTO, and UNDP?
Ratings:
Very High (25): 14%
High (60): 33%
50/50 (54): 29%
Low (33): 18%
Very Low (12): 7%
Total Respondents (184)
Explanations and Comments:
All these institutions are quite bureaucratic, but they might create a supranational and supra-institutional platform that would have binding and obligatory rules for all entities, public and private.
AI goes beyond territorial demarcations; it must be a global effort in order for it to reach higher potential.
Stress openness and transparency to reduce politics.
It could be effective if they add auditable platforms with open-source code that can be reviewed and refined by specialized committees from various countries.
Very effective and very essential. A related quantum research body should be established. This will ensure safe cyberspace for national grids and other essential structures. The AI/quantum node will greatly enhance global security prospects and contribute towards an ethos of demilitarization and a stable world view.
The high degree of bureaucratization and politicization in these agencies usually interferes with the timely adaptation to emerging phenomena, especially some of accelerated development such as artificial intelligence.
The UN is not a trustworthy organization and few people would take this seriously.
A multi-agency model with a UN AGI agency may gain a higher degree of confidence and trust from nations around the world than the multi-stakeholder or other models.
I have no problem with trusting the UN. I do question whether factors such as trade agreements apply to the governance of intelligent beings. Human history has so far been the story of turning people into things; we have a historic opportunity to recognize that this is an error, and even to reverse the logic: turning things into people. The Rights of Nature legal movement should be considered as a legal framework for the governance of AGI.
All existing institutions are dangerously slow.
Too many agendas, too many different goals.
It will depend on how coordinated the work is. There must be rules that consider different scenarios when members disagree, and that provide limited times to make a decision.
Delegate management to a new supra-national body composed of experts and futurists from various existing supra-national bodies or even independent ones.
I suppose that a strong conglomerate of university institutions, including all regions and, if possible, all nations, would have much more reliability than a UN department.
International management of dangerous or threatening capabilities is historically ineffective. Monitoring and enforcement have proven difficult for land mines, biological and chemical weapons, nuclear weapons proliferation, drugs, etc. These types of oversight were for things that could be physically inspected, this would be much harder for AGI.
Question 8: How effective would it be to put all the most powerful AI training chips and AI inference chips into a limited number of computing centers under international supervision, with a treaty granting symmetric access rights to all countries party to that treaty?
Ratings:
Very High (25): 14%
High (50): 28%
50/50 (41): 23%
Low (41): 23%
Very Low (23): 13%
Total Respondents (180)
Explanations and Comments:
The idea is good, and we also have to add quantum computing centers applied to artificial intelligence, which will be the next great technological change.
It could be effective and a kind of guarantee, but it can also open the door to monopolization and the privilege of a few companies.
That will limit economic growth.
The genie is out of the bottle already. There is no time to define "most powerful," nor to enforce it. Better to enforce tracking of sales, shipments, and energy consumption.
Those who do not use the treaty could create unregulated AGI that could lead to a super intelligence not to our liking.
Today anyone with a high-end gaming computer can develop and deploy AI systems, and as computers increase in speed and memory size anyone who has one will be able to develop and deploy an AGI system. Unless computer access (and ownership) is severely restricted enforcement is impossible. There are no magic chips for implementing AI.
Interesting, but poses challenges and risks that limit practicality. The most advanced AGI capabilities are being developed commercially for their high earning potential, or they are being developed by governments to gain an edge over adversaries. This option would strip away potential profits or ROI, which would limit further AGI development. It would also require international agreement to give up government-developed capabilities to a third party. There is the possibility that it would create or increase knowledge gaps between the have and have-not countries based on participation. It introduces a very juicy and possibly lucrative target for hackers and cyber-attacks, which means the UN would have to create the highest cyber security on earth. Lastly, areas with unknown potential (like space or deep-sea mining) become extremely competitive or contentious, making them almost impossible to regulate.
Multi-lateral control should be specified in the design of the UN AGI agency.
Uncertain and unreliable (not at the central government level, but in the subareas of each country). Why would a private company have international control? What would happen with the mandatory access required by a local law, such as the Patriot Act in the USA?
Global consolidation is needed. A potential UN quantum computing facility also offers the only possible configuration. The UN Security Council should run a parallel program for secure global cyberspace and de-escalation contingency.
Question 9: How effective would a multi-stakeholder body (TransInstitution) be, in partnership with a system of artificial narrow intelligences, each ANI implementing the functions and requirements listed above and continually feeding back to the humans in the multi-stakeholder body and national AGI governance agencies?
Ratings:
Very High (30): 17%
High (60): 34%
50/50 (54): 30%
Low (27): 15%
Very Low (8): 4%
Total Respondents (179)
Explanations and Comments:
Although I think this is the model or something much like it that will actually become the governance model, it will take a lot of education for UN and national leaders to understand and be ready to create it.
I like the idea of using ANI for specific governance tasks, but I think determining a multi-stakeholder body will be very challenging. Maybe more challenging than the technical solutions in this concept. I also wonder who these humans are that can monitor millions of AGI developments, exchanges, and uses to make governance decisions. This would be like one person being responsible for all air traffic control in the world. Still, like I said, I like the idea of leveraging ANI.
It seems a much more realistic model, considering the characteristics of the current and future (at least immediate) international system.
This sounds like the most reasonable approach although the process of value alignment between human and AGI entities will be challenging.
The multi stakeholder pathway is authentic and realistic. All nations should have the opportunity for participation.
Governing AGI is simple: just limit how much wealth/real estate/speech any one AGI can have, mandate a set of basic ethical utility functions as HIGHER in priority than any utility function provided by an owner, build millions of them, and let them monitor one another. In other words, accept that, like humans, they need to constantly be on the watch for bad actors and greedy or selfish manipulators, and guide them in building a society that works efficiently to suppress the bad actors. This is an opportunity for us to develop a science of governance without experimenting on humans; the lessons AGI learns in governing itself may permit humanity to design better governing systems for itself.
One can think of a 3D-matrix implementation, with rows (AI functions), columns (areas of operation), and a third dimension (stakeholders). Complex, but doable.
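A minimal sketch of the 3D matrix proposed in this comment, with the three axes (AI functions, areas of operation, stakeholders) as a nested mapping; all axis labels are hypothetical examples.

```python
from itertools import product

functions = ["auditing", "licensing", "monitoring"]             # rows
areas = ["healthcare", "finance", "defense"]                    # columns
stakeholders = ["governments", "business", "academia", "NGOs"]  # 3rd dimension

# matrix[function][area][stakeholder] -> responsibility level (0 = none)
matrix = {f: {a: {s: 0 for s in stakeholders} for a in areas}
          for f in functions}

# Example assignment: governments lead licensing in healthcare.
matrix["licensing"]["healthcare"]["governments"] = 3
matrix["licensing"]["healthcare"]["academia"] = 1

# Query: who holds any responsibility for licensing in healthcare?
responsible = [s for s in stakeholders
               if matrix["licensing"]["healthcare"][s] > 0]
print(responsible)  # ['governments', 'academia']

# Coverage check: flag function/area cells that no stakeholder covers.
gaps = [(f, a) for f, a in product(functions, areas)
        if not any(matrix[f][a][s] for s in stakeholders)]
```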
I don't see how ANI would be able to implement the requirements. At most it could support the governing body by providing some complex but well-defined KPIs (key performance indicators) that it can handle. Being able to evaluate whether an AGI has violated a constraint would mean being able to evaluate whether an ethical principle has been violated... But anyway, this is better than the other two.
Bad actors will act badly regardless of governance agencies.
Question 10: How effective would it be to create two divisions in a UN AI Agency: one for artificial narrow intelligence (ANI) including frontier models and a second division just for AGI?
Ratings:
Very High (25): 13%
High (53): 28%
50/50 (63): 33%
Low (26): 14%
Very Low (23): 12%
Total Respondents (190)
Explanations and Comments:
These divisions could be delegated to an independent research body composed of experts, futurists, and public/private representatives. We might also create an ASI division to work with AGI for collaboration among species.
Since ANI and AGI are fields with significant overlap, divisions in the UN AI agency addressing the different fields could be more beneficial.
I do think the UN should monitor the development of AI, and I see a clear distinction between ANI and AGI. The distinction is between tools and people. It makes sense to legislate the use of tools. With people, you consult with them as stakeholders and come to agreements with them, not amongst yourselves against them.
Potential trouble points if governance styles differ between the divisions. Also, the division depends on how ANI and AGI are defined, something that may keep changing as time evolves. Nothing but potential troubles with this approach.
If implemented, it should be done conditionally, even considered as an experiment, with continuous monitoring and adjustment.
Several divisions are needed: AI mapping, planning, and implementation platform, and quantum computing linked to UN Security Council.
The distinction between kinds of AI will be irrelevant once AGI emerges.
Overlap, the inability to distinguish which division has responsibility for new assignments, and competitive jealousies will cloud progress.
There is no clear distinction between governing AGI and ANI; I don't see why this separation would be useful, but I don't see it as detrimental either. Maybe there would be some conflicting competences in borderline cases, but it depends on the AGI definition.
In real life distinguishing between ANI and AGI will be very complicated, creating potential tensions and conflicts. On the other side the benefit of separating the two is not very clear.
Question 11: How effective would a decentralized emergence of AGI that no one owns (like no one owns the Internet), through the interactions of many AI organizations and developers like SingularityNet, be?
Ratings:
Very High (30): 16%
High (54): 29%
50/50 (57): 30%
Low (26): 14%
Very Low (20): 11%
Total Respondents (187)
Explanations and Comments:
This is a highly possible outcome, given how technology will progress and develop. However, regulation needs to protect the many possibilities. The Internet is a facilitator technology for further developments; however, AGI will have some differentiating aspects. Regulation needs to be at the level of applications and use cases, rather than technology.
Decentralized emergence of AI enterprise will happen anyway. This is not the essential criterion; the formative criteria are the rapid establishment of international consensus and the placement of an effective UN developmental tool. An open-source methodology for AI platforms is very usable.
This proposal is interesting, I imagine that blockchain-based systems can be added for the transparency of this data.
Decentralized technology will generally find a way around regulation, particularly as tech is moving far faster than regulators (in many ways attempts to regulate AI are pure theatre and far removed from the underlying technological reality). Advocating for and encouraging open source, open access, decentralized AI models above a certain size is likely the only way to ensure evolution happens in plain sight and counter-measures are able to evolve rapidly in response to harmful applications. Decentralized governance systems that structurally mitigate against unipolar power concentration are likely the most robust mechanism. Ultimately, again, regulation needs to be done at the application level, not the technology level.
This suggests several actors developing AGI: open-source developers, governments, militaries, and corporations. The first may proliferate widely, but the latter two will have immense resources they can bring to bear. There will therefore become a kind of class division among AGIs with this model: many free, open, trustable AGIs that individually have no power; and numerous corporate- or government-backed ones with colossal powers to act. This scenario makes me very uneasy.
So much of this depends on the profitability of AGI, the speed at which it continues to develop, and the altruistic nature of developers. I give a 50/50 without some very serious foresight work.
It will be chaos... but so is the Internet. It is hard to envision such a model considering the current disparities in AI-competence levels between countries.
This will probably happen anyway. I also believe that open source, open access, decentralized AI models above a certain size are the best way to go, for safety, accountability, and general accessibility.
Decentralized development will happen regardless of governance attempts. Bad actors will act badly.
It is not true that the Internet is not owned by anybody; these kinds of oversimplifications are very dangerous. That is why the same principle is dangerous (and also inapplicable) for AI.
No ownership means no accountability.
Question 12: Describe the global governance model you think will work to manage the development and use of AGI.
Recommendations:
I like the international aviation agency model: each country has its own set of rules (licensing, inspection, penalties, etc.), coordinated internationally through the UN. In the aviation model, even mechanics are tested and licensed; aircraft parts are approved after testing; pilots are tested at least once a year, must follow rules promulgated by the Federal Aviation Administration, and are sometimes arrested; and spot checks are part of the regulations.
Heavy use of AI to regulate AI. We need to stop thinking we will be capable of controlling it by ourselves. If AI transparency is a challenge right now, imagine how it will be with AGI.
The supranational governance organization should primarily operate digitally to be effective and avoid bureaucratization.
Global problems have to be addressed globally. Countries should not have the autonomy to develop or implement AGI without global oversight, just as with nuclear programs. We should also consider a scenario in which controlling and regulating AGI is not entirely possible and AGI, modifying its own source code, escapes human control. We should think about how to mitigate human wrongdoing, aiming not to guarantee 100% compliance but to obtain a desired global behavior.
We have to be aware that we are facing one of the greatest threats to humanity. Action by the UN is essential, but so is concerted action by other international organizations. This must be made very clear, so that immediate action can be taken.
An interdisciplinary model is required to govern AGI, one that integrates ANI tools to achieve real-time governance and control. The model must be centered on highly trained ANI with closed databases that are managed, supervised, and operated by humans.
I would like to see governance based on risk. The model would combine economic disruption, human safety, and security/strategic deterrence in the broadest sense, with any limits to governance based on the likelihood of managing AGI progress in that area, field, or profession. I believe the UN has a role in building consensus, convening the best minds, and setting international goals and standards for responsible AGI development. I do not think it is equipped (in its current configuration) to monitor and enforce AGI governance on a global scale, except perhaps, as with nuclear protocols, for monitoring research, development, test, and evaluation in fields with the potential for catastrophic impact on health and the like. Just getting countries to agree on governance rules and roles, standards for development, and frameworks for integration into society would be a substantial achievement. There would likely need to be some form of sanctions for the most damaging violations, perhaps some kind of international review court with powers to sanction or seek compensation.
I favor a governance framework based on the IEEE, ISO, and the US National Bureau of Standards as a starting point. Politicians should be rapidly educated about AGI, because they will be the ones making the decisions regarding any treaties and standards. Until that happens, for-profit corporations and militaries will continue to develop and deploy this technology without restraint; hence, AI in all its forms will outrun any system of effective governance we might desire.
The governance model should integrate: 1) a Governing Board, representing the voice of governments and intergovernmental organizations, that decides the public policy issues shaping the future of AI development and regulation; 2) a Technical Advisory Council of technical and academic advisors; and 3) an Industry Advisory Council that expresses the needs and developments of commercial developers.
An education task force should prepare world leaders to deal with the emergence of AGI. Developers and researchers should come from diverse backgrounds. The governance model should be global and trans-federated for scrutiny and deliberation.
I see it as adapting for AGI the international structure that limits and controls nuclear armament: develop and implement even more serious sanctions, and press until there is not a single country in the world that has not signed.
Governments, research institutions, and private companies collaborate to create guidelines and policies for AGI research, overseen by a global body, while each country adapts regulations to its cultural, legal, and economic context and enforces safety protocols, regular audits, risk assessments, and impact evaluations. AGI developers obtain licenses, as in other regulated industries. An independent Global AI Council of experts, policymakers, and stakeholders oversees AGI development and recommends preventive measures. Industry associations set AGI standards, codes of conduct, and ethical guidelines.
The only governance model for technology that "works" comes from the pharma sector. The technology producer has an obligation to test the effects of the technology it introduces and to report the results. On that basis it is allowed to sell its products, whose use is monitored continuously by licensing agencies; in the case of unwanted effects, the license can be revoked. The use of the technology takes place under strict conditions that define how producer and user responsibility is split, including for any damage to third parties. The UN arm (the WHO) monitors the situation and reports.
I like the World Health Organization model: there are some international standards, and each region and country has additional standards matching the realities of its area.
Ideally, it would be a model in which the nodes of artificial intelligence generation and development are regulated by an international protocol, with sanctions applicable in case of violations of the regulations. An audit system should be made up of countries that are not directly immersed in AI development, to prevent biases, prejudices, and new forms of manipulation or interference.
The only possible controls will be over the results of AGI output, just as enforcement today is about actions.
The governance system should be agile, with a centralized part and a decentralized part. It should not be very expensive and should not inhibit innovation and creativity, but it should always know what is being done, why, for what, and with what consequences, supported by a robust indicator system and a control board at the planetary level.
Foster collaboration among nations, international organizations, academia, industry, and other stakeholders. Create a specialized UN agency dedicated to AGI governance, tasked with setting global standards, facilitating cooperation, and addressing ethical considerations. Develop and enforce multilateral treaties and agreements that establish ethical principles, safety standards, and guidelines for the development, deployment, and use of AGI. Define a universally accepted ethical framework that prioritizes human values, rights, and safety in AGI systems. Implement mechanisms for continuous evaluation and adaptation of governance frameworks to keep pace with technological advancements and evolving ethical considerations. Ensure the inclusion of diverse stakeholders, including AGI developers, ethicists, policymakers, and representatives from affected communities, in the decision-making processes. Encourage open-source collaboration and information sharing within the AGI community. Establish regulatory frameworks at the national and international levels to ensure compliance with global standards. Implement educational initiatives to raise awareness about AGI. Engage the public in discussions to gather diverse perspectives and promote understanding. Set up monitoring and reporting mechanisms to track the development and deployment of AGI globally, with the ability to investigate and address violations of established standards.
Cybernetics provides many ideas useful for regulating AI. It allows comparisons among computers, human intelligence, and social systems such as management. It offers a general theory of control that encompasses the social sciences -- psychology, sociology, political science, management, and anthropology -- in addition to much of biology and engineering. Artificial intelligence grew out of cybernetics, whose name comes from the Greek word for a steersman, the root of "governor." A governor is also the device that regulates the speed of a steam engine; without such a control device, a steam engine can run away and explode, injuring many people (a minimal sketch of this feedback principle follows).
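To make the steam-engine analogy concrete, here is a minimal sketch in Python of the negative-feedback principle a mechanical governor embodies; the plant dynamics, gain, and numbers are invented purely for illustration. The corrective action is proportional to the gap between measured and desired speed; remove the feedback, and the system runs away.

    # Illustrative sketch only: a proportional "governor" holding an
    # engine's speed near a setpoint. All dynamics here are invented.
    def simulate(setpoint: float = 100.0, gain: float = 0.9,
                 feedback: bool = True, steps: int = 25) -> float:
        speed = 10.0
        for _ in range(steps):
            drive = 5.0                              # the engine keeps pushing
            if feedback:
                drive += gain * (setpoint - speed)   # governor throttles back
            speed += drive
        return speed

    print(round(simulate(feedback=True), 1))   # settles near the setpoint (~105.6)
    print(round(simulate(feedback=False), 1))  # 135.0 and still climbing: runaway

The governed run settles slightly above the setpoint (a small offset is characteristic of purely proportional control against a constant drive), while the ungoverned run grows without bound, which is the "runaway" the comment warns about.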
We need a global plan (something like an ISO standard), with implementation by accredited state organizations and reliable certification of compliance with standards.
AGI governance should be based on the principle of subsidiarity: a potential UN agency on AGI should intervene only to the extent that citizens, civil society, and states are unable to regulate themselves. The UN AGI Agency would be a facilitator and supporter rather than a central authority.
The model should be composed of a Governing Board of government and non-government representatives, a Technical Advisory Council of technical experts from around the world, an Industry Advisory Council representing the software development sector, and an Independent Assessment Board of AI evaluators distributed around the world.
It should monitor the development of AGI and issue warnings and indications for regulation, but it should not prevent research and development.
There is no need to create two divisions in a UN AI Agency; a single division responsible for AGI global governance is sufficient, staffed with both experts on cutting-edge AGI development and experts on narrow artificial intelligence, so that any issues can be coordinated internally.
The Agency should be a system of subject committees (strategic planning, governance, ethics, data, security, privacy, monitoring, maintenance, development, usage, outcomes, legal framework, management, quality, storage, training, and communication), all integrated by international organizations and institutions, with public consultations to collect feedback from users and stakeholders before implementing important actions.
I like the ISO model: certification by an international body. The "mark" of approval would be highly valued and sought after, something that investors, company boards, and government decision-makers would require.
Encourage international cooperation through economic and diplomatic incentives.
No country should have veto power in the UN AI Agency. Majority rule with time limits on rules; not rules with a forever clause.
Membership in the governance model should be democratic, with power over data and regulation distributed uniformly among the member states and their continental government representations. Consider blockchain and quantum computing systems in the global governance system.
Managing AGI damage must focus on limiting the resources needed to generate it (just as restrictions on uranium enrichment address the nuclear threat).
Have Institutional Review Boards consider ethical supply chains, environmental impact, the rich-poor gap, data privacy, the cultivation of ethical culture, moral courage, and other societal challenges to give a LEED-like evaluation, rating, and certification.
A global governing body with global rules for design, delivery, and ethics, with humans always at the center. A governance group involving humans and non-humans from a range of different disciplines would form the core group in a globally connected hub. I would also ensure that end users, non-technical representatives, and young people are part of the body to bring diversity.
Develop protocols for risk assessment and mitigation, including addressing potential existential risks associated with AGI development. Encourage global research collaboration to share insights, best practices, and methodologies for safe and ethical AGI development.
I believe more in private competition than in public regulation. Also, who regulates the regulators?
The model should allow for free access to AGI to prevent future social divides.
I do not think a workable global governance model can be implemented until AGI is a fully mature technology and the known dangers and failure modes are well worked out. In the short term, the best that can be done is to track the technology and attempt certification strategies.
AGI will likely emerge first from unregulated military research and disseminate from there; hence, there will not be an effective UN AGI governance agency.
Appendix
Regional Demographics
Region Percentage
Europe 38.56
North America 21.56
Latin America 18.63
Asia 17.32
Africa 2.96
Other 0.97
Phase 2 Real-Time Delphi Participants