On November 1, 2023, global leaders, including presidents, ministers, and company heads, came together for the first AI Safety Summit, hosted by UK Prime Minister Rishi Sunak at Bletchley Park in the UK. They assembled to discuss the risks posed by AI and to find ways to protect humankind from its potential pitfalls.
US Vice President Kamala Harris proclaimed, “Just as AI has the potential to do profound good, it also has the potential to do profound harm.” These harms include sophisticated AI models enabling biological attacks, undermining election integrity, and breaching user privacy. There is even the existential risk of artificial intelligence slipping from the reins of human control and wiping out humanity. That long-term risk cannot be addressed today, but the nearer-term risks can be. To combat these “frontier” risks, the leaders of countries and companies agreed to act over the summit’s two days of discussions.
Leading AI developers agreed to a new testing protocol under which companies will allow their new models to be tested by external safety institutes before they are released to the public. Sunak explained, “Until now the only people testing the safety of new AI models have been the very companies developing it.” Allowing external institutes to examine new models provides a form of oversight by setting capability thresholds that must not be exceeded.
The proposal was generally well received by the leaders of the large technology companies who attended the summit. Dario Amodei, CEO of Anthropic, said the new safety institutes could play an effective role in the development of artificial intelligence. Demis Hassabis, CEO of Google DeepMind, likewise endorsed the collective effort and committed to the testing protocol.
The primary outcome of the summit was a collective agreement to continue the discussions, a first step toward mitigating the risks of AI. According to Elon Musk, companies should uncover problems as they develop the technology and share their findings with lawmakers, rather than have governments rush to write new legislation. Legislation can then be formulated over time, an approach reflected in the UK government’s decision to wait for further developments before drafting official rules. The US government, by contrast, has already acted through the Biden administration’s AI Bill of Rights, which is intended to address frontier AI risks such as bias and misinformation.
Despite these differing approaches, the summit marks an agreement among countries and technology powers around the world to collaborate toward the common goal of protecting humanity. Twenty-eight countries, including the US, UK, India, Germany, France, and China, signed the Bletchley Declaration, pledging to cooperate through research and continued conversation. They will work together to create the guardrails technology companies need in order to develop artificial intelligence both innovatively and safely. This summit is only the first of its kind; follow-up summits are planned in South Korea and France.
Works Cited
Coulter, Martin, and Paul Sandle. “At UK’s AI Summit Developers and Govts Agree on Testing to Help Manage Risks.” Reuters, Thomson Reuters, 2 Nov. 2023, www.reuters.com/world/uk/uk-pm-sunak-lead-ai-summit-talks-before-musk-meeting-2023-11-02/. Accessed 06 Nov. 2023.
“Five Takeaways from UK’s AI Safety Summit at Bletchley Park.” The Guardian, Guardian News and Media, 2 Nov. 2023, www.theguardian.com/technology/2023/nov/02/five-takeaways-uk-ai-safety-summit-bletchley-park-rishi-sunak. Accessed 06 Nov. 2023.
“Governments Set out Plan to Ensure AI Safety as US Publishes Standards.” Global Government Forum, www.globalgovernmentforum.com/governments-set-out-plan-to-ensure-ai-safety-as-us-makes-bid-to-lead-standards/. Accessed 06 Nov. 2023.
VOA News. “World Leaders Agree on Artificial Intelligence Risks.” Voice of America, 2 Nov. 2023, www.voanews.com/a/world-leaders-agree-on-artificial-intelligence-risks-/7339273.html. Accessed 06 Nov. 2023.
“AI Risks Take Center Stage at Global Summit.” The Wall Street Journal, Dow Jones & Company, www.wsj.com/video/series/tech-news-briefing/ai-risks-take-center-stage-at-global-summit/6FD50292-3094-4C25-84A7-E2DB5AE84679. Accessed 06 Nov. 2023.
Trueman, Charlotte. “Global AI Safety Summit Shows Need for Collaborative Approach to Risks.” Computerworld, 3 Nov. 2023, www.computerworld.com/article/3710048/global-ai-safety-summit-shows-need-for-collaborative-approach-to-risks.html. Accessed 06 Nov. 2023.