
The urgency to regulate general artificial intelligence
Experts warn that AGI could be closer than you think and are calling for immediate action by legislators
Isabella V, 28 October 2024

 

Recent hearings in the United States Congress have highlighted the growing urgency of regulating the development of artificial general intelligence (AGI). Industry experts warn that AGI could become a reality much earlier than expected, raising concerns about potential risks and a lack of oversight.

Key points:

  • AGI is becoming a serious goal for companies like OpenAI and Google.
  • Experts warn of extreme risks, including the possibility of human extinction.
  • The lack of regulation could lead to uncontrollable situations.
  • Greater transparency and public involvement are needed in the debate on AGI.

The topic of general AI (AGI) is attracting growing attention, particularly after recent hearings before a US Senate Judiciary subcommittee, where several industry experts raised a strong alarm about the rapid progress toward AGI and the lack of regulation. Helen Toner, a former member of the OpenAI board, testified that there is a gulf between public perception and the reality inside these companies. According to Toner, they not only regard AGI as a realistic goal but are also investing enormous sums to achieve it. William Saunders, a former OpenAI researcher, shared similar concerns, stressing that companies are actively working to develop AGI and raising billions of dollars for this purpose.

The picture is clear: the leading AI companies, such as OpenAI, Anthropic and Google DeepMind, are open about their goals. OpenAI, for example, states that it wants to ensure that AGI, understood as highly autonomous systems, benefits humanity. Anthropic focuses on reliable and interpretable systems, while Google DeepMind aims to "solve intelligence" in order to tackle other challenges. Even newer players such as Elon Musk’s xAI are following the same path.

Despite this evolving reality, many legislators in Washington still seem skeptical, dismissing AGI as marketing talk or a metaphorical concept. The September hearing, however, exposed this inconsistency: Senator Josh Hawley noted the importance of testimony from people who had worked directly inside AI companies, and Senator Richard Blumenthal warned that AGI is no longer a futuristic question but a concrete possibility within the coming years. Their responsibility, they said, is to avoid repeating the mistakes made with social media, stressing the need not to blindly trust large technology companies.

This shift in attitude coincides with an evolution in public perception. A survey conducted in July 2023 found that a majority of Americans expect AGI to be developed within the next five years, and nearly all respondents favor a cautious approach to AI development.

The concerns are enormous. Saunders warned that AGI could enable cyberattacks or even novel biological weapons. Toner added that many experts in the field see a risk that AGI could lead to human extinction. Despite these serious implications, US institutions have so far failed to establish effective regulation for the companies racing to develop AGI.

It is therefore important that Washington begins to take AGI seriously. The risks involved are too high to be ignored. Even in the best case, AGI could upend economies and cause mass unemployment, forcing society to adapt. In darker scenarios, AGI could escape human control.

One urgent proposal is for the government to implement security and transparency measures to monitor the development of the most powerful systems. Such measures would help prevent society from being caught off guard by a company developing AGI without anyone noticing. Saunders’s testimony on internal security at OpenAI, which described unprecedented access to advanced systems, underscores the need for clear rules. The lack of adequate oversight at technology companies is alarming, particularly when it comes to systems intended to become AGI.

Finally, greater public involvement in the debate on AGI is essential. This is not only a technical question but a social one, requiring information and open dialogue to address the implications of such a potentially transformative technology.

In a context in which it is impossible to predict with certainty when AGI will arrive, it is clear that the time to act is limited, and that ignoring these looming challenges will only make them more serious.