Microsoft, Google and OpenAI are among the leaders in the US artificial intelligence space that will reportedly commit to certain safeguards for their technology on Friday, following a push from the White House. The companies will voluntarily agree to abide by a number of principles, though the agreement will expire once Congress passes legislation to regulate AI, according to Bloomberg.
The Biden administration has placed a focus on making sure that AI companies develop the technology responsibly. Officials want tech firms to innovate in generative AI in a way that benefits society without compromising the public's safety, rights and democratic values.
In May, Vice President Kamala Harris met with the CEOs of OpenAI, Microsoft, Alphabet and Anthropic, and told them they had a responsibility to make sure their AI products are safe and secure. Last month, President Joe Biden met with leaders in the field to discuss AI issues.
According to a draft document viewed by Bloomberg, the tech firms are set to agree to eight suggested measures concerning safety, security and social responsibility. Those include:
- Letting independent experts test models for bad behavior
- Investing in cybersecurity
- Encouraging third parties to discover security vulnerabilities
- Flagging societal risks, including biases and inappropriate uses
- Focusing on research into the societal risks of AI
- Sharing trust and safety information with other companies and the government
- Watermarking audio and visual content to help make it clear that content is AI-generated
- Using the state-of-the-art AI systems known as frontier models to tackle society's greatest problems
The fact that this is a voluntary agreement underscores the difficulty lawmakers have in keeping up with the pace of AI developments. Several bills have been introduced in Congress in the hope of regulating AI. One aims to prevent companies from using Section 230 protections to avoid liability for harmful AI-generated content, while another seeks to require political ads to include disclosures when generative AI is employed. Of note, administrators in the House of Representatives have reportedly placed limits on the use of generative AI in congressional offices.