Leading AI companies agree to have their systems tested outside of the company and meet other safety standards.

The White House announced on Friday that Microsoft, Google, and other significant AI companies will thoroughly test new AI systems before making them available to the general public and publicly mark everything created using AI. Seven significant AI companies and the White House are discussing with Congress to tighten regulations to stop the sector’s expansion.

Amazon, Meta, OpenAI, Anthropic, and Inflection consented to voluntary commitments to enhance the security and dependability of AI systems and products. The seven organizations’ presidents and vice presidents met on Friday at the White House. The pledges made by the businesses, according to Vice President Biden, are “real and concrete,” and they will help them fulfill their “fundamental obligations to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our values and our shared values.”

This was said by Biden in a speech on Friday. More technological progress will occur in the next ten to fifteen years than in the previous fifty. “Amazing find,” Biden exclaimed. The White House staff claims that by taking these actions, “the standards for safety, security, and trust of AI” are frequently raised and that they act as a “bridge to regulation.” Some businesses have already met some of their commitments.

White House deputy chief of staff Bruce Reed, who oversaw the development of the AI strategy, described it as a first step and a bridge to our future goals in an interview. Businesses and governments will benefit from its protection of AI. Given how swiftly this technology is evolving, we had to move quickly.

Even while the majority of businesses engage in internal “red-teaming” exercises, this will be the first time all of them have consented to have outside experts evaluate their systems prior to making them publicly available. Red team exercises assist businesses in identifying flaws and preventing unanticipated outcomes by simulating a cyberattack or hostile actor’s usage of a technology.

The external red-teaming, according to Reed, “will help pave the way for government oversight and regulation,” maybe opening the door for a government regulator or licenser to do comparable external testing. To prevent fraud and false information, the agreements may watermark audio and video produced using AI.

The businesses also committed to spending money on cybersecurity and “insider threat safeguards,” particularly to defend AI model weights, which serve as the basis for AI systems. They promised to develop and use AI technology “to help address society’s greatest challenges,” according to the White House. Businesses gave priority to social risk studies in AI. Brad Smith’s response to CNN’s Jake Tapper on Friday was, “What people, bad actors, individuals, or countries will do” with the technology.

that they’ll try to influence our elections by breaking into our computer systems. You know they’ll use it to jeopardize our jobs, he said. Smith advises, however, concentrating on these issues, comprehending them, bringing people together, and coming up with solutions.

I find AI fascinating because, when we apply ourselves to such things, we can use technology to combat these problems much more successfully than we now are. In reaction to a recent petition on artificial intelligence and pay signed by many authors, Smith told Tapper, “I don’t want technology to undermine anybody’s ability to make a living by creating, by writing.” Everyone should try to achieve that balance.

All of the obligations, some of which are vague, are optional, according to White House staffers, and there is no enforcement mechanism to ensure firms comply. Although the White House’s AI protections are admirable, Common Sense Media, a group that advocates for children’s online safety, issued a warning: “History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations.”

According to James Steyer, CEO of Common Sense Media, “if we’ve learned anything from the last decade and the complete mismanagement of social media governance, it’s that many companies offer a lot of lip service.” They put their financial interests first and don’t care how their products affect Americans, particularly families and children.

White House officials’ AI laws and executive initiatives have been impacted by the federal government’s failure to control social media businesses when they first emerged and their opposition. Reed claimed that we instructed businesses to be as thorough as possible.

Considering how quickly AI is developing, the bridge to regulation must be sturdy. The tech sector miscalculated oversight, legislation, and regulation ten years ago. A group of AI CEOs met with Vice Presidents Joe Biden and Kamala Harris in May after months of discussions between the White House and AI business owners. They pledged.

The White House received advice from outside experts on AI ethics and safety. Later this summer, the White House will provide a number of administrative actions in an effort to go beyond voluntary promises. To create more thorough AI rules, officials are collaborating with Capitol Hill representatives.

It’s weighty. Fill it out completely. Biden acknowledges the huge opportunities. According to White House officials, the corporations will “immediately” begin putting the voluntary commitments into practice and they anticipate more companies will do the same. We anticipate that other companies will uphold trust, safety, and security. They are welcome to sign up to these accords, according to a White House spokesperson.

Ads Blocker Image Powered by Code Help Pro

Ads Blocker Detected!!!

We have detected that you are using extensions to block ads. Please support us by disabling these ads blocker.