
Biden meets with experts about dangers of AI

President Biden is scheduled to meet with researchers and advocates with artificial intelligence expertise in San Francisco on Tuesday as the administration seeks to address the potential dangers of AI technologies, which may promote misinformation, unemployment, discrimination, and invasion of privacy.

The meeting comes as Mr. Biden ramps up efforts to raise money from tech billionaires and others for his 2024 reelection campaign. During his visit to Silicon Valley on Monday, he participated in two fundraisers, including one co-hosted by Reid Hoffman, a prominent venture capitalist and entrepreneur with many ties to the AI business. Hoffman was an early investor in OpenAI, the maker of ChatGPT, has invested heavily in AI, and sits on the boards of technology companies, including Microsoft.

The experts Biden plans to meet on Tuesday include some of the most vocal critics of big tech companies. The list includes Jim Steyer, a children’s advocate who founded and leads Common Sense Media; Tristan Harris, executive director and co-founder of the Center for Humane Technology; Joy Buolamwini, founder of the Algorithmic Justice League; and Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered AI.

Some of the experts have experience working for big technology companies. Harris, a former Google product manager and design ethicist, has spoken about how social media companies like Facebook and Twitter can harm people’s mental health and amplify misinformation.

Biden’s talks with AI researchers and tech executives make clear that the president is acting in both roles at once: his campaign is courting wealthy donors even as his administration weighs the risks of the burgeoning technology. While Mr. Biden has criticized tech giants, executives and employees of companies such as Apple, Microsoft, Google and Meta contributed millions of dollars to his campaign in the 2020 election cycle.

“AI is a top priority for the president and his team. Generative AI tools have increased significantly over the past few months and we are not going to fix yesterday’s problem,” a White House official said in a statement.

The Biden administration has been focusing on the risks of AI. Last year, it released the “Blueprint for an AI Bill of Rights,” which outlines five principles developers should keep in mind before releasing new AI-powered tools. The administration has also met with tech CEOs, announced steps the federal government is taking to address AI risks, and promoted other efforts to “promote responsible American innovation.”

Lina Khan, the Biden-appointed chair of the Federal Trade Commission, wrote in a May editorial in The New York Times that the rise of technology platforms such as Facebook and Google came at the expense of user privacy and security.

“As the use of AI becomes more widespread, public officials have a responsibility to ensure that this hard-learned history does not repeat itself,” Khan said.

Tech giants are using AI in a variety of products to recommend videos, power virtual assistants, and transcribe audio. AI has been around for decades, but the popularity of the AI chatbot ChatGPT has intensified competition among tech giants such as Microsoft, Google, and Facebook parent company Meta. Launched in 2022 by OpenAI, ChatGPT can answer questions, generate text, and complete various tasks.

In the rush to advance AI technology, technologists, researchers, lawmakers and regulators worry that new products will be released before they are safe. In March, Tesla, SpaceX and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and other technology leaders signed a letter calling on AI labs to pause training of advanced AI systems and urging developers to work with policymakers. AI pioneer Geoffrey Hinton, 75, quit his job at Google so that he could speak more openly about the risks of AI.

As technology advances rapidly, lawmakers and regulators are struggling to keep up. California Governor Gavin Newsom has suggested that he wants to approach state-level AI regulation cautiously, saying at a Los Angeles conference in May that the “biggest mistake” politicians make is to assert themselves “without trying to understand.” California legislators have proposed several ideas, including bills that would combat algorithmic discrimination, establish an artificial intelligence office, and create a working group to provide an AI report to the legislature.

Writers and artists are also concerned that companies could use AI to replace workers, and using AI to generate text and art raises ethical issues such as plagiarism and copyright infringement. The Writers Guild of America, which remains on strike, announced proposed rules in March for how Hollywood studios may use AI. For example, text generated by AI chatbots “will not be considered in determining copyright” under the proposed rules.

The potential misuse of AI to spread political propaganda and conspiracy theories, a problem that has plagued social media, is also a top concern among disinformation researchers. They fear that AI tools’ ability to spit out text and images will make it easy and cheap for bad actors to spread misleading information.

Already, some mainstream political ads are starting to use AI. The Republican National Committee posted an AI-generated video ad that depicts a dystopian future if Biden wins re-election in 2024. AI tools can also create fake audio clips of politicians or public figures saying things they never actually said. The campaign of Republican presidential candidate and Florida Governor Ron DeSantis shared a video containing what appear to be AI-generated images of former President Trump hugging Dr. Anthony Fauci.

Tech companies aren’t against guardrails. They welcome regulation, but they also want to shape it. In May, Microsoft released a 42-page report on governing AI, noting that no company is above the law. The report includes a “blueprint for AI public governance” outlining five points, including creating “safeguards” for AI systems that control power grids, water systems and other critical infrastructure.

That same month, OpenAI CEO Sam Altman testified before Congress, calling for AI regulation.

“My biggest fear is that we, the tech industry, are doing great harm to the world,” Altman told lawmakers. “If this technology doesn’t work, it can go very wrong.” Altman, who has met with leaders from Europe, Asia, Africa and the Middle East, also signed a one-sentence letter in May along with scientists and other leaders warning of the “extinction risk” posed by AI.
