Be afraid. Be very afraid.
The unimaginable computing power of artificial intelligence (AI) has enabled internet-wide surveillance and censorship.
This is not some future dystopia. It is happening right now.
Government agencies are working with universities and nonprofits to use AI tools to monitor and censor content on the internet.
This is not political or partisan. This is not about any particular opinion or idea.
What is going on is this: massively powerful information tools are now available to governments to monitor everything (or most) of what we say and do on the internet, allowing them to watch all of us, all the time. And based on that surveillance, governments, and the organizations and businesses they partner with, can use the same tools to suppress, silence, and shut down any speech they don't like.
But that's not all. Using the same tools, governments and their public and private "non-governmental" partners (such as the World Health Organization and Monsanto) can also block any activity linked to the internet. Banking, buying and selling, education, learning, entertainment, interconnectedness – if a government-controlled AI doesn't like what you (or your kids!) say in a tweet or email, it can shut all of that down for you.
Yes, we have already seen this on a very local and politicized scale, for example with the Canadian truckers.
But if you think this kind of activity cannot or will not happen on a national (or, even more frightening, global) scale, we need to wake up now. It is already happening, and we may soon be unable to stop it.
New documents show government-funded AI aimed at online censorship
The U.S. House Select Subcommittee on the Weaponization of the Federal Government, established in January 2023, investigates "issues related to the collection, analysis, dissemination, and use of information about U.S. citizens by executive branch agencies, including whether such efforts are illegal, unconstitutional, or otherwise unethical."
Unfortunately, the committee's work is viewed as primarily political, even by its own members: conservative lawmakers are investigating what they see as the silencing of conservative voices by liberal-leaning government agencies.
Nevertheless, in the course of its investigation, the subcommittee uncovered some startling documents related to government attempts to censor the speech of American citizens.
These documents have implications that are profound and terrifying for society as a whole.
In the subcommittee's interim report, dated February 5, 2024, the documents show that academic and non-profit institutions have been pitching government agencies on plans to use AI "misinformation services" to censor content on internet platforms.
In particular, the University of Michigan explained to the National Science Foundation (NSF) that its NSF-funded, AI-powered tool could help social media platforms carry out censorship without the platforms actually having to decide what to censor.
Here's how this relationship is visualized in the subcommittee's report:
The specific quote provided in the subcommittee's report comes from "speaker's notes from the University of Michigan's first pitch of its NSF-funded, AI-powered WiseDex tool." The notes are on file with the committee.
Our misinformation service helps policymakers at platforms who want to push the responsibility for difficult judgments onto someone outside the company… by externalizing the difficult responsibility of censorship.
This is an extraordinary statement on so many levels.
- It clearly equates “misinformation services” with censorship.
This is a very important equation, because governments around the world are passing major censorship laws while pretending to be fighting harmful misinformation. The WEF's declaration that "misinformation and disinformation" are the "most severe global risk" of the next two years makes it likely that their biggest efforts will be directed toward censorship.
When a government contractor explicitly states that it sells a "misinformation service" that helps online platforms "externalize" censorship, we can be sure the two terms are interchangeable.
- It refers to censorship as a “responsibility.”
In other words, it takes for granted that part of what platforms are supposed to do is censor. Not protect children from sexual predators, not protect innocent citizens from misinformation, but censorship, plain and simple.
- It describes the role of AI as "externalizing" the responsibility of censorship.
Tech platforms do not want to make censorship decisions. Governments want to make those decisions but do not want to be seen doing so. AI tools let the platforms "externalize" censorship decisions and let governments hide their censorship activities.
All of this should put an end to the illusion that what governments around the world call "countering misinformation and hate speech" is anything other than direct censorship.
What will happen when AI censorship is fully implemented?
Knowing that governments are already paying for AI censorship tools, we need to understand what this means.
No staffing limits: As the subcommittee's report points out, government online censorship has until now been limited by the need for vast numbers of people to sift through endless files and make censorship decisions. With AI, almost no human involvement is needed, and the amount of speech that can be monitored, from anyone on any platform, is virtually limitless. That amount of data is incomprehensible to any individual human brain.
No one is accountable: One of the most frightening aspects of AI censorship is that when AI does the censoring, there is no actual person or entity, whether government, platform, or university/nonprofit, responsible for it. Humans initially give the AI tools instructions on which categories or types of speech to censor, but then the machine makes case-by-case decisions entirely on its own.
No possibility of appeal: As AI unleashes its censorship orders, sweeping through billions of online data points and applying censorship measures, anyone who wants to challenge a decision must argue with a machine. Perhaps platforms will hire humans to respond to appeals. But why would they, when AI can automate those responses?
No protections for young people: One of the arguments made by government censors is that children need protection from harmful online information, such as content that promotes anorexia, encourages suicide, recruits them into ISIS terrorism, or sexually exploits them. These are all serious issues that deserve attention. But none of them endangers vast numbers of young people the way AI censorship does.
The danger posed by AI censorship applies to all young people who spend a lot of time online, because it means their online activities and language can be monitored and used against them, perhaps not now, but whenever the government decides to go after a certain type of speech or behavior. This is a far greater danger to far more children than any particular piece of content, because it encompasses everything children do online and touches nearly every aspect of their lives.
Here is an example that illustrates the danger. Say your teenager plays a lot of interactive video games online, and happens to prefer games designed by Chinese companies. Perhaps he watches others play those games and participates in chats and discussion groups about them, alongside many Chinese participants.
The government may decide next month or next year that anyone deeply involved in Chinese-designed video games is a danger to democracy. As a result, your son's social media accounts may be shut down, or he may be denied access to financial tools such as college loans. He might be flagged as dangerous or undesirable on job or dating sites, refused a passport, or placed on a watch list.
Your teenager's life just got a lot harder, much harder than it would have been from watching ISIS recruitment videos or TikTok posts glorifying suicide. And this can happen on a massive scale, far larger than the sexual exploitation that censors use as a Trojan horse to normalize the concept of government censorship online.
Monetizable censorship services: In theory, government-owned AI tools could be used by non-governmental entities, with government permission and to the benefit of platforms wishing to "externalize" their censorship "responsibility." So while a government might use AI to monitor and suppress, say, anti-war sentiment, a company could use it to monitor and suppress, say, anti-fast-food sentiment. Governments could make significant profits by selling the services of their AI tools to third parties, and platforms might demand a cut as well. AI censorship tools could thus be lucrative for governments, tech platforms, and private corporations alike, and an incentive that powerful is almost guaranteed to be abused.
Can we reverse course?
We do not know how many government agencies and platforms are already using AI censorship tools, or how quickly those tools can scale.
Beyond raising awareness, lobbying politicians, and filing lawsuits, we do not know what tools we have at our disposal to prevent government censorship and to regulate the use of AI tools on the internet.
If you have other ideas, now is the time to implement them.
*****
This article was published by the Brownstone Institute and is reprinted with permission.
Take Action
The Prickly Pear's "Take Action" focus this year is to eliminate the Biden/Obama left-wing executive disaster by winning the national and state elections on November 5, 2024, winning Arizona's U.S. Senate seat, maintaining and winning strong majorities in all Arizona state offices, and ensuring that unlimited abortion is not constitutionally written into our laws and culture through the ballot.
Click the "Take Action" link to learn the do's and don'ts of voting in 2024. Over the past few election cycles, our state and national elections have been plagued by documented fraud, mail-in ballot abuse, and illegal voting across the country (yes, illegal voting across the country), and they remain at great risk from radical Democratic Party operatives.
Read Part 1 and Part 2 of The Prickly Pear essay "How Not to Vote in the November 5, 2024 Arizona Election" to better understand the issues above and to ensure that your vote is cast as intended and is most likely to be counted.