The young person challenging AI tech enthusiasts

Sneha Revanur Stands Out in AI Safety Debate

Sneha Revanur, a 20-year-old Stanford student, has been dubbed the “Greta Thunberg of AI.” Depending on your viewpoint, you might find that flattering or offensive. But, as she puts it, she’s taking it in stride.

Having grown up in Silicon Valley, Revanur doesn’t shy away from criticism, especially from the tech giants who prefer silence on the serious implications of artificial intelligence. Instead of letting the backlash deter her, she maintains a laser focus on the rapid advancement of AI technology and the disturbing lack of regulatory measures to ensure safety.

“Whatever long-term outcomes AI brings, whether beneficial or harmful, it’s my generation that will face the consequences,” she said during an interview.

This week, California lawmakers have a pivotal decision to make concerning Senate Bill 53. For someone like me, who isn’t tech-savvy, let’s think of it in simple terms.

Imagine using a gas stove. It could work perfectly fine or lead to a disastrous fire—or worse. Do you really want to gamble with something that could affect so many lives? At the very least, don’t we owe it to ourselves to have some sort of warning system in place?

In this scenario, the smoke alarm represents SB 53. The bill aims to establish transparency requirements for major AI developers—the companies building the foundation models that could eventually be fine-tuned for critical applications, ranging from military uses to healthcare and education.

But for now, companies seem more interested in pushing boundaries than in considering ethical implications. If passed, the legislation would mandate these organizations to implement safety protocols and publicly disclose them.

Crucially, developers would be obliged to report if they recognize their models pose genuine threats—capable of causing severe injuries or even significant property damage. They would also need to alert the state’s Department of Emergency Services if there’s a risk of systems going rogue—that is, if they’re aware of it.

Moreover, the bill includes whistleblower protections to empower engineers who might uncover potential dangers posed by AI systems, enabling them to alert the public before any harmful technologies are released.

While other stipulations exist, the main goal remains clear: we need a glimpse into the operations of tech companies that wield such profound influence over our future, while being propelled largely by profit motives.

Big Tech has resisted the bill and diluted its stricter aspects. Revanur co-founded an AI safety organization named Encoding at just 15 years old—remarkable, considering the California State Capitol often feels like a high school in its politics. Her group has been lobbying for AI regulations and making significant progress, particularly with the possible vote on SB 53 this week.

It has been an uphill battle: Revanur has faced discouragement from lawmakers and from major tech companies with substantial influence. Yet she has pushed through with sheer determination, even if she isn’t the only advocate pressing for change.

Interestingly, Revanur had previously supported a more aggressive bill seeking to regulate the AI industry as a whole. However, Governor Gavin Newsom vetoed it, expressing concern that it overstepped. Yet he acknowledged that California may need unique approaches to regulation, given the federal government’s inaction.

Since that veto, Congress has shown little interest in tackling AI issues. Recently, President Trump hosted a dinner with AI executives, including OpenAI’s CEO, Sam Altman, praising their collaborative efforts.

This made me question who exactly will lead the regulation discourse moving forward. OpenAI recently subpoenaed Encoding, perhaps hoping to find ties to Elon Musk, but Revanur insists there’s no connection beyond a legal brief.

“We have no links to Elon,” she stated firmly.

Yet, she noted, some hope she’ll back down. That isn’t happening.

“We’re committed to our mission,” she declared. “We aim to be an unbiased watchdog for one of the most powerful technologies ever developed, which I believe is crucial for our future.”

AI is still in its infancy, but its growth is accelerating at an alarming pace. Incidents of troubling behavior from recent AI models highlight the potential risks, and experts continue to warn about the broader threats such technologies pose.

So we find ourselves navigating an era of technological transformation. Maybe, instead of racing towards potential calamities, it’s time to heed voices like Revanur’s and explore solutions like SB 53—essentially a smoke alarm for the tech world.