How worried should we be about “existential” AI risk?


The “godfather of AI” has left Google, offering warnings about the existential risks the technology poses for humanity. Mark MacCarthy calls those risks a fantasy, and a debate breaks out between Mark, Nate Jones, and me. There’s more agreement on the White House summit on AI risks, which seems to have followed Mark’s “let’s worry about tomorrow tomorrow” prescription. I think existential risks are a real concern, but I am deeply skeptical about other efforts to regulate AI, especially for bias, as readers of Cybertoonz know. I revert to my past view that regulatory efforts to eliminate bias are an ill-disguised effort to impose quotas, which provokes lively pushback from both Jim Dempsey and Mark.

Other prospective AI regulators, from the FTC’s Lina Khan to the Italian data protection agency, come in for commentary. I’m struck by the caution both have shown, perhaps a sign they recognize the difficulty of applying old regulatory frameworks to this new technology. It’s not, I suspect, because Lina Khan’s FTC has lost its enthusiasm for pushing the law further than it can reasonably be pushed. This week’s examples of litigation overreach at the FTC include a dismissed complaint in a location data case against Kochava and a wildly disproportionate “remedy” for what look like Facebook foot faults in complying with an earlier FTC order.

Jim brings us up to date on a slew of new state privacy laws in Montana, Indiana, and Tennessee. Jim sees them as business-friendly alternatives to the EU’s General Data Protection Regulation (GDPR) and California’s privacy law.

Mark reviews Pornhub’s reaction to the Utah law on kids’ access to porn. He thinks age verification requirements are due for another look by the courts.

Jim explains the state appellate court decision ruling that the NotPetya attack on Merck was not an act of war and thus not excluded from its insurance coverage.

Nate and I recommend Kim Zetter’s revealing story on the SolarWinds hack. The details help to explain why the Cyber Safety Review Board hasn’t examined SolarWinds – and why it absolutely has to. The reason is the same for both: the full story is going to embarrass a lot of powerful institutions.

In quick hits,

  • Mark makes a bold prediction about the fate of Canada’s law requiring Google and Facebook to pay when they link to Canadian media stories: Just like in Australia, he predicts, the tech giants and Canadian media will reach a deal.
  • Jim and I comment on the three-year probation sentence for Joe Sullivan in the Uber “misprision of felony” case—and the sentencing judge’s wide-ranging commentary.
  • I savor the impudence of the hacker who broke into Russian intelligence agencies’ bitcoin wallets and burned the money to post messages doxing the agencies involved.
  • And for those who missed it, Rick Salgado and I wrote a Lawfare article on why CISOs should support renewal of Foreign Intelligence Surveillance Act (FISA) section 702, and Metacurity has now named it one of the week’s “Best Infosec-related Long Reads.”

Download 456th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.



