Confirmation bias meets Ukraine war and Elon Musk


Whatever else experts say about the use of cyberattacks in the war in Ukraine, Dave Aitel notes, they all believe it confirms their past predictions about cyber warfare. And in fact, little about the cyber weapons deployed by the parties has been surprising, Scott Shapiro agrees. Ukrainians doxxed the Russian soldiers at Bucha and Russia's spies around the world. The Russians attacked Ukraine's grid. What is surprising is how little the network attacks degraded civilian life, and how hard the Russians had to work to have any effect at all. Cyberwar isn't exactly a bust, but it does feel a bit overhyped. In fact, Scott suggests, it's more an admission of weakness than of strength: "My military offense isn't up to snuff, so I'm going to add some fancy cyberweapons to impress the boss."

Would it have more impact in the United States? We can’t know until the Russians (or someone else) try. Surely we should have a plan to respond, and Dmitri Alperovitch and Sam Charap have come up with theirs: Shut down the internet in Russia for a few hours just to show we can. It’s better than no plan, but we’re not ready to say it’s the right plan, given its limited impact and high cost in terms of exposed exploits.

Much more surprising, and therefore more interesting, is how Ukrainian mobile phone networks have become an essential part of Ukrainian defense. As noted in a good blog post, Ukraine has made it possible for civilians to continue using their phones without paying, no matter where they travel in the country and no matter what network they find there. At the same time, Russian soldiers discover that the network is a dangerous honeypot. Dave and I believe there are lessons to be learned here for the emergency administration of telephone networks in other countries.

Gus Hurwitz draws the short straw and sums up the second installment of the Elon Musk v. Twitter story. We agree that Twitter's poison pill is likely to kill Musk's chances of a successful takeover. So what else is there to talk about? In keeping with the confirmation bias theme, I take a short victory lap for predicting that Musk would try to become the Rupert Murdoch of the social media oligarchs. And Gus helps us enjoy the festival of hypocrisy from the usual sources declaring that the preservation of democracy depends on internet censorship, administered by their cronies.

Scott takes us deep into pipeline security, drawing on a colleague's Lawfare article on the subject. He thinks responsibility for pipeline security should be transferred from the Transportation Security Administration (TSA) to the Federal Energy Regulatory Commission (FERC), because, well, TSA. The Biden administration leans the same way, but I'm not enthusiastic: TSA may not have had much regulatory savvy until recently, but neither has FERC, and TSA can borrow all the cyber expertise it needs from its sister agency, the Cybersecurity and Infrastructure Security Agency (CISA). An option that is also open to FERC, Scott points out.

You can’t talk about pipeline cybersecurity without talking about industrial control system security, so Scott and Gus unpack a recently discovered ICS malware package that amounts to a kind of Metasploit for attacking operational technology systems. It has a boatload of features, but Gus is skeptical that it’s the best tool for wreaking major havoc on power grids or pipelines. Also, remarkably, it appears to have been leaked before the nation-state that developed it could use it against an adversary. Now that’s defending forward!

As a palate cleanser, we ask Gus to fill us in on the latest in EU cloud protectionism. It looks like a move that will harm US intelligence but do nothing for Europe’s efforts to build its own cloud industry. I tell the background story, from the subpoena litigation to the CLOUD Act to this latest attack on the CLOUD. The whole thing gives me the impression that Microsoft is playing both sides against the middle.

Finally, Dave walks us through the many proposals launched around the world to regulate the use of artificial intelligence (AI) systems. I note that congressional Democrats have pulled out their knives for the facial recognition provider ID.me. And I return briefly to the problem of biased content moderation, looking at research showing that Republican Twitter accounts were four times more likely than Democratic accounts to be suspended after the 2020 election, which at first glance looks like a smoking gun for moderator bias. But I find myself at least tentatively convinced by other research showing that Republican accounts were also four times more likely to tweet links to sites that a cross-section of voters considers unreliable. Where is confirmation bias when you need it?

Download the 403rd episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to comments. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to [email protected]. Don’t forget: if your suggested guest appears on the show, we’ll send you a coveted Cyberlaw Podcast mug!

The opinions expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families or pets.
