Originally published June 30, 2024
On Friday, the Supreme Court ruled that judges can decide for themselves how to interpret regulatory questions that are not explicitly addressed in federal law, essentially dismantling some of the regulatory power federal agencies have held for decades and overturning a 40-year-old precedent.
The precedent, known as the Chevron Doctrine, has been the backbone of thousands of federal regulations put in place by the executive branch over the years. It held that judges should defer to government agencies and their regulations when congressional law is ambiguous. In the majority opinion, Chief Justice John Roberts wrote that the longstanding precedent “defies the command of” the Administrative Procedure Act, which governs how agencies operate, because that law assigns the interpretation of statutes to the courts.
In its written brief for the case, the federal government warned that invalidating the Chevron Doctrine could bring a “potentially destabilizing result.” For now, with inflation continuing to tick down and stock markets continuing their upward climb, the economy appears to be settling into a more stable place. Time will tell if that continues.
AI regulation is a hot topic being discussed around the world. I talked to Intertrust CTO Dave Maher, who is also a member of the U.S. AI Safety Institute Consortium under NIST, about the prospects for AI policy action in today’s Washington, D.C. An excerpt from our conversation appears later in this newsletter.
TOMORROW’S TRENDS
Intertrust CTO Dave Maher On The Likelihood Of AI Regulation In Washington, D.C.
AI regulation has been a hot topic for governments around the world. There’s been movement in Congress toward legislation to begin setting guardrails for how AI should be used, including an April hearing on a bipartisan Senate bill known as the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. I talked to Intertrust CTO Dave Maher, who is also a member of the U.S. AI Safety Institute Consortium under NIST, about the NO FAKES Act and AI regulatory issues in Washington, D.C. This conversation has been edited for length, clarity and continuity.
It’s always hard to predict what is going to happen in Congress, especially in an election year. What do you see as the appetite for something to happen with the NO FAKES Act or anything else that regulates AI technology?
Maher: I think there are a few provocative things, like generating fake content that is really deceptive in elections. It may be foreign governments using AI, and they’ve already figured out how to intrude on our elections by distributing content on social media in very subtle ways. Some of the fakes, so to speak, may not directly lie, but they can be a little more subtle. I think that, nonetheless, is going to be very provocative.
I think the other issue on the horizon is when AI is used to target people in things like marketing, or in decisions about who’s insurable or what their insurance rate should be, where AI can really amplify the asymmetric advantage that larger corporations have over individuals. AI is just a big amplifier. I think people are going to figure out that they’re at a far greater disadvantage in contending with big corporations, in marketing but also in contracts, mortgages, very subtle redlining. In all of those areas, AI can aid in what we think of as nefarious practices.
Those things are going to surface, and they’re going to be pretty daunting. Researchers worry about things like AI gaining an advantage over humans and deciding that humans are superfluous, then taking over the world or controlling us. I don’t worry about that kind of stuff. I worry about the kind of stuff we were just talking about.
Read the rest of the interview at Forbes.com