Free Republic
General/Chat

AI and Microdirectives -- Imagine a future in which AIs automatically interpret—and enforce—laws
Schneier on Security (Crypto-Gram) ^ | Bruce Schneier, with Jon Penney

Posted on 08/16/2023 11:32:40 AM PDT by powerset

AI and Microdirectives

[2023.07.21] Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.

This future may not be far off—automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the defendant is presumed guilty unless they prove otherwise, by naming and notifying the driver.
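To make those mechanics concrete, here is a minimal Python sketch of that citation flow under stated assumptions: the speed threshold, the plate-to-owner registry, and the liability-transfer step are illustrative only, not any jurisdiction's actual system.

```python
# Illustrative sketch only: a toy model of how a speed camera attaches
# liability to the registered owner by default, shifting it only if the
# owner names the actual driver. All names and thresholds are assumptions.
from dataclasses import dataclass
from typing import Optional

SPEED_LIMIT_MPH = 30                         # assumed posted limit at the camera site
REGISTERED_OWNERS = {"ABC1234": "Jane Doe"}  # toy plate-to-owner registry

@dataclass
class Citation:
    plate: str
    measured_mph: float
    liable_party: str
    note: str = "Owner presumed liable unless the actual driver is named."

def camera_event(plate: str, measured_mph: float) -> Optional[Citation]:
    """Issue a citation to the registered owner when the limit is exceeded."""
    if measured_mph <= SPEED_LIMIT_MPH:
        return None
    owner = REGISTERED_OWNERS.get(plate, "unknown owner")
    return Citation(plate=plate, measured_mph=measured_mph, liable_party=owner)

def transfer_liability(citation: Citation, named_driver: str) -> Citation:
    """The owner rebuts the presumption by identifying who was driving."""
    citation.liable_party = named_driver
    citation.note = "Liability transferred to the named driver."
    return citation

if __name__ == "__main__":
    c = camera_event("ABC1234", 41.0)
    print(c)                                  # cited: Jane Doe, presumed liable
    print(transfer_liability(c, "John Roe"))  # owner names the driver
```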

In New York, AI systems equipped with facial recognition technology are being used by businesses to identify shoplifters. Similar AI-powered systems are being used by retailers in Australia and the United Kingdom to identify shoplifters and provide real-time tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.

Breathalyzers are another example of automatic detection. They estimate blood alcohol content by calculating the number of alcohol molecules in the breath via an electrochemical reaction or infrared analysis (they’re basically computers with fuel cells or spectrometers attached). And they’re not without controversy: Courts across the country have found serious flaws and technical deficiencies with Breathalyzer devices and the software that powers them. Despite this, criminal defendants struggle to obtain access to devices or their software source code, with Breathalyzer companies and courts often refusing to grant such access. In the few cases where courts have actually ordered such disclosures, that has usually followed costly legal battles spanning many years.
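As a rough illustration of the arithmetic such a device performs, here is a minimal Python sketch. The 2100:1 blood-to-breath partition ratio and the 0.08 limit are the usual U.S. conventions; the function name and sample values are assumptions, not any vendor's proprietary software.

```python
# Hypothetical sketch of the conversion an evidential breath tester performs.
# Real devices use proprietary calibration and software; this only shows the
# commonly assumed arithmetic.

PARTITION_RATIO = 2100   # assumed mL of breath holding the same alcohol as 1 mL of blood
LEGAL_LIMIT_BAC = 0.08   # g of alcohol per 100 mL of blood (typical U.S. per se limit)

def estimate_bac(breath_alcohol_g: float, breath_volume_ml: float) -> float:
    """Estimate blood alcohol content (g per 100 mL) from a breath sample.

    breath_alcohol_g: grams of alcohol measured in the sample (via fuel-cell
                      current or infrared absorption, per the device).
    breath_volume_ml: volume of the breath sample in milliliters.
    """
    grams_per_ml_breath = breath_alcohol_g / breath_volume_ml
    grams_per_ml_blood = grams_per_ml_breath * PARTITION_RATIO
    return grams_per_ml_blood * 100  # report per 100 mL, the usual BAC unit

if __name__ == "__main__":
    # Example: 0.00004 g of alcohol measured in a 100 mL breath sample.
    bac = estimate_bac(breath_alcohol_g=0.00004, breath_volume_ml=100)
    print(f"Estimated BAC: {bac:.3f}")  # -> 0.084
    print("Over limit" if bac >= LEGAL_LIMIT_BAC else "Under limit")
```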

AI is about to make this issue much more complicated, and could drastically expand the types of laws that can be enforced in this manner. Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law. These would be administered by what Anthony Casey and Anthony Niblett call “microdirectives,” which provide individualized instructions for legal compliance in a particular scenario.

Made possible by advances in surveillance, communications technologies, and big-data analytics, microdirectives will be a new and predominant form of law shaped largely by machines. They are “micro” because they are not impersonal general rules or standards, but tailored to one specific circumstance. And they are “directives” because they prescribe action or inaction required by law.

A Digital Millennium Copyright Act takedown notice is a present-day example of a microdirective. The DMCA’s enforcement is almost fully automated, with copyright “bots” constantly scanning the internet for copyright-infringing material, and automatically sending literally hundreds of millions of DMCA takedown notices daily to platforms and users. A DMCA takedown notice is tailored to the recipient’s specific legal circumstances. It also directs action—remove the targeted content or prove that it’s not infringing—based on the law.
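For a sense of how such a bot might be structured, here is a minimal, hypothetical Python sketch: it fingerprints an upload, matches it against a registry of protected works, and emits a notice tailored to that specific upload. The exact-hash matching, the names, and the notice wording are illustrative assumptions; real systems use perceptual fingerprinting and far more elaborate pipelines.

```python
# Hypothetical sketch of an automated takedown "bot": fingerprint an upload,
# match it against a registry of protected works, and generate a notice
# tailored to that upload. Exact hashing is used here only for simplicity;
# real systems rely on perceptual fingerprints that survive re-encoding.
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    url: str
    uploader: str
    content: bytes

# Toy registry of protected works, keyed by content fingerprint.
PROTECTED_WORKS = {
    hashlib.sha256(b"example copyrighted file bytes").hexdigest(): "Example Song (Label X)",
}

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def scan(upload: Upload) -> Optional[str]:
    """Return a tailored takedown notice if the upload matches a protected work."""
    work = PROTECTED_WORKS.get(fingerprint(upload.content))
    if work is None:
        return None
    # The notice is "micro": one uploader, one URL, one work, and it directs
    # action (remove the content, or contest it with a counter-notice).
    return (
        f"To {upload.uploader}: the material at {upload.url} appears to match "
        f"'{work}'. Remove it, or file a counter-notice asserting that it is "
        f"not infringing."
    )

if __name__ == "__main__":
    hit = Upload("https://example.com/u/123", "user123",
                 b"example copyrighted file bytes")
    print(scan(hit))
```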

It’s easy to see how the AI systems being deployed by retailers to identify shoplifters could be redesigned to employ microdirectives. In addition to alerting business owners, the systems could also send alerts to the identified persons themselves, with tailored legal directions or notices.

A future where AIs interpret, apply, and enforce most laws at societal scale like this will exponentially magnify problems around fairness, transparency, and freedom. Forget about software transparency—well-resourced AI firms, like Breathalyzer companies today, would no doubt ferociously guard their systems for competitive reasons. These systems would likely be so complex that even their designers would not be able to explain how the AIs interpret and apply the law—something we’re already seeing with today’s deep learning neural network systems, which are unable to explain their reasoning.

Even the law itself could become hopelessly vast and opaque. Legal microdirectives sent en masse for countless scenarios, each representing authoritative legal findings formulated by opaque computational processes, could create an expansive and increasingly complex body of law that would grow ad infinitum.

And this brings us to the heart of the issue: If you’re accused by a computer, are you entitled to review that computer’s inner workings and potentially challenge its accuracy in court? What does cross-examination look like when the prosecutor’s witness is a computer? How could you possibly access, analyze, and understand all microdirectives relevant to your case in order to challenge the AI’s legal interpretation? How could courts hope to ensure equal application of the law? Like the man from the country in Franz Kafka’s parable in The Trial, you’d die waiting for access to the law, because the law is limitless and incomprehensible.

This system would present an unprecedented threat to freedom. Ubiquitous AI-powered surveillance in society will be necessary to enable such automated enforcement. On top of that, research—including empirical studies conducted by one of us (Penney)—has shown that personalized legal threats or commands that originate from sources of authority—state or corporate—can have powerful chilling effects on people’s willingness to speak or act freely. Imagine receiving very specific legal instructions from law enforcement about what to say or do in a situation: Would you feel you had a choice to act freely?

This is a vision of AI’s invasive and Byzantine law of the future that chills to the bone. It would be unlike any other law system we’ve seen before in human history, and far more dangerous for our freedoms. Indeed, some legal scholars argue that this future would effectively be the death of law.

Yet it is not a future we must endure. Proposed bans on surveillance technology like facial recognition systems can be expanded to cover those enabling invasive automated legal enforcement. Laws can mandate interpretability and explainability for AI systems to ensure everyone can understand and explain how the systems operate. If a system is too complex, maybe it shouldn’t be deployed in legal contexts. Enforcement by personalized legal processes needs to be highly regulated to ensure oversight, and should be employed only where chilling effects are less likely, like in benign government administration or regulatory contexts where fundamental rights and freedoms are not at risk.

AI will inevitably change the course of law. It already has. But we don’t have to accept its most extreme and maximal instantiations, either today or tomorrow.

This essay was written with Jon Penney, and previously appeared on Slate.com.


TOPICS: Computers/Internet; Society
KEYWORDS: computers; freedom; law
Just when you thought the future couldn't get any worse...
1 posted on 08/16/2023 11:32:40 AM PDT by powerset

To: powerset

It sucks at math, and now it’s going to enforce the law?
No thanks.


2 posted on 08/16/2023 11:34:42 AM PDT by EEGator

To: powerset

What is the penalty for “forgetting your phone”?

Or if somebody steals your phone, masks up, and then commits crimes?


3 posted on 08/16/2023 11:35:59 AM PDT by uranium penguin

To: powerset

I’m going to sell directional EMP devices…


4 posted on 08/16/2023 11:36:45 AM PDT by EEGator

To: powerset

Tech is controlled by libs, so AI will be used to root out the noncompliant thinkers who do not march with the hive mind.
We’ll all end up like the 75-year-old wheelchair-bound Utah man.


5 posted on 08/16/2023 11:40:12 AM PDT by BigFreakinToad (Biden whispered "Don't Jump")

To: powerset

6 posted on 08/16/2023 11:40:53 AM PDT by Yo-Yo (Is the /Sarc tag really necessary? Pray for President Biden: Psalm 109:8)

To: powerset
If my AI gives bad advice and another AI says I broke the law, who goes to jail, me or my AI?

What if rich crooks can get AIs that tell them how best to circumvent the law?

7 posted on 08/16/2023 11:43:37 AM PDT by BitWielder1 (I'd rather have Unequal Wealth than Equal Poverty.)

To: uranium penguin

Life in prison.


8 posted on 08/16/2023 11:43:56 AM PDT by EEGator

To: powerset
This is a great article and a very scary view of the future. I followed the link back to the original article on Slate to read the comments. Almost all of the comments are very ignorant and skeptical. I guess that's what you get from leftists.

FReepers are a lot more cognizant of the threats from AI and largely agree with these apocalyptic prognostications.

"...research—including empirical studies conducted by one of us (Penney)—has shown that personalized legal threats or commands that originate from sources of authority—state or corporate—can have powerful chilling effects on people’s willingness to speak or act freely.
Are the authors unaware that we are already here with and without AI. Just look at J6, Catholic churches, speaking your mind about the Alphabet People, objecting to child mutiliation speaking your mind at school board meetings, or writing satirical memes poking fun at politicians?
9 posted on 08/16/2023 11:54:12 AM PDT by ProtectOurFreedom (We are proles, they are nobility.)

To: powerset

I read this earlier today. Bruce always has interesting stuff in Crypto-Gram.


10 posted on 08/16/2023 11:57:34 AM PDT by zeugma (Stop deluding yourself that America is still a free country.)

To: powerset
My personal response:
11 posted on 08/16/2023 11:59:44 AM PDT by maddog55 (The only thing systemic in America is the left's hatred of it!)

To: powerset

12 posted on 08/16/2023 12:10:02 PM PDT by Boogieman

To: powerset

The Democrats think the novel 1984 is an instruction manual.


13 posted on 08/16/2023 12:52:05 PM PDT by CIB-173RDABN (I am not an expert in anything, and my opinion is just that, an opinion. I may be wrong.)

To: powerset

Is it time for the Butlerian Jihad yet? Don’t wait too long...


14 posted on 08/16/2023 1:16:22 PM PDT by HartleyMBaldwin

To: powerset

“John Spartan, you are fined five credits for repeated violations of the verbal morality statute.”


15 posted on 08/16/2023 1:21:32 PM PDT by NorthMountain (... the right of the peopIe to keep and bear arms shall not be infringed)

To: powerset

Paging Gort.


16 posted on 08/16/2023 5:53:13 PM PDT by dadgum (Enough!)

To: powerset
Bruce is off here. Current large language models have not passed the Turing test. They are simply models that are fed data sets and respond based on their training.

The courts (NY?) have rejected automated AI for legal proceedings. CNET is stopping LLM/AI-generated articles because they are incoherent.

17 posted on 08/16/2023 9:55:32 PM PDT by HonkyTonkMan ( )
