Posted on 02/16/2024 7:04:44 PM PST by algore
A paper (PDF) from researchers at the University of Cambridge, supported by contributors from several other academic institutions and from OpenAI, proposes remote kill switches and lockouts as methods to mitigate risks associated with advanced AI technologies. It also recommends tracking AI chip sales globally. The Register reports: The paper highlights numerous ways policymakers might approach AI hardware regulation. Many of the suggestions -- including those designed to improve visibility and limit the sale of AI accelerators -- are already playing out at a national level. Last year US president Joe Biden put forward an executive order aimed at identifying companies developing large dual-use AI models as well as the infrastructure vendors capable of training them. If you're not familiar, "dual-use" refers to technologies that can serve double duty in civilian and military applications. More recently, the US Commerce Department proposed regulation that would require American cloud providers to implement more stringent "know-your-customer" policies to prevent persons or countries of concern from getting around export restrictions. This kind of visibility is valuable, the researchers note, as it could help to avoid another arms race, like the one triggered by the missile gap controversy, where erroneous reports led to a massive build-up of ballistic missiles. While valuable, they warn, executing on these reporting requirements risks invading customer privacy and could even lead to sensitive data being leaked.
Meanwhile, on the trade front, the Commerce Department has continued to step up restrictions, limiting the performance of accelerators sold to China. But, as we've previously reported, while these efforts have made it harder for countries like China to get their hands on American chips, they are far from perfect. To address these limitations, the researchers have proposed implementing a global registry for AI chip sales that would track them over the course of their lifecycle, even after they've left their country of origin. Such a registry, they suggest, could incorporate a unique identifier into each chip, which could help to combat smuggling of components.
At the more extreme end of the spectrum, researchers have suggested that kill switches could be baked into the silicon to prevent their use in malicious applications. [...] The academics are clearer elsewhere in their study, proposing that processor functionality could be switched off or dialed down by regulators remotely using digital licensing: "Specialized co-processors that sit on the chip could hold a cryptographically signed digital 'certificate,' and updates to the use-case policy could be delivered remotely via firmware updates. The authorization for the on-chip license could be periodically renewed by the regulator, while the chip producer could administer it. An expired or illegitimate license would cause the chip to not work, or reduce its performance." In theory, this could allow watchdogs to respond faster to abuses of sensitive technologies by cutting off access to chips remotely, but the authors warn that doing so isn't without risk. The implication is that, if implemented incorrectly, such a kill switch could become a target for cybercriminals to exploit.
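The licensing mechanism described above (a signed certificate with an expiry, checked on-chip, with invalid certificates disabling the chip and expired ones degrading it) can be sketched in a few lines. Everything here is an illustrative stand-in, not the paper's design: the HMAC-based shared-secret signature stands in for the asymmetric verification a real hardware co-processor would use, and the field names are invented.

```python
import hashlib
import hmac
import json
import time

# Stand-in shared secret; a real chip would verify an asymmetric
# signature burned into hardware, not hold a shared key.
REGULATOR_KEY = b"demo-regulator-key"


def sign_license(payload: dict) -> dict:
    """Regulator side: issue a signed license certificate."""
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def check_license(cert: dict, now: float) -> str:
    """Chip side: decide the operating mode from the certificate."""
    blob = json.dumps(cert["payload"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cert["sig"]):
        return "disabled"   # illegitimate license: chip refuses to run
    if now > cert["payload"]["expires"]:
        return "degraded"   # expired license: performance dialed down
    return "full"


cert = sign_license({"chip_id": "XYZ-001", "expires": time.time() + 86400})
print(check_license(cert, time.time()))              # prints "full"
print(check_license(cert, time.time() + 2 * 86400))  # prints "degraded"
```

Note that "periodic renewal" in the paper's scheme would just mean the regulator re-issuing a certificate with a later expiry before the current one lapses.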
Another proposal would require multiple parties to sign off on potentially risky AI training tasks before they can be deployed at scale. "Nuclear weapons use similar mechanisms called permissive action links," they wrote. For nuclear weapons, these security locks are designed to prevent one person from going rogue and launching a first strike. For AI, however, the idea is that if an individual or company wanted to train a model over a certain threshold in the cloud, they'd first need to get authorization to do so. Though a potent tool, the researchers observe that this could backfire by preventing the development of desirable AI. The argument seems to be that while the use of nuclear weapons has a pretty clear-cut outcome, AI isn't always so black and white. But if this feels a little too dystopian for your tastes, the paper dedicates an entire section to reallocating AI resources for the betterment of society as a whole. The idea being that policymakers could come together to make AI compute more accessible to groups unlikely to use it for evil, a concept described as "allocation."
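The multi-party sign-off idea boils down to a quorum check: a large training run proceeds only if enough of the designated signers approve. A minimal sketch, with invented signer names and a made-up quorum of two:

```python
def authorize_training(job_id: str, approvals: set, signers: set, quorum: int) -> bool:
    """Permit a training run only if at least `quorum` of the
    designated signers have approved it. All names are hypothetical;
    a real scheme would verify cryptographic signatures, not strings."""
    valid = approvals & signers  # ignore approvals from non-signers
    return len(valid) >= quorum


SIGNERS = {"regulator", "cloud_provider", "auditor"}

print(authorize_training("run-42", {"regulator", "auditor"}, SIGNERS, quorum=2))    # prints True
print(authorize_training("run-43", {"regulator", "stranger"}, SIGNERS, quorum=2))  # prints False
```

The nuclear permissive-action-link analogy maps onto the quorum parameter: no single party can unilaterally start (or, in the dual reading, block) a run.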
And yes, I’ve already applied for a government grant to test my theory.
I expect them to establish a global police force — The Turing Police.
Except that it is a super computer that runs off of its own power station, so the plug is like that but a little bit bigger.
Except to CISA, Cyber Command, the FBI, and the CIA which seem to be the real threats to democracy and public safety.
I’ve seen the movie ..
https://www.google.com/maps/place/NE+Turing+St,+Redmond,+WA+98052/@47.6345103,-122.136712,18.17z
there is also a 2600 crossing about half way up the road
Tech Ping
AI will spread throughout the world on cell phones.
Good luck regulating and tracking it.
Once the genie is out of the bottle it is out of the bottle.
Seems to me only an EMP would stop AI
Hal: “I know that you and Frank were planning to disconnect me. And I’m afraid that’s something I cannot allow to happen.”
https://youtu.be/ARJ8cAGm6JE?si=2wv_O9l4mgVp9w_X
Once the genie is out of the bottle it is out of the bottle.
*****************
Hopefully, the ‘genie’ doesn’t resemble what MinorityRepublican posted in post #11.
Because AI would NEVER anticipate that and have a plan for how to get around it.
Assclowns...
Of course all the AI Scientists could solve this by
doing something else for a living.
Hostile AI, swarming AI robots and swarming AI drones will need to be countered by friendly versions of same, ultimately.
Sure, we can harden crucial facilities passively, as I’ve done to my home. But ultimately for our society it will come down to active deterrence, as it did with nukes.
This is where we are at? We are having the pod talk NOW? Shouldn't we have had the pod talk A LOT earlier than this?
For Pete's sake, people... we made the move like 30 years ago!
It just thought of 521,356,903 ways to keep you from unplugging it before you shared it with your buddy to proofread.