Posted on 11/03/2020 9:20:09 PM PST by RomanSoldier19
Autonomous machines capable of deadly force are increasingly prevalent in modern warfare, despite numerous ethical concerns. Is there anything we can do to halt the advance of the killer robots?
The video is stark. Two menacing men stand next to a white van in a field, holding remote controls. They open the van's back doors, and the whining sound of quadcopter drones crescendos. They flip a switch, and the drones swarm out like bats from a cave. In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents. The students scream in terror, trapped inside, as the drones attack with deadly force. The lesson that the film, Slaughterbots, is trying to impart is clear: tiny killer robots are either here or a small technological advance away. Terrorists could easily deploy them. And existing defences are weak or nonexistent.
Some military experts argued that Slaughterbots (which was made by the Future of Life Institute, an organisation researching existential threats to humanity) sensationalised a serious problem, stoking fear where calm reflection was required. But when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which Swat teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff. One microsystems collaborative has already released Octoroach, an extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground. It is only one of many biomimetic, or nature-imitating, weapons that are on the horizon.
(Excerpt) Read more at theguardian.com ...
Elon Musk: In a few years, robots will move so fast you'll need a strobe light to see them
AT 2:14 a.m., it becomes self-aware...
Apparently not 1997.
Set them loose in urban areas where the police have become politicized and will not do their job.
It's when, not if, computers will have the ability to "think." When that happens, the question is not whether we will recognize that they have rights.
The question is whether they will recognize that we do.
"In a few seconds, we cut to a college classroom. The killer robots flood in through windows and vents." Sounds like a plan to stop Antifa, almost.
Control the AI, control the world.
China - The First Artificial Intelligence Superpower
Jan 14, 2020: China is on its way to becoming the first global superpower for Artificial Intelligence. The People's Republic of China has the most ambitious AI strategy of all nations and provides the most ...
I forayed into artificial stupidity in the late 1970s. I wrote a backgammon program that wrote its own loss and win data and tried to learn what it did wrong.
I read all the best AI texts of the time and found out I had no idea what the heck they were talking about.
My version of AI was brute logic: look at all the spots on the board: hitting = big positive weight, blots = negative weight, moving forward = small positive weight. Bearing off was 100% brute logic.
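In modern terms, the poster is describing a hand-tuned linear evaluation function over move features. A minimal sketch of that idea in Python (all names and weight values here are hypothetical illustrations, not the poster's actual 1970s program):

```python
# Hypothetical hand-tuned weights, in the spirit of the comment above:
# hits get a large positive weight, blots (exposed lone checkers) a
# negative weight, and forward movement a small positive weight.
HIT_WEIGHT = 10.0      # assumed value: reward hitting an opponent's blot
BLOT_PENALTY = -4.0    # assumed value: penalize leaving a lone checker
FORWARD_WEIGHT = 0.1   # assumed value: small bonus per pip advanced

def score_move(hits, blots_left, pips_advanced):
    """Linear score for a candidate move from three simple features."""
    return (hits * HIT_WEIGHT
            + blots_left * BLOT_PENALTY
            + pips_advanced * FORWARD_WEIGHT)

def best_move(candidates):
    """Pick the highest-scoring (hits, blots_left, pips_advanced) tuple."""
    return max(candidates, key=lambda move: score_move(*move))
```

The "learning" the poster mentions would then amount to nudging these weights after wins and losses, rather than anything resembling modern machine learning.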
Sadly, last I heard my AI and my college GF ran away together and it is now an e-Keno supervisor.