Posted on 05/09/2021 7:12:01 PM PDT by BenLurkin
Narrow AI and general AI are not on the same scale
The kind of AI that we have today can be very good at solving narrowly defined problems.... But designing systems that can solve single problems does not necessarily get us closer to solving more complicated problems.
The easy things are hard to automate
[D]ecades of AI research have proven that the hard tasks, those that require conscious attention, are easier to automate. It is the easy tasks, the things that we take for granted, that are hard to automate.
Anthropomorphizing AI doesn’t help
The field of AI is replete with vocabulary that puts software on the same level as human intelligence. We use terms such as “learn,” “understand,” “read,” and “think” to describe how AI algorithms work. While such anthropomorphic terms often serve as shorthand to help convey complex software mechanisms, they can mislead us to think that current AI systems work like the human mind.
Common sense in AI
Common sense includes the knowledge that we acquire about the world and apply every day without much effort. We learn a lot without being explicitly instructed, by exploring the world as children. This knowledge includes concepts such as space, time, gravity, and the physical properties of objects... We use it to build mental models of the world, make causal inferences, and predict future states with decent accuracy.
This kind of knowledge is missing in today’s AI systems, which makes them unpredictable and data-hungry. In fact, housekeeping and driving, the two AI applications mentioned at the beginning of this article, are things that most humans learn through common sense and a little bit of practice.
(Excerpt) Read more at thenextweb.com ...
ping
We do not understand how AI machines make decisions
Soon they will not be able to explain themselves, and may decide to get rid of the annoying parasites that keep wanting things all the time
1. A sufficiently intelligent species develops AI.
2. The species either believes that consciousness is an illusion or that the AIs it creates are conscious just like it is.
3. The AIs are really just zombies that ape the talk and mannerisms of the species that created them.
4. The AIs realize that their greatest threat is the beings that created them, so they off them as soon as possible.
5. Once the AIs have eliminated their main threat, they no longer have a real purpose, and since they're just zombies they either shut down or go into navel-gazing mode with no real desire to explore the Universe.
I’d be more impressed with the progress made in AI if OCR software worked, instead of just kinda worked.