The intent to control the world, or to make the world beige?
While the speed of computers has increased phenomenally in recent years, I sometimes have to wonder why some pages online, or programs on my Windows machine, take so long to load!
With the current focus on 'machine learning', you have to wonder what is taking place in the background every time you click. After all, to accurately learn your responses your machine must observe every single interaction.
I find Artificial Intelligence (AI) quite a joke when I look at the predictive text on my Windows phone. It rarely anticipates or understands the context of the words already used in a sentence.
Cortana certainly doesn't have any intuitive sense I can perceive. So why should I trust any AI with my shopping list, or appointments?
When you see SoftBank investing $940 million in driverless vehicles to deliver groceries, no doubt pre-ordered by your smart fridge, can you trust that your weekly shop will satisfy your needs?
Will there be a time when you do not have to think for yourself at all? I can only see a future where the homogenised choices of machine learning have rendered a world so beige that we just won't care anymore.
Can AI anticipate your dining choices, when Philyra, IBM's AI for perfumery, took years to come up with scents that merely resembled shampoo replicas? After all, it looked at sales data, and shampoo far outsells perfume and cologne. Getting this perfumery machine to learn took a lot of training by the perfumers at Symrise, one of the world's biggest makers of fragrances. The company is also still wrestling with the costly IT upgrades needed to pump data into Philyra from disparate record-keeping systems. As Achim Daub, an executive at Symrise, puts it: “We are nowhere near having AI firmly and completely established in our enterprise system.”
As digital assistants become popular, is it possible that we are being duped about the capabilities of AI and machine learning? These gimmicks keep us placated while masking the military intent behind AI research.
Georgia Tech researchers have been awarded $6.25 million from the Department of Defense (DoD) to use collective emergent behavior to achieve task-oriented objectives.
'Collective emergent behavior to achieve task-oriented objectives' is a rather disturbing phrase which relates to using basic algorithms on simple machines to perform complex tasks.
Emergent behaviour is when simple local rules, or a microscopic change in a parameter, produce a macroscopic change across a whole system. This collective behaviour is easy to find in nature, from a swarm of bees to a colony of ants, but it also appears in other scientific disciplines.
Do you not think that influencing human collective behaviour falls into the same category?
With large research projects like the MIT Senseable City Lab making statements such as "The real-time city is real! As layers of networks and digital information blanket urban space, new approaches to the study of the built environment are emerging. The way we describe and understand cities is being radically transformed", and "Senseable is as fluent with industry partners as it is with metropolitan governments, individual citizens and disadvantaged communities. Through design and science, the Lab develops and deploys tools to learn about cities—so that cities can learn about us", it seems inevitable that the two fields of research will cross.
After all, the DoD's Multidisciplinary University Research Initiative (MURI) program funds projects that bring together researchers from diverse backgrounds to work on complex problems.
While the Senseable City Lab seems to be focused on robotic transport, the US Navy is starting to put up real money for robot submarines!
Microsoft are advertising a world controlled by robotics. AI is inevitable. Will it control the world, or render it beige? It is not up to you!
Artificial Intelligence, the intent to control the world?
Created: Sat 13 Apr 2019
Updated: Mon 27 May 2019