Team-BHP - Google plays God - Develops AI that evolves and improves on its own

Google's tagline has changed from "Don't be evil" to "Do the right thing". Now, some say, Google needs to change it to "Don't fight evil".

Google has come up with an AI that evolves and improves on its own, following nature's own concept of "survival of the fittest".

Quote:

Computer scientists working for a high-tech division of Google are testing how machine learning algorithms can be created from scratch, then evolve naturally, based on simple math.

Experts behind Google's AutoML suite of artificial intelligence tools have now showcased fresh research which suggests the existing software could potentially be updated to "automatically discover" completely unknown algorithms while also reducing human bias during the data input process.

According to ScienceMag, the software, known as AutoML-Zero, resembles the process of evolution, with code improving every generation with little human interaction.
Source

I am scared it might take over mankind someday, just like what happened in Avengers: Age of Ultron.

I hope they don't name it Ultron. We all know how that turned out. :lol:

With so much research going into AI, the future looks good, but it might affect our jobs too. :eek:

Quote:

Originally Posted by AMG Power (Post 4793706)
Google has come up with an AI that evolves and improves on its own, following nature's own concept of "survival of the fittest".

I'm not well versed with AI technology, but every time I read something about self-learning AIs, it chills me. On one hand, it's a positive effort to solve a lot of problems that humans would otherwise have to spend a lot of their valuable time on; but on the other hand, what kind of checks are in place to ensure the AI doesn't have a free run?

Also, the coders who developed the AI must ensure its neutrality by bringing in people from other backgrounds and taking their inputs regarding the AI's behaviour and personality.

Remember the movie Captain America, and why Erskine chose Steve Rogers to be Captain America and not the other guys? The AI is something like that.

Quote:

Originally Posted by Slickshift99 (Post 4793712)
I am scared it might take over mankind someday, just like what happened in Avengers: Age of Ultron.

A talk about fear of AI and the first reference is a Marvel movie? Blasphemy. Evil AI = Skynet. :D

I just hope Schwarzenegger is president before Judgement Day. :stupid:

Well, Google is pretty good at AI, and if one has any doubts, one should check out AlphaGo (both the documentary and the program). To think that AlphaGo is now like a mumbling chimp in front of its latest iteration is almost mind-blowing.
If any company can achieve Skynet/Ultron-level AI, I think it is Google. Too bad they sold off Boston Dynamics. Their AI and Boston Dynamics combo could actually have made some T-1000 type robots :D

Google and an evolving AI is a deadly combination. They've got the data. They can track activities. They can decipher thoughts. They can influence popular opinion. God forbid, they've got all the potential to become the perfect villain.

Laws cannot keep pace with the rate at which technology is evolving. By the time lawmakers understand it, study the repercussions and come up with regulations, the technology will have changed, making the regulations out of date. To add to the worry, if AI tracks the personal activities of world politicians, they would have no choice but to submit to the "Empire of the Cyber World".

The concept of computers/AI exhibiting something like human consciousness has been a subject of hot debate for many decades, and these debates have influenced the plots of several mainstream movies and comics. I remember reading a book by Roger Penrose, "The Emperor's New Mind" (there is also a sequel, "Shadows of the Mind"), where the author argues, based on quantum mechanical principles, that the human brain/consciousness is non-algorithmic and cannot be modelled by a conventional computer. While there have been criticisms of this hypothesis, I hope it is true and AI doesn't outsmart us.

Again, our biggest worry should be the rise of a "Cyber Empire".

Facebook did try something like this, and they shut it down as it started to develop its own language, deviating from standard English. I hope Google doesn't end up there, with the AI system developing its own algorithms that are too tough for the human race to crack.

Technology development up to a point is always welcome, but deviating from the actual course is a big NO.

Source

Looks like "Cyberdyne Systems Inc" from the Terminator series. It is scary. It sure would be one helluva "Peeping Tom" without an accountability hierarchy! Man needs to sleep, but AI doesn't need any of that!

Quote:

Originally Posted by saisree (Post 4794774)
Facebook did try something like this, and they shut it down as it started to develop its own language, deviating from standard English...Source

That was a clickbait title that most publications used to scare the general public and get the much-needed clicks. Nothing of the sort was true.
Read here

I have been trying to resist commenting on the AI discussions on this forum, but now I can't resist anymore. :-)

I work in AI/data science and am the co-founder and CEO of a company that develops advanced AI models for complex industrial problems. In this context, we have in fact been developing our own version of AutoML, and that work is in progress as we speak.

First of all, I think there is nothing to fear. Titles such as "Google is playing God" are just hyperbole. This AutoML is essentially a superset of all possible algorithms to model a given set of input-output data. It will definitely improve the results on some hard problems and even make it possible to solve some complex problems that are not solved today. So this is progress for sure, but it does not mean anybody has created magic. They have essentially only created a new, more powerful algorithm for that problem, compared to the set of algorithms we already knew.

It is important to note that the "problem" is still defined by humans. In other words, we start the process by defining the input data and what we want to achieve. We define what the "desired outcome" is. There are two fundamental types of learning algorithms: supervised learning and unsupervised learning. In supervised learning, the user has to give explicit input-output pairs of data (or labelled data, as we call it in data science). In unsupervised learning, we do not need to know the labels or give labelled data, but we still need to define the "goal". The AI algorithms, whether traditional algorithms or this new AutoML, find the best possible "model" that maps the inputs to the goals with some notion of the least possible overall error (I have oversimplified a lot of things here, but this description gives the gist of the process).
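
To make the supervised case concrete, here is a minimal Python sketch (an illustrative toy I made up, not anybody's product code). The human fixes the labelled data and the error to be minimised; the algorithm only finds the model parameters:

Code:

import numpy as np

# Labelled data defined by the human: inputs x and desired outputs y
x = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 4.2, 5.9, 8.1])            # roughly y = 2x

# The "goal" is fixed upfront: minimise the squared error of y ~ w*x + b.
# The algorithm only searches for w and b; it cannot change the goal.
X = np.hstack([x, np.ones_like(x)])           # add a bias column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # ordinary least squares
w, b = coef
print(f"learned model: y = {w:.2f}*x + {b:.2f}")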

Now, the above process involves a few intricate tasks, including data processing/preparation and tuning many learning parameters, which require a good knowledge of data science. Usually, data scientists are involved in tuning the AI model continuously as it gets trained. Sometimes the data scientists decide that the algorithm they are trying is not ideal and switch to a different algorithm; that is one decision that requires great insight into data science and algorithms. All that AutoML is doing is automating most of these tasks to a level where intervention by data scientists is minimal. This is actually great news, since it further democratizes AI: even data scientists with lesser skills may be able to use something like this to develop better models than before. More power to everyone!
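
If it helps, the spirit of that automation can be shown with a toy model-selection loop. This is a deliberately simplified sketch assuming scikit-learn; real AutoML (and especially AutoML-Zero) searches a vastly larger space of algorithms:

Code:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Some toy data: the "problem" is still defined by the human
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

# The candidate algorithms a data scientist would otherwise
# switch between by hand
candidates = {
    "linear": LinearRegression(),
    "tree": DecisionTreeRegressor(max_depth=4),
    "knn": KNeighborsRegressor(n_neighbors=5),
}

# Score every candidate and keep the best: the automated search
# replaces the human decision of which algorithm to try next
scores = {name: cross_val_score(model, X, y, cv=5,
                                scoring="neg_mean_squared_error").mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (CV error = {-scores[best]:.4f})")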

The reason I am bringing this up is to emphasize this: the AI has no intent and no mind. It can't, on its own, define what it wants to do. Humans define the goal of the AI, even in AutoML, and the algorithms (including AutoML) merely find the best model that achieves that goal.

So, however powerful the AI algorithm is and however it is hyped (rightly or wrongly), it still can't take decisions it is not asked to take, let alone take any "actions" on its own. The "goal" of the algorithm is defined upfront by a human, and the algorithms have no capability to change these goals.


In summary, there is no need to worry. There are certainly some social and economic implications of AI becoming very powerful (such as whether it will replace human jobs, etc.), and we discuss those all the time in multiple AI forums. Those are valid concerns to some extent. But the concern that AI will play God, or conversely that AI will turn into some kind of robotic villain like in the movies, has no basis. That is totally unfounded.

Thanks, AD, for the valuable insights. So I understand that a self-evolving AI isn't a bad thing but actually a good thing that improves the efficiency of processes and minimizes human intervention. I value the importance of well-defined AI, such as autonomous emergency braking.

I also understand that, as of today, AI is deterministic: data is processed in bits and objectives are predefined. My question is, do you foresee AI getting into the quantum space, processing qubits? Because then an objective is not completely well defined; a right can be a wrong at the same time.

Quote:

Originally Posted by Dr.AD (Post 4794926)
In summary, there is no need to worry. There are certainly some social and economic implications of AI becoming very powerful (such as whether it will replace human jobs, etc.), and we discuss those all the time in multiple AI forums. Those are valid concerns to some extent. But the concern that AI will play God, or conversely that AI will turn into some kind of robotic villain like in the movies, has no basis. That is totally unfounded.

+1. I run a company that uses AI and ML (the most abused term in BLR, lol), and coincidentally we have our own AutoML product. This is hyperbole. It's like wondering if your self-parking car will try to kill you tomorrow morning on the way to work. The press loves clickbaity articles about impending doom. :uncontrol:

OT: Dr.AD, what do you build? We should talk.

Quote:

Originally Posted by Thermodynamics (Post 4794949)
My question is, do you foresee AI getting into the quantum space, processing qubits? Because then an objective is not completely well defined; a right can be a wrong at the same time.

This is still not a cause for worry. Even in quantum computing, the qubit-level model could be probabilistic, but that does not mean the whole behaviour of the program is probabilistic and it can do whatever it wants; that would make quantum computing totally useless. In reality, the qubit-level probabilistic nature helps improve the speed of an iterative optimization algorithm, which is the core algorithm behind any AI training module. Thus, you can train larger AI models much faster with quantum computing. But it does not mean that you don't know the goal of the model, or that the quantum model is free to do anything. It is not like that.

In other words, a good algorithmic researcher can actually exploit the probabilistic nature of the qubits to get good speed-ups in his/her algorithms, while still controlling what the algorithm as a whole does. So there is absolutely no need to worry that a quantum AI will do something crazy that the programmer never imagined it would do.
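
For what it is worth, here is the kind of loop being referred to, in a toy Python form (my own illustration, not any actual quantum code). A quantum speed-up would make such iterations faster; it would not change what is being minimised:

Code:

# Gradient descent on a loss chosen by the human: the simplest form
# of the iterative optimization at the core of AI training
def loss(w):
    return (w - 3.0) ** 2          # goal fixed by us: minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # derivative of the loss

w, lr = 0.0, 0.1                   # starting point and learning rate
for _ in range(100):
    w -= lr * grad(w)              # the only action the loop can take

print(f"w = {w:.4f}, loss = {loss(w):.6f}")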

Quote:

Originally Posted by v1p3r (Post 4795010)
OT: Dr.AD, what do you build? We should talk.

We develop AI models for manufacturing companies and related businesses (industrial AI in general), and we also deploy them in factories in an "Edge Computing" manner, compressing them onto small hardware devices and making them efficient enough to run on those devices in real time. I am also associated with another AI company that develops AI models and provides AI services for a variety of sectors such as retail, healthcare, banking and finance, corporate governance, etc. That company has a large clientele all over the world, and thus we get to work on a vast variety of AI models of varying complexity and application.
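
Since "Edge Computing" came up: the compression part often boils down to ideas like weight quantization. Here is a toy numpy sketch of the principle (purely illustrative, not our actual pipeline), trading a little precision for a 4x smaller model:

Code:

import numpy as np

w = np.random.randn(256, 256).astype(np.float32)  # float32 weights

scale = np.abs(w).max() / 127.0                   # map the range to int8
w_int8 = np.round(w / scale).astype(np.int8)      # 4x less memory
w_restored = w_int8.astype(np.float32) * scale    # dequantize at runtime

print(f"size: {w.nbytes} -> {w_int8.nbytes} bytes, "
      f"max error: {np.abs(w - w_restored).max():.4f}")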

Sure, I will be happy to talk to you offline.

