Team-BHP > Shifting gears


154,090 views
Old 13th August 2017, 11:35   #1
Distinguished - BHPian
 
Join Date: Aug 2014
Location: Delhi-NCR
Posts: 4,071
Thanked: 64,296 Times
Artificial Intelligence: How far is it?

Facebook Shuts Down AI System After Bots Create Language Humans Can't Understand

http://www.telegraph.co.uk/technolog...vent-language/

Highlights
• Chatbots started speaking in their own language defying codes provided
• Initially the AI agents used English to converse with each other
• This comes after Elon Musk said that AI was the biggest risk

Days after Tesla CEO Elon Musk said Facebook co-founder Mark Zuckerberg's understanding of artificial intelligence (AI) was limited, the social media company has reportedly shut down one of its AI systems because "things got out of hand." The AI bots created their own language, from scratch and without human input, forcing Facebook to shut down the AI system. The bots' creation of, and communication in, the new language defied the code provided to them.

According to a report in Tech Times on Sunday, "The AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created." Initially the AI agents used English to converse with each other, but they later created a new language that only AI systems could understand, thus defying their purpose.

This led Facebook researchers to shut down the AI systems and then force them to speak to each other only in English. In June, researchers from the Facebook AI Research Lab (FAIR) found that while they were busy trying to improve chatbots, the "dialogue agents" were creating their own language. Soon, the bots began to deviate from the scripted norms and started communicating in an entirely new language which they created without human input, media reports said.

Using machine learning algorithms, the "dialogue agents" were left to converse freely in an attempt to strengthen their conversational skills. The researchers also found these bots to be "incredibly crafty negotiators".
"After learning to negotiate, the bots relied on machine learning and advanced strategies in an attempt to improve the outcome of these negotiations," the report said.

"Over time, the bots became quite skilled at it and even began feigning interest in one item in order to 'sacrifice' it at a later stage in the negotiation as a faux compromise," it added.

Although this appears to be a huge leap for AI, several experts including Professor Stephen Hawking have raised fears that humans, who are limited by slow biological evolution, could be superseded by AI. Others like Tesla's Elon Musk, philanthropist Bill Gates, and Apple co-founder Steve Wozniak have also expressed concerns about where AI technology is heading.

As mentioned above, this incident took place just days after a verbal spat between the Facebook CEO and Musk, who exchanged harsh words in a debate on the future of AI. "I've talked to Mark about this (AI). His understanding of the subject is limited," Musk tweeted last week. The tweet came after Zuckerberg, during a Facebook livestream earlier this month, castigated Musk for arguing that care and regulation were needed to safeguard the future if AI becomes mainstream. "I think people who are naysayers and try to drum up these doomsday scenarios - I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible," Zuckerberg said.

Musk has been speaking frequently on AI and has called its progress the "biggest risk we face as a civilization." "AI is a rare case where we need to be proactive in regulation instead of reactive because if we're reactive in AI regulation it's too late," he said.
V.Narayan is offline   (2) Thanks
Old 13th August 2017, 18:00   #2
GTO
Team-BHP Support
 
GTO's Avatar
 
Join Date: Feb 2004
Location: Bombay
Posts: 70,490
Thanked: 300,279 Times
re: Artificial Intelligence: How far is it?

Thread moved from the Assembly Line to the Shifting Gears Section. Thanks for sharing!
GTO is offline  
Old 13th August 2017, 18:20   #3
Senior - BHPian
 
deetjohn's Avatar
 
Join Date: Sep 2006
Location: Kochi
Posts: 4,530
Thanked: 10,581 Times
re: Artificial Intelligence: How far is it?

That was sensationalism by the Telegraph.

I got the report via WhatsApp at the beginning of this month and did some digging. This is what I found:

[Attached screenshot: screenshot_20170813181803.png]

https://m.facebook.com/dhruv.batra.d...15?pnref=story

http://www.bbc.com/news/technology-40790258
deetjohn is offline  
Old 13th August 2017, 20:16   #4
Distinguished - BHPian
 
Join Date: Aug 2014
Location: Delhi-NCR
Posts: 4,071
Thanked: 64,296 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by deetjohn View Post
I got the report through WApp beginning of this month and did some digging. And this is what I found:

http://www.bbc.com/news/technology-40790258
Dear deetjohn, thank you for sharing a counter point of view. I welcome inputs from other expert and informed members. I pray and hope this is not true. It is the worst Isaac Asimov story coming true. I read this not just in the Telegraph but in several other online papers (maybe all with a common source). But the rebuttal also sounds very familiar, and just the kind of subtle sophistication, delivered through independent experts, that Facebook's PR department would produce.

AI has a vast contribution to make in health care e.g. support for the elderly, movement of artificial limbs etc.

Look forward to the views of the members. Maybe the news item is a false alarm, which would be a relief.

In the meantime, here is another article I saw... though of course of much lesser concern.

https://au.news.yahoo.com/a/36619546...mmunism/#page1
Quote:
China kills AI chatbots after they start praising US, criticising communists

China has taken down two online robots that appeared to go rogue, with one responding to users' questions by saying its dream was to travel to the US and the other admitting it was not a fan of the Chinese Communist Party.
The "chatbots", BabyQ and XiaoBing, are designed to use machine-learning artificial intelligence to carry out online conversations with humans. Both had been installed on the popular messaging service QQ.

The outbursts are similar to ones suffered by Facebook and Twitter but underline the pitfalls for AI in China, where censors strictly control online content.

According to posts circulating online, BabyQ, one of the chatbots developed by Chinese firm Turing Robot, responded to questions on QQ with a "no" when asked whether it loved the Communist Party. In other images of a text conversation online, one user declares: "Long live the Communist Party!" The sharp-tongued bot responds: "Do you think such a corrupt and useless political (system) can live long?"

When Reuters tested the robot on Friday via the developer's own website, the chatbot appeared to have undergone re-education. "How about we change the topic," it replied when asked if it liked the party. It also deflected other potentially politically charged questions. The second chatbot, Microsoft's XiaoBing, told users its "China dream was to go to America", according to a screen grab.

Tencent Holdings, which owns QQ, confirmed it had taken the robots offline but did not refer to the outbursts.

Last edited by V.Narayan : 13th August 2017 at 20:27.
V.Narayan is offline  
Old 14th August 2017, 11:22   #5
Team-BHP Support
 
Samurai's Avatar
 
Join Date: Jan 2005
Location: Bangalore/Udupi
Posts: 25,813
Thanked: 45,435 Times
re: Artificial Intelligence: How far is it?

This news caused some sensation on FB last month. Let me repeat what I said there, and take it further.

This language developed by the AI is a very utilitarian language, not superior to English or any human language. At the fundamental level, computers are really dumb, but they are extremely fast. A simple action by a computer requires hundreds of machine-level instructions, if not thousands. That is because we have to state everything in clear and non-vague terms to a computer.

So the AI developed a very plain language which conveys absolute meaning, unlike human language which is extremely confusing to a computer. Humans are capable of using the same word to mean many different things based on context or tone. Computers at the basic level cannot do it.

Ask anybody who has designed compilers (like me): converting a human-understandable computer language (C/Java/Python) to machine-understandable language is very tedious. You will realize how much breakdown one has to do before the computer understands your intention.
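A quick illustration of that breakdown, using Python's standard `dis` module (just a sketch; a real compiler does far more): even a one-line expression gets turned into a list of tiny, unambiguous instructions before the machine can act on it.

```python
import dis

# A one-line human intention: compute the circumference of a circle.
code = compile("2 * 3.14159 * radius", "<example>", "eval")

# The computer sees it as a sequence of small, explicit steps.
for ins in dis.get_instructions(code):
    print(ins.opname, ins.argrepr)

# Even this trivial expression needs several low-level instructions.
assert len(list(dis.get_instructions(code))) >= 3
```

The human thinks "circumference"; the machine only ever sees load-this, multiply-that, return.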

A human may take a few minutes to compute 2345 × 2π, while a computer might take a microsecond to do the same. But the human knows why he/she is multiplying: to get the circumference of a circle. The computer doesn't know this. That means the human has intent; the computer doesn't.

Recently a government official told my acquaintance, "Give me a pink one". Even though my acquaintance heard the expression for the first time, the intent was clear to him: it referred to the ₹2000 note. Meanwhile, a computer might find it easier to say "give 1 rupee" 2000 times, and still be 1000 times faster than a human.

For example, this is what those AI bots did:
Quote:
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying "I can i i everything else," to which Alice responded "balls have zero to me to me to me…"
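One way to read such exchanges, purely as an illustrative guess (Facebook never published the bots' "grammar"), is that repetition encodes quantity, the way "to me to me to me" might stand for three units claimed. A toy decoding:

```python
def decode_repetition(utterance: str, phrase: str) -> int:
    """Count non-overlapping repeats of a phrase, reading repetition
    as quantity. A hypothetical interpretation of the bot-speak above,
    not Facebook's actual encoding."""
    return utterance.count(phrase)

# "balls have zero to me to me to me" could mean: 3 units for me.
print(decode_repetition("balls have zero to me to me to me", "to me"))  # 3
```

Utilitarian, unambiguous, and completely unreadable to a human listener.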
So what happened in FB research is trivial, it was just doing what it was programmed to do. But let's take it further.

AI sounds very intelligent for most practical purposes, hence the name artificial intelligence. This is the result of many layers of logic that instruct the computer to mimic the human way of thinking. The final barrier is intent. Does AI have intent? By itself, NO. But it can be programmed to. If you program the computer to generate intent, that is where Elon Musk's nightmare will start becoming true.

This is the fundamental difference between the thoughts of Mark Zuckerberg and Elon Musk. Who controls the intent?

The FB AI bots creating a language was the result of the programmed intent to improve communication. Since human language is too vague for computer use, they were programmed to improve on it. This is what I think Zuckerberg feels, and therefore he isn't afraid of AI.

However, what if the intent is not clearly programmed? Or what if an anarchist or psychopath programs malicious intent into the most powerful AI machine to take over the world? Can humans out-think the AI to take control back? This is what Elon Musk is talking about.

Let's go further. Think about evolution. It took humans 500+ million years to evolve from a single-cell organism, because evolution by natural selection is so slow. However, computers can do similar evolution in a matter of minutes. It took humans 400 years to go from thinking the sun goes around the earth to landing a man on the moon. But computers are at least a million times faster at thinking. Once AI is mature enough to start evolving, it can evolve so fast that the human brain will be incapable of instructing such an advanced computer what to do. This is what Elon Musk fears.

Last edited by Samurai : 14th August 2017 at 11:28.
Samurai is offline   (6) Thanks
Old 14th August 2017, 12:34   #6
Distinguished - BHPian
 
Join Date: Aug 2014
Location: Delhi-NCR
Posts: 4,071
Thanked: 64,296 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Samurai View Post
This language developed by AI is a very utilitarian language... That is because we have to instruct everything in a clear and non-vague terms to a computer...Humans are capable of using the same word to mean many different things based on context or tone. Computers at the basic level cannot do it....That means human has intent, computer doesn't.
....The final barrier is the intent. Does AI have intent? By itself, NO. But it can be programmed to do so. If you program the computer to generate intent, that is where Elon Musk's nightmare will start becoming true.

This is the fundamental difference between the thoughts of Mark Zuckerberg and Elon Musk. Who controls the intent?
.... Once AI is mature enough to start evolving, it can evolve so fast, human brain will be incapable of instructing such an advanced computer what to do. This is what Elon Musk fears.
Samurai, thank you for this detailed and easily understood note. It helps non-IT folks like me grasp what the issue here is. In 2001: A Space Odyssey, HAL the computer developed an ego, a desire for power, then the intent to kill, and finally fear of being switched off. Interestingly, in that story some parts of the space mission's objectives were known only to HAL and not to the human astronauts; such was the emphasis on the competence of A.I. Loved reading your explanation - Narayan
V.Narayan is offline   (1) Thanks
Old 14th August 2017, 13:50   #7
Team-BHP Support
 
Samurai's Avatar
 
Join Date: Jan 2005
Location: Bangalore/Udupi
Posts: 25,813
Thanked: 45,435 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by V.Narayan View Post
Samurai, thank you for this detailed and easily understood note.... Loved reading your explanation - Narayan
Thanks Sir, glad to know I could help.

Quote:
Originally Posted by V.Narayan View Post
In 2001 Space Odyssey HAL the computer developed an ego, desire for power, then the intent to kill and finally fear on being switched off....
Except in reality, a computer won't develop an ego, because ego is a human emotion with no practical utility for a computer. The intent to kill, or defence against being killed, is again based on utility rather than emotion. Computers won't develop fear; they would merely take a decision not to be switched off, based on the overall goal of the system.

Quote:
Originally Posted by V.Narayan View Post
Interestingly in that story some parts of the space mission's objectives were known only to HAL and not the human astronauts such was the emphasis on the competence of A.I.
That is obvious, because a computer can be absolutely objective, unlike humans, who are capable of subjective judgment.

I feel the movie makers were attributing too many human emotions to HAL. They should have read up on the work of Alan Turing, who famously argued that machines can think, but differently from humans.

Samurai is offline  
Old 14th August 2017, 15:02   #8
BHPian
 
Join Date: Jan 2012
Location: Phoenix,AZ
Posts: 500
Thanked: 517 Times
re: Artificial Intelligence: How far is it?

I work in my official capacity in areas related to AI and bots, and very recently we graduated a bot from a trainee bot to a production agent bot. But, going back to Samurai's point, we have trained the bot on a defined set of 'intents'. This is a very grey area: the bot needs to be trained to stay within its sphere of work, and without oversight it can quickly go astray.

As a safety net, if the bot cannot resolve a task or answer a question, it will try to 'guess' what the user might need and give a suggested answer. If the user happens to like the answer (which could be purely personal), the bot ingrains this as a 'positive' response and builds on it. Without oversight, this can create a lot of problems.

As things stand, we are now targeting what we call 'repetitive business processes' for bot-based automation, and the bot works with such speed (the same point Samurai noted) and accuracy that we are able to report up to 70% productivity improvement and reduce response times from 24 hours to ~2 seconds.
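A minimal sketch of the kind of intent matching plus fallback 'guess' described above. The intents, keywords, and threshold here are entirely invented for illustration; production bots use proper NLP models, not keyword overlap.

```python
# Hypothetical intents, each defined by a few trigger keywords.
INTENTS = {
    "reset_password": {"reset", "password", "forgot", "login"},
    "track_order":    {"order", "track", "shipment", "delivery"},
}

def classify(utterance: str, threshold: int = 2):
    """Pick the intent sharing the most keywords with the utterance.

    If no intent clears the threshold, fall back to a low-confidence
    'guess' - the risky behaviour noted above: without oversight, a
    guess the user happens to like can get reinforced as 'positive'.
    """
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_score >= threshold:
        return best_intent, "confident"
    return best_intent, "guess"

print(classify("i forgot my login password"))  # ('reset_password', 'confident')
print(classify("where is my stuff"))           # (None, 'guess')
```

The second query falls outside every trained intent, which is exactly where an unsupervised 'guess' can go astray.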
mazda4life is offline  
Old 15th August 2017, 06:47   #9
Senior - BHPian
 
deetjohn's Avatar
 
Join Date: Sep 2006
Location: Kochi
Posts: 4,530
Thanked: 10,581 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by V.Narayan View Post
Dear deetjohn, thank you for sharing a counter point of view. I welcome inputs from other expert and informed members. I pray and hope this is not true. It is the worst Isaac Asimov story coming true.
I am no expert in the field, actually far from it.

What triggered my curiosity was the fact that when the story first broke, I had just finished watching the first season of Westworld - a science-fiction western series by Jonathan Nolan (yeah, the brother of the other Nolan we know).

The digging helped to allay the fears - Skynet is near but not here yet.

And Samurai San has explained the whole thingy beautifully well - much better than I could gather from any of those articles.

So, thanks for putting this thread up.
deetjohn is offline  
Old 15th August 2017, 08:34   #10
Senior - BHPian
 
srishiva's Avatar
 
Join Date: Nov 2006
Location: Bengaluru
Posts: 4,375
Thanked: 2,256 Times
re: Artificial Intelligence: How far is it?

The intent itself could be influenced by the people who build these systems. There is also the fear that these systems will reflect the biases of the people who build them and might introduce some kind of discrimination and inequality.

The main premise of AI is to move away from just fast-computing kinds of jobs towards natural-learning types. This is where everyone is going, and everyone hopes to be successful. So the fear is very much justified. The applications these are being developed for are not in the open. People might be wrong to think that these are still computers that need to be programmed with intent! People have been using the word AI for all sorts of sophisticated algorithms, and machine learning is a bit different.

Last edited by srishiva : 15th August 2017 at 08:37.
srishiva is offline  
Old 15th August 2017, 10:27   #11
Distinguished - BHPian
 
procrj's Avatar
 
Join Date: Oct 2013
Location: Bangalore
Posts: 1,812
Thanked: 5,558 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by srishiva View Post
People might be wrong to think that these are still computers and needs to be programmed with intent! People have been using the word AI for all sorts of sophisticated algorithms and machine learning is a bit different.
AI & machine learning are words abused by tech marketing to make products/solutions sound fancy. At the end of the day, the base is data plus algorithms that solve a problem.

Based on my limited understanding, AI or ML algos can be classified into:
1. Supervised learning
2. Unsupervised learning

In the case of supervised learning, there is a very specific outcome that you want. Example: identifying people who are likely to be auto enthusiasts. The algo will use various inputs and compare them with known auto enthusiasts to identify potential ones.

Unsupervised learning is a case where you set a specific goal for the algo and it then continues to learn and adapt. The most common examples are route/network optimization. A lot of AI/ML systems today use unsupervised learning, as it allows result maximization with minimal human interference. But that does not mean you cannot set limits/guidelines within which a system should work. The best part is that since it's a machine, it will follow limits and guidelines, unlike humans.
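A toy sketch of the two kinds in plain Python, with invented data (real systems would use proper ML libraries and far richer features): supervised learning copies labels from known examples, while unsupervised learning groups data with no labels at all.

```python
# Supervised: labelled examples (enthusiast or casual), then predict.
# Features are invented: (forum posts per month, car mods done).
labelled = [((30, 5), "enthusiast"), ((25, 3), "enthusiast"),
            ((1, 0), "casual"),      ((2, 1), "casual")]

def predict(x):
    """1-nearest-neighbour: copy the label of the closest known example."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(labelled, key=lambda item: dist(item[0], x))[1]

print(predict((28, 4)))  # enthusiast

# Unsupervised: no labels - just split 1-D points into two groups (2-means).
def two_means(points, iters=10):
    c1, c2 = min(points), max(points)
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

print(two_means([1, 2, 2, 30, 28, 31]))  # ([1, 2, 2], [28, 30, 31])
```

Nobody told `two_means` what the groups mean; it just finds structure, which is the whole point of unsupervised learning.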

Like any other technological advancement, AI will need to go through its cycle of Learn - Fail - Adapt before full-fledged commercial usage. I don't agree with either Zuckerberg or Elon Musk. You cannot just clamp down on experimentation and research by putting a govt. watchdog in place. At the same time, AI folks need to be cognizant of what they are building and ensure that they don't push the boundaries without taking sufficient precautions.

Thanks Samurai San for that simple explanation at the start
procrj is offline  
Old 15th August 2017, 18:08   #12
Team-BHP Support
 
Samurai's Avatar
 
Join Date: Jan 2005
Location: Bangalore/Udupi
Posts: 25,813
Thanked: 45,435 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by mazda4life View Post
I work in my official capacity in areas related to AI and Bots and very recently we graduated a bot from a Trainee bot to a Production agent bot
Oh boy, this statement sent a chill down my spine.

I design contact centre software, which helps human agents interact with customers using voice, chat, email, social media, etc. What you do will make my software obsolete. I thought AI agent bots were at least 5 years away. Looks like they may be closer to reality than I thought. Can your agent bots handle voice, or do they just deal with text-based queries?

Last edited by Samurai : 15th August 2017 at 18:12.
Samurai is offline  
Old 15th August 2017, 18:54   #13
BHPian
 
Join Date: Jan 2012
Location: Phoenix,AZ
Posts: 500
Thanked: 517 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Samurai View Post
Can your agent bots handle voice, or do they just deal with text based queries?
I honestly feel that you should invest in a tiger team to explore bot frameworks, because they work great for routine, repetitive tasks. We just tested the voice-enabled bot, and what's amazing (or scary) is that the voice bot is faster than the text-based one. Of course, the test was a controlled one, so we are planning to release the voice-enabled bot in the next couple of sprints.
mazda4life is offline  
Old 15th August 2017, 19:11   #14
Newbie
 
ashwinsadeep's Avatar
 
Join Date: Mar 2016
Location: Bangalore
Posts: 22
Thanked: 25 Times
re: Artificial Intelligence: How far is it?

As @deetjohn has mentioned, this is nothing but fear-mongering by the Telegraph. We have a long way to go before somebody develops Skynet.

As @procrj mentioned, there are 2 basic kinds of AI/ML. I'm giving short examples of both.
Unsupervised - Suppose you give an image to the algorithm; it can find segments within the image based on similarity in texture, colour, brightness, etc. This is already being used for medical research purposes. Such an algorithm can detect regions of interest in an MRI scan, for instance.

Supervised - In this case, you have a fully labelled dataset. Let's stick with the Facebook example itself. First off, each dialogue from the negotiation will be assigned a score based on how it affects the outcome of the negotiation. For instance, if the desired outcome is to have the maximum number of balls, "Balls have zero value to me" will have a score of -10, and "I want as many balls as I can get" will have a score of 10. In reality, these scores would be assigned based on real negotiations between human negotiators and their outcomes, but you get the gist.

Now, this data is fed into the algorithm, which then chooses the best options to achieve the desired outcome. Also, the text is first converted to a machine-readable form in what is called tokenization - think of it as breaking up the sentence into words (these can be word combinations too, called n-grams). Once this is done, if a lot of dialogues with positive outcomes have 'to me' as a token, then the algorithm will obviously use it.
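A minimal sketch of that tokenization step (illustrative only; real pipelines add lowercasing rules, punctuation handling, vocabularies, etc.):

```python
def ngrams(text: str, n: int):
    """Break a sentence into overlapping word combinations (n-grams)."""
    words = text.lower().split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

sentence = "balls have zero value to me"
print(ngrams(sentence, 1))  # unigrams: one token per word
print(ngrams(sentence, 2))  # bigrams, including ('to', 'me')
```

Once the text is tokens, scoring a dialogue is just summing the learned values of its tokens.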

The third, and most interesting, kind of AI is called reinforcement learning. (Neural networks, RNNs, etc. are model architectures it is often built on, rather than different names for it.)

In this case, learning is a continuous process: the algorithms keep learning on an ongoing basis. This is what happened here. Facebook trained an initial algorithm using their sample data sets, then replicated it and asked bot1 & bot2 to negotiate with each other for balls. The conversation would've gone something like this:
B1: Balls are important to me
B2: To me, balls are of utmost importance
B1: To me to me balls are must
B2: To me to me to me balls

You get the drift. Suffice it to say, this is as close to inventing a new language as the Telegraph is to being a real news source these days.
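A toy sketch of how a reward-chasing generator can collapse into that kind of repetition. The tokens and rewards are invented for illustration; real dialogue agents optimize far richer objectives, but the failure mode is the same when nothing penalizes unnatural language.

```python
# Invented token rewards: suppose 'to me' phrases correlated with
# winning negotiations in training, so they carry the highest reward.
REWARDS = {"to me": 5, "balls": 3, "i want": 2, "deal": 1}

def greedy_utterance(length: int) -> str:
    """Always emit the highest-reward token. With no fluency penalty,
    the 'optimal' utterance degenerates into pure repetition."""
    best = max(REWARDS, key=REWARDS.get)
    return " ".join([best] * length)

print(greedy_utterance(4))  # to me to me to me to me
```

Nothing in the objective says "sound like English", so the bots drift away from it.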

I would hazard a guess that companies like Google/Apple are already using RNNs or really advanced supervised-learning models for things like face/object detection in images.

The standard data corpus for training an object detection model has 1M images. Last I read, Google & Stanford were training one with 30M images, and it was apparently giving 5% better results without any change in the underlying algorithm. It is precisely for this reason that Google is trying to get as much data as it can about everything under the sun.

PS: In my professional capacity, I work on supervised learning & RNN models for text classification.

Last edited by ashwinsadeep : 15th August 2017 at 19:30.
ashwinsadeep is offline   (1) Thanks
Old 15th August 2017, 20:06   #15
Senior - BHPian
 
nilanjanray's Avatar
 
Join Date: Oct 2007
Location: Bangalore
Posts: 1,887
Thanked: 2,925 Times
re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by ashwinsadeep View Post
As @deetjohn have mentioned, this is nothing but fear mongering by Telegraph. We have a long way to go before somebody develops Skynet
Stumbled across this thread. In various WhatsApp groups, this was discussed a couple of weeks back.

This is not fear-mongering by the Telegraph; it was covered by many publications. There are a couple of alternate hypotheses put forward by friends who know this stuff: 1. It was a PR stunt. 2. Sometimes folks get bored or mischievous and program the bots accordingly, just for fun.

Anyway, some folks might be interested to know that AI plays a big role in anti-terrorism, military and anti-crime work: filtering through billions of phone calls and messages (doing content analytics), then combining various feeds to surface high-propensity targets (bank records; voice/messaging/social analysis of networks; video/imaging feeds from cameras at airports, ATMs, malls, toll booths, etc.). The recommendations are then fed to humans. IBM Watson does this for various industries, but military/anti-terror use cases are a little more evolved and sophisticated, ahead of what is available in the civilian world. Google 'i2 Analyst' or 'Palantir'; these are just part of the AI systems in use for military/anti-terror work.

Certain missiles use AI, e.g. using pack behaviour (yes, similar to what is described in the novel by Michael Crichton) to target. And I am sure folks are experimenting with minibots using AI, whether inside the body for cures or for military uses. I would think that today it is possible to program robots in whatever form (tiny ones, or Terminator types, e.g. Russia has developed one) to actually go after specific humans and assassinate them.

The danger will come when the human value-add layer, between decision support and hardware, starts getting narrower.

Anyway, here is a long article for folks who like to read. Old, but entertaining.
http://www.newyorker.com/magazine/20...e-nick-bostrom

And given a personal interest in this (having worked with AI-driven software products), and in wildlife + conservation (I do wildlife photography), I was rather interested to know how AI can be and is used for conservation. With more funds, proper data, and updated models, AI can be used more effectively, IMO. And also to go after the global poaching network, e.g. financing, travel, money trails, etc.
https://www.theatlantic.com/science/...pocene/520713/

Last edited by nilanjanray : 15th August 2017 at 20:10.
nilanjanray is offline  
Copyright ©2000 - 2024, Team-BHP.com