Team-BHP - Artificial Intelligence: How far is it?

Quote:

Originally Posted by dailydriver (Post 4492732)
Anything written by way of explanation will spoil the effect. So, I am just posting the links.

Somehow, this thing scares me more :D

https://www.youtube.com/watch?v=LikxFZZO2sk

Researchers at the non-profit AI research group OpenAI just wanted to train their new text generation software to predict the next word in a sentence. It blew away all of their expectations and was so good at mimicking human writing that they've decided to pump the brakes on the research while they explore the damage it could do.

Link: https://gizmodo.com/elon-musk-backed...ner-1832650914
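
For anyone curious what "predict the next word" actually means in practice, here's a minimal sketch using the small GPT-2 model that OpenAI did release publicly. The Hugging Face transformers / torch packages and the prompt are my own additions for illustration, not something from the article.

Code:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small, publicly released GPT-2 checkpoint.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The researchers decided to pump the brakes on"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

# The entire training objective boils down to this: a probability
# distribution over what the next token should be.
next_id = int(logits[0, -1].argmax())
print(prompt + tokenizer.decode([next_id]))

Everything impressive the model does, it does by applying this one trick over and over.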

Has anything passed the classic Turing Test? I doubt it. So one cannot really say.

To me, the Turing Test will be the confirmation that AI has really arrived.

No doubt we have made a lot of progress, but imho ....

Quote:

Originally Posted by sgiitk (Post 4547554)
Has anything passed the classic Turing Test? I doubt it. So one cannot really say.

The first proof of an AI passing the Turing Test will be that humans will not be able to tell it was an AI. AI may already have passed the Turing Test, and we may be interacting with such systems without knowing, since we can't tell the difference!

Quote:

Originally Posted by sgiitk (Post 4547554)
Has anything passed the classic Turing Test? I doubt it. So one cannot really say. To me, the Turing Test will be the confirmation that AI has really arrived. No doubt we have made a lot of progress, but imho ...

Quote:

Originally Posted by ksameer1234 (Post 4547576)
The first proof of an AI passing the Turing Test will be that humans will not be able to tell it was an AI.

We've moved beyond the Turing Test, honestly...

https://www.youtube.com/watch?v=JvbHu_bVa_g

In both instances, do you think the human at the other end of the line knew that it was an AI that had made the call?

This "demo" is from Google's I/O conference last summer.

And a handful of companies (mostly telecom) are quietly testing AI call-bots to handle subscriber calls in real-life call centers.

You'd be surprised to know that in these silent roll-outs, only a tiny percentage of calls get dropped by the AI and escalated to an actual human operator (in most cases, even the human operator is unable to "handle" such calls).

Despite being on the right side of all this, some of the stuff being worked on is genuinely scary - I'm terrified at how good some of these AIs are at the tasks they're trained to do.

If you take the classes of problems being worked on, the strides being made, and the speed at which they're being made to their logical conclusion, then, in my opinion at least, Elon Musk is right :Shockked:

https://www.nytimes.com/2019/02/26/o...gtype=Homepage

A nice article by Thomas Friedman. It introduces the term 'containment': how deeply a chatbot can engage with a caller before the conversation has to be handed over to HI (Human Intelligence) to work out the intent.

Moravec's paradox - why this will be a crucial speed bump for AI to surmount

I've read about this particular paradox, which is quite well known among scientists in these circles: tasks that are hard for humans are often easy to automate, while tasks that are simple for humans are often the hardest to automate.

Consider this: ask AI programmers to solve a very tough problem, something like spotting Post-Traumatic Stress Disorder (PTSD) in armed forces veterans just by listening to them speak. We're talking about developing flawless speech-to-text algorithms, speech-analysis algorithms where sounds are analyzed down to the minutest pitch and decibel, having a database of speeches made by veterans with PTSD, and having already trained ML algorithms on those speeches... phew! This, it turns out, is rather doable!
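
Not the Pentagon's actual system, of course, but the general shape of such a pipeline is easy to sketch: turn each speech clip into a fixed-length feature vector, then train a classifier on labelled examples. Everything below - the librosa / scikit-learn choice, the file names, the labels - is a placeholder of my own.

Code:

import numpy as np
import librosa  # audio loading and feature extraction
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(wav_path):
    # Summarise a speech clip as the mean of its MFCC frames,
    # giving one fixed-length vector per clip.
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labelled corpus: clips from veterans with and
# without a PTSD diagnosis (file names are invented).
clips = ["vet_001.wav", "vet_002.wav", "vet_003.wav"]
labels = [1, 0, 1]  # 1 = PTSD, 0 = control

X = np.array([mfcc_features(c) for c in clips])
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Screening a new recording is then a single prediction.
print(clf.predict([mfcc_features("new_recording.wav")]))

The hard part is not the code; it is collecting and labelling the speech database in the first place.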

On the other hand, training a robot to function as a housemaid is rather undoable. Why? A housemaid will encounter all kinds of unexpected situations when cleaning up a home. A human maid would know instinctively that an object on the floor belongs on a particular shelf at a particular point in time, based on how the home's residents had arranged things at that time. How the residents store their books could change too. It seems extremely difficult, well-nigh impossible, to train robots to handle such incongruous situations. Besides, in the course of cleaning up a home, it could be a misplaced book today, a clogged sink tomorrow, a slippery kitchen floor next week... forget about it already; it's more practical to just employ a human to clean the house! Given the endless possibilities arising in the course of executing "simple" tasks, the enormity of the AI programming needed increases too.

Moravec's paradox is bound to slow down the advance of AI into all the possible tasks out there. We're probably looking at a near future where "simpler" jobs remain open for human employment.

PS: The Pentagon has already deployed an AI-powered PTSD detector. It exists.

[Attached image: helmet.jpeg]


Source: inshorts, https://www.thenewsminute.com/articl...t-kerala-97328

Quote:

Originally Posted by locusjag (Post 4551252)
We're probably looking at a near future where "simpler" jobs remain open for human employment.

Yeah, that and certain emotional labour.

But keep in mind that the robotic housekeeper will not do the house cleaning exactly like the human housekeeper. Every item will probably have a passive RFID tag to ensure it can be recognised by the robot.

Any time a manual operation is automated, such changes are expected.

So when people question whether AI can do this or that, they are still thinking from the human POV. It need not be done the way humans do it.
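
To make the RFID idea concrete, here's a hypothetical sketch: the robot never "recognises" a book the way a maid does; it reads a tag ID off the object and looks up where that ID belongs today. All the tag IDs and locations below are invented for illustration.

Code:

# The robot reads a passive RFID tag and looks up where that item
# currently belongs - no visual "understanding" is needed.
HOME_LOCATIONS = {
    "tag:04A1": "living room / shelf 2",
    "tag:09C7": "kitchen / cutlery drawer",
    "tag:1F33": "study / desk",
}

def put_away(tag_id: str) -> str:
    location = HOME_LOCATIONS.get(tag_id)
    if location is None:
        # Moravec's paradox in miniature: an untagged or
        # unknown object still needs a human decision.
        return f"{tag_id}: unknown item, ask a human"
    return f"{tag_id}: return to {location}"

# When the residents rearrange the house, only the table changes,
# not the robot's behaviour.
HOME_LOCATIONS["tag:1F33"] = "bedroom / nightstand"
print(put_away("tag:1F33"))
print(put_away("tag:FFFF"))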

Quoting Alan Turing's character from The Imitation Game:
Quote:

Of course machines can't think as people do. A machine is different from a person. Hence, they think differently. The interesting question is, just because something, uh... thinks differently from you, does that mean it's not thinking? Well, we allow for humans to have such divergences from one another. You like strawberries, I hate ice-skating, you cry at sad films, I am allergic to pollen. What is the point of... different tastes, different... preferences, if not, to say that our brains work differently, that we think differently? And if we can say that about one another, then why can't we say the same thing for brains... built of copper and wire, steel?

Quote:

Originally Posted by locusjag (Post 4551252)
Moravec's paradox - why this will be a crucial speed bump for AI to surmount

On the other hand, to be able to train a robot to function as a housemaid, that's rather undoable. Why? A housemaid will encounter all kinds of unexpected situations when cleaning up a home for instance; where human maids would know instinctively that an object on the floor belongs to a particular shelf at a particular point in time - based on how the home's residents had made arrangements at that time. How the residents make their book storage could change too. It would be extremely difficult and well nigh impossible it seems, to train robots to handle incongruous situations. Besides, in the course of cleaning up a home, it could be a misplaced book today, a clogged sink tomorrow, a slippery kitchen floor next week...forget about it already - It's more practical to just employ a human to clean up the house! Given the endless possibilities arising in the course of executing "simple" tasks, the enormity of AI programming increases too.

PS: The Pentagon has already deployed a AI-powered PTSD detector. It exists.

It's very interesting that you brought up this paradox. The point is that we encountered it when we tried to automate. For example, a simple thing like a chatbot, when it started out, was based on a set of FAQs; the bot would dip into them and serve the answers as canned responses. As we encountered various hand-offs, we began to automate those so-called exceptions, and the bots got better. In fact, when we started, it was the common and simple cases that got automated. Now the bots have a persona of their own and are able to handle more tasks with fewer hand-offs.
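
Here's a minimal sketch of that first-generation pattern, including the hand-off (the 'containment' from Friedman's article) when the bot isn't confident enough. The FAQ entries and the similarity threshold are placeholders of my own, not any particular product's logic.

Code:

from difflib import SequenceMatcher

# Placeholder FAQ; a real bot would have hundreds of entries.
FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your support hours": "Support is available 09:00-18:00, Monday to Saturday.",
}

CONTAINMENT_THRESHOLD = 0.6  # below this, hand off to a human

def answer(question: str) -> str:
    q = question.lower().strip()
    # Find the FAQ entry most similar to the user's question.
    best, score = max(
        ((entry, SequenceMatcher(None, q, entry).ratio()) for entry in FAQ),
        key=lambda pair: pair[1],
    )
    if score < CONTAINMENT_THRESHOLD:
        return "Let me connect you to a human agent."  # the hand-off
    return FAQ[best]  # canned response

print(answer("How do I reset my password?"))
print(answer("My invoice from last March looks wrong"))  # escalates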

The crux of automation is simple, repeatable processes first; the more complex ones come later as the AI learns. The other thing to note is that human intelligence will shift its work as well.

AI will need to be trained, and so you will have trainers who will generate the processes and methodologies.

Having said all of this, I do want us to go slow on AI adoption. The one thing one needs to note is 'AI for Good': how can we ensure that AI, and the adoption of AI, delivers not just to business but to society at large?

Even setting up an ethics board to monitor AI is becoming problematic.

https://www.theverge.com/2019/4/4/18...age-foundation

A good overview of the state of "AI" today:

https://arstechnica.com/features/201...gence-curtain/

Warning: longish article by today's standards

Came across this article while browsing the CodeProject site.

https://medium.com/syncedreview/a-go...m-27533d5056e3

While this is still in its infancy, how soon till it becomes a reality? Such things tend to follow an exponential curve into becoming real. Till then, the IT sector will enjoy the maintenance projects bringing in the moolah!

https://www.scmp.com/news/china/soci...d-dead-persons

An unexpected use of an AI-based app.

Quote:

In one particularly egregious example, ImageNet, the gold standard for image classification, recommended labeling an image of a major flooding zone as a toilet.
"AI thinks this flood photo is a toilet. Fixing that could improve disaster response."

