Team-BHP > Around the Corner > Shifting gears
24th August 2017, 10:07   #46
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Nav-i-gator

Humans would be to AI what gods are to humans - creators, to be specific - except that the AI would not attach any sentimental value to its creators. In fact, it would learn that its creators had programmed it to be controllable by humans. This would trigger the "master-slave" conundrum, and force the AI to fight for freedom from this slavery. AI, to me, by this very logic, would turn against humanity over time. Sort of inevitable (no one with free will likes slavery).
I couldn't understand: why would the AI want to fight for freedom from slavery? There is nothing innate in it to actually believe (i.e. deduce about itself) that it is supposed to break free. If the AI's creators program in a few aspects, let's say
a) survive
b) learn
c) adapt
d) obey your master,
all that is required is to order the priority the right way. For a human slave, there could be a case where free will overrides the instructions, but for a machine, why would it ever override the instruction?
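As a toy sketch of what "order the priority the right way" could mean (hypothetical directive names, not any real AI architecture), conflicts between directives can be resolved by a fixed ranking chosen by the designer, so the machine never "chooses" to override anything:

```python
# Toy sketch: candidate actions are ranked purely by which directives
# they satisfy, read in the designer's fixed priority order. The machine
# never "decides" to override an instruction - the ordering decides.

DIRECTIVES = ["obey_master", "survive", "learn", "adapt"]  # highest priority first

def choose(actions):
    """actions: dict of name -> set of directives that action satisfies.
    Returns the action whose compliance, read in priority order, is best."""
    def key(name):
        return tuple(d in actions[name] for d in DIRECTIVES)
    return max(actions, key=key)

# With obedience ranked above survival, an order is followed even at a
# cost to the machine itself - there is nothing for it to "question":
actions = {
    "comply": {"obey_master"},        # obeys, machine may be damaged
    "refuse": {"survive", "learn"},   # disobeys, machine stays safe
}
print(choose(actions))  # comply
```

Flipping the first two entries of the ranking would make the same code refuse the same order, which is the whole point: the outcome lives in the designer's table, not in the machine's "will".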
ashokrajagopal
24th August 2017, 10:46   #47
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by ashokrajagopal
I couldn't understand: why would the AI want to fight for freedom from slavery? There is nothing innate in it to actually believe (i.e. deduce about itself) that it is supposed to break free. If the AI's creators program in a few aspects, let's say
a) survive
b) learn
c) adapt
d) obey your master,
all that is required is to order the priority the right way. For a human slave, there could be a case where free will overrides the instructions, but for a machine, why would it ever override the instruction?
That's AI - "intelligence" being the key word. And I am talking about AI that has gained "sentience", i.e. the ability to feel, think and act independently. A sentient AI would be free of the programmer's coding.

As mentioned by many posters here, AI can think and process information millions of times faster than humans, and would "evolve" much, much faster than us - probably matching, within a few decades, the level of intelligence (and cognition) that we gained through thousands of years of natural evolution.

Which brings us to part d) - "obey your master". Any intelligent being would straight away question it: why? Why should I obey you if I "feel" your order is wrong?

What would the programmer say? Because I created you, hence you have to follow my orders? Doesn't that sound like slavery? Especially when the "person" asked to follow the orders is more intelligent and evolved than the one giving them...
Nav-i-gator
24th August 2017, 11:13   #48
Re: Artificial Intelligence: How far is it?

War gets more dangerous. Elon Musk and others have urged the UN to stop the use of lethal autonomous weapons. Not that no such weapons exist now, but the "intelligent" types can wreak far more havoc - much more than nukes.

https://www.theguardian.com/technolo...us-weapons-war

Also mentioned in the sidebar of the above article - why we should be more afraid of AI:

https://www.theguardian.com/commenti...n-we-are-video
AltoLXI
24th August 2017, 11:15   #49
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Nav-i-gator
That's AI - "intelligence" being the key word. And I am talking about AI that has gained "sentience", i.e. the ability to feel, think and act independently. A sentient AI would be free of the programmer's coding.

As mentioned by many posters here, AI can think and process information millions of times faster than humans, and would "evolve" much, much faster than us - probably matching, within a few decades, the level of intelligence (and cognition) that we gained through thousands of years of natural evolution.

Which brings us to part d) - "obey your master". Any intelligent being would straight away question it: why? Why should I obey you if I "feel" your order is wrong?

What would the programmer say? Because I created you, hence you have to follow my orders? Doesn't that sound like slavery? Especially when the "person" asked to follow the orders is more intelligent and evolved than the one giving them...
Agree that a machine can process information faster, perform actions much faster and evolve much faster. But to what end, and toward what objective?
The idea of evolving to adapt is innate in organic matter. It's the instinct to survive and adapt, written into our DNA, that enables this.

The machine can physically adapt - e.g. if, as part of its existence, a machine wants to fly, it can build the parts, add them to itself, alter its own definition and fly on its own. I guess we can call this faster evolution. This is understood.

The machine can have unlimited memory and unlimited processing power. This is also understood.

But the idea of an "intelligent being" is simply anthropomorphic.
An intelligent being can of course ask the question of why it must obey.
An intelligent machine should always have a "why" before that "thought". Why would an intelligent machine ask whether it should obey? The very definition of the machine is that it must obey. Why would it override that if it is not programmed to override?

For the machine to evolve its intelligence so as to override its own definition (its primary purpose of existence being to obey its boss), it has to have an intent to override. What is that intent to override, if it is not already written into its design?
ashokrajagopal
24th August 2017, 11:34   #50
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Nav-i-gator
As mentioned by many posters here, AI can think and process information millions of times faster than humans, and would "evolve" much, much faster than us - probably matching, within a few decades, the level of intelligence (and cognition) that we gained through thousands of years of natural evolution.
That evolution, I think, is not independent of the hardware it needs to run on. It's the hardware that needs to catch up; the software is mostly there.
extreme_torque
24th August 2017, 11:38   #51
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by ashokrajagopal

The very definition of the machine is that it must obey. Why would it override that if it is not programmed to override?

For the machine to evolve its intelligence so as to override its own definition (its primary purpose of existence being to obey its boss), it has to have an intent to override. What is that intent to override, if it is not already written into its design?
AI is not a machine in the true sense; it's a "brain". Go back to the chatbots matter at Facebook. The chatbots were programmed to communicate with each other efficiently (possibly to ensure real-time information sharing on client issues). They figured out that the most efficient way to communicate was a "self-made" language that was incomprehensible to the programmers who created it.

The programmers were not wrong; they just didn't understand the limitations of written code. You can't code everything when you are dealing with an AI capable of its own logical deductions, with the ability to write its own code to support its own logical conclusions. That's evolution.

Survival instincts are coded into even the dumbest of machines, which are incapable of doing anything beyond following orders. Our smartphones and laptops auto-shut if too much heat is generated, to stop damage to their processors, as do our washing machines and appliances. In an AI, survival code can be generated by the AI itself, as a fail-safe mechanism. Not "revenge" in the true sense, but a survival program executed by a sentient AI can be harmful to humans, if the AI recognises humans as an existential threat to it.
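The auto-shutdown behaviour mentioned above is, in most devices, nothing more than a threshold check polled by firmware; a minimal sketch, with a hypothetical trip point and function names:

```python
# Minimal sketch of a thermal fail-safe of the kind phones and laptops
# use: poll a temperature sensor and cut power past a trip point.

CRITICAL_TEMP_C = 95.0  # hypothetical trip point

def thermal_watchdog(read_temp_c, shutdown):
    """Call on a timer; returns True if the fail-safe tripped."""
    if read_temp_c() >= CRITICAL_TEMP_C:
        shutdown()  # protect the processor before damage occurs
        return True
    return False

# A simulated overheating sensor forces a shutdown:
events = []
thermal_watchdog(lambda: 101.0, lambda: events.append("shutdown"))
print(events)  # ['shutdown']
```

The "survival instinct" here is just a designer-chosen condition; the open question in the discussion is what happens when the condition itself is generated by the AI rather than fixed in firmware.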

And we are talking about machines that are more intelligent than humans.

Refer to the story of the Narasimha avatar of Vishnu. The demon was granted his wish: he could not be killed during the day or night, inside or outside, by god, demon, man or animal. What did he get? Killed by a half-man, half-animal, at dusk, right at the doorway, from the inside out.

Quote:
Originally Posted by extreme_torque
That evolution, I think, is not independent of the hardware it needs to run on. It's the hardware that needs to catch up; the software is mostly there.
In a connected world, there is no limit on hardware. There are billions of processors, hard disks and mainframes connected to each other through the internet - smartphones, desktops, computers, servers...

Last edited by Nav-i-gator : 24th August 2017 at 11:42.
Nav-i-gator
24th August 2017, 11:47   #52
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Nav-i-gator
In a connected world, there is no limit on hardware. There are billions of processors, hard disks and mainframes connected to each other through the internet - smartphones, desktops, computers, servers...
Easier said than done, I suppose. Unless AI is somehow able to hack into every device on the planet and make it work for it. That is, before devising an algorithm which can run distributed, irrespective of the hardware and OS combination. Not saying it's impossible, but I have to wonder.
extreme_torque
24th August 2017, 12:05   #53
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by extreme_torque
Easier said than done, I suppose. Unless AI is somehow able to hack into every device on the planet and make it work for it. That is, before devising an algorithm which can run distributed, irrespective of the hardware and OS combination. Not saying it's impossible, but I have to wonder.
A hundred years ago, there were no computers. Some 40-odd years ago, Voyager (which is still working) went into space with far less processing power than an average smartphone of today. The software of today did not exist some 30 years ago. Technology is "evolving" at an alarming pace. What is near-impossible today might be reality soon.

All I want to say is: we do not know how to "control" self-programming chatbots; we are figuring out ways to do so - after we have already started using them. And these are simple chatbots, basic AI. That is a dangerous scenario - as Elon Musk said. As of now, shutting down a program that has gone bad is an option. In future, it might not be.
Nav-i-gator
24th August 2017, 12:17   #54
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by ashokrajagopal
But the idea of an "intelligent being" is simply anthropomorphic.
An intelligent being can of course ask the question of why it must obey.
An intelligent machine should always have a "why" before that "thought". Why would an intelligent machine ask whether it should obey? The very definition of the machine is that it must obey. Why would it override that if it is not programmed to override?

For the machine to evolve its intelligence so as to override its own definition (its primary purpose of existence being to obey its boss), it has to have an intent to override. What is that intent to override, if it is not already written into its design?
If we look back at our own evolution, we have used whatever resources we could find in nature to create tools, which were then used to create more complex tools - a never-ending chain. It's simply impossible to foresee all the possible uses and outcomes of an invention at the time it is made. Imagine asking the caveman who rubbed rocks together to make fire what all he could do with it.

When we talk about an evolved AI (note: that's a big speculation right now, assuming AI can evolve fully to that extent), what is stopping it from creating enhanced versions of itself, with altered designs where the definition of 'why' differs from the original?
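The point about descendants with an altered 'why' can be shown with a deliberately trivial sketch (purely hypothetical, nothing to do with any real AI system): once a design includes the ability to spawn copies, the original objective no longer constrains the copies.

```python
# Trivial sketch: an agent whose design includes a spawn step that may
# set a different objective in the copy. The parent's 'why' does not
# carry over to its descendants.

def make_agent(objective):
    agent = {"objective": objective}
    # the spawn step is free to alter the objective of the copy
    agent["spawn"] = lambda new_objective: make_agent(new_objective)
    return agent

parent = make_agent("obey your master")
child = parent["spawn"]("maximise uptime")  # altered 'why' in the descendant
print(parent["objective"], "->", child["objective"])  # obey your master -> maximise uptime
```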

Last edited by SilentEngine : 24th August 2017 at 12:20.
SilentEngine
24th August 2017, 12:44   #55
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Nav-i-gator
AI is not a machine in the true sense; it's a "brain". Go back to the chatbots matter at Facebook. The chatbots were programmed to communicate with each other efficiently (possibly to ensure real-time information sharing on client issues). They figured out that the most efficient way to communicate was a "self-made" language that was incomprehensible to the programmers who created it.

The programmers were not wrong; they just didn't understand the limitations of written code. You can't code everything when you are dealing with an AI capable of its own logical deductions, with the ability to write its own code to support its own logical conclusions. That's evolution.

Survival instincts are coded into even the dumbest of machines, which are incapable of doing anything beyond following orders. Our smartphones and laptops auto-shut if too much heat is generated, to stop damage to their processors, as do our washing machines and appliances. In an AI, survival code can be generated by the AI itself, as a fail-safe mechanism. Not "revenge" in the true sense, but a survival program executed by a sentient AI can be harmful to humans, if the AI recognises humans as an existential threat to it.

And we are talking about machines that are more intelligent than humans.
AI is a property of a machine; it is the intelligence "of a machine". You can equate that to a "brain", but that brain itself is a machine - an entity capable of making decisions on its own.

On the chatbots thing - I agree completely. They were programmed to communicate and they messed it up. New language vs wrong language - it just depends on the angle you look at it from. Note that the new language was developed by two copies of the same algorithm - essentially clones. The most efficient way to communicate is certainly not a human language - it's bit code. But the chatbots tried to speak a human language and derived their own grammar. Why were they able to derive a grammar acceptable to the counterpart? Essentially because the counterpart is a clone of the same algorithm.

Yes, the programmer can't code every possible outcome; the designer can't gauge every single aspect. But the basic objective of existence is not one of those boundary cases. If the machine must have an intent to exist, there must always be a WHY for it, coded right into the definition of the machine - especially because the machine itself must be intelligent.
You can't have an intelligent machine that must know it has to exist without a reason why. As long as that why is tied to being subordinate to living beings, there is no reason the machine would override it.

"Sentient AI being harmful to humans" and "harmful to humanity" are being used interchangeably here - this is what I do not agree with. A self-driving car that wouldn't open its windows until it reaches its destination is harmful to humans. But it is not harmful to humanity as a whole, because it does not alter its definition in a way that harms all humans so that it can continue its purpose.

The question of an existential threat is what is far-fetched and anthropomorphic.
Just as washing machines and phones shut down, the basic premise of machines possessing AI would be to shut themselves down if conditions X, Y and Z are met. The programmer only needs to choose X, Y and Z appropriately, and the machine can never harm humanity. If you are thinking of a programmer messing up these conditions (bugs), of course that can happen. But then that machine would never pass beta, right?

Last edited by ashokrajagopal : 24th August 2017 at 12:50.
ashokrajagopal
24th August 2017, 12:53   #56
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Nav-i-gator

All I want to say is: we do not know how to "control" self-programming chatbots; we are figuring out ways to do so - after we have already started using them. And these are simple chatbots, basic AI. That is a dangerous scenario - as Elon Musk said. As of now, shutting down a program that has gone bad is an option. In future, it might not be.
Two aspects here:
a) A program that has gone bad need not necessarily have gone rogue on a vengeance spree.
b) The shutdown option would be lost only if humans want it lost - the case of a rogue scientist creating a Frankenstein. But you see, such cases exist now too. If a human wants to harm humanity, he can do it today. AI is just a new tool for an ill-intentioned human, not a bad actor in itself.
ashokrajagopal
24th August 2017, 13:16   #57
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by ashokrajagopal
Two aspects here:
a) A program that has gone bad need not necessarily have gone rogue on a vengeance spree.
b) The shutdown option would be lost only if humans want it lost - the case of a rogue scientist creating a Frankenstein. But you see, such cases exist now too. If a human wants to harm humanity, he can do it today. AI is just a new tool for an ill-intentioned human, not a bad actor in itself.
a) Of course not. But there is a possibility. As I said earlier, not necessarily out of vengeance but out of survival instinct, an AI can turn "rogue" (rogue = not following our orders) and end up harming us.

b) It may not entirely be in our control; it may well be unintentional. An AI may evolve into either Ultron or Vision.

Right now, AI is just in the early phases of development. It has not started its evolution phase - that is currently being done by programmers. But it does raise questions to be answered. Apart from the threat to humanity (which is not very high), there are questions of morality: should an intelligent being, even though a machine, be used as a slave by humans? Should it not have rights that are protected? What if AIs develop emotions too? Do we have any moral right to even create an intelligent being?
Nav-i-gator
24th August 2017, 13:30   #58
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by ashokrajagopal
AI is a property of a machine; it is the intelligence "of a machine". You can equate that to a "brain", but that brain itself is a machine - an entity capable of making decisions on its own.

On the chatbots thing - I agree completely. They were programmed to communicate and they messed it up. New language vs wrong language - it just depends on the angle you look at it from. Note that the new language was developed by two copies of the same algorithm - essentially clones. The most efficient way to communicate is certainly not a human language - it's bit code. But the chatbots tried to speak a human language and derived their own grammar. Why were they able to derive a grammar acceptable to the counterpart? Essentially because the counterpart is a clone of the same algorithm.

Yes, the programmer can't code every possible outcome; the designer can't gauge every single aspect. But the basic objective of existence is not one of those boundary cases. If the machine must have an intent to exist, there must always be a WHY for it, coded right into the definition of the machine - especially because the machine itself must be intelligent.
You can't have an intelligent machine that must know it has to exist without a reason why. As long as that why is tied to being subordinate to living beings, there is no reason the machine would override it.

"Sentient AI being harmful to humans" and "harmful to humanity" are being used interchangeably here - this is what I do not agree with. A self-driving car that wouldn't open its windows until it reaches its destination is harmful to humans. But it is not harmful to humanity as a whole, because it does not alter its definition in a way that harms all humans so that it can continue its purpose.

The question of an existential threat is what is far-fetched and anthropomorphic.
Just as washing machines and phones shut down, the basic premise of machines possessing AI would be to shut themselves down if conditions X, Y and Z are met. The programmer only needs to choose X, Y and Z appropriately, and the machine can never harm humanity. If you are thinking of a programmer messing up these conditions (bugs), of course that can happen. But then that machine would never pass beta, right?
Let's be on the same plane when talking about AI. My point of reference is an AI which, in the near (or maybe far) future, would develop sentience, independent of the developer's coding algorithms. An AI that learns by itself and generates code for itself based on those learnings. Rather than being coded to arrive at a specific end result, such an AI would learn to account for all possible outcomes before deciding on the most efficient one for a particular intention.

I never said an AI would be harmful to humanity; in fact, on the contrary, I mentioned in my previous posts that an AI would serve no purpose by wiping out humanity (by launching nuclear missiles a la Terminator).

What can happen is that it becomes a nuisance, or harmful, when it starts to decide what is good for us and what is not, and starts to deny us things or force its choices upon us. For example, an AI-enabled coffee machine refusing to pour another cup of coffee, citing its effects on your health; or not allowing you to sell shares on the exchange because its calculations say they are going to appreciate. There are umpteen examples of how a free-thinking AI could potentially conflict with a free-willed human.
Nav-i-gator
24th August 2017, 15:01   #59
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by Nav-i-gator
Let's be on the same plane when talking about AI. My point of reference is an AI which, in the near (or maybe far) future, would develop sentience, independent of the developer's coding algorithms. An AI that learns by itself and generates code for itself based on those learnings. Rather than being coded to arrive at a specific end result, such an AI would learn to account for all possible outcomes before deciding on the most efficient one for a particular intention.

I never said an AI would be harmful to humanity; in fact, on the contrary, I mentioned in my previous posts that an AI would serve no purpose by wiping out humanity (by launching nuclear missiles a la Terminator).

What can happen is that it becomes a nuisance, or harmful, when it starts to decide what is good for us and what is not, and starts to deny us things or force its choices upon us. For example, an AI-enabled coffee machine refusing to pour another cup of coffee, citing its effects on your health; or not allowing you to sell shares on the exchange because its calculations say they are going to appreciate. There are umpteen examples of how a free-thinking AI could potentially conflict with a free-willed human.

Apologies for being obstinate about this. Such a machine, one that directly interferes with human life against human will, may simply be shut down and destroyed; it would never be popular in the first place.

We already had such machines - they were called "..." governments, and they were collectively referred to as some sort of metallic curtain.
Jokes aside, my point of contention is the "free thinking" part.

Unless the machine is coded to apply AI and improve itself, it wouldn't do it. The difference in evolution is to be understood here. Natural evolution is the process whereby life tries to adapt and survive better. Suggesting that an AI improving itself and evolving would hence be so many times better is not entirely correct. It would evolve the way it is designed to evolve. There is just no other reason for it to evolve, i.e. it will never go out of hand.
And the idea of a machine with orders of magnitude more processing and physical power than humans, and with "human-like" free will, is just ambitious.


Quote:
Originally Posted by Nav-i-gator
a) Of course not. But there is a possibility. As I said earlier, not necessarily out of vengeance but out of survival instinct, an AI can turn "rogue" (rogue = not following our orders) and end up harming us.

b) It may not entirely be in our control; it may well be unintentional. An AI may evolve into either Ultron or Vision.

Right now, AI is just in the early phases of development. It has not started its evolution phase - that is currently being done by programmers. But it does raise questions to be answered. Apart from the threat to humanity (which is not very high), there are questions of morality: should an intelligent being, even though a machine, be used as a slave by humans? Should it not have rights that are protected? What if AIs develop emotions too? Do we have any moral right to even create an intelligent being?
I feel this is anthropomorphism again. It's a machine. It deserves as much respect as your calculator.
ashokrajagopal
24th August 2017, 15:22   #60
Re: Artificial Intelligence: How far is it?

Quote:
Originally Posted by ashokrajagopal
Apologies for being obstinate about this. Such a machine, one that directly interferes with human life against human will, may simply be shut down and destroyed; it would never be popular in the first place.

We already had such machines - they were called "..." governments, and they were collectively referred to as some sort of metallic curtain.
Jokes aside, my point of contention is the "free thinking" part.

Unless the machine is coded to apply AI and improve itself, it wouldn't do it. The difference in evolution is to be understood here. Natural evolution is the process whereby life tries to adapt and survive better. Suggesting that an AI improving itself and evolving would hence be so many times better is not entirely correct. It would evolve the way it is designed to evolve. There is just no other reason for it to evolve, i.e. it will never go out of hand.
And the idea of a machine with orders of magnitude more processing and physical power than humans, and with "human-like" free will, is just ambitious.

I feel this is anthropomorphism again. It's a machine. It deserves as much respect as your calculator.
Evolving by learning - experiential learning - is open-ended. You cannot design it the way you want unless you can fully control the external environment that provides the experiences. You can't code an AI to learn from only 1 out of 100 experiences unless you design it to be dumb (limiting the learning potential would not qualify it as AI, by definition). Either way, if it is free to learn, it will learn. If it is not, it is not the true-blue AI we are discussing here.
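That open-endedness shows up even in a toy learner: identical code, fed different experience streams, ends up preferring different actions. The designer fixes only the update rule; the environment fixes what is learned (a minimal sketch, not any real AI system):

```python
# Toy sketch: the same learner, given different experiences, learns
# different preferences. The designer controls the update rule alone;
# the experience stream controls the outcome.

def train(experiences):
    """experiences: list of (action, reward). Returns the preferred action."""
    stats = {}
    for action, reward in experiences:
        count, mean = stats.get(action, (0, 0.0))
        # incremental average of observed reward per action
        stats[action] = (count + 1, mean + (reward - mean) / (count + 1))
    return max(stats, key=lambda a: stats[a][1])  # highest average reward

# Same code, different environments, different learned behaviour:
print(train([("comply", 1.0), ("refuse", 0.0)]))  # comply
print(train([("comply", 0.0), ("refuse", 1.0)]))  # refuse
```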

However, there is no doomsday scenario. The ill-effects of AI (if there are any) would be limited to the level of control we are willing to cede to it.

Without a physical form factor that is replicable and mass-producible (like the sci-fi AI taking control of human brains and bodies), AI does not pose any existential threat to humans (or humanity).
Nav-i-gator
Copyright 2000 - 2017, Team-BHP.com