Team-BHP (https://www.team-bhp.com/forum/)
Quote:
Originally Posted by Nav-i-gator
(Post 4257381)
Evolving by learning, experiential learning, is open ended. You cannot design it the way you want, unless you can fully control the external environment that provides the experiences. You can't code an AI to learn only 1 out of 100 experiences, unless you design it to be dumb (limiting the learning potential would not qualify it as AI, by definition). Either way, if it is free to learn, it will learn. If it is not, it is not a true-blue AI that we are discussing here. |
My post above was not about limiting learning or limiting experiences. It is only about limiting the objectives. In fact the machine has no objective to learn unless you provide one. It is not "free" to learn. It is programmed to learn, so it behaves as if it is free to learn.
The same way as it is programmed to learn, it can also be programmed not to harm. Or it is not programmed to harm.
There is so much science-fantasy fiction out there that separates the software and hardware and expands on that premise. This is also one reason why I call it a machine all the time. There are some weird fictions where somebody writes a program and the program begins to gain a physical form. That is just religious mythology manifesting in science-fiction garb. I guess Ultron came about like that.
Software has no existence by itself.
Quote:
Originally Posted by ashokrajagopal
(Post 4257398)
My post above was not about limiting learning or limiting experiences. It is only about limiting the objectives. In fact the machine has no objective to learn unless you provide one. It is not "free" to learn. It is programmed to learn, so it behaves as if it is free to learn.
The same way as it is programmed to learn, it can also be programmed not to harm. Or it is not programmed to harm.
There is so much science-fantasy fiction out there that separates the software and hardware and expands on that premise. This is also one reason why I call it a machine all the time. There are some weird fictions where somebody writes a program and the program begins to gain a physical form. That is just religious mythology manifesting in science-fiction garb. I guess Ultron came about like that.
Software has no existence by itself. |
Take the example of machine learning - the predictive text input software that we have in our smartphones. It not only learns the usual words that I type, but also suggests the next word and thereby the full sentence, by learning the usual patterns of words that I write. For example, I ping my wife every day when I am leaving office for home. It goes like this - Leaving ofc, ping me what to buy home. As soon as I write "lea", it suggests the whole sentence progressively. And that is basic predictive software, not AI.
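To make the point concrete, here is a minimal sketch of how such next-word suggestion can work: a toy bigram frequency model that counts which word usually follows which. The class name and the training loop are purely illustrative assumptions; real smartphone keyboards use far more sophisticated models, but the "learn my usual patterns" idea is the same.

```python
from collections import defaultdict, Counter

class BigramPredictor:
    """Toy predictive-text model: learns word-pair frequencies
    from typed sentences and suggests the most likely next word."""

    def __init__(self):
        # For each word, a Counter of the words seen following it.
        self.next_words = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            self.next_words[current][following] += 1

    def suggest(self, word):
        counts = self.next_words[word.lower()]
        if not counts:
            return None  # never seen anything follow this word
        return counts.most_common(1)[0][0]

predictor = BigramPredictor()
# The same message typed every evening quickly dominates the counts,
# which is why the keyboard can complete the whole sentence.
for _ in range(5):
    predictor.learn("Leaving ofc, ping me what to buy home")

print(predictor.suggest("leaving"))
print(predictor.suggest("to"))
```

Typing "lea" would first be resolved to "leaving" by a separate prefix lookup; from there the model chains `suggest()` calls to build the rest of the sentence word by word.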
Agree that learning as a standalone objective cannot be the sole objective. It has to be specified further - what to learn? That alone feels like a good fail-safe bet against rogue AI. But we need to understand one more thing - software is rarely personalized; it is generic. The Android software is the same across billions of smartphones, though it learns each individual's usage pattern.
Google's database would know much more about us than our own near ones do. That database is a big information repository: it knows when my credit card payment is due, it knows when I should leave home to catch the train whose tickets I booked last month, it knows (and auto-fills) my passwords. We rely on this software because it makes our life easy. We don't need to remember things; Google will remember for us and notify us when something is due.
Have you ever encountered a case where you want your software to recall or furnish some information and it says "the software is outdated, please update now to use it further"? How frantically we go on to update it then...
AI doomsday scenarios are much more related to defence (and there, AI does pose a serious issue in the future). For example, there are programs that scan the private information, phone calls and emails of individuals for words/patterns suggesting that a person may be a national security threat; there are programs that scan the skies for aerial threats; and there is software embedded in missiles enabling them to independently identify, target and fire at an enemy vessel/aircraft/missile/individual.
Each of these programs has a different objective - identify potential threats, identify the location, lock the target, fire to kill. If all these programs one day start interacting with each other...
Quote:
Originally Posted by im_srini
(Post 4252952)
Yes, there is, & a time may come when AI do what they want to do instead of what we want them to do :D
|
Maybe that's why the autocorrect on my phone changes characters in my texts the way it wants to, and not exactly how I'd have wanted. Earlier I thought it was dumb, but now I realize it is in fact more intelligent than me.
PS: And as for the intent - to screw up my life unconditionally.
When people think about robots taking over human jobs, this is what they think. :) But no, AI is about eliminating such jobs. For example, an AI robot won't bother with the pooja because it can't justify a need for it. The need for religion is a human trait; AI doesn't have it. It will make decisions critically and objectively all the time.
https://www.youtube.com/watch?v=MDxZfwKm6Hs
Quote:
Originally Posted by ashokrajagopal
(Post 4257398)
In fact the machine has no objective to learn unless you provide one. It is not "free" to learn. It is programmed to learn, so it behaves as if it is free to learn.
The same way as it is programmed to learn, it can also be programmed not to harm. Or it is not programmed to harm. |
As a 'living' creature, what is (are) our objective(s) to 'learn'?
As a social creature we are programmed (via education) not to harm, but there are some underlying priority of objectives which cause us to harm others in certain circumstance.
Quote:
Originally Posted by alpha1
(Post 4264692)
As a 'living' creature, what is (are) our objective(s) to 'learn'?
As a social creature we are programmed (via education) not to harm, but there are some underlying priority of objectives which cause us to harm others in certain circumstance. |
All living beings are programmed by their DNA to survive through evolution. Learning about the surroundings is the means to adapt and survive, either socially as a group or individually. The urge to learn from the surroundings can be equated to the urge to breathe.
The priority is always to survive and preserve their DNA (putting it very simply), hence the urge to reproduce in all living beings. Of course, the collective knowledge of the community reorders priorities.
Quote:
Originally Posted by ashokrajagopal
(Post 4264758)
All living beings are programmed by their DNA to survive through evolution. Learning about the surroundings is the means to adapt and survive, either socially as a group or individually. The urge to learn from the surroundings can be equated to the urge to breathe.
The priority is always to survive and preserve their DNA (putting it very simply), hence the urge to reproduce in all living beings. Of course, the collective knowledge of the community reorders priorities. |
Beautiful and frothy candyfloss! The day AI can analyze Othello's angst or Jacques Vallee's "Dimensions" dilemma or any human psychic ability for that matter - one can then take AI more seriously. The Freudian thesis that the procreative urge is the DNA's primary function has been put on the shelf quite a while back.
It is more likely that by the time AI comes out of its infancy, human abilities - in all areas - would most likely have broken through the bottle-neck/glass ceiling that it is encountering at present. And so the debate will continue...
Quote:
Originally Posted by shashanka
(Post 4266883)
Beautiful and frothy candyfloss! The day AI can analyze Othello's angst or Jacques Vallee's "Dimensions" dilemma or any human psychic ability for that matter - one can then take AI more seriously. The Freudian thesis that the procreative urge is the DNA's primary function has been put on the shelf quite a while back |
Off topic: are you a professor, by any chance? For a few seconds I thought I was reading a dissertation.😀
I agree with your view, sir. At least in the area of computer networks, most so-called cutting-edge tools based on TensorFlow or P4 fall flat when they have to deal with what humans call "common sense".
Quote:
Originally Posted by sathish81
(Post 4266960)
Off topic: are you a professor, by any chance? For a few seconds I thought I was reading a dissertation.😀
I agree with your view, sir. At least in the area of computer networks, most so-called cutting-edge tools based on TensorFlow or P4 fall flat when they have to deal with what humans call "common sense". |
Hello sathish81,
I'm sure my profile must have given the game away - I'm a humble marine engineer out of harness and put out to grass for the past couple of years!:) Unfortunately (or fortunately, depending on perspective!) I belong to a clan filled to the brim with academics & research fellows - and so, absorbing & participating in lively debates/arguments is about par for the course. My son-in-law, a computer scientist (his Ph.D. dissertation was on 'computer vision'), seems to be more conservative and not particularly interested in the AI debate!
Quote:
Originally Posted by shashanka
(Post 4267128)
Hello sathish81,
I'm sure my profile must have given the game away - I'm a humble marine engineer out of harness and put out to grass for the past couple of years!:) Unfortunately (or fortunately, depending on perspective!) I belong to a clan filled to the brim with academics & research fellows - and so, absorbing & participating in lively debates/arguments is about par for the course. My son-in-law, a computer scientist (his Ph.D. dissertation was on 'computer vision'), seems to be more conservative and not particularly interested in the AI debate! |
My apologies, sir - I did not check your profile.
On the topic at hand: AI cannot replace human intelligence. There are a few evolving technologies like "deep learning" that have limited use cases - in my view they are "high-end" automation and nothing else.
Even in computing applications, deep learning has very limited application on truly distributed platforms. One example is computer networks.
The thing is: while AI might take a long time to understand feelings, art, etc., it can do destructive keedagiri (mischief) today. Every government is (at the least) doing research on military and law enforcement applications/use cases. China plans to spend a huge, huge amount to become the AI leader of the world. It doesn't take much sophistication to kill humans :)
By the way, there are some programs that are rather good at analyzing emotions based on facial expressions. There was an interesting article in The Economist.
Quote:
Originally Posted by nilanjanray
(Post 4267225)
The thing is..............some programs that are rather good at analyzing emotions based on facial expressions. There was an interesting article on Economist. |
Yes, I read the article too. Somewhat misleading and simplistic, I felt. Ethnic differences in social attitudes & behaviour often give rise to very similar facial expressions which nevertheless have vastly different significance. I would rather not go into specifics and risk giving offence where none is intended! As sathish81 pointed out, this could be an example of "high-end" automation and not really AI :)