Team-BHP - Artificial Intelligence: How far is it?

The other day I was spending a mindless hour reading some Quora stuff. It's slightly less mindless than Instagram!

For the first time, though, I felt something that made me wonder if what I was reading was written by a bot.

This is getting scary.

AI Robot:

https://www.youtube.com/watch?v=X2Cjg3vyShY

has replaced human:

https://www.youtube.com/watch?v=kWePYEdVbhc

Quote:

Originally Posted by SmartCat (Post 5592543)
This is getting scary.

AI Robot:

https://www.youtube.com/watch?v=X2Cjg3vyShY

The robot's movements are too smooth. I'd love to have visual servoing like that, but it seems to be fake. (Then again, I'm not aware of the current research in the field of inverse kinematics.)

Quote:

Originally Posted by greyhound82 (Post 5592565)
The robot's movements are too smooth. I'd love to have visual servoing like that, but it seems to be fake

Obviously; that is why SmartCat posted the original video from which the robot video was spoofed.

Quote:

Originally Posted by greyhound82 (Post 5592565)
The robot's movements are too smooth. I'd love to have visual servoing like that, but it seems to be fake..

Even real demo videos are awesome, though not as awesome as the fake one :). Look at the last couple of movements:

https://www.youtube.com/watch?v=-e1_QhJ1EhQ

AI Study Says Stocks Already Pricing a Job-Replacement Premium

Source: Bloomberg

A couple of key points:

The study suggests that the stock market is already assessing the impact of AI on companies. Businesses that are more likely to benefit from AI-driven efficiency gains are outperforming the market. That is, companies where humans can be replaced by AI are being valued higher.

The study goes beyond technology companies. For example, the stock market now deems that insurance companies will see 'labour efficiencies' (a euphemism for AI replacing humans), and their stocks are thus outperforming.

The study itself used ChatGPT extensively instead of hiring research assistants.

Quote:

Ironically, researchers like Schubert perform some of the cognitive, repetitive work which AI is primed to do, hence the decision to use it for the study. To sift through the some 19,000 tasks, the economists could have hired a research assistant or employed gig workers — both options that would have required substantial cash and ample time.

Or they could use ChatGPT, which completed the assigned task in less than two days for little to no cost.

“It actually just turned out to be the best tool for the job,” Schubert admits.

Geoffrey Hinton is a British-Canadian computer scientist best known for his work on artificial neural networks.

Quote:

Hinton received the 2018 Turing Award (often referred to as the "Nobel Prize of Computing"), together with Yoshua Bengio and Yann LeCun, for their work on deep learning. They are sometimes referred to as the "Godfathers of Deep Learning", and have continued to give public talks together.

In May 2023, Hinton announced his resignation from Google to be able to "freely speak out about the risks of A.I."
He has given an interview to CBS News' 60 Minutes on the promise and risks of AI, and it is compelling reading.

Excerpts:

Quote:

Scott Pelley: What are the implications of these systems autonomously writing their own computer code and executing their own computer code?

Geoffrey Hinton: That's a serious worry, right? So, one of the ways in which these systems might escape control is by writing their own computer code to modify themselves. And that's something we need to seriously worry about.

Scott Pelley: What do you say to someone who might argue, "If the systems become malevolent, just turn them off"?

Geoffrey Hinton: They will be able to manipulate people, right? And these will be very good at convincing people 'cause they'll have learned from all the novels that were ever written, all the books by Machiavelli, all the political connivances, they'll know all that stuff. They'll know how to do it.
Quote:

Chatbots are said to be language models that just predict the next most likely word based on probability.

Geoffrey Hinton: You'll hear people saying things like, "They're just doing auto-complete. They're just trying to predict the next word. And they're just using statistics." Well, it's true they're just trying to predict the next word. But if you think about it, to predict the next word you have to understand the sentences. So, the idea they're just predicting the next word so they're not intelligent is crazy. You have to be really intelligent to predict the next word really accurately.
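Hinton's point about "just predicting the next word" can be made concrete with a toy model. Below is a minimal sketch (my own illustration, not anything from the interview or from how real LLMs are implemented): a bigram model that counts which word follows which, and predicts the most frequent follower. Actual chatbots use neural networks trained on vast corpora, but the training objective is this same idea, scaled up enormously.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, how often each other word follows it."""
    followers = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the most frequent next word, or None if the word was never seen."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

corpus = "the robot moves and the robot paints and the robot moves"
model = train_bigram(corpus)
print(predict_next(model, "robot"))  # "moves" (seen twice, vs "paints" once)
```

The gap between this and GPT-4 is exactly Hinton's argument: raw word statistics like these plateau quickly, so predicting the next word *really accurately* ends up requiring something that looks like understanding.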
Quote:

Hinton's test was a riddle about house painting. An answer would demand reasoning and planning. This is what he typed into ChatGPT4.

Geoffrey Hinton: "The rooms in my house are painted white or blue or yellow. And yellow paint fades to white within a year. In two years' time, I'd like all the rooms to be white. What should I do?"

The answer began in one second. GPT-4 advised that "the rooms painted in blue" "need to be repainted," while "the rooms painted in yellow" "don't need to [be] repaint[ed]" because they would fade to white before the deadline. And...

Geoffrey Hinton: Oh! I didn't even think of that!

It warned, "if you paint the yellow rooms white" there's a risk the color might be off when the yellow fades. Besides, it advised, "you'd be wasting resources" painting rooms that were going to fade to white anyway.

Scott Pelley: You believe that ChatGPT4 understands?

Geoffrey Hinton: I believe it definitely understands, yes.
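GPT-4's answer to the riddle boils down to a simple decision rule per room colour. As a sketch (my own spelling-out of the reasoning in the transcript, not anything GPT-4 produced):

```python
# Goal: all rooms white in two years. Yellow paint fades to white within a year,
# so yellow rooms fix themselves; only blue rooms need work.

def plan(room_colour):
    """Return what to do with a room of the given colour."""
    if room_colour == "white":
        return "leave as is"
    if room_colour == "yellow":
        return "leave as is"  # fades to white well before the two-year deadline
    if room_colour == "blue":
        return "repaint white"
    raise ValueError(f"unexpected colour: {room_colour}")

for colour in ("white", "blue", "yellow"):
    print(colour, "->", plan(colour))
```

The impressive part, of course, is not the rule itself but that GPT-4 derived it unprompted, including the insight Hinton himself had missed: that repainting yellow rooms would waste resources.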
Quote:

Geoffrey Hinton: It may be we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did. I don't know. I think my main message is there's enormous uncertainty about what's gonna happen next. These things do understand. And because they understand, we need to think hard about what's going to happen next. And we just don't know.

Quote:

Originally Posted by DigitalOne (Post 5641574)
Geoffrey Hinton is a British computer scientist most noted for his work on artificial neural networks. ...
Excerpts:

I find this concept odd: "It may be we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did."

Whether or not AI can be harmful (personally, I don't think it will ever reach that point), I don't think there is any chance of stopping further development. If we had the social systems in place to stop research into potentially planet-ending ideas, nuclear bombs would never have been invented. And that was a concept that was very obviously apocalyptic from the beginning. But here we are. So whatever path AI development takes, it's going to take.

However intelligent AI becomes, it still needs to be deployed in critical places before it can take over the world. It will still be people using AI's smarts to do bad things.

We need to worry only if AI can infect and take over remote systems the way a virus does. When experts talk about dangers, I think they still fear people using AI in the wrong places. It would still need the whole infrastructure in place to go rogue. It's like extremists destroying the world with nuclear weapons: it can be controlled. At least, that's what I personally think.

On the other hand, we do still need to worry about fakes and the like misleading us towards the abyss. But again, it's people who will do this.

Quote:

Originally Posted by am1m (Post 5641945)
Whether or not AI can be harmful (personally, I don't think it will ever reach that point), I don't think there is any chance of stopping further development.

He is not advocating stopping further development of AI. He is just making people aware of the risks. He quit Google so he could speak freely about those risks, unconstrained by Google's corporate policies.

Also, we may just be the metaphorical Boiling Frogs :)

AI has already started encroaching on the medical profession.
Machine learning algorithms are being developed that can interpret an X-ray, CT, or MRI much more quickly and efficiently than a human.
Sometimes a disease in its early phase has very subtle radiological signs that only an experienced specialist in that field would be able to catch.
But current research has found that AI algorithms are able to detect these early signs (e.g. early-stage cancer) much more consistently.
In fact, hospitals in the US have already started using these AI tools.
Warning bells for radiologists out there...

A bit OT.

A senior in college used to regularly play a song on loop. Sitting in my room, I'd enjoy listening to it too. Not being a big music person, I didn't care to ask which song it was. But the tune stayed with me, and I never heard that song again after college. I asked a lot of my college mates to recognise the song, singing the tune as best I could. None could place it (in their defence, I am as good as a donkey braying). My wife, having lived the hostel life, is quite familiar with popular music from the '80s onwards; I must have tried with her too. So, this week I had to install the Google app on my phone for some reason. Yesterday, a popup asked me to search for any song by just humming the tune. I did, and it correctly recognised the song.

The song, I was searching for (intermittently) the last 15 years:

https://www.youtube.com/watch?v=R5SwRvAbpe8

Turns out it's quite a popular song. And my wife knew it all along; it was one of her favourite songs in college too!

Aitana Lopez: The model earning 4,000 euros per month... created by AI

Gosh, that looks so realistic. If this catches on, modelling as a career is finished.

https://www.marca.com/en/lifestyle/c...f218b458a.html

Google introduced its most powerful, multimodal AI platform, Gemini. It showcased what Gemini is capable of doing in an impressive demo:

https://www.youtube.com/watch?v=UIZAiXYceBI

There are allegations that the demo was faked (Emphasis below mine).

Quote:

Just one problem: the video isn’t real. “We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges. Then we prompted Gemini using still image frames from the footage, and prompting via text.” ....

So although it might kind of do the things Google shows in the video, it didn’t, and maybe couldn’t, do them live and in the way they implied. In actuality, it was a series of carefully tuned text prompts with still images, clearly selected and shortened to misrepresent what the interaction is actually like.
Google elaborated in a longer blog post on the actual prompts used in the interaction.

Despite the controversy over whether the demo video is a 'fake' or a 'gross exaggeration', if you read the blog post, Gemini does reveal some awesome multimodal AI capabilities, like reasoning and game creation. IMHO, Google should have just stopped with the blog and not set sky-high expectations with a carefully edited video. They have set themselves up for a fall.

Quote:

Originally Posted by DigitalOne (Post 5676201)
Google introduced its most powerful, multimodal AI platform, Gemini. ...

IMHO, Google should have just stopped with the blog, and not set sky high expectations with a carefully edited video. They have set themselves up for a fall.

So true! After setting everyone's expectations sky-high, the revelations come tumbling out of the closet.

The Verge has already questioned the video (link) saying:
Quote:

Google just launched a new AI, and has already admitted at least one demo wasn’t real

TechCrunch didn't pull any punches. Link
Quote:

Google’s best Gemini demo was faked
Is pressure from Microsoft and OpenAI getting to Google?
