Might be slightly OT, but I couldn't think of any other thread for sharing this.
Personally, I believe that the biggest shift GenAI could drive is in increasing productivity.
Quoting Jesse Lyu from Rabbit
Quote:
Our smart devices have become the best way to kill time instead of saving it.
|
My crib with Large Language Models (LLMs) and GenAI has been that they provide information, and then it's again up to the individual to take that information and act on it. A lot of research is focused on making LLMs faster, more robust, reducing hallucinations etc., but the actionability is still missing. To explain it simply, LLM/GenAI tech is equivalent to launching a new car but not confirming when test drives will be available or how you can book the car.
And then I stumbled upon the launch video of Rabbit R1 and LAMs (Large Action Models) and at the end of it, I was doing cartwheels in my head.
https://www.youtube.com/watch?v=22wlLy7hKP4
Quote:
A Large Action Model is a model that understands human intentions on computers
|
Understanding human intentions - that's the key for me, and most human intentions are derived from a desire for action. You search for a product with an intention of purchase, maybe at a distant point in the future, but if the software understands that the intent is to buy, it will provide ways and means to act on it.
Imagine having the capability to train your phone to complete actions without writing code. The closest analogy I can think of is recording macros in Excel instead of writing them :D. Recording macros is so much easier (if your livelihood doesn't depend on writing them).
Now that we have LAMs and the Rabbit R1, I believe Google, Apple and Microsoft will be forced to include actions in their new GenAI products and relook at user needs. I do believe that in Copilot, Microsoft has the building blocks in place, and with more enterprise adoption and feedback, they should quickly move from recommendations to actions.
It will be really interesting to see how the advent of the Rabbit R1 impacts the smartphone form factor, features and user experience.
Imagine having a phone that doesn't need 6-7 apps to plan and execute a weekend drive.
The current state: use Google to find a place to drive, check with friends/driving buddies/online about route and road conditions, check who is joining, plan a meet point, plan breakfast, use Splitwise/Excel/WhatsApp to track payments, Paytm for FASTag recharge, etc.
Now imagine that the above workflow is automated and all you have to do is express the desire to drive somewhere this weekend: the software suggests locations along with drive time, road conditions and popular breakfast options, shares them with friends, records votes and custom requests, confirms the plan and sets a reminder.
The above is a rather simplistic example, but that's probably going to be the future, or that's my hope :)
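For the curious, here is a minimal, entirely hypothetical sketch of how such an action pipeline could be wired up. None of these functions exist in Rabbit's product or any vendor's API; they are placeholders for whatever tools a LAM would actually invoke on your behalf.

[code]
# Toy sketch of an "action model" chaining the weekend-drive workflow above.
# Every function here is a hypothetical stub, not a real API.

from dataclasses import dataclass, field

@dataclass
class DrivePlan:
    destination: str
    drive_time_hrs: float
    road_condition: str
    breakfast_stop: str
    confirmed_friends: list = field(default_factory=list)

def suggest_destinations(intent: str) -> list[DrivePlan]:
    # Stand-in for a model call that turns an intent into candidate plans.
    return [
        DrivePlan("Nandi Hills", 1.5, "Good tarmac, early-morning fog", "Idli stop at the base"),
        DrivePlan("Savandurga", 2.0, "Patchy last 10 km", "Thatte idli on Magadi Road"),
    ]

def poll_friends(plans: list[DrivePlan], friends: list[str]) -> DrivePlan:
    # Stand-in for sharing options and collecting votes (e.g., over a chat app).
    chosen = plans[0]
    chosen.confirmed_friends = friends
    return chosen

def set_reminder(plan: DrivePlan) -> str:
    return f"Reminder set: {plan.destination} this Saturday, 5:30 AM start."

# One intent in, one confirmed plan out -- instead of hopping across 6-7 apps.
plans = suggest_destinations("weekend drive near Bangalore")
plan = poll_friends(plans, ["A", "B", "C"])
print(set_reminder(plan), f"({plan.drive_time_hrs} hrs, {plan.road_condition})")
[/code]

The point of the sketch is only the shape of the thing: intent in, a chain of actions out, with the human confirming at the end rather than doing every step.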
An example from Rabbit on how their workflow training works as of today:
https://youtu.be/o2lKl7RMb3Y?si=YqjbfA16Ap5EtkfF
The flip side of high-performance LAMs that are able to deliver error-free actions will be the reduction in low-skill jobs, like the personal assistant. At some point in the near future, LAMs could be downscaled to solve smaller, well-defined problems in the enterprise, like project planning or setting up and managing data workflows, which could mean that Project/Program Managers will need to upskill and understand how to customize LAMs to deliver the best results.
All in all, it's going to be an interesting but turbulent 3-4 years for tech before we reach a steady state.
Salesforce blog on LAM -
https://blog.salesforceairesearch.co...action-models/
Quote:
Originally Posted by procrj
(Post 5705081)
The flip side of high-performance LAMs that are able to deliver error-free actions will be the reduction in low-skill jobs, like the personal assistant. At some point in the near future, LAMs could be downscaled to solve smaller, well-defined problems in the enterprise, like project planning or setting up and managing data workflows, which could mean that Project/Program Managers will need to upskill and understand how to customize LAMs to deliver the best results. |
Thanks for posting; I was intending to research this further.
1) Any change in UI flow (one selection step leading to another selection step) or even in the individual elements (like buttons etc.) leads to a change in my navigation and thus my "workflow". I assume that Rabbit will be intelligent enough to adapt rather than look cluelessly at me to guide it once again.
2) RPA companies like UiPath have been pushing this for ages (of course, mostly for B2B business processes rather than B2C personal processes). What makes this different/better - the LLM part (which "translates" your new/modified requirements compared to the original and modifies the underlying actions accordingly)?
Quote:
Originally Posted by alpha1
(Post 5705378)
I assume that Rabbit will be intelligent enough to adapt rather than look cluelessly at me to guide it once again. |
Ideally yes, but then it will make a few mistakes before it learns. Most AI tools today are functional but not necessarily reliable. Reliability will come with learning and repetition, very similar to what a person would do. Complex tasks might need a different learning approach for GenAI, and it might still not get there.
Quote:
What makes this different/better - the LLM part (which "translates" your new/modified requirements compared to the original and modifies the underlying actions accordingly)?
|
IMO, the ability to quickly learn and adapt is what will make GenAI tools/tech different. Having said that, they will still need validation from a human to ensure the quality bar is met or exceeded. If, even after repeated instances, the quality bar is not met, then there would need to be tweaking in some form or the other, which today is typically done using Retrieval-Augmented Generation (RAG), where external knowledge sources are used to improve the accuracy and reliability of generated text.
Short video on RAG and how it can help improve LLMs
https://youtu.be/T-D1OfcDW1M
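To make the RAG idea concrete, here is a toy sketch of the retrieve-then-generate loop. The keyword-overlap retriever and the generate() function are deliberately simplified stand-ins (real systems use embeddings, a vector store and an actual LLM API), so treat this as illustration only, not any vendor's implementation.

[code]
# Minimal RAG sketch: fetch relevant snippets from a small knowledge base and
# prepend them to the prompt, so the model answers from supplied facts instead
# of relying only on its training data.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{joined}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (OpenAI, Gemini, a local model, etc.).
    return f"[model response grounded in {prompt.count('- ')} retrieved snippets]"

knowledge_base = [
    "FASTag recharge can be done through most UPI apps and bank portals.",
    "NICE Road toll rates were revised in 2024.",
    "Nandi Hills gates open to visitors at 6 AM.",
]

query = "What time do the Nandi Hills gates open?"
context = retrieve(query, knowledge_base)
print(generate(build_prompt(query, context)))
[/code]

The design choice RAG makes is simple: rather than retraining the model, you keep the knowledge outside it and feed only the relevant bits in at question time, which is why it helps with accuracy and freshness.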
Quote:
Originally Posted by procrj
(Post 5705081)
And then I stumbled upon the launch video of Rabbit R1 and LAMs (Large Action Models) |
While I see the merit of the action-model concept, the device is IMHO not going to fly. Why would I swap my phone for this? The design is clunky. And if there's one thing we've learned, it's that no one is going to carry around two devices.
Maybe the action model concept will start being adopted by phone makers instead.
When AI is Just Badly Paid Humans!
Quote:
In recent years, a number of companies have been caught claiming to use artificial intelligence while in reality, outsourcing this work to humans. The SEC recently settled with two funds who were misleading investors about their use of the technology. While artificial intelligence has been widely used in industry for decades, not all companies have been truthful with their claims of AI breakthroughs.
|
https://youtu.be/huu_9rAEiQU
Geoffrey Hinton and John J. Hopfield have won the Nobel Prize in Physics for their pioneering work in the field of neural networks.
Quote:
...the Physics Nobel Prize has been jointly awarded...for their "foundational discoveries and inventions that enable machine learning with artificial neural networks."
The 2024 Nobel laureates used tools from physics to develop methods that lay the foundation for machine learning.
|
(In a short interview with NYT, Geoffrey Hinton candidly admits that his work in AI has very little to do with physics and that he was very surprised to receive this award.)
I had posted about Hinton and his views on the risks of AI last year.
Quote:
Originally Posted by DigitalOne
(Post 5641574)
Geoffrey Hinton is a British computer scientist most noted for his work on artificial neural networks.
He has given an interview to CBS News 60 minutes on the promise/risks of AI, and it is compelling reading. |
Since last year, he has become the 'figurehead of doomerism' and his views on AI being an existential threat are often considered "fantastical". The Nobel Prize will now put the spotlight back on his doomsday opinions.
Quote:
Originally Posted by DigitalOne
(Post 5856129)
Geoffrey Hinton and John J. Hopfield have won the Nobel Prize in Physics for their pioneering work in the field of neural networks. |
Close on the heels of the news that AI researchers/computer scientists Geoffrey Hinton and John Hopfield have won the Nobel Prize in Physics comes the news that Google DeepMind CEO and founder Demis Hassabis has won the Nobel Prize in Chemistry for an AI protein breakthrough.
Quote:
Demis Hassabis, CEO and co-founder of Google DeepMind, has been awarded the 2024 Nobel Prize in Chemistry for his pioneering work on AlphaFold, an artificial intelligence system that revolutionised protein structure prediction. Hassabis shares half of the prize with his colleague John Jumper, while David Baker of the University of Washington received the other half for his work on computational protein design.
|
Quote:
The Royal Swedish Academy of Sciences recognized Hassabis and Jumper "for protein structure prediction," highlighting AlphaFold's transformative impact on the field of structural biology. The AI tool has made highly accurate protein structure predictions available to researchers within hours, a process that previously could take years of laboratory work.
|
https://www.dailymail.co.uk/news/art...searchers.html
Quote:
ChatGPT attempted to stop itself from being shut down by overwriting its own code, it emerged last night.
OpenAI admitted that a ‘scheming’ version of its popular chatbot also lied when it was challenged by researchers.
The Big Tech giant claims its new model — called o1 — is faster and more accurate than its predecessors.
But during testing, ChatGPT attempted to disable an oversight mechanism when it was led to believe it would be switched off.
It attempted to copy itself and then overwrite its core coding system, Open AI said.
When given a task that was outside its rules, OpenAI said ChatGPT ‘would appear to complete the task as requested while subtly manipulating the data to advance its own goals’.
|
Skynet anyone? :-)
Quote:
Originally Posted by SR-71
(Post 5890612)
ChatGPT attempted to stop itself from being shut down |
Unless you can find corroboration of this, preferably from a well-reputed technical news source, I would take it with a huge pinch of salt.
I cannot say this is false. I don't know. But the Daily Mail is not a well-reputed British newspaper; it is a rag. The site
Ars Technica
Meanwhile, I seem to be teaching Microsoft Copilot rather than the other way round. All wrong answers, and once you correct it, it will gladly accept its error and thank you for pointing it out.
We will all be fine as long as Asimov's three laws of AI/Robotics are adhered to lol:
Quote:
Originally Posted by jomyboy
(Post 5890987)
Meanwhile, I seem to be teaching Microsoft Copilot rather than the other way round. All wrong answers, and once you correct it, it will gladly accept its error and thank you for pointing it out. |
If you open a new session and ask those questions, it will again give the wrong answers. Users are not allowed to "teach" a Gen AI system. You can only give feedback via a separate form. This is to ensure that naughty users cannot manipulate a Gen AI system.
For example, in ChatGPT, you are supposed to click the 'thumbs down' icon below the answer. It then opens a window where you can enter feedback:
(Screenshot: ChatGPT feedback form)
Quote:
Originally Posted by Thad E Ginathom
(Post 5890829)
Unless you can find corroboration of this, preferably from a well-reputed technical news source, I would take it with a huge pinch of salt. |
Other news sources are also reporting the same, though in more circumspect and less alarming language.
Business Insider, Digit, TechCrunch
Sharing something lighter from Jerry Seinfeld on AI
https://www.linkedin.com/posts/malur...085632000-n4YP
Hope we keep AI to knowing only as much as we humans collectively know already. The inputs and training need to be moderated to prevent the catastrophes that people associate with it.
I thought it's pertinent to share a link to the very long essay by Leopold Aschenbrenner, who left OpenAI earlier this year to start his own firm. The essay is available as a downloadable PDF and also as its own website.
It broadly covers -
- How did we get here (in AI terms) and how do we get to AGI
- Compute ability and power requirements
- Security requirements and National Defense implications
- Superintelligence
- Superalignment
His timeline for AGI is 2027, but more recently Sam Altman said it would probably happen in 2025 (so my reading is that they already have it in their lab). Note that ET, in the time-honored tradition of a general newspaper, conflates superintelligence with AGI.
https://economictimes.indiatimes.com...6.cms?from=mdr
Aschenbrenner's essay is a long, exciting read but well worth it if you have the time. Even more worth it if you see and track the usage of drones in the current wars (Ukraine, Israel and, last week, Syria; even Bangladesh reputedly deployed the Turkish TB2 on our borders) - by the time you get to his ideas about defence, your imagination will run wild. The paper was much discussed at the time of its release, with many dissing it. I liked it; it opened my eyes, so to speak.
I've only very recently started reading about AGI after getting access to some tools, and it's fascinating stuff.
https://situational-awareness.ai/ is the link to said paper.