AI News - February 2025

A quick overview of interesting AI news for the Yate and Sodbury U3A's monthly AI News meeting held on February 13th 2025 (meetings are on the second Thursday of the month, 1400-1545, St John's Church Centre, Wickwar Road, Chipping Sodbury, UK - all welcome). Note that this is very much a summary, intended as background: the topics are discussed in detail at the meeting itself, where we also discuss and demonstrate other AI products and services and much more.

For the first time, some AI smart glasses (Meta Ray Bans) will be demonstrated at the meeting and then made available to any blind or sight-impaired member to experiment with.

[1] Vice President J.D. Vance defines clear USA vision for AI

This week the 'AI Action Summit' in Paris did not turn out quite as the EU had hoped. The stated USA position is for a bold march forwards. Whatever your view, Vance's speech was well crafted and worth listening to. His main points:
1. USA AI will be the gold standard and the partner of choice
2. Excessive regulation could kill growth, so the USA will deregulate
3. AI must be free from ideological bias
4. AI policies should be pro-worker

Incredible to see a political leader translate how a new technology can promote human flourishing with such clarity. Exceptional speech.

pic.twitter.com/IOUIv54FDO

— Katherine Boyle (@KTmBoyle) February 11, 2025

[2] Max Tegmark suggests that control of ASI is an unsolved problem

Given that we are close to Artificial General Intelligence, which leads to Artificial Superintelligence, there needs to be a plan for how ASI can be managed or controlled. Max Tegmark suggests he has not yet heard a good strategy for this.

This may be the first time ever that I strongly disagree with your assessment. I expect AGI to quickly lead to ASI and, if you or anyone else has a convincing plan for how not to lose control over it, I challenge you to post it here for public scrutiny. I’ve read and heard many… https://t.co/IzIWub4ZXS

— Max Tegmark (@tegmark) February 8, 2025

[3] Does Ilya Sutskever have the answer?

This leading figure in AI research last year set up his own company, 'Safe Superintelligence', which aims to produce ASI as its first 'product'. Perhaps he has a better approach than the established companies.

Reuters is reporting that Ilya Sutskever's Safe Superintelligence Inc is in talks to raise funding at a valuation of at least $20 billion. This would quadruple the company's $5 billion valuation from its last funding round five months ago. Their only product is superintelligence. pic.twitter.com/4SNZSmGnq7

— Andrew Curran (@AndrewCurran_) February 7, 2025

[4] Yoshua Bengio thinks AI should not be trusted

A description of an AI cheating rather than simply following instructions.

"The smarter they are, the more they cheat."

Heated exchange at AI, Science and Society Conference panel between Yoshua Bengio and Philippe Aghion.

When Aghion asserts that AI simply "does what you tell it to do," Bengio counters with a startling example: instructed to play… pic.twitter.com/p8J4uwXXYf

— vitruvian potato (@vitrupo) February 8, 2025

[5] AI 'prompt-to-video' becomes ever more powerful

A compilation of AI-generated video.

„Hollywood is Save, only the writer are in danger because of ai“
Cope. pic.twitter.com/gvAhrVu0rA

— Chubby♨️ (@kimmonismus) February 10, 2025

This is Pika AI's new 'Pikaddition' feature, which can add a character or special effect into even your own videos.

the future of VFX is here..

Pika AI just dropped Pikaddition, now you can add any character into your own video in seconds and...

it can even match the colour and lighting automatically.

8 examples: pic.twitter.com/7dsBgnw96Y

— el.cine (@EHuanglu) February 8, 2025

[6] Sam Altman seems unworried as he looks ahead a few years

He suggests that in 2035 a single datacentre will equal all the human intelligence on Earth today ...

In 2035 1 datacenter equals all the human intelligence on earth (if the trend keeps continuing) pic.twitter.com/luLaHmVkx4

— Chubby♨️ (@kimmonismus) February 9, 2025

Sam Altman also suggested that OpenAI's models already rank around 50th in the world at coding in a well-known competitive programming contest (Codeforces), and would be 1st by the year end. [The dates in the tweet below should read 2024.]

In terms of Codeforces ELO:
o1 (Oct 2023) was ranked ~9800th

o3 (Dec 2023) was ranked ~175th

Altman has confirmed that Open AI has a model internally at ~50th

Superhuman programming by eoy

— Justin Halford (@Justin_Halford_) February 8, 2025

If you want his latest thoughts, start with Sam's recent blog post.

Three Observations: https://t.co/Ctvga5vfMy

— Sam Altman (@sama) February 9, 2025

[7] OpenAI's Deep Research model suggests jobs humans should do

'Deep Research' is the most capable tool available from OpenAI for extended research tasks. Here it analyses some of the jobs in which it thinks humans will still have a role to play.

20 jobs that OpenAI o3 CANNOT replace human, according to Deep Research pic.twitter.com/9JHAvmbWm5

— Min Choi (@minchoi) February 8, 2025

[8] Beginning of first wave of job destruction by AI

Emad Mostaque (former CEO of Stability AI) on the anticipated destruction of the Business Process Outsourcing market in 2025 by 'AI operators': first in countries like India, then followed by home workers in the USA.

2025 will be the year in which we will see the first painful consequences in the labor market.

Countries like India, where work is often outsourced, will be the first to be replaced by AI.

Remote workers in the USA and Europe will follow.

The discussion about the world of… pic.twitter.com/ppfMAFHKEy

— Chubby♨️ (@kimmonismus) February 8, 2025

[9] OpenAI's Deep Research showing its ability in Healthcare

Because AI systems are becoming so powerful, they can be assessed only by seeing how they perform on complex questions. Here a real-world example is given by a prominent medical expert, and the eleven-page document produced seems impressive to the non-expert.

I’m now sharing an extremely detailed colon cancer treatment report from OpenAI Deep Research, posted with my patient friend’s permission (no identifier). This report, obtained after uploading the patient’s case, is highly personalized and truly remarkable! Google doc link below pic.twitter.com/8uJvtz8oGl

— Derya Unutmaz, MD (@DeryaTR_) February 7, 2025

Here is another expert, economist Tyler Cowen, with his early experience of OpenAI's Deep Research.

Buckle up, folks pic.twitter.com/dEtBG7exsT

— Alec Stapp (@AlecStapp) February 5, 2025

[10] Watching an AI's 'Chain of Thought' is entertaining and informative

Now that OpenAI and others have released 'thinking models', they have found that users like to see what that thought process is (or at least what the AI model says its thinking process is). Here is an example from OpenAI's o3-mini.

Updated chain of thought in OpenAI o3-mini for free and paid users, and in o3-mini-high for paid users. pic.twitter.com/uF4XTBGpC5

— OpenAI (@OpenAI) February 6, 2025

[11] There was an open invitation to 'jailbreak' Claude

There is a living to be made persuading AI tools to say things that they are not allowed to say. Anthropic turned this into a game, with a large cash prize, in which you had to get a new, unreleased version of Claude to tell you how to make dangerous chemicals.

👀 https://t.co/G9kfjOAUNS

— Jan Leike (@janleike) February 5, 2025

[12] There will be many robots - and watching them will be entertaining

With so many competing robotics companies, there will be many stunts used to attract market attention. This one is from the Chinese company PNDRobotics.

After visiting the Central Academy of Fine Arts, Adam, a humanoid robot, decided to paint a piece of art himself. https://t.co/GWsjcoB1Hl pic.twitter.com/drZWyM4anD

— CyberRobo (@CyberRobooo) February 6, 2025

And who can resist a robot that wants to shake your hand or give you a high five, particularly when it is with a human celebrity (Kai Cenat).

Kai is normalizing humanoid robots and it’s awesome

pic.twitter.com/hBbs9thsph

— MatthewBerman (@MatthewBerman) February 5, 2025

Meanwhile Tesla is planning to make something like 5,000 Optimus robots this year, and one hundred million per year in five years or so. In this clip Elon Musk explains the special requirements for achieving this. Tesla has just begun to recruit the engineers needed for volume production.

Elon Musk: Tesla is uniquely positioned to manufacture a humanoid robot.

“We really have by far the best team of humanoid robotics engineers in the world. We also have all the other ingredients necessary.

You need a great battery pack, you need great power electronics, you… pic.twitter.com/Yvc5omggoU

— ELON CLIPS (@ElonClipsX) January 31, 2025

[14] The winning 'form factor' for personal AI seems likely to be AI Glasses

At the moment there is really only one market-ready AI glasses solution: the Meta AI Ray Bans. We now have a pair of these which anybody locally can evaluate, particularly those from the blind and partially sighted community.

Meta Ray Bans - the closed case

There are also a great many AI glasses start-ups. The product below is designed entirely for the blind and sight-impaired market; we have a pair on order for mid-2025.

#AGIGA #EchoVision - "Revolutionizing independence for the blind and low vision" https://t.co/719raYrJD4 #AI smart glasses pic.twitter.com/KSvDuIgMgX

— The vOICe vision BCI 😎🧠 (@seeingwithsound) December 3, 2024

[15] The RAND Corporation thinking hard about AI dangers

This top think tank considers where the dangers lie - you can view their report via the link in the tweet.

AGI's potential emergence presents 5 hard problems for U.S. national security:

1-wonder weapons
2-systemic shifts in power
3-nonexperts empowered to develop weapons of mass destruction
4-artificial entities with agency
5-instability https://t.co/06owuaXxe2

— Jim Mitre (@jim_mitre) February 10, 2025

[16] FARSI - Fully Autonomous Recursive Self Improving AI

In short, sometime soon an AI system will be capable of improving itself, and it will then repeat that process for as long as it can access the resources to do so. A perfect start to a sci-fi novel.

Fully Autonomous Recursive Self Improvement (FARSI) in AI is coming in 2026.

That is huge.

This year, AI is saturating two primary benchmarks: math and coding.

Guess what AI research is? Math and coding.

Fast takeoff starts in 12 months. pic.twitter.com/U7XQjGI147

— David Shapiro ⏩ (@DaveShapi) February 9, 2025