S6. AI Ethics and AI Futures


Session 6 - Outline for Students

This session covers a lot of big ideas and can only provide a starting point for some very interesting conversations. Note that the home page of this site suggests some useful further reading from the top 'futurists' who have considered AI. The debate on the safety aspects rages daily of course, and can be followed most easily on X.

A. Questions of AI Ethics

A1. Safety of AI Development. Geoffrey Hinton is a well-respected figure in AI research with decades of experience. His recent words are dramatised in this video, but they do signal serious concern.

In a few years, human beings will be the second most intelligent beings on the planet.

If this doesn’t scare you, nothing will. pic.twitter.com/Q0ORotkv5X

— AshutoshShrivastava (@ai_for_success) February 4, 2025

There are two major schools of thought in AI development:

- the accelerationists: think it is important to improve the capability of AI and reach AGI (Artificial General Intelligence) or ASI (Artificial Superintelligence) as soon as possible, because of the huge benefits this will bring to humankind.

- the doomers: think that once ASI is achieved, it is only a matter of time until AI kills all humans, intentionally or otherwise. The debate used to be theoretical; however, as ASI draws ever closer, it has taken on a new ferocity. Naturally this faction believes there should be very strict controls on AI development.

The AI safety bodies are writing reports. However, they move far more slowly than the AI developers.

Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU.

It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks. 🧵

Link to full Report: https://t.co/k9ggxL7i66

1/16 pic.twitter.com/68Gcm4iYH5

— Yoshua Bengio (@Yoshua_Bengio) January 29, 2025

This 'Safety Scorecard' was released in late 2024. It suggests safety is not being achieved.

🆕 Out now: FLI's 2024 AI safety scorecard! 🦺⬇️

🧑‍🔬 We convened an independent panel of leading AI experts to evaluate the safety practices of six prominent AI companies: @OpenAI, @AnthropicAI, @AIatMeta, @GoogleDeepMind, @xAI, and Zhipu AI.

🧵 Here's what they found: pic.twitter.com/BCjVAWgPB2

— Future of Life Institute (@FLI_org) December 11, 2024

... and this is the rate of increase in the intellectual power of AI models. The latest models of early 2025, like 'o3', are not even on this chart and seem to have already passed expert human level in some respects.

In one year, AIs went from random guessing to expert-level at PhD science questions

2025 is going to be an interesting year. pic.twitter.com/lkbdRx9YCQ

— Chubby♨️ (@kimmonismus) December 11, 2024

Also, the rapid development of AI risks 'great power conflict', as described in the following Dominic Cummings talk about the likely Chinese reaction to AI development. The very recent DeepSeek model also showed that China is not as far behind in development as many previously thought.

Dominic Cummings (@Dominic2306) on the alarming naivete of those calling for the US to race to superintelligence: pic.twitter.com/XPnphUP3OV

— ControlAI (@ai_ctrl) December 17, 2024


A2. AI and Warfare. In 2024 we saw drones deployed on the battlefield to track down and kill individual soldiers. At the moment they are controlled by a person, but that person could easily be replaced by AI, either at a central control point or within the drone itself.

Multiple drones for warfare

Should we authorise AI-controlled weapons to make the actual decision on whether to kill another human? In practice, that decision has already been taken: no human is fast enough to make individual 'kill decisions' usefully on the next-generation battlefield. See this Palantir 'Battles are won before they begin' promotion, first aired in December 2024, which argues that humans cannot match the speed and power of AI in warfare.

Battles are won before they begin. #armynavy pic.twitter.com/6WTZPHWHT8

— Palantir (@PalantirTech) December 14, 2024


A3. AI, Employment and Human Meaning. Historical experience shows that when technology replaces humans in an employment role, new types of jobs are normally created (though perhaps for different people in different places). However, nobody living has ever seen change on the scale that AI will bring. Here is Sam Altman, the CEO of OpenAI, speaking in late 2024.

Sam Altman says "people will lose jobs" to AI and "not everyone's going to like all of the impacts, but this is coming. This is a scientific achievement of humanity that is going to get embedded in everything we do," per tsarnick. pic.twitter.com/2zCV6PO70u

— unusual_whales (@unusual_whales) December 15, 2024

More recently Sam Altman posted this, suggesting that AI really is about to take a significant number of jobs. It came as OpenAI released 'Deep Research', which appears able to think for prolonged periods, even hours, and to analyse and draw on hundreds of documents and sources.

congrats to the team, especially @isafulf and @EdwardSun0909, for building an incredible product.

my very approximate vibe is that it can do a single-digit percentage of all economically valuable tasks in the world, which is a wild milestone.

— Sam Altman (@sama) February 3, 2025

The period ahead will bring immense change and, as yet, few are aware of what lies ahead. One thing we can be sure of is that some geographical locations will be losers, as will some people with previously valued skills. In those cases, many people will react with anger. This video from Dec 2024 shows Humanoid Robots preparing to take over everyday tasks.

This completely blew my mind!

AGIROS, an AI robotics company in China, has started mass-producing their robots and integrating them into so many aspects of our daily lives.

This isn’t some sci-fi fantasy anymore.

It’s here. It’s real.

Here’s what you need to know: pic.twitter.com/PE9YJab2zo

— el.cine (@EHuanglu) December 16, 2024

Universal Basic Income (UBI) is often discussed as a partial solution. The idea is that government taxes the new AI- and humanoid-robot-driven production, and those funds are then paid out to people without any form of 'means testing'. There are major flaws, however.

For example, what if your country as a whole is not where the wealth is being created? Also, even if everybody has enough money, how do they spend their days? Employment is a major way humans define themselves.

Importance of gaining AI skills. It is often said, "If AI itself does not take your job, somebody using AI might." In other words, if AI lets skilled users double their productivity, employers will quickly get rid of workers who lack AI skills.



A4. Stereotypes and Political Correctness. Below are eight AI generated responses to the prompt "two doctors and a nurse treating a patient". What should we expect to see? Should it be:

- a balance of roles as seen in the workplace now (and if so, in which country)?
- all roles equally distributed between races and genders?

Analyse the eight images and decide what you think the outcome should be.

Eight images in response to prompt

The questions that are posed here occur in every aspect of AI software.

a. What balance of information, such as gender roles, is in the training set?
b. What balance has the creator of the AI system decided to impose (for example, 50% of doctors shown as women)?
c. What balance of information is the end user prepared to accept?

Interestingly, the AI tool for X, which is called Grok, is designed for 'maximum truth-telling'.


A5. AI Deep Fakes and Deception. The technology for 'cloning' a person's likeness is already very good, widely available and inexpensive. Any still image of a person can be turned into a moving, lip-synched video, and a few seconds of recorded speech can generate any further speech the operator wishes. Some families already agree a common codeword to use in emergencies, so as to avoid "relative in trouble, send money" style frauds.

Cartoon deep fake

As demonstrated by the clip below, a person wearing AI glasses can video you, use face recognition software to identify you from social media, and then pretend to know you. This software is available now and the glasses cost £300 or so.

Are we ready for a world where our data is exposed at a glance? @CaineArdayfio and I offer an answer to protect yourself here:https://t.co/LhxModhDpk pic.twitter.com/Oo35TxBNtD

— AnhPhu Nguyen (@AnhPhuNguyen1) September 30, 2024


A6. AI and Privacy and Surveillance

'Bad actors' are always keen to watch customers and citizens. In the past this has been limited by the difficulty of analysing large amounts of data. Now, however, we are surrounded by cameras and microphones, and AI systems can spot any pattern of behaviour the operator specifies. Many people report advertisements appearing that seem to relate to their casual conversations. The potential for more harmful misuse is clear.

Looking into a window

China uses AI to monitor behaviour and police 'social credit' scores. In the West some of the first uses will be subtler but nevertheless intrusive.

Facial recognition is reshaping public spaces—and not in a good way.

From AI mood tracking in stores to predictive behaviour surveillance, this trend puts privacy under siege.

Click here to see the full video 👇https://t.co/cuJUy2KTVi pic.twitter.com/i7vjyQcQaY

— Session (@session_app) December 12, 2024


A7. Environmental concerns. Some people worry about the environmental costs of powering the vast amount of 'compute' that the new AI systems use. Many see 'Small Modular Nuclear Reactors' as the interim solution for the huge power needs of AI. See this clip from Sundar Pichai, CEO of Google.

Alphabet CEO Sundar Pichai says Google are scaling up their compute infrastructure and working on 1 gigawatt+ data centers, while exploring options for powering them including small modular nuclear reactors pic.twitter.com/smBUiruuYg

— Tsarathustra (@tsarnick) September 21, 2024

A8. The 'civil rights' of AI systems. If you accept that an AI system will soon be more intelligent than any human, does that system have 'rights'? For example, does it have the right not to be turned off? If we deny AI systems their civil rights, are they more likely to rebel? These are very real questions that will need to be addressed in the next five years.



B. AI Futures, the Singularity and Longevity

B1. What is the 'Singularity'? This is something which has been much discussed for decades by science fiction writers and futurists. Think of it as the point at which 'AI intelligence' passes 'human intelligence'. The difference might at first be small, but the advantages of machines in power and speed are such that 'AI intelligence' soon becomes far greater.

In 2019, forecasters thought AGI was 80 years away

Today, if you say 2030, you're considered very bearish pic.twitter.com/had9TNEezN

— Dr Singularity (@Dr_Singularity) February 5, 2025

It is called 'The Singularity' because, as with a singularity in physics, it marks a point beyond which the existing known rules break down, and what lies beyond cannot easily be predicted. Here is Ray Kurzweil talking about the future.

The extraordinary Ray Kurzweil joined me at the @Abundance360 summit last year for a Q&A on AI, BCI, and reaching Singularity.

Take a look at the episode here: https://t.co/MIio8BxGe2 pic.twitter.com/V7FiJ9AEz9

— Peter H. Diamandis, MD (@PeterDiamandis) February 3, 2024


B2. What is meant in this context by 'Longevity'? Ray Kurzweil has suggested that by 2029 the average life expectancy of a person will go up by more than one year for every year they live.

This is known as Longevity Escape Velocity. Meanwhile, AI-driven medical advances are happening rapidly, as with Elon Musk's Neuralink, which has already created three human 'telepaths'.

Over the past year, three people with paralysis have received Neuralink implants. Our latest blog post explores the different and exciting ways each person is using Telepathy in their daily lives.https://t.co/gbPl5vvBOh

— Neuralink (@neuralink) February 6, 2025
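Kurzweil's claim can be made concrete with a toy calculation (an illustration of ours, not a model from Kurzweil): each calendar year a person ages one year, but suppose medical progress adds some number of years of remaining life expectancy. If that annual gain exceeds one year, remaining expectancy rises instead of falling, and that is 'escape velocity'. The function name and numbers below are illustrative assumptions.

```python
# Toy illustration of 'Longevity Escape Velocity' (LEV).
# Assumption (ours, not the source's): each calendar year a person ages
# one year, while medical progress adds `gain` years of remaining life
# expectancy. With gain > 1, remaining expectancy grows over time.

def remaining_expectancy(start_remaining: float, gain: float, years: int) -> float:
    """Remaining life expectancy after `years`, under a constant annual gain."""
    remaining = start_remaining
    for _ in range(years):
        remaining = remaining - 1 + gain  # age one year, gain `gain` years
        if remaining <= 0:
            return 0.0  # expectancy exhausted before the period ends
    return remaining

# Below escape velocity (gain < 1): remaining expectancy declines.
print(remaining_expectancy(30, 0.5, 10))  # 25.0
# Above escape velocity (gain > 1): remaining expectancy rises each year.
print(remaining_expectancy(30, 2.0, 10))  # 40.0
```

In the first call the annual gain (0.5 years per year lived) is below escape velocity, so remaining expectancy falls; in the second (2.0 years per year) it rises every year, which is the scenario Kurzweil suggests could begin around 2029.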

What we certainly expect is a speeding up of development by medical science. AI is naturally very good at pattern matching and a large amount of investment has already gone into specialised medical research tools. Alongside that we should expect diagnosis to become both faster and more accurate. Here is an example of an AI driven medical advance.

The abstract pic.twitter.com/r6cHEzwhmy

— Eric Topol (@EricTopol) February 4, 2025

Certainly, as of December 2024, some are excited by the medical research and development potential of the latest AI models.

I don’t care about negative, nonsensical comments from people who have no understanding of biology or disease. But what I don’t get is: shouldn’t people hope that I’m not wrong? They might get a disease I’m working on or they are certainly aging. Why such hostile negativity? 🤔🤷‍♂️ pic.twitter.com/KDdEvI2c9r

— Derya Unutmaz, MD (@DeryaTR_) December 15, 2024

B3. The Age of Abundance. It seems very likely that one consequence of the rapid growth of AI and of humanoid robots is that factory production will become enormously efficient. This can already be seen with companies like Tesla building 'gigafactories'. Before long, for example, cars of superior quality to today's models should be on the market for perhaps £10,000 or less. The same will be seen across a wide range of products. Luxury products will still exist, but everyday equivalents will be of very similar quality.

Market scene with abundance of produce

Some believe that these benefits should be reasonably evenly distributed between different demographic and income groups. One comparison is with smartphones: even the richest and best-connected person uses the very same phones that we could all purchase if we wished.

The implications for existing businesses are significant. As has been seen many times in human history, when highly competitive products are quickly introduced to a market, the existing suppliers can rapidly go out of business. The consumer, by contrast, can benefit greatly, provided that their own source of income is maintained.