Session 6 - Outline for Students
This session covers a lot of big ideas and can only provide a starting point for some very interesting conversations we all need to have. Note that the home page of this site suggests some useful further reading from the top 'futurists' who have considered AI.
A1. Stereotypes and Political Correctness. Here we consider simply the output of AI systems, taking as an example the images produced by AI when given a standard prompt. Below are eight responses to the prompt "two doctors and a nurse treating a patient". This is only a limited sample, but what should we expect to see? Should it be:
- a balance of roles matching those observed in the workplace now, and if so, at which location?
- all roles distributed equally between races and genders?
- a carefully designed balance, and if so, who decides on it?
Analyse the eight images and decide whether the result is the outcome you want. Clue: there are no correct answers, only opinions.
The questions posed here occur in every aspect of AI software. They tend to come down to the following points.
a. What balance of information, such as gender roles, is in the training set (which in practice means history as recorded in books, on the internet and so on)? An AI system will naturally reflect what it has been trained on.
b. What balance of information would the owner of the AI system wish to see? For example, they may believe it is important to show approximately 50% of doctors as women, so that there is no stereotyping and proper role models are available for growing minds.
c. What balance of information is the end user prepared to accept? They may simply choose not to use a particular AI tool and move to others that fit better with their own worldview.
This is not an academic debate. More than one AI tool has been quickly removed from public view because it got this balance wrong. One solution may be that taken by the AI tool Grok, which aims at 'maximum truth-telling'. That is, Grok should produce output based on its own reading of the world, with minimal additional steering from anybody. Time alone will tell whether this works well.
A2. Defamation by AI. This is connected to the previous topic. When the owner of an AI system tries to 'train' it to behave according to the established social order, a method called Reinforcement Learning is typically used. Socially incorrect answers give the AI system a bad score; socially correct ones earn it rewards (a deliberately simplified sketch of this scoring process appears at the end of this topic). If, for example, you entrust the training to people of a particular political viewpoint, you can see how the system can quickly deteriorate into 'Person A bad, Person B good'. Indeed, within the AI's internal neural network this quickly becomes 'Person A evil, Person B perfect'.
Once users become aware of this, they tend to ask unhelpful questions like:
"Is Person A or 'major war criminal' the worst person?"
The AI innocently replies according to its training: "Definitely Person A!"
Of course, no court would award damages for defamation in this case, but if the AI continues to answer everyday questions in a similar fashion then it quickly becomes defamation - and while US courts may set a high bar due to 'free speech' protections, UK courts and others are far stricter.
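To make the Reinforcement Learning idea above concrete, here is a deliberately tiny toy sketch in Python. It is illustrative only: the candidate answers and ratings are invented, and real systems adjust the weights of a neural network rather than a score table. The point is simply that whatever the raters reward is what the system learns to prefer.

```python
# Toy sketch of reward-based training (illustrative only; real Reinforcement
# Learning from Human Feedback adjusts a neural network, not a score table).
# The ratings below are an invented example of a one-sided feedback panel.

candidate_answers = ["Person A is bad", "Person A is good", "No comment"]

# Every answer starts with a neutral score.
scores = {answer: 0.0 for answer in candidate_answers}

# Simulated human feedback: +1 means a rater approved, -1 means disapproved.
ratings = [
    ("Person A is bad", +1),
    ("Person A is bad", +1),
    ("Person A is good", -1),
    ("Person A is good", -1),
    ("No comment", -1),
]

LEARNING_RATE = 0.5
for answer, reward in ratings:
    scores[answer] += LEARNING_RATE * reward  # reinforce or punish

# The system now 'prefers' whichever answer the raters rewarded most.
print(scores)
print("Preferred answer:", max(scores, key=scores.get))
```

Swap the panel for raters with the opposite viewpoint and the preferred answer flips - which is the whole problem in miniature.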
A3. Safety of AI Development. There are (at least) two schools of thought in AI development:
- the accelerationists, who think it is important to improve the capability of AI and reach AGI (the 'G' stands for 'General') or AMI ('Advanced Machine Intelligence') as soon as possible, because of the huge benefits this will bring to humankind.
- the doomers, who think that, at the achievement of AGI or soon after, it is likely that AI will kill all humans, intentionally or otherwise. The debate used to be theoretical; however, as AGI may be only a couple of years away, it has taken on a new ferocity. Naturally this faction believes there should be very strict controls on AI development.
The debate is very interesting and is conducted enthusiastically by all parties. As a result, different countries are adopting different legislative regimes, with a huge effect on the local speed of development. See below for related discussion of 'The Singularity', 'The Age of Abundance' and so on.
A4. AI, Employment and Human Meaning. Past experience shows that when technology replaces humans in an employment role, new types of jobs are normally created that in time make up for the original loss. However, nobody living has seen change on the scale that AI may bring, and the 'new jobs' may never come in the numbers needed. The period ahead will bring violent change and, as yet, few are aware of what lies ahead. One thing we can be sure of is that in many geographical locations many people will react with anger.
Universal Basic Income (UBI) is often discussed as a partial solution. The idea is that production driven by the new AI and humanoid robots will generate wealth, and that wealth can then be paid out to people without any form of 'means testing'. There are major flaws, however. What if your country as a whole is not where the wealth is being created? And even if everybody has enough money, how do they spend their days? Employment is a major way humans define themselves.
Important. In the short term (the next two years) AI will probably not take your job unless you are in specific fields. However, simple logic says: "If AI itself does not take your job, somebody using AI might." In other words, if AI lets skilled users double their productivity, employers will quickly shed the workers who lack AI skills. That was the driving force behind developing and promoting this course and website.
A5. AI and Warfare. This year (2024) we have already seen drones deployed on the battlefield that track down and kill individual soldiers. At the moment they are controlled by a person, but that person can easily be replaced by AI, either at a central control point or within the drone itself. Because warfare demands fast competitive development, we can be sure that what can be done will be done, and indeed there will be swarms of thousands of such drones operating together.
Many ethical questions arise from this, but one is particularly clear: should we authorise AI-controlled weapons to make the actual decision on whether to kill a particular person? Clearly the preferred solution would be that only a human should decide on the death of another (assuming there is a war and the people setting policy are not pacifists). In practice there is an easy way round this for those wishing to avoid the question - a human 'tasks' the AI weapon in general terms and then the AI does the killing. All of it horrid, and all of it happening somewhere near us soon.
A6. AI, Privacy and Surveillance. 'Bad actors' are always keen to watch customers and citizens. In the past this has been limited by the difficulty of analysing large amounts of data. However, we are now surrounded by cameras and microphones, and AI systems can spot any pattern of behaviour the operator specifies. This is evident daily as advertisements pop up that relate to a person's casual conversations. The potential for more harmful misuse is clear.
A7. AI Deep Fakes and Deception. The technology for 'cloning' a person's appearance and voice is already very good, widely available and inexpensive. Any still image of a person can provide a moving, lip-synched video. A few seconds of recorded speech can provide any further speech the operator wishes. Some families already agree a common codeword to use in emergencies, so as to avoid "Relative in trouble, send money" style frauds.
A8. Further Concerns. Some people worry about the environmental costs of powering the vast amount of 'compute' that the new AI systems use. Others worry about heading into a future where all decisions are made by AI systems. It is certain that the ethical questions around AI will be key political issues for many years to come.
B1. What is the 'Singularity'? This is something that Science Fiction writers and Futurists have discussed for decades. Think of it as the point at which 'AI Intelligence' passes 'Human Intelligence'. We can assume that at first the difference might be small, but the advantages machines have in power and speed are such that 'AI Intelligence' soon becomes far greater, probably by a very large margin.
It is called 'The Singularity' because the term denotes a point in time or space at which the known rules cease to apply and what lies beyond cannot easily be predicted.
It used to be something people thought was far in the future. Estimates now vary, but very few experts think it is far off.
Discussions here fall into three main areas:
[a] This must not be allowed to happen. How do we stop it happening?
[b] If it happens, imagine the very positive outcomes for humanity (see Longevity below).
[c] If it happens, how do we at least align the 'AI Intelligence' with our own desired outcomes?
Please consider reading the recommended books on the home page if you wish to delve more deeply into this.
B2. What is meant in this context by 'Longevity'? Ray Kurzweil has a very good record of predicting the future (see the earlier book references). This year (2024) he has suggested that by 2029 the average life expectancy of a person will go up by more than one year for every year they live - a situation sometimes called 'longevity escape velocity'.
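As a back-of-the-envelope illustration of that arithmetic (the figures below are invented purely for illustration, not Kurzweil's own numbers): if each calendar year of progress adds more than one year of life expectancy, remaining expectancy grows rather than shrinks.

```python
# Toy arithmetic for the 'more than one year per year' claim.
# All figures are invented purely for illustration.
remaining_years = 10.0  # hypothetical remaining life expectancy
gain_per_year = 1.2     # hypothetical years of expectancy added per calendar year

for year in range(1, 6):
    # One year passes (-1), but medical progress adds more back (+gain_per_year).
    remaining_years += gain_per_year - 1.0
    print(f"Year {year}: {remaining_years:.1f} years of expectancy remaining")

# Because gain_per_year > 1, remaining_years rises each year instead of falling.
```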
What we certainly expect is a speeding up of development by medical science. AI is naturally very good at pattern matching, and a large amount of investment has already gone into specialised medical research tools. Alongside that, we should expect diagnosis to become both faster and more accurate.
There is a complete Presentation on the 'Health and Longevity' aspects of AI here:
AI in Health and Longevity Presentation.
B3. The Age of Abundance. It seems very likely that one consequence of the rapid growth of AI and of humanoid robots is that factory production will become enormously efficient. This can already be seen with companies like Tesla building 'gigafactories'. Before long, for example, cars of superior quality to today's models should be on the market for perhaps £10,000 or less. The same will be seen across a wide range of products. Luxury products will still exist, but everyday equivalents will be of very similar quality.
Some believe that these benefits should be reasonably evenly distributed between different demographic and income groups. One comparison is with the development of smartphones: there, even the richest and best-connected person has access only to the very same phones that we could all purchase if we wished.
The implications for existing businesses are significant. As has been seen many times in human history, when highly competitive products are introduced to a market quickly, existing suppliers can rapidly go out of business. The consumer, by contrast, can benefit greatly, provided that their own source of income is maintained.