To analyse whether machines can be persons, we must first establish the necessary criteria for personhood: rationality; creativity; autonomy; responsibility; the ability to communicate meaning through language; the ability to reflect on one's own experiences, feelings and motives as well as those of others; the possession of both mental and physical characteristics; the possession of a network of beliefs; and the ability to be social, establishing a sense of self through relationships and sentience.
Technology has been developing rapidly: we now have human-like robots such as ASIMO, which possess some of the characteristics of personhood, including language (it can call objects by their names), mental and physical characteristics (it has a spatial perspective), and a network of beliefs (it can make inferences and distinguish between objects). Although these characteristics aren't as well developed as those in humans, and only some of them are present, ASIMO shows us that machines can be persons to a certain extent, and that they possess the potential to eventually develop all of the necessary characteristics.
If technology developed far enough for robots to possess all the characteristics of personhood, essentially creating androids, these androids would be capable of passing the Turing test: they would be able to hold a conversation such that one could not distinguish them from a human. This would imply that machines can think, as they would outwardly demonstrate all the characteristics which signify thought. Nevertheless, many philosophers would still dismiss the idea that machines could ever be persons.
According to certain philosophers, such as John Searle, there seems to be something missing; in Searle's case, "understanding". Searle tries to show, using a human as the example, that machines don't actually understand. He presents the Chinese Room thought experiment, a scenario in which a human responds in a language they don't understand by following a set of instructions, the human's "program", if you like. The person receiving the responses is, as a result, convinced that the replies demonstrate that the human understands the language.
Searle suggests the same follows for machines: they simply follow a program to simulate understanding. If machines don't have understanding, Searle concludes, they cannot be described as "thinking" in the same sense as people. But if humans understand while machines only simulate understanding, humans must possess some characteristic which machines lack. Searle claims machines are missing the "right stuff": they are made of metal and silicon, unlike humans, who are biological machines that have evolved naturally.
Upon closer inspection of this claim, we find that our biological composition has no bearing on understanding. Suppose we gradually replaced the neurones in a human's brain with artificial silicon neurones which fulfil the same role. Searle would argue that the person would slowly lose their understanding, yet outwardly there wouldn't be any behavioural change. Because the artificial neurones perform the same job, not even the person themselves would notice any loss of understanding; if they did, they would behave differently, perhaps by remarking on their loss of understanding.
This demonstrates that there is no difference between the way sophisticated androids function and the way humans function: both believe they understand. Whilst robots are metal and silicon machines, humans are simply biological flesh-and-bone machines. We cannot disprove that robots understand, and we cannot prove that humans understand; both may simply be simulating understanding.
Some philosophers subscribe to the concept of philosophical zombies: beings who are indistinguishable from humans in all respects, except that they are either neurological zombies (lacking sentience) or soulless zombies. These philosophers would argue that if the same holds for machines, a machine would merely be such a "zombie". Upon closer analysis, however, the concept of a philosophical zombie is incoherent, as mental characteristics (such as sentience) cannot be separated from physical characteristics without resulting in behavioural differences.
The soul is a concept for which there is no physical evidence; by the same standard, humans would count as philosophical zombies, since it cannot be proven that they have souls. Taking this into account, it is hypothetically possible for machines to be persons, as there are no human qualities which machines could not one day possess. In addition, there appear to be no logical constraints preventing a machine from meeting the criteria for personhood.