My research project at King’s College London aims to provide an original contribution to the ethics of AI by showing that current, preferred theories of moral status may require us to grant moral parity between humans and advanced AI systems of the near future.
Modern AI systems, especially LLMs such as ChatGPT, are already capable of extraordinary feats of knowledge gathering, problem-solving, reasoning and, arguably, understanding. These feats extend to complex concepts that were until recently the preserve of human minds. However, almost no-one believes that AI systems deserve a level of moral status similar to that of humans, or even animals. A creature has moral status when it counts for something in its own right. Yet, when we investigate what grounds moral status, the idea that AI systems could never have such status seems vulnerable. This research project considers what grounds moral status; identifies what conditions need to obtain in an entity for moral status to be owed; and asks whether those conditions could obtain in a near-future AI system. It concludes that under certain conditions, which may be uncommon but are nonetheless possible, we might owe certain AI entities a degree of moral status. If this conclusion can be adequately defended, its consequences should be considered before such morally relevant AI entities come to exist.