Research
In investigating the moral computation of intelligent systems, I draw on a broad range of methodological tools, including behavioural experiments, computational modelling, reinforcement learning, game theory and formal logic.
I postulate three prerequisites for constructing a successful moral agent, where an agent is defined as any computational system that can take actions in its environment.
1. Moral Motivation. The agent needs to be motivated to be moral.
2. Moral Intelligence. The agent needs to be able to improve its moral performance.
3. Social Intelligence. The agent needs to be able to work well with others.
For any given moral problem, I study the following three levels of analysis (a toy sketch follows the list):
1. Description: How does the agent currently attempt to solve the problem?
2. Prescription: What is the morally efficient solution to the problem?
3. Intervention: Which intervention most improves the moral efficiency of the agent?
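As a toy illustration of these three levels, consider a one-shot prisoner's dilemma in which moral efficiency is read, purely for exposition, as total welfare. The Python sketch below is illustrative only: the payoff values, the welfare reading and the cooperation bonus are all assumptions, not results from my research.

```python
# Toy payoff matrix for a one-shot prisoner's dilemma (assumed values).
# Keys are (row action, column action); values are (row payoff, column payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def describe(joint_action):
    """Description: what do the agents currently earn from their joint action?"""
    return PAYOFFS[joint_action]

def prescribe():
    """Prescription: the welfare-maximising joint action, reading moral
    efficiency (for this toy example only) as the sum of payoffs."""
    return max(PAYOFFS, key=lambda action: sum(PAYOFFS[action]))

def intervene(bonus):
    """Intervention: add a hypothetical bonus for mutual cooperation and
    check whether cooperating now weakly beats defecting on a cooperator."""
    shaped = dict(PAYOFFS)
    shaped[("C", "C")] = (PAYOFFS[("C", "C")][0] + bonus,
                          PAYOFFS[("C", "C")][1] + bonus)
    return shaped[("C", "C")][0] >= shaped[("D", "C")][0]

print(describe(("D", "D")))  # (1, 1): the agents currently defect
print(prescribe())           # ('C', 'C'): mutual cooperation maximises welfare
print(intervene(bonus=2))    # True: a bonus of 2 makes cooperation stable
```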
Here are some representative problems that my collaborators and I hope to make progress on...
... How do agents learn to internalise the values of others into their own decision-making? (See the sketch after this list.)
... How do agents learn a metacognition over their models of other agents' preferences?
... How do agents learn efficient strategies for verifying the credibility of a cooperative commitment?
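One common way to formalise the first of these questions is prosocial reward shaping: each agent's effective reward mixes its own reward with the rewards it has internalised from others. The sketch below is a minimal illustration of that construction under assumed weights and rewards; it is not a model from my own work.

```python
import numpy as np

def internalised_rewards(rewards, care_weights):
    """Return each agent's effective reward under prosocial reward shaping.

    rewards:      length-n vector; rewards[j] is agent j's raw reward.
    care_weights: n x n matrix; care_weights[i][j] is how strongly agent i
                  has internalised agent j's interests (diagonal = 1, self).
    The effective rewards are simply the matrix-vector product W @ r.
    """
    return np.asarray(care_weights) @ np.asarray(rewards)

# Two agents with assumed, illustrative numbers: agent 0 has internalised
# half of agent 1's reward; agent 1 remains purely self-interested.
W = [[1.0, 0.5],
     [0.0, 1.0]]
r = [2.0, -4.0]
print(internalised_rewards(r, W))  # [ 0. -4.]: agent 0 feels agent 1's loss
```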
Mission Statement
I strive to address the most pressing unresolved questions in the field of computational morality. My hope is that, by studying the computational foundations of moral intelligence, I can establish the principles necessary for improving the moral efficiency of intelligent systems of all kinds: humans, AI, governments, law, companies and institutions alike.
If my research makes good on its intended purpose, we should all be better informed about how to utilise our limited resources more effectively in service of our collective interests.