Natural Law in the Age of Artificial Intelligence: Evaluating the Benefits and Drawbacks of Natural Law in Fast-Moving Legal Systems

Tamara Long

Introduction
Artificial intelligence (AI) is reshaping governance, economics, and law, challenging established ideas of authority, agency, and accountability. In legal systems, AI automates decisions, predicts outcomes, and enforces compliance by delivering efficiency but heightening risk. As technology outpaces regulation, the task is to preserve the human values that underpin justice. Natural law, grounded in reason and the common good, provides a durable lens for judging legal and moral legitimacy in an algorithmic age.
This essay examines how Aquinas, Grisez, Finnis, and Crowe reinterpret natural law for modern contexts and how their ideas inform Australian AI governance through the Artificial Intelligence Ethics Framework (2019), the Safe and Responsible AI in Australia: Interim Response (2024), and the Robodebt Royal Commission (2023). It argues that while natural law remains a vital moral foundation, its relevance depends on continual adaptation within pluralist, technologically driven societies.

1. Aquinas and the Rational Foundation of Law

Thomas Aquinas’s Summa Theologica remains the cornerstone of natural law theory. He defines law as “an ordinance of reason for the common good, made by him who has care of the community, and promulgated.”[1] For Aquinas, law must be rational, ordered toward the common good, and aimed at promoting moral virtue.[2] It is divided into four interconnected kinds: eternal law (the divine order), natural law (the participation of human reason in divine law), divine law (revelation), and human law (positive law enacted by governments).[3] Human law draws its authority from its conformity with natural law; when it deviates from reason and morality, it ceases to be true law.[4]

Aquinas’s natural law also grounds the concept of human dignity. Because humans possess reason, they share in divine rationality. Every act of lawmaking and adjudication must therefore respect human rationality and autonomy.[5]

This human-centred orientation foreshadows modern human rights discourse and directly parallels the ethical principle in Australia’s AI Ethics Framework that AI must be designed to serve human wellbeing.[6] Algorithms that make decisions affecting people’s lives must operate under human supervision capable of moral evaluation, a point that becomes critical when examining the Robodebt scheme’s failures.[7]

2. Grisez and the Modern Renewal of Natural Law

Germain Grisez sought to revitalise Aquinas’s moral reasoning for a secular and pluralistic world. In Natural Law, God, Religion, and Human Fulfillment, Grisez argues that moral norms derive from the intrinsic structure of human nature rather than divine command.[8] He identifies several “basic goods,” including life, knowledge, friendship, play, aesthetic appreciation, practical reasonableness, and religion, that together constitute human flourishing.[9] The function of law, in turn, is to coordinate and protect these goods, ensuring that society promotes genuine human fulfilment.[10]

Grisez’s approach modernises natural law by grounding morality in practical reason rather than theology. Moral truth is discovered through reflection on human experience and rational choice, not revelation.[11] This shift allows natural law to remain relevant in secular democracies like Australia. It also aligns with contemporary governance frameworks that aim to justify public decisions through shared reason rather than faith-based authority. However, as Grisez acknowledges, moral pluralism creates difficulty in determining universal goods within culturally diverse societies.[12]

Grisez’s framework supports a proactive ethical stance: technology and law should not merely avoid harm but promote genuine human flourishing. This notion is echoed in the Safe and Responsible AI Interim Response (2024), which emphasises the need for AI systems to advance wellbeing and fairness through “human-centred design.”[13] By connecting law to moral purposes, Grisez’s modern natural law provides an ethical vocabulary for emerging technologies, ensuring that progress remains guided by reasoned moral reflection.[14]

3. Finnis: The Rule of Law and the Common Good

John Finnis expanded natural law into a comprehensive modern legal theory in Natural Law and Natural Rights. He identifies seven “basic goods” as the fundamental dimensions of human wellbeing: life, knowledge, play, aesthetic experience, friendship, practical reasonableness, and religion.[15] These goods form the basis of all moral and legal reasoning. Law’s legitimacy lies in its rational coordination of these goods for the common good.[16] The common good is not simply collective welfare but a moral order enabling individuals to pursue their goods in harmony with others.[17]

Citizens have a “generic and presumptive obligation” to obey the law because it enables cooperative coordination, but this duty ceases when laws contradict reason or justice.[17] This distinction provides the moral foundation for lawful resistance to unjust systems, such as AI algorithms or administrative programs that operate contrary to fairness or truth.[18] Finnis’s concept of the rule of law, requiring general, public, prospective, and stable laws, ensures procedural justice and predictability.[19] These procedural requirements correspond directly to contemporary AI principles of transparency, accountability, and reliability outlined in Australia’s AI Ethics Framework.[19]

Finnis’s emphasis on reason, fairness, and moral accountability highlights why automation must never be absolute. Machines cannot deliberate or balance goods; they follow rigid sequences. For law to remain just, human oversight must persist.[20] Thus, Finnis’s principles affirm that AI decision-making, if it replaces human reason entirely, risks undermining both the moral and procedural legitimacy of law.[21]

4. Crowe: Secular Natural Law and Public Reason

Jonathan Crowe advances the natural law tradition by offering a secular and inclusive interpretation.[22] In Natural Law with and without God, he argues that moral norms are accessible through human reason and shared experience, independent of divine revelation.[23] He defines natural law as “the rational reflection on the goods necessary for human flourishing,” such as justice, fairness, and social cooperation.[24] Crowe’s secularisation of natural law allows it to operate within pluralist democracies where legal legitimacy must be justified to citizens with diverse worldviews.[25]

Crowe’s theory bridges classical natural law and contemporary liberal thought by rooting moral obligation in human nature rather than theology.[26] This model strengthens public accountability: legal systems are judged not by faith-based doctrine but by their rational service to human welfare.[27] In the context of AI, this secular framework ensures that ethical standards for technology, such as transparency, fairness, and explainability, can command universal acceptance.[28] However, Crowe acknowledges that moral pluralism creates interpretive difficulties. While reason identifies universal goods, societies may disagree about their prioritisation.[29] This mirrors debates in AI governance over whether safety, efficiency, or privacy should take precedence in algorithmic design.[30]

Crowe’s contribution is crucial to reconciling ancient natural law theory with modern public reason.[31] It allows natural law to inform legal and policy debates in a secular environment like Australia, where lawmaking must appeal to reason shared by all citizens.[32] This secularised natural law is evident in the ethical frameworks governing AI, which articulate moral principles such as fairness, accountability, and transparency that resonate with Aquinas’s and Finnis’s traditions, even if expressed in policy rather than theology.[33]

5. Natural Law Reflected in Australian AI Governance

The Artificial Intelligence Ethics Framework (2019), developed by CSIRO’s Data61, translates natural law principles into contemporary policy language.[34] It establishes eight principles: generate net benefit, do no harm, ensure legal compliance, protect privacy, promote fairness, ensure transparency and explainability, enable contestability, and uphold accountability.[35] These principles closely parallel Aquinas’s conception of rational law aimed at the common good and Finnis’s demand for procedural fairness.[36]

The Framework also reinforces human oversight and responsibility. It mandates that “humans in the loop” must retain the capacity to review, contest, and correct automated outcomes.[37] This principle reflects Aquinas’s view and resonates with Grisez’s emphasis on practical reasonableness.[38]

The Safe and Responsible AI in Australia Interim Response (2024) further operationalises these ideas. It acknowledges the moral and legal challenges of high-impact AI systems and emphasises the need for “trustworthy AI” built upon transparency, accountability, and societal oversight.[39] It proposes regulatory measures that ensure human decision-makers remain accountable for automated outcomes, reinforcing the natural law principle that moral responsibility cannot be delegated to machines.[40] The report’s call for “society in the loop” explicitly echoes natural law’s vision of law as a collective expression of reason oriented to the common good.[41]

Yet these frameworks, while valuable, risk becoming performative if detached from substantive moral reasoning.[42] Natural law critiques such procedural ethics as incomplete; fairness and transparency must be grounded in moral understanding of human dignity and justice.[43]

6. Robodebt: A Case Study in the Collapse of Natural Law

The Royal Commission into the Robodebt Scheme (2023) offers a vivid illustration of what happens when law is stripped of moral reasoning.[44] The Robodebt program, introduced by the Department of Human Services (now Services Australia) in 2016, used automated data-matching between welfare and tax records to identify overpayments.[45] However, it relied on averaging annual income data rather than verifying actual fortnightly earnings, contrary to the Social Security Act 1991 (Cth).[46] This algorithmic shortcut generated hundreds of thousands of false debts and caused significant distress.[47] The Commission observed that the system was a “complete administrative failure”.[48]
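The arithmetic behind this failure can be sketched in a few lines. The sketch below is illustrative only: the dollar amounts and the fortnightly income-free threshold are hypothetical figures, not the statutory rates, and the function is a simplification, not the scheme’s actual code. It shows how spreading a yearly total evenly across fortnights can manufacture apparent overpayments in periods when a claimant in fact earned nothing.

```python
# Illustrative sketch of Robodebt-style income averaging.
# THRESHOLD is a hypothetical fortnightly income-free amount,
# not the figure used under the Social Security Act 1991 (Cth).

FORTNIGHTS = 26
THRESHOLD = 437  # hypothetical, for illustration only

def flag_debt(fortnightly_income, threshold=THRESHOLD):
    """Return the fortnights whose reported income exceeds the threshold."""
    return [i for i, income in enumerate(fortnightly_income) if income > threshold]

# A claimant who earned $2,000 per fortnight for half the year,
# then nothing while receiving welfare payments.
actual = [2000] * 13 + [0] * 13

# Averaging: the annual total ($26,000) spread evenly over 26 fortnights,
# i.e. $1,000 per fortnight, erasing the distinction between the two halves.
averaged = [sum(actual) / FORTNIGHTS] * FORTNIGHTS

print(flag_debt(actual))    # only the genuine earning fortnights (0-12)
print(flag_debt(averaged))  # averaging flags all 26 fortnights
```

On the averaged figures, every fortnight appears to exceed the threshold, including the thirteen in which the claimant earned nothing, so the system would raise debts for periods in which no overpayment occurred.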

Terry Carney describes Robodebt as the culmination of “zombie decision-making,” a bureaucratic reliance on automation that eliminated human discretion and empathy.[49] The program’s logic violated Aquinas’s conception of law as rational and just. It also breached Finnis’s principle of the common good, replacing moral judgment with blind computation.[50] Anna Huggins characterises this as a “translation error” between statutory language and algorithmic code, an error that turned law into an instrument of injustice.[51] As Huggins noted, statutory provisions are inherently interpretive and contextual, while encoding them into rigid algorithms strips away their meaning.[52]

From a natural law standpoint, Robodebt represents a complete collapse of moral and legal legitimacy. The absence of human oversight and moral reasoning transformed lawful administration into systemic injustice.[53] The system not only breached positive law but violated the moral law’s foundational precepts: do good, avoid evil, and act according to reason.[54] As the Commission concluded, there was “no meaningful human intervention” in debt calculations, a direct affront to Aquinas’s belief that law must be applied by rational moral beings.[55]

7. Integrating Natural Law and Secular Ethics in Future Governance

The challenge for contemporary law is to integrate moral reasoning into technologically mediated decision systems. Natural law provides a philosophical foundation for doing so.[56] Its focus on reason, justice, and the common good ensures that ethics frameworks go beyond procedural formality.[57] Finnis’s emphasis on authority grounded in moral reason clarifies that humans must remain accountable for the systems they design.[58] Crowe’s secularisation ensures this accountability is inclusive and rationally accessible in a pluralistic democracy. Grisez’s focus on human fulfilment reminds policymakers that law’s ultimate purpose is to promote wellbeing, not efficiency.[59]

Practical implementation requires embedding these moral principles in governance. This includes mandatory ethical impact assessments, continuous monitoring for fairness, and enforceable accountability for automated decisions.[60] The inclusion of “humans in the loop” and “society in the loop” reflects Aquinas’s understanding that law is not mechanical but relational, rooted in reason and dialogue.[61] Future reforms must combine the procedural safeguards of secular ethics with the substantive moral insight of natural law to ensure justice in the digital age.[62]

Conclusion
Natural law remains a moral compass in an age of automation. Aquinas anchored law in reason and the common good; Grisez recast it through human flourishing; Finnis formalised its principles of justice and authority; and Crowe adapted it for secular societies. Together, they frame a vision of AI governance where technology serves humanity, not the reverse. Australia’s ethical frameworks echo this interdependence between law and morality, while the Robodebt scandal exposes the danger of reducing law to code without conscience.
Natural law’s strength lies in uniting legality and morality through reason; its weakness lies in the reflective effort it demands in a culture of expedience. Yet its relevance endures. It offers the moral and intellectual structure needed to ensure that AI systems and laws remain anchored in justice, reason, and the common good.

  1. Brian Davies, Thomas Aquinas’s Summa Theologiae: A Guide and Commentary (Oxford University Press, 2014) 212–228. ↑

  2. Ibid [212]. ↑

  3. Ibid [213]. ↑

  4. Ibid [215]. ↑

  5. Ibid [214-215]. ↑

  6. D Dawson et al, Artificial Intelligence: Australia’s Ethics Framework (Discussion Paper, Data61 CSIRO, 2019). ↑

  7. Royal Commission into the Robodebt Scheme, Report of the Royal Commission into the Robodebt Scheme, Volume 1 (Australian Government, 2023) 1–1052. ↑

  8. Germain Grisez, ‘Natural Law, God, Religion, and Human Fulfillment’ (2001) 46(1) The American Journal of Jurisprudence 3–36. ↑

  9. Ibid [7]; Aiyar, ‘The Problem of Law’s Authority: John Finnis and Joseph Raz on Legal Obligation’ (2000) 19(4) Law and Philosophy 465 [466]. ↑

  10. Ibid [6]. ↑

  11. Ibid [27-30]; Jonathan Crowe, ‘Natural Law with and without God’ (2024) 4(1) Australian Journal of Law and Religion 17 [17-22]. ↑

  12. Ibid [17-22]; John Finnis, Natural Law and Natural Rights (2nd ed, Oxford University Press, 2011) 1-511 (Clarendon Law Series). ↑

  13. Department of Industry, Science and Resources (Cth), Safe and Responsible AI in Australia: Australian Government’s Interim Response (Report, Australian Government, 2024) 1–25. ↑

  14. Grisez (n 8) [26]; Adam Henschke, Ethics in an Age of Surveillance: Personal Information and Virtual Identities (Cambridge University Press, 2017) 252-266 [252]. ↑

  15. Finnis (n 12) [85]. ↑

  16. Ibid; Grisez (n 8). ↑

  17. Finnis (n 12) [134]; Aiyar (n 9) [487]. ↑

  18. Finnis (n 12). ↑

  19. Finnis (n 12) [260]; Aiyar (n 9) [476]. ↑

  20. Department of Industry, Science and Resources (n 13); Anna Huggins, ‘Addressing Disconnection: Automated Decision-Making, Administrative Law and Regulatory Reform’ (2021) 44(3) UNSW Law Journal 1048–1077. ↑

  21. Finnis (n 12); Royal Commission into the Robodebt Scheme (n 7). ↑

  22. Crowe (n 11) [17-19]. ↑

  23. Ibid [24]. ↑

  24. Aiyar (n 9) [466-489]. ↑

  25. Crowe (n 11) [25-28]; Mark C Murphy, Natural Law in Jurisprudence and Politics (Cambridge University Press, 2006) 112-132 (Cambridge Studies in Philosophy and Law). ↑

  26. Ibid (n 11) [25-28]. ↑

  27. Ibid (n 11); Henschke (n 14). ↑

  28. Dawson (n 6) [57]; Department of Industry, Science and Resources (n 13) [8]. ↑

  29. Crowe (n 11) [17-22]; Grisez (n 8) [30]. ↑

  30. Department of Industry, Science and Resources (n 13) [15-18]; Terry Carney, ‘Artificial Intelligence in Welfare: Striking the Vulnerability Balance?’ (2021) UNSW Law Journal (Advance) 1-36 [4-5]. ↑

  31. Crowe (n 11); Finnis (n 12). ↑

  32. Ibid (n 11); Huggins (n 20). ↑

  33. Dawson (n 6) [7-13]; Department of Industry, Science and Resources (n 13) [10-25]. ↑

  34. Department of Industry, Science and Resources (n 13) [1-5]. ↑

  35. Ibid (n 13) [8]. ↑

  36. Davies (n 1); Finnis (n 12). ↑

  37. Dawson (n 6) [33]. ↑

  38. Davies (n 1); Grisez (n 8). ↑

  39. Dawson (n 6) [1-6]. ↑

  40. Ibid (n 6) [4-6]; Finnis (n 12). ↑

  41. Department of Industry, Science and Resources (n 13) [22-25]; Crowe (n 11) [25]. ↑

  42. Henschke (n 14) [252-255]. ↑

  43. Finnis (n 12); Grisez (n 8) [23-25]. ↑

  44. Royal Commission into the Robodebt Scheme (n 7) [1-10]. ↑

  45. Ibid. ↑

  46. Carney (n 30) [15]; Huggins (n 20) [1056]. ↑

  47. Carney (n 30) [16]; Huggins (n 20) [1057]. ↑

  48. Royal Commission into the Robodebt Scheme (n 7). ↑

  49. Carney (n 30). ↑

  50. Davies (n 1); Finnis (n 12); Royal Commission into the Robodebt Scheme (n 7). ↑

  51. Huggins (n 20) [1052]. ↑

  52. Ibid. ↑

  53. Royal Commission into the Robodebt Scheme (n 7); Finnis (n 12); Carney (n 30); Department of Industry, Science and Resources (n 13) [9]. ↑

  54. Davies (n 1); Grisez (n 8). ↑

  55. Royal Commission into the Robodebt Scheme (n 7); Davies (n 1). ↑

  56. Finnis (n 12); Crowe (n 11); Davies (n 1); Grisez (n 8). ↑

  57. Finnis (n 12); Dawson (n 6) [6-13]. ↑

  58. Finnis (n 12); Carney (n 30). ↑

  59. Crowe (n 11); Henschke (n 14); Grisez (n 8); Department of Industry, Science and Resources (n 13). ↑

  60. Dawson (n 6) [10-13]; Department of Industry, Science and Resources (n 13). ↑

  61. Davies (n 1); Department of Industry, Science and Resources (n 13). ↑

  62. Crowe (n 11); Finnis (n 12). ↑