The philosophical and regulatory tensions of Trustworthy AI: an examination of trust, trustworthiness, and reliance in AI systems
This thesis explores the philosophical and regulatory dimensions of Trustworthy AI, focusing on the European Union’s Artificial Intelligence Act (European Parliament, 2024) and the Ethics Guidelines for Trustworthy AI issued by the High-Level Expert Group on Artificial Intelligence (HLEG, 2019). As AI systems become increasingly integrated into society, concerns about their ethical, legal, and societal impacts have prompted regulatory frameworks and guidelines; the HLEG guidelines, in particular, place strong emphasis on trust and trustworthiness. This thesis questions whether AI systems, as non-agential artefacts, can genuinely embody trust and trustworthiness in the way that humans or institutions can, raising doubts about the appropriateness of framing AI governance in these terms. Drawing on prominent philosophical accounts of trust, trustworthiness, and reliance, the thesis highlights that trust inherently involves vulnerability, moral agency, and ethical motivation. These qualities appear incompatible with AI systems, which lack autonomy and moral intent. While AI can achieve a high degree of reliability, it cannot satisfy the relational and ethical conditions essential for genuine trust and trustworthiness. By critically examining the EU’s Trustworthy AI framework, this thesis argues that the EU model conflates trust with mere reliance and thus falls short of capturing true trustworthiness. Given the EU’s regulatory influence, this critique has significant implications for the future of AI governance, particularly regarding the appropriateness of applying human-like expectations of trust to technological systems.