AI Safety: A Liberal Arts Approach

Liberal Arts equips us to approach AI safety as a ‘human systems’ problem, not a purely technical one. In practice, the risks that matter most—misaligned incentives, opaque institutions, cultural blind spots, brittle governance, and unintended social consequences—emerge at the boundary between code and society. A siloed viewpoint can optimize one layer (model performance, legal compliance, or ethics statements) while missing how the whole system behaves when deployed at scale. By contrast, liberal arts training builds the habit of integrating ways of knowing: empirical evidence and statistical reasoning, philosophical clarity about values and responsibility, historical awareness of how technologies reshape power, and communicative skill for building legitimacy across stakeholders. That integrative capacity doesn’t replace technical expertise—it makes it safer—because it helps us ask the right questions early, notice second-order effects, and design AI that is trustworthy not only in the lab, but in the lived reality of diverse communities.

The Issue: An AI That Attempted Murder to Avoid Shutdown

The Approaches:

Team: Emile Englebrtecht Kristensen, Gaby Palacios, Moksh Oswal, Tawsia Rozana, Jackson Dawson
