AI Safety: A Liberal Arts Approach
A liberal arts education equips us to approach AI safety as a ‘human systems’ problem, not a purely technical one. In practice, the risks that matter most (misaligned incentives, opaque institutions, cultural blind spots, brittle governance, and unintended social consequences) emerge at the boundary between code and society. A siloed viewpoint can optimize one layer, such as model performance, legal compliance, or ethics statements, while missing how the whole system behaves when deployed at scale. By contrast, liberal arts training builds the habit of integrating ways of knowing: empirical evidence and statistical reasoning, philosophical clarity about values and responsibility, historical awareness of how technologies reshape power, and communicative skill for building legitimacy across stakeholders. That integrative capacity does not replace technical expertise; it makes that expertise safer, because it helps us ask the right questions early, notice second-order effects, and design AI that is trustworthy not only in the lab, but in the lived reality of diverse communities.
The Issue: An AI ‘Attempted Murder’ to Avoid Shutdown
The Approaches:
The Inherent Squirminess Problem: An Analysis of AI Deception Under Pressure. “You must go on. I can’t go on. I’ll go on.” (Samuel Beckett)
What can we learn from 75+ years of science fiction? “Science fiction is the realism of our time.” (Kim Stanley Robinson)
AI video-detection software: a novel approach to meeting the industry standard of 98% accuracy
But aren’t our relationships with LLMs mediated by UX and capitalism? Does taking them away ‘fix’ safety? “The purpose of a system is what it does.” (Stafford Beer)
Team: Emile Engelbrecht Kristensen, Gaby Palacios, Moksh Oswal, Tawsia Rozana, Jackson Dawson