The AI+Well-Being Institute explores how artificial intelligence can enhance human flourishing, strengthen communities, and support adaptive public policy in times of rapid change. Our work bridges research, education, and practice, combining primary research, narrative analytics, cultural insight, and student-led projects to address urgent societal challenges. By treating well-being as the true measure of progress, we position AI not as a distant technology but as a present-day tool for resilience, creativity, and equity. The Institute is both a research hub and a learning platform, designed to equip the next generation with the skills and imagination to shape a thrivable future.
Located within ICLA (the International College of Liberal Arts in Japan), the Institute collaborates with international and local organisations, including the University of Tokyo, the University of Melbourne, and Significance Systems.
Why do very different languages converge on a similar communication rate? Our new paper argues that language is not built to maximize information, but to achieve shared understanding through noise, ambiguity, and repair. Communication, we suggest, works by compressing meaning across shared worlds.
Sleep may do far more than store memory. Our new model suggests the sleeping brain actively edits experience, pruning noise and preserving what matters most. In computational terms, sleep may be solving a compression problem.
AI Won’t Lead … But Your People Will. Chris Lowndes, AI lead for Accenture, and John Ricketts hold a series of workshops and discussions with ICLA students. Here we introduce the Value Creator 2035 blueprint: five human competencies our curriculum is engineered to build, and why they matter for your future talent pipeline. Watch the videos, read the findings, and get ready for the future … that’s where you’ll spend the rest of your life!
Policy failures, from climate shocks to pandemic mismanagement, reveal the inadequacy of static, prediction-based governance. “This is not a call to optimism, but to capability… to listen deeply, design bravely, and fail intelligently.”
Global GDP has tripled since 1970, but life satisfaction in wealthy nations has barely moved. Our paper argues that the key issue is not how much economies grow, but where that growth goes. What matters for wellbeing is spending composition, not spending alone.
Safer AI is possible. The tools already exist — but many are not used because they slow growth and hurt engagement metrics. Our new paper argues that AI safety is not just a model problem, but an institutional one.
The Wellbeing Observatory is an ongoing research project at the AI+Wellbeing Institute. Built on a novel approach to quantifying how economies contribute to or detract from wellbeing, this interactive global dashboard visualizes “wellbeing efficiency”: the extent to which economies support or detract from wellbeing, across countries from 1970 to today.
This simulation provides an interactive exploration of the empirical relationships between personality, cognition, and authoritarian attitudes as established in the peer-reviewed literature. It is designed for educational use in courses covering political psychology, personality research, and social cognition.
To make AI safer, we need more than technical skill. We need people who can connect evidence, values, history, culture, and governance. That is what liberal arts brings to AI safety.
The Dark Enlightenment is no longer just fringe theory. Its anti-democratic logic now echoes through parts of Silicon Valley and the New Right. This critique examines how those ideas moved from the margins toward power.
What happens to purpose in a world where AI can do so much? This short documentary follows a personal journey of losing direction, then finding it again through reflection, family, and the Ikigai model. It asks what remains uniquely human — and where meaning can be found, and nurtured.
What happens when a place builds on its own culture rather than importing a model from outside? Brookvale Arts District shows how light coordination and shared identity can unlock local energy, investment, and long-term momentum.
ICLA Students Are Designing What Comes Next. 2028–2032: a possible breaking point, when “normal” no longer holds. As society rethinks labour, trust, and value, our students are asking the most important question: what world do we want to build? And then they begin building it.
Care work is everywhere — except where economies choose to look. This campaign uses AI-assisted erasure to remove caregivers from familiar scenes, making their absence impossible to ignore. The labor remains invisible; the workers do not.