
Ramsay Brown

Ramsay is an Intellectual Forum Senior Research Associate.

Ramsay Brown is the founder and CEO of The AI Responsibility Lab and Mission Control, the platform for trusting Generative AI. Based in Los Angeles, Rams has spent his career bridging innovation and impact: leading work in nanotechnology, brain mapping, behavioural engineering, AI, virtual reality therapeutics, and AI Safety. He works with the Fortune 500, governments, and militaries to make the future approachable and actionable for today's leaders. Now, as CEO of Mission Control, Rams helps the world's most successful brands accelerate quality, velocity, and trust in AI. You can find Rams' life and work in 60 Minutes, Time Magazine, GQ, and on the air with Monocle Radio. Rams holds an MSc and a BA from the University of Southern California.

What are you working on now?

I'm working on AI Governance. We're living through the most important time in human history: the very fabric of the built world around us is becoming imbued with synthetic intelligence. This simultaneously represents the greatest opportunity to accelerate human flourishing we've ever encountered and the gravest source of personal, societal, and existential risk. As CEO of The AI Responsibility Lab Public Benefit Corporation, and through my work with the IF as a Senior Research Associate, my colleagues and I are accelerating the memetics, software, and global leadership community that create a pathway towards safer, more flourishing outcomes with AI.

How has your career led to this?

While at the University of Southern California, I specialized in computational neuroscience and connectomics: the study of how the structure and connectivity of the nervous system generate animals' goal-oriented, adaptive, complex survival behaviour. I had the privilege to study and work under Dr Larry Swanson, arguably the greatest living neuroanatomist. He promoted a systems-level approach to thinking about behaviour: as emergent from particular cybernetic architectures, of which the brain is but one type. That systems-level thinking breeds an intellectual humility that makes it easier to take AI seriously as a scientist and engineer: AI systems are another kind of "thinking machine", just as we are. Getting really comfortable with the interplay of computation, behaviour, and cognition opened a lot of conceptual doors for me, and led to the founding of my first startup, Dopamine Labs. Working on AI-powered behavioural engineering at scale was a tremendous chance to explore how AI shapes people's behaviour. But it was when I was invited to speak at a NATO summit in 2018 that my career listed towards AI Governance and AI for Good. Speaking on the topic of "Do Humans Have Free Will in the Age of AI?" to military generals was an eye-opening experience: the future was arriving, and the world's most powerful organizations sought to align this exponential technology with our virtues and laws. That was the lightbulb moment for me.

What one thing would you most want someone to learn from what you've done or are doing right now?

The most important thing anyone could take from our work is that the world we step into every day is going to become increasingly alien over the next few years, and quickly. We're collectively almost completely unprepared to co-exist with thinking machines. And if that feels like a thought that's hard to think, it is. Human brains are nightmarishly bad at judging the speed and scale of accelerating change, especially when it's a singular, infrequent kind of change. That's what's happening with AI right now. And it's not really our fault that we're so bad at this; it's a 'cognitive bias': a sort of 'bug' in the code of our cognition that prevents us from thinking clearly or making good decisions. In neuroscience, we call this particular bias "Normalcy Bias". Normalcy Bias keeps us from acknowledging and acting on large, singular events that will dramatically impact our lives. It's why people don't get out of the way of approaching hurricanes, it's why we're having such a hard time with the climate catastrophe, and it's what's happening with AI right now. It's also why I think our work is so critical. If you want a glimpse of what 10 years from now will look like, try to imagine what the world (the economy, law, society, entertainment, sex, nature, sports, religion, work) looks like when human-level synthetic intelligence is too cheap to meter.

What do you think of Jesus College and the Intellectual Forum?

I'm extremely impressed with Jesus College and the Intellectual Forum. After hosting our first annual Leaders in Responsible AI Summit with Dr Huppert and his team in 2023, I was blown away. The complete coherence of the stunning grounds, world-class faculty, excellent staff, and a cultural dedication to putting in the work to facilitate the most pressing conversations is exactly what we look for in a partner. Julian has assembled not only brilliant thinkers but also an exceptionally compelling team that runs the IF. I'm extremely grateful to be a Senior Research Associate at the Intellectual Forum, and I look forward to continuing our great work together for years to come.

You can meet the rest of the Intellectual Forum team or contact us via email.