Why your AI company should have a Head of AI Ethics on day 1


Six months ago, we launched Spheria, a platform where people can create and host the AI version of themselves. As founders and consumers ourselves, we knew from day 1 that we wanted to build a product that protects user privacy, far from the abusive practices of Google and Facebook…

Today I'm sharing our experience of hiring a Head of AI Ethics as our startup's very first employee, and how he turned our naive good intentions into actual science and a foundational framework, so we could build a legitimate platform that people trust to create the AI version of themselves.

Realizing how much we didn't know about Ethics

Like most founders, we were focused on delivering a great product and growing our user base, while trying to stay true to our moral compass.

In the very first meeting, Alejandro, our new Head of AI Ethics, brought us a framework to organize the big questions around Privacy and Ethics. Instead of letting us compile an unorganized list of principles, he immediately cross-referenced ethical frameworks he had encountered in his research and seen deployed in existing organizations.

Our Head of AI Ethics introduced us to a concept called Procedural Fairness, a framework used by the World Bank to make sure Fairness is at the center of its decisions and policies. So the biggest win (right after the second meeting!) was graduating from a chaotic list of good intentions to Ethics frameworks actually used by researchers and international organizations.


Principles and operational criteria of procedural fairness - the World Bank framework we adapted to set our AI Ethics foundations

Spheria's procedural-fairness framework for AI Ethics, adapted for owning a personal AI

Right after that second meeting, we defined 4 pillars as Spheria's foundation for AI Ethics: Transparency, Fairness, Accountability, and Privacy. Using this framework allowed us to visualise the relations between the pillars, and to see how a single idea can have ripples and implications across several of them.

The consequences were immediate: these pillars led us to ask ourselves the right questions, and brought a new dimension of awareness:

  • How do we evaluate Fairness for our product?
  • How do we make sure every feature we create is inclusive?
  • As a platform that creates new AIs based on real people, are we accountable for the perpetuation of discrimination?
  • How are we transparent and accountable when making an arbitrary decision?


It's okay not to have all the answers, but it's important to ask the right questions

During our first month, every meeting with our Head of AI Ethics felt like opening Pandora's box, in a good way: a million questions arose around freedom of speech, bias, inclusion, and censorship, and each one felt as legitimate and urgent as the next. “The goal,” as Alejandro put it, was to “elevate ourselves to a higher level of confusion.” This mindset, that it was okay to still be working towards the right answers as long as we did so in a transparent and inclusive way, would become the foundation of our ethical policy.

It became clear this would take time and a lot of consideration, so we started an internal document listing every question that came up in meetings; we needed to keep track of all the ideas.

We would write questions down as they came, then spend a minute evaluating whether each question was properly framed, what tension it created around which ethical concept, and where and how that tension would surface in the product or in how it's used. Finally, we would assess whether the question could be broken down into smaller parts, to bring more granularity.

For example, the spontaneous question “what is our moderation policy?” needed to be broken down into sub-questions like:

  • “In what cases do we need a moderation policy?”
  • “Are there laws that aim to prevent someone from adding sensitive or illegal content into their own AI's knowledge?”
  • All the way to: “Should we filter the input, i.e., block the owner of an AI from adding illegal content to it? Or should we block the output, i.e., stop the AI from sharing information about illegal content?” (see the sketch after this list)
  • And finally to the point of tension: “How hard do we need to fight the perpetuation of immoral content that is, let's be real, available elsewhere on the internet?”
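
To make that input-versus-output tension concrete, here is a minimal sketch in Python of the two places a moderation check can live. It is purely illustrative: `violates_policy`, `add_knowledge`, and the other names are hypothetical stand-ins, not Spheria's actual implementation.

```python
# A minimal sketch of the two moderation points, under assumed names.

def violates_policy(text: str) -> bool:
    """Stand-in policy check; a real system might use a classifier or rule set."""
    banned_terms = {"forbidden-topic"}  # illustrative placeholder only
    return any(term in text.lower() for term in banned_terms)

def generate_reply(knowledge: list[str], question: str) -> str:
    """Stub for the model call; a real platform would query an LLM here."""
    return f"(answer to {question!r} drawn from {len(knowledge)} documents)"

def add_knowledge(knowledge: list[str], document: str) -> None:
    """Option 1: filter the input, blocking the owner from adding the content."""
    if violates_policy(document):
        raise ValueError("Document rejected by input moderation.")
    knowledge.append(document)

def answer(knowledge: list[str], question: str) -> str:
    """Option 2: filter the output; the AI may know things it won't share."""
    draft = generate_reply(knowledge, question)
    if violates_policy(draft):
        return "I can't share that."
    return draft
```

The trade-off is visible in the sketch: filtering the input keeps the AI's knowledge base clean but constrains its owner, while filtering the output preserves the owner's freedom at the cost of the AI knowing things it must then refuse to repeat.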

That list of open questions is significant today, but it's also valuable: it lays our foundations as a company and as a moral entity, and gives the team a real direction towards building a future we believe in.

Given the extensive list of open questions, we aim to progressively publish partial answers that show the progress we've made, and to be transparent with our users about our findings, resolutions, and decisions.

Being accountable to our users - actions speak louder than words

Most companies and startups want users to trust them, so they coin a nice catchphrase like “we love privacy” or “we are ethical” and get away with it.

After our launch, we saw that our privacy page was the most visited page after our landing page. We knew we had to do more than just publish a privacy policy, but at the time we didn't necessarily know how, or what to do.

Having our Head of AI Ethics on the team allowed us to act on this and show real, tangible work. We created our AI Ethics Hub to demonstrate our dedication and our efforts to be transparent, and to let users follow our progress.


By creating our AI Ethics Hub, we feel we're doing right by our users, especially given what we're building with Spheria: letting people create the AI version of themselves.

Our users don't think about all of this when they create their official AI double, but privacy, ethical rules, and transparency are so tightly interwoven with creating your digital self that, as the makers and founders, it's our job to be transparent, protect privacy, and provide ethical rules.

We wanted to make accessible to our users all the work, questioning, and resolutions that come with being a platform that hosts the AI versions of thousands of real people.

We hope this helps highlight the difference between startups that are actively engaged in Privacy and Ethics and those that just put pretty words on their landing page...

Setting the foundations of a company's culture

Having our Head of AI Ethics join the team so early was the best possible trigger for building the right culture at Spheria. De facto, it put the values and principles discussed above at the foundation of our startup, where they will always be present to support the future we're building.

“Stay scrappy” is what an investor told me a few months ago. While scrappy does not mean unethical, having a full-time Head of AI Ethics keeps us accountable to all our users, and to every team member, who can speak up when lines are being bent.

Our goal is to avoid any future (embarrassing and possibly dishonest) situation like the one where OpenAI's CTO replied “I don't know” when a journalist asked what data was used to train the new Sora video model.

So I'm happy to say that being on this track definitely helps me sleep at night. It brings me reassurance and a small boost of confidence to face the thousand hurdles of growing a startup. It's also a strong signal to users, future hires, and investors, who can judge us in the light of our actions.