What is Fairness in AI Ethics and how is Fairness an ethical pillar when creating the AI version of you on Spheria?

[Image: Spheria hires Head of AI Ethics]

In this post, we will be discussing Fairness. This article outlines the core features of a fair and ethical personal AI, and our practical commitments to achieving them.

Keep in mind that where we do not yet have a solution, we are showing the complexity of AI Ethics and keeping track of an important question for us to research and solve in the future.

In our previous research, we defined four pillars as the foundations of AI Ethics at Spheria and of what it means to create and own an ethical AI version of yourself.

[Image: The 4 pillars that set the foundations of creating an ethical AI version of yourself]

So let's dig in: what is Fairness in AI Ethics? What are the components of Fairness, and how does Spheria plan to implement Fairness as its first ethical pillar?

1. Fairness as Equality:

At Spheria, we define Fairness as a commitment to the equal treatment of all users and the outright rejection of discrimination in all its forms.

Our AI systems are designed to treat everyone equally, ensuring that no one is treated differently based on who they are.

How is fairness applied on Spheria?

  • We ensure equal access for anyone to create the AI version of themselves.
  • We ensure equal treatment when using someone's AI, such that an AI on Spheria treats every visitor and user with the same respect and access to information.
  • We set the ethical baseline to guard against any discriminatory answers given by the AI.

2. Fairness against creating or perpetuating Bias

Bias and its potential negative consequences in AI systems

There are unfortunately too many examples of biased AIs out there. We define Bias in AI Ethics as a differentiation in the decision-making process that results in an unfair outcome, especially when targeted at a specific person or group. We want to stay conscious of this problem, as it deserves our full attention.

Bias is important to evaluate because it can have direct negative consequences through the perpetuation of ideas, statements or stereotypes against people or groups of people. Simply put, nobody using Spheria should be discriminated against for who they are.

How do we prevent Bias in Spheria?

Bias exists by definition in someone's personal AI, because their Spheria is the reflection of their opinions, tastes and preferences.

For now, we set our ethical baseline on Bias to the following:

  • protect users' freedom of expression and their right to have their own thoughts and opinions. This is unconditional.
  • allow AIs to be different, think differently and celebrate differences,
  • do not take a dominant way of thinking as the truth,
  • evaluate and set hard limits to prevent the consequences that bias can have, and to protect against the perpetuation of harmful ideas against people or groups of people.

3. Fairness by protecting against Abusive Language and Harm

Every tech product out there draws its own shifting line for what counts as abusive language and harmful content.

At Spheria, we set our ethical baseline as follows: we won't tolerate any abusive language or imagery that targets a particular person or group, perpetuates hatred and bias against a person or group, or calls for violent action.

How will we implement this on Spheria?

  • This will be done by targeting both the input (users training their AI) and the output (the AI giving an answer, or speaking in your name). Abusive Language and hateful content (as defined above) shall not be permitted to enter the system, nor be displayed to users (see the sketch after this list).
  • This will protect the platform against Bias, Harm and Abusive Language alike.
  • We've published a dedicated section that looks into content moderation and the content policy rules for the AI version of you on Spheria.
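
To make this two-sided approach a little more concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: `is_abusive` stands in for whatever classifier or policy engine actually does the moderation, and the function names are hypothetical. The point is simply that the same check guards content entering the system (training) and leaving it (answers).

```python
# Illustrative sketch only: the same moderation check guards both the
# training input and the AI's generated output. `is_abusive` is a
# placeholder for a real abusive-language / hateful-content classifier.

def is_abusive(text: str) -> bool:
    """Placeholder classifier for abusive or hateful content."""
    banned_phrases = ["example banned phrase"]  # illustrative list only
    lowered = text.lower()
    return any(phrase in lowered for phrase in banned_phrases)

def ingest_training_text(text: str, knowledge_base: list[str]) -> bool:
    """Input side: abusive content is never allowed to enter the system."""
    if is_abusive(text):
        return False  # rejected, nothing is stored
    knowledge_base.append(text)
    return True

def respond(draft_answer: str) -> str:
    """Output side: abusive content is never displayed to users."""
    if is_abusive(draft_answer):
        return "I can't share that."  # fall back instead of showing it
    return draft_answer
```

Running the same gate on both sides is what lets one mechanism cover Bias, Harm and Abusive Language at once.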

4. Your Ideas, Your Property and Protecting what's Yours.

[Image: Privacy is a fundamental aspect of creating and owning the AI version of yourself on Spheria]

Your ideas are like treasure, and at Spheria, we celebrate that. There are very few AIs out there that are built entirely from your ideas and what you teach them; most AIs pretend to know everything!

But at Spheria, when you share something with your AI, it stays yours. Our ethical baseline, which is immovable, is that all the content you share with your AI remains yours. It's yours to keep, always. Since day 1, we've set a transparent and clear commitment to data privacy and have received lots of compliments for it from our users.

This is why it's obvious to us that protecting your data ownership is a fundamental component of Fairness and of creating an Ethical platform to host the AI version of yourself.

Ideas constantly flow, and it's natural human behaviour to adopt, share or reject them. All of this is perfectly compatible with training and growing your AI.
But things become tricky when a user shows the intention to pass off other people's ideas or work as their own.

Our early thinking on protecting intellectual property and ideas on Spheria:

  • When users train their AI from a website, we will work on keeping track of the source of the information. When the AI gives an informed answer, we will work to make the source of the information more explicit in the user interface (a rough sketch follows this list).
  • One step further, we have to consider the attribution of an idea. This is very difficult, so we are still evaluating the question. For example, if a user trains their AI by pulling data from a professional blog, Wikipedia or an expert's analysis, we have a duty to differentiate the user's authentic opinion from the author's original opinion, to evaluate whether the author is referenced, and to evaluate whether the user is just copy-pasting content from that source or adding their own layer of thought to it.
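
As a rough sketch of what source tracking could look like under the hood (the `SourcedFact` shape and its field names are hypothetical, not our actual schema), each piece of ingested information could carry a pointer back to where it came from, so the interface can surface it next to an answer:

```python
from dataclasses import dataclass

# Illustrative sketch only: every ingested piece of information keeps a
# pointer back to its origin, so the AI's answer can make the source explicit.

@dataclass
class SourcedFact:
    text: str          # the information itself
    source_url: str    # where it was pulled from ("" if it's the user's own words)
    user_comment: str  # the user's own layer of thought on top of it, if any

def answer_with_source(fact: SourcedFact) -> str:
    """Compose an answer that surfaces the origin of the information."""
    answer = fact.text
    if fact.user_comment:
        answer += f" {fact.user_comment}"
    if fact.source_url:
        answer += f" (source: {fact.source_url})"
    return answer

# Example: a fact pulled from a blog, with the user's own take appended.
fact = SourcedFact(
    text="Cold brew keeps more of the bean's sweetness.",
    source_url="https://example.com/coffee-blog",
    user_comment="Personally, I still prefer a hot espresso.",
)
print(answer_with_source(fact))
```

Keeping the source and the user's own comment as separate fields is also what would later let us distinguish copy-pasted content from an authentic opinion layered on top of it.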

5. Certainty

We know most AIs out there can generate pretty much anything. How often do you get a response from an AI like ChatGPT and think, “I wonder if that's really true?” Not cool, right?

At Spheria, we treat Certainty as a component of Fairness because your AI is a direct reflection of you. Your AI will only give answers it is sure about, even if that means saying “I don't know” a little too often. If you receive an answer from someone's official AI, we ensure it accurately represents that person's point of view.

How does Spheria deal with certainty?

  • The certainty threshold is set above 95% (mirroring the 95% confidence level commonly used for statistical significance). This means that even if a piece of information ranks at 92% certainty, it may not be strong enough to use for the question asked by a visitor or for the ongoing discussion context (see the sketch after this list).
  • Our job, obviously, is to improve the quality of the results while maintaining high certainty. We are constantly working on our algorithm for certainty and data ranking.
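
For illustration only, here is a loose sketch of how such a threshold might gate answers. The 0.95 value comes from this post; the candidate scores and the `pick_answer` helper are assumptions, not a description of our real ranking algorithm.

```python
# Illustrative sketch only: candidates below the certainty threshold are
# withheld, and "I don't know" is preferred over a shaky answer.

CERTAINTY_THRESHOLD = 0.95  # the 95% figure mentioned above

def pick_answer(candidates: list[tuple[str, float]]) -> str:
    """candidates: (answer_text, certainty score in [0, 1]) pairs, as produced
    by whatever model scores how well the user's own data supports each answer."""
    confident = [(text, score) for text, score in candidates
                 if score >= CERTAINTY_THRESHOLD]
    if not confident:
        return "I don't know."  # better to admit it than to guess
    return max(confident, key=lambda pair: pair[1])[0]  # most certain candidate

# A 92% candidate is filtered out, exactly as described above.
print(pick_answer([("I grew up in Lyon.", 0.97), ("I prefer tea to coffee.", 0.92)]))
```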

End Note: How Diversity is Our Strength:

If it's not clear by now: at Spheria, we celebrate differences and believe in equal treatment. From our team to our product and even our social media, we care about ensuring diversity and inclusion, and believe this is at the heart of our company. This entails a commitment to hearing different perspectives, being open-minded to new ideas, and creating a safe environment for feedback by taking the steps to learn and understand bias and discrimination. By allowing all voices and perspectives to interact with and shape the product, we know that Spheria will be better off for it! Together we commit to working on and developing a fair and equitable system for all of our users.

We welcome and encourage comments, emails, questions, and ideas from all of our users. Have a question or idea? Feel free to send me an email at Alejandro@spheria.ai