Our approach to AI
Inclusivity, sustainability, and human empowerment are at the core of how we approach
the development of AI-driven products. Bias is an endemic issue within AI products. We
don't claim to have solved this problem, but our AI architecture is designed so that
biases present in the data we use to evaluate our AI do not have an implicit impact on
our models. Because our products are aimed at improving people's ability to do their
jobs, one of our fundamental guiding principles as a company is that AI should improve
the working experience, not replace it entirely.
Spirit works with some of the largest companies across the games, media, and other industries.
We process billions of messages each month and help communities thrive across the world.
We do things a little differently
As a rule, we don't use one AI technique to solve every problem. We use the best tool
for each problem at hand and then combine multiple layers of the solution within a
systems architecture. This ensures the solution as a whole is more capable than the sum
of its parts. We avoid black-box models in our live services, which keeps our solution
fully auditable. The live service is also deterministic, so results are repeatable and
consistent. This approach enables us to dive in very quickly and fix issues or fine-tune
our AI when addressing support requests from our customers.
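To make the idea concrete, the sketch below shows what a layered, deterministic pipeline of this kind might look like. It is purely illustrative and not our production system: every name in it (WordFilterLayer, PatternLayer, Decision, moderate) is invented for the example. The point it demonstrates is that each layer is a fixed, inspectable rule set, the layers run in a fixed order, and every result carries a trace, so the same input always produces the same, auditable decision.

# Hypothetical sketch of a layered, deterministic moderation pipeline.
# Not Spirit's actual implementation; all names are illustrative only.

from dataclasses import dataclass, field
import re
from typing import List, Optional


@dataclass
class Decision:
    flagged: bool
    layer: Optional[str] = None      # which layer produced the decision
    reason: Optional[str] = None     # human-readable audit entry
    trace: List[str] = field(default_factory=list)  # every layer consulted


class WordFilterLayer:
    """Deterministic exact-match filter over a fixed word list."""
    name = "word_filter"

    def __init__(self, blocked_words):
        self.blocked = {w.lower() for w in blocked_words}

    def check(self, message: str) -> Optional[str]:
        for word in message.lower().split():
            if word in self.blocked:
                return f"blocked word: {word!r}"
        return None


class PatternLayer:
    """Deterministic regex rules, e.g. for obfuscated abuse or spam."""
    name = "pattern_rules"

    def __init__(self, patterns):
        self.patterns = [re.compile(p, re.IGNORECASE) for p in patterns]

    def check(self, message: str) -> Optional[str]:
        for pattern in self.patterns:
            if pattern.search(message):
                return f"matched pattern: {pattern.pattern!r}"
        return None


def moderate(message: str, layers) -> Decision:
    """Run each layer in a fixed order. The same input always yields the
    same output, and the trace records exactly which rules were consulted."""
    trace = []
    for layer in layers:
        reason = layer.check(message)
        trace.append(f"{layer.name}: {'hit' if reason else 'pass'}")
        if reason:
            return Decision(True, layer.name, reason, trace)
    return Decision(False, trace=trace)


if __name__ == "__main__":
    pipeline = [
        WordFilterLayer(["idiot"]),
        PatternLayer([r"buy\s+now", r"fr[e3]{2}\s+gold"]),
    ]
    # The returned Decision records which layer flagged the message and why.
    print(moderate("fre3 gold here, buy now!", pipeline))

Because nothing in the pipeline depends on randomness or hidden state, re-running the same message reproduces the same decision and the same trace, which is what makes this style of architecture straightforward to audit and debug.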
Our footprint
Sustainability is a very broad topic with critical consequences. At Spirit AI, we see it
as our responsibility to consider both the environmental and social impacts of our
products and of how we operate as a company. Our software architecture and approach to AI
development have allowed us to operate our products with a minimal computational footprint
in production. Our choice of infrastructure provider means the electricity used to operate
our SaaS products in the cloud is generated from renewable energy sources. We are striving
to become carbon neutral by the end of 2021 and are already close to achieving this goal.
From day one, Ally has been engineered with these environmental and societal impacts in
mind, from moderator well-being to ensuring automated actions are fully transparent and
easily accessible.
Taking our societal impact seriously
As should be the case for all AI companies, we take the impact of our AI software on
society very seriously. Our software isn't aimed at removing people from the workplace;
it is intended to improve the efficacy and safety of their work. By using AI to shield
moderators from as much toxic content as we can, we strive to protect their mental health.
By using AI to help people work more effectively and to automate mundane, highly
repetitive tasks, we strive to make their work more meaningful and more engaging.
When applying AI to address the dangers of exposure to online toxicity, we have to take
inclusivity into account. Our aim is to ensure that people who are marginalised and
victimised online because of who they are and how they self-identify are given the voice
and protection they deserve. We go to great lengths to minimise bias in our AI and stay
in continual conversation with our customers to ensure both that our software matches
their community management policies and that those policies reflect the common goals we
share.