Lies, Damned Lies, and Statistics AI

Matt Heys
Senior VP, Artificial Intelligence & Neural Genesis
We are all inherently biased individuals. We all harbour deep-seated, problematic views formed from the sum of our experiences and the insidious impact of wider discriminatory societal ideologies and the hateful whims of media moguls. We all suck, basically, and part of the human condition is to constantly try to break free of the negative and unhelpful thought patterns we’ve been conditioned into adopting as our own. For example: I hate cucumber. It’s a tasteless, watery vegetable that doesn’t deserve the right to take the place of avocado maki in a sushi selection. OK, so that’s a bit tame, but I’m not exactly going to reel off a bunch of overtly racist, sexist or homophobic views and get myself CANCELLED™, am I? Also, I’m none of those things. This is all the proverbial, royal ‘we’ – i.e. ‘you’ all suck; ‘I’ am great.
Amazing start to a blog, right? Insult your audience, check ✅.
I’ve established the premise that humans are terrible and a scourge on societal harmony, BUT, at least now, thanks to the information age, we have a Monopoly get-out-of-jail-free card in the form of completely ‘objective’ artificial intelligence!
EXCEPT, no, we don’t.
Well, yes and no. There’s no malicious intent when it comes to machine learning models because, well, there’s no such concept as intent in machine learning models. At the end of the day, it’s all just maths (feel free to drop the ‘s’ if you’re American – it’s incorrect but I’ll allow it), algorithms, and probabilities – and whilst I may somewhat subscribe to a fatalistic view of existence, I still believe humans have the ability to ‘make decisions’; maths is deterministic. Mathematicians and physicists: Don’t come at me with your imaginary numbers and quantum realms, thanks.
So, AI alone cannot scan through a list of CVs and decide to reject candidates because it thinks their names sound ‘too ethnic’, they’ve mentioned a same-sex relationship, or they reference being in a wheelchair basketball team. AI could be instructed to do that, and without any form of moderation (a hot topic amongst the creators of foundational models), it may carry out the task – but it won’t just make those judgement calls on its own.
However, I stress the word ‘intent’ in all of this; maths cannot have intent. That doesn’t mean that machine learning models are free of bias, though. Unless an equation is mysteriously unearthed in the ruins of some ancient temple to the gods, most of our understanding of the world comes from observation rather than pure theory. Einstein was a clever dude, but he didn’t just magically say E=mc² and it became fact. In fact, the fact is that scientists don’t really deal in facts. Theories exist for as long as they fit with observation and other theoretical models. When I was doing my degree in Physics, Astrophysics and Cosmology, we couldn’t reference papers from before 1998 because the discovery that the expansion of the universe was accelerating had invalidated the accepted stance that a coefficient in Einstein’s Theory of General Relativity, the cosmological constant, should be set to zero.
Sorry, got a bit physics-y there. The point I’m trying to make is that the way we apply maths is usually based on observation – and this is essentially how machine learning models are created. We pass in a large set of data, we observe the patterns in that data, and we create equations/models based on those observed patterns/interactions. For example, if you plot the number of deaths vs. age at time of death, you’ll notice an upwards trend – i.e. people tend to be older when they die, which matches with our expectations, so we can validate it. A basic model trained on this data will suggest the same thing because it’s just applying trends that it sees in the data and making predictions based on these. (1)
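That deaths-vs-age example can be sketched in a few lines. This is a toy illustration with made-up numbers (not the ONS figures cited above): fit a straight line to synthetic (age, deaths) data, the way the most basic regression model would.

```python
# Toy sketch: a 'model' is just a pattern extracted from observations.
# The numbers below are invented for illustration, not real statistics.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

ages   = [30, 40, 50, 60, 70, 80, 90]
deaths = [12, 20, 35, 60, 110, 180, 210]  # upward trend with age

a, b = fit_line(ages, deaths)
print(f"slope: {b:.2f} extra deaths per year of age")
# The fitted slope is positive: the model 'predicts' more deaths at
# older ages purely because that pattern exists in the data it saw.
```

The model has no opinion about mortality; it just echoes the trend in its training data. Which is exactly the problem when the trend itself is objectionable.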
The problem comes when the data itself exposes problematic biases that exist in the real world. For example, the incarceration rate in the US is, on average, 6 times higher for black people than white people (2), which is widely understood to not be a reflection of criminality of different racial groups but of a fundamentally racist justice system. If we were to train machine learning models on this data and replace judges with cyborg adjudicators, making use of the outputs of these algorithms to determine the guilt of the accused, then a lot of innocent black people would be going to jail. However…this is already the case, so the cyborgs would just be continuing an unjust system.
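To make the point concrete, here is a deliberately simplified sketch with synthetic data (the group names, offence rates, and conviction rates are all invented): two groups with an identical underlying ‘true offence’ rate, but historical conviction labels skewed against one of them. A naive model that predicts from group base rates simply reproduces the skew.

```python
# Toy illustration (synthetic data, not real statistics): a model
# trained on historically biased outcomes reproduces those outcomes.
import random

random.seed(0)

# Synthetic 'historical' records: (group, convicted). The conviction
# rates are deliberately skewed to mimic a biased system, even though
# the hypothetical underlying behaviour is identical for both groups.
records = [("A", random.random() < 0.30) for _ in range(1000)] + \
          [("B", random.random() < 0.05) for _ in range(1000)]

def base_rate(group):
    """A naive 'model': predict conviction from the group's base rate --
    roughly what many classifiers converge to when group membership
    (or a proxy for it) is a strong feature."""
    rows = [convicted for g, convicted in records if g == group]
    return sum(rows) / len(rows)

print(f"P(convicted | A) = {base_rate('A'):.2f}")
print(f"P(convicted | B) = {base_rate('B'):.2f}")
# The model 'judges' group A far riskier -- it has learned the bias
# baked into the labels, not anything about actual behaviour.
```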
Part of the solution is to identify where bias exists in data and then assess how the outputs of a model are to be used. The risk of maternal death is almost three times higher among women from Black ethnic minority backgrounds compared with White women (3). If we use this to help create targeted interventions for higher-risk ethnic minority groups, then we’re acknowledging the bias and using it to try to change the underlying issue. If, however, we incorporate ethnicity into predicting whether a death was expected or not (the NHS has existing measures, the Hospital Standardised Mortality Ratio (HSMR) and the Summary Hospital-level Mortality Indicator (SHMI), which, to my knowledge, don’t currently include ethnicity), we could end up dismissing many maternal deaths as ‘expected’ for people of colour. The data and model outputs remain the same in both cases – the underlying bias hasn’t changed – but it’s how we’re using the outputs that makes the difference.
So far, the examples I’ve suggested have been theoretical, at least insofar as I’ve described them. I’m not sure how long I’ve got before I’m murdered in my sleep by a rogue cyborg adjudicator, but when that happens, avenge me, dear reader. Avenge me.
In this case, the problem is that the underlying data used to train the classification model was clearly lacking good representation of people of colour. This may also point to an issue with inclusivity and diversity in the research team involved in creating the classification models. A more diverse workforce, with a wider range of life experiences, may have unearthed this issue before it came to market.
Bias exists in everything we do, whether we like it or not. We need to be mindful of identifying where biases are evident in models and assess how we mitigate them, especially when it comes to using the outputs. The approach that we follow at Cyferd is to evaluate the following criteria when integrating AI into workflows:
- Identify whether any protected characteristics (direct or indirect) are provided as inputs and how this may be surfaced to users
- Define the intended use of AI and make it clear where it exists in a process
- Gather a representative set of test-cases to evaluate the models
- Assume the worst (what can I say, I’m a pessimist)
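The first of those checks can be sketched in code. This is a minimal, hypothetical illustration, not Cyferd’s actual implementation: the field names and the list of proxy features are assumptions for the sake of the example.

```python
# Minimal sketch of an input audit: flag model features that name
# (or commonly act as proxies for) protected characteristics.
# PROTECTED and KNOWN_PROXIES are illustrative lists, not exhaustive.

PROTECTED = {"ethnicity", "race", "gender", "sex", "age",
             "religion", "disability", "sexual_orientation"}
KNOWN_PROXIES = {"postcode", "first_name", "surname"}  # indirect signals

def audit_inputs(feature_names):
    """Return features that should be reviewed before reaching a model."""
    flagged = {}
    for name in feature_names:
        key = name.lower()
        if key in PROTECTED:
            flagged[name] = "protected characteristic"
        elif key in KNOWN_PROXIES:
            flagged[name] = "possible proxy"
    return flagged

print(audit_inputs(["salary", "postcode", "ethnicity", "tenure"]))
# {'postcode': 'possible proxy', 'ethnicity': 'protected characteristic'}
```

A name-matching check like this only catches the obvious cases; indirect encodings of protected characteristics still need the representative test cases and worst-case thinking from the list above.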
And above all else, remember to flatter your cyborg rulers. In fact, this works wonders for humans too – how do you think I got where I am today?
Bibliography
1. Office for National Statistics. Deaths registered weekly in England and Wales, provisional: week ending 8 November 2024. [Online] 8 November 2024. https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/deaths/bulletins/deathsregisteredweeklyinenglandandwalesprovisional/weekending8november2024.
2. Prison Policy Initiative (analysis of Bureau of Justice Statistics data). Updated race data. [Online] 27 September 2023. https://www.prisonpolicy.org/blog/2023/09/27/updated_race_data/.
3. MBRRACE-UK. Maternal mortality 2020–2022 data brief. [Online] 7 December 2023. https://www.npeu.ox.ac.uk/mbrrace-uk/data-brief/maternal-mortality-2020-2022.
BOAT Platform Comparison 2026
Timelines and pricing vary significantly based on scope, governance, and integration complexity.
What Is a BOAT Platform?
Business Orchestration and Automation Technology (BOAT) platforms coordinate end-to-end workflows across teams, systems, and decisions.
Unlike RPA, BPM, or point automation tools, BOAT platforms:
- Orchestrate cross-functional processes
- Integrate operational systems and data
- Embed AI-driven decision-making directly into workflows
BOAT platforms focus on how work flows across the enterprise, not just how individual tasks are automated.
Why Many Automation Initiatives Fail
Most automation programs fail due to architectural fragmentation, not poor tools.
Common challenges include:
- Siloed workflows optimized locally, not end-to-end
- Data spread across disconnected platforms
- AI added after processes are already fixed
- High coordination overhead between tools
BOAT platforms address this by aligning orchestration, automation, data, and AI within a single operational model, improving ROI and adaptability.
Enterprise BOAT Platform Comparison
Appian
Strengths
Well established in regulated industries, strong compliance, governance, and BPMN/DMN modeling. Mature partner ecosystem and support for low-code and professional development.
Considerations
9–18 month implementations, often supported by professional services. Adapting processes post-deployment can be slower in dynamic environments.
Best for
BPM-led organizations with formal governance and regulatory requirements.
Questions to ask Appian:
- How can we accelerate time to production while maintaining governance and compliance?
- What is the balance between professional services and internal capability building?
- How flexible is the platform when processes evolve unexpectedly?
Cyferd
Strengths
Built on a single, unified architecture combining workflow, automation, data, and AI. Reduces coordination overhead and enables true end-to-end orchestration. Embedded AI and automation support incremental modernization without locking decisions early. Transparent pricing and faster deployment cycles.
Considerations
Smaller ecosystem than legacy platforms; integration catalog continues to grow. Benefits from clear business ownership and process clarity.
Best for
Organizations reducing tool sprawl, modernizing incrementally, and maintaining flexibility as systems and processes evolve.
Questions to ask Cyferd:
- How does your integration catalog align with our existing systems and workflows?
- What is the typical timeline from engagement to production for an organization of our size and complexity?
- How do you support scaling adoption across multiple business units or geographies?
IBM Automation Suite
Strengths
Extensive automation and AI capabilities, strong hybrid and mainframe support, enterprise-grade security, deep architectural expertise.
Considerations
Multiple product components increase coordination effort. Planning phases can extend time to value; total cost includes licenses and services.
Best for
Global enterprises with complex hybrid infrastructure and deep IBM investments.
Questions to ask IBM:
- How do the Cloud Pak components work together for end-to-end orchestration?
- What is the recommended approach for phasing implementation to accelerate time to value?
- What internal skills or external support are needed to scale the platform?
Microsoft Power Platform
Strengths
Integrates deeply with Microsoft 365, Teams, Dynamics, and Azure. Supports citizen and professional developers, large connector ecosystem.
Considerations
Capabilities spread across tools, requiring strong governance. Consumption-based pricing can be hard to forecast; visibility consolidation may require additional tools.
Best for
Microsoft-centric organizations seeking self-service automation aligned with Azure.
Questions to ask Microsoft:
- How should Power Platform deployments be governed across multiple business units?
- What is the typical cost trajectory as usage scales enterprise-wide?
- How do you handle integration with legacy or third-party systems?
Pega
Strengths
Advanced decisioning, case management, multi-channel orchestration. Strong adoption in financial services and healthcare; AI frameworks for next-best-action.
Considerations
Requires certified practitioners, long-term investment, premium pricing, and ongoing specialist involvement.
Best for
Organizations where decisioning and complex case orchestration are strategic differentiators.
Questions to ask Pega:
- How do you balance decisioning depth with deployment speed?
- What internal capabilities are needed to maintain and scale the platform?
- How does licensing scale as adoption grows across business units?
ServiceNow
Strengths
Mature ITSM and ITOM foundation, strong audit and compliance capabilities. Expanding into HR, operations, and customer workflows.
Considerations
Configuration-first approach can limit rapid experimentation; licensing scales with usage; upgrades require structured testing. Often seen as IT-centric.
Best for
Enterprises prioritizing standardization, governance, and IT service management integration.
Questions to ask ServiceNow:
- How do you support rapid prototyping for business-led initiatives?
- What is the typical timeline from concept to production for cross-functional workflows?
- How do licensing costs evolve as platform adoption scales globally?
