It's not just outages and bugs. Cognitive, organizational, legal, and social failures emerge from the interaction between AI and the humans around it. The Crossfactors Framework applies a Human Factors Engineering lens to catalog this full surface area: the known failure modes, the unintended consequences, and the hidden leverage points.
AI teams can be good at managing the risks they can see - accuracy, latency, bias. Those are your known unknowns. You have dashboards for those. Someone owns them.
But most of the ways AI breaks an enterprise won't show up on dashboards. They don't have owners. You may not even know they exist. Those are your unknown unknowns - and they're the ones that become incidents.
The Crossfactors Framework maps out the vocabulary you need to find them before they find you.
Each of these is a documented, named phenomenon - not an edge case, but a recurring pattern with real consequences.
The moral crumple zone: When an AI system causes harm, responsibility flows downward to the nearest human operator - even if they had no meaningful control. The human absorbs the blame the system cannot.
Automation bias: Humans instinctively defer to an automated system's answer, even when they know better. Trained professionals do it too. The machine's confidence becomes the user's confidence.
Deskilling: The more we delegate to AI, the worse we get at doing the work ourselves. The human backup degrades quietly, and so does the organization as a whole.
The ironies of automation: Automation makes easy tasks effortless and hard tasks harder. It handles the routine, then leaves humans to manage complex failures with reduced situational awareness.
The jagged frontier: An AI that writes fluent legal briefs yet cannot reliably count words. Competence and incompetence don't follow any predictable pattern, which makes calibrating trust extremely difficult.
Alert fatigue: Alert systems trained to catch everything eventually catch nothing. Too many warnings desensitize operators. Critical signals get lost in the noise.
The same 200+ factors, organized through different analytical lenses. Pick a lens and explore each category.
I got my start as a physicist developing new medical imaging technologies. I learned firsthand how hard it is to integrate AI (neural nets, at the time) into the workflows of experts (radiologists). When the ChatGPT moment arrived, I foresaw the same dynamics playing out at enterprise scale.
Then, in early 2023, deposition transcripts from a fatal Tesla Autopilot crash case revealed that the Head of Autopilot software had no knowledge of his software's operational design domain, of the perception-reaction time of its users, or even of whether his team had any human factors expertise. He saw himself only as a software engineer.
It confirmed my suspicion: the AI industry is full of human factors blind spots.