The Algorithm That Sentenced a Man—And No One Knows Why

Meet Eric Loomis. In 2013, he was pulled over in La Crosse, Wisconsin, driving a car linked to a recent shooting. Loomis wasn’t charged with the shooting itself but pleaded guilty to lesser offenses: attempting to flee an officer and driving a vehicle without the owner’s consent. On paper, these were relatively minor felonies. But when it came time for sentencing, something unusual happened. Loomis’s fate wasn’t decided solely by a judge or jury—it was shaped by an algorithm.

Wisconsin had adopted a proprietary risk-assessment tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) as part of a push for “data-driven justice.” The software was designed to predict a defendant’s likelihood of reoffending, theoretically helping judges make fairer sentencing decisions. COMPAS scored Loomis as high-risk, suggesting he was likely to commit another crime. That score became a key factor in the judge’s decision to sentence him to six years in prison.

Here’s the catch: No one—not Loomis, not his lawyers, not even the judge—knew how that score was calculated. The algorithm was a black box, its inner workings kept secret by its developer. What data was used? What factors mattered most? No one could say.

Loomis appealed, arguing that sentencing someone based on unreviewable, unexplained evidence violated due process. The case reached the Wisconsin Supreme Court, which ruled in 2016—shockingly—that the use of COMPAS was acceptable. The court acknowledged the tool’s flaws and warned against overreliance on it but ultimately decided that as long as a human judge had the final say, the algorithm’s role was permissible.

In other words: An AI made a life-altering decision, no one could explain why, and the court said that was fine—as long as a human rubber-stamped it.
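The structural problem is easy to sketch. The toy Python below is not COMPAS (its questionnaire, features, and weights are proprietary); every name and number here is invented. It only shows the shape of the interface a court actually sees: answers go in, a single 1-to-10 risk decile comes out, and the model that produced it stays hidden.

```python
# Toy illustration of an opaque risk scorer. NOT the real COMPAS model:
# the features and weights below are invented for demonstration.

def _hidden_model(answers):
    # Proprietary internals: callers never see these weights.
    weights = {"prior_arrests": 0.35, "age_at_first_arrest": -0.02,
               "employment": -0.5, "substance_use": 0.4}
    return sum(weights[k] * answers.get(k, 0) for k in weights)

def risk_decile(answers):
    """Public interface: returns only a 1-10 score, no rationale."""
    raw = _hidden_model(answers)
    return max(1, min(10, round(raw) + 5))

score = risk_decile({"prior_arrests": 6, "age_at_first_arrest": 19,
                     "employment": 0, "substance_use": 1})
print(score)  # A single number is all the court sees.
```

Nothing in the public interface answers "why": the defendant, the lawyers, and the judge all stand on the wrong side of `_hidden_model`.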
Trucks may not yet be pulling up to gas stations demanding we mere humans use our opposable thumbs to fill their tanks, but they could be thinking about it.

Accountability: From Campfires to Courtrooms

Accountability isn’t just a human invention—it’s a biological imperative. Social species, from apes to humans, enforce norms to maintain order. Apes punish cheaters, share food based on contribution, and even exhibit a rudimentary sense of fairness. For early humans, accountability was immediate and visceral. Steal from the tribe? Face exile. Endanger the group? Risk death. Over millennia, these instincts hardened into customs, then laws.

The evolution of justice has been a slow march from arbitrary power to reasoned rule. Kings once claimed divine right—rule “because I said so.” But revolutions in thought—Magna Carta, Locke’s social contract, Beccaria’s arguments for proportionate punishment—shifted accountability from gods to people.

Yet now, after centuries of demanding transparency from power, we’re handing decision-making back to unquestionable authorities—not kings or priests, but algorithms we can’t interrogate.

The Problem with Machine “Decisions”

When a human makes a choice, we expect a reason. Maybe it’s flawed, maybe it’s biased—but it’s something we can challenge, debate, and refine. Machines don’t work that way. AI doesn’t reason—it calculates. It doesn’t weigh morality—it optimizes for probability. Ask an AI why it made a decision, and the answer is always some variation of: “Because the data suggested it.”

Consider AlphaGo, the AI that defeated world champion Lee Sedol in 2016. At one point, it made a move so bizarre that commentators thought it was a glitch. But Move 37 wasn’t a mistake—it was a game-winning play. When engineers asked why AlphaGo made that move, the answer was simple: It didn’t know. It had just calculated that the move had the highest chance of success. Brilliant? Yes. Explainable? No.
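That kind of "answer" can be made concrete. In the sketch below (the move names and probabilities are invented, not AlphaGo's actual estimates), the engine's entire justification for a move is an argmax over estimated win probabilities; there is no further "why" to extract.

```python
# A move choice as pure calculation: pick the argmax of estimated
# win probabilities. The estimates are invented for illustration;
# a real engine would derive them from a value network plus search.
def best_move(win_prob):
    move = max(win_prob, key=win_prob.get)
    # The only available "explanation" is the number itself.
    return move, win_prob[move]

estimates = {"move_A": 0.46, "move_B": 0.48, "move_C": 0.52}
move, prob = best_move(estimates)
print(move, prob)  # move_C wins because 0.52 > 0.48; that is the whole story
```

Ask this function why it chose `move_C` and the honest answer is "0.52 was the biggest number", which is exactly the answer AlphaGo's engineers got.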
Agentic AI: Decision-Making Without Oversight

If black-box algorithms in courtrooms worry you, brace yourself. AI isn’t just recommending decisions anymore—it’s acting autonomously.

Enter agentic AI: systems that don’t wait for instructions but pursue goals independently. They schedule meetings, draft reports, negotiate deals, and even delegate tasks to other AIs—all without human input. Google’s Agent-to-Agent (A2A) protocol enables AI systems to coordinate directly. Workday touts AI handshakes, where agents manage workflows like hyper-efficient middle managers.

But here’s the terrifying part: We can’t audit these systems. As Dr. Adnan Masood, Chief AI Architect at UST, warns: “AI-to-AI interactions operate at a speed and complexity that makes traditional debugging and inspection almost useless.” When AI agents collaborate, their decision chains become unfathomably complex. “Explainable AI” tools offer plausible-sounding rationales, but they’re often post-hoc justifications, not true explanations.

Who’s Responsible When AI Goes Rogue?

In human systems, accountability is clear. If a judge sentences someone unfairly, we can vote them out. If a manager makes a bad call, they can be fired. But in an AI-driven world, who takes the blame? The answer is no one—or worse, everyone and no one at the same time.

The Future: “Because the Algorithm Said So”

Eric Loomis’s case was a warning. Today, AI shapes who gets hired, who gets loans, who gets parole. Tomorrow, it could dictate medical treatments, military strikes, and legal outcomes—all without explanation. We’re outsourcing judgment to machines that can’t justify their choices. And once we accept that, we’re left with only one answer when we ask why: “Because the AI said so.”

Is that the future we want?