Today We Got Lucky

  • Writer: Kimberly Best
  • 8 hours ago
  • 5 min read

And Luck is Not a Plan

[Slide: "Today, We Got Lucky" / "And Luck Is Not a Plan" — "When someone stands up for the right thing, they shouldn’t have to stand alone."]

We are living in dizzying times. Things are moving so fast, and so much has become possible, that I find myself witnessing developments I never would have dreamed of in my lifetime. The events of every day feel surreal. Fast, surreal, mind-boggling, dangerous.


This is the moment everyone has been warning about with artificial intelligence. Not someday. Not hypothetically. This week.


The U.S. government demanded that Anthropic, the company behind the AI model Claude, give the military completely unrestricted access to its technology. The Pentagon issued a deadline: Friday at 5:01 PM. Comply, or face contract termination, blacklisting as a “supply chain risk” (a designation normally reserved for foreign adversaries), and possible forced compliance through the Defense Production Act.


Anthropic’s two boundaries were narrow and specific: no AI-controlled weapons that fire without human involvement, and no mass surveillance of American citizens. That’s it. The company wasn’t refusing to serve the military. It was drawing a line around two uses it believes AI cannot safely or ethically perform.


Yesterday, with the deadline hours away, Anthropic’s CEO responded publicly: “These threats do not change our position: we cannot in good conscience accede to their request.”


Today, we got lucky. Someone held the line.


But here’s the part that I want to talk about: they held it almost entirely alone.


•  •  •

In my work as a conflict resolution professional, there’s a principle I return to again and again: when things are moving too fast, we have to slow down to make good decisions.


This is one of those moments. The headlines are moving at the speed of outrage. The technology is advancing faster than our ability to govern it. The political pressure is designed to compress timelines and force decisions before anyone can think clearly. Urgency is a tool of coercion, and anyone who has ever been in a high-conflict negotiation knows that the party demanding speed is usually the party that benefits from preventing reflection.


So I’m asking you to slow down with me. Just for a few minutes. Because this moment deserves more than a scroll and a shrug.


•  •  •

Step back from the headlines and ask the bigger question: What do we value?

Not what does your political party value. Not what does your industry value. What do you, as a human being living through the most consequential technological shift in history, actually believe about how this power should be used?


Do you believe the most advanced AI on the planet should operate without any ethical constraints, simply because someone with authority demands it? Do you believe “lawful” is a sufficient standard when the laws haven’t caught up to the technology? Do you believe a company that says “we won’t build autonomous weapons” is the problem in this scenario?


These aren’t partisan questions. They’re human ones. And they deserve more than reactive, fast-twitch answers shaped by whatever algorithm is feeding us our news today.


•  •  •

Here’s what the research tells us about moments like this, and it’s sobering.

A 2023 study in the British Journal of Social Psychology found that when someone confronts harmful behavior and bystanders stay silent, it doesn’t just fail to help. It actively undermines the person holding the line. Silence signals to onlookers that the confronter may be wrong, or at least unsupported. It emboldens the aggressor. And over time, it erodes the very norms the confronter was trying to protect.


Whistleblowing research tells the same story at the organizational level. A UK advocacy charity found that 73% of people who raised ethical concerns reported being victimized or forced out afterward. The University of Maryland documented what researchers call the “voice bystander effect”: even people who feel confident about speaking up will stay silent when they believe no one else will join them. Everyone assumes someone else will say something. And so no one does.


But the research also shows what changes everything: when others step forward. Coalition and collective support reduce the risk of retaliation, signal shared values, and break the silence that allows harmful norms to become permanent. The confronter’s position becomes credible not because they were louder, but because they weren’t alone.


Doing the right thing under pressure is nearly impossible when you can’t hear anyone behind you.


I know this personally. Some of my most painful professional moments haven’t come from the conflict itself, but from holding a principled position and hearing nothing but silence from the people I thought shared my values. The room gets smaller. The resolve gets harder to maintain. And the temptation to quietly give in, to tell yourself it’s not your hill, to let someone else carry it, becomes almost overwhelming.


I use Claude. I use it not only as a resource for perspective and editing, but because it questions me. It holds me to a higher ethical standard in my writing and work. It is not a "yes" tool. Maybe that's because I ask it for the hard perspectives; I don't know. But it questions me: my motives, my purpose. If you know Claude, you know it's noted for a surprising amount of empathy. That is true. Look at what Anthropic is doing. Try Claude for yourself.


Anthropic is in that room right now. Today. At 5:01 PM, the deadline arrives. And whatever happens after that, we need to reckon with the fact that it shouldn’t have come down to one company, standing alone, deciding the ethical future of AI for the rest of us.


•  •  •

History is full of pivotal moments. Most of them aren’t recognized as pivotal until they’re over. We read about them in textbooks and think: how did people not see what was happening? Why didn’t more people speak up?


This might be one of those moments. I don’t say that for drama. I say it because the question being decided this week is not a small one: Will the most powerful technology in human history operate with ethical boundaries, or won’t it? And will the companies that try to maintain those boundaries be supported, or will they be punished into compliance while the rest of us watch?


We can watch. Or we can move.

•  •  •

I’m not here to tell you what to think. I’m a mediator. My job is to help people slow down, find clarity, and make decisions they can stand behind.


So here’s my invitation: Slow down. Ask yourself what you value. And if you find that you care about whether AI operates with ethical guardrails, say so. Not because any one company needs saving, but because values that aren’t voiced become values that don’t survive.


Write about it. Talk about it over dinner. Bring it up in your professional associations. Ask your representatives where they stand. The conversation about AI governance is not a tech conversation. It’s a conversation about the kind of world we’re building, and it belongs to all of us.


When someone stands up for the right thing, they shouldn’t have to stand alone. And when all of us stay silent, we shouldn’t be surprised when the line disappears.

Today, we got lucky. Let’s not need luck next time.

•  •  •

Kimberly Best, RN, MA, is a state-listed mediator in Missouri and Tennessee, FINRA Arbitrator, and founder of Best Conflict Solutions, LLC. With backgrounds in healthcare and dispute resolution, she helps individuals and organizations navigate challenging conversations and strengthen relationships. Her approach is rooted in the belief that conflict is natural, but how we manage it transforms our relationships and communities. www.bestconflictsolutions.com

