There’s a conversation happening in almost every boardroom, leadership team meeting, and executive retreat right now. It goes something like this: “We need an AI strategy.” Someone nods. Someone else mentions a tool they’ve heard about. A third person suggests forming a committee. And then the meeting moves on, and nobody quite knows what comes next.

This is understandable. AI is moving fast. The tools are genuinely powerful. The pressure to “do something” is real. But there’s a fundamental error embedded in that conversation, and it’s going to cost organizations that don’t catch it.

The error is this: treating AI as a technology problem rather than a leadership problem.

The Tool Is Not the Advantage

Here’s the uncomfortable truth about the current moment: the AI tools that could give your organization a meaningful edge are available to everyone. ChatGPT, Claude, Gemini, Copilot — these are not proprietary assets. Your competitor can access the same tools, at roughly the same cost, starting today.

This means the tools themselves are not your competitive advantage. They're table stakes. The advantage goes to the organizations that adopt them most thoughtfully, most strategically, and most effectively — and that is entirely a leadership question, not a technology question.

The organizations that will win the AI era are not the ones that adopt the most tools. They're the ones led by people who understand what AI is genuinely good for, what it's not good for, and how to build the culture, processes, and governance structures to use it well.

What AI Is Actually Good For

One of the most useful things I do with leadership teams early in an AI engagement is a brutally honest inventory of where AI creates real value versus where it creates the illusion of value.

AI is genuinely excellent at:

  • Reducing the time cost of first drafts. Documents, communications, reports, proposals — AI can get you to a solid starting point in minutes instead of hours. The human still needs to review, edit, and own the output. But the blank page problem is largely solved.
  • Processing and synthesizing large volumes of information. AI can read, summarize, and extract patterns from data sets, documents, and research at a scale and speed no human team can match.
  • Handling repetitive, rule-based tasks. Scheduling, routing, formatting, tagging, sorting — any task that follows consistent rules and doesn’t require judgment is a strong candidate for AI automation.
  • Augmenting expertise. A skilled leader who understands AI can use it to pressure-test their thinking, explore options they haven’t considered, and move faster on decisions without sacrificing quality.

AI is consistently weak at:

  • Judgment under uncertainty. AI operates on patterns from the past. Novel situations, ethical edge cases, and decisions with high human stakes require human judgment.
  • Genuine relational intelligence. AI can simulate empathy. It cannot exercise it. Any interaction that requires genuine understanding of another person’s experience, context, or dignity needs human engagement.
  • Accountability. AI cannot own an outcome. Someone in your organization still has to. When AI is in the loop, clarity about human accountability matters more, not less.

The Three Leadership Decisions That Actually Matter

When I work with organizations on AI strategy, we always arrive at three decisions that matter more than any tool selection:

Decision 1: What will we use AI for, and what will we not? This is a values and strategy question, not a technology question. Organizations that haven’t made this decision explicitly will have it made implicitly — by individual employees making their own calls, inconsistently, without governance.

Decision 2: How will we maintain quality and accountability when AI is in the loop? AI errors are real. AI hallucinations are real. AI outputs can be confidently wrong. The organizations that use AI safely are the ones that have built review processes, quality standards, and clear human ownership into every AI-assisted workflow.

Decision 3: How will we develop our people to work well with AI? AI literacy is not a technical skill — it’s a leadership competency. Every member of your leadership team needs to understand what AI can and can’t do, how to evaluate AI outputs critically, and how to integrate AI assistance into high-quality decision-making. This doesn’t happen by accident.

A Word on Ethics

The organizations I’m most concerned about are the ones moving fast on AI adoption without asking the harder questions: What data are we feeding these tools? What are we doing with AI-generated outputs that affect real people? What happens when AI gets it wrong in a consequential situation?

These aren’t abstract philosophical questions. They’re leadership questions with practical, legal, and reputational stakes. The time to build your ethical AI framework is before you need it, not after an incident makes it urgent.

The Bottom Line

Every organization is going to be using AI. That’s not a question. The question is whether you’ll be using it strategically or reactively — intentionally or chaotically — in a way that creates real advantage or just generates noise.

That outcome is a function of leadership, not technology. And that’s actually good news. Because it means you have more control over it than the current conversation might suggest.


If you want to think through what a genuine AI strategy looks like for your organization — for-profit, nonprofit, or faith-based — let’s have that conversation.