
Why CIOs Need to Get Ahead of AI Adoption Before Their Employees Do It for Them

After nearly 30 years leading IT organizations, the pattern I keep seeing is this: the biggest AI risks aren't the ones leadership planned for.

Amir Belferman
Guest Contributor

About the Author

Amir Belferman is a global CIO with nearly 30 years of experience across the technology industry and government. He has led large international teams through major digital transformation programs, with a consistent focus on how technology can drive real business outcomes, not just IT roadmaps. Amir advises organizations on AI adoption, enterprise platform strategy, and building IT functions that operate as strategic business partners.

A few years ago, I deployed a discovery tool across our environment to get a picture of which AI tools were actually being used inside the organization. What I found surprised me. Employees had already adopted multiple AI tools independently, some of which introduced real security exposure, and none of it had gone through IT. The organization had not waited for us to catch up. In hindsight, I should not have been surprised. That is almost always how it goes.
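For readers who want a sense of what that kind of discovery looks like in practice, here is a minimal sketch of one common approach: counting outbound requests to known AI services in a web proxy log. The domain list, log format, and column names are illustrative assumptions, not a description of any particular discovery product.

```python
# Hypothetical sketch: surface shadow AI usage from a CSV proxy log.
# Assumes a log with "user" and "domain" columns; real logs and the
# list of AI service domains will differ by environment.
import csv
from collections import Counter

AI_DOMAINS = {
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "perplexity.ai",
}

def find_shadow_ai(log_path):
    """Count requests to known AI domains in a CSV proxy log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                hits[row["domain"]] += 1
    return hits
```

Even a crude report like this, run against a week of proxy logs, is usually enough to show leadership that adoption is already underway; the harder work of moving users to sanctioned tools comes after.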

The pressure to adopt AI is coming from everywhere now, not just from employees who want better tools, but from management, which wants lower costs and higher productivity. The question most CIOs are wrestling with is not really whether to adopt AI. It is how to build an enterprise AI adoption strategy that does not create more problems than it solves.

Shadow AI is already inside your organization

Most leadership teams assume the risk shows up after a formal AI rollout, once the procurement process is done and the implementation is underway. In my experience, it surfaces much earlier, often the moment employees discover a tool that makes their work easier and start using it without telling anyone.

What makes AI different from previous technology waves is that the risk is not primarily a technology risk. It is a human behavior risk. It is user-driven rather than IT-driven, which means it is harder to see and harder to contain. AI tools also create specific concerns that older software did not: questions about data retention, model training on proprietary information, and long-term exposure that organizations may not fully understand until much later.

Leadership tends to frame shadow AI as a security problem, and it is, but the more immediate issue is usually organizational readiness. When employees go outside official systems to get work done, you end up with shadow IT at scale, and that creates visibility gaps that take years to close.

Why slowing down increases your AI governance risk

There is a temptation, when facing this kind of complexity, to move carefully. To wait until the governance framework is fully built, the vendor assessments are complete, and the strategy is signed off. That caution is understandable, but it tends to backfire.

When employees do not have access to sanctioned tools, they find other ways. Shadow AI expands. IT loses the visibility it needs to manage risk, and by the time a formal solution is ready to roll out, the organization has already built habits and dependencies around tools that nobody in IT knows about. Future adoption becomes significantly harder as a result.

A CIO who moves slowly on AI is not necessarily being prudent. In many cases, they are trading one kind of risk for another, and the one they are left with is harder to manage.

"When IT is seen as an enabler rather than a blocker, the CIO gains greater influence with leadership and can shape decisions that actually drive business outcomes."

What a proactive CIO AI strategy actually looks like

I always tell my teams that being proactive is not just about managing risk, it is about positioning. CIOs who get out ahead of AI adoption give their organizations the ability to take on new capabilities in a controlled way. That changes how the business sees IT. Instead of coming to IT when something goes wrong, the business starts involving IT earlier, when decisions are still being made. That is a fundamentally different relationship, and it is a much better place to operate from.

In practice, a proactive approach means monitoring how major vendors are integrating AI before employees start self-selecting alternatives. It means evaluating emerging tools early, not necessarily to deploy them, but to understand what the business might gravitate toward and whether there is a safer path to the same outcome. And it means working to move users toward licensed, controlled tools rather than simply blocking the ones they are already using.

One thing I have learned over the years is that expectation management matters at every stage of this. IT is a service organization, and we can only do our job well when the business treats us as a partner. That means setting honest expectations with both employees and leadership about what IT can and cannot deliver, and what we need from them in return. Collaboration is not optional here. It is the whole model.

The Salesforce complexity challenge CIOs are underestimating

There is a related challenge that I think CIOs in enterprise environments are feeling acutely right now, particularly those running Salesforce or similar platforms. Over the past decade or so, these platforms have expanded well beyond their original scope. Each wave of expansion brought new capability, but also new complexity. The underlying technology has changed multiple times. Integrations and dependencies have accumulated. And through all of it, users still expect tools that are easy to use, delivered quickly, and within budget.

The reality is that many organizations are struggling to keep up. Every new business request now involves substantial development work, and the knock-on effects across connected systems add time and cost that are hard to predict. Budgets built around Salesforce a few years ago often do not reflect what it actually takes to deliver today.

This is part of what makes tools that reduce development dependency so valuable. When analysts can deliver solutions without relying entirely on specialist developers, and when those solutions remain aligned with established platform standards, it changes the economics of what is possible. Organizations with limited IT resources can move faster, and teams that were bottlenecked can deliver. That is the kind of leverage that makes a real difference to the business.

Where IT leaders should focus right now

For CIOs thinking about how to approach enterprise AI adoption, my advice comes down to a few things: 

  1. Do not treat AI as something that is coming. It is already here, and the organizations that are doing well with it accepted that early and built around it rather than trying to hold it at arm's length.
  2. Work with leadership to develop an enterprise-wide AI strategy that covers the full organization, not just the product and engineering teams. The back-office functions are just as important to operational stability, and they are often where shadow AI takes hold first because demand is high and sanctioned tools are scarce.
  3. Invest in vendor partnerships. AI adoption that is secure, controlled, and sustainable does not happen in isolation. It happens because IT is actively engaged with the vendors building these capabilities, shaping how they get deployed rather than inheriting whatever the business has already decided.

The underlying principle in all of this is the same one I keep coming back to: the role of IT is to be an enabler, not a blocker. That is not just a philosophy. It is what earns IT a seat at the table when the decisions that shape the organization's future are being made.
