If you thought managing people was challenging
This is not ending well.
Following typical technology adoption patterns, AI is being deployed function by function in organizations, and the results are exactly what coordination science would predict.
This makes sense for a lot of practical reasons. Engineering needs different help than marketing, which needs different things than accounting, and so on down the line.
But this has created a few really big problems that are now playing out in real time:
Departmental AI agents have little visibility into the nuances of other departments and their use cases, and they cannot manage the inevitable tradeoffs between functions.
Marketing’s AI agent wants feature X included in the next release because it will put the company ahead of the competition, but engineering’s AI agent says that including it will add more than two months to the development schedule and raise the estimated product cost by 10%.
This type of conflict pushes the decision right back to humans with different perspectives and priorities, who must choose between the two options with incomplete information. While humans always work with imperfect information, AI fragmentation is making it worse by creating rapidly moving information silos. Does new information generated by the engineering AI take precedence over new information from the marketing AI?
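The standoff above can be pictured as data: each functional agent emits a recommendation with estimated impacts, and nothing in either agent's local view resolves the contradiction. A minimal sketch in Python (the agent names, fields, and impact numbers are hypothetical, chosen only to mirror the feature X scenario):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    agent: str    # which functional AI produced this
    action: str   # the proposed action
    impact: dict  # estimated effect per metric (hypothetical values)

def detect_conflicts(recs):
    """Flag pairs of recommendations whose estimated impacts on any
    shared metric pull in opposite directions. Neither agent sees the
    other's estimates, so the conflict only surfaces at a layer that
    can compare them side by side."""
    conflicts = []
    for i, a in enumerate(recs):
        for b in recs[i + 1:]:
            shared = set(a.impact) & set(b.impact)
            if any(a.impact[m] * b.impact[m] < 0 for m in shared):
                conflicts.append((a, b))
    return conflicts

marketing = Recommendation(
    agent="marketing",
    action="ship feature X in the next release",
    impact={"competitive_position": +1, "schedule_weeks": 0},
)
engineering = Recommendation(
    agent="engineering",
    action="defer feature X",
    impact={"competitive_position": -1, "schedule_weeks": -9},
)

for a, b in detect_conflicts([marketing, engineering]):
    # Neither agent can resolve this; it gets escalated to humans.
    print(f"conflict: {a.agent} vs {b.agent}")
```

The point of the sketch is structural: conflict detection is easy, but nothing here says which recommendation should win, which is exactly the gap humans are being pulled in to fill.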
And this is not an isolated scenario. The same type of conflict plays out across interactions between every pair of job functions: HR and finance, purchasing and production, and so on. It gets combinatorially more complicated as more functions are involved; n functions create n(n-1)/2 pairwise interactions before you even count three-way conflicts.
Banking organizations are already experiencing conflicts where risk management AI agents flag customers for loan restrictions while relationship management agents simultaneously try to retain those same customers through enhanced service offerings. Manufacturing companies report coordination failures between supply chain AI agents making procurement decisions and production AI agents making capacity planning decisions, leading to production delays and quality issues.
It is still left to humans to make the difficult tradeoff decisions balancing the competing requirements of different functions. While it is technically possible to build an AI to mediate these conflicts, it is unlikely to do a better job than humans. The speed at which all of this is happening is outpacing traditional organizations' ability to manage it. Only 30% of AI pilots reach production because they can't handle this coordination complexity.
Some organizations are attempting federated AI systems or AI orchestration layers, but these approaches bring their own coordination complexities and are proving difficult to implement at scale. Perhaps an AI with cross-functional awareness could anticipate and navigate these inevitable conflicts through better planning and greater visibility, but the world is still governed by the unexpected.
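One way to picture such an orchestration layer is as a thin mediator: it scores agent recommendations against a shared cross-functional priority policy and escalates to a human whenever the policy cannot cleanly separate the options. This is an illustrative sketch, not any vendor's API; the class, the weights, and the escalation margin are all assumptions:

```python
class Orchestrator:
    """Toy AI orchestration layer (hypothetical design): ranks agent
    recommendations by a shared priority policy and escalates close
    calls to humans instead of deciding them."""

    def __init__(self, priority):
        # priority: metric name -> weight, agreed on across functions
        self.priority = priority
        self.escalations = []

    def score(self, rec):
        # Weighted sum of a recommendation's estimated impacts.
        return sum(self.priority.get(m, 0) * v
                   for m, v in rec["impact"].items())

    def decide(self, recs, margin=0.5):
        """Return the highest-scoring recommendation, unless the top
        two scores are within `margin` of each other: then the
        cross-functional call is too close for the policy alone, so
        record an escalation and return None (human required)."""
        ranked = sorted(recs, key=self.score, reverse=True)
        if (len(ranked) > 1
                and self.score(ranked[0]) - self.score(ranked[1]) < margin):
            self.escalations.append(ranked[:2])
            return None
        return ranked[0]

orch = Orchestrator(priority={"revenue": 1.0, "schedule_weeks": 1.0})
recs = [
    {"agent": "marketing", "action": "ship feature X",
     "impact": {"revenue": 2, "schedule_weeks": -1}},
    {"agent": "engineering", "action": "defer feature X",
     "impact": {"revenue": 0, "schedule_weeks": 2}},
]
choice = orch.decide(recs)
```

Even in this toy version the hard part is visible: the quality of every decision rests on a priority table that someone, somewhere, has to negotiate across functions, which is the coordination problem again, one layer up.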
Moreover, AI agents can still be blindsided by black swan events. They don't have any special ability to predict the future.
If your organization struggles with cross-functional collaboration and coordination now, the addition of function-specific AI agents is not making it better. In fact, the evidence shows it's getting worse. Organizations report decision delays when multiple AI agents provide conflicting recommendations, increased manual intervention requirements, and employee resistance when AI agents give inconsistent guidance.
While organizations are exploring centralized AI governance techniques, few companies are adequately addressing coordination and collaboration across function-specific AI agents. With 68-85% of large organizations now deploying AI agents across functions, and 48% failing to monitor these production AI systems, the coordination crisis that coordination science predicted is happening faster than most organizations can adapt.