Smart Operations v2.0
Thomas C. Fountain
Managing Member, TCF & Associates, LLC
An updated vision for business transformation that delivers a digitally enabled business operating model based on optimizing the business response to actual and predicted events
Executive Summary
This document presents an updated vision for Smart Operations, a digitally-enabled business operating model that optimizes business responses to actual and predicted events through intelligent processes, AI, and advanced analytics. It builds on a 2003 foundational vision that emphasized near real-time decision-making and process optimization, integrating people, processes, and technology to achieve hyper-competitiveness in dynamic markets.
Introduction and Evolution of Smart Operations
Smart Operations originated from experiences at General Electric and Honeywell, combining Six Sigma, Lean principles, and early digital technologies to improve speed, cost, and quality in business processes. The original vision focused on automating well-defined process steps and leveraging analytics for decision support. Over two decades, advances in cloud computing, AI (including machine learning and generative AI), and cybersecurity have expanded the potential of Smart Operations, necessitating a refreshed vision—Smart Operations v2.0—that fully exploits these technologies while addressing business and people change management challenges.
Core Concepts of Smart Operations v2.0
Smart Operations v2.0 retains the original principles of optimizing business execution but now incorporates AI and next-generation capabilities to vastly expand the solution space for optimization. The approach is evolutionary, enabling incremental benefits and trust-building through phased investments. The model emphasizes a “Sense-Analyze-Optimize-Respond” cycle inspired by the OODA loop, enhanced by the proliferation of digital sensors and digital twins that provide rich environmental and operational data for predictive and prescriptive analytics.
Smart Operations Build-Out: Five Key Steps
- Understanding Processes: Deeply map and characterize current business processes, capturing both explicit and tacit knowledge, measuring performance at task and end-to-end levels, and benchmarking against entitlement and target capability levels to identify improvement strategies.
- Classic Automation: Apply mature automation techniques to low-complexity, low-variability process tasks, prioritizing improvements by ROI and impact on overall process performance.
- Task-specific Agents: Deploy AI-powered agents to handle more complex and variable tasks, leveraging machine learning and generative AI. Integrate these agents into a unified framework to enable consistent development and operations.
- Agentic Capabilities: Enable collections of agents to interoperate and optimize collectively across broader contexts, achieving global optimization beyond local task improvements. This includes modeling complex business problems such as demand forecasting, inventory management, and logistics optimization, culminating in execution orchestration across internal and external resources.
- Continuous Improvement: Continuously enhance agent capabilities, data acquisition, partner engagement, and orchestration reach to expand the solution space and improve business outcomes.
Business Transformation Considerations
The document identifies contemporary challenges such as accelerated product cycles, complex supply chains, geopolitical risks, and ESG considerations, illustrating how Smart Operations can address these through enhanced modeling, prediction, and optimization. It also highlights opportunities like sovereign interests, AI/technology advances, and unprecedented data access, which Smart Operations can leverage for competitive advantage.
Core Transformation Strategies
A successful transformation requires clear financial objectives, business strategy mapping, competency assessment, and a thorough understanding of existing applications, analytics, and enterprise data. Business capability and process design/improvements are essential, including specific plans for process, people, and technology improvements, all supported by a robust communications infrastructure to maintain organizational alignment and trust.
Key Constituencies in Transformation
Five key groups are identified: Senior Business Leadership, Deal Team & Investment Professionals (especially in Private Equity contexts), Functional Leaders & Operating Partners (again in PE situations), IT Teams, and Technology & Service Providers. Each plays a vital role in sponsoring, designing, executing, and sustaining the transformation, with distinct contributions and benefits.
The Future of Work
With the introduction of sophisticated AI and other analytical technologies, organizations must anticipate the impact on organizational design and individual responsibilities. A segmentation approach is introduced that characterizes: a High Volume / Low Complexity set of activities well served by “automation” capabilities; a Low Volume / High Complexity segment where advanced analytics and AI can outperform humans in speed and analytical depth; and a third segment where humans lead the critical responsibilities of setting vision, goals, boundaries, and company principles, and provide general oversight for first-of-kind and unanticipated situations where a machine-only scenario does not hold up well.
Architectural Layers
The architecture is layered into “below the line” and “above the line” components. Below the line includes traditional enterprise Applications, Infrastructure, Analytics & Data Management, and Middleware. Above the line introduces dynamic, modular layers for Agent and Agentic enablement, Intelligent Business Process Management, Classic Automation, Agent-based Execution Management, and Agentic Execution Management leveraging Context and Communications Management. This layered approach supports speed, adaptability, and scalability in business operations and flexibility for a rapidly evolving technology landscape.
Solution Operating Model
The model conceptualizes a “Sphere of Optionality” within which data is analyzed and business decisions are made to activate optimization strategies. Three dimensions are discussed: Time (past, present, future), Cross-Functionality (integrating multiple business functions), and Internal/External (data and processes inside and outside the enterprise). The goal is to find a “Point of Optimality” within this sphere that maximizes business outcomes. The system must support dynamic re-optimization triggered by significant changes in inputs or constraints, balancing responsiveness with practical considerations like human action-taker stability.
Business and People Change Management
A balanced approach segments work into three categories: high volume, low complexity tasks suited for automation; low volume, high complexity tasks suited for machine learning and other high-powered analytical techniques; and a human-centric set of strategy, vision, and compliance tasks that retain critical human judgment and ethical oversight. The document emphasizes proactive organizational development, skills assessment, learning path design, and transparent communications to prepare the workforce for transformation and foster engagement and trust.
Program Guiding Principles and Operating Model
The transformation program should be modular, with clear roles for diverse constituencies and robust communication both internally and with external partners. It must demonstrate incremental value through phased investments, maintain rigorous program and change management disciplines, and prioritize people-centric approaches to ensure adoption and success. The program phases include chartering, prioritization, process capability assessment, roadmap development, business case creation, sponsorship, deployment of automation and agentic layers, and continuous performance measurement.
Role of Technology and Service Partners
Given the complexity and scope of Smart Operations transformation, external partners are essential for business consulting, technology strategy and architecture, program and change management, and technology implementation. Partners bring expertise in aligning technology with business strategy, managing multi-phase programs, driving adoption, and delivering measurable business value. Proposed financial models create a low upfront investment approach with gain-sharing arrangements to bootstrap the transformation journey and sustain funding.
Critical Success Factors
The document underscores the importance of clear vision, continuous communication, disciplined program management, robust value measurement, and especially the central role of people in embracing and leveraging new capabilities. These factors are pivotal in achieving a thriving, continuously improving, high-performing enterprise enabled by Smart Operations.
Contents
Section 1: Smart Operations & Strategic Transformation
1. Introduction – The Genesis and Evolution of “Smart Operations”
3. Business Transformation Considerations
4. Core Transformation Strategies
Section 2: Architecture & Operating Model of Smart Operations
8. Solution Operating Model
Section 3: Transformation Programmatics
9. Business Change Management
10. People Change Management
11. Program Guiding Principles
12. Program Operating Model
14. Technology & Services Partners
15. Financial Resourcing & Partner Engagement
16. Identification, Quantification, and Measurement of Business Value
17. Critical Success & Risk Factors
Section 1: Smart Operations & Strategic Transformation
1. Introduction – The Genesis and Evolution of “Smart Operations”
In 2003 I published an initial whitepaper outlining a vision called “Smart Operations” – a vision which characterized the transformation of a business which:
- actively built and deployed intelligent business processes
- that leverage the accumulated intelligence of its ecosystem
- to orchestrate the optimal response to actual and predicted events
The heart of this concept was the idea that businesses that can intelligently make near real-time, optimizing business decisions are positioned to out-execute the competition. A business made up of people, processes, and various systems (transactional, analytical, and planning), guided by analytics and intelligent process management and with those elements amalgamated in a dynamically optimized context, would be highly competitive in any industry.
The initial formulation of this vision was grounded in part by real-life experiences while employed initially by General Electric and later Honeywell International. At GE, the process capability and improvement disciplines of Six Sigma were broadly adopted and deployed in all GE businesses. The core tenets centered on a deep understanding of current process capability, entitlement, and target performance and provided a robust, data-driven approach to either improving or re-designing processes that were not performing at target levels. At Honeywell, lean principles were added to Six Sigma to root out waste and simplify some of the heavy analytical elements of the full discipline yet deliver similar process improvements. I saw first-hand the significant improvements in speed, cost, and quality of many key processes from on-time product delivery to reducing cycle times for jet engine design and validation. In addition to the core improvements realized, these frameworks rightly included the sustainment or “control” element that was a required part of every project to ensure that performance did not revert to prior bad practices once the project spotlight had faded.
Another formative component of the vision was the emerging discussions from the technology industry leaders of the day about how emerging technology could accelerate product and service design, manufacturing, and delivery. Bill Gates of Microsoft wrote of the “Digital Nervous System” and we recall TV ads where robotic paint sprayers immediately reacted to customer order changes. Hewlett-Packard espoused an “Adaptive Enterprise” strategy where fast compute and analytics would speed decision-making while IBM promoted the concept of “Smarter Planet” where intelligent processes leveraged broad insights. One could successfully argue that none of these concepts were fully realized and further make a case that to successfully transform a business you cannot rely solely on advanced technology. In this paper considerable content will be offered to emphasize the holistic nature of the proposed business transformation journey and the necessary deep interconnectivity between people, processes, and technology.
This original Smart Ops (for short) vision was grounded in the technology of the early 2000s, namely basic business process management platforms, transactional applications, and increasingly data warehouses and basic query/reporting tools. The overall proposed approach was evolutionary in nature and started with basic process automation. There, early robotic capabilities (as we now call them) would automate processes or steps where inputs, outputs, and processing rules were well bounded and relatively easily encoded into a modest set of rules to perform useful work. In any business process it is reasonable to expect a select set of steps where the input-to-output transformation has a manageable degree of complexity and scale. These are ripe candidates for the relatively simpler automation techniques of the day. There is no doubt that meaningful improvements were made in the speed, cost, and/or quality of key processes such as the examples above, even with these now rudimentary technologies.
If we fast forward 20 years, the explosion of data, operating complexity, number of suppliers, depth of supply chain, and the like has taken us far beyond where those conventional tools would have been applicable. Fortunately, recent advances in cloud computing (low cost, high scale compute), high bandwidth networks, AI (both machine learning and generative AI) to deal with fantastically large data sets and derive intelligence, and cybersecurity have positioned us to pursue the full potential of Smart Ops. We now have the opportunity to respond to today’s business scenarios and capitalize on such tools while ensuring our tech strategy is robust to the ongoing advancements we see coming.
Therefore, we need to refresh the Smart Operations vision to take full advantage of today’s technology and enablers for what I will call Smart Operations v2.0. Note that most of the principles of the original Smart Ops still hold – faster, optimized decision-making, centered against business objectives, while bounded by a set of active constraints, and leveraging the capabilities of Partners, Suppliers, Customers, and of course internal resources. Such an aggregated set of capabilities should position any business for hyper competitiveness. As we look forward, the evolution of tools and platforms, particularly inspired by new AI developments, promise to provide breakthrough capabilities and accelerate activation of such a vision. We will delve into the dynamics of this build-out later in the paper.
We can imagine an evolutionary path being the most reasonable, such that businesses can follow a smooth investment profile aligned to this broader vision but which yields incremental benefits that in turn help bootstrap follow-on projects. This mechanism should offer a lower risk profile, be more easily funded, and critically, would build increasing levels of trust and credibility for the implementation team. Evolutionary approaches are also highly sensible given the current rate of technology introduction (especially in the AI space), as we can take advantage of new developments in turn without wholesale “rip-and-replace” events. We should anticipate, for example, transformative market offerings such as an “Agent Marketplace” full of third-party, ready-to-use, pay-per-use AI agents that can be tapped into specific flows and activities on demand.
While the emerging technology is exciting and empowering, we must ground ourselves in the realities of both business and people change management. Incredible numbers of articles are pouring forth daily predicting or reflecting the impact these new technologies will have on the workforce, the business operating model, and even the nature of work itself. Content within this paper will characterize important considerations and practical steps a business should take to prepare itself and its staff for these opportunistic, sometimes traumatic, and unquestionably dramatic changes on the horizon. If done proactively and thoughtfully, organizations can take a leadership role in structuring change in a positive way and energize their staff to continually adapt and leverage technology for long-term professional success.
In addition to optimizing pure business outcomes, we also expect an evolution in how this “Smart Ops” engine is built and operated. While contemplating the ultimate scale and cost of running this construct, we should consciously build in operational approaches that leverage a dynamic mix of self-built, purchased, leased, or pure pay-per-use resources, where optimization capabilities determine when and where to use, for example, different agents that have varying cost, performance, and capability profiles best suited to a current unit of work. This is not something that will happen overnight, but there are critical enablers that we must consciously construct to enable evolving our agent and agentic capabilities. Businesses will also be well served to rigorously understand their current capabilities in terms of data, business processes, application landscapes, business sponsorship and governance, the capacity to invest, and critically, the partnerships they have in place or can develop to tap into the required expertise, scale, and thought leadership.
2. Smart Operations v2.0
As we characterize Smart Operations v2.0 from a vision perspective, much survives from the original concepts. I still foresee optimizing the execution of business, but now deeply enriched by AI and other next-generation capabilities that can materially expand the solution space over which we attempt to optimize performance. In the original Smart Operations paper, the contemplation of both short-term and long-term decision making was limited by the technology and compute capacity of its day. Now, with the advent of the public cloud and a host of other new technologies, the computational, integration, and data management boundaries that might have existed two decades ago have largely evaporated. That frees us to think about a virtually unconstrained, far-reaching model driven by an understanding of our business and the ability to specify the constraints and decision variables that must be considered at any point in time. As was said two decades ago, businesses must be keen to understand both their internal capabilities and those of their suppliers, customers, and partners – many of which are hyper-specialized and focused in a given area of functionality with leading capabilities and performance.
While the reader is invited to review the original paper and its in-depth examination of the various building blocks and example use cases, it is instructive to summarize the major tenets of the vision and explain practical applications of its multi-faceted offerings.
At its outset, Smart Operations is based on the concept of Sense-and-Respond. This was an early style of thinking that promoted the rapid activation of one or more, typically pre-developed, responses selected based on a sensed event detected by a decision-maker or action-taker. With the rise of computer-based sensors and analytical capabilities in the late 1990s and early 2000s, the opportunity to not only automate specific activities but introduce “intelligence” started appearing in various use cases in hopes of improving both the speed and efficacy of the response. Many of the early solution designs centered on simple trigger-based responses where a set of possible responses was considered and one selected in response to a specific sensed variable value or event. As compute power continued to accelerate, both the speed and depth of analyses possible in the loop also grew. That in turn created the opportunity for enhancements in the breadth and depth of decision-making while maintaining the inherent cycle time design target of the process in question. In the original paper, I advocated for pulling apart the Sense-Respond pair and actively inserting Analyze and Optimize steps to demonstrably characterize the dimensions needed to effect the truly best response. I paid homage to the famous “OODA” loop principles of Observe-Orient-Decide-Act, comparing a fighter pilot’s approach to winning a dogfight to modern businesses combatting the myriad competitive, customer, supplier, and other dynamics all conspiring to challenge business execution.
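The Sense-Analyze-Optimize-Respond cycle described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the function names and the demand/capacity signal are invented for the example, and a real deployment would sense from actual instrumentation and optimize over far richer decision variables.

```python
import random

def sense():
    """Read the latest signal from a (simulated) sensor feed."""
    return {"demand": random.randint(80, 120), "capacity": 100}

def analyze(event):
    """Derive a simple insight: the projected capacity gap."""
    return event["demand"] - event["capacity"]

def optimize(gap):
    """Select the best response from a set of candidate actions."""
    if gap > 0:
        return f"add {gap} units of surge capacity"
    if gap < 0:
        return f"redeploy {-gap} units of idle capacity"
    return "hold current plan"

def respond(action):
    """Dispatch the chosen action to execution systems."""
    return f"executed: {action}"

def sense_analyze_optimize_respond(cycles=3):
    """Run the full loop for a fixed number of cycles."""
    results = []
    for _ in range(cycles):
        event = sense()
        gap = analyze(event)
        action = optimize(gap)
        results.append(respond(action))
    return results
```

The point of separating the four stages, as the original paper advocated, is that the Analyze and Optimize steps can grow arbitrarily sophisticated without changing the overall loop contract.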
An exciting additional development over the last two decades is the explosion of digital sensors now deployed across a multitude of physical situations. Sensors that cover acoustic, temperature, vibration, humidity, and many other environmental and physical disciplines have greatly expanded our ability to model and understand our operating environments. This deeply enriches our solution strategy by covering both the physical and digital elements of the solution space. In fact, marrying digital intelligence to physical, real-world capabilities and activities provides a much more realistic and representative model of how we conduct business. With digital sensors feeding analytical routines that help us detect or even predict a much wider range of events, we greatly improve our ability to respond optimally in real-life, executable ways.
A close companion to the explosion of digital sensors is the emergence of sophisticated “digital twins.” These “twins” are in fact virtualized representations of physical members of our environment and are actual models that accurately characterize design, performance, failure modes, and many other aspects of operation. For industrial scenarios, these twins represent a unique opportunity to digitally build, test, operate, and simulate business operations without the necessary investment in physical devices and space. With such models we can more accurately predict performance and failure modes under varying operating conditions, which is invaluable for building a Smart Operations foundation.
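To make the digital twin idea concrete, here is a deliberately toy sketch of a twin for a pump. The class name, coefficients, and failure threshold are all invented for illustration; an industrial twin would be built from engineering models and calibrated against sensor history, but the shape is the same: a virtual model you can query for predicted performance and failure risk under hypothetical operating conditions.

```python
class PumpTwin:
    """A toy digital twin of a pump: a simplified model that predicts
    operating temperature from load and ambient conditions.
    All coefficients are illustrative, not real engineering values."""

    def __init__(self, base_temp_c=40.0, heat_per_load=0.5, failure_temp_c=90.0):
        self.base_temp_c = base_temp_c        # self-heating at idle
        self.heat_per_load = heat_per_load    # degrees added per % load
        self.failure_temp_c = failure_temp_c  # temperature at which failure is expected

    def predict_temp(self, load_pct, ambient_c):
        """Predicted steady-state temperature for a hypothetical scenario."""
        return ambient_c + self.base_temp_c + self.heat_per_load * load_pct

    def failure_risk(self, load_pct, ambient_c):
        """Crude 0..1 risk score based on proximity to the failure limit."""
        temp = self.predict_temp(load_pct, ambient_c)
        return max(0.0, min(1.0, (temp - self.failure_temp_c + 20) / 20))
```

The value for Smart Operations is that such a model can be run thousands of times against candidate operating plans before anything is committed in the physical world.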
At the heart of the Smart Operations philosophy is a simple fact: every ounce of insight and predictability we can extract from both digital and physical operations translates to increased lead time and certainty of forthcoming events, which in turn creates a larger envelope in which to determine optimizing future actions. That is, actively enabling and building such capabilities will maximize the solution space over which we can consider orchestration (response) alternatives. Increased lead time means we can choose possibly slower but better actions that lead to improved outcomes. Increased certainty means we can consider tighter confidence intervals and improve the convergence of predicted to actual outcomes.
In fact, nearly every new technology development over the last 20 years will, not surprisingly, improve the speed, cost, and quality of the Smart Operations build-out.
Smart Operations Core Elements & Build-Out
It is critical to balance long and short-term perspectives when undertaking the build-out of a transformative strategy like Smart Operations. In the short term when initial skepticism and lack of confidence dominate, it is crucial to engage a core set of individuals who believe in the vision yet are grounded in the practicalities of delivering value to the business. That core team must diligently pursue fundamental elements of any successful team looking to drive true transformation, including:
- listening to leaders, process owners, and front-line employees, taking the time to understand the business problem and the individual’s personal investment in meeting business goals
- capturing and analyzing data to truly understand the dynamics of the opportunity and accurately depict current and target state
- effectively communicating what the team plans to do and the participation required from all constituents
This core team must also quickly realize that they alone will not “fix” the business but rather they must engage up, down and across the organization, to leverage the collective knowledge, skills, and passion of the entire team. More details about business and people change management will be presented later in the paper, but it is worth mentioning here as it is fundamental to every aspect of the Smart Ops build-out.
The first stage of a Smart Operations transformation relies on a foundational belief that business processes are the fundamental unit of business activity. From taking orders, to manufacturing product, to shipping, servicing, and collecting cash, business processes are the blueprint for how a business works. Even highly creative activities like Process R&D or Marketing campaign design live within a business process. Business processes are effectively containers where inputs are applied, activities undertaken, and outputs delivered. With a core focus on the design, operation, measurement, and continuous improvement of these processes, we can communicate intent, assign responsibilities, measure business performance, maintain accountability, and ultimately deliver value to customers. Our journey will start and finish with how effectively we can influence continuous improvement in these processes as a measure of our ability to sustain differentiated competitiveness in the market.
Step 1: Understanding your Processes
With the core focus on business processes in place, the first step of Smart Operations is to embrace the need to deeply understand one’s current processes. To be sure, it is entirely acceptable to focus initially on one specific business area or operation, as it could take months or longer to fully map, characterize, and assess the multitude of business processes in a modern enterprise. In our desire to quickly establish a foothold of trust and credibility, it is highly advisable to start small, show quick but meaningful wins, gain more converts as believers in the vision, and move on to other business areas. As such, the core team should deeply map and characterize the initial one or few processes, being sure to engage with process owners, designers, and execution staff. Figure 1 represents an example process where Smart Operations has been applied to Sales Operations and will be referred to throughout this section. Taking the time to document the written and unwritten rules, capturing knowledge not in any systems, and faithfully representing how work gets done will be both rewarding and engaging for everyone involved. Like it or not, keeping the truth hidden about errors, poor quality, re-work, and other process deficiencies will quickly destroy any hopes of driving a truly successful transformation.

Figure 1. Application of Smart Operations 2.0 – Agentic Sales Operations
As you map and capture data about speed, cost, and quality you are positioned to properly characterize overall performance. Note that not only must end-to-end performance be characterized, but each step (often called “tasks” today) must also be characterized. Without a step-by-step understanding of your processes, you are rendered powerless to really understand what may be contributing to poor overall performance. Fortunately, there are great tools now available to help you in these activities including process modeling based on transactional data, extracting intrinsic knowledge from workers through AI-structured interviews, etc.
As you complete the analysis of your process you can now effectively baseline performance and critically benchmark it against the required performance levels needed for business success. With that comparative, and an understanding of the process entitlement (max performance potential of the current process design) you can determine your improvement strategy – either remediate the current process or fundamentally re-design for higher potential performance. With this baseline knowledge and a decision on improvement strategy, we can proceed to the next step of the transformation journey and begin the targeted improvement of key process elements until the required performance levels are attained.
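The baselining logic above can be sketched as a small function. This is an illustrative simplification, assuming cycle time (in hours) as the single performance measure; the task names and numbers in the usage example are invented, and a real assessment would also cover cost and quality dimensions.

```python
from statistics import mean

def baseline_process(task_cycle_times, target_hours, entitlement_hours):
    """Baseline end-to-end performance from per-task cycle-time samples
    and decide an improvement strategy against target and entitlement.

    entitlement_hours: the best performance the current process design
    can achieve; target_hours: the performance the business requires.
    """
    task_means = {task: mean(samples) for task, samples in task_cycle_times.items()}
    end_to_end = sum(task_means.values())
    if end_to_end <= target_hours:
        strategy = "sustain"      # already at required performance
    elif entitlement_hours <= target_hours:
        strategy = "remediate"    # current design can still reach target
    else:
        strategy = "redesign"     # target exceeds the design's entitlement
    return {"task_means": task_means, "end_to_end": end_to_end, "strategy": strategy}
```

For example, if order entry averages 2.5 hours, credit check 9 hours, and fulfillment 22 hours against a 24-hour target and a 22-hour entitlement, end-to-end performance is 33.5 hours and the verdict is "remediate": the design can reach target, so redesign is not yet warranted.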
Step 2: Classic Automation
As you examine and profile tasks across the collection of activities in an end-to-end process, you will likely see a clustering of tasks based on relative complexity. Specifically, one can categorize tasks by examining the variability within each task’s inputs, activities, and outputs. It is likely that some tasks can be categorized as “low complexity” because there is little variation in those three elements and, as such, it is relatively easy to codify a clear set of “business rules” that transform inputs to outputs. In these lower complexity cases, traditional automation techniques can readily be applied. Referring to Figure 1, such classic automation techniques are represented as dotted process steps across the end-to-end workflow. These techniques are highly mature and easily specified for a given task. In these cases, businesses are well positioned to deliver solid improvements in speed, cost, and/or quality in the execution of that task. As an aside, do account for reduction in cycle time variability (versus mean reduction only) as a worthy benefit to note.
Importantly, because you have profiled both task level and end-to-end performance, you will understand the incremental impact on overall performance derived from improving each different task. That detail, combined with clarity of the expense to achieve that improvement, will allow you to develop an optimal sequence of improvement actions. Executing these improvements in the best ROI order will further accelerate overall performance improvement and continue the critical trust and credibility building for the team.
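The ROI-ordering idea can be captured in a few lines. This is a sketch under a simplifying assumption: each candidate improvement is summarized by the end-to-end hours it saves and its implementation cost, and ROI is the ratio of the two. The task names and figures in the example are hypothetical.

```python
def prioritize_improvements(candidates):
    """Rank task-improvement candidates by ROI: end-to-end cycle time
    saved per unit of implementation cost, best first.

    Each candidate is a dict: {"task", "hours_saved", "cost"}."""
    ranked = sorted(
        candidates,
        key=lambda c: c["hours_saved"] / c["cost"],
        reverse=True,
    )
    return [c["task"] for c in ranked]
```

So a credit-check fix saving 8 hours for 1 unit of cost (ROI 8.0) would be sequenced ahead of an invoice-matching fix saving 10 hours for 2 units (ROI 5.0), even though the latter saves more hours in absolute terms.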
Step 3: Introduction of Task-specific Agents
Having worked through the low complexity/variability process tasks we are poised to attack the more complex/variable tasks in our process. These tasks are the biggest beneficiary of recent AI developments given AI’s general excellence at completing larger scale tasks that require analysis of large data sets and document libraries and/or generating content based on guided modeling. Here we imagine a broad library of Agents accumulating over time, from which we can select and deploy against specific needs within various processes. Some agents will employ Machine Learning (ML) techniques to analyze large data sets and predict customer activity, some will use Generative AI (GenAI) to prepare document summaries or e-Mails to customers, yet others could even collect data and formulate a series of constraints for a downstream optimizer.
In a fashion similar to what was discussed in Step 2, we again profile the highest ROI cases where specific agents are bought/built and deployed against poorly performing process tasks and create measurable improvement for end-to-end process performance. In this step we must also, of course, have built out the ability to develop, test, deploy, manage, and optimize Agents in our environment (more on that later). These Agents are represented in Figure 1 as execution mechanisms paired with each step of the core process. As we successively deploy additional Agents we will see a commensurate rise in overall process performance. Within this step we will also want to begin transitioning the automations from Step 2 into our agent framework to allow them to participate in a cooperative way with all other Agents. Here we can imagine our basic automation steps being wrapped with Agent behaviors to facilitate communications and management. This will prepare our solution for follow-on phases of activity.
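The unified agent framework described above, including wrapping Step 2 automations so they participate alongside AI agents, can be sketched with a common interface and a registry. All class and method names here are invented for illustration; a production framework would add lifecycle management, monitoring, and security around the same basic contract.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Common contract so task-specific agents and wrapped classic
    automations can be developed, deployed, and orchestrated uniformly."""

    def __init__(self, name):
        self.name = name

    @abstractmethod
    def run(self, task_input):
        """Perform this agent's task and return its output."""

class SummarizerAgent(Agent):
    """Stand-in for a GenAI-backed agent; here it merely truncates text."""
    def run(self, task_input):
        return task_input[:40]

class WrappedAutomation(Agent):
    """Wraps a Step 2 rules-based automation in Agent behavior so it can
    cooperate with the other agents in the framework."""
    def __init__(self, name, rule_fn):
        super().__init__(name)
        self.rule_fn = rule_fn

    def run(self, task_input):
        return self.rule_fn(task_input)

class AgentRegistry:
    """Central catalog for registering agents and dispatching work to them."""
    def __init__(self):
        self._agents = {}

    def register(self, agent):
        self._agents[agent.name] = agent

    def dispatch(self, name, task_input):
        return self._agents[name].run(task_input)
```

Because everything behind the registry honors the same `run` contract, downstream orchestration logic never needs to know whether a given step is classic automation, an ML model, or a GenAI agent.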
Step 4: Introduction of Agentic Capabilities
Having now built out a collection of Agents to perform task-specific duties and accumulated a series of point improvements in our process, we may well seek additional improvements to reach target performance. To break through the limitations of our current solution we will rely on the old adage from linear programming: “the global optimum beats the sum of local optima.” Translating to our situation, we need to move to a state where collections of Agents can cross-communicate and even cross-optimize behaviors across their collective scope in hopes of finding an even better solution for each instance of the process. Effectively, by creating context across multiple Agents we enlarge the solution space across which our collection of agents can trade off upstream and downstream possible responses to find the overall best response to the current inputs and situation. Such operating context is represented in Figure 1 as the dotted lines encompassing groups of individual Agents.
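A toy numerical example makes the “global beats local” point concrete. Here an upstream Agent’s cost favors larger batches (setup amortization) while a downstream Agent’s cost favors smaller ones; the cost functions and candidate batch sizes are invented purely for illustration.

```python
# Toy illustration of "the global optimum beats the sum of local optima."
# Each Agent alone would pick differently than the pair picks jointly.

batch_sizes = [10, 20, 50, 100]

def upstream_cost(b):
    return 1200 / b + b      # falls then rises; cheapest near b = 50

def downstream_cost(b):
    return 3 * b             # grows linearly with batch size

# Local optimization: the upstream Agent minimizes only its own cost.
local_b = min(batch_sizes, key=upstream_cost)
local_total = upstream_cost(local_b) + downstream_cost(local_b)

# Global (cross-Agent) optimization: minimize the combined cost.
global_b = min(batch_sizes, key=lambda b: upstream_cost(b) + downstream_cost(b))
global_total = upstream_cost(global_b) + downstream_cost(global_b)

print(local_b, local_total)    # 50 224.0
print(global_b, global_total)  # 20 140.0
```

The upstream Agent’s locally best batch size (50) yields a worse combined result than the jointly optimized choice (20), which is precisely the gain the shared context unlocks.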
Creating this context and allowing cross-Agent optimization is the province of “Agentic” behavior. In this realm, collections of Agents, often structured hierarchically, inter-operate to find the best possible outcome against their current tasking. For a business this could include maximizing operating margin by determining what goods to offer, from which warehouses, at which prices, delivered by which carriers for what costs. A complex business problem (opportunity) such as this requires consideration of a vast range of alternatives against a series of inputs and constraints, as well as an optimization engine capable of performing the analyses required to identify the best possible outcome consistent with those inputs and the current state.
Let’s consider the practical application of an Agentic model. In the situation where a retailer wishes to maximize profit over a given time period, we have to consider actual and/or predicted demand and how to most profitably serve that demand. To tackle this opportunity, we first must predict new demand and combine with existing demand to set a baseline for overall Demand to be served. Then we must aggregate available inventory from across a range of warehouses and combine with added supply we could acquire from wholesalers (with what lead times and at what cost) within the time period under consideration. Next, we must gather distribution capabilities of our own delivery fleet and combine with availability and price of 3rd party delivery partners. Finally, we must codify any operating constraints that could curb our plans but must be honored to stay within legal, regulatory, contractual, or other such limits. With all of those inputs in place we have fully characterized the solution space that bounds the selection of our operating decisions that will maximize profit.
The last step of formulating our problem is using yet another Agent to build our “optimization function.” This serves as a description of our business goal and what the optimizer will seek to maximize within the constraints and decisions it can make. This particular Agent would be trained by “reading” past financial statements that characterize all financial elements that are part of the calculation of a firm’s operating margin. The function builder Agent would re-create that formula to consider all appropriate revenue and cost elements, financial adjustments, etc., that yield a final margin result.
With the problem formulated (objective function, decision variables, and constraints), we can now call on an “Optimizing” Agent to perform the analysis and deliver the specific decision variable values that serve as the operating plan for our business. These decision variables tell us what to source, from whom, and at what cost, as well as expected sales, at what prices, to what customers, in which stores, plus which warehouses will supply product, who will deliver, and at what costs. Ultimately the planned business activities and derived revenues and costs deliver the optimized level of income.
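To make the formulation tangible, here is a deliberately simplified sketch of the retailer problem: two decision variables (units sold from warehouse stock versus units acquired from a wholesaler), a profit objective, and demand/supply constraints. All prices, costs, and quantities are hypothetical, and a brute-force search stands in for the Optimizing Agent’s real solver.

```python
# Simplified retailer formulation: maximize profit subject to demand and
# supply constraints. All figures are invented; brute force stands in
# for a proper optimization engine.

demand = 120            # predicted plus existing units demanded this period
warehouse_stock = 80    # units already on hand
wholesale_limit = 100   # maximum units acquirable from wholesalers
price = 60              # selling price per unit
warehouse_cost = 30     # effective cost per warehouse unit
wholesale_cost = 45     # cost per wholesaler-acquired unit
delivery_cost = 5       # delivery cost per unit, either source

def profit(from_warehouse, from_wholesale):
    """Objective function: revenue minus sourcing and delivery costs."""
    units = from_warehouse + from_wholesale
    revenue = price * units
    cost = (warehouse_cost * from_warehouse
            + wholesale_cost * from_wholesale
            + delivery_cost * units)
    return revenue - cost

# Decision variables: (units from warehouse, units from wholesaler),
# constrained by stock on hand, wholesale availability, and total demand.
best = max(
    ((w, s) for w in range(warehouse_stock + 1)
            for s in range(wholesale_limit + 1)
            if w + s <= demand),
    key=lambda plan: profit(*plan),
)
print(best, profit(*best))  # (80, 40) 2400
```

The optimizer exhausts the cheaper warehouse units first and tops up from the wholesaler only while its per-unit margin stays positive, which is exactly the trade-off reasoning the Agentic model automates at scale.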
The last step of our solution flow is to activate our optimal plan across the set of action-takers in our operating environment. Initially we would limit such “orchestration” to internal resources over which we have definitive and predictable control. One, or more likely a series of, “orchestration” Agents would send “signals” to applications (move which inventory between which two warehouses), operators (make 50 units of which product), suppliers (deliver which products to which Customers from which warehouse), and so on to effect activation of our derived operating plan. Orchestration Agents could take many forms, from an API call to an application, to the creation and sending of an e-Mail to an action taker, to a transmission to an operating device to change its behavior. There is a wide range of such actions that over time are built out, put in a library, and routinely selected and used across a diverse range of business processes.
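A minimal sketch of the orchestration idea: each “signal” type maps to a delivery channel, and a dispatch routine routes the derived plan. The channel names and payloads are invented, and the handlers merely record signals where a real system would call live applications, send e-Mail, or address devices.

```python
# Hypothetical sketch of an orchestration layer: each signal type maps
# to a channel. Handlers here just record the signal; real ones would
# call out to live systems.

sent = []

def api_signal(payload):
    sent.append(("api", payload))        # e.g., POST to an application

def email_signal(payload):
    sent.append(("email", payload))      # e.g., draft and send an e-Mail

def device_signal(payload):
    sent.append(("device", payload))     # e.g., change a machine setting

CHANNELS = {"api": api_signal, "email": email_signal, "device": device_signal}

def orchestrate(plan):
    """Dispatch each action in the derived operating plan to its channel."""
    for action in plan:
        CHANNELS[action["channel"]](action["payload"])

orchestrate([
    {"channel": "api",    "payload": {"move": 40, "from": "WH-1", "to": "WH-2"}},
    {"channel": "email",  "payload": {"to": "supplier", "order": 100}},
    {"channel": "device", "payload": {"line": 3, "rate": "+10%"}},
])
print(len(sent))  # 3
```

Because each channel is just another entry in a library, new kinds of action-takers can be added over time without disturbing the plan-generation logic upstream.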
Of course, depending on the time horizon of our planning, we will likely face changes in what is typically a dynamic operating environment. We will discuss how this solution handles changes in inputs, constraints, and other factors later in the paper, but for now we have our core solution in place. That is, a dynamic ability to gather inputs, understand constraints, specify our goal function, determine optimizing actions, and orchestrate execution. The Agentic model allows us to sequentially add functionality, scale, and intelligence to successively improve our performance by continuously expanding our reach across an enlarged solution space.
Step 5: Continuous Improvement of Business Performance and Smart Ops Execution
As we deploy our Agentic solution, we naturally want to ensure we can continuously improve our results by adapting to changes and new opportunities in our operating environment. Here we must in part rely on our business architects to be constantly assessing where our next sources of improvement may come from and then partner with the solution architects to bring to life the needed enhancements in our solution. If the core optimization techniques described above hold true, then we have a blueprint of where to look for opportunities to further improve our results. Examples include:
- seek out ever “smarter” agents which more accurately, quickly, and/or cheaply model, predict or optimize outcomes
- seek out agents which can acquire new, value-added data which improves predictability and richness of our understanding of the dynamic environment around us
- seek out agents that engage and interact with new customers, suppliers, and partners to enrich the universe of those we could do business with (dependent of course on finding new partners who possess better operations, supply, demand, or IP we can leverage to improve our business)
- seek out agents that can reach additional types of action takers such that our orchestration options are enriched
In summary, these agentic improvements and expansions serve to expand our solution space, improve the precision of the recommended actions, access and utilize new partners who bring enhanced capabilities, and activate decisions more effectively across our operating environment.
3. Business Transformation Considerations
With a core understanding of Smart Operations v2.0 in hand, we need to now step back and create the business context and motivations that validate its applicability to modern business situations. We can do so by cataloging a sampling of the core business challenges and opportunities seen today and exploring each from the point of view of how Smart Ops can help businesses successfully deal with those dynamics.
Sampling of Challenges
In the two decades since the original Smart Operations paper, there have been dramatic changes in the operating context for many businesses that have compounded typical challenges and caused a fundamental re-thinking of how a business understands its markets, customers, supply chains, competitors, and products. Here are a few of those challenges, along with select examples of how businesses could successfully confront and respond to them.
Business and Product Cycle Acceleration
Largely based on the rise of technology-enabled and technology-accelerated processes, many businesses have seen a significant shortening of business and product cycles. The rates of new product introduction have increased in many industries, and suppliers to those producers have been pressed to speed up their innovation and delivery capabilities as a result. That in turn has put pressure on core business processes including new product development, supplier and material qualifications, contractual and regulatory compliance, and other key elements. To meet these demands, businesses must find ways to accelerate their own execution in how they serve their Customers. A Smart Operations approach can drive substantial acceleration of these core New Product Introduction processes by leveraging agentified analytical capabilities that in turn depend on data acquisition/preparation agents needed to characterize materials, production processes, and other core R&D activities.
Supply Chain Complexity
Another critical element challenging modern businesses is the sharp increase in Supply Chain complexity. Collectively Supply Chains have become longer and more global and as a result more subject to international trade regimes, tariffs, geo-political risks, pandemic-induced disruptions, and a host of other challenges. To successfully respond to these challenges, Smart Operations can dramatically assist businesses both tactically and strategically. In short time horizons, classic demand/supply planning and execution activities can manage around sudden disruptions by automatically triggering trade-off analyses that re-optimize across a revised set of possible responses. For longer term planning, Smart Operations principles can be used that leverage increased prediction accuracy, supplier capability projections, and other derived insights to play out how markets are expected to evolve and how best to build out capabilities to serve those changes. Typically these decisions are closely tied to longer term investments like warehouse placement, factory scale-ups, and distribution agreements.
Geopolitical Risks & Trade Policies
Another dynamic that has caused considerable disruption to business activities is recent geopolitical developments, which cause rapid and frequently unpredictable changes to the flows of trade and commerce. Wars, unchecked immigration, and protectionist trade policies can quickly disrupt the carefully designed plans and networks through which a business executes its strategic plans. In such scenarios, businesses ideally will simulate the causes and effects of possible actions and design products and services, supply chains, partnerships, and other components of their business plan to maximize resiliency. Smart Operations capabilities include sophisticated modeling of operating environments, trade policy scenarios, and governmental actions that can help businesses effectively explore their solution space for both short- and long-term decision making. Developing a priori simulations that require sophisticated modeling of events and implications will help create models that can be continuously refined and then introduced into core Smart Ops production environments to help improve recommendations generated by the system.
Access to Natural & Man-Made Resources / ESG Topics
Governments and NGOs are paying considerable attention to environmental and social issues that have broad impact on businesses and citizens alike. Energy consumption (driven by rapidly increasing AI deployment), clean water, clean air, and income inequality are just a few of the top issues gaining substantial attention. Ideally, businesses need a flexible framework within which they can activate chosen strategies and guidelines that are aligned to their environmental and social principles. Setting those defined guidelines as constraints for decision-making ensures compliance because the business optimization analyses will always serve constraints first in crafting the best possible business outcome. By actively modeling such environmental and social constraints and making them a part of the problem formulation, businesses using Smart Operations approaches can directly and visibly align their operations to their stated ESG goals. Internally there are also opportunities to model, for example, the compute-driven consumption of energy and water in the datacenters that power Smart Operations execution. It makes sense that, just as we model constraints on business activities, we can also model constraints in our compute environments to meet such ESG objectives.
Sampling of Opportunities
On the flip side of challenges to business execution there are also emerging opportunities that business architects can take advantage of as they expand and extend their businesses. Here are a few and how Smart Operations can assist in deploying intelligent strategies to take advantage of each.
Sovereign Interests
A recent development that presents interesting opportunities for business architects is an increased awareness on the part of individual countries of the role they do or could play in the conduct of international commerce. Establishing sovereign wealth funds who invest in local or foreign opportunities, creation of free trade zones, creation of low tax opportunities or investment tax credits, and similar strategies are helping countries build sustainable economic and social growth vehicles for their citizens. By evaluating these trends, it may be advantageous for businesses to identify, model, and simulate various country specific situations which allows the Smart Ops solution to determine the most advantageous ways to build out and scale their firms. By dynamically trading off alternatives and optimizing across the set of feasible solutions, we can evaluate a multitude of different combinations virtually and quickly. As we saw earlier, initial simulations can help identify and validate opportunities with a later introduction of such models into the production environment where these models determine and orchestrate specific actionable recommendations.
AI/Technology
It is likely obvious and in fact the heart of this paper that AI and more broadly digital technology can and will play a major role in helping business identify and activate the best strategies and tactics to deliver on business goals. The emergence of sophisticated data capture, modeling, prediction, simulation, and optimization techniques and platforms is unlocking entirely new ways for business leaders and process owners to conceive, build, and operate a business. It is of course incumbent on these leaders to set robust vision, empower individuals, provide resources, and hold people accountable that will enable success from the deployment of new technology. As opposed to two decades ago, the emerging capabilities to process fantastically large datasets, extract useful insights from that data, model the real-world around us, instantly communicate around the world, engage humans deeply with the technology environment, and measure with precision the results of actions taken present a compelling opportunity for technology to fulfill its potential as a value-creation lever.
Access to Diverse, Deep, and Valuable Data
A final new opportunity worth mentioning is the unprecedented access to data of all types, volumes, and locations. As discussed earlier, the emergence of diverse and widely deployed sensors able to collect a wide range of data is helping us better understand our world and the interactions between diverse actors. Having increasingly timely, granular, and valid datasets dramatically improves the confidence we have in the models we rely on to characterize the behaviors or intentions of people, machines, systems, governments, nature, etc. Our ability to capture and process this data has created completely new levels of understanding of our operating environment, and we see a constant stream of new data sources being introduced. We must of course carefully weigh the costs of acquiring and using these new sources. Fortunately, our optimization-based approach within Smart Operations provides a direct framework for evaluating how a revised and augmented set of models can produce ever more valuable outcomes, allowing us to confirm that incremental value exceeds incremental cost.
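The incremental-value-versus-incremental-cost test can be expressed very simply. The candidate data sources and back-tested value figures below are hypothetical placeholders for outputs of a real evaluation pipeline.

```python
# Simple sketch of the adoption test: take on a new data source only if
# the value gain from the augmented models exceeds the acquisition cost.
# All candidate names and figures are invented for illustration.

def evaluate_data_source(baseline_value, augmented_value, acquisition_cost):
    """Return (incremental_value, adopt_decision) for a candidate source."""
    incremental = augmented_value - baseline_value
    return incremental, incremental > acquisition_cost

# (baseline value, value with the new feed, cost to acquire the feed)
candidates = {
    "weather_feed":   (500_000, 540_000, 15_000),
    "social_signals": (500_000, 505_000, 20_000),
}

decisions = {}
for name, (base, aug, cost) in candidates.items():
    gain, adopt = evaluate_data_source(base, aug, cost)
    decisions[name] = adopt
    print(f"{name}: gain={gain}, adopt={adopt}")
```

In practice the baseline and augmented values would come from back-testing the optimizer with and without each feed, making the adoption decision a direct output of the same Smart Operations machinery.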
4. Core Transformation Strategies
With the context of increasingly important business environment considerations established, we must squarely face the challenges of designing and deploying a transformative strategy that is embraced and accelerated across the business. As IT has painfully learned over decades of experience, great technology will never reach its potential unless we position it strategically with senior leaders, build a sound business case for the financial community, design a change management strategy that engages every level of the organization, construct a highly credible execution plan recognizing internal capabilities and deficiencies, and effectively communicate with stakeholders throughout. To that end, let’s explore a methodology that can be used to develop a successful transformation roadmap for the scope and scale of a Smart Operations transformation.
Specify Financial Objectives
Clearly codifying the financial objectives for the firm is a critical first step in any transformation. Here we are using market-based outcomes to specify the level of performance that investors and senior leaders will find acceptable. The selection of targets, including both type and magnitude, will establish important boundaries and areas of focus for the subsequent design and intensity of investment initiatives that will be required for success. Most often these are captured as revenue, operating margin, or working capital, but increasingly other metrics are also being used to indicate capital efficiency, environmental impact, or other indicators of success. It is also crucial to indicate the timeframe for achievement of each goal to ensure required levels of intensity are well understood. Of note, these objectives as well as others throughout the business may require iteration as we examine the fundamental opportunities and constraints the business may encounter. Ultimately having clear targets as a measure of success will help establish proper accountability and a shared commitment to achieving the desired outcomes.
Map Business Strategy
Once clear financial objectives have been codified, it is necessary to develop and pressure test relevant strategies that will be employed to deliver the targeted outcomes. Typically, these strategies are a combination of financial (how we will secure investments), operational (where we will make product), organizational (how many employees versus contractors we will hire), and so on. These strategies begin to decompose targeted financial outcomes into actionable themes to be undertaken by various leaders across the organization. It is worth noting that even at this stage the business architect has the opportunity to begin shaping an understanding of the core capabilities and their respective measures of excellence that will be required to succeed. It is also beneficial at this stage to engage a wide range of experts and experience to evaluate, question, iterate, and improve these core strategies to ensure their robustness in real-life application.
Core Competencies
Once core strategies are defined and validated the business architect and others must catalog the core competencies and level of proficiency required to successfully deliver the requisite strategies. This is often a very difficult exercise as firms frequently lack a sound methodology to self-assess each competency, but the exercise helps a firm honestly baseline where and how it wins versus simply being a qualified contender. Core competencies are usually higher-level constructs (e.g., curating customer experience, providing market-leading innovation), but definitively capture the essence of who the firm is, how it is perceived in the market, and what it can rely on to differentiate itself. Having a good grasp of the core competencies required to power the chosen strategies provides clarity of the key themes around which follow-on improvement activities can be identified and pursued.
Map existing Application & Analytics Landscape
Taking a turn toward the more technical elements of the transformation, a key step after core competency assessment is to map and profile the existing Application and Analytics landscape in the company. As the repository of the firm’s digital capabilities, this landscape tells us how well we can currently collect and process data, support intelligent decision-making, and execute prescribed actions. There may well be remediation activities in any of these areas that are a precursor to more sophisticated pursuits in the march toward a smart operation. Any such improvements will necessarily have to be catalogued, funded, assigned, and executed in turn to ensure readiness of that capability to contribute to higher levels of achievement.
Map/Cleanse Enterprise Data
One of the newest and most challenging elements of a Smart Operations transformation is the mapping, cleansing, and overall preparation of Enterprise data. With the exception of a few forward-thinking businesses, most companies have not actively and carefully managed and governed their data over the last decade. Combining a lack of attention to governance with an explosion in data generation and capture has created a huge headache and barrier for companies looking to utilize the new AI-inspired analytical techniques. Competing databases that have alternate definitions of product, customer, defect, order, etc. all conspire to confuse and confound attempts at robust analytical and modeling pursuits. Such competing definitions will ultimately degrade efforts to fully leverage such data in the creation of high value information, knowledge, and insights. Many firms even set aside or delay the start of their transformation efforts until they get their data in order.
Within the context of a Smart Operations transformation, it is possible to focus on an initial slice of the firm’s total data holdings as long as it is clear that everything relevant to the initial improvement area is in the scope of the initial data cleansing. It is crucial that a data management plan is built holistically and target constructs for clean data structures are established early on and rigorously maintained thereafter.
Baseline Organizational and People Capabilities
As a deeper dive into the core competencies discussed above, it is now necessary to profile the individual capabilities from which higher-level competencies are composed. Here we are drilling into the details of business processes, people and skills, business partners, and others to capture and document their relative functionality, maturity, capacity, and scalability. These capabilities serve as the building blocks of a business and are often improvable components of the overall transformation plan. On the journey from financial outcomes to strategies to competencies to capabilities, we begin to develop a mapping of needs and the target levels of performance versus what we currently have in place. Fortunately, there is a rich set of capability building methods available, ranging from people training to process assessment and improvement to system deployment. It all comes down to improving the next most important thing, in an aligned and cost-effective fashion, and ensuring those improvements are fully adopted and leveraged in the context of higher-level requirements.
Business Process Design & Improvement
As we complete the decomposition of what we want and what we have, we must transition to improving or building capabilities that enable us to achieve higher level strategies and outcomes. As discussed earlier, we define the business process as our core fundamental building block of business activity and capability. Using the approach outlined earlier we need to establish a definitive understanding of how our business processes work, their ultimate capability (entitlement), and their current performance levels. Assessing those elements tells us if we can improve or must re-design our way to acceptable levels of performance. Similar to the discussion in the Data section, it is acceptable to initially focus on one or perhaps a small number of important processes to improve. That is even advisable in the early stages of a broad-based transformation such as Smart Operations to ensure the team stays focused and accountable for delivering defined improvements quickly and with a suitable ROI.
As this step progresses, we become increasingly equipped to lay out the various people, process, and technology improvements that are required to progress our capability to the target level of performance.
Specify a Process Improvement Plan
With a clearer understanding of the process improvements required to attain a target level of performance, we can develop a specific plan to smartly attack and sequentially improve the elements of the process that are underperforming. Using the techniques described earlier we profile tasks by performance and complexity, prioritize and categorize improvement actions and document the composition of the specific improvement plan. We can then launch process improvement actions natively or dive into lower-level elements of each task as covered in the next two sections.
Specify a People Improvement Plan
Frequently we see cases where new or newly assigned employees are ill-equipped for success in a given role. This is often caused by a lack of training, knowledge transfer, suitable tools, or a number of other root causes. As we evaluate tactics to improve various tasks we should carefully assess if training or other human-centric methods are the most suitable avenue to drive the needed improvement. As we profile across multiple tasks, we may see key themes we could address at scale (such as a group training class) or realize individual attention is a better choice. Carefully collecting, aggregating, analyzing, and preparing such an improvement plan will pay dividends both in terms of process performance but also in terms of personal commitment to success, productivity, and loyalty to the company.
Specify a Technology Improvement Plan
Next, and in a manner similar to the People improvement plan, we must develop a technology improvement plan. Drawing again from an earlier section, the use of core automation techniques and/or more sophisticated agents and agentic technologies provide a very rich set of options for the process designer to leverage as they balance cost, scalability, reliability, adaptability, etc. in pursuit of target level performance. Fortunately, these technologies are improving both in performance and price at a rapid rate which provides tremendous flexibility for the designer. The earlier section lays out a multi-stage process for transforming key processes to very high levels of performance by using increasingly sophisticated techniques.
Design / Execute a robust Communications Plan
Finally, it would be a mistake to not discuss the importance of a robust Communications Plan regarding the intent, timing, approach, expectations, and results of our transformation efforts with the entire organization. Nothing will derail a strategic initiative such as this quicker than for information gaps to exist across the organization. Without clear and honest communications shared throughout the organization, you will soon see fissures forming: front-line employees will see this program as a mass layoff strategy, technologists will see this as a chance to run a big tech program (or a takeover by some outside consulting firm), and so on. Nothing beats the CEO or Transformation Leader delivering a series of all-hands meetings where he/she delivers a clear accounting of what, why, when, how and so forth to the organization including a commitment to come back and report on progress, lessons learned, success stories, failures and corrections, etc. Being honest with everyone will create the best chance of success and promote robust and badly needed engagement throughout the organization.
5. Key Constituencies
I see five key constituencies, each playing an integral role in conceiving, designing, building, deploying, and adopting this vision, and each has specific contributions that will be needed to accomplish this transformation. Each in turn will also receive specific benefits that reinforce its willingness to contribute actively toward reaching target performance. Here is a summary of each constituency, its benefits, and its contributions. Note that there is an added Private Equity twist to this accounting to incorporate the added players involved with companies in which Private Equity has invested.
Senior Business Leadership
Senior leadership of a company is ultimately tasked with setting vision, developing strategies, allocating resources and being accountable for delivering the target financial outcomes of the firm. As they perform these various duties, they are constantly scanning the horizon for emerging trends, new competitors, paradigm shifts in their industry, and a host of other strategic influences. What many senior leadership teams seek are strategies and investments that build capabilities which have the flexibility and adaptability to adjust to the constant strategic and tactical course corrections needed to maintain competitiveness without wholesale “re-investment.” As such, senior leaders who can clearly convey their vision and strategic intent to those who will build and operate the business will create a much higher chance of long-term success because the foundation of the firm will have been built with these principles clearly in mind.
For this very reason, senior leaders will play a critical role in setting the context for a Smart Operations style transformation. Clearly laying out the company’s vision and strategies will provide the Smart Ops architects with critical insights into how best to partition key business capabilities into adaptable, flexible, and scalable constructs that can be more easily adapted, re-configured, and re-deployed to implement the ongoing adjustments needed to respond to changing conditions.
Senior leadership must also play a leading role in creating the right sponsorship, governance, and communications for such a transformational journey. Establishing clear expectations, rules of the road, performance measurement guidelines, etc. will be crucial to ensure the right feedback signals reach senior leaders and enable sustained support for the program.
Finally, senior leadership will be asked to provide the right financial and people resources to lead and deliver the transformation program. Selecting the right sponsor is a critical step in the transformation journey. Naming a leader who balances strong communication skills, program leadership discipline, and motivational practices, and who possesses the instincts to protect a team during challenging periods, may be one of the most critical elements of success.
The return on investment for the senior leadership team is a business built ready for the challenges ahead. The ability to rapidly re-configure elements of the business and operating model, dynamically respond to disruptions in Supply Chain, Staffing, and Competitive actions, and have truly deep insights into the inner workings of key business activities will provide a truly robust, differentiated business that is built to last.
Deal Team & Investment Professionals
In scenarios where a Private Equity or other Business Ownership structure is in place, we must anticipate and embrace the role of these owners and their expectation for improvements in Enterprise Value. Particularly in the case of Private Equity, we must also anticipate the dynamics around the target holding period and ensure our transformation strategy is accretive to and not dilutive of the core selling proposition for the company. As such it is important to engage the Deal Team upfront and architect a transformation strategy that is relatively immune to the potential early or late sale of the company. This in turn requires a thoughtful strategic transformation journey map that at any point can be successfully marketed to a potential buyer of the business.
As discussed in the Smart Ops section, we foresee the potential for a compelling telling of the transformation story at each step of the journey. Even if an early sale is contemplated, the ability to tell a cohesive story about a process-based, sequentially sensible improvement strategy with incremental investments and returns is an asset. Characterizing progress to date, specific initiatives in flight, and next steps, all with strong alignment to core strategies and ultimately financial outcomes, should be seen as a benefit and not a risk by potential buyers.
At the same time, a transformative journey such as Smart Operations will require engagement, patience, potentially some additional capital, and certainly sustained commitment and encouragement from the Deal Team. For a management team, knowing they have a committed Deal Team behind them will inspire forward-thinking, intelligent risk-taking, and a strong desire to deliver.
Functional Leaders & Operating Partners
Functional Leaders (and Operating Partners, if again Private Equity is involved) are among the most critical members of the transformation team. In most situations these leaders are the critical linkage from Senior executives, who set strategy and allocate resources, to the front-line employees who execute the business daily. Along the way these functional leaders are responsible for the design and delivery of the core competencies of the business. They represent vast experience and expertise, are responsible for talent development, and are specifically held accountable for results in their operating areas.
In the context of a Smart Operations transformation, these functional leaders will be a linchpin of success at the very core of the program. They will serve as the first line translators of targeted financial outcomes to robust business strategies. Next, they will task their leadership teams to take those strategies further down to core competencies and capabilities from which they will identify required improvements. They will also assign accountable staff to deliver on actions they have reviewed and approved. Given this range of critical responsibilities, the Transformation Program Leadership team is well advised to build and maintain very robust relationships with this core group of individuals. It is not at all uncommon for a leading member of this group to specifically be selected to lead such a strategic transformation program for the company. Having deep operational and leadership skills in such a position should be seen as a great advantage for the program.
IT Team
As keepers of the technology environment in the business, the IT Team is responsible for ensuring that all deployed technology meets functional needs, maintains operational security, exhibits availability aligned to business operations, and is highly cost effective. That is a tall order for any IT organization, let alone one also attempting to engineer a highly transformative strategy. As the Program unfolds, IT must effectively leverage its internal relationships, existing trust across the business, and eye for how technology best meets key business needs to constantly evaluate and ensure that each phase of the transformative program maintains alignment to the core principles and requirements of the business.
It is likely the IT team already uses a set of Managed Services and Security Services providers (MSPs, MSSPs) in delivering its services to the broader organization. The accumulated experience in selecting, managing, and guiding those organizations must be directly leveraged when adding new Partners to help drive the Smart Operations journey. IT will play a leading role in stitching together a robust ecosystem of Partners who must work together seamlessly to design and build the Smart Ops capabilities and successfully deploy them into the business. As such, IT must identify and assign its strongest leaders to key Program roles and rely on its experience and understanding of key business drivers to help shape and guide those new Partners. If done well, the Smart Operations-inspired transformation can take IT to new levels of recognized contribution to the company's strategic success. Most importantly, it can rightfully position digitally enabled solutions as another key value-creation lever for business leaders as they devise strategy, solve critical business problems, and capture new opportunities.
Technology & Service Providers
Technology and Service providers will be a critical component of the Program team for the Smart Operations journey precisely because they offer the expertise, scale, and capacity to augment key missing capabilities within the business. Ideally these Service Providers help the Program Team accelerate key decisions through their broader market knowledge, reduce deployment errors given their deep implementation history, and create operating leverage for IT by bringing their own developed IP and best practices. The many roles Partners will be needed for are examined in detail below; here we simply summarize what is needed from, and gained by, Partners invited into the Program.
There are three specific competency areas that must be robustly present in the Smart Operations transformation. The first centers on the ability of the Program Team to understand and address critical strategic, financial, and operational goals of the Enterprise. Within this competency we need both an element of management consulting (understanding, advising, and aligning to core strategies) and an element of value estimation and realization. While the former ensures the program team is aligned to and addressing key strategic needs, the latter helps the team properly estimate the magnitude of expected benefits and validate the level of benefits actually realized from the program.
The second major competency area is Program and Change Management. The Partner must excel at structuring and leading a complex, multi-faceted program that balances required technology deployment with an acceptable pace of business and people change and with the business resources and funding made available. The companion discipline, Change Management, supplies the practices and expertise to ensure that changes in both Product / Service delivery and internal business processes are robust and well managed.
The final area of required competency is core technology implementation. It starts with employing robust frameworks and experienced staff to devise the core solution architecture and ensure a balance of flexibility and adaptability across it. The Partner must also excel at platform selection, implementation, and operationalization. Ideally the Partner demonstrates robust anticipation of evolving technology and best positions each new technology introduction to play its designated role.
6. The Future of Work
The potential impact on the nature of work, and in turn on the workforce of an Enterprise, is perhaps one of the most interesting and consequential elements of this paper. With the rise of new AI-based techniques and platforms, there is no shortage of predictions as to exactly what the ultimate impact on Enterprise staff will be. Today we see prognostications ranging from a complete wipe-out of low-level and even some higher-level jobs as AI sophistication grows into reliably performing these roles (especially in the Generative AI arena), to arguments that humans will always play a vital role in contributing the goal-setting, ethical, moral, and related principles that can and should guide organizations. Over the last few decades we have seen the constant march of new technology, and results have generally been mixed when it comes to fully thinking through and properly positioning our people and people-management disciplines to balance the consequences and potential of technology adoption.
The position taken in the Smart Ops vision is a balanced perspective where we segment the duties within an organization and place those responsibilities where they are best served. While the goal of an organization is to succeed in its core mission, the broader individual and societal perspectives merit significant consideration. The following framework attempts to provide a balanced approach to these competing dynamics. Please refer to Figure 2 as we explore these various Work Segments.

Figure 2. The Future of Work – Work Segmentation
Work Segment # 1 – High Volume, Low Complexity Tasks
We profile into this segment activities characterized by broadly routine, high-volume but low-complexity tasks. These tasks have traditionally been handled by people, often in front-line roles, where experience and repetition have built capability. Because these routinized tasks exhibit relatively low variability in inputs, work activities, and outputs, they are traditionally the first targets for classic automation efforts. As robotic process automation and similar technologies have evolved over the last few years, we have seen a much stronger blend of machine-driven execution taking at least the lowest-complexity tasks, with triaged and more difficult cases routed to experienced human staff. As generative AI capabilities have exploded in the last 24 months, we are seeing an expansion of front-line tasks that can be handled by AI Agents trained across countless prior transactions and activities to effectively handle the majority of more complex cases. While it makes sense to retain some human participation in this segment, largely to oversee quality control and to model ongoing improvements in how machines interact with human customers, the bulk of activities in this segment will likely move toward Agents sequenced by core business process flows.
Work Segment # 2 – Low Volume, High Complexity Tasks
The tasks that populate this segment are characterized by high complexity, with widely varying inputs and outputs. These are often analytically intensive activities where the explosion in data volumes has effectively taken the work beyond the capacity of humans to perform adequately. Tasks in this segment include forecasting demand across years of sales history, extracting key themes and insights across volumes of documents, and optimizing performance across multiple operating dimensions. The advent of machine learning and other analytical techniques has dramatically improved the speed and quality of such work, which practically speaking lands squarely in the domain of machine-led processing. As echoed in Segment # 1, the expectation is that a core set of humans will still be involved in this segment's activities, largely to help guide and shape deep analytical modeling and techniques and to identify, source, and engage newly available data sources. Virtually everyone sees these capabilities continuing to expand rapidly, driven by ever more robust analytical and compute capabilities available at consistently lower cost. The strategic role of this segment should grow substantially; it is the cornerstone of Smart Operations, that is, the dynamic and rapid response to ever-changing internal and external conditions, all projected and guided by an optimizing engine.
Work Segment # 3 – Human-centric Vision, Strategy, & Compliance
This element could be considered the overall "glue" of the entire solution. Most AI "experts" still feel that true machine intelligence is far off, but one could argue we may never be completely comfortable turning over the running of a business to a machine. Retaining a critical element of humanity preserves the intangible and often very difficult-to-model behaviors, compassion, and sense of community that people bring to their jobs every day. What if we haven't yet perfectly modeled a machine to always do the "right thing"? Retaining the human element feels not only responsible but practical: someone is always present to make a final determination when our algorithms encounter a situation they have never seen or cannot resolve within their instructed parameters.
In this element we see humans taking the lead role in setting organizational vision, validating acceptable strategies, and determining the social and environmental principles and guidelines that are "right" for the company. Those guidelines and principles need to be properly characterized and structured for "machine consumption," and the more unique or first-of-a-kind situations we must anticipate, the more human reasoning and judgement will be crucial to proper structuring. We also need humans as the ultimate feedback loop to deal with the one-off, never-seen-before conditions that could cause unresolvable conflicts in a machine-only environment.
Caution – Don’t lose your best Future Workforce
One early risk to watch for is a consequence of aggressively deploying early AI primarily to reduce headcount. Many companies with considerable Segment # 1 work correctly see basic process automation as a major contributor to improved speed, cost, and/or quality in core processes. While this is a sensible mid- to longer-term goal, each company should consider the broader context of changes to its organization. One should expect that at least a subset of today's frontline employees may be exactly the types of individuals a company needs in the future to occupy the critical "Human Segment" (# 3) of work described above. Many highly experienced individuals in today's roles have accumulated keen insights, relationships, industry knowledge, Partner understanding, Customer tendencies, etc. that will be invaluable in future efforts to model the environment around the firm. Others possess keen process knowledge and are well positioned to conduct experiments and make adjustments to the optimizing behaviors of our Smart Ops engine. Yet others are seen today as the "soul of the company," i.e., those who model company spirit, brand, and image. They may be the perfect staff to retain and leverage as the firm progresses through its transformation journey.
Section 2: Architecture & Operating Model of Smart Operations
7. Architectural Layers
Given the fast-moving nature of today’s technologies and the constant need for businesses to adapt to changing markets, customers, and other factors, a layered and modular architecture appears best suited to support a Smart Operations transformation. The following is a characterization of the various existing and future layers anticipated in the build-out of the full Smart Ops vision.
Before diving into individual layers, it is useful to create a macro view of today’s common Enterprise systems and how the addition of Smart Ops enablement could be brought to bear. Today, classic Enterprise Architecture encompasses a set of core elements: Applications, Infrastructure, Analytics and Data Management, and often a set of supporting Middleware. Within each of these the core transactional, planning/execution, and analytical capabilities reside, are routinely managed, and occasionally upgraded. What is common in these areas today is a lack of “dynamism,” meaning a programmatic and efficient response to changing customer and market conditions. This is quite reasonable, especially given the classic price-value trade-offs we have seen in the past when considering either upgrading platforms or attempting to encode a more dynamic operating model into existing applications. Generally, it is very costly and time consuming to open up an Enterprise application, perform development, regression test, and deploy the new functionality. This situation works directly against making a business more responsive to constantly changing conditions, strategies, and needs and stifles creativity and new ideas emanating from business leaders and front-line employees alike. IT has been feeling that frustration for a long time and is constantly seeking ways to improve its responsiveness to the innovative urges of its internal and external customers.
Let’s imagine that we draw a bold horizontal line and place that existing Enterprise landscape below that line. To be sure, these Enterprise systems are the backbone of core business activity. Orders are managed, production scheduled, and shipments delivered on a daily basis through these systems. Critical data is collected, managed, and analyzed to understand how well a business performed and to some degree even mined to help us better understand “why” certain things happened. For reference, please refer to Figure 3 as you review these respective architectural layers.

Figure 3. Smart Operations 2.0 – Architectural Layers
The strategic challenge going forward, and the domain of solutions we need to build "above the line," is to find a systemic, sustainable, flexible, and adaptable way to dramatically increase the dynamism in the business. Here we are constantly consuming the many live inputs available to us, interpreting what has happened (and why), predicting what is likely to happen, determining how we could optimally respond to those actual or predicted events, and shaping such a response as a compositional use of the execution capabilities available to us. We need this construct to dynamically respond and constantly adapt to changing circumstances around us as well as to continued changes in our own or our partners' capabilities. This solution must also be cost effective to execute, lest we dilute improved business execution with outsized technical execution costs. Such a construct, smartly architected and operating above the line, would provide a highly leverageable and accessible set of capabilities that would not suffer from the long and costly development and test cycles of classic Enterprise applications. Instead, the predictive and prescriptive response planning for a business would operate largely independent of the monolithic constructs of today's functionally specific applications.
Below the Line
The primary elements below the line were stated earlier: Applications, Infrastructure, Analytics & Data Management, and Middleware. Each is briefly characterized below to ensure a consistent understanding of the functionality assigned or tasked to each in the broader Smart Operations vision.
Applications
The application layer is one of the most consequential layers in any Enterprise architecture. Applications are the primary containers of execution functionality, business rules, execution integrity, and business status management. They not only track virtually all aspects of business activity, but they provide a highly resilient platform for interactions with Customers, Suppliers, and Partners. What Applications excel at is executing highly configured business activities largely according to specified and routinely static protocols. Broadly, and somewhat unfairly, most applications are not dynamically trading off execution alternatives, suggesting new execution paths, or dynamically composing unique flows of activity. Supporting such activities is precisely the opposite of the typical mission statement for an application – predictable functionality at an effective cost with high resiliency. That said, Applications play a primary execution role in the Smart Operations model, as they are tasked with specific actions, often with supporting guidance that lives within their configured behavioral envelope, and they execute that task with high predictability.
In our future model we will rely on the range of capabilities we can call upon to be executed within any given application to accomplish each step of an orchestrated business flow. Going forward we must ensure that we constantly re-evaluate applications in order to maintain an appropriate degree of flexibility and ease of integration such that they can continue to play a role in our ever more sophisticated dynamic execution model.
Infrastructure
Not surprisingly, Infrastructure is the very foundation of how we execute our digital solution. We are buoyed by rapid enhancements in highly effective, large-scale, cloud-based compute and storage and the multitude of options available to technology architects. In this area, we are looking for a few critical features to ensure the integrity, performance, and security of our build-out. First, high degrees of cost-effective scaling are an absolute must. Since much of our core value proposition comes from the processing and use of intelligence, which is in turn based on the collection and processing of data, we must have high-scale and cost-effective resources to enable both of those elements. We need to examine core compute, data transmission, and storage costs, ideally with robust modeling capabilities that let us accurately project overall system execution cost both now and in the future. We also want to ensure a rich set of options in each of those three domains, trading off performance for cost in particular, so that we can increasingly bring optimization principles to running the infrastructure much as we use Smart Ops to run the business. Our ability to seamlessly and cost-effectively migrate between these various options will go a long way toward smoothing out the run-cost increases we expect to encounter as the solution expands in reach and performance.
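The kind of cost projection described above can be sketched as a simple model over the three cost domains (compute, data transmission, storage); the unit rates below are purely illustrative placeholders, not any provider's actual pricing:

```python
def monthly_cost(compute_hours, egress_gb, storage_gb, rates):
    """Project monthly run cost from three illustrative infrastructure drivers."""
    return (compute_hours * rates["compute_per_hr"]
            + egress_gb * rates["egress_per_gb"]
            + storage_gb * rates["storage_per_gb"])

# Hypothetical unit rates -- substitute your provider's real pricing tiers
rates = {"compute_per_hr": 0.12, "egress_per_gb": 0.09, "storage_per_gb": 0.02}
print(round(monthly_cost(compute_hours=2000, egress_gb=500,
                         storage_gb=10000, rates=rates), 2))  # → 485.0
```

Running such a model across candidate configurations is one concrete way to trade performance against cost before, rather than after, a migration.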
Analytics & Data Management
In a "below the line" sense, the existing Analytics and Data Management structures may or may not be well suited to the needs of a Smart Ops design. Conventional data warehouses and query/reporting tools did a solid job in the earlier phases of "intelligence management," namely descriptive and diagnostic tasks, but are less well suited to the high-scale, high-sophistication tasks we envision going forward, particularly predictive and prescriptive processing. In these latter cases we are seamlessly integrating diverse data types and performing highly sophisticated pattern matching, clustering, and other predictive tasks in addition to compute-intensive analytical tasks such as simulations and optimizations. We will discuss "above the line" functionality shortly, but a careful examination and profiling of existing capabilities in this area should be mapped for reuse in the future model. Of note, some businesses have already pursued modern data management constructs such as "Data Lakehouses," which provide high-scale options for managing and processing both structured and unstructured data. There is simply no harm, and in fact substantial benefit, in not reinventing capabilities that can serve our longer-term needs.
Middleware
Highly functioning Middleware came along a bit later in time versus the original timeframe of Smart Operations v1.0 and has served as a key integration and process management layer for early adopters. While there was considerable early momentum in this space, that excitement faded a bit as more application vendors began to build competing functionality into their core platforms. With ongoing consolidations in application layer diversity, some of the core application integration benefits started to soften. Depending on the installed platform and its native capabilities, there may be substantial reuse potential for the critical intelligent business process management activities discussed earlier. When assessing the applicability of any existing platforms please consult the sections below (“Above the Line” sections) for key considerations to apply when evaluating the potential to reuse your existing platform(s).
Above the Line
The mindset above the line is highly complementary to the “below the line” elements yet architected for speed of execution, flexibility for change, and dynamic adaptability to changing business conditions. It must present a set of capabilities that are highly accessible, granular, transparent, and scalable while maintaining outstanding reliability and low cost of change. Each layer will be characterized below including key elements of functionality and enablement.
Agent / Agentic Enablement
In the Agent/Agentic enablement layer we represent the critical enablers needed to power the new generation of AI capabilities in our solution.
- Most obviously we will incorporate access to a secured instance of one or more Large Language Models (LLMs) which provide a broad-based capability to generate content in response to general purpose prompts. LLMs bring an exceptionally broad knowledge base in addition to skills on refining written content and customizing toward specific guidance.
- Small Language Models (SLMs) are a relatively new phenomenon but hold tremendous promise given their typically far narrower and deeper understanding of a specific subject area. SLMs are being developed for use-case-specific applications and are often trained on a company's own trusted content. Given their far narrower scope, SLMs are much more accessible in terms of training time, cost, and resources. With rapid advances in cost-effective, high-scale computing, training SLMs is, or soon will be, highly accessible even to modestly sized companies.
- Vector databases capture and structure document-based content, typically as numerical embeddings, so that targeted access for analytical purposes is significantly faster. Such databases accelerate content-specific retrieval through similarity indexes and related techniques.
- Machine Learning models are an extremely valuable yet often overlooked tool for processing large datasets and yielding key insights and patterns that humans cannot readily recognize. Today, ML models are routinely used for prediction and for classifying behaviors based on various attributes.
- An Agent Lifecycle Management platform provides the development and run time environment for the development, testing, and operations of individual Agents. To fully realize the potential for Agent use, we need the ability to specify, develop/assemble, test, and continuously improve the speed, cost, and quality of Agent execution. Importantly we also need capabilities to manage a library of Agents, whether locally built and maintained or available through an external marketplace. Further, we will want Agent usage and performance profiling capabilities to assess their individual efficacy, as well as lifecycle cost tracking and other macro management features to help us determine when to upgrade, retire, or otherwise limit usage.
- An Agentic Management platform provides higher level constructs that effectively manage the performance of a collection of Agents. To do so it must allow various “contexts” to be constructed around specific Agents as well as provide communications, data sharing/context management, memory, and other enabling elements for true collective behavior to take place. This Agentic Management platform should support emerging protocols like Model Context Protocol (MCP) to maximize interoperability across diverse Agents as well as have hooks into the infrastructure management capabilities of hyperscalers providing cloud hosting for Agentic operations. Finally, the Agentic Management platform should also offer robust instrumentation of both Agent and Agentic flow activities such that valuable post-processing can provide operating insights for further improvement.
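Similarly, the usage and cost profiling an Agent Lifecycle Management platform would provide can be sketched in a few lines. The registry interface, agent name, and per-call costs here are illustrative assumptions, not any real product's API:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    version: str
    calls: int = 0
    total_cost: float = 0.0

class AgentRegistry:
    """Hypothetical library of Agents with simple usage/cost profiling."""
    def __init__(self):
        self._agents = {}

    def register(self, name, version):
        self._agents[name] = AgentRecord(name, version)

    def record_call(self, name, cost):
        rec = self._agents[name]
        rec.calls += 1
        rec.total_cost += cost

    def avg_cost(self, name):
        rec = self._agents[name]
        return rec.total_cost / rec.calls if rec.calls else 0.0

reg = AgentRegistry()
reg.register("invoice_triage", "1.2")
reg.record_call("invoice_triage", 0.04)
reg.record_call("invoice_triage", 0.06)
print(round(reg.avg_cost("invoice_triage"), 4))  # → 0.05
```

Profiles like these are what would inform the upgrade, retire, or limit-usage decisions described above.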
Intelligent Business Process Management
Since we have grounded our transformation strategy on the concept of a business process, it makes sense to characterize the role of an iBPM platform and make a concrete distinction between it and the Agentic workflows described below. For a classic iBPM platform we envision a more traditional, relatively static, rules-driven execution model where discrete process steps or "tasks" are carried out in a programmatic order guided by the configured workflow. This approach has considerable merit: it is highly recognizable and enjoys broad awareness and familiarity among a wide range of business staff. It also excels at instrumenting execution, allowing us to capture consistent data about execution times and outcomes, which can be a valuable source of ideas for continuous improvement. This is often in sharp contrast to emerging Agentic workflows, whose composition is much more dynamic and driven by a goal-seeking approach. These are just a few of the many considerations to balance between the two platforms as you build out your solution architecture.
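The classic iBPM style, a fixed sequence of configured steps with each one instrumented, can be sketched minimally as follows; the step names and payload shape are hypothetical:

```python
import time

def run_workflow(steps, payload):
    """Execute named steps in a fixed, configured order, timing each one."""
    timings = {}
    for name, fn in steps:
        start = time.perf_counter()
        payload = fn(payload)
        timings[name] = time.perf_counter() - start
    return payload, timings

# A toy two-step order flow; real iBPM rules would be far richer
steps = [
    ("validate", lambda o: {**o, "valid": o["amount"] > 0}),
    ("approve",  lambda o: {**o, "approved": o["valid"] and o["amount"] < 500}),
]
order, timings = run_workflow(steps, {"amount": 120})
print(order["approved"], sorted(timings))  # → True ['approve', 'validate']
```

The captured timings are exactly the kind of consistent execution data the text identifies as fuel for continuous improvement; an Agentic workflow, by contrast, would choose and order its steps dynamically.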
“Classic” Process Automation
As characterized earlier “classic” process automation techniques serve as a first technique to deploy against relatively low complexity process tasks. By codifying the relatively small variation of inputs and processing rules, a process designer can quickly achieve solid speed, cost, and/or quality improvements in targeted parts of an overall business process. This is effectively the first tier of the specific solution we will build and serves the very important role of helping to establish critical trust and credibility for the project team. Within this activity we also want to instrument execution so we specifically collect execution performance data that we can later use to optimize the process itself.
Looking forward, the expectation is that classically automated process steps will be “wrapped” in order to make them look and feel, at least externally, just like any other agent. That will set them up to be accessible for later agentic contexts and freely participate in cross-Agent communications and optimizations.
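A minimal sketch of such wrapping, assuming a hypothetical common Agent interface and a stand-in legacy routine, might look like:

```python
class Agent:
    """Assumed common interface all agents expose in this sketch."""
    def run(self, task: dict) -> dict:
        raise NotImplementedError

def legacy_invoice_check(amount):
    # Stand-in for an existing, rule-based classic automation routine
    return "ok" if amount <= 1000 else "review"

class LegacyAutomationAgent(Agent):
    """Wraps the classic routine so it looks like any other agent externally."""
    def run(self, task):
        return {"result": legacy_invoice_check(task["amount"])}

print(LegacyAutomationAgent().run({"amount": 250}))  # → {'result': 'ok'}
```

Once wrapped this way, the legacy step can participate in agentic contexts and cross-Agent communications without any change to its internals.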
Agent-based Execution Management
While the Agent Lifecycle Management platform just discussed provides the execution environment for Agent build and run, this layer of the architecture represents the actual design and delivered functionality of each specified agent. Here the process designer/solution architect specifies the core attributes of the Agent’s role in task success as well as interoperability requirements (specifying inputs and outputs) used by other Agents. As described earlier, Agents are introduced anywhere we have a task that is beyond the scale, speed, and quality of what a human can handle and the constructs in this layer represent the mechanics by which those work elements will be satisfied. We actively separate Agents from the higher-level Agentic workflow precisely because we want the Agentic layer to have the flexibility to pick and choose between diverse/competing Agents that exhibit varying price, performance, quality, etc. features such that the optimal Agent can be smoothly selected for any given instance of that Process Task.
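The pick-and-choose behavior described above can be sketched as a simple feasibility-and-cost selection; the candidate agents and their price/quality profiles are illustrative assumptions:

```python
# Competing agents for one process task, with illustrative profiles
candidates = [
    {"name": "fast_cheap",    "cost": 0.01, "quality": 0.80, "latency_s": 1},
    {"name": "premium_model", "cost": 0.10, "quality": 0.97, "latency_s": 4},
]

def select_agent(candidates, min_quality, budget):
    # Pick the cheapest agent meeting the quality floor and per-call budget
    feasible = [a for a in candidates
                if a["quality"] >= min_quality and a["cost"] <= budget]
    return min(feasible, key=lambda a: a["cost"])["name"] if feasible else None

print(select_agent(candidates, min_quality=0.95, budget=0.50))  # → premium_model
print(select_agent(candidates, min_quality=0.75, budget=0.05))  # → fast_cheap
```

Keeping this selection logic in the Agentic layer, rather than inside any one Agent, is what lets the optimal Agent be swapped in per task instance.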
Agentic Context and Communications Execution Management
The final and overarching layer in the solution architecture is where we encode, oversee, and continuously improve the Agentic activities. We use an Agentic Management platform to specify the creation of “context” (scope), facilitate communications, maintain “memory” and a host of other specific solution elements which allow cooperative goal-seeking among the designated agents. It is precisely this layer where highest level business goals are encoded or prompted into the solution to guide the behaviors necessary to deliver targeted outcomes. Key features of this layer include specification of the targeted outcome(s), constraints, decision variables, and other critical inputs such as execution cycle time and cost, i.e. any important factors that should be analyzed when devising the best possible action plan to deliver the best possible outcome. We also need to instrument activities in this layer such that post processing could be performed to identify possible avenues of further improvement in execution dynamics.
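Encoding a goal plus constraints and evaluating candidate action plans can be sketched as follows; the plan names, attributes, and limits are hypothetical placeholders for what the Agentic layer would actually manage:

```python
def best_plan(plans, goal_key, constraints):
    """Choose the plan maximizing the goal metric subject to hard constraints."""
    feasible = [p for p in plans
                if all(p[k] <= limit for k, limit in constraints.items())]
    return max(feasible, key=lambda p: p[goal_key])["name"] if feasible else None

# Candidate action plans with a targeted outcome (margin) and decision factors
plans = [
    {"name": "expedite", "margin": 0.18, "cycle_days": 2, "cost": 900},
    {"name": "standard", "margin": 0.22, "cycle_days": 7, "cost": 400},
]
# Goal: maximize margin; constraints: ceilings on cycle time and execution cost
print(best_plan(plans, "margin", {"cycle_days": 5, "cost": 1000}))  # → expedite
```

In a real deployment the "plans" would be generated dynamically by cooperating agents, but the shape of the decision, an objective plus constraints over decision variables, is the same.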
8. Solution Operating Model
To help us understand the complex dynamics and use of a Smart Ops solution, it is useful to construct a visual representation of the important dimensions and overall operating model for how we would use a Smart Operations solution to guide business performance.
“Sphere of Optionality”
In a modern business we have considerable complexity in every direction – financial, operational, regulatory, organizational, and environmental to name just a few. These important dimensions require us to dynamically think, trade-off, and act in the best interests of the firm, balancing those competing interests. I propose the use of a “sphere” construct to help us organize the multiple dimensions within which we must decide and act. Figure 4 depicts three dimensions to consider as we design the initial solution and later embrace added layers of complexity.

Figure 4. “Sphere of Optionality” & “Point of Optimality”
- Dimension # 1 – Time
- Along the first axis of optionality is time. Starting at the origin of our imaginary 3-D graph, let’s think of the past, present, and future. In its simplest form, we can imagine running a Smart Operations-based solution strictly on present data and situational awareness. To be sure, that approach would be highly limited in efficacy, as it would ignore all past results and learnings we could leverage for predictability as well as forward-projecting anticipated situations we might like to prepare for. On the flip side, it would be fast and simple given the limited scope. As we open the possibility of capturing, modeling, and learning from past data (moving left on the axis) and looking ahead via predictions for even a modest reach into the future (moving right on the axis), we greatly expand the usefulness and applicability in a modern business context. To be sure, we must anticipate the added computational load from using ever more historical data for modeling purposes and the inherent uncertainties of forward-looking predictions, but we can be comforted in knowing sophisticated techniques can be brought to bear to optimize our results at both ends of this continuum.
- Dimension # 2 – Cross-Functionality
- In early versions of our solution, we will likely focus on one or perhaps a small number of business processes or functional areas that are ripe for early improvement endeavors. As we successfully improve results in that relatively limited set of business activities, we will be immediately tempted not only to expand our scope to other functional areas but in fact to start intersecting multiple functional areas with one another. Reiterating the earlier principle of achieving a global optimum over a sum of local optima, we must aspire to connect Marketing to Sales to Product to Supply Chain to Manufacturing to Procurement to Services to Financials to ensure the overall best trade-offs are made comprehensively and cooperatively. Ultimately, we wish to optimize across the entire business simultaneously for the best possible overall outcome. Expansion along this dimension again drives considerable computational load, and we should take care to balance purist attitudes of full completeness of analysis against the actual ROI of operating at such scale.
- Dimension # 3 – Internal / External
- In this dimension, we are concerned with sources of data, optimization analyses, and orchestration with respect to both the internal (inside the four walls of the company) and the external: the customers, suppliers, regulators, and environmental factors that also affect our business. As we first concentrate on internal elements, we know we can more fully trust the data, models, actors, and systems we have built, hired, and run. As with the other dimensions, that more limited scope tends to constrain the size of the modeling and analyses we must undertake, which in turn accelerates answer generation and lowers costs. In exchange, we are ignoring potentially critical factors that influence our business results whether we model them or not. As we expand along this dimension, we are increasingly enriching our solution with new value-added data sources, increased operating capabilities, and more insights that advantage our decision-making. This of course drives the need for expanded compute resources and potentially longer cycle times for running the highly complex optimization protocols. The proposition is that carefully selecting high value-add external elements will more than offset the added costs and cycle times.
Point of Optimality
With the sphere concept established as representing our “solution space,” one can imagine our Smart Operations model navigating throughout that space in search of a “point of optimality.” This point represents where, after considering all possible solution alternatives, the best possible combination of decisions to maximize the targeted business outcome is found. Intuitively, the sphere/point construct is meant to help business architects envision possible advantages of expanding one or more dimensions in order to disproportionately increase the delivered business value in excess of the added costs of increasing optionality. As a theoretical concept, one could imagine tracing the evolution of where successive points of optimality are found over time to build intuition as to which dimensionality expansions typically yield the most valuable improvements in business outcomes. Building such intuition may help the business architect best allocate their time to developing targeted expansions in the solution space that yield the highest ROI increases.
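The search for a point of optimality across the three dimensions can be sketched as a toy exhaustive search, where each dimension's candidate scopes carry an estimated value and compute cost. Every number below is invented purely for illustration.

```python
from itertools import product

# Illustrative (value, cost) pairs for each dimension's candidate scope.
time_scope = {"present_only": (10, 1), "plus_history": (25, 4), "plus_forecast": (40, 9)}
func_scope = {"one_function": (10, 1), "two_functions": (22, 3), "end_to_end": (45, 12)}
extern_scope = {"internal_only": (10, 1), "key_suppliers": (18, 3), "full_external": (30, 10)}

def best_expansion():
    """Search the 'sphere' for the scope combination with the best net value."""
    best = None
    for t, f, e in product(time_scope, func_scope, extern_scope):
        value = time_scope[t][0] + func_scope[f][0] + extern_scope[e][0]
        cost = time_scope[t][1] + func_scope[f][1] + extern_scope[e][1]
        net = value - cost
        if best is None or net > best[0]:
            best = (net, (t, f, e))
    return best
```

With these made-up numbers the net-value-maximizing combination happens to be the fullest scope on every axis; with steeper cost curves a narrower scope could win, which is exactly the trade-off the business architect must weigh.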
Re-Optimization Triggering
We must also be cognizant of the dynamics of the solution space and be prepared at any time to re-optimize our solution in response to material changes in it. As described in the sphere discussion, these changes could materialize due to newly acquired historical (older) transactional data, newly sourced external data, changes in a supplier’s capabilities, weather changes, etc. Borrowing from the linear programming paradigm again, we introduce the concept of “slack” around decision variables as a trigger to signal when a re-optimization must be performed. Briefly, the slack concept indicates when specific constraints are fundamentally limiting the determined solution; changes in elements involved in “zero-slack” situations could therefore signal that a revised solution is required. Similarly, we envision instrumenting the key decision variables in our model with sensors that will detect significant variations, compare these against zero-slack ranges, and trigger the master optimizing agent to call for a re-optimization with the now updated inputs.
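A minimal sketch of slack-based triggering, assuming simple "usage ≤ limit" constraints; the constraint encoding and tolerance are illustrative choices, not a prescription.

```python
def constraint_slack(solution, constraints):
    """Slack for each 'usage <= limit' constraint: limit minus current usage."""
    return {name: limit - usage(solution)
            for name, (limit, usage) in constraints.items()}

def needs_reoptimization(solution, constraints, changed_inputs, tolerance=1e-6):
    """Signal a re-optimization when a changed input touches a binding
    (zero-slack) constraint; changes with ample slack are ignored."""
    slack = constraint_slack(solution, constraints)
    binding = {name for name, s in slack.items() if abs(s) <= tolerance}
    return bool(binding & set(changed_inputs))

# Illustrative model: capacity is fully used (binding); budget has slack.
constraints = {
    "capacity": (100.0, lambda s: s["units"]),
    "budget": (600.0, lambda s: 5.0 * s["units"]),
}
solution = {"units": 100.0}
```

Here a sensed change to the binding capacity constraint triggers a re-optimization, while a comparable change to the slack-rich budget constraint does not, which is the filtering behavior described above.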
An important additional consideration is how frequently we are willing to re-optimize and hence re-orchestrate the activities of action-takers. We must be especially sensitive when those action-takers are human, as continual re-direction of assigned actions will tend to frustrate humans and could lead to distrust and eventually a lack of adoption. Part of our modeling, which will likely evolve over time, is an understanding of practical time boundaries within which we should not allow a re-optimization / re-orchestration to take place unless the new solution does not require changes to the human components of the new execution plan. Alternatively, if the only impacted elements of our newly optimized solution are systems and devices, we wouldn’t hesitate to re-orchestrate continuously.
Lastly, we must also contemplate the execution dynamics of tasked activities that are directed by our optimizer. It will not make sense, in many circumstances, to attempt a re-orchestration that would require key in-flight activities to be terminated or re-constructed. Hence, we need to understand the atomic nature of some orchestrated activities to ensure their completeness before evaluating possibly disruptive changes. All of these considerations are additional layers of sophistication that we can use to augment the realism of our models and actions, and which ultimately ensure we are consistently and smartly developing actions that will reliably execute the business.
It is likely that our optimization strategy will come to rely on an ever more sophisticated set of constraints, built by a constraint generation agent and fed to the optimizer, to ensure the recommended solution lives within those human, time-bounded, and atomic activity constraints. Fortunately, we can follow a very incremental strategy of increasing sophistication while continually evaluating whether adding that next level of intelligence ultimately pays off in continuously improving outcomes.
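A constraint generation agent of the kind described might, at its simplest, emit "freeze" and "complete-before-change" rules for the optimizer. This sketch assumes a hypothetical plan structure and a four-hour human re-direction boundary, both invented for illustration.

```python
import datetime as dt

def generate_reorchestration_constraints(plan, now,
                                         min_human_interval=dt.timedelta(hours=4)):
    """Emit rules the optimizer must respect before re-orchestrating."""
    rules = []
    for task in plan:
        # Protect humans from continual re-direction: freeze any human task
        # that was (re)assigned too recently.
        if task["actor"] == "human" and now - task["last_assigned"] < min_human_interval:
            rules.append(("freeze", task["id"]))
        # Respect atomicity: never interrupt an in-flight atomic activity.
        if task.get("atomic") and task.get("in_flight"):
            rules.append(("complete_before_change", task["id"]))
    return rules

# Illustrative plan: T1 was re-assigned an hour ago, T2 is atomic and in flight.
now = dt.datetime(2025, 1, 1, 12, 0)
plan = [
    {"id": "T1", "actor": "human", "last_assigned": dt.datetime(2025, 1, 1, 11, 0)},
    {"id": "T2", "actor": "system", "atomic": True, "in_flight": True},
    {"id": "T3", "actor": "human", "last_assigned": dt.datetime(2025, 1, 1, 6, 0)},
]
rules = generate_reorchestration_constraints(plan, now)
```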
Section 3: Transformation Programmatics
9. Business Change Management
An important capability to have in place for a transformation of the scale and impact of Smart Operations is a robust Change Management program. Change Control / Change Management are key processes for ensuring that, even in the face of any number of changes, the integrity of a company’s products and services, internal processes, etc. maintains a high degree of continuity, predictability, risk management, and documentation. Controlling change in a predictable way ensures that the right stakeholders, owners, users, etc. are all properly informed of coming change, have the opportunity to weigh in on expected impact, and most importantly have the lead time to effect enabling or responsive changes in their own areas. Two specific types of change are discussed below and should be closely considered when designing the Smart Operations transformation.
Product / Service Change Control
Product and Service change control mechanisms are critical in helping ensure continuity in the delivery of a firm’s products and services. As Customers rely on what a firm delivers, and in fact often only buy from a given company because of the confidence they have in that delivery, we must carefully weigh the impacts, both positive and negative, foreseeable from a Smart Ops-driven change. To execute change control properly, we must design and exercise a process that catalogs and models the potential changes in product or service delivery performance, reliability, and cost as well as the impact on regulatory conformance, legal compliance, and the environment. A Smart Operations project team should consciously partner with existing product/service change management leaders to understand their core processes and ensure direct linkage into the Smart Ops program so that potential risks are properly identified and accounted for. To the degree that the Smart Ops program team can model future performance and demonstrate the “To-Be” product/service delivery mechanics, it will help drive confidence and ultimately success in getting such changes approved.
Business Process Change Control
The internally focused complement to Product/Service change management is the management of internal business process change. Examples of key internal processes include closing the books, approving a new hire, and procuring raw materials through competitive RFPs. These internal processes are the backbone of how work is performed and, as described earlier, represent a foundational element of how we frame the Smart Operations opportunity. In ways similar to how we framed the Product/Service change management situation, we must here consider similar factors including staffing, cost, process performance, and cycle time. As a matter of course these would typically be deeply evaluated as a core input to program decision-making about next/future Smart Operations targeted improvements. From a change management perspective, we must broaden our analysis to ensure we identify all key stakeholders, quantify the impact of the proposed change, and work through the change process and relevant owners to ensure all impacted entities are aware of and prepared for the change.
10. People Change Management
We explored earlier the potential impact on the Enterprise workforce and the opportunities to properly position work across three candidate segments. At the very outset of the Smart Operations transformation, it is highly advantageous to initiate robust thinking and planning of exactly how to engage, develop, and retain staff who will critically populate the key roles in such a new operating model. That will maximize the lead time that may be required to identify, upskill, re-position, and activate staff in entirely new roles while also preparing management on how to evolve the talent identification, development, and performance management techniques for fundamentally different types of work. The suggested actions below can help HR and others involved in people leadership design a program that delivers the needed organizational and individual transitions required for transformation success.
Go-Forward Guidance
Broadly, this set of recommended actions for organizational development leaders will help drive the active planning and activation of the key steps necessary to ensure readiness for a new model of work in an organization. Here is a suggested sequence of actions:
- First, actively consult with business leaders, strategists, and architects to characterize the emerging operating model of the firm inclusive of the critical core competencies that will define success.
- Discuss with technology leaders what skills will be required in the future to design, build, feed, and guide the Smart Operations engine along the lines discussed above.
- Consult with functional leaders on what work they may have already done to characterize the specific competencies and capabilities of their respective workforces. Include in that discussion the collection of those leaders’ views on how the work segmentation discussed above is likely to impact their specific functional work activities.
- Actively project into the future the organizational requirements in terms of organizational design, staffing, and skills on an aggregate basis with perhaps annual projections of anticipated changes in that mix.
- Use the identified understanding of current staff and skills and perform a preliminary mapping of people to roles while also documenting key future skills gaps that must be actively filled.
- Assess that preliminary mapping, identify key themes, and develop one or more proactive strategies that can be activated, perhaps over time, to ensure the development actions are executed in sync with the ongoing broader organizational changes.
- Finally, develop and launch a robust communications plan that shares the strategic intent of the plan with the entire organization.
The hypothesis for such a proactive and candid explanation of the future with the workforce at large is that by signaling specifics about where the organization is headed, what roles/skills will be critical to success, and over what time period this change will occur, you are helping each individual make a conscious decision as to whether that future state is something they want to commit to being a part of. Putting that decision in their hands shows trust and commitment to them and allows for a return signaling of their commitment.
You also need to expect that some individuals may opt out. At the end of the day one can make a strong case that both the organization and the individuals are better off when incentives, purpose, and individual responsibilities are strongly aligned.
11. Program Guiding Principles
When setting out to architect a transformative business strategy such as Smart Operations, it is helpful to codify guiding principles that can be used to help ensure a consistency of purpose and alignment for all program participants. These guiding principles should be shared upfront with both the core program team as well as other staff and partners who participate in the program. Given the breadth of participation and impossibility of program leaders to be deeply involved with every decision at every level, these guiding principles serve as an invisible set of guardrails for program design, planning, and execution.
These principles include:
- Maintain robust and ongoing alignment to both short and long-term core business strategies that will continuously serve as the primary vector being followed to generate differentiated business performance and enterprise value
- All process mapping, design, build, deploy, and continuous improvement activities will be grounded in the core principles of process capability analysis, clarity of operational KPIs, analysis of ongoing performance and trends, and a deep and quantitative understanding of how operational KPIs link to financial and non-financial business outcomes
- Development, Use, and Protection of Enterprise IP must be built-in from the ground up to ensure trust and confidence is built and maintained for all stakeholders
- Intelligent Risk-taking and provable short term value creation must be demonstrated repeatedly while each short-term development should further the long-term mission of breakthrough strategic performance and ensuring sustainable sponsorship and funding across the life of the program
- Program leadership must always be positioned to concisely share the current status, spend, ROI, and business value created to date while also sharing a comprehensive summary of next phase plans, spend, expected timelines, and anticipated net value creation
- Funding for each activity within each phase will target a low initial investment outlay for the business by leveraging vendor subsidies and a Partner commercial engagement model that institutes a risk/gain share model. That model will pay Tech/Service providers based on realized gains in financial performance for a prescribed term post deployment
- Value realized from each completed project/phase will be allocated in three ways: (1) pay Tech/Service Partners for their contributions to realized value on that project; (2) take a portion of the value to the bottom line to provide a measurable and sound ROI for that project’s efforts; and (3) seed subsequent projects in a bootstrap fashion to accelerate/lower barriers for follow-on work
- Throughout the design and execution of each evolutionary phase, decisions should balance short-term gain and long-term positioning to ensure later phases will see accelerating benefits in speed and ROI as they leverage earlier foundational efforts. That approach should further accelerate future projects by enabling increased ROI impact for those activities as a higher share of the spend will be assembling previously built capabilities versus building new ones
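The three-way value allocation principle above can be expressed as a tiny helper. The default shares below are illustrative placeholders; in practice they would be fixed in each Partner agreement.

```python
def allocate_realized_value(value, partner_share=0.25, bottom_line_share=0.50,
                            seed_share=0.25):
    """Split realized project value three ways: pay Partners, book bottom-line
    ROI, and seed the next project. Shares are illustrative defaults."""
    assert abs(partner_share + bottom_line_share + seed_share - 1.0) < 1e-9
    return {
        "partner_payment": value * partner_share,
        "bottom_line": value * bottom_line_share,
        "seed_next_project": value * seed_share,
    }

# Example: allocate $1M of realized value on a completed phase.
split = allocate_realized_value(1_000_000.0)
```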
These guiding principles will help program participants continuously make sound trade-offs and select program strategies and activities that best contribute to longer term success of the organization.
12. Program Operating Model
An often-overlooked element of a successful transformative program is taking the time to specifically design an operating model that recognizes the staff, skills, culture, operating capabilities, and other key elements of the team(s) involved. Neither organizations nor people are static entities and as time goes on new thinking, skills, and environmental conditions should always be considered and accounted for to ensure a high-performance project team is built and sustained. The following are key considerations for Program leadership as they think through the issues, challenges, and opportunities of how to build and maintain such a team.
- Clear Sponsorship must be provided by a named senior executive who has a direct connection to the business’ financial goals and who has deep credibility with the operating elements of the firm. The sponsor will play a key role in program communications, team recruitment, and validation of outcomes plus resolving disputes and lowering barriers when dependence on entities outside the project team presents itself.
- Robust Governance must be established to oversee resource allocation, benefits validation, and adherence to program principles. This governance will play a key role in creating trust and belief in the requests, plans, and results that emerge from the program. Having senior management engaged in governance provides a key conduit from the program team to the executive team and ensures they are an active part of delivering the required results.
- Establishing an active Steering Committee with representation from the 5 core constituencies will ensure deep and integrative planning, execution, and alignment of incentives throughout the varying phases, activities, and decision-making needed to deliver target results. The steering committee will also be key in helping the project team partner with other organizations to resolve execution challenges and accelerate business change.
- Establishing a rolling 12-month program plan with at least 6 months in detail and 6 months at a summary level will help program leadership effectively communicate the forthcoming staffing and financial resources needed by the team. Such a forward-looking calendar will also provide early signals of future organizational and process change that in turn must be communicated to functional and HR leaders. Those leaders will then have sufficient time to plan for and execute needed changes to take full advantage of the program’s deliverables while minimizing the negative impact of change on the organization.
There are many other best practices that should be considered and the selection of which ones are best suited to a specific company or culture should be carefully considered by leaders experienced with driving change in the target organization.
13. Program Phases
When designing a high complexity, deeply transformative program it is best to thoughtfully deconstruct the program into multiple phases each with manageable complexity and duration. Each phase should have a clear purpose, crisp entry/exit criteria, specific deliverables, and be easily communicated to each stakeholder. The following suggested phases can serve as an initial framework for designing a compelling program with maximum chances for success.
- Chartering – in this phase the core opportunity and targeted outcomes, program sponsorship and leadership, and overall program horizon should be captured. At the highest level, the chartering step should be a compelling call to action for the organization and leadership that clearly establishes the driving rationale and scope for the initiative.
- Design – in this phase, program leadership begins to specifically codify the strategic imperatives, goals, resources, governance, sponsorship, steering committee, timeline, assessment criteria, initial funding, and project leadership for the program. While latter phases may not be spelled out in detail, it is highly valuable to play through the full program scope and timeline to help program designers anticipate the resources, opportunities, risks, and execution plans required to deliver the targeted results. Within the design phase specific people, process, and technology elements in the scope of the program should be detailed and profiled to allow for a robust concept of operations and program strategy to be developed.
- Initiation – in this phase the initial staffing of the team is performed, fully detailed project planning is completed, the specification of initial deliverables, timelines, and resources (human, financial) to be expended is documented, and communications are prepared and delivered to all impacted individuals and entities to signal program launch.
- Execution – Within one or more execution phases, program members begin the formal and tracked execution of program tasks, reviews of deliverables, assessment of interim goal achievement, taking of any corrective actions to restore the program to on-budget, on-plan status, and other elements of a well-run program. Each execution phase should formally review and accept results against defined phase entry and exit criteria to guarantee soundness of execution. These are further defined as:
- Phase Entry – viewed typically as a checklist of must-haves, the entry criteria ensure that the resources, prior phase outputs, and other considerations are in place for the successful initiation of a given phase. Entry activities are often overlooked in the name of speed only to realize shortly thereafter that critical people, capabilities, partners, or other key inputs are missing/unavailable which in turn slows down or halts further progress. These program halts can quickly damage team credibility and momentum and cause a major disruption to the overall effort.
- Phase Exit – to successfully exit an execution phase, program leadership should prepare a formal assessment of the deliverables required for each phase and ensure all exit criteria have been met (deliverables, realized value, etc.). Typically, such results are formally reviewed by both program leadership and the Steering Committee to ensure buy-in and support for closing the current phase and signaling support for the program to continue to a follow-on phase.
- Close-Out – This final phase is an important opportunity to summarize the overall costs, benefits, and other impacts delivered by the program. While not every organization performs a “Lessons Learned” activity it is a highly recommended activity that codifies key learnings across the program lifecycle that can be invaluable for future program leadership.
14. Technology & Services Partners
Technology and Service providers will play a critical role in every phase of the Smart Operations journey. Given that many businesses lack comprehensive in-house product, platform, and best practices expertise across multiple disciplines, those in-house teams necessarily need to find, deeply partner with, and leverage the scale and expertise of external partners. There are three competency areas seen as critical to successfully delivering a Smart Operations style transformation. The following sections summarize six specific capabilities across those three competency areas, and businesses considering a Smart Ops styled transformation will be well served by understanding their internal and partner capabilities in each area.
Business Consulting
The first critical competency needed is broadly characterized as Business Consulting. Specifically, if the in-house team does not have the experience to engage with business designers and strategists to receive, comprehend, deconstruct, provide feedback on, and design to core business strategies, they could miss important alignment and future-proofing elements in initial program design. Such consulting expertise would work closely with senior leadership as well as functional operations and process owners to dive deeply into the intent, deliverables, scaling, flexibility, etc. needed for both short- and long-term success. These activities provide critical context for the Smart Ops architects to design structures and capabilities aligned to current and emerging competitive dynamics.
A second competency in the consulting arena is the ideation, estimation, measurement, and realization of business value potentially derived from the deployment of Smart Operations. Determining both the potential and the actual realization of value derived from deploying Smart Ops capabilities is paramount to sustaining the transformation journey. Consulting skills in this area would be leveraged to map out incremental value potential in diverse operational areas and to create compelling analyses centered on how digital and process improvement investments would translate into operational improvements and, in turn, financial outcomes. Armed with a robust value creation and realization plan, the project team can routinely report on incremental investment and return to sustain enthusiasm and continued funding for the program.
Program Management
The second major area of competency that may be needed from technology and service providers is Program Management. To deliver such a far-reaching and transformative outcome, the Program must necessarily be broken down into individually executable, funded, staffed, and value-creating phases. Virtually no business in today’s world is willing to throw significant dollars at a team, say “go build me a future,” then wait a year or more for results. Instead, an evolutionary approach with definable goals, resourcing, and outcomes, where measurable deliverables are presented, will be an absolute requirement to sustain long-term support. A service provider with strong multi-generational planning skills will be an invaluable asset; blending technological, process, staffing, and business-environment activities and change management considerations into multiple, well-sequenced phases is frankly an art form in many situations. Getting the sequencing correct and wrapping it with robust Program Management disciplines like project statusing/reporting, corrective action execution, change control, and communications is a non-trivial undertaking, especially when crossing multiple “jurisdictional” boundaries like Operations, Finance, HR, and Technology.
An adjunct element within the Program Management arena, and worthy of some discussion, is Change Management. If there were ever a program where change management was a critical success factor, Smart Operations is the one. Across each phase of the program, we can expect serious process and people change requirements that must be carefully tracked, managed, communicated, and documented. The partner must necessarily have a strong change management discipline that can be applied at key junctures of the program, with particular emphasis on an initial baselining step at the very beginning. Having a grounded assessment of the degree to which change acceptance is present in the culture will often indicate the degree of challenge to expect for the project team. The partner will also need solid analytical skills in assessing, prioritizing, and measuring the progress of change initiatives to advise program management leaders in moderating their communications, program pace, and areas of added attention as various parts of the business are in turn transformed by Smart Operations.
Technology Implementation
The final area of core competence needed from one or more Partners is fundamental technology implementation capability, which will be called upon to architect, design, select platforms, install, configure, and operationalize the solution. What will be critical is not only how well selected platforms respond to today’s needed functionality, but also how well they will be able to adapt to future needs in response to changes in the operating model, data environment, compute scale, and other dimensions. Dealing with those anticipated and unanticipated changes is what makes the combination of architectural design and platform selection so critical. Partners with deep and proven skills at projecting how the technology market will change, what new technologies and services will arise, and how well their chosen architecture can respond will be incredibly important to the long-term success and resilience of the Smart Operations transformation.
In addition to strong solutioning (aka architectural design and platform selection), the partner must also have excellent implementation skills including installation, configuration, testing, and operationalization. Each step of the technology build-out is a critical step toward enabling incremental business value but also serves as the next step on the broader journey. Poor implementations will slow down overall progress, fail to fully deliver promised value, and consume precious dollars – all diluting the trust and credibility that IT and the Program Team are working so hard to develop. Partner(s) responsible for technology implementation must have robust experience in target industry sectors, have handled diverse implementation scenarios (including for example scale and geography), and be well versed in modern and emerging deployment models (esp. cloud hosting).
15. Financial Resourcing & Partner Engagement
When considering the funding model for a transformative strategy like Smart Operations, it is very important to have a good sense of the investing philosophy of your business as well as a good feel for where in the business cycle the business currently sits. The days of being handed/promised millions of dollars for a major tech initiative are long gone (if they were ever really here), so program leadership needs a funding strategy that business leaders will find acceptable. In addition, funding should play a major role in creating alignment and commitment, especially with Partners. We see in the technology industry a few Partners who are willing to engage in a risk/gain share model where full payment is made only when stated program objectives are met. This so-called “outcome-based billing” can create tremendous enthusiasm for speed and creativity, and a tendency for a Partner to deploy their best people onto the Program. That is exactly the type of Partnership that every technology and initiative leader is looking for.
Implementing an outcome-based billing model, while highly attractive for the reasons above, is also fraught with risks that must be dealt with upfront to ensure downstream disputes are minimized or eliminated. In inherently dynamic environments, items like late-arriving internal staff, delays in funding technology acquisition, or a “bad quarter” can all derail, even temporarily, a transformation program. When a Partner has built their financial model on achievement of specific milestones over specific timelines, these delays can cause significant disruption to their ROI model. Below are several suggested techniques to ensure maximum alignment and establishment of a robust operating model between the Partner and the firm.
On the upside, gain sharing models have two distinct advantages that can be leveraged by Program leadership to enable accelerated decision-making and increased confidence on the part of business leaders. First, the entry point funding requirements are typically much lower than traditional consulting projects which eases the need for the business to “find” additional funding. Second, Partners are only paid for achieved business outcomes, which by design has a rigorous linkage from tech/service investment to operating performance improvement to discrete financial outcomes. Selling this type of program to senior business executives goes something like “we only pay a portion of our realized savings/gains to our Partners” which is a far different story than “we pay consultants an hourly rate whether we see savings or not.”
The following elements will help ensure that your gain share model and governance are a robust part of your transformation program and that you can maintain a positive working relationship with your Partners.
Clearly Documented Business Case / Payments Model
The first step in designing a gain share model is to co-develop with the chosen Partner(s) a robust and detailed business case. This case should mirror the overall program vision, strategies, and detailed outcomes the business seeks. By working with a Partner, you can build agreement from the ground up and combine insights that factor into key assumptions, risks, and opportunities. Within this business case it is crucial to document the linkage from investment (technology and/or services) to program deliverables to targeted improvements in named business processes and ultimately to quantified financial outcomes. There should be a set of well-defined “conversion factors” that provide those critical linkages such that there is a high degree of transparency and agreement between the firm and its Partners on exactly how their combined efforts will ultimately drive financial outcomes.
As part of this calculation set, we must establish a pay scale that will determine the payments made to the Partner. This pay scale must define the share of savings/gain due to the Partner, the time period over which the agreement applies, and how often payments will be made (e.g., quarterly).
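The pay scale described above can be expressed as a simple calculation. The sketch below is a hypothetical Python illustration, assuming a fixed partner share, quarterly payments, and a fixed horizon; the function name, share, horizon, and savings figures are all illustrative assumptions rather than a prescribed model.

```python
# Hypothetical gain-share pay scale: the Partner receives a fixed share of
# verified quarterly savings, paid each quarter over an agreed horizon.
# All names and figures are illustrative assumptions, not a prescribed model.

def partner_payment_schedule(quarterly_savings, partner_share=0.30, horizon_quarters=8):
    """Return (per-quarter payments, total payment) over the agreed horizon."""
    payments = [round(s * partner_share, 2)
                for s in quarterly_savings[:horizon_quarters]]
    return payments, round(sum(payments), 2)

# Example: verified savings ramp up as the program matures.
savings = [100_000, 150_000, 200_000, 250_000]
payments, total = partner_payment_schedule(savings, partner_share=0.30)
```

A sketch like this makes the pay scale auditable: the firm and the Partner can agree on the inputs (verified savings) and the mechanics (share and horizon) before any invoice is raised.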
Robust Governance and Change Management
A second critical element of making gain share models successful is the establishment of a robust governance model and accompanying change management process. It is almost a certainty that over the lifespan of a typical transformation program there will be occasions where a material element of the program is impacted in a way that could meaningfully alter the business case, the payments model, or both. In these cases, and leveraging a highly cooperative relationship between the parties, a change management process should be used to assess and quantify the impact of the change and, in turn, develop proposed changes to the relevant models. Note that this mechanism is often employed when new upside in a program is discovered, and properly reconfiguring a program can greatly benefit both parties. A successful and equitable outcome from exercising this process will likely sustain or even further develop the cooperative and opportunistic relationship between the parties.
If such a change to the model(s) is required, it is incumbent on both parties to broadly communicate the changes and potential impact on outcomes. This will drive any needed realignment around new goals and ensure all program participants ingest any new guidance that could alter their decision-making criteria for success.
Funding Formula based on directly measured KPIs
A point worth emphasizing is the need to develop specific links that connect operating process improvement to changes in financial outcomes. This has traditionally been overlooked by many technology program leaders, but it is crucial for building trust and credibility that initial investments are truly impacting the bottom line. Fortunately, our earlier emphasis on deeply understanding business processes and their component impacts on end-to-end performance helps us understand how targeted improvement will drive operating improvements. From there, by working closely with Finance professionals, we can translate these operating improvements into hard bottom-line value. Better still, as more of these individual project business cases are built, we will develop a library of techniques that can be reused to accelerate future modeling efforts, with the added benefit of Finance’s seal of approval.
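One concrete way to express such a link is a “conversion factor” agreed with Finance that turns a measured process improvement into dollars. The Python sketch below is a minimal illustration under assumed inputs (cycle-time reduction, annual volume, loaded cost per hour); the names and figures are hypothetical.

```python
# Illustrative "conversion factor" chain: a measured process improvement is
# translated into bottom-line dollars via factors agreed with Finance.
# The factor values below are assumptions for demonstration only.

def kpi_to_dollars(baseline, improved, units_per_year, cost_per_unit_hour):
    """Convert a cycle-time reduction (hours per unit) into annual savings."""
    hours_saved_per_unit = baseline - improved
    return hours_saved_per_unit * units_per_year * cost_per_unit_hour

# Example: order processing drops from 4.0 to 3.2 hours per order.
annual_savings = kpi_to_dollars(baseline=4.0, improved=3.2,
                                units_per_year=50_000, cost_per_unit_hour=45.0)
```

The value of writing the chain down this explicitly is that every factor (volume, cost rate) becomes a documented, Finance-approved assumption rather than an implicit claim.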
Partner Payments based on measured outcomes
The last step of this approach is to develop a payment funding model based on realized business outcomes. As we touched on earlier, there is both a percentage of savings/gain to be negotiated with the Partner and a time horizon over which benefits will be shared. Ultimately, these two variables should be set to incentivize breakthrough performance on the part of the Partner while still ensuring the net value retained by the company is compelling. When developing the payment model, the principals will want to consider the overall business cycle and where the business expects to be when the project is slated for delivery. In some circumstances it will be to the firm’s benefit to pay a higher percentage over a shorter horizon (which the Partner should prefer) if it expects that outyear performance may be even better than current performance. Needless to say, some solid “what-if” analysis will be highly valuable for Program planners, and it is recommended to partner deeply with the Finance function to ensure that this analysis is robustly aligned to overall business performance forecasting.
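The “what-if” analysis over the two negotiated variables can be sketched as a small scenario grid. The Python example below is illustrative only: it assumes a flat annual savings forecast and a simple model where the company keeps its share during the sharing horizon and 100% afterward; real forecasts from Finance would replace these assumptions.

```python
# Minimal "what-if" grid over the two negotiated variables: the Partner's
# share of savings and the sharing horizon. Savings forecast and rates are
# hypothetical; the point is comparing net value retained across scenarios.

def net_value_retained(annual_savings, partner_share, horizon_years,
                       program_life_years=5):
    """Company keeps (1 - share) during the horizon, then 100% afterward."""
    shared = annual_savings * (1 - partner_share) * horizon_years
    unshared = annual_savings * max(program_life_years - horizon_years, 0)
    return shared + unshared

# Evaluate four scenarios on an assumed $1M/year savings forecast.
scenarios = {(share, horizon): net_value_retained(1_000_000, share, horizon)
             for share in (0.25, 0.40) for horizon in (2, 3)}
```

Even this toy grid shows the trade-off the principals must weigh: a higher share over a shorter horizon can retain more net value than a lower share paid for longer, depending on the outyear forecast.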
16. Identification, Quantification, and Measurement of Business Value
A critical element of developing a suitable payment plan for Partners, as described in the previous section, is accurately modeling the financial benefits of each technology/service investment. As we indicated earlier, this has often been a serious gap in the business cases offered by IT, and as a result it has unnecessarily diluted the perceived impact of IT’s efforts. A step-by-step process is proposed below to help program leaders methodically understand and account for targeted business outcomes.
Ideation (Derived from Strategic / Financial Objectives)
The first step is to accurately capture the intended strategic, or perhaps tactical, benefits that are aligned specifically to a desired business outcome. Working back from the needed business result, program designers should deconstruct the relevant strategies and tactics needed by the business to deliver that outcome. Partnering with both Finance and various Functional leaders is often the most effective way to gather and document the right context for the project. At this stage, one should capture the key approaches to solving the need and ideate over the investment elements that are likely needed. In this step the business value should be expressed in bottom-line dollars, along with key risks and their respective dilution potential. Often it is advantageous to document a “Concept of Operations” that characterizes the future state of operations to help all stakeholders visualize the mechanics of how new technology or services translate to improved operations and value.
Once the “ConOps” is developed, the next step is to annotate that document with the process impacts, investments, process user changes, risks, expected outcomes, and their qualitative value to further enrich the storytelling aspects of the visual. With this visualization fully prepared, a best practice is to share it with all levels of the organization. Having front-line employees validate the workability of the revised process will help ensure adoption. Functional leaders can next validate that the concepts will deliver the expected operating improvements and contribute important planning and execution considerations. Finally, sharing with Finance and other Senior leaders will help cement agreement on the efficacy and net value expected.
Quant (Industry/Functional Templates; Company or Service Provider real-life results)
As outcomes are qualitatively modeled and validated, a deep partnership with the Finance function should next be leveraged to translate those operating improvements into financial results. Importantly, Finance will apply conventional classification rules as to the types of benefits, how they will be accounted for, and what evidence they will require to properly assign dollarized benefits to the program. This last element should not be taken lightly. Since technology investments often happen deep in the organization, it can take several translational layers to arrive at the ultimate financial outcome. Along the way there will be a strong temptation for others to attempt to claim a share of those benefits. Program leaders should take the time to document benefits attribution to avoid later disputes among various business functions.
Measure Operational KPIs
In the course of documenting value creation, a great way to lock in participation and commitment from the broadest group of contributors is to define operating KPIs that will be actively baselined and tracked to show specific benefit achievements. The ability of the program to demonstrate specific improvements at each operating layer of the business will help cement credibility at each of those layers. Decomposing expected outcomes, how those will be measured, and with whom such reporting should be shared will be a useful design aid for Program leaders to ensure they properly instrument the various impacted processes. An important side benefit of instrumenting these processes is the ongoing, rich source of highly granular and timely data about process performance that will in turn feed analyses for future improvements.
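Baselining and tracking a KPI is mechanically simple, which is part of its credibility. The sketch below is a minimal Python illustration; the KPI names, baseline values, and the lower-is-better convention are assumptions for demonstration.

```python
# Illustrative KPI tracker: baseline each operational KPI, then report any
# new measurement as a fractional improvement versus its baseline.
# KPI names and baseline values are hypothetical.

kpi_baselines = {"order_cycle_hours": 4.0, "first_pass_yield": 0.91}

def improvement_vs_baseline(kpi, measured, lower_is_better=True):
    """Fractional improvement of a measurement relative to the baseline."""
    base = kpi_baselines[kpi]
    delta = (base - measured) if lower_is_better else (measured - base)
    return delta / base

# Example: order cycle time improves from the 4.0-hour baseline to 3.2 hours.
imp = improvement_vs_baseline("order_cycle_hours", 3.2)
```

Reporting improvements as fractions of an agreed baseline, rather than raw numbers, keeps the story consistent across KPIs with very different units.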
Validate (Process Instrumentation, Data Collection, Finance Sign-Off)
As various projects approach completion, the Program team should specifically begin documenting operational KPI improvements and presenting those to their Finance team members as a key input to validating final financial impacts. This is a key step to provide a closed loop approach for the program to continuously demonstrate achievement of targeted outcomes. In the earlier design of the value strategy, key stakeholders should have been identified as well as their expectations for operational reporting. Program leaders are well served to construct an efficient mechanism for data collection, analysis, and delivery of results to ensure that the accomplishment of multiple projects does not present an unsustainable amount of such work for the team. Ideally such data collection, analysis, and delivery are highly automated which should scale well and support a broad-based program with a potentially high number of such improvements.
Regarding financial outcomes, Program-level reporting should summarize actuals versus plan and indicate ownership of each result set to ensure clear and ongoing accountability. Those respective owners should serve as key validation points given their specific responsibilities and be highly trusted members of the broader business team. Ultimately, the Finance function is the typical final validation point, allowing a true ROI calculation for the program.
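The program-level ROI roll-up itself is a one-line calculation once Finance has validated the benefit actuals. The sketch below is illustrative; the benefit and cost figures are assumptions.

```python
# Simple program-level ROI roll-up over Finance-validated benefit actuals.
# Figures are illustrative assumptions only.

def program_roi(validated_benefits, program_cost):
    """ROI as net benefit over cost; e.g., 0.5 means a 50% return."""
    return (sum(validated_benefits) - program_cost) / program_cost

# Example: three validated benefit streams against total program cost.
roi = program_roi(validated_benefits=[400_000, 650_000, 900_000],
                  program_cost=1_300_000)
```

The discipline is not in the arithmetic but in the inputs: only benefits that have passed the owner and Finance validation points described above should enter the roll-up.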
17. Critical Success & Risk Factors
For a transformation of the scale and impact of Smart Operations it is very important for Program leaders to build out a robust Risk Management capability. That risk management plan is a key input to ongoing program management and can surface key issues that, if not dealt with effectively, could derail program delivery. The risk management program should use a standard risk grading model to ensure a consistent understanding of the probability and impact of each risk. Additionally, a specific risk register should track all identified risks, mitigation actions taken, and residual risks that still threaten final outcomes. Some successful programs even track phase-specific risks and mitigations, which helps focus the program team on very current risks that need active consideration and mitigation. The Program team could also consider implementing limits on total risk allowable before exiting a phase, thus requiring additional mitigation in the current phase.
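A risk register with a standard probability-by-impact grading model and a phase-exit limit can be sketched in a few lines. The Python example below is illustrative: the 1-5 scales, the score threshold, and the sample risks are assumptions, not a prescribed grading standard.

```python
# Sketch of a risk register using a probability x impact grading model,
# plus a phase-exit check against a total residual-risk limit.
# The 1-5 scales, threshold, and sample entries are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int   # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (minor) .. 5 (severe)
    phase: str

    @property
    def score(self) -> int:
        # Standard grading: risk score = probability x impact.
        return self.probability * self.impact

def phase_exit_allowed(register, phase, limit=20):
    """Allow phase exit only if the phase's total residual risk is within the limit."""
    total = sum(r.score for r in register if r.phase == phase)
    return total <= limit, total

register = [
    Risk("Platform selection churn", probability=3, impact=4, phase="Design"),
    Risk("Key sponsor departure", probability=2, impact=5, phase="Design"),
]
allowed, total = phase_exit_allowed(register, "Design", limit=20)
```

In this example the Design phase carries a total residual score above the assumed limit, so the check would force additional mitigation before the phase can close, which is exactly the gating behavior described above.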
In addition to key risks, the Program Team should also document key success factors to focus on that can help ensure success across the program. It is often the case that extra emphasis on a success factor can more than make up for a given residual risk, so Program leadership should leave no stone unturned in their examination of program execution.
Success Factors
One of the easiest, yet most time-consuming, activities that can actively strengthen a program is maintaining a regular and effective communications plan. In transformative situations, where a multitude of questions and concerns naturally arise from fundamental change, having an outlet for the broader team to receive or request insights helps maintain alignment and enthusiasm across the team. Program Leaders should actively develop a communications cadence and content strategy that meets the informational needs across the business. Typically, a mix of e-mails, newsletters, and townhalls is used to meet individuals where they are and in effect “bring the Program to the people,” versus placing the burden on individuals to seek out answers to their questions.
Clear & Realized Business Cases
As described earlier, the effort to develop a clear and concise business case that is effectively communicated to key stakeholders can provide a strong alignment mechanism across an often geographically distributed team. This helps maintain decision consistency at each level of the business because those decision-makers have a clear understanding of why, and toward what ends, the Program is being pursued.
Integrative Team & Culture of Change
The importance of culture cannot be overstated. Asking an organization that has demonstrated little willingness to change to transform to the degree a Smart Operations styled program may require is a tall order at best. Instilling an openness to change across the organization may be the single most important indicator of success. This is not change for change’s sake; rather, an openness to actively participating in and contributing to a well-founded program with clear ambitions and outcomes means the overall Program can draw upon the collective wisdom and experience of the entire organization.
Risk Factors
Tech Platform Selection
One of the critical risk factors in any digital transformation program is the selection, configuration, and operationalization of new platforms into the Enterprise architecture. This is especially acute where the rate of change in available technology is high, presenting businesses with a meaningful chance of selecting the wrong platform. In these cases, a strong risk mitigation approach is to develop a robust, multi-faceted set of evaluation criteria for platform selection. Criteria should of course cover functionality, cost, and manageability, but should also assess the platform’s recent and projected trajectory, rate of change, and ownership strategy. The winning platform should demonstrate a successful recent record of dealing with market shifts and competitive product introductions.
Change in Leader/Sponsor
Another critical risk factor for long-cycle, transformative programs is the potential loss of a key Sponsor or Program Manager. When these key individuals move on, it can signal a loss of confidence in the viability or expected results of the Program, and that signal can infect others on the team, Partners, or other key supporters. Ideally, the initial selection of these critical roles is done with specific focus on their career trajectory, expected time in position, and expected effectiveness in the role. If one of these key resources does move on to another role, it is important for Program Leaders and Sponsors to deliver a solid communications plan explaining to the Team and broader stakeholders the reason for the change, who will replace the departed person, any corrective actions associated with the change, and ongoing efforts to support the new person in the role. As discussed earlier, timely and complete communications can help dispel rumors and concerns that tend to naturally bubble up in uncertain situations.