Why clarity, not coverage, defines modern observability

Imagine standing in a control room filled with screens. Every system reports green, and every dashboard is populated. The view feels complete. Then, a critical business process misses its deadline.

The data was there. The warning signs weren’t obvious. By the time the impact surfaced, the moment to intervene had already passed.

This is a familiar tension for many enterprise leaders. Visibility exists, but understanding doesn’t always follow. Monitoring tools confirm that systems are running, but they rarely explain how automation behaves under pressure, how delays ripple across dependencies or where risk is quietly accumulating.

The single pane of glass was an important step forward. It brought fragmented information into a shared view and reduced blind spots. What it doesn’t consistently provide is depth: the ability to move from status to meaning without manual interpretation.

That gap becomes clear the moment questions turn from “Is it running?” to “Can we rely on it?”

When insight depends on translation, risk increases

Most enterprises already collect enormous amounts of operational data. Automation platforms generate execution logs and performance metrics. And applications and infrastructure emit their own signals. So on paper, nothing is missing. But in practice, insight is scattered.

Understanding what’s happening across critical workflows often requires translation. IT teams pull data from multiple monitoring tools, correlate timelines and explain what technical behavior means for business outcomes. Leaders then depend on these explanations to assess risk, prioritize action and answer questions they know are coming.

This model is fragile. It slows decision-making and quietly extends mean time to resolution (MTTR), even when teams are working as fast as they can. By the time an issue is fully understood, the opportunity to intervene early has often passed, turning what could have been a minor disruption into a larger operational event.

Observability reduces that dependency. By correlating automation data and presenting it with context, it allows different audiences to access the insight they need without waiting for interpretation.

Why consolidation alone doesn’t create clarity

The promise of a single pane of glass is powerful when the goal is shared visibility into a specific domain — one platform, one set of processes, one operational context. It creates a common reference point and a shared understanding of what’s healthy and what’s not.

The challenge emerges when that same approach is stretched to cover the entire enterprise. A single view can only show so much. When automation spans applications, infrastructure, data pipelines and business services, compressing everything into one window often flattens the story instead of explaining it. 

Over time, this leads to dashboard fatigue, especially when green statuses can mask issues that matter deeply to specific teams. Different roles need different windows into the landscape:

  • Process owners need to understand whether end-to-end workflows will complete on time 
  • SAP teams need to see how automation execution affects business services and applications
  • Platform teams need to connect workflow behavior to application performance and infrastructure health

Effective observability extends the single-pane-of-glass approach into a panoramic view, where multiple, connected panes together reveal the full landscape. Each pane provides the right context for the person looking through it, while still drawing from the same underlying source of truth.

One view in a broader landscape

Redwood Software builds observability as a native capability with Redwood Insights for RunMyJobs, ensuring insight is accurate, contextual and available where decisions are made. RunMyJobs provides a clear pane into orchestration, complementing other platforms that offer their own views into applications, infrastructure and business services. This integrated approach avoids the fragmentation that comes with bolt-on monitoring tools and point solutions, ensuring orchestration data is captured at the source and contributes to a broader, connected picture.

Context changes how problems are handled

Monitoring answers a narrow question: did something happen?

Observability answers a more useful one: why did it happen?

With cross-domain, correlated, up-to-date data, teams can see how workflows behave as part of the enterprise ecosystem, how dependencies influence response times and where delays originate — insight that directly shortens MTTR by narrowing focus to the point of failure instead of the symptom. 
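
As a simplified illustration of what “correlated” means in practice, the sketch below narrows a delayed workflow down to the infrastructure event that immediately preceded it. All data and field names here are hypothetical, invented for illustration; this is not any real product’s API.

```python
# Hypothetical sketch: correlating an automation delay with infrastructure
# signals by time window, to point at the origin rather than the symptom.
from datetime import datetime, timedelta

job_runs = [
    {"job": "invoice_batch", "start": datetime(2025, 3, 1, 2, 0), "delay_min": 42},
]
infra_events = [
    {"source": "db-cluster-2", "time": datetime(2025, 3, 1, 1, 55), "event": "failover"},
    {"source": "storage-7", "time": datetime(2025, 2, 28, 9, 0), "event": "disk replaced"},
]

def correlate(run, events, window=timedelta(minutes=15)):
    """Return infra events that occurred within `window` before the delayed
    run started: candidate root causes instead of downstream symptoms."""
    return [e for e in events
            if timedelta(0) <= run["start"] - e["time"] <= window]

candidates = correlate(job_runs[0], infra_events)
# candidates -> the db-cluster-2 failover five minutes before the run
```

Even this toy version shows the shift in focus: instead of starting from the alert that fired, teams start from the event that plausibly caused it.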

The real impact shows up in consistency. Fewer surprises reach leadership. More importantly, service-level agreements (SLAs) stop feeling like commitments you hope to meet and start becoming outcomes you can actively manage. Ultimately, the organization spends less energy reacting and more time improving how critical processes perform.

So, the control room still exists, but it stops being a wall of indicators. It becomes a place where cause and effect are visible.

Resilience requires a longer memory

Operational resilience isn’t built in a single incident. It’s built over cycles.

Short-term monitoring captures what happened today, while observability preserves history and makes it actionable. With extended data retention, leadership teams can look across quarters instead of weeks. They can compare peak-period performance year over year, identify recurring bottlenecks and understand how changes in architecture or volume affect outcomes.

This longer view supports better planning and more credible conversations with the board. It also simplifies governance and audit preparation. Instead of assembling evidence manually, you can rely on a consistent execution history that reflects how systems actually operate.

A 15-month narrative, rather than the two- or three-month one many teams work with today, creates continuity. It allows leaders to explain not only what changed, but why it changed — and how those decisions improved reliability, protected SLAs during peak periods and strengthened the return on automation investments in the long run.

A more sustainable role for IT

When observability is done well, something subtle but important changes inside the organization.

IT teams stop being the place everyone goes for explanations. They’re no longer stuck translating technical signals into business impact after the fact. Instead, they set the conditions for shared understanding. The right information is available earlier, in context and in language that different teams can actually use.

That shift frees technical managers to focus on improving how systems perform rather than defending why something failed. It also changes how leaders engage. Conversations become less about status and more about trade-offs, priorities and what to improve next. Visibility no longer depends on deep technical detail or last-minute briefings.

This is why observability can’t be reduced to “better dashboards.” The real value is confidence: 

✅ Confidence that the systems carrying real business risk are understood

✅ Confidence that issues will surface early 

✅ Confidence that decisions are grounded in reality, not assumptions

Continue exploring observability

As automation continues to scale via Service Orchestration and Automation Platforms (SOAPs), the ability to understand, anticipate and explain performance becomes a strategic advantage. To learn more about how modern observability supports resilient, data-driven operations, explore Redwood’s approach to enterprise observability.

The automation talent shift: Building teams that thrive in the SOAP era

After years of working with SAP customers, partners and internal teams, one thing has become clear to me: automation has outgrown its roots as a technical initiative tucked inside IT. Today, automation is the connective tissue of modern digital infrastructure and, increasingly, of the teams that run it.

What doesn’t get talked about enough is that this shift isn’t primarily about tools. It’s about us as humans.

That’s why I find the 2025 Gartner® Magic Quadrant™ for Service Orchestration and Automation Platforms (SOAPs) so relevant as both a technology lens and a talent and operating-model conversation.

SOAPs aren’t simply better schedulers. They orchestrate end-to-end business services with precision, context and intelligence. And when an organization adopts one, it doesn’t just modernize its automation stack; it reshapes how teams work together, learn and create value.

From my perspective, this isn’t a skills gap problem. It’s about how roles are naturally evolving. With the right guidance, encouragement and space to grow, people can thrive in change rather than just adapting to it.

The shift from task execution to service ownership

SOAPs dramatically expand the surface area that automation touches. We’ve moved from “run this job” to “run this entire business service — with events, conditions, dependencies and real consequences.” 

That evolution changes the nature of the work. You’re no longer optimizing isolated workflows inside a single system. You’re orchestrating processes that span ERP, SaaS applications, cloud platforms and external services. That requires broader thinking and deeper collaboration. The work itself becomes more strategic.

People are still central, but they’re now enabling resilience, business agility and real-time orchestration, not just maintaining automation.

What high-performing automation teams think differently about

The first real shift has to happen in how teams think about automation. Over time, I’ve seen successful teams move from:

  • “Automate the task” → “Orchestrate the service”
  • Siloed responsibility → Shared, cross-functional enablement
  • Versioned scripts → Reusable templates and modular components
  • Maintenance thinking → Platform thinking
  • Support role → Strategic enabler

This mindset shift is subtle, but it changes everything from design decisions to how people collaborate across SAP and non-SAP landscapes.

How automation responsibilities are being redistributed

As SOAPs reshape operational models, new roles naturally emerge. Many organizations already have the talent; they just haven’t named or empowered these roles yet.

Some patterns I’m seeing more often:

  • Process architect/Orchestration designer: Connects workflows across business services, APIs and cloud-based environments
  • Automation data translator: Bridges operational logic with analytics, logs and business context
  • Workflow monitoring and exception manager: Manages signals, dependencies and upstream/downstream impact
  • Adoption lead or Change champion: Drives orchestration consistency across business ops, IT operations and development teams
  • Automation culture steward: Shapes shared norms around reusable assets, platform thinking and feedback loops

Revisiting how automation work is distributed can unlock capacity you didn’t realize you had.

The conditions that make orchestration stick

Technology expands what’s possible. Culture determines what actually sticks. As automation spans more of the business, shared ownership becomes essential.

That starts with a shared vocabulary. If one team calls it “dependency mapping” and another says “event chaining,” you’ll end up with silos — not orchestration.

It continues with a learning loop that goes beyond training. People and teams need space to experiment, compare patterns and refine their instincts. As a leader, your job isn’t to prescribe every step but to create the conditions for repeatable learning that scales naturally.

And all of the above depend on clear ownership. Without visible leaders accountable for process optimization, templates and tooling standards, automation efforts remain reactive.

How to help your team thrive

If SOAPs unlock new levels of human potential, your job is to help your team take that potential and run with it. The environment you create — and the initiatives you choose to prioritize — will bring the shift to life.

At a minimum, I’d focus on the following.

1. Build capability with intention

Capability building can’t be accidental. It should be purposeful. Help teams build fluency in event-driven automation, API integration, cloud orchestration and monitoring patterns. Give legacy automation specialists room to evolve into orchestration designers or platform operators.

The goal isn’t to turn everyone into a developer. But everyone should understand how services fit together and how dependencies behave. You’ll be designing for resilience when your teams understand the “why” behind SOAP-driven workflows.

2. Create a structure that supports orchestration

Orchestration doesn’t thrive without structure. Establish an automation Center of Excellence as a guide for standard workflow patterns, exception handling and reuse. Make ownership explicit for templates, connectors and observability.

Most importantly, bring your practitioners into governance conversations. That’s how you remove friction between process design and automation design.

3. Equip teams with tools that let them excel

People do their best work when technology reduces complexity instead of adding to it. Choose platforms that make dependencies visible and support both low-code and advanced design approaches. Each persona should be able to contribute at their level.

I often ask one simple question: Will this tool make it easier for my team to design, understand and maintain end-to-end processes? If the answer is yes, the value shows up quickly.

4. Avoid the usual traps

Automation stalls when it’s treated as an IT-only scripting exercise, when adoption is an afterthought or when success is measured by output instead of outcomes. You can avoid these traps by formalizing enablement, designing for orchestration — not tasks — and tying KPIs to reliability and business impact.

Your people = your differentiator

SOAPs raise expectations for how work flows across the enterprise. But it’s your people who turn those expectations into outcomes.

When you make space for teams to think bigger about how data, work and ideas move across the business, you unlock something far more powerful than automation alone.

If you’re building that kind of culture, it helps to understand where the market is headed. The 2025 Gartner® Magic Quadrant™ for SOAPs report offers a grounded view of the Leaders and orchestration capabilities shaping the next chapter of enterprise automation — and the teams that will thrive in it.

Payments modernization depends on orchestration — not just the core

There’s a particular kind of risk that only exists in systems that “work.” It’s not the flashy kind, or the kind that triggers emergency funding or board-level interventions. This is a quieter risk, embedded deep in the background of day-to-day operations. 

It’s the infrastructure everyone depends on, but almost no one revisits, because it hasn’t failed loudly enough.

Banks have spent years modernizing what customers can see: digital experiences, mobile apps, real-time payment rails, cloud-native cores. Those investments were necessary. In many cases, they were overdue. And on paper, they delivered exactly what executives asked for.

So, why does it still feel harder than it should be to move money safely, quickly and predictably?

When “good enough” stops being defensible

Most enterprise architects and IT operations leaders know this feeling well. The environment works. Payments clear, and fraud is caught. Reconciliation eventually balances. When something breaks, teams step in, fix it and move on. The system absorbs stress, and people compensate. And because the compensation works, the underlying issue stays invisible.

But “good enough” becomes much harder to defend when three pressures converge at once:

  1. Payments volumes accelerate
  2. Time-to-decision collapses
  3. Accountability increases

That convergence is happening now, and it’s visible to regulators and customers.

Real-time rails like FedNow and real-time payments (RTP) aren’t just faster versions of existing processes. They eliminate the buffer zones — overnight windows, batch retries, manual intervention points — that legacy schedulers took advantage of for decades. At the same time, regulatory scrutiny and customer expectations have converged around one assumption: you know exactly where a payment is, why it failed and what you’re doing about it.

That assumption exposes a structural weakness many banks and financial institutions have learned to work around — but not fix.

The invisible complexity behind every transaction

A modern payment doesn’t move through a straight line. It fans out across fraud detection, compliance checks, routing decisions, settlement systems, reconciliation workflows, notification services and reporting pipelines. Many of those components have been modernized individually. Few have been modernized together.

Orchestration fills the gap.

Many teams still rely on a combination of legacy schedulers, custom scripts and tribal knowledge. It’s not elegant, but it’s familiar. And familiarity is powerful, especially when budgets are tight and priorities are visible elsewhere.

The problem is that technical debt compounds fast, and it’s sticky.

Outages that weren’t supposed to matter

In May 2025, a major outage at Fiserv disrupted payment services across multiple United States banks and credit unions. Zelle transfers stalled, and online banking features and ACH processing were affected. For customers, the experience was confusing. And for banks, it was clarifying. It was a failure of coordination, not innovation.

Similar stories have played out across industries. 

  • Airlines grounded by systems that couldn’t reconcile real-time data flows: Hundreds of flights were canceled in 2022 when key IT systems went offline, revealing how critical poorly coordinated back-end layers can be.
  • Cloud providers experiencing cascading outages because dependency logic behaved differently under load: A major AWS outage in 2025 rippled across global services when internal automation triggers weren’t sufficiently orchestrated, showing how even modern platforms can fail without resilient control layers. 

In each case, the visible platform was modern, but the control layer beneath it was not. These incidents are foreshocks, signaling the risk of a greater problem ahead. They indicate architectural lag: the desire for execution speed outpaced application and data orchestration maturity.

The operational resilience question no one wants to ask

Over the past several years, operational resilience has stopped being something IT teams manage behind the scenes and started becoming something boards are directly accountable for. Regulators now expect banks to demonstrate not just recovery plans but clear tolerance for disruption, while customers and markets punish even short-lived outages with lost trust. As a result, resilience is now a governance issue.

Here’s the uncomfortable question many organizations avoid: If a critical payment flow failed right now, could you trace its path end to end quickly enough to meet your obligations without assembling a war room?

Not in theory. Not eventually. But immediately, in real time.

Could you see which system made the last decision, which dependency stalled and which downstream processes were affected? Or would your teams jump between dashboards, logs and scripts to reconstruct the story after the fact?

If the answer feels uncertain, don’t blame capability. The failure is architectural. Operational resilience is proven in the moment of impact: when systems strain, dependencies collide and decisions must be made immediately. It depends on understanding how work actually flows and how systems behave together under stress, so breaks can be proactively identified and addressed in real time, not explained after the fact.

Core modernization: Essential, but not enough

Core banking platforms were never designed to own end-to-end payment coordination. They were designed to be systems of record. Modernizing the core improves performance, scalability and flexibility, sure. But it doesn’t automatically unify the workflows that surround it. Those workflows still exist across dozens of systems: many internal, many external and all interdependent.

Without deliberate payments orchestration, modernization shifts complexity outward. Integration logic multiplies, exception handling becomes bespoke and recovery paths vary by payment type, rail and geography.

From the outside, everything looks faster. But inside, operations feel heavier.

Why this matters now

For years, banks could afford to defer this problem. Latency masked fragility, and lots of manual effort absorbed uncertainty. Institutional knowledge filled the gaps, but that tolerance is disappearing.

Real-time payments have reduced recovery windows to seconds. AI-driven fraud models are introducing asynchronous decision points. And each new payment method and provider increases the number of routing paths. Customers, retail and corporate alike, expect transparency when something goes wrong. In that environment, orchestration is a strategic capability rather than background plumbing.

Orchestration as the control plane

Being successful at modern payments orchestration means establishing a control plane that understands how payment flows behave across systems.

That includes:

  • Event-driven execution instead of clock-based scheduling
  • Dependency awareness that prevents cascade failures
  • End-to-end visibility across payment journeys
  • Governance and auditability built into execution, not layered on afterward
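
To make the distinction between event-driven execution and clock-based scheduling concrete, here is a minimal Python sketch. Every name in it is a hypothetical illustration, not any vendor’s API: steps run the moment their upstream events arrive, and a step’s completion is itself an event that downstream steps can wait on.

```python
# Hypothetical sketch of event-driven, dependency-aware execution.
# Nothing here reflects a real orchestration product's interface.
class Orchestrator:
    def __init__(self):
        self.dependencies = {}  # step -> set of events it waits on
        self.actions = {}       # step -> callable to run
        self.received = set()   # events observed so far
        self.completed = []     # steps that have run, in order

    def register(self, step, waits_on, action):
        self.dependencies[step] = set(waits_on)
        self.actions[step] = action

    def emit(self, event):
        """Record an event, then run every step whose dependencies
        are now fully satisfied. Completion emits a new event."""
        self.received.add(event)
        for step, deps in self.dependencies.items():
            if step not in self.completed and deps <= self.received:
                self.actions[step]()
                self.completed.append(step)
                self.emit(f"{step}.done")

audit_log = []
orc = Orchestrator()
orc.register("settle", waits_on={"fraud.cleared", "route.selected"},
             action=lambda: audit_log.append("settle"))
orc.register("notify", waits_on={"settle.done"},
             action=lambda: audit_log.append("notify"))

orc.emit("fraud.cleared")   # nothing runs yet: routing not decided
orc.emit("route.selected")  # settle runs, which in turn triggers notify
```

Note how “settle” waits until both upstream signals exist. Under clock-based scheduling, it would fire at a fixed time whether or not the fraud check had finished, which is exactly the cascade-failure pattern dependency awareness prevents.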

When orchestration evolves, your ecosystem behaves differently. Failures isolate instead of spreading, and recovery stops being a heroic moment. You regain your margins faster than you would have thought possible, even in worst-case scenarios.

Modernizing your orchestration approach also prepares your organization to execute on the AI use cases you’ll need to keep pace in tomorrow’s financial services world. Learn how.

The risk (and opportunity) of waiting

The greatest risk in payments modernization today isn’t choosing the wrong platform. It’s assuming the operational foundation will keep holding. Most organizations don’t modernize orchestration because something breaks. They do it because the cost of not knowing what’s happening in their payment flows, and of not being able to change them quickly, eventually exceeds the cost of change itself. When competitors can launch new payment experiences in weeks and you’re stuck doing it in quarters, the limitation isn’t strategy but orchestration.

Payments modernization is already a recognized growth lever. What’s often missed is where that growth actually comes from. It doesn’t come from new payment types alone, but from the ability to operationalize, deploy and scale them into production quickly and reliably. That capability lives in the underlying application and data pipeline orchestration. When plumbing is rigid, modernization becomes cosmetic rather than transformational.

This is why payments modernization succeeds or fails long before a new rail or service goes live. Real-time processing and richer payment data enable request-to-pay, embedded finance, merchant insights and cross-border optimization. None of these are possible without orchestration that can adapt payment flows quickly, route intelligently across providers and expose consistent data across the ecosystem. Modernization creates growth only when the plumbing underneath is built to move.

The banks that act now won’t be the ones chasing outages but the ones making payments boring again. And in financial services, boring is often the highest compliment. Find out more about how to modernize your payments processes.

The reconciliation is done … or is it?

Reconciliation checkboxes aren’t a close, especially when “reconciliation” really means transactional matching.

Most transactional reconciliation tools rely on dashboards and checklists to show progress across the financial close. Once data matching flags items as “matched,” the system often marks the task complete. On the surface, the close process appears controlled. Dashboards turn green. Workflows advance. The reconciliation looks finished.

But checklists are driven by task completion, not data movement or financial accuracy, and a “complete” status in the reconciliation tool doesn’t mean the data has been updated or validated. It only means someone flagged a match. In the financial close process, completion should mean corrected account balances in the general ledger instead of a visual signal in a reconciliation solution. This distinction matters during the month-end close, when manual processes and unresolved discrepancies can quietly accumulate.

That gap misleads CFOs into thinking issues are resolved when they are not. One healthcare controller learned this the hard way. Their team believed reconciliations were complete across bank reconciliation, sub-ledger activity and accruals. The dashboards showed no open items. Yet during an audit, $2.6 million in accrual-related journal entry corrections were still sitting in email threads, never posted to ERP systems. The financial statements looked clean on paper, but the underlying financial records told a different story.

Finance Automation by Redwood prevents this false confidence by tying reconciliation status to execution. The platform does not allow the close process to advance until required journals are created, approved and posted inside SAP, aligning transactional reconciliation with real financial outcomes.

“Matched” doesn’t mean corrected

In transactional reconciliation, data matching is detection, not correction. Auto-match logic highlights discrepancies between bank statements, bank feeds, bank transactions, credit cards and bank accounts, but it doesn’t fix them. Many reconciliation tools stop once discrepancies are identified, which forces finance teams to resolve issues elsewhere.

That “elsewhere” is typically spreadsheets or Excel templates used to calculate correction journals. These manual processes introduce human error, increase manual effort and slow the account reconciliation process, especially in high-volume environments handling large volumes of transactions across multi-currency entities. This time-consuming workaround introduces risks that include:

  • Added burden on finance and accounting teams already stretched thin
  • Late-cycle changes that disrupt the month-end close
  • Lower reliability in financial reporting and audit trails
  • More exposure to error-prone, manual processes
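
A tiny sketch makes the “detection, not correction” point concrete. The data, field names and matching rule below are entirely hypothetical, invented for illustration: matching can flag a discrepancy while leaving the ledger itself untouched.

```python
# Illustrative only: auto-matching detects discrepancies but posts nothing.
bank_feed = [
    {"ref": "TX1001", "amount": 500.00},
    {"ref": "TX1002", "amount": 250.00},
]
general_ledger = [
    {"ref": "TX1001", "amount": 500.00},
    {"ref": "TX1002", "amount": 205.00},  # keyed-in transposition error
]

def auto_match(feed, ledger):
    """Flag each feed item as matched or as an open discrepancy.
    Detection stops here; nothing in the ledger changes."""
    ledger_by_ref = {e["ref"]: e for e in ledger}
    discrepancies = []
    for item in feed:
        booked = ledger_by_ref.get(item["ref"])
        if booked is None or booked["amount"] != item["amount"]:
            booked_amount = booked["amount"] if booked else 0.0
            discrepancies.append({
                "ref": item["ref"],
                "correction": item["amount"] - booked_amount,
            })
    return discrepancies

open_items = auto_match(bank_feed, general_ledger)
# open_items -> [{"ref": "TX1002", "correction": 45.0}]
```

The gap the article describes sits after this function returns: each open item still needs a correction journal created, approved and posted to the general ledger before the reconciliation is truly done. Most tools stop at the list.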

Validation functionality inside transaction-level reconciliation tools rarely touches the actual SAP posting layer. As a result, the system cannot reconcile accounts end to end. In the healthcare example, unmatched accruals required correction journals before depreciation could run. Because those journals were not posted, downstream close management tasks stalled, consolidation was delayed and financial reporting timelines slipped. The reconciliation tool checked the box, but the close process broke.

Finance Automation closes this gap by linking transaction matching directly to journal execution. When reconciliation logic is satisfied, the platform can automatically create, route and post journals based on configured rules and approvals, eliminating spreadsheet dependency.

Resolution depends on actual journal execution

A reconciliation is only complete when correcting entries are posted to the general ledger. Visual confirmation without execution is meaningless. Yet many reconciliation tools cannot natively see whether journals tied to reconciliation items are even in flight, let alone posted.

Auditors know this weakness well. During the healthcare audit, the team was asked to prove when corrections posted, with timestamps, audit trails and supporting documentation. Without proof of posting, the team couldn’t explain how those corrections affected the broader financial data or when adjustments were reflected in reporting. The reconciliation system showed completion. The ERP showed nothing. Internal controls existed on paper but not in execution.

Finance Automation enforces reconciliation completeness by embedding the entire discrepancy resolution process into ERP-native execution. It tracks discrepancy detection, journal creation, approval workflows, posting and reversal where needed. As a result, teams get audit-ready financial records with full traceability that reduce risk management exposure and support accurate decision-making.

Why most tools create journal gaps instead of closing them

Most tools separate anomaly detection from journal processing. That architectural split forces accounting processes to span multiple systems and modules, which creates manual work outside the platform. Corrections are calculated in Excel, routed through email and posted manually through ERP interfaces or APIs that break audit trails and slow down downstream SAP jobs. Even when teams try to fill the gaps manually, the process remains error-prone because they’re relying on disconnected handoffs between people and systems.

This fragmentation impacts cash flow visibility, forecasting accuracy and consolidation timing. When account balances are corrected late, pricing assumptions shift and financial management becomes reactive. The reconciliation solution reports completion, but the financial close continues behind the scenes.

Finance Automation addresses this structurally. Built as a cloud-based orchestration layer, it unifies reconciliation, journal entry and close management in a single platform. It integrates directly with data sources, bank feeds and ERP systems and removes the journal entry automation gaps that reconciliation tools leave behind by streamlining the entire close process.

Use reconciliation to trigger real action

Finance Automation transforms transactional reconciliation from passive review into active resolution. Where traditional account reconciliation software promotes visibility and certification as its key features, Finance Automation embeds execution directly into the ERP layer so reconciliation actually results in posted journal entries. Finance Automation is the leading record-to-report (R2R) orchestration platform and is designed to execute the financial close rather than monitor it.

When reconciliation logic confirms discrepancies, Finance Automation automatically generates correcting journal entries, applies approval workflows, validates posting rules and posts directly to SAP. The reconciliation process becomes a trigger for real action instead of a reporting exercise. Account reconciliation tools no longer stop at visibility. They drive execution.

In the healthcare controller’s case, this would have changed the outcome entirely. The $2.6 million in accruals would have been posted in real time, depreciation would have run on schedule and audit questions would have been answered with system-backed evidence. Finance and accounting teams would have spent less time chasing emails and more time closing with confidence.

By orchestrating close management, automated reconciliation and journal execution across ERP systems, Finance Automation reduces manual processes, improves scalability for enterprise organizations and delivers real-time insights through a user-friendly platform.

If your dashboards look clean but your journals live in email, your reconciliation is not done, and your journal entry close is not really automated. Test your journal automation maturity and see how your reconciliation breaks down into manual journals.

Why manufacturing automation has hit a plateau — and what will get it moving

If you lead manufacturing operations or IT today, automation itself probably isn’t your constraint. In many environments, it’s working exactly as intended. Production lines are more stable. Downtime is lower. And automated systems are doing the jobs they were designed to do, often reliably and at scale.

Yet, in my conversations with plant managers, operations leaders and CIOs, a familiar theme keeps surfacing: progress feels harder than it should. Automation initiatives keep getting approved, but then momentum slows. Improvements arrive in pockets rather than end to end.

The data in Redwood Software’s new manufacturing automation research backs that up. Seven in ten manufacturers report automating 50% or less of their core operations. Only about a quarter say they’ve automated more than half.

This isn’t a failure of manufacturing automation or a lack of commitment. What the data points to instead is a structural limitation. You reach a plateau in automation maturity because automation often stops at system boundaries, not because you lack the right tools. Over time, your organization may have built an impressive collection of automation technology, but the connective tissue between those systems never quite materialized. Returns often flatten in this scenario because automation stops compounding, not because it never worked in the first place.

The middle-stage trap

When manufacturers described their automation maturity, the pattern was striking. Nearly half — 47% — placed themselves in the “Managed” stage, where automated processes exist but orchestration is partial. Another 26% identified as “Controlled,” with most tasks automated and orchestration present. Only about 2% described their operations as fully autonomous.

In other words, nearly three-quarters of manufacturers sit squarely in the middle automation maturity stages.

That clustering isn’t random. It reflects a ceiling most organizations hit after automating the obvious, self-contained processes. Early automation wins are straightforward: scheduling jobs, triggering reports, running batch processes, stabilizing equipment routines. These improvements deliver immediate value and reduce human error on the factory floor. But once those gains are captured, what remains is harder. 

The next level of improvement depends on workflows that span multiple systems — ERP, MES, supply chain platforms, quality systems and control systems built around programmable logic controllers. That requires orchestration, not just automation.

The challenge is that middle-stage maturity feels like success because dashboards are green and production rates look healthy. But the manual work hasn’t disappeared; it’s shifted into the gaps between automated processes, where people compensate with spreadsheets, emails and workarounds.

Where automation delivers and why connection matters

Automation delivers its strongest results when applied to processes contained within a single system. The report shows that about 60% of manufacturers have reduced unplanned downtime by at least 26%, with a meaningful share reporting reductions beyond 50%. Uptime, throughput and quality control consistently emerge as areas where automation excels.

These results are real, and they matter. They represent reduced risk, stabilized high-volume operations and improved consistency across production processes.

Challenges tend to emerge when outcomes depend on coordination across systems. 

  • Inventory turns remain difficult to improve even as automation improves uptime, highlighting the limits of siloed execution 
  • Data accuracy also lags, especially when information must move quickly between planning, execution and supply chain functions using real-time data

Lack of coordination isn’t limited to automation initiatives. Recent McKinsey research shows that broader disruptions — from supply chain volatility to shifting manufacturing footprints — are exposing the same structural weaknesses, where disconnected systems and fragmented decision-making limit performance even in otherwise well-run operations.

You can optimize maintenance schedules inside an MES or improve machining efficiency with CNC and control systems. Those are bounded workflows with clear inputs and outputs. But improving inventory performance requires synchronized data and decision-making across forecasting, production planning, material handling, warehouse operations and supplier networks.

When automation stops at system boundaries, single-system metrics improve, while cross-system outcomes lag. Orchestration addresses this gap by connecting existing automation into workflows that span the entire manufacturing environment.

The top bottlenecks between systems

When we asked manufacturers about their automation challenges, three issues arose most often: 

  1. Forecasting accuracy gaps
  2. Manual exception handling
  3. Lack of integration between ERP, MES and PLM systems 

Together, these account for roughly 66% of reported bottlenecks. What’s notable is what isn’t on that list. Manufacturers aren’t pointing to weak automation technology, but to breakdowns between systems.

Exception handling is a clear example. Only 40% of manufacturers have automated it, even though 22% cite manual exception handling as a top disruption. Exceptions don’t respect system boundaries. A supply delay affects production schedules, inventory positions, customer commitments and financial forecasts simultaneously. Resolving that requires coordinated action across systems, not isolated scripts.
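
A minimal sketch shows what coordinated action looks like compared with isolated scripts: one exception event fans out to every affected system in order, and the responses are collected so the resolution is traceable. The handler names and messages below are purely illustrative, not any vendor's API.

```python
def handle_supply_delay(event, handlers):
    """Fan a single exception out to every affected system, in order,
    and collect the responses so the resolution is auditable end to end."""
    return [(name, handler(event)) for name, handler in handlers]

# Purely illustrative per-system reactions to one supplier delay:
handlers = [
    ("mes",      lambda e: f"reschedule line for {e['part']}"),
    ("erp",      lambda e: f"update inventory position for {e['part']}"),
    ("crm",      lambda e: f"flag customer commitments on {e['order']}"),
    ("forecast", lambda e: "refresh demand plan with revised lead time"),
]
```

An isolated script would perform only one of these reactions; orchestration guarantees all of them happen, in sequence, from the same triggering event.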

The same pattern appears in forecasting. Forecasts depend on timely, accurate data from many sources. When those systems aren’t connected through event-driven workflows, forecasts rely on stale information. By the time data is reconciled, the window for action has already closed.

These aren’t edge cases. And they persist not because automation has failed, but because automation alone was never designed to solve them.

Fragmented data automation 

Most manufacturers automate inside systems, not between them. The data shows that 78% have automated less than half of their critical data transfers. More than a quarter still move sensitive information through email or manual methods. Nearly 30% rely on scheduled scripts rather than event-driven automation that responds to conditions as they change.

Over time, this fragmentation compounds. Each new automation initiative delivers value in isolation, but also introduces another boundary that someone must manage. Complexity increases and manual handoffs multiply. Each additional project adds less incremental benefit than the one before it.

Manufacturing environments span decades of technology: legacy MES platforms, modern cloud applications, IoT and data collection layers and enterprise systems from multiple vendors. Connecting that landscape requires orchestration that can coordinate workflows across it all, based on events and business rules rather than schedules.
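
The difference between schedule-driven and event-driven data transfer can be made concrete with a minimal Python sketch. Nothing here reflects a specific product: the queue simply stands in for a message bus or webhook receiver, and the `fetch`/`push` hooks are invented for the example.

```python
import queue
import time

events = queue.Queue()  # stands in for a message bus or webhook receiver

def scheduled_transfer(fetch, push, interval_s=3600):
    """Schedule-driven: moves whatever happens to be ready each interval.
    Between runs, downstream systems work from data up to an hour old."""
    while True:
        for record in fetch():
            push(record)
        time.sleep(interval_s)

def event_driven_transfer(push):
    """Event-driven: reacts the moment a source system emits a change,
    so ERP, MES and planning stay current. A None sentinel stops the loop."""
    while True:
        record = events.get()  # blocks until an event arrives
        if record is None:
            break
        push(record)
```

The scheduled version is simpler to write, which is why nearly 30% of manufacturers still rely on it; the event-driven version is what keeps cross-system data fresh enough to act on.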

Reframing the challenge

Automation hasn’t failed the manufacturing industry. It has delivered real, measurable value where workflows remain contained. Fixed automation works. Flexible automation works. Individual automation solutions continue to advance.

What needs to change is the focus.

The next phase of automation maturity will be about connecting what’s already automated rather than adding more tools. Exceptions and handoffs — the points where risk and cost accumulate — need to become primary targets for improvement. Workflows must adapt in real time. How well you handle this shift will determine whether your manufacturing automation investment plateaus or continues to scale.

🠆 See a demo of what orchestration could look like using RunMyJobs by Redwood for SAP production planning.

What gets automation moving again

Manufacturers that climb beyond mid-stage maturity share common characteristics. 

  • They automate exception handling across systems
  • They connect data flows between ERP, MES and supply chain platforms
  • They rely on event-driven workflows instead of scheduled scripts

These organizations are also more likely to explore artificial intelligence and machine learning use cases — not as a leap into the unknown, but as a natural extension of orchestrated operations. AI models are only as effective as the data feeding them, and orchestration ensures that data is timely, complete and actionable.

Orchestration changes the question from “What should we automate next?” to “Which workflows still depend on manual coordination?” It shifts success metrics from the number of automated tasks to the reduction of human intervention across the manufacturing industry.

The plateau is real, but it isn’t permanent. Changing your outcomes starts with changing how systems work together.

Get prepared for an orchestrated future now. Download the full Manufacturing AI and automation outlook 2026 to see how your organization compares — and what it takes to move beyond the middle.

The AI and automation trends that will decide which enterprises hold up in 2026

If the past few years were about proving that AI works, the next few will be about proving it can deliver.

By 2026, most enterprises will no longer be asking whether AI belongs in their automation strategy. That debate is effectively over. The harder questions are about trust, resilience and value: 

  • Can automation adapt when reality does not follow the plan? 
  • Can leaders rely on it when pressure is highest? 
  • Does it genuinely make the business stronger, not just faster?

These questions signal a turning point. Automation is growing up. Below are Redwood Software’s top predictions for how AI, agentic systems and automation will show up in real-world IT and operations over the next year and beyond.

1. ERP will evolve from “system of record” to “system of action”

For decades, enterprise resource planning (ERP) platforms have been treated primarily as systems of record: authoritative databases and sources of truth for the business.

That’s changing. In 2026, as AI adoption expands and agentic systems move beyond chat and analysis into execution, the ERP will still be at the center of the business. But its value will increasingly come from how effectively it drives action.

This shift has been discussed for years, but only now is the surrounding ecosystem mature enough to make it practical. Many agentic initiatives struggle today because they operate in isolation, confined to a single team, department or experimental environment. They rarely deliver sustained value without deep integration into core business systems.

Service Orchestration and Automation Platforms (SOAPs) play a pivotal role in closing this gap. By connecting ERP data models through an orchestration layer that spans applications, integrations and infrastructure, enterprises can move from insight to execution with greater reliability. A true orchestration platform lets a business's ERP, agentic systems and traditional services work together, and it allows teams to evolve processes with AI technologies with minimal disruption, making a return on AI investment far more achievable.

Watch out: Treating agentic AI as a standalone layer outside ERP and orchestration will limit its impact. The value comes when insight, decision and execution operate as one system.

2. AI governance will move from policy to operating model

Most enterprises now have some form of AI governance framework, but few have fully operationalized it. That will change quickly. 

As AI-driven and agentic decision-making becomes embedded in day-to-day operations and core automation workflows, governance can no longer live in policy decks or steering committees alone. In 2026, effective AI governance will look much more like an operating model.

This means clearly defined boundaries for autonomous action, explicit escalation paths for human oversight and transparent validation of AI models and decisions. Just as importantly, it requires auditability that scales across complex, cross-system workflows.

Strong governance is an enabler rather than a constraint, and teams move faster when they trust the systems they rely on. Organizations that build governance directly into their automation foundations will be far better positioned to scale AI responsibly and confidently.

Watch out: Governance that lives only in policy documents will slow adoption. Governance built into workflows accelerates trust and scale.

3. Shadow AI will force agentic orchestration to the forefront of enterprise operations

As AI capabilities expand, enterprises will face a familiar challenge in a new form: shadow AI.

Just as shadow IT emerged during the early days of cloud adoption, shadow AI appears when teams deploy AI tools and agents outside enterprise guardrails. These initiatives often move quickly but operate in isolation, creating fragmentation, unpredictable downtime and security exposure from tools never designed for mission-critical use.

This fragmentation is one of the main reasons many agentic initiatives stall or fail to deliver ongoing value. Intelligence without coordination means decisions are made in isolation and can’t reliably translate across complex business environments.

2026 is the year orchestration will be widely recognized as the connective tissue that resolves this problem and makes AI useful at scale. This includes the growing role of agentic orchestration, where intelligent agents coordinate decisions and actions across workflows rather than acting as standalone tools. This year, agentic AI will move from experimentation into planning. Buyers will increasingly score vendors on “agent readiness,” asking how AI agents are governed, orchestrated and integrated into existing workflows without introducing new risk.

Rather than hardcoding every possible scenario, orchestration allows workflows to adapt in real time while maintaining visibility, accountability and control. This is what turns AI from a collection of point capabilities into something enterprises can depend on.

Watch out: Shadow AI can deliver short-term wins, but without orchestration and governance, it introduces long-term operational and security risks that enterprises cannot afford.

4. AI will amplify experienced teams, not replace them

Despite the headlines, most enterprise leaders are not trying to remove people from operations. They’re trying to remove friction. This year, AI-enabled automation will increasingly support overstretched teams by handling exception triage, diagnostics and routine decision-making more consistently and at greater scale. Skilled professionals will be able to focus on higher-value work, where judgment and context matter most.

This is already changing how teams interact with SOAPs. Natural-language co-pilots are becoming standard, helping teams build workflows and configure automations without deep scripting expertise. What once required specialist knowledge is becoming accessible to a broader range of operational and technical users.

At the same time, AI-driven anomaly detection is becoming the default for runtime operations. Instead of reacting to failures, teams increasingly rely on systems that continuously ask, “What’s unusual here?” across schedules, queues, dependencies and downstream impacts — using data that orchestration platforms already collect.
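
The question "What's unusual here?" can be approximated with something as simple as a z-score over each job's runtime history. Real platforms use far richer models, but this sketch (all names invented) captures the idea of judging each run against that job's own baseline.

```python
from statistics import mean, stdev

def unusual_runtimes(history, latest, threshold=3.0):
    """Flag jobs whose latest runtime deviates sharply from that job's
    own history, using a z-score as a stand-in for richer models."""
    flagged = {}
    for job, runtime in latest.items():
        past = history.get(job, [])
        if len(past) < 2:
            continue  # not enough history to judge
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue  # constant history, z-score undefined
        z = (runtime - mu) / sigma
        if abs(z) > threshold:
            flagged[job] = round(z, 1)
    return flagged
```

A job that usually finishes in about a minute but suddenly takes two gets flagged; a job running at its normal pace does not, no matter how long that pace is.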

This shift is critical because the IT operations skills gap is not a future problem — it’s already here. Enterprises can’t hire their way out of complexity. AI-assisted automation offers a more sustainable path by capturing expertise and making it available when and where it’s needed.

The result is better human involvement, not less. People remain accountable for strategy and outcomes, while automation absorbs the noise that slows teams down.

Watch out: AI that only accelerates development but ignores run-time operations shifts effort, not outcomes. The biggest gains come when AI supports teams across the full automation lifecycle.

➔ 40% of automation teams don’t feel ready to adopt AI. Read the latest research.

5. Resilience will matter more than efficiency

For years, automation initiatives were justified primarily through efficiency metrics: jobs automated, tickets reduced, hours saved. Those numbers were useful, until they stopped telling the full story.

By the end of 2026, enterprise leaders will care far less about how much automation is running and far more about what it protects and enables. They’ll ask:

  • Did automation prevent a disruption? 
  • Did it help the business absorb change without slowing down? 
  • Did it keep critical commitments on track when systems, data or partners behaved unpredictably?

As enterprises become more interconnected and event-driven, resilience becomes the real measure of process maturity. Automating individual tasks is no longer enough. What matters is orchestration: the ability to manage end-to-end processes across business domains and take corrective action when conditions change.

AI will accelerate this transition by helping automation prioritize intent over rigid execution. As agentic approaches mature, automation will increasingly be able to evaluate context, choose appropriate paths and coordinate actions across systems when conditions change midstream.

Watch out: Efficiency gains from isolated automation fade quickly. Resilience comes from orchestrating processes across domains, not optimizing tasks in isolation.

What this means for 2026 and beyond

The next phase of AI and automation will not be defined by novelty, but by trust, discipline and outcomes.

It will be essential to ground intelligence in strong operational foundations, to invest in orchestration and governance and to use AI to empower people. The focus should be on orchestrating work rather than automating individual tasks. As orchestration platforms take on more responsibility, enterprises can drive transformation while lowering their total cost of ownership (TCO) by reducing tool sprawl, operational friction and rework.

Automation is no longer just about doing more with less. It’s about doing what matters most, even when conditions are far from ideal.

Want help laying the foundation for agentic orchestration in 2026? Explore Redwood’s AI hub.