Finance and accounting teams believe intercompany is under control because their systems can match balances across entities. However, matching is only the signal. It’s not the movement. The real work begins after the discrepancy is flagged, and that’s exactly where the process breaks down. Intercompany transactions stall between detection and execution, which leaves journal entries unposted, ownership unclear and the close process waiting on decisions that never happen fast enough. If your intercompany process looks complete on the surface but still delays your close or forces last-minute manual adjustments, it’s worth asking a harder question: What actually happens after the match?
Flagging is not the finish line
Picture your intercompany accounting process as a rail network. Each intercompany transaction is a train moving between legal entities, like company A to company B or subsidiary A to subsidiary B. This train carries intercompany balances, cost allocations and expense allocation entries across your corporate group.
Matching is the signal light. It tells you whether something is aligned or misaligned. But it doesn’t actually move the train.
Most manual intercompany processes stop at that signal. Accounting software flags discrepancies between intercompany receivables and intercompany payables. Dashboards show that balances are “matched.” Accounting and finance teams see green lights and assume progress is being made when, in reality, nothing has moved.
The intercompany journal entry, which includes the debit and credit that updates general ledger accounts, adjusts liabilities and reflects the correct financial position, still hasn’t been created, approved or posted in SAP.
Take a manufacturing group operating across multiple legal entities. Subsidiary A records intercompany sales to subsidiary B. Company B records the payable, but timing differences and exchange rates create a mismatch. The system flags it. The match appears “resolved” on the dashboard. But over the next three days, finance teams debate ownership. Who posts the intercompany journal entry? Which chart of accounts should be used? Should the adjustment sit in the base currency of the parent company or that of the receiving subsidiary?
The signal turned green, but the train never left the station.
Blame the manual hand-off
This is where intercompany management breaks down. Once a discrepancy is flagged, resolution depends on people. And people introduce friction.
When finance and accounting teams are stuck doing manual tasks, they end up looping through discrepancies instead of resolving them because they have to:
Debate timing differences and ownership between company A and company B
Hesitate on complex scenarios like intercompany loans, fixed assets transfers and internal transactions
Route approvals through disconnected workflows instead of in-system execution
Rely on email and spreadsheets to track decisions that never return to SAP
Fragment the audit trail, which makes it harder to trace what actually happened
Meanwhile, the intercompany journal entry sits in limbo.
Accounts payable and accounts receivable teams wait on each other. Intercompany payable balances don’t align with intercompany receivable balances. Allocation decisions stall. No one owns the final step: posting the entry that resolves the issue.
In the manufacturing example, the delay compounds. The parent company can’t merge the data. Intercompany elimination is postponed. The close process stretches. What started as a minor mismatch in intercompany transactions becomes a missed group close deadline. The train is still sitting at the signal because no one is driving it forward.
Matching without posting is a false positive
A matched status without a posted intercompany journal entry is a false positive, not a resolution. Dashboards show aligned intercompany balances, but underneath:
The accounting records haven’t changed
The general ledger still reflects outdated positions
Financial reporting pulls from incomplete data
Accurate financial reporting becomes a matter of timing rather than truth
This is where risk builds quietly.
Without orchestration, visibility becomes misleading. Finance teams believe intercompany processes are complete, while intercompany journal entries remain unposted. During the audit, these gaps surface as discrepancies between reported numbers and actual ledger activity. Adjustments are made late. The audit trail shows delays. Questions follow.
Finance Automation by Redwood approaches this differently. It connects intercompany matching directly to execution. Once a match or mismatch is detected, the platform applies rules to generate the intercompany journal entry, route it through approvals within the system and post it natively in SAP.
This includes both sides of the transaction. The intercompany payable in company B and the intercompany receivable in company A are updated together. Debit and credit entries are aligned. General ledger accounts reflect the same reality across entities.
The manufacturing organization would’ve benefited from this automated process. Finance Automation would’ve generated, routed and posted both sides once rules, ownership and approvals were satisfied. The train wouldn’t have waited for manual coordination. It would have moved.
Put the train back on track
Intercompany accounting doesn’t fail at detection. It fails at execution. Orchestration is the reliable way to move from matching to resolution because it connects every step — detection, ownership, approval and posting — into one automated flow.
With Finance Automation, intercompany processes no longer rely on manual hand-offs. The system detects mismatches in real time across subledgers and general ledger accounts. It assigns ownership based on predefined rules tied to legal entities, transaction types or chart of accounts structures.
From there, workflows operate inside the platform, not outside it. Approvals happen in context. Audit trails are complete. Once approved, the intercompany journal entry is posted directly into SAP, which updates both sides of the transaction.
This applies across complex scenarios: intercompany loans, expense allocation, cost allocations, sales of goods and fixed assets transfers. Whether dealing with base currency adjustments, exchange rates or arm’s-length requirements under International Financial Reporting Standards (IFRS), the process remains consistent.
When your intercompany solution orchestrates the process, the train doesn’t stop at the signal. It continues through to its destination.
Finish what matching starts
Matching is only a signal; without execution, the discrepancy persists. Unresolved intercompany balances delay consolidation. They distort currency translation. They create double-counting risk in financial statements. They trigger internal disputes between business units and receiving subsidiaries. And they weaken decision-making because leadership is working with numbers that are still shifting.
Another example of intercompany journal entries makes this clear. If company A records a debit to intercompany receivable and company B fails to post the corresponding credit to intercompany payable, the imbalance carries forward. That single gap can cascade across reporting cycles and affect the balance sheet, financial position and consolidation outcomes.
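The mirroring discipline described above is simple to illustrate. The sketch below is illustrative Python only — not Redwood’s implementation — and the entity and account names are hypothetical. It generates both sides of an intercompany transaction as a single unit and verifies that debits equal credits per entity, so a one-sided posting is caught immediately:

```python
from dataclasses import dataclass

@dataclass
class JournalLine:
    entity: str       # legal entity posting the line
    account: str      # general ledger account
    debit: float = 0.0
    credit: float = 0.0

def mirrored_intercompany_pair(seller, buyer, amount):
    """Generate both sides of an intercompany transaction together,
    so one side can never be created without the other."""
    return [
        # Company A books the receivable and the revenue
        JournalLine(seller, "Intercompany Receivable", debit=amount),
        JournalLine(seller, "Intercompany Sales", credit=amount),
        # Company B books the matching expense and payable
        JournalLine(buyer, "Intercompany Expense", debit=amount),
        JournalLine(buyer, "Intercompany Payable", credit=amount),
    ]

def is_balanced(lines):
    """Debits must equal credits within every entity's book."""
    for entity in {l.entity for l in lines}:
        debits = sum(l.debit for l in lines if l.entity == entity)
        credits = sum(l.credit for l in lines if l.entity == entity)
        if abs(debits - credits) > 1e-9:
            return False
    return True

pair = mirrored_intercompany_pair("Company A", "Company B", 10_000.00)
assert is_balanced(pair)
# Drop Company B's credit line — the imbalance is detected at once:
assert not is_balanced(pair[:-1])
```

The point of the sketch is the design choice, not the code: when the pair is generated as one unit and validated before posting, the “Company B never posts its credit” failure mode can’t silently carry forward.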
Finance Automation ensures this doesn’t happen. Through rule-driven automation, it generates mirrored intercompany journal entry pairs, enforces approvals and posts across both entities’ books simultaneously. Cross-book posting orchestration keeps accounting records aligned and provides a complete audit trail from detection to resolution.
What begins as a small mismatch doesn’t grow into a bottleneck because it’s resolved at the source. Without this level of execution, intercompany accounting remains reactive. With it, the process becomes controlled, predictable and aligned with the demands of modern financial reporting.
Finance Automation is a platform designed to resolve this cycle end to end. The train doesn’t stall on underlying manual processes, because there are none left waiting for your team to complete.
J.D. Power’s 2025 United States Electric Utility Residential Customer Satisfaction Study recorded the lowest satisfaction score ever measured — 499 out of 1,000 — with billing reliability and outage communication identified as the primary levers available to recover it. The same dynamic is playing out across markets. In the United Kingdom, energy providers paid out approximately £20 million in compensation for billing mistakes over five years, with complaints up 141% over that period and billing disputes accounting for 58% of all Energy Ombudsman cases in 2024.
Those aren’t customer experience problems in isolation. They’re symptoms of whether back-office processes run reliably, in sequence and at scale. For most large utilities, the orchestration layer responsible for that is still catching up to the grid it’s supposed to support.
Utilities have spent the better part of a decade winning the case for grid modernization, deploying Advanced Metering Infrastructure (AMI), integrating renewable energy and managing Distributed Energy Resources (DER). Grid edge intelligence is generating more operational data than most organizations know what to do with. What hasn’t moved at the same pace is the software layer connecting all of that to the business processes that serve customers and satisfy regulators.
In most large utility environments, these application and data pipeline workflows are still being automated by legacy schedulers and scripts designed before hybrid cloud existed, before AMI was standard and before a single weather event could simultaneously spike customer calls, outage jobs and data ingestion volumes across a dozen interconnected systems.
The seam running through every utility’s automation environment
Utilities face a version of this automation fragmentation problem with an extra dimension: the systems that run the grid and the systems that run the business weren’t built to work together — and for most of their history, they didn’t need to.
Operational technology (OT) and information technology (IT) evolved in parallel in most large utilities, governed by different teams and built to different standards. AMI put metering data in both worlds simultaneously. DER integration put real-time grid signals into billing and settlement workflows. Outage management systems started feeding customer communication platforms. What most utilities built to bridge that divide was a collection of schedulers, scripts and point-to-point integrations, each solving a specific problem without anyone owning the full picture.
As a result, a meter-to-cash workflow can touch a dozen systems with their own scheduling layers and their own definitions of “done.” When one step slips, the cascade is difficult to see and harder to interrupt because no single platform spans the full picture. For years, keeping those worlds loosely coupled was an acceptable tradeoff. It isn’t anymore — not when time-of-use tariffs, demand response activations, electric vehicle charging programs and AI-driven forecasting all require them to move in sync.
Stress reveals weakness
In the utility sector, spikes aren’t rare, and they’re considerably less forgiving of fragmented automation than normal conditions. During a major weather event, for instance, utility operations teams are running outage management workflows, restoration sequencing and customer communication jobs simultaneously, while billing cycles, AMI data ingestion and regulatory reporting continue in the background. Each process depends on others completing in the right order, at the right time, against dependencies that no single monitoring console can see across.
The response ends up being human coordination: multiple dashboards, manual judgment calls, compliance trails that go dark and a customer experience that degrades in ways that are difficult to trace after the fact. Regulators and customers don’t distinguish between a transmission failure and a process failure. When automation silos prevent outage workflows and billing systems from executing in sync, the damage is the same.
The operational case for addressing this is clear. The organizational dynamics around doing so are considerably less straightforward. Legacy scheduler renewal contracts route through procurement, extensions get signed and the harder conversation about whether the solution still fits the strategy gets deferred. Another cycle of expensive upgrades with painful agent patching gets funded, and the war-room model gets staffed again during the next storm season.
AI ambition and the foundation it needs
The IFS Global Utility Survey 2024 found that 82% of utility executives consider AI essential to their digital transformation strategy, yet only 20% have completed that journey. That gap isn’t primarily a technology problem, as every utility surveyed had initiated transformation. The ones stalled are the ones that haven’t addressed the execution layer underneath their AI ambitions.
Predictive outage management, demand forecasting, dynamic pricing — none of these perform against fragmented, inconsistently sequenced data workflows, and no amount of hiring fixes an orchestration layer that can’t support the models sitting on top of it. Closing that gap requires consolidating what sits underneath the models, and the utilities moving fastest on AI are the ones that addressed the orchestration layer first.
Consolidating onto a strategic orchestration platform
That’s the shift utility companies are making with RunMyJobs by Redwood. They’re replacing legacy tools, open-source schedulers and bridging scripts with a single orchestration and execution control plane that governs end-to-end workflows across the full span of utility operations.
Legacy, self-hosted workload automation (WLA) schedulers require agents installed across every server and environment, each tied to OS updates, security patches and version dependencies. In a hybrid OT/IT utility environment, that maintenance load is constant, consuming engineering time and inflating the technical debt and operational costs that boards now expect to see redirected toward AI initiatives and digital transformation instead.
RunMyJobs SaaS is cloud-native and agentless. Updates arrive as part of the SaaS service, so you maintain your cybersecurity posture without dedicated patching cycles. The engineering capacity previously absorbed by platform maintenance shifts toward grid digitalization and the AI programs already on the roadmap.
For utility operations teams, that translates directly:
Billing cycles, regulatory submissions and outage workflows no longer depend on manual monitoring to catch failures — built-in dependency management and real-time triggering respond automatically when a step doesn’t complete on time
Audit-readiness is built into execution by default, with every step logged and traceable, so compliance reporting doesn’t require post-hoc reconstruction
Grid resilience improves measurably, as 99.95% uptime means mission-critical outage and restoration workflows execute under peak load without adding infrastructure or expanding on-call staffing
SAP and non-SAP systems orchestrate from a single platform, eliminating the parallel scheduling layers and maintenance-heavy custom scripts most utilities are currently running
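The dependency-driven triggering in the first bullet can be sketched in a few lines. This is illustrative Python only — the job names are hypothetical and this is not how RunMyJobs is implemented internally — but it shows the core idea: each job starts the moment everything it depends on has completed, with a stall surfaced as an error instead of a silent wait:

```python
def run_chain(jobs, deps, execute):
    """Run each job as soon as all of its dependencies have completed.
    `deps` maps a job to the set of jobs it waits on."""
    done, pending = set(), set(jobs)
    while pending:
        # A job is ready when every dependency is already done
        ready = [j for j in pending if deps.get(j, set()) <= done]
        if not ready:
            # Unsatisfiable dependencies: fail loudly, don't hang
            raise RuntimeError(f"Stalled: unmet dependencies for {pending}")
        for job in ready:
            execute(job)   # a real platform would run this async, with retries and alerts
            done.add(job)
            pending.remove(job)
    return done

order = []
run_chain(
    jobs={"ami_ingest", "billing_cycle", "regulatory_report"},
    deps={"billing_cycle": {"ami_ingest"},
          "regulatory_report": {"billing_cycle"}},
    execute=order.append,
)
# The chain executes in dependency order: ingest, then billing, then reporting
assert order == ["ami_ingest", "billing_cycle", "regulatory_report"]
```

In a production orchestration platform the same idea is extended with time windows, event triggers and SLA deadlines, but the contract is the same: downstream steps fire on completion signals, not on someone watching a console.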
Consolidation also changes the economics. Restrictive, self-hosted legacy licensing, maintenance fees and unpredictable cost increases are replaced by transparent, predictable SaaS pricing — and total cost of ownership (TCO) drops meaningfully when agent patching, infrastructure management and upgrade projects that delivered no new capability are no longer on the bill. That spend gets redirected toward AI initiatives and digital transformation programs that boards are already asking about. For IT leaders, that’s legacy costs converting into investment fuel. For business leaders, it’s the operational headroom to bring new tariffs, demand response programs and customer-facing digital services to market faster.
See it in practice
American Water consolidated automation and managed file transfer tools onto RunMyJobs for its simplicity, SaaS flexibility and expert migration support. Daniel Sivar, Technologist for Basis and Security, describes how a phased, business-aligned approach made the transition low-risk and the outcome measurable. Watch the story →
Renewal as the decision point
Most large utilities are closer to this decision than they realize. Legacy WLA contracts come up for renewal and get routed through procurement and extended for another cycle without the strategic question ever being asked.
If a renewal is on the horizon for your organization, the question worth asking isn’t whether your current environment is stable enough to extend, but whether “stable enough” is still sufficient for what the business is being asked to deliver.
According to a recent Redwood Software customer survey, 68% of RunMyJobs by Redwood users work with AI tools multiple times per week. They ask ChatGPT to troubleshoot errors, use Copilot to draft scripts and paste job logs into Claude and ask what went wrong.
None of those AI models can reach into RunMyJobs and take action. They answer questions about your workflows but can’t run them, check on them or build new ones. Your AI assistants and your automation platform operate in separate worlds, and that gap is exactly what makes it hard to get real value out of either.
Model Context Protocol (MCP) support in RunMyJobs changes that.
Why every major AI platform adopted the same protocol
MCP is an open standard that gives AI systems a shared way to connect with external tools. Think of it as the USB port of agentic AI. MCP lets you plug any MCP-compatible AI agent into any MCP server without custom integrations. Drop an MCP server in front of a product, and AI systems can immediately interact with it, reading context, calling functions and taking action.
Anthropic released MCP in late 2024 and donated it to the Agentic AI Foundation under the Linux Foundation in December 2025. Since then, OpenAI, Google DeepMind, Microsoft, Salesforce and ServiceNow have adopted it. The protocol has moved past experimentation, as its interoperability across AI platforms is proven in production today.
For RunMyJobs users, MCP means something specific: the business logic, connectors and workflows you’ve spent years building are now accessible to AI agents through a standardized protocol that every major AI platform already supports. Any MCP-compatible AI tool or large language model can now trigger your workflows, check job status and build new workflows through the RunMyJobs MCP server with no custom API work and no rearchitecting. The workflows and connectors you’ve built over years of production use become tools that AI agents call on demand.
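Under the hood, MCP frames every tool invocation as a JSON-RPC 2.0 `tools/call` request, which is what makes the “any agent, any server” claim work. The sketch below shows that wire shape; the tool name `submit_workflow` and its arguments are hypothetical examples, not RunMyJobs’ actual MCP tool schema:

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request with method "tools/call".
# Tool name and arguments below are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "submit_workflow",              # hypothetical tool name
        "arguments": {
            "workflow": "GL_Account_Extraction", # hypothetical workflow
            "company_code": "1000",
        },
    },
}

# Serialize for the transport (stdio or HTTP, per the MCP spec)
wire = json.dumps(request)
decoded = json.loads(wire)
assert decoded["method"] == "tools/call"
assert decoded["params"]["name"] == "submit_workflow"
```

Because every MCP-compatible agent emits this same request shape, the server side only has to be built once: expose a tool, and Claude, Copilot or any other MCP client can call it without a bespoke integration.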
This is how your backend processes become agentic.
What can AI tools do through RunMyJobs’ MCP?
Trigger workflows and jobs: Any MCP-compatible agent can kick off your existing RunMyJobs workflows to make your current processes agent-ready without migration.
Check job status: AI tools can query whether critical jobs are running, finished or failed and surface that information inside whatever platform your team uses.
Manage workflows: Coding agents can validate and deploy RunMyJobs workflows through MCP, cutting development time.
These capabilities work with Claude, Microsoft Copilot Studio, ServiceNow Agent Builder, n8n, ChatGPT and Salesforce Agentforce. Authentication, access controls and permissions all flow through RunMyJobs — agents can only do what their associated role is allowed to do.
Here’s what that looks like in three real-world use cases.
SAP’s Joule: Submitting and monitoring jobs from your ERP
Your SAP basis administrator needs to trigger a nightly data extraction early because a report deadline moved up. Instead of switching to the RunMyJobs console, logging in, finding the workflow and submitting it by hand, they stay in SAP and tell Joule:
“Submit the GL account extraction workflow for company code 1000.”
Joule calls RunMyJobs through MCP. The workflow starts. Joule confirms it.
An hour later, the same admin asks Joule about the financial data load. Joule checks RunMyJobs and finds that the extraction finished, but the transformation step is still running. Estimated completion: 45 minutes.
No dedicated SAP integration project made this possible. MCP standardized the connection. RunMyJobs partitions and roles still control who can trigger what, so your governance model is intact, but your admin gets a faster, context-aware path to the same workflow they’ve run hundreds of times before.
This is what it means to agentify your existing SAP processes: The workflows don’t move, and the business logic stays where it is. Joule just gets a direct line to it.
ServiceNow: Remediating failed batch jobs without the 2 AM phone call
Your nightly accounts receivable batch job fails at 2:14 AM. Today, that triggers a ServiceNow incident. An on-call operator picks it up, logs into RunMyJobs, reads the error log, figures out the cause, restarts the job with corrected parameters and closes the ticket. That process takes 30 to 90 minutes, depending on who’s on call and how fast they diagnose the issue.
With ServiceNow Agent Builder and MCP, the ServiceNow agent handles most of that loop. It detects the failed job alert, queries RunMyJobs through MCP for real-time error details and job history and matches the failure pattern against known remediation steps. If the fix is a known restart with corrected parameters — wrong file path, stale credentials, a transient connection timeout — the agent resubmits the job in RunMyJobs and updates the ServiceNow incident with what it found and what it did.
If the failure falls outside the agent’s confidence threshold, it escalates to the on-call operator with a pre-built diagnostic summary: the error, job chain context and last three successful runs for comparison. Your operator starts the investigation 15 minutes ahead of where they’d be without it.
RunMyJobs still controls execution. Partitions and roles still govern who — or what — can restart which jobs. ServiceNow still owns the incident lifecycle. MCP connects the two external systems without custom middleware in between.
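The decision logic the agent applies can be sketched simply. The error patterns, action names and summary format below are hypothetical — not ServiceNow’s or Redwood’s code — but they capture the pattern: match known failures to automatic fixes, and escalate everything else with context attached:

```python
# Hypothetical mapping of known failure patterns to remediation actions
KNOWN_FIXES = {
    "file not found": "resubmit_with_corrected_path",
    "credentials expired": "resubmit_after_credential_refresh",
    "connection timed out": "resubmit",
}

def remediate(error_log: str) -> dict:
    """Return an automatic action for a recognized failure pattern,
    or an escalation with a diagnostic summary otherwise."""
    text = error_log.lower()
    for pattern, action in KNOWN_FIXES.items():
        if pattern in text:
            return {"action": action, "escalate": False}
    # Unknown failure: hand off to a human with pre-built context
    return {
        "action": "escalate",
        "escalate": True,
        "summary": error_log[:200],  # first slice of the log for the operator
    }

assert remediate("ERROR: connection timed out after 30s")["action"] == "resubmit"
assert remediate("Segmentation fault in step 4")["escalate"] is True
```

The confidence threshold mentioned above corresponds to how strict that pattern match is: only failures the agent can classify with certainty get auto-restarted, and everything else arrives at the operator already summarized.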
Microsoft Copilot Studio: Finance teams running month-end close
Month-end close involves dozens of batch processes across ERP, consolidation and reporting systems. They run in strict sequence, often at night, and someone watches a console to catch failures.
Your finance controller builds a Copilot agent in Microsoft Copilot Studio. The agent submits the intercompany elimination workflow through MCP to RunMyJobs. When that job finishes, the agent triggers the consolidation jobs. If reconciliation fails, the Copilot agent sends the controller a Microsoft Teams message — a plain-language summary of the failure, plus the remediation workflow they can approve with one click.
The controller doesn’t need RunMyJobs training. They tell the Copilot agent what outcome they need, and RunMyJobs handles execution. Your finance team stays in Teams and focuses on the close, not the tooling.
What this means for your RunMyJobs investment
The good news for teams feeling pressure to adopt agentic AI: you don’t have to rewrite your enterprise workflows. You don’t have to move your batch processing into a new tool. MCP exposes what you’ve already built to the agents your developers want to build, through a standardized protocol they already know.
The automation fabric you’ve built in RunMyJobs is your real AI asset; MCP is how you unlock it.
The RunMyJobs governance model — partitions, roles, access controls — still applies. Scalable agentic orchestration doesn’t require trading away the enterprise-grade controls you rely on. Your AI-powered workflows run under the same oversight as everything else in your automation environment. But your teams get a new way to interact with automation that fits inside the AI tools they’ve already adopted.
Redwood is building toward a model where AI agents and workload automation run side by side, supporting open standards such as agent-to-agent (A2A) and MCP to unlock existing business logic and make it accessible in a governed and observable platform operating at enterprise scale. MCP support in RunMyJobs is where that starts, and the foundation is the automation you’ve already built.
See how RunMyJobs works with MCP to unlock your investment in enterprise applications and expose them to agents. Get a demo today.
On May 29, 2020, SAP introduced SAP Cloud ALM as its cloud-based successor for application lifecycle management, replacing SAP Solution Manager and SAP Focused Run with a SaaS model built for SAP S/4HANA Cloud (now SAP Cloud ERP), SAP Business Technology Platform (BTP) and the RISE with SAP roadmap.
Nearly six years later, that roadmap is well underway. Mainstream maintenance for SAP Solution Manager ends December 31, 2027. Extended support runs through 2030. SAP Cloud ALM is already included in SAP Enterprise Support and most cloud subscriptions, with no additional license required. If you haven’t started your transition planning yet, now is the right time.
Most teams I talk to have accepted the direction. The more interesting question is how to get the most value out of SAP Cloud ALM once you’re there — and how observability fits into that.
What SAP Cloud ALM delivers and how automation extends it
SAP Cloud ALM handles project management, health monitoring and lifecycle visibility across SAP-centric environments. For teams moving away from on-premises systems, it’s a meaningful step forward.
The opportunity grows when your business processes span multiple systems. A typical end-to-end flow might move through SAP Cloud ERP, SAP SuccessFactors, SAP BTP services, external APIs and non-SAP platforms. SAP Cloud ALM provides strong visibility into the SAP application layer. Extending that same transparency to the automated workloads running across your broader landscape is a natural next step that makes your SAP Cloud ALM investment generate an even greater return.
In my experience, the individual step that fails is almost never the step that caused the problem. It might start in a data integration, surface in the ERP and affect downstream reporting by the time anyone notices. Connecting automation execution data to SAP Cloud ALM makes that sequence visible, so your operations teams can trace a timeline, not reconstruct one.
Observability from the process layer, not just the system layer
What changes about your observability needs when you move from SAP Solution Manager to SAP Cloud ALM is worth thinking through carefully. SAP Solution Manager was built around on-premises system monitoring, whereas SAP Cloud ALM is built for a world where business processes run across cloud services, SAP BTP extensions and non-SAP systems simultaneously.
The scope of what needs to be visible has grown considerably, and most organizations are already feeling that pressure.
According to EMA’s 2025 observability research, 87% of organizations are running multiple observability tools and actively looking to consolidate, yet fewer than half describe their current visibility as fully successful.
An SAP cloud transition is the right moment to get ahead of that, rather than add to it.
RunMyJobs by Redwood approaches observability from the automation layer. Instead of checking whether individual systems are healthy, it lets you track whether the business process completed as expected — start to finish, across every system involved. And when a process is at risk of missing an SLA, AI-driven predictive monitoring flags it before the miss happens, so teams can act rather than react.
Redwood Insights provides dashboards tied to workflows, SLAs and execution data. Operations teams, business stakeholders and SAP teams can each see what matters to them without waiting for someone to translate technical signals into business terms.
RunMyJobs also connects with platforms like Dynatrace, Splunk, New Relic and AppDynamics, enabling full-stack telemetry correlation and accelerating root-cause analysis. When something breaks, you trace the sequence rather than guess at it, and resolution times drop because the investigation starts in the right place.
Connecting RunMyJobs to SAP Cloud ALM
RunMyJobs has integrated with SAP Solution Manager for years. The new SAP Cloud ALM connector extends that relationship into SAP’s current operational standard to synchronize job definitions, workflow status and execution data directly into SAP Cloud ALM on an ongoing basis. SAP Cloud ALM becomes the command center, while RunMyJobs provides the orchestration and execution layer beneath it.
This combination helps operations teams detect SLA risk before it becomes a business impact, trace root causes faster by correlating automation telemetry with application and infrastructure performance and maintain long-term execution records that hold up for audits and compliance reviews. Self-service dashboards mean business stakeholders can answer their own questions without routing every request through IT.
Moving to SAP Cloud ALM changes day-to-day operations in ways that open up real opportunity:
You onboard new use cases faster
Cloud services move from supporting infrastructure to the systems your operations depend on daily
More systems contribute to each business process
And you’re working with a platform SAP is actively investing in and expanding. The more those systems are interconnected, the more valuable connected observability becomes.
When you can follow a business process across systems from within SAP Cloud ALM, issues stay contained and time-to-market stays predictable. When automation execution data is part of that picture, the operational view becomes more complete — and the value of both SAP Cloud ALM and RunMyJobs compounds.
That’s the case for acting before the 2027 deadline, not just meeting it.
Across financial services, the onboarding moment is where customer relationships are won or lost. Imagine a prospective customer: a mid-sized manufacturer with $200M in annual revenue who has decided to move a high-value financial relationship to your institution. They’ve engaged with your team, liked what they heard and submitted their application. Then they wait.
Three days pass. Then five. A compliance step stalls somewhere between two systems that don’t talk to each other cleanly. Nobody catches it until the prospect calls to ask what’s happening. By day ten, they’ve quietly restarted conversations with a competitor. By day fifteen, they’re gone.
No outage was declared. No incident ticket was filed. The workflow technically completed. But the relationship — and the revenue — evaporated anyway.
That wouldn’t be unusual. According to Fenergo’s 2024 KYC and Onboarding Trends report, 67% of banks globally lost clients due to slow or inefficient onboarding in 2024. The gap between what institutions believe is happening in their onboarding workflows and what new customers actually experience is widening, and the financial consequences are measurable.
This scenario plays out more often than most executive teams realize in banking, insurance, wealth management and other sectors of financial services, because onboarding is almost universally treated as a customer experience problem:
✅ Reduce friction
✅ Improve the digital journey
✅ Shorten time to first transaction
Those are worthy goals, but they address the surface without touching the foundation — where the real financial exposure lives. In onboarding, regulatory, operational and reputational risk converge into a single workflow and are compressed into a narrow, high-stakes window where any failure is costly.
Revenue exposure hiding in plain sight
Consider what’s actually at stake in a delayed onboarding workflow. Every day a high-value customer isn’t activated is a day of fee revenue, deposit float and relationship potential that doesn’t materialize. That exposure is measurable, and it compounds.
McKinsey research on corporate client onboarding found that the average process can take up to 100 days, with Know Your Customer (KYC) due diligence and account setup alone consuming more than 40% of that time. For the client on the other end of that process, the experience doesn’t feel like due diligence. It feels like indifference.
The numbers may vary by institution and segment, but the pattern is consistent: onboarding delay is a direct revenue drag. And the financial impact is only part of the story, because compliance exposure increases alongside it. Unlike churn, which is visible and tracked, delayed activation often goes unmeasured, absorbed into the operational budget rather than surfaced as a financial risk.
There’s also an abandonment problem that rarely makes it onto executive dashboards. When a consumer or business customer encounters significant friction during onboarding, they don’t always raise a complaint. They disengage. And when they disengage mid-process in a digital channel, they often don’t return. Capgemini’s World Retail Banking Report 2025 found that 47% of prospective customers abandon card and account applications midway through the onboarding process — rising to 51% in the United States — and that only 3% of banks consider their own onboarding experience to be seamless.
So, you’re spending acquisition costs to reach customers you never actually convert.
Where processes fail: Bottlenecks and manual data entry
The reason onboarding is structurally different from other customer-facing workflows is that it forces coordination across systems and functions that don’t typically operate together in real time.
KYC verification, Anti-Money Laundering (AML) screening, credit assessment, account provisioning, document management, regulatory reporting and notification services all have to execute in sequence, often across a hybrid mix of on-premises systems, SaaS platforms and third-party data providers. Each handoff is a potential failure point. And each dependency is a place where a timing issue, data mismatch or system timeout can stall the entire chain.
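To make that failure mode concrete, here's a minimal Python sketch of a sequential chain. The step names and structure are hypothetical, not any institution's or vendor's implementation; the point is that one stalled handoff leaves the case stranded, with everything downstream never executing.

```python
# Illustrative sketch, not any vendor's API: an onboarding case moves
# through its checks strictly in sequence, so a single stalled handoff
# blocks everything downstream until someone notices.

class StepTimeout(Exception):
    """A step exceeded its processing window (e.g., a third-party timeout)."""

def run_pipeline(case, steps):
    completed = []
    for name, step in steps:
        try:
            step(case)                      # handoff to the next system
            completed.append(name)
        except StepTimeout:
            # The chain stalls here; later steps never run, and without
            # instrumentation, nothing flags the stuck case.
            return {"case": case["id"], "completed": completed, "stalled_at": name}
    return {"case": case["id"], "completed": completed, "stalled_at": None}

# Hypothetical step implementations, for illustration only
def kyc_check(case):         case["kyc"] = "verified"
def aml_screening(case):     raise StepTimeout("screening provider timed out")
def provision_account(case): case["account"] = "ACCT-001"

result = run_pipeline(
    {"id": "C-1001"},
    [("kyc", kyc_check), ("aml", aml_screening), ("provision", provision_account)],
)
print(result)   # provisioning never runs; the case quietly sits in limbo
```

Nothing in this chain errors out loudly: the pipeline simply stops, which is exactly why a stalled case can sit unnoticed until the customer calls.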
Fenergo’s research puts a number on the cost of that complexity: annual KYC review costs can reach up to $175 million for a single commercial bank, with 86% of banks citing poor data management and siloed processes as the primary driver of onboarding inefficiency. Meanwhile, Capgemini found that 75% of banks report consistent delays in verifying customer identity and that 61% feel overwhelmed by application volume, specifically because of a lack of automation.
In most institutions, the workflow logic coordinating these steps was designed for a world of longer processing windows and more predictable cycles. That logic still works — until it doesn’t. When it breaks, the consequences don’t stay contained.
A compliance step that fails silently can create a regulatory exposure
A provisioning delay that cascades can affect multiple customers at once
An audit request that requires reconstructing a workflow trail becomes an investigation instead of a routine review
This is the structural problem that customer relationship and experience improvements don’t address. You can redesign the front-end journey and still have an execution layer underneath it that’s fragile, opaque and poorly instrumented.
The blind spot in onboarding risk
Think about your organization and ask yourself: If a critical onboarding workflow failed right now, could you trace it end to end immediately, without assembling a team to reconstruct what happened across four systems and a spreadsheet?
For most institutions, the honest answer is no. And that gap matters increasingly to regulators who expect demonstrable control over KYC and AML processes, not just evidence that those processes exist.
Operational resilience requirements are also expanding. Regulators are asking more than just whether institutions can recover from disruptions. They’re asking whether institutions can demonstrate, in real time, that their compliance workflows are executing as designed. Onboarding sits directly in the crosshairs.
Yet onboarding orchestration rarely receives the same executive visibility as payments infrastructure or trading systems. It doesn’t get reviewed at the board level, nor does it appear in technology risk registers with the same prominence. It’s treated as an operational concern delegated well below the executive committee, even though the consequences of failure are board-level in nature.
Legacy systems, silos and technical debt
The existing systems coordinating onboarding workflows in most large institutions were built when the process was slower, more linear and more forgiving of delays.
That tolerance is gone. Customer expectations have shifted to decisions in hours, not days. Regulators expect documented control. And the hybrid cloud environments most institutions now operate in — where a KYC check runs in one cloud, document verification in another and account provisioning still on-premises — introduce dependencies that legacy workload scheduling tools weren’t designed to manage. This is why onboarding has become a modernization problem, not just an operational one.
The data reflects how far behind most institutions actually are: Fenergo found that only 4% of banks had fully automated their KYC workflows as of 2024. The remaining 96% are absorbing that coordination cost manually, case by case and workaround by workaround.
Legacy systems can be adapted, patched and extended. Teams do it every day. But the technical debt accumulates, and every adaptation makes the next one harder. Each new integration becomes a custom workaround. Each compliance requirement becomes a manual checkpoint. The operational overhead grows while the underlying fragility stays invisible.
A modern, scalable onboarding solution
The path forward isn’t a front-to-back replacement of onboarding infrastructure. It’s establishing an orchestration layer that can coordinate the existing ecosystem — legacy and modern, on-premises and cloud — while providing the visibility, control and scalability that executive governance requires.
That means:
Event-driven execution that responds to real signals — a document verified, a KYC check completed, a risk score returned — rather than clock-based scheduling that assumes everything ran on time
End-to-end workflow visibility so that any step in the onboarding chain can be traced, audited and explained without manual reconstruction
Dependency-aware orchestration that isolates failures rather than allowing a single stalled step to cascade across an entire batch of onboarding cases
Hybrid cloud connectivity that works across the full environment without requiring every system to be rearchitected first
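The first and third of those capabilities can be sketched in a few lines. The dispatcher below is an assumed design for illustration only, not RunMyJobs' actual API: steps fire when the events they wait on arrive, and a failure is recorded against the single affected case instead of stalling the whole batch.

```python
# Assumed design for illustration, not a real product API: an event-driven
# dispatcher where work starts on real signals, and a failure in one case
# is isolated rather than cascading across the batch.
from collections import defaultdict

class Orchestrator:
    def __init__(self):
        self.waiting = defaultdict(list)    # event name -> actions waiting on it

    def on(self, event, action):
        self.waiting[event].append(action)

    def emit(self, event, case):
        for action in self.waiting[event]:
            try:
                action(case)
            except Exception as exc:
                # Isolate the failure to this case; other cases keep moving
                case["errors"] = case.get("errors", []) + [str(exc)]

orc = Orchestrator()
# A verified document triggers the KYC step -- a real signal, not a clock tick
orc.on("document_verified", lambda c: orc.emit("kyc_requested", c))
orc.on("kyc_requested", lambda c: c.setdefault("kyc", "complete"))

def score_risk(c):
    if c.get("poison"):                     # simulate a malformed provider response
        raise ValueError("risk provider returned malformed score")
    c["risk"] = "low"
orc.on("risk_scored", score_risk)

case = {"id": "C-2001"}
orc.emit("document_verified", case)         # event arrives -> KYC runs immediately

good, bad = {"id": "C-2002"}, {"id": "C-2003", "poison": True}
orc.emit("risk_scored", bad)                # failure recorded on this case only
orc.emit("risk_scored", good)               # unaffected; the batch keeps moving
```

The contrast with clock-based scheduling is the trigger: nothing here assumes an upstream step finished on time, because each step runs only when its prerequisite event actually arrives.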
This is the capability gap that modern orchestration fills — and it’s where RunMyJobs by Redwood is built to operate. As a cloud-first Service Orchestration and Automation Platform (SOAP), RunMyJobs connects legacy and modern systems across hybrid environments without the technical debt of self-hosted tools, and without forcing changes to the stable workflows that already run reliably.
Elevate the conversation beyond customer experience
The most important shift here is executive framing. Onboarding reliability belongs in the same strategic conversation as payments modernization, operational resilience and regulatory compliance — because the consequences of getting it wrong land in all three places.
When you treat onboarding orchestration as a CX improvement project, you underfund the control layer. Treating it as an operational risk, by contrast, puts investment in the right place.
That reframing also shifts how onboarding is measured: from speed and completion rates to revenue at risk, exception rates and compliance exposure. The revenue protection, regulatory defensibility and customer retention gains follow from there.
The institutions that recognize this distinction early won’t just onboard customers faster. They’ll do it in a way that’s auditable, resilient and built to scale as the workflow complexity around them keeps growing.
In my role as a Strategic Account Manager at Redwood Software, I work closely with some of the largest Fortune 500 manufacturers in our client base, advising on automation strategy across complex, mostly SAP-centric environments. Those conversations tend to surface patterns that don’t always show up in formal transformation plans, but they’re often where meaningful change starts.
One of the more consistent patterns is surprisingly simple. Procurement teams are often the first to ask a question that cuts through the complexity: “Why are we running multiple workload automation platforms when we could consolidate onto one?”
They’re not aiming to be more technical; they’re surfacing an opportunity that directly supports the CIO’s priorities around standardization, cost control and operational efficiency.
Legacy automation is back in focus
Over the past five years, the workload automation market has consolidated through mergers and acquisitions. Fewer vendors, combined with rising demand for automation, have shifted the balance of supply and demand. Procurement teams are often the first to feel that pressure, and they’ve been reacting by pushing for vendor consolidation. In doing so, they’re forcing CIOs to take a closer look at a part of their environment that has largely been ignored for decades.
This phenomenon has been a blessing in disguise for many of the CIOs we work with at Redwood. What initially seems like a cost-driven initiative is turning into something much more strategic. At the same time procurement is pushing consolidation, most Fortune 500 manufacturers are in the middle of large-scale digital transformation efforts: moving from SAP ECC to SAP S/4HANA or RISE with SAP, shifting workloads to the cloud or optimizing those already there, and introducing AI into core operations. As those changes take shape, it becomes clear that the legacy automation layer doesn’t transition as easily as expected.
In many cases, expecting these legacy tools to support moving operations to a modern, hybrid cloud architecture requires heavy customization, introduces technical debt or simply breaks altogether. Many of the workload automation solutions still in use today were originally built for on-premises, mainframe-based environments in the 1990s. They weren’t designed for cloud, hybrid infrastructure or the pace of change organizations are dealing with today.
According to McKinsey and Bain research for Redwood, only one-third of enterprises consider replacing their automation tools in any given year. That means two-thirds of manufacturers will run into this problem at their next automation vendor renewal rather than getting ahead of it.
Environments are fragile by accumulation
Very few manufacturers deliberately built the complexity they now live with. It usually happened one sensible decision at a time.
A scheduler went in to support SAP batch jobs, another tool was added for data pipelines and scripts were written to move files between the MES and cloud analytics. A manual handoff that was meant to be temporary became permanent. Each of those choices was justified by an important need. Each solved a real problem. But they cumulatively created a technology landscape that’s harder to manage, slower to change and more fragile than it looks.
Tool sprawl would be bad enough on its own. What makes it worse is the maintenance load and technical debt that comes with it: undocumented scripts, manual fixes, installed software components and agents everywhere, plus the constant churn of patching and version alignment. IT teams are asked to support modernization while spending their days keeping outdated automation systems stable.
78% of manufacturers have automated less than half of their critical data transfers, and nearly 27% still rely on manual or email-based methods to transfer sensitive internal documents like financials and contracts. – “Manufacturing AI and automation outlook 2026”
Fragmentation creates a split operating reality. Production data lives in one place, analytics in another and planning somewhere in between, while supplier updates arrive through EDI, CSVs or inboxes on uneven schedules. If orchestration can’t normalize and route those signals in real time, planners are left working with stale information. Tool sprawl starts hitting the business.
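A toy example of that normalization step, with illustrative field names rather than a real EDI or MES schema: each source format is mapped into one canonical record before it's routed downstream, so planners compare like with like.

```python
# Hedged sketch (field names are illustrative): supplier updates arrive in
# different shapes, so an orchestration layer maps each source into one
# canonical record before routing it to planners.
import csv
import io

CANONICAL_FIELDS = {"supplier", "part", "qty", "eta"}

def from_csv_row(row):
    # A row parsed from a supplier's CSV upload
    return {"supplier": row["vendor"], "part": row["sku"],
            "qty": int(row["quantity"]), "eta": row["delivery_date"]}

def from_edi_like(msg):
    # Stand-in for fields pulled from a parsed EDI ship notice
    return {"supplier": msg["sender_id"], "part": msg["item_code"],
            "qty": int(msg["units"]), "eta": msg["ship_eta"]}

def normalize(source, payload):
    mappers = {"csv": from_csv_row, "edi": from_edi_like}
    record = mappers[source](payload)
    assert set(record) == CANONICAL_FIELDS   # every source lands in one shape
    return record

csv_text = "vendor,sku,quantity,delivery_date\nAcme,P-77,120,2026-03-01\n"
row = next(csv.DictReader(io.StringIO(csv_text)))
rec = normalize("csv", row)
print(rec)   # {'supplier': 'Acme', 'part': 'P-77', 'qty': 120, 'eta': '2026-03-01'}
```

In practice this mapping lives inside the orchestration layer, so adding a new supplier channel means adding one mapper, not another point-to-point script.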
Redwood’s manufacturing research shows the same pattern. Automation is delivering gains in throughput and uptime, but results flatten when the KPI depends on multiple systems moving together. Inventory turns and data accuracy are much harder to improve in fragmented environments. Only 40% of manufacturers have automated exception handling, even though 22% cite it as a top operational disruption. Thus, many manufacturing operations still depend on people to bridge gaps when resilience matters most.
Orchestration changes the equation for the factory
At some point, manufacturers have to decide whether legacy automation will support the operation or define its limits.
A more connected path opens up when you step away from legacy schedulers that rely on thousands of installed agents spread across plant-floor servers, applications, data sources and virtual machines, each one tied to operating system changes, security patches and version dependencies. In a modern manufacturing environment, that overhead becomes a constant drain.
Moving to a modern application and data pipeline workflow orchestration platform with an agentless, cloud-first architecture cuts that burden at the source and gives technical teams their time and focus back. Instead of babysitting infrastructure, they can align their effort toward enterprise MES rollouts, IIoT connectivity, plant modernization and the data foundation needed for predictive maintenance and better decision-making.
A unified orchestration model changes what teams can see, what they can scale and where they optimize throughput, efficiency and budgets. It gives manufacturers, in particular:
Better visibility across end-to-end processes: In fragmented environments, teams see isolated jobs and individual handoffs. In a unified model, forecasting, procurement, production scheduling and fulfillment become part of the same end-to-end flow. If a supplier update affects material availability or a quality hold changes what can ship, the response can move through the system instead of waiting for human intervention.
A stronger foundation for modernization: Tool consolidation is often treated like cleanup work, but it’s actually foundational. If the orchestration layer remains fragmented, every smart factory or Industry 4.0 initiative built on top of it inherits that fragility.
More room to scale: Manufacturers expanding across plants and regions can’t afford growth that brings license friction, infrastructure bloat or unpredictable costs. A SaaS model with transparent economics makes scalable growth easier to support.
Better use of budget: Too much money still goes into maintaining old schedulers, managing compatibility issues and upgrading platforms that add no new business capability. Consolidation creates a chance to shift that spend toward projects that improve production processes, shorten cycle times and remove bottlenecks.
Bring your orchestration strategy to life
This is where an orchestration platform like RunMyJobs by Redwood fits. Its job is not to add another tool to the pile, but to replace fragmented scheduling and automation with a single execution layer across ERP, MES, IIoT, quality control and plant-floor workflows.
For manufacturers with large SAP landscapes, that matters even more. Redwood’s SAP partnership and SAP Endorsed App status give customers a more reliable way to connect SAP Cloud ERP, SAP Business Technology Platform and SAP Business Data Cloud without leaning on maintenance-heavy scripts and custom workarounds. For teams moving through RISE with SAP, that supports a clean core strategy rather than pulling the architecture away from it.
A unified application and data pipeline orchestration platform also makes governance more practical. Once workflows span plants, business units and systems, maintaining consistency becomes a serious operational challenge. Compliance, auditability, security controls and traceability need to be built into execution, not layered on later.
AI raises the stakes further. Manufacturers are investing in it for planning, forecasting and predictive operations, but those efforts depend on reliable workflows and dependable data collection. If the underlying process is still patched together, AI will expose the weakness faster. Traditional automation is deterministic: you know what output to expect. AI is not. Even with consistent inputs, outcomes can vary. As organizations introduce AI agents into finance, supply chain and operations, there’s a growing need for a layer that can govern and control how those systems behave.
A strong orchestration foundation gives teams cleaner execution, earlier visibility into failures and true observability across the plan-to-produce chain. The result is less legacy technical debt and drag, fewer update delays and a better path to faster product introductions, smarter scaling and more resilient manufacturing processes.
The window is open
Manufacturing leaders don’t need more reminders that legacy tool sprawl is a problem; most are living with the consequences already. The real question is how much longer they can afford to let aging automation tools sit underneath the modernization agenda, widening the gap between smart factory ambition and operational reality every time a new initiative is layered onto a cracking foundation.
Consolidating to a modern, SaaS, AI-powered orchestration platform removes a bottleneck before it becomes the reason transformation stalls.
If a legacy renewal is approaching for your enterprise, treat it like the strategic decision it is.