Oriol Feu i Camps | Technical director, INFORMATICA FEU
— Included in Data Dialogue, Issue 002
In every board meeting where growth, customer experience, and compliance are on the agenda, there is an unspoken dependency sitting behind every slide: data. Not just data as a technical asset, but data as the operational heartbeat of the organization. When data is available, consistent, and protected, the business feels agile. When it is not, even for a few minutes, the organization can feel fragile: revenue stalls, operations hesitate, reputation suffers, and trust erodes.
For CIOs, that reality has changed the meaning of business continuity. It is no longer a document that lives in a binder, to be opened only during an incident. Continuity is now a permanent strategic posture: a design philosophy in which availability, integrity, and protection are built into the platform that runs the business, and into the operating habits of the teams that maintain it.
I. The Evolution of Continuity Towards Active-Active
This shift is accelerated by a simple observation: the modern enterprise runs on interconnected services, from ERP and databases to logistics, finance, collaboration, and analytics, including automated, unattended integrations and data exchanges with clients, suppliers, and partners. A single disruption can cascade across departments and third parties in minutes. In small and mid-sized organizations, the impact can be existential, because downtime hits cash flow and credibility at the same time.

Figure 1: Highly reliable Active-Active architecture ensures 24/7 service with zero stoppage
So the question becomes less ‘How do we recover?’ and more ‘How do we stay available while we handle change, failure, and risk?’ The answer is found in how continuity architectures have evolved—and why Active-Active has become the reference model for the most demanding environments.
For a moment, let us trace the evolution in four simplified phases:
1. Stand-alone model: a single data center with a single storage system at the core. It was simple to understand and easy to operate—until it failed. When storage became a single point of failure, an otherwise healthy server cluster could still go dark. This stand-alone model forced organizations to accept that outages were inevitable.
2. The next step introduced redundancy through Active-Passive configurations inside the same data center. Availability improved, but operations became more complex, and the design still had a weakness: physical events. If a fire, flood, or power event affected the site, the redundancy did not matter.
3. Moving to Active-Passive across two data centers addressed physical risk, but brought a different set of trade-offs. Recovery times were longer, the risk of data loss remained (RPO greater than zero), and failover procedures often depended on manual, yet critical, decision-making that had to be carried out by the appropriate IT staff. Many organizations discovered that the passive site, while expensive, was underutilized for most of the year.
4. Active-Active across multiple data centers changed the narrative. Instead of ‘one site runs, the other waits,’ both sites contribute to production at the same time, with data kept synchronized. This transforms continuity from an event, an emergency procedure, into an operating state. Maintenance becomes less risky. Scaling becomes more predictable. And when a site fails, service continuity is preserved because the architecture was designed to run that way every day.

Figure 2: Active-Active across multiple data centers
II. Aligning Architecture with Business: Tiers and Multiprotocol Consolidation
But architecture is only valuable when it matches business reality. One of the most pragmatic decisions a CIO can make is to classify environments by criticality and align technology to impact. This is where continuity stops being a ‘technical’ discussion and becomes a leadership and strategic narrative: critical services are protected like critical services, and cost-optimized environments are designed to be sensible rather than excessive.
In critical environments, such as core databases, ERP, logistics, and financial systems, the bar is clear. The business expects maximum availability, near-immediate recovery, and zero data loss. In these environments, RPO must be zero and RTO should be as close to immediate as possible. This is the domain of geo-clustering and true Active-Active designs.
In important but non-critical environments, such as VDI, user file services, and collaboration, availability still matters, but small windows of downtime may be acceptable if data loss is not. Here, the architecture can often be configured for high availability using an Active-Passive model plus asynchronous replication, aiming for minutes of RTO rather than seconds, while protecting CAPEX and OPEX.
And then there are development and pre-production environments. These environments are essential for quality and velocity when moving into production, because testing changes before they go live reduces risk. Yet they usually do not justify the same investment in multi-site replication. Cost-optimized, stand-alone designs often make sense here, preserving advanced resilience budgets for the services that truly require it.
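To make this tiering tangible, here is a minimal sketch, assuming purely illustrative workload names, thresholds, and tier labels (none of which come from the article), of how the three tiers might be expressed as explicit service objectives that architecture and budget decisions can reference.

```python
from dataclasses import dataclass

@dataclass
class TierPolicy:
    """Availability targets for one criticality tier (illustrative values only)."""
    rpo_seconds: int   # maximum tolerable data loss
    rto_seconds: int   # maximum tolerable recovery time
    topology: str      # how the storage layer is expected to be deployed

# Hypothetical tier definitions mirroring the three scenarios described above.
TIERS = {
    "critical": TierPolicy(0, 60, "Active-Active, synchronous replication"),
    "important": TierPolicy(300, 900, "Active-Passive, asynchronous replication"),
    "cost-optimized": TierPolicy(86400, 14400, "stand-alone, backup-based recovery"),
}

# Example workload-to-tier assignments (placeholder names).
WORKLOADS = {
    "erp-database": "critical",
    "user-file-services": "important",
    "preproduction-lab": "cost-optimized",
}

for workload, tier in WORKLOADS.items():
    policy = TIERS[tier]
    print(f"{workload}: tier={tier}, RPO<={policy.rpo_seconds}s, "
          f"RTO<={policy.rto_seconds}s, topology={policy.topology}")
```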
Once those tiers are defined, a second strategic move often delivers disproportionate value: simplify how storage is delivered across the organization. Modern data is not uniform. Some applications demand block storage for low-latency transactions; others rely on file services for collaboration; and increasingly, object storage underpins backup, archive, analytics, and cloud-native use cases. Historically, each protocol created its own technology silo, with its own tools, teams, and policies, and the result was a dispersion of technical knowledge, solutions and brands, procedures, contracts, and licenses.
Multiprotocol storage changes that. When block (SAN), file (NAS), and object services coexist natively in a single platform, IT can consolidate silos into one governable architecture. The operational payoff is significant: fewer gateways, fewer translation layers, fewer failure points, and fewer technologies competing for attention. Access becomes more direct and predictable, operations become more consistent, and, in short, everything becomes simpler.
The business payoff is just as important. A unified platform makes it easier to scale capacity where the business needs it, instead of overprovisioning one silo while another runs short. It reduces license and maintenance overhead. It also helps create a coherent governance model where security, retention, and protection policies are applied consistently across the estate.
From a CIO’s perspective, this is where technology starts to look like strategy. A unified multiprotocol platform supports agility: new projects can be onboarded without building a separate storage island. It supports governance: security controls and protection policies can be standardized and audited. And it supports budgeting: investments move from fragmented purchases to a deliberate platform roadmap that can be explained in business terms.
III. Achieving Continuous Operations: Objectives, Ransomware, and Immutability
A useful way to describe the destination is ‘continuous operations’: the organization can tolerate component failure, site-level disruption, and planned maintenance without turning continuity into a crisis. That doesn’t mean incidents disappear—it means incidents stop dictating whether the business can operate. The operational burden shifts from improvisation to execution, from uncertainty to rehearsed response.
To get there, CIOs typically translate resilience into a small set of service objectives. They decide which systems require zero data loss, which can tolerate small recovery windows, and which are primarily cost-optimized (the three scenarios we discussed earlier). Those objectives then drive architecture decisions: synchronous replication where RPO must be zero; asynchronous patterns where cost and distance matter; and stand-alone designs where agility and budget are prioritized.
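The decision logic itself can also be written down and reviewed rather than left implicit. The sketch below is illustrative only: the function name, the inputs, and the 100 km synchronous-replication distance threshold are assumptions for the example, not vendor specifications or figures from this article.

```python
def choose_replication(rpo_seconds: int, inter_site_distance_km: float,
                       budget_constrained: bool) -> str:
    """Pick a replication pattern from stated objectives (illustrative thresholds).

    Synchronous replication is typically limited by distance and latency, so the
    100 km figure below is an assumption used only to make the rule explicit.
    """
    if rpo_seconds == 0 and inter_site_distance_km <= 100:
        return "synchronous Active-Active replication"
    if rpo_seconds == 0:
        return "review: RPO=0 requested, but distance may rule out synchronous replication"
    if not budget_constrained:
        return "asynchronous replication to a secondary site"
    return "stand-alone design with scheduled backups"

print(choose_replication(0, 40, False))      # critical tier, nearby sites
print(choose_replication(300, 600, False))   # important tier, distant sites
print(choose_replication(3600, 600, True))   # cost-optimized tier
```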
Active-Active architectures help close the gap between those objectives and day-to-day operations because they encourage an ‘always-on’ posture. If both sites are active, then failover is not a rare event; it is an operational capability that can be tested and validated routinely. Over time, this raises maturity: teams become confident in switchover procedures, automation reduces human error, and change becomes safer because the platform was designed to sustain it.
Ransomware has turned data resilience into a board-level concern. In many incidents, the attacker’s goal is not only to disrupt operations, but also to compromise recoverability by encrypting production data and attempting to delete or corrupt backups. In that context, architecture matters as much as tooling.
Immutability becomes a decisive element. Immutable snapshots and version retention, applied appropriately to the workload, help ensure that recovery points cannot be silently altered or deleted. This does not replace security controls; rather, it strengthens the last line of defense: the ability to restore trustworthy data quickly, even under pressure.
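To show what immutability means operationally, here is a deliberately simplified, vendor-neutral sketch: a snapshot carries a retention lock, and any attempt to delete it before the lock expires is refused. Real platforms enforce this inside the storage layer itself; the class and names below are hypothetical and only model the behaviour.

```python
from datetime import datetime, timedelta, timezone

class ImmutableSnapshot:
    """Toy model of a retention-locked snapshot (not a real storage API)."""

    def __init__(self, name: str, retention_days: int):
        self.name = name
        self.created_at = datetime.now(timezone.utc)
        self.locked_until = self.created_at + timedelta(days=retention_days)

    def delete(self) -> None:
        """Refuse deletion while the retention lock is still active."""
        if datetime.now(timezone.utc) < self.locked_until:
            raise PermissionError(
                f"{self.name} is retention-locked until {self.locked_until.isoformat()}"
            )
        print(f"{self.name} deleted (retention expired)")

# Example: an attempt to delete a fresh recovery point is rejected.
snap = ImmutableSnapshot("erp-db-restore-point", retention_days=30)
try:
    snap.delete()
except PermissionError as err:
    print(err)
```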
In practice, ransomware readiness also reshapes executive expectations. The question becomes: can we keep the business running while we contain and eradicate? Can we restore quickly from a clean point in time? And can we prove that the restore point is trustworthy? When resilience is designed into the platform (as if it were a native property)—rather than layered on afterwards—those questions are easier to answer with confidence.
IV. The Strategic Platform: Enabling Fit-for-Purpose Resilience
One practical way to bring these ideas to life is through enterprise storage platforms designed for Active-Active plus HyperReplication with an air gap, and for multiprotocol operations. In this context, Informàtica Feu (ifeu.net) commonly positions OceanStor Dorado New Generation as a reference platform for organizations that want to unify protocols while reaching high resilience targets.
At the core, the platform is designed for low latency and predictable performance, while providing native services across SAN, NAS, and object. It supports synchronous Active-Active replication between data centers, enabling continuous availability across sites. It also supports architectures extending beyond two sites—3DC and 4DC patterns—designed to increase resilience where business impact demands it.
For protection, the platform emphasizes fast recovery capabilities, including immutable snapshots designed to accelerate restoration from incidents, as well as ransomware detection and recovery features.
Automation and orchestration further reduce operational complexity, helping teams maintain consistent service levels across protocols.
The most compelling part of this story is not a feature list; it is alignment and versatility. Critical workloads such as databases, ERP, and financial systems map naturally to block storage with Active-Active replication. Collaboration platforms and repositories often map to file services under Active-Active or Active-Passive depending on criticality. And backup, archive, and analytics commonly map to object storage where immutability and scale are strategic priorities.
In other words, different workloads can coexist on the same platform with differentiated policies for performance, availability, and protection, provided the design is intentional. This is how organizations move from ‘one-size-fits-all’ resilience to ‘fit-for-purpose’ resilience.
A narrative approach also helps CIOs describe why this matters to stakeholders outside IT. Consolidation reduces complexity, which reduces risk. Predictable failover reduces downtime, which protects revenue. Unified governance reduces audit friction, which supports compliance. And cyber-resilience reduces the probability that a single incident becomes a multi-week operational shutdown.
It also changes the economics of resilience. Traditional Active-Passive designs often hide inefficiency: the passive site is built, powered, cooled, and maintained, yet it delivers value only during a crisis. In Active-Active, the second site contributes every day, which can improve utilization and make resilience investments easier to justify, because they are not ‘idle insurance’ but productive capacity.
Industry examples make the narrative tangible. In banking, requirements often include continuous processing, rapid service switching, strong compliance, and cyber-resilience. Active-Active designs help protect the transactional core, while multiprotocol services support digital channels, document generation, and user information repositories. Immutable object storage can strengthen regulatory archive strategies, where retention and integrity matter as much as availability.

Figure 3: Geo-redundant 3DC DR in two cities
In healthcare, the diversity of data is a defining challenge. Hospital Information Systems (HIS) centralize clinical, administrative, and financial workflows—often requiring the highest availability targets. Meanwhile, imaging and diagnostic ecosystems—such as pathology PACS, genomic PACS, and vendor-neutral archives (VNA)—prioritize long-term scalability and interoperability. A tiered approach is crucial: Active-Active is commonly reserved for the most time-sensitive clinical systems, while imaging repositories may accept different availability levels, supported by immutable snapshots as retention strategies where appropriate.

Figure 4: Geo-redundant 4DC DR
This is also where the rationale for Active-Active becomes clear. When both sites operate simultaneously and share the production burden, the architecture eliminates traditional single points of failure and reduces planned maintenance risk. It is a resilience model that aligns with modern expectations: services should be continuously available, and continuity should not depend on heroic manual procedures.
In practice, the transition to this model is as much about operating habits as it is about infrastructure. CIOs who succeed treat resilience as a lifecycle: they classify workloads, define service objectives, and then institutionalize testing. The goal is not merely to ‘have’ a continuity solution, but to exercise it routinely—so that site switching is predictable, auditable, and repeatable.
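One way to institutionalize that testing is to treat the site switch as a scripted, auditable drill. The sketch below assumes hypothetical drill steps and placeholder actions; a real runbook would invoke the actual cluster and storage tooling rather than the stand-ins used here.

```python
import time
from datetime import datetime, timezone

# Hypothetical drill steps; a real runbook would call storage and cluster tooling.
DRILL_STEPS = [
    "verify replication is in sync and lag is zero",
    "confirm application health at the primary site",
    "redirect client traffic to the secondary site",
    "validate transactions complete at the secondary site",
    "switch traffic back and confirm steady state",
]

def run_site_switch_drill(steps=DRILL_STEPS):
    """Execute a switchover drill and return an auditable record of each step."""
    audit_log = []
    for step in steps:
        started = datetime.now(timezone.utc)
        time.sleep(0.1)  # placeholder for the real action (API call, script, operator check)
        audit_log.append({
            "step": step,
            "started_at": started.isoformat(),
            "duration_s": round((datetime.now(timezone.utc) - started).total_seconds(), 2),
            "result": "ok",
        })
    return audit_log

for entry in run_site_switch_drill():
    print(entry["started_at"], entry["step"], entry["result"])
```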
This discipline also clarifies investment decisions. Instead of distributing budget across disconnected silos, organizations can fund a common platform that serves multiple protocols and multiple tiers. Over time, that consolidation can simplify vendor management, reduce skill fragmentation, and make automation realistic—because operational processes target one architecture rather than many.
When executives understand that Active-Active is not ‘extra redundancy’ but a way to keep the business running through failure, maintenance, and cyber incidents, the conversation shifts from cost to risk management. That shift is often the moment resilience becomes a strategic advantage rather than a necessary expense.
When this approach is implemented well, organizations gain something difficult to buy directly: confidence. Confidence that planned changes will not trigger outages. Confidence that a site incident will be absorbed by design. And confidence that, when cyber pressure escalates, the business has options—continue operating, isolate safely, and restore from protected points in time. That confidence is what allows CIOs to move faster without taking reckless risk.
Finally, as you reflect on your own continuity strategy, consider these questions:
(1) If a full site (or data center) outage occurred today, could your most critical services continue operating with no loss of transactions?
(2) Are your recovery points protected from ransomware tampering through immutability and governance—not only in backups, but also in production-adjacent snapshots?
(3) Do you enforce consistent security, monitoring, and retention policies across block, file, and object— or do protocol silos create gaps attackers could exploit?
(4) Can you execute a controlled site switch as a routine operation, or only as an emergency procedure under pressure?
And if you think this is extremely expensive, slow to implement, or overly complex, that would be the perfect moment to interact and exchange ideas with the OceanClub Data Storage User Club.