
If you are a CIO or CFO who has recently sat through a presentation on data platform modernisation, you will recognise the experience: slides full of architecture diagrams, acronyms stacked on acronyms, and a recommendation that boils down to “trust us, this is the right approach.”
The problem is not that the technical detail is wrong. It is that none of it helps you make a good decision.
Your job is not to evaluate whether BigQuery is a better fit than Snowflake for your ingestion layer. Your job is to decide whether a significant investment in data infrastructure will produce the outcomes your organisation needs, and to hold your team and your partners accountable for delivering them. Those are different questions, and most of the content written about data platforms is not designed to help you answer them.
This post is.
What is a cloud data platform?
Strip away the technical language and a data platform does three things: it brings your data together from wherever it lives, it prepares that data so it can be used reliably, and it makes it available to the people and tools that need it.
Every organisation already has a version of this. For many mid-market businesses in New Zealand, that version involves a combination of spreadsheets, reports pulled manually from multiple systems, and a persistent argument between finance, operations, and IT about whose numbers are correct. That is a data platform. It is just an expensive, fragile, and slow one.
Modernising it means replacing manual effort, fragmented sources, and delayed reporting with automated pipelines, a single governed version of the truth, and timely insight. The technology that underpins this lives in the cloud, primarily in platforms like Google Cloud (BigQuery), Microsoft Azure, and Amazon Web Services. The technology is not the hard part.
Five questions to ask before you commit to a data platform build
When organisations get data platform decisions wrong, it is rarely because they chose the wrong database engine. It is because they did not ask the right questions before the build began.
What decisions do we need to make, and how often?
Start with the outcome, not the architecture. What does good look like for your business in 12 months? Are you trying to reduce the time it takes to close your monthly accounts? Get daily visibility of labour costs across multiple sites? Build the data foundation that AI tools will eventually run on? The architecture should follow the use case, not precede it.
Organisations that start with technology and work backwards to the business problem tend to build platforms that are technically impressive and practically underused.
Who will operate this once it is built?
This question eliminates more bad decisions than any other. A modern cloud data platform requires ongoing maintenance: pipelines need monitoring, data quality needs governance, and the platform needs to evolve as source systems change. If your internal team does not have the capacity or the skills to operate what is being proposed, you face a choice: engage a trusted partner who operates the platform alongside your business, or watch it degrade within 18 months of go-live.
The right architecture is not the most sophisticated one available. It is the most capable one your organisation can actually sustain.
What does it cost to run, not just to build?
Cloud platforms are often sold on the basis of build cost. The more relevant number is total cost of ownership over three to five years, including compute, storage, data transfer, licences for visualisation and transformation tools, and the ongoing engineering time required to keep everything running.
Different cloud architectures have different cost profiles. Some scale cheaply because they use serverless models, meaning you pay for what you use rather than provisioning capacity in advance. Others require more active management to stay tuned and cost-efficient. A FinOps approach, where cloud spend is actively governed rather than passively accumulated, is not optional on large deployments.
Ask your technical team or partner to show you a projected cost model, not just an indicative build estimate.
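As a concrete illustration, here is a minimal sketch of the shape a projected cost model might take. Every figure and category name below is a hypothetical placeholder, not a benchmark; the point is that the build cost and the recurring running costs are modelled separately and then combined over a multi-year horizon.

```python
# Hypothetical multi-year total cost of ownership (TCO) sketch.
# All figures are illustrative placeholders, not benchmarks.

def project_tco(build_cost, annual_costs, years=3):
    """Combine a one-off build cost with recurring annual costs over a horizon."""
    annual_total = sum(annual_costs.values())
    return build_cost + annual_total * years

annual_costs = {
    "compute": 40_000,        # query/processing spend (serverless or provisioned)
    "storage": 6_000,         # warehouse and lake storage
    "data_transfer": 3_000,   # egress between services and regions
    "tool_licences": 25_000,  # visualisation and transformation tooling
    "engineering": 60_000,    # ongoing pipeline maintenance and support
}

tco = project_tco(build_cost=150_000, annual_costs=annual_costs, years=3)
print(f"Projected 3-year TCO: ${tco:,.0f}")  # prints "Projected 3-year TCO: $552,000"
```

Note that in this sketch the recurring costs are more than double the build cost over three years, which is exactly why an indicative build estimate on its own tells you very little.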
How fragmented is our data, and what is that actually costing us?
Most organisations underestimate this. When data is siloed across systems (an ERP here, a CRM there, payroll in a third platform, operational data in a fourth), the cost shows up in places that are easy to miss: the analyst who spends three days per month reconciling numbers that should reconcile automatically; the executive who makes a decision based on a report that was accurate last Tuesday; the compliance risk that sits in a payroll system nobody has fully audited.
Quantifying the cost of fragmentation is usually a prerequisite for building a credible business case for modernisation. It also gives you a baseline against which to measure the return on your investment.
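Even a rough annualised figure is a useful baseline. Taking the reconciliation example above, a back-of-envelope calculation might look like this; the day rate and effort are hypothetical placeholders for your own numbers.

```python
# Hypothetical cost-of-fragmentation baseline: an analyst manually
# reconciling numbers that should reconcile automatically.
# The effort and day rate below are illustrative placeholders.

def annual_reconciliation_cost(days_per_month, day_rate):
    """Annualised cost of recurring manual reconciliation effort."""
    return days_per_month * 12 * day_rate

cost = annual_reconciliation_cost(days_per_month=3, day_rate=800)
print(f"Annual cost of manual reconciliation: ${cost:,.0f}")  # prints "$28,800"
```

Repeat this for each silo-driven workaround and the fragmentation line in your business case stops being abstract.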
Are we building for where we are, or where we are going?
This is particularly relevant for AI. Many New Zealand organisations are beginning to explore what AI tools can do for their business: predictive analytics, automated reporting, intelligent forecasting. Almost all of those tools require a well-structured, high-quality, unified data layer underneath them.
Poor source data produces poor analytics and nonsensical AI results. If your data platform is not ready for AI today, it needs to be designed so that it can be. That is a decision made at the architecture stage, not retrofitted later.
Choosing the right cloud provider: GCP, Azure, or AWS
Google Cloud, Microsoft Azure, and Amazon Web Services are all capable platforms for a modern data architecture. Choosing between them is less important than most people assume, and more dependent on your existing environment than any objective performance comparison.
If your organisation runs SAP, the integration story with Google Cloud is particularly strong. Google’s Cortex Framework provides pre-built data models and accelerators for SAP workloads, reducing the engineering effort required to extract and structure your SAP data for analytics. Briscoe Group, a Mero client, moved from a manual reporting process to automated daily reporting across every store in under three minutes by connecting their on-premise SAP warehouse to Google BigQuery.
Each cloud platform has strengths and weaknesses for analytics and AI. Google Cloud offers a highly integrated toolset, widely regarded as industry-leading for analytics workloads. Azure may make sense if your organisation is already heavily invested in the Microsoft ecosystem. AWS offers a huge array of capable tools, but the integration work required between them adds complexity that should be factored into your decision.
The more important question is whether you are getting a single-cloud or multi-cloud architecture, and whether your consultancy has genuine depth across the platforms they are recommending, or whether a commercial relationship is making one platform the answer regardless of the question.
Questions to ask your data platform consultancy
Whether you are engaging an external partner or relying on internal capability, the following questions cut through most of the noise.
Show me the business case, not the architecture diagram. A credible recommendation should start with the problem you are solving, quantify the cost of not solving it, and demonstrate that the proposed approach is the most cost-effective path to a specific outcome.
What will we be able to do after this that we cannot do now? If the answer is abstract (“better data quality”, “improved visibility”), push for specifics. What reports will be faster? What decisions will be better informed? What manual processes will be eliminated?
Who has done this before, and can I speak with them? Reference clients in comparable industries and at comparable scale are a reasonable expectation of any consultancy. Pay particular attention to organisations that have completed the build and are operating the platform, not just gone live.
What happens when things go wrong? Data platforms fail in predictable ways: pipelines break, source systems change, data quality degrades, tools get updated in ways that break downstream dependencies. Ask how those scenarios are handled, who is responsible, and what your exposure is.
What does self-sufficiency look like? The goal of a good implementation should be a platform your business can embrace and build on over time. Some elements of analytics and AI development can be outsourced effectively, but where the platform touches business processes and decision-making, your team needs to be closely involved, not just handed a finished product. The distinction between what to own internally and what to delegate should be part of the conversation from the start. Ask what knowledge transfer is included, what documentation will be produced, and what ongoing support and mentoring looks like after the build is complete.
Is your organisation ready for data platform modernisation?
Data platform modernisation is a real investment with real returns, but only when it is anchored to a clear business problem and executed with the right level of organisational commitment. The organisations that get it right are not the ones with the most sophisticated architecture. They are the ones that were honest about their starting point, disciplined about their priorities, and deliberate about building internal capability alongside the technical solution.
If you are at the point of evaluating options, the most useful thing you can do before any technical conversation is to write down, in plain language, what you are trying to achieve and what it is currently costing you not to have it. Everything else should follow from that.
Mero works with mid-market and enterprise organisations across New Zealand to design and build cloud data platforms on GCP, AWS, and Azure. If you would like an honest conversation about what your organisation actually needs and what it does not, get in touch.
Frequently asked questions
How long does a data platform implementation take?
It depends on the complexity of your data environment and the scope of what you are building. A focused implementation covering a defined set of data sources, a cloud data warehouse, and a reporting layer can be completed in eight to twelve weeks. A full data platform modernisation across multiple source systems, with governance, transformation, and analytics layers, typically takes six to twelve months. The most important factor is not the technology: it is how clearly the business requirements are defined before the build begins. Poorly scoped projects take longer and cost more, regardless of the platform chosen.
What is the difference between a data warehouse and a data lake?
A data warehouse stores structured, processed data that is ready for reporting and analysis. It is optimised for business users who need to query and visualise data quickly, typically using tools like Looker or Power BI. A data lake stores raw data in its original form, including unstructured data like documents, images, or event logs, and is better suited to data science and machine learning workloads. Most modern cloud platforms, including Google BigQuery, blur the line between the two, offering the flexibility of a data lake with the query performance of a warehouse. For most mid-market organisations, a cloud data warehouse is the right starting point.
How much does a cloud data platform cost?
There is no single answer, but there are two costs to understand separately: the build cost and the ongoing running cost. Build cost covers the initial design, engineering, and implementation work. Running cost covers compute, storage, data transfer, tool licences, and the ongoing engineering time required to keep the platform maintained and healthy. These are different in scale and different in nature, and both should be modelled before you commit. Running costs on a modern cloud platform are generally lower than legacy on-premise infrastructure, particularly if the architecture uses serverless services where you pay for consumption rather than provisioned capacity. Any credible partner should be able to provide a projected total cost of ownership model, not just an indicative build estimate, before work begins. FinOps should be built into the development process from the start, not treated as an afterthought. Architecture and design decisions made early in the build have a direct and lasting impact on what the platform costs to run at scale.
How do I know if my organisation needs data platform modernisation?
The clearest signals are: reporting that takes days rather than hours, finance or operations teams maintaining multiple versions of the same data in different spreadsheets, an inability to get a consistent answer to a basic business question across departments, or an AI initiative stalling because the underlying data is not structured, clean or accessible enough to support it. If any of those describe your organisation, the cost of staying put is likely higher than the cost of modernising.

