When your business model depends on onboarding new customers quickly, every minute of manual setup is a bottleneck. We built an automated provisioning system that takes a new Mautic tenant from "button click" to fully operational instance in under 8 minutes - with zero DevOps involvement.
Here's how we got there, and what we learned along the way.
## The Business Context
Our client runs a SaaS platform and decided to offer marketing automation as a white-labeled feature of their product. Under the hood, it's powered by Mautic - but every new customer of theirs needs their own independent Mautic instance with its own database, configuration, DNS, and queue infrastructure.
The client's team manages their customers through their own internal management system. When they onboard a new customer, they fill out a form - customer name, configuration details - and hit submit. From that moment, a new Mautic instance should appear, ready to use.
With 4-5 production tenants today and a roadmap to 300-500, the provisioning process had to be fully automatic. Manual DevOps work per tenant simply wouldn't scale.
## What Manual Provisioning Looked Like
Before we built the automation, standing up a new Mautic instance meant:
- Editing Kubernetes YAML templates - even with Helm, you still needed to customize values for the specific tenant.
- Provisioning AWS resources - the tenant's database, SQS queues, S3 storage, and DNS records all needed to be created.
- Configuring the application - SQS credentials, queue settings, domain configuration, and other tenant-specific parameters.
- Running the Mautic installation - installing the application, creating the admin user, verifying the setup.
- DNS propagation and verification - registering the tenant's subdomain and confirming it resolves correctly.
Even when not fully "manual" (you'd still use infrastructure-as-code tools), the process required an engineer's attention for about an hour per tenant. And it required someone who understood the full stack - Kubernetes, AWS services, Mautic internals.
For 5 tenants, that's manageable. For 500, it's impossible.
## The Installer Microservice
The core of our automation is a custom installer microservice that sits between the client's management platform and our Kubernetes infrastructure. Here's the flow:
### Step 1: Client triggers installation
In the client's internal system, an operator fills out the new customer form and hits submit. This sends a request to our installer microservice with the tenant's configuration details.
### Step 2: Microservice gathers resources
The installer service springs into action:
- Fetches the latest version of our Helm chart - the template that defines a complete Mautic instance.
- Retrieves credentials and configuration for AWS services: SQS queue credentials, S3 bucket paths, and other infrastructure parameters.
- Prepares the Route 53 DNS configuration for the tenant's subdomain.
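The gathering step above boils down to mapping the tenant form input onto values for the Helm chart. A minimal sketch of that mapping; the field names and value keys are illustrative assumptions, not the actual schema:

```python
from dataclasses import dataclass

# Hypothetical shape of the tenant configuration the installer receives.
@dataclass
class TenantConfig:
    name: str          # e.g. "acme"
    base_domain: str   # e.g. "mautic.example.com"
    sqs_queue_url: str
    s3_bucket: str

def to_helm_values(cfg: TenantConfig) -> dict:
    """Map the tenant form input onto values for the Mautic Helm chart."""
    return {
        "tenant": {"name": cfg.name},
        "ingress": {"host": f"{cfg.name}.{cfg.base_domain}"},
        "mautic": {
            "sqsQueueUrl": cfg.sqs_queue_url,
            "s3Bucket": cfg.s3_bucket,
        },
    }
```

Keeping this mapping in one pure function makes it trivial to unit-test the per-tenant values before anything touches the cluster.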
### Step 3: Jenkins orchestrates the installation
The microservice packages everything up and sends a request to Jenkins, which runs the actual installation job. Jenkins applies the Helm chart with the tenant's specific values, creating all Kubernetes resources.
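Handing off to Jenkins is a plain HTTP POST to its `buildWithParameters` endpoint for a parameterized job. A minimal sketch, assuming a hypothetical job name and parameter names, with authentication omitted:

```python
import json
import urllib.parse
import urllib.request

JENKINS_URL = "https://jenkins.internal.example.com"  # placeholder host
JOB_NAME = "mautic-tenant-install"                    # hypothetical job name

def build_trigger_request(tenant: str, helm_values: dict) -> urllib.request.Request:
    """Build the POST that asks Jenkins to run the install job for one tenant."""
    params = urllib.parse.urlencode({
        "TENANT": tenant,                        # illustrative parameter names
        "HELM_VALUES": json.dumps(helm_values),
    }).encode()
    url = f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters"
    return urllib.request.Request(url, data=params, method="POST")

def trigger_install(tenant: str, helm_values: dict) -> None:
    # Network call, shown for completeness; a real deployment would attach
    # credentials (e.g. a Jenkins API token) to the request.
    with urllib.request.urlopen(build_trigger_request(tenant, helm_values)) as resp:
        resp.read()
```

Separating request construction from the network call keeps the handoff testable without a live Jenkins.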
### Step 4: DNS registration
The installer microservice automatically registers the tenant's DNS records in AWS Route 53. Each tenant gets their own subdomain that routes through our nginx Ingress Controller to their specific Mautic pods.
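With boto3, this registration can be a single `change_resource_record_sets` call using an `UPSERT` action, which is itself idempotent. A sketch under assumptions: the record type and TTL are illustrative (an alias A record to the load balancer is another common choice):

```python
def route53_upsert_batch(subdomain: str, lb_dns_name: str) -> dict:
    """Change batch pointing the tenant's subdomain at the load balancer."""
    return {
        "Changes": [{
            "Action": "UPSERT",  # safe to re-run: creates or updates
            "ResourceRecordSet": {
                "Name": subdomain,
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": lb_dns_name}],
            },
        }]
    }

def register_dns(zone_id: str, subdomain: str, lb_dns_name: str) -> None:
    import boto3  # imported lazily so the module loads without boto3 installed
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch=route53_upsert_batch(subdomain, lb_dns_name),
    )
```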
### Step 5: Verification
The system verifies the installation completed successfully by checking three things:
- Does the database exist?
- Does the admin user exist?
- Does the tenant's URL respond?
If all checks pass, the tenant is live.
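A minimal sketch of how the three checks might be combined. The database and admin-user checks are injected as callables here because their implementation depends on the DB driver in use; only the URL check is spelled out:

```python
import urllib.request

def url_responds(url: str, timeout: float = 5.0) -> bool:
    """True if the tenant's URL answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False

def verify_tenant(db_exists, admin_user_exists, tenant_url_ok) -> bool:
    """All three checks must pass for the tenant to be considered live."""
    return db_exists() and admin_user_exists() and tenant_url_ok()
```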
## Idempotency: The Unsung Hero
One design decision that saved us more trouble than any other: making the entire process idempotent.
Every time the installation job runs, it first checks whether Mautic is already installed for that tenant. It runs through the same three checks - database, user, URL - and if everything is already in place, it exits cleanly. No duplicate databases created, no configuration overwrites, no broken state.
This means:
- If a network hiccup interrupts the installation halfway through, you can safely re-run it.
- If someone accidentally triggers the installer for an existing tenant, nothing is modified - the job detects the existing installation and exits cleanly.
- The system can be used as a health verification tool - run the installer, and it confirms the tenant is properly set up.
This sounds simple, but it's remarkably easy to build provisioning systems that break when run twice. We made idempotency a first-class design requirement from the start.
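What first-class idempotency looks like at the entry point can be sketched in a few lines, with the checks and the install step passed in as callables (names are illustrative):

```python
def install_tenant(already_installed, run_install) -> str:
    """Idempotent entry point: skip cleanly if the tenant is already live.

    `already_installed` runs the same database/user/URL checks used for
    post-install verification; `run_install` performs the actual install.
    """
    if already_installed():
        return "skipped"   # nothing created, nothing overwritten
    run_install()
    if not already_installed():
        raise RuntimeError("install finished but verification checks failed")
    return "installed"
```

Re-running the same checks after installation is what turns the installer into the health-verification tool described above.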
## The Numbers
Based on our analysis of recent deployments:
| Metric | Before | After |
|---|---|---|
| Provisioning time | ~1 hour | 7-8 minutes |
| DevOps involvement | Required | Zero |
| Error rate | Manual errors possible | Idempotent, safe to retry |
| Scale limit | ~5 tenants/day (one engineer) | Unlimited (parallel) |
That's a reduction in provisioning time of roughly 85-90%. But the more important number is DevOps involvement going from "required" to "zero." The client's own team can onboard new customers without ever talking to us.
## How It Fits Into the Bigger Picture
The installer microservice doesn't work in isolation. It's part of a larger architecture:
- Argo CD watches the Helm chart and ensures the tenant's Kubernetes resources match the desired state. If something drifts, Argo CD reconciles it.
- Helm charts provide the template. One chart defines everything - Apache/FPM pods, Messenger consumers, cron jobs, health checks. Tenant-specific values are injected at installation time.
- nginx Ingress Controller with a Network Load Balancer handles routing. Each tenant's subdomain is mapped to their specific pods through Ingress rules. The DNS records that the installer creates point to this load balancer.
- Terraform manages the underlying AWS infrastructure - the EKS cluster, database instances, S3 buckets, and other shared resources.
The installer is the trigger that brings all of these pieces together for a new tenant.
## What We'd Do Differently
If we were building this from scratch today:
Start with automation earlier. Even with just 2-3 tenants, the investment in automated provisioning pays off. Manual processes develop tribal knowledge that's hard to codify later.
Build monitoring into provisioning. We track provisioning success/failure, but we wish we'd built more granular timing metrics from the start - how long each step takes, where bottlenecks are, which AWS API calls are slowest.
DNS propagation is the wildcard. Everything in the provisioning pipeline is deterministic except DNS. Route 53 changes are fast (usually seconds), but we've learned to build verification into the process rather than assuming DNS is ready.
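That verification step can be as simple as polling the resolver until the tenant's hostname resolves, instead of assuming it has. A sketch with an injectable resolver (the default uses the system resolver; timeouts are illustrative):

```python
import socket
import time

def wait_for_dns(hostname: str, timeout: float = 120.0, interval: float = 5.0,
                 resolve=socket.getaddrinfo) -> bool:
    """Poll until `hostname` resolves or the timeout elapses.

    Route 53 changes usually land in seconds, so the loop normally exits
    on the first or second attempt; the timeout covers the unlucky cases.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            resolve(hostname, None)  # raises socket.gaierror until propagated
            return True
        except OSError:
            time.sleep(interval)
    return False
```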
## A Pleasant Surprise: Costs
One thing we didn't expect: the per-tenant infrastructure cost came in below our initial estimates. We designed the architecture assuming certain resource requirements per instance, but careful optimization - spot instances for worker nodes (saving 50-65%), efficient shared infrastructure, right-sized databases - brought the actual cost down meaningfully.
When you're building a product where each customer represents a cost center, that margin matters. And it makes the business case for automating provisioning even stronger - every new tenant onboarded without DevOps time goes straight to margin.
## Key Takeaways
- Automate provisioning before you need to. By the time you have 10 tenants, manual provisioning is already holding you back. Build automation for tenant #3.
- Idempotency is non-negotiable. Make every step safe to re-run. This prevents the worst provisioning failures and turns your installer into a verification tool.
- Separate the trigger from the execution. The microservice handles coordination; Jenkins handles execution; Argo CD handles state management. Each component does one thing well.
- Measure provisioning time as a business metric. Faster provisioning = faster customer onboarding = faster revenue. Track it.
- Plan for the DNS step. It's the most unpredictable part of the pipeline. Build verification in, don't assume.
## Ready to Automate Your Multi-Tenant Provisioning?
If you're building a multi-tenant SaaS product and struggling with onboarding complexity, we'd love to help. We've solved the hard problems - idempotent installation, DNS automation, infrastructure orchestration - and we can help you get there faster.
[Book a free consultation](https://www.droptica.com/contact/) to discuss your provisioning architecture.
Written by
Mautomic Team
The Mautomic team brings together experienced marketing automation specialists, developers, and consultants dedicated to helping businesses succeed with Mautic.