
Taming Payroll with a Modular Monolith
Few things are as critical to get right as embedded payroll. Our partners are industry leaders of all sizes, from startups to public companies, and they trust us to move billions of dollars in payroll and taxes for their business owners. Success demands moving fast and safely without buckling under complexity. Many scaling organizations, when hit with similarly shaped dilemmas, face a choice between microservices and a traditional layered monolith.
Our path is an emerging one: the modular monolith. We continuously deploy one API codebase while otherwise incorporating microservice principles. Discrete domain “applications” expose “services” that enable independent, predictable, and productive development and operations.
The modular monolith has proven to be an ideal fit for companies like ours, and we see tremendous potential in this architectural paradigm.
Our modular monolith journey
Thoughtful decomposition is how we tame complexity. Representing embedded payroll as a set of simple, modular API resources has made our API beloved by developers. We’ve also applied decomposition internally to power those experiences with robust, reliable payroll infrastructure.
We started by breaking embedded payroll into smaller, coherent business subdomains as we encountered them (each a bounded context in domain-driven design (DDD)). Each subdomain has its own application in our Django codebase:
companies: Management of the company and worker lifecycle.
payments: Robust and reliable payments processing.
payrolls: Payroll configuration and processing.
risk: Fraud detection and credit risk evaluation.
And so on. At its most basic, each application is just a standard Django app. Note that the top-level orientation is not around software layers like models or views, but around business domains. Many Rails monoliths have migrated over the past 5+ years to similarly organize their codebases, including Shopify, Middesk, GitLab, and Gusto.
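To make that shape concrete, here is a minimal sketch of what a domain-oriented Django settings module might look like. The app names match the subdomains above; the exact layout is illustrative, not our actual configuration.

```python
# settings.py (illustrative): the top-level structure is organized by business
# subdomain, not by software layer. Each entry is a standard Django app that
# owns its own models, services, and tests.
INSTALLED_APPS = [
    # Django and third-party apps elided...
    "companies",   # company and worker lifecycle
    "payments",    # payments processing
    "payrolls",    # payroll configuration and processing
    "risk",        # fraud detection and credit risk
    # ...one app per subdomain
]
```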
As Check scaled, we felt common monolith pains: bugs from implicit data flow, slower development cycles, more complex tests, and fuzzy ownership. Many organizations move to microservices at this point, but we knew we could solve these challenges with careful modularization. By thoughtfully incorporating microservice design principles to evolve our monolith, we could gain most of microservices' benefits without their costs. We placed bets first in our payments domain and then in tax filing, bringing multiple teams together to design and migrate to strong service boundaries. That worked well, and we made our call: we're all in.
Microservices in the monolith
It’s easy to ask, “are we just describing good code?” to which the answer is “yes, and”! The real power comes from embracing microservice principles beyond code decomposition, extending to service API design, serialization formats, observability, security, code ownership, CI/CD, testing, and storage.
Here are a few of the most important principles we’ve settled on through debate and iteration:
An application and its data must be bounded to a subdomain. For example, a company in the context of payroll setup works differently than a company in the context of payments. The two contexts overlap, but their attributes, behaviors, and what matters in each differ. Payments and company setup are different subdomains, so their data is bounded accordingly.
Applications cannot have inter-app foreign keys. Database models are private to each subdomain. This keeps the conceptual boundaries strong, creates more predictable performance characteristics, and reduces invisible database-level coupling.
Applications communicate exclusively via services and simple data objects. Cross-app calls go through documented service contracts and exchange JSON-serializable data objects (see the sketch after this list).
An application must be broad enough to achieve a business capability independently and flexibly, and narrow enough for high cohesion. It should be possible to install or remove an application without conceptually invalidating the others. A good thought experiment is to imagine ejecting an app: do all the other apps still make sense?
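Here is a small sketch of what these principles can look like in practice. The model, field, and service names are hypothetical: the payments app stores a plain company identifier rather than a ForeignKey into the companies app, and fetches what it needs through that app's public service.

```python
# payments/models.py (illustrative): no ForeignKey into the companies app.
# We store a plain identifier, keeping the database schemas decoupled.
from django.db import models

class PaymentOrder(models.Model):
    company_id = models.UUIDField()  # not models.ForeignKey("companies.Company")
    amount_cents = models.BigIntegerField()


# payments/service.py (illustrative): cross-app calls go through the other
# app's documented service and exchange simple, JSON-serializable data.
def create_payment_order(company_id: str, amount_cents: int) -> PaymentOrder:
    # Hypothetical public service API exposed by the companies app; it returns
    # a plain data object, never a companies ORM model.
    from companies.service import get_payment_profile

    profile = get_payment_profile(company_id)  # e.g. {"bank_account_verified": True, ...}
    if not profile["bank_account_verified"]:
        raise ValueError("Company bank account is not verified")
    return PaymentOrder.objects.create(company_id=company_id, amount_cents=amount_cents)
```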
To accelerate on this path, we created a microservice-like “service registry” framework in our monolith and supercharged it with linting (similar to packwerk), testing tools, automatic service instrumentation in Datadog, and code-generated HTTP APIs for internal tools, and we established code-ownership requirements around service APIs. It’s how we build services now.
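The framework itself is internal, but a minimal sketch of the registry idea, with hypothetical names, might look like this: a decorator registers each public service entry point, giving linting, instrumentation, ownership checks, and codegen a single hook.

```python
# service_registry.py (illustrative sketch, not our actual framework): services
# register a named, owned entry point; the registry is the shared hook for
# linting, Datadog instrumentation, and HTTP codegen.
from typing import Callable

_REGISTRY: dict[str, Callable] = {}

def service(name: str, owner: str):
    """Register a function as a public service API owned by a team."""
    def decorator(fn: Callable) -> Callable:
        fn.service_owner = owner  # feeds code-ownership checks
        _REGISTRY[name] = fn      # feeds instrumentation and codegen
        return fn
    return decorator


# payments/service.py (illustrative usage)
@service(name="payments.create_payment_order", owner="payments-team")
def create_payment_order(company_id: str, amount_cents: int) -> dict:
    ...
```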
Armed with this toolkit, we deploy updates faster and with less risk. We’re building standard, modular, composable bricks of functionality that can be clicked together quickly to deliver value to partners. Tests pull in less functionality during setup and execute faster. Stable, documented service boundaries and clear team ownership ensure subdomains stop leaking concepts or making risky cross-app assumptions. As long as our service APIs do what they say, teams have the freedom to split, merge, and reorganize code, and deploying everything as one unit lets us ship breaking service changes when needed.
A comprehensive vision for modular monoliths
While we’ve built strong foundations for our modular monolith tooling, our vision for this paradigm looks out further. This includes:
Test isolation: Running only the tests relevant to the applications that changed, speeding up development and CI pipelines (see the sketch after this list).
Independent persistence options: Application isolation unlocks independent persistence for scenarios that call for unique persistence solutions—high-volume writes, columnar databases, low-latency reads, etc.
Autogenerated HTTP servers and clients: Like gRPC, using each service’s stable, typed interface to cut out boilerplate and ensure compatibility.
Dependency visibility enforcement: Automated linting to ensure each application imports only from other applications’ permitted services.
Dependency visualization: Building a real-time map of which services depend on which, so we can plan re-architectures with confidence.
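For example, a first cut at test isolation could map changed files to their owning applications and run only those apps’ test suites. Here is a rough sketch, assuming one top-level directory per Django app; a real version would also need to follow the service dependency graph so that downstream apps are tested too.

```python
# select_tests.py (illustrative): run only the test suites of applications
# touched by a change, assuming one top-level directory per Django app.
import subprocess

APPS = {"companies", "payments", "payrolls", "risk"}

def changed_apps(base: str = "origin/main") -> set[str]:
    """Map changed file paths to their owning top-level application."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return {path.split("/")[0] for path in out.splitlines()} & APPS

if __name__ == "__main__":
    apps = changed_apps()
    if apps:
        subprocess.run(["pytest", *sorted(apps)], check=True)
```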
Unlocking workflow orchestration
By embracing the modular monolith, we’ve also unlocked safer orchestration of complex, multi-step processes, called workflows, that touch multiple subdomains. For example, quarter-end payroll adjustments span at least pay calculation, tax liabilities, and payments, and may need to pause and resume around human intervention. Since each domain enforces its own interface, we can define a long-running workflow that calls these modules in sequence and persists workflow state, without tangling them all together in overloaded functions or duct-taped async jobs.
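Here is a minimal sketch of what such a workflow could look like with Temporal’s Python SDK. The activity names are hypothetical stand-ins; each would wrap a call into one subdomain’s service API.

```python
# workflows.py (illustrative): a long-running quarter-end adjustment workflow
# sketched with Temporal's Python SDK. Activity names are hypothetical.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def recalculate_pay(company_id: str) -> None:
    ...  # call into the payrolls app's service API

@activity.defn
async def adjust_tax_liabilities(company_id: str) -> None:
    ...  # call into the tax domain's service API

@activity.defn
async def issue_corrective_payments(company_id: str) -> None:
    ...  # call into the payments app's service API

@workflow.defn
class QuarterEndAdjustmentWorkflow:
    def __init__(self) -> None:
        self.approved = False

    @workflow.signal
    def approve(self) -> None:
        self.approved = True

    @workflow.run
    async def run(self, company_id: str) -> None:
        timeout = timedelta(minutes=10)
        await workflow.execute_activity(
            recalculate_pay, company_id, start_to_close_timeout=timeout
        )
        await workflow.execute_activity(
            adjust_tax_liabilities, company_id, start_to_close_timeout=timeout
        )
        # Pause for human sign-off; Temporal durably persists workflow state
        # while we wait for the approve signal.
        await workflow.wait_condition(lambda: self.approved)
        await workflow.execute_activity(
            issue_corrective_payments, company_id, start_to_close_timeout=timeout
        )
```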
Want to learn more? We'll share more about how we build workflows at Temporal Replay on March 5, 2025, in London, UK. Don't miss Sam Wilson's talk, Durable Payroll in a Modular Monolith, where he'll dive into how we build payroll workflows.
The path forward
Our modular monolith isn’t just a technical choice; it’s fundamental to how we serve our diverse, growing ecosystem of partners. By decomposing payroll into discrete domain applications, we can evolve each subdomain independently while moving fast and getting leverage from a unified codebase. New automations, feature requests, and compliance changes can be implemented quickly and reliably, all to empower our partners shaping the future of work.