Performance
Optimizing your merge queue for maximum efficiency.
Tune parallel checks, batch size, and CI scope to balance throughput, reliability, and CI cost.
The Trade-offs: Reliability, Cost, and Velocity (RCV Theorem)
A merge queue can only optimize two of three properties at a time: reliability (merges don’t break main), cost (CI jobs executed), and velocity (throughput and latency). We call this the RCV theorem, analogous to the CAP theorem for data stores.
The three viable combinations:
- Reliability + Velocity: enable parallel speculative checks to test predicted merge states concurrently. Wasted CI runs occur when a PR ahead in the queue fails.
- Reliability + Cost: validate pull requests sequentially. No wasted CI, but throughput is capped at one PR per CI run.
- Velocity + Cost: use batch mode to test groups of PRs as a single unit. Fewer CI runs, but hidden failures inside a passing batch can land on main.
Parallel checks and batching can be combined for a middle ground across all three dimensions.
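As a rough sketch, the three combinations map onto two configuration knobs, max_parallel_checks and batch_size (the values below are illustrative, not recommendations):

```yaml
# Reliability + Velocity: parallel speculative checks, no batching
merge_queue:
  max_parallel_checks: 5
queue_rules:
  - name: default
    batch_size: 1

# Reliability + Cost would set max_parallel_checks: 1 and batch_size: 1
# (strictly sequential); Velocity + Cost would keep max_parallel_checks: 1
# but raise batch_size so several PRs share one CI run.
```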
Determining the Right Configuration for Parallel Checks and Batching
Weigh these factors when picking batch_size and max_parallel_checks:
- Merge throughput and queue latency: target settings that match your historical merges-per-hour and keep PR wait time acceptable.
- Peak load: size for peak developer hours, not averages, so the queue doesn’t back up during busy windows.
- CI capacity and job duration: fast CI and abundant runners tolerate higher parallelism; slow CI or constrained runners need a conservative setting, since reruns on failure are expensive.
- Change stability: repositories with frequent flaky or failing PRs should lower batch size and parallelism to limit wasted work.
- Team distribution: globally distributed teams produce a steady PR flow; concentrated teams spike and need headroom for bursts.
Performance Configuration Calculator
Provide the inputs below and the calculator will suggest a configuration:
- CI time in minutes: average duration of a CI run.
- Estimated CI success ratio in %: how often CI passes (e.g. 95 if 95 out of 100 runs succeed).
- Desired PRs to merge per hour: target throughput.
- Desired CI usage in %: 100% matches a standard queue. Below 100% favors batching to conserve CI; above 100% favors parallel checks for lower latency at higher CI cost.
Optimizing Merge Queue Time with Efficient CI Runs
Every minute shaved off CI compounds across every queued PR. Run only the tests required for the change under test.
The Two-step CI method splits validation into:
- Preliminary tests: fast checks run when a PR is created or updated, gating entry into the queue.
- Pre-merge tests: exhaustive checks run just before merge.
This keeps queue-time CI short without sacrificing final-merge confidence.
Combining Batch Merging and Parallel Checks
Batch merging tests multiple PRs as a single unit. Parallel checks run several batches concurrently. Together, they maximize CI utilization: if any PR in a batch fails, Mergify binary-searches for the culprit, removes it, and continues processing.
```yaml
merge_queue:
  max_parallel_checks: 2

queue_rules:
  - name: default
    batch_size: 3
    # ...
```

With batch_size: 3 and max_parallel_checks: 2, Mergify runs up to 2 batches of up to 3 PRs each in parallel. Given 7 queued PRs and a 10-minute CI pipeline, the first 6 merge in 10 minutes instead of the hour required for sequential validation.
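The arithmetic behind that example can be checked with a short sketch (assuming every CI run passes, so no bisection reruns are needed):

```python
import math

def queue_drain_minutes(prs: int, batch_size: int,
                        parallel_checks: int, ci_minutes: int) -> int:
    """Minutes until all queued PRs merge, assuming every CI run passes."""
    batches = math.ceil(prs / batch_size)          # CI runs needed
    waves = math.ceil(batches / parallel_checks)   # sequential waves of runs
    return waves * ci_minutes

# 6 PRs, batch_size=3, max_parallel_checks=2, 10-minute CI:
# two batches fit in one parallel wave, so everything merges in 10 minutes.
print(queue_drain_minutes(6, 3, 2, 10))  # -> 10
print(queue_drain_minutes(6, 1, 1, 10))  # sequential -> 60
```

With a 7th PR queued, a second wave is needed, so the full queue drains in 20 minutes rather than 70 sequentially.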