In this page
Scalability
Performance estimation
Techniques
Disable the unused DevOps automation rules
Disable the DevOps automation rules at the initial import of large repositories
Rate limiting
Limiting the number of accepted commits
Limiting the execution of automation rules
Overview
This page describes how to tune Better DevOps Automation for improved performance, especially when it is used with a massive number of automation rules and massive changesets. By "massive" we mean hundreds of automation rules and changesets containing hundreds of commits.
If you work with smaller workloads, you are unlikely to face performance problems. In that case, you can skip this page and leave every setting at its default value.
Scalability
Pushing a changeset with a large number of commits can result in a large number of automation rule executions within a short timeframe. This section explains what to expect when it happens.
First off, it's important to understand the queue architecture behind rule executions. When a changeset "arrives" from the Version Control System:
- The Better DevOps Automation app collects all data about the VCS changeset.
- Depending on the content of the changeset ("what was changed and how?"), the Better DevOps Automation app fires the events that would trigger DevOps automation rule executions. Each event carries its custom-tailored data payload. For example, the "branch created" event carries the branch name.
- The Automation for Jira app inserts these events into its internal rule processing queue. Each item in this queue represents a pending rule execution. In other words, the queue is a buffer for the "work to be done".
When new items are added to the queue, the Automation for Jira app picks one item out of the queue and starts the corresponding rule using a so-called worker thread. If there is no free worker thread available, it will wait for one to become free. If there are multiple worker threads available, multiple rules can be executed in parallel.
This way, the queue effectively smooths out the spiky workload generated by a massive changeset!
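To make this buffering behavior more tangible, here is a minimal sketch in Python. It is purely illustrative (not the apps' actual implementation): it models a rule processing queue drained by a fixed pool of worker threads, with the worker count, the burst size and the 50 ms per-rule cost chosen as assumptions.

# Illustrative model of the rule processing queue (not the actual Automation for Jira code).
import queue
import threading
import time

WORKER_COUNT = 4               # hypothetical number of worker threads
rule_queue = queue.Queue()     # the buffer for the "work to be done"

def worker():
    while True:
        event = rule_queue.get()   # blocks until a pending rule execution is available
        if event is None:          # sentinel value: no more work
            rule_queue.task_done()
            break
        time.sleep(0.05)           # stand-in for executing one automation rule (~50 ms)
        rule_queue.task_done()

# Start the worker pool; one rule runs per free worker, so rules execute in parallel.
threads = [threading.Thread(target=worker) for _ in range(WORKER_COUNT)]
for t in threads:
    t.start()

# The producer (the DevOps app firing events) enqueues a spike of pending rule executions.
for i in range(400):
    rule_queue.put({"event": "commit created", "index": i})

rule_queue.join()              # the queue smooths out the spike; wait until it is drained
for _ in threads:
    rule_queue.put(None)
for t in threads:
    t.join()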
Performance estimation
It may not be obvious at first sight, but there are multiple factors that determine the number of DevOps automation rule executions. These are:
- The number of automation rules configured with DevOps triggers:
- number of rules configured with the Changeset Accepted trigger
- number of rules configured with the Changeset Rejected trigger
- number of rules configured with the Branch Created trigger
- number of rules configured with the Tag Created trigger
- number of rules configured with the Commit Created trigger
- number of rules configured with the Genius Commit Created trigger
- The complexity of the changeset received:
- number of branches created
- number of tags created
- number of commits created
- number of Genius Commits created
- The number of issues linked through the changeset:
- number of issues linked through branch names
- number of issues linked through tag names
- number of issues linked through commit messages
Let's see a concrete example of executing a fairly large automation rule list triggered by a fairly large changeset!
First, let's say we have an automation rule list like this:
- 3 rules with the Changeset Accepted trigger
- 2 rules with the Branch Created trigger
- 2 rules with the Tag Created trigger
- 3 rules with the Commit Created trigger
- 8 rules with the Genius Commit Created trigger
Let's assume that developer "joe" finished the implementation of a user story. He committed the changes to his local repository as he worked, and now he's pushing the complete changeset in one go. The changeset he's pushing contains:
- 2 new branches
- a new tag
- 100 commits
- 10 Genius Commits (out of those 100), each referencing one issue
The total number of rule executions will be:
3 Changeset Accepted triggers +
2 branches * 2 Branch Created triggers +
1 tag * 2 Tag Created triggers +
100 commits * 3 Commit Created triggers +
10 Genius Commits * 1 issue * 8 Genius Commit Created triggers =
--------------------
3 + 4 + 2 + 300 + 80 =
389 rule executions
Let's assume that one automation rule completes in an average of 50 milliseconds. (Some may terminate immediately due to an unmet condition, some may do low-cost asynchronous work, while others may do more expensive work.)
Then the total work would take 389 * 0.05 = 19.45 seconds if executed serially.
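For reference, here is the same back-of-the-envelope estimate expressed as a short Python calculation. The figures are those of the example above, and the 50 ms average per rule is an assumption.

# Rule executions triggered by the example changeset above.
rules = {"changeset_accepted": 3, "branch_created": 2, "tag_created": 2,
         "commit_created": 3, "genius_commit_created": 8}

branches, tags, commits = 2, 1, 100
genius_commits, issues_per_genius_commit = 10, 1

executions = (rules["changeset_accepted"]
              + branches * rules["branch_created"]
              + tags * rules["tag_created"]
              + commits * rules["commit_created"]
              + genius_commits * issues_per_genius_commit * rules["genius_commit_created"])

avg_rule_seconds = 0.05                  # assumed average of 50 ms per rule execution
print(executions)                        # 389
print(executions * avg_rule_seconds)     # 19.45 seconds of total (serial) work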
As rules are executed in parallel, in reality the whole workload will complete in a few seconds. And as the execution is completely asynchronous, the developer is not blocked for that period. He can continue working right after pushing the changeset.
This number of items may seem like a lot, but the Better DevOps Automation app (the producer of the events), Jira's internal event management facilities (the transmitter of the events) and the Automation for Jira app (the consumer of the events) can easily handle tens of thousands of events, depending on the system configuration.
See the rate limiting section for more details on protecting the system from overload.
Techniques
Disable the unused DevOps automation rules
If you have automation rules that are not in "active use" anymore, it's better to disable or delete them.
"Active use" can mean non-trivial situations. For example, if you have an automation rule that contains a condition that allows executing the rule only in repository "A", but "A" is frozen (not accepting new changes anymore), then the rule is effectively unused! Even in this case, source code changes pushed to the repositories will trigger the rule which will then stop when it reaches the condition, but executing the rule up to that point was just waste of resources.
Disable the DevOps automation rules at the initial import of large repositories
It is rather unlikely that you want to run DevOps automation rules when you import a large repository (with a lot of files, long history, etc.). It would generate noise by sending out unwanted notifications, for example. Even worse, it would put unnecessary load on your Jira instance.
To avoid this, disable the automation rules configured with a DevOps trigger before the import, then re-enable them afterwards. (It can be done with just one click per rule.)
Rate limiting
There are two points in the rule execution chain where upper limits are imposed on the execution rate.
Limiting the number of accepted commits
The Better Commit Policy for Bitbucket app does not accept changesets that contain more than 1000 commits. In order to push a massive changeset, you have to do it in batches smaller than this limit.
By putting an upper limit on the size of the changeset, the app also limits the number of events that can trigger automation rules.
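If an existing history is larger than this limit, one possible approach is to push it in several smaller batches. The sketch below (Python invoking standard git commands) is only a starting point: the remote name, branch name and batch size are assumptions, and it presumes the remote branch already exists.

# Push the local "main" branch to "origin" in batches of at most 500 commits,
# so every changeset stays well below the 1000-commit limit.
# Remote name, branch name and batch size are assumptions; adjust them to your setup.
import subprocess

REMOTE = "origin"
BRANCH = "main"
BATCH_SIZE = 500

# Commits that exist locally but not yet on the remote, oldest first.
result = subprocess.run(
    ["git", "rev-list", "--reverse", f"{REMOTE}/{BRANCH}..{BRANCH}"],
    capture_output=True, text=True, check=True)
commits = result.stdout.split()

# Push every BATCH_SIZE-th commit, then finally the branch tip.
for i in range(BATCH_SIZE - 1, len(commits), BATCH_SIZE):
    subprocess.run(["git", "push", REMOTE, f"{commits[i]}:refs/heads/{BRANCH}"], check=True)
subprocess.run(["git", "push", REMOTE, BRANCH], check=True)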
Limiting the execution of automation rules
The Automation for Jira app offers multiple service limits that apply to all kinds of automation rules, including DevOps ones.
If a rule exceeds any of the service limits, further executions will be skipped and marked with the "THROTTLED" status.
Questions?
Ask us any time.