Guest blog post by Kazuma Nagano. The original article was posted in Japanese.
Kazuma Nagano is an iOS engineer at CyberAgent who works on the iOS team of Tapple, a Japanese dating app.
Workflow before and after tuning
My team now runs automated tests on Bitrise, and the workflow is as follows:
- Clone the repository
- Install Homebrew packages (XcodeGen, linters, etc.)
- Install gem packages (xcov, CocoaPods, danger, fastlane, etc.)
- Install CocoaPods dependencies
- Run tests
The following is a summary of the workflow before acceleration (with the most recent execution times displayed).
We focused on reviewing the settings for Homebrew, gems, CocoaPods, and the app build. By making better use of the cache, we were able to cut the total workflow time by 40-50%, as shown below.
Note that the total runtime shown in the Bitrise summary varies with the time of day and the environment. The summary logs compared in this article were captured around the same time, but other factors may still have had an impact.
In the pre-tuning workflow above, we cached the following directories:
- ./Pods (if Podfile.lock has changed)
- /Users/vagrant/Library/Caches/Homebrew
- ~/Library/Developer/Xcode/DerivedData
In the post-tuning workflow, entries 2 and 3 were removed because they turned out not to be effective.
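As a rough sketch, the list above corresponds to a Cache:Push configuration along these lines in bitrise.yml (the step version is illustrative; the `path -> watched-file` notation is Bitrise's change-indicator syntax):

```yaml
- cache-push@2:
    inputs:
    - cache_paths: |-
        ./Pods -> ./Podfile.lock
        /Users/vagrant/Library/Caches/Homebrew
        ~/Library/Developer/Xcode/DerivedData
```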
Cache function in Bitrise
The Bitrise docs describe the cache functionality:
- The cache tars all cached directories and dependencies and stores them securely in Amazon S3
- If /path/to/cache is specified, /path/to/cache/.ignore-me is also cached
- The cache is managed per branch
- Cache push is not performed on pull request builds
- If no new build runs on that branch, the cache expires and is deleted after 7 days
It can be used by adding two steps to the workflow:
- Bitrise.io Cache:Pull — downloads the previous cache (if any).
- Bitrise.io Cache:Push — checks the state of the cache and uploads it if required.
By adding the following to the Bitrise workflow, you can cache the contents of the selected directories.
You can specify a file to watch for changes, as with Podfile.lock in the example. Even if you don't, the system still detects differences by checking every file.
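For example, a minimal workflow fragment could look like this (step versions are illustrative); the optional `-> file` suffix names the file whose changes indicate that the cached path is stale:

```yaml
workflows:
  primary:
    steps:
    - git-clone@4: {}
    - cache-pull@2: {}   # restore the previous cache, if any
    # ... install and test steps ...
    - cache-push@2:
        inputs:
        - cache_paths: |-
            ./Pods -> ./Podfile.lock
```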
Log when caching is successful
If you check the logs, you can see that every deletion, modification, and addition is verified file by file. Specifying a file to monitor for changes lets you simplify this flow.
Cache settings that actually worked
In this section, I introduce the caching methods we tried and found effective.
In Bitrise Docs - Caching Homebrew installers, the following method is described:
The Brew install Step supports caching: if the option is enabled, any downloaded brew installers will be cached from the location of brew --cache. The cache path is ~/Library/Caches/Homebrew/.
To enable caching of brew installers:
- Go to the Workflow in which you want to cache brew installs and select the Brew install Step.
- Set the Cache option to yes.
- As always, click Save.
This description relies on Homebrew's --cache option. However, this method still runs the installation every time, which takes extra time even when it is not necessary.
By linking the installed binaries directly instead, you can avoid re-installing formulae that are already present.
(I recommend not using the official Brew Install step, because the cache in ~/Library/Caches/Homebrew contributes little time saving.)
Log when the cache is successful
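The "link binaries directly" idea can be sketched as a script step along these lines. Note the assumptions: BREW_BIN_CACHE is a directory we register ourselves as a Bitrise cache path, and BIN_DIR is a writable directory on PATH — neither is an official Bitrise variable, both are named here only for illustration.

```shell
#!/usr/bin/env bash
set -eu

# Assumed locations (not official Bitrise variables):
# BREW_BIN_CACHE - a directory added to the Cache:Push paths
# BIN_DIR        - a writable directory that is on PATH
BREW_BIN_CACHE="${BREW_BIN_CACHE:-$(mktemp -d)/brew-bin-cache}"
BIN_DIR="${BIN_DIR:-$(mktemp -d)/bin}"
mkdir -p "$BREW_BIN_CACHE" "$BIN_DIR"

install_if_missing() {
  local formula="$1"
  if [ -x "$BREW_BIN_CACHE/$formula" ]; then
    # Cache hit: link the previously cached binary instead of reinstalling.
    ln -sf "$BREW_BIN_CACHE/$formula" "$BIN_DIR/$formula"
    echo "restored $formula from cache"
  else
    # Cache miss: install normally, then copy the binary into the cache
    # directory so the next build's Cache:Push step uploads it.
    brew install "$formula"
    cp "$(command -v "$formula")" "$BREW_BIN_CACHE/$formula"
    ln -sf "$BREW_BIN_CACHE/$formula" "$BIN_DIR/$formula"
  fi
}

# Demo with a stub binary so the sketch runs even without Homebrew.
printf '#!/bin/sh\necho stub xcodegen 2.0\n' > "$BREW_BIN_CACHE/xcodegen"
chmod +x "$BREW_BIN_CACHE/xcodegen"
install_if_missing xcodegen
```

This only works for formulae whose usable artifact is a single binary; anything with libraries or resources still needs a real install.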
Also, with this method, if at least one installation runs, brew cleanup will be executed on the same folders every time.
You can check the condition in cleanup.rb in the Homebrew GitHub repository.
The forced cleanup is triggered based on ~/Library/Caches/Homebrew/.cleaned, so it can be avoided by adding this file to the cache targets.
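In other words, alongside the binaries themselves, the marker file gets one more line in the cache configuration (step version illustrative):

```yaml
- cache-push@2:
    inputs:
    - cache_paths: |-
        ~/Library/Caches/Homebrew/.cleaned
```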
In Bitrise Docs - Caching Ruby Gems, the following method is described.
In my local environment, this cache path refers to ~/.rbenv/versions/2.6.3; this is rbenv's installation directory.
The stack provided by Bitrise does not come with Ruby 2.6.3 installed, so this setting was effective.
Also, if you manage .ruby-version in the repository, watch this file when pushing the cache.
In addition, by caching the gem installation directory (./vendor/bundler on Tapple), you can also skip installation during bundle install. Since dependency changes are reflected in Gemfile.lock, specify it as the file to check at push time.
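Putting the two together, the relevant fragment could look like this sketch (the Ruby version and gem path reflect Tapple's setup as described above; step versions are illustrative, and `bundle config set --local path` assumes Bundler 2.x):

```yaml
- script@1:
    title: bundle install into ./vendor/bundler
    inputs:
    - content: |-
        #!/usr/bin/env bash
        set -eu
        bundle config set --local path vendor/bundler
        bundle install
- cache-push@2:
    inputs:
    - cache_paths: |-
        ~/.rbenv/versions/2.6.3
        ./vendor/bundler -> ./Gemfile.lock
```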
Log when the cache is successful
This reduced the time of bundle install by 90%.
In Bitrise Docs - Caching CocoaPods, the following method is described.
Before you start, make sure you have the latest version of the Cocoapods Install Step in your Workflow.
- Open your app’s Workflow Editor.
- Insert the Cache:Pull Step after the Git Clone but before the Cocoapods Install steps.
IMPORTANT: Make sure that your Step is version 1.0.0 or newer. With the older versions, you have to manually specify paths for caching.
- Insert the Cache:Push step to the very end of your workflow.
It seems this would work just by using the official Bitrise Cocoapods Install step.
In my case, however, I checked the log while the official step was running:
- Check if selected Ruby is installed
- $ gem install bundler --force
- bundle install
- pod install
It does a lot of work, and much of it overlaps with the previous steps.
So I run bundle exec pod install in a script step on its own.
If the correct bundler and rbenv are already set up before this step, you can shorten it by simply specifying the Pods directory as the cache path and watching Podfile.lock.
In the pre-tuning workflow, the --repo-update option was passed every time, but it only needs to run when there is actually an update. You can also keep a valid CocoaPods cache.
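One common way to decide whether pod install can be skipped after the cache pull is to compare Podfile.lock with Pods/Manifest.lock, which CocoaPods writes on every install. A sketch (the actual install command is commented out so the check itself is runnable anywhere):

```shell
#!/usr/bin/env bash
set -eu

# If the restored Pods/Manifest.lock matches Podfile.lock, the cached
# ./Pods directory is already in sync and the install can be skipped.
pods_up_to_date() {
  cmp -s Podfile.lock Pods/Manifest.lock 2>/dev/null
}

if pods_up_to_date; then
  echo "Pods cache is current; skipping pod install"
else
  echo "Podfile.lock changed; running pod install"
  # bundle exec pod install   # run the real install on a cache miss
fi
```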
As a result, we were able to save about 85%.
Log when the cache is successful
For further acceleration
When you look at the pre-tuning summary, you can see that the Fastlane Test step runs the longest.
This step does the following:
- Build the app
- Send a notification to Slack
Is there any way to speed up this part using a cache?
Unfortunately, Bitrise currently doesn't support an Xcode build cache, but the _Prebuild output that cocoapods-binary generates for CocoaPods can be cached.
https://github.com/leavez/cocoapods-binary (The introduction is omitted in this article.)
By caching ./Pods/_Prebuild (since it is under Pods, you do not need to specify an additional cache path), you do not need to build the Pods every time.
The build time on CI was greatly reduced: a 30% improvement, and at 5.4 minutes it was the biggest single saving we achieved.
In this article, we introduced the parts of a build that benefit most from Bitrise's cache.
But don't forget that storing and using a cache always carries the risk of inconsistencies somewhere.
If you look at the Bitrise cache log, you'll see that it confirms the stack is the same before pulling.
Measures such as the following are required:
- Set a TTL, e.g. discard the cache on library updates
- Prepare a way to refresh and invalidate the cache at any time
Through this work, I felt it was important to build a relationship with the cache where we depend on it, but not too much.