
Stop Putting Your Kotlin Multiplatform CI Logic in YAML

This project did not start as an attempt to invent a “portable CI architecture” for Kotlin Multiplatform.

It started as a practical effort to get a mobile pipeline under control.

The usual pattern showed up quickly: more logic in GitHub Actions, more conditionals, more environment-specific behavior, more secrets handling, more release steps, and more moments where the answer to “what does this job actually do?” was “open the CI UI and start digging.”

That works for a while, until it does not.

At some point, the YAML stops being orchestration and starts becoming the application. Local reproduction gets harder. Migrating between CI providers gets expensive. Debugging turns into archaeology.

I took a different route: move the job contract into the repository, and let the CI adapter stay thin.

That decision produced a Kotlin Multiplatform mobile CI setup that is easier to run locally, easier to explain, and easier to share with other teams.

Portable KMP CI architecture

The real problem with mobile CI

Kotlin Multiplatform mobile CI is not hard because any single step is unusual. It is hard because too many concerns pile up in the same place: builds, tests, environment-specific behavior, secrets handling, and release delivery.

When all of that gets pushed directly into YAML, the pipeline becomes tightly coupled to the CI product that happens to be running it.

That creates a few predictable problems:

The issue is not YAML itself. The issue is putting too much meaning into it.

One practical note from running this on real runners: the current Amper Android integration expects Java 21. That is why the workflows in this repo now pin Temurin 21 instead of Java 17.

Another practical note: the iOS build-only jobs use a generic iOS Simulator destination, but they also force a single simulator architecture matching the host. That avoids depending on a precreated simulator device while still working around Amper’s current limitation around multi-architecture simulator builds.

The shift that made this manageable

This setup is built around one idea:

CI should describe when a job runs, not what the job means.

Once that principle is applied, the architecture gets much simpler:

  1. GitHub Actions decides when to run a job.
  2. A shared repository script decides what that job means.
  3. Helper scripts prepare the environment the same way everywhere.
  4. Fastlane provides the build and release command layer.
  5. Amper remains the actual build system.

In this repository, those layers map onto concrete pieces: a thin workflow file, the shared dispatcher at scripts/ci/run_job.sh, the helper scripts under scripts/ci/lib/, and the Fastlane lanes they call.

The pipeline now has a stable contract that lives inside the repo.

That contract is a set of portable job names such as android-build-debug, android-test, ios-build-debug, and ios-testflight.

Those names are more valuable than they look. They give a team a shared vocabulary. They let local development and CI talk about the same operations. They make it obvious what belongs in the repo and what belongs in the CI adapter.

Start with Amper, not with a handcrafted demo app

One of the strongest parts of this workflow is that it starts from a generated project instead of a hand-assembled example.

The Amper CLI can scaffold a strong starting point:

mkdir my-kmp-ci-app
cd my-kmp-ci-app
amper init compose-multiplatform

That matters because it gives readers a command they can run, not just a repo they are supposed to copy blindly.

The generated project already includes Android and iOS app modules plus shared code, wired together with Compose Multiplatform.

It also includes a jvm-app/ module. This sample removes that module to keep the public story focused on Android, iOS, and shared code.

That is the trimmed public sample.

What changed from the raw Amper template?

The sample starts from amper init compose-multiplatform, but it does not stop there.

The main edits that turn the generated app into a reusable CI sample are:

That combination keeps the sample honest: it stays close to the generated Amper baseline while still exercising the CI architecture I want to teach.

The sample can regenerate itself

This repo also includes a maintenance command:

./scripts/regenerate_from_amper.sh

That script reruns amper init compose-multiplatform, trims the generated project back to Android + iOS + shared, and then reapplies the project-specific adjustments that this CI setup expects.

It deletes and recreates the generated app layer, while preserving the CI files and release helpers that make this setup work.

That gives the project a much better maintenance story. When Amper changes, the sample can be refreshed from a command instead of being rewritten by hand.
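The core regeneration loop can be sketched roughly like this. This is a hedged sketch, not the actual script: the module directory names, the AMPER_INIT_CMD override, and the regenerate function are illustrative assumptions.

```shell
#!/usr/bin/env sh
# Illustrative sketch of a regenerate-from-template flow (not the real script).
set -eu

# The generator command is overridable so the flow can be exercised without
# the Amper CLI installed; it defaults to the real command.
generate() { ${AMPER_INIT_CMD:-amper init} compose-multiplatform; }

regenerate() {
  tmp_dir="$(mktemp -d)"

  # 1. Re-run the generator into a scratch directory.
  (cd "$tmp_dir" && generate)

  # 2. Trim the module this sample does not ship.
  rm -rf "$tmp_dir/jvm-app"

  # 3. Copy the regenerated app layer back, leaving CI and release helpers alone.
  for dir in android-app ios-app shared; do   # module names are illustrative
    rm -rf "$dir"
    [ -d "$tmp_dir/$dir" ] && cp -R "$tmp_dir/$dir" "$dir"
  done

  rm -rf "$tmp_dir"
  echo "Regenerated app layer from the Amper template."
}
```

The important property is the split: the generated layer is disposable and rebuilt from scratch, while the CI contract lives outside the directories the loop touches.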

What “thin CI” actually looks like

Once the real job logic moves into the repo, the GitHub Actions workflow gets surprisingly boring.

That is a good thing.

A typical job becomes little more than:

- uses: actions/checkout@v4
- uses: actions/setup-java@v4
  with:
    distribution: temurin
    java-version: "21"
- uses: ruby/setup-ruby@v1
  with:
    bundler-cache: true
- uses: android-actions/setup-android@v3
- name: Build Android debug
  run: ./scripts/ci/run_job.sh android-build-debug

At that point, the YAML is doing exactly what it should do: checking out the code, provisioning toolchains, and invoking a named repository job.

And it is not doing the things it should not do: encoding build steps, branching on environment details, or defining what a job means.

That is the difference between orchestration and implementation.

The portability claim should be visible in the repo

One of the easiest ways to undermine a “portable CI” story is to only publish one CI adapter.

That is why I include both a GitHub Actions workflow and a GitLab CI file.

They are deliberately boring in the same way. They define orchestration details, then they both call the same shared dispatcher:

./scripts/ci/run_job.sh android-build-debug
./scripts/ci/run_job.sh android-test
./scripts/ci/run_job.sh ios-build-debug

That matters because portability is no longer just a design claim. It is visible in the repository layout itself.
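As a sketch of what the GitLab side can look like: the job names are the real ones from this post, while the image tag and overall shape are illustrative assumptions rather than the repo's actual file.

```yaml
# .gitlab-ci.yml sketch (image and structure are illustrative)
android-build-debug:
  image: eclipse-temurin:21
  script:
    - ./scripts/ci/run_job.sh android-build-debug

android-test:
  image: eclipse-temurin:21
  script:
    - ./scripts/ci/run_job.sh android-test
```

The point is not the exact keys. It is that each script section stays a one-liner into the same dispatcher that GitHub Actions calls.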

The dispatcher is where the pipeline becomes understandable

The shared entrypoint is scripts/ci/run_job.sh.

It answers the question every pipeline eventually needs to answer clearly:

“What does this job actually do?”

Here is the shape of it:

job_name="${1:?usage: run_job.sh <job-name>}"

case "${job_name}" in
  android-build-debug)
    ci_prepare_android_job
    ./scripts/ci/run_fastlane_with_amper_logs.sh buildDebug
    ;;
  ios-testflight)
    ci_prepare_ios_testflight_job
    bundle exec fastlane ios uploadTestFlight
    ;;
  *)
    echo "Unknown job: ${job_name}" >&2
    exit 1
    ;;
esac

That is dramatically easier to reason about than chasing behavior across a CI file full of conditionals, environment mappings, and inline shell.

It also means a developer can run the exact same job locally without faking an entire CI environment.

The helper scripts do the quiet work that usually clutters pipelines

Most of the portability comes from the helper layer under scripts/ci/lib/.

That layer is responsible for the quiet work itself: detecting where the job is running, preparing toolchains and caches, and installing the Bundler version that Gemfile.lock requires.

That gives the rest of the pipeline stable concepts, such as a consistent Amper bootstrap cache location and a known Ruby and Java setup.

Once those values are normalized, the actual job logic stops caring whether it is running in GitHub Actions or a local shell session.
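A sketch of what one such helper can look like. The function name is illustrative; GITHUB_ACTIONS=true and GITLAB_CI=true are the real marker variables the two providers set.

```shell
#!/usr/bin/env sh
# Illustrative helper: normalize "where am I running?" into one stable value.
# GITHUB_ACTIONS and GITLAB_CI are set to "true" by the respective providers.
ci_detect_provider() {
  if [ "${GITHUB_ACTIONS:-}" = "true" ]; then
    echo "github"
  elif [ "${GITLAB_CI:-}" = "true" ]; then
    echo "gitlab"
  else
    echo "local"
  fi
}
```

Job scripts can then branch on one normalized value instead of probing provider-specific variables all over the pipeline.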

Separating validation from release makes the pipeline calmer

One of the best decisions in this setup is to keep normal CI validation separate from release delivery.

For Android, that means separate jobs for debug builds and tests on one side, and release signing plus Play delivery on the other.

For iOS, it means separating simulator build-only validation from signed archives and TestFlight uploads.

That split is not just organizational neatness. It keeps normal pull request feedback from depending on Apple signing or release credentials. It makes store delivery something deliberate instead of something every commit has to survive.

Secrets are materialized at runtime, not stored in the repo

This repository does not commit signing files or API keys.

Instead, release-oriented jobs materialize them at runtime through small helper scripts:
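One common shape for such a helper, as a hedged sketch: the secret name, output path, and base64 transport here are illustrative assumptions, not necessarily what this repo does.

```shell
#!/usr/bin/env sh
# Illustrative sketch: materialize a CI secret into an on-disk signing file.
# ANDROID_KEYSTORE_B64 is an assumed secret name holding a base64-encoded keystore.
set -eu

materialize_keystore() {
  : "${ANDROID_KEYSTORE_B64:?set ANDROID_KEYSTORE_B64 as a CI secret}"
  out_dir="${1:-build/secrets}"          # default output path is illustrative
  mkdir -p "$out_dir"
  printf '%s' "$ANDROID_KEYSTORE_B64" | base64 -d > "$out_dir/upload.keystore"
  chmod 600 "$out_dir/upload.keystore"
  echo "$out_dir/upload.keystore"
}
```

The output directory should stay in .gitignore, and release jobs should remove the file once the signing step is done.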

There is still one unavoidable caveat on the iOS side: Apple certificates and provisioning profiles have to exist on the macOS runner that performs the archive. A repository can materialize API keys, but it cannot replace proper host-level signing setup.

That is not a flaw in the design. It is just the reality of Apple delivery workflows.

Fastlane still makes sense here

Fastlane fits well into this arrangement because it sits at a natural boundary.

It is not trying to be the CI orchestrator. It is not trying to replace the build system. It is simply the command layer between the repository scripts and the platform-specific delivery steps.

That keeps the responsibilities clean: CI decides when jobs run, repository scripts decide what they mean, Fastlane provides the delivery commands, and Amper does the building.

Local reproduction stopped being an afterthought

Because the job contract lives in the repository, the same jobs that CI runs can also run locally:

export AMPER_BOOTSTRAP_CACHE_DIR="$PWD/.amper-cache"
./scripts/ci/run_job.sh android-build-debug
./scripts/ci/run_job.sh android-test
./scripts/ci/run_job.sh ios-build-debug

For current Amper-based Android builds, local reproduction also assumes JDK 21. On self-hosted macOS runners, I also expect a host-installed Ruby, preferably from Homebrew, and let the shared CI helper install the Bundler version required by Gemfile.lock.

The iOS build jobs here use a generic iOS Simulator destination for xcodebuild, and the CLI wrapper keeps SWIFT_ENABLE_EXPLICIT_MODULES=NO for the command-line path. This repo still preserves a single simulator architecture override through ONLY_ACTIVE_ARCH and ARCHS, because the current Amper-backed Kotlin build phase rejects multi-architecture simulator builds. That keeps the build independent from a precreated device while also respecting Amper’s current limitation. The remaining requirement is still that the runner has an iOS Simulator runtime installed in Xcode.

That distinction matters on GitHub-hosted macOS images. A repository can avoid depending on a specific device, but it cannot force the hosted image to ship a Simulator runtime that is not installed. When that happens, the remaining fix is to select an image or Xcode version that includes a runtime, or to install one explicitly as part of runner provisioning.
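The single-architecture override described above can be sketched like this. The wrapper shape is illustrative; the build settings are the ones named in this post.

```shell
#!/usr/bin/env sh
# Illustrative: derive the single-arch simulator overrides from the host CPU.
set -eu

host_arch="$(uname -m)"   # arm64 on Apple Silicon hosts, x86_64 on Intel hosts
sim_flags="ONLY_ACTIVE_ARCH=YES ARCHS=${host_arch} SWIFT_ENABLE_EXPLICIT_MODULES=NO"
echo "$sim_flags"

# A build-only invocation would then look roughly like:
#   xcodebuild build \
#     -destination 'generic/platform=iOS Simulator' \
#     $sim_flags
```

Deriving the architecture from the host keeps the override correct on both Apple Silicon and Intel runners without hardcoding either.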

That changes debugging completely.

If a shared job works locally, then most remaining failures are usually much narrower: runner provisioning, missing credentials, or host toolchain differences.

That is a much better place to debug from.

Roll it out slowly

Even with a cleaner architecture, release workflows are still release workflows. A sensible rollout looks like this:

  1. Get Android debug build working locally.
  2. Get Android tests working locally.
  3. Get iOS debug build working locally.
  4. Move those jobs into CI.
  5. Add Android release signing.
  6. Publish manually to Play internal testing.
  7. Add iOS archive signing on the macOS runner.
  8. Upload manually to TestFlight.
  9. Add promotion flows only after the basics are stable.

That order keeps the learning curve manageable and avoids conflating pipeline design problems with store-delivery complexity.

The part worth copying

The most useful thing here is not a specific Actions feature, a specific Fastlane lane, or a specific Amper command.

It is the decision to stop treating the CI provider as the home of the build logic.

Once the job contract moved into the repository, a lot of problems got smaller: local reproduction, provider migration, debugging, and explaining the pipeline to other teams.

That is the part worth copying.

Not the exact YAML. Not the exact project structure. Not the exact runner label.

The idea.

Let CI orchestrate. Let the repository define the jobs.

That shift makes a Kotlin Multiplatform CI setup feel less like a collection of fragile automation and more like an actual system that can be understood, debugged, and shared.

If the practical reproduction steps are the priority, start with the README. If the maintenance story is the priority, start with ./scripts/regenerate_from_amper.sh.

Trade-offs and when this approach is worth it

This approach is not universally better. It is a deliberate trade-off that favors portability, local reproducibility, and long-term maintainability over short-term CI-provider convenience.

Increased upfront design effort

A repository-defined pipeline requires a clearer execution contract from the start. Compared to writing provider-specific CI YAML directly, this usually means more initial design effort.

The cost is front-loaded, but the payoff is a delivery model that is easier to evolve over time.

More responsibility in the repository

Moving delivery logic into the repository means scripts and orchestration code become part of the system design. That improves portability, but it also means these scripts need to be treated with the same care as production code.

That includes reviewing changes carefully, keeping the scripts readable, and testing them the way other production code is tested.

Less dependence on CI-native conveniences

A thin CI model intentionally reduces reliance on provider-specific features. That can mean giving up some convenience in exchange for portability.

This approach does not try to maximize what GitHub Actions or GitLab CI can do individually. It tries to minimize how much the pipeline depends on either of them.

Team adoption and learning curve

Teams used to YAML-centric pipelines may need time to adapt to a repository-first mental model. The shift is conceptual as much as technical: the pipeline is no longer “something in CI”, but part of the repository’s executable contract.

Not always necessary

For small projects, short-lived codebases, or teams with no need to move across environments, a simpler provider-specific CI setup may be perfectly adequate.

This approach becomes more valuable when the codebase is long-lived, when moving between CI providers is a realistic possibility, or when multiple teams need to share the same delivery setup.

Design intent

This model intentionally optimizes for portability, local reproducibility, and long-term maintainability.

That makes it a good fit for engineering teams that treat delivery as part of the software system itself, not just as CI configuration.

View source on GitHub