Jenkins is the way to cast the magic of continuous delivery

Unifying international teams with a shared CI/CD process

Submitted By Jenkins User Oleg Tikhonov
Delivering code changes more frequently and reliably through a unified continuous integration and continuous delivery pipeline allows DevOps teams to find and fix issues more easily and to onboard new projects and teams.
Organization: One of the world’s leading semiconductor IP companies, producing IoT technologies that reach 70% of the global population
Industries: Internet of Things
Programming Languages: C/C++, Python, Assembly
Version Control Systems: GitHub
Community Support: Jenkins.io websites & blogs, spoke with colleagues and peers

Improving product quality via a unified CI/CD process for DevOps.

Background: As one of the world’s leading semiconductor IP companies, the firm I work for is rapidly advancing IoT technologies by designing and developing the integral platforms, sensors, and subsystems that drive IoT performance. We enable a trillion connected devices and help enterprises secure devices from chip to cloud. I work out of offices in Israel, but we have both teams and clients internationally. We wanted to make our release cycles more reliable by running all kinds of tests on hardware boards, along with code-standards and coverage checks. Our builds run in parallel, with everything defined as code. We use Kubernetes (K8s) as the cluster orchestrator and Jenkins’ locking mechanism to share hardware resources. Each commit to a Git repository triggers the full flow end to end.
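
To make the setup concrete, here is a minimal sketch of a commit-triggered build running in an ephemeral pod on the K8s cluster, assuming the Jenkins Kubernetes plugin; the container image and entry script are hypothetical placeholders, not our exact configuration.

```groovy
// Minimal sketch, assuming the Jenkins Kubernetes plugin: each build runs in
// an ephemeral pod scheduled on the Kubernetes cluster.
podTemplate(containers: [
    // Hypothetical toolchain image; kept alive so steps can run inside it.
    containerTemplate(name: 'toolchain',
                      image: 'registry.example.com/toolchain:latest',
                      command: 'sleep', args: '99d')
]) {
    node(POD_LABEL) {
        container('toolchain') {
            checkout scm        // the commit that triggered the flow
            sh './ci/build.sh'  // hypothetical entry point to the full flow
        }
    }
}
```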

Goals: Unifying CI/CD processes across international teams.

"Jenkins is an awesome tool that defines what CI/CD should look like. It’s integrated pretty well to modern cloud-based infrastructure, has many plugins and a supportive community."
— Oleg Tikhonov, DevOps & Infrastructure Architect

Solution & Results: The trigger of flow is done by a git webhook when the commit is sent. The first step of the pipeline creates the job’s metadata containing the commit id, Gerrit project, and Gerrit branch and build number. The second step fetches a source code on a specified slave. And the next parses a configuration file containing permutations of sub-flows, creating sub-folders for each permutation and copies the sources to each of them. 

At this point, we run a Docker container for each permutation, with the sub-folders created in the previous step mapped into it. The container holds all the required scripts and toolchains to perform a build.
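
Something like the following, assuming the Docker Pipeline plugin; the image name and build script are placeholders, and the literal permutation list stands in for the parsed configuration.

```groovy
// Build every permutation in parallel, each in its own container with the
// permutation's sub-folder mounted as the working directory.
def permutations = ['debug', 'release']  // stand-in for the parsed config
def branches = [:]
permutations.each { p ->
    branches[p] = {
        node('build') {
            docker.image('registry.example.com/toolchain:latest')
                  .inside("-v ${env.WORKSPACE}/${p}:/work -w /work") {
                sh './build.sh'  // hypothetical per-permutation build script
            }
        }
    }
}
parallel branches
```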

Essentially, we defined an interface consisting of operations such as build, install_test, run_test, release, and upload_artifacts. A developer implements this interface so the pipeline can perform these operations on any project.
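
One hedged way to realize such an interface is to have each project supply a script per operation at a conventional path and let the pipeline invoke them in order; the ci/ directory layout here is an assumption, not our exact convention.

```groovy
node('build') {
    // Each operation maps to a developer-supplied script implementing the
    // interface; the ci/ path is a hypothetical convention.
    ['build', 'install_test', 'run_test', 'release', 'upload_artifacts'].each { op ->
        stage(op) {
            sh "./ci/${op}.sh"
        }
    }
}
```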

After the build steps, we copy all the built binaries onto boards and run all the tests. While copying and running tests, we use the Jenkins locking mechanism: for example, to copy an executable file, Jenkins locks a particular board in the pool, and after the task completes, it releases the board back to the pool.
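
A sketch with the Lockable Resources plugin is shown below; the board label, file paths, and scp/ssh commands are hypothetical.

```groovy
// Take any one free board carrying the 'iot-board' label; the lock is
// released automatically when the block exits, returning the board to the pool.
lock(label: 'iot-board', quantity: 1, variable: 'BOARD') {
    sh "scp build/firmware.bin ${env.BOARD}:/tmp/"   // copy the executable to the board
    sh "ssh ${env.BOARD} /tmp/run_tests.sh"          // run the tests on the board
}
```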

The last step in the pipeline is the aggregation of all logs, which are then sent to Splunk. Splunk indexes them and creates dashboards showing related information, such as failed tests with links to error pages. If the pipeline succeeds, Gerrit is notified that the commit passed all requirements and the merge can be completed. In either case, an email notification is sent to the team members.
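
The wrap-up might look like the sketch below, assuming the Splunk plugin forwards console logs and archived artifacts on its own and the Gerrit Trigger plugin reports the verdict back to Gerrit automatically; the log path and email address are placeholders.

```groovy
node('build') {
    try {
        // ... build and test stages from the sketches above ...
    } finally {
        // Archive aggregated logs so the Splunk plugin can pick them up.
        archiveArtifacts artifacts: 'logs/**', allowEmptyArchive: true
        // Notify the team by email in every case.
        mail to: 'team@example.com',
             subject: "Build ${env.JOB_NAME} #${env.BUILD_NUMBER}: ${currentBuild.currentResult}",
             body: "Details: ${env.BUILD_URL}"
    }
}
```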

The Jenkins capabilities we used include Lockable Resources, Splunk, Blue Ocean, Gerrit Trigger, Configuration as Code, LDAP, Pipeline, and Repo, to name a few.

Results:

  • Unified CI/CD process 

  • Improved product quality 

  • Much easier to find and fix issues 

  • Easier to manage 

  • Easier to add new projects and teams