Jenkins

Automation server basics — controller, agents, pipelines as code, and how it fits with Git in a delivery workflow


What is Jenkins?

Jenkins is an open-source automation server. You define jobs or pipelines that run when something changes — usually after code lands in Git — so builds, tests, and deployments happen the same way every time.

Git (push / tag) → Jenkins (webhook or polling) → build → test → deploy / promote

Core pieces

| Piece | Role |
|---|---|
| Controller | Web UI, stores job configs, schedules work, serves artifacts |
| Agent (node) | Machine or container that runs build steps (can be Linux, Windows, or a cloud label) |
| Job / Pipeline | The automation: checkout, compile, test, publish, deploy |
| Plugins | Extend Jenkins (Git, Docker, credentials, cloud agents, notifications) |

Controller and agent (worker) nodes

Jenkins splits orchestration from execution. The machine that runs the Jenkins service is the controller (sometimes still called "master" in old docs). Machines or containers that run your pipeline steps are agents — the same idea people call worker nodes or build agents.

 Developers ───► ┌─────────────────────┐
 (Git webhooks)  │     Controller      │
                 │  UI, job config,    │
                 │  queue, scheduling  │
                 └──────────┬──────────┘
                            │ assigns work
          ┌─────────────────┼─────────────────┐
          ▼                 ▼                 ▼
    ┌──────────┐      ┌──────────┐      ┌──────────┐
    │  Agent   │      │  Agent   │      │  Agent   │
    │ (Linux)  │      │ (Windows)│      │ (Docker) │
    └──────────┘      └──────────┘      └──────────┘
         build / test / deploy steps run here

Controller — what it does

  • Hosts the web UI and REST API.
  • Stores job definitions, credentials metadata, and global configuration.
  • Holds the queue: when a job is triggered, the controller decides which agent (if any) runs it.
  • Runs plugins that extend the UI, SCM integration, and pipeline DSL.
  • Often keeps build history and fingerprints (metadata); heavy artifacts may live on agents or external storage.

You can run trivial jobs on the built-in node (the controller's own JVM), but best practice is to disable or restrict executors on the controller so builds do not compete with scheduling and UI work.

Agents (workers) — what they do

  • Execute pipeline stages: sh, bat, Docker steps, tests, deployments.
  • Can be permanent VMs, ephemeral cloud instances, Kubernetes pods, or Docker agents spun up per build.
  • Are registered in Manage Jenkins → Nodes with a name and optional labels (for example linux, docker, gpu).

Agents connect to the controller (outbound), which fits typical firewall rules: workers initiate the channel; you do not need the controller to SSH into every datacenter.
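For an inbound agent, Jenkins shows the connection command on the node's page in the UI. A rough sketch of what it looks like (host, node name, and secret below are placeholders, not real values):

```sh
# Run on the agent machine; agent.jar is downloaded from the controller.
# The agent dials OUT to the controller, so only outbound access is needed.
java -jar agent.jar \
  -url https://jenkins.example.com/ \
  -name linux-01 \
  -secret <secret-from-node-page> \
  -workDir /home/jenkins/agent
```

SSH-launched agents work the other way around (the controller connects to the agent), so pick the launch method that fits your network rules.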

Naming you will see

| Term | Meaning |
|---|---|
| Agent | Current Jenkins term for a worker that runs builds. |
| Node | Any machine entry in Jenkins (controller + agents). |
| Executor | One concurrent build slot on a node (you can set several per agent). |
| Label | Tag used in pipelines (agent { label 'docker' }) to pick agents. |
| Slave | Deprecated old name for agent — avoid in new docs. |

Picking an agent in a pipeline

  • **agent any** — first available executor on any agent (or built-in if allowed).
  • **agent { label 'linux' }** — only agents with that label.
  • **agent { docker { image 'node:20' } }** — run inside a container on a Docker-capable agent.

More agents and more executors per agent raise parallelism; the controller still coordinates but should stay lightly loaded.


Job and project types

Jenkins groups automation into jobs (sometimes called projects in the UI). The type you pick affects whether configuration lives in the UI, in a Jenkinsfile, or both.

| Type | What it is | Typical use |
|---|---|---|
| Freestyle project | Classic job: SCM, triggers, and build steps configured in the Jenkins UI (shell, Ant, Maven, batch tasks, post-build actions). | Quick one-off jobs, legacy setups, or teams not using pipeline-as-code. |
| Pipeline | Single pipeline tied to one branch (or a fixed ref); the definition usually lives in a Jenkinsfile in the repo. | One main branch, release branch, or a job that always runs the same pipeline file. |
| Multibranch Pipeline | One folder-style item that discovers branches (and often PRs) from Git; each branch can run its own Jenkinsfile. | Trunk-based flow, per-branch builds, and PR validation without duplicating jobs. |
| Organization Folder | Scans a whole GitHub org, GitLab group, or similar and creates multibranch projects per repository. | Many repos under one team with consistent discovery rules. |
| Maven / other "special" jobs | Wizard-style jobs for Maven, Gradle, etc., with less flexibility than a full pipeline. | Older Java-centric workflows; many teams prefer a Pipeline with explicit sh steps instead. |

Freestyle limitations

Freestyle jobs are fine for simple automation, but they show their limits when delivery gets complex:

  • No (or weak) "pipeline as code" — logic lives in point-and-click steps unless you add Job DSL or Shared Libraries separately.
  • Harder to review and version — changes are not normal Git commits on the same rhythm as application code.
  • Duplication — copy-paste across many jobs when behavior only differs a little.
  • Weaker structure — no first-class stages / post blocks; visual pipeline plugins exist but differ from a single Jenkinsfile model.

For new greenfield work, Pipeline or Multibranch Pipeline is usually the better default so the definition stays in the repository.


Pipeline as code (Jenkinsfile)

A Jenkinsfile is a text file that lives at the root of the repository and defines the entire build/test/deploy pipeline as code. Jenkins reads it automatically when the job runs.

repo/
├── src/
├── package.json
└── Jenkinsfile   ← Jenkins reads this

Why a Jenkinsfile?

  • Pipeline logic is version-controlled alongside application code.
  • Changes go through code review like any other file.
  • Every branch can have its own pipeline (Multibranch Pipeline).
  • Easy to reproduce or roll back — check out any commit and re-run.

Jenkinsfile syntax basics (declarative)

The declarative pipeline uses a fixed top-level structure with specific sections. Every section is optional except agent and stages.

pipeline {
    agent { ... }          // where to run
    environment { ... }    // env vars
    options { ... }        // job-level settings
    parameters { ... }     // runtime inputs
    triggers { ... }       // schedule or webhook
    stages {
        stage('name') {
            when { ... }   // conditional
            steps { ... }  // actual work
            post { ... }   // stage-level post actions
        }
    }
    post { ... }           // pipeline-level post actions
}

agent — where to run

agent any                              // any available executor
agent none                             // declare agent per stage
agent { label 'linux' }                // specific label
agent { docker { image 'node:20' } }   // spin up a container

environment — env variables

environment {
    APP_ENV = 'production'
    DB_URL = credentials('db-cred-id')   // inject a Jenkins secret
}

Variables are available as $APP_ENV in sh steps or ${env.APP_ENV} in Groovy expressions.
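Quoting matters here: with single quotes the shell expands the variable at run time; with double quotes Groovy interpolates it before the shell ever runs. A small sketch:

```groovy
steps {
    sh 'echo "Deploying to $APP_ENV"'       // shell expands the env var
    sh "echo Deploying to ${env.APP_ENV}"   // Groovy interpolates first
}
```

Prefer the single-quoted form for secrets so their values are never embedded into the generated command line.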

options — job-level settings

options {
    timeout(time: 30, unit: 'MINUTES')   // kill if it runs too long
    retry(2)                             // retry whole pipeline on failure
    disableConcurrentBuilds()            // only one run at a time
    buildDiscarder(logRotator(numToKeepStr: '10'))
}

parameters — runtime inputs

parameters {
    string(name: 'BRANCH', defaultValue: 'main', description: 'Branch to deploy')
    booleanParam(name: 'RUN_TESTS', defaultValue: true, description: '')
    choice(name: 'ENV', choices: ['dev', 'staging', 'prod'], description: '')
}

Reference in steps: ${params.BRANCH}, ${params.RUN_TESTS}.
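For example, a stage driven by the parameters above (the guard condition and deploy.sh invocation are illustrative):

```groovy
stage('Deploy') {
    when { expression { params.ENV != 'prod' || params.RUN_TESTS } }  // illustrative guard
    steps {
        sh "./deploy.sh ${params.ENV}"   // values chosen at "Build with Parameters"
    }
}
```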

triggers — when to run automatically

triggers {
    cron('H 2 * * 1-5')      // nightly on weekdays
    pollSCM('H/5 * * * *')   // poll Git every 5 minutes
    githubPush()             // fire on GitHub webhook push
}

stages and stage — units of work

stages {
    stage('Build') {
        steps {
            sh 'npm ci && npm run build'
        }
    }
    stage('Test') {
        steps {
            sh 'npm test'
            junit 'reports/**/*.xml'   // publish test results
        }
    }
    stage('Deploy') {
        steps {
            sh './deploy.sh'
        }
    }
}

when — conditional stages

stage('Deploy to prod') {
    when {
        branch 'main'   // only on main branch
        // environment name: 'APP_ENV', value: 'production'
        // expression { return params.RUN_TESTS == true }
    }
    steps {
        sh './deploy-prod.sh'
    }
}

post — actions after a stage or pipeline finishes

post {
    always   { echo 'Runs regardless of result' }
    success  { slackSend message: 'Build passed!' }
    failure  { mail to: 'team@example.com', subject: 'Build failed' }
    unstable { echo 'Tests passed but with warnings' }
    changed  { echo 'Result changed from last run' }
    cleanup  { deleteDir() }
}

parallel — run stages at the same time

stage('Test in parallel') {
    parallel {
        stage('Unit tests') {
            steps { sh 'npm run test:unit' }
        }
        stage('Lint') {
            steps { sh 'npm run lint' }
        }
        stage('Type check') {
            steps { sh 'npm run typecheck' }
        }
    }
}

script — drop into raw Groovy inside declarative

steps {
    script {
        def tag = sh(returnStdout: true, script: 'git describe --tags').trim()
        env.IMAGE_TAG = tag
        if (tag.startsWith('v')) {
            currentBuild.displayName = tag
        }
    }
}

Common built-in steps

| Step | What it does |
|---|---|
| sh 'cmd' | Run a shell command (Linux/macOS) |
| bat 'cmd' | Run a batch command (Windows) |
| checkout scm | Clone the repo at the triggering revision |
| echo 'msg' | Print a message to the console log |
| withCredentials([...]) | Bind Jenkins credentials to env vars in a block |
| archiveArtifacts 'dist/**' | Save files as build artifacts |
| junit 'reports/**/*.xml' | Publish JUnit test results |
| stash / unstash | Pass files between stages or agents |
| input 'Approve?' | Pause and wait for a human to click OK |
| error 'msg' | Fail the build immediately with a message |
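As one example of combining these, stash/unstash moves build output between stages that may run on different agents. A sketch (labels are illustrative; per-stage agents assume a top-level agent none):

```groovy
stage('Build') {
    agent { label 'linux' }
    steps {
        sh 'npm ci && npm run build'
        stash name: 'dist', includes: 'dist/**'   // save the build output
    }
}
stage('Publish') {
    agent { label 'docker' }
    steps {
        unstash 'dist'                 // restore the stashed files on this agent
        archiveArtifacts 'dist/**'     // keep them with the build record
    }
}
```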

Complete annotated example

pipeline {
    agent { label 'linux' }
    environment {
        IMAGE = "myapp:${env.BUILD_NUMBER}"
        REGISTRY_CREDS = credentials('docker-registry')
    }
    options {
        timeout(time: 20, unit: 'MINUTES')
        disableConcurrentBuilds()
    }
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build & Test') {
            parallel {
                stage('Build') {
                    steps { sh 'npm ci && npm run build' }
                }
                stage('Lint') {
                    steps { sh 'npm run lint' }
                }
            }
        }
        stage('Docker image') {
            steps {
                sh "docker build -t ${IMAGE} ."
                // Single quotes: the shell expands the secret, so it is not
                // interpolated into the command line by Groovy
                sh 'echo "$REGISTRY_CREDS_PSW" | docker login -u "$REGISTRY_CREDS_USR" --password-stdin'
                sh "docker push ${IMAGE}"
            }
        }
        stage('Deploy') {
            when { branch 'main' }
            steps {
                sh "./deploy.sh ${IMAGE}"
            }
        }
    }
    post {
        success { echo "Deployed ${IMAGE} successfully" }
        failure { mail to: 'oncall@example.com', subject: "Build ${env.BUILD_NUMBER} failed" }
        always  { cleanWs() }
    }
}

Pipelines live in the repo (like CI config in other systems, though written in Groovy rather than YAML). A minimal declarative pipeline:

// Jenkinsfile (at repo root)
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Build') {
            steps {
                sh 'npm ci && npm run build'
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
    post {
        failure {
            echo 'Pipeline failed — check console output'
        }
    }
}
  • **agent any** — run on any available agent (narrow with labels or Docker images in real setups).
  • **checkout scm** — uses the same revision Jenkins checked out for this run.
  • **sh '...'** — shell on Unix agents; use bat on Windows.

Declarative vs scripted pipelines

Pipelines are written in Groovy, but Jenkins offers two styles:

| | Declarative | Scripted |
|---|---|---|
| Syntax | Wrapped in a top-level pipeline { ... } block with required agent, stages, and steps. | Uses node { ... } (or older patterns); you call stage and steps as Groovy code. |
| Structure | Opinionated: Jenkins validates the shape; clear place for post, environment, options, when. | Free-form Groovy: loops, conditionals, and shared code anywhere — maximum flexibility. |
| Learning curve | Easier for teams that want a standard layout and guardrails. | Better when you need complex programmatic flow, but easier to write unmaintainable scripts. |
| Extensibility | script { ... } inside steps escapes to raw Groovy when needed. | Everything is already script-style. |

The example above is declarative. A minimal scripted pipeline looks like this:

node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        sh 'npm ci && npm run build'
    }
    stage('Test') {
        sh 'npm test'
    }
}

Prefer declarative for new Jenkinsfiles unless you have a concrete reason scripted flow fits better (heavy reuse, dynamic stage lists, or gradual migration of old scripted jobs).
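As an illustration of the "dynamic stage lists" case, a scripted pipeline can generate stages in a loop (the target list and deploy.sh here are placeholders):

```groovy
node('linux') {
    checkout scm
    def targets = ['dev', 'staging']   // hypothetical deploy targets
    for (t in targets) {
        stage("Deploy ${t}") {         // one stage per target, created at run time
            sh "./deploy.sh ${t}"
        }
    }
}
```

Declarative pipelines have no direct equivalent of this loop; the stage list is fixed when the Jenkinsfile is parsed.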


Connecting Git to Jenkins

Typical flow:

  1. SCM — Job points at your Git URL (HTTPS or SSH).
  2. Trigger — Poll SCM, webhook from GitHub/GitLab/Bitbucket, or generic hook.
  3. Credentials — Stored in Jenkins (not in the Jenkinsfile) for clone or deploy keys.

After Git: you commit; Jenkins is what reacts to that commit with automation.
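Besides webhooks and polling, a job can also be queued over the Jenkins REST API with a user's API token (host, job, and user below are placeholders):

```sh
# Queue a build of job "myapp"
curl -X POST "https://jenkins.example.com/job/myapp/build" \
     --user alice:API_TOKEN

# Same, passing values to a parameterized job
curl -X POST "https://jenkins.example.com/job/myapp/buildWithParameters?ENV=staging" \
     --user alice:API_TOKEN
```

This is handy for triggering Jenkins from other systems that cannot send a Git webhook.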


Credentials & secrets

  • Prefer Jenkins credentials + binding in the pipeline over hard-coding tokens.
  • For Kubernetes or cloud deploys, use the appropriate plugin and short-lived tokens where possible.
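A typical binding sketch using the withCredentials step (the credentialsId is whatever ID you gave the secret in Jenkins; variable names are arbitrary):

```groovy
steps {
    withCredentials([usernamePassword(credentialsId: 'docker-registry',
                                      usernameVariable: 'REG_USER',
                                      passwordVariable: 'REG_PASS')]) {
        // The variables exist only inside this block and are masked in the log
        sh 'echo "$REG_PASS" | docker login -u "$REG_USER" --password-stdin'
    }
}
```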

Pros and cons

Pros

  • Open source and free — no licensing cost for the core server.
  • Highly customizable — huge plugin ecosystem for SCM, clouds, notifications, and tooling.
  • Scriptable for advanced users — Groovy pipelines and shared libraries for complex automation.
  • Pipeline as code — Jenkinsfile in the repo, reviewable like application code.
  • Mature and feature-rich — long track record and broad community knowledge.
  • Scalable — distribute work across many agents and labels as load grows.

Cons

  • Steeper learning curve — controller, agents, plugins, and Groovy take time to master.
  • Maintenance overhead — upgrades, plugin compatibility, backups, and disk for artifacts.
  • Performance — many concurrent jobs or heavy plugins can demand CPU, RAM, and I/O.
  • Security — plugin surface area and long-lived servers need patching, least privilege, and careful credential handling.
  • Self-hosted — unlike SaaS CI, you run and pay for the infrastructure (unless you use a managed Jenkins offering).

When to choose Jenkins

| Good fit | Consider alternatives |
|---|---|
| Self-hosted CI, heavy customization, lots of plugins | GitHub Actions if code is already on GitHub |
| Existing Jenkins investment in the org | GitLab CI for an all-in-one Git host |
| Mixed Windows/Linux fleets | Cloud-native tools if you are 100% K8s + GitOps |

Quick ops commands

# Service (typical Linux package install)
sudo systemctl status jenkins
sudo systemctl restart jenkins
# Logs
sudo journalctl -u jenkins -f

See also