
Building Images with Packer

2024-08-21 17:32:39

What is Packer?

Packer is a powerful tool that helps us to easily build various types of images, such as virtual machine images, Docker images, and so on.

Packer works by defining a configuration file that describes the characteristics and requirements of the image to be built. Packer then uses this configuration file to perform a series of steps, such as installing the necessary software, configuring system settings, copying files, etc., to finally generate a usable image.

Why Packer?

The benefits of using Packer to build images are manifold.

  1. Portability: Packer provides a repeatable, automated way to create images, which means every build produces a consistent image and reduces the risk of human error. It also supports multiple infrastructure providers, such as AWS, VMware, and Azure, so the same image can easily be deployed across different environments.
  2. Automation: Packer's "infrastructure as code" approach builds images from a single configuration file, in a pipeline and concurrently, which greatly reduces the likelihood of errors compared with traditional manual work.
  3. Issue Tracing: All changes in Packer are code-based and therefore traceable, making it easy to quickly locate issues and roll them back. With the traditional approach, fully tracing a problem is difficult, since the manual process may involve multiple people.
  4. Fast Iteration: Packer's configuration file is easy to edit, so we can simply modify it and rebuild the image for fast iteration.

Components and Principles of Packer

Packer consists of three components: the Builder, the Provisioner, and the Post-Processor. Through a template file (JSON or HCL format), these three components can be flexibly combined to automatically create consistent images across multiple platforms. A single image-generation task for a single platform is called a build, and the result of a single build is called an artifact; multiple builds can run in parallel.

  • Builder: creates an image for a single platform. A builder reads some configuration and uses it to run and generate an image. Builders are invoked as part of a build to produce the actual image. Common builders include VirtualBox, Alibaba Cloud ECS, and Amazon EC2. New builders can be added to Packer as plugins.
  • Provisioner: installs and configures software inside the running machine created by a builder. Provisioners do the main work of making the image contain useful software. Common provisioners include shell scripts, Chef, Puppet, and others.
  • Post-Processor: creates a new artifact from the result of a builder or another post-processor. Examples include a compression post-processor that compresses artifacts and an upload post-processor that uploads them.
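As a sketch of how the three components fit together, here is a minimal HCL template using the Docker builder; the image names and tags are illustrative, not part of the original setup:

```hcl
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = ">= 1.0.0"
    }
  }
}

# Builder: creates an image for a single platform (here, Docker)
source "docker" "example" {
  image  = "alpine:3.19"
  commit = true
}

build {
  sources = ["source.docker.example"]

  # Provisioner: installs software inside the running machine
  provisioner "shell" {
    inline = ["apk add --no-cache curl"]
  }

  # Post-processor: turns the build result into a tagged artifact
  post-processor "docker-tag" {
    repository = "example/alpine-curl"
    tags       = ["latest"]
  }
}
```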

Implementation

  1. Build images locally with qemu-kvm
  2. Manage the parameter configuration templates for image builds in GitLab repositories
  3. Trigger builds via GitOps, and track build logs and results (submit a merge request for template changes, then trigger the build task via a comment)
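For step 1, a local build essentially comes down to a few standard Packer commands (the template filename here is illustrative):

```shell
# Install the plugins declared in the required_plugins block
packer init .

# Check the template for syntax and semantic errors before building
packer validate rocky9.pkr.hcl

# Run the build; PACKER_LOG=1 enables verbose logs for troubleshooting
PACKER_LOG=1 packer build -var "ssh_password=..." rocky9.pkr.hcl
```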
Versions used:

  • Packer 1.9.4 (official documentation)
  • packer-plugin-qemu 1.0.10 (Packer plugin)
  • qemu-kvm 7.0.0 (QEMU 7.0.0)


Template

packer {
  required_plugins {
    qemu = {
      source = "github.com/hashicorp/qemu"
      version = ">= 1.0.10"
    }
  }
}

variable "checksum" {
  type = string
  default = "xxxxxxx"
}

variable "ssh_password" {
  type = string
  default = "xxxxx"
}

source "qemu" "autogenerated_1" {
  accelerator = "kvm"
  boot_command = ["<tab> ", "console=ttyS0,115200n8 ", "=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks/ ", "nameserver=1.1.1.1 ", "<enter><wait> "]
  boot_wait = "0s"
  communicator = "ssh"
  format = "qcow2"
  headless = true
  iso_checksum = "sha256:${var.checksum}"
  iso_url = "../../../Rocky-9.2-x86_64"
  qemu_binary = "/usr/libexec/qemu-kvm"
  qemuargs = [["-m", "4096"], ["-smp", "2,sockets=2,cores=1,threads=1"], ["-cpu", "host"], ["-serial", "file:"]]
  shutdown_command = "/sbin/halt -h -p"
  shutdown_timeout = "120m"
  ssh_password = "${var.ssh_password}"
  ssh_timeout = "1500s"
  ssh_username = "root"
  http_content = {
      "/ks/" = file("../../kickstart/")
    }
}

build {
  description = "\tMinimal Rockylinux 9 Qemu Image\n__________________________________________"

  sources = ["source.qemu.autogenerated_1"]

  provisioner "shell" {
    script = "./"
  }
  
  # provisioner "file" {   # copy a configuration file into the image
  #   destination = "/etc/cloud/"
  #   source = "../../resource/"
  # }

}

Configuration library organization

  1. The configuration repository templates the kickstart file (this part tends to change infrequently)
  2. Template files for different images live in different directories, which makes them easier to manage
  3. Each image directory contains three files:
    1. Packer's HCL file (the main Packer template)
    2. Preparation scripts: packages to install in the image, files to modify, etc.
    3. Metadata for linking to the DevOps system, such as the current version, type, and usage of this image
  4. The resources directory mainly stores resource files, such as configuration files and scripts.

├── kickstart          # kickstart configuration files
│   ├── packer
├── packer             # template files for different image versions
│   ├── rocky9
│   │   ├──            # Packer HCL template
│   │   ├──            # preparation scripts: install packages, modify kernel parameters, etc.
│   │   ├──            # system configuration, e.g. os_type, os_version
│   ├── centos7
│   │   ├──
├── resources          # resource files; configuration files can be copied directly
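The GitOps flow described above (template changes via merge request, builds triggered from the pipeline) can be sketched as a GitLab CI job; the job name, stage, and paths here are illustrative assumptions, not the original pipeline:

```yaml
# Illustrative GitLab CI job: runs a Packer build for merge request pipelines
build-image:
  stage: build
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - cd packer/rocky9
    - packer init .
    - packer validate .
    - packer build .
```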

Results

[build result screenshots]

Gains

  1. Greater automation in building images, improving efficiency: previously, ops engineers manually built images in the cloud console, retrieved the image ID, and then configured it in the DevOps system.
  2. Image versions are describable and traceable, making them more transparent: previously, image versions were built by hand, and after a while nobody knew what changes the currently running image contained, what was installed, or what characteristics it had.
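The image-ID handoff to the DevOps system mentioned in point 1 can be automated by parsing the output of `packer build -machine-readable`, whose lines follow the documented `timestamp,target,type,data...` CSV-like format. This is a minimal sketch; the sample artifact path is illustrative:

```python
def extract_artifact_ids(machine_readable_output: str) -> list[str]:
    """Pull artifact IDs out of `packer build -machine-readable` output.

    Artifact IDs appear on lines whose type field is "artifact" and whose
    first data field is "id", e.g.:
        1692600000,qemu,artifact,0,id,output/rocky9.qcow2
    """
    ids = []
    for line in machine_readable_output.splitlines():
        fields = line.split(",")
        if len(fields) >= 6 and fields[2] == "artifact" and fields[4] == "id":
            ids.append(fields[5])
    return ids


sample = "\n".join([
    "1692600000,qemu,ui,say,Build 'qemu' finished.",
    "1692600001,qemu,artifact,0,builder-id,transcend.qemu",
    "1692600002,qemu,artifact,0,id,output/rocky9.qcow2",
])
print(extract_artifact_ids(sample))  # ['output/rocky9.qcow2']
```

The extracted ID can then be pushed to the DevOps system by whatever API it exposes, removing the manual copy step.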