Annotations are key-value pairs that store non-identifying metadata on Kubernetes resources. Unlike Labels, annotations are never used for selection or querying; they exist so tools, libraries, and controllers can attach arbitrary context to a resource.

Purpose and Design

Annotations hold data that tools, libraries, and controllers need but that shouldn’t be used for identification or organization. This separation keeps Labels clean and query-optimized while giving tool-specific metadata broad flexibility in size and structure.

Common Use Cases

Build and Release Information

annotations:
  build.commit: "a3f2d1e"
  build.timestamp: "2025-10-05T12:00:00Z"
  deployed.by: "jenkins-pipeline"

Tool Configuration

annotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8080"
  ingress.kubernetes.io/rewrite-target: "/"

Operational Metadata

annotations:
  description: "User service cache layer"
  oncall.team: "backend-team"
  documentation: "https://wiki.example.com/user-service"

Non-Searchable by Design

Annotations are explicitly not indexed for searching. This design decision allows:

  • Large values (label values are limited to 63 characters, while a resource’s annotations can together hold up to 256 KiB)
  • Structured data (JSON, YAML fragments)
  • Tool-specific schemas without coordination
  • Frequent updates without impacting selectors
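Because annotation values are opaque to the API server, they can carry structured payloads that a controller parses at runtime. A minimal sketch; the `config.example.com/retry-policy` key is hypothetical, not a real controller’s schema:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: user-cache
  annotations:
    # Hypothetical key: a custom controller could parse this JSON fragment.
    # A label value could never hold this - too long, and invalid characters.
    config.example.com/retry-policy: |
      {"maxRetries": 5, "backoffSeconds": 2, "jitter": true}
spec:
  containers:
    - name: cache
      image: redis:7
```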

Labels vs Annotations

The distinction is fundamental to distributed primitives:

Labels answer: “What is this resource?” and “Which resources should I select?”

  • Used by Services to find Pods
  • Used by operators to group and manage resources
  • Optimized for queries and selection

Annotations answer: “What additional context does this resource have?”

  • Used by monitoring tools to configure scraping
  • Used by deployment tools to track versions
  • Optimized for arbitrary metadata storage
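The two roles can be seen side by side on a single resource. A sketch of a Pod whose Labels make it selectable while its annotations carry deployment context (the keys mirror the earlier examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: user-service-7d4b
  labels:
    app: user-service              # identity: matched by Service selectors
    tier: backend
  annotations:
    build.commit: "a3f2d1e"        # context: never used for selection
    deployed.by: "jenkins-pipeline"
spec:
  containers:
    - name: app
      image: example/user-service:a3f2d1e
```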

Operational Context

Annotations provide essential context for runtime operations without affecting how resources are selected or scheduled. They enable rich tooling ecosystems to layer functionality onto Kubernetes without modifying core primitives.

For example, Services use annotations to configure load balancer behavior, while the Service’s label selector determines which Pods receive traffic.
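As a sketch, assuming an AWS environment: the `service.beta.kubernetes.io/aws-load-balancer-type` annotation is interpreted by the cloud provider’s controller, while `spec.selector` matches Pod Labels to route traffic:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
  annotations:
    # Read by the AWS load balancer controller, not by Kubernetes core
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: user-service   # Labels decide which Pods receive traffic
  ports:
    - port: 80
      targetPort: 8080
```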