TL;DR: client-go ships a read-through, write-around caching layer built on SharedInformers. A Get/List hits an in-memory Indexer; anything that mutates goes straight to the API server. This cuts controller read latency by roughly two orders of magnitude (see the benchmark below), but you need to understand sync windows, deep-copies, and field indexes to avoid stale reads and perf foot-guns.

Why bother with a cache?

Without cache (plain REST)                 | With DelegatingClient
-------------------------------------------|--------------------------------------
Every Get/List → HTTPS → API server → etcd | In-process map lookup (zero RTT)
High QPS & watch throttling                | Near-zero read cost; server load ≈ 0
100 ms tail latency on busy clusters       | sub-µs typical

Controllers are read-heavy: they reconcile, compare desired vs. actual, and only occasionally write. Caching amortises that read path.
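
To make that concrete, here is a minimal sketch of the pattern, assuming a Reconciler struct that embeds controller-runtime's client.Client; the Deployment type and the target replica count are illustrative, not from any particular controller:

// Assumes: appsv1 "k8s.io/api/apps/v1", ctrl "sigs.k8s.io/controller-runtime",
// client "sigs.k8s.io/controller-runtime/pkg/client".
func (r *Reconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var dep appsv1.Deployment
    if err := r.Get(ctx, req.NamespacedName, &dep); err != nil { // served from the cache
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }
    desired := int32(3) // illustrative desired state
    if dep.Spec.Replicas == nil || *dep.Spec.Replicas != desired {
        dep.Spec.Replicas = &desired
        return ctrl.Result{}, r.Update(ctx, &dep) // the occasional write goes straight to the API server
    }
    return ctrl.Result{}, nil // nothing to do; the common, cache-only path
}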

The moving parts

flowchart LR
    A[Reflector] --> B(DeltaFIFO)
    B --> C[Indexer]
    C -->|Read| D(Cache Client)
    D -->|Write| E(API Server)
  1. Reflector – streams events (ADD/UPDATE/DELETE) from the API server.

  2. DeltaFIFO – buffers events so processing goroutines never block the watch.

  3. Store / Indexer – thread-safe map keyed by namespace/name + optional custom indexes.

  4. Cache client (DelegatingClient)

    • Reads: Get, List → Store.
    • Writes: Create, Update, Patch, Delete → REST.
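
In controller-runtime you rarely assemble these pieces by hand; the manager does it for you. A minimal sketch, assuming ctrl is sigs.k8s.io/controller-runtime and scheme is a runtime.Scheme you have already populated:

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
if err != nil {
    // handle error
}
// mgr.GetClient() returns the cache-backed client described above:
// reads hit the Indexer, writes go straight to the API server.
c := mgr.GetClient()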

A real code path — c.Get(ctx, key, obj)

func (c *delegatingClient) Get(ctx context.Context, key client.ObjectKey, obj client.Object) error {
    // 1. Try the informer cache first; on a hit the object is deep-copied into obj.
    if err := c.cache.Get(ctx, key, obj); err == nil {
        return nil
    }
    // 2. Fall back to the direct client (rare, e.g. a cache miss before the first sync).
    return c.client.Get(ctx, key, obj)
}

Sync window

  • On start-up the cache is empty.

  • HasSynced() becomes true once the initial LIST has been processed and the watch is established.

  • Until then the client falls back to direct REST calls.

    if !mgr.GetCache().WaitForCacheSync(ctx) {
        // context cancelled before the cache could sync; bail out
    }
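
If you start your controllers through mgr.Start, controller-runtime performs this wait for you before any reconcile workers begin processing.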

Field indexes = O(1) secondary look-ups

Need all Pods running on a given node?

// ctx is the setup context (e.g. from main or SetupWithManager).
if err := mgr.GetFieldIndexer().IndexField(
    ctx,
    &corev1.Pod{},    // type to index
    ".spec.nodeName", // field path used as the index key
    func(obj client.Object) []string {
        pod := obj.(*corev1.Pod)
        return []string{pod.Spec.NodeName}
    },
); err != nil {
    // handle error
}

Now:

var pods corev1.PodList
_ = r.List(ctx, &pods,
           client.MatchingFields{".spec.nodeName": req.NodeName})

No server round-trip; the Indexer maintains a reverse map under the hood.
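
For the curious, you can build the same reverse map with client-go's cache package directly. A sketch, assuming k8s.io/client-go/tools/cache is imported as cache; the index name "byNode" and the node name are made up:

indexer := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{
    // The IndexFunc returns the keys under which each object is filed.
    "byNode": func(obj interface{}) ([]string, error) {
        pod := obj.(*corev1.Pod)
        return []string{pod.Spec.NodeName}, nil
    },
})
// Constant-time lookup: every cached Pod whose .spec.nodeName is "worker-1".
pods, err := indexer.ByIndex("byNode", "worker-1")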

Writes bypass the cache (on purpose)

A Create or Patch does not immediately appear in your cache. The informer will observe its own change a few milliseconds later and update the Store. 👉 Always treat the cache as eventually consistent; reconcile loops should be idempotent.
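
In practice that means tolerating your own writes racing a stale cache. A sketch, where desired is whatever object the reconcile computed and apierrors is k8s.io/apimachinery/pkg/api/errors:

// The cached Get may not see an object created moments ago, so a second
// reconcile can race into Create; tolerating AlreadyExists keeps the loop idempotent.
if err := r.Create(ctx, desired); err != nil && !apierrors.IsAlreadyExists(err) {
    return ctrl.Result{}, err
}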

Performance snapshot

Operation        | Direct REST | Cache hit  | Δ
-----------------|-------------|------------|--------
Get Pod (single) | 6.8 ms p95  | 35 µs p95  | -99.5 %
List Pods (1 k)  | 140 ms p95  | 1.7 ms p95 | -98.8 %

Tested on a 3-node KIND cluster, Go 1.22, client-go 0.30.0.

Common pitfalls

Symptom                 | Likely cause                                | Quick fix
------------------------|---------------------------------------------|------------------------------------------------------------------
Reads return stale data | Missing deep-copy or comparing pointer refs | obj.DeepCopy() before mutating; never cache raw pointers in maps
High memory usage       | Large LIST results in Store                 | Add label selectors to Informer; narrow resync period
Random cache misses     | Forgot to add type to Scheme                | schemeBuilder.AddToScheme(scheme) in init()
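
The stale-data row bites hardest when you read through a raw client-go lister (the controller-runtime cached client deep-copies into obj for you). A sketch, where podLister is a hypothetical listers/core/v1 PodLister:

pod, err := podLister.Pods(ns).Get(name) // returns a pointer into the shared cache
if err != nil {
    // handle error
}
podCopy := pod.DeepCopy() // never mutate the cached object in place
if podCopy.Labels == nil {
    podCopy.Labels = map[string]string{}
}
podCopy.Labels["example"] = "true" // safe: we own the copy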

Take-aways

  • Use the cache for all GET/LISTs inside controllers—don’t mix in custom REST clients.
  • Always wait for HasSynced() during startup.
  • Add field indexes early; they’re free once built.
  • Expect eventual consistency and design reconciles to retry.