
incorrect deployment selected when multiple in namespace #5

Closed
@zswanson

Description


Using the example in the README, I've attempted to create a migration object. The controller seems to select a random deployment from the namespace, even though the label selector correctly matches the target deployment.

In the example below I'm just trying to get the Job to launch with a simple sleep command, to prove that it works. A deployment named frs with labels app=frs, component=api is deployed in the default namespace, along with five other applications, each of which has a unique app label.

apiVersion: migrations.coderanger.net/v1beta1
kind: Migrator
metadata:
  name: frs-migration
  namespace: default
spec:
  selector:
    matchLabels:
      app: frs
      component: api
  args:
    - "/usr/bin/sleep"
    - "300s"

In the log snippet below, the migration controller reports that it is using an image from a deployment other than the one the label selector should match. The Job pods are terminating very quickly due to some other issue, but they live long enough for me to see that they copy the spec of the wrong deployment nearly every time; which deployment gets selected appears to be random.

INFO controllers.migrator.components.user !!! {"object": "default/frs-migration", "last": "", "image": "XXXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/myorg/auth:e8a87e2"}
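
A quick way to confirm the selector is unambiguous (only frs should come back, since every app label in the namespace is unique):

kubectl get deployments -n default -l app=frs,component=api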
