argocd: why helm app not applying values.yml


I would like to install a Helm release using Argo CD, so I defined a Helm app declaratively as follows:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: moon
  namespace: argocd
spec:
  project: aerokube
  source:
    chart: moon2
    repoURL: https://charts.aerokube.com/
    targetRevision: 2.4.0
    helm:
      valueFiles:
      - values.yml
  destination:
    server: "https://kubernetes.default.svc"
    namespace: moon1
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

And my values.yml looks like this:

customIngress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt"   
  ingressClassName: nginx
  host: moon3.benighil-mohamed.com
  tls:
  - secretName: moon-tls
    hosts:
    - moon3.benighil-mohamed.com
configs:
  default:
    containers:
      vnc-server:
        repository: quay.io/aerokube/vnc-server
        resources:
          limits:
            cpu: 400m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 512Mi

However, the app does not take values.yml into account, and I get the following error:

rpc error: code = Unknown desc = Manifest generation error (cached): `helm template . --name-template moon --namespace moon1 --kube-version 1.23 --values /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml --api-versions acme.cert-manager.io/v1 --api-versions acme.cert-manager.io/v1/Challenge --api-versions acme.cert-manager.io/v1/Order --api-versions admissionregistration.k8s.io/v1 --api-versions admissionregistration.k8s.io/v1/MutatingWebhookConfiguration --api-versions admissionregistration.k8s.io/v1/ValidatingWebhookConfiguration --api-versions apiextensions.k8s.io/v1 --api-versions apiextensions.k8s.io/v1/CustomResourceDefinition --api-versions apiregistration.k8s.io/v1 --api-versions apiregistration.k8s.io/v1/APIService --api-versions apps/v1 --api-versions apps/v1/ControllerRevision --api-versions apps/v1/DaemonSet --api-versions apps/v1/Deployment --api-versions apps/v1/ReplicaSet --api-versions apps/v1/StatefulSet --api-versions argoproj.io/v1alpha1 --api-versions argoproj.io/v1alpha1/AppProject --api-versions argoproj.io/v1alpha1/Application --api-versions argoproj.io/v1alpha1/ApplicationSet --api-versions autoscaling/v1 --api-versions autoscaling/v1/HorizontalPodAutoscaler --api-versions autoscaling/v2 --api-versions autoscaling/v2/HorizontalPodAutoscaler --api-versions autoscaling/v2beta1 --api-versions autoscaling/v2beta1/HorizontalPodAutoscaler --api-versions autoscaling/v2beta2 --api-versions autoscaling/v2beta2/HorizontalPodAutoscaler --api-versions batch/v1 --api-versions batch/v1/CronJob --api-versions batch/v1/Job --api-versions batch/v1beta1 --api-versions batch/v1beta1/CronJob --api-versions ceph.rook.io/v1 --api-versions ceph.rook.io/v1/CephBlockPool --api-versions ceph.rook.io/v1/CephBlockPoolRadosNamespace --api-versions ceph.rook.io/v1/CephBucketNotification --api-versions ceph.rook.io/v1/CephBucketTopic --api-versions ceph.rook.io/v1/CephClient --api-versions ceph.rook.io/v1/CephCluster --api-versions ceph.rook.io/v1/CephFilesystem --api-versions ceph.rook.io/v1/CephFilesystemMirror --api-versions ceph.rook.io/v1/CephFilesystemSubVolumeGroup --api-versions ceph.rook.io/v1/CephNFS --api-versions ceph.rook.io/v1/CephObjectRealm --api-versions ceph.rook.io/v1/CephObjectStore --api-versions ceph.rook.io/v1/CephObjectStoreUser --api-versions ceph.rook.io/v1/CephObjectZone --api-versions ceph.rook.io/v1/CephObjectZoneGroup --api-versions ceph.rook.io/v1/CephRBDMirror --api-versions cert-manager.io/v1 --api-versions cert-manager.io/v1/Certificate --api-versions cert-manager.io/v1/CertificateRequest --api-versions cert-manager.io/v1/ClusterIssuer --api-versions cert-manager.io/v1/Issuer --api-versions certificates.k8s.io/v1 --api-versions certificates.k8s.io/v1/CertificateSigningRequest --api-versions coordination.k8s.io/v1 --api-versions coordination.k8s.io/v1/Lease --api-versions crd.projectcalico.org/v1 --api-versions crd.projectcalico.org/v1/BGPConfiguration --api-versions crd.projectcalico.org/v1/BGPPeer --api-versions crd.projectcalico.org/v1/BlockAffinity --api-versions crd.projectcalico.org/v1/CalicoNodeStatus --api-versions crd.projectcalico.org/v1/ClusterInformation --api-versions crd.projectcalico.org/v1/FelixConfiguration --api-versions crd.projectcalico.org/v1/GlobalNetworkPolicy --api-versions crd.projectcalico.org/v1/GlobalNetworkSet --api-versions crd.projectcalico.org/v1/HostEndpoint --api-versions crd.projectcalico.org/v1/IPAMBlock --api-versions crd.projectcalico.org/v1/IPAMConfig --api-versions crd.projectcalico.org/v1/IPAMHandle --api-versions 
crd.projectcalico.org/v1/IPPool --api-versions crd.projectcalico.org/v1/IPReservation --api-versions crd.projectcalico.org/v1/KubeControllersConfiguration --api-versions crd.projectcalico.org/v1/NetworkPolicy --api-versions crd.projectcalico.org/v1/NetworkSet --api-versions discovery.k8s.io/v1 --api-versions discovery.k8s.io/v1/EndpointSlice --api-versions discovery.k8s.io/v1beta1 --api-versions discovery.k8s.io/v1beta1/EndpointSlice --api-versions events.k8s.io/v1 --api-versions events.k8s.io/v1/Event --api-versions events.k8s.io/v1beta1 --api-versions events.k8s.io/v1beta1/Event --api-versions flowcontrol.apiserver.k8s.io/v1beta1 --api-versions flowcontrol.apiserver.k8s.io/v1beta1/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta1/PriorityLevelConfiguration --api-versions flowcontrol.apiserver.k8s.io/v1beta2 --api-versions flowcontrol.apiserver.k8s.io/v1beta2/FlowSchema --api-versions flowcontrol.apiserver.k8s.io/v1beta2/PriorityLevelConfiguration --api-versions moon.aerokube.com/v1 --api-versions moon.aerokube.com/v1/BrowserSet --api-versions moon.aerokube.com/v1/Config --api-versions moon.aerokube.com/v1/DeviceSet --api-versions moon.aerokube.com/v1/License --api-versions moon.aerokube.com/v1/Quota --api-versions networking.k8s.io/v1 --api-versions networking.k8s.io/v1/Ingress --api-versions networking.k8s.io/v1/IngressClass --api-versions networking.k8s.io/v1/NetworkPolicy --api-versions node.k8s.io/v1 --api-versions node.k8s.io/v1/RuntimeClass --api-versions node.k8s.io/v1beta1 --api-versions node.k8s.io/v1beta1/RuntimeClass --api-versions objectbucket.io/v1alpha1 --api-versions objectbucket.io/v1alpha1/ObjectBucket --api-versions objectbucket.io/v1alpha1/ObjectBucketClaim --api-versions operator.tigera.io/v1 --api-versions operator.tigera.io/v1/APIServer --api-versions operator.tigera.io/v1/ImageSet --api-versions operator.tigera.io/v1/Installation --api-versions operator.tigera.io/v1/TigeraStatus --api-versions policy/v1 --api-versions policy/v1/PodDisruptionBudget --api-versions policy/v1beta1 --api-versions policy/v1beta1/PodDisruptionBudget --api-versions policy/v1beta1/PodSecurityPolicy --api-versions rbac.authorization.k8s.io/v1 --api-versions rbac.authorization.k8s.io/v1/ClusterRole --api-versions rbac.authorization.k8s.io/v1/ClusterRoleBinding --api-versions rbac.authorization.k8s.io/v1/Role --api-versions rbac.authorization.k8s.io/v1/RoleBinding --api-versions scheduling.k8s.io/v1 --api-versions scheduling.k8s.io/v1/PriorityClass --api-versions snapshot.storage.k8s.io/v1 --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1/VolumeSnapshotContent --api-versions snapshot.storage.k8s.io/v1beta1 --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshot --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotClass --api-versions snapshot.storage.k8s.io/v1beta1/VolumeSnapshotContent --api-versions storage.k8s.io/v1 --api-versions storage.k8s.io/v1/CSIDriver --api-versions storage.k8s.io/v1/CSINode --api-versions storage.k8s.io/v1/StorageClass --api-versions storage.k8s.io/v1/VolumeAttachment --api-versions storage.k8s.io/v1beta1 --api-versions storage.k8s.io/v1beta1/CSIStorageCapacity --api-versions v1 --api-versions v1/ConfigMap --api-versions v1/Endpoints --api-versions v1/Event --api-versions v1/LimitRange --api-versions v1/Namespace --api-versions v1/Node --api-versions v1/PersistentVolume --api-versions v1/PersistentVolumeClaim --api-versions v1/Pod 
--api-versions v1/PodTemplate --api-versions v1/ReplicationController --api-versions v1/ResourceQuota --api-versions v1/Secret --api-versions v1/Service --api-versions v1/ServiceAccount --include-crds` failed exit status 1: Error: open /tmp/74d737ea-efd0-42a6-abcf-1d4fea4e40ab/moon2/values.yml: no such file or directory

Note that both application.yml and values.yml are located in the same directory on my local machine, i.e. the structure of the two files in question looks like this:

.
├── application.yml
└── values.yml

Any help please?


2 Answers

BEST ANSWER

The cleanest way to achieve what you want is to use the remote chart as a dependency:

Chart.yaml

name: mychartname
version: 1.0.0
apiVersion: v2
dependencies:
  - name: moon2
    version: "2.4.0"
    repository: "https://charts.aerokube.com/"

And override its values like this:

values.yaml

moon2:
  customIngress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt"   
    ingressClassName: nginx
    host: moon3.benighil-mohamed.com
    tls:
    - secretName: moon-tls
      hosts:
      - moon3.benighil-mohamed.com
  configs:
    default:
      containers:
        vnc-server:
          repository: quay.io/aerokube/vnc-server
          resources:
            limits:
              cpu: 400m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 512Mi

Pay attention to this file: you need to create a top-level key in your values file with the same name as the dependency (moon2 in your case), and indent the values you want to override one level underneath it.

You need to push both of these files to a Git repository and point your Argo CD Application's repoURL (and path) at that repository.
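For illustration, here is a minimal sketch of what the Application could look like once it points at such a repository. The repoURL, targetRevision and path are placeholders for wherever you push the umbrella chart (e.g. a layout like charts/moon-umbrella/ containing Chart.yaml and values.yaml), not values taken from the question:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: moon
  namespace: argocd
spec:
  project: aerokube
  source:
    # Hypothetical Git repository and path holding Chart.yaml and values.yaml
    repoURL: https://github.com/your-org/your-deployments.git
    targetRevision: main
    path: charts/moon-umbrella
    helm:
      valueFiles:
      - values.yaml
  destination:
    server: "https://kubernetes.default.svc"
    namespace: moon1
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

Because values.yaml now sits next to Chart.yaml inside the path that Argo CD checks out, the valueFiles reference resolves, instead of pointing at a file that does not exist inside the packaged upstream chart.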

This has the advantage that whenever the upstream Helm chart gets updated, all you need to do is bump the version in Chart.yaml.

SECOND ANSWER

This can also be achieved without an umbrella chart if you are fine with defining the values inline in the Argo CD Application.

Basically, you can check the file below into your Git repo and then configure Argo CD to track this file/Application. Whenever you have to update the application in the future, you can edit the inline values directly in this file.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: moon
  namespace: argocd
spec:
  project: aerokube
  source:
    chart: moon2
    repoURL: https://charts.aerokube.com/
    targetRevision: 2.4.0
    helm:
      values: |
        customIngress:
          enabled: true
          annotations:
            cert-manager.io/cluster-issuer: "letsencrypt"   
          ingressClassName: nginx
          host: moon3.benighil-mohamed.com
          tls:
          - secretName: moon-tls
            hosts:
            - moon3.benighil-mohamed.com
        configs:
          default:
            containers:
              vnc-server:
                repository: quay.io/aerokube/vnc-server
                resources:
                  limits:
                    cpu: 400m
                    memory: 512Mi
                  requests:
                    cpu: 200m
                    memory: 512Mi
  destination:
    server: "https://kubernetes.default.svc"
    namespace: moon1
  syncPolicy:
    syncOptions:
      - CreateNamespace=true

This comes down to personal preference if you do not want an umbrella chart in your Git repo. However, I would personally go with the umbrella chart deployment.
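As a side note, and only if your Argo CD version supports it (helm.valuesObject is a relatively recent addition to the Application spec, so check your installation before relying on it), the same inline overrides can be written as a structured map instead of an embedded string. A sketch of the spec.source block, under that assumption:

source:
  chart: moon2
  repoURL: https://charts.aerokube.com/
  targetRevision: 2.4.0
  helm:
    # valuesObject takes the overrides as plain YAML instead of a string block
    valuesObject:
      customIngress:
        enabled: true
        ingressClassName: nginx
        host: moon3.benighil-mohamed.com
      configs:
        default:
          containers:
            vnc-server:
              repository: quay.io/aerokube/vnc-server

This should be functionally equivalent to the values: | block above; it just avoids the string embedding and keeps the overrides editable as plain YAML.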