I have a Consul cluster running on AKS, deployed with Helm. When I first deploy it, it boots up and runs fine. After some time, though, the cluster crashes with this error:

2024-02-01T19:16:01.629Z [WARN] agent.leaf-certs: handling error in Manager.Notify: error="rpc error making call: ACL not found" index=1
I suspect something is wrong in my Helm values file. Here it is:
```yaml
# Enable the Consul Web UI
ui:
  enabled: true
  service:
    type: LoadBalancer # Expose the Consul UI with a LoadBalancer

# Enable Connect injection
connectInject:
  enabled: true

# Sync Kubernetes services with Consul
syncCatalog:
  enabled: true

# Enable the Consul agent on Kubernetes nodes
client:
  enabled: true
  updateStrategy: |
    type: RollingUpdate
  grpc: true

# Enable DNS on Kubernetes
dns:
  enabled: true

# Set gossip key
global:
  gossipEncryption:
    key: base64key()
    secretName: consul-gossip-encryption-key
    secretKey: encryptionKey
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: consul-bootstrap-acl-token
      secretKey: token
    # tokens:
    #   static:
    #     - name: "UI Token"
    #       kubernetesSecret:
    #         secretName: "consul-ui-token"
    #         secretKey: "token"
  tls:
    enabled: true
    verify: false
    enableAutoEncrypt: false
    httpsOnly: false
    caCert:
      secretName: consul-ca-cert
      secretKey: tls.crt
    caKey:
      secretName: consul-ca-key
      secretKey: tls.key

# Configure default settings for services
serviceDefaults:
  protocol: "http"

# Use default settings for the Consul server
server:
  # serverCert:
  #   secretName: consul-server-cert
  #   secretKey: tls.crt
  # serverKey:
  #   secretName: consul-server-cert
  #   secretKey: tls.key
  replicas: 3
  bootstrapExpect: 3
  updatePartition: 0
  exposeService:
    enable: true
    type: NodePort
    nodePort:
      https: 8501
      http: 8500
      rpc: 8300
  disruptionBudget:
    enabled: true
    maxUnavailable: 1
```
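For reference, this is roughly how I created the secrets that the values file refers to (the secret names match the file above; `<acl-token>` is a placeholder for the real bootstrap token, and I generate the gossip key with `openssl` rather than `consul keygen`):

```shell
# Generate a 32-byte, base64-encoded gossip key
# (same shape as the output of `consul keygen`).
GOSSIP_KEY=$(openssl rand -base64 32)

# Secret referenced by global.gossipEncryption
kubectl create secret generic consul-gossip-encryption-key \
  --from-literal=encryptionKey="$GOSSIP_KEY"

# Secret referenced by global.acls.bootstrapToken
kubectl create secret generic consul-bootstrap-acl-token \
  --from-literal=token=<acl-token>
```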
I tried updating the services manually, but even after restarting the servers the same error appears and the cluster will not boot up again.
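Is there a way to confirm whether the bootstrap token stored in the secret is still valid on the servers? I tried something along these lines (`consul-server-0` is one of my server pods from the deployment above), but I'm not sure it's the right check:

```shell
# Read the bootstrap token back out of the Kubernetes secret...
BOOT_TOKEN=$(kubectl get secret consul-bootstrap-acl-token \
  -o jsonpath='{.data.token}' | base64 -d)

# ...and ask a server whether that token still resolves.
# "ACL not found" here would mean the token no longer exists
# in Consul's state.
kubectl exec consul-server-0 -- \
  consul acl token read -self -token="$BOOT_TOKEN"
```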