I've created a deployment which exposes a custom metric through an endpoint and an APIService that registers this custom metric, so I can use it in an HPA to autoscale the deployment. To achieve this, I've followed this tutorial.
It worked well while using an apiregistration.k8s.io/v1beta1 APIService. The metric was exposed correctly and the HPA could read it and scale accordingly. I then tried updating the APIService to apiregistration.k8s.io/v1 (since v1beta1 is deprecated and removed in Kubernetes v1.22), but the HPA could no longer pick up the metric, failing with this message:
Message
-------
unable to get metric threatmessages: Service on test services-metrics-service/unable to fetch
metrics from custom metrics API: the server is currently unable to handle the request
(get services.custom.metrics.k8s.io services-metrics-service)
If I manually request the metric, it exists though:
kubectl get --raw /apis/custom.metrics.k8s.io/v1/namespaces/test/services/services-metrics-service/threatmessages |jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1",
  "metadata": {
    "selfLink": "custom.metrics.k8s.io/v1"
  },
  "items": [
    {
      "metricName": "threatmessages",
      "timestamp": "2021-02-09T14:43:39.321Z",
      "value": "0",
      "describedObject": {
        "kind": "Service",
        "namespace": "test",
        "name": "services-metrics-service",
        "apiVersion": "/v1"
      }
    }
  ]
}
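For completeness, individual fields can be pulled out of that MetricValueList with jq (already used above). A minimal sketch, assuming the response has been saved to a temporary file instead of being piped straight from the kubectl --raw call:

```shell
# Save the MetricValueList shown above to a file (normally this would come
# straight from "kubectl get --raw ... | jq").
cat > /tmp/metric-response.json <<'EOF'
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1",
  "items": [
    {
      "metricName": "threatmessages",
      "timestamp": "2021-02-09T14:43:39.321Z",
      "value": "0",
      "describedObject": {
        "kind": "Service",
        "namespace": "test",
        "name": "services-metrics-service"
      }
    }
  ]
}
EOF

# Extract just the metric name and its current value.
name=$(jq -r '.items[0].metricName' /tmp/metric-response.json)
value=$(jq -r '.items[0].value' /tmp/metric-response.json)
echo "$name=$value"   # prints threatmessages=0
```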
Here are my APIService and HPA resources:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: services-metrics-service
    namespace: test
    port: 443
  version: v1
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: services-parallel-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: services-parallel-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        kind: Service
        name: services-metrics-service
      metric:
        name: threatmessages
      target:
        type: AverageValue
        averageValue: 4k
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 30
      policies:
      - type: Pods
        value: 1
        periodSeconds: 30
What am I doing wrong? Or are these 2 versions just not compatible for some reason?
You can find the relevant information in the APIService 1.22 documentation.

First of all, v1 is available for the version you are using (v1.19). Secondly, and more importantly, it states "All existing persisted objects are accessible via the new API" and "No notable changes". This means that objects created using v1beta1 don't need to be updated or modified: they will remain available and working even though they were created using v1beta1. That is, after upgrading to v1.22 you should have no issue; the same objects will simply be accessible (and, I would think, accessed by the HPA) as if they had been created using v1. What is more, they may already be accessible as v1 in version 1.19, as I will explain next, so you can check now whether everything is fine.

I ran some quick tests on a v1.19 GKE cluster and found that if a manifest containing apiVersion: apiregistration.k8s.io/v1beta1 is applied (in fact, exactly what you provided in the question, except using v1beta1 as the apiVersion), and the created object is then retrieved with

$ kubectl get apiservices/v1.custom.metrics.k8s.io --output=json

the object comes back marked with apiVersion v1, not v1beta1 ("apiVersion": "apiregistration.k8s.io/v1"). And if apiVersion: apiregistration.k8s.io/v1 is used during creation instead, the same object is obtained. If either of them is deleted, the other one is gone. It is the same object behind the scenes, yet it is marked as v1.

As a result of all the above, you should simply revert to what you were doing when you deployed using v1beta1, given that it worked and will keep working according to the APIService 1.22 documentation. You can also run

$ kubectl get apiservices/v1.custom.metrics.k8s.io --output=json

(or either $ kubectl get APIService --output=json or $ kubectl get APIService.apiregistration.k8s.io --output=json) once the v1beta1 version is deployed, to confirm that the object is already marked as v1 behind the scenes despite having been created with v1beta1, as is happening in my case.

Creating objects with v1 is not necessary if they were already created with v1beta1.
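For reference, the reverted manifest would be identical to the v1 manifest in the question apart from the apiVersion line (a sketch reusing the question's names):

```yaml
apiVersion: apiregistration.k8s.io/v1beta1   # only this line differs from the v1 manifest
kind: APIService
metadata:
  name: v1.custom.metrics.k8s.io
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  service:
    name: services-metrics-service
    namespace: test
    port: 443
  version: v1
```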