I've deployed a MinIO server on Kubernetes with cdk8s, using minio1
as the service ID and exposing port 9000.
I expected to be able to reach it at http://minio1:9000, but the MinIO server is unreachable from both Prometheus and the other instances in my namespace (Loki, Mimir, etc.). Is there a specific configuration I missed to enable access within the network? The server starts without errors, so it sounds like a networking issue.
I'm starting the server this way:
command: ["minio"],
args: [
  "server",
  "/data",
  "--address",
  ":9000",
  "--console-address",
  ":9001",
],
I patched the Kubernetes configuration to expose both 9000 and 9001:
const d = ApiObject.of(minioDeployment);
// Create the empty port list
d.addJsonPatch(JsonPatch.add("/spec/template/spec/containers/0/ports", []));
// Add the console port
d.addJsonPatch(
  JsonPatch.add("/spec/template/spec/containers/0/ports/0", {
    name: "console",
    containerPort: 9001,
  })
);
// Add the S3 API ("bucket") port; this must be JsonPatch.add, not
// JsonPatch.replace, because index 1 does not exist yet
d.addJsonPatch(
  JsonPatch.add("/spec/template/spec/containers/0/ports/1", {
    name: "bucket",
    containerPort: 9000,
  })
);
Could this be related to the multi-port configuration? Or is there a way to explicitly define the hostname for the service so it becomes accessible within the Kubernetes namespace?
Here are the Deployment and Service manifests generated by cdk8s:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    use-default-egress-policy: "true"
  name: minio1
  namespace: ns-monitoring
spec:
  minReadySeconds: 0
  progressDeadlineSeconds: 600
  replicas: 1
  selector:
    matchExpressions: []
    matchLabels:
      cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  template:
    metadata:
      labels:
        cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
    spec:
      automountServiceAccountToken: true
      containers:
        - args:
            - server
            - /data/minio/
            - --address
            - :9000
            - --console-address
            - :9001
          command:
            - minio
          env:
            - name: MINIO_ROOT_USER
              value: userminio
            - name: MINIO_ROOT_PASSWORD
              value: XXXXXXXXXXXXXX
            - name: MINIO_BROWSER
              value: "on"
            - name: MINIO_PROMETHEUS_AUTH_TYPE
              value: public
          image: minio/minio
          imagePullPolicy: Always
          name: minio1-docker
          ports:
            - containerPort: 9001
              name: console
            - containerPort: 9000
              name: bucket
          securityContext:
            privileged: false
            readOnlyRootFilesystem: false
            runAsNonRoot: false
          volumeMounts:
            - mountPath: /data
              name: data
      dnsConfig:
        nameservers: []
        options: []
        searches: []
      dnsPolicy: ClusterFirst
      hostAliases: []
      initContainers: []
      restartPolicy: Always
      securityContext:
        fsGroupChangePolicy: Always
        runAsNonRoot: false
        sysctls: []
      setHostnameAsFQDN: false
      volumes:
        - emptyDir: {}
          name: data
---
apiVersion: v1
kind: Service
metadata:
  labels:
    use-default-egress-policy: "true"
  name: minio1
  namespace: ns-monitoring
spec:
  externalIPs: []
  ports:
    - port: 443
      targetPort: 9001
  selector:
    cdk8s.deployment: stack-minio1-minio1-deployment-c8e6c44f
  type: ClusterIP
As @user2311578 suggested, exposing a port at the container level does not automatically make it available on the Service. You must also declare the port in the Service spec when you want to reach it through the Service (and its virtual IP). In your generated manifest, the Service only maps port 443 to the console (9001); the S3 API port 9000 is never exposed, so http://minio1:9000 cannot work.
Also check the GitHub link for more information.
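As a sketch of the fix, the Service could declare both ports so the S3 API becomes reachable at http://minio1:9000 within the namespace (port names and namespace taken from the manifests above; note the selector must match the pod template label exactly, i.e. the `monitoring-stack-...` value from the Deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio1
  namespace: ns-monitoring
spec:
  type: ClusterIP
  selector:
    cdk8s.deployment: monitoring-stack-minio1-minio1-deployment-c8e6c44f
  ports:
    - name: bucket      # S3 API, forwards to the containerPort named "bucket"
      port: 9000
      targetPort: 9000
    - name: console     # web console, forwards to the containerPort named "console"
      port: 9001
      targetPort: 9001
```

Pods in the same namespace can then use http://minio1:9000, and pods in other namespaces can use minio1.ns-monitoring.svc.cluster.local:9000.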