Send Fluent-bit log streams from On-Premise K8s Cluster to AWS CloudWatch


I am here after 10 days of struggling to achieve this. All I need is to send application logs from an on-premise K8s cluster to AWS CloudWatch using Fluent Bit.

Another complication is that I am on a KairOS VM, which is immutable, so I cannot download or install anything the way I would on a regular Linux/Windows machine; for example, I cannot even set up awscli2. In short, there are a lot of limitations on the K8s cluster.

So, what did I do? I installed Fluent Bit on the K8s cluster using its Helm chart and edited the ConfigMap to add an [OUTPUT] for the cloudwatch_logs plugin. The changes are below.

[FILTER]
    Name   kubernetes
    Match  kube.*
    Kube_Tag_Prefix kube.var.log.containers.
    Kube_URL        https://kubernetes.default.svc:443
    Kube_CA_File    /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token

[OUTPUT]
    Name cloudwatch_logs
    Match   *
    region ap-south-1
    log_group_name /aws/my_cloudwatch_group
    log_stream_prefix from-fluent-bit-
    auto_create_group On

After restarting the pod, I can see in the pod logs (kubectl logs) that log streams are being generated. Up to here I am good.
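For context, the installation itself was just the standard Fluent Bit Helm chart, roughly the commands below (the namespace, release name, and resource names come from my setup and the chart defaults, so they may differ for you):

helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit -n logging --create-namespace

# then edit the ConfigMap generated by the chart to add the [OUTPUT] above
kubectl -n logging edit configmap fluent-bit
kubectl -n logging rollout restart daemonset/fluent-bit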

Below are the questions I am struggling with.

  1. How do I connect from the on-premise K8s cluster to the AWS CloudWatch service without using awscli2?
    • What I tried: I set up the ~/.aws/credentials and ~/.aws/config files on the KairOS VM; these files store the access key ID, secret access key, and region (MFA is disabled in AWS, so no session token is used).

    • Note that these files exist on the local KairOS VM. The problem with this method is that when I restart the pod, the pod logs say [aws-credentials] the file ~/.aws/credentials does not exist. I was surprised to see that error message.

    • Later I realized it expects the credentials file to be available inside the pod's container. So I did kubectl exec into the pod, created the files there, and surprisingly it worked. But since it is not a good idea to keep plain-text credentials inside the pod, what is the best way to authenticate the on-prem cluster to AWS for this? The keys are under the default profile in these files. (A sketch of the Secret-based approach I am considering is shown after the pod-logs note below.)

Pod logs (why is it looking for the credentials file inside the pod's container? We cannot keep it there): [screenshot of the Fluent Bit pod logs showing the credentials error]
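For example, is this the right direction? A minimal sketch of what I am considering (the key values are placeholders, and the exact values.yaml key for extra environment variables may differ between chart versions): store the keys in a Kubernetes Secret and inject them into the Fluent Bit DaemonSet as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, which the cloudwatch_logs plugin should pick up from the environment.

kubectl -n logging create secret generic aws-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=REPLACE_ME \
  --from-literal=AWS_SECRET_ACCESS_KEY=REPLACE_ME

# values.yaml override for the fluent-bit Helm chart
env:
  - name: AWS_ACCESS_KEY_ID
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: AWS_ACCESS_KEY_ID
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: aws-credentials
        key: AWS_SECRET_ACCESS_KEY
  - name: AWS_REGION
    value: ap-south-1

helm upgrade fluent-bit fluent/fluent-bit -n logging -f values.yaml

At least that would keep the keys out of the ConfigMap and out of manual kubectl exec, though I understand long-lived access keys are still not ideal.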

  2. Another approach I tried was AWS IAM Roles Anywhere with a CA certificate, but I am unable to set up aws_signing_helper to obtain credentials; the KairOS VM restrictions block it. (What I was attempting is sketched below.)
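For reference, the setup I was aiming for with Roles Anywhere is roughly the standard credential-helper configuration below (all ARNs and file paths are placeholders); the blocker is simply that I cannot get the helper binary onto the KairOS VM:

# ~/.aws/config
[default]
credential_process = aws_signing_helper credential-process --certificate /path/to/client-cert.pem --private-key /path/to/client-key.pem --trust-anchor-arn arn:aws:rolesanywhere:ap-south-1:111122223333:trust-anchor/EXAMPLE --profile-arn arn:aws:rolesanywhere:ap-south-1:111122223333:profile/EXAMPLE --role-arn arn:aws:iam::111122223333:role/fluent-bit-cloudwatch

As far as I can tell the helper also has a serve mode that could run as a sidecar container next to Fluent Bit, but I have not been able to try that either.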

So what is the right way to connect an on-prem cluster to AWS? I have an IAM role and policy created in AWS, but how do I actually use them from the on-prem cluster?
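For example, I noticed the cloudwatch_logs output appears to accept a role_arn option for assuming a role via STS, something like the snippet below (the ARN is a placeholder), but as I understand it that still requires base credentials from somewhere, which brings me back to question 1:

[OUTPUT]
    Name cloudwatch_logs
    Match *
    region ap-south-1
    log_group_name /aws/my_cloudwatch_group
    log_stream_prefix from-fluent-bit-
    auto_create_group On
    # role to assume; the ARN below is a placeholder
    role_arn arn:aws:iam::111122223333:role/fluent-bit-cloudwatch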

"Sorry for lengthy question! I had to add this much to explain the problem"

Note: if you feel the KairOS VM restrictions are stopping you from answering, please drop them from the question and consider the case of a plain Linux machine without awscli2 installed; how would I achieve all of this there? Thanks!!
