How to disable JSON format and send only the log message to Sumo Logic with Fluent Bit?


We are using Fluent Bit as a sidecar container in our ECS Fargate cluster, which runs a .NET application. Initially we had the problem of Fluent Bit splitting logs across multiple lines, and we solved that with Fluent Bit's multiline feature. Now the multiline logs reach Sumo Logic correctly; however, they arrive as JSON records, whereas we want Fluent Bit to send only the raw log line.

The logs currently look like this:

{
  "date": 1675120653.269619,
  "container_id": "xvgbertytyuuyuyu",
  "container_name": "XXXXXXXXXX",
  "source": "stdout",
  "log": "2023-01-30 23:17:33.269Z DEBUG [.NET ThreadPool Worker] Connection.ManagedDbConnection - ComponentInstanceEntityAsync - Executing stored proc: dbo.prcGetComponentInstance"
}

We want only the line:

2023-01-30 23:17:33.269Z DEBUG [.NET ThreadPool Worker] Connection.ManagedDbConnection - ComponentInstanceEntityAsync - Executing stored proc: dbo.prcGetComponentInstance

There are 2 answers below.

Answer 1:
When using Fluent Bit's CloudWatch output plugin, the default behavior is to send the entire log record to CloudWatch. However, you can specify a specific key name to send only the value of that key to CloudWatch. This can be useful when you want to extract a specific field from your log record and send it as the log message to CloudWatch.

For example, if you are using the Fluentd Docker log driver, you can specify log_key log in the configuration for the CloudWatch output plugin. This will instruct Fluent Bit to extract the value of the log key from the log record and send it as the log message to CloudWatch.

Here's an example configuration snippet:

[OUTPUT]
  Name cloudwatch
  Match *
  region ${REGION}
  log_group_name ${LOG_GROUP_NAME}
  log_stream_name ${LOG_PREFIX}${STAMP}
  log_key log

In the above configuration, the log_key option is set to log, indicating that only the value of the log key will be sent as the log message to CloudWatch. Make sure ${REGION}, ${LOG_GROUP_NAME}, ${LOG_PREFIX}, and ${STAMP} resolve to the appropriate values for your AWS region, CloudWatch log group, and log stream name.

By specifying the log_key option, Fluent Bit will extract the value of the specified key from the log record and send it as the log message, allowing you to customize the format of the logs sent to CloudWatch.
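
If your Fluent Bit sidecar is wired up through FireLens (the usual setup for an ECS Fargate task), the same log_key option can also be passed through the task definition's logConfiguration instead of a custom config file. The snippet below is only a sketch assuming the aws cloudwatch plugin; the region, log group name, and stream prefix are placeholders to adjust for your setup:

{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "cloudwatch",
      "region": "us-east-1",
      "log_group_name": "my-log-group",
      "log_stream_prefix": "ecs/",
      "log_key": "log"
    }
  }
}

FireLens turns the keys under options into the generated [OUTPUT] section, so this is effectively equivalent to the configuration shown above.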

Answer 2:

You need to modify the Fluent Bit configuration to include the following filters and output configuration:

fluent.conf:

## prepare headers for Sumo Logic
[FILTER]
    Name record_modifier
    Match *
    Record headers.content-type text/plain

## Set headers as headers attribute
[FILTER]
    Name nest
    Match *
    Operation nest
    Wildcard headers.*
    Nest_under headers
    Remove_prefix headers.

[OUTPUT]
    Name             http
    ...
    # use log key as body
    body_key         $log
    # use headers key as headers
    headers_key      $headers

That way, you craft the HTTP request manually. Note that this sends one request per log record, which is not necessarily a good idea. To mitigate that, you can add the following parser and use it (flush_timeout may need adjustment):

parsers.conf:

# merge everything as one big log
[MULTILINE_PARSER]
    name          multiline-all
    type          regex
    flush_timeout 500
    #
    # Regex rules for multiline parsing
    # ---------------------------------
    #
    # configuration hints:
    #
    #  - first state always has the name: start_state
    #  - every field in the rule must be inside double quotes
    #
    # rules |   state name  | regex pattern                  | next state
    # ------|---------------|--------------------------------------------
    rule      "start_state"   ".*"                             "cont"
    rule      "cont"          ".*"                             "cont"

fluent.conf:

[INPUT]
    name              tail
    ...
    multiline.parser  multiline-all
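
For completeness, here is roughly what the filled-in http output could look like when pointing at a Sumo Logic hosted HTTP source. This is only a sketch: the Host, Port, and URI (including the collector token) are placeholders for your own collector endpoint, and the body_key/headers_key lines are the same ones shown above:

[OUTPUT]
    Name             http
    Match            *
    # placeholder endpoint of your Sumo Logic HTTP source collector
    Host             collectors.sumologic.com
    Port             443
    URI              /receiver/v1/http/XXXXXXXXXXXX
    tls              On
    # use log key as body
    body_key         $log
    # use headers key as headers
    headers_key      $headers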