Auditing with Fluentd

This guide focuses on the current log collection path used by a10e-manager: enabling Fluentd and configuring it to archive environment logs to S3.

When Fluentd is enabled, a10e-manager automatically:

  • attaches a Fluentd sidecar to each provisioned environment
  • mounts a shared log volume at /var/log/app
  • sets LOG_OUTPUT_PATH=/var/log/app/application.log on the main application container
  • generates the Fluentd configuration for the enabled outputs
  • enriches collected logs with environment and request metadata before forwarding them

The result is a request-scoped log pipeline that captures structured environment logs and writes them to S3 for retention and downstream analysis.

This guide uses config.toml examples as the operator-facing configuration format. a10e-manager translates that configuration into the runtime settings used to provision the Fluentd sidecar and its outputs.

For each environment where Fluentd is enabled, the platform creates a sidecar that:

  • tails JSON log files from /var/log/app/*.log
  • exposes a Fluent forward listener on port 24224
  • adds environment metadata such as environment_id, environment_name, stage, app_name, container_id, resource_id, and ip_address
  • writes the enriched records to the configured outputs

For the S3 output, records are written as gzipped JSON objects and buffered on disk before upload.
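A generated sidecar configuration along these lines would implement the behavior above. This is an illustrative sketch only: the configuration a10e-manager actually emits may differ, and the metadata values shown in the record_transformer filter are placeholders for what the platform injects per environment.

```
# Tail JSON log files from the shared volume.
<source>
  @type tail
  path /var/log/app/*.log
  pos_file /var/log/app/fluentd.pos
  tag app.logs
  <parse>
    @type json
  </parse>
</source>

# Accept records over the forward protocol.
<source>
  @type forward
  port 24224
</source>

# Enrich records with environment metadata (placeholder values).
<filter app.logs>
  @type record_transformer
  <record>
    environment_id "env_123"
    environment_name "demo"
    stage "prod"
  </record>
</filter>

# Buffer on disk, then upload gzipped objects to S3.
<match app.logs>
  @type s3
  s3_bucket company-a10e-logs
  s3_region us-east-1
  path logs/
  store_as gzip
  <buffer>
    @type file
    path /var/log/fluentd/s3-buffer
    flush_interval 60s
  </buffer>
</match>
```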

The minimum config.toml needed to enable Fluentd with S3 archival is:

[fluentd]
enabled = true
[fluentd.s3]
enabled = true
bucket = "company-a10e-logs"


This is the recommended baseline for production use:

[fluentd]
enabled = true
image = "fluent/fluentd:v1.16-debian"
[fluentd.s3]
enabled = true
bucket = "company-a10e-logs"
region = "us-east-1"
prefix = "logs/"
flushInterval = "60s"

What each setting does:

  • [fluentd].enabled: enables Fluentd sidecar creation
  • [fluentd].image: Fluentd image used for the sidecar
  • [fluentd.s3].enabled: enables the S3 output
  • [fluentd.s3].bucket: destination bucket for archived logs
  • [fluentd.s3].region: AWS region for the bucket
  • [fluentd.s3].prefix: root prefix used before the generated per-tag and per-environment path
  • [fluentd.s3].flushInterval: upload window used by Fluentd’s S3 buffer

If the runtime already has AWS permissions through its execution environment, you can leave the credential fields unset.

If you need to provide explicit AWS credentials, configure them as environment-backed values in your rendered config.toml. A common pattern is to template the file with environment variable placeholders before starting a10e-manager:

[fluentd.s3]
accessKeyId = "${AWS_ACCESS_KEY_ID}"
secretKey = "${AWS_SECRET_ACCESS_KEY}"

If you are using temporary AWS credentials, also set:

[fluentd.s3]
sessionToken = "${AWS_SESSION_TOKEN}"

The full S3 credential surface is:

  • [fluentd.s3].accessKeyId
  • [fluentd.s3].secretKey
  • [fluentd.s3].sessionToken

When S3 credentials are configured, a10e-manager passes them into the Fluentd runtime so uploads can authenticate to AWS.

If you use placeholder values like ${AWS_ACCESS_KEY_ID}, make sure your deployment process expands them before a10e-manager starts.

This example enables Fluentd and ships environment logs to S3 under the prod/ prefix:

[fluentd]
enabled = true
image = "fluent/fluentd:v1.16-debian"
[fluentd.s3]
enabled = true
bucket = "company-a10e-logs"
region = "us-west-2"
prefix = "prod/"
flushInterval = "60s"
accessKeyId = "${AWS_ACCESS_KEY_ID}"
secretKey = "${AWS_SECRET_ACCESS_KEY}"
sessionToken = "${AWS_SESSION_TOKEN}"

Fluentd writes logs beneath the configured prefix using a time- and environment-scoped layout. The generated path includes:

  • the configured prefix
  • the Fluentd tag
  • year, month, and day
  • the environment ID

In practice, objects are written under a structure like:

logs/app.logs/2026/03/23/env_123/
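The layout can be sketched as a small helper. This is illustrative only: the actual path is generated by a10e-manager, and the function name here is ours.

```python
from datetime import datetime, timezone

def s3_object_prefix(prefix: str, tag: str, environment_id: str,
                     when: datetime) -> str:
    # Illustrative reconstruction of the layout described above:
    # <prefix><tag>/<YYYY>/<MM>/<DD>/<environment_id>/
    return f"{prefix}{tag}/{when:%Y/%m/%d}/{environment_id}/"

print(s3_object_prefix("logs/", "app.logs", "env_123",
                       datetime(2026, 3, 23, tzinfo=timezone.utc)))
# → logs/app.logs/2026/03/23/env_123/
```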

Each uploaded object contains gzipped JSON log records.

The Fluentd sidecar enriches environment logs before they are written to S3. Records include fields such as:

  • timestamp
  • level
  • environment_id
  • environment_name
  • stage
  • app_name
  • container_id
  • collected_at
  • source
  • action_type
  • resource_type
  • resource_id
  • ip_address
  • metadata
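Downloaded objects can be decompressed and inspected with standard tooling. The sketch below assumes one JSON record per line, which is a common Fluentd S3 layout (confirm against your generated objects); it writes a tiny sample object locally so it is self-contained, with example field values.

```python
import gzip
import json

# Write a small sample object to demonstrate the format; real objects
# would be downloaded from S3 first. Field values are examples only.
sample = {"environment_id": "env_123", "level": "info",
          "action_type": "deploy", "collected_at": "2026-03-23T12:00:00Z"}
with gzip.open("sample.log.gz", "wt") as f:
    f.write(json.dumps(sample) + "\n")

def load_records(path: str):
    # Each archived object is gzipped JSON, assumed one record per line.
    with gzip.open(path, "rt") as f:
        return [json.loads(line) for line in f]

for rec in load_records("sample.log.gz"):
    print(rec["environment_id"], rec["level"], rec["action_type"])
# → env_123 info deploy
```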

This makes the S3 archive usable for audit review, incident response, and downstream ingestion into analytics or retention systems.

The same Fluentd integration can also write to:

  • PostgreSQL
  • Slack

Those outputs are controlled by separate Fluentd configuration blocks, but if your goal is log archival, the S3 settings above are the main configuration path.