Version: 5.11.0

Manually Set Up Fluentd for AWS Elasticsearch

This guide offers step-by-step instructions for manually configuring Fluentd with AWS Elasticsearch. It starts with cluster creation in AWS, moves through domain and node configuration, and outlines security measures via Kibana. It also covers enabling audit logs and setting up a data retention policy, making it a comprehensive resource for integrating Fluentd with AWS Elasticsearch.

Cluster Creation

Prerequisites:

  • A three-character identifier for the cluster (e.g., mhi)
  • A VPC associated with the cluster

Instructions:

  1. Navigate to Domain Creation

    a. Open the AWS Console, then go to Amazon OpenSearch Service (formerly Elasticsearch Service).

    b. Click on Domains > Create Domain.

  2. Domain Configuration

    a. Domain Name: structsure-dev2 (rename as necessary)

    b. Custom Endpoint: Disabled

    c. Deployment Type: Production

    d. Elasticsearch Version: 7.10

    e. Compatibility Mode: Enabled

    f. Auto-Tune: Enabled

  3. Node Settings

    a. Data Nodes:

    b. Availability Zones: Choose either 3-AZ or 2-AZ based on availability.

    c. Instance Type: r6g.large.search

    d. Number of Nodes: 2 (1 per AZ)

    e. Storage Type: EBS

    f. EBS Volume: General Purpose (SSD), 200 GB (can be resized later)

  4. Master Nodes

    a. Instance Type: r6g.large.search

    b. Number of Master Nodes: 3

  5. Additional Settings

    a. UltraWarm Data Nodes: Disabled

    b. Snapshot Frequency: Hourly

  6. Network Configuration

    a. VPC Access: Choose the VPC that is associated with your cluster.

    b. Subnets: Select a subnet from each Availability Zone (AZ).

    c. Security Groups: Set inbound rule for HTTPS.

    d. Rule Type: inbound

    e. Type: HTTPS

    f. Protocol: TCP

    g. Port: 443

    h. Source: 10.0.0.0/8

  7. Access Control

    a. Fine-Grained Access Control: Enabled

    b. Master Username: elastic

    c. Master Password: Generate randomly and store in secrets.enc.yaml

  8. Other Options

    a. SAML Authentication: Disabled

    b. Amazon Cognito Authentication: Disabled

    c. Encryption: All types enabled

    d. AWS KMS Key: Use AWS owned key (TODO: Investigate custom KMS key)

    e. Tags: None (TODO: Add tags)

  9. Create Cluster: The cluster should be ready in 15-30 minutes.
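
Cluster provisioning status can also be checked from the AWS CLI. The following is a hedged sketch, assuming the AWS CLI is configured for the correct account and region, and that the domain name matches the one chosen in step 2:

# Returns true while the domain is still being provisioned, false once it is ready.
aws es describe-elasticsearch-domain \
  --domain-name structsure-dev2 \
  --query 'DomainStatus.Processing'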

Connecting to the Elasticsearch Domain

To connect to the Elasticsearch domain, append the Elasticsearch endpoint suffix for your AWS region to the NO_PROXY and no_proxy environment variables:

export NO_PROXY=$NO_PROXY,.us-gov-west-1.es.amazonaws.com
export no_proxy=$no_proxy,.us-gov-west-1.es.amazonaws.com

To verify access:

curl -u "$ES_USER:$ES_PASS" "$ES_ENDPOINT/_cat/health?v"

Adding Users to Elasticsearch

If additional users are required, add them through Kibana by logging in as the break glass user.

  1. Access the security dashboard by going to Open Distro for Elasticsearch > Security.

  2. Navigate to Internal Users and create a new user.

    a. Specify a new user and password.

    b. Leave the backend roles and attributes blank.

    c. Save the user.

  3. Navigate to Roles.

    a. Click on the all_access role, then go to the Mapped users tab.

    b. Click on Manage mapping and add the new user to the list of users associated with the role.

    c. Save the mapping to allow the user access to all Kibana dashboards and objects.

    d. Repeat the process with the security_manager role to allow the user to add or modify other users.
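
The same user creation and role mapping can be scripted against the security plugin's REST API instead of the UI. The sketch below is a hedged example only: it reuses the $ES_ENDPOINT, $ES_USER, and $ES_PASS variables from the verification step above, and newuser and its password are hypothetical placeholders.

# Create an internal user (the security plugin hashes the supplied password server-side).
curl -u "$ES_USER:$ES_PASS" -X PUT \
  "$ES_ENDPOINT/_opendistro/_security/api/internalusers/newuser" \
  -H 'Content-Type: application/json' \
  -d '{"password": "CHANGE_ME", "backend_roles": [], "attributes": {}}'

# Append the new user to the all_access role mapping (assumes the mapping already
# has a users list; repeat against security_manager if that role is also needed).
curl -u "$ES_USER:$ES_PASS" -X PATCH \
  "$ES_ENDPOINT/_opendistro/_security/api/rolesmapping/all_access" \
  -H 'Content-Type: application/json' \
  -d '[{"op": "add", "path": "/users/-", "value": "newuser"}]'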

The Fluentd user requires the logstash role to perform its functions, but the role must be modified slightly to match the index patterns used by our logs.

  1. Access the security dashboard by going to Open Distro for Elasticsearch > Security.

  2. Navigate to Roles.

  3. Click on the logstash role, then click Edit role.

  4. Under Index permissions, expand the existing permission for logstash-*.

  5. Add a new index pattern, logs-*, alongside the existing one for logstash-*.

  6. Save the updates to the role.
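
A quick way to confirm the change took effect is to index a throwaway document with the credentials of the account mapped to the logstash role. This is a hedged sketch; $FLUENTD_USER and $FLUENTD_PASS are placeholders for that account, and the index name is only an example.

# A 201 response indicates the logstash role now covers the logs-* pattern.
curl -u "$FLUENTD_USER:$FLUENTD_PASS" -X POST \
  "$ES_ENDPOINT/logs-fluentd-test/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"message": "role verification"}'

The test index can be removed afterward with the master user (DELETE $ES_ENDPOINT/logs-fluentd-test).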

AWS Elasticsearch Audit Logs

Enabling audit logging for Kibana in AWS ES is a two-part process.

AWS ES Console Steps

  1. Access AWS Console.

  2. Browse to the domain in Amazon OpenSearch or AWS Elasticsearch.

  3. Select the Logs tab.

  4. Enable Audit logs under CloudWatch Logs.

  5. Accept the default options, or select an existing log group if one was previously created.
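
The same setting can be applied with the AWS CLI. This is a hedged sketch: it assumes the CloudWatch log group already exists with a resource policy that allows es.amazonaws.com to write to it, and the domain name, account ID, and log group ARN are placeholders.

aws es update-elasticsearch-domain-config \
  --domain-name structsure-dev2 \
  --log-publishing-options '{
    "AUDIT_LOGS": {
      "Enabled": true,
      "CloudWatchLogsLogGroupArn": "arn:aws-us-gov:logs:us-gov-west-1:123456789012:log-group:/aws/aes/domains/structsure-dev2/audit-logs"
    }
  }'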

Kibana Steps

  1. Access Kibana through a web interface.

  2. Expand the drop-down to the left.

  3. Select Security, found under Open Distro for Elasticsearch.

  4. Select Audit logs.

  5. Enable audit logging.

  6. Under General settings, remove all disabled categories in the REST layer.

  7. Under General settings, enable the Transport layer and remove any disabled categories.
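
If the security plugin version on the domain exposes the audit configuration REST API (an assumption; not all versions do), the settings saved in Kibana can be read back with the master user:

curl -u "$ES_USER:$ES_PASS" "$ES_ENDPOINT/_opendistro/_security/api/audit?pretty"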

Verification

  1. Ensure log groups and streams are created.

  2. Ensure login events are logged: log in to Kibana, then validate with a CloudWatch Logs Insights query similar to the one below:

fields @timestamp, @message
| filter @message like "structsure.ctr"
| sort @timestamp desc
| limit 20
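
The same query can also be run from the AWS CLI with CloudWatch Logs Insights. A hedged sketch; the log group name is a placeholder for the audit log group created earlier, and the date arithmetic assumes GNU date:

QUERY_ID=$(aws logs start-query \
  --log-group-name "/aws/aes/domains/structsure-dev2/audit-logs" \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | filter @message like "structsure.ctr" | sort @timestamp desc | limit 20' \
  --query 'queryId' --output text)

aws logs get-query-results --query-id "$QUERY_ID"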

Data Retention

In AWS ES, we will define a data retention policy. The following guidance is a starting point: data remains on the hot data nodes for 30 days from index creation before being purged. We may implement a hot-warm-cold strategy in the future, but that will require UltraWarm nodes and S3 integration.

In Kibana, perform the following actions:

  1. Expand the drop-down to the left.

  2. Locate Index Management under Open Distro for Elasticsearch.

  3. When the State Management Policies page displays, select Create policy.

  4. Copy and paste the following, and provide a policy_id name:

{
  "policy": {
    "policy_id": "hot_delete",
    "description": "Hot delete workflow after 30 days",
    "last_updated_time": 1643917969280,
    "schema_version": 1,
    "error_notification": null,
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          {
            "state_name": "warm",
            "conditions": {
              "min_index_age": "3d"
            }
          }
        ]
      },
      {
        "name": "warm",
        "actions": [
          {
            "force_merge": {
              "max_num_segments": 1
            }
          },
          {
            "index_priority": {
              "priority": 0
            }
          }
        ],
        "transitions": [
          {
            "state_name": "delete",
            "conditions": {
              "min_index_age": "30d"
            }
          }
        ]
      },
      {
        "name": "delete",
        "actions": [
          {
            "delete": {}
          }
        ],
        "transitions": []
      }
    ],
    "ism_template": [
      {
        "index_patterns": [
          "logs-*"
        ],
        "priority": 0,
        "last_updated_time": 1640713486551
      }
    ]
  }
}
  5. Determine whether any existing indices should be associated with the policy, and apply it if necessary (see the sketch below).

  6. Verify that newly created indices are associated with the policy.
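
Both checks can also be performed with the ISM API rather than Kibana. A hedged sketch, assuming the policy_id chosen above is hot_delete and reusing the master credentials from earlier:

# Attach the policy to existing indices that match the pattern (skip if they were
# already associated through Kibana).
curl -u "$ES_USER:$ES_PASS" -X POST \
  "$ES_ENDPOINT/_opendistro/_ism/add/logs-*" \
  -H 'Content-Type: application/json' \
  -d '{"policy_id": "hot_delete"}'

# Show which policy each matching index is managed by, including newly created
# indices picked up through the ism_template.
curl -u "$ES_USER:$ES_PASS" "$ES_ENDPOINT/_opendistro/_ism/explain/logs-*?pretty"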