A python/boto tool to automatically enable logs on all my ELB load balancers – Part 1

The problem I am trying to solve

I like to have logging turned on for my ELB load balancers. In today’s post, I am focusing on the older generation of AWS load balancers, called ELB. If a cyber security breach happens, I want logs so I can find out the extent of the breach. The task at hand is to turn on logging for all of my ELBs. The problem is that I have many ELBs, across many different AWS accounts, and doing this configuration manually is not fun. The goal is to automate. The stretch goal is to have not only a program that handles today’s ELBs, but also automation in place that adds the logging configuration to newly created ELBs.

I need to walk before I can run

It seems pretty obvious that one should know how to do something manually before one automates it. Let’s see what I have to do to use the AWS console to enable the logs on an ELB.

  1. Log into the AWS console account (One does need an AWS account for this)
  2. In the services, search for EC2
  3. On the left pane, navigate to Load Balancers
  4. (I will assume one already has one or more Load Balancers)
  5. Click the load balancer to see the details.

I then click the button to configure the access logs.

Check the Enable access logs box, enter a bucket name, and let AWS create that bucket on your behalf. That’s it, done – that was easy. So what is the fuss all about? Well, it’s not automated. It does not scale. It is a one-shot deal.

The S3 Bucket

Now that I know how to set up ELB logging via the AWS console, I will examine the S3 bucket AWS created for me next. Search for S3 in the console, then choose buckets.

The UI should look something like the above. Click on mycoolbucket to look at it. Since I am going to have to create this bucket programmatically, examining the configuration AWS generated is a good idea. Click on the bucket, and choose the Permissions tab.

I give the Load Balancer permission to write to the bucket: I allow the 03367799424:root user (an AWS-owned account number that varies by region) to write to my bucket, and only under a specific path. Without this security setup, I won’t get logs. Because I used the AWS console, AWS set up the bucket policy for me automagically, but I will have to do this myself when I create my own buckets via scripting. I show the full policy, including a statement that permits S3 server access logs, in the terragrunt.hcl listing further down in this post.
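Since I will have to do this myself when scripting, here is a minimal boto3 sketch of attaching such a policy. The bucket name is just an example, and 127311923021 is the AWS-owned ELB account for us-east-1 (the same one that appears in the terragrunt listing below); this is a sketch, not the exact policy the console generates.

# Minimal sketch: attach an ELB access-log bucket policy with boto3.
# 127311923021 is the AWS-owned ELB account for us-east-1; it differs per region.
import json
import boto3

s3 = boto3.client("s3")
bucket = "mycoolbucket"  # example bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "For-Elb-Logs",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::127311923021:root"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))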

Note that the system logs every request going to each ELB, and this can add up to a lot of data over time. Someone will have to pay for the S3 storage. Fortunately, one can set a lifecycle policy to automatically delete objects older than X days from the bucket and keep only the latest logs.
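A minimal boto3 sketch of such an expiration rule could look like the following; the bucket name, rule id, and 10-day retention are illustrative values, and the terragrunt listing further down sets up the same thing declaratively.

# Minimal sketch: expire old log objects after 10 days.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="mycoolbucket",  # example bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-elb-logs",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 10},
            }
        ]
    },
)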

If one goes through the settings, one can have the bucket contents encrypted at rest by choosing one’s own customer-managed KMS key. Don’t choose this setting, because ELBs cannot write access logs to buckets encrypted with a KMS key at the time of writing. (Mistake #1 for me.)
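If in doubt about an existing bucket, a quick boto3 check along the following lines (the bucket name is illustrative) shows whether SSE-KMS default encryption is configured:

# Minimal sketch: check whether a bucket has SSE-KMS default encryption,
# which would prevent the classic ELB from delivering access logs to it.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    enc = s3.get_bucket_encryption(Bucket="mycoolbucket")  # example bucket name
    for rule in enc["ServerSideEncryptionConfiguration"]["Rules"]:
        algo = rule["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
        if algo == "aws:kms":
            print("Bucket uses SSE-KMS; ELB access-log delivery will fail.")
except ClientError as e:
    if e.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
        print("No default encryption configured on this bucket.")
    else:
        raise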

Creating the bucket programmatically

After looking at the bucket permissions, I need to think about how best to create a bucket via some sort of scripting (i.e., NOT the AWS console).

Here I have many options:

  • Terraform
  • AWS Cloud Formation
  • Terragrunt (a terraform wrapper)
  • AWS SDK with a programming language of choice (Python, Go, .NET, Node.js, Ruby, etc.)
  • AWS CLI and bash
  • Other infrastructure-as-code tools, such as Pulumi and many more.

The requirement is that I create and configure one log bucket before the main program is executed. Without going into detailed explanations, I will show a terragrunt approach.  Terragrunt uses terraform, and typically terraform modules; here I use cloudposse/terraform-aws-s3-bucket, which I found on GitHub. With terraform, one could use this module directly.

 

terraform {
  source = "${format("%s%s", dirname(get_parent_terragrunt_dir()), "/..//modules/terraform-aws-s3-bucket")}"
}

include "root" {
  path = find_in_parent_folders()
}

dependencies {
  paths = ["../kms-key"]
}

dependency "kms-key" {
  config_path = "../kms-key"
}

locals {
  env_vars = yamldecode(
    file("${find_in_parent_folders("environment.yaml")}"),
  )
}

inputs = {
  bucket_name                      = "mycooolbucketforlogs"
  enable_cloud_trail_bucket_policy = false
  versioning_enabled               = false
  acl                              = "private"
  s3_replication_enabled           = false
  transfer_acceleration_enabled    = false
  bucket_key_enabled               = false # NO ENCRYPTION
  sse_algorithm                    = "aws:kms"
  kms_master_key_arn               = dependency.kms-key.outputs.key_arn
  force_destroy                    = true

  lifecycle_configuration_rules = [
    {
      enabled = true
      id      = "v2rule"

      abort_incomplete_multipart_upload_days = 3

      filter_and = null
      expiration = {
        days = 10
      }

      noncurrent_version_expiration = {
        newer_noncurrent_versions = 1  # integer > 0
        noncurrent_days           = 10 # integer >= 0
      }

      transition                    = []
      noncurrent_version_transition = []
    }
  ]

  policy = jsonencode(
  {
    "Version": "2012-10-17",
    "Id": "AccessLogs-Policy",
    "Statement": [
        {
            "Sid": "For-Elb-Logs",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::127311923021:root"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mycooolbucketforlogs/*"
        },
        {
            "Sid": "S3PolicyStmt-DO-NOT-MODIFY-1669647431450",
            "Effect": "Allow",
            "Principal": {
                "Service": "logging.s3.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mycooolbucketforlogs/*"
        }
    ]
  })
}

 

Even if one doesn’t know the syntax in detail, one can see the sort of instructions I provide for creating the bucket. Note that I embed the JSON bucket policy to specify bucket access permissions. One then runs terragrunt apply (which requires some additional configuration for the tool itself), and Terragrunt creates the bucket with the correct policy and lifecycle.

Setting ELB configuration via python/boto

After all the preliminaries, I can now finally look at some code to help automate the ELB logging. Amazon provides a Python library called boto (boto3 in its current incarnation), which wraps the Amazon REST API. Using this library is very convenient, and it is relatively easy to use if one knows a bit of Python. I won’t get into the details of how to set up Python, but let’s look at the code.

import argparse
import datetime

import boto3  # used for the AWS session and API clients
from botocore.exceptions import ClientError  # used in enable_elb_logging() below


def main():
    print(datetime.datetime.now().strftime("%c") +
          " : This program checks and fixes logging configurations for Buckets & ELBs")
    args = parse_arguments()
    print(datetime.datetime.now().strftime("%c") + " : Process started for ELB Logging")
    enable_elb_logging(args)


def parse_arguments():
    parser = argparse.ArgumentParser(description="Fix ELB Logs")
    parser.add_argument("-r", "--region", type=str, default="us-east-1",
                        help="The default region is us-east-1.")
    parser.add_argument("-l", "--logbucket", help="Specify log bucket", type=str, default="")
    parser.add_argument("-p", "--profile", help='aws profile', type=str, default="default")
    return parser.parse_args()


if __name__ == "__main__":
    main()

This first bit is just about setting up main() and the command-line arguments. I need to pass in the region, the bucket name, and the name of an AWS profile (from ~/.aws/credentials) if I am testing this program locally, outside of the cluster. If I am running inside the cluster, I can use an IAM role to give my program the right to enumerate and change ELBs.

The next bit is the core of what I want to do, and one can see the code below:

def enable_elb_logging(args):
    elb_names = []
    bucket_name = args.logbucket
    session = get_session(args)
    elb_client = session.client('elb')
    try:
        response = elb_client.describe_load_balancers()
        for item in response['LoadBalancerDescriptions']:
            elb_names.append(item['LoadBalancerName'])
        # Fetch and update attribute of each LB
        for lb in elb_names:
            print(datetime.datetime.now().strftime("%c") + " : Found Elastic Load Balancer: " + lb)
            att = elb_client.describe_load_balancer_attributes(LoadBalancerName=lb)
            if not att['LoadBalancerAttributes']['AccessLog']['Enabled']:
                print(datetime.datetime.now().strftime("%c") + " : Access Logs Not enabled, Need Action")
                elb_client.modify_load_balancer_attributes(
                    LoadBalancerName=lb, LoadBalancerAttributes={'AccessLog': {'Enabled': True,
                                                                               'S3BucketName': bucket_name,
                                                                               'EmitInterval': 60,
                                                                               'S3BucketPrefix': ""}})
                print(datetime.datetime.now().strftime("%c") +
                      " : Successfully enabled access logs for %s at location %s" % (lb, bucket_name))
            else:
                print(datetime.datetime.now().strftime("%c") + " : AccessLogs already enabled for ELB: %s " % lb)
    except ClientError as e:
        error_code = e.response['Error']['Code']
        print(datetime.datetime.now().strftime("%c") + " : Couldn't get elb. Error code: %s" % error_code)
        raise

The program gets a list of ELBs in the region from the API. For each one, it fetches that ELB’s attributes and checks whether it already logs to an S3 bucket; if not, it changes the ELB attributes to enable logging to the specified bucket.
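One thing the listing does not show is the get_session() helper it calls. A minimal sketch of it, assuming it simply builds a boto3 session from the profile and region arguments, could look like this:

# Minimal sketch of the get_session() helper used above (an assumption, since
# the original listing omits it). Inside the cluster, the default credential
# chain (IAM role) is used instead of a named profile.
import boto3


def get_session(args):
    if args.profile and args.profile != "default":
        return boto3.Session(profile_name=args.profile, region_name=args.region)
    return boto3.Session(region_name=args.region)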

Partial Automation

This gets me to a state that I would call ‘partially automated’. I now have a script that, when run, lists my ELBs and configures logging. I call this partially automated because I have to remember to run the program from time to time. If I don’t run it periodically, newly created ELBs may not have logging turned on.

Nginx and ELBs

I just want to give an example where the system creates ELBs indirectly and automatically. Automated tools may lack the configuration options needed to turn logging on when they create an ELB. When I use the Nginx ingress controller, Nginx creates an ELB for each running instance. The whole ingress path looks like this: Internet (using a DNS CNAME) -> ELB -> Nginx reverse proxy -> service -> pod -> container in the pod. When I deploy Nginx, I can control ELB attributes via annotations, but the level of control unfortunately does not allow specifying that the ELB shall log to mycoolllogbucket.

Kubernetes Cronjobs to achieve full automation

The idea is simple. I would like Kubernetes to launch a ‘job’, i.e. a pod, on a 12-hour cycle on my behalf. Think of a Kubernetes job as running a container and keeping the log of that run. If I run the program with the Kubernetes scheduler via a CronJob, I have a fire-and-forget solution: I no longer have to worry about new ELBs that are not logging to S3. Creating Kubernetes CronJobs for this scenario is complex enough to warrant a separate post. The next post, part 2, will continue the discussion.
