Securing AWS with native services Part I: AWS Config & Terraform

Through 2025, 99% of cloud security failures will be the customer’s fault, according to Gartner. Infrastructure misconfiguration remains the top cause of data breaches in the cloud.

In Amazon Web Services, this is where the AWS Config service comes into play.

Simplified overview of core AWS Config concepts

The AWS Config Concepts page provides a detailed but also complex definition of the AWS Config terminology. In this article, I’ll attempt to simplify it and cover only the core and most important concepts and functionalities of this tool. Finally, I’ll demonstrate how AWS Config can be terraformed for a single-region deployment, or modularised for a multi-account/multi-region setup.

The mechanism that targets and assesses the AWS resources (such as RDS databases, S3 buckets, EC2 instances, and EC2 security groups) is known as the configuration recorder. You can think of the configuration recorder as the brain of the AWS Config service.

The configuration recorder assesses the AWS resources based on some configuration rules. The configuration rules can be either custom (Lambda functions written by you) or managed (provided by Amazon).
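In Terraform, the recorder is a small resource plus an IAM role that the AWS Config service assumes. A minimal sketch (role and resource names here are placeholders, not taken from the article's gists):

```hcl
# IAM role that the AWS Config service assumes to read resource configurations.
resource "aws_iam_role" "config" {
  name = "aws-config-recorder-role" # placeholder name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "config.amazonaws.com" }
    }]
  })
}

# AWS-managed policy granting Config read access to supported resources.
resource "aws_iam_role_policy_attachment" "config" {
  role       = aws_iam_role.config.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWS_ConfigRole"
}

# The "brain": records configuration changes of all supported resource types.
resource "aws_config_configuration_recorder" "this" {
  name     = "default"
  role_arn = aws_iam_role.config.arn

  recording_group {
    all_supported                 = true
    include_global_resource_types = true
  }
}
```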

An example of a managed configuration rule is the rds-storage-encrypted rule, which “Checks whether storage encryption is enabled for your RDS DB instances”. A full list of the managed AWS Config rules can be found here.
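A managed rule is referenced by its AWS source identifier. A hedged sketch, assuming a configuration recorder resource named `aws_config_configuration_recorder.this` is defined elsewhere:

```hcl
resource "aws_config_config_rule" "rds_storage_encrypted" {
  name = "rds-storage-encrypted"

  source {
    owner             = "AWS"                   # managed rule, provided by Amazon
    source_identifier = "RDS_STORAGE_ENCRYPTED" # checks RDS storage encryption
  }

  # Rules can only be created once the configuration recorder exists.
  depends_on = [aws_config_configuration_recorder.this]
}
```

For a custom rule, `owner` would be `"CUSTOM_LAMBDA"` and the `source` block would point at your Lambda function's ARN instead.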

A misconfigured resource is simply a resource that does not comply with a config rule.

Apart from stand-alone rules, AWS provides the so-called conformance packs, i.e. packs of rules. As before, you can either define your own custom conformance pack or use an existing one. For example, there is a conformance pack for HIPAA compliance: a set of rules that assesses to what extent the company’s infrastructure configuration is HIPAA-compliant.
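A conformance pack is deployed from a YAML template. A sketch, assuming the AWS sample template for the HIPAA pack has been saved locally (the file path is a placeholder) and a recorder named `aws_config_configuration_recorder.this` already exists:

```hcl
resource "aws_config_conformance_pack" "hipaa" {
  name = "operational-best-practices-for-hipaa-security"

  # AWS publishes sample conformance pack templates; the local file path
  # below is an assumption — adjust it to wherever you saved the template.
  template_body = file("${path.module}/templates/hipaa-security.yaml")

  depends_on = [aws_config_configuration_recorder.this]
}
```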

Let’s sum the concepts up. The configuration recorder assesses the AWS resources against some config rules (custom or managed), which can optionally be grouped into a conformance pack.

Among the dozens of terms that AWS Config has, these are the only ones you will need in practice.

AWS Config stores its findings in an S3 bucket and can also publish them to an SNS topic. Both happen through a mechanism called the delivery channel.
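The delivery channel ties the recorder to these destinations. A sketch with placeholder bucket and topic names (the bucket's policy must allow `config.amazonaws.com` to write to it):

```hcl
resource "aws_config_delivery_channel" "this" {
  name           = "default"
  s3_bucket_name = "my-config-findings-bucket" # placeholder bucket name
  sns_topic_arn  = aws_sns_topic.config.arn    # optional notification stream

  snapshot_delivery_properties {
    delivery_frequency = "TwentyFour_Hours" # how often snapshots are delivered
  }

  # The channel can only be created once the configuration recorder exists.
  depends_on = [aws_config_configuration_recorder.this]
}
```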

Terraforming AWS Config in a single region

In this example we are going to terraform a configuration recorder that simply deploys the conformance pack “Operational Best Practices for AWS Well-Architected (WA) Security Pillar” in a single region.

Everything is quite straightforward, except for the aws_config_aggregate_authorization resource. You can ignore it for now; it will be explained later.
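The original gist is not reproduced here; a hedged sketch of such a single-region setup (resource, file, and variable names are assumptions, and a recorder and delivery channel are assumed to be defined alongside) might look like:

```hcl
# Enable the recorder (a delivery channel must already exist).
resource "aws_config_configuration_recorder_status" "this" {
  name       = aws_config_configuration_recorder.this.name
  is_enabled = true
  depends_on = [aws_config_delivery_channel.this]
}

# Deploy the Well-Architected Security Pillar conformance pack.
resource "aws_config_conformance_pack" "wa_security_pillar" {
  name = "operational-best-practices-for-wa-security-pillar"
  # File path is an assumption — point it at the AWS sample template.
  template_body = file("${path.module}/templates/wa-security-pillar.yaml")
  depends_on    = [aws_config_configuration_recorder.this]
}

# Authorise the central (SOC) account's aggregator to collect this
# account's findings — explained in the multi-account section below.
resource "aws_config_aggregate_authorization" "soc" {
  account_id = var.soc_account_id
  region     = var.soc_region
}
```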

Naturally, the respective variables must be declared in a variables.tf file.
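For instance, the authorisation of the SOC aggregator needs variables such as the following (names are assumptions, not taken from the original gist):

```hcl
# variables.tf — variable names are placeholders
variable "soc_account_id" {
  description = "Account ID of the central SOC account that aggregates findings"
  type        = string
}

variable "soc_region" {
  description = "Region in which the SOC account's aggregator lives"
  type        = string
}
```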

Multi-account setup

The AWS Config service is regional. That means there must be one configuration recorder per region, for every AWS account. On the other hand, the S3 bucket in which the findings are stored can be unique and live in a different account/region. It is a best practice to keep this bucket in the SOC AWS account, in the central region.

In that context, we will introduce one last concept: the configuration aggregator. An aggregator is a component of AWS Config responsible for aggregating data from multiple configuration recorders (which can exist in multiple regions and accounts).
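On the SOC side, the aggregator can be sketched as follows (the name and the variable holding the list of monitored accounts are assumptions):

```hcl
# In the SOC account: aggregate findings from all source accounts/regions.
resource "aws_config_configuration_aggregator" "soc" {
  name = "soc-aggregator"

  account_aggregation_source {
    account_ids = var.source_account_ids # list of monitored account IDs
    all_regions = true                   # pull data from every region
  }
}
```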

The source accounts must authorise the aggregator to collect their data. The SOC account’s aggregator sends an invite (gist below), and the source accounts accept it through the aws_config_aggregate_authorization resource (line 42 of the previous gist).

The previous Terraform code has been set up in a way that it can act as a Terraform root module. The variables.tf and aws_config_root_module.tf snippets can be modularised in a single folder and invoked by any account with the following definition:
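A module invocation along these lines (the module path, account ID, and region are placeholders):

```hcl
module "aws_config" {
  source = "../modules/aws_config" # placeholder path to the module folder

  soc_account_id = "123456789012" # placeholder SOC account ID
  soc_region     = "eu-west-1"    # placeholder central region
}
```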

As mentioned above, an S3 bucket and an SNS topic can also be set up in the central SOC account, in order to retain the findings for a longer period of time, centralise them, or send them to Slack through a custom Lambda function.
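A sketch of that central plumbing, with placeholder names and an arbitrary one-year retention (the lifecycle syntax below assumes AWS provider v4+):

```hcl
# Central findings bucket in the SOC account (name is a placeholder).
resource "aws_s3_bucket" "config_findings" {
  bucket = "soc-aws-config-findings"
}

# Retain findings for a longer period, e.g. one year.
resource "aws_s3_bucket_lifecycle_configuration" "config_findings" {
  bucket = aws_s3_bucket.config_findings.id

  rule {
    id     = "expire-old-findings"
    status = "Enabled"
    filter {} # apply to all objects

    expiration {
      days = 365
    }
  }
}

# Topic to fan findings out, e.g. to a Slack-forwarding Lambda.
resource "aws_sns_topic" "config_findings" {
  name = "soc-aws-config-findings"
}
```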

Security Engineer