What is @aws-cdk/aws-logs?
@aws-cdk/aws-logs is an AWS Cloud Development Kit (CDK) library that allows you to define and manage AWS CloudWatch Logs resources using code. It provides a high-level, object-oriented abstraction to create and manage log groups, log streams, and metric filters, among other CloudWatch Logs features.
What are @aws-cdk/aws-logs's main functionalities?
Create a Log Group
This code sample demonstrates how to create a new CloudWatch Log Group with a specified retention period using the @aws-cdk/aws-logs package.
const logs = require('@aws-cdk/aws-logs');
const cdk = require('@aws-cdk/core');
const app = new cdk.App();
const stack = new cdk.Stack(app, 'MyStack');
new logs.LogGroup(stack, 'MyLogGroup', {
  logGroupName: '/aws/my-log-group',
  retention: logs.RetentionDays.ONE_WEEK
});
app.synth();
Create a Log Stream
This code sample demonstrates how to create a new CloudWatch Log Stream within an existing Log Group using the @aws-cdk/aws-logs package.
const logs = require('@aws-cdk/aws-logs');
const cdk = require('@aws-cdk/core');
const app = new cdk.App();
const stack = new cdk.Stack(app, 'MyStack');
const logGroup = new logs.LogGroup(stack, 'MyLogGroup', {
  logGroupName: '/aws/my-log-group',
  retention: logs.RetentionDays.ONE_WEEK
});
new logs.LogStream(stack, 'MyLogStream', {
  logGroup: logGroup,
  logStreamName: 'my-log-stream'
});
app.synth();
Create a Metric Filter
This code sample demonstrates how to create a Metric Filter for a CloudWatch Log Group that filters log events containing the term 'ERROR' and increments a custom metric in CloudWatch.
const logs = require('@aws-cdk/aws-logs');
const cdk = require('@aws-cdk/core');
const app = new cdk.App();
const stack = new cdk.Stack(app, 'MyStack');
const logGroup = new logs.LogGroup(stack, 'MyLogGroup', {
  logGroupName: '/aws/my-log-group',
  retention: logs.RetentionDays.ONE_WEEK
});
new logs.MetricFilter(stack, 'MyMetricFilter', {
  logGroup: logGroup,
  metricNamespace: 'MyNamespace',
  metricName: 'MyMetric',
  filterPattern: logs.FilterPattern.allTerms('ERROR'),
  metricValue: '1'
});
app.synth();
Other packages similar to @aws-cdk/aws-logs
winston-cloudwatch
winston-cloudwatch is a transport for the winston logging library that allows you to send log messages to AWS CloudWatch Logs. Unlike @aws-cdk/aws-logs, which is used for defining and managing CloudWatch Logs resources, winston-cloudwatch is used for sending log data to CloudWatch Logs from your application.
bunyan-cloudwatch
bunyan-cloudwatch is a stream for the Bunyan logging library that sends log records to AWS CloudWatch Logs. Similar to winston-cloudwatch, it focuses on sending log data to CloudWatch Logs rather than managing the log resources themselves.
log4js-cloudwatch-appender
log4js-cloudwatch-appender is an appender for the log4js logging library that sends log events to AWS CloudWatch Logs. It is used for integrating log4js with CloudWatch Logs to send log data, in contrast to @aws-cdk/aws-logs, which is used for infrastructure management.
Amazon CloudWatch Logs Construct Library
This library supplies constructs for working with CloudWatch Logs.
Log Groups/Streams
The basic unit of CloudWatch Logs is a Log Group. Every log group typically has the
same kind of data logged to it, in the same format. If there are multiple
applications or services logging into the Log Group, each of them creates a new
Log Stream.
Every log operation creates a "log event", which can consist of a simple string
or a single-line JSON object. JSON objects have the advantage that they afford
more filtering abilities (see below).
The only configurable attribute for log groups is the retention period, which
determines how long the log events they contain are kept before they expire and
are deleted.
If not supplied, the default retention period is 2 years, but it can be set to
one of the values in the RetentionDays enum to configure a different retention
period (including infinite retention).
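For example, a log group whose events are retained for one week could be declared as follows (a minimal sketch, assuming LogGroup and RetentionDays are imported from @aws-cdk/aws-logs):
new LogGroup(this, 'LogGroup', {
  retention: RetentionDays.ONE_WEEK,
});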
Subscriptions and Destinations
Log events matching a particular filter can be sent to either a Lambda function
or a Kinesis stream.
If the Kinesis stream lives in a different account, a CrossAccountDestination
object needs to be added in the destination account which will act as a proxy
for the remote Kinesis stream. This object is automatically created for you
if you use the CDK Kinesis library.
Create a SubscriptionFilter, initialize it with an appropriate Pattern (see
below) and supply the intended destination:
const fn = new lambda.Function(this, 'Lambda', { ... });
const logGroup = new LogGroup(this, 'LogGroup', { ... });
new SubscriptionFilter(this, 'Subscription', {
  logGroup,
  destination: new LogsDestinations.LambdaDestination(fn),
  filterPattern: FilterPattern.allTerms("ERROR", "MainThread")
});
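If the destination is a Kinesis stream instead, the same shape applies. A minimal sketch, assuming the @aws-cdk/aws-kinesis and @aws-cdk/aws-logs-destinations packages are imported as kinesis and LogsDestinations (the stream and filter names are illustrative):
// Forward every log event in the group to a Kinesis stream in the same account.
const stream = new kinesis.Stream(this, 'SubscriptionStream');
new SubscriptionFilter(this, 'KinesisSubscription', {
  logGroup,
  destination: new LogsDestinations.KinesisDestination(stream),
  filterPattern: FilterPattern.allEvents()
});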
Metric Filters
CloudWatch Logs can extract and emit metrics based on a textual log stream.
Depending on your needs, this may be a more convenient way of generating metrics
for your application than making calls to CloudWatch Metrics yourself.
A MetricFilter either emits a fixed number every time it sees a log event
matching a particular pattern (see below), or extracts a number from the log
event and uses that as the metric value.
Example:
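A minimal sketch that extracts a latency value from JSON log events (the namespace, metric name, and $.latency field are illustrative):
new MetricFilter(this, 'MetricFilter', {
  logGroup,
  metricNamespace: 'MyApp',
  metricName: 'Latency',
  // Matches events that have a latency field and emits its value as the metric.
  filterPattern: FilterPattern.exists('$.latency'),
  metricValue: '$.latency'
});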
Remember that if you want to use a value from the log event as the metric value,
you must mention it in your pattern somewhere.
A very simple MetricFilter can be created by using the logGroup.extractMetric()
helper function:
logGroup.extractMetric('$.jsonField', 'Namespace', 'MetricName');
This will extract the value of jsonField wherever it occurs in JSON-structured
log records in the LogGroup, and emit the values to CloudWatch Metrics under
the name Namespace/MetricName.
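The helper also returns the corresponding Metric object, so the extracted metric can be reused elsewhere in the stack, for example in an alarm. A sketch, assuming the @aws-cdk/aws-cloudwatch package is imported as cloudwatch (the alarm name and threshold are illustrative):
const metric = logGroup.extractMetric('$.jsonField', 'Namespace', 'MetricName');
new cloudwatch.Alarm(this, 'JsonFieldTooHigh', {
  metric,
  threshold: 100,
  evaluationPeriods: 3
});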
Patterns
Patterns describe which log events match a subscription or metric filter. There
are three types of patterns:
- Text patterns
- JSON patterns
- Space-delimited table patterns
All patterns are constructed by using static functions on the FilterPattern
class.
In addition to the patterns above, the following special patterns exist:
- FilterPattern.allEvents(): matches all log events.
- FilterPattern.literal(string): if you already know what pattern expression to use, this function takes a string and will use that as the log pattern. For more information, see the Filter and Pattern Syntax.
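For instance, a known CloudWatch Logs pattern expression can be passed through verbatim (the expression shown here is illustrative):
const everything = FilterPattern.allEvents();
const accessDenied = FilterPattern.literal('{ $.errorCode = "AccessDenied" }');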
Text Patterns
Text patterns match if the literal strings appear in the text form of the log
line.
- FilterPattern.allTerms(term, term, ...): matches if all of the given terms (substrings) appear in the log event.
- FilterPattern.anyTerm(term, term, ...): matches if any of the given terms (substrings) appears in the log event.
- FilterPattern.anyTermGroup([term, term, ...], [term, term, ...], ...): matches if all of the terms in any of the groups (specified as arrays) match. This is an OR match across groups.
Examples:
const pattern1 = FilterPattern.allTerms('ERROR', 'MainThread');
const pattern2 = FilterPattern.anyTermGroup(
  ['ERROR', 'MainThread'],
  ['WARN', 'Deadlock'],
);
JSON Patterns
JSON patterns apply if the log event is the JSON representation of an object
(without any other characters, so it cannot include a prefix such as timestamp
or log level). JSON patterns can make comparisons on the values inside the
fields.
- Strings: the comparison operators allowed for strings are = and !=. String values can start or end with a * wildcard, as shown in the sketch below.
- Numbers: the comparison operators allowed for numbers are =, !=, <, <=, > and >=.
Fields in the JSON structure are identified by referring to the complete object as $
and then descending into it, such as $.field or $.list[0].field.
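For example, field addressing and the * wildcard can be combined in a string comparison (the field name is illustrative):
// Matches events whose component field starts with "Http".
const httpComponents = FilterPattern.stringValue('$.component', '=', 'Http*');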
- FilterPattern.stringValue(field, comparison, string): matches if the given field compares as indicated with the given string value.
- FilterPattern.numberValue(field, comparison, number): matches if the given field compares as indicated with the given numerical value.
- FilterPattern.isNull(field): matches if the given field exists and has the value null.
- FilterPattern.notExists(field): matches if the given field is not in the JSON structure.
- FilterPattern.exists(field): matches if the given field is in the JSON structure.
- FilterPattern.booleanValue(field, boolean): matches if the given field is exactly the given boolean value.
- FilterPattern.all(jsonPattern, jsonPattern, ...): matches if all of the given JSON patterns match. This makes an AND combination of the given patterns.
- FilterPattern.any(jsonPattern, jsonPattern, ...): matches if any of the given JSON patterns match. This makes an OR combination of the given patterns.
Example:
const pattern = FilterPattern.all(
  FilterPattern.stringValue('$.component', '=', 'HttpServer'),
  FilterPattern.any(
    FilterPattern.booleanValue('$.error', true),
    FilterPattern.numberValue('$.latency', '>', 1000)
  ));
Space-delimited table patterns
If the log events are rows of a space-delimited table, this pattern can be used
to identify the columns in that structure and add conditions on any of them. The
canonical example where you would apply this type of pattern is Apache server
logs.
Text that is surrounded by "..." quotes or [...] square brackets will be treated
as one column.
- FilterPattern.spaceDelimited(column, column, ...): constructs a SpaceDelimitedTextPattern object with the indicated columns. The columns map one-by-one to the columns found in the log event. The string "..." may be used to specify an arbitrary number of unnamed columns anywhere in the name list (but may only be specified once).
After constructing a SpaceDelimitedTextPattern, you can use the following two
members to add restrictions:
- pattern.whereString(field, comparison, string): add a string condition. The rules are the same as for JSON patterns.
- pattern.whereNumber(field, comparison, number): add a numerical condition. The rules are the same as for JSON patterns.
Multiple restrictions can be added on the same column; they must all apply.
Example:
const pattern = FilterPattern.spaceDelimited('time', 'component', '...', 'result_code', 'latency')
  .whereString('component', '=', 'HttpServer')
  .whereNumber('result_code', '!=', 200);
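Restrictions on the same column also combine with AND. A sketch with illustrative column names, keeping only rows whose latency falls within a range:
const latencyPattern = FilterPattern.spaceDelimited('time', 'component', '...', 'result_code', 'latency')
  .whereNumber('latency', '>=', 100)
  .whereNumber('latency', '<', 1000);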