I am using the awslogs logging driver for my docker-compose stack. On Friday night, the EC2 instance ran out of disk space, and a NodeJS server that depends on a Redis server started throwing the error "BRPOPLPUSH ReplyError: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk.", because Redis could no longer write to disk. I fixed the underlying issue by freeing up space on the server, but in the meantime the awslogs driver had logged the repeated error for a day and a half, amounting to 250+ GB and costing me ~$150 in data ingestion.
While this is resolved for now, is there a way to rate-limit AWS CloudWatch Logs ingestion, either through the awslogs driver or through the CloudWatch agent? I use the CloudWatch agent to watch files for changes in many projects, and I am worried about the same thing happening in any of them.
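For context, a minimal sketch of the kind of logging configuration I mean (the image name, region, and log group are placeholders, not my real values):

```yaml
services:
  node-server:
    image: node:18                       # placeholder image
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1        # placeholder region
        awslogs-group: my-app-logs       # placeholder log group
        awslogs-create-group: "true"
        # Generic Docker logging options; these bound local buffering,
        # not how much data is ultimately ingested by CloudWatch:
        mode: non-blocking
        max-buffer-size: 4m
```

As far as I can tell, options like non-blocking mode and a bounded buffer size only affect log delivery behavior on the host; I have not found anything that caps the total volume sent to CloudWatch.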
Source: Docker Questions