Rate limiting AWS CloudWatch logs ingestion

I am using the awslogs logging driver for my docker-compose stack. On Friday night, the EC2 instance ran out of disk space, and a NodeJS server that depends on a Redis server started repeatedly throwing the error `BRPOPLPUSH ReplyError: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk.`, since Redis could no longer write to disk. I fixed this by freeing up space on the server, but in the meantime the awslogs driver had shipped the repeated error for about a day and a half, to the tune of 250+ GB, costing me ~$150 in data ingestion charges.
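For context, the logging section of my compose file looks roughly like this (image, group, region, and stream names below are placeholders, not my real values):

```yaml
services:
  node-server:
    image: my-node-server            # placeholder image
    logging:
      driver: awslogs
      options:
        awslogs-region: us-east-1    # placeholder region
        awslogs-group: my-app-logs   # placeholder log group
        awslogs-stream: node-server  # placeholder stream name
```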

While this is resolved for now, is there a way to rate-limit AWS CloudWatch Logs ingestion, either through the awslogs driver or through the CloudWatch agent? I use the CloudWatch agent to watch files for changes in many projects, and this incident makes me worried the same thing could happen in any of them.
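In those projects the agent config is roughly like the sketch below (paths and names are placeholders), and as far as I can tell nothing in it lets me cap ingestion volume:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/my-app/*.log",
            "log_group_name": "my-app-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```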

The only solution I have found so far is SegmentIO’s rate-limiting-proxy, described here. However, the project is no longer maintained, and without documentation it seems fairly complicated to work out how to compose the pieces.