Rotate Nginx logs · zip and transfer them to an AWS S3 bucket and remove the zip files from the server

Steps

  • Rotate Nginx logs daily/weekly as required, and also by file size
  • Compress the logs, named by date
  • Move the compressed files to AWS S3 under a folder named after the AWS instance ID (if you are using an AWS EC2 instance); otherwise you can move them to any directory you need in S3
  • Remove the files from the server once they have been copied to S3

Step 1 & 2: Rotate Nginx Logs daily/weekly as required and also by file size

Log in as root on the server where the Nginx logs need to be rotated, then create the file

vi /etc/logrotate.d/nginx 

and add the content below

/var/log/nginx/*log {
	create 0644 nginx nginx
	weekly
	missingok
	dateext
	dateformat _%Y-%m-%d
	rotate 10
	maxsize 500M
	notifempty
	compress
	delaycompress
	sharedscripts
	postrotate
		/bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
		/bin/bash /path/to_your_backup_script/logs_backup.sh
	endscript
}

Note that maxsize is used rather than size: size makes logrotate ignore the weekly schedule entirely, while maxsize rotates weekly or whenever the file exceeds 500M, whichever comes first.

Note that I used /var/log/nginx/*log above, which applies to all Nginx log files (access logs, error logs, and per-domain logs). You can change the pattern to target specific logs.
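A quick way to see what the *log glob matches, using hypothetical filenames in a temporary directory (a sketch; the filenames are made up):

```shell
# Create a few hypothetical filenames to show what the *log glob matches.
dir=$(mktemp -d)
touch "$dir/access.log" "$dir/error.log" "$dir/example.com.access.log" "$dir/nginx.conf"
# *log matches any name ending in "log", so nginx.conf is excluded.
matches=$(cd "$dir" && ls *log)
echo "$matches"
rm -rf "$dir"
```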

Also, in the postrotate block I used /run/nginx.pid as the Nginx PID file. This path may differ in your installation; check the pid directive in **/etc/nginx/nginx.conf**.

As you can see, I used a shell script called logs_backup.sh. This script performs all the actions of Steps 3 & 4. Before jumping to the contents of that script, you need to download another shell script that fetches the EC2 instance ID (this is an optional step, needed only if your server is an EC2 instance).

Optional Step if server is an AWS EC2 instance

Download the script and give execute permission

$ wget http://s3.amazonaws.com/ec2metadata/ec2-metadata
$ chmod u+x ec2-metadata

Test that it returns the instance ID by executing the command below

$ ./ec2-metadata -i

Output should be as below

instance-id: i-03xxxxxxxxxxxxx

Step 3 & 4: Create a file “logs_backup.sh” in a directory and copy below contents

vi /path/to_your_backup_script/logs_backup.sh

and add the contents below

#!/bin/bash
source=/var/log/nginx
# Fetch the instance ID once, outside the loop, and strip the
# "instance-id: " prefix from the ec2-metadata output.
instance_details=$(/path/to_your_backup_script/ec2-metadata -i)
find_str="instance-id: "
instance_id=${instance_details//$find_str/}
# nullglob makes the loop body skip entirely when no .gz files exist.
shopt -s nullglob
for file in "${source}"/*.gz
do
   if [ -f "${file}" ]; then
      /usr/local/bin/aws s3 cp "${file}" "s3://your_bucket/WSNginxLogs/${instance_id}/"
      rm -f "${file}"
   fi
done
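The ${variable//pattern/replacement} expansion used in the script strips the "instance-id: " prefix from the ec2-metadata output. A minimal standalone illustration (the instance ID below is a made-up placeholder):

```shell
# Simulated `ec2-metadata -i` output; the instance ID is a placeholder.
instance_details="instance-id: i-0123456789abcdef0"
find_str="instance-id: "
# Replace every occurrence of the prefix with an empty string.
instance_id=${instance_details//$find_str/}
echo "$instance_id"
```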

After saving the file, we need to install the AWS CLI and configure the access key and secret key used to access our S3 bucket.

To install the AWS CLI on your system, please check the AWS documentation.

After the AWS CLI is installed on your system, execute the steps below to configure the access key and secret key:

$ aws configure
AWS Access Key ID [None]: AAABBBCCCDDDEEEFFFGG
AWS Secret Access Key [None]: aaabbbcccdddeeefffggghhhiiijjjkkklllmmmn
Default region name [None]: us-east-1
Default output format [None]: json
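If you prefer not to store credentials via aws configure, the AWS CLI also reads standard environment variables. A sketch using the same placeholder values as above (not real keys):

```shell
# Alternative to `aws configure` (sketch): the AWS CLI reads these standard
# environment variables. The values below are placeholders, not real keys.
export AWS_ACCESS_KEY_ID=AAABBBCCCDDDEEEFFFGG
export AWS_SECRET_ACCESS_KEY=aaabbbcccdddeeefffggghhhiiijjjkkklllmmmn
export AWS_DEFAULT_REGION=us-east-1
```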

Once this is configured, we can do a dry run of our Nginx logrotate configuration by executing the command below

logrotate -d /etc/logrotate.d/nginx

and then, if there are no errors, we can force a rotation manually by executing the command below (the -f flag forces rotation even if the time/size criteria have not been met yet); going forward, the system's logrotate cron job will take care of running this based on time and log file size.

logrotate -f /etc/logrotate.d/nginx
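Note that logrotate only evaluates the size threshold when it actually runs, which by default is once a day via the system cron job. If you want the 500M limit enforced more often, a hypothetical hourly root crontab entry could look like this (the logrotate binary path may differ on your distribution):

```shell
# Hypothetical root crontab entry (add with `crontab -e`): run logrotate
# hourly so the size threshold is checked more often than once a day.
0 * * * * /usr/sbin/logrotate /etc/logrotate.d/nginx
```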

That’s all, thank you.
