Leeladharan Achar - alias - leelu ~ blogging...hola mi amigos..'s Blog

Posted Sept. 26, 2019

AWS CLI Commands to download/upload files to S3

S3 is one of the most widely used AWS offerings. After installing awscli (see references for info), you can access S3 operations in two ways:

$ aws s3 <command> (for simple filesystem stuff like mv, cp and so on)

$ aws s3api <command> (for other operations)

Use --help with either command to view the available subcommands and options.

We will use "samplebucket" as an example bucket name throughout.

Download file

cp is for copy; . stands for the current directory.


$ aws s3 cp s3://samplebucket/somefolder/afile.txt .

Download a specific folder and all subfolders, recursively

Useful for downloading log folders and things of that nature.


$ aws s3 cp s3://samplebucket/somefolder/ . --recursive

Delete a folder in a bucket

(along with any data within)


$ aws s3 rm s3://samplebucket/somefolder --recursive

Upload a single file to an S3 bucket


$ aws s3 cp /path/to/localfile s3://samplebucket/somefolder/

Upload multiple files at once

See my post on Linux find examples for more ways to use find.

Say, for instance, you have multiple files that you want to upload to the same bucket, and they all share a common name prefix:

$ ls
aa ab ac ad ae
$ find . -name "a*" | xargs -I {} aws s3 cp {} s3://samplebucket/samplefolder/

Another way is to move all the files you want into a single directory and then upload everything in that directory using the --recursive parameter.
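The find | xargs pipeline can be tried safely before pointing it at AWS; here is a minimal sketch where echo stands in for aws s3 cp (the temp directory and file names are made up for illustration):

```shell
# Sketch of the find | xargs fan-out, using echo as a stand-in for
# `aws s3 cp` so it runs without AWS credentials.
set -e
dir=$(mktemp -d)
cd "$dir"
touch aa ab ac ad ae zz            # zz should NOT match the "a*" prefix
find . -maxdepth 1 -name "a*" | sort | \
    xargs -I {} echo "would upload {} to s3://samplebucket/samplefolder/"
```

Once the printed list looks right, swap echo back for the real aws s3 cp command.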

View stats about a bucket (total size, etc.)

$ aws s3 ls s3://samplebucket/path/to/directory/ --summarize --human-readable --recursive

sample output (some rows omitted):
2015-08-24 23:11:03   40.0 MiB news.en-00098-of-00100
2015-08-24 23:11:04   39.9 MiB news.en-00099-of-00100
Total Objects: 100
Total Size: 3.9 GiB
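If you need the total size in a script, the summary lines can be pulled out with awk; a sketch where a heredoc stands in for a real aws s3 ls --summarize run, so it works offline:

```shell
# Parse "Total Size" out of `aws s3 ls --summarize` output.
# The heredoc stands in for a real run so the sketch needs no AWS access.
summary=$(cat <<'EOF'
2015-08-24 23:11:03   40.0 MiB news.en-00098-of-00100
2015-08-24 23:11:04   39.9 MiB news.en-00099-of-00100
Total Objects: 100
Total Size: 3.9 GiB
EOF
)
total=$(echo "$summary" | awk -F': ' '/^Total Size/ {print $2}')
echo "$total"    # 3.9 GiB
```

In a real script, replace the heredoc with `summary=$(aws s3 ls s3://samplebucket/path/ --summarize --human-readable --recursive)`.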

Download just part of a large file from S3

For instance, download only the first 1 MB (1,000,000 bytes) from a file located under

s3://samplebucket/path/to/file.csv

$ aws s3api get-object --range "bytes=0-999999" --bucket "samplebucket" --key "path/to/file.csv" output
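Note that HTTP Range offsets are inclusive on both ends, so bytes=0-999999 covers exactly 1,000,000 bytes (bytes=0-1000000 would fetch one byte more). The arithmetic can be checked locally; the scratch file below stands in for an S3 object:

```shell
# Show that an inclusive byte range 0-999999 is 1,000,000 bytes,
# using a local scratch file in place of an S3 object.
set -e
f=$(mktemp)
head -c 2000000 /dev/zero > "$f"     # 2 MB dummy "object"
head -c 1000000 "$f" > "$f.part"     # local equivalent of bytes=0-999999
wc -c < "$f.part"
```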

Download a file via "Requester Pays"

Many datasets and other large files are available via a requester-pays model: you can download the data, but you pay for the transfer.

One example is the IMDb website; they have recently made their datasets available via S3.

For example, to download one of their files, pass --request-payer requester to signal that you know you are going to be charged for it.


$ aws s3api get-object --bucket imdb-datasets --key documents/v1/current/name.basics.tsv.gz --request-payer requester name.basics.tsv.gz