S3 command-line client s3cmd
This page documents an S3 command-line client. Throughout, we use the term bucket, which is S3's name for a container; the two terms are interchangeable.
s3cmd
More information about s3cmd can be found at http://s3tools.org/s3cmd and https://github.com/s3tools/s3cmd/blob/master/README.md.
Authentication
In your home directory you need to create a file called .s3cfg with contents like:
[default]
access_key = <access key>
secret_key = <secret key>
host_base = objectstore.surf.nl
host_bucket = objectstore.surf.nl
signature_v2 = True
check_ssl_certificate = True
check_ssl_hostname = True
Don’t forget to:
chmod 600 ~/.s3cfg
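To check that the configuration and credentials work, you can simply list the buckets in the account (this assumes the objectstore.surf.nl endpoint from the configuration above):

```shell
# List all buckets in the account; an empty result without an error
# message also means authentication succeeded.
s3cmd ls s3://
```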
Create a bucket
s3cmd mb s3://mybucket
Upload/Download an object to/from a bucket
An object can be uploaded to a bucket by the following command:
s3cmd put <file name> s3://mybucket/myobject
It can be downloaded by:
s3cmd get s3://mybucket/myobject
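Optionally, a local destination can be given to store the download under a different name (the file name below is just an example):

```shell
# Download the object and store it locally as mycopy.txt
s3cmd get s3://mybucket/myobject mycopy.txt

# s3cmd refuses to overwrite an existing local file unless --force is given
s3cmd get --force s3://mybucket/myobject mycopy.txt
```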
List buckets in an account
$ s3cmd ls s3://
2024-05-28 12:43  s3://backup
2024-07-15 08:42  s3://my-versioned-bucket
2024-07-15 09:56  s3://mybucket
List objects in a bucket
Objects in a bucket can be listed using s3cmd ls, as shown below:
$ s3cmd ls s3://mybucket
                          DIR  s3://mybucket/directory/
                          DIR  s3://mybucket/directory2/
If the bucket was, for example, used to store a hierarchy of folders and files, then you need the --recursive flag in order to see the full contents of the bucket.
$ s3cmd ls --recursive s3://mybucket
2024-05-28 12:44    0  s3://mybucket/directory/
2024-05-28 12:43  408  s3://mybucket/directory/haproxy.txt
2024-05-28 12:44    0  s3://mybucket/directory2/
2024-05-28 12:43    0  s3://mybucket/directory2/subdirectory/
2024-05-28 12:43    0  s3://mybucket/directory2/subdirectory/2.txt
2024-05-28 12:43    0  s3://mybucket/directory2/subdirectory/file.txt
Delete buckets and objects
Delete an object:
s3cmd rm s3://mybucket/myobject
Delete a bucket and its contents:
s3cmd rm --force --recursive s3://mybucket
s3cmd rb s3://mybucket
The first command deletes all objects in the bucket; the second deletes the now-empty bucket itself.
Important
You can only delete an empty bucket.
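Depending on your s3cmd version, the two steps may be combined: recent releases let s3cmd rb take a --recursive flag that deletes the contents and the bucket in one go (check s3cmd rb --help on your system):

```shell
# Delete the bucket together with all objects in it (recent s3cmd versions)
s3cmd rb --recursive s3://mybucket
```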
Upload large files (>5GB)
Files larger than 5 GB must be uploaded in parts. This is called multipart uploading. Below you can see how this works.
$ s3cmd put 6GB.object --multipart-chunk-size-mb=1024 s3://mybucket/
upload: '6GB.object' -> 's3://mybucket/6GB.object'  [part 1 of 6, 1024MB] [1 of 1]
 1073741824 of 1073741824   100% in   15s    66.80 MB/s  done
upload: '6GB.object' -> 's3://mybucket/6GB.object'  [part 2 of 6, 1024MB] [1 of 1]
 1073741824 of 1073741824   100% in   20s    50.31 MB/s  done
upload: '6GB.object' -> 's3://mybucket/6GB.object'  [part 3 of 6, 1024MB] [1 of 1]
 1073741824 of 1073741824   100% in   11s    87.43 MB/s  done
upload: '6GB.object' -> 's3://mybucket/6GB.object'  [part 4 of 6, 1024MB] [1 of 1]
 1073741824 of 1073741824   100% in   25s    40.55 MB/s  done
upload: '6GB.object' -> 's3://mybucket/6GB.object'  [part 5 of 6, 1024MB] [1 of 1]
 1073741824 of 1073741824   100% in   15s    64.77 MB/s  done
upload: '6GB.object' -> 's3://mybucket/6GB.object'  [part 6 of 6, 602MB] [1 of 1]
 631290880 of 631290880   100% in    8s    67.04 MB/s  done
Downloading the file works the same as a regular download.
s3cmd get s3://mybucket/6GB.object
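If a multipart upload is interrupted, the parts that were already transferred keep occupying storage until the upload is completed or aborted. Recent s3cmd versions provide commands to list and abort unfinished multipart uploads (the upload ID below is a placeholder; substitute the one reported by the listing):

```shell
# Show unfinished multipart uploads in the bucket
s3cmd multipart s3://mybucket

# Abort one of them, freeing the stored parts
s3cmd abortmp s3://mybucket/6GB.object <upload-id>
```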
Sync folders
It is possible to sync folders with their contents to buckets and vice versa. The example below shows you how.
$ ls -1 testdir/
object-1
object-2
object-3
$ s3cmd mb s3://testdir
Bucket 's3://testdir/' created
$ s3cmd sync testdir/ s3://testdir
upload: 'testdir/object-1' -> 's3://testdir/object-1'  [1 of 3]
 2 of 2   100% in    0s   125.87 B/s  done
upload: 'testdir/object-2' -> 's3://testdir/object-2'  [2 of 3]
 2 of 2   100% in    0s    20.58 B/s  done
upload: 'testdir/object-3' -> 's3://testdir/object-3'  [3 of 3]
 2 of 2   100% in    0s    59.42 B/s  done
Done. Uploaded 6 bytes in 1.0 seconds, 6.00 B/s.
$ s3cmd ls s3://testdir
2024-07-15 10:16    2  s3://testdir/object-1
2024-07-15 10:16    2  s3://testdir/object-2
2024-07-15 10:16    2  s3://testdir/object-3
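Syncing in the other direction works the same way; s3cmd only transfers files that are new or have changed. The --delete-removed flag (use with care) additionally deletes local files that no longer exist in the bucket:

```shell
# Download new and changed objects from the bucket into the local directory
s3cmd sync s3://testdir/ testdir/

# Additionally delete local files that were removed from the bucket
s3cmd sync --delete-removed s3://testdir/ testdir/
```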
Getting metadata
The metadata of an object can be retrieved by:
$ s3cmd info s3://mybucket/myobject
s3://mybucket/myobject (object):
   File size: 6
   Last mod:  Mon, 15 Jul 2024 09:57:16 GMT
   MIME type: text/plain
   Storage:   STANDARD
   MD5 sum:   aee97cb3ad288ef0add6c6b5b5fae48a
   SSE:       none
   Policy:    none
   CORS:      none
   ACL:       johndoe: FULL_CONTROL
   x-amz-meta-s3cmd-attrs: atime:1721037421/ctime:1721037421/gid:1000/gname:johndoe/md5:aee97cb3ad288ef0add6c6b5b5fae48a/mode:33204/mtime:1721037421/uid:1000/uname:johndoe
Setting metadata
s3cmd can be used to set custom metadata during the upload of a file. This is shown below:
$ s3cmd put --add-header=x-amz-meta-foo:bar myobject s3://mybucket
upload: 'myobject' -> 's3://mybucket/myobject'  [1 of 1]
 6 of 6   100% in    1s     5.01 B/s  done
$ s3cmd info s3://mybucket/myobject
s3://mybucket/myobject (object):
   File size: 6
   Last mod:  Mon, 15 Jul 2024 11:31:35 GMT
   MIME type: text/plain
   Storage:   STANDARD
   MD5 sum:   aee97cb3ad288ef0add6c6b5b5fae48a
   SSE:       none
   Policy:    none
   CORS:      none
   ACL:       jm: FULL_CONTROL
   x-amz-meta-foo: bar
   x-amz-meta-s3cmd-attrs: atime:1721043028/ctime:1721043028/gid:1000/gname:johndoe/md5:aee97cb3ad288ef0add6c6b5b5fae48a/mode:33204/mtime:1721043028/uid:1000/uname:johndoe
Adding and modifying metadata
With a plain put, metadata can be added, changed, or deleted only by re-uploading the object:
$ s3cmd put --add-header=x-amz-meta-foo:baz myobject s3://mybucket
upload: 'myobject' -> 's3://mybucket/myobject'  [1 of 1]
 6 of 6   100% in    0s    36.25 B/s  done
$ s3cmd info s3://mybucket/myobject
s3://mybucket/myobject (object):
   File size: 6
   Last mod:  Mon, 15 Jul 2024 11:38:55 GMT
   MIME type: text/plain
   Storage:   STANDARD
   MD5 sum:   aee97cb3ad288ef0add6c6b5b5fae48a
   SSE:       none
   Policy:    none
   CORS:      none
   ACL:       johndoe: FULL_CONTROL
   x-amz-meta-foo: baz
   x-amz-meta-s3cmd-attrs: atime:1721043095/ctime:1721043028/gid:1000/gname:johndoe/md5:aee97cb3ad288ef0add6c6b5b5fae48a/mode:33204/mtime:1721043028/uid:1000/uname:johndoe
$ s3cmd put myobject s3://mybucket
upload: 'myobject' -> 's3://mybucket/myobject'  [1 of 1]
 6 of 6   100% in    0s   122.25 B/s  done
$ s3cmd info s3://mybucket/myobject
s3://mybucket/myobject (object):
   File size: 6
   Last mod:  Mon, 15 Jul 2024 11:39:11 GMT
   MIME type: text/plain
   Storage:   STANDARD
   MD5 sum:   aee97cb3ad288ef0add6c6b5b5fae48a
   SSE:       none
   Policy:    none
   CORS:      none
   ACL:       johndoe: FULL_CONTROL
   x-amz-meta-s3cmd-attrs: atime:1721043095/ctime:1721043028/gid:1000/gname:johndoe/md5:aee97cb3ad288ef0add6c6b5b5fae48a/mode:33204/mtime:1721043028/uid:1000/uname:johndoe
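Depending on your s3cmd version, metadata can also be changed without transferring the file again: recent releases include an s3cmd modify command that performs a server-side copy of the object onto itself with the new headers (check s3cmd modify --help on your system):

```shell
# Change the custom header in place via a remote copy (recent s3cmd versions)
s3cmd modify --add-header=x-amz-meta-foo:baz s3://mybucket/myobject
```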
Encryption
It is possible to let s3cmd encrypt your data before uploading. For this to work you have to set up GPG and add the following lines to your .s3cfg file:
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = <password>
To upload an encrypted file you have to do the following:
s3cmd put -e <file name> s3://mybucket/myobject
Here the -e flag enables encryption. Nothing special is needed for downloading; s3cmd decrypts the object automatically:
s3cmd get s3://mybucket/myobject