Customize your S3 connection

Supported S3 Backends


s3fs is a FUSE filesystem backed by Amazon S3 that lets you mount an S3 bucket as a local filesystem. It stores files natively and transparently in S3 (i.e., other programs can access the same files).


Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces.

List of available options (each with its default value, where one is defined, and a description):


connect_timeout (default: "300" seconds)

Time to wait for a connection before giving up.



default_acl (default: "private")

Sets the default canned ACL applied to all objects written to S3. Other canned ACLs can be set, as defined by the Amazon S3 implementation; the Amazon S3 documentation lists the available values.



ecs

Instructs s3fs to query the ECS container credential metadata address instead of the instance metadata address.



enable_content_md5

Verifies the data integrity of uploads via the MD5 checksum. This can add CPU overhead to transfers.


iam_role (default: no IAM role)

This option takes an IAM role name or "auto". If you specify "auto", s3fs automatically uses the IAM role attached to the instance. Specifying the option without an argument is equivalent to "auto".



list_object_max_keys

Specifies the maximum number of keys returned per S3 List Objects API request.


max_stat_cache_size (default: "100,000" entries, about 40 MB)

Maximum number of entries in the stat cache; this maximum also applies to the symbolic link cache.



multipart_size

Part size, in MB, for each multipart request. The minimum value is 5 MB and the maximum value is 5 GB.



multireq_max

Maximum number of parallel requests for listing objects.


no_check_certificate

The server certificate will not be checked against the available certificate authorities. This is useful with self-signed certificates.


nomultipart

Disables multipart uploads.



parallel_count

Number of parallel requests for uploading big objects: large objects are uploaded via multipart requests sent in parallel. Set this value according to your CPU and network bandwidth.


readwrite_timeout (default: "120" seconds)

Time to wait between read/write activity before giving up.



retries

Number of times a failed S3 transaction is retried.



singlepart_copy_limit

Maximum size, in MB, of a single-part copy before multipart copy is used.



ssl_verify_hostname

When set to 0, the SSL certificate is not verified against the hostname.



storage_class

This option has to be supported by the storage vendor; it stores objects with the specified storage class.

Possible values are standard, standard_ia, onezone_ia, reduced_redundancy and intelligent_tiering.
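The options above are typically passed to s3fs as comma-separated -o arguments on the mount command line. The sketch below is purely illustrative: the bucket name, mount point, and credential file path are placeholders, and the option values are example settings, not recommendations.

```shell
# Hypothetical example: mount the bucket "my-bucket" at /mnt/s3.
# passwd_file points at a credential file in ACCESS_KEY_ID:SECRET_ACCESS_KEY format.
s3fs my-bucket /mnt/s3 \
  -o passwd_file=/etc/passwd-s3fs \
  -o connect_timeout=300,readwrite_timeout=120,retries=5 \
  -o default_acl=private,storage_class=standard_ia
```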

Encryption for S3

Nodeum allows the usage of the three types of Amazon Server-Side Encryption:

  • SSE-S3

  • SSE-C

  • SSE-KMS


Server-side encryption is about protecting data at rest; it encrypts only the object data, not the object metadata.

SSE-S3 uses Amazon S3-managed encryption keys

It is server-side encryption that protects data at rest. The S3 storage encrypts each object with a unique key. As an added safeguard, it encrypts the key itself with a master key that it rotates regularly.

S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data, 256-bit Advanced Encryption Standard (AES-256).

SSE-C uses customer-provided encryption keys

Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys.

With the encryption key you provide as part of your request, Amazon S3 manages the encryption as it writes to disks and the decryption when you access your objects.

It is important to understand that the only thing you have to do is manage the encryption keys you provide; the S3 storage handles the encryption and decryption itself.

When a file is copied to the Cloud S3 storage, the S3 storage uses the encryption key you supply to apply AES-256 encryption to your data and removes the encryption key from memory.

When you retrieve the file, you must supply the same encryption key. The S3 storage first verifies that the encryption key you supplied matches the one used for encryption, and then decrypts the object before returning the data to you.
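In the S3 REST API, SSE-C is carried by three request headers: the encryption algorithm, the base64-encoded key, and the base64-encoded MD5 digest of the key (which lets S3 detect a key corrupted in transit). The header names below come from the Amazon S3 SSE-C specification; the helper function itself is a minimal sketch, not a Nodeum API:

```python
import base64
import hashlib

def sse_c_headers(key: bytes) -> dict:
    """Build the SSE-C request headers for a customer-provided 256-bit key.

    S3 uses the key to apply AES-256 and then discards it; only the key's
    MD5 digest is kept so the same key can be verified on retrieval."""
    if len(key) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) key")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode("ascii"),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode("ascii"),
    }

# The same headers must be sent again on retrieval to decrypt the object.
headers = sse_c_headers(b"01234567890123456789012345678911")
```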

SSE-KMS uses a master key that you manage in AWS KMS

Server-side encryption is the encryption of data at its destination by the application or service that receives it.

AWS Key Management Service (AWS KMS) is a service that combines secure, universally available hardware and software to provide a key management system scaled for the cloud. It uses AWS KMS customer master keys (CMKs) to encrypt your data. AWS KMS encrypts only the object data. Object metadata is not encrypted.
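At the request level, SSE-KMS is selected with the x-amz-server-side-encryption header, optionally naming the CMK to use. The header names are taken from the Amazon S3 API; the helper function and the key ID shown are illustrative placeholders, not a Nodeum API:

```python
def sse_kms_headers(kms_key_id: str = "") -> dict:
    """Build the request headers that select SSE-KMS for an uploaded object.

    Without an explicit key ID, S3 falls back to the account's default
    AWS-managed CMK for S3 (aws/s3)."""
    headers = {"x-amz-server-side-encryption": "aws:kms"}
    if kms_key_id:
        # ID (or ARN) of the customer master key managed in AWS KMS.
        headers["x-amz-server-side-encryption-aws-kms-key-id"] = kms_key_id
    return headers

# Hypothetical key ID, for illustration only.
headers = sse_kms_headers("1234abcd-12ab-34cd-56ef-1234567890ab")
```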

SSE Usage Recommendations

Well-known situations:

You do not specify a file with a 32-character key

Situation:

SSE requires a 32-character key to encrypt the contents sent to the bucket. Make sure that you provide a 32-character key so the feature works properly.

Keys rotation

Situation:

The uploader key file can include multiple keys; be careful about the syntax and file organization:

  • the first line is always the main encryption / decryption key

  • the following lines are always decryption-only keys

Example: if you want to change the key every month (for encryption), store the latest key on the first line and all previous keys on the following lines; this keeps the possibility to decrypt files that were stored and encrypted with those earlier keys.

Example :


01234567890123456789012345678911
11234567890123456789012345678911
21234567890123456789012345678911

  • 01234567890123456789012345678911 is the key currently used to encrypt and decrypt data

  • 11234567890123456789012345678911 is a previous key, still required to decrypt the data encrypted with it

  • 21234567890123456789012345678911 is a previous key, still required to decrypt the data encrypted with it
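A key file with the layout above can be loaded in a few lines. The helper below is a hypothetical sketch, not a Nodeum API: the first non-empty line becomes the current encryption key, and every line (including the first) remains available as a decryption candidate:

```python
def load_sse_keys(text: str):
    """Parse a rotation-style SSE key file.

    Returns (encryption_key, decryption_keys): the first line is the key
    used to encrypt new data; all lines remain usable for decryption."""
    keys = [line.strip() for line in text.splitlines() if line.strip()]
    if not keys:
        raise ValueError("key file is empty")
    for key in keys:
        if len(key) != 32:
            raise ValueError("SSE keys must be 32 characters long")
    return keys[0], keys

encrypt_key, decrypt_keys = load_sse_keys(
    "01234567890123456789012345678911\n"
    "11234567890123456789012345678911\n"
    "21234567890123456789012345678911\n"
)
```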

Bucket Encryption Configuration Changes

Situation: You have been storing data in a bucket for a while and, after a certain time, you decide to enable an encryption option.

This change only impacts the new data, which will be encrypted; all previously written files remain unencrypted.

Encryption Visualization

In an S3 object storage that supports SSE, encrypted files are easily identified by icons in the storage browser:
