How to Mount an S3 Bucket Locally on Linux


In many ways, S3 buckets act like cloud hard drives, but they are only "object-level storage," not block storage like EBS or a network filesystem like EFS. However, it is possible to mount a bucket as a filesystem and access it directly by reading and writing files.

The advantages and limitations of S3 as a file system

The magic that makes this setup work is a utility called s3fs-fuse. FUSE stands for Filesystem in Userspace, and it creates a mounted virtual filesystem. s3fs interfaces with S3 and supports a large subset of POSIX operations, including reading, writing, creating directories, and setting file metadata.
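To make that concrete, once a bucket is mounted (the setup is covered below), ordinary shell tools treat it like any other directory. A quick sketch, using a hypothetical mount point and file names for illustration:

ls /mnt/bucket-name
cp ~/photos/vacation.jpg /mnt/bucket-name/
mkdir /mnt/bucket-name/backups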

One of the great advantages of using S3 over traditional storage is that it is very cheap for storing individual objects long term, with no limit on the total size of the bucket. You can store 10 photos or 10 million photos in S3, and it will work much the same way. In applications where you need a large (and inexpensive) drive, S3 makes a lot of sense, and if the application you're integrating expects to work with files, it's a good way to connect the two.

Of course, it is not without limits. While storing and retrieving whole files performs roughly on par with using the S3 API directly, it obviously does not replace much faster network-attached block storage. There is a reason this setup is not officially supported by AWS: you will run into concurrency issues when multiple clients work with the same files, especially if you have clients in different regions accessing the same bucket. S3 itself has the same limitation, and that doesn't stop you from connecting multiple clients, but it's more apparent when FUSE appears to give you "direct" access. It doesn't, and you should keep these limitations in mind.

AWS offers a similar service of its own: Storage Gateway, which can act as a local NAS and provides local bulk storage backed by S3. However, it's more of an enterprise solution and requires a dedicated on-premises server to run its VMware image. s3fs, on the other hand, is a simple, single-server solution, although it doesn't do much caching.

So if you can convert your apps to use the S3 API rather than a FUSE mount, you should do that instead. But if you're okay with a slightly hacky solution, s3fs can be useful.

Setting Up s3fs-fuse

For how hacky it is, it's surprisingly easy to set up. s3fs-fuse is available in most package managers, although it may simply be called s3fs on some systems. For Debian-based systems like Ubuntu, that would be:

sudo apt install s3fs
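On RPM-based distributions the package is typically named s3fs-fuse instead; on Fedora, for example, it would likely be something along these lines:

sudo dnf install s3fs-fuse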

You'll need to create an IAM user and give it permission to access the bucket you want to mount. When you create an access key for that user, you'll get an access key ID and a secret access key:

[Screenshot: the new access key ID and secret access key shown in the IAM console.]
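As for the permissions themselves, a minimal bucket-scoped policy is enough. Here's a rough sketch, assuming the bucket is named bucket-name and that listing, reading, writing, and deleting objects is all s3fs needs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket-name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}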

You can paste these into the standard AWS credentials file, ~/.aws/credentials, but if you want to use a separate key for s3fs, it also supports a custom password file. Put the access key ID and the secret into /etc/passwd-s3fs in the following format:

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs

Make sure the permissions on this credential file are set correctly, otherwise s3fs will complain:

chmod 600 /etc/passwd-s3fs
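If you're storing keys for more than one bucket, s3fs's password file also accepts a bucket-prefixed format on each line, so an entry for a second (hypothetical) bucket could be appended like this:

echo other-bucket:ACCESS_KEY_ID:SECRET_ACCESS_KEY >> /etc/passwd-s3fs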

Then you can mount the bucket with the following command:

s3fs bucket-name /mnt/bucket-name
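Note that the mount point has to exist before you run that, so create it first if needed, and when you're done you can detach the bucket with FUSE's unmount helper:

sudo mkdir -p /mnt/bucket-name
fusermount -u /mnt/bucket-name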

If that doesn’t work, you can enable debugging output with a few additional flags:

-o dbglevel=info -f -o curldbg
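Put together with the mount command above, that looks something like this (the -f flag keeps s3fs in the foreground so you can watch the log output):

s3fs bucket-name /mnt/bucket-name -o dbglevel=info -f -o curldbg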

If you want the bucket to be mounted automatically at boot, you'll need to add the following to your /etc/fstab:

s3fs#bucket-name /mnt/bucket-name fuse _netdev,allow_other,umask=227,uid=33,gid=33,use_cache=/root/cache 0 0
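A couple of notes on that entry: uid=33 and gid=33 correspond to the www-data user on Debian-based systems, so adjust them to whichever user should own the files, and use_cache points at a local directory s3fs can use as a file cache. To check the entry without rebooting, you can mount it straight from fstab:

sudo mount /mnt/bucket-name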
