AWS quietly launched S3 Files - a way to mount an S3 bucket and work with it like a regular file system. No custom SDK, no aws s3 cp, just standard file operations on top of S3.

How it works

You mount the bucket via a managed endpoint and get a POSIX-compatible interface on EC2, Lambda, EKS, and ECS. Your existing tools and applications don’t need to know it’s S3 underneath.
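In practice that means code written against the standard library just works. A minimal sketch, assuming the bucket is mounted at a hypothetical /mnt/my-bucket:

```python
from pathlib import Path

# Hypothetical mount point - wherever you attached the S3 Files endpoint.
mount = Path("/mnt/my-bucket")

# Plain file operations; nothing here knows S3 is underneath.
reports = mount / "reports"
reports.mkdir(exist_ok=True)

(reports / "summary.txt").write_text("quarterly numbers\n")

for path in reports.glob("*.txt"):
    print(path.name, path.stat().st_size)

print((reports / "summary.txt").read_text())
```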

Importantly, AWS did not just bolt a POSIX layer on top of S3. That has been tried before - s3fs-fuse, goofys, and even Mountpoint all went that route. S3 Files is built on EFS infrastructure, with your authoritative data staying in the S3 bucket. The filesystem maintains a view of your objects and translates filesystem operations into efficient S3 requests. Writes go through the filesystem and sync back to S3.
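One consequence of that design is that the same data stays addressable both ways. A sketch of the round trip, with hypothetical bucket and mount names - the announcement does not pin down sync timing, so treat the object as eventually visible rather than instant:

```python
import boto3
from pathlib import Path

BUCKET = "my-bucket"              # hypothetical bucket name
mount = Path("/mnt/my-bucket")    # hypothetical mount point

# Write through the filesystem like any other directory.
event = mount / "data" / "event.json"
event.parent.mkdir(parents=True, exist_ok=True)
event.write_text('{"ok": true}\n')

# Once synced back, the same bytes are a plain S3 object.
s3 = boto3.client("s3")
head = s3.head_object(Bucket=BUCKET, Key="data/event.json")
print(head["ContentLength"])
```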

This is also not the first time AWS has tried to solve this. The S3 CSI driver did something similar but was EKS-only and painful to set up. Real-world feedback was not great - teams who tried it for read workloads found it so slow they abandoned it, with some reporting that prefetching data in an init container was almost an order of magnitude faster. S3 Files is a proper managed solution, not a CSI workaround.

S3 still is not a filesystem. But your S3 data can now be used with one.

POSIX caveat: S3 Files is not fully POSIX-compliant. It supports only advisory locking and does not support atomic renames. Less janky than FUSE-based approaches, but worth knowing before you assume full compatibility.
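What advisory locking buys you, concretely: cooperating processes that all take the lock serialize correctly, but nothing is enforced against a process that skips it. A sketch using standard fcntl over a hypothetical mount path (assumes the file already exists):

```python
import fcntl

with open("/mnt/my-bucket/state/counter", "r+") as f:  # hypothetical path
    fcntl.flock(f, fcntl.LOCK_EX)   # advisory: only binds lock-takers
    try:
        n = int(f.read() or "0")
        f.seek(0)
        f.write(str(n + 1))
        f.truncate()
    finally:
        fcntl.flock(f, fcntl.LOCK_UN)

# Without atomic renames, the classic write-to-temp-then-os.rename()
# trick no longer guarantees readers see only the old or the new file.
```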

Pricing

There are a few layers to understand:

Component                 Price
S3 storage                Standard S3 rates
High-performance storage  $0.30 / GB-month
High-performance reads    $0.03 / GB
Standard S3 reads         Free
Writes                    $0.06 / GB

For a typical mixed read/write workload, the S3 Files overhead comes out to roughly $5 per TB per month, on top of the underlying S3 storage cost itself.
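As a sanity check on that figure, here is the arithmetic for one hypothetical mix per TB stored - the workload split is an illustration, not AWS's reference model:

```python
# Assumed monthly activity per 1 TB stored (illustrative numbers only).
hot_resident_gb = 5    # small files sitting in the fast tier
hot_reads_gb = 50      # reads served from the fast tier
writes_gb = 33         # data written through the mount

surcharge = (
    hot_resident_gb * 0.30   # high-performance storage, $/GB-month
    + hot_reads_gb * 0.03    # high-performance reads, $/GB
    + writes_gb * 0.06       # writes, $/GB
)
print(f"${surcharge:.2f} per TB per month")  # -> $4.98
```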

The hot/cold split: there is a file size threshold (default 128 KB). Files smaller than that get loaded onto the high-performance storage when accessed. Files 128 KB or larger stream directly from S3 - no S3 Files read charge at all. Data that goes untouched is evicted from the fast tier automatically after a configurable 1-365 days (default 30).

Billing gotchas:

  • Every data access operation has a 32 KB minimum. Read a 1-byte file? Metered as 32 KB. Write a 4-byte config update? Same. Metadata operations (listing, checking attributes, creating files) cost 4 KB each. If your workload does millions of small operations, those minimums add up fast - there is a quick model of this after the list.
  • First read of a small file costs $0.06/GB, not $0.03. The file gets imported to fast storage (write charge) and the read is included in that. Subsequent reads are $0.03/GB. Large files read directly from S3 are free.
  • Renaming a directory is metered for every object with that prefix individually. Moving a folder with 50,000 files is 50,000 operations.
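The quick model of those minimums - it encodes only the floors described above, not AWS's full metering logic:

```python
KB = 1024

def metered_bytes(size_bytes: int, is_metadata: bool = False) -> int:
    """Billable size of a single operation under the stated minimums."""
    if is_metadata:
        return 4 * KB                  # list / stat / create: flat 4 KB
    return max(size_bytes, 32 * KB)    # data operations: 32 KB floor

# One million 1-byte reads are metered like ~30.5 GB of reads:
total = 1_000_000 * metered_bytes(1)
print(total / KB**3)          # ~30.5 GB
print(total / KB**3 * 0.03)   # ~$0.92 at the hot-read rate
```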

How it compares to EFS

The rates ($0.30/GB storage, $0.03/GB reads, $0.06/GB writes) are identical to EFS Performance-optimized pricing - because S3 Files is built on EFS infrastructure. The difference is what you pay those rates on.

EFS charges you for every byte stored whether you touched it this month or not. S3 Files only charges those rates on the small hot fraction you actually access. The rest stays at standard S3 prices ($0.023/GB-month), doing nothing.

Also: reads of files 128 KB or larger are free via S3 Files (they stream from S3 directly). The same read costs $0.03/GB on EFS Performance-optimized. If your workload is mostly large files, that difference matters.
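To make that concrete, a rough comparison for a hypothetical 10 TB dataset where 2% stays hot and 1 TB of large files is read per month - assumed numbers, and it ignores request charges and writes on both sides:

```python
TB = 1024  # GB

# Everything on EFS: pay storage on all 10 TB plus the read rate.
efs = 10 * TB * 0.30 + 1 * TB * 0.03

# S3 Files: bulk at standard S3 rates, the fast-tier rate only on the
# 2% hot slice, and large-file reads stream from S3 for free.
s3_files = 10 * TB * 0.023 + 0.02 * 10 * TB * 0.30

print(f"EFS: ${efs:,.0f}/mo   S3 Files: ${s3_files:,.0f}/mo")
# -> EFS: $3,103/mo   S3 Files: $297/mo
```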

The underlying bucket can be Intelligent-Tiering or Infrequent Access too. S3 Files won’t access Glacier Flexible Retrieval or Deep Archive (those need a restore first), but everything else works. Your cold data can sit in Intelligent-Tiering at ~$0.0125/GB-month while S3 Files charges its surcharge only on the active slice.

When it makes sense

  • ML training pipelines reading large files from S3 - large file reads are free, and you get a proper mount point instead of duct-taping Mountpoint together
  • Agentic AI workloads that need shared storage without your team becoming S3 API experts
  • Legacy applications that assume POSIX semantics and currently run on EFS or FSx just to have something to mount

If you are using S3 APIs directly today and it works, keep doing that. This is an additional access pattern for workloads that think in files, not objects.


How it compares to other mount options

S3 Files is not the only way to mount a bucket. There are five other tools that do roughly the same thing, each with a different trade-off.

Mountpoint for S3

Mountpoint is AWS’s own open-source FUSE driver. Free - you only pay for S3 API calls. It is explicitly not trying to be fully POSIX-compatible. If an operation can’t be done efficiently against the S3 API (rename, hard links, xattr, chmod), it fails instead of emulating it. That is a deliberate choice.

What it is good at: high-throughput sequential reads. ML training jobs, analytics pipelines, anything that reads large files from start to finish. Multiple readers, one writer per file.

What it does not support: random writes, in-place file edits, atomic renames on general-purpose buckets.

s3fs-fuse

s3fs-fuse is older, open-source, and tries harder to look like a real file system. It emulates more POSIX operations than Mountpoint by caching and working around S3 limitations. That makes it more compatible with general-purpose tools but slower and less reliable under concurrent access. Common use case: giving analysts a familiar disk-like interface without changing their tooling.

Also free - you pay only for S3 API calls.

Cyberduck

Cyberduck is a desktop GUI client. It is not really a mount - it is a file browser and transfer tool. You drag files in and out, browse the bucket, edit single files. Good for occasional manual access. Not useful for applications that need a mounted path.

CloudMounter

A commercial macOS app. Mounts S3 (and other cloud storage) as a drive in Finder. Aimed at non-technical users who need to work with files without touching a terminal. Reportedly one of the few tools in this category that actually works reliably - several alternatives look polished but fail in practice.

ZeroFS

ZeroFS is an open-source self-hosted option worth knowing about. It exposes S3 as NFS, 9P, or a raw block device via NBD. Under the hood it uses an LSM tree (SlateDB) to batch writes efficiently to S3, with built-in XChaCha20 encryption and LZ4/Zstandard compression.

What makes it stand out: it passes 8,662 POSIX compliance tests and can run ZFS directly on top of an NBD volume. That is a meaningfully stronger POSIX story than any of the AWS-native options.

The trade-off: it is self-hosted and requires S3 backends that support conditional writes (put-if-not-exists). Standard AWS S3 qualifies. If you need full POSIX compliance and are willing to run your own infrastructure, this is the most capable option in the list.
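That primitive is easy to check against plain S3. A sketch of put-if-not-exists using boto3's conditional write support (needs a recent botocore; bucket and key names are hypothetical):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    s3.put_object(
        Bucket="my-bucket",
        Key="locks/leader",
        Body=b"node-1",
        IfNoneMatch="*",   # create only; fail if the key already exists
    )
    print("acquired")
except ClientError as e:
    if e.response["Error"]["Code"] == "PreconditionFailed":
        print("key already exists")
    else:
        raise
```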


Which one to use

Tool          Best for                                  POSIX           Cost
S3 Files      Managed, production, broad compatibility  Partial         Extra per GB
Mountpoint    High-throughput reads, ML/analytics       No (by design)  Free
s3fs-fuse     General file access, analyst workflows    Partial         Free
Cyberduck     Manual browsing, occasional transfers     No              Free / one-time
CloudMounter  Non-technical users on macOS              Partial         Paid
ZeroFS        Full POSIX compliance, self-hosted        Yes             Free (self-hosted)

If you’re running workloads in AWS and need a managed solution with broad app compatibility - S3 Files. If you’re optimizing for read throughput and cost - Mountpoint. If you’re setting up access for analysts who just need a disk - s3fs-fuse. If someone on the team needs to browse a bucket from their Mac without touching a terminal - CloudMounter. And if you need real POSIX semantics and are willing to run your own infrastructure - ZeroFS.