Deep Dive: Why EFS CSI Volumes in Kubernetes Mount to 127.0.0.1:/ via NFS4

In modern Kubernetes environments, persistent storage is a foundational requirement for stateful workloads. Amazon’s Elastic File System (EFS) provides a scalable, managed NFS file system that can be used across multiple pods and nodes — making it an ideal choice for shared storage.
Kubernetes supports EFS via the EFS CSI Driver, which abstracts away complex mount operations. However, many engineers notice an odd behavior when inspecting volumes mounted by the EFS CSI driver:
```
mount | grep nfs
127.0.0.1:/ on /var/lib/kubelet/pods/... type nfs4 ...
```
Why is the volume mounted from 127.0.0.1:/ instead of the expected EFS DNS name, like fs-12345678.efs.us-east-1.amazonaws.com:/?
Let’s break down what’s really going on under the hood.
EFS CSI Driver: Quick Overview
The Amazon EFS CSI (Container Storage Interface) Driver allows Kubernetes to use EFS file systems as persistent volumes. The key components are:
- Controller Plugin – Manages volume provisioning and lifecycle.
- Node Plugin – Handles the actual mounting of volumes on the worker node.
- EFS Access Points (optional) – Provide secure, isolated entry points into the EFS filesystem, with IAM and POSIX permission support.
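If you want to see these components on your own cluster, they show up as ordinary workloads. A quick check, assuming the driver was installed into kube-system with the default names and labels from the official manifests:

```bash
# Controller plugin (Deployment) and node plugin (DaemonSet); names assume the
# default install from the official Helm chart or kustomize manifests.
kubectl get deployment,daemonset -n kube-system | grep efs-csi

# The node plugin pods are the ones that actually perform the mounts
# (the app=efs-csi-node label assumes the default manifests).
kubectl get pods -n kube-system -l app=efs-csi-node -o wide
```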
How Volume Mounting Works with EFS CSI
When you create a PersistentVolumeClaim (PVC) that binds to an EFS-backed PersistentVolume (PV), and a pod consumes it, the following steps occur:
1. The EFS CSI Node Plugin runs on the worker node via a DaemonSet.
2. It receives a mount request via the Kubernetes CSI interface.
3. Instead of directly mounting EFS via NFS, it:
   - Starts a helper process locally on the node. This is mount.efs, provided by AWS's amazon-efs-utils package.
   - The helper process mounts EFS using the standard NFSv4 protocol internally.
   - It proxies this mount locally via 127.0.0.1:/, making it appear as though the mount source is localhost.
From the pod’s perspective, the mount point is just:
```
127.0.0.1:/ (type nfs4)
```
But under the hood, this is just a proxy tunnel to the real EFS endpoint.
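To make this concrete, here is a minimal sketch of a statically provisioned EFS volume. The filesystem ID fs-12345678 and the resource names are placeholders to replace with your own; the tls mount option is what triggers the efs-utils proxy behavior described above.

```bash
# Hypothetical static provisioning example; replace fs-12345678 with your filesystem ID.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi                 # EFS is elastic; the API requires a value but does not enforce it
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  mountOptions:
    - tls                        # route traffic through the efs-utils stunnel proxy
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF
```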
Why Proxy via 127.0.0.1?
This proxy-based architecture isn't a bug — it's a feature designed for security, compatibility, and enhanced functionality.
TLS Support (Encryption in Transit)
EFS supports TLS when mounted using the amazon-efs-utils helper. The helper sets up a stunnel connection, which forwards NFS traffic securely over TLS. Traditional NFS clients, however, don't support TLS natively.
So, the EFS CSI driver uses a local proxy on 127.0.0.1 to handle the encrypted session, keeping the actual EFS mount encrypted while exposing it as plain NFS internally.
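On a node with a TLS-enabled EFS volume you can see both halves of this arrangement. The commands below are a rough sketch; the local port chosen by efs-utils varies, so treat the comments as illustrative rather than exact output.

```bash
# The NFS mount itself points at localhost; the non-standard port option in the
# mount entry is the local stunnel listener, not 2049 on the EFS endpoint.
mount | grep nfs4

# The stunnel process started by mount.efs holds the outbound TLS connection
# to the real EFS mount target on port 2049.
ps aux | grep -E 'stunnel|mount.efs'
```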
EFS Access Point Handling
Access Points are a secure way to isolate access within EFS. The CSI driver uses the proxy process to:
- Enforce user/group IDs (UID/GID) from the Access Point.
- Apply root directory constraints.
- Maintain isolation per pod or per workload.
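In practice, Access Points most often come into play through dynamic provisioning. A minimal sketch of such a StorageClass is shown below; the name efs-dynamic-sc, the filesystem ID, and the basePath are placeholders, and the parameters follow the EFS CSI driver's documented dynamic-provisioning options.

```bash
# Hypothetical dynamic-provisioning StorageClass using EFS Access Points.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-dynamic-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap          # create one Access Point per dynamically provisioned volume
  fileSystemId: fs-12345678         # placeholder filesystem ID
  directoryPerms: "700"             # permissions for each volume's root directory
  basePath: "/dynamic_provisioning" # placeholder parent directory inside the filesystem
EOF
```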
Mount Isolation per Pod
Direct NFS mounts are system-wide on the node. Using a local proxy helps isolate and control each volume mount independently, which is crucial in multi-tenant environments.
Simplified Permissions and Lifecycle Management
By keeping mount logic inside the CSI plugin and helper process, Kubernetes can cleanly mount/unmount volumes even as pods are rescheduled across nodes.
Debugging: Tracing Real EFS Mount
If you want to inspect how the EFS CSI driver is setting things up:
Check running processes:
```bash
ps aux | grep mount.efs
```
You'll often see:
- stunnel
- nfs4 mounts
- Proxy forwarding processes
Check logs from the CSI node plugin:
```bash
kubectl -n kube-system logs <efs-csi-node-pod> -c efs-plugin
```
Use netstat or ss to check outbound NFS connections:
```bash
netstat -plant | grep :2049
```
This shows the connection to the actual EFS server (e.g., fs-xxxxxx.efs.us-east-1.amazonaws.com).
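To go one step further and confirm the node can actually reach EFS, resolving the filesystem's DNS name and probing port 2049 from the node is a quick sanity check (the filesystem ID and region below are placeholders):

```bash
# Resolve the EFS mount target for this AZ (placeholder filesystem ID and region).
nslookup fs-12345678.efs.us-east-1.amazonaws.com

# Confirm TCP connectivity to NFS from the node; a timeout usually points to
# security groups, NACLs, or a missing mount target in this subnet's AZ.
nc -zv fs-12345678.efs.us-east-1.amazonaws.com 2049
```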
What If EFS Is Unreachable?
If EFS is not reachable due to:
- Network issues (VPC, security groups, DNS)
- Missing EFS mount targets
- AWS service disruptions

then:
- The CSI driver retries mounting using the helper.
- Pods may stay in ContainerCreating or crash with mount errors.
- Kubernetes will not fail over automatically; there is no built-in fallback.
You must ensure EFS is reachable across all subnets where your pods run.
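On the Kubernetes side, these failures usually surface as pod events rather than anything obvious in the application logs, so checking events for the stuck pod is the fastest confirmation (pod and namespace names below are placeholders):

```bash
# Look for FailedMount / MountVolume.SetUp events on the stuck pod.
kubectl describe pod <pod-name> -n <namespace>

# Or list warning events across the namespace, newest last.
kubectl get events -n <namespace> --field-selector type=Warning --sort-by=.lastTimestamp
```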
Best Practices
To avoid issues and make the most of the EFS CSI driver:
| Practice | Why It Matters |
| --- | --- |
| Deploy EFS with mount targets in multiple AZs | Ensures HA across Availability Zones |
| Use Access Points | Better security and namespace isolation |
| Ensure NFS (port 2049) is open in security groups | Required for node-to-EFS communication |
| Use the EFS mount helper (amazon-efs-utils) | Supports TLS and retries |
| Monitor mount logs and metrics | Detects failures early |
| Spread pods across subnets with EFS mount targets | Prevents a single point of failure |
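Several of these checks can be scripted with the AWS CLI. For example, the following sketch (with a placeholder filesystem ID) confirms that mount targets exist in the expected AZs and shows which security groups they use:

```bash
# List mount targets (one per AZ is the goal) for the filesystem.
aws efs describe-mount-targets --file-system-id fs-12345678 \
  --query 'MountTargets[].{AZ:AvailabilityZoneName,Subnet:SubnetId,IP:IpAddress,State:LifeCycleState}' \
  --output table

# Inspect the security groups attached to a specific mount target; port 2049
# must be open from the worker nodes' security group.
aws efs describe-mount-target-security-groups --mount-target-id <mount-target-id>
```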
Conclusion
When your Kubernetes volumes are mounted to 127.0.0.1:/ via nfs4, it's not a misconfiguration — it's intentional behavior by the Amazon EFS CSI driver.
This architecture:
- Enables encryption in transit (TLS)
- Supports Access Points cleanly
- Isolates mounts for security
- Simplifies dynamic volume management
Understanding this behavior helps you debug smarter, secure your workloads, and design for high availability.
Have Questions?
We at Ananta Cloud love going deep into cloud-native architectures. If you’re looking to build highly available, scalable Kubernetes platforms with robust storage, we’d love to help!
📧 Reach out to us at:
Email: hello@anantacloud.com | LinkedIn: @anantacloud | Schedule Meeting
