19 NFS Best Practice and Implementation Guide © 2017 NetApp, Inc. All Rights Reserved
3.8 Pseudo File Systems in Clustered Data ONTAP
The clustered Data ONTAP architecture makes it possible to have a true pseudo-file system
that complies with the NFSv4 standard, RFC 7530, which states:
Servers that limit NFS access to "shares" or "exported" file systems should provide a pseudo-file
system into which the exported file systems can be integrated, so that clients can browse the
server's namespace. The clients' view of a pseudo-file system will be limited to paths that lead
to exported file systems.
RFC 7530 adds, in section 7.3:
NFSv4 servers avoid this namespace inconsistency by presenting all the exports within the
framework of a single-server namespace. An NFSv4 client uses LOOKUP and READDIR operations to
browse seamlessly from one export to another. Portions of the server namespace that are not
exported are bridged via a "pseudo-file system" that provides a view of exported directories
only. A pseudo-file system has a unique fsid and behaves like a normal, read-only file system.
This compliance is possible because clustered Data ONTAP removed the /vol requirement for
exported volumes and instead uses a more standardized approach to the pseudo-file system. As a
result, you can seamlessly integrate an existing NFS infrastructure with NetApp storage, because “/”
is truly “/” and not a redirector to /vol/vol0, as it was in 7-Mode.
In clustered Data ONTAP, pseudo-file system behavior applies only if the permissions flow from
more restrictive to less restrictive. For example, if the vsroot volume (mounted at /) has more
restrictive permissions than a data volume (such as /volname) does, then pseudo-file system
concepts apply.
History of Pseudo File Systems in Data ONTAP
Most client systems mount local disks or partitions on directories of the root file system. NFS exports are
exported relative to root or “/.” Early versions of Data ONTAP had only one volume, so directories were
exported relative to root just like any other NFS server. As data requirements grew to the point that a
single volume was no longer practical, the capability to create multiple volumes was added. Because
users don't log directly into the NetApp storage system, there was no reason to mount volumes internally
to the NetApp system.
To distinguish between volumes in 7-Mode, the /vol/volname syntax was created. To maintain
compatibility, support was kept for exporting directories within the root volume without any such
prefix. So /home is equivalent to /vol/vol0/home, assuming that vol0 is the root volume, that / is
the physical root of the system, and that /etc holds the configuration information.
NetApp storage systems running 7-Mode are among the few NFS server implementations, possibly the
only one, that require a prefix such as /vol before every exported volume. In some environments,
this means that deployers can't simply drop a NetApp 7-Mode system into the place of an existing
NFS server without changing the client mounts, depending on how the mounts are defined in
/etc/vfstab or in automounter maps. In NFSv3, if the complete path from /vol/vol0 is not used and
<NetApp storage>:/ is mounted, the mount point is actually <NetApp storage>:/vol/vol0. That is, if
the path does not begin with /vol, then Data ONTAP assumes that /vol/vol0 is the beginning of the
path and redirects the request. This does not land users in the desired areas of the NFS file system.
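The 7-Mode NFSv3 redirect behavior described above can be sketched as a small shell function. This is an illustrative simulation only, not Data ONTAP code; the function name is hypothetical.

```shell
# Illustrative sketch (not Data ONTAP code): simulates how 7-Mode NFSv3
# resolved export paths. A path that does not begin with /vol is assumed
# to live under the root volume, /vol/vol0, and is redirected there.
resolve_7mode_path() {
  case "$1" in
    /vol/*) printf '%s\n' "$1" ;;           # already a fully qualified /vol path
    *)      printf '/vol/vol0%s\n' "$1" ;;  # redirected under the root volume
  esac
}

resolve_7mode_path /home      # prints /vol/vol0/home
resolve_7mode_path /vol/data  # prints /vol/data
```

The second case shows why clients had to change their mount paths: only paths already beginning with /vol escape the redirect.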
Pseudo File System Operations in Clustered Data ONTAP Versus 7-Mode
As previously mentioned, in clustered Data ONTAP, there is no concept of /vol/vol0. Volumes are
junctioned below the root of the SVM, and nested junctions are supported. Therefore, in NFSv3, there is
no need to modify anything when cutting over from an existing NFS server. It simply works.
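The cutover difference can be illustrated with client /etc/fstab entries. This is a hedged sketch: the hostnames (nfs-server, 7mode-filer, cdot-svm) and the /home export are hypothetical.

```
# Existing generic NFS server export:
nfs-server:/home            /home  nfs  defaults  0 0

# 7-Mode required the /vol prefix, forcing client-side changes:
7mode-filer:/vol/vol0/home  /home  nfs  defaults  0 0

# Clustered Data ONTAP: junction paths hang off "/", so the original
# generic entry works unchanged:
cdot-svm:/home              /home  nfs  defaults  0 0
```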
In NFSv4, if the complete path from /vol/vol0 is not used and <NetApp storage>:/ is mounted,
that mount is considered the root of the pseudo-file system rather than /vol/vol0. Unlike NFSv3,
Data ONTAP does not add /vol/vol0 to the beginning of the path.
/vol/vol0 to the beginning of the path, unlike NFSv3. Therefore, if <NetApp storage:/ /n/NetApp