<!DOCTYPE doctype PUBLIC "-//w3c//dtd html 4.0 transitional//en">
<html><head>
<link href="NFS%20Overview%20and%20Gotchas_files/monthfea.css" type="text/css" rel="stylesheet">
<script language="javascript" src="NFS%20Overview%20and%20Gotchas_files/monthfea.js"></script>
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252">
<meta name="author" content="slitt">
<meta name="Description" content="How and why to install Linux via NFS">
<meta name="KeyWords" content="Linux,linux,nfs,network,share,file share,file sharing,NFS,Network File System,network file system,redhat,red hat,mandrake">
<title>NFS: Overview and Gotchas</title>
</head>
<body onload="loaddiv()" vlink="#551a8b" text="#000000" link="#0000ee" bgcolor="#ffffff" alink="#ff0000">
<center>
<h2> <a href="http://www.troubleshooters.com/troubleshooters.htm">Troubleshooters.Com</a> and <a href="http://www.troubleshooters.com/linux/index.htm">T.C Linux Library</a> Present</h2>
</center>
<center><font color="#cc0000"><font size="+4">NFS: Overview and Gotchas</font></font>
<p><a href="http://www.troubleshooters.com/cpyright.htm">Copyright (C) 2003 by Steve Litt</a>, All rights
reserved. Material provided as-is, use at your own risk.</p>
</center>
<div class="monthfea" id="monthfea"><p class="monthfea">May 2020 featured book:</p><p class="monthfea"><a href="http://www.troubleshooters.com/bookstore/ttech.htm">Troubleshooting Techniques of the successful technologist</a></p></div>
<big><big><big><b>Contents:<br>
</b></big></big></big><br>
<ul>
<li><b><a href="#_Executive_Summary">Executive Summary:</a></b></li>
<li><b><a href="#_Directories_and_IP_addresses_used">Directories and IP
addresses used in these examples</a></b></li>
<li><b><a href="#_Get_the_Daemons_Running">Get the Daemons Running</a></b></li>
<li><b><a href="#_Configure_the_NFS_Server">Configure the NFS Server</a></b></li>
<li><b><a href="#_Mounting_an_NFS_Share_on_a_Client">Mounting an NFS Share
on a Client</a></b></li>
<li><b><a href="#_Gotchas">Gotchas</a></b></li>
<li><b><a href="#_Checking_the_Daemons">1: Check the Daemons on the Server</a></b></li>
<li><b><a href="#_Syntax_Problems">2: Eyeball the Syntax</a></b></li>
<li><b><a href="#_Carefully_read_error_messages_and_develop">3: Carefully
read error messages and develop a symptom description</a></b></li>
<li><b><a href="#_If_it_mounts_but_cant_access">4: If it mounts but can't
access, check permissions, gid's and uid's</a></b></li>
<li><b><a href="#_disable_firewalls">5: If there are still problems, disable
firewalls or log firewalls</a></b></li>
<li><b><a href="#_investigate_DNS">6: If there are still problems, investigate
the server's DNS, host name resolution, etc</a></b></li>
<li><b><a href="#_Summary">Summary</a></b></li>
</ul>
<h1><a name="_Executive_Summary"></a>Executive Summary:</h1>
NFS is the best thing since sliced bread. It stands for Network File System.
NFS is a file and directory sharing mechanism native to Unix and Linux.<br>
<br>
NFS is conceptually simple. On the server (the box that is allowing others
to use its disk space), you place a line in <tt>/etc/exports</tt> to enable
its use by clients. This is called <i>sharing</i>. For instance, to share
<tt>/home/myself</tt> for both read and write on subnet 192.168.100, netmask
255.255.255.0, place the following line in <tt>/etc/exports</tt> on the server:<br>
<pre>/home/myself 192.168.100.0/24(rw)<br></pre>
To share it read only, change the <tt>(rw)</tt> to <tt>(ro)</tt>.<br>
<br>
On the client wanting to access the shared directory, use the <tt>mount</tt>
command to access the share:<br>
<pre>mkdir /mnt/test<br>mount -t nfs -o rw 192.168.100.85:/home/myself /mnt/test<br></pre>
The preceding must be performed by user root, or else as a sudo command.
Another alternative is to place the mount in <tt>/etc/fstab</tt>. That will
be discussed later in this document.<br>
<br>
As mentioned, NFS is conceptually simple, but in practice you'll encounter
some truly nasty gotchas:<br>
<ul>
<li>Non-running NFS or portmap daemons</li>
<li>Differing uid's for the same usernames<br>
</li>
<li>NFS hostile firewall</li>
<li>Defective DNS causes timeouts</li>
</ul>
To minimize troubleshooting time, quickly check for each of these problems.
Each of these gotchas is explained in detail in this document.<br>
<h1><a name="_Directories_and_IP_addresses_used"></a>Directories and IP addresses
used in these examples</h1>
The following settings are used in examples on this page:<br>
<ul>
<li>Server (computer donating disk space) Settings</li>
<ul>
<li>FQDN hostname = myserver.domain.cxm</li>
<li>IP address = 192.168.100.85</li>
<li>Netmask = 255.255.255.0</li>
<li>RedHat ISO container directory = /scratch/rh8iso</li>
<li>Mandrake RPM container directory = /scratch/mand9iso<br>
</li>
</ul>
<li>Client (computer using the donated disk space) settings</li>
<ul>
<li>FQDN hostname = mydesk.domain.cxm</li>
<li>IP address = 192.168.100.2</li>
<li>Netmask = 255.255.255.0<br>
</li>
</ul>
</ul>
Please make note of these settings so that you're not confused in the examples.<br>
<h1><a name="_Get_the_Daemons_Running"></a>Get the Daemons Running</h1>
You can't run NFS without the server's portmap and NFS daemons running. This
article discusses how to set them to run at boot, and how to check that they're
currently running. First you'll use the <tt>chkconfig</tt> program to set
the portmap, nfs and mountd daemons to run at boot. Then you'll check whether
these daemons are running. Finally, whether they're running or not, you'll
restart these daemons.<br>
<h2>Checking the portmap Daemon</h2>
In order to run NFS, the portmap daemon must run. Check for automatic running
at boot with the following command:<br>
<br>
<table cellspacing="2" cellpadding="2" border="1">
<tbody>
<tr>
<td valign="top" bgcolor="#ffffcc">
<pre>[root@myserver root]# <b>chkconfig --list portmap</b><br>portmap 0:off 1:off 2:off 3:on 4:on 5:on 6:off<br>[root@myserver root]#<br></pre>
</td>
</tr>
</tbody>
</table>
<br>
Note that in the preceding example, runlevels 3, 4, and 5 say "on". That
means that at boot, for runlevels 3, 4 and 5, the portmap daemon is started
automatically. If any of 3, 4 or 5 says "off", turn them on with the following
command:<br>
<pre>chkconfig portmap on<br></pre>
Now check that the portmapper is really running, using the <tt>ps</tt> and
<tt>grep</tt> commands:<br>
<br>
<table cellspacing="2" cellpadding="2" border="1">
<tbody>
<tr>
<td valign="top" bgcolor="#ffffcc">
<pre>[root@myserver root]# <b>ps ax | grep portmap</b><br> 3171 ? S 0:00 portmap<br> 4255 pts/0 S 0:00 grep portmap<br>You have new mail in /var/spool/mail/root<br>[root@myserver root]#<br></pre>
</td>
</tr>
</tbody>
</table>
<br>
The preceding shows the portmap daemon running at process number 3171.<br>
<h2>Checking the NFS Daemon</h2>
Next perform the exact same steps for the NFS daemon. Check for automatic
run at boot:<br>
<br>
<table cellspacing="2" cellpadding="2" border="1">
<tbody>
<tr>
<td valign="top" bgcolor="#ffffcc">
<pre>[root@myserver root]# chkconfig --list nfs<br>nfs 0:off 1:off 2:on 3:on 4:on 5:on 6:off<br>[root@myserver root]#<br></pre>
</td>
</tr>
</tbody>
</table>
<br>
If any of runlevels 3, 4 or 5 says "off", turn all three on with the following
command:<br>
<pre>chkconfig nfs on<br></pre>
And check that the NFS and the mountd daemons are running as follows:<br>
<pre>ps ax | grep nfs<br>ps ax | grep mountd<br></pre>
You might see several different nfs daemons -- that's OK.<br>
<h2>Restarting the Daemons</h2>
When learning or troubleshooting, there's nothing so wonderful as a known
state. On that theory, restart the daemons before proceeding. Or if you want
a truly known state, reboot. Always restart the portmap daemon BEFORE restarting
the NFS daemon. Here's how you restart the two daemons:<br>
<pre>service portmap restart<br>service nfs restart<br></pre>
You'll see messages on the screen indicating if the startups were successful.
If they are not, troubleshoot. If they are, continue.<br>
<br>
Note that the mountd daemon is started by restarting nfs.<br>
<br>
You don't need to restart these daemons every time. Now that you've enabled
the daemons on reboot, you can safely assume they're running (unless there's
an NFS problem -- then don't make this assumption). From now on the only
time you need to restart the NFS daemon is when you change <tt>/etc/exports</tt>.
Theoretically you should never need to restart the portmap daemon unless
there's a problem.<br>
<h2>Summary</h2>
NFS requires a running portmap daemon and NFS daemon. Use <tt>chkconfig</tt>
to make sure these daemons run at boot, make sure they're running, and the
first time, restart the portmap daemon and then the NFS daemon just to be
sure you've achieved a known state.<br>
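As a sketch, the daemon checks above can be rolled into one loop. The daemon names (portmap, nfsd, mountd) are the classic Red Hat-era ones this article uses; adjust them for your system:

```shell
# Quick check that the NFS-related daemons are alive on the server.
# Daemon names are the classic ones; adjust for your distribution.
for d in portmap nfsd mountd; do
    if ps ax 2>/dev/null | grep -v grep | grep -q "$d"; then
        echo "$d: running"
    else
        echo "$d: NOT running"
    fi
done
```

If any daemon reports NOT running, restart it as shown above before going further.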
<h1><a name="_Configure_the_NFS_Server"></a>Configure the NFS Server</h1>
Configuring an NFS server is as simple as placing a line in the <tt>/etc/exports</tt>
file. That line has three pieces of information:<br>
<ol>
<li>The directory to be shared (exported)</li>
<li>The computer, NIS group, hostname, domain name or subnet allowed to
access that directory</li>
<li>Any options, such as <tt>ro</tt> or <tt>rw</tt>, or several other options<br>
</li>
</ol>
There's one line for each directory being shared. The general syntax is:<br>
<pre>directory_being_shared subnet_allowed_to_access(options)<br></pre>
Here's an example:<br>
<pre>/home/myself 192.168.100.0/24(ro)<br></pre>
In the preceding example, the directory being shared is <tt>/home/myself</tt>,
the subnet is <tt>192.168.100.0/24</tt>, and the option is <tt>ro</tt> (read
only). The subnet can also be a single host, in which case there would be
an IP address with no bitmask (the <tt>/24</tt> in the preceding example).
Or it can be an NIS netgroup, in which case the IP address is replaced with
<tt>@groupname</tt>. You can use a wildcard such as ? or * to replace part
of a Fully Qualified Domain Name (FQDN). An example would be <tt>*.accounting.domain.cxm</tt>.
Do not use wildcards in IP addresses, as they work only intermittently there.<br>
<br>
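Putting the preceding host-specification forms together, a hypothetical <tt>/etc/exports</tt> might look like this (the netgroup name <tt>trusted</tt> and the last two directories are invented for illustration):

```
/scratch/rh8iso   mydesk.domain.cxm(ro)
/home/myself      192.168.100.0/24(rw)
/data/projects    @trusted(rw)
/pub              *.accounting.domain.cxm(ro)
```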
There are two kinds of options: General options and User ID Mapping options.
Read on...<br>
<h2>General Options</h2>
Many options can go in the parentheses. If more than one, they are delimited
by commas. Here are the common options:<br>
<table cellspacing="2" cellpadding="2" border="1">
<tbody>
<tr>
<td valign="top" bgcolor="#cccccc"><big><b>Option<br>
</b> </big></td>
<td valign="top" bgcolor="#cccccc"><big><b>What it does<br>
</b> </big></td>
<td valign="top" bgcolor="#cccccc"><big><b>Comment<br>
</b> </big></td>
</tr>
<tr>
<td valign="top">ro<br>
</td>
<td valign="top">Read Only<br>
</td>
<td valign="top">The share cannot be written. This is the default.<br>
</td>
</tr>
<tr>
<td valign="top">rw<br>
</td>
<td valign="top">Read Write<br>
</td>
<td valign="top">The share can be written.<br>
</td>
</tr>
<tr>
<td valign="top">secure<br>
</td>
<td valign="top">Ports under 1024<br>
</td>
<td valign="top">Requires that requests originate on a port less&nbsp;
than IPPORT_RESERVED (1024). This is the default.<br>
</td>
</tr>
<tr>
<td valign="top">insecure<br>
</td>
<td valign="top">Negation of secure<br>
</td>
<td valign="top"><br>
</td>
</tr>
<tr>
<td valign="top">async<br>
</td>
<td valign="top">Reply before disk write<br>
</td>
<td valign="top">Replies to requests before the data is written to
disk. This improves performance, but results in lost data if the server goes
down.</td>
</tr>
<tr>
<td valign="top">sync</td>
<td valign="top">Reply only after disk write</td>
<td valign="top">Replies to the NFS request only after all data has
been written to disk. This is much safer than async, and is the default in
all nfs-utils versions after 1.0.0.</td>
</tr>
<tr>
<td valign="top">no_wdelay<br>
</td>
<td valign="top">Write disk as soon as possible<br>
</td>
<td valign="top">NFS has an optimization algorithm that delays disk
writes if NFS deduces a likelihood of a related write request soon arriving.
This saves disk writes and can speed performance.<br>
<b><big>BUT...</big></b><br>
If NFS deduced wrongly, this behavior causes delay in every request, in which
case this delay should be eliminated. That's what the no_wdelay option does
-- it eliminates the delay. In general, no_wdelay is recommended when most
NFS requests are small and unrelated.<br>
</td>
</tr>
<tr>
<td valign="top">wdelay</td>
<td valign="top">Negation of no_wdelay<br>
</td>
<td valign="top">This is the default.<br>
</td>
</tr>
<tr>
<td valign="top">nohide<br>
</td>
<td valign="top">Reveal nested directories<br>
</td>
<td valign="top">Normally, if a server exports two filesystems one of
which is mounted on the other, then&nbsp; the&nbsp; client&nbsp; will&nbsp;
have&nbsp; to mount&nbsp; both filesystems explicitly to get access to them.&nbsp;
If it just mounts the parent, it will see an empty&nbsp; directory&nbsp; at&nbsp;
the place where the other filesystem is mounted.&nbsp; That filesystem is
"hidden".<br>
<br>
Setting the nohide option on a filesystem causes it&nbsp; not&nbsp; to&nbsp;
be hidden,&nbsp; and&nbsp; an appropriately authorised client will be able
to move from the parent to that&nbsp; filesystem&nbsp; without&nbsp; noticing&nbsp;
the change.<br>
<br>
However,&nbsp; some&nbsp; NFS clients do not cope well with this situation
as, for instance, it is then possible for two files in&nbsp; the&nbsp; one
apparent filesystem to have the same inode number.<br>
<br>
The&nbsp; nohide&nbsp; option&nbsp; is&nbsp; currently only effective on single
host exports.&nbsp; It does not work reliably with&nbsp; netgroup,&nbsp; subnet,&nbsp;
or wildcard exports.<br>
<br>
This option can be very useful in some situations, but it should be used with
due care, and only after confirming that the client system copes with the
situation effectively.<br>
</td>
</tr>
<tr>
<td valign="top">hide<br>
</td>
<td valign="top">Negation of nohide<br>
</td>
<td valign="top">This is the default<br>
</td>
</tr>
<tr>
<td valign="top">subtree_check<br>
</td>
<td valign="top">Verify requested file is in exported tree<br>
</td>
<td valign="top">This is the default. Every file request is checked
to make sure that the requested file is in an exported subdirectory. If this
option is turned off, the only verification is that the file is in an exported
filesystem.<br>
</td>
</tr>
<tr>
<td valign="top">no_subtree_check </td>
<td valign="top">Negation of subtree_check<br>
</td>
<td valign="top">Occasionally, subtree checking can produce problems
when a requested file is renamed while the client has the file open. If many
such situations are anticipated, it might be better to set <tt>no_subtree_check</tt>.
One such situation might be the export of the <tt>/home</tt> filesystem. Most
other situations are best handled with <tt>subtree_check</tt>. </td>
</tr>
<tr>
<td valign="top">secure_locks<br>
</td>
<td valign="top">Require authorization for lock requests<br>
</td>
<td valign="top">This is the default. Require authorization of all locking
requests.<br>
</td>
</tr>
<tr>
<td valign="top">insecure_locks<br>
</td>
<td valign="top">Negation of secure_locks<br>
</td>
<td valign="top">Some NFS clients don't send credentials with lock requests,
and hence work incorrectly with <tt>secure_locks</tt>., in which case you
can only lock world-readable files. If you have such clients, either replace
them with better ones, or use the <tt>insecure_locks</tt> option.<br>
</td>
</tr>
<tr>
<td valign="top">auth_nlm<br>
</td>
<td valign="top">Synonym for secure_locks<br>
</td>
<td valign="top"><br>
</td>
</tr>
<tr>
<td valign="top">no_auth_nlm<br>
</td>
<td valign="top">Synonym for secure_locks<br>
</td>
<td valign="top"><br>
</td>
</tr>
</tbody>
</table>
<h2>User ID Mapping Options</h2>
In an ideal world, the user and group of the requesting client would determine
the permissions of the data returned. We don't live in an ideal world. Two
real-world problems intervene:<br>
<ol>
<li>You might not trust the root user of a client with root access to the
server's files.</li>
<li>The same username on client and server might have different numerical
ID's</li>
</ol>
Problem 1 is conceptually simple. John Q. Programmer is given a test machine
for which he has root access. In no way does that mean that John Q. Programmer
should be able to alter root owned files on the server. Therefore NFS offers
<i>root squashing</i>, a feature that maps uid 0 (root) to the anonymous
(nfsnobody) uid, which defaults to -2 (65534 as an unsigned 16 bit number).<br>
<br>
So when John Q. Programmer mounts the share, he can access only what the
anonymous user and group can access. That means files that are world readable
or writeable, or files that belong to either user <tt>nfsnobody</tt> or group
<tt>nfsnobody</tt> and allow access by the user or group. One way to do this
is to export a chmod 777 directory (booooooo). A better way is to export
a directory belonging to user <tt>nfsnobody</tt> or group <tt>nfsnobody</tt>,
and setting its permissions accordingly. Now root users from other boxes can write
files in that directory, and read the files they write, but they can't read
or write files created by root on the server itself.<br>
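Here's a sketch of that better way, assuming user and group <tt>nfsnobody</tt> exist (the Red Hat-era default). <tt>/data/foxwood</tt> is a hypothetical export directory, overridable through the <tt>EXPORT_DIR</tt> variable:

```shell
# Prepare a server directory for root-squashed clients.  Assumes user and
# group "nfsnobody" exist.  /data/foxwood is a hypothetical export
# directory; override it by setting EXPORT_DIR.
d=${EXPORT_DIR:-/data/foxwood}
mkdir -p "$d"
# chown needs root; on a non-root shell this just prints a reminder.
chown nfsnobody:nfsnobody "$d" 2>/dev/null || echo "note: rerun as root to chown to nfsnobody"
chmod 770 "$d"           # group-writeable, NOT world-writeable (no chmod 777)
stat -c "mode=%a" "$d"   # prints mode=770
```

After that, export the directory with root squashing left at its default, and client root users can work inside it without touching the server's root-owned files.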
<br>
Now that you know what root squashing is, how do you enable or disable it
on a per-share basis? If you want to enable root squashing, that's simple,
because it's the default. If you want to disable it, so that root on any
box operates as root within the mounted share, disable it with the <tt>no_root_squash</tt>
option as follows:<br>
<pre>/data/foxwood 192.168.100.0/24(rw,no_root_squash)</pre>
If, for documentation purposes or to guard against a future change in the
default, you'd like to explicitly specify root squashing, use the <tt>root_squash</tt>
option.<br>
<br>
Perhaps you'd like to change the default anonymous user or group on a per-share
basis. That way the client's root user can access files within the share
as a specific user, let's say user <tt>myself</tt>. No problem. Use the <tt>anonuid</tt>
or <tt>anongid</tt> option. The following example uses the <tt>anongid</tt>
option to access the share as group <tt>myself</tt>, assuming that on the
server group <tt>myself</tt> has gid 655:<br>
<pre>/data/wekiva 192.168.100.0/24(rw,anongid=655)</pre>
The preceding makes the client's root user group 655 (which happens to be
group <tt>myself</tt>) on share <tt>/data/wekiva</tt>. Files created by the
client's <tt>root</tt> user get group 655 but keep the anonymous uid, because
only <tt>anongid</tt>, not <tt>anonuid</tt>, was specified.
<br>
Now imagine that instead of mapping incoming client root requests to the
anonymous user or group, you want ALL incoming NFS requests to be mapped
to the anonymous user or the anonymous group. To accomplish that you use
the <tt>all_squash</tt> option, as follows:<br>
<pre>/data/altamonte 192.168.100.0/24(rw,all_squash)</pre>
You can combine the <tt>all_squash</tt> option with the <tt>anonuid</tt> and <tt>anongid</tt>
options to make directories accessible as if the incoming request was from
that user or that group. The one problem with that is that, for NFS purposes,
it makes the share world readable and/or world writeable, at least to the
extent of which hosts are allowed to mount the share.<br>
<br>
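For instance, to map every incoming request to uid and gid 655 (user and group <tt>myself</tt> in these examples), you might combine the options like this:

```
/data/altamonte 192.168.100.0/24(rw,all_squash,anonuid=655,anongid=655)
```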
We'll get into this subject a little bit more when discussing the Gotcha
concerning different user and group id numbers.<br>
<br>
The following table lists the User ID Mapping Options:<br>
<br>
<br>
<table cellspacing="2" cellpadding="2" border="1">
<tbody>
<tr>
<td valign="top" bgcolor="#cccccc"><big><b>Option<br>
</b> </big></td>
<td valign="top" bgcolor="#cccccc"><big><b>What it does<br>
</b> </big></td>
<td valign="top" bgcolor="#cccccc"><big><b>Comment<br>
</b> </big></td>
</tr>
<tr>
<td valign="top">
<pre>root_squash</pre>
</td>
<td valign="top">Convert incoming requests from user root to the anonymous
uid and gid.<br>
</td>
<td valign="top">This is the default.<br>
</td>
</tr>
<tr>
<td valign="top">
<pre>no_root_squash</pre>
</td>
<td valign="top">Negation of <tt>root_squash</tt><br>
</td>
<td valign="top"><br>
</td>
</tr>
<tr>
<td valign="top"><tt>anonuid</tt> </td>
<td valign="top">Set anonymous user id to a specific id<br>
</td>
<td valign="top">The id is a number, not a name. This number can be
obtained by this command on the server:<br>
<pre>grep myself /etc/passwd<br></pre>
Where <tt>myself</tt> is the username whose uid you want to find.<br>
</td>
</tr>
<tr>
<td valign="top"><tt>anongid</tt> </td>
<td valign="top">Set anonymous group id to a specific id </td>
<td valign="top">The id is a number, not a name. This number can be
obtained by this command on the server:<br>
<pre>grep myself /etc/group<br></pre>
Where <tt>myself</tt> is the name of the group whose gid you want to find.
</td>
</tr>
<tr>
<td valign="top"><tt>all_squash</tt><br>
</td>
<td valign="top">Convert incoming requests, from ALL users, to the
anonymous uid and gid. </td>
<td valign="top">Remember that this gives all incoming users the same
set of rights to the share. This may not be what you want.<br>
</td>
</tr>
</tbody>
</table>
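As an alternative sketch to grepping <tt>/etc/passwd</tt> and <tt>/etc/group</tt>, the <tt>id</tt> command prints the numbers directly. User <tt>root</tt> stands in here only because it exists on every system; substitute the user whose numbers you want:

```shell
# Look up the numbers to feed anonuid= and anongid=.
lookup_uid() { id -u "$1"; }    # numeric uid for a username
lookup_gid() { id -g "$1"; }    # numeric primary gid for a username
echo "anonuid=$(lookup_uid root)"    # prints anonuid=0
echo "anongid=$(lookup_gid root)"    # prints anongid=0
```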
<br>
<h1><a name="_Mounting_an_NFS_Share_on_a_Client"></a>Mounting an NFS Share
on a Client</h1>
Mounting an NFS share on a client can be simple. At its simplest it might
look like this:<br>
<pre>mount -t nfs -o ro 192.168.100.85:/data/altamonte /mnt/test<br></pre>
The English translation of the preceding is this: mount type (<tt>-t</tt>)
nfs with options (<tt>-o</tt>) read only (<tt>ro</tt>) server <tt>192.168.100.85</tt>'s
directory <tt>/data/altamonte</tt> at mount point <tt>/mnt/test</tt>. What
usually changes is the comma delimited list of options (<tt>-o</tt>). For
instance, NFS typically performs better with <tt>rsize=8192</tt> and <tt>wsize=8192</tt>.
These are the read and write buffer sizes, and it's been found that in general
8192 performs better than the default 4096. The <tt>hard</tt> option keeps
the request alive even if the server goes down, whereas the <tt>soft</tt> option
enables the mount to time out if the server goes down. The <tt>hard</tt> option
has the advantage that whenever the server comes back up, the file activity
continues where it left off.<br>
<br>
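Putting the <tt>rsize</tt>/<tt>wsize</tt> and <tt>hard</tt> options together gives a fuller mount command. The sketch below only prints the command unless <tt>DO_MOUNT=1</tt> is set, because the real mount needs root and a reachable server:

```shell
# Sketch: the fuller mount command using the options discussed above.
# Requires root and a reachable server, so this just prints the command
# unless DO_MOUNT=1 is set.
opts="rw,rsize=8192,wsize=8192,hard"
if [ "${DO_MOUNT:-0}" = "1" ]; then
    mount -t nfs -o "$opts" 192.168.100.85:/home/myself /mnt/test
else
    echo "would run: mount -t nfs -o $opts 192.168.100.85:/home/myself /mnt/test"
fi
```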
Besides these and a few other NFS specific options, there are filesystem
independent options such as async/sync/dirsync, atime/noatime, auto/noauto,
defaults, dev/nodev, exec/noexec, _netdev, remount, ro, rw, suid/nosuid, user/nouser.<br>
<table cellspacing="2" cellpadding="2" border="1">
<tbody>
<tr>
<td valign="top" bgcolor="#cccccc"><b><big>Option<br>
</big></b></td>
<td valign="top" bgcolor="#cccccc"><b><big>Action<br>
</big></b></td>
<td valign="top" bgcolor="#cccccc"><b><big>Default?<br>
</big></b></td>
<td valign="top" bgcolor="#cccccc"><b><big>Comment<br>
</big></b></td>
<td valign="top" bgcolor="#cccccc"><b><big>Negation<br>
option<br>
</big></b></td>
</tr>
<tr>
<td valign="top">async</td>
<td valign="top">All I/O done asynchronously<br>
</td>
<td valign="top">Y<br>
</td>
<td valign="top">Better performance, more possiblity of corruption
when things crash. Do not use when the same file is being modified by different
users.<br>
</td>
<td valign="top">sync</td>
</tr>
<tr>
<td valign="top">sync</td>
<td valign="top">All I/O done synchronously</td>
<td valign="top">N<br>
</td>
<td valign="top">Less likelihood of corruption, less likelihood of
overwrite by other users.<br>
</td>
<td valign="top">async</td>
</tr>
<tr>
<td valign="top">dirsync</td>
<td valign="top">All I/O to directories done synchronously<br>
</td>
<td valign="top">N<br>
</td>
<td valign="top"><br>
</td>
<td valign="top"><br>
</td>
</tr>
<tr>
<td valign="top">atime</td>
<td valign="top">Update inode access time for each&nbsp; access.<br>
</td>
<td valign="top">Y<br>
</td>
<td valign="top"><br>
</td>
<td valign="top">noatime</td>
</tr>
<tr>
<td valign="top">auto</td>
<td valign="top">Automatic mounting.<br>
</td>
<td valign="top">Y<br>
</td>
<td valign="top">Can be mounted with the -a option. Mounted at boot
time.</td>
<td valign="top">noauto</td>
</tr>
<tr>
<td valign="top">defaults</td>
<td valign="top">Shorthand for default options.<br>
</td>
<td valign="top"><br>
</td>
<td valign="top">rw,suid,dev,exec,auto,nouser,async.<br>
</td>
<td valign="top"><br>
</td>
</tr>
<tr>
<td valign="top">dev</td>
<td valign="top">Device<br>
</td>
<td valign="top">Y<br>
</td>
<td valign="top">Interpret character or block special devices on the&nbsp;
file system.<br>
</td>
<td valign="top">nodev</td>
</tr>
<tr>
<td valign="top">exec</td>
<td valign="top">Permit execution of binaries.<br>
</td>
<td valign="top">Y<br>
</td>
<td valign="top"><br>
</td>
<td valign="top">noexec</td>
</tr>
<tr>
<td valign="top"> _netdev</td>
<td valign="top">Device requires network.<br>
</td>
<td valign="top"><br>
</td>
<td valign="top">The&nbsp; device holding the filesystem requires network
access. Do not mount until the network has been enabled.<br>
</td>
<td valign="top"><br>
</td>
</tr>
<tr>
<td valign="top">remount</td>
<td valign="top">Remount a mounted system.<br>
</td>
<td valign="top"><br>
</td>
<td valign="top">Used to change the mount flags, especially to toggle
between rw and ro.<br>
</td>
<td valign="top"><br>
</td>
</tr>
<tr>
<td valign="top">ro</td>
<td valign="top">Allow only read access.<br>
</td>
<td valign="top">N<br>
</td>
<td valign="top">Used to protect the mounted filesystem from writes.
Even if the filesystem is writeable by the user, and is exported writeable,
this still protects it.<br>
</td>
<td valign="top">rw</td>
</tr>
<tr>
<td valign="top">rw</td>
<td valign="top">Allow both read and write.<br>
</td>
<td valign="top">Y<br>
</td>
<td valign="top">Allow writing to the filesystem, assuming that the
system is writeable by the user and has been exported writeable.<br>
</td>
<td valign="top">ro </td>
</tr>
<tr>
<td valign="top">suid</td>
<td valign="top">Allow set-user-identifier and/or set-group-identifier
bits to take effect.<br>
</td>
<td valign="top">Y<br>
</td>
<td valign="top"><br>
</td>
<td valign="top">nosuid</td>
</tr>
<tr>
<td valign="top">user</td>
<td valign="top">Allow mounting by ordinary user.<br>
</td>
<td valign="top">N<br>
</td>
<td valign="top">When used in <tt>/etc/fstab</tt>, this allows mounting
by an ordinary user. Only the user performing the mount can unmount it.<br>
</td>
<td valign="top">nouser</td>
</tr>
<tr>
<td valign="top">users<br>
</td>
<td valign="top">Allow mounting and dismounting by arbitrary user.</td>
<td valign="top">N<br>
</td>
<td valign="top">When used in <tt>/etc/fstab</tt>, this allows mounting
by an ordinary user. Any user can unmount it at any time, regardless of who
initially mounted it.</td>
<td valign="top"><br>
</td>
</tr>
</tbody>
</table>
<br>
<h3><tt>/etc/fstab</tt> syntax</h3>
Like any other mount, NFS mounting can be done in <tt>/etc/fstab</tt>. The
advantages to placing it in <tt>/etc/fstab</tt> are:<br>
<ul>
<li>It can be mounted automatically (auto) either with mount -a or on boot.</li>
<li>It can easily be configured to be mountable by ordinary users (user
or users).</li>
<li>The mount is documented in <tt>/etc/fstab.</tt></li>
</ul>
The disadvantages to placing a mount in <tt>/etc/fstab</tt> are:<br>
<ul>
<li><tt>/etc/fstab</tt> can become cluttered by too many mounts.</li>
<li>The mountpoint cannot be used for different filesystems.</li>
</ul>
The following example shows an NFS mount:<br>
<pre>192.168.100.85:/home/myself /mnt/test nfs users,noauto,rw 0 0<br></pre>
The preceding is a typical example. Just like other <tt>/etc/fstab</tt> mounts,
NFS mounts in <tt>/etc/fstab</tt> have 6 columns, listed in order as follows:<br>
<ol>
<li>The filesystem to be mounted (<tt>192.168.100.85:/home/myself</tt>)</li>
<li>The mountpoint (<tt>/mnt/test</tt>)</li>
<li>The type of the filesystem (<tt>nfs</tt>)</li>
<li>The options (<tt>users,noauto,rw</tt>)</li>
<li>Frequency to be dumped (a backup method)&nbsp; (<tt>0</tt>)</li>
<li>Order in which to be fsck'ed at boot time.&nbsp; (<tt>0</tt>). The
root filesystem should have a value of 1 so it gets fsck'ed first. Others
should have 2 or more so they get fsck'ed later. A value of 0 means don't
perform the fsck at all.</li>
</ol>
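Combining the six columns above with the options from the mount table, a boot-time entry for the altamonte share might look like this (the mountpoint <tt>/mnt/altamonte</tt> is hypothetical):

```
192.168.100.85:/data/altamonte  /mnt/altamonte  nfs  rw,hard,rsize=8192,wsize=8192  0 0
```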
<h2>Summary</h2>
The server exports a share, but to use it the client must mount that share.
The mount is performed with a mount command, like this:<br>
<pre>mount -t nfs -o rw 192.168.100.85:/data/altamonte /mnt/test<br></pre>
That same mount can be performed in <tt>/etc/fstab</tt> with the following
syntax:<br>
<pre>192.168.100.85:/data/altamonte /mnt/test nfs rw 0 0</pre>
There are many mount options that can be used, and those are listed in this
article.<br>
<h1><a name="_Gotchas"></a>Gotchas</h1>
If you've worked with NFS, you know it's not that simple. Oftentimes the
mount fails, times out, or takes so long as to discourage use. Sometimes
the mount succeeds but the data is inaccessible. These problems can be a
bear to troubleshoot.<br>
<br>
To make troubleshooting easier this article lists the usual causes of NFS
failure, ways to quickly check whether these problems are the cause, and
methods to overcome these problems. Here are the typical causes of NFS problems:<br>
<ul>
<li>The <tt>portmap</tt> or <tt>nfs</tt> daemons are not running<br>
</li>
<li>Syntax error on client mount command or server <tt>/etc/exports</tt></li>
<ul>
<li>A space between the mount point and the <tt>(rw)</tt> causes the
<tt>(rw)</tt> to be ignored.<br>
</li>
</ul>
<li>Problems with permissions, uid's and gid's</li>
<li>Firewalls filtering packets necessary for NFS. The offending firewall
is typically on the server, but it could also be on the client.</li>
<li>Bad DNS on server (including <tt>/etc/resolv.conf</tt> on the server).</li>
</ul>
<table width="60%" cellspacing="2" cellpadding="2" border="1" bgcolor="#ffcccc" align="center">
<tbody>
<tr>
<td valign="top">
<div align="center"><big><b>!! WARNING !!</b></big><br>
</div>
<br>
Always restart the nfs service after making a change to <tt>/etc/exports</tt>.
Otherwise your changes will not be recognized, leading you down a long and
winding dead end.<br>
</td>
</tr>
</tbody>
</table>
<br>
<table cellspacing="2" cellpadding="2" border="1">
<tbody>
<tr>
<td valign="top" bgcolor="#cccccc"><big><b>Cause category<br>
</b> </big></td>
<td valign="top" bgcolor="#cccccc"><big><b>Symptom<br>
</b> </big></td>
</tr>
<tr>
<td valign="top">The <tt>portmap</tt> or <tt>nfs</tt> daemons are not
running</td>
<td valign="top">Typically, failure to mount</td>
</tr>
<tr>
<td valign="top">Syntax error on client mount command or server's <tt>/etc/exports</tt><br>
</td>
<td valign="top">Typically, failure to mount or failure to write enable.
A space between the mount point and the <tt>(rw)</tt> causes the share to
be read-only -- a frustrating and hard to diagnose problem.</td>
</tr>
<tr>
<td valign="top">Problems with permissions, uid's and gid's</td>
<td valign="top">Mounts OK, but access to the data is impossible or
not as specified<br>
</td>
</tr>
<tr>
<td valign="top">Firewalls filtering packets necessary for NFS</td>
<td valign="top">Mount failures, timeouts, excessively slow mounts,
or intermittent mounts</td>
</tr>
<tr>
<td valign="top">Bad DNS on server</td>
<td valign="top">Mount failures, timeouts, excessively slow mounts,
or intermittent mounts</td>
</tr>
</tbody>
</table>
<br>
Here's your predefined diagnostic:<br>
<ol>
<li>Check the daemons on the server</li>
<li>Eyeball the syntax of the client mount command and the server <tt>/etc/exports</tt>.
Pay particular attention that the mountpoint is NOT separated from the parenthesized
options list, because a space between the mountpoint and the opening paren
causes the options to be ignored.</li>
<li>Carefully read error messages and develop a symptom description</li>
<li>If the symptom involves successful mounts but you can't correctly access
the data, check permissions, gid's and uid's. Correct as necessary.</li>
<li>If there are still problems, disable firewalls or log firewalls.&nbsp;</li>
<li>If there are still problems, investigate the server's DNS, host name
resolution, etc.<br>
</li>
</ol>
For maximum diagnostic speed, quickly check that the portmap and nfs daemons
are running on the server. If not, investigate why not. Next, eyeball the
syntax on the client's mount command and the server's <tt>/etc/exports</tt>
file. Look for not only bad syntax, but wrong information such as wrong IP
addresses, wrong filesystem directories, and wrong mountpoints. If you find
bad syntax, correct it. These two steps should take no more than 3 minutes,
and will find the root cause in many cases.<br>
<br>
Next, carefully read the error message, and formulate a symptom description.
Try to determine whether the mount has succeeded. If the mount succeeded
but you can't access the data, it's likely a problem with permissions, uid's
or gid's. Investigate that. If the mount succeeds but it's slow, investigate
firewalls and DNS. A healthy NFS system should mount instantaneously. By
the time you lift your finger off the Enter key, the mount should have been
completed. If it takes more than one second, there's a problem that bears
investigation.<br>
<br>
The hardest problems are those in which you experience mount failures, timeouts,
excessively slow mounts, or intermittent mounts. In such situations, it's
likely either a firewall problem or a server DNS problem. Investigate those.<br>
<br>
Each of these problem categories is discussed in an article later in this
document.<br>
<h1><a name="_Checking_the_Daemons"></a>1: Check the Daemons on the Server</h1>
This will take you all of a minute. Perform the following 2 commands on the
server:<br>
<pre>ps ax | grep portmap<br>ps ax | grep nfs<br></pre>
If either shows nothing (or shows only the grep command itself), that daemon
is not running. Investigate why. Start by seeing if it's even set to run
at boot:<br>
<pre>/sbin/chkconfig --list portmap<br>/sbin/chkconfig --list nfs<br></pre>
Each command will output a line showing the run levels at which the command
is on. If either one is not on at any runlevel between 3 and 5 inclusive,
turn it on with one or both of these commands:<br>
<pre>/sbin/chkconfig portmap on<br>/sbin/chkconfig nfs on<br></pre>
The preceding commands set it to fire at boot, but do not run the daemon.
You must run them manually:<br>
<pre>service portmap restart<br>service nfs restart<br></pre>
Always restart the portmap daemon before restarting the nfs daemon, because
NFS needs the portmapper to function. If either of those commands fails or
produces an error message, investigate.<br>
<br>
<big><b>IMPORTANT NOTE: </b><small>Even if the daemons were both running
when you investigated, restart them both anyway. First, you might see an
error message. Second, it's always nice to achieve a known state. Restarting
these two daemons should take a minute. That one minute is a tiny price to
pay for the peace of mind you achieve knowing that there's no undiscovered
problem with the daemons.</small></big><br>
<br>
If NFS fails to start, investigate the syntax in <tt>/etc/exports</tt>, and
possibly comment out everything in that file, and try another restart. If
that changes the symptom, divide and conquer. If restarting NFS takes a huge
amount of time, investigate the server's DNS.<br>
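The daemon check above can also be made against the portmapper's own registration table. This sketch parses saved output of <tt>rpcinfo -p</tt> (the capture filename is an arbitrary choice) and reports which of the services NFS needs are absent:

```shell
#!/bin/sh
# Sketch: report which services NFS depends on are missing from a saved
# copy of "rpcinfo -p" output. On a live server, capture the table first:
#     rpcinfo -p > /tmp/rpc.txt      # filename is an arbitrary choice
check_rpc_services() {
    for svc in portmapper mountd nfs; do
        grep -qw "$svc" "$1" || echo "MISSING: $svc"
    done
}
```

Empty output means all three are registered; any MISSING line points straight at the daemon to restart.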
<h1><a name="_Syntax_Problems"></a>2: Eyeball the Syntax</h1>
If the daemons work, eyeball the syntax of the mount command on the client
and the <tt>/etc/exports</tt> file on the server. Obviously, if you use the
wrong syntax (or wrong IP addresses or directories) in your mount command,
the mount fails. You needn't take a great deal of time -- just verify that
the syntax is correct and you're using the correct IP addresses, directories
and mount points. Correct as necessary, and retest.<br>
<br>
Pay SPECIAL attention to make sure there is no space between the mountpoint
and the opening paren of the options list. A space between them causes the
options to be ignored -- clearly not what you want. If you can't figure out
why a mount is read-only, even though the client mount command specifies
read-write and the server's directory is clearly read-write with the correct
user and group (not a number, but an actual name), suspect this intervening
space.<br>
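When a mount that should be read-write comes up read-only, it helps to look at the options the kernel actually applied, rather than the ones you asked for. A minimal sketch -- the mountpoint in the usage note is a hypothetical example, and on a live client the table argument defaults to <tt>/proc/mounts</tt>:

```shell
#!/bin/sh
# Sketch: print the options the kernel actually applied to a mountpoint.
# $1 = mountpoint, $2 = mounts table (defaults to /proc/mounts).
mount_opts() {
    awk -v mp="$1" '$2 == mp { print $4 }' "${2:-/proc/mounts}"
}
```

If <tt>mount_opts /mnt/nfs</tt> starts with <tt>ro</tt> despite an <tt>rw</tt> in your mount command, go hunting for that stray space in <tt>/etc/exports</tt>.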
<table width="60%" cellspacing="2" cellpadding="2" border="1" bgcolor="#ffcccc" align="center">
<tbody>
<tr>
<td valign="top">
<div align="center"><big><b>!! WARNING !!</b></big><br>
</div>
<br>
Always restart the nfs service after making a change to <tt>/etc/exports</tt>.
Otherwise your changes will not be recognized, leading you down a long and
winding dead end.<br>
</td>
</tr>
</tbody>
</table>
<h1><a name="_Carefully_read_error_messages_and_develop"></a>3: Carefully
read error messages and develop a symptom description</h1>
The first two steps were general maintenance -- educated guesses designed
to yield quick solutions. If they didn't work, it's time to buckle down and
troubleshoot. The first step is to read the error message, and learn more
about it. You might want to check the system logs (start with <tt>/var/log/messages</tt>)
in case relevant messages were written.<br>
<br>
Try several mounts and umounts, and note exactly what the malfunction looks
like:<br>
<ul>
<li>Does the mount produce an error message?</li>
<li>Does the mount time out?</li>
<li>Does the mount appear to hang forever (more than 5 minutes)?</li>
<li>Does the mount appear to succeed, but the data can't be seen, read
or written as expected?</li>
<li>Does the symptom change over time, or with reboots?<br>
</li>
</ul>
The more you learn and record about the symptom, the better your chances
of quickly and accurately solving the problem.<br>
<h1><a name="_If_it_mounts_but_cant_access"></a>4: If it mounts but can't
access, check permissions, gid's and uid's</h1>
Generally speaking, the permissions on the server don't affect the mounting
or unmounting of the NFS share. But they very much affect whether such a
share can be seen, executed, read or written. Often the cause is obvious.
If the directory is owned by root, permissioned 700, it obviously can't be
read and written by user <tt>myself</tt>. &nbsp;This type of problem is easy
to diagnose and fix.<br>
<br>
Tougher are root squashing problems. You access an NFS share as user root,
and yet you can't see the mounted share or its contents. Remember that this
is probably happening because on the server you're operating not as root,
but as the anonymous user. A quick test is to change the server's
export to <tt>no_root_squash</tt>, restricted to a single IP address (for
security). If the problem goes away, it's a root squashing problem. Either
access the share as non-root, or change the ownership of the directory and
its contents to the anonymous uid and gid.<br>
<br>
By far the toughest problems are caused by non-matching uid's and gid's. Let's
say you share your home directory on the server, and you log in as yourself
on the client and mount that share. It mounts ok (we'll assume you used <tt>su
-c</tt> or <tt>sudo</tt> to mount it), but you can't read the data -- permission
denied!<br>
<br>
That's crazy. The directory you're sharing is owned by <tt>myself</tt>, and
you're logged into the client as <tt>myself</tt>, and yet you don't have permission
to read. What's up?<br>
<br>
It turns out that under the hood, NFS requests contain numeric uid's and gid's,
but not actual usernames or groupnames. What that means is that if user <tt>myself</tt>
is uid 555 on the server, but uid 600 on the client, you're trying to access
files owned by uid 555 when you're uid 600. That means your only rights to
the mounted material are permissions granted to "other" -- not to "user"
or "group".<br>
<br>
The best solution to this problem is to create a system in which all boxes
on your network have the same uid for each username and the same gid for each
groupname. This can be accomplished by careful attention to detail, by using
NIS to assign users and groups, or by using some other authentication scheme
yielding global users and groups.<br>
<br>
If you cannot have a single uid for all instances of a username, suboptimal
steps must be taken. In some instances you could make the directory and files
world-readable, thereby enabling all users to read it. It could also be made
world-writeable, but that's always a bad idea. It could be mounted <tt>all_squash</tt>
with a specific <tt>anonuid</tt> and/or a specific <tt>anongid</tt> to cure
the problem, but once again, at least from the NFS viewpoint, that's equivalent
to making it world readable or writeable.<br>
<br>
If you have problems accessing mounts, always check the gid's and uid's on
both sides and make sure they match. If they don't, find a way of fixing it.
Sometimes it's as simple as editing <tt>/etc/passwd</tt> and <tt>/etc/group</tt>
to change the numeric ID's on one or both sides. Remember that if you do that,
you need to perform the proper <tt>chown</tt> command on any files that were
owned or grouped by the owner and/or group that you renumbered. A dead giveaway
is files that are listed with numbers rather than names for user and group.<br>
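Mismatched numeric IDs are easy to hunt down mechanically. This sketch compares two passwd files -- say, the client's <tt>/etc/passwd</tt> and a copy fetched from the server (the filenames are assumptions) -- and prints every username whose uid differs:

```shell
#!/bin/sh
# Sketch: print usernames whose numeric uid differs between two passwd
# files (e.g. the client's /etc/passwd and a copy of the server's).
# $1 = client passwd file, $2 = server passwd file.
uid_mismatches() {
    awk -F: 'NR == FNR { uid[$1] = $3; next }
             ($1 in uid) && uid[$1] != $3 {
                 print $1 ": client uid " uid[$1] ", server uid " $3
             }' "$1" "$2"
}
```

In the example from the text, this would print <tt>myself: client uid 600, server uid 555</tt> -- your cue to renumber one side and <tt>chown</tt> accordingly.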
<h1><a name="_disable_firewalls"></a>5: If there are still problems, disable
firewalls or log firewalls</h1>
Many supposed NFS problems are really problems with the firewall. In order
for your NFS server to successfully serve NFS shares, its firewall must enable
the following:<br>
<ul>
<li>ICMP Type 3 packets</li>
<li>Port 111, the Portmap daemon</li>
<li>Port 2049, NFS</li>
<li>The port(s) assigned to the mountd daemon</li>
</ul>
The easiest way to see whether your problem resides in the firewall is to
completely open up the client and server firewalls and anything in between.
For details on how to manipulate <tt>iptables</tt> see the <a href="http://www.troubleshooters.com/lpm/200305/200305.htm">May 2003 Linux Productivity Magazine</a>.<br>
<br>
Note that opening up firewalls is appropriate only if you're disconnected
from the Internet, or if you're in a very un-hostile environment. Even so,
you should open up the firewalls for a very short time (less than 5 minutes).
If in doubt, instead of opening the firewalls, insert logging statements
in IPTables to show what packets are being rejected during NFS mounts, and
take action to enable those ports. For details on IPTables diagnostic logging,
see the <a href="http://www.troubleshooters.com/lpm/200305/200305.htm">May 2003
Linux Productivity Magazine</a>.<br>
<br>
The mountd daemon ports are especially problematic, because they're normally
assigned by the portmap daemon, and vary from NFS restart to NFS restart.
The <tt>/etc/rc.d/init.d/nfs</tt> script can be changed to nail down the
mountd daemon to a specific port, which then enables you to pinhole a specific
port. The <a href="http://www.troubleshooters.com/lpm/200305/200305.htm#_A_Somewhat_Practical_Server_Firewall">A Somewhat
Practical Server Firewall</a> article in the <a href="http://www.troubleshooters.com/lpm/200305/200305.htm">May 2003 Linux Productivity
Magazine</a> explains how to do this.<br>
<br>
If for some reason you don't want to nail down the port, your only other alternatives
are to create a firewall enabling a huge range of ports in the 30000's, or
to create a master NFS restart script which does the following:<br>
<ol>
<li>Use the <tt>rpcinfo</tt> program to find all ports used by <tt>mountd</tt>.</li>
<li>Issue <tt>iptables</tt> commands to find the rule numbers for those
ports.</li>
<li>Issue <tt>iptables</tt> commands to delete all rules on those
ports.</li>
<li>Restart NFS</li>
<li>Use the <tt>rpcinfo</tt> program to find all ports used by <tt>mountd</tt>.</li>
<li>Issue <tt>iptables</tt> commands to insert rules for those ports where
the rules for those ports used to be.</li>
</ol>
One technique that might make that easier is to create a user defined chain
just to hold mountd rules. In that case you'd simply empty that chain, restart
NFS, use <tt>rpcinfo</tt> to find the port numbers, and add the proper rules
using the <tt>iptables -A</tt> command.<br>
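The port-discovery half of that procedure can be sketched as a rule generator: parse <tt>rpcinfo -p</tt> output, find every port <tt>mountd</tt> landed on, and emit the <tt>iptables</tt> command to open it. The <tt>nfs-mountd</tt> chain name is an assumption following the user defined chain idea above, and the commands are printed rather than executed so you can inspect them first:

```shell
#!/bin/sh
# Sketch: from saved "rpcinfo -p" output (program vers proto port service),
# print an iptables rule for each distinct proto/port pair mountd uses.
# The "nfs-mountd" chain is a hypothetical user defined chain.
mountd_open_rules() {
    awk '$5 == "mountd" && !seen[$3 "/" $4]++ {
             print "iptables -A nfs-mountd -p " $3 " --dport " $4 " -j ACCEPT"
         }' "$1"
}
```

Pipe the output through <tt>sh</tt> once you're satisfied it opens only what it should.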
<br>
It bears repeating that the <a href="http://www.troubleshooters.com/lpm/200305/200305.htm">May 2003 Linux Productivity
Magazine</a> details how to create an NFS friendly firewall.
<h1><a name="_investigate_DNS"></a>6: If there are still problems, investigate
the server's DNS, host name resolution, etc</h1>
Bad forward and reverse name resolution can mess up any server app, including
NFS. Like other apps, bad DNS most often results in very slow performance
or timeouts. Be sure to check your <tt>/etc/resolv.conf</tt> and make sure
you're querying the correct DNS server. Check your DNS server with <tt>dnswalk</tt>,
<tt>dnslint</tt>, or another suitable utility.<br>
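A quick round-trip check catches most of these problems. This sketch uses <tt>getent</tt>, which consults <tt>/etc/hosts</tt> and <tt>/etc/nsswitch.conf</tt> as well as DNS, to resolve a name forward and then resolve the resulting address back:

```shell
#!/bin/sh
# Sketch: forward-then-reverse name resolution round trip. A healthy
# setup prints "name -> address -> name"; a missing reverse record is
# a classic cause of slow NFS mounts.
dns_roundtrip() {
    ip=$(getent hosts "$1" | awk '{ print $1; exit }')
    [ -n "$ip" ] || { echo "no forward record for $1"; return 1; }
    back=$(getent hosts "$ip" | awk '{ print $2; exit }')
    echo "$1 -> $ip -> ${back:-<no reverse record>}"
}
```

Run it on the server against each client's hostname; anything other than a clean round trip deserves a hard look at the zone files.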
<h1><a name="_Summary"></a>Summary</h1>
NFS is wonderful. It's a convenient and lightning fast way to use a network.
Although it's not particularly secure, its security can be beefed up with
firewalls. Its security can also be strengthened by authentication schemes.<br>
<br>
Although conceptually simple, NFS often requires overcoming troubleshooting
challenges before a working system is achieved. Here's a handy predefined
diagnostic:<br>
<ol>
<li>Check the daemons on the server</li>
<li>Eyeball the syntax of the client mount command and the server <tt>/etc/exports</tt></li>
<li>Carefully read error messages and develop a symptom description</li>
<li>If the symptom involves successful mounts but you can't correctly access
the data, check permissions, gid's and uid's. Correct as necessary.</li>
<li>If there are still problems, disable firewalls or log firewalls.&nbsp;</li>
<li>If there are still problems, investigate the server's DNS, host name
resolution, etc.</li>
</ol>
If you suspect firewall problems are stopping your NFS, see the <a href="http://www.troubleshooters.com/lpm/200305/200305.htm">May 2003 Linux Productivity
Magazine</a>, which details IPTables and how to create an NFS-friendly firewall.<br>
<p> </p>
<center>
<h2> <a href="http://www.troubleshooters.com/troubleshooters.htm">Back to Troubleshooters.Com</a> * <a href="http://www.troubleshooters.com/linux/index.htm">Back to Linux Library</a></h2>
</center>
<br>
<br>
</body></html>