NFS Setup

I’ve been asked to set up NFS a few times in the past. The most painful part is usually making it play nice with firewalls. Then a friend asked me if I knew the difference between a soft NFS mount and a hard one, and I admit I was clueless. Let’s take a look and investigate both of the above.

In these examples, I’m using two VPS instances on the same private network, both running CentOS 7.3.

First assign a private IP on each one:

On the VPS acting as the NFS server:

ip addr add dev eth1

On the VPS acting as the NFS client:

ip addr add dev eth1
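The addresses have been stripped from the two commands above; assuming a hypothetical 10.0.0.0/24 private network, they would look something like this (both addresses are placeholders – substitute your own):

```shell
# on the NFS server (hypothetical address)
ip addr add 10.0.0.1/24 dev eth1

# on the NFS client (hypothetical address)
ip addr add 10.0.0.2/24 dev eth1
```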

Optional – make the private IPs permanent:

If you want to make those settings permanent, you’ll need to edit the file /etc/sysconfig/network-scripts/ifcfg-eth1 on each server and make sure you have something like this in place:

# Virtio Network Device Private Interface

You can always follow up with `service network restart` to have CentOS pick up those changes without rebooting.
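As a sketch, the server’s ifcfg-eth1 might look like the following – the address and prefix are assumptions carried over from the hypothetical 10.0.0.0/24 network, not values from the original post:

```shell
# Virtio Network Device Private Interface
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.1
PREFIX=24
```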

On the VPS acting as a server:

1. Install NFS:

yum -y install nfs-utils

2. Also install the net-tools package so that we can get netstat – this is always useful for diagnosing issues:

yum -y install net-tools

3. Set NFS to start on boot:

systemctl enable nfs

4. Move your old NFS configuration file out of the way:

mv /etc/sysconfig/nfs /etc/sysconfig/nfs-old

5. Create /etc/sysconfig/nfs and use this config — this will force NFS to use those ports instead of assigning random ones.

# Port rpc.mountd should listen on.
# Port rpc.statd should listen on.
# Outgoing port statd should used. The default is port
# is random
# Specify callout program
# Enable usage of gssproxy. See gssproxy-mech(8).
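The variable assignments were lost from the snippet above; the surviving comments map to settings like the following. The port numbers shown are the commonly used fixed-port conventions for /etc/sysconfig/nfs, not values recovered from the original post (rpcbind on 111 and nfsd on 2049 are already fixed and need no entry here):

```shell
# Port rpc.mountd should listen on
MOUNTD_PORT=892
# Port rpc.statd should listen on
STATD_PORT=662
# Outgoing port statd should use (the default is random)
STATD_OUTGOING_PORT=2020
```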

6. Restart the necessary daemons so that those changes are picked up:

systemctl restart nfslock rpcbind nfs

7. There are three mandatory ports that NFS needs to be listening on – 111, 892 and 2049. Check if those are currently listening:

netstat -ntlp | egrep '111|892|2049'
tcp 0 0* LISTEN 1/systemd 
tcp 0 0* LISTEN 1230/rpc.mountd 
tcp 0 0* LISTEN - 
tcp6 0 0 :::111 :::* LISTEN 1/systemd 
tcp6 0 0 :::892 :::* LISTEN 1230/rpc.mountd 
tcp6 0 0 :::2049 :::* LISTEN -
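With the ports pinned down, the firewall side becomes manageable. Assuming firewalld (the CentOS 7 default), something like this would open the three ports – adjust to your own zone/interface setup:

```shell
firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
firewall-cmd --permanent --add-port=892/tcp --add-port=892/udp
firewall-cmd --permanent --add-port=2049/tcp
firewall-cmd --reload
```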

8. Now let’s create a directory that we are going to use as an NFS share:

mkdir /nfsshare

9. For NFS to make this directory available to others, we need to add it to our exports – edit the file /etc/exports and add the line:


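The export line itself didn’t survive; a typical one, assuming the hypothetical 10.0.0.0/24 private network, would be (the options shown are for illustration – pick ones that suit your setup):

```
/nfsshare 10.0.0.0/24(rw,sync,no_root_squash)
```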
10. Tell NFS to re-read the exports file to make this change live:

exportfs -va

You should get output confirming each exported directory.


11. Before we even try to mount this share from our client, let’s see if the export actually shows up when queried for. To do this, we use the showmount command on the localhost:

showmount -e
Export list for

We’re good! Let’s move on to the client.

On the VPS acting as a client:

1. Install the necessary utilities:

yum -y install nfs-utils

2. See if we can query the shares over the network:

showmount -e

You should see the same exports as before:

Export list for

3. Attempt to mount the /nfsshare on /media:

mount.nfs /media -v

Note the -v – this is to get verbose output – this is what I got:

mount.nfs: timeout set for Wed Jun 7 05:13:17 2017
mount.nfs: trying text-based options 'vers=4,addr=,clientaddr='

This worked, but is NFS using UDP or TCP? Let’s find out:

mount | grep nfs | grep proto=tcp
on /media type nfs4 (rw,relatime,vers=4.0,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=

The above gives us an abundance of good info – let’s break it down:

type nfs4 = this is using NFSv4 instead of the older NFSv3
hard = this is a hard mount – a very important option – it means that if the NFS server fails, the client will keep trying to reconnect forever. This sounds like a good idea, but in reality it can be disastrous: if your NFS server goes down, the client will lock up, and even a simple `ls /media` will not be interruptible.
timeo=600 = this can be a very misleading and dangerous option. First of all, this number is not in seconds but in deciseconds (tenths of a second) – a value of 600 means the timeout is 60 seconds. But there’s more – allow me to quote the manual:

“For NFS over TCP the default timeo value is 600 (60 seconds). The NFS client performs linear backoff: After each retransmission the timeout is increased by timeo up to the maximum of 600 seconds.”

Waaaaaaaait a second… remember how I said “the system will infinitely continue to try to reconnect”? Here’s what’s going on:

If the NFS mount is a “hard” mount, then NFS requests are retried indefinitely.

retrans=2 = I’ll copy-paste what the manual says on this one:

The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times.

The NFS client generates a “server not responding” message after retrans retries, then attempts further recovery (depending on whether the hard mount option is in effect).

OK – seriously – reread that last part:

“generates a .. message, then attempts further recovery.. hard mount option is in effect”. Allow me to save you some time and rephrase: if it’s a hard mount, it will simply print a message saying that the server is not responding and go on its merry way, continuing indefinitely to reconnect – which means that the “retrans” option doesn’t really do anything!

In sort — “hard" mounts can be a very bad thing on a cloud environment where servers should not be taken for granted.

Let’s unmount this mount and re-mount it as a soft mount:

umount /media
mount -o proto=tcp,soft,timeo=10,retrans=3 /media

Now let’s review what’s going on here:

soft = this is a soft mount, so it will not try to reconnect indefinitely
timeo=10 = remember – this is not in seconds but in deciseconds, so this equals 1 second. Why so low? Because we’re also using ‘retrans=3’
retrans=3 = the connection will be automatically re-attempted 3 times, and “The NFS client performs linear backoff: After each retransmission the timeout is increased by timeo up to the maximum of 600 seconds.” So we don’t really want timeo to have a high value – NFS will increase it on its own.
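To put numbers on it, here’s a back-of-the-envelope calculation of the worst-case wait for this soft mount, assuming the linear backoff quoted from the manual (initial attempt plus retrans retries, each waiting one timeo increment longer):

```shell
# timeo=10 deciseconds, retrans=3: attempts wait 1s, 2s, 3s, 4s
timeo_ds=10   # per-attempt timeout, in deciseconds
retrans=3     # number of retries after the initial attempt
total_ds=0
wait_ds=$timeo_ds
attempt=0
while [ "$attempt" -le "$retrans" ]; do
  total_ds=$((total_ds + wait_ds))   # accumulate this attempt's wait
  wait_ds=$((wait_ds + timeo_ds))    # linear backoff: grow by one timeo
  attempt=$((attempt + 1))
done
echo "worst-case wait before the mount errors out: $((total_ds / 10)) seconds"
# prints: worst-case wait before the mount errors out: 10 seconds
```

So with these settings a dead server makes I/O fail after roughly ten seconds instead of hanging forever.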

So are “hard" mounts really such a bad thing?

If your servers are likely to “disappear”, then yes, a hard mount is a very bad thing. However, soft mounts can lead to data corruption: if a file transfer is interrupted half-way and not retried, you end up with an incomplete (corrupt) file.



Oracle VM collect log for SR

Creating Oracle support diagnostics for OVM 3.4.6

Using vmpinfo3

VMPinfo3 is an Oracle support tool for capturing log files and other details from Oracle VM 3.x Manager and Servers. To use it, SSH to your Oracle VM Manager as root and execute it as shown below:

The Oracle Support Document VMPinfo3 Diagnostic Tool For Oracle VM 3.2, 3.3 and 3.4 Troubleshooting (Doc ID 1521931.1) directs you to /u01/app/oracle/ovm-manager-3/ovm_shell/tools/vmpinfo/; however, in 3.4.6 it has been relocated to /u01/app/oracle/ovm-manager-3/ovm_tools/support/

Start by using the listservers option to report the OVM Servers known by your OVM Manager, e.g.

[root@z-ovmm ~]# cd  /u01/app/oracle/ovm-manager-3/ovm_tools/support
[root@z-ovmm support]# ./ --username=admin listservers
 Enter OVM Manager Password: 
 The following server(s) are owned by this manager: ['z-ovm']

Next, re-run vmpinfo3, this time with the servers option providing the server(s) listed. Note that if you have multiple OVM servers, use a comma delimiter, e.g. servers=server1,server2,server3

[root@z-ovmm support]# ./ --username=admin servers=z-ovm
 Enter OVM Manager Password: 

 Starting data collection
 Gathering files from servers: z-ovm. This process may take some time.
 Gathering OVM Model Dump files
 Gathering sosreports from servers
 The following server(s) will get info collected: [z-ovm]
 Gathering sosreport from z-ovm
 Data collection complete
 Gathering OVM Manager Logs
 Clean up metrics
 Copying model files
 Copying DB backup log files
 Running lightweight sosreport
 Archiving vmpinfo3-20181220-121620
  Please send /tmp/vmpinfo3- to Oracle support

You can now upload the compressed tar file to your open SR.


Oracle Gather Statistics – Check Last Analyzed




EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

This collects statistics about fixed objects.
These are the X$ and K$ tables and their indexes.
The V$ views in Oracle are defined on top of X$ tables (for example V$SQL and V$SQL_PLAN).

How do we identify when DBMS_STATS.GATHER_FIXED_OBJECTS_STATS was last executed in the database?

SELECT v.name, ts.analyzetime
FROM v$fixed_table v, sys.tab_stats$ ts
WHERE v.object_id = ts.obj#;

no rows returned

SELECT COUNT(*) FROM sys.tab_stats$;

count(*) was 0.

Now run the procedure – this takes a few minutes:

EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

SELECT COUNT(*) FROM sys.tab_stats$;

now returns 761.

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

This procedure gathers statistics for the dictionary schemas SYS and SYSTEM.

When was it last run?
Check the last_analyzed column for tables owned by SYS.




This procedure gathers system statistics.
The statistics actually gathered depend on whether the system is under workload or not.
The DBMS_STATS.GATHER_SYSTEM_STATS procedure gathers statistics relating to the performance of your system’s I/O and CPU.
Giving the optimizer this information makes its choice of execution plan more accurate, since it can weigh the relative costs of operations using both the CPU and I/O profiles of the system.
The output from DBMS_STATS.GATHER_SYSTEM_STATS is stored in the AUX_STATS$ table.
NAME                 PNAME                     PVAL1 PVAL2
-------------------- -------------------- ---------- --------------------
SYSSTATS_INFO        STATUS                          COMPLETED
SYSSTATS_INFO        DSTART                          10-26-2008 13:08
SYSSTATS_INFO        DSTOP                           10-26-2008 13:08
SYSSTATS_INFO        FLAGS                         1
SYSSTATS_MAIN        CPUSPEEDNW           1108.95499
SYSSTATS_MAIN        IOSEEKTIM                    10
SYSSTATS_MAIN        IOTFRSPEED                 4096
Option A. – noworkload
All databases come bundled with a default set of noworkload statistics, but they can be replaced with more accurate information.
When gathering noworkload stats, the database issues a series of random I/Os and tests the speed of the CPU.
As you can imagine, this puts a load on your system during the gathering phase.
Option B. – Workload
When initiated using the start/stop or interval parameters, the database uses counters to keep track of all system operations, giving it an accurate idea of the performance of the system.
If workload statistics are present, they will be used in preference to noworkload statistics.
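For the start/stop variant, the calls would look roughly like this – a sketch using the standard DBMS_STATS gathering modes:

```sql
-- begin capturing workload statistics
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');

-- ...let a representative workload run...

-- stop capturing and save the results to AUX_STATS$
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');
```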
-- Wait some time, say 120 minutes, during workload hours
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval => 120);
When to run these procedures?
– When there was a change to init.ora instance parameters
– When there was a change to dictionary structure – new schema, etc.
– When there was a major change to the host hardware.




set pages 200

col index_owner form a10
col table_owner form a10
col owner form a10

spool checkstat.lst

PROMPT Regular Tables

select owner, table_name, last_analyzed, global_stats
from dba_tables
where owner not in ('SYS','SYSTEM')
order by owner, table_name;

PROMPT Partitioned Tables

select table_owner, table_name, partition_name, last_analyzed, global_stats
from dba_tab_partitions
where table_owner not in ('SYS','SYSTEM')
order by table_owner, table_name, partition_name;

PROMPT Regular Indexes

select owner, index_name, last_analyzed, global_stats
from dba_indexes
where owner not in ('SYS','SYSTEM')
order by owner, index_name;

PROMPT Partitioned Indexes

select index_owner, index_name, partition_name, last_analyzed, global_stats
from dba_ind_partitions
where index_owner not in ('SYS','SYSTEM')
order by index_owner, index_name, partition_name;

spool off