Enable NFS Share on CentOS/RHEL 6

Hello,

NFS (Network File System) is the Unix world's equivalent of SMB (Samba/CIFS) from the Windows world. With NFS you can share folders over the network. Setting up an NFS share is quite easy, but the configuration gets a bit tricky if you plan to use a firewall, for example iptables.

You need the following ports open:

TCP/UDP 111 (RPC portmapper)
TCP/UDP 2049 (NFSD server)
TCP/UDP 32803 (*)
TCP/UDP 32769 (*)
TCP/UDP 892 (*)
TCP/UDP 875 (*)
TCP/UDP 662 (*)
TCP/UDP 2020 (*)

(*) Because NFS chooses random ports every time it is started, we need to pin several ports in the config file /etc/sysconfig/nfs. Without these fixed ports we can't write firewall rules for an NFS server. So, to activate the fixed ports, uncomment the following lines in the mentioned config file:

LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020

Afterwards restart all daemons needed for the NFS server:

# /etc/init.d/rpcbind restart
# /etc/init.d/nfs restart
# /etc/init.d/rpcsvcgssd restart
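
To verify that the daemons actually picked up the fixed ports, you can list the registered RPC services and their ports:

# rpcinfo -p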

Now that the server is running you only need to add the ports to your iptables config. Open /etc/sysconfig/iptables and repeat the following 2 lines for each port:

-A INPUT -m state --state NEW -p tcp --dport <port> -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport <port> -j ACCEPT
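
For example, for the portmapper (111) and the NFS daemon (2049) the pairs look like this:

-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 2049 -j ACCEPT

Repeat the pair for the fixed ports 32803, 32769, 892, 875, 662 and 2020, then reload the firewall:

# /etc/init.d/iptables restart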

Now we have to export a folder. Open the file /etc/exports and add the export; here is an example line:

/home/BACKUP    192.168.0.0/24(rw,sync,root_squash)

Short explanation:
/home/BACKUP – the folder you want to export
192.168.0.0/24 – the hosts that have access to the share (here the whole mentioned network)
(rw,sync,root_squash) – the options (here: read/write, synchronous writes, and root_squash, which maps requests from the client's root user to the anonymous user)

For more explanations of the options you can consult the man page (# man exports).
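
A few more example lines to illustrate the options (paths and hosts here are made up):

/home/PUB       192.168.0.5(ro,sync)
/home/DATA      *.example.com(rw,sync,no_root_squash)

The first exports a folder read-only to a single host; the second exports to all hosts of a domain and keeps root as root (no_root_squash), which you should only use if you trust the clients.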

After you have created the share and saved the file, push it online with
# exportfs -a

I also restart the NFS server every time after the exportfs command, although that shouldn't really be necessary; exportfs -a alone activates the exports.
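
To test the share from a client (assuming the server has the IP 192.168.0.10 and /mnt/backup exists on the client):

# showmount -e 192.168.0.10
# mount -t nfs 192.168.0.10:/home/BACKUP /mnt/backup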

LVM Resizing

Hi,

IMPORTANT: PLEASE ALWAYS BACK UP YOUR DATA BEFORE TOUCHING PARTITION TABLES, FILESYSTEMS ETC.

First, there are a few tools we need to accomplish this: resize2fs, lvscan and lvresize. Use "lvscan" to list your logical volume(s).

Here are the steps to extend your LV:

  1. # lvscan
    Lists the available LVs
  2. # fsck.ext3 -f /dev/VolGroup00/LogVol00
    Does a sanity check and corrections on the (unmounted) filesystem before further manipulation. Here the type is ext3.
  3. # lvresize -L 15G /dev/VolGroup00/LogVol00
    Sets the capacity of the concerned LV (here LogVol00) to 15 GB
  4. # resize2fs /dev/VolGroup00/LogVol00 15G
    Grows the filesystem inside the LV to a capacity of 15 GB
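
If you prefer relative sizes, growing also works like this (same LV as above; resize2fs without an explicit size grows the filesystem to fill the whole LV):

# lvresize -L +5G /dev/VolGroup00/LogVol00
# resize2fs /dev/VolGroup00/LogVol00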

Here are the steps to shrink your LV:

  1. # lvscan
    Lists the available LVs
  2. # fsck.ext3 -f /dev/VolGroup00/LogVol00
    Does a sanity check and corrections on the (unmounted) filesystem before further manipulation. Here the type is ext3.
  3. # resize2fs /dev/VolGroup00/LogVol00 15G
    Shrinks the filesystem inside the LV to a capacity of 15 GB
  4. # lvresize -L 15G /dev/VolGroup00/LogVol00
    Sets the capacity of the concerned LV (here LogVol00) to 15 GB

Growing can be done with the filesystem mounted (online growing of ext3 is supported), but the fsck and shrinking steps require the filesystem to be unmounted; for the root filesystem you have to boot into a live CD. Also note that steps 3 and 4 are swapped between the two procedures: to increase capacity you first grow the LV and then the filesystem, to shrink capacity you first shrink the filesystem and then the LV. PLEASE TAKE CARE that your LV is NOT smaller than your filesystem! In that case DATA LOSS is almost certain.
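
Afterwards you can compare the LV size and the filesystem size to check the result:

# lvs
# df -h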

NIC Bonding / Port Trunking with CentOS/RHEL

Hello,

With NIC bonding or port trunking you can give your network cards higher throughput and redundancy. Basically it bonds two network cards, let's say two 1 Gb cards, into one logical 2 Gb card. Bonding can also be done with more than two cards in a system.

There are several modes:

Mode 0 (round-robin – load balancing/fault tolerance):
This is the default mode; packets are sent over the slaves in order, from the first to the last: 1st packet -> 1st NIC, 2nd packet -> 2nd NIC, 3rd packet -> 1st NIC, etc.

Mode 1 (active backup – fault tolerance):
In this mode only one card/slave is active at a time. Another slave takes over as soon as the active one goes down. Only the MAC address of the bond is visible from the outside.

Mode 2 (balance-xor – load balancing/fault tolerance – static link aggregation):
In this mode the transmitting slave is chosen by hashing the source and destination MAC addresses, so all packets for a given destination leave via the same slave.

Mode 3 (broadcast – fault tolerance):
In this mode all packets go out on every interface. Incoming traffic is not affected.

Mode 4 (802.3ad – dynamic link aggregation, LACP):
In this mode an aggregation group is created with slaves of the same speed and duplex settings, according to IEEE 802.3ad. There are some prerequisites for this: ethtool support in the drivers to read speed and duplex of each slave, and a switch which supports 802.3ad.

Mode 5 (balance-tlb – load balancing):
In this mode the outgoing packets are distributed over all slaves, but only the active slave receives packets. If it goes down, another slave takes over.

Mode 6 (balance-alb – load balancing):
In this mode all outgoing and incoming traffic is distributed over all slaves.

Now, after this short explanation of all modes, here is how to create such a bond:

  • add this line to /etc/modprobe.conf (on RHEL 6, which no longer has /etc/modprobe.conf, put it into a file like /etc/modprobe.d/bonding.conf instead):
    alias bond0 bonding
  • create and open the file /etc/sysconfig/network-scripts/ifcfg-bond0:
    DEVICE=bond0
    IPADDR=<ip address>
    NETMASK=<your netmask>
    NETWORK=<network>
    BROADCAST=<broadcast>
    GATEWAY=<gateway>
    ONBOOT=yes
    BOOTPROTO=none
    USERCTL=no
    BONDING_OPTS="mode=<your selected mode (0-6)> miimon=100"
  • then change your /etc/sysconfig/network-scripts/ifcfg-ethX files to:
    DEVICE=ethX
    ONBOOT=yes
    BOOTPROTO=none
    USERCTL=no
    MASTER=bond0
    SLAVE=yes
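
For example, a filled-in ifcfg-bond0 for an active-backup bond (mode 1) with made-up addresses could look like this:

DEVICE=bond0
IPADDR=192.168.0.10
NETMASK=255.255.255.0
NETWORK=192.168.0.0
BROADCAST=192.168.0.255
GATEWAY=192.168.0.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"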

Now you should only need to restart your server. If you can't restart it, load the bonding kernel module and restart your network:

# modprobe bonding
(ATTENTION: My host's connection froze here and I had to go to the physical server to restart the network)

# /etc/init.d/network restart
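
Once the network is back up you can check the state of the bond and its slaves; the output shows the bonding mode, the MII status and each slave interface:

# cat /proc/net/bonding/bond0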