How to Configure Network NIC Bonding/Teaming on Ubuntu/Debian

This article explains how to configure network NIC teaming/bonding on Ubuntu and Debian Linux.

Bonding in Debian Linux

Seven bonding modes are supported by the Linux kernel. Some of these modes are simple to set up, while others need special configuration on the switches that the links connect to.

To understand the Bond Modes

Bond Mode 0 - Load Balance or Round Robin

This NIC teaming method is called "Round-Robin". Network packets are sent in rotation through each of the network interfaces that make up the bonded interface.

For example, consider a machine with eth0, eth1 and eth2 all enslaved to a bond0 interface. With bond mode 0 enabled, the first packet goes out eth0, the second out eth1 and the third out eth2; the fourth packet then circles back to eth0, which is why the mode is called round robin.

Bond Mode 1 - Active-Backup

Here only one network interface is active, while all the others simply wait for a link failure on the primary network interface card, at which point one of them takes over.

Bond Mode 2 - Balance XOR

This mode evaluates the source and destination MAC addresses to determine which interface a network packet is sent out on. It provides both fault tolerance and load balancing.

Bond Mode 3 - Broadcast

Here the bond device transmits all data out of every slave interface at once, which provides a level of fault tolerance.


Bond Mode 4 - 802.3ad

This mode follows the IEEE 802.3ad standard for link aggregation and provides both increased bandwidth and fault tolerance. It requires a switch that supports 802.3ad dynamic link aggregation (LACP).

Bond Mode 5 - Transmit Load Balancing

Outgoing traffic is distributed according to the current load/queue on each of the interfaces, and no special switch support is required.

Bond Mode 6 - Adaptive Load Balancing

In ALB, the bond balances outgoing traffic just as in Bond Mode 5, and additionally balances incoming traffic (receive load balancing) through ARP negotiation, again without special switch support. A summary of the mode names is shown below.
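
For reference, the bond-mode option used in the configuration later in this article accepts either the mode number or its kernel name; the mapping is:

mode 0 = balance-rr (round robin)
mode 1 = active-backup
mode 2 = balance-xor
mode 3 = broadcast
mode 4 = 802.3ad (LACP)
mode 5 = balance-tlb (transmit load balancing)
mode 6 = balance-alb (adaptive load balancing)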

To set up Network Bonding on Ubuntu

We need to install the ifenslave package to set up teaming. Before that, update your repositories.

root@linuxhelp:~# apt-get update 
Hit http://in.archive.ubuntu.com wily InRelease
Hit http://in.archive.ubuntu.com wily-updates InRelease               
.
.
.         
Hit http://in.archive.ubuntu.com wily-backports/multiverse Translation-en      
Hit http://in.archive.ubuntu.com wily-backports/restricted Translation-en      
Hit http://in.archive.ubuntu.com wily-backports/universe Translation-en        
Reading package lists... Done

root@linuxhelp:~# apt-get install ifenslave-2.6
Reading package lists... Done
Building dependency tree       
Reading state information... Done
.
.
.
Processing triggers for man-db (2.7.4-1) ...
Setting up ifenslave (2.7ubuntu1) ...
Setting up ifenslave-2.6 (2.7ubuntu1) ...


Now we need to configure the bonding kernel module to be loaded automatically at system boot time, so we add an entry to the /etc/modules file as follows.

root@linuxhelp:~# vim /etc/modules
Entry:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
bonding
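
If you prefer not to open an editor, the same entry can be appended with a shell one-liner instead (a small sketch; it assumes the entry is not already present in the file):

root@linuxhelp:~# echo "bonding" >> /etc/modules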


The networking service should be stopped before loading the kernel module. Then load the bonding module with the modprobe command as follows.

root@linuxhelp:~# /etc/init.d/networking stop
[ ok ] Stopping networking (via systemctl): networking.service.
root@linuxhelp:~# modprobe bonding
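
To confirm that the module is loaded, list the loaded kernel modules and filter for bonding; a line beginning with "bonding" should appear in the output.

root@linuxhelp:~# lsmod | grep bonding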


To create the actual bonded interface, edit the file /etc/network/interfaces using any editor.

root@linuxhelp:~# vim /etc/network/interfaces


Now add the following lines to create a network bond from the eno33554992 and eno50332216 network interfaces.

#eno33554992 configuration
auto eno33554992
iface eno33554992 inet manual
bond-master bond0
bond-primary eno33554992

#eno50332216 configuration
auto eno50332216
iface eno50332216 inet manual
bond-master bond0

# Bonding eno33554992 & eno50332216 to create bond0 NIC
auto bond0
iface bond0 inet static
address 192.168.5.200
gateway 192.168.5.1
netmask 255.255.255.0
bond-mode active-backup
bond-miimon 100
bond-slaves none


The bond interface enslaves the two physical network cards and presents them as one logical interface.

Here " auto bond0" denotes the machine to initialize the bond and " iface bond0 inet static" interface called bond0.

The " bond-mode active-backup" helps to determine the bond mode. " bond-primary" denotes the primary interface for the bond and " slaves eno33554992 & eno50332216" represents the physical interfaces.

The " bond-miimon 100" represents the kernel to check the link for every 100 ms and " bond-downdelay 400" specifies that the machine can wait for 400 ms. The " bond-updelay 800" helps to instruct the machine to wait for new active interface upto 800 ms.

Now restart the networking service to activate the bond0 interface. If it doesn't come up, try restarting the networking service again.

root@linuxhelp:~# /etc/init.d/networking restart 
[ ok ] Restarting networking (via systemctl): networking.service.
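
Alternatively, if the interfaces are managed by ifupdown (as in this article), you can cycle just the bond instead of restarting all of networking:

root@linuxhelp:~# ifdown bond0 && ifup bond0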


To test bond0 interface

Now run the following command to verify whether the bond0 interface is correctly configured for network bonding.

root@linuxhelp:~# cat /proc/net/bonding/bond0 
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eno33554992 (primary_reselect always)
Currently Active Slave: eno33554992
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eno33554992
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:96:5f:b7
Slave queue ID: 0

Slave Interface: eno50332216
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:96:5f:c1
Slave queue ID: 0


As you can see, the bond0 interface is successfully configured in active-backup mode (mode 1), and the two network interfaces are acting as slaves of the bond0 master interface. A quick failover test is shown below.
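
To verify that failover actually works, take the primary slave down and watch the active slave change (run this from the local console rather than over the bonded link, since connectivity may briefly drop):

root@linuxhelp:~# ip link set eno33554992 down
root@linuxhelp:~# grep "Currently Active Slave" /proc/net/bonding/bond0
root@linuxhelp:~# ip link set eno33554992 up

After the first command, the active slave should change to eno50332216; since the bond uses primary_reselect always, eno33554992 should become active again once its link returns.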


To verify the IP address and the list of interfaces, run the command below.

root@linuxhelp:~# ifconfig 
bond0     Link encap:Ethernet  HWaddr 00:0c:29:96:5f:b7  
          inet addr:192.168.5.200  Bcast:192.168.5.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe96:5fb7/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:36 errors:0 dropped:36 overruns:0 frame:0
          TX packets:92 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2160 (2.1 KB)  TX bytes:10781 (10.7 KB)

eno16777736 Link encap:Ethernet  HWaddr 00:0c:29:96:5f:ad  
          inet addr:192.168.5.222  Bcast:192.168.5.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe96:5fad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6639 errors:0 dropped:0 overruns:0 frame:0
          TX packets:901 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2223136 (2.2 MB)  TX bytes:100426 (100.4 KB)

eno33554992 Link encap:Ethernet  HWaddr 00:0c:29:96:5f:b7  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:28 errors:0 dropped:0 overruns:0 frame:0
          TX packets:305 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3187 (3.1 KB)  TX bytes:34845 (34.8 KB)

eno50332216 Link encap:Ethernet  HWaddr 00:0c:29:96:5f:b7  
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:159 errors:0 dropped:52 overruns:0 frame:0
          TX packets:42 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:11116 (11.1 KB)  TX bytes:6028 (6.0 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1253 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1253 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:114324 (114.3 KB)  TX bytes:114324 (114.3 KB)
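
On systems where the iproute2 ip utility is preferred over the deprecated ifconfig, the same address information can be checked with:

root@linuxhelp:~# ip addr show bond0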
FAQ
Q
What is the command to test the configuration of NIC bonding in CentOS/RHEL?
A
Use the following command to inspect the status and configuration of NIC bonding:

# cat /proc/net/bonding/bond0
Q
What does NIC stand for?
A
NIC stands for network interface card. A network interface card is a circuit board or card installed in a computer so that the computer can be connected to a network; it provides the computer with a dedicated, full-time connection to that network.
Q
What is NIC teaming?
A
NIC teaming is the process of combining multiple network cards together for performance and redundancy reasons. Microsoft refers to this as NIC teaming, but other vendors may refer to it as bonding, balancing or aggregation. Software running on the computer communicates with the resulting virtual network adapter rather than with the individual physical ones.
Q
What is the use of NIC bonding?
A
When bonded, the NICs appear to be one physical device and share the same MAC address, as shown below. Linux uses a special kernel module called bonding to allow users to bond multiple network interfaces.
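
As a quick check on the bond configured above, the bond and its slaves report the same hardware address (a sketch using the sysfs paths for this article's interface names):

# cat /sys/class/net/bond0/address
00:0c:29:96:5f:b7
# cat /sys/class/net/eno50332216/address
00:0c:29:96:5f:b7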
Q
Does NIC teaming increase bandwidth?
A
Yes. In aggregation modes such as round robin (mode 0) or 802.3ad (mode 4), NIC teaming can increase the available bandwidth, depending on the network and switch configuration; modes such as active-backup provide redundancy rather than extra bandwidth.