This post will help you understand what bonding is and how to configure bonding in Linux.
If you are interested in learning more, we recommend going through the tutorials below.
DevOps Full Course Tutorial for Beginners - DevOps Free Training Online
Docker Full Course Tutorial for Beginners - Docker Free Training Online
Kubernetes Full Course Tutorial for Beginners - Kubernetes Free Training Online
Ansible Full Course Tutorial for Beginners - Ansible Free Training Online
Openstack Full Course Tutorial for Beginners - Openstack Free Training Online
What is Bonding?
Bonding is a Linux kernel feature that aggregates multiple link interfaces (such as eth0 and eth1) into a single virtual link such as bond0. The idea is simple: get higher data rates as well as link failover.
Red Hat's documentation describes bonding as follows: Linux allows administrators to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy.
If one physical NIC goes down or is unplugged, traffic automatically fails over to the other NIC. Channel bonding works with the help of the bonding driver in the kernel.
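Before configuring anything, it is worth confirming that your kernel ships the bonding driver. A quick check, assuming the standard in-tree bonding module:
[root@server ~]# modinfo bonding | head -5
[root@server ~]# modinfo bonding | grep '^parm'
The second command lists the parameters the driver accepts (mode, miimon, and so on), which are used in the steps below.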
How to Configure Bonding in Linux?
Step 1: Create Bonding Channel Configuration File
Red Hat-based distributions (RHEL, CentOS, Fedora) store network configuration in the /etc/sysconfig/network-scripts/ directory by default. First, we need to create a bond0 config file as follows; change values such as the IP address to match your environment.
[root@server ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.2.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
Save and close the file.
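Note: on RHEL 6 and later, the bonding options (mode, miimon, and so on) are usually placed directly in the bond's ifcfg file using the BONDING_OPTS directive rather than in modprobe.conf (see Step 3). A sketch of what ifcfg-bond0 might look like in that style, using the same example values:
DEVICE=bond0
IPADDR=192.168.2.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=balance-alb miimon=100"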
Step 2: Modify the eth0 and eth1 config files (the network cards you wish to bring into the bond)
Edit the configuration files for the eth0 and eth1 interfaces as follows. The MASTER and SLAVE directives are mandatory: they specify which bonding channel each network card belongs to, which matters if you have multiple bonds on the same server.
[root@server ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
[root@server ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Save and close both files.
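To sanity-check that both slave files point at the same bond, a quick grep over the files edited above:
[root@server ~]# grep -E 'DEVICE|MASTER|SLAVE' /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
Both interfaces should show MASTER=bond0 and SLAVE=yes.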
Step 3: Load the bonding driver/module
Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel module configuration file:
[root@server ~]# vi /etc/modprobe.conf
Append the following two lines:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Save the file and exit to the shell prompt. You can learn more about all the bonding modes in detail below.
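Note: /etc/modprobe.conf is used on older releases such as RHEL 5. On newer systems the same directives go into a drop-in file under /etc/modprobe.d/ instead; the filename bonding.conf below is a common convention, not a requirement:
[root@server ~]# vi /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 mode=balance-alb miimon=100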
Step 4: Check Network Bonding Status - Test configuration
First, load the bonding module, enter:
[root@server ~]# modprobe bonding
Restart the networking service in order to bring up the bond0 interface, enter:
[root@server ~]# service network restart
Make sure everything is working. Type the following cat command to query the current status of the Linux kernel bonding driver, enter:
[root@server ~]# cat /proc/net/bonding/bond0
Sample outputs:
Bonding Mode: adaptive load balancing
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:59
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:c6:be:63
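A simple way to verify the failover behaviour is to take one slave down and re-check the status file; a minimal test using the interface names from this example:
[root@server ~]# ifconfig eth0 down
[root@server ~]# grep -A 2 'Slave Interface: eth0' /proc/net/bonding/bond0
[root@server ~]# ifconfig eth0 up
While eth0 is down, its MII Status should read "down" and its Link Failure Count should increase, while traffic continues to flow over eth1.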
To list all network interfaces, enter:
[root@server ~]# ifconfig
Sample outputs:
bond0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
inet addr:192.168.2.1 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:250825 (244.9 KiB) TX bytes:244683 (238.9 KiB)
eth0 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
inet addr:192.168.2.1 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:251161 (245.2 KiB) TX bytes:180289 (176.0 KiB)
Interrupt:11 Base address:0x1400
eth1 Link encap:Ethernet HWaddr 00:0C:29:C6:BE:59
inet addr:192.168.2.1 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:258 (258.0 b) TX bytes:66516 (64.9 KiB)
Interrupt:10 Base address:0x1480
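On current distributions, ifconfig is deprecated in favour of the ip command; the equivalent checks for the interfaces above would be:
[root@server ~]# ip addr show bond0
[root@server ~]# ip link show eth0
[root@server ~]# ip link show eth1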
Most administrators assume that bonding multiple network cards together instantly results in double the bandwidth and high availability in case a link goes down. Unfortunately, this is not always true. Let's start with the most common example: a server with high network load, where you wish to allow more than 1Gb/s of throughput.
Bonding with 802.3ad
You connect two interfaces to your switch, enable bonding, and discover half your packets are getting lost. If Linux is configured for 802.3ad link aggregation, the switch must also be told about this. In the Cisco world, this is called an EtherChannel. Once the switch knows those two ports are actually supposed to use 802.3ad, it will load balance the traffic destined for your attached server.
This works great if a large number of network connections from a diverse set of clients are coming in. If, however, the majority of the throughput comes from a single server, you won't get better than the 1Gb/s port speed. Switches load balance based on the source MAC address by default, so if only one connection takes place, it always gets sent down the same link. Many switches support changing the load balancing algorithm, so if you fall into the single server-to-server category, make sure you configure the switch to round-robin the Ethernet frames.
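For reference, the Linux side of an 802.3ad (LACP) bond only needs the mode changed in the driver options. A sketch in the same modprobe style used in Step 3 (lacp_rate and xmit_hash_policy are standard bonding driver parameters, shown here with example values):
alias bond0 bonding
options bond0 mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4
The xmit_hash_policy setting controls how outgoing traffic is spread across the slaves; layer3+4 hashes on IP addresses and ports, which distributes multiple flows between the same two hosts better than the default layer2 policy.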
Generic Bonding
There are multiple modes you can set in Linux, and the most common "generic" one is balance-alb. This mode works effectively in most situations without needing to configure a switch or trick anything else. It does, however, require that your network interfaces support changing the MAC address on the fly. This mode works well "generically" because it constantly swaps MAC addresses to trick the other end (be it a switch or another connected host) into sending traffic across both links. This can wreak havoc on a Cisco network with port security enabled, but in general it's a quick and dirty way to get bonding working.
Channel Bonding Modes
Channel Bonding modes can be broken into three categories: generic, those that require switch support, and failover-only.
The failover-only mode is active-backup: One port is active until the link fails, then the other takes over the MAC and becomes active.
Modes that require switch support are:
balance-rr: Frames are transmitted in a round-robin fashion without hashing, to truly load balance.
802.3ad: This mode is the official standard for link aggregation, and includes many configurable options for how to balance the traffic.
balance-xor: Traffic is hashed and balanced according to the receiver on the other end. This mode is also available as part of 802.3ad.
Note that modes requiring switch support can also be run back-to-back with crossover cables between two servers. This is especially useful, for example, when using DRBD to replicate two partitions.
Generic modes include:
broadcast: This mode is not really link aggregation - it simply broadcasts all traffic out both interfaces, which can be useful when sending data to partitioned broadcast domains for high availability. If using broadcast mode on a single network, switch support is recommended.
balance-tlb: Outgoing traffic is load balanced, but incoming only uses a single interface. The driver will change the MAC address on the NIC when sending, but incoming always remains the same.
balance-alb: Both sent and received frames are load balanced using the MAC address swapping trick described above.
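For reference, each mode name maps to a numeric value that the bonding driver accepts interchangeably, so mode=active-backup and mode=1 are equivalent:
mode=0 (balance-rr)
mode=1 (active-backup)
mode=2 (balance-xor)
mode=3 (broadcast)
mode=4 (802.3ad)
mode=5 (balance-tlb)
mode=6 (balance-alb)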
The related content on Linux below might be useful to improve your Linux skills.
How to Configure IP Address on Ubuntu using Netplan
How to Access Linux Server from Windows Remotely
Configure SSH Passwordless Login Authentication (SSH-keygen)
How to Create LVM Partition in Linux – LVM Tutorial
Install & Configure Samba Server on Linux (RHEL7 / CentOS7)
Keep practicing and have fun. Leave your comments if you have any.
Support us: share with your friends and groups, and stay connected with us on social networking sites. Thank you.