Friday, December 21, 2012

[Ubuntu] Bonding - ALB and 802.3ad

Update 31/07/13: Be sure to install ifenslave before configuring the bonding.
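On Ubuntu this should just be a one-liner (assuming the default repositories; on older releases the package may be named ifenslave-2.6):

 # user-space helper that enslaves the NICs to the bond at ifup time
 sudo apt-get install ifenslave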

The office network switch was recently upgraded to a layer 3 switch, so I played around with bonding to create a link aggregation. This can be done easily under Windows, but in Linux it is a real headache. Not because the steps are complicated or redundant, but because you simply can't prove whether the configuration is working: Linux doesn't offer a friendly way for the user to check whether the bonding is up and running fine.

We can check under /proc/net/bonding/bond0, but even if the bond shows up there as running, it doesn't mean the networking is actually well connected.
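For reference, this is how I read the status (a minimal check only; it shows the driver's view, not whether traffic really flows):

 # each bond gets its own status file under /proc/net/bonding/
 cat /proc/net/bonding/bond0
 # look for "Bonding Mode", "MII Status: up" and one "Slave Interface" block per NIC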

At the time of writing I am still unable to get it running properly, probably because the switch is not configured correctly yet; I am waiting for a technician to check on that. Meanwhile, I would like to share my /etc/network/interfaces configuration here, so anyone who runs into a Linux bonding problem can refer to it without having to search through a thousand solutions online, none of which are beginner friendly or proven to work.

I would say the configuration below "should be working", given that the bond appears when running "netstat -i" or "ifconfig". There are a lot of solutions online; some help you understand the concept, but they can never get the "bond0" interface up.
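The quick sanity check I rely on is simply (it only proves the interface exists, not that every link carries traffic):

 # bond0 should be listed alongside the physical NICs
 netstat -i
 ifconfig bond0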

Without further ado, here we go:

Adaptive Load Balancing, bonding mode=6
 auto lo  
 iface lo inet loopback  
 auto bond0  
 iface bond0 inet static  
      slaves none  
      address 192.168.0.2  
      gateway 192.168.0.1  
      netmask 255.255.255.0  
      dns-nameservers 8.8.8.8  
      bond-mode 6  
      bond-miimon 100  
 auto eth0  
 iface eth0 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
 auto eth1  
 iface eth1 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
 auto eth2  
 iface eth2 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
 auto eth3  
 iface eth3 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
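After saving the configuration, bring the bond up. A rough sketch of what I do (the exact commands depend on your Ubuntu release; a reboot works just as well):

 # load the bonding driver, restart networking so bond0 enslaves eth0-eth3, then verify
 sudo modprobe bonding
 sudo /etc/init.d/networking restart
 cat /proc/net/bonding/bond0
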
802.3ad Link Aggregation, bonding mode=4
 auto lo  
 iface lo inet loopback  
 auto bond0  
 iface bond0 inet static  
      slaves none  
      address 192.168.0.2  
      gateway 192.168.0.1  
      netmask 255.255.255.0  
      dns-nameservers 8.8.8.8  
      bond-mode 4  
      bond-lacp-rate 1  
      bond-miimon 100  
 auto eth0  
 iface eth0 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
 auto eth1  
 iface eth1 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
 auto eth2  
 iface eth2 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
 auto eth3  
 iface eth3 inet manual  
      bond-master bond0  
      bond-primary eth0 eth1 eth2 eth3  
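For mode 4, remember that the switch ports must also be grouped into an LACP (802.3ad) trunk, otherwise the bond will never negotiate. Once the switch side is done, the same status file should confirm it (again assuming the bond is named bond0):

 # every slave should report the same Aggregator ID as the active aggregator
 grep -A 4 "802.3ad info" /proc/net/bonding/bond0
 grep "Aggregator ID" /proc/net/bonding/bond0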

I hope the above will help anyone who is stuck with bonding. If it doesn't work properly, or if you find a better solution, I would greatly welcome feedback and input.

Enjoy bonding!

========================================

Update 24/12/12:
The bonding configuration above is proven to work. The issue I encountered was caused by the Dell PowerConnect layer 3 switch, which had some unknown problem that stopped the ports reserved for LACP from working properly. Reconfiguring and restarting the switch got everything up and running.

One more thing to take note of: you can actually remove the "bond-primary" line under each eth. As for dns-nameservers, it is advisable to specify it; without it we encountered some strange issues with the VMs:

  1. the host and guest OS could not ping or see each other.
  2. a "ghost" IP address, 192.168.0.222, appeared and attached itself to one of the Ethernet interfaces. Bringing the bond and each Ethernet interface up and down could not get rid of it, not even after a restart.
Both issues were solved once dns-nameservers was specified.
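A quick way to confirm the nameserver actually took effect (assuming the stock Ubuntu resolver setup, where dns-nameservers is pushed into /etc/resolv.conf via resolvconf):

 # 8.8.8.8 from the interfaces file should show up here once bond0 is up
 cat /etc/resolv.conf
 nslookup google.com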

Now my server is running at 4 Gbps and the HDDs have become the bottleneck. Sigh... it's time to upgrade!

1 comment:

  1. Thanks for the dns-nameservers tip. I was wondering why I couldn't look up anything after following the Ubuntu tutorial, but it seems you do need it (strange that the example given in the documentation doesn't show it).

    I have another question: I set up ALB using eth1 and eth2 (both of which are on an Intel Pro/1000 network card) for my Ubuntu server. However, I'm dedicating eth0, the 1 Gb NIC on my motherboard, to my VM (which is running another server, Mac OS X under VMware). Do you think that's necessary, or would you simply bond all 3 together and have the Mac VM set up to use bridged mode?
