
{{short description|Using multiple network connections in parallel to increase capacity and reliability}}

{{redirect-distinguish|IEEE 802.3ad|IEEE 802.1ad}}

{{Use dmy dates|date=November 2024}}

[[File:Link Aggregation1.svg|thumb]]

In computer networking, '''link aggregation''' is the combining (aggregating) of multiple network connections in parallel by any of several methods. Link aggregation increases total throughput beyond what a single connection could sustain and provides redundancy: all but one of the physical links may fail without losing connectivity. A link aggregation group (LAG) is the combined collection of physical ports.

Other umbrella terms used to describe the concept include trunking, bundling, bonding, channeling or teaming.

Implementation may follow vendor-independent standards such as Link Aggregation Control Protocol (LACP) for Ethernet, defined in IEEE 802.1AX or the previous IEEE 802.3ad, but also proprietary protocols.

Motivation

Link aggregation increases the bandwidth and resilience of Ethernet connections.

Bandwidth requirements do not scale linearly. Ethernet bandwidths historically have increased tenfold each generation: {{val|10|ul=Mbit/s}}, {{val|100|u=Mbit/s}}, {{val|1000|u=Mbit/s}}, {{val|10,000|u=Mbit/s}}. If one started to bump into bandwidth ceilings, then the only option was to move to the next generation, which could be cost-prohibitive. An alternative solution, introduced by many network manufacturers in the early 1990s, is to use link aggregation to combine two physical Ethernet links into one logical link. Most of these early solutions required manual configuration and identical equipment on both sides of the connection.{{cite web |title=IEEE 802 Trunking Tutorial |url=https://grouper.ieee.org/groups/802/3/trunk_study/tutorial/index.html |date=1997-11-11 |access-date=2013-08-13 |url-status=live |archive-url=https://web.archive.org/web/20131207014147/http://grouper.ieee.org/groups/802/3/trunk_study/tutorial/index.html |archive-date=2013-12-07 }}

There are three single points of failure inherent to a typical port-cable-port connection, in either a computer-to-switch or a switch-to-switch configuration: the cable itself or either of the ports the cable is plugged into can fail. Multiple logical connections can be made, but many higher-level protocols were not designed to fail over seamlessly. Combining multiple physical connections into one logical connection using link aggregation provides more resilient communications.

Architecture

Network architects can implement aggregation at any of the lowest three layers of the OSI model. Examples of aggregation at layer 1 (physical layer) include power line (e.g. IEEE 1901) and wireless (e.g. IEEE 802.11) network devices that combine multiple frequency bands. Aggregation at OSI layer 2 (data link layer, e.g. Ethernet frames in LANs or multi-link PPP in WANs, addressed by Ethernet MAC address) typically occurs across switch ports, which can be either physical ports or virtual ones managed by an operating system. Aggregation at layer 3 (network layer) in the OSI model can use round-robin scheduling, hash values computed from fields in the packet header, or a combination of these two methods.
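As a rough illustration of the two layer-3 distribution methods just mentioned, the following Python sketch contrasts round-robin scheduling with hash-based selection. The link names, the CRC-32 hash, and the header fields used are assumptions chosen for the example, not any particular implementation.

<syntaxhighlight lang="python">
import zlib
from itertools import count

LINKS = ["eth0", "eth1", "eth2", "eth3"]  # hypothetical aggregated links
_next_packet = count()

def pick_link_round_robin() -> str:
    # Round-robin: successive packets alternate over all links evenly,
    # regardless of which flow they belong to.
    return LINKS[next(_next_packet) % len(LINKS)]

def pick_link_hashed(src_ip: str, dst_ip: str) -> str:
    # Hash-based: every packet of a given source/destination pair maps
    # to the same link, preserving per-flow packet order.
    digest = zlib.crc32(f"{src_ip}->{dst_ip}".encode())
    return LINKS[digest % len(LINKS)]

print([pick_link_round_robin() for _ in range(4)])  # ['eth0', 'eth1', 'eth2', 'eth3']
print(pick_link_hashed("10.0.0.1", "10.0.0.2"))     # always the same link
</syntaxhighlight>

The trade-off visible here is the one the next paragraph describes: round-robin balances perfectly but can reorder a flow's packets, while hashing preserves order at the cost of less even distribution.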

Regardless of the layer on which aggregation occurs, it is possible to balance the network load across all links. However, to avoid out-of-order delivery, not all implementations take advantage of this. Most methods provide failover as well.

Combining can either occur such that multiple interfaces share one logical address (e.g. an IP address) or one physical address (e.g. a MAC address), or it can allow each interface to have its own address. The former requires that both ends of a link use the same aggregation method, but has performance advantages over the latter.

Channel bonding is differentiated from load balancing in that load balancing divides traffic between network interfaces on a per-network-socket (layer 4) basis, while channel bonding implies a division of traffic between physical interfaces at a lower level, either per packet (layer 3) or per data link (layer 2).{{citation needed|date=September 2020}}

= Standardization process =

By the mid-1990s, most network switch manufacturers had included aggregation capability as a proprietary extension to increase bandwidth between their switches. Each manufacturer developed its own method, which led to compatibility problems. At a November 1997 meeting, the IEEE 802.3 working group formed a study group to create an interoperable link layer standard (i.e. encompassing both the physical and data-link layers). The group quickly agreed to include an automatic configuration feature which would also add redundancy. This became known as Link Aggregation Control Protocol (LACP).

== 802.3ad ==

{{As of | 2000}}, most gigabit channel-bonding schemes used the IEEE link aggregation standard, which was added as clause 43 to the IEEE 802.3 standard in March 2000 by the IEEE 802.3ad task force.{{cite web|url=http://www.ieee802.org/3/ad/|title=IEEE 802.3ad Link Aggregation Task Force|website=www.ieee802.org|access-date=9 May 2018|url-status=live|archive-url=https://web.archive.org/web/20171027055813/http://www.ieee802.org/3/ad/|archive-date=27 October 2017}} Nearly every network equipment manufacturer quickly adopted this joint standard over their proprietary standards.

== 802.1AX ==

The 802.3 maintenance task force report for the 9th revision project in November 2006 noted that certain 802.1 layers (such as 802.1X security) were positioned in the protocol stack below link aggregation which was defined as an 802.3 sublayer.{{cite web |first=David |last=Law |title=IEEE 802.3 Maintenance |url=http://www.ieee802.org/3/maint/public/maint_open_1106.pdf |page=9 |date=2006-11-13 |access-date=2009-08-18 |quote=Proposal to move Link Aggregation to IEEE 802.1 •It is an 802.3 sublayer but it has to go above IEEE Std 802.1x |url-status=live |archive-url=https://web.archive.org/web/20081007130153/http://www.ieee802.org/3/maint/public/maint_open_1106.pdf |archive-date=2008-10-07 }} To resolve this discrepancy, the 802.3ax (802.1AX) task force was formed,{{cite web|title=IEEE 802.3ax (IEEE P802.1AX) Link Aggregation Project Authorization Request (approved)|url=http://standards.ieee.org/board/nes/projects/802-1AX.pdf|page=3|date=2007-03-22|quote=It has been concluded between 802.1 and 802.3 that future development of Link Aggregation would be more appropriate as an 802.1 standard|url-status=dead|archive-url=https://web.archive.org/web/20101116135600/http://standards.ieee.org/board/nes/projects/802-1AX.pdf|archive-date=2010-11-16|access-date=2018-01-10}} resulting in the formal transfer of the protocol to the 802.1 group with the publication of IEEE 802.1AX-2008 on 3 November 2008.{{cite web |title=IEEE SA - 802.1AX-2008 - IEEE Standard for Local and metropolitan area networks--Link Aggregation |url=http://standards.ieee.org/findstds/standard/802.1AX-2008.html |access-date=2013-08-13 |url-status=dead |archive-url=https://web.archive.org/web/20130807055327/http://standards.ieee.org/findstds/standard/802.1AX-2008.html |archive-date=2013-08-07 }} As of February 2025, the current revision of the standard is [https://standards.ieee.org/ieee/802.1AX/6768/ 802.1AX-2020].{{Cite web |title=IEEE Standards Association |url=https://standards.ieee.org/ieee/802.1AX/6768/ |access-date=2025-02-04 |website=IEEE Standards Association - IEEE 802.1AX-2020 |language=en}}{{Cite journal |date=2020-05-29 |title=IEEE Standard for Local and Metropolitan Area Networks--Link Aggregation |url=https://ieeexplore.ieee.org/document/9105034 |journal=IEEE STD 802.1AX-2020 (Revision of IEEE STD 802.1AX-2014) |pages=1–333 |doi=10.1109/IEEESTD.2020.9105034|isbn=978-1-5044-6428-4 }} For an overview of the history of the 802.1AX standard, see the table covering the IEEE 802.1 family of standards.

= Link Aggregation Control Protocol =

Within the IEEE Ethernet standards, the Link Aggregation Control Protocol (LACP) provides a method to control the bundling of several physical links together to form a single logical link. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to its peer, a directly connected device that also implements LACP.

== LACP features and practical examples ==

* The maximum number of bundled ports allowed in the port channel: valid values are usually from 1 to 8.
* LACP packets are sent with the multicast group MAC address {{MACaddr|01:80:c2:00:00:02}}.
* During the LACP detection period:
** LACP packets are transmitted every second.
** A keep-alive mechanism monitors link members (slow = 30 s, fast = 1 s).
* A selectable load-balancing mode is available in some implementations.{{cite web |title=What Is LACP? |url=https://support.huawei.com/enterprise/en/doc/EDOC1100086560#EN-US_TOPIC_0169439602 |access-date=5 March 2020}}
* LACP modes:
** Active: Enables LACP unconditionally.
** Passive: Enables LACP only when an LACP device is detected.

== Advantages over static configuration ==

* Failover occurs automatically: When a link has an intermediate failure, for example in a media converter between the devices, a peer system may not perceive any connectivity problems. With static link aggregation, the peer would continue sending traffic down the link, causing the connection to fail.
* Dynamic configuration: The device can confirm that the configuration at the other end can handle link aggregation. With static link aggregation, a cabling or configuration mistake could go undetected and cause undesirable network behavior.{{cite web |title=Link aggregation on Dell servers |url=http://support.dell.com/support/edocs/network/LAG1855/LAGConsiderationv0.5.pdf |archive-url=https://web.archive.org/web/20120313024309/http://support.dell.com/support/edocs/network/LAG1855/LAGConsiderationv0.5.pdf |archive-date=13 March 2012 }}

== Practical notes ==

LACP works by sending frames (LACPDUs) down all links that have the protocol enabled. If it finds a device on the other end of a link that also has LACP enabled, that device will independently send frames along the same links in the opposite direction, enabling the two units to detect multiple links between themselves and then combine them into a single logical link. LACP can be configured in one of two modes: active or passive. In active mode, LACPDUs are sent once per second along the configured links. In passive mode, LACPDUs are not sent until one is received from the other side, a speak-when-spoken-to protocol.
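The consequence of these two modes can be captured in a toy model (this Python sketch is not the real LACP state machine, just the mode-interaction rule): a LAG can only come up if at least one end actively initiates the LACPDU exchange.

<syntaxhighlight lang="python">
def lag_negotiates(mode_a: str, mode_b: str) -> bool:
    """Toy model of LACP mode interaction (not the actual state machine).

    A passive port only answers LACPDUs it receives, so at least one
    end must be active for the exchange to start at all."""
    assert {mode_a, mode_b} <= {"active", "passive"}
    return "active" in (mode_a, mode_b)

for a, b in [("active", "active"), ("active", "passive"), ("passive", "passive")]:
    print(f"{a}/{b}: {'LAG forms' if lag_negotiates(a, b) else 'no LAG'}")
</syntaxhighlight>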

Linux drivers

The Linux bonding driver[http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding The Linux Foundation: Bonding] {{webarchive|url=https://web.archive.org/web/20101228000337/https://www.linuxfoundation.org/collaborate/workgroups/networking/bonding |date=2010-12-28 }} provides a method for aggregating multiple network interface controllers (NICs) into a single logical bonded interface composed of two or more so-called NIC slaves. The majority of modern Linux distributions come with a Linux kernel which has the Linux bonding driver integrated as a loadable kernel module and the ifenslave (if = [network] interface) user-level control program pre-installed. Donald Becker programmed the original Linux bonding driver. It came into use with the Beowulf cluster patches for the Linux kernel 2.0.

Modes for the Linux bonding driver (network interface aggregation modes) are supplied as parameters to the kernel bonding module at load time. They may be given as command-line arguments to the insmod or modprobe commands, but are usually specified in a Linux distribution-specific configuration file. The behavior of the single logical bonded interface depends upon its specified bonding driver mode. The default parameter is balance-rr.

;Round-robin (balance-rr): Transmit alternate network packets in sequential order from the first available NIC slave through the last. This mode provides load balancing and fault tolerance.{{Cite web|url=https://www.enterprisenetworkingplanet.com/news/understanding-nic-bonding-with-linux/|title = Understanding NIC Bonding with Linux|date = 2 December 2009}} This mode can cause congestion control issues due to the packet reordering it can introduce.{{Cite web |date=2011-04-27 |title=Linux Ethernet Bonding Driver HOWTO |url=https://www.kernel.org/doc/Documentation/networking/bonding.txt |access-date=2024-11-04 |website=kernel.org}}

;Active-backup (active-backup): Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to simplify forwarding in the network switch. This mode provides fault tolerance.

;XOR (balance-xor): Transmit network packets based on a hash of the packet's source and destination. The default algorithm only considers MAC addresses (layer2). Newer versions allow selection of additional policies based on IP addresses (layer2+3) and TCP/UDP port numbers (layer3+4). This selects the same NIC slave for each destination MAC address, IP address, or IP address and port combination, respectively; a simplified sketch of these hash policies follows the list of modes below. Single connections will have guaranteed in-order packet delivery and will transmit at the speed of a single NIC. This mode provides load balancing and fault tolerance.

;Broadcast (broadcast): Transmit network packets on all slave network interfaces. This mode provides fault tolerance.

;IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP): Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification. This mode is similar to the XOR mode above and supports the same balancing policies. The link is set up dynamically between two LACP-supporting peers.

;Adaptive transmit load balancing (balance-tlb): Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

;Adaptive load balancing (balance-alb): includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.
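As referenced in the balance-xor entry above, the transmit-hash policies can be approximated in a few lines of Python. This is a simplified sketch modeled on the formulas in the kernel's bonding documentation; the exact formulas differ between policies and kernel versions, and the fold step shown here is illustrative.

<syntaxhighlight lang="python">
def xmit_hash(src_mac: bytes, dst_mac: bytes,
              src_ip: int = 0, dst_ip: int = 0,
              src_port: int = 0, dst_port: int = 0,
              n_slaves: int = 2, policy: str = "layer2") -> int:
    """Pick a slave index for a packet (simplified bonding-style hash)."""
    h = src_mac[-1] ^ dst_mac[-1]            # layer 2: last MAC octets
    if policy in ("layer2+3", "layer3+4"):
        h ^= src_ip ^ dst_ip                 # mix in IP addresses
    if policy == "layer3+4":
        h ^= src_port ^ dst_port             # mix in TCP/UDP ports
    h ^= (h >> 16) ^ (h >> 8)                # fold high bits down
    return h % n_slaves

# The same flow always maps to the same slave, so its packets stay in order.
print(xmit_hash(b"\x00\x11\x22\x33\x44\x55", b"\x66\x77\x88\x99\xaa\xbb",
                src_ip=0x0A000001, dst_ip=0x0A000002, policy="layer2+3"))
</syntaxhighlight>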

The Linux Team driver{{cite web|url=http://www.libteam.org|title=libteam by jpirko|website=www.libteam.org|access-date=9 May 2018|url-status=live|archive-url=https://web.archive.org/web/20170914032423/http://libteam.org/|archive-date=14 September 2017}} provides an alternative to the bonding driver. The main difference is that the kernel part of the Team driver contains only essential code, while the rest of the code (link validation, LACP implementation, decision making, etc.) runs in userspace as part of the teamd daemon.

Usage

= Network backbone =

Link aggregation offers an inexpensive way to set up a high-capacity backbone network that transfers multiple times more data than any single port or device can deliver. Link aggregation also allows the network's backbone speed to grow incrementally as demand on the network increases, without having to replace everything and deploy new hardware.

Most backbone installations include more cabling or fiber-optic pairs than are initially necessary. This is done because labor costs are higher than the cost of the cable, and running extra cable reduces future labor costs if networking needs change. Link aggregation can allow the use of these extra cables to increase backbone speeds for little or no extra cost if ports are available.

= Order of frames =

When balancing traffic, network administrators often wish to avoid reordering Ethernet frames. For example, TCP suffers additional overhead when dealing with out-of-order packets. This goal is approximated by sending all frames associated with a particular session across the same link. Common implementations use L2 or L3 hashes (i.e. based on the MAC or the IP addresses), ensuring that the same flow is always sent via the same physical link.{{cite web |url=https://grouper.ieee.org/groups/802/3/hssg/public/apr07/frazier_01_0407.pdf |title=IEEE 802.3ad Link Aggregation (LAG) what it is, and what it is not |access-date=2007-05-14 |url-status=live |archive-url=https://web.archive.org/web/20071129153021/http://grouper.ieee.org/groups/802/3/hssg/public/apr07/frazier_01_0407.pdf |archive-date=2007-11-29 }}{{unreliable source?|date=January 2013|reason=WP:PRIMARY, WP:SPS,see Talk:Link aggregation#Link Aggregation References}}{{cite IETF |rfc=7424 |title=Mechanisms for Optimizing Link Aggregation Group (LAG) and Equal-Cost Multipath (ECMP) Component Link Utilization in Networks}}{{cite book|title=Procurve 2910al Management and Configuration Guide |section=Outbound Traffic Distribution Across Trunked Links |publisher=Hewlett Packard |date=February 2009}}

However, this may not provide even distribution across the links in the trunk when only a single pair or very few pairs of hosts communicate with each other, i.e. when the hashes provide too little variation; this effectively limits the aggregate client bandwidth. In the extreme, one link is fully loaded while the others are completely idle, and the aggregate bandwidth is limited to this single member's maximum bandwidth. For this reason, even load balancing and full utilization of all trunked links are almost never reached in real-life implementations.
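The effect is easy to demonstrate with a small simulation. In the following Python sketch (the hash function, addresses, and link count are arbitrary choices for illustration), a single host pair saturates one link while the rest stay idle, whereas many distinct flows spread out roughly evenly:

<syntaxhighlight lang="python">
import zlib
from collections import Counter

def link_for(flow: str, n_links: int = 4) -> int:
    # Stand-in for a switch's flow hash; real hardware differs.
    return zlib.crc32(flow.encode()) % n_links

# One host pair: all 1000 frames land on a single link.
one_pair = Counter(link_for("10.0.0.1->10.0.0.2") for _ in range(1000))
# 1000 distinct host pairs: frames spread (approximately) evenly.
many_pairs = Counter(link_for(f"10.0.{i // 256}.{i % 256}->10.1.0.1")
                     for i in range(1000))
print(one_pair)     # e.g. Counter({2: 1000})
print(many_pairs)   # e.g. roughly 250 frames per link
</syntaxhighlight>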

= Use on network interface cards =

NICs trunked together can also provide network links beyond the throughput of any single NIC. For example, this allows a central file server to establish an aggregate 2-gigabit connection using two 1-gigabit NICs teamed together. Note that the data signaling rate of each link is still {{nowrap|1 Gbit/s}}, which can be misleading depending on the methodologies used to test throughput after link aggregation is employed.

== Microsoft Windows ==

Microsoft Windows Server 2012 supports link aggregation natively. Previous Windows Server versions relied on manufacturer support of the feature within their device driver software. Intel, for example, released Advanced Networking Services (ANS) to bond Intel Fast Ethernet and Gigabit cards.[http://www.intel.com/support/network/sb/cs-009747.htm Intel Advanced Networking Services] {{webarchive|url=https://web.archive.org/web/20070124201010/http://www.intel.com/support/network/sb/CS-009747.htm |date=2007-01-24 }}

Nvidia supports teaming with their Nvidia Network Access Manager/Firewall Tool. HP has a teaming tool for HP-branded NICs which supports several modes of link aggregation including 802.3ad with LACP. In addition, there is a basic layer-3 aggregation{{cite web|url=https://technet.microsoft.com/pl-pl/library/cc784947(v=ws.10).aspx|title=RandomAdapter: Core Services|last=Archiveddocs|website=technet.microsoft.com|date=8 October 2009 |access-date=9 May 2018|url-status=live|archive-url=https://web.archive.org/web/20160425094804/https://technet.microsoft.com/pl-pl/library/cc784947(v=ws.10).aspx|archive-date=25 April 2016}} that allows servers with multiple IP interfaces on the same network to perform load balancing, and allows home users with more than one internet connection to increase connection speed by sharing the load across all interfaces.{{cite web|url=http://www.pctools.com/guides/registry/detail/951/|title=Norton Internet Security™ - PC Protection|website=www.pctools.com|access-date=9 May 2018|url-status=live|archive-url=https://web.archive.org/web/20170401111624/http://www.pctools.com/guides/registry/detail/951/|archive-date=1 April 2017}}

Broadcom offers advanced functions via the Broadcom Advanced Control Suite (BACS), through which the teaming functionality of BASP (Broadcom Advanced Server Program) is available, offering 802.3ad static LAGs, LACP, and "smart teaming" which does not require any configuration on the switches to work. It is possible to configure teaming with BACS with a mix of NICs from different vendors as long as at least one of them is from Broadcom and the other NICs have the required capabilities to support teaming.Broadcom [http://www.broadcom.com/support/ethernet_nic/management_applications.php Windows Management Applications] {{webarchive|url=https://web.archive.org/web/20120801043221/http://www.broadcom.com/support/ethernet_nic/management_applications.php |date=2012-08-01 }}, visited 8 July 2012

== Linux and UNIX ==

Linux, FreeBSD, NetBSD, OpenBSD, macOS, OpenSolaris and commercial Unix distributions such as AIX implement Ethernet bonding at a higher level and, as long as the NIC is supported by the kernel, can deal with NICs from different manufacturers or using different drivers.

== Virtualization platforms ==

Citrix XenServer and VMware ESX have native support for link aggregation. XenServer offers both static LAGs as well as LACP. vSphere 5.1 (ESXi) supports both static LAGs and LACP natively with their virtual distributed switch.{{cite web |url=http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf |title=What's New in vSphere 5.1 networking |archive-url=https://web.archive.org/web/20140123083838/https://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf |archive-date=2014-01-23 |date=June 2012}}

Microsoft's Hyper-V does not offer link aggregation support from the hypervisor level, but the above-mentioned methods for teaming under Windows apply to Hyper-V.

Limitations

= Single switch =

With the modes balance-rr, balance-xor, broadcast and 802.3ad, all physical ports in the link aggregation group must reside on the same logical switch, which in most common scenarios leaves a single point of failure: if the physical switch to which all links are connected goes offline, connectivity is lost. The modes active-backup, balance-tlb, and balance-alb can also be set up with two or more switches. However, after failover (as in all other modes), active sessions may in some cases fail (due to ARP problems) and have to be restarted.

However, almost all vendors have proprietary extensions that resolve some of this issue: they aggregate multiple physical switches into one logical switch. Nortel's split multi-link trunking (SMLT) protocol allows multiple Ethernet links to be split across multiple switches in a stack, preventing any single point of failure and additionally allowing all switches to be load balanced across multiple aggregation switches from the single access stack. These devices synchronize state across an Inter-Switch Trunk (IST) such that they appear to the connecting (access) device to be a single device (switch block) and prevent any packet duplication. SMLT provides enhanced resiliency with sub-second failover and sub-second recovery for all speed trunks while operating transparently to end-devices.

Multi-chassis link aggregation group provides similar features in a vendor-nonspecific manner. To the connected device, the connection appears as a normal link aggregated trunk. The coordination between the multiple sources involved is handled in a vendor-specific manner.

=Ethernet aggregation mismatch=

Aggregation mismatch refers to not matching the aggregation type on both ends of the link. Some switches do not implement the 802.1AX standard but support static configuration of link aggregation. Therefore, link aggregation between similarly statically configured switches may work but will fail between a statically configured switch and a device that is configured for LACP.
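Informally, the rule is that a LAG comes up only when both ends use the same aggregation type. A minimal Python rendering of that rule (the type names are labels chosen for this sketch, not standard terms):

<syntaxhighlight lang="python">
def lag_compatible(end_a: str, end_b: str) -> bool:
    # Both ends static, or both ends LACP; anything else is a mismatch.
    assert {end_a, end_b} <= {"static", "lacp"}
    return end_a == end_b

print(lag_compatible("static", "static"))  # True
print(lag_compatible("static", "lacp"))    # False: the link will fail
</syntaxhighlight>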

Examples

=Ethernet=

{{Further|topic=link aggregation|EtherChannel|Multi-link trunking|802.3ad}}

On Ethernet interfaces, channel bonding requires assistance from both the Ethernet switch and the host computer's operating system, which must stripe the delivery of frames across the network interfaces in the same manner that I/O is striped across disks in a RAID 0 array.{{citation needed|date=May 2012}} For this reason, some discussions of channel bonding also refer to Redundant Array of Inexpensive Nodes (RAIN) or to redundant array of independent network interfaces.{{cite book |editor=Jielin Dong |title=Network Dictionary |url=https://books.google.com/books?id=On_Hh23IXDUC |access-date=2013-08-07 |series=ITPro collection |year=2007 |publisher=Javvin Technologies Inc. |isbn=9781602670006 |page=95 |quote=Channel bonding, sometimes also called redundant array of independent network interfaces (RAIN), is an arrangement in which two or more network interfaces on a host computer are combined for redundancy or increased throughput.}}

=Modems=

In analog modems, multiple dial-up links over POTS may be bonded. Throughput over such bonded connections can come closer to the aggregate bandwidth of the bonded links than can throughput under routing schemes which simply load-balance outgoing network connections over the links.

=DSL=

Similarly, multiple DSL lines can be bonded to give higher bandwidth; in the United Kingdom, ADSL is sometimes bonded to give, for example, {{nowrap|512 kbit/s}} upload bandwidth and {{nowrap|4 Mbit/s}} download bandwidth, in areas that only have access to {{nowrap|2 Mbit/s}} bandwidth.{{cn|date=September 2023}}

=DOCSIS=

Under the DOCSIS 3.0 and 3.1 specifications for data over cable TV systems, multiple channels may be bonded. Under DOCSIS 3.0, up to 32 downstream and 8 upstream channels may be bonded.[https://apps.cablelabs.com/specification/CM-SP-PHYv3.0 DOCSIS 3.0 Physical Interface Specification] These are typically 6 or 8 MHz wide. DOCSIS 3.1 defines more complicated arrangements involving aggregation at the level of subcarriers and larger notional channels.[https://apps.cablelabs.com/specification/CM-SP-PHYv3.1 DOCSIS 3.1 Physical Interface Specification]
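For a rough sense of scale, the aggregate capacity of a bonding group is approximately the per-channel rate times the channel count. The following Python sketch assumes a commonly cited figure of roughly {{nowrap|38 Mbit/s}} of usable payload for a 6 MHz 256-QAM downstream channel; the exact rate depends on modulation and overhead.

<syntaxhighlight lang="python">
# Back-of-the-envelope DOCSIS 3.0 downstream bonding capacity.
PER_CHANNEL_MBITS = 38   # assumed usable rate of one 6 MHz 256-QAM channel
MAX_CHANNELS = 32        # DOCSIS 3.0 downstream bonding limit

aggregate = PER_CHANNEL_MBITS * MAX_CHANNELS
print(f"~{aggregate / 1000:.1f} Gbit/s aggregate downstream")  # ~1.2 Gbit/s
</syntaxhighlight>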

=Broadband=

Broadband bonding is a type of channel bonding that refers to aggregation of multiple channels at OSI layer four or above. Channels bonded can be wired links such as a T-1 or DSL line. Additionally, it is possible to bond multiple cellular links for an aggregated wireless bonded link.

Other bonding methodologies reside at lower OSI layers, requiring coordination with telecommunications companies for implementation. Broadband bonding, because it is implemented at higher layers, can be done without this coordination.{{cite web|url=http://engineeringbook.net/articles/broadband-bonding-offers-high-speed-alternative/|title=Broadband bonding offers high-speed alternative|publisher=engineeringbook.net|access-date=5 April 2013|url-status=dead|archive-url=https://web.archive.org/web/20120707084446/http://engineeringbook.net/articles/broadband-bonding-offers-high-speed-alternative/|archive-date=7 July 2012}}
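Because the aggregation happens at layer four, each connection (rather than each packet) can simply be assigned to a link. A minimal Python sketch of that idea, with hypothetical uplink names and load figures:

<syntaxhighlight lang="python">
# Assign each new outbound connection to the least-loaded uplink.
links = {"dsl0": 0.0, "lte0": 0.0}   # estimated load per uplink, Mbit/s

def assign_connection(expected_mbits: float) -> str:
    uplink = min(links, key=links.get)   # pick the least-loaded link
    links[uplink] += expected_mbits
    return uplink

for demand in (4.0, 2.0, 3.0):
    print(assign_connection(demand), links)
</syntaxhighlight>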

Commercial implementations of broadband channel bonding include:

* Connectify's Speedify fast bonding VPN - software app for multiple platforms: PC, Mac, iOS and Android[http://speedify.com Connectify's Speedify Service]
* Peplink's SpeedFusion Bonding Technology[http://www.peplink.com/technology/speedfusion-bonding-technology/ Peplink's SpeedFusion Bonding Technology]
* Viprinet's Multichannel VPN Bonding Technology[http://www.viprinet.com/en/technology/viprinet-principle Viprinet's Multichannel VPN Bonding Technology]
* Synopi's Natiply Internet Bonding Technology[http://www.synopi.com/natiply/ Synopi's Natiply Internet Bonding Technology]
* ComBOX Networks multi-WAN bonding as a service[https://www.combox-networks.com comBOX multi-wan services Broadband bonding]

=Wi-Fi=

{{Further|topic=channel bonding|Super G (wireless networking)}}

On 802.11 (Wi-Fi), channel bonding is used in Super G technology, referred to as {{nowrap|108 Mbit/s}}. It bonds two channels of standard 802.11g, which has a {{nowrap|54 Mbit/s}} data signaling rate per channel.

On IEEE 802.11n, a mode with a channel width of 40 MHz is specified. This is not channel bonding, but a single channel with double the older 20 MHz channel width, thus using two adjacent 20 MHz bands. This allows direct doubling of the PHY data rate from a single 20 MHz channel.

See also

References

{{reflist|refs=

{{cite web |url=http://www.cisco.com/en/US/docs/ios/12_2sb/feature/guide/sbcelacp.html |title=IEEE 802.3ad Link Bundling |publisher=Cisco Systems |date=2007-02-27 |access-date=2012-03-15 |url-status=live |archive-url=https://web.archive.org/web/20120419100517/http://www.cisco.com/en/US/docs/ios/12_2sb/feature/guide/sbcelacp.html |archive-date=2012-04-19 }}

{{cite journal |last = Guijarro |first = Manuel |author2=Ruben Gaspar|year = 2008 |title = Experience and Lessons learnt from running High Availability Databases on Network Attached Storage |journal = Journal of Physics: Conference Series |volume = 119 |series = Conference Series |publisher = IOP Publishing |doi = 10.1088/1742-6596/119/4/042015 |issue = 4 |url = http://www.iop.org/EJ/article/1742-6596/119/4/042015/jpconf8_119_042015.pdf |access-date = 2009-08-17|quote = Network bonding (also known as port trunking) consists of aggregating multiple network interfaces into a single logical bonded interface that correspond to a single IP address. |pages = 042015 |bibcode = 2008JPhCS.119d2015G |display-authors=etal|doi-access = free }}

{{Cite web|url=https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide/EtherChannel.html|title=Cisco Nexus 5000 Series NX-OS Software Configuration Guide - Configuring Port Channels [Cisco Nexus 5000 Series Switches]|website=Cisco|language=en|access-date=2019-10-25}}

}}