Multipath routing is the routing technique of using multiple alternative paths through a network, which can yield a variety of benefits such as fault tolerance, increased bandwidth, or improved security. The multiple paths computed may be overlapping, edge-disjoint, or node-disjoint with each other. Extensive research has been done on multipath routing techniques, but multipath routing is not yet widely deployed in practice.
Wireless networks
To improve performance or fault tolerance: Concurrent multipath routing (CMR) is often taken to mean the simultaneous management and use of multiple available paths for the transmission of streams of data. The streams may emanate from a single application or from multiple applications. Each stream is assigned a separate path, as far as this is possible given the number of paths available. This improves utilization of bandwidth by creating multiple transmission queues. It also provides a degree of fault tolerance, in that should a path fail, only the traffic assigned to that path is affected; ideally, an alternative path is immediately available on which to continue or restart the interrupted stream. CMR provides better transmission performance and fault tolerance by providing:
* Avoidance of path discovery when reassigning an interrupted stream.
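The stream-to-path assignment described above can be sketched as follows. This is a minimal illustration, not a real routing API; the class and method names are hypothetical. Because the surviving paths are already known, reassigning the streams of a failed path requires no path discovery:

```python
from itertools import cycle

class StreamScheduler:
    """Application-level CMR sketch (hypothetical names): each stream is
    pinned to one of the pre-discovered paths; when a path fails, only
    the streams assigned to it are moved to surviving paths."""

    def __init__(self, paths):
        self.paths = list(paths)                 # known, pre-discovered paths
        self._next = cycle(range(len(self.paths)))
        self.assignment = {}                     # stream id -> path index

    def assign(self, stream_id):
        # Spread streams over distinct paths as far as the path count allows.
        self.assignment[stream_id] = next(self._next)
        return self.paths[self.assignment[stream_id]]

    def fail_path(self, path_index):
        # No path discovery needed: reassign only the affected streams
        # to the surviving paths, round-robin.
        survivors = cycle(i for i in range(len(self.paths)) if i != path_index)
        for sid, p in self.assignment.items():
            if p == path_index:
                self.assignment[sid] = next(survivors)
```

Note that only the traffic on the failed path is touched; streams on the other paths keep their queues and continue undisturbed.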
Shortcomings of CMR are:
* Some applications may offer traffic to the transport layer more slowly than others, starving the paths assigned to them and causing under-utilization.
* Moving an interrupted stream to an alternative path incurs a potentially disruptive period during which the connection is re-established.
True CMR
A more powerful form of CMR goes beyond merely presenting paths to applications for them to bind to. True CMR aggregates all available paths into a single, virtual path. Applications send their packets to this virtual path, which is de-multiplexed at the network layer: the packets are distributed to the physical paths by an algorithm such as round-robin or weighted fair queuing. Should a link fail, subsequent packets are not directed to that path, and the stream continues uninterrupted, transparently to the application. This method provides significant performance benefits over application-level CMR:
* By continually offering packets to all paths, the paths are more fully utilized.
* No matter how many nodes fail, as long as at least one of the paths constituting the virtual path remains available, all sessions stay connected. No stream needs to be restarted from the beginning, and no re-connection penalty is incurred.
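The virtual-path de-multiplexing described above can be sketched as follows. This is an illustrative model, not a real network-layer implementation; the names are hypothetical. Packets are spread round-robin over the live physical paths, and a failed path is simply skipped, so the session survives as long as one path remains:

```python
from itertools import count

class VirtualPath:
    """True CMR sketch (hypothetical names): one virtual path that
    aggregates several physical paths. Packets are de-multiplexed
    round-robin onto the live paths; failed paths are skipped."""

    def __init__(self, paths):
        self.alive = {p: True for p in paths}   # physical path -> up?
        self._counter = count()                 # round-robin position

    def mark_failed(self, path):
        self.alive[path] = False

    def send(self, packet):
        live = [p for p, ok in self.alive.items() if ok]
        if not live:
            raise ConnectionError("all constituent paths are down")
        path = live[next(self._counter) % len(live)]
        return path, packet   # in reality: enqueue the packet on this path
```

A weighted fair queuing variant would replace the round-robin index with per-path weights; the failure-skipping logic is the same.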
True CMR can, by its nature of using differing routes, cause out-of-order delivery of packets, which is severely debilitating for standard TCP. Standard TCP, however, has been shown to be inappropriate for use in challenged wireless environments and must, in any case, be augmented by a facility, such as a TCP gateway, that is designed to meet the challenge. One such gateway tool is SCPS-TP, which deals successfully with the out-of-order delivery problem by using selective negative acknowledgement instead of acknowledging all datagrams. Another important benefit of true CMR, much needed in wireless network communications, is its support for enhanced security: simply put, for an exchange to be compromised, multiple of the routes it traverses must be compromised. The reader is referred to the references in the “To improve network security” section for discussion of this topic.
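The receiver-side handling of out-of-order delivery can be sketched in the spirit of selective negative acknowledgement: rather than acknowledging every datagram, the receiver buffers out-of-order arrivals, delivers any contiguous prefix, and reports only the gaps for retransmission. This is a schematic illustration of the idea, not the actual SCPS-TP protocol machinery:

```python
class ReorderBuffer:
    """Sketch of gap-based (selective negative acknowledgement style)
    handling of out-of-order delivery at the receiver."""

    def __init__(self):
        self.received = set()
        self.delivered_up_to = -1   # highest in-order seq delivered so far

    def accept(self, seq):
        self.received.add(seq)
        # Deliver any now-contiguous prefix to the application.
        while self.delivered_up_to + 1 in self.received:
            self.delivered_up_to += 1

    def gaps(self):
        # Missing sequence numbers below the highest one seen: these,
        # not the successfully received datagrams, are what gets reported.
        if not self.received:
            return []
        top = max(self.received)
        return [s for s in range(self.delivered_up_to + 1, top)
                if s not in self.received]
```

Reporting only the gaps keeps acknowledgement traffic small even when multipath delivery reorders many packets, which is the property that makes this approach attractive over challenged wireless links.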
* Again, minimize the maximal load over all remaining links, but now also excluding the bottlenecks of the 2nd network layer.
* Repeat this algorithm until the entire communication footprint is enclosed in the bottlenecks of the constructed layers.
At each functional layer of the network protocol, after the maximal load of the links has been minimized, the bottlenecks of the layer are discovered in a bottleneck detection process.
At each iteration of the detection loop, we minimize the traffic sent over all links that carry the maximal load and are therefore suspected of being bottlenecks.
Links that prove unable to maintain their traffic load at the maximum are removed from the candidate path list.
The bottleneck detection process stops when there are no more links to remove, at which point the best path is known.
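The detection loop above can be sketched schematically. This is a strongly simplified illustration under assumed inputs (per-link loads and capacities as plain dictionaries), not the authors' exact optimization: links at the current maximal load are treated as suspects, suspects that cannot sustain that load are removed from the candidate set, and the loop stops when no suspect can be removed:

```python
def detect_bottlenecks(link_load, capacity):
    """Schematic bottleneck detection (illustrative, assumed inputs):
    link_load and capacity map link name -> load / capacity.
    Returns the maximally loaded links that survive the removal loop."""
    candidates = dict(link_load)
    while candidates:
        max_load = max(candidates.values())
        # Suspects at the maximal load that cannot maintain it.
        removable = [l for l, load in candidates.items()
                     if load == max_load and capacity[l] < max_load]
        if not removable:
            break                     # nothing left to remove: stop
        for l in removable:
            del candidates[l]
    if not candidates:
        return []
    peak = max(candidates.values())
    return [l for l, load in candidates.items() if load == peak]
```

The surviving maximally loaded links are the layer's bottlenecks, which the layered construction above then excludes before minimizing the maximal load again.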