WiFi: Struggling to Handle Handoffs
Apologies to my long-suffering readers who have been awaiting their daily dose of TeleBusillis, but I’ve been struggling to catch up since my holiday and, fortunately, have spent most of today laughing rather than catching up: a rather obscure paper on unusual behaviour in a heavily congested WiFi network has tickled my fancy.
To place this in context we have to travel back in time to a period when mobile handsets resembled bricks and a group of engineers were struggling in relative privacy to implement the “Great Software Monster”, abbreviated to GSM in today’s language. If my memory serves me well, the hardest nut to crack was handing over a call from one base station to another without the user noticing and without the call dropping.
Step forward a decade or so and a different set of engineers, this time working in the full glare of publicity, were cracking a rather similar problem: handing over a call from a UMTS base station to a GSM base station and back again without the call dropping.
I was reading today about the wireless network traffic at the 65th IETF meeting in Dallas, Texas in March 2006. It is well known that the current family of 802.11 standards does not perform well in heavily congested environments, and 500 IETF engineers with laptops in a single room is probably as congested as it gets. The paper’s main conclusion was that an abnormal level of handoffs added significant traffic to an already congested network as clients sought better bandwidth from alternative access points. Furthermore, different vendor implementations of the standard showed different handoff performance, with Apple appearing to have the best algorithm and Intel and Cisco vying for the worst. Even worse was the treatment of a layer-3 handoff, which basically closed down all network sockets on the client and assigned a new IP address before re-establishing the sockets: the average time taken to detect the loss of connection was 2.5 seconds, with the handoff itself averaging a further 1.2 seconds. In other words, 3.7 seconds of “dead time”: hardly acceptable for a voice call or video watching, for instance. It seems pretty obvious that the 802.11 standards weren’t really designed to handle handoffs efficiently.
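To see why 3.7 seconds of dead time is so damaging for voice, here is a minimal back-of-the-envelope sketch using the averages reported above. The function names and the 20 ms packet interval (typical of VoIP codecs such as G.711) are my own illustrative assumptions, not figures from the paper.

```python
# Illustrative sketch, not from the paper: total "dead time" during a
# layer-3 handoff is the time to detect the lost connection plus the
# time to obtain a new IP address and re-establish sockets.

AVG_DETECTION_TIME_S = 2.5   # average time to notice the old AP is gone
AVG_L3_HANDOFF_TIME_S = 1.2  # average time to get a new IP and reconnect


def dead_time(detection_s: float, handoff_s: float) -> float:
    """Gap during which no packets flow: detection plus handoff."""
    return detection_s + handoff_s


def voice_packets_lost(gap_s: float, packet_interval_s: float = 0.02) -> int:
    """Voice packets dropped during the gap, assuming one packet every
    20 ms (an assumption typical of VoIP, not stated in the paper)."""
    return round(gap_s / packet_interval_s)


gap = dead_time(AVG_DETECTION_TIME_S, AVG_L3_HANDOFF_TIME_S)
print(f"dead time: {gap:.1f}s, ~{voice_packets_lost(gap)} voice packets lost")
```

Under those assumptions, a 3.7-second gap swallows on the order of 185 consecutive voice packets, which is far beyond what any jitter buffer or packet-loss concealment can paper over.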
Now, with the blueprints for the next great technology (WiMAX, or 802.16) being argued about (sorry, discussed) in the standards corridors, I wonder whether handoffs will be the biggest problem they have to address, or whether they will brush them aside as WiFi did.
By the way, the paper was “IEEE 802.11 in the Large: Observations at an IETF Meeting” by Andrea G. Forte, Sangho Shin and Henning Schulzrinne. Professor Schulzrinne is seen by many as the godfather of the SIP protocols, so few would doubt his impartiality.