If small and medium business (SMB) spending on wireless data services is any indication, the U.S. economy is finally starting to rebound. By 2015, SMBs alone — home to roughly 2 out of every 3 new jobs — will spend 42 percent more on wireless data services than they did in 2010, predicts In-Stat, an analyst firm.
But there’s a dark lining to this silver cloud: Cellular networks are already struggling to keep up with today’s traffic. Accommodating a 42-percent increase just from SMBs — plus whatever large enterprises and consumers need — will be next to impossible without a fundamental change in how wireless devices get a connection.
Enter the fledgling concept of “network virtualization,” in which devices such as smartphones and tablets would constantly hop from network to network in search of the best connection, or even use two networks simultaneously to increase capacity. Depending on the application’s or user’s needs, the best network might be the fastest one; in other cases, it might be the cheapest or most secure. Network virtualization has been around for decades in the wireline world, in forms such as packet switching and least-cost routing.
But for wireless, the ability to flit between and combine networks — automatically, seamlessly and in real time — would be not just a new option, but a paradigm shift. Today, such switches typically require manual intervention. For example, when iPhone users attempt to download an app that’s 20 MB or larger, they get an alert telling them to switch to Wi-Fi first. That requirement highlights how today’s cellular networks are ill-equipped to handle bandwidth-intensive tasks, such as downloading large files and streaming HD videos.
Network virtualization would automate that process: The transceiver in devices such as smartphones and tablets would be constantly sniffing for all available networks nearby and then, unbeknownst to the user, automatically switch to the one that’s best suited to support whatever the user wants to do at that moment. This process wouldn’t be limited to a single air-interface technology, such as CDMA, or to a single wireless carrier. Instead, the transceiver would be scanning every band in a wide swath of spectrum — potentially between 400 MHz and 60 GHz — looking for potential connections, regardless of whether they use CDMA, LTE, WiMAX, Wi-Fi or some technology that has yet to be invented.
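To make that selection logic concrete, here is a minimal, hypothetical sketch in Python of the kind of decision a cognitive transceiver might run after each scan. The Network fields, policy names and figures are illustrative assumptions for this article, not anything InterDigital has published:

```python
from dataclasses import dataclass

@dataclass
class Network:
    name: str             # e.g. "LTE-band13", "WiFi-5GHz" (hypothetical labels)
    bandwidth_mbps: float
    cost_per_mb: float    # what the user pays to move data on this network
    secure: bool

def score(network: Network, policy: str) -> float:
    """Rank one candidate network against the active app's policy."""
    if policy == "fastest":
        return network.bandwidth_mbps
    if policy == "cheapest":
        return -network.cost_per_mb          # lower cost scores higher
    if policy == "most_secure":
        return (1.0 if network.secure else 0.0) * network.bandwidth_mbps
    raise ValueError(f"unknown policy: {policy}")

def select_network(scanned: list[Network], policy: str) -> Network:
    """Pick the best of whatever the transceiver found in its last scan."""
    return max(scanned, key=lambda n: score(n, policy))

# Illustrative numbers only: a fast open Wi-Fi hotspot vs. a slower, secured LTE link.
candidates = [
    Network("LTE-band13", 12.0, 0.05, True),
    Network("WiFi-5GHz", 40.0, 0.00, False),
]
print(select_network(candidates, "fastest").name)      # WiFi-5GHz
print(select_network(candidates, "most_secure").name)  # LTE-band13
```

The point of the sketch is that “best” is a moving target: the same scan can yield different winners depending on whether the foreground app is streaming video, syncing files or handling sensitive data.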
Network virtualization would also aggregate disparate networks when that’s the best way to support what the user is doing. For example, suppose that the user launches a streaming video app and selects the 1080p HD feed of a movie and that a WiMAX network and a Wi-Fi network are both available. Individually, those networks couldn’t supply enough bandwidth to provide a good viewing experience and still have enough capacity left over to accommodate other users. So instead, network virtualization would enable the device to connect to both networks simultaneously to get enough bandwidth, without taxing either of them.
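At heart, the aggregation decision is a capacity-splitting calculation. The sketch below (all figures assumed for illustration) divides one stream’s bitrate across several networks in proportion to each one’s spare capacity, so that neither network is saturated:

```python
def split_stream(required_mbps: float,
                 spare_capacity: dict[str, float]) -> dict[str, float]:
    """Split one stream's bitrate across networks in proportion to each
    network's spare capacity, so no single network bears the full load."""
    total_spare = sum(spare_capacity.values())
    if required_mbps > total_spare:
        raise RuntimeError("even the combined networks cannot carry this stream")
    return {name: required_mbps * spare / total_spare
            for name, spare in spare_capacity.items()}

# A 1080p feed needing ~8 Mbps, where neither network could carry it alone:
print(split_stream(8.0, {"WiMAX": 6.0, "WiFi": 6.0}))
# {'WiMAX': 4.0, 'WiFi': 4.0}
```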
“It allows you to use the network most efficiently,” says William Merritt, president and CEO of InterDigital, one of the companies pioneering the concept of network virtualization. “The driver behind this is the huge bandwidth crunch. You have a limited supply of spectrum, so you have to figure out how to use that spectrum most efficiently.”
Network virtualization may sound simple, but the concept can’t be ported wholesale from the wireline domain to wireless, whose unique considerations — particularly interference, battery life and signaling — have no counterparts in the wired world. So, just as network virtualization could fundamentally change wireless, the concept itself must undergo some fundamental changes before it becomes a commercially viable architecture.
What’s a Cognitive Radio?
Although network virtualization sounds deceptively straightforward, making it a commercial reality is anything but. A prime example is the transceiver. Today, device vendors support multiple air-interface technologies by including a separate transceiver for each: one radio for 3G, another for Wi-Fi and still another for WiMAX. That architecture has practical limitations, including cost, complexity, physical size and battery demands.
The ideal alternative would be a cognitive radio: a single transceiver that can span dozens of bands and technologies. That’s fundamentally different from a software-defined radio — something the wireless industry has been working on for more than a decade — because it wouldn’t be locked to a single air-interface technology. As a result, a cognitive radio could connect to a wider range of networks.
Besides cognitive radios, network virtualization would also require a framework that enables voice, video and data sessions to be seamlessly handed from one network or technology to another. A nascent version of those inter-standard handoffs is available today when voice calls are passed between cellular and Wi-Fi networks.
InterDigital recognized the need for inter-standard handoffs early on. In 2007, it invested in Kineto Wireless, one of the companies developing a framework known as Unlicensed Mobile Access, which bridges GSM/GPRS, Wi-Fi and Bluetooth. Meanwhile, InterDigital was also developing inter-standard handoffs for other wireless technologies. That work was eventually commercialized for use in SK Telecom’s UMTS and WiBro networks and later standardized as IEEE 802.21.
Another piece of the puzzle is back-office systems that can analyze traffic in real time and route it over the appropriate network. For example, suppose a tablet is running a videoconferencing app and a cloud-based file-sharing app simultaneously. The system might run the video and audio over the lowest-latency, highest-bandwidth network available, while the files that the conference participants are discussing are downloaded over a slower but more secure network. InterDigital is currently working with Alcatel-Lucent and standards bodies to create those kinds of policy-control mechanisms.
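As a rough illustration of that kind of policy control (the flow requirements, network names and properties below are invented for the example), a routing function might map each concurrent flow to the network that best matches its needs:

```python
# Invented flow requirements and network properties for illustration.
flows = {
    "videoconference-av": "low_latency",   # live audio and video
    "file-sync":          "secure",        # documents under discussion
}

networks = {
    "LTE":           {"latency_ms": 45, "secure": False},
    "WiFi-with-VPN": {"latency_ms": 90, "secure": True},
}

def route(flows: dict[str, str], networks: dict[str, dict]) -> dict[str, str]:
    """Assign each flow to the network that best matches its policy."""
    assignments = {}
    for flow, need in flows.items():
        if need == "low_latency":
            # latency-sensitive traffic gets the fastest path
            assignments[flow] = min(networks, key=lambda n: networks[n]["latency_ms"])
        elif need == "secure":
            # sensitive file transfers can tolerate a slower, safer path
            assignments[flow] = next(n for n, p in networks.items() if p["secure"])
        else:
            raise ValueError(f"unknown policy: {need}")
    return assignments

print(route(flows, networks))
# {'videoconference-av': 'LTE', 'file-sync': 'WiFi-with-VPN'}
```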
“As we mature each piece, we bring them into the market, get validation that they work, with the ultimate destination that all of this comes together at some point and creates this virtual network,” says Merritt.
But Is It Cheap Enough and Secure Enough?
Although sales of smartphones and tablets remain brisk, they’ll eventually be a niche play compared to the roughly 50 billion machine-to-machine (M2M) devices that could be in service by 2020. M2M devices are essentially the guts of a cell phone attached to utility meters, shipping containers, surveillance cameras and even pets to support tasks such as location tracking, usage monitoring and video backhaul.
The M2M market is notoriously price-sensitive in terms of both hardware and service costs. One obvious question is whether a cognitive radio would be too expensive to tap that market. The answer lies in another part of the network virtualization architecture: trusted devices, which serve as gateways that other devices connect to and then through.
In M2M, the trusted device would contain the cognitive radio and use Wi-Fi, Bluetooth, ZigBee or another short-range technology to communicate with nearby M2M devices, which would be kept inexpensive by using a conventional RF design. This architecture could save money on the service side, too: although each M2M device might support only a single air-interface technology and band, it could still reach the cheapest available network through the trusted gateway.
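A toy sketch of the gateway idea, with made-up tariffs and method names, shows how the one cognitive radio could shop for the cheapest backhaul on behalf of many cheap, single-radio devices:

```python
class TrustedGateway:
    """Hypothetical gateway: it holds the one expensive cognitive radio,
    while cheap single-radio M2M devices reach it over e.g. ZigBee."""

    def __init__(self, backhaul_tariffs: dict[str, float]):
        # network name -> cost per MB on that network (illustrative figures)
        self.tariffs = backhaul_tariffs

    def cheapest_backhaul(self) -> str:
        return min(self.tariffs, key=self.tariffs.get)

    def forward(self, device_id: str, payload: bytes) -> str:
        net = self.cheapest_backhaul()
        # ... transmit payload over `net` on the devices' behalf ...
        return f"{device_id}: {len(payload)} bytes via {net}"

gw = TrustedGateway({"LTE": 0.05, "WiMAX": 0.03, "WiFi": 0.0})
print(gw.forward("meter-0042", b"reading=1834 kWh"))
# meter-0042: 16 bytes via WiFi
```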
Because network virtualization dramatically expands the number of device-network combinations, security is another key concern. Trusted devices at each network edge can help there too by identifying each device’s unique signature to determine whether it’s legitimate, and thus whether it deserves access to the network.
In cellular networks, this process would also reduce a problem that’s often overshadowed by bandwidth concerns: signaling, which can have just as much impact on the user experience. Even low-bandwidth services can generate so much signaling traffic that it clogs up the network. For example, in 2009, a single Android IM app generated so much signaling traffic that it nearly crashed T-Mobile USA’s network in multiple markets. With trusted devices, the authentication-related signaling could be confined to the network’s edge to reduce the core network’s workload.
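One way to picture the signaling savings: if the trusted device caches each client’s authentication result, the core network is consulted once per device rather than once per connection. The sketch below is a simplified assumption about how such edge caching might work, not a description of any deployed system:

```python
class EdgeAuthenticator:
    """Sketch of confining authentication signaling to the edge: the trusted
    device verifies each client's signature locally and consults the core
    network only on first contact, caching the result thereafter."""

    def __init__(self, core_lookup):
        self.core_lookup = core_lookup       # expensive round trip to the core
        self.known: dict[str, bool] = {}     # device signature -> legitimate?

    def admit(self, signature: str) -> bool:
        if signature not in self.known:      # the core is touched once per device
            self.known[signature] = self.core_lookup(signature)
        return self.known[signature]

core_hits = 0
def core_lookup(sig: str) -> bool:
    global core_hits
    core_hits += 1
    return sig.startswith("legit-")          # stand-in for a real credential check

edge = EdgeAuthenticator(core_lookup)
for _ in range(1000):                        # a chatty IM app reconnecting constantly
    edge.admit("legit-handset-7")
print(core_hits)                             # 1 -- the core saw one query, not 1000
```

A real system would also expire and revalidate cached credentials; the point here is only that repeat connections need not generate core signaling at all.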
Business and Regulatory Realities
As the pool of potential networks grows, so does the likelihood that some of them will belong to multiple companies. That’s another paradigm shift. Today, a consumer or business buys a wireless device from a carrier, which then provides service via its network(s) and those of its partners. Network virtualization upends that longstanding business model by freeing the device to connect to unaffiliated and even rival service providers.
For wireless carriers, one option might be to eliminate device subsidies — something they’ve been trying to do for decades anyway because of the cost — in exchange for allowing the cognitive radio to connect to any network. Another potential option is for carriers to build, buy or partner with as many networks as possible, a trend that’s already well underway in North America, Europe and other world regions.
Standards work is already laying the technological groundwork for such combinations. For example, the upcoming 3GPP Release 10 standard supports simultaneous connections to LTE and Wi-Fi, a tacit acknowledgement that no single technology can do it all.
“To some extent, those hurdles are falling away as a result of industry consolidation,” says Merritt. “Others will fall away as a result of necessity.”
Merritt argues that many of the business and regulatory hurdles are already set to fall because operators and governments are under pressure to find creative ways to reconcile bandwidth demands with a finite amount of spectrum. For example, in the Americas alone, governments will have to free up between 721 MHz and 1161 MHz of additional spectrum over the next nine years based on usage trends, according to a 2009 Arthur D. Little study. Freeing up that much spectrum — and that quickly — will be nearly impossible, which could make wireless carriers, telecom vendors and regulators more receptive to cutting-edge alternatives, such as network virtualization.
The result would be a wireless world that’s fundamentally different both behind the scenes and in terms of the user experience.
“Networks today are largely islands,” says Merritt. “Ultimately, the idea is you would have a pool of resources that you draw on in an intelligent, dynamic and seamless way to create what is, in effect, a virtual network.”