Welcome to The IP Development Network Blog
Tuesday, 3 April 2007
Part 5: The problem is also how to route the packets
It’s no good just working out how to get paid for delivering video, although of course that helps. The amounts that you can make are small in comparison with the cost of inefficiencies in the way that the broadband networks route packets.
The incremental cost of delivering a 2 hour 1080p HD movie can go from £2.10 to zero by exploiting intelligence at the edge. That’s right, zero cost to the second user in a local community, and the third, and the fourth… All at zero incremental cost because you are only using the local loops which do not have an incremental cost. £1.30 a month for up to 24 megs always on, with no bandwidth cap. Now you’re talking…!
This is the second half of what was originally the fourth article looking at “How to make money from video”, which makes it the fifth article in the series. In it I will be looking at the technical solutions that are required to minimise the distance that packets travel.
I want to re-iterate (for those that missed article 1) where the traffic that makes all this a problem is coming from. Video content on the internet is an elephant with a long tail.
An elephant’s tail is not actually short – it can be up to 1.5m long. Similarly there is a long tail of TV content, newly accessible in the internet era, but the main body of traffic is massive and dwarfs the tail.
The top 50 DVD titles in any given week in the US generate as much as 85% of total rental revenues. People think of the internet as a much bigger democracy in which there is fundamentally more choice, but YouTube’s numbers show an even greater level of concentration: the 20th most popular video of all time has been downloaded only 18% as often as the number one. Social networking actually intensifies a human being’s natural instinct to follow the herd, by making the direction the herd is travelling much clearer.
In addition, as file sizes increase for the most popular items, so the number of files that make up 80% of the traffic shrinks further. I’ll use the example of Casino Royale, the James Bond film. A quick Google finds a couple of torrent sites with the 8.06 GB file. Compare that to the bbc.co.uk web site at 105 KB or Google.com at 13.4 KB. Casino Royale is equivalent to roughly 80,000 bbc.co.uk and 630,000 Google home page downloads. The point is that it doesn’t take very many downloads of Casino Royale to render insignificant today’s most popular non-video content.
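As a back-of-envelope check, the comparison works out roughly as follows (a sketch using the sizes quoted above, treating GB and KB as decimal units – the exact ratios shift slightly with 1024-based units):

```python
# Rough equivalence of one HD film to ordinary web pages, using the
# sizes quoted in the post and decimal units (10^9 / 10^3 bytes).
FILM_BYTES = 8.06e9      # Casino Royale torrent, 8.06 GB
BBC_BYTES = 105e3        # bbc.co.uk home page, 105 KB
GOOGLE_BYTES = 13.4e3    # google.com home page, 13.4 KB

print(round(FILM_BYTES / BBC_BYTES))     # ~77,000 bbc.co.uk page loads
print(round(FILM_BYTES / GOOGLE_BYTES))  # ~601,000 Google page loads
```

Close enough to the 80,000 and 630,000 figures above once you allow for rounding.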
With even greater numbers watching “mass market content”, it is increasingly the same content being carried over and over again on the networks. The same combinations of 1s and 0s are sent down the same routes again and again, driving ever greater investment in networks. Why are we spending more money to carry the same thing over and over? Are we lemmings walking off a cliff?
The fundamental technical problem is that the internet cannot cope because of all the tromboning of traffic that goes on: broadband networks are not built to route packets at a local level – they are built like trombones. Traffic travels a long way in tunnels before it actually sees a router. This Level 2 backhaul is baked into the hardware and the architecture because of the problems of managing significantly greater numbers of Level 3 devices. Peer to peer accentuates the tromboning by bouncing files around between hosts on broadband networks that were designed for client-server using ATM and L2TP.
So today’s networks are built to trombone traffic, which is a very inefficient use of the raw bandwidth resource even if it is much simpler to administer. In the past there was no incentive to deal with the inefficiencies because there was no scarcity in the backhaul. It may still be cheaper to buy another backhaul circuit than to fix the architecture, but as traffic growth accelerates, the speed with which you have to add new backhaul capacity will reach a tipping point.
The Solution to Tromboning
Pushing the routing function down to the exchange level, combined with a more localised, presence-aware version of peer to peer, has the potential to clean up the tromboning of traffic within the networks themselves. Given an impending capacity crunch, it must surely be an economic imperative to solve the fundamental problem and start deploying the intelligence at the edge rather than in the core.
The capability of the silicon in consumer devices is astonishing, especially now that the PS3 is here. Peer to peer and grid computing unleash that potential to process information and generate traffic. Handling this requires a highly efficient architecture and tromboning is not highly efficient.
A large factor in tromboning is the interconnection architecture between competing providers, which is concentrated in central locations, meaning that packets go through London to get from Cambridge to Stevenage. It is worth noting that even traffic between an ISP’s own subscribers often trombones in and out of these same locations because of the centralised architectures ISPs deploy internally.
Local routing must be combined with localised distribution of storage capacity. The cost of storage is in perpetual decline on a per-GB basis, and servers with 1 terabyte of storage are available now for £1,400. That is a price of £1.40 per GB stored, including all the tin. Our UK LLU backhaul network model says that getting 1 GB from the internet would cost £0.23 per GB transferred (just under half being the cost of transit, the rest backhaul). In other words, if the network has to carry the same thing seven or more times, it would have been more efficient to cache it the first time.
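Using the post’s own figures, the break-even point works out like this (a sketch assuming flat per-GB prices and ignoring any storage management overhead):

```python
import math

# Figures from the UK LLU backhaul model quoted above.
STORAGE_COST_PER_GB = 1.40   # £/GB stored, server and "all the tin" included
TRANSFER_COST_PER_GB = 0.23  # £/GB hauled from the internet (transit + backhaul)

# Number of transfers at which carrying the file repeatedly costs more
# than having cached it the first time.
break_even = math.ceil(STORAGE_COST_PER_GB / TRANSFER_COST_PER_GB)
print(break_even)  # 7: from the seventh transfer onwards, caching wins
```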
What makes the situation much more serious is the fact that many of the biggest users of bandwidth are operating in a dark place, exploiting the democracy of the internet to avoid paying for their slice of the action. Peer to peer applications were described by James Enck at Telco 2.0 as “parasitic”, which is highly evocative. I suspect that this was a deliberately inflammatory comment designed to get people thinking, rather than a view that we should exterminate them for the safety of humankind.
But following the analogy for a minute, the newest and potentially most deadly parasite is Joost, which uses 700kbps (320 MB per hour) of your downstream bandwidth and around 220kbps (105 MB per hour) upstream when the application is running. You don’t even need to be watching anything: it still uses the network when the programme is minimised.
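Those per-hour figures follow directly from the bitrates (a quick conversion sketch; the 320 MB and 105 MB numbers above presumably use slightly different rounding):

```python
def mb_per_hour(kbps: float) -> float:
    """Convert a sustained bitrate in kilobits per second to megabytes per hour."""
    # kbps -> bits/s -> bits/hour -> bytes/hour -> MB/hour (decimal units)
    return kbps * 1000 * 3600 / 8 / 1e6

print(round(mb_per_hour(700)))  # ~315 MB/h downstream
print(round(mb_per_hour(220)))  # ~99 MB/h upstream
```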
This is just the start: 700kbps on Joost looks very blocky at full screen and is nowhere near the limits of the media codecs, processors and monitors out there. It would be reasonable to expect this 700kbps to grow as network bottlenecks are removed (version 0.9, released today, looks nearer 800kbps, though that is not a scientific measure).
It’s easy to agree with the view that peer to peer is the root of all evil, but that ignores the reasons why it is such a problem. Peer to peer is inefficient because traffic trombones around the country. This would not be the case if files were being shared between neighbours, but the way that networks are designed ensures traffic often goes to London and back a few times on its journey to reach me. Put that into an offline context…
It is not peer to peer that is the problem. The concept of sharing files and bandwidth exploits the latent capability of installed silicon, storage and bandwidth at the edge of networks. It is the network cores that can’t cope because of all the tromboning.
Consider localised peer to peer: sharing files with people in the same town or village, in a network that is locally routed. One seed file and everyone can see it. All at zero incremental cost because the local loops are the one remaining element of the delivery chain that can be guaranteed to be a fixed price.
Service providers can even encourage this by having localised social networks that, conveniently for the operators, encourage neighbours to watch the same stuff by highlighting the “popular stuff” on the EPG.
At this point, it is worth also highlighting interconnection between competing networks as a real issue. If it is left centrally as today, much of the benefit of local routing will not be realised as traffic from me on ISP 1 would still have to go to my neighbour on ISP 2 via London. That would be like having to go via London to see him because I drive a Saab and he drives a VW…
A further stumbling block in the discussions about local interconnection is the way in which the internet deploys “hot-potato” routing. If a file is destined for Network 2, it will leave Network 1 at its first opportunity, travelling the rest of the way on Network 2. If Networks 1 and 2 are different sizes, the smaller will offload far more packets onto the larger one’s backhaul network than it carries in return, thus receiving a proportionally better deal. So interconnection deals, if they get done at all, will get done very slowly as the largest players guard their positions.
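A toy model makes the asymmetry visible (the spans and volumes here are invented for illustration): under hot-potato handover each network hauls its inbound traffic across its own footprint, so the network with the larger footprint does proportionally more of the work even when the traffic exchanged is equal.

```python
# Toy hot-potato model: the sender drops traffic at the first
# interconnect, so each network carries *inbound* traffic across its
# own footprint. All numbers below are hypothetical.
SMALL_SPAN_KM = 50    # regional Network 1
LARGE_SPAN_KM = 500   # national Network 2
TRAFFIC_GB = 100      # same volume exchanged in each direction

small_work = TRAFFIC_GB * SMALL_SPAN_KM   # GB·km hauled by Network 1
large_work = TRAFFIC_GB * LARGE_SPAN_KM   # GB·km hauled by Network 2
print(large_work / small_work)  # 10.0 – the larger network does 10x the haulage
```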
In the end, I would probably be forced to get a VW because everyone else had one, which highlights the IP as a Natural Monopoly issue. The more networks, the more costly and complex the sum of the parts becomes. Interconnection at the local exchange would only be possible between a few big players before it also became too cumbersome to manage.
It is all very well having the commercial solutions that I went through yesterday, but many of these value-added revenue opportunities are available to broadcasters too, so the differential service advantages from using the internet for video are not as great as they might first appear.
Consumers can choose to stay with broadcast. Broadcast is not going away, and even if it is not as “rich” as the internet based TV experience could be, if it is oodles cheaper, people will stick with it.
Ideally, it would be great for all TV content to move over to the internet because it would bring new capabilities to the medium for enhanced social networking – the X Factor is an example of how TV uses social networking extremely successfully already…
I talked about the Telenor football example yesterday, but I skimmed over the social networking that they are developing around their audiences. They are working to allow the clubs’ supporters to share their experiences with photos and blogs, and to allow users to create personalised highlights packages of goals and incidents involving their favourite players.
The internet brings the undeniable potential for adding value to the TV experience. But, and it is a big but, internet networks are not designed to distribute mass market content and broadcast is immeasurably more efficient, even if it is limited in capacity terms and by the lack of an uplink.
The alternative to fixing the architecture as outlined here is to install more capacity into the backhaul networks. Even where there are existing ducts this is expensive, but there will of course be places where user density means the cost fits into the business plan, at least in the short term. Such a plan is based on switching users from the competition by offering significant improvements in price or speed or both. But even if that is successful, what is your incentive in a saturated market to make the next upgrade step when it is required?
Heck, it may even be that there is enough installed capacity to serve the cities at least a decent ADSL2+ service – sorry if you live in the countryside, and you are not going to get VDSL anywhere, ever; that’s the government’s fault, ask them for a subsidy… Sorry, asking for a subsidy doesn’t wash for me when the last investment is being used so inefficiently.
Re-engineering the broadband networks, deploying local routing and exploiting silicon, storage and bandwidth at the edge is indeed much harder than doing a few deals with content providers and whitelisting their services. And, by the way, I’m not volunteering to do it for the same reason why the market has not yet found a solution: there is rarely any money in setting industry standards unless you are a monopoly and can then license those standards to smaller players.
And here we reach the final part of the puzzle that I want to address in this series: the incentive to act positively, which in this world is all about how to route the money again. There has to be a solution that shares the benefits between the elements of the value chain at least vaguely proportionally to their investments.
This means that content owners need to give up their apparent ambition to squeeze the network companies out of existence, because without a network there is no service. Trying to drown them in Sky by broadband was a bit naughty, but Joost really needs to start holding peace talks with the access providers before it goes any further. MAD (mutually assured destruction) benefits no one in the end.
In return for cooperation and a share of the money, the networks should provide premium services and accept that the consumer is going to pay the distributor and not them for the content delivery. There is no incentive for either the content owner or their customer to pay unless there is a genuine technical or price advantage. Sorry net neutralists, as Martin Geddes says, yours is a “Philosophical Error”. You need to pay a courier if you want something big and heavy delivered to your home – what is so different with expecting payment for a large object online?
There is even a strong financial incentive for companies to solve the problems of routing the money and the packets: the prize is the online video market. But I suspect that getting there will involve a disruptor or two.
Other Articles in the Series
Part 1: The Online Video Market
Part 2: The cost of Online Video
Part 3: Traffic Management
Part 4: Routing the Money
Part 5: Routing the Packets
Summary Slides