The IP Development Network

Welcome to The IP Development Network Blog

Tuesday, 3 April 2007

 

Part 5: The problem is also how to route the packets

It’s no good just working out how to get paid for delivering video, although of course that helps. The amounts you can make are small in comparison with the cost of the inefficiencies in the way broadband networks route packets.

The incremental cost of delivering a 2-hour 1080p HD movie can go from £2.10 to zero by exploiting intelligence at the edge. That’s right: zero cost to the second user in a local community, and the third, and the fourth… all at zero incremental cost, because you are only using the local loops, which have no incremental cost – £1.30 a month for up to 24 megs, always on, with no bandwidth cap. Now you’re talking!

This is the second half of what was originally the fourth article looking at “How to make money from video”, which makes it the fifth article in the series. In it I will look at the technical solutions required to minimise the distance that packets travel.

I want to reiterate (for those who missed article 1) where the traffic that makes all this a problem is coming from. Video content on the internet is an elephant with a long tail.

Elephants do not have a short tail – it can be up to 1.5m long. Similarly, there is a long tail of TV content, newly accessible in the internet era, but the main body of traffic is massive and dwarfs the tail.

The top 50 DVD titles in any given week in the US generate as much as 85% of total rental revenues. People think of the internet as a much bigger democracy in which there is fundamentally more choice, but YouTube’s numbers show an even greater level of concentration: the 20th most popular video of all time has been downloaded only 18% as often as the number 1. Social networking actually intensifies a human being’s natural instinct to follow the herd by making the direction the herd is travelling much clearer.

In addition, as file sizes increase for the most popular items, the number of files that make up 80% of the traffic shrinks further. I’ll use the example of Casino Royale, the James Bond film. A quick Google finds a couple of torrent sites with the 8.06 GB file. Compare that to the bbc.co.uk home page at 105 KB or Google.com at 13.4 KB: Casino Royale is equivalent to 80,000 bbc.co.uk and 630,000 Google home page downloads. The point is that it doesn’t take very many downloads of Casino Royale to render insignificant today’s most popular non-video content.
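To make the arithmetic explicit, here is the comparison as a quick back-of-the-envelope calculation, assuming the binary units that torrent listings use:

    GB, KB = 1024**3, 1024        # binary units, as used by the torrent listing

    movie = 8.06 * GB             # the Casino Royale file
    bbc = 105 * KB                # bbc.co.uk home page
    google = 13.4 * KB            # google.com home page

    print(round(movie / bbc))     # ~80,000 bbc.co.uk page loads
    print(round(movie / google))  # ~631,000 google.com page loads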

With even greater numbers watching mass-market content, it is increasingly the same content being carried over and over again on the networks. The same combinations of 1s and 0s are sent down the same routes time after time, driving ever greater investment in networks. Why are we spending more money to carry the same thing repeatedly? Are we lemmings walking off a cliff?

The internet cannot cope because of all the tromboning of traffic that goes on, and this is the fundamental technical problem: internet networks are not built to route packets at a local level – they are built like trombones. Traffic travels a long way in tunnels before it ever sees a router. This Level 2 backhaul is built into the hardware and into the architecture because of the problems of managing significantly greater numbers of Level 3 devices. Peer-to-peer accentuates tromboning by bouncing files around between hosts on broadband networks that were designed for client-server traffic using ATM and L2TP.

So today’s networks are built to trombone traffic, which is a very inefficient use of the raw bandwidth resource, even if it is much simpler to administer. In the past there was no incentive to deal with the inefficiencies because there was no scarcity in the backhaul. It may still be cheaper to buy another backhaul circuit than to fix the architecture, but as traffic growth accelerates, the speed at which you have to add new backhaul capacity will reach a tipping point.

The Solution to Tromboning
Pushing the routing function down to the exchange level, combined with a more localised, presence-aware version of peer-to-peer, has the potential to clean up the tromboning of traffic within the networks themselves. Given an impending capacity crunch, it must surely be an economic imperative to solve the fundamental problem and start deploying the intelligence at the edge rather than in the core.
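To make that concrete, here is a minimal sketch of what presence-aware peer selection could look like. The structure and the idea of tagging each peer with its exchange are my own illustration, not a description of any existing client:

    # Hypothetical sketch: each peer advertises which exchange (local serving
    # office) it sits behind; the client tries same-exchange peers first so
    # traffic stays on the local loops instead of tromboning through the
    # backhaul and core.
    def rank_peers(peers, my_exchange):
        local = [p for p in peers if p["exchange"] == my_exchange]
        remote = [p for p in peers if p["exchange"] != my_exchange]
        return local + remote   # try local seeds first, fall back to the WAN

    peers = [
        {"host": "10.0.0.12", "exchange": "Cambridge"},
        {"host": "10.0.3.40", "exchange": "Stevenage"},
        {"host": "10.0.0.77", "exchange": "Cambridge"},
    ]
    print(rank_peers(peers, "Cambridge"))   # Cambridge peers come first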

The capability of the silicon in consumer devices is astonishing, especially now that the PS3 is here. Peer-to-peer and grid computing unleash that potential to process information and generate traffic. Handling this requires a highly efficient architecture, and tromboning is not highly efficient.

A large factor in tromboning is the interconnection architecture between competing providers, which is concentrated in central locations, meaning that packets go through London to get from Cambridge to Stevenage. It is worth noting that even traffic between an ISP’s own subscribers often trombones in and out of these same locations because of the centralised architectures the ISPs deploy internally.

Local routing must be combined with localised distribution of storage capacity. The cost of storage is in perpetual decline on a per-GB basis, and servers with 1 terabyte of storage are available now for £1,400. That is a price of £1.40 per GB stored, including all the tin. Our UK LLU backhaul network model says that getting 1 GB from the internet costs £0.23 per GB transferred (just under half being the cost of transit, the rest backhaul). In other words, if the network has to carry the same thing more than six times, it would have been more efficient to cache it the first time.
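The break-even falls straight out of those two figures:

    storage_per_gb = 1400 / 1000    # GBP: a 1 TB server at £1,400 => £1.40/GB stored
    transfer_per_gb = 0.23          # GBP/GB: transit + backhaul, from the LLU model

    print(storage_per_gb / transfer_per_gb)   # ~6.1 deliveries pay for the cache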

Stumbling Blocks
What makes the situation much more serious is that many of the biggest users of bandwidth are operating in a dark place, exploiting the democracy of the internet to avoid paying for their slice of the action. Peer-to-peer applications were described by James Enck at Telco 2.0 as “parasitic”, which is highly evocative. I suspect this was a deliberately inflammatory comment designed to get people thinking, rather than a view that we should exterminate peer-to-peer for the safety of humankind.

But following the analogy for a minute, the newest and potentially most deadly parasite is Joost, which uses 700kbps (320 MB per hour) of your downstream bandwidth and around 220kbps (105 MB per hour) upstream when the application is running. You don’t even need to be watching anything: it still uses the network when the programme is minimised.
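For anyone wanting to check the hourly volumes, the conversion from line rate is straightforward (the quoted 320 MB and 105 MB imply effective rates slightly above the nominal kbps, presumably protocol overhead):

    def mb_per_hour(kbps):
        # kilobits/second -> megabytes/hour (decimal: 1 MB = 8,000 kilobits)
        return kbps * 3600 / 8000

    print(mb_per_hour(700))   # ~315 MB/hour downstream
    print(mb_per_hour(220))   # ~99 MB/hour upstream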

This is just the start: 700kbps on Joost is very blocky at full screen and nowhere near the limits of the media codecs, processors and monitors out there. It would be reasonable to expect this 700kbps to grow as network bottlenecks are removed (version 0.9, released today, looks nearer 800kbps, though that is not a scientific measurement).

It’s easy to agree with the view that peer-to-peer is the root of all evil, but that ignores the reasons why it is such a problem. Peer-to-peer is inefficient because traffic trombones around the country. This would not be the case if files were being shared between neighbours, but the way that networks are designed ensures traffic often goes to London and back a few times on its journey to reach me. Put that into an offline context…

It is not peer to peer that is the problem. The concept of sharing files and bandwidth exploits the latent capability of installed silicon, storage and bandwidth at the edge of networks. It is the network cores that can’t cope because of all the tromboning.

Consider localised peer-to-peer: sharing files with people in the same town or village, in a network that is locally routed. One seed file and everyone can see it – all at zero incremental cost, because the local loops are the one remaining element of the delivery chain that can be guaranteed to be a fixed price.

Service providers can even encourage this by running localised social networks that, conveniently for the operators, encourage neighbours to watch the same content by highlighting what is locally popular on the EPG.

At this point it is worth also highlighting interconnection between competing networks as a real issue. If it is left centralised, as it is today, much of the benefit of local routing will not be realised: traffic from me on ISP 1 would still have to reach my neighbour on ISP 2 via London. That would be like having to go via London to see him because I drive a Saab and he drives a VW…

A further stumbling block that complicates the discussions about local interconnection is the way in which the internet deploys “hot-potato” routing. If a file is destined for Network 2, it will leave Network 1 at its first opportunity, travelling the rest of the way on Network 2. If Networks 1 and 2 are different sizes, the smaller will offload far more packets onto the larger one’s backhaul network than it contributes the other way, thus receiving a proportionally better deal. So interconnection deals, if they get done at all, will get done very slowly as the largest players guard their positions.
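A toy model shows why the asymmetry matters. The subscriber numbers are invented purely for illustration; the point is the ratio:

    # Two hypothetical networks; a simple gravity model sends each network's
    # off-net traffic in proportion to the size of the other network, so the
    # exchanged volume is the same in both directions.
    subs_a, subs_b = 9_000_000, 1_000_000
    cross = subs_a * subs_b / (subs_a + subs_b)   # traffic units each way

    # Under hot-potato routing the *receiving* network carries the long haul:
    print(f"Share of A's traffic long-hauled by B: {cross / subs_a:.0%}")   # 10%
    print(f"Share of B's traffic long-hauled by A: {cross / subs_b:.0%}")   # 90%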

In the end, I would probably be forced to get a VW because everyone else had one, which highlights the “IP as a natural monopoly” issue. The more networks there are, the more costly and complex the sum of the parts becomes. Interconnection at the local exchange would only be possible between a few big players before it, too, became too cumbersome to manage.

Summary
It is all very well having the commercial solutions that I went through yesterday, but many of those value-added revenue opportunities are available to broadcasters too, so the differential service advantages of using the internet for video are not what they might first appear.

Consumers can choose to stay with broadcast. Broadcast is not going away, and even if it is not as “rich” as the internet-based TV experience could be, if it is oodles cheaper, people will stick with it.

Ideally, all TV content would move over to the internet, because that would bring new capabilities to the medium for enhanced social networking – The X Factor is an example of how TV already uses social networking extremely successfully…

I talked about the Telenor football example yesterday, but I skimmed over the social networking that they are developing around their audiences. They are working to let the clubs’ supporters share their experiences through photos and blogs, and to let users create personalised highlights packages of goals and incidents involving their favourite players.

The internet brings undeniable potential for adding value to the TV experience. But, and it is a big but, internet networks are not designed to distribute mass-market content, and broadcast is immeasurably more efficient, even if it is limited in capacity and by the lack of an uplink.

The alternative to fixing the architecture as outlined here is to install more capacity into the backhaul networks. Even where there are existing ducts this is expensive, though there will of course be places where user density means doing so fits into the business plan, at least in the short term. Such a plan is based on switching users from the competition by offering significant improvements in price or speed or both. But even if that is successful, what is your incentive in a saturated market to make the next upgrade step when it is required?

Heck, it may even be that there is enough installed capacity to serve at least the cities a decent ADSL2+ service – sorry if you live in the countryside, and you are not going to get VDSL anywhere, ever; that’s the government’s fault, ask them for a subsidy… Sorry, but asking for a subsidy doesn’t wash for me when the last investment is being used so inefficiently.

Re-engineering the broadband networks, deploying local routing and exploiting the silicon, storage and bandwidth at the edge is indeed much harder than doing a few deals with content providers and whitelisting their services. And, by the way, I’m not volunteering to do it, for the same reason the market has not yet found a solution: there is rarely any money in setting industry standards unless you are a monopoly that can then license those standards to smaller players.

And here we reach the final part of the puzzle that I want to address in this series: the incentive to act positively, which in this world is all about how to route the money again. There has to be a solution that shares the benefits between the elements of the value chain at least roughly in proportion to their investments.

This means that content owners need to give up their apparent ambition to squeeze the network companies out of existence, because without a network there is no service. Trying to drown them in Sky by broadband was a bit naughty, but Joost really needs to start holding peace talks with the access providers before it goes any further. MAD (mutually assured destruction) benefits no one in the end.

In return for cooperation and a share of the money, the networks should provide premium services and accept that the consumer is going to pay the distributor, and not them, for the content delivery. There is no incentive for either the content owner or their customer to pay unless there is a genuine technical or price advantage. Sorry, net neutralists: as Martin Geddes says, yours is a “Philosophical Error”. You need to pay a courier if you want something big and heavy delivered to your home – what is so different about expecting payment for delivering a large object online?

There is even a strong financial incentive for companies to solve the problems of routing the money and the packets: the online video market. But I suspect that getting there will involve a disruptor or two.


Other Articles in the Series

Part 1: The Online Video Market
Part 2: The cost of Online Video
Part 3: Traffic Management
Part 4: Routing the Money
Part 5: Routing the Packets
Summary Slides

Comments:
Given that The Register story (http://www.theregister.co.uk/2007/04/20/the_economics_of_prime_time/page2.html) wouldn’t let people post comments directly there, I have come here to say my part.

While you gloss over the real solution in the referenced PDF (http://ipdev.net/downloads/HD-TV%20Who%20Pays%20the%20Bill.pdf) and talk of these delivery costs:

    Content                          Transit   IP Stream   LLU
    50 Minute Album (MP3 encoding)   £0.01     £0.14       £0.01
    2 Hour HD at 1080p               £1.03     £21.13      £1.07

you seem to forget to mention that the figure is based on the old MPEG-2 codec, NOT the far more advanced AVC/H.264 option, which gives you a FAR greater saving in the initial file size per hour for 1080p HD (or smaller) files.

Also, while you mention the so-called edge optimisations, you don’t inform the boardroom readers that it’s in their interest to stop being Luddites and to stop looking to the US models for milking the last remnants of profit from the MPEG-2 format in cable etc.

The VM (NTL/TW) accountants commented recently that they are using the MPEG-2 codec on their UK cable network and the antiquated V+ (the TW TVDrive), rather than the far more advanced AVC STB that NTL trialled, which could be sitting beside every Virgin Media customer’s TV, ready to take the new AVC headend dual-codec realtime encoders that exist today.


So where do we stand? This is what’s needed and should be encouraged: make the ISPs turn on the multicasting capabilities that have existed in every single commercial router since day one, all the way to the end user’s cable modem and PC.

Insist that the likes of the Azureus P2P/torrent coders take up the challenge to incorporate this multicasting capability into the codebase, or at the very least use multicast tunnels over the IPv4/IPv6 ISP networks. There already exists Java multicasting DHT code, ‘bamboo’ (http://bamboo-dht.org/tutorial.html), and Java tunnel code, mTunnel (http://www.cdt.luth.se/~peppar/progs/mTunnel/), that some willing Java coder could use as the test base to get something working and out there. NO ONE seems interested in saving masses of bandwidth, and that includes your precious/inadequate upload bandwidth rates.

Again, Java-based Azureus knows about your local LAN connections and can use them as a separate, preferred download option over the WAN. So, as I’ve been saying for MANY years, put these three open Java codebases together into a multicasting hybrid and we are 80%+ of the way there already. Someone capable and willing must at the very least try to put a trial app together to see if it will pan out, PLEASE, for everyone’s benefit – not least the ISPs and the end users alike...

Multicasting, DHT and P2P save MASSES of BANDWIDTH.

Make the Virgin Medias of the world turn on their multicasting to the users today and we can begin a whole new world of real innovation and reasonable profit.
 
There are some interesting points to pick up on here. Firstly, to say that popper is completely right: multicasting is a viable technical solution for mass-market programming, but for whatever reason it has never been adopted on the internet. I think back to UUNET’s multicast product from the mid-1990s – the technology is not new – but networks have been built without it.

I wonder whether we will ever see it make a comeback, as the sort of locally routed p2p network that I have described is likely to be a better long-term option, even if it might be harder to do. Using p2p and pushing the routing deeper somehow seems more evolutionary, whereas “going back” and retrofitting multicast is maybe too revolutionary.
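For readers who haven’t seen multicast from the application side, joining a group takes only a few lines. This is a minimal receiver sketch – the group address and port are arbitrary illustrative values – and it only works end to end where the intervening routers have multicast enabled, which is precisely the deployment gap being discussed:

    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5004   # 239/8 is the administratively scoped range

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the group: the kernel signals the upstream router via IGMP.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, addr = sock.recvfrom(1500)
        print(f"{len(data)} bytes from {addr}")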

On the codecs: of course codecs will increase efficiencies, but I will stand by the data, as the calculations were based on what was (and still is) available from iTunes at the time. All I did was take the file size, divide by the duration to get bits per second, and then scale it according to the length of film I wanted to represent.

Elsewhere in the series, I referenced an 8 GB Casino Royale movie from Mininova. It runs 144 minutes, which makes it 24% smaller per minute than the reference 1080p file, because it is a 720p rip.
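Applying the same method to that file gives the rate directly:

    size_bytes = 8.06 * 1024**3    # the Casino Royale torrent
    duration_s = 144 * 60          # 144 minutes

    mbps = size_bytes * 8 / duration_s / 1e6
    print(round(mbps, 1))          # ~8.0 Mbit/s for the 720p rip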

Looking through the list of available files, it is certainly fair to say that most downloads are of the 4 GB variety, which shows that even the market for pirated content regulates itself to some extent. I am advised that this has been happening of late: people are settling for standard resolution because it is four times quicker than HD (they can get four times as much content in the same amount of time – “value for money”, Mininova style?).

Getting back to the codecs: yes, we will see incremental gains from them, but that only shifts the burden onto the memory in the hardware. Better compression requires faster processors, and it is a fine balance.

I prefer to focus on what is an obviously inefficient use of the natural bandwidth resources, because this needs a step change if the potential of the consumer electronics devices is to be exploited.

My ultimate conclusion is that because the cost of 1080p is £2.10, the applications will have to serve at sub-optimal quality, and the ISPs will be the bad guys until the structural problems are addressed.
 
Thanks Jeremy, and please forgive the bad typing. I wish we could edit posts in blogs, but for that we would need to look to the likes of REBOL (http://www.rebol.net/) blog scripting ;)

It’s a refreshing change to see a professional writer recognise the benefit of available tech that’s already deployed, powered and ready to be used, IF THEY HAVE THE WILL and the skill.

I also take your point about it ‘making a comeback as the sort of locally routed p2p network’.

For me and you it’s such an obvious thing, as we both appear to remember the innovative MBone days ;)

So-called revolution is a good thing when the kit is already there.

They just need to realise it’s there for the taking if they step back and stop trying to come up with a way to make it pay.

Turn it on and watch it pay for itself in massive savings on P2P traffic alone, not to mention the near-VOD and all the other outlined options that can then ride on it instead of unicasting.

I mention multicast tunnels because, as you already know (but for the benefit of the readers), it’s going to be VERY HARD to get the likes of the VM boardroom to OK the simple change and turn on the routers’ multicasting option, so end-user, on-the-fly multicast tunnels are the next best thing.

Sure, you get reduced throughput because of the overhead, but it also opens up massive options, as we can then ride the standard unicast IPv4 web network inside our MC tunnel.

BTW, the BBC have been running AVC multicast trials in the UK for a very long time now, but most of us can’t actually try them because most UK ISPs don’t have multicasting turned on, and that’s a crying shame. An MC tunnel would solve that, if you could find the end connection or someone started an MC server on the BBC MC network that you could connect to.

It’s even possible to have the classic server/client app make it easy (again, REBOL/View GUI scripting can help there; if you’re interested I can find the URLs to the whiteboard MC). By following some basic MC rules, have the server select a few clients – say between 3 and 10 – and send one set of data/content to all of them as one operation, not 10 separate copies.

Sure, there will be naysayers saying it can’t be done because of such and such, and it’s true... as long as no one tries to make that app.

Simple, really: if you’re able to make it, then do so and get it out there. I’ve pointed you in the right direction. Do it and get fame, and perhaps fortune, as the first innovator of multicasting P2P for the masses. Joost didn’t reply to my idea, nor did Vuze (the new Azureus beta video app). Perhaps you, the reader, can do it? Or commission it as an open platform framework with a fully working beta app (I get a free lifetime copy and exec use for my idea and inspiration to the projects, LOL).

On the matter of codecs, I admit that MPEG-2 is the standard measure today, so fair point there (although, given the massive EU and Sky moves to IPTV and the AVC codec trials and deployments, that will change far quicker than you might think)...

It’s also interesting to note that some of the major heads of the cable industry (alas, no one in the VM boardroom yet) have hinted at multicasting, but they seem to associate it with IPv6 and DOCSIS 3 as their Trojan horse against the other market leaders...

They don’t seem to realise they already have that capability, and it’s powered up and waiting to be turned on. Perhaps someone should tell them, Jeremy ;)
 