Category Archives: telco

Link Exchange (Telconomics)

    1. Basic Analysis of Entry and Exit in the US Broadband Market (2005–2008): broadband is a too-broad category.
    2. Margin Squeeze in the Telecommunications Sector: A More Economics-Based Approach. An issue in which the European Union Competition Authorities are ahead of the US Supreme Court.
    3. An Empirical Analysis of the Demand for Fixed and Mobile Telecommunications Services. Mobile calls are more inelastic than local calls.
    4. Network effects, Customer Satisfaction and Recommendation on the Mobile Phone Market. Supply-side effects rule over demand-side effects in the mobile market.

Reverse Engineering Network Protocols

The black arts of reverse engineering network protocols have been lost. These days, every network protocol seems to run over HTTP and handle lots of XML; every network engineer of the past decades would just cringe at the thought of it.

Complete specifications of network protocols like those offered in RFCs have always been luxuries: the product of idealistic minds of the past like Jon Postel, they only exist for the better known protocols of the Internet. For the rest, their details could only be known by reverse engineering: and the truth is that it requires a deep understanding of traditional software debugging, using tools like IDA and/or OllyDbg, especially for protocols of the binary kind.

Witness the case of Skype: a recent decompilation of its binaries using Hex-Rays was publicly sold as a reverse engineering of the whole protocol suite. Nothing could be further from the truth.

Equipping yourself with a kit of the best tools is the surest path to success:

  • Sniffers are boring, read-only tools to see through the network layers. More fun can be had by crafting network packets, as recently simplified by tools like Ostinato and scapy (see the sketch after this list).
  • Another set of tools focuses on decoding text-like protocols: reverx (paper), and the impressive netzob
  • And the most interesting ones, tools that are cross-overs between debuggers and sniffers: oSpy, a utility to sniff the network calls of applications, and windbgshark, an extension that integrates wireshark within windbg to manipulate virtual machine network traffic
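
To give a taste of how little code packet crafting takes nowadays, here is a minimal scapy sketch; the destination address, port and payload are made-up placeholders, and sending raw packets requires the proper privileges:

    # A minimal packet-crafting probe with scapy; the destination address, port
    # and payload below are made-up placeholders, and sending raw packets
    # requires sufficient privileges (e.g. root).
    from scapy.all import IP, UDP, Raw, sr1, hexdump

    # Build the probe layer by layer, the way a guess at an unknown binary
    # protocol would be assembled during reverse engineering.
    probe = IP(dst="192.0.2.10") / UDP(dport=4000) / Raw(load=b"\x01\x00PING")

    hexdump(probe)                                  # inspect the crafted bytes
    reply = sr1(probe, timeout=2, verbose=False)    # send it, wait for one answer
    if reply is not None:
        reply.show()                                # dissect whatever comes back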

It’s said that in computer science there’s only one sure way to find a research topic to write papers about: just add automatic to any problem statement, and a whole area of research is born (aka the meta-folk theorem of CS research). Most of the time the topic is obviously undecidable and a huge effort is needed to produce tools of real practical value, but this doesn’t seem to stop researchers from producing interesting proofs of concept. Reverse engineering being such a painstaking manual process, it’s a perfect target for this way of producing research, and very different methods and approaches have been tested: the Smith-Waterman and Needleman-Wunsch algorithms from bioinformatics, with a recent open-source implementation combined with statistical techniques; automata algorithms to infer transitions between states; static binary analysis; and runtime analysis of binaries, because access to the runtime call stack is very convenient in distributed computing contexts. Finally, a very interesting project was Discoverer @ MSR: they announced very high success rates for very complex protocols (RPC, CIFS/SMB), but the tools were never released.

Download (PDF, 31 KB)
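
As a toy illustration of the bioinformatics approach just mentioned, the sketch below runs a plain Needleman-Wunsch global alignment over two captures of an imaginary binary protocol, so that constant fields line up and variable ones stand out; it is a minimal sketch of the idea, not a reimplementation of any of the cited tools, and both messages are made up.

    # Toy Needleman-Wunsch alignment of two hypothetical protocol captures, to
    # show how sequence alignment exposes fixed vs. variable fields.

    def needleman_wunsch(a: bytes, b: bytes, match=2, mismatch=-1, gap=-2):
        """Return both sequences aligned, with None marking an inserted gap."""
        n, m = len(a), len(b)
        # Score matrix, initialized with gap penalties along the borders.
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        # Traceback from the bottom-right corner.
        out_a, out_b, i, j = [], [], n, m
        while i > 0 or j > 0:
            if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                    match if a[i - 1] == b[j - 1] else mismatch):
                out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
            elif i > 0 and score[i][j] == score[i - 1][j] + gap:
                out_a.append(a[i - 1]); out_b.append(None); i -= 1
            else:
                out_a.append(None); out_b.append(b[j - 1]); j -= 1
        return out_a[::-1], out_b[::-1]

    # Two made-up captures: same header bytes, different lengths and payloads.
    msg1 = bytes.fromhex("0102000448454c4c4f")   # ... "HELLO"
    msg2 = bytes.fromhex("0102000242594500")     # ... "BYE\0"
    aligned1, aligned2 = needleman_wunsch(msg1, msg2)
    for x, y in zip(aligned1, aligned2):
        col_a = "--" if x is None else format(x, "02x")
        col_b = "--" if y is None else format(y, "02x")
        print(col_a, col_b, "fixed" if x == y else "varies")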

This post would not be complete without the mention of the best inspiration for every reverse engineer in the network field: SAMBA, the magnum opus of Andrew Tridgell, an open-source interoperability suite to let Linux and Windows computers talk to each other. A book about the protocol and the project, Implementing CIFS, is as good as any popular technical book can get: he makes it look so easy, even a child could do it.

Captured Regulators on Natural Monopolies

Due to the high competition and low profit margins of the mobile network operators, an odd market structure with no precedent is taking shape in the UK: there will be only two mobile networks but five major brands, MVNOs aside. The first network is the result of the merger of T‑Mobile and Orange (Everything Everywhere) and the network sharing agreement between T‑Mobile and Three (Mobile Broadband Network Limited); the other network, Cornerstone, is the result of the network sharing agreement between Vodafone and Telefónica (O2).

There are multiple ways to analyze and interpret this situation: on one side, regulated and mandated fragmentation against natural monopolies/duopolies is a disaster waiting to happen that lowers the level of network investment, so the market always reverts back to its natural structure, as the history of Ma Bell clearly shows; on the other side, the companies are forced to keep a façade of competition under multiple commercial entities that resell network access to the consumer, a messy situation that only a captured regulator would agree to.

What I do know is that this experiment could only happen in the UK and not under the current rules of the European Union, but given how influential and imitated the policies of the OFCOM regulator are, it’s only a matter of time before other states follow. And I wonder how this plays with BEREC, the European telco super-regulator: will network sharing create duopolies in every national market under the pretext of the incipient 4G deployment, leaving just some mega-marketing companies at the European level? Will the incentives for network investment be perfectly aligned under such a structure?

The Need for Speed^WCapacity

Computer networks are used every day, but with a very limited understanding of the consequences of their cumulative aggregation. Network coding is the field that devises techniques for their optimal utilization, to reach the maximum possible transmission rate in a network, under the assumption that the nodes are somewhat intelligent and able to alter the network flow rather than just forward it. It’s still nascent, so the practical impact of its results is quite limited: for example, it would be very useful to have techniques and a tool to estimate the real network capacity in a multicast/P2P network, except that it’s still an open problem. Fortunately, the following paper offers the first worthy approach to close this question:

Download (PDF, 839 KB)

Although, to be resistant to common Internet attacks, network coding should be accompanied by homomorphic signatures.
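
To make the basic idea concrete, here is a tiny sketch of network coding over GF(2) on the classic butterfly topology: the bottleneck node forwards the XOR of its two incoming packets, and each sink still recovers both messages even though the bottleneck link carries a single packet per use. The topology and payloads are the textbook toy example, not the capacity-estimation technique of the paper above.

    # The classic butterfly example of network coding over GF(2): two messages
    # must both reach two sinks, but the middle (bottleneck) link can carry only
    # one packet per use, so the coding node transmits their XOR instead.

    def xor(p: bytes, q: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(p, q))

    msg_a = b"\xde\xad\xbe\xef"    # message injected on the left branch
    msg_b = b"\x12\x34\x56\x78"    # message injected on the right branch

    coded = xor(msg_a, msg_b)      # the single packet sent over the bottleneck

    # Sink t1 also receives msg_a over its side link, sink t2 receives msg_b.
    recovered_b_at_t1 = xor(coded, msg_a)
    recovered_a_at_t2 = xor(coded, msg_b)

    assert recovered_b_at_t1 == msg_b and recovered_a_at_t2 == msg_a
    print("both sinks decode both messages from one bottleneck transmission")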

Modelling How Technological Change Influences Economic Growth

“You can see the computer age everywhere but in the productivity statistics.” With this harsh remark, Robert Solow (1987 Nobel Prize in Economics) summed up his celebrated “productivity paradox”, which itself started a research frenzy to find counterexamples to refute it. It took more than a decade, because there is a strong inter-relationship between information technologies and the human capital that they are at the same time complementing and substituting for, but in the end the paradox could be refuted. Furthermore, another profound change, with much more evidence against the paradox, occurred in parallel to the wide expansion of computer technology and was also easier to measure and prove: the global spread of the digital mobile phone. To get a better understanding of its true economic impact, nothing beats summing up the relevant literature on the topic.

From a purely microeconomic perspective, Jensen was able to prove that the introduction of mobile phones increased the profits of North Kerala’s fishermen by a whopping 8%, while reducing the final consumer price by 4%: better communications enabled access to wider markets, expanding the trading possibilities beyond those offered by the local fish market and enhancing overall market efficiency via a stronger law of one price. From a macroeconomic point of view, Waverman used statistical and econometric techniques to isolate cause from effect, finding that an increase of 10 devices per 100 people in a developing country added 0.6 points to GDP growth per capita and 0.5 to GDP growth: these results bring out the transformative power of technology on global economic activity.

And to gain a better understanding of how technological innovations are transmitted into the economy, I’ve put together a stylized model in an Excel workbook offering a mechanistic explanation of how a successful general purpose technology is able to impact economic growth in such a significant way. In the first sheet, a general Bass model is used to quantify the transition to digital mobile technology from 1996 to 2011 (handling network effects in a coarse manner; they would be better modelled using Beckstrom’s law). In the second sheet, using the previously calculated penetration level of digital mobile phone technology as one of the inputs, a neoclassical economic growth model (Solow-Swan) is utilised to explain its economic impact: note that this particular model was the first to introduce technological progress as a fundamental variable to explain economic growth, modelling it as a component that increases the productivity of the labour factor and, at the same time, complements capital accumulation, itself divided into periods of decreasing value to account for the technological depreciation process. The only negative aspect of this model is that technological progress is assumed to be constant over the full period of analysis, leaving aside the possibility of a growing innovation rate, or a much more realistic decreasing one. The other variables taken into account by the model are capital depreciation, the savings rate, population growth and the relationship between capital and labour in the resulting economic production. Other technological changes could be analysed with the same Excel workbook, since they feature similar diffusion processes and economic impacts: the adoption of the car, substituting for the horse; the diffusion of electricity; or the diffusion of the computer, replacing the typewriter.
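
For readers without the workbook at hand, the sketch below reproduces the same two-step mechanics in plain Python: a Bass diffusion curve for mobile penetration feeding a Solow-Swan model in which labour-augmenting technology grows faster while the diffusion is under way. All coefficients are illustrative placeholders, not the values calibrated in the Excel file.

    # Step 1: Bass diffusion of digital mobile telephony. Step 2: a Solow-Swan
    # model whose labour-augmenting technology grows faster while diffusion is
    # under way. Every coefficient is an illustrative placeholder.

    ALPHA = 0.3            # capital share in Y = K^a (A*L)^(1-a)
    SAVINGS = 0.22         # savings rate s
    DEPRECIATION = 0.05    # capital depreciation d
    POP_GROWTH = 0.01      # population growth n
    BASE_TECH_GROWTH = 0.01    # baseline rate of technological progress
    TECH_BOOST = 0.02          # extra progress attributed to mobile diffusion (assumed)

    def bass(p=0.01, q=0.4, market=1.0, periods=16):
        """Cumulative adoption F(t) of the Bass model, one value per year."""
        f, path = 0.0, []
        for _ in range(periods):
            f += (p + q * f / market) * (market - f)
            path.append(f)
        return path

    def solow(adoption, k0=1.0, labour0=1.0, tech0=1.0):
        """Output per capita when mobile adoption speeds up labour-augmenting tech."""
        k, labour, tech, output = k0, labour0, tech0, []
        for f in adoption:                  # one iteration per year, 1996-2011
            y = k ** ALPHA * (tech * labour) ** (1 - ALPHA)
            output.append(y / labour)
            k += SAVINGS * y - DEPRECIATION * k
            labour *= 1 + POP_GROWTH
            tech *= 1 + BASE_TECH_GROWTH + TECH_BOOST * f
        return output

    adoption = bass()
    for year, (f, y) in enumerate(zip(adoption, solow(adoption)), start=1996):
        print(f"{year}: penetration {f:5.1%}, output per capita {y:0.3f}")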

Later economic models supplement the previous one by introducing the accumulation of human capital alongside technological change, giving birth to the endogenous economic growth theories that better explain the relationship between computer technology and economic growth: even if information technologies are mostly deployed for the purpose of substituting for the labour factor, their true nature is incredibly complementary to human capital, though this is more difficult to prove econometrically. Last but not least, the entertainment potential of computer technology weighs negatively on productivity growth statistics: for example, the 5 million hours of Angry Birds played every day should also be counterbalanced in some other way.

OpenFlow to Free the Closed Internet

The biggest paradox of the Internet is that, being the epitome of freedom and openness, its actual implementation is even more closed than the old mainframes. And despite the fact that the whole thing has always been properly documented in RFC memorandums, the oddities and peculiarities of the concrete implementations have always lain hidden within router images, even for the most important inter-domain routing protocols, the biggest concern during interoperability tests.

So it turns out that the real definition of openness is a very nuanced one: in the software world, licensing and governance are paramount, while in the networking world it is standards and interoperability that are crucial and strategic.

Fortunately, OpenFlow is opening a new window of openness in this closed world: its approach to Software-Defined Networking enables reprogrammable traffic management techniques at Layer 2, much like MPLS does at Layer 3, but in much more heterogeneous environments. Its first version is very feature-poor, missing IPv6, QoS, traffic shaping and high availability; lacking a killer app, its general adoption will take time, if it happens at all. Even so, its ability to complement recent virtualization technologies in the data center, and its role as the only practical way for researchers to try out new experimental protocols, make it a key technology to watch in the coming years.
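
To show what reprogrammable traffic management means in practice, here is a deliberately simplified model of the match-action abstraction at the heart of OpenFlow, written in plain Python rather than the actual OpenFlow wire protocol or any controller framework: a switch holds a prioritized flow table installed by a controller, and packets matching no entry are punted back to the controller.

    # A deliberately simplified model of the OpenFlow match-action abstraction:
    # plain Python, not the OpenFlow wire protocol nor any controller framework.
    from dataclasses import dataclass, field

    @dataclass
    class FlowEntry:
        match: dict            # header fields that must match, e.g. {"dl_dst": ...}
        actions: list          # e.g. ["output:2"] or ["drop"]
        priority: int = 0

    @dataclass
    class Switch:
        table: list = field(default_factory=list)   # the controller programs this

        def install(self, entry: FlowEntry):
            self.table.append(entry)
            self.table.sort(key=lambda e: -e.priority)

        def handle(self, packet: dict):
            for entry in self.table:
                if all(packet.get(k) == v for k, v in entry.match.items()):
                    return entry.actions
            return ["packet_in_to_controller"]       # table miss: ask the controller

    # A hypothetical controller policy: forward a known MAC, drop a blacklisted one.
    sw = Switch()
    sw.install(FlowEntry({"dl_dst": "aa:bb:cc:00:00:01"}, ["output:2"], priority=10))
    sw.install(FlowEntry({"dl_src": "de:ad:be:ef:00:00"}, ["drop"], priority=100))

    print(sw.handle({"dl_src": "de:ad:be:ef:00:00", "dl_dst": "aa:bb:cc:00:00:01"}))  # drop
    print(sw.handle({"dl_src": "11:22:33:44:55:66", "dl_dst": "ff:ff:ff:ff:ff:ff"}))  # table miss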

The Curious Case of the Diverging Browser’s Caches

Browser caches fulfil several aims, among others saving network bandwidth and reducing page loading times, which in turn lowers the time cost that loading delays impose on users. For example, suppose the typical user spends an average of 450 hours/year surfing the web at a rate of 120 pages/hour, with an implied wage of 12€/hour, a fall in loading time due to the cache of 1 second on desktop and 10 seconds on mobile, and a caching success rate of 40%; then we can easily estimate that the typical user saves between 72€/year (computer) and 720€/year (mobile) just by activating the browser’s cache.
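
Spelled out as code, with the very same assumed figures:

    # The back-of-the-envelope savings estimate above, using the same assumptions.
    HOURS_PER_YEAR = 450        # time spent browsing
    PAGES_PER_HOUR = 120
    WAGE = 12.0                 # implied wage, euro/hour
    HIT_RATE = 0.40             # fraction of page loads served from the cache
    SAVED_SECONDS = {"computer": 1.0, "mobile": 10.0}   # loading time shaved per hit

    pages = HOURS_PER_YEAR * PAGES_PER_HOUR             # 54,000 page loads a year
    for device, seconds in SAVED_SECONDS.items():
        hours_saved = pages * HIT_RATE * seconds / 3600
        print(f"{device}: {hours_saved:.0f} h/year, {hours_saved * WAGE:.0f} EUR/year")
    # computer: 6 h/year, 72 EUR/year; mobile: 60 h/year, 720 EUR/year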

Therefore, given current storage and bandwidth costs, the implied break-even point on the use of the browser’s cache is always positive, even if it stored every page the user will ever visit for decades, a period longer than the average life of any device. This will still hold true not because of the exponentially decreasing costs of storage and bandwidth, but simply because labor costs increase linearly over time. But setting labor costs aside, considering only the technological trends and taking into account that mobile bandwidth costs will always be several orders of magnitude higher than fiber and cable bandwidth, we face the curious case that using the browser’s cache will soon stop making any sense on a computer while still being profitable on a mobile, and that for a period of several decades, even accounting for the higher mobile storage costs. Note that this is just one of the many divergences that could appear in the future evolution of the various Internet browsing devices, and correcting them will entail a much greater instruction density per transmitted byte.

The key point of this and other analyses always rests on the relative differences in the price evolution of magnetic storage (Kryder’s law, 2x every 13 months), the circuits’ scale of integration (Moore’s law, 2x every 18 months) and bandwidth throughput (Nielsen’s law, 2x every 21 months), among others. And we should put the greatest emphasis on the last one, since, being the one with the slowest evolution, it will also be the most limited resource and, therefore, the one that ends up dominating the final price of any computer system. On the other hand, storage will be the resource most used to lessen the disadvantages and deficiencies brought by telecommunications’ slower evolution, following Jevons’s paradox, which reminds us that increases in the efficiency with which a resource is used tend to increase, rather than decrease, the rate of consumption of that resource.

On the subject of the expected evolution of telecommunications, it is always necessary to separate the trends of the different underlying technologies (fiber, cable and wireless). The most optimistic would certainly lean on Edholm’s law, which predicts that the throughput of the different technologies will end up converging as a result of the law of diminishing marginal returns on the fastest ones, even when taking into consideration the parallel increases in throughput that they have all been experiencing. But it is Cooper’s law on the efficiency in the use of the electromagnetic spectrum (2x every 30 months) that highlights the underlying idiosyncrasy of wireless, since it exploits a natural resource with no possibility of being expanded: analyzing its efficiency gains over the last 100 years, improvements in coding methods explain only 0.6% of the enhancement; the enlargement of the spectrum under utilization, a mere 1.5%; and the more efficient use of the spectrum through its better confinement, the remaining 97.9%. Optical fiber stands in hard contrast to any wireless technology (Butters’ law, 2x every 9 months), just another reason to expect that the differences between the software applications available on mobile devices and those on non-mobile ones using optical fiber can only widen over the years, the raison d’être of the mobile software Cambrian explosion.
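
The comparison is easy to make concrete: compound each of the doubling times quoted in the last two paragraphs over the same decade and watch how fast the gaps open up (a sketch of relative improvement factors, not a price forecast):

    # Compounding the doubling times quoted above over a single decade; a sketch
    # of relative improvement factors, not a forecast of actual prices.
    DOUBLING_MONTHS = {
        "storage (Kryder)": 13,
        "transistors (Moore)": 18,
        "bandwidth (Nielsen)": 21,
        "spectral efficiency (Cooper)": 30,
        "fiber capacity (Butters)": 9,
    }
    HORIZON_MONTHS = 120   # ten years

    for name, months in sorted(DOUBLING_MONTHS.items(), key=lambda kv: kv[1]):
        factor = 2 ** (HORIZON_MONTHS / months)
        print(f"{name:>30}: x{factor:,.0f} in ten years")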

Telco Market Distortions

The FCC allowed small and rural telcos (local exchange carriers, LECs) in the USA to charge higher access fees to long-distance and wireless companies (AT&T, Sprint, Verizon) in order to subsidize them, under the auspices of the Telecommunications Act of 1996. Abusing the prerogative, they partnered with conference call providers and providers of other shady services, giving birth to traffic pumping: generating a high volume of incoming calls, far above typical rural usage, to charge millions of dollars in fees to the long-distance and wireless companies, and splitting the revenues with the service providers. Fast-forward to the present: technological advances and new business models are having a hard time operating under this old set of rules, which hampers new innovative services like Google Voice.

Every distortion introduced by regulators in the free market and the natural state of technology, however well-intended, sows the seeds of its own self-destruction.

Hertz, No Moore


More than half of all computers aren’t computers anymore. Smartphone shipments have surpassed PC shipments. The transition is done. But in this new era of mobile personal computing, the limiting factor is not the CPU, it’s the spectral efficiency of the whole mobile environment, experienced by the user as goodput. At least 1 Mbit/s is needed to get an optimal browsing experience on a mobile phone.

Evolution of average versus peak spectral efficiency over time

As in Moore’s law, the growth is exponential, but with a rather less pronounced slope. And underlying both are economic models that serve as a self-fulfilling prophecy and as a barrier to what technology can achieve in the future: the costly deployments of mobile networks, financed by debt, parallel the semiconductor chip fabrication plants that Rock’s law models as a constraint on transistor integration limits.

Doom&Gloom for MNOs: The Writing’s on the Wall


These graphs show when carriers might expect to see costs exceed revenues, based on a new Tellabs study. Currently, stock markets don’t reflect these predictions, with forward P/E ratios at about 10.

Important assumptions of the underlying model are: sevenfold traffic growth by 2015 for voice and data combined, with a revenue decline per gigabyte of 80–85%; and data transport using only GSM/3G technologies (HSPA/HSPA+), since LTE will not be widely deployed by 2015. There are also some questionable assumptions: a flat-rate pricing model (telcos will lobby their way out of this trap), and a high percentage of data offloading onto indoor networks, which is a key assumption of the model and a big unknown.
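
A toy version of the cost-versus-revenue projection behind those graphs, just to make its mechanics explicit; the traffic and revenue figures are rough readings of the assumptions above, while the starting cost margin and the yearly cost decline are my own assumptions, not Tellabs’ actual numbers:

    # A toy cost-versus-revenue projection in the spirit of the Tellabs model:
    # traffic grows ~7x between 2011 and 2015, revenue per GB falls ~80-85%,
    # and cost per GB falls more slowly (the cost figures are assumptions).
    TRAFFIC_GROWTH = 7 ** (1 / 4) - 1                    # ~63% per year, 7x in 4 years
    REVENUE_PER_GB_DECLINE = 1 - (1 - 0.82) ** (1 / 4)   # ~82% fall over the period
    COST_PER_GB_DECLINE = 0.20                           # assumed yearly efficiency gain

    traffic, rev_per_gb, cost_per_gb = 1.0, 1.0, 0.60    # 2011 index values (assumed margin)
    for year in range(2011, 2016):
        revenue = traffic * rev_per_gb
        cost = traffic * cost_per_gb
        flag = "  <-- costs exceed revenues" if cost > revenue else ""
        print(f"{year}: revenue {revenue:5.2f}, cost {cost:5.2f}{flag}")
        traffic *= 1 + TRAFFIC_GROWTH
        rev_per_gb *= 1 - REVENUE_PER_GB_DECLINE
        cost_per_gb *= 1 - COST_PER_GB_DECLINE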

Mobile telcos will experience massive profit compression in the future, redefining the value chain that has existed for almost two decades: network equipment and mobile terminal manufacturers have run on almost zero profit margins, whilst MNOs enjoyed high ones. Profits are now migrating toward new smartphone services, but that’s a story for another post.