Mark Townsley

This semester, I will be co-teaching the course “Internet Protocol Success” with my colleague Mark Townsley, to 3rd-year and M1-ACN students at Ecole Polytechnique, and we will be expertly assisted by Jean-Louis Rougier, who comes over from Telecom-ParisTech to lend us a hand with this.

“Internet Protocol Success” is almost a soft-skills course: the objective is not actually to learn the nitty-gritty technical details of this, that, or the other Internet protocol, but to answer the question “why did protocol X succeed whereas protocol Y failed?”.

I write “almost a soft-skills course” because, of course, to answer that question, we definitely need to understand every little detail of protocols X and Y. But understanding the technical details is just not enough.

  • Why did the routing protocol RIP ultimately fail, whereas OSPFv2 became a success?

That question can (easily) be answered by knowing just the nitty-gritty technical details: RIP has convergence issues (count-to-infinity, in all its variants) which are absent in OSPFv2, by way of the latter being a link-state protocol.
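To make that failure mode concrete, here is a minimal sketch of count-to-infinity, using a hypothetical three-node chain A–B–C with destination C, RIP’s hop-count metric, and RIP’s convention that 16 hops means “unreachable”:

```python
INF = 16  # RIP's "infinity": a hop count of 16 means unreachable

# Chain topology A -- B -- C; distances to destination C before the failure:
# B reaches C directly (1 hop), A reaches C via B (2 hops).
dist = {"A": 2, "B": 1}

# The link B--C fails: B loses its direct route to C.
dist["B"] = INF

rounds = 0
while dist["A"] < INF or dist["B"] < INF:
    # B hears A's (stale) vector and believes A can still reach C...
    dist["B"] = min(dist["A"] + 1, INF)
    # ...and A then re-learns its route to C through B.
    dist["A"] = min(dist["B"] + 1, INF)
    rounds += 1

# Both nodes slowly "count to infinity" before agreeing C is unreachable.
print(rounds, dist)
```

In a link-state protocol such as OSPFv2, each router floods the failure and recomputes shortest paths from a full topology map, so this mutual-deception loop between neighbours simply cannot arise.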

However:

  • Why did IPv4 become a success, whereas IPX (the Xerox, then Novell, protocol) and CLNP both failed?

That’s a much harder question to answer: if we look just at the nitty-gritty technical details, IPX and CLNP appear more complete, more feature-rich, and more attractive. So why did IPv4 succeed, whereas IPX and CLNP failed? We have to (and in the course we do) look at other, non-technical, factors.

IPv4-vs-IPv6

Even more so:

  • Why has IPv6 not replaced IPv4?

IPv6 came out in 1998 as an intended replacement for IPv4, designed by the same organisation (the IETF), with all the features of IPv4 (and then some) and, objectively, a nicer design. But why oh why, now in 2017, do people still (in the vast majority) use IPv4? That question, too, cannot be answered purely by considering technical arguments, although technical arguments do also largely inform the answer, as we will see in the course.

An IETF Plenary Meeting

This is an incredibly fun, but also challenging, course to teach. We base ourselves on the findings of RFC5218, but then expand on them with our more than 40 years of combined experience as protocol engineers, internet designers, spec writers, and in various leadership functions, all this in various standards bodies: from the IETF through the ITU and IEEE to industry alliances such as the Broadband Forum and the G3-PLC Alliance.

What’s particularly interesting is that students get to “pick a problem domain, study the competing technologies therein, and apply the methodology and metrics of the course” rigorously, to establish a trend:

  • if it is an “old enough” problem domain where the dust has settled, to precisely identify why the technology that succeeded did succeed over the competition;
  • if it is a “new” problem domain, to apply the metrics to predict which from among the competing technologies will come out on top.

So, while we know where the course starts, we almost never know where it will end – keeping us on our toes throughout the course.

By the way, we believe that the methodology and these metrics are applicable to technology in general, well beyond “Internet Protocols”. In years past, we’ve had students analysing problem domains including:

  • Gaming protocols
  • Cryptocurrencies
  • Local computer connectivity (RS-232, USB, Thunderbolt, …)
  • Smart Grid Technology

Today, right now, is the first lecture in this course, and as always I am eager to see how it will evolve, and what cool problem domains the students will bring us to study this year.