Research that matters
The Internet is one of the most significant technological developments of our time. It is a tool for global communication and interconnectivity, but what fundamental ideas make it all work together? In what direction is it evolving? These questions inspire my research, which spans the areas of routing, measurements, network security, and network virtualization.
Future Research in Routing and Performance Measurements
The Internet is a network of networks, and routing is the process of finding and selecting paths along which to send traffic. The Border Gateway Protocol (BGP) is today the only protocol that connects different networks together; it is sometimes referred to as the "glue that keeps the Internet together". Researchers have studied this critical inter-domain routing protocol for over a decade, but there are nevertheless still open questions. There are three areas where I envision more research to be funded in the next 5-10 years: (1) the problem of safety and non-convergence arising from complex policy interactions, (2) inter-domain traffic engineering, and (3) the problem of BGP security.
Firstly, the Internet is composed of competing companies that have different strategic economic policies. Tim Griffin et al. predict that the Internet routing system could enter a state of non-convergence so disruptive as to effectively bring down large portions of the Internet. There are no "early warning systems" that allow operators to detect problematic routing conditions. Attempts to solve these problems have met with moderate success, but all of these measurement-based approaches suffer from systematic and fundamental limitations. Despite those limitations, the problem is significant, and further research must be conducted along the lines of a system that parses configuration files from routers, analyzes their policies, and detects problematic conditions [2, 3, 4]. This is essential to ensure the robustness of the Internet routing system.
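To make the idea concrete, a minimal sketch of such a detection step is shown below, under the simplifying assumption that conflicting policy preferences between networks have already been extracted (e.g., from router configurations) into a directed "dispute" graph; a cycle in that graph signals a potential non-convergence condition. The graph representation and edge sets here are hypothetical, not the actual system.

```python
def has_policy_cycle(edges):
    """Detect a cycle in a directed graph given as (u, v) pairs."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])

    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / in progress / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:    # back edge -> cycle found
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# Three ASes, each preferring the route through its neighbour:
# the conflict graph forms a 3-cycle, so convergence is at risk.
print(has_policy_cycle([(1, 2), (2, 3), (3, 1)]))   # True
print(has_policy_cycle([(1, 2), (2, 3)]))           # False
```

A real system would of course have to derive the conflict edges from vendor-specific configuration languages, which is where most of the research difficulty lies.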
Secondly, although it is important to analyze router configuration files for safety, these files are not always shared between networks. Traffic engineering is the process in which Internet Service Providers (ISPs) try to balance the traffic across their network (and the Internet) to improve performance for end-users. In the inter-domain case this is difficult, as the secret business strategies of many different ISPs interact. Today operators often adjust their routing policies in a "tweak and pray" fashion, which is more like rolling dice than sound inter-domain traffic engineering. A realistic inter-domain topology model would allow operators to answer what-if questions before deploying a change.
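As a toy illustration of such a what-if evaluation, the sketch below ranks candidate routes the way BGP is commonly configured in practice: customer routes are preferred over peer routes, and those over provider routes (via local preference), with ties broken by AS-path length. The topology and route candidates are hypothetical.

```python
# Typical (assumed) local-preference values by business relationship.
LOCAL_PREF = {"customer": 300, "peer": 200, "provider": 100}

def best_route(candidates):
    """candidates: list of (relationship, as_path) tuples."""
    return max(candidates,
               key=lambda r: (LOCAL_PREF[r[0]], -len(r[1])))

routes = [
    ("provider", [2, 5]),        # shorter path, but via a paid provider
    ("customer", [7, 8, 9, 5]),  # longer path, but revenue-generating
]
print(best_route(routes))  # ('customer', [7, 8, 9, 5])

# What-if: the customer link is withdrawn -> traffic shifts to the provider.
print(best_route(routes[:1]))  # ('provider', [2, 5])
```

A realistic model would replay such decisions across thousands of ASes simultaneously, which is exactly why a faithful inter-domain topology is needed.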
Finally, Internet routing was designed with an implicit trust model in mind. In the early days of the Internet this worked fine, but today this lack of security has evolved into a serious risk. The Internet Engineering Task Force (IETF) is proposing solutions with respect to Secure Inter-Domain Routing (SIDR). Unfortunately, the proposed cryptographic solutions have limitations and more research needs to be undertaken. The fundamental issue is that routes are passed on from network to network, and a malicious attacker-network can place itself in the middle in a way that cannot be cryptographically distinguished from a benign network. Heuristics should be developed to detect such potential threats. Despite these shortcomings, the efforts of the SIDR working group are a very important step towards securing our routing system. A first implementation is available, but before it can be deployed all over the Internet, the system needs to be checked for its security and scaling properties. We have started working on a system that allows building large-scale, realistic testbeds using virtual machines on a large compute cluster. This work has direct impact on the IETF and the future of the Internet.
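One plausible shape for such a heuristic, sketched below under assumed data: since cryptography cannot distinguish a malicious in-path network from a benign one, a route whose AS-level adjacencies have never been observed before is at least worth flagging. The baseline link set is hypothetical.

```python
# Hypothetical baseline of AS adjacencies seen in historical routing data.
OBSERVED_LINKS = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (2, 4)]}

def suspicious_hops(as_path):
    """Return AS adjacencies in the announced path never seen before."""
    pairs = zip(as_path, as_path[1:])
    return [(a, b) for a, b in pairs
            if frozenset((a, b)) not in OBSERVED_LINKS]

print(suspicious_hops([1, 2, 3, 4]))    # [] -- all links known
print(suspicious_hops([1, 2, 666, 4]))  # [(2, 666), (666, 4)]
```

Such a check produces hints rather than proof, which is precisely why heuristics must complement, not replace, the cryptographic SIDR mechanisms.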
Cybercrime as a Service (CaaS) is exploding. Effective network security is a matter that concerns all of us, but graduates with a degree focused on network communication in particular should have a solid understanding of security-related matters. One very effective way of teaching cyber security skills is practical hands-on exercises, such as cyber defense exercises (CDX). In these exercises, students must defend their network against "attacks": security-related events taking place in a virtual lab running on a compute server that is not connected to the real Internet. The main problem is that preparing such a course is a time-consuming and error-prone task. For example, NATO's prestigious Locked Shields CDX is a four-day training event, but the preparation takes many months and a lot of manual configuration. Much of this could be automated. Unfortunately, it is not sufficient to just set up one exercise and then replay the same thing over and over again; the organizer needs to be able to vary the scenarios. High-level abstractions, such as the relationship between open ports on a firewall and the services running within the network, can be used to assist in the configuration process. The tool needs to be modular and allow for flexibility with respect to scenario and technology. That reduces configuration and deployment time, as well as errors during the setup process, and means that even faculty who are less experienced in ethical hacking would be able to successfully conduct such courses.
Such a tool is just the first step towards a system that uses high-level network abstractions for network specifications. Imagine an input graph that describes the relationships between the network components, from which the system auto-configures and auto-deploys the exercise [AutoNetKit, 5, 6]. Overall, such abstractions would not only multiply the number of students learning about security, but would also benefit configuration management systems in general. Bridging the gap between high-level, mathematically sound abstractions and the low-level details of a network will enable us to create safe, stable and flexible networks in the future.
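The flavour of such an abstraction can be sketched in a few lines: a hypothetical graph mapping hosts to the services they run, from which per-host firewall rules are derived automatically instead of being configured by hand. Hosts, service names, and port numbers below are illustrative only.

```python
# Assumed service-to-port catalogue (illustrative).
SERVICE_PORTS = {"web": 80, "ssh": 22, "dns": 53}

# Hypothetical input graph: which host runs which services.
topology = {
    "web01": ["web", "ssh"],
    "ns01":  ["dns"],
}

def firewall_rules(topology):
    """Open exactly the ports required by each host's services."""
    return {host: sorted(SERVICE_PORTS[s] for s in services)
            for host, services in topology.items()}

print(firewall_rules(topology))
# {'web01': [22, 80], 'ns01': [53]}
```

The point is that the low-level rules are a pure function of the high-level graph: varying the scenario means editing the graph, and the configuration follows consistently and without manual error.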
Such topics have been studied in the context of Software Defined Networks (SDN), using network virtualization as an enabler for future network technology. However, mathematically provable abstractions of networks are often not integrated into such work, which therefore appears as an engineering hack with unknown consequences.
Future Internet Technology
Another example of such systems with unknown consequences and potentially dangerous implications is the area of IPv4 address sharing. IPv4 is still the predominant Internet Protocol, even though we ran out of addresses in 2011. The move to IPv6 is unavoidable, but it is not going to happen as quickly as ISPs will need new IPv4 addresses. For this reason many providers are in the process of deploying address sharing techniques, e.g., carrier-grade Network Address Translation (NAT), and are going to share one single IPv4 address among a whole street or a small city. What are the consequences of such a deployment going to be? Will cyber criminals become untraceable in the future, as potentially hundreds or thousands of customers share one address? Is the future evolution of the Internet restricted, as products such as Xbox Live will not work behind double NATs? As a co-author of an Internet Standard (RFC 6346) I am interested in alternative solutions. However, more research is needed to make sure ISPs do not deploy the "wrong" technology. We will be stuck with the consequences of what ISPs decide today for a long time.
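The traceability concern can be illustrated with a minimal sketch in the spirit of the Address-plus-Port approach (RFC 6346): if each subscriber behind a shared IPv4 address is assigned a fixed port slice deterministically, then a logged (address, port) pair maps back to exactly one customer, unlike with stateful carrier-grade NAT. The parameters below are illustrative, not from the RFC.

```python
WELL_KNOWN = 1024           # ports 0-1023 stay reserved (assumption)
PORTS_PER_SUBSCRIBER = 2048 # illustrative slice size

def port_range(subscriber_index):
    """Fixed source-port slice for one subscriber on a shared address."""
    start = WELL_KNOWN + subscriber_index * PORTS_PER_SUBSCRIBER
    return (start, start + PORTS_PER_SUBSCRIBER - 1)

def subscriber_for(port):
    """Invert the mapping: which subscriber used this source port?"""
    return (port - WELL_KNOWN) // PORTS_PER_SUBSCRIBER

print(port_range(0))         # (1024, 3071)
print(port_range(3))         # (7168, 9215)
print(subscriber_for(8000))  # 3
```

The design trade-off is visible even in this toy: determinism restores traceability without per-flow NAT state, at the price of capping each customer's concurrent connections by the slice size.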
My research aims to bridge the gap between sound theoretical work and the nitty-gritty details that have real-world impact. Most network research in our field is either theoretically very strong or offers good "engineering"-style contributions, but I believe that we need to aim at both at the same time.