Next-generation (5G) networks aim to support services with strict requirements on latency, throughput, and availability. Telecom operators have adopted Network Functions Virtualization (NFV) to virtualize network functions and deploy them at distributed cloud datacenters. Deploying virtual network functions (VNFs) close to the end-user can reduce Internet latency. However, network congestion in telco cloud datacenters can result in increased latency, low network utilization, and reduced throughput. Existing protocols either cannot utilize the multiple paths offered by datacenter topologies (e.g., DCTCP), require a major architectural change and face deployment challenges (e.g., NDP), or increase the flow completion times of short flows (e.g., MPTCP). To address this, we propose a multipath transport for telco cloud datacenters called coupled multipath datacenter TCP (MDTCP). MDTCP evolves MPTCP subflows to employ ECN signals and react to congestion before queues overflow, offering both reduced latency and higher network utilization. The evaluation of MDTCP with simulated traffic indicates comparable or lower flow completion times than DCTCP and NDP for most of the studied traffic scenarios. The simulation results imply that MDTCP could give better throughput for telco traffic while being as fair as MPTCP in datacenters.
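The congestion reaction MDTCP inherits from DCTCP can be illustrated with a toy model: each subflow tracks the fraction of ECN-marked packets and cuts its window in proportion to that fraction, rather than halving on loss. This is a sketch of the general DCTCP-style mechanism, not the authors' implementation; the function names are ours.

```python
# Illustrative DCTCP-style ECN reaction, as applied per MPTCP subflow
# in the MDTCP description above (names are ours, not the paper's code).

def update_alpha(alpha, marked, acked, g=1.0 / 16):
    """EWMA of the fraction of ECN-marked packets in the last window."""
    frac = marked / acked if acked else 0.0
    return (1 - g) * alpha + g * frac

def ecn_cwnd(cwnd, alpha):
    """Cut the congestion window in proportion to the marking level
    instead of halving it, reacting before the queue overflows."""
    return max(1.0, cwnd * (1 - alpha / 2))
```

With persistent full marking (alpha near 1) this degenerates to the classic halving; with light marking the window is barely reduced, which is what keeps utilization high.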
Recently, Wireless Mesh Networks (WMNs) have attracted attention as a way to provide alternative Internet connectivity to rural areas or communities. In WMNs, wireless access points communicate with each other wirelessly, forming a true wireless mesh based ...
The lack of consideration for application delay requirements in standard loss-based congestion control algorithms (CCAs) has motivated the proposal of several alternative CCAs. Copa is one of the most recent and promising of these CCAs, and it has attracted attention from both academia and industry. The delay performance of Copa is governed by a mostly static latency-throughput tradeoff parameter, δ. However, a static δ parameter makes it difficult for Copa to achieve consistent delay and throughput over a range of bottleneck bandwidths. In particular, the coexistence of 4G and 5G networks and the wide range of bandwidths experienced in NG-RANs can result in inconsistent CCA performance. To this end, we propose a modification to Copa, Copa-D, that dynamically tunes δ to achieve consistent delay performance. We evaluate the modification over emulated fixed, 4G, and 5G bottlenecks. The results show that Copa-D achieves consistent delay with minimal impact on throughput in fixed-capacity bottlenecks. Copa-D also allows a more intuitive way of specifying the latency-throughput tradeoff and achieves more accurate and predictable delay in variable cellular bottlenecks.
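Copa's published control law targets a rate of 1/(δ · d_q), where d_q is the measured queuing delay; inverting that relation shows how a δ could be picked for a desired delay, which is the spirit of the dynamic tuning described above. A rough sketch (function names are ours, not from the paper):

```python
def copa_target_rate(delta, rtt_standing, rtt_min):
    """Copa's target rate 1/(delta * d_q), with the queuing delay d_q
    estimated as standing RTT minus the minimum observed RTT."""
    d_q = rtt_standing - rtt_min
    return float("inf") if d_q <= 0 else 1.0 / (delta * d_q)

def delta_for_target_delay(target_d_q, rate):
    """Invert the control law: the delta that puts the equilibrium
    queuing delay at target_d_q for a given rate (illustrative only)."""
    return 1.0 / (rate * target_d_q)
```

The inversion makes the tradeoff visible: at a fixed δ, a 10x-higher bottleneck rate implies a 10x-smaller equilibrium queuing delay, which is why a single static δ cannot give consistent delay across 4G and 5G bandwidths.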
This paper studies the impact of tunable parameters in the NB-IoT stack on the energy consumption of a user equipment (UE), e.g., a wireless sensor. NB-IoT is designed to enable massive machine-type c ...
Interactive applications such as web browsing, audio/video conferencing, multi-player online gaming and financial trading applications do not benefit (much) from more bandwidth. Instead, they depend on low latency. Latency is a key determinant of user experience. An increasing concern for reducing latency is therefore currently being observed among the networking research community and industry. In this thesis, we quantify the proportion of potentially latency-sensitive traffic and its development over time. Next, we show that the flow start-up mechanism in the Internet is a major source of latency for a growing proportion of traffic, as network links get faster. The loss recovery mechanism in the transport protocol is another major source of latency. To improve the performance of latency-sensitive applications, we propose and evaluate several modifications in TCP. We also investigate the possibility of prioritization at the transport layer to improve the loss recovery. The idea is to...
Software-Defined Networking (SDN) has led to a paradigm shift in how networks are managed and operated. In SDN environments, the data plane forwarding rules are managed by logically centralized controllers operating on a global view of the network. Today, SDN controllers typically possess little insight into the requirements of the applications executed on the end-hosts. Consequently, they rely on heuristics to implement traffic engineering or QoS support. In this work, we propose a framework for application-awareness in SDN environments in which the end-hosts provide a generic interface for the SDN controllers to interact with. As a result, SDN controllers may enhance the end-host's view of the attached network and deploy policies into the edge of the network. Further, controllers may obtain information about the specific requirements of the deployed applications. Our demonstration extends the OpenDaylight SDN controller to enable it to interact with end-hosts running a novel...
To mitigate delay spikes during transmission of bursty signaling traffic, concurrent multipath transmission (CMT) over several paths in parallel could be an option. Still, unordered delivery is a well ...
One of the ambitions when designing the Stream Control Transmission Protocol was to offer a robust transfer of traffic between hosts. For this reason SCTP was designed to support multihoming, which ...
Information-centric networking (ICN) has been introduced as a potential future networking architecture. ICN promises an architecture that makes information independent from location, application, ...
Virtualization abstracts computing resources so that they can be shared by multiple virtual machines. It is central to cloud computing, which enables demand-based sharing of computing resources over the Internet. To mitigate operational costs and cope with increasing traffic demands, telecom operators have started to adopt cloud computing. However, telecom services and applications are characterized by real-time responsiveness, strict end-to-end latency, and high reliability, and their performance can be degraded by the inherent overhead introduced by virtualization. Comprehensive performance measurements and analysis are important to improve the performance of emerging telecom applications and services in a virtualized environment. To this end, we conducted controlled experiments to understand the impact of virtualization on end-to-end latency, study the performance of transport protocols in a virtualized environment, and provide a packet delay breakdown in the hypervisor stack. The ...
This deliverable provides a final report on the work on transport protocol enhancements done in Work Package 3. First, we report on the extensions made to the SCTP protocol that turn it into a viabl ...
This document describes the design and implementation of the 5GENESIS Monitoring & Analytics (M&A) framework (Release A), developed within Task T3.3 of the Project work plan. Fifth Generation End-to-End Network, Experimentation, System Integration, and Showcasing.
Cellular Internet of Things (CIoT) is a Low-Power Wide-Area Network (LPWAN) technology. It aims for cheap, low-complexity IoT devices that enable large-scale deployments and wide-area coverage. Moreover, to make large-scale deployments of CIoT devices in remote and hard-to-access locations possible, a long device battery life is one of the main objectives of these devices. To this end, 3GPP has defined several energy-saving mechanisms for CIoT technologies, not least for Narrow-Band Internet of Things (NB-IoT), one of the major CIoT technologies. Examples of mechanisms defined include CONNECTED-mode DRX (cDRX), Release Assistance Indicator (RAI), and Power Saving Mode (PSM). This paper considers the impact of the essential energy-saving mechanisms on minimizing the energy consumption of NB-IoT devices, especially the cDRX and RAI mechanisms. The paper uses a purpose-built NB-IoT simulator that has been tested in terms of its built-in energy-saving mechanisms and validated with real-world NB-IoT measurements. The simulated results show that it is possible to save 70-90% in energy consumption by enabling cDRX and RAI. In fact, the results suggest that a battery life of 10 years is only achievable provided the cDRX, RAI, and PSM energy-saving mechanisms are correctly configured and used.
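The battery-life argument can be made concrete with a simple duty-cycle model: average current is the time-weighted mean of the currents drawn in the transmit, cDRX, and PSM states, and battery life follows from the cell capacity. The functions and any numbers below are illustrative, not taken from the paper's simulator.

```python
# Simple duty-cycle energy model for an NB-IoT device (illustrative
# names and values; not the paper's purpose-built simulator).

def avg_current_ma(i_tx, t_tx, i_cdrx, t_cdrx, i_psm, t_psm):
    """Time-weighted average current (mA) over one reporting cycle
    spent in transmit, cDRX, and PSM states for t_* seconds each."""
    total = t_tx + t_cdrx + t_psm
    return (i_tx * t_tx + i_cdrx * t_cdrx + i_psm * t_psm) / total

def battery_life_years(capacity_mah, avg_ma):
    """Battery life from capacity and average current draw."""
    return capacity_mah / avg_ma / (24 * 365)
```

The model makes the paper's point visible: since PSM current is orders of magnitude below connected-mode current, the achievable lifetime is dominated by how quickly cDRX and RAI let the device leave the connected state.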
There is a growing concern that the Internet transport layer has become ossified in the face of emerging novel applications, and that further evolution has become very difficult. This paper identifies requirements for a new transport layer and then proposes a conceptual architecture, the NEAT system, that we believe is both flexible and evolvable. Applications interface with the NEAT system through an enhanced user API that decouples them from the operation of the transport protocols and the network features being used. In particular, applications provide the NEAT system with information about their traffic requirements, pre-specified policies, and measured network conditions. On the basis of this information, the NEAT system establishes and configures appropriate connections.
Cellular networks are continuously evolving to allow improved throughput and low latency performance for applications. However, it has been shown that, due to buffer over-provisioning, TCP’s standard loss-based congestion control algorithms (CCAs) can cause long delays in cellular networks. The QUIC transport protocol and the Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control are both proposed in response to shortcomings observed in TCP and loss-based CCAs. Despite its notable advantages, BBR can experience suboptimal delay performance in cellular networks due to one of its underlying design choices: the maximum bandwidth filter at the sender. In this work, we leverage QUIC’s extensibility to enhance BBR. Instead of using the ACK rate observed at the sender side, we apply a more fitting delivery rate calculated at the receiver. Our 5G-trace-based emulation experiments in CloudLab suggest that our modified QUIC could significantly improve latency without any notable effect on the throughput: In particular, in some of our experiments, we observe up to 39% reduction of the round-trip time (RTT) with a worst-case throughput reduction of 2.7%.
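The receiver-side delivery rate mentioned above can be sketched as cumulative bytes received over the sampling interval, which sidesteps the ACK-rate distortion seen at the sender. This is a simplified illustration, not the paper's QUIC implementation:

```python
def delivery_rate(samples):
    """Receiver-side delivery rate in bytes/s from a list of
    (time_s, cumulative_bytes_received) samples. Measuring at the
    receiver avoids distortion from delayed or aggregated ACKs."""
    if len(samples) < 2:
        return 0.0
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    return (b1 - b0) / (t1 - t0) if t1 > t0 else 0.0
```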
Information-centric networking (ICN), with its design around name-based forwarding and in-network caching, holds great promise to become a key architecture for the future Internet. Many proposed ICN hop-by-hop congestion control schemes assume a fixed and known link capacity, which rarely, if ever, holds true for wireless links. Firstly, we demonstrate that although these congestion control schemes are able to utilise the available wireless link capacity fairly well, they largely fail to keep the delay low. In fact, they essentially offer the same delay as in the case with no hop-by-hop, only end-to-end, congestion control. Secondly, we show that by complementing these schemes with an easy-to-implement, packet-train capacity estimator, we reduce the delay to a level significantly lower than what is obtained with only end-to-end congestion control, while still keeping the link utilisation at a high level.
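A packet-train capacity estimator of the kind referred to above fits in a few lines: packets sent back-to-back arrive spaced by the bottleneck's transmission time, so capacity is roughly packet size divided by the inter-arrival gap, with the median giving robustness against gaps stretched by cross-traffic. A minimal sketch with illustrative names, not the paper's exact estimator:

```python
import statistics

def packet_train_capacity(arrivals, pkt_bits):
    """Estimate bottleneck capacity (bit/s) from arrival timestamps of
    a back-to-back packet train of equal-size packets: the median
    inter-arrival gap approximates the bottleneck transmission time."""
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return pkt_bits / statistics.median(gaps)
```

For example, 1500-byte (12000-bit) packets arriving 1 ms apart indicate a roughly 12 Mbit/s bottleneck, and one inflated gap in the train barely moves the median.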
This document presents the core transport system in NEAT, as used for development of the reference implementation of the NEAT System. The document describes the components necessary to realise the ba ...
Ossification of the Internet transport-layer architecture is a significant barrier to innovation of the Internet. Such innovation is desirable for many reasons. Current applications often need to i ...
Ideally, network applications should be able to select an appropriate transport solution from among the available transport solutions. However, at present, there is no agreed-upon way to do this. In fact, there is not even an agreed-upon way for a source end host to determine if there is support for a particular transport along a network path. This draft addresses these issues by proposing a Happy Eyeballs framework. The proposed framework enables the selection of the transport solution that is most appropriate according to application requirements, pre-set policies, and estimated network conditions. Additionally, it makes it possible for an application to find out whether or not a particular transport is supported along a network path towards a specific destination.
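The racing idea behind Happy Eyeballs can be sketched as staggered, concurrent connection attempts over the policy-ranked candidate transports, returning the first that succeeds. This is a generic illustration of the pattern, not the procedure specified in the draft; all names are ours:

```python
import concurrent.futures
import time

def _attempt(connect, candidate, delay):
    """Start one attempt after a stagger delay; connect() returns truthy
    on success (e.g. a handshake completing for that transport)."""
    time.sleep(delay)
    return connect(candidate)

def happy_eyeballs(candidates, connect, stagger=0.25):
    """Race candidate transports in policy order, starting each attempt
    `stagger` seconds after the previous; return the first winner."""
    if not candidates:
        return None
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(candidates)) as pool:
        futs = {pool.submit(_attempt, connect, c, i * stagger): c
                for i, c in enumerate(candidates)}
        for fut in concurrent.futures.as_completed(futs):
            if fut.result():
                return futs[fut]
    return None
```

The stagger biases the race toward the preferred transport while keeping the fallback latency bounded when the preferred one is blocked by the path.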
This deliverable summarises and concludes our work in Work Package 3 (WP3) to extend the transport services provided by the NEAT System developed in Work Package 2, and to enable non-NEAT applicati ...
SCTP is a transport protocol targeted for telephony signaling traffic. Although SCTP from its inception supported multihoming, it has until now not supported concurrent multipath transfer. Howeve ...
This document presents the first version of the low-level Core Transport System in NEAT, to be used for development of a reference implementation of the NEAT System. The design of this core transpo ...
Mobile wireless networks constitute an indispensable part of the global Internet, and with TCP being the dominating transport protocol on the Internet, it is vital that TCP works equally well over these networks as over wired ones. This paper evaluates the performance of TCP NewReno and TCP CUBIC with respect to responsiveness to bandwidth variations related to different user movements. The evaluation complements previous studies on 4G mobile networks in two important ways: It primarily focuses on the behavior of the TCP congestion control in medium- to high-velocity mobility scenarios, and it not only considers the current 4G mobile networks, but also low latency configurations that move toward the potential delays in 5G networks. The results show that while the two TCP versions give similar goodput in scenarios where the radio channel quality continuously decreases, CUBIC gives a significantly higher goodput in scenarios where the quality continuously increases. This is due to CUB...
This document presents the core transport system in NEAT, as used for development of the reference implementation of the NEAT System. The document describes the components necessary to realise the ...
We present the latency-aware multipath scheduler ZQTRTT that takes advantage of the multipath opportunities in information-centric networking. The goal of the scheduler is to use the (single) lowes ...
To mitigate delay during transmission of bursty signaling traffic, concurrent multipath transmission (CMT) over several paths in parallel could be an option. Still, unordered delivery is a well-known problem when concurrently transmitting data over asymmetric network paths, leading to extra delay due to Head-of-Line Blocking (HoLB). The Stream Control Transmission Protocol (SCTP), designed as a carrier for signaling traffic over IP, is currently being extended with support for CMT (CMT-SCTP). To reduce the impact of HoLB, SCTP has support for transmission of separate data flows, called SCTP streams. In this paper, we address sender scheduling to optimize latency for signaling traffic using CMT-SCTP. We present dynamic stream-aware (DS) scheduling, which utilizes the SCTP stream concept, and continuously considers the current network status as well as the data load to make scheduling decisions. We implement a DS scheduler and compare it against some existing schedulers. Our investigation suggests that DS scheduling could significantly reduce latency compared to dynamic path scheduling that does not consider streams. Moreover, we show that naive round-robin scheduling may provide low latency over symmetric network paths, but may transmit data on non-beneficial asymmetric network paths leading to increased latency. Finally, our results show that a static stream-based approach, found beneficial for bulk traffic, is not appropriate for bursty signaling traffic.
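The scheduling decision described above can be illustrated with a toy model: pick the path with the lowest expected delivery time, but let a stream migrate only when another path is clearly better, limiting the reordering that causes HoLB. The path model, names, and hysteresis margin are our assumptions, not the paper's DS scheduler:

```python
def ds_schedule(stream_id, msg_bytes, paths, stream_map, margin=0.8):
    """Toy stream-aware scheduler: each path is a dict with 'name',
    'bw' (bytes/s), 'rtt' (s), and 'queued' (bytes). A stream stays on
    its current path unless another path's expected delivery time is
    better by more than the hysteresis margin."""
    def etd(p):  # expected delivery time: drain the queue, then one-way delay
        return p["queued"] / p["bw"] + p["rtt"] / 2
    best = min(paths, key=etd)
    cur = stream_map.get(stream_id)
    choice = best if cur is None or etd(best) < margin * etd(cur) else cur
    stream_map[stream_id] = choice
    choice["queued"] += msg_bytes
    return choice["name"]
```

Because the queue term grows as messages are assigned, the model also shows why a purely static stream-to-path mapping misbehaves for bursty traffic: the load term changes faster than a static mapping can follow.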
This position paper gives a status report on work we have recently started to survey techniques for reducing the delays in communications. The immediate aim is to organise all the techniques into a meaningful categorisation scheme, then to quantify the benefit of each approach and produce visualisations that highlight those approaches that are likely to be most fruitful.
The NEAT System offers an enhanced API for applications that disentangles them from the actual transport protocol being used. The system also enables applications to communicate their service requi ...
Understanding radio propagation characteristics and developing channel models is fundamental to building and operating wireless communication systems. Among other uses, channel characterization and modeling can be used for coverage and performance analysis and prediction. Within this context, this paper describes a comprehensive dataset of channel measurements performed to analyze outdoor-to-indoor propagation characteristics in the mid-band spectrum identified for the operation of 5th Generation (5G) cellular systems. Previous efforts to analyze outdoor-to-indoor propagation characteristics in this band were made by using measurements collected on dedicated, mostly single-link setups. Hence, measurements performed on deployed and operational 5G networks are still lacking in the literature. To fill this gap, this paper presents a dataset of measurements performed over commercial 5G networks. In particular, the dataset includes measurements of channel power delay profiles from two 5G netwo...
Cooperative intelligent transport systems (C-ITS) enable information to be shared wirelessly between vehicles and infrastructure in order to improve transport safety and efficiency. Delivering C-ITS services using existing cellular networks offers both financial and technological advantages, not least since these networks already offer many of the features needed by C-ITS, and since many vehicles on our roads are already connected to cellular networks. Still, C-ITS pose stringent requirements in terms of availability and latency on the underlying communication system; requirements that will be hard to meet for currently deployed 3G, LTE, and early-generation 5G systems. Through a series of experiments in the MONROE testbed (a cross-national, mobile broadband testbed), the present study demonstrates how cellular multi-access selection algorithms can provide close to 100 percent availability, and significantly reduce C-ITS transaction times. The study also proposes and evaluates a number of low-complexity, low-overhead single-access selection algorithms, and shows that it is possible to design such solutions so that they offer transaction times and availability levels that rival those of multi-access solutions.
The strict low-latency requirements of applications such as virtual reality and online gaming cannot be satisfied by the current Internet. This is due to the characteristics of classic TCP congestion controls such as Reno and Cubic, which induce high queuing delays when used for capacity-seeking traffic, resulting in unpredictable latency. The Low Latency, Low Loss, Scalable throughput (L4S) architecture addresses this problem by combining scalable congestion controls such as DCTCP and TCP Prague with early congestion signaling from the network. It defines a Dual Queue Coupled (DQC) AQM that isolates low-latency traffic from the queuing delay of classic traffic while ensuring the safe co-existence of scalable and classic flows on the global Internet. In this paper, we benchmark the DualPI2 scheduler, a reference implementation of the DQC AQM, to validate some of the experimental results reported in previous works that demonstrate the co-existence of scalable and classic congestion...
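The coupling at the heart of the DQC AQM, as specified for DualPI2 in RFC 9332, applies the base probability p' from the PI controller squared to classic drops and linearly (scaled by a coupling factor k) to L4S ECN marks, so that flows with a 1/p response and flows with a 1/sqrt(p) response converge to similar rates. A minimal sketch:

```python
K = 2.0  # coupling factor between the L4S and classic queues (RFC 9332 default)

def dualpi2_probabilities(p_prime):
    """DualPI2 coupling: the classic queue drops with probability p'^2
    while the L4S queue ECN-marks with probability k * p', balancing
    classic (1/sqrt(p)) and scalable (1/p) congestion responses."""
    p_classic = p_prime ** 2
    p_l4s = min(1.0, K * p_prime)
    return p_classic, p_l4s
```

The squaring is what keeps classic flows safe: a small base probability yields a tiny drop rate for them while still providing frequent, fine-grained marks to scalable flows.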
During the first phase of NEWCOM the focus areas of Department 6 were identified and refined. A number of relevant knowledge gaps were identified for the areas of transport protocols, architectures and cross-layer aspects, and modelling. In this deliverable we describe a first set of frameworks/models to support research integration within the Department. The integration approach and the defined models/frameworks are described for each one of the selected knowledge gaps. The deliverable also includes a report on tools, software libraries and traces that can be shared between the partners.
Experiences in the use of the Internet as a delivery medium for multimedia-based applications have revealed serious deficiencies in its ability to provide the QoS required by multimedia applications. We propose an extension to TCP that addresses the QoS requirements of applications with soft real-time constraints. Although TCP has been found unsuitable for real-time applications, it can, with minor modifications, be adjusted to better comply with the QoS needs of applications with soft real-time requirements. Enhancing TCP with support for this group of applications is important since the congestion control mechanism of TCP assures stability of the Internet. In contrast, specialized multimedia protocols that lack appropriate congestion control can never be deployed on a large-scale basis. Two factors of great importance for applications with soft real-time constraints are jitter and throughput. By relaxing the reliability offered by TCP, the extension gives better jitter characteristics and an improved throughput. The extension only needs to be implemented at the receiving side. The reliability provided is controlled by the receiving application, thereby allowing a flexible tradeoff between different QoS parameters. In this paper, our TCP extension is presented and analyzed. The analysis investigates how the different application-controlled parameters influence performance. Our analysis is supported by a simulation study that investigates the tradeoff between interarrival jitter, throughput, and reliability. The simulation results also confirm that the extended version of TCP still behaves in a TCP-friendly manner.
PRTP is proposed to address the need for a transport service that is more suitable for applications with soft real-time requirements, e.g., video broadcasting. It is an extension for partial reliabili ...
To enable interoperability between the public switched telephone network and IP, the IETF SIGTRAN working group has developed an architecture for the transportation of SS7 signaling traffic over IP ...
There is currently work going on at IETF to standardize concurrent multipath transfer, i.e., simultaneous transfer of data over several network paths, for SCTP. This paper studies whether or not SC ...
There are large economic, operational, and, to some extent, technical incentives to replace the traditional telecom network with IP. However, such a large transition will not happen overnight, and maybe never. Meanwhile, IP-based and traditional TDM-based telephony will have to co-exist. To address this situation, the IETF SIGTRAN working group has developed an architecture for transportation of Signaling System No. 7 (SS7) traffic over IP. Still, it remains to be shown that the introduction of the SIGTRAN architecture will not significantly deteriorate the performance of SS7. To this end, this paper evaluates the failover performance in SIGTRAN networks. Specifically, the paper evaluates the performance of SCTP-controlled failovers in M3UA-based SIGTRAN networks. The paper suggests that in order to obtain a failover performance with SCTP comparable to that obtained in traditional TDM-based SS7 systems, SCTP has to abandon many of the configuration recommendations of RFC 2960 an...
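The failover latencies at issue follow directly from SCTP's retransmission rules: with RFC 2960 defaults (RTO bounded between 1 and 60 seconds, Path.Max.Retrans = 5), a failover requires Path.Max.Retrans + 1 consecutive timeouts with exponential RTO backoff, i.e., on the order of a minute, far above traditional SS7 failover targets. A quick calculation (the function is our sketch):

```python
def sctp_failover_time(rto_init, rto_max, pmr):
    """Worst-case SCTP failover latency: pmr + 1 consecutive timeouts
    with exponential RTO backoff capped at rto_max. RFC 2960 defaults
    (rto_init=1 s, rto_max=60 s, pmr=5) yield 1+2+4+8+16+32 = 63 s."""
    total, rto = 0.0, rto_init
    for _ in range(pmr + 1):
        total += rto
        rto = min(rto * 2, rto_max)
    return total
```

This is why tuning toward SS7-grade failover means lowering RTO bounds and Path.Max.Retrans well below the RFC 2960 recommendations.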
This document is the last deliverable of WPR.11 and presents an overview of the final activities carried out within the NEWCOM++ Workpackage WPR.11 during the last 18 months. We provide a descripti ...
The Datagram Congestion Control Protocol (DCCP) is a transport-layer protocol that provides upper layers with the ability to use non-reliable congestion-controlled flows. DCCP is not widely deployed in the Internet, and the reason is a typical chicken-and-egg problem: even if an application developer decided to use DCCP, middle-boxes such as firewalls and NATs would prevent DCCP end-to-end since they lack support for DCCP. Moreover, as long as the protocol penetration of DCCP does not increase, middle-boxes will not handle DCCP properly. To overcome this challenge, NAT/NAPT traversal and UDP encapsulation for DCCP are already defined. However, the former requires special middle-box support and the latter introduces overhead. The recent proposal of a multipath extension for DCCP further underlines the challenge of efficient middle-box passing, as its main goal is to be applied over the Internet, traversing numerous uncontrolled middle-boxes. This ...
Mobile networks have become ubiquitous and the primary means to access the Internet, and the traffic they generate has increased rapidly in recent years. The technology and service diversity in mobile networks call for extensive and accurate measurements to ensure the proper functioning of the networks and rapidly spot impairments. However, the measurement of mobile networks is complicated by their scale and is thus expensive, especially due to the diversity of deployments, technologies, and web services. In this paper, we present and provide access to the largest open international mobile network dataset collected using the MONROE platform, spanning six countries, 27 mobile network operators, and 120 measurement nodes. We use them to run measurements targeting several web services from January 2018 to December 2019, collecting millions of TCP and UDP flows using these commercial mobile networks. We illustrate the data collection platforms and describe some of the main experiment...
This deliverable provides a final report on the work on transport protocol enhancements done in Work Package 3. First, we report on the extensions made to the SCTP protocol that turn it into a viable alternative to TCP and allow it to deliver a lower-latency transport service. Next, we describe our work to develop a framework for providing a deadline-aware, less-than-best-effort transport service, targeting background traffic and thus addressing requirements on NEAT from the EMC use case. We also present our efforts to design and implement a latency-aware scheduler for MPTCP, which enables NEAT to offer a transport service that meets the needs of latency-sensitive applications and that efficiently utilises available network resources. Lastly, this document reports on our work on coupled congestion control for TCP, a mechanism that treats a bundle of parallel TCP flows between the same pair of hosts as a single unit. By efficiently multiplexing concurrent TCP flows, our coupled congest...
More and more of today's devices are multi-homing capable, in particular 3GPP user equipment like smartphones. In the current standardization of the upcoming mobile network generation, 5G Rel-16, this is specifically targeted by the study group Access Traffic Steering Switching Splitting [TR23.793]. ATSSS describes the flexible selection or combination of untrusted non-3GPP access like Wi-Fi and cellular 3GPP access, overcoming the single-access limitation of today's devices and services. Another multi-connectivity scenario is Hybrid Access [I-D.lhwxz-hybrid-access-network-architecture][I-D.muley-network-based-bonding-hybrid-access], which provides multiple accesses for CPEs, extending the traditional single-access connectivity at home to dual connectivity over 3GPP and fixed access. A missing piece in ATSSS and Hybrid Access is access and path measurement, which is required for efficient and beneficial traffic steering decisions. This becomes particularly importan...
Contemporary mobile devices such as smartphones and tablets are increasingly equipped with multiple network interfaces that enable automatic vertical handover between heterogeneous wireless networks, including WiFi and cellular 3G and 4G networks. However, the employed vertical handover schemes are mostly quite simple and incur non-negligible service disruptions to ongoing sessions, e.g., video streaming and live conferencing sessions. A number of improved mobility management frameworks for these lightweight mobile devices have been proposed in recent years. Although these may result in negligible service disruptions, the vast majority of them are network- or integrated network- and link-layer based, and require support in the infrastructure to be successfully deployed. This paper demonstrates the feasibility of using an infrastructure-independent, transport-level vertical handover scheme on a smartphone for an application as demanding as video streaming. In our study, we used a previously developed Android-based mobility framework. The study shows that a standardized mobility solution based on the Stream Control Transmission Protocol (SCTP) and its extension for Dynamic Address Reconfiguration (DAR) incurs a service disruption on par with comparable proposed network- and link-layer solutions.
The stability and performance of the Internet to date have in large part been due to the congestion control mechanism employed by TCP. However, while TCP congestion control is appropriate for traditional applications such as bulk data transfer, it has been found less than ideal for multimedia applications. In particular, audio and video ...
Reproducibility is one of the key characteristics of good science, but hard to achieve for experimental disciplines like Internet measurements and networked systems. This guide provides advice to researchers, particularly those new to the field, on designing experiments so that their work is more likely to be reproducible and to serve as a foundation for follow-on work by others.
DCCP communication is currently restricted to a single path per connection, yet multiple paths often exist between peers. The simultaneous use of these multiple paths for a DCCP session could improve resource usage within the network and, thus, improve user experience through higher throughput and improved resilience to network failure. Multipath DCCP provides the ability to simultaneously use multiple paths between peers. This document presents a set of extensions to traditional DCCP to support multipath operation. The protocol offers the same type of service to applications as DCCP and it provides the components necessary to establish and use multiple DCCP flows across potentially disjoint paths.
The high heterogeneity of 5G use cases requires the extension of the traditional per-component testing procedures provided by certification organizations, in order to devise and incorporate methodologies that cover the testing requirements from vertical applications and services. In this paper, we introduce an experimentation methodology that is defined in the context of the 5GENESIS project, which aims at enabling both the testing of network components and validation of E2E KPIs. The most important contributions of this methodology are its modularity and flexibility, as well as the open-source software that was developed for its application, which enable lightweight adoption of the methodology in any 5G testbed. We also demonstrate how the methodology can be used, by executing and analyzing different experiments in a 5G Non-Standalone (NSA) deployment at the University of Malaga. The key findings of the paper are an initial 5G performance assessment and KPI analysis and the detecti...
Network security is an increasingly important issue. Traditional solutions for protecting data when transferred over the network are almost exclusively based on cryptography. As a complement, we propose the use of SCTP and its support for physically separate paths to accomplish protection against eavesdropping attacks near the end points.
This paper analyzes three existing tunable security services based on a conceptual model. The aim of the study is to examine the tunable features provided by the different services in a structured and consistent way. This implies that for each service, user preferences as well as environment and application characteristics that influence the choice of a certain security configuration are identified and discussed.
Annika Wennstrom, Anna Brunstrom, Johan Garcia. ... The use of soft GSM software on a laptop makes it possible to access this interface, which is physically available on an RS-232 cable between the laptop and the GSM phone. ...
A packet switched wireless cellular system with wide area coverage and high throughput is proposed. The system is designed to be cost effective and to provide high spectral efficiency. It makes use of a combination of tools and concepts: smart antennas, both at base stations and mobiles, provide antenna gain and improve the signal-to-interference ratio; the fast fading is predicted in both time and frequency; and a slotted OFDM radio interface is used, in which time-frequency slots are allocated adaptively to different mobile users based on their predicted channel quality. This enables efficient scheduling among sectors and users as well as fast adaptive modulation and power control. We here outline the uplink of the radio interface. Calculations based on simplifying assumptions illustrate how the channel capacity grows with the number of simultaneous users and the number of antenna elements. A high capacity can be attained already for moderate numbers of users and base station/terminal antennas. For additional information and references, please see http://www.signal.uu.se/Research/PCCwirelessIP.html
This paper presents a technique to improve the performance of TCP and the utilization of wireless networks. Wireless links exhibit high rates of bit errors, compared to communication over wireline or fiber. Since TCP cannot separate packet losses due to bit errors versus congestion, all losses are treated as signs of congestion and congestion avoidance is initiated. This paper explores the possibility of accepting TCP packets with an erroneous checksum, to improve network performance for those applications that can tolerate bit errors. Since errors may be in the TCP header as well as the payload, the possibility of recovering the header is discussed. An algorithm for this recovery is also presented. Experiments with an implementation have been performed, which show that large improvements in throughput can be achieved, depending on link and error characteristics.
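The scheme above depends on detecting, rather than silently discarding, segments whose checksum fails. For reference, a minimal sketch of the 16-bit ones'-complement Internet checksum that TCP uses (per RFC 1071; the function names are ours, not the paper's):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit ones'-complement checksum used by IP/TCP/UDP."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def checksum_ok(segment: bytes) -> bool:
    # A segment that embeds a correct checksum sums to 0xFFFF, so the
    # complemented result over the whole segment is 0.
    return internet_checksum(segment) == 0
```

In the paper's setting, a receiver would run something like checksum_ok() and, on failure, attempt header recovery instead of dropping the segment outright.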
Internet-based applications that require low latency are becoming more common. Such applications typically generate traffic consisting of short, or bursty, TCP flows. As TCP is instead designed to optimize the throughput of long bulk flows, there is an apparent mismatch. To overcome this, a lot of recent research has focused on optimizing TCP for short flows as well. In this paper, we identify a performance problem for short flows caused by the metric caching conducted by the TCP control block interdependence mechanisms. Using this metric caching, a single packet loss can potentially ruin the performance of all future flows to the same destination by making them start in congestion avoidance instead of slow-start. To solve this, we propose an enhanced selective caching mechanism for short flows. To illustrate the usefulness of our approach, we implement it in both Linux and FreeBSD and experimentally evaluate it in a real test-bed. The experiments show that the selective caching approach is able to reduce the average transmission time of short flows by up to 40%.
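The selective caching idea can be sketched roughly as follows; the threshold, field names, and functions are illustrative assumptions on our part, not the actual Linux or FreeBSD implementation:

```python
# Hypothetical sketch: skip caching congestion metrics from short lossy
# flows so a single loss cannot poison future connections to the same host.
SHORT_FLOW_THRESHOLD = 10   # illustrative cut-off, in segments

metric_cache: dict[str, dict] = {}   # destination -> cached TCP metrics

def on_flow_close(dst: str, segments_sent: int, saw_loss: bool,
                  ssthresh: int, cwnd: int) -> None:
    """Update the per-destination cache on connection close, but ignore
    short flows that saw loss: caching their tiny ssthresh would make
    every later flow to dst start in congestion avoidance."""
    if segments_sent < SHORT_FLOW_THRESHOLD and saw_loss:
        return                       # the selective part: drop the sample
    metric_cache[dst] = {"ssthresh": ssthresh, "cwnd": cwnd}

def initial_ssthresh(dst: str, default: int = 2**31) -> int:
    """Seed a new connection from the cache, or use an 'infinite' default
    so the flow begins in slow-start."""
    return metric_cache.get(dst, {}).get("ssthresh", default)
```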
A 4G IP-based Wireless System Proposal. Tony Ottosson, Anders Ahlén, Anna Brunstrom, Mikael Sternad and Arne Svensson. ... However, since the channel will in general not be constant over the whole bandwidth-slot region, see Fig. ...
... Balan et al. [3] describe TCP HACK, a similar scheme except that it uses a TCP option containing a checksum for the TCP header. ... Figure 5 shows the 200 ms case and also includes measurements for unmodified FreeBSD 5.1 as well as Red Hat 9 Linux. ...
Analytical Analysis of the Performance Overheads of IPsec in MIPv6 Scenarios. Zoltán Faigl, Péter Fazekas, Stefan Lindskog, and Anna Brunstrom. The next generation network (NGN) connects different ...
The data traffic volumes are constantly increasing in cellular networks. Furthermore, a larger part of the traffic is generated by applications that require high data rates. Techniques including Coordinated Multipoint transmission (CoMP) can increase the data rates, but at the cost of a high overhead. The overhead can be reduced if only a subset of the users is served with CoMP. In this paper, we propose a user selection approach, including pre-selection of CoMP users and short term scheduling, that takes user requirements into account. Users that require a high data rate to reach an acceptable level of service satisfaction are selected to use coherent joint processing CoMP in some of their downlink transmission bandwidth. Simulation results show that both the number of satisfied users and fairness are improved with the proposed user selection as compared to user selection that does not consider individual user requirements. For additional information and references, please see http://www.signal.uu.se/Research/4G5Gwireless.html
Routing packets over multiple disjoint paths towards a destination can increase network utilization by load-balancing the traffic over the network. The drawback of load-balancing is that different paths might have different delay properties, causing packets to be reordered. This can reduce TCP performance significantly, as reordering is interpreted as a sign of congestion. Packet reordering can be avoided by letting the network layer route strictly on flow-level. This will, however, also limit the ability to achieve optimal network throughput. There are also several proposals that try to mitigate the effects of reordering at the transport layer. In this paper, we perform an initial evaluation of such TCP reordering mitigations in multi-radio multi-channel wireless mesh networks when using multi-path routing. We evaluate two TCP reordering mitigation techniques implemented in the Linux kernel. The transport layer mitigations are compared using different multi-path routing strategies. Our findings show that, in general, flow-level routing gives the best TCP performance and that transport layer reordering mitigations can only marginally improve performance.
This paper presents an experiment in which the impact of SCTP-controlled failovers was studied. In particular, the experiment studied the impact these failovers have on the Message Signal Unit (MSU) transfer times, i.e., the signaling message transfer times, for an M3UA user in a dedicated SIGTRAN network. In addition, the experiment studied to what extent an increased link delay has a significant deteriorating effect on the MSU transfer times during an SCTP-controlled failover.
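The failover times observed in such an experiment are dominated by the retransmission timeouts that must expire before SCTP declares the primary path inactive. A back-of-the-envelope sketch, assuming exponential RTO backoff in the style of RFC 4960 (the parameter values are illustrative, not the paper's configuration):

```python
def sctp_failover_time(rto_init: float = 1.0, rto_max: float = 60.0,
                       path_max_retrans: int = 5) -> float:
    """Rough worst-case time in seconds before the primary path is marked
    inactive: Path.Max.Retransmits consecutive timeouts, with the RTO
    doubling after each one, capped at RTO.Max."""
    total, rto = 0.0, rto_init
    for _ in range(path_max_retrans + 1):   # transmissions that must fail
        total += rto
        rto = min(rto * 2, rto_max)
    return total

print(sctp_failover_time())   # 1 + 2 + 4 + 8 + 16 + 32 = 63.0 s
```

This also makes the deteriorating effect of added link delay plain: a larger measured RTT inflates the initial RTO and with it every subsequent term in the sum.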
In this paper, we start to investigate the security implications of selective encryption. We do this by using the measure guesswork, which gives us the expected number of guesses that an attacker must perform in an optimal brute force attack to reveal an encrypted message. ...
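Guesswork has a simple closed form: order the messages by decreasing probability and weight each probability by its position in the guessing sequence. A minimal sketch:

```python
from typing import Sequence

def guesswork(probs: Sequence[float]) -> float:
    """Expected number of guesses in an optimal brute-force attack: the
    attacker tries messages in order of decreasing probability, so the
    i-th most likely message (1-indexed) costs i guesses."""
    ordered = sorted(probs, reverse=True)
    return sum(i * p for i, p in enumerate(ordered, start=1))

# Uniform over n messages gives (n + 1) / 2 expected guesses:
print(guesswork([0.25] * 4))        # 2.5
# Skew makes guessing easier: 0.7*1 + 0.2*2 + 0.1*3
print(guesswork([0.7, 0.2, 0.1]))   # ≈ 1.4
```

Selective encryption changes which parts of a message remain guessable, which is why this measure is a natural fit for quantifying its security implications.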
We examine load balancing in a simple pipeline computation, in which a large number of data sets is pipelined through a series of tasks and load balancing is performed by distributing several available processors among the tasks. We compare the ...
In this paper we focus on wireless multimedia communication and investigate how soft information from the physical layer can be used at the application layer. The soft information yields reliability measures for the received bits and is incorporated into two ...
Wireless mesh networks (WMNs) based on the IEEE 802.11 standard are becoming increasingly popular as a viable alternative to wired networks. WMNs can cover large or difficult to reach areas with low deployment and management costs. Several multi-path routing algorithms have been proposed for such kind of networks with the objective of load balancing the traffic across the network and providing robustness against node or link failures. Packet aggregation has also been proposed to reduce the overhead associated with the transmission of frames, which is not negligible in IEEE 802.11 networks. Unfortunately, multi-path routing and packet aggregation do not work well together, as they pursue different objectives. Indeed, while multi-path routing tends to spread packets among several next-hops, packet aggregation works more efficiently when several packets (destined to the same next-hop) are aggregated and sent together in a single MAC frame. In this paper, we propose a technique, called aggregation aware forwarding, that can be applied to existing multi-path routing algorithms to allow them to effectively exploit packet aggregation so as to significantly increase their network performance. In particular, the proposed technique does not modify the path computation phase, but just influences the forwarding decisions by taking the state of the sending queues into account. We demonstrated our proposed technique by applying it to Layer-2.5, a previously proposed multi-path routing and forwarding paradigm for WMNs. We conducted a thorough performance evaluation by means of the ns-3 network simulator, which showed that our technique makes it possible to increase performance in terms of both network throughput and end-to-end delay.
Cellular networks have evolved to support high peak bitrates with low loss rates as observed by the higher layers. However, applications and services running over cellular networks are now facing other difficult congestion-related challenges, most notably a highly variable link capacity and bufferbloat. To overcome these issues and improve performance of network traffic in 4G/5G cellular networks, a number of in-network and end-to-end solutions have been proposed. Fairness between interacting congestion control algorithms (CCAs) has played an important role in the type of CCAs considered for research and deployment. The placement of content closer to the user and the allocation of per-user queues in cellular networks has increased the likelihood of a cellular access bottleneck and reduced the extent of flow interaction between multiple users. This has resulted in renewed interest in end-to-end CCAs for cellular networks by opening up room for research and exploration. In this work, we present end-to-end CCAs that target a high throughput and a low latency over highly variable network links, and classify them according to the way they address the congestion control. The work also discusses the deployability of the algorithms. In addition, we provide insights into possible future research directions, such as coping with a higher degree of variability, interaction of CCAs in a shared bottleneck, and avenues for synergized research, such as CCAs assisted by software defined networking and network function virtualization. We hope that this work will serve as a starting point for systematically navigating through the expanding number of cellular CCAs.
In this paper, we study the energy consumption of Narrowband IoT devices. The paper suggests that key to saving energy for NB-IoT devices is the usage of full Discontinuous Reception (DRX), including the use of connected-mode DRX (cDRX): in some cases, cDRX reduced the energy consumption over a 10-year period by as much as 50%. However, the paper also suggests that tunable parameters, such as the inactivity timer, do have a significant impact. On the basis of our findings, guidelines are provided on how to tune the NB-IoT device so that it meets the target of the 3GPP, i.e., a 5-Wh battery should last for at least 10 years. It is further evident from our results that the energy consumption is largely dependent on the intensity and burstiness of the traffic, and thus could be significantly reduced if data is sent in bursts with less intensity, irrespective of cDRX support.
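The 3GPP target quoted above (a 5-Wh battery lasting 10 years) translates directly into an average power budget, which is why DRX sleep states dominate the outcome. A quick sanity check; the battery and lifetime figures come from the abstract, the rest is arithmetic:

```python
BATTERY_WH = 5.0                 # 3GPP target battery capacity
LIFETIME_H = 10 * 365.25 * 24    # ten years, in hours

avg_power_w = BATTERY_WH / LIFETIME_H
print(f"{avg_power_w * 1e6:.1f} uW")   # 57.0 uW average power budget
```

Tens of microwatts on average leaves no room for staying awake: a receive chain drawing tens of milliwatts must be duty-cycled down to roughly a tenth of a percent of the time, which is exactly what cDRX and well-tuned inactivity timers buy.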
Cooperative Intelligent Transport Systems (C-ITS) enable information to be shared wirelessly between vehicles and infrastructure in order to improve transport safety and efficiency. Delivering C-ITS services using existing cellular networks offers both financial and technological advantages, not least since these networks already offer many of the features needed by C-ITS, and since many vehicles on our roads are already connected to cellular networks. Still, C-ITS pose stringent requirements in terms of availability and latency on the underlying communication system; requirements that will be hard to meet for currently deployed 3G, LTE, and early-generation 5G systems. Through a series of experiments in the MONROE testbed (a cross-national, mobile broadband testbed), the present study demonstrates how cellular multi-access selection algorithms can provide close to 100% availability, and significantly reduce C-ITS transaction times. The study also proposes and evaluates a number of low-complexity, low-overhead single-access selection algorithms, and shows that it is possible to design such solutions so that they offer transaction times and availability levels that rival those of multi-access solutions.
The strict low-latency requirements of applications such as virtual reality, online gaming, etc., cannot be satisfied by the current Internet. This is due to the characteristics of classic TCP congestion controls such as Reno and TCP Cubic, which induce high queuing delays when used for capacity-seeking traffic, resulting in unpredictable latency. The Low Latency, Low Loss, Scalable throughput (L4S) architecture addresses this problem by combining scalable congestion controls such as DCTCP and TCP Prague with early congestion signaling from the network. It defines a Dual Queue Coupled (DQC) AQM that isolates low-latency traffic from the queuing delay of classic traffic while ensuring the safe coexistence of scalable and classic flows on the global Internet. In this paper, we benchmark the DualPI2 scheduler, a reference implementation of the DQC AQM, to validate some of the experimental results reported in previous works that demonstrate the coexistence of scalable and classic congestion controls and its low-latency service. Our results validate the coexistence of scalable and classic flows using DualPI2 Single queue (SingleQ) AQM, and queue latency isolation of scalable flows using DualPI2 Dual queue (DualQ) AQM. However, the rate or window fairness between DCTCP without fair-queuing (FQ) pacing and TCP Cubic using DualPI2 DualQ AQM deviates from the original results. We attribute the difference between our results and the original results to the sensitivity of the L4S architecture to traffic bursts and the burst sending pattern of the Linux kernel.
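The core of the DQC AQM is the coupling between the two queues: one base probability p' drives both, squared for classic flows (whose rate responds to the square root of the drop probability) and scaled linearly for scalable flows. A sketch in the notation of the DualQ Coupled AQM specification (RFC 9332), assuming its default coupling factor k = 2:

```python
K = 2.0   # coupling factor, RFC 9332 default

def classic_drop_prob(p_base: float) -> float:
    """p_C = (p')^2: classic (Reno/Cubic-style) flows get the squared
    probability, matching their 1/sqrt(p) rate response."""
    return p_base ** 2

def l4s_mark_prob(p_base: float, p_l4s_native: float = 0.0) -> float:
    """p_L = max(k * p', p'_L): scalable flows see the coupled probability
    unless their own queue's native AQM demands more."""
    return min(max(K * p_base, p_l4s_native), 1.0)

p = 0.1
print(classic_drop_prob(p))   # ≈ 0.01
print(l4s_mark_prob(p))       # 0.2
```

This coupling law is what lets a scalable flow's roughly 1/p rate response and a classic flow's 1/sqrt(p) response settle at comparable rates over a shared bottleneck.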
A recently proposed congestion control algorithm (CCA) called BBR (Bottleneck Bandwidth and Round-trip propagation time) has shown a lot of promise in avoiding the bufferbloat and low-buffer inefficiency problems that have plagued loss-based CCAs. Nevertheless, deployment of a new alternative algorithm requires a thorough evaluation of the effect of the proposed alternative on established transport protocols like TCP CUBIC. Furthermore, evaluations that consider the heterogeneity of Internet traffic sizes would provide a useful insight into the deployability of an algorithm that introduces sweeping changes across multiple algorithm components. Yet, most evaluations of BBR's impact and competitive fairness have focused on the steady-state performance of large flows. This work expands on previous studies of BBR by evaluating BBR's impact when the traffic consists of flows of different sizes. Our experiments show that under certain circumstances BBR's startup phase can result in a significant reduction of the throughput of competing large CUBIC flows and the utilization of the bottleneck link. In addition, the steady-state operation of BBR can have negative impact on the performance of bursty flows using loss-based CCAs over bottlenecks with buffer sizes as high as two times the bandwidth-delay product.
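BBR's behaviour in these experiments follows from its operating point: it estimates the bottleneck bandwidth and the round-trip propagation time, and their product, the bandwidth-delay product (BDP), bounds the data it keeps in flight. A small illustration; the cwnd_gain of 2 reflects BBRv1's steady-state inflight cap:

```python
def bdp_bytes(btl_bw_bps: float, rt_prop_s: float) -> float:
    """Bandwidth-delay product: the amount of data that fills the pipe
    without building a queue at the bottleneck."""
    return btl_bw_bps / 8 * rt_prop_s

# A 100 Mbit/s bottleneck with 20 ms propagation delay:
bdp = bdp_bytes(100e6, 0.020)
print(bdp)        # 250000.0 bytes
print(2 * bdp)    # BBRv1 steady-state inflight cap: cwnd_gain * BDP
```

During startup, by contrast, BBR paces at a gain of about 2.89 per round to find the bandwidth quickly; that aggressive phase is what the paper links to throughput loss for competing CUBIC flows.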
Information-centric networking (ICN) with its design around name-based forwarding and in-network caching holds great promise to become a key architecture for the future Internet. Still, despite its attractiveness, there are many open questions that need to be answered before wireless ICN becomes a reality, not least about its congestion control: many of the proposed hop-by-hop congestion control schemes assume a fixed and known link capacity, something that rarely – if ever – holds true for wireless links. As a first step, this paper demonstrates that although these congestion control schemes are able to fairly well utilise the available wireless link capacity, they greatly fail to keep the link delay down. In fact, they essentially offer the same link delay as in the case with no hop-by-hop, only end-to-end, congestion control. Secondly, the paper shows that by complementing these congestion control schemes with an easy-to-implement, packet-train link estimator, we reduce the link delay to a level significantly lower than what is obtained with only end-to-end congestion control, while still being able to keep the link utilisation at a high level.
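The packet-train estimator mentioned above follows the classic dispersion idea: send a train of packets back-to-back and divide the bits transmitted by the spread of their arrival times. A minimal sketch; the names and example numbers are ours:

```python
def packet_train_capacity(pkt_size_bytes: int,
                          arrival_times: list[float]) -> float:
    """Estimate link capacity in bit/s from the dispersion of a
    back-to-back packet train at the receiver."""
    if len(arrival_times) < 2:
        raise ValueError("need at least two arrivals")
    dispersion = arrival_times[-1] - arrival_times[0]
    # n arrivals span n-1 packets' worth of serialisation time
    return (len(arrival_times) - 1) * pkt_size_bytes * 8 / dispersion

# Five 1500-byte packets spaced 1.2 ms apart -> a 10 Mbit/s link:
times = [i * 0.0012 for i in range(5)]
print(packet_train_capacity(1500, times))   # ≈ 10_000_000 bit/s
```

Feeding such an estimate into a hop-by-hop scheme replaces the fixed-capacity assumption with a measured one, which is what allows the link delay to stay low on a variable wireless link.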
Mobile wireless networks constitute an indispensable part of the global Internet, and with TCP the dominating transport protocol on the Internet, it is vital that TCP works equally well over these networks as over wired ones. This paper identifies the performance dependencies by analyzing the responsiveness of TCP NewReno and TCP CUBIC when subject to bandwidth variations related to movements in different directions. The presented evaluation complements previous studies on 4G mobile networks in two important ways: It primarily focuses on the behavior of the TCP congestion control in medium- to high-velocity mobility scenarios, and it not only considers the current 4G mobile networks, but also low latency configurations that move towards the overall potential delays in 5G networks. The paper suggests that while both CUBIC and NewReno give similar goodput in scenarios where the radio channel continuously degrades, CUBIC gives a significantly better goodput in scenarios where the radio channel quality continuously increases. This is due to CUBIC probing more aggressively for additional bandwidth. Important for the design of 5G networks, the obtained results also demonstrate that very low latencies are capable of equalizing the goodput performance of different congestion control algorithms. Only in low latency scenarios that combine both large fluctuations of available bandwidths and a mobility pattern in which the radio channel quality continuously increases can some performance differences be noticed.
Network Function Virtualization (NFV) is a promising solution for telecom operators and service providers to improve business agility, by enabling a fast deployment of new services, and by making it possible for them to cope with the increasing traffic volume and service demand. NFV enables virtualization of network functions that can be deployed as virtual machines on general purpose server hardware in cloud environments, effectively reducing deployment and operational costs. To benefit from the advantages of NFV, virtual network functions (VNFs) need to be provisioned with sufficient resources and perform without impacting network quality of service (QoS). To this end, this paper proposes a model for VNF placement and provisioning optimization while guaranteeing the latency requirements of the service chains. Our goal is to optimize resource utilization in order to reduce cost while satisfying QoS requirements such as end-to-end latency. We extend a related VNF placement optimization with a fine-grained latency model including virtualization overhead. The model is evaluated with a simulated network and it provides placement solutions ensuring the required QoS guarantees.
The demand for mobile communication is ever increasing. Mobile applications are increasing both in numbers and in heterogeneity of their requirements, and an increasingly diverse set of mobile technologies are employed. This creates an urgent need for optimizing end-to-end services based on application requirements, conditions in the network and available transport solutions; something which is very hard to achieve with today's Internet architecture. In this paper, we introduce the NEAT transport architecture as a solution to this problem. NEAT is designed to offer a flexible and evolvable transport system, where applications communicate their transport-service requirements to the NEAT system in a generic, transport-protocol independent way. The best transport option is then configured at run time based on application requirements, network conditions, and available transport options. Through a set of real life mobile use case experiments, we demonstrate how applications with different properties and requirements could employ the NEAT system in multi-access environments, showing significant performance benefits as a result.
It is widely recognized that the Internet transport layer has become ossified, where further evolution has become hard or even impossible. This is a direct consequence of the ubiquitous deployment of middleboxes that hamper the deployment of new transports, aggravated further by the limited flexibility of the Application Programming Interface (API) typically presented to applications. To tackle this problem, a wide range of solutions have been proposed in the literature, each aiming to address a particular aspect. Yet, no single proposal has emerged that is able to enable evolution of the transport layer. In this work, after an overview of the main issues and reasons for transport-layer ossification, we survey proposed solutions and discuss their potential and limitations. The survey is divided into five parts, each covering a set of point solutions for a different facet of the problem space: 1) designing middlebox-proof transports, 2) signaling for facilitating middlebox traversal, 3) enhancing the API between the applications and the transport layer, 4) discovering and exploiting end-to-end capabilities, and 5) enabling user-space protocol stacks. Based on this analysis, we then identify further development needs towards an overall solution. We argue that the development of a comprehensive transport layer framework, able to facilitate the integration and cooperation of specialized solutions in an application-independent and flexible way, is a necessary step toward making the Internet transport architecture truly evolvable. To this end, we identify the requirements for such a framework and provide insights for its development.
Currently, Multipath TCP (MPTCP), a modification to standard TCP that enables the concurrent use of several network paths in a single TCP connection, is being standardized by the IETF. This paper provides a comprehensive evaluation of the use of MPTCP to reduce latency and thus improve the quality of experience, or QoE, for cloud-based applications. In particular, the paper considers the possible reductions in latency that could be obtained by using MPTCP and multiple network paths between a cloud service and a mobile end user. To obtain an appreciation of the expected latency performance for different types of cloud traffic, three applications are studied, Netflix, Google Maps, and Google Docs, representing typical applications generating high-, mid-, and low-intensity traffic. The results suggest that MPTCP could provide significant latency reductions for cloud applications, especially for applications such as Netflix and Google Maps. Moreover, the results suggest that MPTCP offers a reduced latency despite a few percent packet loss, and in spite of limited differences in the round-trip times of the network paths in an MPTCP connection. Still, larger differences in the round-trip times seem to significantly increase the application latency, especially for Netflix, Google Maps, and similar applications. Thus, to become an even better alternative for these applications, this paper suggests that the MPTCP packet scheduling policy should be changed: apart from the round-trip times of the network paths in a connection, it should also consider the difference in round-trip time between the network paths.
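The suggested scheduling change can be sketched as follows (hypothetical names and logic for illustration; the real MPTCP scheduler lives in the kernel): instead of simply picking the lowest-RTT subflow that has congestion-window space, a scheduler could refuse subflows whose RTT exceeds the fastest path's RTT by some factor, waiting for the fast path rather than sending data that arrives far out of order.

```python
def pick_subflow(subflows, max_rtt_ratio=2.0):
    """Pick a subflow for the next segment, or None to wait.

    `subflows` is a list of (name, rtt_seconds, has_cwnd_space) tuples.
    Like the default lowest-RTT-first policy, prefer the fastest subflow
    with window space; unlike it, skip subflows whose RTT exceeds the
    overall fastest path's RTT by more than `max_rtt_ratio`, reflecting
    the paper's suggestion to account for RTT differences between paths.
    """
    fastest_rtt = min(rtt for _, rtt, _ in subflows)
    available = [(rtt, name) for name, rtt, space in subflows if space]
    eligible = [(rtt, name) for rtt, name in available
                if rtt <= fastest_rtt * max_rtt_ratio]
    if not eligible:
        return None  # better to wait for the fast path than reorder badly
    return min(eligible)[1]

# WLAN (20 ms) busy, 3G (200 ms) idle: wait rather than send on the slow path
print(pick_subflow([("wlan", 0.020, False), ("3g", 0.200, True)]))  # None
```

With a large `max_rtt_ratio` the sketch degenerates into the default lowest-RTT scheduler, which sends on the slow path whenever the fast one is window-limited.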
The rapidly growing interest in untethered Internet connections, especially in terms of WLAN and 3G/4G mobile connections, calls for intelligent session management: a mobile device should be able to provide a reasonable end-user experience despite location changes, disconnection periods and, not least, handovers. As part of an effort to develop a SCTP-based session management framework that meets these criteria, we are studying ways of improving the SCTP handover delay for real-time traffic; especially the startup delay on the connection between a mobile device and the target access point. To obtain an appreciation of the theoretically feasible gains of optimizing the startup delay on the handover-target path, we have developed a model that predicts the transfer times of SCTP messages during slow start. This paper experimentally validates our model and demonstrates that it could be used to predict the message transfer times in a variable bitrate flow by approximating the variable flow with a constant ditto. It also employs our model to obtain an appreciation of the startup delay penalties incurred by slow start during handovers in typical mobile, real-time traffic scenarios.
Experiences in the use of the Internet as a delivery medium for multimedia-based applications have revealed serious deficiencies in its ability to provide the QoS required by multimedia applications. We propose an extension to TCP that addresses the QoS requirements of applications with soft real-time constraints. Although TCP has been found unsuitable for real-time applications, it can with minor modifications be adjusted to better comply with the QoS needs of applications with soft real-time requirements. Enhancing TCP with support for this group of applications is important since the congestion control mechanism of TCP assures stability of the Internet. In contrast, specialized multimedia protocols that lack appropriate congestion control can never be deployed on a large scale basis. Two factors of great importance for applications with soft real-time constraints are jitter and throughput. By relaxing the reliability offered by TCP, the extension gives better jitter characteristics and an improved throughput. The extension only needs to be implemented at the receiving side. The reliability provided is controlled by the receiving application, thereby allowing a flexible tradeoff between different QoS parameters. In this paper, our TCP extension is presented and analyzed. The analysis investigates how the different application-controlled parameters influence performance. Our analysis is supported by a simulation study that investigates the tradeoff between interarrival jitter, throughput, and reliability. The simulation results also confirm that the extended version of TCP still behaves in a TCP-friendly manner.
Data protection is an increasingly important issue in today’s communication networks. Traditional solutions for protecting data when transferred over a network are almost exclusively based on cryptography. As a complement, we propose the use of multiple physically separate paths to accomplish data protection. A general concept for providing physical separation of data streams together with a threat model is presented. The main target is delay-sensitive applications such as telephony signaling, live TV, and radio broadcasts that require only lightweight security. The threat considered is malicious interception of network transfers through so-called eavesdropping attacks. Application scenarios and techniques to provide physically separate paths are discussed.
SCTP congestion control includes the slow-start mechanism to probe the network for available bandwidth. In case of a path switch in a multihomed association, this mechanism may cause a sudden drop in throughput and increased message delays. By estimating the available bandwidth on the alternate path it is possible to utilize a more efficient startup scheme. In this paper, we analytically compare and quantify the degrading impact of slow start in relation to an ideal startup scheme. We consider three different scenarios where a path switch could occur. Further, we identify relevant traffic for these scenarios. Our results point out that the most prominent performance gain is seen for applications generating high traffic loads, like video conferencing. For this traffic, we have seen reductions in transfer time of more than 75% by an ideal startup scheme. Moreover, the results show an increasing impact of an improved startup mechanism with increasing RTTs.
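The gap between slow start and an ideal startup can be appreciated with a back-of-the-envelope sketch (a simplification that ignores delayed ACKs, losses, and SCTP's exact window rules): with an initial window of one segment doubling each RTT, the number of RTT rounds grows logarithmically in the transfer size, whereas an ideal startup that already knew the path bandwidth could fill the pipe from the first round.

```python
import math

def slow_start_rtts(total_segments, init_window=1):
    """RTT rounds to send `total_segments` with a window doubling per RTT."""
    sent, window, rounds = 0, init_window, 0
    while sent < total_segments:
        sent += window
        window *= 2
        rounds += 1
    return rounds

def ideal_rtts(total_segments, path_capacity_segments_per_rtt):
    """RTT rounds for an ideal startup that fills the path immediately."""
    return math.ceil(total_segments / path_capacity_segments_per_rtt)

# 100 segments over a path holding 64 segments per RTT:
print(slow_start_rtts(100))  # 7 rounds of doubling
print(ideal_rtts(100, 64))   # 2 rounds at full rate
```

This matches the abstract's observation that the gain of an improved startup grows with the RTT: the extra rounds cost one RTT each, so high-bandwidth, high-RTT paths suffer the most.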
The introduction of multimedia in the Internet imposes new QoS requirements on existing transport protocols. Since neither TCP nor UDP comply with these requirements, a common approach today is to use RTP/UDP and to relegate the QoS responsibility to the application. Even though this approach has many advantages, it also entails leaving the responsibility for congestion control to the application. Considering the importance of efficient and reliable congestion control for maintaining stability in the Internet, this approach may prove dangerous. Improved support at the transport layer is therefore needed. In this paper, a partially reliable transport protocol, PRTP-ECN, is presented. PRTP-ECN is a protocol designed to be both TCP-friendly and to better comply with the QoS requirements of applications with soft real-time constraints. This is achieved by trading reliability for better jitter characteristics and improved throughput. A simulation study of PRTP-ECN has been conducted. The outcome of this evaluation suggests that PRTP-ECN can give applications that tolerate a limited amount of packet loss significant reductions in interarrival jitter and improvements in throughput as compared to TCP. The simulations also verified the TCP-friendly behavior of PRTP-ECN.
With Voice over IP (VoIP) emerging as a viable alternative to the traditional circuit-switched telephony, it is vital that the two are able to intercommunicate. To this end, the IETF Signaling Transport (SIGTRAN) group has defined an architecture for seamless transportation of SS7 signaling traffic between a VoIP network and a traditional telecom network. However, at present, it is unclear if the SIGTRAN architecture will, in reality, meet the SS7 requirements, especially the stringent availability requirements. The SCTP transport protocol is one of the core components of the SIGTRAN architecture, and its failover mechanism is one of the most important availability mechanisms of SIGTRAN. This paper studies the impact of traffic load on the SCTP failover performance in an M3UA-based SIGTRAN network. The paper shows that cross traffic, especially bursty cross traffic such as SS7 signaling traffic, could indeed significantly deteriorate the SCTP failover performance. Furthermore, the paper stresses the importance of configuring routers in a SIGTRAN network with relatively small queues. For example, in tests with bursty cross traffic, and with router queues twice the bandwidth-delay product, failover times were measured which were more than 50% longer than what was measured with no cross traffic at all. Furthermore, the paper also identifies some properties of the SCTP failover mechanism that could, in some cases, significantly degrade its performance.
A packet switched wireless cellular system with wide area coverage and high throughput is proposed. The system is designed to be cost effective and to provide high spectral efficiency. It makes use of a combination of tools and concepts: - Smart antennas, both at base stations and mobiles, provide antenna gain and improve the signal to interference ratio. - The fast fading is predicted in both time and frequency and - a slotted OFDM radio interface is used, in which time-frequency slots are allocated adaptively to different mobile users, based on their predicted channel quality. This enables efficient scheduling among sectors and users as well as fast adaptive modulation and power control. We here estimate the spectral efficiency of the uplink and downlink. Calculations based on simplifying assumptions illustrate how the channel capacity grows with the number of simultaneous users and the number of antenna elements. A high efficiency is attained already for moderate numbers of users ...
A packet switched wireless cellular system with wide area coverage, high throughput and high spectral efficiency is proposed. Smart antennas at both base stations and mobiles improve the antenna gain and improve the signal to interference ratio. The small-scale fading is predicted in both time and frequency and a slotted OFDM radio interface is used, in which time-frequency bins are allocated adaptively to different mobile users, based on their predicted channel quality. This enables efficient scheduling among sectors and users as well as fast adaptive modulation and power control. We here estimate the spectral efficiency of the suggested downlink. The resulting channel capacity grows with the number of simultaneous users and with the number of antenna elements in terminals. A high efficiency, around 4 bits/s/Hz, is attained already for moderate numbers of users and terminal antennas. An outline is given of research pursued within the PCC Wireless IP Project to improve and investiga...
A packet switched wireless cellular system with wide area coverage and high throughput is proposed. It is designed to be cost effective and to provide high spectral efficiency. The high performance is achieved by the use of long term channel predictions, in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. Calculations based on reasonable simplifying assumptions indicate that a tremendous capacity can be attained for moderate numbers of users and terminal antennas. We also briefly discuss other means for performance improvements such as alternatives to standard TCP, interlayer interaction /communication, and the use of positioning information. For additional information and references, please see http://www.signal.uu.se/Research/PCCwirelessIP.html
A packet switched wireless cellular system with wide area coverage and high throughput is proposed. The system is designed to be cost effective and to provide high spectral efficiency. It makes use of a combination of tools and concepts: - Smart antennas, both at base stations and mobiles, provide antenna gain and improve the signal to interference ratio. - The fast fading is predicted in both time and frequency and - a slotted OFDM radio interface is used, in which time-frequency slots are allocated adaptively to different mobile users, based on their predicted channel quality. This enables efficient scheduling among sectors and users as well as fast adaptive modulation and power control. We here outline the uplink of the radio interface. Calculations based on simplifying assumptions illustrate how the channel capacity grows with the number of simultaneous users and the number of antenna elements. A high capacity can be attained already for moderate numbers of users and base station...
This paper analyzes three existing tunable security services based on a conceptual model. The aim of the study is to examine the tunable features provided by the different services in a structured and consistent way. This implies that, for each service, user preferences as well as environment and application characteristics that influence the choice of a certain security configuration are identified and discussed.
The underlying physical link is transparent in most IP-based networks. Contrary to this commonly accepted design rule, we propose that the applications should be made aware of the channel conditions. This is especially fruitful for wireless links where the performance is many orders of magnitude lower than in fixed networks. Instead of wasting resources to make the wireless link behave as a fixed link, the application could take care of the adaptation to the channel condition. The presented solution assumes that soft information consisting of a reliability measure of the received bits is produced in the physical layer. This soft information is then propagated to the application. The application may use this information to distinguish between errors caused by fading and network congestion. Another possible use for soft information is to make the applications adapt the source and channel codes to the current channel condition and thus maximize performance.
To achieve the best possible QoS tradeoff between security and performance for networked applications, a tunable and differential treatment of security is required. In this paper, we present the design and implementation of a tunable encryption service. The proposed service is based on a selective encryption paradigm in which the applications can request a desired encryption level. Encryption levels are selected by the applications at the inception of sessions, but can be changed at any time during their lifetime. A prototype implementation is described along with an initial performance evaluation. The experimental results demonstrate that the proposed service offers a high degree of security adaptiveness at a low cost.
In this paper, we investigate the tunable features provided by Mix-Nets and Crowds using a conceptual model for tunable security services. A tunable security service is defined as a service that has been explicitly designed to offer various security levels that can be selected at run-time. Normally, Mix-Nets and Crowds are considered to be static anonymity services, since they were not explicitly designed to provide tunability. However, as discussed in this paper, they both contain dynamic elements that can be used to achieve a tradeoff between anonymity and performance.
KauNet is an emulation system that allows deterministic placement of packet losses and bit-errors as well as more precise control over bandwidth and delay changes. KauNet is an extension to the well-known Dummynet emulator in FreeBSD and allows the use of pattern and scenario files to increase control and repeatability. This report provides a comprehensive description of the usage of KauNet, as well as a technical description of the design and implementation of KauNet.
This paper presents a technique to improve the performance of TCP and the utilization of wireless networks. Wireless links exhibit high rates of bit errors, compared to communication over wireline or fiber. Since TCP cannot separate packet losses due to bit errors from those due to congestion, all losses are treated as signs of congestion and congestion avoidance is initiated. This paper explores the possibility of accepting TCP packets with an erroneous checksum, to improve network performance for those applications that can tolerate bit errors. Since errors may be in the TCP header as well as the payload, the possibility of recovering the header is discussed. An algorithm for this recovery is also presented. Experiments with an implementation have been performed, which show that large improvements in throughput can be achieved, depending on link and error characteristics.
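The root of the problem is that the Internet checksum TCP uses covers the header and payload together, so a failed check alone cannot localize the error. A sketch of the RFC 1071 algorithm (one's-complement sum of 16-bit words, illustrative rather than the paper's recovery algorithm) makes this concrete:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement 16-bit Internet checksum (RFC 1071 style sketch)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold end-around carry
    return (~total) & 0xFFFF

segment = b"example payload!"
ck = internet_checksum(segment)
# A segment followed by its own checksum sums to zero on verification,
# but a non-zero result says nothing about *where* the bits flipped.
assert internet_checksum(segment + ck.to_bytes(2, "big")) == 0
```

Because a single flipped bit anywhere in the segment invalidates the whole sum, an approach like the paper's must recover or validate the header separately before the (possibly corrupted) payload can be delivered to an error-tolerant application.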
This paper presents a wireless link and network emulator, along with experiments and validation against the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading downlinks (base to terminals) link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. The emulator has been used to experimentally investigate the resulting interaction between the transport layer and the link layer. The paper gives an overview of the emulator design, and presents experimental results with three different TCP variants in combination with various link layer characteristics. For additional information and references, please see http://www.signal.uu.se/Research/PCCwirelessIP.html
This paper presents a wireless link and network emulator, based upon the "Wireless IP" 4G system proposal from Uppsala University and partners. In wireless fading down-links (base to terminals) link-level frames are scheduled and the transmission is adapted on a fast time scale. With fast link adaptation and fast link level retransmission, the fading properties of wireless links can to a large extent be counteracted at the physical and link layers. A purpose of the emulator is to investigate the resulting interaction with transport layer protocols. The emulator is built on Internet technologies, and is installed as a gateway between communicating hosts. The paper gives an overview of the emulator design, and presents preliminary experiments with three different TCP variants. The results illustrate the functionality of the emulator by showing the effect of changing link layer parameters on the different TCP variants. For additional information and references, please see htt...
This paper argues for the usefulness of enhancing current network emulation practices to also include more control over loss and bit-errors. By using an extended Dummynet emulator we illustrate the beneficial effects of being able to control the placement of losses. Both the possibility to get additional knowledge about protocol behavior, as well as statistical benefits such as paired experiments, are discussed. By extending the control to also include bit-error generation, a finer level of abstraction is provided, which makes it possible to also examine bit-error sensitive protocol behavior. Time-driven bit-error insertion can be used to emulate the time-varying bit-error characteristics of a wireless link in a repeatable manner, and data-driven bit-errors can be useful when examining protocol details.
Network emulation has for a long time been an important tool for evaluating the performance of communication protocols. By emulating network characteristics, such as restricted bandwidth, delay and losses, knowledge about the behavior and performance of actual protocol implementations can be obtained. This paper focuses on the generation of losses in network emulators and shows the beneficial effects of being able to control the generation of losses in a precise way. Both the possibility to get additional knowledge about a protocol implementation's behavior, as well as statistical benefits such as paired experiments, are discussed. By extending the loss generation to also include bit-error generation, in addition to packet losses, a finer level of abstraction is provided. Deterministic bit-error generation allows detailed and repeatable studies of bit-error sensitive protocol behavior. TCP and a loss-differentiating variant of TCP are used to illustrate the utility of impr...
This paper presents an experiment designed to compare a contract-based programming method with a reference programming method based on exceptions. The purpose was to evaluate whether contracts would shorten the development time, improve work satisfaction and increase the quality of the resulting software program. The experiment was carried out in a project work course for students in the computer science program at Karlstad University. The students were to solve an assignment in groups of four within a period of ten weeks. Half of the groups used the contract-based method and the other half the exception based method. For statistical analysis we gathered data on time consumption and work satisfaction on daily report forms. The results show that there was a gain in the time spent on implementation of the assignment when the contract-based method was used, but show no significant difference in total time consumption. The results give a weak indication that work satisfaction was slightl...
When designing a software module or system, a software engineer needs to consider and differentiate between how the system handles external and internal errors. External errors must be tolerated by the system, while internal errors should be discovered and eliminated. This paper presents a development strategy based on design contracts to minimize the amount of internal errors in a software system while accommodating external errors. A distinction is made between weak and strong contracts that corresponds to the distinction between external and internal errors. According to the strategy, strong contracts should be applied initially to promote the correctness of the system. Before release, the contracts governing external interfaces should be weakened and error management of external errors enabled. This transformation of a strong contract to a weak one is harmless to client modules. In addition to presenting the strategy, the paper also presents a case study of an indust...
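The strong-to-weak contract transformation can be illustrated with a minimal Python sketch (illustrative only; the paper discusses design-by-contract in general, not this API). The strong variant makes the precondition a caller obligation, so a violation is an internal error to be found during development; the weakened release variant tolerates the same condition as an external error.

```python
def require(condition, message):
    """Contract check helper: fail loudly on a violated precondition."""
    if not condition:
        raise AssertionError(message)

class Stack:
    """Development version with a strong contract on pop()."""
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        # Strong contract: the caller must guarantee the stack is
        # non-empty; a violation signals an internal error.
        require(self._items, "precondition violated: pop on empty stack")
        return self._items.pop()

class WeakStack(Stack):
    """Release version: the external-interface contract is weakened."""
    def pop(self):
        # Weak contract: the empty-stack case is now tolerated and
        # reported, instead of being a caller obligation.
        if not self._items:
            return None
        return self._items.pop()
```

Note that weakening is harmless to clients, as the abstract states: any caller that satisfied the strong precondition behaves identically against `WeakStack`.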
One important aspect when teaching OO technology is the semantics of programming. Much time is traditionally spent on syntax and language mechanisms whereas the semantics is given less time. To remedy this problem, we try to introduce a semantic thinking throughout the entire computer science education. We have developed a contract-based programming method to enforce the semantic aspects and have performed course experiments to see what advantages such a method can have on the students' abilities. This paper presents the method and an experiment performed in a course on project work and Java to compare the method to a standard programming method. The contract-based method was positively received by the students, who reported that working with the method felt natural. The results of the experiment show that the work satisfaction is slightly higher when using this contract-based method. They also show that there is a gain in the time spent on the assignment when the contract-base...
The Stream Control Transmission Protocol (SCTP) was developed to support the transfer of telephony signaling over IP networks. One of the ambitions when designing SCTP was to offer a robust transfer of traffic between hosts. For this reason SCTP was designed to support multihoming, which gives the possibility to set up several paths between the same hosts in the same session. If the primary path between a source machine and a destination machine breaks down, the traffic may still be sent to the destination by utilizing one of the alternate paths. The failover that occurs when changing path is to be transparent to the application. Consequently, the time between occurrence of a break on the primary path until the traffic is run smoothly on one of the alternate paths is important. This paper presents experimental results concerning SCTP failover performance. The focus in this paper is to evaluate the impact of the SACK delay and link delay on the failover time as well as on the...
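As a rough illustration of why SCTP failover can take a while, the sketch below computes the time until Path.Max.Retrans consecutive timeouts mark the primary path as failed, using the standard exponential RTO backoff from the SCTP specification. It is a simplified model that ignores the SACK-delay and link-delay effects the paper actually measures.

```python
def failover_time(rto_initial, rto_max, path_max_retrans):
    """Rough SCTP failover model (sketch): the primary path is declared
    failed after Path.Max.Retrans consecutive timeouts, with the RTO
    doubling after each timeout, capped at RTO.Max."""
    elapsed, rto = 0.0, rto_initial
    # path_max_retrans failed retransmissions plus the initial timeout
    for _ in range(path_max_retrans + 1):
        elapsed += rto
        rto = min(2 * rto, rto_max)
    return elapsed

# With RTO.Initial = 1 s and Path.Max.Retrans = 5, the backoff series
# 1 + 2 + 4 + 8 + 16 + 32 already gives about a minute before failover.
print(failover_time(1.0, 60.0, 5))  # 63.0
```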
The stream control transmission protocol (SCTP) is a fairly new transport protocol that was initially designed for carrying signaling traffic in IP networks. SCTP offers a reliable end-to-end (E2E) transport. Compared to TCP, SCTP provides a much richer set of transport features such as message-oriented transfer, multistreaming to handle head-of-line blocking, and multihoming for enhanced failover. These are
The present Internet limits the performance of applications that need real-time interaction. This is in part because the design of the network has been optimised to boost throughput, maximising efficiency for bulk applications. However, changes in use mean that an increasing number of applications now depend on timely delivery. One of the targets of the RITE project is to reduce Internet transport latency in support of such applications. Initial results from the project on how end nodes can be optimized for more timely error recovery are presented in this poster.
The Stream Control Transmission Protocol (SCTP) was designed by the IETF as a viable solution for IP-based signaling transport. Signaling traffic is different from ordinary TCP bulk traffic in many ways. One example is that the requirement of timely delivery usually is much stricter. However, the management of the SCTP retransmission timer is not designed considering this requirement. Basically, the management algorithm unnecessarily extends the time needed for loss detection. This paper presents a new management algorithm that is able to maintain a correct state of the retransmission timer, which eliminates this particular problem. The paper also compares the performance of the two management algorithms in an emulated signaling environment, using the lksctp implementation of SCTP. The results show that the proposed algorithm is able to provide significant reductions in loss recovery time.
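The timer-management problem can be sketched as follows (a simplification, not the paper's actual algorithm): the standard scheme restarts the retransmission timer when a SACK arrives, while a corrected scheme keeps the timer anchored to the transmission time of the oldest outstanding chunk, so loss detection is not deferred.

```python
def rto_expiry_standard(send_times_ms, sack_time_ms, rto_ms):
    """Standard management (sketch): the timer is restarted when a SACK
    arrives, so expiry ignores how old the oldest outstanding chunk is."""
    return sack_time_ms + rto_ms

def rto_expiry_anchored(send_times_ms, sack_time_ms, rto_ms):
    """Anchored management (sketch): expiry is measured from the oldest
    outstanding chunk's transmission time, shortening loss detection."""
    return min(send_times_ms) + rto_ms

# Chunks sent at t=0 and t=100 ms are still outstanding when a SACK
# arrives at t=120 ms; RTO is 200 ms.
outstanding = [0, 100]
print(rto_expiry_standard(outstanding, 120, 200))  # 320
print(rto_expiry_anchored(outstanding, 120, 200))  # 200
```

The 120 ms difference in this toy example is exactly the kind of unnecessary extension of loss detection time the abstract describes.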
Web services have today become an important technology for information exchange over the Internet. Although web services are designed to support interoperable machine-to-machine interaction, humans are often the final recipients of the produced information. This makes the performance of web services important from a user perspective. In this paper we present a comprehensive experimental evaluation of the response times of web services. The limited amount of data transferred in a typical web service message makes its performance sensitive to packet loss in the network, and we focus our investigation on this issue. Using a web service response time model, we evaluate the performance of two typical web services over a wide range of network delays and packet loss patterns. The experiments are based on network emulation and two real protocol implementations are examined. The experimental results indicate that a single packet loss may more than double the response times of the evaluated ...
The Stream Control Transmission Protocol (SCTP) was developed to be a viable solution for transportation of signaling traffic within IP-based networks. Signaling traffic is different from ordinary bulk traffic in many ways. One example of this is that the requirements of timely delivery usually are much stricter. However, the loss recovery mechanisms in SCTP are not fully optimized for these requirements. For instance, if packet loss occurs when the amount of outstanding data is small, an SCTP sender might be forced to rely on lengthy timeouts for loss recovery. This paper presents a number of proposals that try to solve this particular problem, with focus on the Early Retransmit mechanism. We propose a modification to Early Retransmit, to adapt it to signaling scenarios, and evaluate its performance experimentally. The results show that the modified Early Retransmit mechanism is able to provide significant reductions in loss recovery time. In some cases, the time needed t...
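Standard Early Retransmit logic (in the style of RFC 5827, not the paper's modified variant) can be sketched as a lowered duplicate-acknowledgment threshold when little data is outstanding:

```python
def should_early_retransmit(outstanding, dup_sacks):
    """Early Retransmit (sketch): with fewer than four outstanding
    segments, lower the duplicate-SACK threshold so a sender does not
    have to wait for a lengthy retransmission timeout to detect loss."""
    if outstanding >= 4:
        # Enough outstanding data: regular fast retransmit applies.
        return dup_sacks >= 3
    # Small amount of outstanding data: fire after outstanding - 1
    # duplicate SACKs (only possible with at least two segments out).
    return outstanding > 1 and dup_sacks >= outstanding - 1
```

With only two segments outstanding, a single duplicate SACK is enough to trigger retransmission, which is precisely the small-outstanding-data case the abstract targets.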
This is a poster on the KauNet network emulation system. As compared to other emulation systems, KauNet is deterministic. The poster shows how patterns enforce determinism and how patterns can be used to emulate a satellite channel at the IP level.
A PowerPoint presentation is given. The paper discusses the experimental evaluation of the performance costs of different IKEv2 authentication methods. The Internet Key Exchange version 2 (IKEv2) protocol negotiates security associations for IPsec, authenticates the peer, and supports extensible authentication protocol (EAP) methods; it is a candidate technology in future AAA frameworks, which is a major issue in next-generation wireless networks.
... the HA and the mobile prefix discovery use encapsulating security payload (ESP) in transport mode with a non-null data origin authentication algorithm and null encryption. Home testing messages are protected with ESP in tunnel mode with a non-null encryption ...
ABSTRACT Tackling security and performance issues in ubiquitous computing has turned out to be a challenging task due to the heterogeneity of both the environment and the applications. Services must satisfy several constraints caused by the security, performance, and other requirements of applications, users and providers. This paper introduces a new formalized decision model for security solution suitability analysis. It supports the design of dynamic security services and can be used by security managers making runtime decisions. Our solution improves previously proposed AHP-based decision models. The MAHP decision engine is applied using a new approach. Furthermore, we extend the MAHP algorithm to handle the non-fulfillment of requirements. This results in more accurate decisions, and better fulfillment of the design criteria. The use of the proposed decision model is illustrated through an IKEv2 authentication method selection problem.
ABSTRACT This paper describes the design of secure socket SCTP (SS-SCTP). SS-SCTP is a new end-to-end security solution that uses the AUTH extension for integrity protection of messages and TLS for mutual authentication and key negotiation. Data confidentiality is provided in SS-SCTP through encryption at the socket layer. SS-SCTP aims to offer a high degree of security differentiation based on features in the base SCTP protocol as well as in standardized extensions. The flexible message concept provided in the base protocol plays a central role in the design of SS-SCTP. In the paper, a comparison of the message complexity produced by SS-SCTP, SCTP over IPsec, and TLS over SCTP is also presented. The main conclusion that can be drawn from the comparison is that, depending on the traffic pattern, SS-SCTP produces either less or similar message overhead compared to the standardized solutions when transferring user data.
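As a generic illustration of the integrity-protection idea (not the AUTH extension's actual chunk format or key negotiation), an HMAC-based protect/verify pair might look like this:

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 digest size in bytes

def protect(message: bytes, key: bytes) -> bytes:
    """Append an HMAC tag so the receiver can detect tampering.
    Generic sketch of message integrity protection; SS-SCTP's real
    design uses the SCTP AUTH extension and TLS-negotiated keys."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify(frame: bytes, key: bytes) -> bytes:
    """Split off the tag, recompute it, and reject modified frames."""
    message, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return message
```

Per-message tags like this are what drive the message-overhead comparison in the abstract: each protected message grows by a fixed tag length, regardless of payload size.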
In this paper a GPRS measurement testbed for TCP performance evaluation is presented. Unlike simulations and live measurements, the testbed combines the use of real network equipment and protocol implementations with a precise control over radio channel conditions. Some initial TCP measurements obtained with the GPRS testbed are also presented. The effect of varying numbers of PDCH and of buffering
978-1-4244-4439-7/09/$25.00 ©2009 IEEE. Impact of Packet Aggregation on TCP Performance in Wireless Mesh Networks. Jonas Karlsson, Dept. of Computer Science, Karlstad University, [email protected]; Andreas Kassler, Dept. ...
Abstract Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. This ...
ABSTRACT Routing packets over multiple disjoint paths towards a destination can increase network utilization by load-balancing the traffic over the network. In wireless mesh networks, multi-radio multi-channel nodes are often used to create a larger set of interference-free paths thus increasing the chance of load-balancing. The drawback of load-balancing is that different paths might have different delay properties, causing packets to be reordered. This can reduce TCP performance significantly, as reordering is interpreted as a sign of congestion. Packet reordering can be avoided by letting the network layer forward traffic strictly on flow-level. This would avoid the negative drawbacks of packet reordering, but will also limit the ability to achieve optimal network throughput. On the other hand, there are several proposals that try to mitigate the effects of reordering at the transport layer. In this paper, we perform an in-depth evaluation of such TCP reordering mitigations in multi-radio multi-channel wireless mesh networks when using multi-path forwarding. We evaluate two TCP reordering mitigation techniques implemented in the Linux kernel. The transport layer mitigations are compared using different multi-path forwarding strategies. Our findings show that, in general, flow-level forwarding gives the best TCP performance and that transport-layer reordering mitigations can only marginally improve performance.
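Flow-level forwarding can be sketched by hashing the flow identifier, so every packet of one flow maps to the same path and cannot be reordered across paths. The function below is an illustrative assumption, not the forwarding code evaluated in the paper.

```python
import hashlib

def pick_path(flow_tuple, paths):
    """Flow-level forwarding (sketch): hash the 5-tuple identifying a
    flow so all of its packets take the same path, trading some
    load-balancing granularity for reordering-free delivery."""
    digest = hashlib.sha256(repr(flow_tuple).encode()).hexdigest()
    return paths[int(digest, 16) % len(paths)]

# Every packet of this TCP flow is pinned to one of the three paths.
flow = ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")
paths = ["path-a", "path-b", "path-c"]
chosen = pick_path(flow, paths)
```

Because the mapping is a pure function of the flow identifier, it needs no per-flow state at forwarding nodes, which is why hash-based flow pinning is a common way to realize this strategy.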
ABSTRACT This paper presents an experimental evaluation carried out in an academic environment. The goal of the experiment was to compare how different methods of documenting semantic information affect software reuse. More specifically, the goal was to measure if there were any differences between the methods with regard to the time needed to implement changes to existing software. Four methods of documentation were used: executable contracts, non-executable contracts, Javadoc-style documentation and sequence diagrams. The results indicate that executable contracts demanded more time than the other three methods and that sequence diagrams and Javadoc demanded the least time.
ABSTRACT A partially reliable extension of SCTP, PR-SCTP, has been considered as a candidate for prioritizing content sensitive traffic at the transport layer. PR-SCTP offers a flexible QoS trade-off between timeliness and reliability. Several applications such as streaming multimedia, IPTV transmission, and SIP signaling have been shown to benefit from this. Our previous work, however, suggests that the performance gain can be very much reduced in a network with competing traffic. One of the most important factors in this case is the inefficiency in the forward tsn mechanism in PR-SCTP. In this paper, we thoroughly examine the forward tsn inefficiency and propose a solution to overcome it that takes advantage of the NR-SACKs mechanism available in FreeBSD. Moreover, we implement and evaluate the proposed solution. Our initial set of results show a significant performance gain for PR-SCTP with NR-SACKs. In some scenarios the average message transfer delay is reduced by more than 75%.
ABSTRACT The performance of applications in wireless networks is partly dependent upon the link configuration. Link characteristics vary with frame retransmission persistency, link frame retransmission delay, adaptive modulation strategies, coding, and more. The link configuration and channel conditions can lead to packet loss, delay and delay variations, which impact different applications in different ways. A bulk transfer application may tolerate delays to a large extent, while packet loss is undesirable. On the other hand, real-time interactive applications are sensitive to delay and delay variations, but may tolerate packet loss to a certain extent. This paper contributes a study of the effect of link frame retransmission persistency and delay on packet loss and latency for real-time interactive applications. The results indicate that a reliable retransmission mechanism with fast link retransmissions in the range of 2-8 ms is sufficient to provide an upper delay bound of 50 ms over the wireless link, which is well within the delay budget of voice over IP applications. For additional information and references, please see http://www.signal.uu.se/Research/PCCwirelessIP.html
ABSTRACT Internet-based applications that require low latency are becoming more common. Such applications typically generate traffic consisting of short, or bursty, TCP flows. As TCP is instead designed to optimize the throughput of long bulk flows, there is an apparent mismatch. To overcome this, a lot of research has recently focused on optimizing TCP for short flows as well. In this paper, we identify a performance problem for short flows caused by the metric caching conducted by the TCP control block interdependence mechanisms. Using this metric caching, a single packet loss can potentially ruin the performance for all future flows to the same destination by making them start in congestion avoidance instead of slow-start. To solve this, we propose an enhanced selective caching mechanism for short flows. To illustrate the usefulness of our approach, we implement it in both Linux and FreeBSD and experimentally evaluate it in a real test-bed. The experiments show that the selective caching approach is able to reduce the average transmission time of short flows by up to 40%.
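The selective-caching idea can be sketched as follows. The byte threshold and names are illustrative assumptions, not the mechanism the authors implemented in Linux and FreeBSD; the point is only that a short, unlucky flow should not poison the cached metrics for all future flows to the same destination.

```python
def update_metric_cache(cache, dst, ssthresh, bytes_sent,
                        min_flow_bytes=64 * 1024):
    """Selective caching (sketch): only flows that sent enough data may
    update the cached ssthresh for a destination. A short flow that hit
    a single loss therefore cannot force every future flow to start in
    congestion avoidance instead of slow-start."""
    if bytes_sent >= min_flow_bytes:
        cache[dst] = ssthresh
    return cache

cache = {}
# A 2 KB flow that suffered a loss: its pessimistic ssthresh is ignored.
update_metric_cache(cache, "203.0.113.5", ssthresh=4380, bytes_sent=2048)
# A 1 MB bulk flow: its measurement is considered trustworthy and kept.
update_metric_cache(cache, "203.0.113.5", ssthresh=87380, bytes_sent=1 << 20)
```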
ABSTRACT From being regarded as a pathological event, packet reordering is now considered to be naturally prevalent within the Internet. When packets are reordered, the performance of transport protocols like TCP can be severely hurt. To overcome performance problems a number of mitigations have been proposed. Most proposals, however, lack evaluations using real protocol implementations and good models of packet reordering. In this paper we highlight the need for detailed reordering models, and implement support for such models in the KauNet network emulator. To demonstrate the importance of using detailed models we present an experimental example.
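A pattern-driven reordering model might be expressed as below, where each pattern entry displaces one packet by a fixed number of positions. This is a sketch, not KauNet's actual pattern format; it only illustrates how a deterministic pattern yields exact, repeatable reordering events.

```python
def apply_reordering_pattern(packets, displaced):
    """Pattern-driven reordering (sketch): `displaced` maps a packet's
    1-indexed position to how many positions it is held back. Sorting
    on (target position, original position) keeps everything else in
    order and is fully deterministic across runs."""
    keyed = [(i + displaced.get(i, 0), i, p)
             for i, p in enumerate(packets, start=1)]
    return [p for _, _, p in sorted(keyed)]

# Packet 2 is held back by two positions -- always, in every run.
print(apply_reordering_pattern([1, 2, 3, 4, 5], {2: 2}))  # [1, 3, 2, 4, 5]
```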
ABSTRACT The Stream Control Transmission Protocol (SCTP) was designed by the IETF as a viable solution for transportation of signaling traffic within IP-based networks. Signaling traffic is different from ordinary TCP bulk traffic in many ways. One example is that the requirement of timely delivery usually is much stricter. However, the management of the SCTP retransmission timer is not optimally designed considering this requirement. Basically, the management algorithm unnecessarily extends the time needed for loss detection. This paper presents a new management algorithm that is able to maintain a correct state of the retransmission timer, which eliminates this particular problem. In addition, the paper also compares the performance of the two management algorithms in an emulated signaling environment, using the lksctp implementation of SCTP. The results show that the proposed algorithm is able to provide significant reductions in loss recovery time. In some cases, the time needed to recover from packet loss is reduced by as much as 43%.
ABSTRACT Packet reordering is now considered naturally prevalent within complex networks like the Internet. When packets are reordered, the performance of transport protocols like TCP is severely hurt. To overcome performance issues a number of mitigations have been proposed. While evaluations have shown the success of such mitigations, most have not considered realistic scenarios where other impairments are present. Furthermore, most studies only evaluate the performance of long-lived TCP flows, although short-lived flows are the most common. In this paper we evaluate Linux's built-in reordering mitigations and the TCP-NCR proposal using real protocol implementations. The results show that Linux and TCP-NCR are able to provide good protection against reordering when no other impairments are present. For flows that also experience packet loss, the performance is dominated by the negative effect of these losses. Results also indicate that short-lived flows are sensitive to how reordering mitigation is conducted. Linux was able to improve the performance of short flows slightly, while TCP-NCR performed worse than TCP without reordering protection.
ABSTRACT Syslog is one of the basic methods for event logging in computer networks. Log messages that are generated by syslog can be used for a number of purposes, including optimizing system performance, system auditing, and investigating malicious activities in a computer network. Considering all these attractive uses, both timeliness and reliability are needed when syslog messages are transported over a network. The unreliable transport protocol UDP was specified in the original syslog specification; later a reliable transport service based on TCP was also proposed. However, TCP is a costly alternative in terms of delay. In our previous work, we introduced the partially reliable extension of SCTP, PR-SCTP, as a transport service for syslog, trading reliability against timeliness by prioritizing syslog messages. In this work, we first model syslog data using real syslog traces from an operational network. The model is then used as input in the performance evaluation of PR-SCTP. In the experiments, real congestion is introduced in the network by running several competing flows. Although PR-SCTP clearly outperformed TCP and SCTP in our previous work, our present evaluations show that PR-SCTP performance is largely influenced by the syslog data size characteristics.
ABSTRACT This paper describes the design and implementation of secure socket SCTP (S2SCTP). S2SCTP is a new multi-layer, end-to-end security solution for SCTP. It uses the AUTH protocol extension of SCTP for integrity protection of both control and user messages; TLS is the proposed solution for authentication and key agreement; data confidentiality is provided through encryption and decryption at the socket library layer. S2SCTP is designed to offer as much security differentiation support as possible using standardized solutions and mechanisms. In the paper, S2SCTP is also compared to SCTP over IPsec and TLS over SCTP in terms of packet protection, security differentiation, and message complexity. The following main conclusions can be drawn from the comparison. S2SCTP compares favorably in terms of offered security differentiation and message overhead. Confidentiality protection of SCTP control information is, however, only offered by SCTP over IPsec.
The current wireless network landscape comprises a plethora of technologies including WLAN, WiMAX and 3G, and not much speaks for a radical change of the state of affairs in the near future. In light of this, it becomes pivotal to facilitate vertical handover between different types of wireless networks. Although a large number of vertical handover schemes have been proposed in the past several years, the majority of the proposed solutions reside in the network and/or link layer -- e.g., Mobile IP and various IEEE 802.21 schemes -- and relatively few are transport-layer solutions. However, we think transport-layer solutions are often attractive, particularly in cases where there are no economic incentives to upgrade the existing network infrastructure. To this end, we have designed a lightweight, transport-level mobility framework based on the Stream Control Transmission Protocol (SCTP) and its extension for dynamic address reconfiguration. The framework API has been kept very small and closely aligned with the SCTP sockets extensions, which makes porting of existing applications fairly straightforward. To demonstrate its usefulness for low-power tablets and smart phones, we have implemented our framework on a Motorola Xoom tablet running the Android OS. Our initial proof-of-concept experiment gave satisfactory results with a handover performance on par with that of other vertical handover solutions.
Abstract We focus on wireless multimedia communication and investigate how cross-layer information can be used to improve performance at the application layer, using JPEG2000 as an example. The cross-layer information is in the form of soft information from the ...
Performance Analysis of IPsec in Mobile IPv6 Scenarios. Zoltán Faigl, Peter Fazekas, Department of Telecommunications, Budapest University of Technology and Economics, Budapest, Hungary. {[email protected]}, {[email protected]} ...
... of Computer Science, Karlstad University, SE-651 88 Karlstad, Sweden. Email: {Annika.Wennstrom, Anna.Brunstrom}@kau.se ... The client in this testbed is a laptop connected with a serial cable to a GPRS terminal at the R reference point. ...
... SE-651 88 Karlstad, Sweden. {Annika.Wennstrom, Johan.Garcia, Anna.Brunstrom}@kau.se ... The use of soft GSM software on a laptop made it possible to access this interface, which was physically available on an RS-232 cable between the laptop and the GSM phone. ...
To reduce cost and provide more flexible services, telecommunication operators are currently replacing traditional telephony networks with IP-networks. To support the requirements of telephony signaling in IP-networks, SCTP was standardized. SCTP solves a number of problems that follow from using TCP for telephony signaling transport. However, the design of SCTP is still largely based on TCP, and most of SCTP’s
ABSTRACT Secure communications have a key role in future networks and applications. Information security provisions such as authorization, authentication, and encryption must be added to current communications protocols. To accomplish this, each protocol must be reexamined to determine the impact on performance of adding such security services. This paper presents an experimental evaluation of the performance costs of a wide variety of authentication methods over IKEv2 in real and partly emulated scenarios of next generation wireless networks. The studied methods are pre-shared keys (PSKs), extensible authentication protocol (EAP) using MD5, SIM, TTLS-MD5, TLS, and PEAP-MSCHAPv2. For the EAP-based methods, RADIUS is used as the authentication, authorization, and accounting (AAA) server. Different lengths of certificate chains are studied in case of the TLS-based methods, i.e., TTLS-MD5, TLS, and PEAP-MSCHAPv2. The paper first presents a brief overview of the considered authentication methods. Then, a comparison of the costs for message transfers and computations associated with the authentication methods is provided. The measurement results are verified through a simple analysis, and interpreted by discussing the main contributing factors of the costs. The measurement results illustrate the practical costs involved for IKEv2 authentication, and the implications of the use of different methods are discussed. Copyright © 2009 John Wiley & Sons, Ltd.
ABSTRACT In GPRS networks, excessive buffering has a negative effect on TCP as the round trip times become very long. Measurements with different buffer settings indicate that the queueing delay can be reduced by orders of magnitude with a smaller buffer, without significantly degrading TCP throughput. The measurements are conducted in a GPRS testbed consisting of real network nodes.
ABSTRACT Wireless mesh networks (WMNs) based on the IEEE 802.11 standard are becoming increasingly popular as a viable alternative to wired networks. WMNs can cover large or difficult-to-reach areas with low deployment and management costs. Several multi-path routing algorithms have been proposed for such networks with the objective of load-balancing traffic across the network and providing robustness against node or link failures. Packet aggregation has also been proposed to reduce the overhead associated with the transmission of frames, which is not negligible in IEEE 802.11 networks. Unfortunately, multi-path routing and packet aggregation do not work well together, as they pursue different objectives. Indeed, while multi-path routing tends to spread packets among several next-hops, packet aggregation works more efficiently when several packets destined to the same next-hop are aggregated and sent together in a single MAC frame. In this paper, we propose a technique, called aggregation-aware forwarding, that can be applied to existing multi-path routing algorithms to let them effectively exploit packet aggregation and thus significantly increase network performance. In particular, the proposed technique does not modify the path computation phase; it only influences forwarding decisions by taking the state of the sending queues into account. We demonstrate our proposed technique by applying it to Layer-2.5, a previously proposed multi-path routing and forwarding paradigm for WMNs. A thorough performance evaluation with the ns-3 network simulator shows that our technique improves performance both in terms of network throughput and end-to-end delay.
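The forwarding idea the abstract describes can be sketched as follows: among the next-hops proposed by the multi-path routing layer, prefer one whose sending queue already holds packets for the same destination, so the new packet can be aggregated into a single MAC frame; otherwise fall back to the least-loaded candidate. This is a minimal sketch under assumed interfaces, not the paper's Layer-2.5 implementation.

```python
# Sketch of aggregation-aware forwarding (assumed, simplified interface).
from collections import defaultdict, deque

class AggregationAwareForwarder:
    def __init__(self):
        # Per-next-hop FIFO of (destination, payload) awaiting transmission.
        self.queues = defaultdict(deque)

    def choose_next_hop(self, candidates, dst):
        # Prefer a candidate whose queue already holds packets for dst,
        # since those can be aggregated into one MAC frame with this packet.
        for nh in candidates:
            if any(d == dst for d, _ in self.queues[nh]):
                return nh
        # Otherwise keep the multi-path objective: pick the least-loaded hop.
        return min(candidates, key=lambda nh: len(self.queues[nh]))

    def enqueue(self, candidates, dst, payload):
        nh = self.choose_next_hop(candidates, dst)
        self.queues[nh].append((dst, payload))
        return nh
```

Note that path computation is untouched: the routing layer still supplies the candidate set, and only the tie-breaking among candidates consults queue state, mirroring the abstract's claim that only forwarding decisions are influenced.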
ABSTRACT Latency is increasingly becoming a performance bottleneck for Internet Protocol (IP) networks, but historically networks have been designed with the aim of maximizing throughput and utilization. This article offers a broad survey of techniques aimed at tackling latency in the literature up to March 2014 and assesses their merits. A goal of this work is to quantify and compare the merits of the different Internet latency-reducing techniques, contrasting their gains in delay reduction against the pain required to implement and deploy them. We found that classifying techniques according to the sources of delay they alleviate provided the best insight into the following issues: 1) the structural arrangement of a network, such as the placement of servers and suboptimal routes, can contribute significantly to latency; 2) each interaction between communicating endpoints adds a round-trip time (RTT) to latency, which is especially significant for short flows; 3) in addition to base propagation delay, several sources of delay accumulate along transmission paths, today intermittently dominated by queueing delays; 4) it takes time to sense and use available capacity, with overuse inflicting latency on other flows sharing the capacity; and 5) within end systems, delay sources include operating-system buffering, head-of-line blocking, and hardware interaction. No single source of delay dominates in all cases, and many of these sources are spasmodic and highly variable. Solutions addressing these sources often both reduce overall latency and make it more predictable.
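The per-interaction point above (each endpoint interaction adds an RTT, which dominates short flows) can be made concrete with a toy decomposition. All delay values below are hypothetical, chosen only to show how the terms combine.

```python
# Toy decomposition of a short flow's completion time into the delay
# sources the survey enumerates. All numbers are hypothetical.

def flow_time_s(rtts: int, base_rtt_s: float, queueing_s: float,
                host_s: float) -> float:
    # Each interaction (handshake step, request/response) costs one base
    # RTT inflated by queueing delay; end-system overheads add on top.
    return rtts * (base_rtt_s + queueing_s) + host_s

# A 4-RTT exchange over a 30 ms path with 20 ms of queueing per round trip:
short_flow = flow_time_s(rtts=4, base_rtt_s=0.03, queueing_s=0.02, host_s=0.005)
print(f"4-RTT short flow: {short_flow * 1000:.0f} ms")
```

In this toy model, cutting one round trip saves a full (RTT + queueing) term, which is why the survey finds RTT-saving techniques especially valuable for short flows.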