TY - GEN
T1 - Adaptive Batching for Fast Packet Processing in Software Routers using Machine Learning
AU - Okelmann, Peter
AU - Linguaglossa, Leonardo
AU - Geyer, Fabien
AU - Emmerich, Paul
AU - Carle, Georg
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/6/28
Y1 - 2021/6/28
N2 - Processing packets in batches is a common technique in high-speed software routers to improve routing efficiency and increase throughput. With the growing popularity of novel paradigms such as Network Function Virtualization, which advocate replacing hardware-based networking modules with software-based network functions deployed on commodity servers, we observe that batching techniques have been successfully implemented to reduce the HW/SW performance gap. As batch creation and management is at the very core of high-speed packet processors, it has a significant impact on the overall packet processing capabilities of the system, affecting latency, throughput, CPU utilization and power consumption. It is commonly accepted to adopt a fixed maximum batch size (usually in the range between 32 and 512) to optimize for the worst-case scenario (i.e., minimum-size packets at full bandwidth capacity). Such an approach may result in a loss of efficiency despite 100% utilization of the CPU. In this work we explore the possibilities of enhancing runtime batch creation in VPP, a popular software router based on the Intel DPDK framework. Instead of relying on automatic batch creation, we apply machine learning techniques to optimize the batch size for lower CPU time and higher power efficiency in average scenarios, while maintaining high performance in the worst case.
AB - Processing packets in batches is a common technique in high-speed software routers to improve routing efficiency and increase throughput. With the growing popularity of novel paradigms such as Network Function Virtualization, which advocate replacing hardware-based networking modules with software-based network functions deployed on commodity servers, we observe that batching techniques have been successfully implemented to reduce the HW/SW performance gap. As batch creation and management is at the very core of high-speed packet processors, it has a significant impact on the overall packet processing capabilities of the system, affecting latency, throughput, CPU utilization and power consumption. It is commonly accepted to adopt a fixed maximum batch size (usually in the range between 32 and 512) to optimize for the worst-case scenario (i.e., minimum-size packets at full bandwidth capacity). Such an approach may result in a loss of efficiency despite 100% utilization of the CPU. In this work we explore the possibilities of enhancing runtime batch creation in VPP, a popular software router based on the Intel DPDK framework. Instead of relying on automatic batch creation, we apply machine learning techniques to optimize the batch size for lower CPU time and higher power efficiency in average scenarios, while maintaining high performance in the worst case.
U2 - 10.1109/NetSoft51509.2021.9492668
DO - 10.1109/NetSoft51509.2021.9492668
M3 - Conference contribution
AN - SCOPUS:85112062915
T3 - Proceedings of the 2021 IEEE Conference on Network Softwarization: Accelerating Network Softwarization in the Cognitive Age, NetSoft 2021
SP - 206
EP - 210
BT - Proceedings of the 2021 IEEE Conference on Network Softwarization
A2 - Shiomoto, Kohei
A2 - Kim, Young-Tak
A2 - Rothenberg, Christian Esteve
A2 - Martini, Barbara
A2 - Oki, Eiji
A2 - Choi, Baek-Young
A2 - Kamiyama, Noriaki
A2 - Secci, Stefano
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 7th IEEE International Conference on Network Softwarization, NetSoft 2021
Y2 - 28 June 2021 through 2 July 2021
ER -