IEEE International Symposium on Local and Metropolitan Area Networks
11–12 July 2022 // Virtual Conference



Day 1: July 11, Monday EDT

9:30-9:45 Opening and Welcome
9:45-10:45 Keynote 1
11:00-11:45 Invited talk 1
11:55-12:45 Session 1: Routing protocols
14:00-14:45 Invited talk 2
15:00-15:50 Session 2: Edge and wireless networking
16:00-16:45 Invited talk 3

Day 2: July 12, Tuesday EDT

9:30-10:15 Invited talk 4
10:30-11:15 Invited talk 5
11:15-13:00 Poster session
14:00-14:50 Session 3: ICN and SDN
15:00-15:45 Invited talk 6
16:00-17:00 Keynote 2
17:00-17:15 Closing remarks

Day 1: July 11, Monday EDT

Keynote 1: Networking for Big Data: Theory, Algorithms and Applications

Chair: Patrick P. C. Lee (The Chinese University of Hong Kong, Hong Kong)


Edmund Yeh (Northeastern University)

Abstract:  In the era of big data, experts in various fields are facing unprecedented challenges in data access, distribution, processing and analysis, and in the coordinated use of limited computing, storage and network resources.  To address this, we present new frameworks for the optimization of key network functionalities, which are broadly applicable to content delivery networks, wireless heterogeneous networks, and distributed computing networks.  The frameworks enable joint (in-network) caching, request routing, and congestion control for content distribution over general network topologies, optimizing metrics including routing costs, data retrieval delay, and content-based fairness.  We meet the challenge of the underlying NP-hard problems by exploiting submodularity, matroid structure, DR-submodularity, and by leveraging tools including concave relaxation, stochastic gradient ascent, continuous greedy and Lagrangian barrier algorithms.  We develop polynomial-time approximation algorithms with proven optimality guarantees, with particular emphasis on adaptive and distributed implementations.
We further discuss the extension of these frameworks for jointly optimal wireless user association and content caching in wireless heterogeneous networks, and for jointly optimal computation scheduling, caching and request forwarding in distributed computing networks.  Finally, we discuss an ongoing project which applies the optimization frameworks and algorithms to facilitate data distribution and computation in the Large Hadron Collider (LHC) high-energy physics network, one of the largest data applications in the world.

Invited talk 1: Optimizing Contributions to Distributed, Networked Learning

Chair: Murat Yuksel (University of Central Florida, USA)


Carlee Joe-Wong (Carnegie Mellon University)

Abstract: The rapid expansion of Internet-connected, compute-equipped “things” has greatly expanded the amount of data that can be collected about many types of systems, from smart cities to mobile applications to personal health. Making use of this data, however, requires effectively leveraging computing resources to run data analysis algorithms (e.g., machine learning inference or training). Unfortunately, the “things” at which all of this data is collected are often resource-constrained, e.g., with limited power budgets, unreliable network connectivity, and/or limited computing capabilities. Distributed learning algorithms such as federated learning aim to address these challenges, but they are generally not optimized to run on networks of devices with limited, heterogeneous, and unreliable computing and communication resources. In this talk, I will present new variants on federated learning algorithms that provide theoretical convergence guarantees and good empirical performance in the presence of such resource limitations. By carefully designing algorithms for each stage in the distributed machine learning pipeline (data collection, data analysis, and communication across devices), we can realize significant improvement in the accuracy of our trained models.

Session 1: Routing protocols

Chair: Minseok Kwon (Rochester Institute of Technology, USA)


THORP: Choosing Ordered Neighbors To Attain Efficient Loop-Free Minimum-Hop Routing
J. J. Garcia-Luna-Aceves (University of California at Santa Cruz, USA)
Distance Vector Routing in Partitioned Networks
Ammar Farooq and Murat Yuksel (University of Central Florida, USA)

Invited talk 2: The Hyper-Converged Programmable Gateway in Alibaba Edge Cloud

Chair: Sonia Fahmy (Purdue University, USA)


Hongqiang Liu (Alibaba Group)

Abstract: Edge cloud provides significant performance and cost advantages for emerging applications such as cloud gaming, video conferencing, and AR/VR. However, unlike central clouds, the edge cloud faces tremendous challenges due to limited resources, demands for high performance, and hardware heterogeneity. Alibaba solves these problems by introducing a hyper-converged gateway platform “SNA” that provides the cloud network stack and network functions within the network rather than on the hosts. SNA is a heterogeneous computing platform that merges network switching, network virtualization, and various network functions on top of programmable network ASICs, FPGAs, and CPUs. It has been deployed to support multi-million-user products in Alibaba’s edge cloud. The key technical enabler of the rapid and safe deployment of the hyper-converged gateways running in SNA is our programmable network development platform “TaiX,” which provides novel and practical programming abstractions, compilers, debuggers, testers, orchestrators, and operation tools.

Session 2: Edge and wireless networking

Chair: Amit Sheoran (AT&T Labs, USA)


Service Mesh Controller for Cooperative Load Balancing among Neighboring Edge Servers
Toru Furusawa (The University of Tokyo & Toyota Motor Corporation, Japan); Hiroshi Abe and Kazuya Okada (Toyota Motor Corporation, Japan); Akihiro Nakao (The University of Tokyo, Japan)
Exploring Performance Limits on Proactive Fair Scheduling for mmWave WLANs
Ang Deng, Yuchen Liu and Douglas Blough (Georgia Institute of Technology, USA)
Awarded best paper

Invited talk 3: Untangling Interconnection in the Mobile Ecosystem

Chair: Hulya Seferoglu (University of Illinois at Chicago, USA)


Andra Lutu (Telefónica Research)

Abstract: The IP eXchange (IPX) Network interconnects about 800 Mobile Network Operators (MNOs) worldwide and a range of other service providers (such as cloud and content providers) to form the core that enables global data roaming. Global roaming now supports the fast growth of the Internet of Things and responds to the insatiable demand from digital nomads, who adhere to a lifestyle of connecting from anywhere in the world.
In this talk, we’ll take a first look into this so-far opaque mobile ecosystem and present a first-of-its-kind in-depth analysis of an operational IPX Provider (IPX-P). The IPX Network is a private network formed by a small set of tightly interconnected IPX-Ps. We analyze an operational dataset from a large IPX-P that includes BGP data as well as signaling statistics. We shed light on the structure of the IPX Network and on the temporal, structural, and geographic features of IPX traffic. Our results are a first step toward fully understanding the global mobile Internet, especially since it now plays a pivotal role in connecting IoT devices and digital nomads all over the world.

Day 2: July 12, Tuesday EDT

Invited talk 4: Machine Learning for Sketches and Sketches for Machine Learning

Chair: Patrick P. C. Lee (The Chinese University of Hong Kong, Hong Kong)


Tong Yang (Peking University)

Abstract:  Sketches, a class of probabilistic algorithms, have been widely accepted as the most promising solution for network measurement, with a series of papers on sketches published at SIGCOMM, SIGKDD, SIGMOD, and NSDI. On the one hand, sketches can be used to encode and compress gradients, significantly reducing bandwidth usage. On the other hand, the error of sketches can be learned and reduced by machine learning.
ML2Sketch: This talk first presents the idea of employing machine learning to reduce the dependence of sketch accuracy on network traffic characteristics, along with a generalized machine learning framework that significantly increases the accuracy of sketches. We further present three case studies in which we apply the framework to sketches measuring three well-known flow-level network metrics. Experimental results show that machine learning helps decrease the error rates of existing sketches by up to 202 times.
Sketch2ML: This talk then presents two sketches (MinMaxSketch and Cluster-Reduce) that compress the gradients transferred through the network in distributed ML. MinMaxSketch builds a set of hash tables and resolves hash collisions with a MinMax strategy. The key technique of Cluster-Reduce is to cluster adjacent counters with similar values in the sketch, which significantly improves accuracy. Extensive experimental results show that Cluster-Reduce achieves up to 60 times smaller error than prior works.

Invited talk 5: Pushing the Limits of Learning-Augmented Adaptation in Networked Systems

Chair: Yuki Koizumi (Osaka University, Japan)


Junchen Jiang (The University of Chicago)

Abstract: ML-inspired techniques are transforming many classic problems in networking and systems communities, by formulating the problems as standard learning (often reinforcement learning) problems and solving them as such. However, despite much interest from industry, these advances are sometimes met with lukewarm adoption in real-world systems. To understand this gap, this talk discusses our recent efforts in applying ML/RL to two systems problems (congestion control and cloud resource reservation). Our experience shows that the ML literature is ripe enough that, by carefully choosing the suitable formulations and techniques, we can design more efficient and practical solutions for real systems. In particular, better solutions often result from using the right formulation to best capture well-studied structures of the targeted systems or harness non-ML domain-specific solutions developed over the decades. Yet, these changes cannot be brought about without joint work between ML researchers and systems researchers and operators.

Poster session

Chair: Minseok Kwon (Rochester Institute of Technology, USA)


Experimenting an Edge-Cloud Computing Model on the GPULab Fed4Fire Testbed

Vikas Tomer (Graphic Era Deemed to be University, India); Sachin Sharma (Technological University Dublin, Ireland)

Designing a Double LoRa Connectivity for the Arduino Portenta H7

Daniel López Pino (Technical University of Catalonia, Spain); Felix Freitag (Technical University of Catalonia, Spain); Mennan Selimi (South East European University, North Macedonia)

Towards A Low-Cost Stateless 5G Core

Umakant Kulkarni (Purdue University, USA); Amit Sheoran (AT&T Labs – Research, USA); Sonia Fahmy (Purdue University, USA)

ML-based Cellular Service Issue Troubleshooting Using Limited Ground Truth Data

Xiaofeng Shi (AT&T Labs – Research, USA); Chen Qian (University of California at Santa Cruz, USA); Amit Sheoran and Jia Wang (AT&T Labs – Research, USA)

Demonstrating Configuration of Software Defined Networking in Real Wireless Testbeds

Saish Urumkar (Technological University Dublin, Ireland); Gianluca Fontanesi and Avishek Nag (University College Dublin, Ireland); Sachin Sharma (Technological University Dublin, Ireland)

Session 3: ICN and SDN

Chair: Xiaofeng Shi (AT&T Labs, USA)


Access Control with Individual Key Delivery in ICN
Yuma Fukagawa and Noriaki Kamiyama (Ritsumeikan University, Japan)
Bandwidth and Congestion Aware Routing for Wide-Area Hybrid Networks
Osama Abu Hamdan (University of Nevada, Reno, USA); Scotty Strachan (Nevada System of Higher Education, USA); Engin Arslan (University of Nevada, Reno, USA)

Invited talk 6: Toward Practical Federated Learning

Chair: Amit Sheoran (AT&T Labs, USA)


Mosharaf Chowdhury (University of Michigan)

Abstract: Although theoretical federated learning research is growing exponentially, we are far from putting those theories into practice. In this talk, I will share our ventures into building practical systems for two extremities of federated learning. Sol is a cross-silo federated learning and analytics system that tackles the network latency and bandwidth challenges faced by distributed computation between far-apart data sites. Oort, in contrast, is a cross-device federated learning system that enables training and testing on representative data distributions despite unpredictable device availability. Both deal with systems and network characteristics in the wild that are hard to account for in analytical models. I’ll then share the challenges in systematically evaluating federated learning systems, which have led to a disconnect between theoretical conclusions and performance in the wild. I’ll conclude this talk by introducing FedScale, an extensible framework for evaluation and benchmarking in realistic settings that aims to democratize practical federated learning for researchers and practitioners alike. All these systems are open source and publicly available.

Keynote 2: On Open RAN Telco Analytics and Automation

Chair: Hulya Seferoglu (University of Illinois at Chicago, USA)


Rittwik Jana (Google)

Abstract: Open RAN has gained significant momentum in the last five years. We discuss some of the challenges and opportunities that Open RAN faces, followed by some of the popular architecture and deployment scenarios. We also provide an overview of interesting AI/ML-enabled use cases that benefit from open interfaces, driving massive closed-loop automation and secure distributed intelligence in a multi-cloud architecture. The talk will be tutorial in nature and target an audience interested in learning about next-generation cellular networks and future research problems.