NativeNI 2022

The 1st International Workshop on Native Network Intelligence
Co-located with ACM CoNEXT

December 9th, Rome, Italy

Call for papers

In recent years we have witnessed a growing interest in leveraging Artificial Intelligence (AI) tools to innovate network operations at all layers, domains and planes. Yet, whether, what and where we need to integrate intelligence in networks, and how to (re)design networks for the native support of AI, is still largely under debate. This is due to the multi-faceted nature of the challenges behind such integration: on the one hand, network architectures must be updated to accommodate AI models and their lifecycle by design (e.g., collecting and provisioning data in real-time, balancing centralized versus distributed computing approaches, meeting low-latency requirements for fast closed-loop decision-making and network function automation); on the other hand, the design of AI models must improve to better align with the myriad of requirements of production network systems (e.g., inference latency, computational complexity, trustworthiness of AI decisions); finally, operational procedures in research must be enhanced for verifiability, reproducibility and real-world deployment (e.g., establishing reference datasets, sharing trained models without sacrificing model explainability, robustness or safety).

Pragmatic answers to all these points are paramount to enable a transition of the current large body of literature on AI for networking from academic exercises to solutions integrated in production systems.

This workshop aims to bring together researchers from academia and industry who are committed to making AI in networks a reality. We call for contributions from researchers working in the areas of network systems, applied machine learning and data science. We seek contributions that range from visionary position papers to promising ideas and prototypes, all the way to fully deployed solutions. All submissions should contribute to the common goal of making AI a viable and native technology for networks.

Topics of interest include (but are not limited to):

  • Network architectures and infrastructures for native AI support
  • AI requirements for integration in network environments
  • Network traffic data collection and analysis for AI support
  • Low-latency AI for networks
  • Compute-prudent AI for networks
  • Tailored AI models for network management and orchestration
  • Data availability for data-driven research and development
  • Ethics in AI for networking
  • On-device, cloud-driven or offline application of AI for networking
  • Centralized or distributed computational paradigms to support AI models
  • AutoML and AI automation for networking
  • Meta-learning for networking
  • AI for Intent-Based Networking
  • Explainability, robustness, safety of AI model deployments in networks
  • Open-access datasets for the training and testing of AI models for networks
  • Open-source tools for the assessment of AI models for networks
  • Experimental deployments of AI in network systems

Submission instructions

Authors should submit only original work that has not been published before and is not under submission to any other venue.

All submitted papers will be assessed through a double-blind review process. This means that the authors do not know who the reviewers are, and the reviewers do not know who the authors are.

As an author, you should do your best to ensure that your paper submission does not directly or indirectly reveal the authors’ identities. The following steps are the minimal requirements for a double-blind submission:

  • Remove all personal information about the authors from the paper (e.g., names, affiliations).
  • Remove acknowledgements to organizations and/or people.
  • Refer to your own previous work as you would to any other work, as if you were not an author of it.
  • Do not add references to external repositories or technical reports that can be used to identify any of the authors or their institutions/organizations.
  • Uploading a version of the paper to a non-peer-reviewed location (e.g., arXiv) is acceptable. However, authors need to avoid advertising the paper on popular mailing lists and social media until after the review process closes.

As reviewers, PC members should not actively try to de-anonymize the authors’ identities. Any violation of the double-blind reviewing process should be reported to the PC chairs.

Submissions should be six pages maximum, plus one page for references, in 2-column, 10pt ACM format. When using LaTeX, please download the style and templates from here.

Uncompress the zip file and look for sample-sigconf.tex in the /sample subdirectory. The file can be used as a starting point, or its content can be copied into your own file if one exists. In any case, your text file should use the following class.
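For reference, the document class line used by sample-sigconf.tex in the standard ACM acmart template (assuming the current acmart distribution) is:

```latex
\documentclass[sigconf]{acmart}
```

If your venue requires anonymous submissions, acmart also accepts the `anonymous` option (e.g., `\documentclass[sigconf,anonymous]{acmart}`), which suppresses author names in the compiled PDF.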


We encourage authors to share code/data either at submission time or with the camera-ready version.

Papers should be submitted at


Important dates

Abstract registration: September 23rd, 2022
Submission: September 30th, 2022
Notification: October 16th, 2022
Camera ready: October 25th, 2022
Workshop event: December 9th, 2022



Workshop Co-chairs

Alessandro Finamore Huawei Technologies, France
Marco Fiore IMDEA Networks
Carlee Joe-Wong Carnegie Mellon University

TPC Members

Albert Cabellos Universitat Politecnica de Catalunya
Amedeo Sapio Intel
Andra Lutu Telefonica
Bo Ji Virginia Tech
Chen Tian Nanjing University
Chuan Wu University of Hong Kong
Chuanxiong Guo Bytedance
George Iosifidis Delft University of Technology
Gianni Antichi Queen Mary University of London
Ilias Leontiadis Meta
John Chi Shing Lui Chinese University of Hong Kong
Junchen Jiang University of Chicago
Kyunghan Lee Seoul National University
Marco Gramaglia Universidad Carlos III de Madrid
Roberto González NEC Laboratories Europe
Tao Han New Jersey Institute of Technology
Tian Lan George Washington University
Vaneet Aggarwal Purdue University
Xiaoxi Zhang Sun Yat-Sen University
Zied Ben Houidi Huawei Technologies France
Zinan Lin Carnegie Mellon University


For any questions, please reach out to the chairs: Alessandro Finamore, Marco Fiore and Carlee Joe-Wong.