Short name | Deliverable | Abstract |
---|---|---|
RIEMANN-E1 | Management and Dissemination of the project results | This document reports on the communication and dissemination activities and accomplishments of the 6G-RIEMANN subprojects in 2022. As the subprojects are interrelated, the individual reports of each subproject have been combined into a single document. In 2022 the tender process delayed the bootstrapping of some activities; despite this, significant progress was achieved, including one paper contribution and an ongoing PhD thesis. |
RIEMANN-DS-E8 | Definition of the use cases for privacy-preserving data sharing | The fields of machine learning and cybersecurity are becoming increasingly intertwined as machine learning is used in more and more cybersecurity applications. With this use, however, comes the risk of privacy breaches and malicious attacks on the models, and ensuring privacy and security in machine learning has become a major topic in the AI research community. One way to protect the privacy of raw training data is Privacy Preserving Data Publishing (PPDP), which allows training data to be shared among different actors without the associated privacy risks. This opens up novel applications that were previously possible only among trusted parties or through complex federated learning schemes. One such use case is credit scoring, where PPDP can transform raw financial data without degrading the accuracy of the machine learning model and without compromising the privacy of individuals' financial data (a minimal sketch of such a transformation follows this table). |
RIEMANN-SI-E8 | Definition of the use cases for privacy-preserving network management | The field of network management and orchestration has seen significant advances with the increasing adoption of automation and artificial intelligence. In particular, automation has become a requirement for operating mobile networks, enabled by the wide availability of AI-based solutions, and rapid, flexible data availability is essential for this automation to succeed. AI and automation also extend to the IoT domain, where predictive maintenance and anomaly detection are increasingly used to improve network management, and the analysis of DNS data can help detect and prevent phishing and other cyber attacks. With the increasing use of sensitive data in network management, however, privacy concerns must be considered and addressed. Overall, successful network management and orchestration requires a comprehensive approach that considers the challenges and opportunities in automation, AI, IoT, and privacy. |
RIEMANN-FR-E12 | Definition of the use cases for privacy-preserving network architectures | In recent years machine learning has experienced tremendous growth and has become an essential tool in domains such as healthcare, finance, and industry. However, machine learning models require large amounts of data, and the centralized management of this data raises several challenges, including privacy concerns and regulatory constraints. Privacy-preserving network architectures have therefore become a crucial aspect of machine learning, enabling models to be built while preserving data privacy. A common technique in such architectures is the projection of data features onto a latent, lower-dimensional space; the resulting representations are called embeddings or encodings. The data encoders are built with deep neural networks and are either optimized for a specific public task while discarding any information irrelevant to it, or, alternatively, trained to retain as much information as possible while preventing inference of a designated private task, thereby ensuring privacy and data confidentiality (a minimal sketch of such an encoder follows this table). |
RIEMANN-ML-E12 | Definition of the use cases for privacy-preserving machine learning | This document focuses on machine learning as a service (MLaaS) and its privacy implications, emphasizing that privacy must be considered in every MLaaS scenario because sensitive information can be inferred from seemingly innocuous data. It explores several MLaaS scenarios, including human-centered and IoT-based examples. These examples demonstrate the potential benefits of MLaaS, such as leveraging high computational power for complex tasks and reducing the need for in-house expertise, but they also highlight the privacy risks of outsourcing data to a third-party service. |
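
The credit-scoring use case of RIEMANN-DS-E8 relies on transforming raw data before it is shared. The following is a minimal, hypothetical sketch of one possible PPDP-style transformation, assuming a synthetic tabular dataset with illustrative column names and an ad-hoc noise parameter `eps`; the deliverable does not prescribe this particular mechanism. The age quasi-identifier is generalised into coarse bins, the numeric features are perturbed with Laplace noise, and a downstream model is trained on the published data to check how much utility is retained.

```python
# Hypothetical sketch of a PPDP-style transformation for credit scoring.
# All column names, parameters, and data are illustrative, not from the deliverable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "financial" data: income, debt ratio, and age (quasi-identifier).
n = 5000
income = rng.normal(50_000, 15_000, n)
debt_ratio = rng.uniform(0, 1, n)
age = rng.integers(18, 80, n)
X = np.column_stack([income, debt_ratio, age])
# Synthetic default label: more likely with a high debt ratio and a low income.
y = (debt_ratio * 2 - income / 100_000 + rng.normal(0, 0.3, n) > 0.6).astype(int)

def publish(X, eps=1.0, age_bin=10):
    """Toy privacy-preserving transformation applied before sharing the data:
    generalise the age quasi-identifier into coarse bins and perturb the
    numeric features with Laplace noise whose scale shrinks as eps grows."""
    Xp = X.copy()
    Xp[:, 2] = (Xp[:, 2] // age_bin) * age_bin          # age generalisation
    Xp[:, 0] += rng.laplace(0, 15_000 / eps, len(Xp))   # noisy income
    Xp[:, 1] += rng.laplace(0, 0.1 / eps, len(Xp))      # noisy debt ratio
    return Xp

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on raw vs. published data; evaluate both on the same held-out raw data
# to see how much model accuracy the transformation preserves.
for label, X_tr in [("raw", X_train), ("published", publish(X_train))]:
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{label:>9} training data -> test accuracy {acc:.3f}")
```

A real deployment would calibrate the noise scale and generalisation granularity against a formal privacy model rather than the illustrative `eps` parameter used here.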
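The privacy-preserving encoders described for RIEMANN-FR-E12 can be sketched as an adversarially trained representation. The example below is a minimal illustration under assumed dimensions, labels, and trade-off weight `lam`, not the deliverable's actual architecture: a shared encoder feeds a public-task head and an adversarial private-task head, and a gradient-reversal layer pushes the encoder to hide the private attribute while keeping the public task learnable.

```python
# Hypothetical sketch of a privacy-preserving data encoder trained adversarially.
# A shared encoder supports a public task while a gradient-reversal layer prevents
# an adversarial head from recovering a private attribute. Shapes are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        # Pass the negated gradient to the encoder; no gradient for lam.
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))  # 8-d embedding
public_head = nn.Linear(8, 2)   # utility task (e.g. a public classification)
private_head = nn.Linear(8, 2)  # adversary trying to recover a sensitive attribute

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(public_head.parameters())
    + list(private_head.parameters()),
    lr=1e-3,
)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in data: 32 input features, one public and one private label.
x = torch.randn(256, 32)
y_public = torch.randint(0, 2, (256,))
y_private = torch.randint(0, 2, (256,))

for step in range(200):
    z = encoder(x)
    loss_public = loss_fn(public_head(z), y_public)
    # The private head still learns to predict the private label, but the
    # encoder receives the reversed gradient and learns to remove that signal.
    loss_private = loss_fn(private_head(GradReverse.apply(z, 1.0)), y_private)
    opt.zero_grad()
    (loss_public + loss_private).backward()
    opt.step()

# The embedding z can now be shared in place of the raw features.
```

The alternative formulation mentioned in the abstract, keeping as much information as possible while avoiding the private task, could be obtained by adding a reconstruction loss on `z` alongside the reversed private-task loss.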