Publications tagged with "Differential privacy"

  1. Campanile, L., de Biase, M. S., & Marulli, F. (2025). Edge-Cloud Distributed Approaches to Text Authorship Analysis: A Feasibility Study [Book chapter]. Lecture Notes on Data Engineering and Communications Technologies, 250, 284–293. https://doi.org/10.1007/978-3-031-87778-0_28
    Abstract
    Automatic authorship analysis, often referred to as stylometry, is a captivating yet contentious field that employs computational techniques to determine the authorship of textual artefacts. In recent years, the importance of author profiling has grown significantly due to the proliferation of automatic text generation systems. These include both early-generation bots and the latest generative AI-based models, which have heightened concerns about misinformation and content authenticity. This study proposes a novel approach to evaluate the feasibility and effectiveness of contemporary distributed learning methods. The approach leverages the computational advantages of distributed systems while preserving the privacy of human contributors, enabling the collection and analysis of extensive datasets of “human-written” texts in contrast to those generated by bots. More specifically, the proposed method adopts a Federated Learning (FL) framework, integrating readability and stylometric metrics to deliver a privacy-preserving solution for Authorship Attribution (AA). The primary objective is to enhance the accuracy of AA processes, thus achieving a more robust “authorial fingerprint”. Experimental results reveal that while FL effectively protects privacy and mitigates data exposure risks, the combined use of readability and stylometric features significantly increases the accuracy of AA. This approach demonstrates promise for secure and scalable AA applications, particularly in privacy-sensitive contexts and real-time edge computing scenarios. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
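    (A minimal illustrative sketch of this federated stylometric pipeline appears after the publication list.)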
  2. Campanile, L., De Fazio, R., Di Giovanni, M., & Marulli, F. (2024). Beyond the Hype: Toward a Concrete Adoption of the Fair and Responsible Use of AI [Conference paper]. CEUR Workshop Proceedings, 3762, 60–65. https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205601768&partnerID=40&md5=99140624de79e37b370ed4cf816c24e7
    Abstract
    Artificial Intelligence (AI) is a fast-changing technology that is having a profound impact on our society, from education to industry. Its applications cover a wide range of areas, such as medicine, military, engineering and research. The emergence of AI and Generative AI has significant potential to transform society, but these technologies also raise concerns about transparency, privacy, ownership, fair use, reliability, and ethical considerations. Generative AI adds complexity to the existing problems of AI due to its ability to create machine-generated data that is barely distinguishable from human-generated data, bringing to the forefront the issue of responsible and fair use of AI. The security, safety and privacy implications are enormous, and the risks associated with inappropriate use of these technologies are real. Although some governments, such as the European Union and the United States, have begun to address the problem with recommendations and proposed regulations, it is probably not enough. Regulatory compliance should be seen as a starting point in a continuous process of improving the ethical procedures and privacy risk assessment of AI systems. The need to have a baseline to manage the process of creating an AI system, even from an ethics and privacy perspective, becomes progressively more important. In this study, we discuss the ethical implications of these advances and propose a conceptual framework for the responsible, fair, and safe use of AI. © 2024 Copyright for this paper by its authors.
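
To make the approach summarized in the first abstract more concrete, here is a minimal, hypothetical sketch of a federated pipeline for authorship attribution over stylometric and readability features: each client computes a small feature vector from its private texts, trains a local logistic-regression model, and a server merges the client models with plain federated averaging (FedAvg). The specific features, model, training loop, and aggregation step are illustrative assumptions, not the authors' implementation.

# Hypothetical illustration only -- not the authors' code. Feature set, model,
# and aggregation routine are simplifying assumptions made for this sketch.
import math

def stylometric_features(text):
    """Toy feature vector: average word length, average sentence length,
    type-token ratio, and a crude Flesch-like readability proxy."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    avg_sent_len = len(words) / max(len(sentences), 1)
    ttr = len(set(w.lower() for w in words)) / max(len(words), 1)
    readability = 206.835 - 1.015 * avg_sent_len - 84.6 * (avg_word_len / 4.7)
    return [avg_word_len, avg_sent_len, ttr, readability / 100.0]

def local_update(weights, samples, lr=0.05, epochs=5):
    """One client's local training: logistic regression, label 1 = target author."""
    w = list(weights)
    for _ in range(epochs):
        for x, y in samples:
            z = max(-60.0, min(60.0, sum(wi * xi for wi, xi in zip(w, x))))
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    # A differential-privacy variant would clip and add calibrated noise to
    # these weights (or their updates) before sharing them with the server.
    return w

def fed_avg(client_weights):
    """Server side: plain unweighted FedAvg over the clients' local models."""
    n = len(client_weights)
    return [sum(ws[i] for ws in client_weights) / n for i in range(len(client_weights[0]))]

if __name__ == "__main__":
    # Each client holds private texts; only model weights leave the device.
    clients = [
        [(stylometric_features("Short terse notes. Very short."), 1),
         (stylometric_features("An elaborate, meandering sentence that continues onward at considerable length."), 0)],
        [(stylometric_features("Brief words here. Again brief."), 1),
         (stylometric_features("Expansive prose with ornamentation and extended clauses running throughout the passage."), 0)],
    ]
    global_w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(3):                      # a few federated rounds
        local = [local_update(global_w, s) for s in clients]
        global_w = fed_avg(local)
    print("aggregated model weights:", [round(w, 3) for w in global_w])

Only the model weights ever leave a client in this sketch; the underlying texts stay on-device, which is the privacy property the FL framing above relies on.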
