Publications tagged with "Misleading information"
- Marulli, F., Campanile, L., Ragucci, G., Carbone, S., & Bifulco, M. (2025). Data Generation and Cybersecurity: A Major Opportunity or the Next Nightmare? [Conference paper]. Proceedings of the 2025 IEEE International Conference on Cyber Security and Resilience, CSR 2025, 969–974. https://doi.org/10.1109/CSR64739.2025.11130069
Abstract
In recent years, the proliferation of synthetic data generation techniques, driven by advances in artificial intelligence, has opened new possibilities across a wide range of fields, from healthcare to autonomous systems, by addressing critical data scarcity issues. However, this technological progress also brings with it a growing concern: the dual-use nature of synthetic data. While it offers powerful tools for innovation, it simultaneously introduces significant risks related to information disorder and cybersecurity. As AI systems become increasingly capable of producing highly realistic yet entirely fabricated content, the boundaries between authentic and artificial information blur, making it more difficult to detect manipulation, protect digital infrastructures, and maintain public trust. This work undertakes a preliminary exploration of the evolving nexus between Generative AI, Information Disorder, and Cybersecurity: it aims to investigate the complex interplay among these three domains and to map their dynamic interactions and reciprocal influences, highlighting both the potential benefits and the looming challenges posed by this evolving landscape. Moreover, it seeks to propose a conceptual framework for assessing these interdependencies through a set of indicative metrics, offering a foundation for future empirical evaluation and strategic response. © 2025 IEEE.
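The abstract does not spell out the indicative metrics; purely as an illustrative sketch of what such a framework could look like (the metric names, weights, and composite score below are assumptions for illustration, not taken from the paper), the interdependencies might be tracked as a small weighted index:

```python
from dataclasses import dataclass

@dataclass
class IndicativeMetrics:
    """Hypothetical signals for the GenAI / information-disorder / cybersecurity nexus."""
    synthetic_content_share: float  # fraction of analysed content judged synthetic, 0..1
    detector_error_rate: float      # miss rate of deepfake/fake-news detectors, 0..1
    incident_rate: float            # normalised rate of GenAI-assisted security incidents, 0..1

def composite_risk(m: IndicativeMetrics,
                   weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Toy weighted index in [0, 1]; higher means a riskier information environment."""
    w1, w2, w3 = weights
    return w1 * m.synthetic_content_share + w2 * m.detector_error_rate + w3 * m.incident_rate

print(composite_risk(IndicativeMetrics(0.35, 0.20, 0.10)))  # 0.23 with the assumed weights
```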
- Campanile, L., Cesarano, M., Palmiero, G., & Sanghez, C. (2022). Break the Fake: A Technical Report on Browsing Behavior During the Pandemic [Conference paper]. Smart Innovation, Systems and Technologies, 309, 573–586. https://doi.org/10.1007/978-981-19-3444-5_49
Abstract
The widespread use of the internet as the main source of information for many users has led to the spread of fake news and misleading information as a side effect. The pandemic, which over the last two years has forced us to change our lifestyle and increased the time we spend at home, has further increased the time spent surfing the Internet. In this work we analyze the navigation logs of a sample of users, in compliance with current privacy regulations, comparing and categorizing the target sites visited and identifying some well-known sites that spread fake news. The results of the report show that during the most acute periods of the pandemic there was an increase in surfing on untrusted sites. The report also shows a tendency to use such sites in the evening and night hours and highlights the differences between the years considered. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
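As a minimal sketch of the kind of log analysis the report describes (the column names, the site-category map, and the 20:00-06:00 "evening and night" window are assumptions, since the abstract does not publish the schema), navigation logs could be bucketed by site category and time of day with pandas:

```python
import pandas as pd

# Hypothetical log schema: one row per visit with a timestamp and the target domain.
logs = pd.DataFrame({
    "timestamp": pd.to_datetime(["2021-03-10 22:15", "2021-03-11 09:40", "2021-03-11 23:05"]),
    "domain": ["untrusted-news.example", "newspaper.example", "untrusted-news.example"],
})

# Assumed category map; in the study this would come from curated lists of known fake-news sites.
category = {"untrusted-news.example": "untrusted", "newspaper.example": "trusted"}
logs["category"] = logs["domain"].map(category).fillna("unknown")

# Flag evening/night visits (here assumed to be 20:00-06:00) and aggregate per category.
hour = logs["timestamp"].dt.hour
logs["evening_or_night"] = (hour >= 20) | (hour < 6)
summary = logs.groupby("category")["evening_or_night"].agg(visits="size", night_share="mean")
print(summary)  # visit counts and share of evening/night visits per site category
```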
- Marulli, F., Verde, L., Marrore, S., & Campanile, L. (2022). A Federated Consensus-Based Model for Enhancing Fake News and Misleading Information Debunking [Conference paper]. Smart Innovation, Systems and Technologies, 309, 587–596. https://doi.org/10.1007/978-981-19-3444-5_50
Abstract
Misinformation and Fake News are hard to dislodge. According to experts on this phenomenon, fighting disinformation requires a less credulous public; given the human tendency to believe "facts" that confirm existing biases, current AI techniques can support the debunking of misleading information. Much effort has recently been spent by the research community on this plague: several AI-based approaches for the automatic detection and classification of Fake News have been proposed; unfortunately, Fake News producers have refined their ability to elude automatic ML- and DL-based detection systems. Debunking false news therefore represents an effective weapon to counter users' reliance on false information. In this work, we propose a preliminary study approaching the design of effective fake news debunking systems by harnessing two complementary federated approaches. First, we propose a federation of independent classification systems that accomplishes the debunking process by applying a distributed consensus mechanism. Second, a federated learning task involving several cooperating nodes is carried out to obtain a single merged model that incorporates features of the individual participants' models, trained on different and independent data fragments. This preliminary study aims to point out the feasibility and comparability of the proposed approaches, paving the way for an experimental campaign on real data that will provide evidence for an effective and feasible model for detecting heterogeneous fake news. Debunking misleading information is mission-critical to increasing news consumers' awareness of facts. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
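As a minimal sketch of the first of the two federated approaches, the distributed consensus step can be read as a majority vote over the labels returned by independent classifiers; the classifier interface, quorum rule, and tie handling below are assumptions, not the paper's specification:

```python
from collections import Counter
from typing import Callable, Sequence

# Each federated node exposes its own, independently trained fake-news classifier.
Classifier = Callable[[str], str]  # maps a news item to a label, e.g. "fake" or "real"

def consensus_debunk(item: str, classifiers: Sequence[Classifier],
                     quorum: float = 0.5) -> str:
    """Majority-vote consensus: a label wins only if it exceeds the quorum share of votes."""
    votes = Counter(clf(item) for clf in classifiers)
    label, count = votes.most_common(1)[0]
    return label if count / len(classifiers) > quorum else "undecided"

# Toy usage with three stub classifiers standing in for the federated nodes.
nodes = [lambda t: "fake", lambda t: "fake", lambda t: "real"]
print(consensus_debunk("Miracle cure found!", nodes))  # -> "fake" (2/3 of the votes > 0.5)
```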
- Marulli, F., Balzanella, A., Campanile, L., Iacono, M., & Mastroianni, M. (2021). Exploring a Federated Learning Approach to Enhance Authorship Attribution of Misleading Information from Heterogeneous Sources [Conference paper]. Proceedings of the International Joint Conference on Neural Networks, 2021-July. https://doi.org/10.1109/IJCNN52387.2021.9534377
Abstract
Authorship Attribution (AA) is currently applied in several applications, including fraud detection and anti-plagiarism checks; the task can leverage stylometry and Natural Language Processing techniques. In this work, we explored strategies to enhance the performance of an AA task for the automatic detection of false and misleading information (e.g., fake news). We set up a stylometry-based text classification model for AA exploiting recurrent deep neural networks and implemented two learning tasks trained on the same collection of fake and real news, comparing their performance: one based on a Federated Learning architecture, the other on a centralized architecture. The goal was to discriminate potentially fake information from true information when the fake news comes from heterogeneous sources with different styles. Preliminary experiments show that the distributed approach significantly improves recall with respect to the centralized model. As expected, precision was lower in the distributed model. This aspect, coupled with the statistical heterogeneity of the data, represents an open issue that will be further investigated in future work. © 2021 IEEE.
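As a minimal sketch of the federated side of this comparison, assuming FedAvg-style weight averaging (the exact aggregation used by the authors is not given in the abstract), the server step merges the per-node model parameters weighted by local data size:

```python
import numpy as np

def federated_average(node_weights: list[list[np.ndarray]],
                      node_sizes: list[int]) -> list[np.ndarray]:
    """Merge per-node model parameters into one global model, weighting by local data size."""
    total = sum(node_sizes)
    return [
        sum(size / total * layers[i] for size, layers in zip(node_sizes, node_weights))
        for i in range(len(node_weights[0]))
    ]

# Toy example: two nodes, each holding one weight matrix of an AA classifier layer.
w_node_a = [np.array([[1.0, 2.0]])]
w_node_b = [np.array([[3.0, 6.0]])]
merged = federated_average([w_node_a, w_node_b], node_sizes=[100, 300])
print(merged[0])  # [[2.5 5.0]] -- the node with more data pulls the average toward its weights
```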