Publications tagged with Generative AI
Publications tagged with "Generative AI"
- Marulli, F., Campanile, L., Ragucci, G., Carbone, S., & Bifulco, M. (2025). Data Generation and Cybersecurity: A Major Opportunity or the Next Nightmare? [Conference paper]. Proceedings of the 2025 IEEE International Conference on Cyber Security and Resilience, CSR 2025, 969–974. https://doi.org/10.1109/CSR64739.2025.11130069
Abstract
In recent years, the proliferation of synthetic data generation techniques, driven by advances in artificial intelligence, has opened new possibilities across a wide range of fields, from healthcare to autonomous systems, by addressing critical data scarcity issues. However, this technological progress also brings with it a growing concern: the dual-use nature of synthetic data. While it offers powerful tools for innovation, it simultaneously introduces significant risks related to information disorder and cybersecurity. As AI systems become increasingly capable of producing highly realistic yet entirely fabricated content, the boundaries between authentic and artificial information blur, making it more difficult to detect manipulation, protect digital infrastructures, and maintain public trust. This work undertakes a preliminary exploration of the evolving nexus between Generative AI, Information Disorder, and Cybersecurity: it aims to investigate the complex interplay among these three domains and to map their dynamic interactions and reciprocal influences, highlighting both the potential benefits and the looming challenges posed by this evolving landscape. Moreover, it seeks to propose a conceptual framework for assessing these interdependencies through a set of indicative metrics, offering a foundation for future empirical evaluation and strategic response. © 2025 IEEE.
- Marulli, F., Campanile, L., de Biase, M. S., Marrone, S., Verde, L., & Bifulco, M. (2024). Understanding Readability of Large Language Models Output: An Empirical Analysis [Conference paper]. Procedia Computer Science, 246(C), 5273–5282. https://doi.org/10.1016/j.procs.2024.09.636
Abstract
Recently, Large Language Models (LLMs) have made some impressive leaps, achieving the ability to accomplish several tasks, from text completion to powerful chatbots. The great variety of available LLMs and the fast pace of technological innovation in this field are making LLM assessment a hard task: understanding not only what such systems generate but also the quality of their results is of paramount importance. Generally, the quality of a synthetically generated object may refer to the reliability of the content or to the lexical variety and coherence of the text. Regarding the quality of text generation, one aspect that has not yet been adequately discussed is the readability of textual artefacts. This work focuses on that aspect, proposing a set of experiments aimed at better understanding and evaluating the readability of texts automatically generated by an LLM. The analysis is performed through an empirical study based on: a subset of five pre-trained LLMs; a pool of English text generation tasks of increasing difficulty, assigned to each model; and a set of the most popular readability indexes from the computational linguistics literature. Readability indexes are computed for each model to provide a first perspective on how the readability of artificially generated textual content varies across models and across different user requirements. The results obtained by evaluating and comparing the different models provide interesting insights, especially into the responsible use of these tools by beginners and less experienced practitioners. © 2024 The Authors.
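To make the kind of metric this study relies on concrete: the paper does not publish its code, but one of the most popular readability indexes it refers to is Flesch Reading Ease, whose formula is standard. Below is a minimal, illustrative Python sketch; the tokenization and the vowel-group syllable counter are rough assumptions of ours, not the authors' pipeline.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate English syllable count via vowel groups (heuristic only)."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually does not add a syllable.
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / max(len(sentences), 1)
    syllables_per_word = syllables / max(len(words), 1)
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# Example: compare two hypothetical LLM outputs for the same prompt.
simple = "The cat sat on the mat. It was warm."
dense = ("The feline positioned itself upon the textile floor covering, "
         "experiencing considerable thermal comfort.")
print(f"simple: {flesch_reading_ease(simple):.1f}")
print(f"dense:  {flesch_reading_ease(dense):.1f}")
```

In practice, a study like this would more likely use a dedicated library such as textstat, which implements Flesch Reading Ease alongside other common indexes with more careful syllable counting; the sketch above only shows the shape of the computation applied to each model's output.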