Publications tagged with Large language model

  1. Marulli, F., Campanile, L., de Biase, M. S., Marrone, S., Verde, L., & Bifulco, M. (2024). Understanding Readability of Large Language Models Output: An Empirical Analysis [Conference paper]. Procedia Computer Science, 246(C), 5273–5282. https://doi.org/10.1016/j.procs.2024.09.636
    Abstract
    Recently, Large Language Models (LLMs) have made impressive leaps, achieving the ability to accomplish a wide range of tasks, from text completion to powering chatbots. The great variety of available LLMs and the fast pace of technological innovation in this field make LLM assessment a hard task: understanding not only what such systems generate but also the quality of their results is of paramount importance. Generally, the quality of a synthetically generated object may refer to the reliability of its content, or to the lexical variety or coherence of the text. Regarding the quality of text generation, one aspect that has not yet been adequately discussed is the readability of textual artefacts. This work focuses on that aspect, proposing a set of experiments aimed at better understanding and evaluating the degree of readability of texts automatically generated by an LLM. The analysis is performed through an empirical study based on: considering a subset of five pre-trained LLMs; assigning a pool of English text-generation tasks of increasing difficulty to each model; and computing a set of the most popular readability indexes from the computational linguistics literature. Readability indexes are computed for each model to provide a first perspective on how the readability of artificially generated textual content can vary among different models and under different user requirements. The results obtained by evaluating and comparing the different models provide interesting insights, especially into the responsible use of these tools by beginners and less experienced practitioners. © 2024 The Authors.
  2. Campanile, L., De Fazio, R., Di Giovanni, M., & Marulli, F. (2024). Beyond the Hype: Toward a Concrete Adoption of the Fair and Responsible Use of AI [Conference paper]. CEUR Workshop Proceedings, 3762, 60–65. https://www.scopus.com/inward/record.uri?eid=2-s2.0-85205601768&partnerID=40&md5=99140624de79e37b370ed4cf816c24e7
    Abstract
    Artificial Intelligence (AI) is a fast-changing technology that is having a profound impact on our society, from education to industry. Its applications cover a wide range of areas, such as medicine, the military, engineering, and research. The emergence of AI and Generative AI has significant potential to transform society, but it also raises concerns about transparency, privacy, ownership, fair use, reliability, and ethics. Generative AI adds complexity to the existing problems of AI because of its ability to create machine-generated data that is barely distinguishable from human-generated data, bringing the issue of responsible and fair use of AI to the forefront. The security, safety, and privacy implications are enormous, and the risks associated with inappropriate use of these technologies are real. Although some governments, such as the European Union and the United States, have begun to address the problem with recommendations and proposed regulations, this is probably not enough. Regulatory compliance should be seen as the starting point of a continuous process of improving the ethical procedures and privacy risk assessment of AI systems. The need for a baseline to manage the process of creating an AI system from an ethics and privacy perspective, too, is becoming progressively more important. In this study, we discuss the ethical implications of these advances and propose a conceptual framework for the responsible, fair, and safe use of AI. © 2024 Copyright for this paper by its authors.
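The first entry above evaluates LLM output using standard readability indexes from the computational linguistics literature. As a minimal sketch of what one such index involves, the following computes the classic Flesch Reading Ease score; the function names and the crude vowel-group syllable heuristic are illustrative assumptions, not taken from the paper, which does not specify its exact index set or implementation.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, treating a lone trailing 'e' as silent.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher scores indicate easier text (roughly 90+ for very simple prose).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short, monosyllabic sentences score high, while long words drag the score down, which is the kind of per-model contrast the study's comparison of generated texts relies on.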