How to Regulate Large Language Models for Responsible AI

Research output: Contribution to journal › Article › peer-review

Abstract

Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. However, these capabilities come with ethical concerns. Ethical oversights in several LLM-based products include: (i) a lack of content or source attribution, and (ii) a lack of transparency about the data used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize more responsible AI in LLMs. The key finding is that applying safeguards before training occurs aligns with the established engineering practice of addressing issues at the source, yet this approach is currently avoided. Finally, historical parallels are drawn with the US automobile industry, which initially resisted safety regulations but later embraced them once consumer attitudes evolved.
Original language: English
Pages (from-to): 1-1
Journal: IEEE Transactions on Technology and Society
Publication status: Published - May 21, 2024