Leveraging Local LLMs for Secure In-System Task Automation with Prompt-Based Agent Classification

Suthir Sriram, C. H. Karthikeya, K. P. Kishore Kumar, V. Nivethitha, M. Thangavel

Research output: Contribution to journal › Article › peer-review

Abstract

Recent progress in artificial intelligence has led to the creation of powerful large language models (LLMs). While these models show promise in improving personal computing experiences, concerns about data privacy and security have hindered their integration with sensitive personal information. In this study, a new framework is proposed to merge LLMs with personal file systems, enabling intelligent data interaction while maintaining strict privacy safeguards. Our methodology classifies tasks with LLM agents, which apply designated tags to the tasks before routing them to specific LLM modules. Each module has its own function, including file search, document summarization, code interpretation, and general tasks, ensuring that all processing happens locally on the user's device. Our findings indicate high accuracy across agents: the classification agent achieved an accuracy of 86%, and document summarization reached a BERTScore of 0.9243. The key point of this framework is that it splits the LLM system into modules, which enables future development by integrating new task-specific modules as required. Our findings suggest that integrating local LLMs can significantly improve interactions with file systems without compromising data privacy.
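The tag-then-route architecture described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `classify` function stands in for the few-shot LLM classification agent (here a keyword heuristic), and the four tag names and module stubs are assumptions for illustration only.

```python
# Sketch of the abstract's routing idea: a classification step tags each
# user request, then dispatches it to a task-specific local module.
# Tag names and module behavior are hypothetical placeholders.

def classify(request: str) -> str:
    """Stand-in for the few-shot LLM classification agent; a real system
    would prompt a local LLM to emit one of the task tags."""
    text = request.lower()
    if "find" in text or "search" in text:
        return "file_search"
    if "summar" in text:
        return "summarization"
    if "code" in text or "script" in text:
        return "code_interpretation"
    return "general"

# Each tag maps to a module; in the proposed framework every module runs
# a local LLM on-device, so no data leaves the machine.
MODULES = {
    "file_search": lambda r: f"[file_search] {r}",
    "summarization": lambda r: f"[summarization] {r}",
    "code_interpretation": lambda r: f"[code_interpretation] {r}",
    "general": lambda r: f"[general] {r}",
}

def handle(request: str) -> str:
    return MODULES[classify(request)](request)

print(handle("Summarize report.pdf"))  # routed to the summarization module
```

Because new tags and modules can be added to the dispatch table independently, this structure mirrors the modularity the abstract highlights: extending the system to a new task type means adding one module, not retraining the whole pipeline.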

Original language: English
Journal: IEEE Access
Publication status: Accepted/In press, 2024

Keywords

  • Few-Shot Prompting
  • File system
  • LangChain
  • LLM
  • Prompt engineering

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering

