TY - JOUR
T1 - Leveraging Local LLMs for Secure In-System Task Automation with Prompt-Based Agent Classification
AU - Sriram, Suthir
AU - Karthikeya, C. H.
AU - Kishore Kumar, K. P.
AU - Nivethitha, V.
AU - Thangavel, M.
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - Recent progress in the field of artificial intelligence has led to the creation of powerful large language models (LLMs). While these models show promise in improving personal computing experiences, concerns surrounding data privacy and security have hindered their integration with sensitive personal information. In this study, a new framework is proposed to merge LLMs with personal file systems, enabling intelligent data interaction while maintaining strict privacy safeguards. Our methodology organizes tasks using LLM agents, which apply designated tags to the tasks before sending them to specific LLM modules. Every module has its own function, including file search, document summarization, code interpretation, and general tasks, ensuring that all processing happens locally on the user's device. Our findings indicate high accuracy across agents: the classification agent achieved an accuracy of 86%, and document summarization reached a BERT score of 0.9243. A key feature of this framework is that it splits the LLM system into modules, which enables future development by integrating new task-specific modules as required. Our findings suggest that integrating local LLMs can significantly improve interactions with file systems without compromising data privacy.
AB - Recent progress in the field of artificial intelligence has led to the creation of powerful large language models (LLMs). While these models show promise in improving personal computing experiences, concerns surrounding data privacy and security have hindered their integration with sensitive personal information. In this study, a new framework is proposed to merge LLMs with personal file systems, enabling intelligent data interaction while maintaining strict privacy safeguards. Our methodology organizes tasks using LLM agents, which apply designated tags to the tasks before sending them to specific LLM modules. Every module has its own function, including file search, document summarization, code interpretation, and general tasks, ensuring that all processing happens locally on the user's device. Our findings indicate high accuracy across agents: the classification agent achieved an accuracy of 86%, and document summarization reached a BERT score of 0.9243. A key feature of this framework is that it splits the LLM system into modules, which enables future development by integrating new task-specific modules as required. Our findings suggest that integrating local LLMs can significantly improve interactions with file systems without compromising data privacy.
KW - Few-Shot Prompting
KW - File system
KW - LangChain
KW - LLM
KW - Prompt engineering
UR - http://www.scopus.com/inward/record.url?scp=85210964708&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85210964708&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3505298
DO - 10.1109/ACCESS.2024.3505298
M3 - Article
AN - SCOPUS:85210964708
SN - 2169-3536
JO - IEEE Access
JF - IEEE Access
ER -