People who are deaf or hard of hearing struggle with daily activities because sign language is not widely known by the general public. There have been many attempts to use technology to assist individuals with hearing loss; however, most proposed solutions are standalone applications or require special hardware such as a wearable glove. Our goal is to leverage cloud computing and artificial intelligence (AI) to provide a solution that is portable and requires no special hardware. We created a lightweight 3D avatar model rendered in the browser, together with a lightweight object detection model for real-time recognition of the Arabic Sign Language (ArSL) alphabet. Our contribution lies in integrating a novel, functional, lightweight 3D avatar model with a lightweight ArSL alphabet detection model, trained on the public ArSL21L dataset, both suitable for delivery as a cloud service. Prototypes of the 3D digital twin avatar model and the AI model are publicly available to the research community on GitHub. Future work will extend this into a full-scale, real-time, cloud-based ArSL communication system.