As the world continues to evolve, technology has become a necessity in everyday life. Beyond professional use, cyberspace is now populated by online communities used for communication, learning, and entertainment. This increased online presence, however, exposes users to a wide variety of cultures, personalities, and levels of maturity, and some users may seek to harm others through cyberbullying or may display toxic behavior. This research aims to tackle the growing problem of toxicity and harassment in online environments. The proposed solution uses Artificial Intelligence (AI), and more specifically Natural Language Processing (NLP), to moderate communication and detect malicious language and behavior. The efforts presented in this paper represent a work in progress. To date, two models have been tested with a single dataset from Twitter: a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Experimental results show a promising start for the use of NLP in moderation, with the RNN achieving 83% accuracy.