TY - GEN
T1 - Multi-model deep learning for cloud resources prediction to support proactive workflow adaptation
AU - El-Kassabi, Hadeel
AU - Serhani, Mohamed Adel
AU - Bouktif, Salah
AU - Benharref, Abdelghani
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/8
Y1 - 2019/8
N2 - Scientific workflows are complex, resource-intensive, and dynamic in nature, and they require elastic cloud resources. To support these requirements, cloud resource prediction schemes forecast resource scarcity and thereby enable proactive workflow adaptation. In this paper, we propose a proactive workflow adaptation approach supported by Deep Learning-based prediction of cloud resource usage. The approach uses an algorithm to evaluate and select the most appropriate prediction model for resource utilization violations for each task of the workflow. It then recommends the appropriate adaptation actions to maintain the Quality of Service (QoS) of the entire workflow. Runtime monitoring data of cloud resources is continuously fed into Machine Learning models, including GRU, LSTM, and Bi-directional LSTM, to predict future task resource utilization values. The algorithm evaluates the resource predictions using metrics such as RMSE, MAE, and MAPE, and the prediction model achieving the highest accuracy is selected to determine the needed cloud resources. We conducted a series of experiments to evaluate our approach, and the results demonstrate that the proposed Multi-Model approach accurately predicts cloud resource usage and suggests suitable adaptation actions to guarantee the required workflow QoS.
AB - Scientific workflows are complex, resource-intensive, and dynamic in nature, and they require elastic cloud resources. To support these requirements, cloud resource prediction schemes forecast resource scarcity and thereby enable proactive workflow adaptation. In this paper, we propose a proactive workflow adaptation approach supported by Deep Learning-based prediction of cloud resource usage. The approach uses an algorithm to evaluate and select the most appropriate prediction model for resource utilization violations for each task of the workflow. It then recommends the appropriate adaptation actions to maintain the Quality of Service (QoS) of the entire workflow. Runtime monitoring data of cloud resources is continuously fed into Machine Learning models, including GRU, LSTM, and Bi-directional LSTM, to predict future task resource utilization values. The algorithm evaluates the resource predictions using metrics such as RMSE, MAE, and MAPE, and the prediction model achieving the highest accuracy is selected to determine the needed cloud resources. We conducted a series of experiments to evaluate our approach, and the results demonstrate that the proposed Multi-Model approach accurately predicts cloud resource usage and suggests suitable adaptation actions to guarantee the required workflow QoS.
KW - Cloud
KW - Deep Learning
KW - QoS
KW - Resource prediction
KW - Workflow
KW - Workflow adaptation
UR - http://www.scopus.com/inward/record.url?scp=85085482966&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085482966&partnerID=8YFLogxK
U2 - 10.1109/CloudSummit47114.2019.00019
DO - 10.1109/CloudSummit47114.2019.00019
M3 - Conference contribution
AN - SCOPUS:85085482966
T3 - Proceedings - 2019 3rd IEEE International Conference on Cloud and Fog Computing Technologies and Applications, Cloud Summit 2019
SP - 78
EP - 85
BT - Proceedings - 2019 3rd IEEE International Conference on Cloud and Fog Computing Technologies and Applications, Cloud Summit 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd IEEE International Conference on Cloud and Fog Computing Technologies and Applications, Cloud Summit 2019
Y2 - 8 August 2019 through 10 August 2019
ER -
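
The abstract describes an algorithm that trains several recurrent predictors (GRU, LSTM, Bi-directional LSTM) on monitored resource-utilization data, scores them with RMSE, MAE, and MAPE, and keeps the most accurate one to forecast a task's resource needs. The code below is a minimal sketch of that model-selection step only, assuming Keras/TensorFlow; the window length, layer sizes, synthetic CPU-utilization trace, and helper names are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' implementation): compare GRU, LSTM, and
# Bi-directional LSTM for one-step-ahead resource-usage prediction and keep
# the model with the lowest error on held-out data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def make_windows(series, window=12):
    """Slice a 1-D utilization series into (window -> next value) samples."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)  # X: (samples, window, 1)

def build_model(kind, window=12, units=32):
    """Build one of the three recurrent predictors used in the comparison."""
    rnn = {
        "gru": layers.GRU(units),
        "lstm": layers.LSTM(units),
        "bilstm": layers.Bidirectional(layers.LSTM(units)),
    }[kind]
    return models.Sequential([layers.Input(shape=(window, 1)), rnn, layers.Dense(1)])

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred, eps=1e-8):
    return float(np.mean(np.abs((y_true - y_pred) / (y_true + eps))) * 100)

# Synthetic CPU-utilization trace standing in for monitored task metrics.
rng = np.random.default_rng(0)
t = np.arange(500)
cpu = 0.5 + 0.3 * np.sin(t / 20.0) + 0.05 * rng.standard_normal(len(t))

X, y = make_windows(cpu)
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

scores = {}
for kind in ("gru", "lstm", "bilstm"):
    model = build_model(kind)
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_tr, y_tr, epochs=5, batch_size=32, verbose=0)
    pred = model.predict(X_te, verbose=0).ravel()
    scores[kind] = {"rmse": rmse(y_te, pred),
                    "mae": mae(y_te, pred),
                    "mape": mape(y_te, pred)}

# Select the most accurate model (lowest RMSE here) to forecast the
# resources each workflow task will need.
best = min(scores, key=lambda k: scores[k]["rmse"])
print(scores)
print("selected model:", best)

In practice the selection metric and retraining cadence would follow the paper's algorithm; RMSE is used above simply as one of the three metrics the abstract lists.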