Asking IT students, job interviewees, and competition contestants to write code is common practice. To properly assess the programming skills of such examinees, however, the difficulty level of each coding question should be estimated and kept in mind throughout the preparation of the exam. Poor results on such assessments may be attributable not to the exam takers but to poorly designed exams that fail to gauge each examinee's competency through questions ranked by difficulty. In this research, we argue that a predictive tool, named the Predicted Difficulty Index (PDI), can be derived from the structure of the sample solutions to rank questions by the difficulty students are likely to encounter while solving them. Such prior knowledge of question complexity should help instructors assign appropriate point values to questions and place them in a progressive order throughout the exam, leading to a more reliable evaluation tool.