Abstract:
Public examinations are conducted worldwide for certification, placement, promotion, etc. As these are high-stakes examinations, evaluation of the answer-scripts needs to be carried out in a uniform, error-free and unbiased manner. However, the large volume of answer-scripts pertaining to each subject/course paper invariably introduces evaluation anomalies. Coupled with this, evaluation also suffers from intra- and inter-examiner heterogeneity and subjectivity. Currently used approaches such as moderation of answer-scripts, in-house verification, personal verification, re-evaluation of answer-scripts and scaling of marks provide only cursory relief from anomalous and heterogeneous evaluation. This is apparent from the alarmingly increasing number of verification/re-evaluation cases that result in significant changes to the original marks. In this paper, we propose an E-moderation scheme that uses machine learning techniques to classify each answer-script evaluation as negligent or normal and, further, to predict the appropriate scaling of marks.
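To make the two-stage idea concrete, the sketch below shows one possible realisation of such a scheme: a binary classifier flags negligent evaluations and a regressor predicts a mark correction. The features, synthetic data and model choices are illustrative assumptions for exposition only, not the paper's dataset or method.

```python
# Illustrative sketch only: hypothetical features, synthetic data and
# off-the-shelf models -- not the paper's actual dataset or pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-evaluation features.
awarded = rng.uniform(0, 20, n)        # marks awarded by the examiner
difficulty = rng.uniform(0, 1, n)      # question difficulty index
examiner_hist = rng.normal(0, 1, n)    # examiner's historical mean deviation
X = np.column_stack([awarded, difficulty, examiner_hist])

# Synthetic "ground truth" from a sampled moderation pass.
moderated = awarded - 2.5 * examiner_hist + rng.normal(0, 1, n)
deviation = awarded - moderated

y_class = (np.abs(deviation) > 3).astype(int)  # 1 = negligent, 0 = normal
y_scale = -deviation                           # correction to add to awarded marks

X_tr, X_te, yc_tr, yc_te, ys_tr, ys_te = train_test_split(
    X, y_class, y_scale, test_size=0.2, random_state=0)

# Stage 1: classify evaluations as negligent vs. normal.
clf = RandomForestClassifier(random_state=0).fit(X_tr, yc_tr)
# Stage 2: predict the scale of the mark correction.
reg = RandomForestRegressor(random_state=0).fit(X_tr, ys_tr)

print("negligent-vs-normal accuracy:", clf.score(X_te, yc_te))
print("scale prediction R^2:", reg.score(X_te, ys_te))
```

In practice the second stage would typically be applied only to evaluations the first stage flags as negligent, with the predicted correction serving as a suggestion to the moderator rather than an automatic change of marks.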