Artificial exam scorer for efficient marking and grading of short essay tests

dc.contributor.authorMenya, Edmond Odhiambo
dc.date.accessioned2018-10-23T10:21:48Z
dc.date.available2018-10-23T10:21:48Z
dc.date.issued2018
dc.descriptionThesis submitted in partial fulfillment of the requirements for the Degree of Master of Science in Information Technology (MSIT) at Strathmore Universityen_US
dc.description.abstractLearning is integral to the development of students and the progress of a society. The process is marked by milestones, from class work to semester projects and, eventually, examinations. Students are required, as a standard, to sit an instructor-set exam paper. The grades and scores a student earns are indicators of progress, of the amount of knowledge acquired, and of whether or not the student is qualified for the next academic level. Exams are thus a critical aspect of the academic life cycle. However, the examination marking and grading process has been marred by inefficiencies, irregularities and unethical practices over the years. This study aimed to automate the exam marking process, introducing efficiencies that cut down the time and cost involved in examination marking while eliminating human bias. The research objectives were to study the accuracy levels of past exam papers marked by human instructors, to review challenges linked to the examination marking process, to review existing models, frameworks, architectures and algorithms that have attempted exam marking automation, to develop an improved, efficient algorithm-based solution to the marking problem, and to perform experiments to validate the algorithm. The research used experimental methods to examine the relation between marking accuracy and the involvement of keywords, synonyms and their related words in artificial marking. The outcome is an algorithm that mines related words, and their counts, between the marking scheme and the student answer to mark exams. The findings were that the model improves marking accuracy by a margin of 16%, from 73% to 89%. The model was more accurate when grading lower-mark answers, achieving 99.9% accuracy on 1-mark answers.en_US
dc.identifier.urihttp://hdl.handle.net/11071/5997
dc.language.isoenen_US
dc.publisherStrathmore Universityen_US
dc.subjectArtificialen_US
dc.subjectmarkingen_US
dc.subjectessay testsen_US
dc.titleArtificial exam scorer for efficient marking and grading of short essay testsen_US
dc.typeThesisen_US
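The abstract describes an algorithm that mines related words between the marking scheme and the student answer and counts their overlap to award marks. A minimal sketch of that idea is below; the `SYNONYMS` table, the `mark_answer` function, and the proportional scoring rule are illustrative assumptions, not the thesis's actual implementation, which mines related words rather than using a hand-built dictionary.

```python
# Hypothetical synonym table: maps each marking-scheme keyword to a set of
# related words that should also count as a match. In practice these would
# be mined from a lexical resource, per the abstract.
SYNONYMS = {
    "photosynthesis": {"photosynthesis"},
    "sunlight": {"sunlight", "light", "solar"},
    "glucose": {"glucose", "sugar"},
}

def mark_answer(scheme_keywords, student_answer, max_marks):
    """Award marks in proportion to the scheme keywords matched in the
    answer, counting a keyword as matched if it, or any of its related
    words, appears in the answer text."""
    words = set(student_answer.lower().split())
    matched = sum(
        1 for kw in scheme_keywords
        if SYNONYMS.get(kw, {kw}) & words  # keyword or a synonym present?
    )
    return round(max_marks * matched / len(scheme_keywords))

# Example: all three scheme keywords are covered, two of them via synonyms.
score = mark_answer(
    ["photosynthesis", "sunlight", "glucose"],
    "Plants use light to make sugar during photosynthesis",
    3,
)
```

Because short-answer scoring here reduces to set intersection per keyword, low-mark answers with one or two scheme keywords leave little room for partial-match ambiguity, which is consistent with the abstract's report of higher accuracy on 1-mark answers.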
Files
Original bundle
Name: Artificial exam scorer for efficient marking and grading of short essay tests.pdf
Size: 13.92 MB
Format: Adobe Portable Document Format
Description: Full-text Thesis 2018
License bundle
Name: license.txt
Size: 1.71 KB
Description: Item-specific license agreed upon to submission