Automating Handwritten Answer Evaluation: A Deep Learning and OCR Integrated Approach
Authors
Prof. A. H. Pawar, Vaishnavi Chavan, Vedantika Pol, Harshad Shinde, Sonyabapu Thorat

Abstract
Subjective answer evaluation remains a complex and time-consuming task in education. This paper presents an automated evaluation system that uses Natural Language Processing (NLP) techniques to assess student answers against teacher-provided reference solutions. The system employs pre-trained BERT Transformers from Hugging Face's library to encode both student and reference answers into semantic vectors, and uses cosine similarity to measure semantic closeness. A rule-based scoring mechanism then assigns marks based on defined similarity thresholds. The system supports optional (OR) questions by computing scores for each alternative response and selecting the one with the highest similarity. A user-friendly front end built with the Streamlit framework lets teachers manage subjects and classes and upload answer PDFs. PyMuPDF (fitz) extracts answers from the uploaded files, and the data is stored in an SQLite database for processing. The system normalizes per-question scores and aggregates them for a comprehensive exam evaluation. Experimental results show that the system correlates strongly with human graders and outperforms traditional keyword-based methods. This work contributes to the development of scalable, efficient, and accurate grading tools, reducing manual effort and supporting personalized feedback for learners.
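The core scoring step described in the abstract (cosine similarity between answer embeddings, mapped to marks by thresholds, with the best-matching reference chosen for OR questions) can be sketched as follows. This is a minimal illustration only: the threshold values and function names here are assumptions, not the paper's actual configuration, and in the real system the vectors would come from a pre-trained BERT encoder rather than being supplied directly.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Illustrative similarity cutoffs mapped to fractions of full marks;
# the paper's actual thresholds are not stated in the abstract.
THRESHOLDS = [(0.85, 1.0), (0.70, 0.75), (0.55, 0.5), (0.40, 0.25)]

def rule_based_score(student_vec, reference_vecs, max_marks):
    """Score a student answer against one or more reference answers.

    Passing several reference vectors models an OR-type question:
    the highest similarity across alternatives determines the score.
    """
    best = max(cosine_similarity(student_vec, r) for r in reference_vecs)
    for cutoff, fraction in THRESHOLDS:
        if best >= cutoff:
            return round(fraction * max_marks, 2)
    return 0.0
```

For example, a student vector identical to a reference yields similarity 1.0 and full marks, while an orthogonal (unrelated) vector falls below every threshold and scores zero.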
Key Words: Automated Evaluation, Handwritten Answer Grading, PaddleOCR, BERT Transformer, Cosine Similarity, NLP Models, Educational Assessment, Machine Learning, Multi-Language Support

Published On: 2025-04-30