
Volume 12, Issue 4 (April 2025), Pages: 1-11

----------------------------------------------
Original Research Paper
Innovative test item analysis using optical mark recognition technology: An evaluation
Author(s):
Ruth G. Luciano *
Affiliation(s):
College of Information and Communications Technology, Nueva Ecija University of Science and Technology, Cabanatuan, Philippines
* Corresponding Author.
Corresponding author's ORCID profile: https://orcid.org/0000-0001-8532-6971
Digital Object Identifier (DOI)
https://doi.org/10.21833/ijaas.2025.04.001
Abstract
This study addresses the need for effective tools to improve assessment processes in education by developing and evaluating a software application that analyzes test items using Optical Mark Recognition (OMR) technology. Traditional test item analysis is often slow and unreliable because it relies on manual handling and offers limited statistical insight. The proposed software automates the creation, analysis, and management of test items, making the process more efficient for educators. The study follows a mixed-method approach, using qualitative methods for software design and a quantitative evaluation based on the ISO/IEC 25010 software quality standard. Developmental research principles guide the continuous improvement of the system to meet educational goals and user needs. Initial assessments by IT experts and users confirm the system's functionality and ease of use. Recent advances in automated assessment systems highlight the potential of OMR-based technology to make test item analysis faster and more accurate. The evaluation phase uses quantitative measures to assess the system's reliability, efficiency, and user satisfaction. Findings from related studies on question difficulty prediction further inform improvements to the software, ensuring it meets the demands of modern educational assessment.
© 2025 The Authors. Published by IASE.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
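
For context, classical test item analysis of the kind an OMR-scored system automates typically reduces to two standard statistics: the difficulty index (the proportion of examinees answering an item correctly) and the discrimination index (the difference in that proportion between high- and low-scoring groups, conventionally the top and bottom 27%). The sketch below is a minimal Python illustration of these textbook formulas, not the paper's implementation; the function name, the synthetic response matrix, and the group cutoff parameter are assumptions made for the example.

```python
import numpy as np

def item_analysis(correct: np.ndarray, group_frac: float = 0.27):
    """Classical item analysis on a binary response matrix (illustrative).

    correct    : (n_examinees, n_items) array of 0/1 item scores, e.g., as
                 produced by an OMR scoring step (hypothetical input here).
    group_frac : fraction used for the upper/lower groups; 0.27 is the
                 conventional cutoff in classical test theory.
    Returns (difficulty, discrimination) arrays with one value per item.
    """
    n, _ = correct.shape
    k = max(1, int(round(group_frac * n)))

    # Difficulty index p: proportion of examinees answering each item correctly.
    difficulty = correct.mean(axis=0)

    # Rank examinees by total score, then take the bottom and top groups.
    order = np.argsort(correct.sum(axis=1))
    lower, upper = correct[order[:k]], correct[order[-k:]]

    # Discrimination index D = p_upper - p_lower for each item.
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = (rng.random((50, 10)) > 0.4).astype(int)  # synthetic 0/1 data
    p, d = item_analysis(scores)
    print("difficulty:", np.round(p, 2))
    print("discrimination:", np.round(d, 2))
```

By common rules of thumb, items with difficulty near 0.5 and discrimination of roughly 0.3 or higher are considered well-functioning, while values outside those bands flag items for revision; automating these computations from OMR output is what makes the analysis faster and more consistent than manual tallying.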
Keywords
Optical mark recognition, Test item analysis, Automated assessment, Software evaluation, Educational tools
Article history
Received 9 September 2024, Received in revised form 6 January 2025, Accepted 2 April 2025
Acknowledgment
No Acknowledgment.
Compliance with ethical standards
Ethical considerations
The study involved human participants in the form of interviews, surveys, and user testing. Participation was voluntary, and informed consent was obtained from all participants. All data collected were treated with strict confidentiality and were used solely for research purposes. No personally identifiable information was recorded or disclosed. The study followed ethical standards in accordance with the institutional research guidelines of Nueva Ecija University of Science and Technology.
Conflict of interest: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Citation:
Luciano RG (2025). Innovative test item analysis using optical mark recognition technology: An evaluation. International Journal of Advanced and Applied Sciences, 12(4): 1-11