RESEARCH PAPER
Construction of a Mathematical Model for Calibrating Test Task Parameters and the Knowledge Level Scale of University Students by Means of Testing
 
1 Zhetysu State University named after I. Zhansugurov, KAZAKHSTAN
2 Kazakh University of Technology and Business, KAZAKHSTAN
 
 
Publication date: 2017-11-06
 
 
EURASIA J. Math. Sci. Tech. Ed. 2017;13(11):7421-7429
 
ABSTRACT
The relevance of this study stems from the need for a sound algorithm for test task selection. The purpose of this article is to develop such an algorithm for test tasks with single-choice and multiple-choice answers and various response blocks. The main approach is the construction of a mathematical model for calibrating test task parameters and the scale of university students' knowledge level. The study shows that classical test theory is insufficient for an objective assessment of students' knowledge. A method for calibrating test task parameters is developed, scales of examinees' readiness levels are defined, an adequate value of the test reliability coefficient is determined, and a rule for test task selection is formulated. A formula for the degree of complexity of a test task is derived, and an algorithm for assigning test task types with single-choice and multiple-choice answers and various response blocks is proposed.
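
The abstract does not reproduce the model's formulas, but the framework it draws on (Rasch, 1980; Kibzun & Inozemtsev, 2014) indicates the general shape such a calibration takes: each task carries a difficulty parameter, each student a latent readiness level, and both are estimated jointly from the 0/1 response matrix by maximum likelihood. The Python sketch below is illustrative only, not the authors' method: it fits a basic dichotomous Rasch model by gradient ascent and applies the standard maximum-information rule for choosing the next task. All function names and tuning parameters (rasch_prob, calibrate, select_next_task, lr, n_iter) are hypothetical.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: P(correct) = sigmoid(theta - b) for each person/item pair."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def calibrate(X, n_iter=500, lr=1.0):
    """Joint maximum-likelihood calibration (illustrative sketch): estimates
    person abilities `theta` and item difficulties `b` from a 0/1 response
    matrix X of shape (persons, items) by gradient ascent on the Rasch
    log-likelihood, using mean-normalized gradients for stable steps."""
    theta = np.zeros(X.shape[0])
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        resid = X - rasch_prob(theta, b)   # observed minus expected responses
        theta += lr * resid.mean(axis=1)   # ascend in abilities
        b -= lr * resid.mean(axis=0)       # ascend in difficulties
        b -= b.mean()                      # pin the scale origin (identifiability)
    return theta, b

def select_next_task(theta_hat, b, administered):
    """Maximum-information selection: under the Rasch model, Fisher
    information peaks where item difficulty matches the ability estimate."""
    remaining = [j for j in range(len(b)) if j not in administered]
    return min(remaining, key=lambda j: abs(b[j] - theta_hat))

# Demo on simulated responses (200 students, 30 tasks).
rng = np.random.default_rng(0)
true_theta = rng.normal(0, 1, 200)
true_b = rng.normal(0, 1, 30)
X = (rng.random((200, 30)) < rasch_prob(true_theta, true_b)).astype(float)
theta_hat, b_hat = calibrate(X)
print("next task for student 0:", select_next_task(theta_hat[0], b_hat, set()))
```

In an adaptive loop, the readiness estimate would be re-fitted after each response and the selection rule reapplied; this matches the multi-step adaptive scheme described in Nurgabyl (2014b).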
 
REFERENCES (46)
1. Avanessov, V. S. (2009). The language of pedagogical measurements. Pedagogical Measurements, 2, 29-60. http://testolog.narod.ru/Theor... [in Russian].
 
2. Avanessov, V. S. (2014). New educational technology in the university. Bulletin of the Russian University of Peoples' Friendship, Series on Education Issues: Languages and Specialty, 4, 138-144. [in Russian].
 
3. Boesen, J., Lithner, J., & Palm, T. (2010). The relation between types of assessment tasks and the mathematical reasoning students use. Educational Studies in Mathematics, 75(1), 89-105. doi:10.1007/s10649-010-9242-9.
 
4. Bortz, J., & Döring, N. (2005). Forschungsmethoden und Evaluation. Heidelberg: Springer-Verlag. doi:10.1007/978-3-662-07299-8.
 
5. Borzykh, A. A., & Gorbunov, A. S. (2009). Virtual worlds, information environments, and the ambitions of e-Learning. Educational Technology & Society, 12(2), 423-437. https://cyberleninka.ru/articl... [in Russian].
 
6. Botti, A., Grimaldi, M., Tommasetti, A., Troisi, O., & Vesci, M. (2017). Modeling and Measuring the Consumer Activities Associated with Value Cocreation: An Exploratory Test in the Context of Education. Service Science, 9(1), 63-73. doi:10.1287/serv.2016.0156.
 
7. Chen, C., Wang, J., & Yu, C. (2017). Assessing the attention levels of students by using a novel attention aware system based on brainwave signals. British J. of Educational Technology, 48(2), 348-369. doi:10.1111/bjet.12359.
 
8. Cheng, Y., Diao, Q., & Behrens, J. T. (2017). A simplified version of the maximum information per time unit method in computerized adaptive testing. Behavior Research Methods, 49(2), 502-512. doi:10.3758/s13428-016-0712-6.
 
9. Coe, R. (2003). Web-based nuclear testing & training. Nuclear Plant Journal, 21(1), 47-61. https://www.highbeam.com/doc/1....
 
10. De Meo, P., Messina, F., Rosaci, D., & Sarné, G. M. L. (2017). Combining trust and skills evaluation to form e-Learning classes in online social networks. Information Sciences, 405, 107-122. doi:10.1016/j.ins.2017.04.002.
 
11. de Villiers, M. R., & Becker, D. (2017). Investigating learning with an interactive tutorial: A mixed-methods strategy. Innovations in Education and Teaching International, 54(3), 247-259. doi:10.1080/14703297.2016.1266959.
 
12. Dvoryatkina, S. N. (2013). Designing tasks for an adaptive computerized training system in probabilistic-statistical areas of mathematics. Bulletin of the Russian Peoples' Friendship University, Series Informatization of Education, 1, 97-104. [in Russian].
 
13. Edens, K., & Shields, C. A. (2015). Vygotskian approach to promote and formatively assess academic concept learning. Assessment & Evaluation in Higher Education, 40(7), 928-942. doi:10.1080/02602938.2014.957643.
 
14. Guznenkov, V. N., & Seregin, V. I. (2016). Computer testing as a form of students' knowledge control in geometric-graphic disciplines. International Research Journal, Series: Pedagogical Sciences, 9(51), 56-58. [in Russian].
 
15. Haist, S. A., Butler, A. P., & Paniagua, M. A. (2017). Testing and evaluation: the present and future of the assessment of medical professionals. Advances in Physiology Education, 41(1), 149-153. doi:10.1152/advan.00001.2017.
 
16. Horton, W., & Horton, K. (2005). E-learning: Tools and technologies. Moscow: Kudits-Image. [in Russian].
 
17. Howard, S. J., Woodcock, S., Ehrich, J., Bokosmaty, S., et al. (2017). What are standardized literacy and numeracy tests testing? Evidence of the domain-general contributions to students' standardized educational test performance. British J. of Educational Psychology, 87(1), 108-122. doi:10.1111/bjep.12138.
 
18. Huang, H. T. D., & Hung, S. T. A. (2010). Examining the practice of a reading-to-speak test task: anxiety and experience of EFL students. Asia Pacific Education Review, 11(2), 235-242. doi:10.1007/s12564-010-9072-6.
 
19. Hung, J. (2012). Trends of e-learning research from 2000 to 2008: Use of text mining and bibliometrics. British J. of Educational Technology, 43(1), 5-16. doi:10.1111/j.1467-8535.2010.01144.x.
 
20. Kibzun, A. I., & Inozemtsev, A. O. (2014). Using the maximum likelihood method to estimate test complexity levels. Automation and Remote Control, 75(4), 607-621. doi:10.1134/S000511791404002X.
 
21. Lim, S. Y., & Chapman, E. (2013). Development of a short form of the attitudes toward mathematics inventory. Educational Studies in Mathematics, 82(1), 145-164. doi:10.1007/s10649-012-9414-x.
 
22. Markon, K. E. (2013). Information Utility: Quantifying the Total Psychometric Information Provided by a Measure. Psychological Methods, 18(1), 15-35. doi:10.1037/a0030638.
 
23. Martos-Garcia, D., Usabiaga, O., & Valencia-Peris, A. (2017). Students' Perception on Formative and Shared Assessment: Connecting two Universities through the Blogosphere. Journal of New Approaches in Educational Research, 6(1), 64-70. doi:10.7821/naer.2017.1.194.
 
24. Nurgabyl, D. N. (2014a). On a mathematical model for calibrating test task parameters. Bulletin of KazNTU named after K. Satpayev, 3, 482-487. http://vestnik.kazntu.kz/files... [in Russian].
 
25. Nurgabyl, D. N. (2014b). On a mathematical model of multi-step adaptive testing. Bulletin of the Abai Kazakh National Pedagogical University, Series of Physical and Mathematical Sciences, 1(45), 143-149. [in Russian].
 
26. Nurgabyl, D. N. (2012). On a computer adaptive testing technology in vocational training. Proceedings of the international scientific-practical conference “Mathematical, science education and information”, 2, 316-319. Moscow: Institute of Mathematics and Informatics. [in Russian].
 
27. Nurgabyl, D. N., & Ramazanov, R. G. (2013). On a model of adaptive computerized testing. Proceedings of the International Conference on the Transformation of Education: Mathematics (pp. 13-21). London.
 
28. Nurjanah, Dahlan, J. A., & Wibisono, Y. (2017). Design and Development Computer-Based E-Learning Teaching Material for Improving Mathematical Understanding Ability and Spatial Sense of Junior High School Students. Proceedings of the 3rd International Seminar on Mathematics, Science, and Computer Science Education (MSCEIS 2016), Bandung, Indonesia. Journal of Physics: Conference Series, 812, 012098. doi:10.1088/1742-6596/812/1/012098.
 
29. Ozturk, N., & Dogan, N. (2015). Investigating Item Exposure Control Methods in Computerized Adaptive Testing. Educational Sciences: Theory & Practice, 15(1), 85-89. doi:10.12738/estp.2015.1.2593.
 
30. Pantziara, M., & Philippou, G. (2012). Levels of students’ “conception” of fractions. Educational Studies in Mathematics, 79(1), 61-83. doi:10.1007/s10649-011-9338-x.
 
31. Park, J. (2010). Constructive multiple-choice testing system. British J. of Educational Technology, 41(6) (Special Issue: Learning objects in progress), 1054-1064. doi:10.1111/j.1467-8535.2010.01058.x.
 
32. Permyakov, O. E., & Maksimov, O. A. (2015). Formalization of expert evaluation of the quality of test materials from the positions of the system approach. Vestnik pedagogicheskikh innovatsii, 3(7), 157-178. [in Russian].
 
33. Prado, E., Hartini, S., Rahmawati, A., et al. (2010). Test selection, adaptation, and evaluation: A systematic approach to assess nutritional influences on child development in developing countries. British J. of Educational Psychology, 80(1), 31-53. doi:10.1348/000709909X470483.
 
34. Rasch, G. (1980). Probabilistic Models for Some Intelligence and Attainment Tests. Chicago: The University of Chicago Press.
 
35. Sangwin, C. J., & Jones, I. (2017). Asymmetry in student achievement on multiple-choice and constructed-response items in reversible mathematics processes. Educational Studies in Mathematics, 94(2), 205-222. doi:10.1007/s10649-016-9725-4.
 
36. Senior, C., Fearon, C., Mclaughlin, H., et al. (2017). How might your staff react to news of an institutional merger? A psychological contract approach. International Journal of Educational Management, 31(3), 364-382. doi:10.1108/IJEM-05-2016-0087.
 
37. Siddiq, F., Gochyyev, P., & Wilson, M. (2017). Learning in Digital Networks. ICT literacy: A novel assessment of students’ 21st century skills. Computers & Education, 109, 11-37. doi:10.1016/j.compedu.2017.01.014.
 
38. van der Linden, W. J., & Glas, C. A. W. (Eds.) (2010). Elements of Adaptive Testing. Springer. doi:10.1007/978-0-387-85461-8.
 
39. van Rijn, P. W., & Ali, U. S. (2017). A comparison of item response models for accuracy and speed of item responses with applications to adaptive testing. British J. of Mathematical & Statistical Psychology, 70(2), 317-345. doi:10.1111/bmsp.12101.
 
40. Vasiliev, V. N. (2007). University as an open system. Innovations, 2, 57-60. https://elibrary.ru/item.asp?i... [in Russian].
 
41. Vlasin, I., & Chirila, C. B. (2015). The model of a competence based e-learning platform for primary and middle school students. Smart 2014: Social media in academia, Research and Teaching, 179-184. http://www.academia.edu/987213....
 
42. Voutilainen, A., Saaranen, T., & Sormunen, M. (2017). Conventional vs. e-learning in nursing education: A systematic review and meta-analysis. Nurse Education Today, 50, 97-103.
 
43. Wilmot, D. B., Schoenfeld, A., Wilson, M., Champney, D., & Zahner, W. (2011). Validating a Learning Progression in Mathematical Functions for College Readiness. Mathematical Thinking and Learning, 13(4), 259-291. doi:10.1080/10986065.2011.608344.
 
44. Wilson, M. (2005). Constructing Measures: An Item Response Modeling Approach. Mahwah, New Jersey: Lawrence Erlbaum Associates.
 
45. Xia, Q., Liang, R., & Wu, J. (2017). Transformed contribution ratio test for the number of factors in static approximate factor models. Computational Statistics & Data Analysis, 112, 235-241. doi:10.1016/j.csda.2017.03.005.
 
46. Yaman, S. (2011). Comparison of test use and multiple-evaluation to test effectiveness of PBL in different grouping strategies. Energy Education Science and Technology Part B: Social and Educational Studies, 3(1-2), 131-142.
 
eISSN:1305-8223
ISSN:1305-8215