This paper reports on the development of an online data-gathering system for the programmatic assessment of General Education Programs (GEP) at a US public polytechnic university. The article begins with a brief introduction to the study area and population. It then presents the findings of the literature review that underpinned the study, including research on faculty buy-in for programmatic evaluation; the primary finding is a significant disconnect between those who manage the data-reporting process for accreditation agencies and the faculty who teach and assess students and are required to supply the data. Next, the methods and procedures used to develop the online data-gathering system are described: a group of educators engaged in a collaborative co-design process to develop the data-gathering instrument and to test various tools during feedback sessions. For the pilot test, the GEP outcome examined was 'Oral Communication,' whose indicators were rated on a four-point Likert-style scale. The results of the pilot test are presented, along with user observations and comments. The article concludes with a series of findings and implications for how these methods can be applied to other GEPs and, more broadly, to any program evaluation need.
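As a rough illustration of the kind of record such an instrument might capture, the sketch below shows one possible Python representation of a four-point Likert-style rating of an outcome's indicators. The indicator names, scale labels, field names, and course identifier are assumptions made for illustration only; they are not taken from the study's instrument.

```python
from dataclasses import dataclass, field

# Hypothetical four-point Likert-style scale; the labels are illustrative,
# not the wording used in the study's instrument.
SCALE = {1: "Beginning", 2: "Developing", 3: "Proficient", 4: "Exemplary"}


@dataclass
class OutcomeAssessment:
    """One rater's scores for a student artifact against an outcome's indicators."""
    outcome: str                                            # e.g. "Oral Communication"
    course: str                                             # course in which the artifact was produced
    ratings: dict = field(default_factory=dict)             # indicator name -> score (1..4)

    def rate(self, indicator: str, score: int) -> None:
        """Record a score for one indicator, enforcing the four-point range."""
        if score not in SCALE:
            raise ValueError(f"score must be one of {sorted(SCALE)}")
        self.ratings[indicator] = score

    def summary(self) -> dict:
        """Aggregate the indicator scores into a simple per-artifact summary."""
        avg = sum(self.ratings.values()) / len(self.ratings)
        return {"outcome": self.outcome, "course": self.course, "mean_score": round(avg, 2)}


# Usage example: indicator names below are placeholders, not the study's rubric.
assessment = OutcomeAssessment(outcome="Oral Communication", course="COMM 101")
assessment.rate("Organization", 3)
assessment.rate("Delivery", 4)
assessment.rate("Use of supporting material", 2)
print(assessment.summary())
# {'outcome': 'Oral Communication', 'course': 'COMM 101', 'mean_score': 3.0}
```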