Part 1: Public Version Of Maker Literacies Student Learning Data


by Martin Wallace


[This is the first in a series of blog posts about our data gathering, processing and analysis methodology.]

While we have not yet completed our final report to the IMLS, and I have not yet finished analyzing the student self-assessment data gathered from our pilot program, I feel that it’s time to release the public version of our raw data set. Rather than waiting until all of the data has been analyzed, I plan to post regular weekly (or monthly, as time permits) updates that will take readers through our methodology and ultimately reveal some of our findings.

Before questions are raised about the potentially spurious data I’m making available to the public, I offer this important disclaimer: the data should probably not be used for serious research at this time. While I have made every attempt to clean up the data and remove unreliable survey responses, I do not believe the data to be completely reliable or generalizable at this stage. The reasons for this will become evident herein, in subsequent posts, and in our final report to the IMLS, where I will describe what we did right and what we did wrong with this first pass at data gathering, processing and analysis. As we refine our methodology and collect more data, I am confident the data will become more reliable and generalizable.

It is important to note that our work with data collection, processing and analysis up to this point has been to develop a proof of concept for measuring student learning in academic library makerspaces. It has not been about the data collected, per se. I believe we have successfully developed a sound, though not perfect, methodology for measuring the learning taking place in academic library makerspaces, based on student self-assessment. I have learned a great deal about survey design and data validation in the process, and we plan to combine the lessons learned through this pilot experience with feedback from participants to refine our methodology while bringing it in line with existing best practices as identified in the literature. Once these changes have been implemented, we will then turn our attention more seriously toward the data itself.

I should mention here that data collection and analysis was not a requirement for this IMLS planning grant. One of our goals was to explore a variety of assessment tools and techniques for measuring student learning. The pre- and post-self-assessment survey methodology that we have developed is one such assessment technique that we were able to fully explore.

In August 2016, UTA Libraries’ Maker Literacies Task Force decided it needed a standardized measurement tool that could be deployed across all courses participating in the Maker Literacies program. We decided upon a combination of pre- and post-self-assessment surveys. The surveys consisted mostly of Likert-scale questions asking students to rate their level of knowledge in various dimensions of each of the eleven maker-based competencies, along with some multiple-choice and open-ended questions. The data set also includes course information such as partner site, discipline, semester, etc.

The Likert scales are the primary measurement apparatus considered for analysis. The open-ended questions did not reveal much, as they were often left blank or filled with garbage data. Only nineteen students took the time to answer them substantively, and from those we didn’t gain much generalizable insight. The multiple choice questions (e.g., “Before taking this course, have you ever used the equipment in the library makerspace?”) and pre-known data (e.g., course discipline) can be used to subdivide the Likert scale findings into groups, so we will be able to compare discipline to discipline, or compare students who had never used the makerspace to those who had.
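To make that kind of subgroup comparison concrete, here is a minimal sketch in Python/pandas of how the public data could be sliced. This is not our actual analysis code, and the file name and column names below ("discipline", "used_makerspace_before", "pre_rating") are placeholders; the real column headings are defined in the data dictionary on Mavs Dataverse.

```python
# A minimal sketch (not our actual analysis code) of slicing the Likert
# responses into subgroups. The file name and column names are placeholders;
# the real headings are defined in the data dictionary on Mavs Dataverse.
import pandas as pd

df = pd.read_csv("maker_literacies_pre_assessment.csv")

# Mean self-rating and response count per discipline.
by_discipline = df.groupby("discipline")["pre_rating"].agg(["mean", "count"])

# Mean self-rating for students who had vs. had not used the makerspace before.
by_prior_use = df.groupby("used_makerspace_before")["pre_rating"].mean()

print(by_discipline)
print(by_prior_use)
```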

In fall 2017 we conducted the surveys informally, without IRB approval or students’ informed consent; instructors simply required their students to take the surveys, or offered participation credit to students who completed them voluntarily. Once we received the IMLS grant, we had to obtain IRB approval to administer the surveys, which was granted; however, in order to use the existing data (since it would be combined with data collected during the grant period) we had to backtrack and seek informed consent from the students who had completed surveys prior to the grant period. We made our best attempt to contact all of them by email; 54 of those students responded and provided their consent. We have excluded all survey responses associated with opt-outs and non-responses in our public release of the data.

For reference, the final IRB-approved versions of our pre- and post-self-assessment question banks are posted with the public data on Mavs Dataverse. They will also be included as Appendix 6 and Appendix 7 in the forthcoming final report to the IMLS. I have also posted a data dictionary to help users understand what each column heading means in the data set. Using the self-assessment question banks in tandem with the data dictionary will provide users the best understanding of the data collected.

The data dictionary is not perfect, but it should be suitable. I have no plans at this time to make it prettier or more user-friendly. That said, I will do my best to address requests for improvements that are brought directly to my attention. Before sending your requests, please consider that we will need to compile a whole new data dictionary for the updated surveys that we will begin using in Spring 2019, and I reserve the right to hold off on any changes that would be revised again or reversed in version 2 of the data dictionary. If you have any trouble finding information in the data or its dictionary, don’t hesitate to contact me for clarification at martin.wallace@uta.edu.

[Note: the following information has been updated, here.] Over two semesters (Fall 2017–Spring 2018), 437 students completed the pre-self-assessment survey, counting both those who opted in and those who opted out. Of these, 394 opted in to participate in the study and were issued a corresponding post-self-assessment survey. Students who did not provide consent have been removed from the public data.

Of the 394 students who consented to participate, 217 completed the post-self-assessment survey. Students who consented and completed the pre-self-assessment survey, but who then did not complete a post-self-assessment survey, have also been removed from the data. We could have left these responses in the public data, but I felt that little could be gained from them, and the extra rows clutter up the data set. I can make this data available upon request, in return for a convincing argument for how it will be useful.
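For readers curious about how those exclusions work in practice, the following is an illustrative sketch only, using invented column and file names; the released data set already has these rows removed, so there is no need to repeat the step.

```python
# Illustrative only: the inclusion logic described above, with invented
# column and file names. The released data already has these exclusions applied.
import pandas as pd

pre = pd.read_csv("pre_assessment_raw.csv")    # hypothetical raw export
post = pd.read_csv("post_assessment_raw.csv")  # hypothetical raw export

# Keep only students who gave informed consent...
consented = pre[pre["consented"]]

# ...and who also completed the post-self-assessment survey.
included = consented[consented["student_id"].isin(post["student_id"])]
```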


Fig. 1. Comparison of Pre- and Post-Self-Assessment Likert Scales

Post-self-assessment Likert scale questions asked students to reflect back to the beginning of the semester and re-rate their levels of competence, as well as to rate their levels of competence after completing their makerspace projects (see Fig. 1). The difference between these two ratings helps us measure growth in competencies, while the reflective rating can also be compared to students’ answers in the pre-self-assessment survey. Comparing these two ratings gives us an idea of how much students tend to over- or under-rate themselves at the beginning of the semester, but it also serves a much more important purpose: helping us identify unreliable responses. We can analyze those data points to flag responses that show characteristics of randomness, as though the student randomly clicked through the surveys just to be done with them, whether in the pre- or post-self-assessment or both. This is done by calculating something called an “intraclass correlation coefficient”, which will be the topic of my next blog post in this series, and which I hope to complete by Thanksgiving break.
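As a preview, here is a minimal, hypothetical illustration of computing an ICC with the pingouin Python package. It is not necessarily how we will compute it for the study, and the file name, column names, and cutoff value are placeholders.

```python
# A hypothetical illustration of an intraclass correlation coefficient (ICC)
# computed with the pingouin package; not necessarily the project's method.
# Each competency item is a "target", and the original pre-assessment rating
# and the retrospective rating from the post-assessment act as two "raters"
# for one student. A very low ICC could flag a student who appears to have
# clicked through the surveys at random.
import pandas as pd
import pingouin as pg

# Long format: one row per (student_id, item, survey, rating),
# where survey is "pre" or "retrospective".
ratings = pd.read_csv("paired_ratings_long.csv")

def student_icc(one_student: pd.DataFrame) -> float:
    """ICC for a single student's paired ratings across competency items."""
    icc = pg.intraclass_corr(
        data=one_student, targets="item", raters="survey", ratings="rating"
    )
    # ICC2: two-way random effects, absolute agreement, single measurement.
    return icc.loc[icc["Type"] == "ICC2", "ICC"].iloc[0]

per_student = ratings.groupby("student_id").apply(student_icc)
suspect = per_student[per_student < 0.2]  # illustrative cutoff only
print(suspect)
```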
