Yesterday I blogged about our pre-searching activities and the use of sticky notes for some gentle formative assessment. Today I want to share how I went about coding the student responses, not only to get a sense of students’ thinking during the two days of pre-searching, but also to use the data as a baseline of sorts as we look at a broad collection of their work and try to track their trajectory of growth through this extended research unit.
Coding Information Sources
I began by removing the sticky notes for each period from the whiteboards, affixing them to large Post-it notes, and labeling each grouping by period and response type. The next challenge was to come up with categories for coding the student responses. “Information sources used” was the easiest starting point, so I began there.
I listed all the information sources from the LibGuide for the project and then tallied the responses. I wound up adding Google as another category since some students indicated they had used that search engine. Once I tallied the results by period, some clear patterns emerged.
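For anyone who would rather script this kind of tally than count by hand, here is a minimal sketch in Python. The category names come from our LibGuide, but the sample responses are made-up placeholders, not my actual data:

```python
from collections import Counter

# Categories drawn from the project LibGuide, plus Google, which I
# added after seeing it show up in student responses.
CATEGORIES = [
    "Gale Opposing Viewpoints",
    "Gale Science in Context",
    "SweetSearch",
    "Academic Search Complete",
    "Self-selected source",
    "Google",
]

# Hypothetical sticky-note responses; the real data stayed on paper.
responses = [
    "Gale Opposing Viewpoints",
    "Gale Opposing Viewpoints",
    "SweetSearch",
    "Self-selected source",
    "Google",
]

tally = Counter(responses)
for category in CATEGORIES:
    print(f"{category}: {tally[category]}")
```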
In both classes, Gale Opposing Viewpoints appears to have been the starting point for the majority of students, with Gale Science in Context next in popularity. 2nd period seemed to favor SweetSearch and self-selected information sources, while 3rd period leaned more heavily toward Academic Search Complete.
When we look at the updated topics roster (while taking into account the initial list of topics they had generated), the numbers are not too surprising. I know that many students will benefit from some guidance toward specific databases and search tools that align with their topic choices as we move deeper into the project, but I’m not terribly surprised by what I see from the first two days of risk-free pre-search time spent simply honing in on an interest area within one broad topic. This data does suggest, though, that there may be sources that are unfamiliar to students or that they have used only minimally in the past (as do the results from the information literacy skills needs survey we did via index cards with Ms. Rust a few weeks ago).
My categories for coding the questions students generated included:
- How or Why?
- Who?
- What?
- Topic Clarification
- Question about the research or the assignment
- Other (e.g., “Is Finland’s educational system superior to that of the United States?”)
2nd period posed 15 “how/why” questions and 11 questions that fell under “other”; there were 4 “who” questions and 6 “what” questions, and 3 students did not note any questions. 3rd period generated questions that primarily fell under “what” (4), “how/why” (4), research/assignment questions (6), or “other” (6); 5 students did not generate any questions. Clearly, there is a stark contrast between the two classes in the types of questions they generated. This data may indicate that 3rd period needs more guided help in engaging deeply with their articles OR strategies for generating questions.
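Because the two classes produced different numbers of responses overall, raw counts can be a little misleading; converting the tallies above to proportions makes the contrast easier to compare. A quick sketch using the counts I just reported:

```python
# Question-type counts transcribed from the sticky-note tallies above.
second_period = {"how/why": 15, "other": 11, "who": 4, "what": 6, "none": 3}
third_period = {"what": 4, "how/why": 4, "research/assignment": 6, "other": 6, "none": 5}

def proportions(counts):
    """Convert raw category counts to shares of that class's total."""
    total = sum(counts.values())
    return {category: round(n / total, 2) for category, n in counts.items()}

print("2nd period:", proportions(second_period))
print("3rd period:", proportions(third_period))
```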
Discoveries and Insights
For this group of sticky note responses, I created these coding categories:
- Fact or concrete detail
Once I began taking a pass through the student responses, I realized I needed four additional categories:
- Topic Ideas
2nd period students primarily recorded facts or concrete details in their notes, though several used this space to think through additional topic ideas; the pattern was nearly identical in 3rd period. I was not surprised by these findings: students spent only two days doing light pre-search, and I knew in advance that gathering enough information to eliminate some topic areas of interest would be where many would expend their time and energy.
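As an aside for the data-minded: if you ever wanted to rough out this kind of response coding before doing a careful human pass, a naive keyword match can sort notes into provisional buckets. This is purely a hypothetical sketch; the keywords and sample notes are invented, and real qualitative coding still needs human judgment:

```python
# Keywords here are invented for illustration only.
CATEGORY_KEYWORDS = {
    "Topic Ideas": ["topic", "might research", "instead"],
}

def code_response(text):
    """Return a provisional category for a sticky-note response."""
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    # Default bucket when no keyword matches.
    return "Fact or concrete detail"

notes = [
    "Finland's schools assign very little homework.",
    "I might research school start times as a topic instead.",
]
for note in notes:
    print(code_response(note), "<-", note)
```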
The pre-search activity and days were designed to help students rule out some topics and give them time to explore those of interest. Our sticky note method of formative assessment was one we felt would give us feedback without imposing a time-consuming structure, since we really wanted students to channel their energies into reading and learning more about their topic lists. While some of the data I coded was not surprising, I was really struck by the differences in the types of questions the two classes generated. Right now I don’t know if this means one class might need more help generating questions from informational texts, or if they were approaching the reading and activity in a way that didn’t lend itself to composing lots of questions at that early juncture.
If you are incorporating pre-search as part of the connecting cycle of inquiry, what kinds of formative assessments do you use? If you code student responses, how do you approach that process, and how do you use the data to inform your instructional design? I find this kind of work interesting, and I am looking forward to seeing whether any of these gray areas or perceived gaps come into sharper focus as we move further into our research unit this month.