Pre-Searching Round Two: Moving Toward a More Specific Topic for Research and Inquiry

presearch round 2 students

After introducing students to some basics of information evaluation, we began our second phase of pre-searching on Monday, October 6. Our learning targets, based on our district content area standards and the AASL Standards for the 21st Century Learner, included:

  • I can use prior background knowledge as context for new learning.
  • I can find, evaluate, and select appropriate sources to answer questions.
  • I can read widely and fluently to make connections with self, the world, and previous reading.
  • I can use my library time wisely to think deeply on my work and stay on task.

We began the conversation by discussing how this next round of pre-search was going to be a little more strategic and structured: our first phase had given us a topic commitment, and now it was time to start “cropping” the big picture to narrow our topic (hat tip to Pegasus Librarian for this wonderful metaphor and to my friend Kristin Fontichiaro for pointing me to it).

We then introduced our structures, steps, and resources for going more deeply into pre-search so that we could read and reflect more intentionally while evaluating our information sources.

pre-search round 2 directions


We required students to print or create a hard copy of any information sources they were using so that they could highlight and annotate the text.  We then took time to discuss strategies for annotating informational text and how annotations help us think more deeply and purposefully about a text.  We drew heavily from reading and literacy expert Cris Tovani to create this handy “help” sheet on annotating texts for our students:

Annotating Text Strategies

We then shared with the students how the text annotations would be the bridge to our modified KWL for pre-search, and how this reflective thinking, while time-intensive in the present, would be essential and instrumental to building on our existing knowledge of the topic so that we could home in on a more specific focus.

On the back of the hard copy of this chart was the information source evaluation checklist we had worked with the previous week in our research/inquiry circles. We explained how we would use the CRAAP test and our assessment tool to evaluate each information source. Once students had read and annotated an article, completed a KWL for that article, and completed the information evaluation assessment tool for that article, we asked them to staple everything together as a “packet” and then add the information source to their EasyBib working bibliography. We ended with a short EasyBib refresher and pointed students to specific tutorial videos we’ve created for a variety of resources.

We then turned the students loose, and they began immersing themselves in the work. Over the next few days, the primary role for Sarah, Jennifer, and me as instructors was to facilitate; most of our efforts were spent answering 1:1 questions and conferencing with individuals to help students keep moving forward or adjust their searching. After doing a “temperature check” on Friday, October 10, we realized students needed one additional day for searching, reading, annotating, and doing their metacognitive work with the KWL and information evaluation tool. This gave some students an opportunity to wrap up their work while others took advantage of the extra day to get some intensive and extended 1:1 help; most requests were related to search terms and techniques. For these students, the personalized help was beneficial in moving them from a place where they felt stuck to discovering new sources.

The content in these pre-search “packets” will be the fodder for helping us move forward with the next step in narrowing topics:  mindmapping. We formally started this process of mindmapping today, and I’ll be writing more about that soon as well as the assessments and self-assessments we’re designing to think about where we are in our learning before moving forward into our next phase of inquiry!

Follow our journey:

Hashtag:  #rustyq

Our LibGuide

Blog post 1:  Inquiring with Students: What Do or Can “Good” Research Projects Look Like?

Blog post 2:  Beginning Our Research and Inquiry Experiences with Pre-Searching

Blog post 3:  Sticky Notes as Formative Assessment for Information Literacy Instruction: Coding Student Responses

Blog post 4:  Collaborative Information Source Evaluation: Research/Inquiry Circles and the CRAAP Test

Collaborative Information Source Evaluation: Research/Inquiry Circles and the CRAAP Test

info evaluation activity collage

Last week, students completed the gentle entry-level phase of pre-search (see the end of this post for more detailed reflections); teacher Sarah Rust and I felt it would be helpful to introduce information and source evaluation skills to our students before moving forward into the next round of pre-searching.    We grouped students into collaborative “research” or “inquiry” circles based on their initial topic interests.  We plan to use these research circles as a medium for workshopping with small groups as we move deeper into research and inquiry; these groups will also help us move into collaborative learning experiences.

On Thursday, October 2, we grouped students and then introduced them to the CRAAP test with this terrific video from the Academy of Art University; while this structure for evaluating information was originally designed for online resources, we discussed how important it is to evaluate ALL forms of information, including ones traditionally considered authoritative. We talked about the messiness of information evaluation and the contextual nature of authority using the framework of the CRAAP method. Using the recent Secret Service security breaches as our research topic, students were then asked to look collaboratively at seven different information sources we posted on our project LibGuide and to work together to evaluate each one using the CRAAP test as their guide. We asked them to use this checklist to guide their assessment and to tally their scores for each source. Students worked together all period and for about the first quarter of class on Friday, October 3.
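For readers curious about the mechanics of the tallying, here is a minimal sketch in Python of how yes/no checklist answers might roll up into per-category CRAAP scores. The five category names come from the CRAAP test itself; the sample questions, the one-point-per-“yes” scoring, and the function name are hypothetical stand-ins, not our actual checklist.

```python
# A minimal sketch of tallying a CRAAP-style checklist for one source.
# The five categories are from the CRAAP test (Currency, Relevance,
# Authority, Accuracy, Purpose); the sample questions and the simple
# one-point-per-"yes" scoring are hypothetical, not our actual checklist.

CHECKLIST = {
    "Currency": ["Was it published or updated recently?"],
    "Relevance": ["Does it relate to the research question?"],
    "Authority": ["Does the author have relevant credentials?"],
    "Accuracy": ["Can the claims be verified in another source?"],
    "Purpose": ["Does it appear free of obvious bias or spin?"],
}

def score_source(answers):
    """answers maps each checklist question to True ("yes") or False ("no");
    returns a per-category tally plus a total score for the source."""
    scores = {category: sum(1 for q in questions if answers.get(q, False))
              for category, questions in CHECKLIST.items()}
    scores["Total"] = sum(scores.values())
    return scores

# Example: one group's answers for one of the seven posted sources.
answers = {q: True for questions in CHECKLIST.values() for q in questions}
answers["Does it appear free of obvious bias or spin?"] = False
print(score_source(answers))
# {'Currency': 1, 'Relevance': 1, 'Authority': 1, 'Accuracy': 1, 'Purpose': 0, 'Total': 4}
```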

After students finished their assessments on Friday, each group posted its scores on a dry-erase board on our Verb easel; we labeled each whiteboard with a sticker for the source so that the “parking lots” for the scores would be easy to identify.

Each group then came up to the easel and shared/defended their assessments of each source.

As they did this, I took rough notes about how each group scored the sources and noted any comments or reasoning they shared. You can see my notes below:

Sarah, Jennifer, and I were fascinated by the students’ responses.  Just a few things some students/groups noticed:

  • Databases may be great, but if they are only providing background information and not answering one’s research question, the content there may not always be the best fit.  We were impressed they made this distinction.
  • One group commented that they would like to know if the journalists for the Washington Post article had previously written about Secret Service security issues or if this was their first effort at covering that topic. Again, we were happily surprised they were this discerning in their evaluation.
  • Several groups noted that just because a source was a government publication, it was not necessarily credible, since the government might be interested in putting a certain spin on the value and integrity of the Secret Service. This level of questioning could be a reflection of previous instruction elsewhere that values interrogating all sources, but we also wondered if that stance might be a reflection, at least in part, of the politically conservative nature of the community.
  • Discussions emerged about different news publications and outlets and how their reputation to lean left or right might impact the objectivity of the articles or news videos.
  • Several students indicated they would like GALE to include more information about the authors of reference articles in databases like Opposing Viewpoints in Context.
  • Scores were pretty consistent from group to group within specific class periods and across both class sections.

We were incredibly happy with the way students engaged with each other and the assessment task as groups. Our goal was for them to have an opportunity to debate and wrestle with their evaluation of each source within their groups and to share that thinking out loud with the larger class; this approach accomplished that outcome. I definitely would introduce information evaluation in this way again, and this springboard activity seemed to fit a wide range of prior experiences with these concepts. As we’ve engaged in pre-search “phase 2” this week, we’ve incorporated this CRAAP framework into their metacognitive learning activities. I’ll share more about those processes in a new blog post next week.

Follow our journey:

Hashtag:  #rustyq

Our LibGuide

Blog post 1:  Inquiring with Students: What Do or Can “Good” Research Projects Look Like?

Blog post 2:  Beginning Our Research and Inquiry Experiences with Pre-Searching

Blog post 3:  Sticky Notes as Formative Assessment for Information Literacy Instruction: Coding Student Responses

Sticky Notes as Formative Assessment for Information Literacy Instruction: Coding Student Responses

Yesterday I blogged about our pre-searching activities and the use of sticky notes for some gentle formative assessment. Today I want to share how I went about coding the student responses, not only to get a sense of students’ thinking during the two days of pre-searching, but also to use the data as a baseline of sorts for looking at a broader collection of their work as we try to track their trajectory of growth and progress through this extended research unit.

Coding Information Sources

I began by removing the sticky notes for each period from the whiteboards, affixing them to large Post-it sheets, and labeling each grouping by period and response type. The next challenge was to think of categories for coding the student responses. The “information sources used” notes were the easiest starting point, so I began there.

Coding "Information Sources Used" Sticky Notes from Days 1 and 2 of PreSearch, 3rd Period #rustyq

I listed all the information sources from the LibGuide for the project and then tallied responses.  I wound up adding Google as another category since some students indicated they had used this search engine.  Here are the results by period:

2nd period Rust Sources Used Sticky Note Data PreSearch October 2014


3rd  period Rust Sources Used Sticky Note Data PreSearch October 2014

In both classes, it appears Gale Opposing Viewpoints was a starting point for the majority of students; Gale Science in Context was next in popularity.  2nd period seemed to like SweetSearch and self-selected information sources while 3rd period leaned more heavily toward Academic Search Complete.
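For anyone replicating this kind of coding, the tallying itself is simple enough to script. Below is a minimal sketch in Python using collections.Counter; the source names mirror categories mentioned above, but the response lists are invented for illustration, not our actual sticky note data.

```python
# A minimal sketch of tallying coded "sources used" sticky notes by period.
# The response lists below are invented for illustration only.
from collections import Counter

responses = {
    "2nd period": ["Gale Opposing Viewpoints", "SweetSearch", "Google",
                   "Gale Opposing Viewpoints", "Gale Science in Context"],
    "3rd period": ["Gale Opposing Viewpoints", "Academic Search Complete",
                   "Academic Search Complete", "Gale Science in Context"],
}

for period, sources in responses.items():
    print(period)
    for source, count in Counter(sources).most_common():
        print(f"  {source}: {count}")
```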

When we look at the updated topics roster (while taking into account the initial list of topics they had generated), the numbers are not too surprising. I know that many students will benefit from some guidance toward specific databases and search tools that align with their topic choices as we move deeper into the project, but I’m not terribly surprised by what I see from the first two days of risk-free pre-search time spent homing in on an interest area for one broad topic. This data does suggest to me that there may be sources that are unfamiliar to students or that they have used only minimally in the past (as do the results from the information literacy skills needs survey we did via index cards with Ms. Rust a few weeks ago).

Questions

My categories for coding the questions students generated included:

  • Who
  • What
  • Where
  • When
  • How or Why
  • Topic Clarification
  • Question about the research or the assignment
  • Other (other types of questions, e.g., Is Finland’s educational system superior to the United States’?)
  • None

2nd period posed 15 “how/why” questions and 11 questions that fell under “other”; there were 4 “who” questions and 6 “what” questions; 3 students did not note any questions. 3rd period generated questions that primarily fell under “what” (4), “how/why” (4), research/assignment questions (6), or “other” (6); 5 students did not generate any questions. Clearly, there is a stark contrast between the two classes in the types of questions they generated. This data may indicate that 3rd period needs more guided help in engaging deeply with their articles OR strategies for generating questions.
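If I were coding a much larger set of questions, part of this pass could be scripted. Here is a minimal first-pass sketch that keys on the leading interrogative word; the categories match my list above, but the heuristic and the sample questions are my own hypothetical shorthand, and categories like “Topic Clarification” or questions about the assignment still require a human read.

```python
# A rough first-pass coder for student questions, keyed on the leading word.
# Categories mirror the coding list above; questions needing judgment
# ("Topic Clarification", questions about the research or assignment)
# fall into "Other" here and are left to a human pass.

def code_question(question: str) -> str:
    words = question.strip().lower().split()
    if not words:
        return "None"
    if words[0] in {"who", "what", "where", "when"}:
        return words[0].capitalize()
    if words[0] in {"how", "why"}:
        return "How or Why"
    return "Other"

# Hypothetical sticky note questions, not our students' actual responses.
for q in ["Why did the Secret Service miss the fence jumper?",
          "What is the agency's budget?",
          "Is Finland's educational system superior to the United States'?"]:
    print(code_question(q), "->", q)
```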

Discoveries and Insights

For this group of sticky note responses, I created these coding categories:

  • Fact or concrete detail
  • Concept/Conceptual
  • Question
  • Reflection
  • Commentary/Opinion/Reaction

Once I began taking a pass through the student responses, I realized I needed four additional categories:

  • Topic Ideas
  • Sources
  • None
  • Other

2nd period students primarily recorded facts or concrete details for their notes; however, several used this space to think through additional topic ideas. The pattern was nearly identical in 3rd period. I was not surprised by these findings: students spent only two days doing light pre-search, and I knew in advance that gathering enough information to eliminate some topic areas of interest would be where many would expend their time and energy.

Final Thoughts

The pre-search activity and days were designed to help students rule out some topics and have time to explore those of interest. Our sticky note method of formative assessment was one we felt would give us feedback without imposing a time-consuming structure on students, since we really wanted them to channel their energies into reading and learning more about their topic lists. While some of the data I coded was not surprising, I was really struck by the differences in the types of questions the two classes generated. Right now I don’t know if this means one class might need more help in generating questions from informational texts or if perhaps they were approaching the reading and the activity in a way that didn’t lend itself to composing lots of questions at that early juncture.

If you are incorporating pre-search as part of the connecting cycle of inquiry, what kinds of formative assessments do you use?  If you code student responses, how do you approach that process, and how do you use that data to inform your instructional design?   I find this kind of work interesting—I am looking forward to seeing if any of these gray areas or perceived gaps come to light as we move further into our research unit this month.

Beginning Our Research and Inquiry Experiences with Pre-Searching

Day 1 PreSearch:  Exploring and Sharing Our Questions, Discoveries, and Information Sources

We formally began our first steps into our fall research experience with Ms. Rust’s 12 British Literature/Composition Honors students by giving them a few days to pre-search their initial lists of topics of interest. We introduced the research guide and took a few minutes to discuss the purpose of pre-searching and to encourage them to exhaust as many of the databases and search engines in the guide as possible; we also told students they could explore information sources they knew would be meaningful (example: the Sports Illustrated website for the essay on LeBron James returning to Cleveland). We stressed that this was a risk-free period of time to just explore and learn more about their topics of interest in an informal way that did not involve notes or citations. As students came in, they picked up three different colors of sticky notes:

  • blue: a placeholder for questions that might arise from their readings
  • bright yellow: a space for making notes of discoveries, insights, or new information
  • pink: a place to track the information sources they were sampling and exploring

As expected, there was a range of responses, from those who fully immersed themselves in the opportunity and thrived to those who were stuck even thinking of a topic or who were not feeling enthusiastic about the initial list they had generated. Sarah Rust (teacher), Jennifer Lund (librarian), and I essentially acted as “coaches” who encouraged and provided feedback to students wherever they fell on the spectrum; we also tried to nudge those who were stuck in neutral by doing 1:1 conferencing and sharing strategies to help them either discover a topic that mattered to them or unpack some areas of interest to connect to a concrete topic. After the first day, Sarah reflected on what we observed:

I was surprised at how hesitant some students are with all of the freedom of inquiry. I think they are so used to the previous confines of research that they’re timid/baffled/weirded-out that we’re giving them the onus of topic selection and they have time to actually think, refine, change, explore topics. Oh the freedom!!!

At the end of the first day, we asked students to share their sticky notes on our Verb whiteboards and easel that served as a “parking lot” for their work.

sarah rust pre-searching and sticky notes

We used these sticky notes as a gentle formative assessment to see where students were at the end of both days of pre-search and as a medium for helping students engage in metacognition without imposing too formal of a structure on them at this early stage of connecting in Stripling’s Model of Inquiry.  We are very pleased with this method of collecting feedback and getting kids to be reflective without over-structuring the activity and interfering with the exploration focus.

At the end of the second day, we asked students to complete this survey for homework.  This assignment was designed to help us see where they were with broad topic selection and to refine our initial groupings for inquiry-research circles that we’ll be utilizing for collaborative activities we have planned for this research unit.

How are you approaching pre-searching in units of inquiry and research with your students?  What does it look like in your learning community, and what strategies have you tried that have been successful?

Related Posts:

Inquiring with Students:  What Do or Can “Good” Research Projects Look Like?, September 29, 2014

Inquiring with Students: What Do or Can “Good” Research Projects Look Like?


Responses from Ms. Rust’s 2nd and 3rd period students

We have just started a new inquiry unit with Language Arts teacher Sarah Rust and her students in 12 British Literature/Composition; although the course is identified as a senior level course, most of the students are juniors due to the nature of the IB curriculum.    We wanted to give students an opportunity to go deep with a research project and have opportunities to develop their own research questions and target processes and skills they identified as areas of personal need. We’re using Stripling’s Model of Inquiry as our framework while pulling in the affective aspects of Carol Kuhlthau’s Information Search Process model.  After surveying students on their topics of interest, we also asked them to identify information literacy and technology skills they felt confident about as well as areas of need (see 2nd period and see 3rd period).  We then decided to ask students the following questions:

  • What is a good research project?
  • What does/what can it look like?
  • What qualities does/should a good research project have?

These questions are seemingly simple, but reading the students’ responses reminds me of the complexity of prior experiences, perceptions, and connotations associated with words like “good” and “research.” I love that reading students’ responses forces me to rethink my own perceptions and criteria for identifying quality research projects and how I conceptualize research, especially when I think of it more broadly as information-seeking behavior in a variety of contexts: K-12 school, the real world, the workplace, and academia.

rust what is good research notecard surveys

Sarah collected the responses from our students this past Friday via index cards, and I then compiled them over the weekend.  You can read our students’ responses here in this Google doc; I’ve also enclosed a visualization in this post of their responses.  A few initial reactions of patterns I noticed in their responses:

  • While many students referenced a traditional paper, an overwhelming number of students indicated that images and multimedia were essential to a “good” research project.
  • Most students felt that research projects should be more than a traditional paper and that multimedia formats like Prezi and videos were valid and in some cases, superior, forms of a “text.”
  • Some students stressed quality and quantity of facts while others felt that a person’s insights and understandings were equally, if not more so, important.
  • The influence of the Schaffer Writing Program that has been in place here for a few years at NHS was reflected in the references to CDs (concrete details) and CMs (commentaries).
  • Several students felt the topic should be interesting and of importance to both the writer and the reader of the project/paper.
  • Quite a few students stressed the importance of organization, while others mentioned citations and appropriate references to reliable sources; a few shared that they wanted more freedom to use alternative sources of information beyond those traditionally considered “authoritative.”
  • Several students discussed the importance of “depth” in the quality and scope of the project.

I can’t help but wonder what we might glean if we start inquiry units or initial research projects with questions like these to see where our students are and what their perceptions might be. I also believe this type of exercise can be a springboard for engaging students in the process of instructional design, including the design of and criteria for formative and summative assessments; it can also be a conversation starter about how context might determine our responses and how we define “good” in different information-seeking tasks and settings.

How might your students define research and what counts as a “good” or effective research project?  Your teachers?  Your administrators?  I’d love to hear from you if you have posed these sorts of questions in your learning community.