
Using Scientific Abstracts to Measure Learning Outcomes in the Biological Sciences

    Authors: Rebecca Giorno1, William Wolf1, Patrick L. Hindmarsh1, Jeffrey V. Yule1, Jeff Shultz1,*
    Affiliations: 1: School of Biological Sciences, Louisiana Tech University, Ruston, LA 71272
    Author and Article Information
    • Published 02 December 2013
    • Supplemental materials available at http://jmbe.asm.org
    • *Corresponding author. Mailing address: School of Biological Sciences, 1 Adams Blvd., Louisiana Tech University, Ruston, LA 71272. Phone: 318-257-2753. Fax: 318-257-4574. E-mail: jlshultz@latech.edu.
    • ©2013 Author(s). Published by the American Society for Microbiology.
    Source: J. Microbiol. Biol. Educ. December 2013 vol. 14 no. 2 275-276. doi:10.1128/jmbe.v14i2.633

    Abstract:

    Educators must often measure the effectiveness of their instruction. We designed, developed, and preliminarily evaluated a multiple-choice assessment tool that requires students to apply what they have learned to evaluate scientific abstracts. This examination methodology offers the flexibility both to challenge students in specific subject areas and to develop the critical thinking skills that upper-level classes and research require. Although students do not create an end product (performance), they must demonstrate proficiency in a skill that scientists use regularly: critically evaluating scientific literature via abstract analysis, a direct measure of scientific literacy. Abstracts from peer-reviewed research articles lend themselves to in-class testing, since they are typically 250 words or fewer and their analysis requires skills beyond rote memorization. To assess the effectiveness of particular courses, we performed pre- and postcourse assessments in five upper-level courses (Ecology, Genetics, Virology, Pathology, and Microbiology) to determine whether students were developing subject area competence and whether abstract-based testing was a viable instructional strategy. Assessment should cover all levels of Bloom’s hierarchy, which can be accomplished via multiple-choice questions (2). We hypothesized that by comparing mean scores on pre- and posttest exams designed to address specific tiers of Bloom’s taxonomy, we could evaluate how effectively a course prepared students to demonstrate subject area competence. We also sought to develop general guidelines for preparing such tests and methods for identifying test- and course-specific problems.
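
The pre/post comparison described in the abstract lends itself to a simple statistical illustration. The sketch below applies a paired t-test to per-student pre- and posttest scores; the scores, the function name compare_pre_post, and the use of SciPy's ttest_rel are illustrative assumptions for demonstration, not the authors' actual analysis.

    # Minimal sketch (not the authors' analysis): compare mean pre- and
    # posttest scores for one course with a paired t-test. All values
    # below are hypothetical placeholders.
    from scipy import stats

    def compare_pre_post(pre_scores, post_scores, alpha=0.05):
        """Paired t-test on per-student pre/post exam scores (percent correct)."""
        # ttest_rel pairs each student's posttest score with their own pretest score.
        t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
        mean_gain = (sum(post_scores) / len(post_scores)
                     - sum(pre_scores) / len(pre_scores))
        return {
            "mean_gain": mean_gain,  # average improvement, percentage points
            "t": t_stat,
            "p": p_value,
            "improved": p_value < alpha and mean_gain > 0,
        }

    # Hypothetical scores for ten students in one upper-level course.
    pre = [42, 55, 48, 60, 51, 39, 62, 47, 58, 50]
    post = [61, 70, 59, 72, 66, 55, 75, 60, 71, 64]

    result = compare_pre_post(pre, post)
    print(f"mean gain = {result['mean_gain']:.1f} points, "
          f"t = {result['t']:.2f}, p = {result['p']:.4f}")

A paired test is appropriate here because each student contributes both a pre and a post score; for small classes or clearly non-normal score distributions, a Wilcoxon signed-rank test (scipy.stats.wilcoxon) would be a reasonable nonparametric alternative.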


References

1. Jacobs LC, Chase CI. 1992. Developing and using tests effectively: a guide for faculty. Jossey-Bass Publishers, San Francisco, CA.
2. Palmer EJ, Devitt PG. 2007. Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? BMC Med Educ 7:49. doi:10.1186/1472-6920-7-49.
3. Shultz J. 2012. Improving active learning by integrating scientific abstracts into biological science courses. J Coll Sci Teach 41:44–47.