Last December I discussed the NRC report, *Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering*. One of its themes is the importance of adopting “evidence-based teaching strategies.” Carefully collected quantitative evidence that particular instructional strategies for undergraduate mathematics really are better is hard to find, so I was pleased to see two articles over the past month that present such evidence for active learning strategies.

One of the articles is the long-anticipated piece by Jerry Epstein, “The Calculus Concept Inventory—Measurement of the Effect of Teaching Methodology in Mathematics,” which appeared in the September 2013 *Notices of the AMS* [1]. Because this article is so readily available to all mathematicians, I will not say much about it. Epstein’s Calculus Concept Inventory (CCI) represents a notable advance in our ability to assess the effectiveness of different pedagogical approaches to basic calculus instruction. He presents strong evidence for the benefits of Interactive Engagement (IE) over more traditional approaches. As with the older Force Concept Inventory developed by Hestenes *et al.* [2], the CCI has a great deal of surface validity: it measures the kinds of understandings we implicitly assume our students pick up in studying the first semester of calculus, and it clarifies how little basic conceptual understanding is absorbed under traditional pedagogical approaches. Epstein reports statistically significant improvements in conceptual understanding from the use of Interactive Engagement, stronger gains than those seen from other types of interventions, including plugging the best instructors into a traditional lecture format. Because the CCI is so easily implemented and scored, it should spur greater study of what is most effective in improving undergraduate learning of calculus.

The second paper is “Assessing Long-Term Effects of Inquiry-Based Learning: A Case Study from College Mathematics” by Marina Kogan and Sandra Laursen [3]. This was a carefully controlled study of the effects of Inquiry-Based Learning (IBL) on persistence in mathematics courses and performance in subsequent courses. The authors were able to compare IBL and non-IBL sections taught at the same universities during the same terms.

IE and IBL describe comparable pedagogical approaches. Richard Hake defined IE as “… those [methods] designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors.” [4]

IBL does this and is also expected to incorporate a structured curriculum that builds toward the big ideas, a component that may or may not be present in IE. For the Kogan and Laursen study, IBL was a label that the universities themselves chose to apply to certain sections. The trained observers in the study found significant differences between IBL and non-IBL sections: they rated IBL sections “higher for creating a supportive classroom atmosphere, eliciting student intellectual input, and providing feedback to students on their work” than non-IBL sections. IBL sections spent an average of 60% of class time on student-centered activities; in non-IBL sections the instructor talked at least 85% of the time.

Kogan and Laursen compared IBL and non-IBL sections for three courses:

- G1, the first term of a three-term sequence covering multivariable calculus, linear algebra, and differential equations, taken either in the freshman or sophomore year;
- L1, a sophomore/junior-level introduction to proof course; and
- L2, an advanced junior/senior-level mathematics course with an emphasis on proofs.

For L1 and L2, students did not know in advance whether they were enrolling in IBL or non-IBL sections. The IBL section of G1 was labeled as such. In all cases, the authors took care to control for discrepancies in student preparation and ability.

IBL had the least impact on the students in the advanced course, L2. IBL students had slightly higher grades in subsequent mathematics courses (2.6 for non-IBL, 2.8 for IBL) and took slightly fewer subsequent mathematics courses (1.5 for non-IBL, 1.4 for IBL).

For the introduction to proof course, L1, IBL students again had slightly higher grades in the following term (2.8 for non-IBL, 3.0 for IBL). There were statistically significant gains (*p* < 0.05) from IBL in the number of subsequent courses that students took that were required for a mathematics major, both for the overall population (0.5 for non-IBL, 0.6 for IBL) and, especially, for women (0.6 for non-IBL, 0.8 for IBL).

For L1, the sample size was large enough (1077 non-IBL, 204 IBL over seven years) to investigate persistence and subsequent performance broken down by students’ overall GPA, recorded as low (< 2.5), medium (2.5 to 3.4), or high (> 3.4). For the non-IBL students, differences in overall GPA were reflected in dramatic differences in their grades in subsequent mathematics courses required for the major, all statistically significant at *p* < 0.001: low GPA students averaged 1.96, medium GPA students averaged 2.58, and high GPA students averaged 3.36. All three categories of IBL students performed better in subsequent required courses, but the greatest improvement was seen with the weakest students. Taking this course as IBL wiped out much of the difference between low GPA and medium GPA students, and it also decreased the difference between medium and high GPA students in subsequent required courses: among IBL students, low GPA students averaged 2.43, medium GPA students averaged 2.75, and high GPA students averaged 3.38. See Figure 1.

Figure 1: Average grade in subsequent courses required for the major following an introduction to proof class taught either as non-IBL or IBL.
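To make the narrowing of these gaps concrete, one can compute the grade differences between GPA bands directly from the averages quoted above. A minimal sketch in Python (the `grades` dictionary and `gap` helper are my own constructions, not from the paper):

```python
# Subsequent-course grade averages reported for L1, by overall GPA band
# (low < 2.5, medium 2.5-3.4, high > 3.4).
grades = {
    "non-IBL": {"low": 1.96, "medium": 2.58, "high": 3.36},
    "IBL":     {"low": 2.43, "medium": 2.75, "high": 3.38},
}

def gap(section, lower, upper):
    """Grade gap between two GPA bands within one section type."""
    return round(grades[section][upper] - grades[section][lower], 2)

for lower, upper in [("low", "medium"), ("medium", "high")]:
    for section in ("non-IBL", "IBL"):
        print(f"{section}: {lower}-{upper} gap = {gap(section, lower, upper)}")
```

By this arithmetic, the low-to-medium gap shrinks from 0.62 under non-IBL to 0.32 under IBL, roughly half, which is the sense in which IBL “wiped out much of the difference.”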

While the number of subsequent courses satisfying the requirements for a mathematics major was higher for all students taking the IBL section of L1, here the greatest gain was among those with the highest GPA. For low GPA students, the number of courses was 0.50 for non-IBL and 0.51 for IBL; for medium GPA it was 0.53 for non-IBL and 0.62 for IBL; and for high GPA it was 0.49 for non-IBL and 0.65 for IBL. See Figure 2.

Figure 2: Average number of subsequent courses taken and required for the major following an introduction to proof class taught either as non-IBL or IBL.

For the first course in the sophomore sequence, G1, IBL did have a statistically significant effect on grades in the next course in the sequence (*p* < 0.05): the average grade in the second course was 3.0 for non-IBL students and 3.4 for IBL students. There was also a modest gain in the number of subsequent mathematics courses that students took that were required for their majors: 1.96 courses for non-IBL students, 2.09 for IBL students.
These are the highlights of the Kogan and Laursen paper. Most striking is the very clear evidence that IBL does no harm, despite the fact that spending more time on interactive activities inevitably cuts into the amount of material that can be “covered.” In fact, it was the course with the densest required syllabus, G1, in which IBL showed the clearest gains in terms of preparing students for the next course.

IBL is often viewed as a luxury in which we might indulge our best students. In fact, as this study demonstrates, it can have its greatest impact on those students who are most at risk.

[1] J. Epstein. 2013. The Calculus Concept Inventory—Measurement of the Effect of Teaching Methodology in Mathematics. *Notices of the AMS* **60**(8), 1018–1026. http://www.ams.org/notices/201308/rnoti-p1018.pdf

[2] D. Hestenes, M. Wells, and G. Swackhamer. 1992. Force concept inventory. *Physics Teacher* **30**, 141–158. http://modelinginstruction.org/wp-content/uploads/2012/08/FCI-TPT.pdf

[3] M. Kogan and S. Laursen. 2013. Assessing Long-Term Effects of Inquiry-Based Learning: A Case Study from College Mathematics. *Innovative Higher Education* **39**(3). http://link.springer.com/article/10.1007/s10755-013-9269-9

[4] R. R. Hake. 1998. Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. *American Journal of Physics* **66**(1), 64–74. http://www.physics.indiana.edu/~sdi/ajpv3i.pdf