Saturday, April 1, 2017

Conceptual Understanding

You can now follow me on Twitter @dbressoud.

Continuing my series of summaries of articles that have appeared in the International Journal of Research in Undergraduate Mathematics Education (IJRUME), this month I want to briefly describe three studies that address issues of conceptual understanding. The first is a study out of Israel that probed student difficulties in understanding integration as accumulation (Swidan and Yerushalmy, 2016). The second is from France, exploring student difficulties with understanding the real number line as a continuum (Durand-Guerrier, 2016). The final paper, from England, explores a method of measuring conceptual understanding (Bisson, Gilmore, Inglis, and Jones, 2016).

Integration as Accumulation
To use the definite integral, students need to understand it as accumulation. In particular, the Fundamental Theorem of Integral Calculus rests on the recognition that the definite integral of a function f, when given a variable upper limit, is an accumulation function of a quantity for which f describes the rate of change. Pat Thompson and his colleagues (2013) have described the course they developed at Arizona State University that places this realization at the heart of the calculus curriculum.

We know that students have a difficult time understanding and working with a definite integral with a variable upper limit. The authors of the IJRUME paper suggest that much of the problem lies in the fact that when students are introduced to the definite integral as a limit of Riemann sums, they only consider the case when the upper and lower limits on the Riemann sum are fixed. The limit is thus a number, usually thought of as the area under a curve. Making the transition to the case where the upper limit is variable is thus non-intuitive.

The authors used software to explore how students come to recognize accumulation functions based on right-hand Riemann sums, and how the properties of these functions are shaped by the rate of change function. The experiment involved a graphing tool, Calculus UnLimited (CUL), in which students input a function and the software provides values of the corresponding accumulation function given by a right-hand sum with Δx = 0.5 (see Figure 1). Students could adjust the upper and lower limits, in jumps of 0.5. The software displays the rectangles corresponding to a right-hand sum. Students were not told that these were points on an accumulation function, merely that this was a function related to the initial function. They were encouraged to start with a lower limit of –3, to explore the functions x², x² – 9, and then cubic polynomials, and to discover what they could about this second function. Students received no further prompts.

Figure 1: The CUL interface. Taken from Swidan and Yerushalmy (2016), page 33.

Thirteen pairs of Israeli 17-year-olds participated in the study. They all had been studying derivatives and indefinite integrals, but none had yet encountered definite integrals. Each pair spent about an hour exploring this software. Their actions and remarks were videotaped and then analyzed.

One of the interesting observations was that the key to recognizing that the second function accumulates areas came from playing with the lower limit. Adjusting the upper limit simply adds or removes points, but adjusting the lower limit moves the plotted points up or down. Once students realized that the point corresponding to the lower limit is always zero, they were able to deduce that the y-value of the next point is the area of the first rectangle, and that succeeding points reflect values obtained by adding up the areas of the rectangles. Rectangles below the x-axis were shaded in a darker color, and students quickly picked up that they were subtracting values. Seven of the thirteen pairs of students went as far as remarking on how the concavity of the accumulation function is related to the behavior of the original function.
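For readers who want to see the mechanics, below is a minimal sketch, in Python, of the kind of discrete accumulation function CUL plots: a right-hand sum with Δx = 0.5 taken from an adjustable lower limit. It is an illustration rather than the CUL software itself, but it reproduces the behaviors the students noticed: the point at the lower limit is always zero, each new point adds the area of one more rectangle, and rectangles below the x-axis subtract from the running total.

```python
# A minimal sketch (not the CUL software) of the discrete accumulation function
# the students explored: a right-hand Riemann sum with a fixed step of 0.5,
# evaluated from an adjustable lower limit up to each plotted x.

def accumulation_points(f, lower, upper, dx=0.5):
    """Return the points (x, F(x)) of the right-hand-sum accumulation function."""
    points = [(lower, 0.0)]          # the point at the lower limit is always zero
    x, total = lower, 0.0
    while x < upper - 1e-9:
        x += dx
        total += f(x) * dx           # right-hand rule: height taken at the right
                                     # endpoint; heights below the x-axis subtract
        points.append((x, total))
    return points

# Example: f(x) = x^2 - 9 on [-3, 3]; where f is negative the accumulation decreases.
for x, F in accumulation_points(lambda x: x**2 - 9, lower=-3, upper=3):
    print(f"x = {x:4.1f}   F(x) = {F:7.3f}")
```

Moving the lower limit re-anchors the zero point and shifts every plotted value up or down, which is exactly the behavior that tipped the students off.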

This work suggests that a Riemann sum with a variable upper limit is more intuitive than a definite integral with a variable upper limit. In addition, it appears that students can discover many of the essential properties of a discrete accumulation function if allowed the opportunity to experiment with it.

Understanding the Continuum
The second paper explores student difficulties with the properties of the real number line and describes an intervention that appears to have been useful in helping students understand the structure of the continuum. Mathematicians of the nineteenth century struggled to understand the essential differences between the continuum of all real numbers and dense subsets such as the set of rational numbers. It comes as no surprise that our students also struggle with these distinctions.

The author analyzes the transcripts from an intervention described by Pontille et al. (1996). It began with the following question: Given an increasing function f (x < y implies f(x) ≤ f(y)) from an ordered set S into itself, can we conclude that there will always exist an element s in S for which f(s) = s? The answer, of course, depends on the set. The intervention asks students to answer this question for four sets: a finite set of positive integers, the set of numbers with finite decimal expansions in [0,1], the set of rational numbers in [0,1], and the entire interval [0,1]. In the original work, this question was posed to a class of lycée students in a scientific track. Over the course of an academic year, they periodically returned to this question, gradually building a refined understanding of the structure of the continuum. The author’s analysis of the transcripts from these classroom discussions is fascinating.

Durand-Guerrier then posed this same question to a group of students in a graduate teacher-training program. In both cases, students were able to answer the question in the affirmative for the finite set, using an inductive proof or reductio ad absurdum. Almost all then tried to apply this proof to the dense countable sets. Here they ran into the realization that there is no “next” number. The graduate students, given only an hour to work on this, did not get much further. The lycée students did come to doubt that the statement is always true for these sets. As they began to think about the “holes” these sets leave, they were able to construct counterexamples.
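To make the “holes” concrete, here is one standard counterexample of the kind that can be constructed for the rationals in [0,1]. It is my illustration, not necessarily the example that appears in the classroom transcripts.

```latex
\[
  f(x) \;=\; \frac{x+1}{x+2}, \qquad f:\ \mathbb{Q}\cap[0,1] \to \mathbb{Q}\cap[0,1].
\]
% f is increasing, since f(x) = 1 - 1/(x+2); it sends rationals to rationals,
% and its values lie in [1/2, 2/3]. A fixed point would have to satisfy
\[
  \frac{x+1}{x+2} = x
  \;\Longleftrightarrow\;
  x^{2} + x - 1 = 0
  \;\Longleftrightarrow\;
  x = \frac{\sqrt{5}-1}{2},
\]
% which is irrational, so f has no fixed point in the set: the increasing
% function climbs toward the "hole" at (sqrt(5)-1)/2 without ever landing on it.
```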

The continuum provided the most difficulty. The lycée students were eventually able to prove that the statement is true in this case, but only after being given the hint to consider the set of x in [0,1] for which f(x) > x and to draw on the completeness property of the continuum: every nonempty set that is bounded above has a least upper bound.
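For completeness, here is a sketch of how that hint can be carried through; it is my reconstruction of the argument, not necessarily the proof the students produced.

```latex
% Let A = { x in [0,1] : f(x) > x }. If A is empty, then f(0) <= 0, and since f
% maps [0,1] into itself, f(0) = 0 is a fixed point. Otherwise A is nonempty and
% bounded above by 1, so completeness gives
\[
  s = \sup A .
\]
% If f(s) < s, choose x in A with f(s) < x <= s; monotonicity gives
% f(x) <= f(s) < x, contradicting x in A. If f(s) > s, set t = f(s); then t > s,
% so t is not in A and f(t) <= t, while s < t gives t = f(s) <= f(t); hence
% f(t) = t. In every case f has a fixed point, and the least-upper-bound
% property is precisely the step unavailable in the dense countable subsets.
```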

Measuring Conceptual Understanding
The last paper in this set addresses the problem of measuring conceptual understanding. We know that students can be proficient in answering procedural questions without the least understanding of what they are doing or why they are doing it. But measuring conceptual understanding is difficult. A meaningful assessment with limited possible answers, such as a concept inventory, requires a great deal of work to develop and validate. Open-ended questions can provide a better window into student thinking and understanding, but consistent application of scoring rubrics across multiple evaluators is hard to achieve.

The authors build a solution from the observation that it is far easier to compare the quality of the responses from two students than it is to judge one student’s response against a rubric. They therefore suggest asking a single, very open-ended question and scoring it by ranking the student responses, with the ranking built up from pairwise comparisons. As an example, to evaluate student understanding of the derivative, they provided the prompt,
Explain what a derivative is to someone who hasn’t encountered it before. Use diagrams, examples and writing to include everything you know about derivatives.
The 42 students in this study first read several examples of situations involving velocity and acceleration (presumably to prompt them to think of derivatives as rates of change rather than a collection of procedures) and were then given 20 minutes to write their responses to the prompt. 

Afterwards, 30 graduate students each judged 42 pairings. The authors found very high inter-rater reliability (r = .826 to .907). In fact, they found that comparative judgments appeared to do a better job of evaluating conceptual understanding than did Epstein’s Calculus Concept Inventory (Epstein, 2013).

Similar studies were undertaken to evaluate student understanding of p-values and 11- to 12-year-olds’ understanding of the use of letters in algebra. Again, there was very high inter-rater reliability, and in these cases there were high levels of agreement with established instruments.

This approach is admittedly a broad-brush method of assessment, but it does enable the instructor to get some idea of what students are thinking and how they understand the concept at hand. It can be used even with large classes because it is not necessary to judge all possible pairs to get a meaningful ranking.
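For readers curious about the mechanics, comparative-judgement studies typically convert the pairwise decisions into a scale by fitting something like a Bradley–Terry model, in which each response gets a strength parameter and the probability that one response is judged better than another is modeled in terms of their strengths. The sketch below, with invented judgement data, shows the general idea; it is not necessarily the exact scaling procedure used by Bisson and her coauthors.

```python
# A minimal sketch of turning pairwise comparative judgements into a ranking by
# fitting a Bradley-Terry model with the classical MM (Zermelo) iteration.
# Illustration only: the judgement data are invented, and this may differ from
# the scaling procedure used in the study.

from collections import defaultdict

def bradley_terry(judgements, n_items, iterations=200):
    """judgements: list of (winner, loser) pairs; returns a strength score per item."""
    wins = defaultdict(int)      # number of comparisons each item won
    pairs = defaultdict(int)     # number of comparisons between each unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1

    p = [1.0] * n_items          # initial strengths
    for _ in range(iterations):
        new_p = []
        for i in range(n_items):
            denom = sum(pairs[frozenset((i, j))] / (p[i] + p[j])
                        for j in range(n_items)
                        if j != i and pairs[frozenset((i, j))] > 0)
            new_p.append(wins[i] / denom if denom > 0 else p[i])
        total = sum(new_p)
        p = [v / total for v in new_p]   # normalize to fix the overall scale
    return p

# Hypothetical data: four student scripts; each tuple records (better, worse) from
# one judge's comparison. Not every possible pair has to be judged.
judgements = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 0)]
scores = bradley_terry(judgements, n_items=4)
ranking = sorted(range(4), key=lambda i: scores[i], reverse=True)
print("scripts from strongest to weakest:", ranking)
```

Because the fitted strengths stabilize once each script has been involved in a modest number of comparisons, a judging plan need not cover every pair, which is what makes the method workable for large classes.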

Conclusion
The three papers referenced here are very different in focus and goal, but I do see the common thread of searching for ways to encourage and assess student understanding. After all, that is what teaching and learning are really about.

References
Bisson, M.-J., Gilmore, C., Inglis, M., and Jones, I. (2016). Measuring conceptual understanding using comparative judgement. IJRUME. 2:141–164.

Durand-Guerrier, V. (2016). Conceptualization of the continuum, an educational challenge for undergraduate students. IJRUME. 2:338–361.

Epstein, J. (2013). The Calculus Concept Inventory - measurement of the effect of teaching methodology in mathematics. Notices of the American Mathematical Society, 60, 1018–27.

Pontille, M. C., Feurly-Reynaud, J., & Tisseron, C. (1996). Et pourtant, ils trouvent. Repères IREM, 24, 10–34.

Swidan, O. and Yerushalmy, M. (2016). Conceptual structure of the accumulation function in an interactive and multiple-linked representational environment. IJRUME. 2:30–58.

Thompson, P.W., Byerley, C., and Hatfield, N. (2013). A conceptual approach to calculus made possible by technology. Computers in the Schools. 30:124–147.

Wednesday, March 1, 2017

MAA Calculus Studies: Use of Local Data

You can now follow me on Twitter @dbressoud.

In our 2012 study, Characteristics of Successful Programs in College Calculus (NSF #0910240), we found that the most successful departments had a practice of monitoring and reflecting on data from their courses. When we surveyed all departments with graduate programs in 2015 as part of Progress through Calculus (NSF #1430540), we asked about their access to and use of these data, which we refer to as “local data.”

The first thing we learned is that a few departments report no access to data about their courses or about what happens to their students, and for almost half, access is not readily available (see Table 1). When we asked, “Which types of data does your department review on a regular basis to inform decisions about your undergraduate program?”, most departments reported reviewing grade distributions and paying attention to end-of-term student course evaluations (Table 2). Between 40% and 50% of the surveyed departments correlate performance in subsequent courses with the grades students received in previous courses and look at how well placement procedures are being followed. Given how important it is to track persistence rates (see The Problem of Persistence, Launchings, January 2010), it is disappointing that only 41% of departments track these data. Regular communication with client disciplines is almost non-existent.

Table 1. Responses to the question, “Does your department have access to data to help inform decisions about your undergraduate program?” PhD indicates departments that offer a PhD in Mathematics. MA indicates departments for which the highest degree offered in Mathematics is a Master’s.

Table 2. Responses to the question, “Which types of data does your department review on a regular basis to inform decisions about your undergraduate program?”

We also asked departments to describe the kinds of data they collect and regularly review. Several reported combining placement scores, persistence, and grades in subsequent courses to better understand the success of their program. Some of the other interesting uses of data included universities that
  • Built a model of “at-risk” students in Calculus I using admissions data from the past seven years. Using it, they report “developing a program to assist these students right at the beginning of Fall quarter, rather than target them after they start to perform poorly.” (A hypothetical sketch of this kind of model appears after this list.)
  • Surveyed calculus students to get a better understanding of their backgrounds and attitudes toward studying in groups. 
  • Collected regular information from business and industry employers of their majors. 
  • Measured correlation of grade in Calculus I with transfer status, year in college, gender, whether repeating Calculus I, and GPA. 
  • Used data from the university’s Core Learning Objectives and a uniform final exam to inform decisions about the course (including the ordering of topics, emphasis on material and time devoted to mastery of certain concepts, particularly in Calculus II). 
  • Reviewed the performance on exam problems to decide if a problem type is too hard, a problem type needs to be rephrased, or an idea needs to be revisited on a future exam.
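As an illustration of the first bullet above, here is a hypothetical sketch of an “at-risk” model: a logistic regression fit to a few invented admissions variables. The survey response does not describe the university’s actual variables or modeling choices, so everything below, including the feature names and data, is made up for illustration.

```python
# Hypothetical sketch of an "at-risk" model for Calculus I: a logistic regression
# on invented admissions variables. Not the model described in the survey response.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Invented admissions features: placement score, high-school GPA, years of HS math.
X = np.column_stack([
    rng.normal(70, 12, n),       # placement exam score
    rng.normal(3.3, 0.4, n),     # high-school GPA
    rng.integers(3, 6, n),       # years of high-school mathematics
])
# Invented outcome: 1 = earned a D, F, or W in Calculus I (simulated for the sketch).
logit = -0.08 * (X[:, 0] - 70) - 1.5 * (X[:, 1] - 3.3) - 0.3 * (X[:, 2] - 4) - 1.2
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Flag incoming students whose predicted DFW probability exceeds a threshold, so
# support can begin at the start of the term rather than after the first poor exam.
at_risk = model.predict_proba(X_test)[:, 1] > 0.5
print(f"flagged {at_risk.sum()} of {len(at_risk)} students as at-risk")
```

A real model would be built from actual admissions records and validated against several years of outcomes; the point here is only that a simple classifier trained on pre-enrollment variables can flag students for support before the term begins.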
The intelligent use of data to shape and monitor interventions is a central feature of the large-scale initiatives that are now underway. To mention just one, the AAU STEM Initiative (Association of American Universities, a consortium of 62 of the most prominent research universities in the U.S. and Canada) has established a Framework for sustainable institutional change. It can be found at https://stemedhub.org/groups/aau/framework (Figure 1).

Figure 1. AAU STEM Initiative Framework

The three levels of change are subdivided into topics, each of which links to programs at member universities that illustrate work on this aspect of the framework.

Cultural change encompasses
  1. Aligning incentives with expectations of teaching excellence.
  2. Establishing strong measures of teaching excellence.
  3. Leadership commitment.
Scaffolding includes
  1. Facilities.
  2. Technology.
  3. Data.
  4. Faculty professional development.
Pedagogy comprises
  1. Access.
  2. Articulated learning goals.
  3. Assessments.
  4. Educational practices.
In addition, AAU is now finalizing a list of “Essential Questions” to ask about the institution, the college, the department, and the course, illustrating the types of data and information that should be collected and pointing to helpful resources. This report, which should be published by the time this column appears, will be accessible through the AAU STEM Initiative homepage at https://stemedhub.org/groups/aau.


Wednesday, February 1, 2017

MAA Calculus Study: PtC Survey Results

You can now follow me on Twitter @dbressoud.


In spring 2015, as part of the MAA’s Progress through Calculus (PtC) grant (NSF #1430540), we surveyed all U.S. departments of mathematics that offer a graduate degree in Mathematics to learn about departmental practices, priorities, and concerns with respect to their mainstream courses in precalculus through single variable calculus. I reported on some of the results from this study in November 2015. This month’s column describes a variety of data related to mainstream Calculus I that were collected in that survey. The full report can be found under PtC Reports (link from maa.org/cspcc).

The survey was sent to the chairs of all departments of mathematics in the United States that offer a graduate degree in Mathematics (PhD or Master’s). We received responses from 134 of the 178 PhD-granting universities (75%) and 89 of the 152 Master’s-granting universities (59%).

Given how ineffective the standard precalculus course is known to be (see my Launchings column from October 2014), we were particularly interested in efforts to teach precalculus topics concurrently with calculus. Accomplishing this through a stretched-out Calculus I is now fairly common (20 of the 222 respondents use this approach to incorporate precalculus topics into Calculus I). Eleven universities have courses or options with extra hours to allow time for precalculus, and three offer precalculus courses designed to be taken concurrently with Calculus I. We also found 14 universities with an accelerated calculus course specifically designed to meet the needs of students entering with AP® Calculus credit. Three universities have special lower-credit courses that enable students who begin in a non-mainstream Calculus I to transition to mainstream calculus.

Table 1: Number of surveyed universities that reported using each
of the listed variations in single variable calculus classes.

Every five years, the Conference Board of the Mathematical Sciences (CBMS) surveys departments of mathematics in the U.S. to get enrollment numbers, but those are gathered only for the fall term. In this survey, we were particularly interested in how these numbers vary over the full year, including both academic and summer terms. While we only have results for a sample of universities, and no undergraduate colleges, the numbers are large enough (150,000 in Precalculus, 200,000 in mainstream Calculus I, and 160,000 in subsequent mainstream single variable classes) to give a good idea of how these enrollments distribute over the year. For Precalculus, 57% of the enrollment occurs in the fall term. The fall term accounts for 60% of the Calculus I students. Not surprisingly, Calculus II is predominantly a second-term course (47%), but 40% of the students who take Calculus II do so in the fall.

The distribution among the terms is complicated by the fact that some universities are on a quarter system and others on semesters. What I have labeled 2nd Term is either the spring semester or the winter quarter; the 3rd Term refers to the spring quarter for those on a quarter system. Summer aggregates all summer terms. Figure 1 shows actual numbers from the universities that responded, to give an idea of how enrollments drop off. For the purposes of the survey, “Precalculus” was defined as the last course before mainstream Calculus I; it is variously called Precalculus, College Algebra, College Algebra with Trigonometry, or Preparation for Calculus. Calculus II includes all mainstream single variable calculus courses that follow Calculus I. On a semester system, there is usually just one such course; on a quarter system, there are usually two.

Figure 1: Distribution of enrollments by term among the 205 universities that responded to this question.
2nd term = spring semester or winter quarter. 3rd term = spring quarter.
Calculus II includes all mainstream single variable calculus classes that follow Calculus I.

The number of contact hours (including recitation sections) in Calculus I averaged 4.17 (SD = 0.77) at PhD-granting universities and 4.25 (SD = 0.64) at Masters-granting universities. The DFW rate (the percentage of students receiving a D, an F, or a W for withdrawal) in mainstream Calculus I was 21% (SD = 12.2) at PhD-granting universities and 25% (SD = 13.7) at Masters-granting universities.

The next table (Table 2) reports the fraction of universities in which Calculus I is frequently taught by each type of instructor. For each category of instructor, the options were “Never,” “Rarely,” or “Frequently.”

Table 2: Percentage of universities for which each category of
instructor frequently teaches mainstream Calculus I.

Recitation sections were far more common at PhD-granting universities: all Calculus I classes have recitation sections at 49% of these institutions, some classes do at 6%, and there are no recitation sections at 45%. For Masters-granting universities, the percentages were 18% for all classes, 6% for some classes, and 76% for no classes.

We also found that active learning was much more common at Masters-granting universities than at PhD-granting universities. Figures 2 and 3 record the primary instructional format for mainstream Calculus I. “Some active learning” includes techniques such as the use of clickers or think-pair-share. “Minimal lecture” includes Inquiry Based Learning and flipped classes. “Other” usually means too much variation to identify a primary instructional format. We did find that 35% of the PhD-granting universities reported having at least some sections using active learning approaches.

Figure 2. Primary instructional format for regular classes
(not recitation sections) at 214 PhD-granting universities.
Figure 3. Primary instructional format for regular classes
(not recitation sections) at 109 Masters-granting universities.

At 73% of the PhD-granting universities and 74% of the Masters-granting universities that offer recitation sections, these sections are devoted simply to homework help, Q&A, and review. Recitation sections are built around active learning approaches 21% of the time at PhD-granting universities and 4% of the time at Masters-granting universities.

Table 3 reports which elements of mainstream Calculus I are common across all sections. We see much more uniformity at PhD-granting universities. In view of our findings from the earlier Characteristics of Successful Programs in College Calculus that coordination of course elements was one of the significant factors of successful calculus programs (see my Launchings column from January 2014), the results of this study suggest a great deal of room for improvement.

Table 3: Percentage of reporting universities that have these elements across all sections of mainstream Calculus I.

Another aspect of coordination that was characteristic of the most successful programs was the practice of regular meetings of the course instructors. As shown in Table 4, there is also a great deal of room for improvement here.

Table 4: Response to "When several instructors are teaching in the same term,
how often do they typically meet as a group to discuss the course?"

The situation at PhD-granting universities is disappointing. The primary means of instruction is still the large lecture with few or no structured opportunities for students to reflect on what is being presented to them, supplemented by recitation sections in which graduate students simply go over homework and answer student questions. At the Masters-granting universities, where classes are smaller and there is more emphasis on teaching, there is little coordination, often resulting in highly variable instruction. But there is room for hope. While there is no previous study with comparable data, there appears to be a good deal of experimentation. My own experience in visiting these predominantly large public universities is that they are aware that what they are doing is not working, and they are looking for ways to improve what happens in this critical sequence.






Sunday, January 1, 2017

IJRUME: Approximation in Calculus

You can now follow me on Twitter @dbressoud.

In an earlier column, "Beyond the Limit, III," I talked about how Michael Oehrtman and colleagues have been able to use approximation as a unifying theme for single variable calculus that helps students avoid many of the confusing aspects of the language of limits. I also pointed out that this is hardly a new idea, having been used by many textbook authors including Emil Artin in A Freshman Honors Course in Calculus and Analytic Geometry and Peter Lax and Maria Terrell in Calculus with Applications. The IJRUME research paper I wish to highlight this month, “A study of calculus instructors’ perceptions of approximation as a unifying thread of the first-year calculus” by Sofronas et al., looks at how common this approach actually is.

The authors address four research questions:

  1. Do calculus instructors perceive approximation to be important to student understanding of first-year calculus? 
  2. Do calculus instructors report emphasizing approximation as a central concept and/or unifying thread in the first-year calculus? 
  3. Which approximation ideas do calculus instructors believe are “worthwhile” to address in first-year calculus?  
  4. Are there any differences between demographic groups with respect to the approximation ideas they teach in first-year calculus courses? 
They surveyed calculus instructors at 182 colleges and universities, collecting 279 responses.


To the first two questions, 89% agreed that approximation is important, but only 51% considered it a central concept, and only 40% found that it provides a unifying thread (see Figure 1). For those who did consider it central and/or unifying, the reasons they gave included: (a) it illuminates reasons for studying calculus, (b) most functions are not elementary, and approximation is helpful in dealing with such functions, (c) approximation facilitates the understanding of fundamental concepts including limit, derivative, integral, and series, (d) linear approximations lie at the foundation of differential calculus, and (e) an emphasis on approximation resonates with the instructors’ personal interests in applied mathematics or numerical analysis.
Figure 1: Graph depicting participants’ perceptions of approximation (N=214).
 Source: Sofronas et al. 2015.

Many of those who did not consider approximation to be central or unifying stated that it is not sufficiently universal, being important only in a few contexts such as motivating the definition of the derivative at a point or the value of a definite integral. Many pointed to other unifying threads, such as limits or the study of change. Some objected to an emphasis on approximation because of its inevitable ties to the use of technology. Instructors also identified a large number of obstacles to the use of approximation. These included: (a) an overcrowded syllabus that leaves no room for the instructor to develop a unifying thread, (b) required adherence to a curriculum emphasizing procedural facility, (c) students whose weak preparation leaves them unready for the subtleties of approximation arguments, (d) lack of access to technology, and (e) lack of familiarity with how to use approximation ideas in developing calculus. I personally find these obstacles to be very sad, in particular the assumption on the part of many instructors that the only way to get through the required syllabus or to enable students to pass the course is to focus exclusively on memorizing procedures.

Jumping ahead to the fourth question, the authors found that the single factor most highly correlated with emphasizing approximation as a central concept and/or unifying thread was having served on either a local or a national calculus committee. Not surprisingly, this factor was also highly correlated with the number of years teaching calculus, rank, being the recipient of a teaching award, and having published or presented on a calculus topic.

To the third research question, the combined list of topics gleaned from all of the responses truly spans first-year calculus: numerical limits, definition of limit, definition of the derivative, derivative values, tangent line approximations, differentials, error estimation, function change, function roots and Newton’s method, linearization, integration, Riemann sums, Taylor polynomials and Taylor series, Newton’s second law, Einstein’s equation for force, L’Hospital’s rule, Euler’s method, and the approximation of irrational numbers. One unexpected outcome of the survey is that several of the respondents commented that answering this survey about their use of approximation in first-year calculus opened their eyes to the opportunity to use it as a unifying theme. As one respondent wrote,
I agree that approximation is an important concept AND after taking this survey I can see teaching calculus using approximation as the main theme. The rate of change theme offers many opportunities for real-life applications but I can see how using approximations from the beginning would offer other opportunities. It is an interesting idea, and I would love to incorporate more of this theme into my lessons.
For those who are interested in following up on the use of approximation as a unifying thread, this article also supplies a wealth of background information that includes a discussion of the different ways in which approximation can be used and the research evidence for its effectiveness as a guiding theme in developing student understanding of limits, derivatives, integrals, and series.

References

Artin, E. (1958). A Freshman Honors Course in Calculus and Analytic Geometry Taught at Princeton University. Buffalo, NY: Committee on the Undergraduate Program of the Mathematical Association of America

Lax, P. & Terrell, M.S. (2014). Calculus with Applications, Second Edition. New York, NY: Springer.

Sofronas, K.S., DeFranco, T.C., Swaminathan, H., Gorgievski, N., Vinsonhaler, C., Wiseman, B., & Escolas, S. (2015). A study of calculus instructors’ perceptions of approximation as a unifying thread of the first-year calculus. Int. J. Res. Undergrad. Math. Ed. 1:386–412. DOI 10.1007/s40753-015-0019-5

Thursday, December 1, 2016

IJRUME: Peer-Assisted Reflection

You can now follow me on Twitter @dbressoud.

The second paper I want to discuss from the International Journal of Research in Undergraduate Mathematics Education is a description of part of the doctoral work done by Daniel Reinholz, who earned his PhD at Berkeley in 2014 under the direction of Alan Schoenfeld. It consists of an investigation of the use of Peer-Assisted Reflection (PAR) in calculus [1].

Daniel Reinholz. Photo Credit: David Bressoud
PAR addresses an aspect of learning to do mathematics that Schoenfeld refers to as “self-reflection or monitoring and control” in his chapter on “Learning to Think Mathematically” [4]. As he observed in his problem-solving course at Berkeley, most students have been conditioned to assume that when presented with a mathematical problem, they should be able to identify immediately which tool to use. Among the possible activities that students might engage in while solving a problem—read, analyze, explore, plan, implement, and verify—most students quickly chose one approach to explore and then “pursue that direction come hell or high water” (Figure 1).

Figure 1: Time-line graph of a typical student attempt to solve a non-standard problem.
Source: [4, p.356, Figure 15-3]
In contrast, when he observed a mathematician working on an unfamiliar problem, he saw all of these strategies come into play, with a constant appraisal of whether the approach being used was likely to succeed and a readiness to try different ways of attacking the problem. He also found that mathematicians would verbalize the difficulties they were encountering, something seldom seen among students (Figure 2). Note that over half the time was spent making sense of the problem rather than committing to a particular direction. Triangles represent moments when explicit comments were made, such as “Hmm, I don’t know exactly where to start here.”

Figure 2: Time-line graph of a mathematician working on a difficult problem.
Source: [4, p.356, Figure 15-4]
In the conclusion to this section of his chapter, Schoenfeld wrote, “Developing self-regulatory skills in complex subject-matter domains is difficult.” In reference to two of the studies that had attempted to foster these skills, he concluded that, “Making the move from such ‘existence proofs’ (problematic as they are) to standard classrooms will require a substantial amount of conceptualizing and pedagogical engineering.”

One of the problems with the early attempts at instilling self-reflection was the tremendous amount of work required of the instructor. Reinholz implemented PAR in Calculus I, greatly simplifying the role of the instructor by using students as partners in analyzing each other’s work. The study was conducted in two phases in separate semesters; each semester included one experimental section and eight to ten control sections, all of which used the same blind-graded examinations. There were no significant differences between sections in either the ability of students entering the class or student demographics. The measure of success was the percentage of students earning a grade of C or higher. In the first phase of the study, the experimental section had a success rate of 82%, as opposed to 69% in the control sections. In the second phase, success rose from 56% in the control sections to 79% in the experimental section.

Reinholz observed a noticeable improvement in student solutions to the PAR problems after they had received peer feedback. From student interviews, he found that many students in the PAR section had learned the importance of iteration, that homework is not just something to be turned in and then forgotten, but that getting it wrong the first time was okay as long as they were learning from their mistakes. Students were learning the importance of explaining how they arrived at their solutions. And they appreciated the chance to see the different approaches that other students in the class might take.

What is most impressive about this intervention is how relatively easy it is to implement. Each week, the students would be given one “PAR problem” as part of their homework assignment. They were required to work on the problem outside of class, reflect on their work, exchange their solution with another student and provide feedback on the other student’s work in class, and then finalize the solution for submission. The time in class in which students read each other’s work and exchanged feedback took only ten minutes per week: five minutes for reading the other’s work (to ensure they really were focusing on reasoning, not just the solution) and five minutes for discussion.

The difficulty, of course, lies in ensuring that the feedback provided by peers is useful. Reinholz identifies what he learned from several iterations of PAR instruction. In particular, he found that it is essential for the students to be explicitly taught how to provide useful feedback. By the time he got to Phase II, Reinholz was giving the students three sample solutions to that week’s PAR problem, allowing two to three minutes to read and reflect on the reasoning in each, and then engaging in a whole class discussion for about five minutes before pairing up to analyze and reflect on each other’s work.

Further details can be found in [2] and [3]. For anyone interested in using Peer-Assisted Reflection, this is a useful body of work with a wealth of details on how it can be implemented and strong evidence for its effectiveness.

References

[1] Reinholz, D.L. (2015). Peer-Assisted Reflection: A design-based intervention for improving success in calculus. International Journal of Research in Undergraduate Mathematics Education. 1:234–267.

[2] Reinholz, D. (2015). Peer conferences in calculus: the impact of systematic training. Assessment & Evaluation in Higher Education, DOI: 10.1080/02602938.2015.1077197

[3] Reinholz, D.L. (2016). Improving calculus explanations through peer review. The Journal of Mathematical Behavior. 44: 34–49.

[4] Schoenfeld, A.H. (1992). Learning to think mathematically: problem-solving, metacognition, and sense-making in mathematics. Pp. 334–370 in Handbook for Research in Mathematics Teaching and Learning. D. Grouws (Ed.). New York: Macmillan.

Tuesday, November 1, 2016

IJRUME: Measuring Readiness for Calculus

You can now follow me on Twitter @dbressoud.

In 2015, the International Journal of Research in Undergraduate Mathematics Education (IJRUME) was launched by Springer with editors-in-chief Karen Marrongelle and Chris Rasmussen from the U.S. and Mike Thomas from New Zealand. It was established to “become the central, premier international journal dedicated to university mathematics education research.” While this is a journal by mathematics education researchers for mathematics education researchers, many of the articles are directly relevant to those of us engaged in the teaching of post-secondary mathematics. This then is the first of what I anticipate will be a series of columns abstracting some of the insights that I gather from this journal.

I have chosen for the first of these columns the paper by Marilyn Carlson, Bernie Madison, and Richard West, “A study of students’ readiness to learn calculus.” [1] It is common to point to students’ lack of procedural fluency as the culprit behind their difficulties when they get to post-secondary calculus. Certainly, this is a problem, but not the whole story. Work over the past quarter century by Tall, Vinner, Dubinsky, Monk, Harel, Zandieh, Thompson, Carlson, and many others has led the authors to identify major reasoning abilities and understandings that students need for success in calculus. This paper describes a validated diagnostic test that measures foundational reasoning abilities and understandings for learning calculus, the Calculus Concept Readiness (CCR) instrument.

The reasoning abilities and conceptual understandings assessed by CCR require students to move beyond a procedural or action-oriented understanding of mathematics. Whether it is an equation such as 2 + 3 = 5 or a function definition, f(x) = x² + 3x + 6, students are introduced to these as describing an action to be taken, adding 2 to 3 or plugging in various values for x. To make sense of and use the ideas of calculus, students need to view a function as a process (defined by a function formula, graph, or word description) that characterizes how the values of two varying quantities change together. Listed below are four of the reasoning abilities and understandings assessed by CCR that the authors highlight in their article.

  1. Covariational Reasoning. When two variables are linked by an equation or a functional relationship, students need to understand how changes in one variable are reflected in changes in the other. The classic example considers how the rates of change of height and volume are related when water is poured into a non-cylindrical container such as a cone. At an even more basic level, students need to be able to translate information about the velocities of two runners into an understanding of which runner is ahead at which times. Another example, which involves covariational reasoning as well as understanding rate as a ratio, considers the height of a ladder and its distance from a wall (Figure 1). When the authors administered their instrument to 631 students who were starting Calculus I, only 27% were able to select the correct answer (c) to the ladder problem.

    Figure 1. The ladder problem.

  2. Understanding the Function Concept. Too many students interpret f(x) as an unnecessarily long-winded way of saying y. They see a function definition such as f(x) = x² + 3x + 6 as simply a prescription for how to take an input x and turn it into an output f(x). Such a limited view makes it difficult for students to manipulate functional relationships or to compose function formulas. Carlson et al. asked their 631 students for the formula for the area of a circle in terms of its circumference and offered the following list of possible answers:
          a. A = C²/4π
          b. A = C²/2
          c. A = (2πr)²
          d. A = πr²
          e. A = π(C²/4)
    Only 28% chose the correct answer (a); a worked version of the computation appears just after this list. As the authors learned from interviewing a sample of these students, those who answered correctly were the ones who could see the equation C = 2πr as a process relating C and r, one that could be inverted and then composed with the familiar functional relationship between area and radius.
  3. Proportional Relationships. Too many students do not understand proportional reasoning. When Carlson et al. in an earlier study [2] administered the rain-gauge problem of Piaget et al. (Figure 2) to 1205 students who were finishing a precalculus course, only 43% identified the correct answer (as presented in Figure 2, it is 4⅔). Many students preserve the difference rather than the ratio, giving 5 as the answer. Difficulties with proportional reasoning are known to impede student understanding of constant rate of change, which in turn underpins average rate of change, which is fundamental to understanding the meaning of the derivative.

    Figure 2. The rain gauge problem (taken from [3], [4])

  4. Angle Measure and Sine Function. As I described some years ago in an article for The Mathematics Teacher [5], the emphasis in high school trigonometry on the sine as a ratio of the lengths of sides of a triangle—often leading to the misconception that the sine is a function of a triangle rather than an angle—can lead to difficulties when encountering the sine in calculus, where it must be understood as a periodic function expressible in terms of arc length. An example is given in Figure 3, a problem for which only 21% of the Calculus I students chose the correct answer (e). Student interviews revealed that difficulties with this problem most often arose because students did not understand how to represent an angle measure using the length of the arc cut off by the angle’s rays.
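For reference, here is the worked composition behind answer (a) to the circle problem above. This is standard algebra rather than anything taken from the paper’s interviews: invert C = 2πr to express r in terms of C, then substitute into A = πr².

```latex
\[
  C = 2\pi r
  \;\Longrightarrow\;
  r = \frac{C}{2\pi}
  \;\Longrightarrow\;
  A = \pi r^{2} = \pi\left(\frac{C}{2\pi}\right)^{2} = \frac{C^{2}}{4\pi}.
\]
```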


What lessons are we to take away from this for our own classes? Last spring, in What we say/What they hear and What we say/What they hear II, I discussed problems of communication between instructors and students. The work of Carlson, Madison, and West illustrates some of the fundamental levels at which miscommunication can occur and identifies the productive ways of thinking that students need to develop.

References

[1] Carlson, M.P., Madison, B., & West, R.D. (2015). A study of students’ readiness to learn calculus. Int. J. Res. Undergrad. Math. Ed. 1:209–233. DOI 10.1007/s40753-015-0013-y.

[2] Carlson, M., Oehrtman, M., & Engelke, N. (2010). The precalculus concept assessment (PCA) instrument: a tool for assessing reasoning patterns, understandings and knowledge of precalculus level students. Cognition and Instruction, 28(2):113–145.

[3] Piaget, J., Blaise-Grize, J., Szeminska, A., & Bang, V. (1977). Epistemology and psychology of functions. Dordrecht: Reidel.

[4] Lawson, A.E. (1978). The development and validation of a classroom test of formal reasoning. Journal of Research in Science Teaching, 15, 11–24. doi:10.1002/tea.3660150103.

[5] Bressoud, D.M. (2010). Historical reflections on teaching trigonometry. The Mathematics Teacher. 104(2):106–112.



Saturday, October 1, 2016

MAA Calculus Study: Women in STEM

It is nice to see that the national media has picked up one of the publications arising from the MAA’s national study, Characteristics of Successful Programs in College Calculus (NSF #0910240). It is the article by Ellis, Fosdick, and Rasmussen, “Women 1.5 times more likely to leave STEM pipeline,” that was published in PLoS ONE on July 13 of this year. The media coverage includes:
...as well as a host of blogs and regional news sources.

The article was an outgrowth of the “switcher” analysis that Jess Ellis and Chris Rasmussen had begun, using data from our 2010 national survey to study students who came into Calculus I intending to continue on to Calculus II but changed their minds by the end of the course. You can find a preliminary report on the Ellis and Rasmussen switcher analysis in my column for December 2013, MAA Calculus Study: Persistence through Calculus, and a further analysis of the differences between men and women in the November 2014 column, MAA Calculus Study: Women are Different. See also Rasmussen and Ellis (2013).

The 2013 column reported that women were about twice as likely as men to switch out of the calculus sequence, but those data were compromised by several lurking variables, most significantly intended major. Women are heavily represented in the biological sciences, much less so in engineering and the physical sciences. Since the biological sciences are less likely to require a second semester of calculus, some of the effect was almost certainly due to different requirements.

The study published in PLoS ONE controlled for student preparedness for Calculus I, intended career goals, institutional environment, and student perceptions of instructor quality and use of student-centered practices. They found that even with these controls, women were 50% more likely to switch out than men. As I discussed in my 2014 column, while Calculus I is very efficient at destroying the mathematical confidence of most of the students who take it, it is particularly effective at doing so for women (see Figure 1). As Ellis et al. report, 35% of the STEM-intending women who switched out chose as one of their reasons, “I do not believe I understand the ideas of Calculus I well enough to take Calculus II.” Only 14% of the men chose this reason.

Figure 1: Change in standard mathematical confidence at the beginning of the Calculus I semester (pre- survey) and at the end of the semester (post-survey) separated by career intentions, gender and persistence status, [N = 1524] doi:10.1371/journal.pone.0157447.g004

The last figure in the Ellis et al. article is enlightening (see Figure 2). If we could raise the persistence rate of women, once they choose to enter Calculus I, to match that of men, we would get a 50% increase in the percentage of women who enter the STEM workforce each year.

Figure 2: Projected participation of STEM if women and men persisted at equal rates after Calculus I. The dotted line represents the projected participation of women. doi:10.1371/journal.pone.0157447.g005

I believe that this issue of women’s confidence is cultural, not biological. It fits in with all we know about stereotype threat. When the message is that women are not expected to do as well as men in mathematics, negative signals loom very large. Calculus—as taught in most of our colleges and universities—is filled with negative signals.

References

Ellis, J., Fosdick, B.K., and Rasmussen, C. (2016). Women 1.5 times more likely to leave STEM pipeline after calculus compared to men: Lack of mathematical confidence a potential culprit. PLoS ONE 11(7): e0157447. doi:10.1371/journal.pone.0157447

Rasmussen, C., & Ellis, J. (2013). Who is switching out of calculus and why? In Lindmeier, A. M. & Heinze, A. (Eds.). Proceedings of the 37th Conference of the International Group for the Psychology of Mathematics Education, Vol. 4 (pp. 73-80). Kiel, Germany: PME.