Wednesday, April 1, 2015

Reaching Students

By David Bressoud

The National Academies have just released a report that should be of interest to readers of this column: Reaching Students: What research says about effective instruction in undergraduate science and engineering. [1] It is based on their earlier report, Discipline-Based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering (the DBER Report), which was the subject of my Launchings column in December 2012, Mathematics and the NRC Discipline-Based Education Research Report. The new report illustrates the insights and recommendations from DBER with current examples and presents practical suggestions for improving classroom instruction.

Before I get into the many things I like about this report, I will start with its one glaring fault: It completely ignores undergraduate mathematics education. Like the DBER Report itself, it reads as if mathematicians have never thought about effective classroom practice. Based as it is on the DBER Report, this is perhaps not surprising. It is still disappointing.
Nevertheless, there is a lot that mathematicians can learn from this report. The many examples that describe actual classroom practice include:
  • Facilitation of reflective learning (p. 6)
  • Use of peer-led team learning (p. 18)
  • Effective use of clickers in large classes (p. 22)
  • Effective use of learning goals (p. 37)
  • Methods for identifying the ideas that are most misunderstood by or confusing to students (p. 67)
  • Assessment in active learning classes (p. 124)
  • Effective faculty professional development (p. 196)
  • The Association of American Universities' efforts to improve undergraduate STEM education (p. 203)
These call-out illustrations are interspersed among pointed and helpful discussions of the issues faced by those working to improve undergraduate STEM education. The report starts with the basics: how to find like-minded colleagues, how to find resources, and the benefits of joining a learning community.

This report discusses the role of lecturing, both its strengths and its weaknesses. More importantly, it talks about strategies for making lectures more interactive. It looks at assessment as more than measuring what questions students can answer, describing how to use it—especially student writing—to understand student reasoning, misconceptions, and misunderstandings.

It also deals with the challenges of changing one’s pedagogy and the obstacles that we all face, recognizing the difficulty in finding the time and energy required to adapt one’s approach to teaching. The advice includes: start with whatever is comfortable for you, use proven materials that others have developed, take advantage of the support that is available (there are many small grants specifically designed to ease the adoption of such practices [2]), and share the effort with interested colleagues.

The report also tackles the issue of coverage, one of the most frequently cited reasons for sticking with lectures. As the report accurately states, “What really matters is how much content students actually learn, not how much content an instructor presents in a lecture.” (p. 160) Moreover, as I have found in my classes, helping students learn how to think about mathematics, how to read it, how to wrestle with it, how to tackle unfamiliar and challenging problems, means helping them learn how to learn it on their own. As we succeed in these goals, there will be much content that students can learn through reading or online resources rather than through precious contact time.

Noah Finkelstein of CU-Boulder makes exactly this point, “You must be willing to move away from the idea that teaching is the transmission of information and learning is the acquisition of information, to the notion that teaching and learning are about enculturating people to think, to talk, to act, to do, to participate in certain ways.” (p. 31)

This enculturation enables students to use what they have learned in our classes. As the report states in the chapter on Using Insights from Research on Learning to Inform Teaching, “expertise consists of more than just knowing an impressive array of facts. What truly distinguishes experts from novices is experts’ deep understanding of the concepts, principles, and procedures of inquiry in their field, and the framework for organizing this knowledge.” (their italics, p. 58)

Helping students develop this kind of expertise is difficult, but we know that active learning approaches are much more effective than simply watching an expert produce the solution in a flawless flow.

The report ends with a summary of lessons (pp. 212–213), from which I have chosen and paraphrased four:
  1. Begin by understanding how students learn. [3]
  2. Start small with the changes that make the most sense and are easily implemented.
  3. Establish challenging goals for what students will learn and use them to guide both your instructional strategies and your assessments.
  4. Draw on the research, materials, and support structures that are already available.

I hope that this report will sit in the reading room of every math department and be at hand for every mathematician who cares about teaching.


[1] Kober, N. (2015). Reaching Students: What research says about effective instruction in undergraduate science and engineering. Washington, DC: The National Academies Press.
[2] One example of a source of small grants for the teaching of undergraduate mathematics is the Academy of Inquiry Based Learning.
[3] Two of the best resources for this are:
Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C., and Norman, M.K. (2010). How Learning Works: Seven research-based principles for smart teaching. San Francisco, CA: Jossey-Bass.
National Research Council. (2005). How Students Learn: Mathematics in the Classroom. M.S. Donovan and J.D. Bransford, Editors. Washington, DC: The National Academies Press.

Sunday, March 1, 2015

The Emporium

By David Bressoud

Last month, in MOOCs Revisited, I looked at one version of the use of online resources. This month I’d like to comment on another approach to using technology to improve student learning while cutting costs: the Math Emporium, first adopted on a large scale at Virginia Tech. It shifts math classes from large lecture halls to computer labs, where students are required to put in a certain number of hours each week working through computer-supplied problems while circulating tutors help those in difficulty.

My column is inspired by a visit I made in February to a large public university that uses a Math Emporium for their pre-calculus courses: Intermediate Algebra, College Algebra, Trigonometry, and Pre-Calculus. Their operation is on a large scale. Just over 4,000 of their students took one of these four courses in fall 2014. This was my first opportunity to observe and probe the workings of a Math Emporium. This column, however, is not about what I found at that particular university. Rather, I am using that experience to reflect on what I see as the strengths, weaknesses, and possibilities of the emporium model.

As I observed the workings of the emporium, I noted four distinguishing characteristics:

Self-pacing. The fact that computers mediate almost all of the learning means that students have a great deal of flexibility in the pace at which they proceed through the course. This was particularly appreciated by returning adult students and those whose last mathematics class was in the distant past. For them, it was helpful to be able to work at assignments until they got them correct and to postpone quizzes until they had achieved a level of mastery.

Compulsory laboratory attendance. In the emporium that I observed, students were required to spend at least three hours per week in the laboratory, a tightly structured environment in which they had access to nothing except their computer, which was locked onto that week’s lessons, videos, homework problems, and quizzes. For three hours a week, there was nothing they could do except work on mathematics. Almost all of the students I talked with chafed at this. They would prefer to do this work in a more personal and relaxed environment. Yet, the fact is that many students, especially those at most risk, do not know how to structure their time effectively. The lab forced a structure on them.

One lesson I took away from the particular emporium I visited was the importance of a welcoming environment within the computer lab. The prospect of being forced to spend time in a sterile, unfriendly room can be a strong disincentive to enrolling in a math course run in the emporium model.

Tutoring. An essential feature of a Math Emporium is the presence of tutors circulating among the working students. Students can use the computer to signal a request for a tutor, but often the interaction happens more informally when a student catches a tutor who happens to be walking by. Moreover, tutors are trained in how to identify students who are struggling and how to offer assistance. Not all students are willing to signal for help.

Help in the laboratory comes from three categories of personnel. There are the instructors, who are responsible for setting the syllabus, homework assignments, quizzes, and exams, as well as for meeting regularly with the tutors to prepare them for potential student difficulties with the upcoming materials. Spending time in the emporium is part of their responsibilities. There are graduate students, usually in their first year, for whom this is their work assignment. And there are undergraduate students, many of whom experienced the emporium themselves as students. From talking with students, it is clear that the dedication and abilities of the tutors, especially the graduate students, vary widely.

The particular university I visited continues to run one 50-minute lecture per week for each class of 300 to 400 students. It serves as an introduction to the material but offers little to no opportunity for student/faculty interaction. However, I found that most of the students identified strongly with their instructor and preferred to snag him (none of the instructors are women) when he was in the emporium. As a helpful feature, the screens are color-coded so that instructors can identify the students in their classes from a distance, and student names are prominently displayed on the screen so that instructors can address them by name.

This raises an interesting point that I touched on last month: For most students, it is important to have some sense of a personal connection with their instructor. One can question how much benefit students derive from their once-a-week 50-minute meeting with the instructor in the company of 350 other students, but the students with whom I talked did feel some connection to their instructor, strengthened when the instructor would stop to talk with them in the emporium. Many of them chose the time they came to the emporium by when they knew their instructor would be present.

Assessments. Students know that what counts is what is on the test. One of the major drawbacks of purely computer-mediated testing is that the problem format has usually been restricted to multiple choice and short answer questions, a format that enforces a view of mathematics as a collection of procedures to be mastered, with little opportunity for assessing the development of a structured understanding of the undergirding principles.

For the courses at the Math Emporium that I observed, high school courses that many if not most of the students are repeating, there may be a case for instruction focused on one-step procedural fluency. Nevertheless, one of the dominant complaints among the faculty in this Department of Mathematics was that the students enter calculus with little experience in multi-step problem solving or justification of what they have done. Technology is changing what can be assessed, but changing large-scale assessment to capture multi-step problem solving and conceptual understandings is still difficult.

The Math Emporium was created as a response to the reality of teaching large numbers of students with few instructors, combined with the recognition that large lecture classes were not working. Large lecture classes can work, as attested in Frank Morgan’s Huffington Post blog, “Are smaller college calculus classes really better?”. In fact, he quotes my observation from the MAA National Study of College Calculus, which revealed no correlation between class size and changes in student attitudes. But I think that lack of correlation has more to do with the fact that classes of any size can be taught poorly than with class size being truly immaterial. Furthermore, I am unconvinced by Frank’s examples of large lecture classes that work. All of his examples are at institutions with very highly motivated students who know how to study on their own. I also believe that, while 100–120 students constitute a large class, there is a qualitative difference between large classes of this size and classes of 300–400 students where instructors cannot possibly monitor or encourage the performance of more than a small number of their students.

The Math Emporium is far from the ideal of what we would like undergraduate education to be. Unfortunately, that ideal is incredibly expensive. The emporium model does provide a relatively inexpensive means of structuring how students study, monitoring their progress, and providing some degree of individual attention. There is every reason to believe that it provides a framework that can work for many students. Moreover, there are and will continue to be opportunities to improve its effectiveness.

Sunday, February 1, 2015

MOOCs Revisited

Despite this month’s title, I have refrained from writing about MOOCs, Massive Open Online Courses, in this column before now. The initial burst of interest always seemed overdone to me. Now that the enthusiasm has waned, we are beginning to see the emergence of meaningful information about when and how they can be useful.

As I argued in my co-authored piece in the AMS Notices, Musings on MOOCs [1], they do seem to hold promise as a source of supplementary material that enables flipped classes, supplementary instruction, alternate approaches, or opportunities for exploring topics that extend beyond the course syllabus. Two questions immediately emerge: How hard is it to take advantage of these materials? Do students actually benefit?

This past summer, Rebecca Griffiths and her team at Ithaka S+R, an academic consulting and research service, released Interactive Online Learning on Campus [2], its study of the use of hybrid MOOCs within the University System of Maryland. Hybrid MOOCs are face-to-face classes for which instructors draw on online courses, in this case developed by Coursera or the Open Learning Initiative, to supplement their own instruction. Griffiths et al. conducted seven side-by-side studies, direct comparisons of the same courses taught with and without these online supplemental materials, and ten case study investigations of courses that were only taught with supplemental materials derived from MOOCs. The side-by-side comparisons are of greatest interest to me because of the usefulness of direct comparisons and because these courses included STEM subjects: three sections of introductory biology and one each of pre-calculus, statistics, and computer science, plus a course in communications.

In answer to the first question—How hard is it to incorporate material from these online courses?—the answer is that it is hard, though it probably becomes easier with repetition. Griffiths et al. found that self-reported instructor time spent selecting the materials and preparing to incorporate them into a hybrid course had a median of 68 hours and a mean of 144 hours, roughly two to four weeks of full-time work. The variation was tremendous, from a single full-time week to an entire summer. Most of this is, almost certainly, a one-time investment. For some hybrid courses, face time was reduced by as much as 50%. For others, there was no reduction in face time. Once the start-up time is invested, there appears to be potential for some time—and therefore cost—savings, although it would be modest at best.

The biggest question is whether this improved student outcomes. For the most part in the side-by-side comparisons, there was little difference in pass rates or student performance on a common post test. One biology section had a substantially and significantly better pass rate for the hybrid course, but the other two hybrid biology sections had slightly lower (though not statistically significantly lower) pass rates than the sections with which they were paired. With two exceptions, results on the post tests were indistinguishable between hybrid and non-hybrid courses. Those exceptions were the biology section with the high pass rate and the pre-calculus class. In both of these cases, the hybrid classes posted substantially higher post test results that were significant at p < 0.001.

Griffiths et al. also looked at pass rates and post test results for key subgroups defined by race, gender, socio-economic status, and SAT scores. Averaging across all of the side-by-side comparisons, pass rates and post test results improved with the hybrid courses for each subgroup, although none of the pass rate differences were significant at p < 0.01. Several of the post test comparisons, however, were. Although all subgroups saw gains from the hybrid approach, the greatest gains went to White and Asian students, women, and those with parental income between $50,000 and $100,000, at least one parent with a BA, and combined SAT scores above 1000.

There were other factors that came into play. Students preferred the traditional course format and felt that they learned more from it, although they did prefer to do their homework assignments, quizzes, and exams online. Technical glitches did arise in the hybrid courses and may have been a factor in student dislike of online instruction.

One of the most intriguing differences was in how much time students spent on the course outside of class time. Here the effects ran in opposite directions for four pairs of subgroups: under-represented minorities (URM) versus non-URM, low income versus high income, first-generation college students versus not first generation, and SAT scores below 1000 versus above 1000. In each case, the first group spent less time outside of class with the hybrid course, the second group more. It may be that the online materials allowed students in these traditionally under-represented subgroups to make more efficient use of their time, so that they needed to spend less of it. But that is a hypothesis that would require study. On its face, this distinction is troubling.

[1] Bonfert-Taylor, P., Bressoud, D.M., and Diamond, H. 2014. Musings on MOOCs. Notices of the AMS. Vol 61, pp. 69–71.

[2] Griffiths, R., Chingos, M., Mulhern, C., and Spies, R. 2014. Interactive Online Learning on Campus: Testing MOOCs and Other Platforms in Hybrid Formats in the University System of Maryland. New York, NY: Ithaka S+R.

Thursday, January 1, 2015

The Benefits of Confusion

This past September, The Chronicle of Higher Education published an article that strongly resonated with me, “Want to help students learn? Try confusing them.” [1] It described an experiment in which two groups of students were each shown a video of a physics lesson. The first video was a straightforward lecture, using simple animations and clear explanations. The second showed a tutor working with a student who struggled to understand the concepts while the tutor asked leading questions but provided no answers. Coming out of the videos, students found the first clear and easy to understand, the second very confusing. Yet when later tested on the physics content, students who had seen the second video demonstrated far more learning than those who had seen the first.

This illustrates the problem with so much of standard instruction, especially in undergraduate mathematics. The ideas have become so polished over decades if not centuries, and we who teach this material understand its nuances so thoroughly, that what we present glides easily past our students without opportunity to grasp its true complexities. For learning to take place, students must engage and wrestle with the concepts we want them to understand.

I am not advocating confusion for confusion’s sake. As Courtney Gibbons’s cartoon illustrates, a polished lecture can also be very confusing, and not in a good way. Confusion is most productive when it provides a focus for personal investigation. An example of positive confusion is the cognitive dissonance produced when student expectations confront convincing evidence that they are wrong. My prime example of this is George Pólya’s Let Us Teach Guessing (see my Launchings column Pólya's Art of Guessing).

I like to think of this as “gritty” mathematics rather than confusing mathematics. One of my favorite examples from personal experience was a Topics in Real Analysis course that I taught in Spring 1997 using Thomas Hawkins’ doctoral dissertation, Lebesgue’s Theory of Integration: Its Origins and Development, as the text. My experience teaching that course laid the foundation for my textbook A Radical Approach to Lebesgue’s Theory of Integration. Back in 1998, I wrote a paper about this experience, “True Grit in Real Analysis.” I never published it, but I still like it, and as a New Year’s gift to readers, I offer a link to that paper.

[1] Kolowich, S. Confuse Students to Help Them Learn. The Chronicle of Higher Education. September 5, 2014.

Monday, December 1, 2014

Reforming Undergraduate Math and Science Education: Two Reports

Two reports have just come out calling for reform of undergraduate Math and Science education. One approaches reform from above: from the organizations that represent presidents, provosts, and deans across consortia of universities and colleges and from those corporations and non-profit organizations seeking to facilitate change; the other from below: from individual faculty who include some of the leading research mathematicians in the country. That is not an entirely accurate characterization. Both reports reflect the concerns and efforts of everyone involved in reform of undergraduate mathematics and science education, but it does indicate their respective strengths.

Between them, these two reports tally an impressive 68 specific recommendations. More important than the details of what they recommend are the common threads among their messages. I see six that stand out:
  • We need to be aware that, because of changing demographics and increasing pressures to accelerate K-12 education, both the needs and preparation of the students entering our colleges and universities are not what they were a generation ago. We need to evaluate our programs in the light of these changes and to modify them accordingly. This includes the support and even creation of nontraditional pathways toward careers in mathematics, science, engineering, and technology.
  • We need to revisit our curricular decisions in order to support learning that draws on multiple disciplines and to build platforms from which students can address the problems of the future.
  • We need to revisit our pedagogical decisions with an awareness of the evidence that exists for better approaches to teaching and learning.
  • We need to develop, employ, and respond to measurements of program effectiveness.
  • We need to focus more attention on the preparation of teachers, both those who would teach in K-12 and those graduate students who will be the next generation of college and university professors.
  • We need to support an institutional culture that encourages awareness of what is and is not working as well as a habit of thoughtful, creative, and timely response when problems are identified. An essential component of supporting such a culture is an increase in the value placed on the work of those who are addressing these needs.
The first of the two reports, Achieving Systemic Change: A Sourcebook for Advancing and Funding Undergraduate STEM Education, was issued by the Coalition for Reform of Undergraduate STEM Education, a joint effort of the American Association for the Advancement of Science (AAAS), the Association of American Colleges and Universities (AAC&U), the Association of American Universities (AAU), and the Association of Public and Land-grant Universities (APLU). Linda Slakey, former Director of the Division of Undergraduate Education at NSF and currently Senior Adviser to AAU and Senior Fellow of AAC&U, is the convener of this coalition. The Alfred P. Sloan Foundation and the Research Corporation for Science Advancement are the funders.

As the subtitle indicates, the audience for this report encompasses all of those who seek to reform the current system, especially targeting those interested in funding change. For this reason, the focus is on what we know about what works and where to find the leverage points with greatest potential effect. In addition to its cross-disciplinary emphasis and mobilization of administrators and funders, Achieving Systemic Change provides a wealth of references to and descriptions of successful programs and initiatives.

The second report, Transforming Post-Secondary Education in Mathematics (TPSE Math): Report of a Meeting, is the product of an effort by leading mathematicians: Phillip Griffiths of IAS, Eric Friedlander of USC, Mark Green of UCLA, Tara Holm of Cornell, and Uri Treisman of UT-Austin, with additional leadership from Jim Gates of the University of Maryland and with funding from the Alfred P. Sloan Foundation and the Carnegie Corporation of New York. It reports the outcome of a workshop held at the University of Texas at Austin, June 20–22, 2014.

TPSE Math is less tightly structured (it contains 44 of the 68 recommendations), but it is directly relevant to the mathematics community and communicates a powerful message. It also includes descriptions of a wide variety of ongoing programs and initiatives.

Together, these two reports provide a glimpse into the current landscape of reform efforts in undergraduate science and mathematics education and an indication of where the most pressing needs currently lie. They also suggest a hopeful confluence of concern with these issues across all levels. Bottom-up efforts only work in an institutional environment that encourages and supports them. Top-down directives only work when there is critical mass of faculty eager to do the hard work of implementation. There is every reason to hope that these forces are coming into alignment.

Saturday, November 1, 2014

MAA Calculus Study: Women Are Different

MAA’s study of Calculus I, Characteristics of Successful Programs in College Calculus (CSPCC), revealed some interesting and important differences between the men and the women who study calculus in college. The most dramatic of these is the intended major, but the study also revealed differences in preparation (women calculus students have taken more advanced mathematics courses in high school), standardized test scores (women score slightly lower on SAT and ACT Math), persistence (women are less likely to continue in mathematics), and reasons for not continuing (women performing at the same level as men are more likely to consider their grades and understanding of calculus to be inadequate). The overall impression that emerges is that women are much more reluctant than men to pursue a mathematically intensive major, and that any indication that they may not be up to the task is much more influential for them than for men.

Even though women make up the majority of undergraduates, CSPCC found that they account for only 46–47% of the students in Calculus I in four-year undergraduate programs. Even once men and women enter calculus, they do not necessarily have the same goals. A majority of the women (53%) in Calculus I intend to pursue the biological sciences or teaching, with only 20% heading into the physical sciences, engineering, or computer science. The situation is reversed for men, where 53% intend to major in the physical sciences, engineering, or computer science, and only 23% are going into the biological sciences or teaching (see Figure 1).

Figure 1: Career goals of all students in Calculus I, by gender
phys sci = physical sciences; eng = engineering; comp = computer and information science; geo = geo sciences; bio = biological & life sciences, including pre-med; social = social sciences
We did find interesting differences in the backgrounds of women and men taking calculus at Ph.D.-granting universities (see Table 1). Women calculus students are more racially and ethnically diverse and noticeably less likely to consider themselves to be math people. Women who take Calculus I in college are slightly more likely than men to have been on an accelerated track: Algebra II by 10th grade, Precalculus by 11th, and Calculus by 12th. Their SAT Math scores are slightly lower (about a quarter of a standard deviation) than those of men, echoed almost precisely in their ACT Math scores. It has been documented (Strenta et al., 1994) that even when women and men have comparable high school backgrounds and records of performance, women do tend to score lower on standardized tests in science and mathematics.

Table 1: Characteristics and background of students entering Calculus I at Ph.D.-granting universities, by gender (N = 3125 women, 3824 men). The table compares acceleration (Algebra II by grade 10, Precalculus by grade 11, Calculus by grade 12), SAT Math scores (mean, SD, median, and quartiles; mean (SD): women 654 (70), men 670 (69)), and the share who see themselves as math people.

Women were almost twice as likely as men to choose not to continue in calculus, even when Calculus II was a requirement of their intended major. Of the men who began the fall term intending to continue on to Calculus II and who successfully completed Calculus I (C or higher), 11% changed their mind by the end of the class. For women, the figure was 20%. We found that the difference between women and men over whether to continue was present irrespective of grade or intended major (see Table 2). In general, bio-science majors are much more likely to switch than are engineering majors, almost certainly because Calculus II is less essential to their intended field. Yet whether the intended major was in the bio sciences or engineering, women were consistently less likely to continue to Calculus II. What was striking was how discouraging a C in Calculus I was to a woman’s intention to pursue engineering, while it barely dented a man’s confidence.

Table 2: Switchers at Ph.D. universities, organized by grade in Calculus I (A or B versus C) and intended major (bio science versus engineering)
Note: Among all students who entered Calculus I with a definite intention to continue on to Calculus II (N = 988 women, 1476 men), percentage that had decided not to continue or were undecided whether to continue to Calculus II by the end of that term.
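The size of the 11% versus 20% gap can be gauged with a standard two-proportion z-test. A sketch, under the simplifying assumption that those quoted rates apply to the full Ns in the note above (the exact switcher counts are not reported here):

```python
import math

# From the note: students entering Calculus I intending to continue to Calculus II
n_women, n_men = 988, 1476

# Switcher counts implied by the quoted 20% and 11% rates (an assumption)
switch_women = round(0.20 * n_women)  # about 198
switch_men = round(0.11 * n_men)      # about 162

p_women = switch_women / n_women
p_men = switch_men / n_men
p_pool = (switch_women + switch_men) / (n_women + n_men)

# Two-proportion z-test statistic
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_women + 1 / n_men))
z = (p_women - p_men) / se
print(f"z = {z:.1f}")  # well beyond 1.96, the threshold for p < 0.05
```

Even under this rough approximation, the z-statistic lands far beyond conventional significance thresholds, which is why the gender difference in switching is hard to dismiss as noise.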

Table 3 looks at the reasons that students gave for switching. The first two reasons, "Too many other courses I need to take," and "Have changed major," are not necessarily indictments of calculus instruction, but they do point to missed opportunities. We see that women and men are equally likely to believe that calculus "takes too much time and effort." A fifth of the A and B students in Calculus I gave a bad experience in the class as one of their reasons for switching. For those earning a C in the class, this was by far the most popular reason for women to switch out, and close to the most commonly cited reason for men to switch out. But the most striking gender differences occur for the last two reasons. Only 4% of the men earning an A or B were dropping calculus because they did not understand calculus well enough to continue its study, but this was true of almost a fifth of the women earning an A or B. Even more notably, not a single man earning an A or B felt that this grade was not good enough to continue the study of calculus, but this was true of 7% of the women who were switching out of the calculus sequence. This is consistent with the findings of Strenta et al (1994) that found strongly significant differences (p < 0.001) between women and men: Women were much more likely to question their ability to handle the course work, and women were much more likely to feel depressed about their academic progress. They also found that women were more likely than men to leave science because they found it too competitive (p < 0.01).

Table 3: Reasons for switching at Ph.D. universities, by grade in course (A or B versus C) and by gender (N = 143 women, 109 men). The reasons offered were:
  • Too many other courses I need to take
  • Have changed major
  • Takes too much time and effort
  • Bad experience in Calculus I
  • Don’t understand calculus well enough
  • Grade was not good enough
Note: Students could select multiple responses.

The picture that emerges is one of women who are as well prepared as men for the calculus sequence but less attracted to the most mathematically intensive fields and much more easily dissuaded from continuing the study of mathematics. The amount of work required to succeed in college-level mathematics is not a factor in the gender differences, but women bring a more self-critical attitude toward what they understand.


Strenta, A. C., Elliott, R., Adair, R., Matier, M., & Scott, J. 1994. Choosing and leaving science in highly selective institutions. Research in Higher Education. 35 (4), 513–547.

With thanks to Cathy Kessel for bringing this reference to my attention.