AAAS 2017 Annual Meeting AMA Series: All of us make decisions based on unconscious shortcuts that result in bias or other mistakes. We are scientists who use big data and structured interventions to compensate for errors in human decision-making. Ask us anything!

Abstract

Hi reddit!

Research and educational institutions and industries in the United States have encountered difficulty attracting and retaining individuals from underrepresented groups, particularly in STEM disciplines, despite decades of well-meaning efforts by government, universities, and employers. Why is this?

We’re Lydia Villa-Komaroff, an independent scientist; Sherilynn Black of Duke University; Timothy Renick of Georgia State University; and David Asai. We approach the problem from a variety of perspectives.

Psychologists and other social scientists have long recognized that humans make systematic errors in judgment. Hard-wired, simple, efficient rules that all humans use to make decisions (i.e., “thinking fast”) may lead to misjudgments about the capacity and potential of individuals from underrepresented groups.

In this AMA, we’d like to discuss real-world examples, and share how academic institutions can make remarkable progress recruiting underrepresented groups as students and faculty when they recognize and compensate for the realities of how the human mind works. Real-time use of big data at Georgia State has, for example, eliminated all achievement gaps based on race, ethnicity or students’ socioeconomic status. Recognition of established barriers to diversity, coupled with a thoughtful strategy to engage students, alums, faculty, staff, and administrators, has resulted in profound organizational change at Duke.

We'll be back at 1 pm EST; we look forward to reading and answering questions about our research and findings!

EDIT: Thank you all so much for your interesting questions! We hate that we couldn't get to every single one, but we really enjoyed your insights and perspectives. We will do our best to update references in the near future. Thanks, Sherilynn, Lydia and Tim

You said that you eliminated all achievement gaps at Georgia State. How did you determine what issues you needed to address and your success in eliminating these problems so completely?

PumpkinHead01

Tim Renick: Great question. The simple answer is that we let the data decide. Our initiative launched with a big data project in 2012 in partnership with the Education Advisory Board. We used 2.5 million Georgia State student grades and 140,000 student records. The aim of the project was not specifically to address bias. It merely looked for actions and decisions that Georgia State students were making academically that correlated statistically with dropping out or failing. We found over 800 behaviors of this sort, and, in fall 2012, started intervening whenever one of these behaviors was identified. Last year, our academic advisors had more than 52,000 one-on-one interventions with students that were prompted by alerts coming out of this system. The system was not set up to target low-income or underrepresented students at all. Every student is tracked every day equally. We have found that identifying "risky" academic behaviors and intervening within a couple of days benefits all students by lowering dropout and failure rates and improving graduation rates, but our approach benefited low-income and underrepresented students even more than it did other groups. In effect, a program not designed to deal with institutional bias actually addressed institutional bias more effectively than anything we had tried before.
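[Editor's note: for illustration only, here is a minimal, hypothetical sketch of the kind of daily alert loop described above. This is not the actual Georgia State/EAB platform; the two risk markers, field names, and thresholds are invented stand-ins for the roughly 800 behaviors mined from historical grade and registration records.]

```python
# Hypothetical sketch of an analytics-based early-alert loop (illustrative only).
# The risk markers below are invented examples; the real system identified ~800
# such behaviors from historical student data.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class StudentRecord:
    student_id: str
    major: str
    credits_attempted: int
    credits_earned: int
    grades: dict = field(default_factory=dict)  # course -> letter grade

# Each "risk marker" is a named predicate that, in the real project, was found to
# correlate statistically with dropping out or failing. Both rules are made up.
RISK_MARKERS: dict[str, Callable[[StudentRecord], bool]] = {
    "low_grade_in_major_prereq": lambda s: s.grades.get("INTRO_" + s.major, "A") in {"D", "F"},
    "credit_completion_below_75pct": lambda s: s.credits_attempted > 0
        and s.credits_earned / s.credits_attempted < 0.75,
}

def daily_alert_scan(students: list[StudentRecord]) -> list[tuple[str, str]]:
    """Scan every enrolled student every day; return (student_id, marker) alerts
    for an advisor to follow up on within a couple of days."""
    alerts = []
    for s in students:
        for name, triggered in RISK_MARKERS.items():
            if triggered(s):
                alerts.append((s.student_id, name))
    return alerts

if __name__ == "__main__":
    roster = [
        StudentRecord("001", "BIO", credits_attempted=30, credits_earned=21,
                      grades={"INTRO_BIO": "D"}),
        StudentRecord("002", "CHEM", credits_attempted=30, credits_earned=30,
                      grades={"INTRO_CHEM": "B"}),
    ]
    for student_id, marker in daily_alert_scan(roster):
        print(f"ALERT: student {student_id} flagged for {marker} -> schedule advising meeting")
```

In the approach Renick describes, such alerts are only the starting point: an advisor then reaches out to the flagged student within a couple of days.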


Hello!

My question is with respect to pushback/disbelief of the issue at hand. Having spent the vast majority of my life in academia, I've met many professors who are very set in their ways/attitudes/opinions. How do you deal with people who may, just, completely disregard what you're saying or take offense that there is a possibility that they may have inherent bias? Are most people open to the idea?

rseasmith

Tim Renick: My colleagues on this panel have incredible insight into the factors that shape and change the mindset of faculty and other stakeholders in higher education who may be subject to implicit bias. At Georgia State, we came to our insights almost accidentally. The example of Georgia State suggests that one tool that is particularly impactful in changing the attitudes of faculty and staff in academia about bias is data. When we launched an intensive suite of analytics-based advising interventions at Georgia State in 2012, there were skeptics among faculty and staff. They thought that factors prior to college enrollment--economic differences, poorer academic preparation--meant that low-income and first-generation students would inevitably graduate from Georgia State at lower rates than students from middle- and upper-income backgrounds. Implicit in these beliefs was what turned out to be a false assumption about Georgia State: namely, that we were not part of the problem. Many faculty members thought that our own institutional structures and practices were not contributing to achievement gaps between various groups of students. Four years later, the most impactful factor in changing this mindset has been the data: our graduation rates are up significantly and our achievement gaps are gone because we changed the way we onboard, track and support students as a university--not because society suddenly became more just, K-12 education more effective, or economic injustices disappeared. Faced with compelling evidence and thousands of new data points, many skeptical faculty and staff are now strong supporters of the approach we have taken at Georgia State and understand better how institutional biases were contributing to the problem.


Hello!

My question is with respect to pushback/disbelief of the issue at hand. Having spent the vast majority of my life in academia, I've met many professors who are very set in their ways/attitudes/opinions. How do you deal with people who may, just, completely disregard what you're saying or take offense that there is a possibility that they may have inherent bias? Are most people open to the idea?

rseasmith

Hi, this is Sherilynn, and thank you for your question. I think that the most effective way to influence the opinions of scientists is to present them with data! Many studies demonstrate that we all have bias, and that it is something we will all continue to experience throughout our lives. There are also tools online that help demonstrate our individual biases, such as Project Implicit (https://implicit.harvard.edu/implicit/takeatest.html), and these can be very helpful in convincing individuals that bias is a real phenomenon that we all experience.


Hello!

My question is with respect to pushback/disbelief of the issue at hand. Having spent the vast majority of my life in academia, I've met many professors who are very set in their ways/attitudes/opinions. How do you deal with people who may, just, completely disregard what you're saying or take offense that there is a possibility that they may have inherent bias? Are most people open to the idea?

rseasmith

Hi, this is Lydia. Many professors take offense because they hear only "bias" when we say "implicit bias." They feel they have no objection to members of underrepresented groups and say that they make decisions based only on merit. So my approach is to focus on the biological and evolutionary basis of how humans think; if I can get a hearing at all, scientists respond to data. I then suggest that, in private, they take the implicit bias tests online. Sherilynn is posting the website.


Self-recognition of implicit bias seems to be a high barrier to overcome. People at all levels trust their own judgement as sound rather than accepting that they may have prejudices. Being told that one's preconceptions influence judgements without merit is not effective. However, simple training that demonstrates how such prejudices affect judgement seems effective.

Can you comment on just how effective even minimal implicit bias training is in academia?

Thanks for taking your time to answer questions.

Wrathchilde

Hi, Sherilynn here. Many institutions have incorporated implicit bias training at all stages (for trainees, often at the start of scientific training, and for faculty, often at the start of the search committee/faculty hiring process). Data indicate that even short, one-touch experiences can be effective practices and have a positive impact. The process of even having to learn and discuss the terminology associated with bias raises awareness and initiates self-reflective practices that are useful for reducing bias in daily life. Molly Carnes at the University of Wisconsin has done great research on the effectiveness of these sorts of programs--I highly recommend taking a look at her work!


Self-recognition of implicit bias seems to be a high barrier to overcome. People at all levels trust their own judgement as sound rather than accepting that they may have prejudices. Being told that one's preconceptions influence judgements without merit is not effective. However, simple training that demonstrates how such prejudices affect judgement seems effective.

Can you comment on just how effective even minimal implicit bias training is in academia?

Thanks for taking your time to answer questions.

Wrathchilde

Lydia here. One place to see an aggregation of studies is the Understanding Interventions web site. http://understanding-interventions.org/


Could you elaborate a bit more on how it is you make the transition from having data on gaps in the system to having actionable insights? As I understand it, big data is very helpful for describing existing patterns but not so much for predicting how a system will change in response to an intervention (unless you somehow managed to rapidly iterate over a variety of interventions?).

Also, other field where biases and mistakes happen often is in healthcare. Do you have plans to conduct similar interventions in such a context?

Finally, if I wanted to get involved in this type of research, what's the "toolkit" you would recommend I pick up? I am a Biology BSc with an MSc in Epidemiology currently doing an MSc in Statistics, perhaps aiming for a PhD in a psychology-y field would be a good complement?

joevector

Tim Renick: Your question is important. The data project that Georgia State engaged in helped to identify the problems that were tripping up our students and leading them to drop out or fail out. It did not tell us how to correct these problems. Higher education needs more research in this area, and this is one reason I am leading a four-year RCT study across eleven universities to collect data on these issues. The reality, though, is the following: it is impossible to correct problems that we do not know exist. The power of the data at Georgia State has been to allow us to turn the mirror on ourselves and to see things--and yes, in many cases, problems--that we were not aware we had created. Once the problems in our own institutional structures were identified, we put together teams to develop solutions, and we did what any good scientist would do: ran pilots, collected data, analyzed what worked and what did not, and made changes to the programs to produce better results.


Could you elaborate a bit more on how it is you make the transition from having data on gaps in the system to having actionable insights? As I understand it, big data is very helpful for describing existing patterns but not so much for predicting how a system will change in response to an intervention (unless you somehow managed to rapidly iterate over a variety of interventions?).

Also, other field where biases and mistakes happen often is in healthcare. Do you have plans to conduct similar interventions in such a context?

Finally, if I wanted to get involved in this type of research, what's the "toolkit" you would recommend I pick up? I am a Biology BSc with an MSc in Epidemiology currently doing an MSc in Statistics, perhaps aiming for a PhD in a psychology-y field would be a good complement?

joevector

Hi, Sherilynn here. I am glad to hear that you are interested in joining this research community! I think it is critically important to treat diversity and bias research as a rigorous academic discipline, just as you would with any scientific field of study. Much of the research in this area is a combination of basic science, social science and educational research. If you would like an introduction to the field, I would suggest checking out the Understanding Interventions website (http://understanding-interventions.org/). In addition to posting information about the annual research conference (coming up at the beginning of March), the website also contains national information, publications, references and resources related to research in the field. Good luck!


Are there effective ways to retrain your brain when you realize you're making judgements based upon biased and faulty judgment? Any tips for helping intervene when you notice someone else is doing that same thing?

firedrops

Lydia here. It's not clear to me that there is one way that will work for everyone. What I try to do for myself is to start by asking, "What if this CV or grant had been submitted by John from Harvard instead of whoever really sent it?" The main thing seems to be to step back and think about your response instead of reacting fast, which is much easier to say than to do. As for pointing it out when others are making faulty judgements, ask, "What would we think if this were a man, or if the institution were one we knew, or if this were my cousin?" Again, easier said than done.


I'm a medical student, and especially in the last 2 years of medical school we rely on subjective evaluations from faculty for the majority of our grades. While we do still take exams and test scores do factor into our grades, they make up a small percent (30% at my school) as compared to the subjective evaluation (70%). This strikes me as wide open to cognitive bias. What are some ways a college of medicine can train its faculty (some of whom may be far-flung rural family docs who teach medical students once or twice per year, or hospitalists with a constant flock of medical students) to minimize cognitive bias when evaluating their medical students?

hpmagic

Hi, Sherilynn here. There are several faculty training programs that are designed to highlight bias in training spaces, and I know that many have successfully shifted the tone of training environments in both MD and PhD programs. We recently collaborated with Theater Delta (http://theaterdelta.com/) to design a program to highlight bias and microaggressions that can occur during scientific training. The theater troupe works directly with scientists to create hyper-realistic scenes that progress in a sort of crowd-sourced 'choose your own adventure' format. It really gives the opportunity to think through specific situations that can occur in training spaces, especially those that may not have clearly defined solutions. The faculty indicated that they found it to be highly useful!


I'm a medical student, and especially in the last 2 years of medical school we rely on subjective evaluations from faculty for the majority of our grades. While we do still take exams and test scores do factor into our grades, they make up a small percent (30% at my school) as compared to the subjective evaluation (70%). This strikes me as wide open to cognitive bias. What are some ways a college of medicine can train its faculty (some of whom may be far-flung rural family docs who teach medical students once or twice per year, or hospitalists with a constant flock of medical students) to minimize cognitive bias when evaluating their medical students?

hpmagic

Lydia here. There is a consensus that subjective evaluations are indeed susceptible to implicit bias. Data-based benchmarks are better for producing objective evaluations. Take baseball: Moneyball by Michael Lewis shows that you first figure out what matters to a successful team, then determine what data you need, and then evaluate based on those data. The thing is, someone has to develop the metrics. I don't know of any such studies for medical education, but I will look.


How broadly can your work be applied? One very regrettable thing about the biggest questions in the world is that there is often no way to submit these questions to basic reductionist experimentation. In economics, there is no way to have a duplicate version of the world in which US interest rates are raised 0.25% instead of 0.5% in order to test the outcome before deployment. A company building a factory for a given product has to make many uncontrolled decisions which ultimately affect the company's competitiveness. Even small organizations have to develop certain non-optimal processes which invariably become more inefficient over time so long as they continue to produce positive results within an acceptable time. Can your tools help with these kinds of problems?

cazbot

Tim Renick: Georgia State's approach has not been first to try to change the mindset of our campus community and then to hope that the institution will perform better. Rather, we have worked to use data to create an institution that performs better and then used the results to change mindsets. By showing that student outcomes--especially for low-income, first generation, and underrepresented students--can be significantly improved by changing the structures by which the university onboards, tracks and supports its students, we have been able to convince many faculty and staff that disadvantaged students are not destined to graduate at lower rates just because of their experiences prior to enrolling in college.


In terms of underrepresented person placement and team creation: does your research argue more for a clustering strategy that gives underrepresented people a team/culture similar to themselves in the beginning, or an immersion strategy where the underrepresented are spread out, potentially being the only minority in their STEM group?

soco

This is Sherilynn, and thank you for your question. Many research studies indicate that the formation of a similar-culture cohort is an effective strategy to reduce impostor syndrome and to increase a sense of belonging for individuals from underrepresented backgrounds. Cohorts are helpful for individuals from all backgrounds.


How will GSU's achievement be replicated elsewhere? I see there's a four year grant for a consortium including them and nine other universities, which is great. Is the goal to create, in 2020, software other universities can buy? Have you been releasing publications or other information describing how to do this, publishing open source software, etc.?

omearabrian

Tim Renick: At Georgia State, we partnered with the Education Advisory Board (EAB) in Washington, D.C. to develop the analytics-based tracking of all students every day. This same platform, customized for the data on each campus, is available from EAB as the Student Success Collaborative. (Georgia State has no financial interest in the enterprise, and there are other vendors who offer parallel products to campuses.) My understanding is that there are now about 400 universities nationally using the EAB system. The trick, though, is not merely to have analytics-based alerts going off but to have a system of support to take these alerts, communicate about them with students in a timely fashion, and develop effective interventions. Without all three steps--alerts, timely conversations with students, and effective mitigation of the problems identified--the system will not have much real-life benefit to actual students.
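[Editor's note: again purely illustrative, here is a small sketch of why all three steps matter, tracking each alert from the moment it fires through the advisor conversation to a logged intervention. The states, the 48-hour follow-up window, and the field names are assumptions for illustration, not features of the EAB Student Success Collaborative.]

```python
# Hypothetical workflow tracker for the three steps described above:
# alert -> timely conversation with the student -> effective intervention.
# The enum states and the 48-hour window are invented for illustration.

from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum, auto

class AlertState(Enum):
    OPEN = auto()                  # alert fired, no outreach yet
    CONTACTED = auto()             # advisor has talked with the student
    INTERVENTION_LOGGED = auto()   # a concrete mitigation step is recorded

@dataclass
class Alert:
    student_id: str
    marker: str
    raised_at: datetime
    state: AlertState = AlertState.OPEN
    notes: list = field(default_factory=list)

    def record_conversation(self, note: str) -> None:
        """Step 2: the timely conversation with the student."""
        self.state = AlertState.CONTACTED
        self.notes.append(note)

    def record_intervention(self, note: str) -> None:
        """Step 3: a concrete mitigation of the identified problem."""
        self.state = AlertState.INTERVENTION_LOGGED
        self.notes.append(note)

def overdue(alerts: list, now: datetime, window_hours: int = 48) -> list:
    """Alerts still waiting on an advisor conversation past the follow-up window."""
    cutoff = now - timedelta(hours=window_hours)
    return [a for a in alerts if a.state is AlertState.OPEN and a.raised_at < cutoff]

if __name__ == "__main__":
    a = Alert("001", "low_grade_in_major_prereq", raised_at=datetime(2017, 2, 13, 9, 0))
    print(overdue([a], now=datetime(2017, 2, 16, 9, 0)))  # still open and overdue
    a.record_conversation("Met with advisor; discussed tutoring options.")
    a.record_intervention("Enrolled in supplemental instruction for the course.")
    print(a.state)  # AlertState.INTERVENTION_LOGGED
```

The design point mirrors the answer above: an alert that never reaches a conversation or an intervention is just noise, so the follow-up steps have to be tracked as explicitly as the alert itself.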


Thanks for doing this AMA!

From the description of your work, it sounds like you focus primarily on recruitment and hiring practices, which is clearly important. For at least some underrepresented groups, though, there are also challenges associated with retention; one frequently discussed example of this being how having a child may impact a woman in the workplace, both practically and in terms of perception/bias. Could any of the strategies that you will discuss be applied to retaining individuals from underrepresented groups?

neurobeegirl

Tim Renick: The project at Georgia State does not focus on recruitment at all but exclusively on the retention, progression and graduation of students once they are enrolled. Our results suggest that there are strong benefits to looking at institutional structures that disadvantage some students and student populations when compared to others. Recognize that sometimes the biases are held by the low-income and underrepresented students themselves. You might want to look at the work at the University of Texas at Austin on changing the "mindset" of enrolled undergraduate students from disadvantaged backgrounds who themselves question whether they belong and can succeed at the University of Texas.


Thanks for doing this AMA!

From the description of your work, it sounds like you focus primarily on recruitment and hiring practices, which is clearly important. For at least some underrepresented groups, though, there are also challenges associated with retention; one frequently discussed example of this being how having a child may impact a woman in the workplace, both practically and in terms of perception/bias. Could any of the strategies that you will discuss be applied to retaining individuals from underrepresented groups?

neurobeegirl

This is Sherilynn, and thank you for your question. There are several programs designed to retain individuals from diverse backgrounds and life experiences. The issues surrounding childcare can be challenging, and many women may have to take time away from research that extends beyond the allotted time for maternity leave. One item that may assist with this issue is the NIH Research Supplement to 'Promote Re-Entry into Biomedical and Behavioral Research Careers' (PA-16-289). https://www.nigms.nih.gov/Research/Mechanisms/Pages/PromoteReentry.aspx This can potentially allow families to make decisions that work best for them as they navigate their career decisions. This supplement also supports individuals who may have other family obligations that impact the ability to do research.


Are there any metrics to indicate the magnitude of the problem? How do STEM fields compare to non-STEM?

nate

Hi, Sherilynn here. I can't immediately recall the statistics on the differences between STEM and non-STEM fields, but the data are publicly available from the National Science Foundation. Hopefully you can locate the data you need on the NSF site!


Can you address the frequently heard belief that there is no bias in STEM? Are there some recent publications you can share with us that demonstrate a wage gap, glass ceiling, barrier to entry, or other form of obstruction to an equitable playing field for all in STEM?

p1percub

Lydia here--there is definitely bias, implicit and explicit, in STEM. Here is one reference: http://science.sciencemag.org/content/333/6045/1015 and many more here: http://sites.nationalacademies.org/PGA/cwsem/PGA_161607

License

This article and its reviews are distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and redistribution in any medium, provided that the original author and source are credited.