Misconceptions in Cell Biology

“Every living thing is made of cells, and everything a living thing does is done by the cells that make it up.”
L.L. Larison Cudmore

Cells are the building blocks of all biology.  Every living organism is composed of cells.  All cells come from preexisting cells.  If you are a trained biologist, you recognize the last two sentences as The Cell Theory, one of the core theories of modern biology.  Much of The Cell Theory seems basic given what we know today.  However, remember that cells are too small to be seen with the naked eye.  Until the invention of microscopes, we didn’t even know cells existed.  The word cell was first used by Robert Hooke in the 1660s while examining thin slices of cork.  Hooke used the word cell to describe the structures he observed because they reminded him of the small rooms, or cells, in which monks lived.

Additionally, it wasn’t until Louis Pasteur’s famous swan-necked flask experiment in 1859 that the idea of spontaneous generation, life spontaneously arising from nonliving organic material, was disproven.  Therefore, every cell must come from a preexisting cell.  With the importance of The Cell Theory, it is not surprising that students spend a lot of time learning about the structure, function, and behavior of cells.  However, because cells are not visible to the naked eye, it is also not surprising that many students have misconceptions about them.

What is a misconception? Scientific misconceptions “are commonly held beliefs about science that have no basis in actual scientific fact. Scientific misconceptions can also refer to preconceived notions based on religious and/or cultural influences. Many scientific misconceptions occur because of faulty teaching styles and the sometimes-distancing nature of true scientific texts.”  When we teach students biology, how good are we at dealing with misconceptions?  The critical questions are: what are students’ misconceptions, and how do we deal with them?

Musa Dikmenli looked at the misconceptions that student teachers hold in his article Misconceptions of cell division held by student teachers in biology: A drawing analysis.  In the study, Dikmenli examined 124 student teachers’ understanding of cell division.  According to the study, these student teachers “had studied cell division in cytology, genetics, and molecular biology, as a school subject during various semesters.”  Therefore, the student teachers had already studied cell division at the college level.

At a basic level, cell division is the process of a single cell dividing to form two cells.  Scientists organize cell division (the cell cycle) into five phases: Interphase, Prophase, Metaphase, Anaphase, and Telophase.  The cell cycle is often depicted using a circle.

Figure of the cell cycle at different levels of detail. Created by PJ Bennett

Instead of answering quiz questions or writing essays, the students were “asked to draw mitosis and meiosis in a cell on a blank piece of A4-sized paper. The participants were informed about the drawing method before this application.” (Dikmenli)  The use of drawing as an analysis method has several advantages, the most important of which is that it can be used across languages and by students of many nationalities.

Analysis of the drawings showed that almost half of the student teachers had misconceptions about cell division.  Some of the most common misconceptions concerned when DNA synthesis occurs during mitosis and the ploidy, the number of chromosome copies, during meiosis.  The results mean that individuals who are going to teach biology at the primary and high school level are likely to pass their misconceptions along to their students.

So, where does the problem with student misconceptions start?  Students learn misconceptions about cell division from their teachers.  However, those teachers all have biology degrees from colleges, and their college faculty failed to address their misconceptions.  Perhaps we are not asking the correct question.  Instead of trying to decide who, K-12 or college, is responsible for correcting student misconceptions, we should ask why students get through any level of school with misconceptions at all.

I can hear all the teachers now: obviously, students get through school with misconceptions because misconceptions are difficult to correct.  However, we know a lot about teaching to correct misconceptions.  Professor Taylor presents one method, refutational teaching, in the blog post GUEST POST: How to Help Students Overcome Misconceptions.  With a quick Google search, you can find other supported methods.  In all cases, to overcome a misconception, the student must actively acknowledge the misconception while confronting facts that counter it.

It is unlikely that the problem is that it is hard to teach to misconceptions; let’s be honest, most teachers at any level are willing to use whatever techniques work.  No, I suspect the real problem is that most teachers don’t realize their students have misconceptions.  So the real question is why instructors don’t realize students have misconceptions.  In this case, I suspect it is the method of assessment.

Most classroom assignments and assessments ask the students to provide the “right” answer.  The right answer is especially prevalent in the large lecture class, where multiple-choice questions are common.  However, the fact that a recent review article, A Review of Students’ Common Misconceptions in Science And Their Diagnostic Assessment Tools, covers 111 research articles suggests that identifying misconceptions is not complicated if teachers use the correct methods.  Therefore, incorporating the proper assessment methods alongside teachers’ standard methods will help teachers identify student misconceptions.

However, it is not good enough to identify misconceptions. The misconceptions must be identified early enough in the course that the teacher can address them.  Finding misconceptions is a perfect justification for course pretests, either a comprehensive pretest at the beginning of the course or smaller pretests at the start of units.  In an ideal world, pretests would be a resource that departments or schools would build, maintain, and make available to their teachers, ideally as a question bank.  Until schools provide resources to identify misconceptions, think about adding a pretest to determine your students’ misconceptions.  It will help you do a better job in the classroom.

Thanks for Listening to My Musings
The Teaching Cyborg

Double-Blind Education

“It is a capital mistake to theorize before one has data.”
Arthur Conan Doyle (via Sherlock Holmes)

Several years ago, I was attending a weekly Discipline-Based Education Research (DBER) meeting. Two senior faculty members led and organized the weekly meetings.  Both faculty members had trained in STEM disciplines.  One had received their educational research training through a now-defunct National Science Foundation (NSF) program, while the other was mostly self-taught through multiple collaborations with educational researchers.

The group was discussing the design of a research study that the Biology department was going to conduct.  One of the senior faculty members said if they were serious, they would design a double-blind study.  The other senior faculty member said that not only should they not do a double-blind study, but a double-blind study was likely a bad idea. I don’t recall the argument over double-blind studies in education ever getting resolved. We also never found out why one of the faculty members thought double-blind studies were a bad idea in educational research.

Double-blind studies are a way to remove bias. Most people know about them from drug trials.  Educational reform is not likely to accidentally kill someone if an incorrect idea gets implemented due to a bias in the research.  However, a person’s experiences during their education will certainly have a lifelong impact.  While double-blind studies might be overkill in education research, there is the question of what is enough.  As I have said before, it is the job of educators to provide the best educational experience possible; this should extend to our research.

How do faculty know how they should teach? What research should faculty members use?  Should we be concerned with the quality of educational research? Let me tell you a story (the names have been changed to protect the useless).  A colleague of mine was looking for an initial research project for a graduate student. My colleague told me about a piece of educational “research” that was making the rounds on his campus.  Alice, a well-respected STEM (Science, Technology, Engineering, and Math) faculty member, had observed her class.  She noted what methods of note-taking her students were using.  At the end of the semester, she compared the method of notetaking to the students’ grades. On average, the students that used the looking glass method of notetaking had grades averaging one letter grade lower than the students who used other methods.

Alice told this finding to a friend, the Mad Hatter, a DBER (Discipline-Based Education Research) expert.  The Mad Hatter was so impressed with the result that he immediately started telling everyone about it and including it in all his talks.  Now, because Alice did her study on the spur of the moment, she did not obtain research approval or signed participation agreements.  The lack of paperwork meant that Alice couldn’t publish her results.  With such a huge effect, my colleague thought repeating this study with the correct permissions, so that it could be published, would be perfect for a graduate student.

They set up the study; this time, to assess what methods the students were using to take notes, they videotaped each class period.  Additionally, the researchers conducted a couple of short questionnaires and interviewed a selection of the students.  After a full semester of observation, the graduate student analyzed the data. The result: there was no significant difference between looking glass notetaking and all the other types.  Just a little while ago, I saw a talk by the Mad Hatter. It still included Alice’s initial results.  Now, the interesting thing is that neither Alice nor the Mad Hatter would have accepted Alice’s notetaking research methodology if it had been a research project in their STEM discipline.  However, as an educational research project, they were both willing to take the notetaking results as gospel.

While there is a lot of proper educational research, researchers have suggested that many faculty and policymakers have a low bar for what counts as acceptable educational research.  The authors of We Must Raise the Bar for Evidence in Education suggest a solution to this low bar.  Their recommendation is to change what we accept as the basic requirements of educational research.  Most of the authors’ suggestions center on eliminating bias (the idea at the core of the double-blind study). Their first suggestion is,

“to disentangle whether a practice causes improvement or is merely associated with it, we need to use research methods that can reliably identify causal relationships. And the best way to determine whether a practice causes an outcome is to conduct a randomized controlled trial (or “RCT,” meaning participants were randomly assigned to being exposed to the practice under study or not being exposed to it).”

One of the biggest problems with human research, which includes educational research, is the variability in the student population.  As so many people are fond of saying, we are all individuals.  By randomly assigning individuals to a group, you avoid the issue of concentrating traits in one group. 
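To make the idea concrete, here is a minimal sketch of random assignment in Python (the roster, function name, and seed are hypothetical; a real study would also handle consent, stratification, and record-keeping):

```python
import random

def randomly_assign(students, seed=None):
    """Randomly split a roster into a treatment group and a control group."""
    rng = random.Random(seed)
    shuffled = list(students)   # copy so the original roster is untouched
    rng.shuffle(shuffled)       # random ordering breaks any link to student traits
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical roster; in a real study this would be the enrolled students.
roster = ["Ana", "Ben", "Caro", "Dev", "Eli", "Fay", "Gus", "Hana"]
treatment, control = randomly_assign(roster, seed=42)
print("Treatment:", treatment)
print("Control:  ", control)
```

Because every student has the same chance of landing in either group, traits such as prior preparation or motivation are, on average, spread across both groups rather than concentrated in one.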

Their second suggestion is, “policymakers and practitioners evaluating research studies should have more confidence in studies where the same findings have been observed multiple times in different settings with large samples.”  The more times you observe something, the more likely it is to be true (there is an argument against this, but I will leave that for another time).

Lastly, the authors suggest, “we can have much more faith in a study’s findings when they are preregistered. That is, researchers publicly post what their hypotheses are and exactly how they will evaluate each one before they have examined their data.”  Preregistration is a lot like the educational practice used with student response systems, where the student/researcher is less likely to delude themselves about the results if they must commit to an idea ahead of time.

If we are going to provide the best educational experiences for our students, we need to know what the best educational experiences are.  However, it is not enough to conduct studies. We need to be as rigorous as possible in our studies.  The next time you perform an educational research project, take a minute and ask yourself how you can make the study more rigorous.  Not only will your students benefit, so will your colleagues.

Thanks for Listening to My Musings
The Teaching Cyborg

Much Ado about Lectures

“Some people talk in their sleep. Lecturers talk while other people sleep”
Albert Camus

The point of research is to improve our knowledge and understanding.  An essential part of research is the understanding that what we know today may be different from what we know tomorrow; as research progresses, our knowledge changes.  Conclusions changing over time does not mean the earlier researchers were wrong. After all, the researchers based their conclusions on the best information they had available at the time.  However, future researchers have access to new techniques, equipment, and knowledge, which might lead to different conclusions.  Education is no different. As research progresses and we get new and improved methods, our understanding grows.

Out of all the topics in educational research, the most interesting is the lecture.  No subject seems to generate as much pushback as the lecture.  A lot of faculty seem to feel the need to be offended for the lecture’s sake.  Anyone who has trained at a university and received a graduate degree should understand that our understanding changes over time.  Yet no matter how much researchers publish about the limited value of the lecture in education, large numbers of faculty insist the research must be wrong.

I suspect part of the pushback about lectures is that lecturing is what a lot of faculty have done for years.  If they accept that the lecture is not useful, then they have been teaching wrong.  Faculty shouldn’t feel bad about lectures; after all, it is likely what they experienced in school.  I think it is faculty members’ own experiences with lectures as students that lead to the problem.  I have had multiple faculty tell me over the years some version of the statement, “The classes I had were lectures, and I learned everything, so lectures have to work.”

The belief that you learned from lectures when you were a student is likely faulty.  The reason this belief is defective is that you have probably never actually had a course that was exclusively a lecture course.  I can hear everyone’s response as they read that sentence: “What are you talking about? As a student, most of my classes were lectures.  I went into the classroom, and the teacher stood at the front and lectured the whole period. So, of course, I have had lecture courses.”

Again, I don’t think most people have ever had an exclusively lecture course. Let’s break down a course and see if you really can say you learned from the lecture.  First, did your course have a textbook or other reading assignments?  Just about every course I took had reading assignments.  In most of my classes, I spent more time reading than I spent in the class listening to the lecturer.  Most of my courses also had homework assignments and written reports.  Many of the courses also had weekly quizzes and one or two midterms where we could learn from the feedback.

Can you honestly say that in a lecture course, you didn’t learn anything from the course readings?  That you didn’t learn anything from the homework assignments and papers?  That you didn’t learn anything by reviewing the graded homework assignments, papers, quizzes, and midterms?  The truth is that even in a traditional lecture course, there are lots of ways for students to learn.  As a student, it is next to impossible to determine how much you learn from any one thing in a course.  So, with all these other ways to learn in a “Lecture” course, can you honestly say you learned from the lecture?  In truth, the only way to have a course where you could say you learned from the lecture is if you had a course that only had a lecture and a final: no readings, no assignments, no exams with feedback, only a lecture.

However, there is an even deeper issue with the lecture than the faculty insisting it works (without any real evidence).  As faculty members, what should our goal as teachers be?  It is quite reasonable to say that anyone teaching at a college, university, or any school should attempt to provide the best learning environment they can.  So, even if we accept the argument that students can learn from, let’s call it, a traditional lecture (I don’t), if the research says there is a better way to teach, shouldn’t we be using it?

If faculty approach teaching based on what is the best way to teach, it does not matter whether students can learn from lectures; if there is a better way to teach, we should use it.  The research says we should be using Active Learning when we teach because it benefits the students.  A recent article from PNAS, Active learning increases student performance in science, engineering, and mathematics, shows that students in classes that don’t use active learning are 1.5 times more likely to fail the course.  At a time when universities and the government are pushing for higher STEM graduation rates, active learning would make a big difference.

So how much of a problem is the lecture?  I know a lot of faculty who say they use active learning in their classrooms.  In a recent newsletter from the Chronicle of Higher Education, Can the Lecture Be Saved?, Beth McMurtrie states, “Most professors don’t pontificate from the moment class starts to the minute it ends, but lecturing is often portrayed that way.”

However, a recent paper from the journal Science, Anatomy of STEM teaching in North American universities, might refute this statement.  The Science paper shows, at least in the STEM disciplines, that when classroom teaching methods are observed rather than reported by survey, 55% of all the courses observed are traditional lectures.  Only 18% of the courses are student-centered active learning environments.  The rest have some amount of active learning.

Regardless of whether you think the lecture works or not, it is long past time to change.  There is no reason to feel ashamed or think poorly of faculty that used lectures in the past.  After all, for a lot of reasons, lectures were believed to work.  However, we are also long past the time when anyone should be offended for the lecture’s sake.  We need to use the best teaching methods currently available.  The best methods are the techniques called active learning, because students measurably learn better with them than in a traditional lecture.

Thanks for Listening to My Musings
The Teaching Cyborg

In Education What Does It Mean to Be Competent?

“What you know is more important than where or how you learned it.”
Tina Goodyear

While competency-based education (CBE) has been part of US education for 40 or 50 years, interest has been increasing over the last couple of years.  A faculty member I was working with once described a problem he was having at his school.  He worked within a system of schools that used a common course system across all their campuses.  A common course system can solve a lot of issues. It allows students to transfer between schools smoothly.  It also lets the system office negotiate guaranteed transfer agreements with other universities for all the schools in the system, rather than each school having to negotiate individual transfer agreements.

However, the way the system he worked in maintained its common course system was causing problems.  The system office maintained a central list of the learning outcomes for the common courses.  When a school taught a class, it only needed to teach 80% of the outcomes on the common list.  If the common list had 26 learning outcomes, a school only needed to teach 21 (20.8) of them; faculty did not have to teach five of the learning outcomes on the common course list.

To pass a common course, the student must earn at least a C (70% of the learning outcomes taught).  That means a student can pass while only learning 15 (14.56) learning outcomes.  Therefore, a student can pass without knowing 11 of the 26 learning outcomes on the system’s core list.  Taken to the extreme, it means that two students from different schools who both earned a C and transferred to the same school might have only four learning outcomes in common between them.
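Here is a quick back-of-envelope check of that worst-case arithmetic as a small Python sketch (the rounding to whole outcomes follows the figures in the paragraphs above; this is only the arithmetic, not a claim about any particular school system):

```python
# Worst-case overlap under the 80%-taught / 70%-to-pass rules described above.
total_outcomes = 26
taught = round(0.80 * total_outcomes)                # 21 (20.8) outcomes a school must teach
learned_for_c = round(0.70 * 0.80 * total_outcomes)  # 15 (14.56) outcomes learned for a C

# If two schools skip different outcomes, and the two C students learn
# different subsets of what their schools taught, the guaranteed overlap is:
min_taught_overlap = taught + taught - total_outcomes                  # 16 outcomes
min_learned_overlap = learned_for_c + learned_for_c - total_outcomes   # 4 outcomes

print(taught, learned_for_c, min_taught_overlap, min_learned_overlap)  # 21 15 16 4
```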

The committee my friend was working with suggested the implementation of competency-based education as a solution to the problems with their current common course system.  I asked how they were planning on implementing CBE.  He answered, “Well, we already have learning goals; all we need to do is turn them into competencies, then modify our assessments a little, and we will be doing CBE.”

I remember asking, “Are you changing how you assign grades?  If you’re not changing the grades, are you going to change your transcripts?”  If you don’t make changes like “grading” differently or listing mastered competencies on your transcripts, you will still have the same problem. A lot of people trying to jump on the CBE bandwagon are just rephrasing their learning goals as “competencies.”

Implementing CBE requires changes to the whole system.  One of the core ideas behind competency-based education is that, given enough time, most people can master any concept.

“Supporters of mastery learning have suggested that the vast majority of students (at least 90%) have the ability to attain mastery of learning tasks (Bloom, 1968; Carroll, 1963). The key variables, rather, are the amount of time required to master the task and the methods and materials used to support the learning process.” (How did we get here? A brief history of competency‐based higher education in the United States)

This one idea turns the current educational system on its head.  In most schools, students’ progress is measured by the number of credits earned.  Students earn credits by passing a class.  If a student passes the class, they get the credits whether they earn an A or a C. Institutions assign the number of credits to a course based on the number of hours the course meets.  This system, the Carnegie Unit, was established over a century ago by the Carnegie Foundation.  Therefore, students earn credits based on time.

However, the Carnegie Unit, or Credit Hour, was initially created as part of a program to determine eligibility for the Carnegie pension plan (known today as TIAA-CREF).

“To qualify for participation in the Carnegie pension system, higher education institutions were required to adopt a set of basic standards around courses of instruction, facilities, staffing, and admissions criteria. The Carnegie Unit, also known as the credit hour, became the basic unit of measurement both for determining students’ readiness for college and their progress through an acceptable program of study.” (The Carnegie Unit: A Century-Old Standard in a Changing Education Landscape)

While the Carnegie Unit brought standardization to a nascent US educational system, it is possible, if not likely, that we have become too focused on what is easily measured, like the Carnegie Unit.  In the CBE system, students earn credits based on mastery of concepts. Therefore, students take as much or as little time as they need to master concepts and move forward at a pace that best suits them.  CBE makes the information learned the central component used to earn credits, not the length of time spent in a course.

Beyond restructuring the educational experience to focus on mastery, there are questions about assessments.  It is not merely a matter of rewording learning goals into competencies.  Course designers build competencies around what students should know and be able to do.  The assessments must be carefully thought out to match the desired outcome and then ascertain whether the student has mastered the competency.  While the process of assessment creation is involved, the fact that schools like Western Governors University and the University of Wisconsin’s Flexible Option program are using CBE can provide examples and a knowledge pool for developing new programs.

I don’t know if most of the educational system will adopt CBE.  The changes needed to the standard system are enormous. After all, if students can learn at their own pace, semesters and time to degree will have to be rethought. However, the thought of competency-based education changing the focus back to learning over sorting is appealing.  The CBE system could also help alleviate student frustrations over a course moving too slowly or too fast, leading to higher matriculation rates.  In the long run, I suspect the degree to which CBE is adopted will depend mostly on the success of the institutions currently leading the way.  Regardless of the success or failure of CBE, it will be fun to follow the developments in CBE over the next several years.

Thanks for Listening to My Musings
The Teaching Cyborg

Is it Dedication or Delusion?

“Delusion is the seed of dreams.”
Lailah Gifty Akita

Educational reform is a never-ending process, which is, in many ways, good.  The purpose of educational institutions is to provide the best education possible.  The individual teacher learns from experience and improves over time.  Research into learning and cognition leads to a better understanding of how people learn and, therefore, better ways to teach.

However, even with our continually improving knowledge, changes in education seem painfully slow or do not occur at all.  A consistent problem is class size.  Just about anyone who has studied education will agree that the best way to teach someone is with a dedicated teacher in a one-on-one environment (feel free to disagree; I would love to hear your reasons).  However, in a society that wants education, especially higher education, available to everyone, one-on-one education is not possible.

Don’t believe me? Look at the numbers.  According to the US Census Bureau, there are 76.4 million students in school, K through university.  That means we would need 76.4 million teachers.  If we paid them an average living wage, including overhead, each teacher would cost $41,923 – $46,953 (still a little low if you ask me).  This works out to 3.2 – 3.6 trillion dollars, or 17-19% of the US Gross National Product.  As a comparison, the budget for the US national government was 21% of GDP in 2015.  Also, 76.4 million students are 24.7% of the US population three and older.  If we also had 24.7% of the US population working as teachers, then almost half of the US population would be students or teachers.  Remember, we would still need all the support staff, and these are current numbers, not what we would need if everyone eligible attended school.
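Here is that back-of-envelope calculation as a Python sketch; the student count and salary range are the figures above, while the 2015 US GNP of roughly $18.7 trillion is my own assumption for the comparison:

```python
# Rough cost of one teacher per student, using the figures in the paragraph above.
students = 76.4e6                          # K-through-university students (US Census)
salary_low, salary_high = 41_923, 46_953   # per-teacher cost including overhead

cost_low = students * salary_low           # ~$3.2 trillion
cost_high = students * salary_high         # ~$3.6 trillion

gnp_2015 = 18.7e12                         # assumed 2015 US GNP, for the comparison
print(f"Total cost: ${cost_low / 1e12:.1f} to ${cost_high / 1e12:.1f} trillion")
print(f"Share of GNP: {cost_low / gnp_2015:.0%} to {cost_high / gnp_2015:.0%}")
```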

I don’t think any country can afford to devote that much of its population and resources to one thing and survive.  As someone who loves education, I would love it if some economist out there proved me wrong.  So, class size is a compromise between what we can afford to do and the best environment for our students.

However, there are also issues outside of those constrained by, shall we call it, reality.  We have all seen programs and projects that we think can help students get canceled.  We have all seen programs developed by grants get canceled the second the grant ends.  The loss of these programs means not only that future students will not benefit, but also that resources, including the time, commitment, and motivation of staff, are lost.

I have been asked, after several of my programs have been canceled, “How many times are you going to keep building programs that just get canceled?”  It’s an interesting question and one that is not easy to answer.  I was at the University of Colorado Boulder when Carl Wieman won the 2001 Nobel Prize in Physics.  After winning the Nobel Prize, Wieman went on to advocate for the improvement of science education, to the extent that he was appointed Associate Director for Science in the White House’s Office of Science and Technology Policy in 2010.  In 2013, I remember reading an article, Crusader for Better Science Teaching Finds Colleges Slow to Change, about Dr. Wieman and his frustrations with the slow pace of change in higher education: “… Mr. Wieman is out of the White House. Frustrated by university lobbying and distracted by a diagnosis of multiple myeloma, an aggressive cancer of the circulatory system, he resigned last summer. … “I’m not sure what I can do beyond what I’ve already done,” Mr. Wieman says.”

You can’t help but think: if someone with the prestige and influence of Carl Wieman can’t encourage change, what hope does anyone else have?  The truth of the matter is that how much someone can take and when they have had enough is a personal question.  When thinking about how much is enough, I can’t help but think of a humorous little fable, Nasreddin and the Sultan’s Horse.  I have encountered versions of this fable many times.  I think the first time was in the science fiction book The Mote in God’s Eye by Larry Niven and Jerry Pournelle.

Nasreddin and the Sultan’s Horse

One day, while Nasreddin was visiting the capital city, the Sultan took offense to a joke that was made at his expense. He had Nasreddin immediately arrested and imprisoned; accusing him of heresy and sedition. Nasreddin apologized to the Sultan for his joke and begged for his life; but the Sultan remained obstinate, and in his anger, sentenced Nasreddin to be beheaded the following day. When Nasreddin was brought out the next morning, he addressed the Sultan, saying “Oh Sultan, live forever! You know me to be a skilled teacher, the greatest in your kingdom. If you will but delay my sentence for one year, I will teach your favorite horse to sing.”

The Sultan did not believe that such a thing was possible, but his anger had cooled, and he was amused by the audacity of Nasreddin’s claim. “Very well,” replied the Sultan, “you will have a year. But if by the end of that year you have not taught my favorite horse to sing, then you will wish you had been beheaded today.”

That evening, Nasreddin’s friends were able to visit him in prison and found him in unexpectedly good spirits. “How can you be so happy?” they asked. “Do you really believe that you can teach the Sultan’s horse to sing?” “Of course not,” replied Nasreddin, “but I now have a year which I did not have yesterday, and much can happen in that time. The Sultan may come to repent of his anger and release me. He may die in battle or of illness, and it is traditional for a successor to pardon all prisoners upon taking office. He may be overthrown by another faction, and again, it is traditional for prisoners to be released at such a time. Or the horse may die, in which case the Sultan will be obliged to release me.”

“Finally,” said Nasreddin, “even if none of those things come to pass, perhaps the horse can sing.”

In 2017, I read an article from Inside Higher Ed, Smarter Approach to Teaching Science.  The article talks about a book written by Carl Wieman, Improving How Universities Teach Science: Lessons from the Science Education Initiative, that documents the research and methods used to improve science teaching in higher education.  It seems that Dr. Wieman did not give up after all; he is back and still pushing.  Perhaps the truth is that people who try to change the monolith must be a little bit crazy, if crazy is doing the same thing repeatedly and expecting a different outcome. Then again, maybe the horse will learn to sing.

Thanks for Listening to My Musings
The Teaching Cyborg

Does a Letter Grade Tell You Whether Students are Learning?

“If I memorize enough stuff, I can get a good grade.”
Joseph Barrell

What do grades tell you?  Colleges and universities accept students in part based on their GPA, which is determined by their grades.  Students get accepted as transfers based on the grades they received.  A student’s ability to move on to the next course is dependent on grades.  One of the reasons schools created grades was to support transfers and advanced degrees: “Increasingly, reformers saw grades as tools for system-building rather than as pedagogical devices––a common language for communication about learning outcomes.”  A student’s transcript is a list of the courses they took with the grade they received.  Some employers even look at grades when hiring.

We could forgive society for thinking that grades tell us everything.  In a lot of ways, modern educational institutions seem to center around grades.  Even a lot of educational professionals believe grades tell us everything.  I once participated in a meeting where a school was trying to work out an assessment to prove that an educational intervention was effective.  After a little bit of discussion about some of the possible approaches we could use, one of the individuals who had not participated up to that point spoke up and said:

“All of this is incredibly stupid, a complete waste of time.  We know this technique works.  Anyone that complains is just stupid.  After all the students pass the course, and we have good student distributions.  What more does anyone need besides grades.” (Quote Intentionally not cited)

After this statement, several people in the meeting agreed.  Now there are a lot of issues with grades and GPAs. Leaving aside the issue of grade inflation, let’s ask the question, do grades tell us how much a student learns in a course?  Were letter grades even meant to determine how much a student learns over the length of a course?  Maybe grades were just meant to show what skills a student had mastered at the end of the course?  The last two questions may sound similar, but they are not.

Let’s start with the problems we can run into using grades to assess student learning, beginning with curved grades.  Faculty started curving grades based on the belief that student grades should match the normal distribution.  The bell curve began to take hold in the early part of the 20th century: “It is highly probable that ability, whether in high school or college, is distributed in the form of the probability curve.” (Finkelstein, Isidor Edward. The marking system in theory and practice. No. 10. Baltimore: Warwick & York, 1913. p79.)  If faculty use a curved grading system, then any variations or changes in student performance based on educational interventions will be covered up by the curved grades.
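To see why a curve hides improvement, consider this small, purely hypothetical Python sketch: grades are assigned by class rank, so when an intervention raises every raw score, the letter-grade distribution does not move at all (the scores and cutoffs are invented for illustration):

```python
def curve_grades(scores, cutoffs=(0.10, 0.30, 0.70, 0.90)):
    """Assign letter grades purely by rank within the class (a simple percentile curve)."""
    ranked = sorted(scores)
    n = len(scores)
    def grade(score):
        percentile = ranked.index(score) / n            # position within the class
        letters = ["F", "D", "C", "B", "A"]
        return letters[sum(percentile >= c for c in cutoffs)]
    return [grade(s) for s in scores]

before = [55, 60, 62, 68, 70, 75, 78, 82, 85, 93]   # raw scores before an intervention
after = [s + 10 for s in before]                     # every student improves by 10 points
print(curve_grades(before))
print(curve_grades(after))   # identical letter grades despite the improvement
```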

Outside of curved grades, there is also the fact that different faculty and different schools (if you work in a multi-school system) will often have different grading scales. There is also the argument that the modern grading system is not about teaching but sorting.  “All stratification systems require “a social structure that divides people into categories” (Massey 2007, p. 242). Educational systems are among the most critical such structures in contemporary societies.” (Categorical Inequality: Schools As Sorting Machines).

Suppose we could deal with all the above issues.  We use a fixed (non-curved) grading system.  All faculty and schools use the same grading system and the same assessments.  We record all the data year after year.  Now suppose we introduce an educational innovation and a statistically significant number of students get higher grades.  Can we then use grades to determine student learning?

In short, no.  If a higher percentage of students continues to get higher grades, you could say that you have found a better way to teach, but you can’t say anything about how much students have learned.  Assessing how much students learn in a course requires a piece of information that the students’ grades don’t provide.

To determine how much a student or group of students learns throughout a course, you need to know what their starting point is.  No student is a blank slate when they start a course.  While part of the job of an educator is helping the student identify and deal with misconceptions, the incorrect information brought into a class, students will also bring correct information into a course.  Suppose you assessed all your students at the beginning of your course and discovered that all the students who got As scored 90% or higher on your pre-assessment.  Did you teach your A students anything?

Measuring how much a student learns over a course based on their starting and ending knowledge is called Learning Gains.  The critical thing about Learning Gains is that they measure how much a student learned relative to how much that student could have learned.  As an example, your pre-test shows that student A already knows 20% of the material that you will cover in the course, while student B already knows 30% of the material.  That means that to reach 100%, student A needs to learn 80% while student B needs to learn only 70%.  The actual learning gain of a student can be calculated using the normalized gain (g): g = (post-test – pre-test) / (100% – pre-test).
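As a minimal worked example (the 90% post-test score is an assumption added for illustration), the normalized gain for the two hypothetical students above works out like this:

```python
def normalized_gain(pre_pct, post_pct):
    """g = (post-test - pre-test) / (100 - pre-test), with scores as percentages."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Student A starts at 20%, Student B at 30% (the example above).
# If both finish at an assumed 90%, their final scores look identical,
# but their normalized gains differ.
print(normalized_gain(20, 90))   # Student A: 0.875
print(normalized_gain(30, 90))   # Student B: ~0.857
```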

Therefore, using pre- and post-tests, we can measure the actual amount of learning as a fraction of the total learning that could occur over the length of a course.  While grades are useful for a lot of things, they don’t tell us how much students learn throughout a course.  Remember, when you’re trying to improve your teaching, use a measure that will show you the information you need.

Thanks for Listening to My Musings
The Teaching Cyborg

How do our Students Identify Expertise?

“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”
Charles Darwin

When can you use a title?  What makes someone an expert?  Over the years, I have built several pieces of furniture, tables, bookshelves, and chests; does that make me a master carpenter?  I have met several master carpenters and seen their work; I am most definitely not a master carpenter.  Using the book Make Your Own Ukulele: The Essential Guide to Building, Tuning, and Learning to Play the Uke, I’ve built two ukuleles.  While the author of the book says that once you’ve made “a professional-grade ukulele” you are a luthier, I don’t think I will be calling myself a luthier anytime soon.

I have a lot of “hobbies”: I have made knives, braided whips, bound books, made hard cider, and cooked more things than I can remember.  The only one of my hobbies for which I might be willing to use a title is photography.  I have been practicing outdoor and nature photography for 30+ years, and if you caught me in the right mood, I might call myself a photographer.  What makes photography different?  It’s not the time I have put into it, though I am long past the 10,000-hour mark.  I’ve had my work reviewed and accepted by people in the field, not every picture, but enough to be comfortable with my skill.

I am selective when it comes to titles and proclaiming my expertise.  However, there are people who are not selective about their expertise.  Believing your knowledge to be greater than it is, is common enough to have a name: the Dunning–Kruger effect.  However, an even bigger problem than an individual overestimating their knowledge is an individual overestimating their knowledge and presenting themselves as an expert.

The internet and self-publishing have increased our access to knowledge and different points of view.  Previously it was simply not possible, for multiple reasons, to publish everything, so editors and review boards had to decide what to publish.

While the benefits of open publication are significant, we must ask: without “gatekeepers,” how do we identify expertise?  Many people may ask, “Why do we care?”  Well, we have issues like GMOs, stem cell therapy, cloning, genetically engineered humans, and technology we have not even thought of yet.  How will people decide what to do with these technologies if they can’t identify expertise?

A great example of this is a recent study on GMOs, Those who oppose GMO’s know the least about them — but believe they know more than experts.  In the study, most people said that GMOs are unsafe to eat, which differs from scientists, the majority of whom say GMOs are safe.  People’s views of GMOs are not a surprise; news coverage of GMOs clearly shows how people feel.  The interesting thing was the second point covered in the study.  The people who were most opposed to GMOs thought they knew the most about them.  However, when this group of self-identified experts had their scientific knowledge tested, they scored the lowest.

The difference between people’s beliefs and actual knowledge gets even more complicated when we move beyond GMOs.  While the consensus is that GMOs are safe and could be beneficial, their loss isn’t instantly deadly.  After all, we haven’t developed the GMO that will grow in any condition and solve world hunger, or capture all the excess CO2 from the atmosphere.  However, what about the anti-vaccination movement?  I’m not going to get into all the reasons people think they shouldn’t get vaccinated.  However, let’s talk about how their actions will affect you.

I know a lot of people say it’s just a small percentage, and I’ve been vaccinated, so ignore it.  You may even be one of them, so let me ask: have you heard about things like efficacy and herd immunity?  Additionally, do you remember or know that measles can kill?  Let’s look at the numbers. According to the CDC, the measles vaccine is 93% effective with one dose and 97% effective with the recommended two doses.  That means that even with the recommended two doses, 3 out of every 100 people who are vaccinated can get the measles.  Even if everyone in the US were vaccinated, there would be 9.8 million people still susceptible to measles.

A lot of people don’t believe this; after all, we don’t see millions of measles cases every year.  Herd immunity (community immunity) is the reason we don’t see millions of cases.  The idea is that if enough people in a community are immunized, the illness can’t spread through the community. So even if you are one of the individuals for whom the vaccine was ineffective, you don’t catch the disease because the individuals around you have effective immunizations.

What percentage of vaccination against measles grants herd immunity?  According to a presentation by Dr. Sebastian Funk, Critical immunity thresholds for measles elimination, the population needs an immunization level of 93-95% for herd immunity to work against measles.  According to the CDC, vaccination coverage for individuals 19-35 months old is 91.1%, while coverage for individuals 13-17 years old is 90.2%.  That is below the level needed for herd immunity.  Therefore, individuals choosing not to get vaccinated are endangering not just themselves but others.
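Here is the arithmetic behind those two paragraphs as a Python sketch; the efficacy, threshold, and coverage figures are the ones cited above, while the US population of roughly 327 million is my own assumption:

```python
us_population = 327e6        # assumed US population
two_dose_efficacy = 0.97     # ~3 in 100 fully vaccinated people remain susceptible

# Even with 100% vaccination, some people remain unprotected.
susceptible = us_population * (1 - two_dose_efficacy)
print(f"Susceptible with full coverage: {susceptible / 1e6:.1f} million")   # ~9.8 million

# Compare reported coverage with the herd-immunity threshold for measles.
herd_immunity_threshold = 0.93   # lower end of the 93-95% range
coverage = {"19-35 months": 0.911, "13-17 years": 0.902}
for group, rate in coverage.items():
    print(group, "below threshold:", rate < herd_immunity_threshold)
```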

Fortunately, we know individuals can learn.  Earlier this year, Ethan Lindenberger, an 18-year-old who got himself vaccinated against his anti-vaccination mother’s wishes, testified before Congress about how he made the decision.  A lot of what he talked about was reading information from credible sources and real experts.

So how do we teach students to identify credible experts and valid information?  I have heard a lot of faculty say identifying reliable experts is easy: you look at who they are and where they work.  Well, it’s not quite that easy; for example, Andrew Wakefield was a gastroenterologist, a member of the UK medical register, and a published researcher.  He claimed that the MMR vaccine was causing bowel disease and autism.  After his research was shown to be irreproducible and likely biased and fraudulent, the General Medical Council removed him from the UK medical register.  However, he continues to promote anti-vaccine ideas.

We need a better approach than looking at where they work.  Dr. David Murphy suggests we interrogate potential experts using the tools of the legal system: interrogation and confrontation.  Gary Klein suggests a list of seven criteria:

  1. Successful performance—measurable track record of making good decisions in the past.
  2. Peer respect.
  3. Career—number of years performing the task.
  4. Quality of tacit knowledge, such as mental models.
  5. Reliability.
  6. Credentials—licensing or certification of achieving professional standards.
  7. Reflection.

While none of these criteria is a guarantee individually, taken as a whole they can give a functional assessment of expertise.  However, we don’t often get to interview every individual we encounter in research.  A third, and likely most applicable, approach involves reading critically and fact-checking.  To quote a phrase, “we need to teach students to question everything.”

One approach is the CRAAP test (Currency, Relevance, Authority, Accuracy, and Purpose) developed by Sarah Blakeslee of California State University, Chico.  The CRAAP Test is a list of questions that the reader can apply to a source of information to help determine if the information is valid and accurate.  The questions for Currency are:

  • When was the information published or posted?
  • Has the information been revised or updated?
  • Does your topic require current information, or will older sources work as well?
  • Are the links functional?

The currency questions address the age of the information.  Each section of the CRAAP test has 4 – 6 questions. The idea behind the CRAAP test is that once the researcher/student answers all the questions, they will be able to determine if the information is good or bad.

As an alternative, or perhaps a complement, we should be teaching our students to think and behave like fact-checkers.  One of the most compelling arguments about fact-checkers comes from the book Why Learn History (When It’s Already on Your Phone) by Sam Wineburg.  In chapter 7, Why Google Can’t Save Us, the author talks about a study in which historians (average age 47) from several four-year institutions were asked to compare information about bullying on two sites.  One site is maintained by a long-standing professional medical organization, while the other is maintained by a small splinter group (the issue that caused the split was adoption by same-sex couples).  A group of professional fact-checkers also examined the two sites.

Many of the professional historians decided that the splinter group was the more reliable source of information.  In contrast, the fact-checkers decided that the original organization was the most reliable.  The difference between the two groups is what the author calls vertical (historians) versus lateral (fact-checkers) reading.  The historians tended to read down the page and look at internal information.  The fact-checkers jumped around and left the page to check additional information, like where these two organizations came from, what others write about them, and what other groups and individuals say about the same questions.

The way information is published and disseminated has changed and will likely continue to change as the tools become easier to use and cheaper.  Education needs to change how we teach our students to evaluate information.  I think I will argue for a bit of lateral reading.

Thanks for Listening to My Musings
The Teaching Cyborg