Does Technology Change What It Means to Cheat?

“The first and worst of all frauds is to cheat oneself.”
Gamaliel Bailey

I came across an article from the Chronicle of Higher Education, A Professor Wants to Fail Students for Sharing Information in an Online Chat. But Has Tech Changed What Qualifies as Cheating?  The article’s title poses a question about what effect technology has on the meaning of cheating in education.  I’ve previously written about how technology makes it easier for students to commit plagiarism and gives faculty better tools to catch it (Technology and Plagiarism).  I didn’t, however, address whether technology changes plagiarism or cheating.

When we talk about technology and cheating, we generally talk about how technology makes it easier for students to cheat.  We talk about training faculty to use technology and tools to catch students who are cheating.  We rarely talk about whether technology changes cheating.  So, what does it mean to cheat?  Some things are, or should be, apparent: plagiarism, copying another student’s answers off a test, and submitting somebody else’s work as your own are all cheating.

However, at a fundamental level, what does it mean to cheat?  Like many words in the English language, the word cheat has many definitions, twelve according to Merriam-Webster’s dictionary.  The definition that works best for education is “to violate rules dishonestly.”

Using Webster’s definition, cheating is therefore whatever the rules of the course say it is.  With this definition in mind, we can ask: does the technology in the Chronicle article change what it means to cheat?

The technology in question is the messaging app GroupMe.  The app allows students to send messages to small or large groups of individuals.  GroupMe appeals to students because it is free and enables them to communicate without sharing personal information.  The students in question were in an online anthropology course at the University of Texas at Austin.  The professor, John Kappelman, Ph.D., has a course rule: “Students are not permitted to ask about, discuss, or share information related to exams and labs.”

One of the students in the anthropology course shared exam information in the GroupMe chat.  In response, the professor recommended that the dean fail the 70+ students using the GroupMe app for his class.  At the time of this writing, the 70+ cases are still under review, and because of ethical rules, we may never learn the outcome of most of them.  However, most schools would agree that a faculty member has the authority to set rules, expectations, and consequences in their course.

If we accept that faculty have the right to set their own rules and expectations, then the student who posted exam information violated the rules and therefore cheated.  What about the other 70+ students using the GroupMe app; do they also deserve to be failed?  Did all of them cheat?  Let’s leave aside whether a failing grade is the correct punishment, something I suspect would generate its own debate, and ask: did the other students cheat?

The Chronicle article is a little short on facts.  However, a report from the Houston Chronicle, 70 University of Texas students face discipline for group message about exam, offers a little more information.

“Around the time of the Anthropology course’s second exam earlier this month, she said a student had posted in the GroupMe asking what might be on the test. Another student responded with a list of all the textbook concepts the class had reviewed up to the exam, she said. A few hours later, she received Kappelman’s email [this was the email where Kappelman said he was recommending that the dean fail all the students].”

With the information from the Houston Chronicle, we can now put the students into three groups.  Group One is the student who requested proscribed information.  Group Two is the student who provided the proscribed information.  Group Three is all the other students in the GroupMe.  Again, leaving aside any debate as to whether the question itself should count as cheating, under Professor Kappelman’s rules, the students in Groups One and Two have cheated.

What about Group Three, the other 70+ users from the class who saw the list of topics?  Let’s propose a theoretical alternative.  Suppose the professor did not have access to GroupMe, and instead of posting a list of topics covered, the student posted the answer key to the exam.  If other students read this information and then took the exam, a reasonable person would say they cheated.

However, let’s suppose that instead of being a real-time app, students had to log in each time to see their messages.  Again, suppose a student posted the exam answer key.  What if another student never logged on to the app between the time the answers were posted and the time of the test?  I think it is clear they didn’t cheat; they never saw the answers.

Let’s also think about a third situation, again using the scenario in which a student posted the exam key.  Suppose a student logged into the app and saw the exam key.  Instead of taking the exam, the student immediately contacted the professor and explained what happened.  One could technically say that by reading the message, the student participated in a rule-breaking conversation.  However, they were not dishonest about it, and they did not seek to gain from the illicit knowledge, so no, they didn’t cheat.

So, the real problem here is that most of the students Dr. Kappelman is punishing neither requested nor posted rule-breaking material.  Additionally, we don’t know what the students would have done or even how many saw the message.  It is also not clear whether there is any way to determine who saw the information and who did not.

However, does the fact that it is unclear whether 70+ students cheated mean that technology has changed what cheating is?  I don’t think so; this entire situation could have happened without the GroupMe app.  Suppose that instead of the instant messaging app, students used an old-fashioned telephone and answering machine, or even paper letters mailed to each other.  One could also imagine a situation in which students posted notes on a contact board at a local coffee shop.

Suppose there is a coffee shop that all the anthropology students use.  A student posts a note on the contact board saying, “Does anyone know anything about the exam coming up?”  Another student posts the answer key on the board.  If a student comes in, reads the answer key, and then goes and takes the exam, they would be cheating.  If a student comes into the coffee shop, never looks at the contact board, and therefore never sees the answer key, they would not be cheating when they take the test.  Additionally, if a student reads the contact board, sees the answer key, and then tells the professor, they are also not cheating.  Just like in the GroupMe case, the hard part would be finding a way for students to prove whether they had seen the information on the contact board.

If you spend some time thinking about it, you will see that modern technology rarely creates new situations.  Modern technology makes things easier and faster than previously possible.  Therefore, we are not facing a situation in which students are cheating in fundamentally new ways; it is simply faster and easier for them to cheat.  The problem at the core of the GroupMe scandal is the misuse of technology and incomplete rules, not a change in what cheating is.

Thanks for Listening to My Musings
The Teaching Cyborg

Much Ado about Lectures

“Some people talk in their sleep. Lecturers talk while other people sleep.”
Albert Camus

The point of research is to improve our knowledge and understanding.  An essential part of research is the understanding that what we know today may be different from what we know tomorrow; as research progresses, our knowledge changes.  Conclusions changing over time does not mean the earlier researchers were wrong.  After all, they based their conclusions on the best information they had available at the time.  Future researchers have access to new techniques, equipment, and knowledge, which might lead to different conclusions.  Education is no different.  As research progresses and we get new and improved methods, our understanding grows.

Out of all the topics in educational research, the most interesting is the lecture.  No subject seems to generate as much pushback.  A lot of faculty seem to feel the need to be offended on the lecture’s behalf.  Anyone who has trained at a university and received a graduate degree should understand that our understanding changes over time.  Yet no matter how much researchers publish about the limited value of the lecture in education, large numbers of faculty insist the research must be wrong.

I suspect part of the pushback is that lecturing is what a lot of faculty have done for years.  If they accept that the lecture is not useful, then they have been teaching wrong.  Faculty shouldn’t feel bad about lectures; after all, lecturing is likely what they experienced in school.  I think it is faculty members’ own experience with lectures as students that leads to the problem.  Multiple faculty have told me over the years some version of the statement, “The classes I had were lectures, and I learned everything, so lectures have to work.”

The belief that you learned from lectures when you were a student is likely faulty.  The reason is that you have probably never actually taken a course that was exclusively a lecture course.  I can hear everyone’s response as they read that sentence: “What are you talking about?  As a student, most of my classes were lectures.  I went into the classroom, and the teacher stood at the front and lectured the whole period.  So, of course, I have had lecture courses.”

Again, I don’t think most people have ever had an exclusively lecture-based course.  Let’s break down a course and see if you really can say you learned from the lecture.  First, did your course have a textbook or other reading assignments?  Just about every course I took had reading assignments.  In most of my classes, I spent more time reading than I spent in class listening to the lecturer.  Most of my courses also had homework assignments and written reports.  Many also had weekly quizzes and one or two midterms where we could learn from the feedback.

Can you honestly say that in a lecture course you didn’t learn anything from the course readings?  That you didn’t learn anything from the homework assignments and papers?  That you didn’t learn anything by reviewing the graded homework assignments, papers, quizzes, and midterms?  The truth is that even in a traditional lecture course, there are lots of ways for students to learn.  As a student, it is next to impossible to determine how much you learned from any one component of a course.  So, with all these other ways to learn in a “lecture” course, can you honestly say you learned from the lecture?  In truth, the only way to have a course where you could say you learned from the lecture is a course with only a lecture and a final: no readings, no assignments, no exams with feedback, only a lecture.

However, there is an even deeper issue with the lecture than faculty insisting it works (without any real evidence).  As faculty members, what should our goal as teachers be?  It is quite reasonable to say that anyone teaching at a college, university, or any school should attempt to provide the best learning environment they can.  So, even if we accept the argument that students can learn from, let’s call it, a traditional lecture (I don’t), if the research says there is a better way to teach, shouldn’t we be using it?

If faculty approach teaching based on what is the best way to teach, it does not matter whether students can learn from lectures; if there is a better way to teach, we should use it.  The research says we should be using active learning because it benefits the students.  A recent article in PNAS, Active learning increases student performance in science, engineering, and mathematics, shows that students in classes that don’t use active learning are 1.5 times more likely to fail the course.  At a time when universities and the government are pushing for higher STEM graduation rates, active learning would make a big difference.

So how much of a problem is the lecture?  I know a lot of faculty who say they use active learning in their classrooms.  In a recent newsletter from the Chronicle of Higher Education, Can the Lecture Be Saved?, Beth McMurtrie states, “Most professors don’t pontificate from the moment class starts to the minute it ends, but lecturing is often portrayed that way.”

However, a recent paper in the journal Science, Anatomy of STEM teaching in North American universities, might refute this statement.  The paper shows that, at least in the STEM disciplines, when classroom teaching methods are observed rather than reported by survey, 55% of all the courses observed are traditional lectures.  Only 18% of the courses are student-centered active learning environments.  The rest include some amount of active learning.

Regardless of whether you think the lecture works, it is long past time to change.  There is no reason to feel ashamed or think poorly of faculty who used lectures in the past.  After all, for a lot of reasons, lectures were believed to work.  However, we are also long past the time when anyone should be offended on the lecture’s behalf.  We need to use the best teaching methods currently available.  The best methods are the techniques collectively called active learning, because students measurably learn better with them than in a traditional lecture.

Thanks for Listening to My Musings
The Teaching Cyborg

In Education What Does It Mean to Be Competent?

“What you know is more important than where or how you learned it.”
Tina Goodyear

While competency-based education (CBE) has been part of US education for 40 or 50 years, interest has been increasing over the last couple of years.  A faculty member I was working with once described a problem he was having at his school.  He worked with a system of schools that used a common course system across all their campuses.  A common course system can solve a lot of issues.  It allows students to transfer between schools smoothly.  It also lets the system office negotiate guaranteed transfer agreements with other universities for all the schools in the system, rather than each school having to negotiate individual transfer agreements.

However, the way his system maintained its common course system was causing problems.  The system office maintained a central list of the learning outcomes for the common courses.  When a school taught a class, it only needed to cover 80% of the outcomes on the common list.  If the common list had 26 learning outcomes, a school only needed to teach 21 (26 × 0.80 = 20.8).  Faculty could skip five of the learning outcomes on the common course list.

To pass a common course, a student must earn at least a C (70% of the learning outcomes taught).  That means a student can pass while mastering only 15 (21 × 0.70 = 14.7) learning outcomes.  Therefore, a student can pass without knowing 11 of the 26 learning outcomes on the system’s core list.  Taken to the extreme, two students from different schools who both earned a C and transferred to the same school might have only four learning outcomes in common between them.
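The worst case above is easy to check with a quick back-of-the-envelope script.  A minimal sketch, using the hypothetical numbers from the example (not real system data):

```python
# Worst-case overlap check.  Hypothetical numbers from the example above:
# 26 outcomes on the system list, schools must teach 80% of them, and a C
# requires mastering 70% of what was taught.
import math

TOTAL = 26
taught = math.ceil(TOTAL * 0.80)   # 20.8 -> 21 outcomes each school must teach
passed = math.ceil(taught * 0.70)  # 14.7 -> 15 outcomes a C student mastered

# Worst case: School A teaches outcomes 1-21, School B teaches outcomes 6-26,
# and each C student mastered the outcomes farthest from the other's.
school_a = list(range(1, taught + 1))
school_b = list(range(TOTAL - taught + 1, TOTAL + 1))
student_a = set(school_a[:passed])    # outcomes 1-15
student_b = set(school_b[-passed:])   # outcomes 12-26

shared = student_a & student_b        # outcomes 12-15: only 4 in common
print(len(shared))                    # -> 4
```

The four-outcome overlap is not a quirk of the layout: with two 15-outcome sets drawn from a 26-outcome universe, inclusion-exclusion guarantees at least 15 + 15 − 26 = 4 shared outcomes, and the worst case hits that floor exactly.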

The committee my friend was working with suggested implementing competency-based education as a solution to the problems with their current common course system.  I asked how they were planning on implementing CBE.  He answered, “Well, we already have learning goals; all we need to do is turn them into competencies, then modify our assessments a little, and we will be doing CBE.”

I remember asking, “Are you changing how you assign grades?  If you’re not changing the grades, are you going to change your transcripts?”  If you don’t make changes like grading differently or listing mastered competencies on transcripts, you will still have the same problem.  A lot of people trying to jump on the CBE bandwagon are just rephrasing their learning goals as “competencies.”

Implementing CBE requires changes to the whole system.  One of the core ideas behind competency-based education is that, given enough time, most people can master any concept.

“Supporters of mastery learning have suggested that the vast majority of students (at least 90%) have the ability to attain mastery of learning tasks (Bloom, 1968; Carroll, 1963). The key variables, rather, are the amount of time required to master the task and the methods and materials used to support the learning process.” (How did we get here? A brief history of competency‐based higher education in the United States)

This one idea turns the current educational system on its head.  In most schools, students’ progress is measured by the number of credits earned.  Students earn credits by passing a class.  If a student passes the class, they get the credits whether they earned an A or a C.  Institutions assign the number of credits to a course based on the number of hours the course meets.  This system, the Carnegie Unit, was established over a century ago by the Carnegie Foundation.  Therefore, students earn credits based on time.

However, the Carnegie Unit, or credit hour, was initially created as part of a program to determine eligibility for the Carnegie pension plan (known today as TIAA-CREF).

“To qualify for participation in the Carnegie pension system, higher education institutions were required to adopt a set of basic standards around courses of instruction, facilities, staffing, and admissions criteria. The Carnegie Unit, also known as the credit hour, became the basic unit of measurement both for determining students’ readiness for college and their progress through an acceptable program of study.” (The Carnegie Unit: A Century-Old Standard in a Changing Education Landscape)

While the Carnegie Unit brought standardization to a nascent US educational system, it is possible, if not likely, that we have become too focused on what is easily measured, like the Carnegie Unit.  In a CBE system, students earn credits based on mastery of concepts.  Therefore, students take as much or as little time as they need to master concepts and move forward at a pace that best suits them.  CBE makes the information learned, not the length of time spent in a course, the central component used to earn credits.

Beyond restructuring the educational experience to focus on mastery, there are questions about assessments.  It is not merely a matter of rewording learning goals into competencies.  Course designers build competencies around what students should know and be able to do.  The assessments must be carefully thought out to match the desired outcome and then ascertain whether the student has mastered the competency.  While the process of assessment creation is involved, the fact that schools like Western Governors University and the University of Wisconsin’s Flexible Option program are using CBE provides examples and a knowledge pool for developing new programs.

I don’t know whether most of the educational system will adopt CBE.  The changes needed to the standard system are enormous.  After all, if students can learn at their own pace, semesters and time to degree will have to be rethought.  However, the thought of competency-based education shifting the focus back to learning over sorting is appealing.  The CBE system could also help alleviate student frustration over a course moving too slowly or too fast, leading to higher matriculation rates.  In the long run, I suspect the degree to which CBE is adopted will depend mostly on the success of the institutions currently leading the way.  Regardless of its success or failure, it will be fun to follow the developments in CBE over the next several years.

Thanks for Listening to My Musings
The Teaching Cyborg

Is it Dedication or Delusion?

“Delusion is the seed of dreams.”
Lailah Gifty Akita

Educational reform is a never-ending process, which is, in many ways, good.  The purpose of educational institutions is to provide the best education possible.  The individual teacher learns from experience and improves over time.  Research into learning and cognition leads to better understandings of how people learn and therefore better ways to teach.

However, even with our continually improving knowledge, changes in education seem painfully slow or fail to occur at all.  A consistent problem is class size.  Just about anyone who has studied education will agree that the best way to teach someone is with a dedicated teacher in a one-on-one environment (feel free to disagree; I would love to hear your reasons).  However, in a society that wants education, especially higher education, available to everyone, one-on-one education is not possible.

Don’t believe me?  Look at the numbers.  According to the US Census Bureau, there are 76.4 million students in school, kindergarten through university.  One-on-one instruction would therefore require 76.4 million teachers.  If we paid each of them an average living wage, including overhead, of $41,923–$46,953 (still a little low if you ask me), that works out to $3.2–3.6 trillion, or roughly 17–20% of US GDP.  As a comparison, the budget for the US national government was 21% of GDP in 2015.  Also, 76.4 million students is 24.7% of the US population aged three and older; if we also had 24.7% of the population working as teachers, then almost half of the US population would be students or teachers.  Remember, we would still need all the support staff, and these are current numbers, not what we would need if everyone eligible for school attended.
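Those figures are easy to sanity-check.  A minimal sketch, using the student count and salary range above, plus an assumed 2015 US nominal GDP of roughly $18.2 trillion:

```python
# Sanity check of the back-of-the-envelope numbers above.  The student count
# and salary range are this post's own figures; the GDP value is an
# assumption (US nominal GDP in 2015 was roughly $18.2 trillion).
students = 76_400_000
low_salary, high_salary = 41_923, 46_953  # per-teacher cost incl. overhead

low_total = students * low_salary    # ~$3.2 trillion
high_total = students * high_salary  # ~$3.6 trillion

gdp_2015 = 18.2e12
print(f"${low_total / 1e12:.1f}-${high_total / 1e12:.1f} trillion")
print(f"{100 * low_total / gdp_2015:.0f}-{100 * high_total / gdp_2015:.0f}% of GDP")
```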

I don’t think any country can afford to devote that much of its population and resources to one thing and survive.  As someone who loves education, I would love it if some economist out there proved me wrong.  So, class size is a compromise between what we can afford and the best environment for our students.

However, set aside the issues constrained by, shall we call it, reality.  We have all seen programs and projects that we think can help students get canceled.  We have all seen programs developed under grants get canceled the second the grant ends.  The loss of these programs means not only that future students will not benefit, but also the loss of resources, including the time, commitment, and motivation of staff.

After several of my programs were canceled, I was asked, “How many times are you going to keep building programs that just get canceled?”  It’s an interesting question, and one that is not easy to answer.  I was at the University of Colorado Boulder when Carl Wieman won the 2001 Nobel Prize in Physics.  After winning the Nobel Prize, Wieman went on to advocate for the improvement of science education; he was even appointed Associate Director of Science in the White House’s Office of Science and Technology Policy in 2010.  I remember reading a 2013 article, Crusader for Better Science Teaching Finds Colleges Slow to Change, about Dr. Wieman and his frustrations with the slow pace of change in higher education: “… Mr. Wieman is out of the White House. Frustrated by university lobbying and distracted by a diagnosis of multiple myeloma, an aggressive cancer of the circulatory system, he resigned last summer. … ‘I’m not sure what I can do beyond what I’ve already done,’ Mr. Wieman says.”

You can’t help but think that if someone with the prestige and influence of Carl Wieman can’t encourage change, what hope does anyone else have?  The truth of the matter is that how much someone can take, and when they have had enough, is a personal question.  When thinking about how much is enough, I can’t help but think of a humorous little fable, Nasreddin and the Sultan’s Horse.  I have encountered versions of this fable many times; I think the first was in the science fiction book The Mote in God’s Eye by Larry Niven and Jerry Pournelle.

Nasreddin and the Sultan’s Horse

One day, while Nasreddin was visiting the capital city, the Sultan took offense to a joke that was made at his expense. He had Nasreddin immediately arrested and imprisoned; accusing him of heresy and sedition. Nasreddin apologized to the Sultan for his joke and begged for his life; but the Sultan remained obstinate, and in his anger, sentenced Nasreddin to be beheaded the following day. When Nasreddin was brought out the next morning, he addressed the Sultan, saying “Oh Sultan, live forever! You know me to be a skilled teacher, the greatest in your kingdom. If you will but delay my sentence for one year, I will teach your favorite horse to sing.”

The Sultan did not believe that such a thing was possible, but his anger had cooled, and he was amused by the audacity of Nasreddin’s claim. “Very well,” replied the Sultan, “you will have a year. But if by the end of that year you have not taught my favorite horse to sing, then you will wish you had been beheaded today.”

That evening, Nasreddin’s friends visited him in prison and found him in unexpectedly good spirits. “How can you be so happy?” they asked. “Do you really believe that you can teach the Sultan’s horse to sing?” “Of course not,” replied Nasreddin, “but I now have a year which I did not have yesterday, and much can happen in that time. The Sultan may come to repent of his anger and release me. He may die in battle or of illness, and it is traditional for a successor to pardon all prisoners upon taking office. He may be overthrown by another faction, and again, it is traditional for prisoners to be released at such a time. Or the horse may die, in which case the Sultan will be obliged to release me.”

“Finally,” said Nasreddin, “even if none of those things come to pass, perhaps the horse can sing.”

In 2017, I read an article from Inside Higher Ed, Smarter Approach to Teaching Science.  The article discusses a book written by Carl Wieman, Improving How Universities Teach Science: Lessons from the Science Education Initiative, that documents the research and methods used to improve science teaching in higher education.  It seems Dr. Wieman did not give up after all; he is back and still pushing.  Perhaps the truth is that people who try to change the monolith must be a little bit crazy, if crazy is doing the same thing repeatedly and expecting a different outcome.  Then again, maybe the horse will learn to sing.

Thanks for Listening to My Musings
The Teaching Cyborg

Does a Letter Grade Tell You Whether Students are Learning?

“If I memorize enough stuff, I can get a good grade.”
Joseph Barrell

What do grades tell you?  Colleges and universities accept students in part based on their GPA, which is determined by their grades.  Students get accepted as transfers based on the grades they received.  A student’s ability to move on to the next course depends on grades.  One of the reasons schools created grades was to handle transfers and advanced degrees: “Increasingly, reformers saw grades as tools for system-building rather than as pedagogical devices––a common language for communication about learning outcomes.”  A student’s transcript is a list of the courses they took with the grades they received.  Some employers even look at grades when hiring.

We could forgive society for thinking that grades tell us everything.  In a lot of ways, modern educational institutions seem to center around grades.  Even a lot of educational professionals believe grades tell us everything.  I once participated in a meeting where a school was trying to work out an assessment to prove that an educational intervention was effective.  After a little discussion of some possible approaches, one of the individuals who had not participated up to that point spoke up and said:

“All of this is incredibly stupid, a complete waste of time.  We know this technique works.  Anyone that complains is just stupid.  After all the students pass the course, and we have good student distributions.  What more does anyone need besides grades.” (Quote Intentionally not cited)

After this statement, several people in the meeting agreed.  Now, there are a lot of issues with grades and GPAs.  Leaving aside grade inflation, let’s ask: do grades tell us how much a student learns in a course?  Were letter grades even meant to determine how much a student learns over the length of a course?  Or were grades just meant to show what skills a student had mastered at the end of the course?  The last two questions may sound similar, but they are not.

Let’s start with the problems we run into using grades to assess student learning, beginning with curved grades.  Faculty started curving grades based on the belief that student grades should match the normal distribution, an idea that took hold in the early part of the 20th century: “It is highly probable that ability, whether in high school or college, is distributed in the form of the probability curve.” (Finkelstein, Isidor Edward. The Marking System in Theory and Practice. No. 10. Baltimore: Warwick & York, 1913. p. 79.)  If faculty use a curved grading system, then any variations or changes in student performance produced by educational interventions will be covered up by the curved grades.
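A minimal, hypothetical simulation makes the cover-up concrete: if letter grades are assigned by class rank with fixed quotas per letter (one simple way to grade on a curve; the scores and quotas below are invented for illustration), the grade distribution is identical before and after an intervention that raises every student’s raw score.

```python
# Hypothetical illustration: rank-based curving with fixed letter quotas
# hides a uniform improvement in raw scores.
from collections import Counter

def curve(scores, quotas=(0.10, 0.25, 0.30, 0.25, 0.10)):
    """Assign letters A-F by class rank, a fixed share of students per letter."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    grades = ["F"] * len(scores)          # leftovers from rounding stay F
    start = 0
    for letter, share in zip("ABCDF", quotas):
        count = round(share * len(scores))
        for i in order[start:start + count]:
            grades[i] = letter
        start += count
    return grades

before = [55, 60, 62, 65, 68, 70, 72, 75, 80, 90]  # raw scores, old course
after = [s + 10 for s in before]                   # intervention: +10 points each

# Same letter-grade distribution either way; the gain is invisible.
print(Counter(curve(before)) == Counter(curve(after)))  # -> True
```

Because the curve depends only on each student’s rank, any change that moves every score by the same amount leaves the grades untouched.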

Outside of curved grades, there is also the fact that different faculty and different schools (if you work in a multi-school system) will often have different grading scales.  There is also the argument that the modern grading system is not about teaching but sorting: “All stratification systems require ‘a social structure that divides people into categories’ (Massey 2007, p. 242). Educational systems are among the most critical such structures in contemporary societies.” (Categorical Inequality: Schools As Sorting Machines)

Suppose we could deal with all the above issues.  We use a fixed (non-curved) grading system.  All faculty and schools use the same grading scale and the same assessments.  We record all the data year after year.  Now, if we introduce an educational innovation and a statistically significant number of students get higher grades, can we use grades to determine student learning?

In short, no.  If a higher percentage of students continues to get higher grades, you could say that you have found a better way to teach, but you can’t say anything about how much students have learned.  Assessing how much students learn in a course requires a piece of information that grades don’t provide.

To determine how much a student or group of students learns throughout a course, you need to know their starting point.  No student is a blank slate when they start a course.  Part of an educator’s job is helping students identify and deal with misconceptions, the incorrect information they bring into a class; but students also bring correct information into a course.  Suppose you assessed all your students at the beginning of your course and discovered that all the students who got As scored 90% or higher on your pre-assessment.  Did you teach your A students anything?

Measuring how much a student learns over a course based on their starting and ending knowledge is called Learning Gains.  The critical thing about Learning Gains is that they measure how much a student learned relative to how much they could have learned.  As an example, suppose your pre-test showed that Student A already knew 20% of the material that you will cover in the course, while Student B already knew 30%.  That means that to reach 100%, Student A needs to learn 80% while Student B needs to learn only 70%.  A student’s actual learning gain can be calculated using the mean normalized gain g = (post-test – pre-test) / (100% – pre-test).
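The formula can be sketched as a small helper function (the pre- and post-test scores below are hypothetical, chosen to continue the Student A and Student B example):

```python
# Mean normalized gain, g = (post - pre) / (100 - pre), with scores in percent.
def normalized_gain(pre, post):
    """Fraction of the available learning a student actually achieved."""
    return (post - pre) / (100 - pre)

# Hypothetical students: A starts at 20%, B starts at 30%.
g_a = normalized_gain(20, 84)  # learned 64 of a possible 80 points
g_b = normalized_gain(30, 86)  # learned 56 of a possible 70 points

print(g_a, g_b)  # both 0.8: the same fraction of possible learning
```

Note how the raw post-test scores differ (84% versus 86%), yet the normalized gain is identical: both students captured 80% of the learning available to them.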

Therefore, using pre- and post-tests, we can measure the actual amount of learning as a fraction of the total learning that can occur over the length of a course.  While grades are useful for a lot of things, they don’t tell us how much students learn throughout a course.  Remember, when you’re trying to improve your teaching, use a measure that will show you the information you need.

Thanks for Listening to My Musings
The Teaching Cyborg

1, 2, 3, 4, 5, and 6? Extinctions

“Extinction is the rule. Survival is the exception.”
Carl Sagan

An article in Scientific American asked an interesting question: Why Don’t We Hear about More Species Going Extinct?  There have been a lot of stories about the planet being in the middle of the 6th mass extinction.  Reports are saying that the rate of extinction is as much as 1,000 times normal.  If these articles are correct, shouldn’t we see articles in the news about species going extinct?  However, I wonder whether people even understand the context of mass extinctions.  If asked, “What is a mass extinction?” could you answer?

To understand what a mass extinction is, we need to understand life on earth and the fossil record.  All five existing mass extinctions are in the fossil record.  The first life to appear was microbial, around 3.7 billion years ago.  These microbes lived in a world that was quite different from present-day earth.  The atmosphere was almost devoid of O2 (molecular oxygen) and high in things like methane.  Molecular oxygen is highly reactive and will spontaneously react with any oxidizable compounds present.  Since the early earth was full of oxidizable compounds, any molecular oxygen that did appear was almost instantly removed by chemical reactions.

About 1.3 billion years later, the first cyanobacteria evolved; these were the first photosynthesizers.  Over possibly hundreds of millions of years, molecular oxygen produced by the cyanobacteria reacted with compounds in the environment until all the oxidizable compounds were used up.  A great example of this is banded iron deposits.  Only after all the oxidizable compounds were consumed could molecular oxygen begin to accumulate in the environment.

After another 1.7 billion years, the first multicellular organisms, sponges, appeared in the fossil record.  Around 65 million years later, a group of multicellular organisms called the Ediacaran Biota joined the sponges on the seafloor.  Most of these organisms disappeared around 541 million years ago.  However, the loss of the Ediacaran Biota is not one of the five mass extinction events.  How much of an evolutionary impact the Ediacaran Biota had on modern multicellular organisms is still an open question; most of the Ediacaran Biota had body plans quite different from modern organisms.

The next period is especially important; it started about 541 million years ago and lasted for about 56 million years.  The period is known as the Cambrian.  It is referred to as the Cambrian explosion because all existing types (phyla) of organisms we see in modern life emerged during this period.  The Cambrian explosion also matters here because the diverse number and types of organisms that evolved during it form the backdrop for the mass extinctions.

The first mass extinction occurred 444 million years ago, at the end of the Ordovician period.  During this extinction event, 86% of all species disappeared from the fossil record over about 4.4 million years.  Global recovery after the extinction event took about 20 million years.

The second mass extinction occurred at the end of the late Devonian period, the extinction that devastated the trilobites.  During this extinction event, 75% of all species disappeared from the fossil record over as much as 25 million years.

The third and largest mass extinction occurred at the end of the Permian period, 251 million years ago.  During this mass extinction, 96% of all species disappeared from the fossil record over 15 million years.  Research suggests that full global recovery after the Permian mass extinction took 30 million years.

The fourth mass extinction occurred 200 million years ago at the end of the Triassic period.  During this mass extinction, 80% of all species disappeared from the fossil record. The Triassic mass extinction appears to have occurred over an incredibly short period, less than 5000 years.

The fifth mass extinction occurred at the end of the Cretaceous period, 66 million years ago.  This extinction is by far the most famous of the mass extinctions because it is the one caused by the meteor strike that killed the dinosaurs.  During this extinction, 76% of all species disappeared from the fossil record.  Research suggests this mass extinction took only 32,000 years.

Now that we have looked at mass extinctions, what about regular extinctions?  The normal or background extinction rate is the number of extinctions per million species per year (E/MSY).  Current estimates put the background extinction rate at 0.1 E/MSY.  If the current extinction rate is 1,000 times the background rate, then the current extinction rate is 100 E/MSY.

The current estimate for the total number of species is 8.9 million.  That means that 890 species are going extinct every year, or about 2.5 species a day.  So why don’t we hear more about species going extinct if the extinction rate is that high?  First, the current catalog of identified species is 1.9 million, which means there are currently 7 million species (79%) that are undescribed.  That means about 700 of the 890 extinctions a year would be in species that scientists haven’t identified.
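These figures are easy to sanity-check. A quick back-of-the-envelope calculation, using only the estimates quoted above, looks like this:

```python
# Back-of-the-envelope check of the extinction-rate figures above.
total_species = 8.9e6        # estimated total number of species
described_species = 1.9e6    # species scientists have catalogued
rate = 100                   # extinctions per million species per year
                             # (1,000 x the 0.1 E/MSY background rate)

extinctions_per_year = total_species / 1e6 * rate
undescribed_fraction = (total_species - described_species) / total_species

print(extinctions_per_year)                          # 890 per year
print(extinctions_per_year / 365)                    # ~2.4 per day
print(undescribed_fraction)                          # ~0.79, i.e. 79% undescribed
print(extinctions_per_year * undescribed_fraction)   # ~700 unrecorded per year
```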

The second problem is that even with identified species, it is often difficult to know if a species has gone extinct.  The International Union for Conservation of Nature (IUCN) maintains the Red List of critically endangered species.  One of its categories is Possibly Extinct (PE), based on the last time anyone saw an organism.  For example, no one has seen the San Quintin Kangaroo Rat in 33 years, no one has seen the Yangtze River Dolphin in 17 years, and no one has seen the Dwarf Hutia in 82 years.  It is likely that these three species, along with several others, are extinct.

However, not being seen is not good enough to classify a species as extinct.  After all, the Coelacanth was thought to be extinct for 65 million years until a fisherman caught one in 1938.  For a species to be declared extinct, a thorough and focused search must be made for the organism.  These types of searches require time, personnel, and money, so they don’t often happen.  With the exception of particular cases, like Martha, the last passenger pigeon, who died on September 1, 1914, most species go extinct with a whimper, not a bang.

We don’t hear more about species going extinct because, even though we know extinctions are occurring, in many cases we don’t know which species they are.  Let’s return to the question: what is a mass extinction, and could a 6th one be happening?

Using the five existing mass extinctions as examples, a simple definition of a mass extinction is an event in which 75% or more of the existing species become extinct within a short (less than 30 million years) time.  Using the current estimated number of species and the current estimated rate of extinction, we can calculate how long it would take to reach the 75% mark: the answer is 7,500 years.  Since 7,500 years is far less than 30 million years, we could be on course for a 6th mass extinction.  However, as Doug Erwin says, we are not in the middle of the 6th mass extinction.  If we were in the middle of a mass extinction, as Dr. Erwin points out, cascade failures would already have started in the ecosystem, and there would be nothing we could do.  That is actually good news: we still have time to do something.
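The 7,500-year figure follows directly from the estimates above. A minimal sketch of the arithmetic, assuming the rate of 890 extinctions per year holds steady:

```python
# How long would it take to lose 75% of species at the current rate?
total_species = 8.9e6          # estimated total number of species
extinctions_per_year = 890     # from the estimate above
threshold = 0.75 * total_species  # species lost at the 75% mark

years_to_threshold = threshold / extinctions_per_year
print(years_to_threshold)  # 7500.0 years
```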

What we need is an accurate count of extinct species.  So, do you have a class that could do fieldwork?  There is probably a critically endangered species near you.  Maybe you will even be lucky and find the species; then you can help with a plan to save it.

Thanks for Listening to My Musings
The Teaching Cyborg

How do our Students Identify Expertise?

“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”
Charles Darwin

When can you use a title?  What makes someone an expert?  Over the years, I have built several pieces of furniture (tables, bookshelves, and chests); does that make me a master carpenter?  I have met several master carpenters and seen their work; I am most definitely not a master carpenter.  Using the book Make Your Own Ukulele: The Essential Guide to Building, Tuning, and Learning to Play the Uke, I’ve built two ukuleles.  While the author of the book says that once you’ve made “a professional-grade ukulele” you are a luthier, I don’t think I will be calling myself a luthier anytime soon.

I have a lot of “hobbies”: I have made knives, braided whips, bound books, made hard cider, and cooked more things than I can remember.  The only one of my hobbies for which I might be willing to use a title is photography.  I have been practicing outdoor and nature photography for 30+ years, and if you caught me in the right mood, I might call myself a photographer.  What makes photography different?  It’s not the time I have put into it, though I have long since passed the 10,000-hour mark.  I’ve had my work reviewed and accepted by people in the field, not every picture but enough to be comfortable with my skill.

I am selective when it comes to titles and proclaiming my expertise.  However, there are people who are not selective about their expertise.  Believing your knowledge to be greater than it is, is common enough to have a name: the Dunning–Kruger effect.  However, an even bigger problem than individuals overestimating their own knowledge is when they present themselves to others as experts.

The internet and self-publishing have increased our access to knowledge and different points of view.  Previously, it was simply not possible, for multiple reasons, to publish everything, so editors and review boards had to decide what to publish.

While the benefits of open publication are significant, we must ask: without “gatekeepers,” how do we identify expertise?  Many people may ask, “why do we care?”  Well, we have issues like GMOs, stem cell therapy, cloning, genetically engineered humans, and technology we have not even thought of yet.  How will people decide what to do with these technologies if they can’t identify expertise?

A great example of this is a recent study on GMOs: Those who oppose GMO’s know the least about them — but believe they know more than experts.  In the study, most people said that GMOs are unsafe to eat, which differs from scientists, the majority of whom say GMOs are safe.  People’s views of GMOs are not a surprise; news coverage of GMOs clearly shows how people feel.  The interesting thing was the second point covered in the study: the people who were most opposed to GMOs thought they knew the most about them.  However, when this group of self-identified experts had their scientific knowledge tested, they scored the lowest.

The difference between people’s beliefs and actual knowledge gets even more complicated when we move beyond GMOs.  While the consensus is that GMOs are safe and could be beneficial, their loss isn’t instantly deadly; after all, we haven’t developed the GMO that will grow in any condition and solve world hunger or capture all the excess CO2 from the atmosphere.  However, what about the anti-vaccination movement?  I’m not going to get into all the reasons people think they shouldn’t get vaccinated. However, let’s talk about how their actions will affect you.

I know a lot of people say it’s just a small percentage, and “I’ve been vaccinated, so I can ignore it.”  You may even be one of them, so let me ask: have you heard about things like vaccine efficacy and herd immunity?  Additionally, do you remember or know that measles can kill?  Let’s look at the numbers.  According to the CDC, one dose of the measles vaccine is 93% effective, and the recommended two doses are 97% effective.  That still means 3 out of every 100 fully vaccinated people can get the measles.  Even if everyone in the US were vaccinated, there would be roughly 9.8 million people still susceptible to measles.
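A quick sketch of that arithmetic, assuming a US population of roughly 327 million (a recent estimate, not a figure from the CDC source):

```python
# If 3 in 100 fully vaccinated people remain susceptible, how many
# Americans could still catch measles even with 100% vaccination?
us_population = 327e6     # assumed US population (~2018 estimate)
two_dose_efficacy = 0.97  # two-dose measles vaccine effectiveness

susceptible = us_population * (1 - two_dose_efficacy)
print(f"{susceptible / 1e6:.1f} million")  # 9.8 million
```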

A lot of people don’t believe this; after all, we don’t see millions of measles cases every year.  Herd immunity (community immunity) is the reason we don’t.  The idea is that if enough people in a community are immunized, illness can’t spread through the community.  So even if you are one of the individuals for whom the vaccine was ineffective, you don’t catch the disease, because the individuals around you have effective immunizations.

What percentage of vaccination against measles grants herd immunity?  According to a presentation by Dr. Sebastian Funk, Critical immunity thresholds for measles elimination, for herd immunity to work for measles, the population needs an immunization level of 93-95%.  According to the CDC, the vaccination rate for individuals 19-35 months old is 91.1%, while for individuals 13-17 years old it is 90.2%.  That is below the level needed for herd immunity.  Therefore, individuals choosing not to get vaccinated are endangering not just themselves but others.
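The 93-95% figure can be sanity-checked against the standard epidemiological rule of thumb for herd immunity. Note that this formula and the R0 range of 12-18 for measles are common textbook values, not taken from Dr. Funk's presentation:

```python
# Standard rule of thumb: herd immunity threshold = 1 - 1/R0, where
# R0 is the number of people one infected person infects in a fully
# susceptible population.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

# Measles R0 is commonly estimated at 12-18:
for r0 in (12, 18):
    print(f"R0={r0}: threshold = {herd_immunity_threshold(r0):.1%}")
# R0=12 gives ~91.7% and R0=18 gives ~94.4%, consistent with the
# 93-95% immunization level cited above.
```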

Fortunately, we know individuals can learn.  Earlier this year, Ethan Lindenberger, an 18-year-old who got himself vaccinated against his anti-vaccination mother’s wishes, testified before Congress about how he made the decision.  A lot of what he talked about was reading information from credible sources and real experts.

So how do we teach students to identify credible experts and valid information?  I have heard a lot of faculty say identifying reliable experts is easy: you look at who they are and where they work.  Well, it’s not quite that easy.  For example, Andrew Wakefield was a gastroenterologist, a member of the UK medical register, and a published researcher.  He claimed that the MMR vaccine was causing bowel disease and autism.  After his research was shown to be irreproducible and likely biased and fraudulent, the General Medical Council removed him from the UK medical register.  However, he continues to promote anti-vaccine ideas.

We need a better approach than where they work.  Dr. David Murphy suggests we interrogate potential experts using the tools of the legal system: interrogation and confrontation.  Gary Klein suggests a list of seven criteria:

  1. Successful performance—measurable track record of making good decisions in the past.
  2. Peer respect.
  3. Career—number of years performing the task.
  4. Quality of tacit knowledge, such as mental models.
  5. Reliability.
  6. Credentials—licensing or certification of achieving professional standards.
  7. Reflection.

While none of these criteria is a guarantee individually, taken as a whole they can give a functional assessment of expertise.  However, we don’t often interview every individual we encounter in research.  A third, and likely most applicable, approach involves reading critically and fact-checking.  To borrow a phrase, “we need to teach students to question everything.”

One approach is the CRAAP test (Currency, Relevance, Authority, Accuracy, and Purpose) developed by Sarah Blakeslee of California State University, Chico.  The CRAAP Test is a list of questions that the reader can apply to a source of information to help determine if the information is valid and accurate.  The questions for Currency are:

  • When was the information published or posted?
  • Has the information been revised or updated?
  • Does your topic require current information, or will older sources work as well?
  • Are the links functional?

The currency questions address the age of the information.  Each section of the CRAAP test has 4-6 questions.  The idea behind the CRAAP test is that once the researcher/student answers all the questions, they will be able to determine whether the information is good or bad.

As an alternative, or perhaps a complement, we should be teaching our students to think and behave like fact-checkers.  One of the most compelling arguments about fact-checkers comes from the book Why Learn History (When It’s Already on Your Phone) by Sam Wineburg.  In chapter 7, “Why Google Can’t Save Us,” the author talks about a study where historians (average age 47) from several four-year institutions were asked to compare information about bullying on two sites.  A long-standing professional medical organization maintains one site, while a small splinter group maintains the other (the issue that caused the split was adoption by same-sex couples).  A group of professional fact-checkers also examined the two sites.

Many of the professional historians decided that the splinter group was the more reliable source of information.  In contrast, the fact-checkers decided that the original organization was the most reliable.  The difference between the two groups is what the author calls vertical (historians) versus lateral (fact-checkers) reading.  The historians tend to read down the page and look at internal information.  The fact-checkers jump around and leave the page to check additional information, like where the two organizations came from, what others write about them, and what other groups and individuals say about the same questions.

The way information is published and disseminated has changed and will likely continue to change as the tools become easier to use and cheaper.  Education needs to change how we teach our students to evaluate information.  I think I will argue for a bit of lateral reading.

Thanks for Listening to My Musings
The Teaching Cyborg