Building, Build Thyself

“Good buildings come from good people, and all problems are solved by good design.”
Stephen Gardiner

Years ago, when I was in graduate school, an IT technician repairing the lab internet asked me, "So when will we be able to grow cars?" The first thing that popped into my mind was how complex a modern car is. According to Toyota, a modern car is made up of 30,000 parts if you count down to the bolts. Electric vehicles don't have as many "parts": according to an article in Handelsblatt Today, an electric car has 200 parts, while a gas or diesel car has more than 1,000. I answered, "It will be quite some time before we can grow a car. There is still a lot of work to do."

It might seem strange to be asked about growing cars; however, writers fill science fiction with the unbelievable. In the television show Earth: Final Conflict, the alien Taelons grew buildings. The Leviathans are living spaceships in the television series Farscape. While a science fiction show is not the best barometer for what is possible, it's not a measure of the impossible either. When the television show Star Trek debuted in 1966, most of its technology seemed impossible. However, a lot of "Star Trek" technology exists now. Google Translate, while not perfect, makes a passable universal translator. We also have handheld communicators (cell phones) and tablet computers. There is even a subset of 3D printers that focus on food (the replicator).

Technology tends to make truth out of our imagination, and that technology is often driven by challenging scientific endeavors. One of the most complex scientific efforts currently being pursued is sending people to Mars. One of the biggest problems is providing astronauts with safe housing. Beyond the extremely thin atmosphere on Mars, the surface of the planet has two other significant issues: the temperature and the surface radiation. The average daily temperature on Mars is -81 °F (-63 °C). And while the average yearly surface radiation on Mars is eight rads, on Earth it is 0.63 rads.

The surface of Mars is lethal to astronauts. Currently, the "best" idea for providing protective habitats is to bury them under several feet of Martian soil, which would provide insulation and protect against radiation. However, burying the habitats would require large equipment so that the astronauts could move large quantities of soil. Alternatively, we could send prebuilt habitats with walls that are highly insulated and resistant to radiation. The exact thickness and weight of the habitats would depend on the material used.

The biggest problem with these ideas is weight. Either the habitat or the equipment to build the habitat weighs a lot, and it is both expensive and difficult to transport heavy objects. According to NASA, it currently costs $10,000 per pound to put an object into Earth orbit.

So, what do science fiction technology, questions about growing cars, and visiting Mars have to do with each other? Well, science is again working towards making science fiction reality. NASA scientists are researching the possibility of using fungus (mushrooms) to grow buildings. When we think of a fungus, especially a mushroom, what we generally picture is just a small part of the whole organism: the fruiting body. The fruiting body of the mushroom produces spores and allows the fungus to spread.

The bulk of the mushroom grows underground or inside a decaying log and is called the mycelium, a fibrous material composed of hyphae. The idea is that engineers will seed lightweight shells with spores and dried food. Then, when the structures reach their destination, water collected from the local environment would activate the spores, which would grow to fill the shell, creating rigid, durable, and insulated buildings.

When the building is full-grown, withholding water and nutrients will stop the growth. Later, if the structure is damaged, astronauts can add water and nutrients, and the building will repair itself. Using biological materials like fungi to construct buildings would have an additional advantage for places like Mars. If you want to expand a building, add another shell filled with water and nutrients; the mycelium from the old structure will grow into and fill the new one.

The final advantage of biological buildings is that once they are no longer needed or reach the end of their life, they can be composted and used to either make new buildings or grow crops.  Reusing the fungus as nutrients will reduce the production of waste materials and make the site more efficient.

Additionally, using techniques like CRISPR, the mycelium could be engineered to secrete natural resins or rubbers, turning it into a complex composite material. It is even possible that eventually we could engineer the fungus to grow into specific shapes. Imagine a giant puffball mushroom engineered to grow into a hollow sphere 10-12 feet in diameter.

In addition to fungus, other groups are exploring the use of other organisms to build buildings. A group out of the University of Colorado at Boulder has developed a method using cyanobacteria. The researchers mix cyanobacteria, gelatin, and sand together in a brick-shaped mold. The bacteria grow into the gelatin, where they use light and CO2 to produce calcium carbonate. The result is a rigid, cement-like brick; after all, calcium carbonate is one of the components of cement.

Additionally, the bricks can heal themselves if cracked, or even reproduce themselves if broken in half. The researchers cut bricks in half, placed each half back in the mold with more gelatin and sand, and the bacteria reformed the brick.

While I don't expect to be living in a house I grew myself anytime soon, it is starting to look like science will again make science fiction a reality. While most of what scientists are developing is for use in resource-poor places like the Moon or Mars, we will see offshoots of this technology in use here on Earth. For instance, the bricks created by cyanobacteria absorb CO2 from the environment, unlike regular cement, which produces CO2.

Additionally, the company Basilisk, out of the Netherlands, is already selling self-healing concrete that uses calcium-carbonate-producing bacteria. For schools and universities, there is a tremendous research opportunity. While researchers have established the basic idea behind biological building materials, there is still a lot to learn. For example, large numbers of microorganisms deposit minerals; which ones work best? Does a mix of multiple microbes work better than one? What is the most efficient sand size: a single size or a mix of sizes? This type of research, which involves testing thousands of small permutations, is perfect for undergraduate researchers and undergraduate classes.

I don’t know what effect all these biological materials will have on construction, but I’m sure it will be fascinating.  Maybe next time someone asks me, “when will we grow cars?” I will tell them, “I’m not sure, but I can grow your garage.”

Thanks for Listening to My Musings
The Teaching Cyborg

How Genetic Engineering Should Be Done

“As medical research continues and technology enables new breakthroughs, there will be a day when malaria and most all major deadly diseases are eradicated on Earth.”
Peter Diamandis

It seems that I have written about genetic engineering in humans a lot. Most of the writing has focused on Dr. He Jiankui and his experiments to produce humans genetically resistant to HIV. For a while, it was not even clear where Dr. Jiankui was, though he was said to be under house arrest. On January 3, 2020, Nature published a news article, "What CRISPR-baby prison sentences mean for research." This article adds several pieces of information to the CRISPR-baby story. First, China has confirmed that there was an additional birth. Dr. Jiankui had previously stated that a second woman was pregnant. However, the pregnancy was in its earliest stages, so it was not clear whether it would carry to term. We now know that a third child was born.

Second, Chinese news announced that Dr. Jiankui and two of his colleagues were convicted. The Chinese court said that in the pursuit of "fame and profit," He and two colleagues had "flouted regulations and research and medical ethics by altering genes in human embryos that were then implanted into two women." Dr. Jiankui received the most severe sentence, three years in prison, while his collaborators received shorter sentences.

Some scientists think this is a positive step. "Tang [a science-policy researcher at Fudan University in Shanghai] says the immediate disclosure of the court's result demonstrates China's commitment to research ethics. This is a big step forward in promoting responsible research and the ethical use of technology, she says." Lu You, another scientist, worries this could negatively impact other research into CRISPR-mediated health care: "If I were a newcomer, a researcher wishing to start gene-editing research and clinical trials, the case would be enough to alert me to the cost of such violations."

I suspect a lot of people will find it surprising that, after the controversy over Dr. Jiankui's use of CRISPR to engineer babies, there is any work going on using CRISPR in humans. However, not only is there research into using CRISPR to treat human disease, some of it has reached the stage of clinical trials. Additionally, this use of CRISPR is a whole different animal from Dr. Jiankui's work. Now that we have reached the end of Dr. Jiankui's story, let's talk about how to do human genetic engineering correctly.

First, when it comes to human genetic engineering, there are two general classifications: heritable and nonheritable. As the name implies, heritable means the change can be passed on to children and released into the general population. In nonheritable genetic engineering, parents cannot pass the genetic changes to their offspring. In general, the difference between heritable and nonheritable genetic engineering is which cells scientists genetically engineer. Nonheritable engineering usually uses cells taken from an adult, often adult stem cells. In what follows, we will be discussing the use of CRISPR to modify adult blood stem cells.

Blood is composed of four components: red blood cells, white blood cells, platelets, and plasma. The blood cells have a finite lifetime, and the body continually replaces them using stem cells. For example, a red blood cell, also known as an erythrocyte, develops from the common myeloid progenitor cell (Figure 1 B). The common myeloid progenitor cell develops from the hemocytoblast (Figure 1 A), which is a multipotent stem cell. A hemocytoblast is a stem cell because when it divides, one daughter cell regenerates the hemocytoblast while the other develops into a mature cell type like an erythrocyte. It is a multipotent stem cell because its progeny can develop into multiple types of cells (Figure 1 D1-10).

A basic diagram of hematopoiesis. Image modified from Hematopoiesis simple.png by Mikael Häggström. Creative Commons Attribution-Share Alike 3.0 Unported.

In addition to regenerating themselves and producing a differentiating daughter cell, hemocytoblasts can divide to produce two hemocytoblasts.  Since hemocytoblasts can produce two hemocytoblast stem cells, scientists can expand populations of hemocytoblasts.  The ability to expand the stem cells makes them particularly useful for genetic engineering.

Hemocytoblasts can be grown clonally in culture in a lab. Growing cells clonally means that the population starts from a single cell; therefore, all the cells are genetically identical. The specifics of clonal cell culture are not essential to this article, but you can read the basics here. Clonal cell culture gives us the first advantage over embryonic genetic engineering. When scientists genetically engineer an embryo, the only way to know whether the change was successful in all the cells is to test all the cells, which would destroy the embryo. With clonal cells, you can test as many of the cells as you want and grow more. Additionally, since the cells are clonal, you know all the cells in the population are genetically the same.

The other advantage of genetically engineered hemocytoblasts is that they can be transplanted into patients using the techniques for bone marrow transplantation, which brings us to the current generation of CRISPR-mediated medical treatments.

The first clinical trial using CRISPR was carried out by oncologist Lu You at Sichuan University in Chengdu, China. The plan was to use CRISPR to increase the immune system's response to aggressive lung cancer. The researchers removed cells from the patients and then disabled the PD-1 gene, which should enhance the immune response. Dr. You is currently working on a manuscript describing the results of his work. This experiment is not a surprise; genetic engineering of immune cells for the treatment of cancer has a long history. What CRISPR has added to the technique is a faster, more accurate way to change the cells.

In addition to the cancer work in China, the US has also approved a CRISPR-mediated medical treatment. The treatment we know the most about involves Victoria Gray, who suffers from sickle cell anemia. Sickle cell anemia is a painful, debilitating disease that causes red blood cells to become misshapen and sticky. Victoria Gray volunteered to have her blood stem cells engineered so that her red blood cells express fetal hemoglobin, which the doctors hope will compensate for the defective adult hemoglobin that causes the disease. Victoria received the transfusion of genetically edited cells early this summer (2019), and the results are quite promising. Doctors will follow Victoria's progress for months, perhaps even decades. The researchers will also have to repeat the treatment with additional patients. Using gene editing to treat sickle cell anemia is by no means a done deal, but for the first time, individuals who suffer from the illness might have a real permanent treatment.

Hopefully, people will be able to see how the work that scientists are doing to engineer adult cells for the treatment of diseases is different from what Dr. Jiankui did.  One of the most important things we need to get across is that there is nothing wrong with CRISPR or gene editing in general.  Gene editing is a powerful research tool with lots of benefits not only for general research but also for medical treatment.  Scientific techniques are not good or bad by themselves; they are only good or bad in how people use them.  After all, I bet Victoria Gray likes CRISPR.

Thanks for Listening to My Musings
The Teaching Cyborg

Gene Editing in Humans Part Two

“The power to control our species’ genetic future is awesome and terrifying. Deciding how to handle it may be the biggest challenge we have ever faced.”
Jennifer A. Doudna

A year ago, He Jiankui announced that he had used CRISPR to create two genetically engineered baby girls. Since the initial announcement, there has been little new information released. The government terminated Dr. Jiankui's lab and research activities: "China's Vice-Minister of Science and Technology Xu Nanping quickly shut down Dr He's lab, ordering a full investigation and flagging some form of punishment for the researchers." It is also not clear where Dr. Jiankui is located: "Hong Kong media reported that the university president, Chen Shiyi, personally flew to Hong Kong to collect and escort Dr He back to Shenzhen, where he was put 'under house arrest'." The university denied Dr. He was detained, telling the South China Morning Post that "nobody's information is accurate" on his whereabouts, but refused to provide any details.

For about a year, this was the state of the information about the first two genetically engineered human beings. Then early this month (Dec 2019), MIT Technology Review published a series of articles about Dr. Jiankui's research. It seems that He Jiankui wrote a 4,699-word article titled "Birth of Twins After Genome Editing for HIV Resistance"; while the paper remains unpublished, Dr. Jiankui submitted it to Nature and JAMA, the Journal of the American Medical Association (China's CRISPR babies: Read exclusive excerpts from the unseen original research). Dr. Jiankui's unpublished work answers several questions that were left unanswered last year. However, the answers are perhaps more troubling than the speculation.

To understand what Dr. Jiankui was trying to accomplish, we need a little bit of history. Dr. Jiankui claims that he was trying to engineer humans to be resistant to HIV. The HIV-1 virus infects CD4 immune cells. Over time, an individual infected with HIV reaches a point where they cannot produce enough CD4 cells to mount a viable immune response, leading to the collapse of the immune system and often death by disease. Around 1996, a naturally occurring mutation was discovered in the CCR5 gene that rendered individuals resistant or possibly immune to infection by the HIV-1 virus. The CCR5 mutation is a deletion of 32 base pairs (called Δ32) in the coding sequence of the CCR5 gene. (Homozygous defect in HIV-1 coreceptor accounts for resistance of some multiply-exposed individuals to HIV-1 infection.)

The Δ32 mutation makes it so that the HIV-1 virus can't bind to the CCR5 protein. Since the HIV-1 virus uses the CCR5 receptor to enter cells, this mutation renders cells resistant or immune to infection by HIV-1. According to his paper, Dr. Jiankui used CRISPR technology to engineer the CCR5-Δ32 mutation into the in vitro fertilized embryos of a couple in which the husband was HIV positive and the wife was HIV negative. The genetic change, in turn, would confer immunity on the children born from these embryos.

In the abstract, Dr. Jiankui says they were successful in editing the CCR5 gene: "Genomic sequencing during pre-implantation genetic testing and after birth confirmed that the twins' CCR5 genes were edited successfully and are thus expected to confer either complete or partial HIV resistance." (China's CRISPR babies: Read exclusive excerpts from the unseen original research) However, the actual data in the paper show that one of the embryos has a frameshift mutation in the CCR5 gene, while the second embryo has a 15 bp deletion. While both mutations cause changes in the CCR5 protein, they do not create the same disruption as the CCR5-Δ32 mutation. It is not clear that these mutations will confer immunity to HIV, since not every mutation in CCR5 confers HIV immunity. Beyond the question of the effectiveness of the created mutations, new research suggests that being homozygous for the CCR5-Δ32 mutation leads to a decrease in life expectancy (CCR5-∆32 is deleterious in the homozygous state in humans). Additionally, it appears that the cells in the embryos may not have all changed to the same extent, leading to mosaicism.
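
To see why neither edit reproduces Δ32, it helps to remember that codons are three bases long, so only a deletion whose length is a multiple of three leaves the downstream reading frame intact. Here is a minimal sketch in Python (the helper function is hypothetical, for illustration only):

    def frame_effect(deletion_bp):
        """Classify a coding-sequence deletion by its effect on the reading frame."""
        if deletion_bp % 3 == 0:
            # Whole codons are removed; downstream codons still read correctly.
            return f"{deletion_bp} bp: in-frame, removes {deletion_bp // 3} codons"
        # The reading frame shifts, garbling every codon downstream.
        return f"{deletion_bp} bp: frameshift"

    # The 32 bp Δ32 deletion versus the 15 bp deletion reported in the paper
    for size in (32, 15):
        print(frame_effect(size))

A 32 bp deletion is a frameshift that truncates the protein, while a 15 bp deletion removes five codons and leaves the rest of the protein intact, which is why the two cannot be assumed to behave the same way.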

In addition to the potentially harmful nature of CCR5 mutations, there is a question about the type of genetic engineering. If you are going to induce a mutation in a gene because of an observed effect, you need to create the same mutation that caused the original effect, not a similar mutation. The reason is twofold: first, if you create a new mutation, you don't know that it will have the same effect as the original one; second, the new mutation could cause new and unintended consequences that the original mutation did not have.

Now, at first pass, Dr. Jiankui's motives sound reasonable and altruistic. He is trying to help a couple that is HIV positive have children. Dr. Jiankui even states in his abstract, "Millions of children are born annually with inherited genetic diseases or infectious diseases acquired from parents." (China's CRISPR babies: Read exclusive excerpts from the unseen original research) However, as stated by Rita Vassena, scientific director of the Eugin Group, there are well-established techniques to prevent transmission of HIV from parent to offspring.

"It is worth remembering that HIV infection is not passed on through generations like a genetic disease; the embryo needs to "catch" the infection. For this reason, preventive measures such as controlling the viral load of the patient with appropriate drugs, and careful handling of the gametes during IVF, can avoid contagion very efficiently." (China's CRISPR babies: Read exclusive excerpts from the unseen original research)

From a medical point of view, it is rarely, if ever, considered acceptable to use an experimental and potentially dangerous technique when effective options already exist.

Beyond the question of whether the technology was ready, Dr. Jiankui's foray into genetic engineering brings into sharp focus a question about the utilization of genetic engineering. Dr. Jiankui attempted to create a new biological function in the twins: he tried to engineer viral resistance. What makes this especially troubling is that while we know that CCR5-Δ32 confers resistance to HIV-1, research into the normal biological functions of CCR5 is still ongoing. The work showing that being homozygous for CCR5-Δ32 can be harmful to life span was published this year (2019). Instead of trying to create something "new," why didn't Dr. Jiankui try to fix a "broken" gene?

If Dr. Jiankui had at least tried to use genetic engineering to reverse a disease-causing mutation to its normal form, he would not have had to deal with the potential consequences of changing a gene's function; he would simply have restored it. While this article should make it clear that gene-editing technology is not yet precise enough for reproductive genetic engineering, at the rate the technology is improving, it will not be long before it can make specific changes in an embryo without producing additional defects.

A companion article, also from the MIT Technology Review, Opinion: We need to know what happened to CRISPR twins Lulu and Nana, states that Dr. Jiankui's papers need to be made public. Specifically, Dr. Kiran Musunuru says, "Why must the information be public? It's because He's work reveals serious, unresolved safety concerns. It's not clear that any effort to directly edit human embryos, even if done ethically and with full social approval, can reliably avoid these problems." While I think Dr. Musunuru's interpretation is a little extreme, I don't believe Dr. Jiankui had a good enough grasp of what he was trying to do for his research to serve as a cornerstone for defining the technology's limits. After all, most of the information uncovered in his unpublished work is in alignment with the concerns and beliefs put forward by scientists when the engineered births were first made public. I do agree that we need to discuss the uses and potential of genetic engineering.

However, for people to have discussions about the ethics of genetic engineering, people need to understand the basics of genetics and genetic engineering.  Humans have been using genetic engineering since we planted our first crops and domesticated the first animals.  Our faithful companion, the dog, is the product of thousands of years of genetic engineering.  We are entering a point when we can change our ecosystem and ourselves at a rate faster than ever before. However, how much does the general public know about genetics?  Can we make a legitimate decision about genetic engineering if we don’t even understand the basics of what is going on?

Thanks for Listening to My Musings
The Teaching Cyborg

Double-Blind Education

“It is a capital mistake to theorize before one has data.”
Arthur Conan Doyle (via Sherlock Holmes)

Several years ago, I was attending a weekly Discipline-Based Education Research (DBER) meeting. Two senior faculty members led and organized the weekly meetings. Both faculty members had trained in STEM disciplines. One had received their educational research training through a now-defunct National Science Foundation (NSF) program, while the other was mostly self-taught through multiple collaborations with educational researchers.

The group was discussing the design of a research study that the Biology department was going to conduct.  One of the senior faculty members said if they were serious, they would design a double-blind study.  The other senior faculty member said that not only should they not do a double-blind study, but a double-blind study was likely a bad idea. I don’t recall the argument over double-blind studies in education ever getting resolved. We also never found out why one of the faculty members thought double-blind studies were a bad idea in educational research.

Double-blind studies are a way to remove bias. Most people know about them from drug trials.  Educational reform is not likely to accidentally kill someone if an incorrect idea gets implemented due to a bias in the research.  However, a person’s experiences during their education will certainly have a lifelong impact.  While double-blind studies might be overkill in education research, there is the question of what is enough.  As I have said before, it is the job of educators to provide the best educational experience possible; this should extend to our research.

How do faculty know how they should teach? What research should faculty members use? Should we be concerned with the quality of educational research? Let me tell you a story (the names have been changed to protect the useless). A colleague of mine was looking for an initial research project for a graduate student. My colleague told me about a piece of educational "research" that was making the rounds on his campus. Alice, a well-respected STEM (Science, Technology, Engineering, and Math) faculty member, had observed her class and noted which methods of note-taking her students were using. At the end of the semester, she compared each method of note-taking to the students' grades. On average, the students that used the looking glass method of note-taking had grades one letter grade lower than those using the other methods.

Alice told this finding to a friend, the Mad Hatter, a DBER (Discipline-Based Education Research) expert. The Mad Hatter was so impressed with the result that he immediately started telling everyone about it and including it in all his talks. Now, because Alice did her study on the spur of the moment, she did not get research approval and signed participation agreements. The lack of paperwork meant that Alice couldn't publish her results. With such a huge effect, my colleague thought repeating the study with the correct permissions, so that it could be published, would be perfect for a graduate student.

They set up the study; this time, to assess which methods the students were using to take notes, they videotaped each class period. Additionally, the researchers conducted a couple of short questionnaires and interviewed a selection of the students. After a full semester of observation, the graduate student analyzed the data. The result: there was no significant difference between looking glass note-taking and all the other types. Just a little while ago, I saw a talk by the Mad Hatter; it still included Alice's initial results. Now, the interesting thing is that neither Alice nor the Mad Hatter would have accepted Alice's note-taking research methodology if it were a research project in their STEM discipline. However, as an educational research project, they were both willing to take the note-taking results as gospel.

While there is a lot of proper educational research, researchers have suggested that many faculty and policymakers have a low bar for what counts as acceptable educational research. The authors of We Must Raise the Bar for Evidence in Education suggest a solution to this low bar. Their recommendation is to change what we accept as the basic requirements of educational research. Most of the authors' suggestions center on eliminating bias (the idea at the core of the double-blind study). Their first suggestion is,

“to disentangle whether a practice causes improvement or is merely associated with it, we need to use research methods that can reliably identify causal relationships. And the best way to determine whether a practice causes an outcome is to conduct a randomized controlled trial (or “RCT,” meaning participants were randomly assigned to being exposed to the practice under study or not being exposed to it).”

One of the biggest problems with human research, which includes educational research, is the variability in the student population. As so many people are fond of saying, we are all individuals. By randomly assigning individuals to groups, you avoid the issue of concentrating traits in one group.
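
In code, random assignment is nothing more than a shuffle followed by a split. Here is a minimal sketch in Python (the names and group sizes are hypothetical):

    import random

    def random_assignment(participants, seed=None):
        """Randomly split participants into treatment and control groups."""
        rng = random.Random(seed)  # seeding makes the assignment reproducible
        pool = list(participants)
        rng.shuffle(pool)  # randomization breaks any link between traits and groups
        half = len(pool) // 2
        return pool[:half], pool[half:]  # (treatment, control)

    # Forty hypothetical students split into two groups of twenty
    treatment, control = random_assignment([f"student_{i}" for i in range(40)], seed=42)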

Their second suggestion is, “policymakers and practitioners evaluating research studies should have more confidence in studies where the same findings have been observed multiple times in different settings with large samples.”  The more times you observe something, the more likely it is to be true (there is an argument against this, but I will leave that for another time.)

Lastly, the authors suggest, "we can have much more faith in a study's findings when they are preregistered. That is, researchers publicly post what their hypotheses are and exactly how they will evaluate each one before they have examined their data." Preregistration is a lot like the educational practice used with student response systems, where the student/researcher is less likely to delude themselves about the results if they must commit to an idea ahead of time.

If we are going to provide the best educational experiences for our students, we need to know what the best educational experiences are. However, it is not enough to conduct studies; we need to be as rigorous as possible in them. The next time you perform an educational research project, take a minute and ask yourself how you can make the study more rigorous. Not only will your students benefit, so will your colleagues.

Thanks for Listening to My Musings
The Teaching Cyborg

Much Ado about Lectures

“Some people talk in their sleep. Lecturers talk while other people sleep”
Albert Camus

The point of research is to improve our knowledge and understanding. An essential part of research is the understanding that what we know today may be different from what we know tomorrow; as research progresses, our knowledge changes. Conclusions changing over time does not mean the earlier researchers were wrong. After all, the researchers based their conclusions on the best information they had available at the time. However, future researchers have access to new techniques, equipment, and knowledge, which might lead to different conclusions. Education is no different. As research progresses and we get new and improved methods, our understanding grows.

Out of all the topics in educational research, the most interesting is the lecture. No subject seems to generate as much pushback. A lot of faculty seem to feel the need to be offended on the lecture's behalf. Anyone who has trained at a university and received a graduate degree should understand that our understanding changes over time. Yet no matter how much researchers publish about the limited value of the lecture in education, large numbers of faculty insist the research must be wrong.

I suspect part of the pushback about lectures is that lecturing is what a lot of faculty have done for years. If they accept that the lecture is not useful, then they have been teaching wrong. Faculty shouldn't feel bad about lectures; after all, it is likely what they experienced in school. I think it is the faculty member's own experience with lectures as a student that leads to the problem. I have had multiple faculty tell me over the years some version of the statement, "The classes I had were lectures, and I learned everything, so lectures have to work."

The belief that you learned from lectures when you were a student is likely faulty. The reason this belief is defective is that you have probably never actually had a course that was exclusively a lecture course. I can hear everyone's response as they read that sentence: "What are you talking about? As a student, most of my classes were lectures. I went into the classroom, and the teacher stood at the front and lectured the whole period. So, of course, I have had lecture courses."

Again, I don't think most people have ever had an exclusive lecture course. Let's break down a course and see if you really can say you learned from the lecture. First, did your course have a textbook or other reading assignments? Just about every course I took had reading assignments. In most of my classes, I spent more time reading than I spent in class listening to the lecturer. Most of my courses also had homework assignments and written reports. Many of the courses also had weekly quizzes and one or two midterms where we could learn from the feedback.

Can you honestly say that in a lecture course you didn't learn anything from the course readings? That you didn't learn anything from the homework assignments and papers? That you didn't learn anything by reviewing the graded homework assignments, papers, quizzes, and midterms? The truth is, even in a traditional lecture course, there are lots of ways for students to learn. As a student, it is next to impossible to determine how much you learn from any one thing in a course. So, with all these other ways to learn in a "lecture" course, can you honestly say you learned from the lecture? In truth, the only way to have a course where you could say you learned from the lecture is if you had a course that consisted of only a lecture and a final: no readings, no assignments, no exams with feedback, only a lecture.

However, there is an even deeper issue with the lecture than the faculty insisting it works (without any real evidence). As faculty members, what should our goal as teachers be? It is quite reasonable to say that anyone teaching at a college, university, or any school should attempt to provide the best learning environment they can. So, even if we accept the argument that students can learn from, let's call it, a traditional lecture (I don't), if the research says there is a better way to teach, shouldn't we be using it?

If faculty approach teaching based on what is the best way to teach, it does not matter whether students can learn from lectures; if there is a better way to teach, we should use it. The research says we should be using active learning when we teach because it benefits the students. A recent article from PNAS, Active learning increases student performance in science, engineering, and mathematics, shows that students in classes that don't use active learning are 1.5 times more likely to fail the course. At a time when universities and the government are pushing for higher STEM graduation rates, active learning would make a big difference.

So how much of a problem is the lecture?  I know a lot of faculty that say they use active learning in their classrooms.  In a recent newsletter from the Chronicle of Higher Education, Can the Lecture Be Saved? Beth McMurtrie states, “Most professors don’t pontificate from the moment class starts to the minute it ends, but lecturing is often portrayed that way.”

However, a recent paper from the journal Science, Anatomy of STEM teaching in North American universities, might refute this statement. The Science paper shows, at least in the STEM disciplines, that when classroom teaching methods are observed rather than reported by survey, 55% of all the courses observed are traditional lectures. Only 18% of the courses are student-centered active learning environments. The rest have some amount of active learning.

Regardless of whether you think the lecture works or not, it is long past time to change. There is no reason to feel ashamed or think poorly of faculty that used lectures in the past. After all, for a lot of reasons, lectures were believed to work. However, we are also long past the time when anyone should be offended on the lecture's behalf. We need to use the best teaching methods currently available. The best methods are the techniques called active learning, because students measurably learn better with them than in a traditional lecture.

Thanks for Listening to My Musings
The Teaching Cyborg

Does a Letter Grade Tell You Whether Students are Learning?

“If I memorize enough stuff, I can get a good grade.”
Joseph Barrell

What do grades tell you? Colleges and universities accept students in part based on their GPA, which is determined by their grades. Students get accepted as transfers based on the grades they received. A student's ability to move on to the next course is dependent on grades. One of the reasons schools created grades was because of transfers and advanced degrees: "Increasingly, reformers saw grades as tools for system-building rather than as pedagogical devices––a common language for communication about learning outcomes." A student's transcript is a list of the courses they took with the grades they received. Some employers even look at grades when hiring.

We could forgive society for thinking that grades tell us everything. In a lot of ways, modern educational institutions seem to center around grades. Even a lot of educational professionals believe grades tell us everything. I once participated in a meeting where a school was trying to work out an assessment to prove that an educational intervention was effective. After a little bit of discussion about some of the possible approaches we could use, one of the individuals who had not participated up to that point spoke up and said:

"All of this is incredibly stupid, a complete waste of time. We know this technique works. Anyone that complains is just stupid. After all, the students pass the course, and we have good student distributions. What more does anyone need besides grades?" (Quote intentionally not cited)

After this statement, several people in the meeting agreed. Now, there are a lot of issues with grades and GPAs. Leaving aside the issue of grade inflation, let's ask the question: do grades tell us how much a student learns in a course? Were letter grades even meant to determine how much a student learns over the length of a course? Or maybe grades were just meant to show what skills a student had mastered at the end of the course? The last two questions may sound similar, but they are not.

Let's start with the problems we can run into using grades to assess student learning, beginning with curved grades. Faculty started curving grades based on the belief that student grades should match the normal distribution, an idea that began to take hold in the early part of the 20th century: "It is highly probable that ability, whether in high school or college, is distributed in the form of the probability curve." (Finkelstein, Isidor Edward. The Marking System in Theory and Practice. No. 10. Baltimore: Warwick & York, 1913. p. 79.) If faculty use a curved grading system, then any variations or changes in student performance based on educational interventions will be covered up by the curved grades.
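
To make that concrete, here is a minimal sketch in Python with hypothetical scores and percentile cutoffs. If an intervention raises every student's raw score by ten points, a strict rank-based curve hands out exactly the same letter grades, so the improvement disappears from the record:

    def curve(scores, cutoffs=(0.10, 0.35, 0.75, 0.90)):
        """Assign letter grades by percentile rank (illustrative cutoffs)."""
        ranked = sorted(scores)
        letters = ["F", "D", "C", "B", "A"]

        def grade(score):
            percentile = ranked.index(score) / len(ranked)
            return letters[sum(percentile >= c for c in cutoffs)]

        return [grade(s) for s in scores]

    before = [55, 62, 70, 78, 85, 93]
    after = [s + 10 for s in before]      # a genuine, uniform improvement
    print(curve(before) == curve(after))  # True: the curve hides the gain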

Outside of curved grades, there is also the fact that different faculty and different schools (if you work in a multi-school system) will often have different grading scales. There is also the argument that the modern grading system is not about teaching but sorting.  “All stratification systems require “a social structure that divides people into categories” (Massey 2007, p. 242). Educational systems are among the most critical such structures in contemporary societies.” (Categorical Inequality: Schools As Sorting Machines).

Suppose we could deal with all the above issues. We use a fixed (non-curved) grading system. All faculty and schools use the same grading system and the same assessments. We record all the data year after year. Now, if we introduce an educational innovation and a statistically significant number of students get higher grades, can we then use grades to determine student learning?

In short, no. If a higher percentage of students get higher grades, you could say that you have found a better way to teach, but you can't say anything about how much students have learned. Assessing how much students learn in a course requires a piece of information that students' grades don't provide.

To determine how much a student or group of students learns throughout a course, you need to know their starting point. No student is a blank slate when they start a course. While part of the job of an educator is helping students identify and deal with misconceptions, incorrect information brought into a class, students will also bring correct information into a course. Suppose you assessed all your students at the beginning of your course and discovered that all the students who got As scored 90% or higher on your pre-assessment. Did you teach your A students anything?

Measuring how much a student learns over a course based on their starting and ending knowledge is called learning gains. The critical thing about learning gains is that they measure learning as a fraction of how much a student could still learn. As an example, your pre-test showed that student A already knew 20% of the material that you will cover in the course, while student B already knew 30% of the material. That means that to reach 100%, student A needs to learn 80% while student B needs to learn only 70%. The actual learning gain of a student can be calculated using the mean normalized gain (g): g = (post-test − pre-test) / (100% − pre-test).
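
As a minimal sketch (the post-test scores are hypothetical, chosen to continue the example above), the calculation in Python is one line:

    def normalized_gain(pre, post):
        """Mean normalized gain g = (post - pre) / (100 - pre), scores in percent."""
        return (post - pre) / (100 - pre)

    # Student A: knew 20%, finished at 60%; learned 40 of a possible 80 points.
    print(normalized_gain(20, 60))  # 0.5
    # Student B: knew 30%, finished at 65%; learned 35 of a possible 70 points.
    print(normalized_gain(30, 65))  # 0.5, the same gain from different raw scores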

Therefore, using pre- and post-tests, we can measure the actual amount of learning as a fraction of the total learning that could occur over the length of a course. While grades are useful for a lot of things, they don't tell us how much students learn throughout a course. Remember, when you're trying to improve your teaching, use a measure that will show you the information you need.

Thanks for Listening to My Musings
The Teaching Cyborg

1, 2, 3, 4, 5, and 6? Extinctions

“Extinction is the rule. Survival is the exception.”
Carl Sagan

An article in Scientific American asked an interesting question: Why Don't We Hear about More Species Going Extinct? There have been a lot of stories about the planet being in the middle of the 6th mass extinction. Reports are saying that the rate of extinction is as much as 1,000 times normal. If these articles are correct, shouldn't we see articles in the news about species going extinct? However, I wonder if people even understand the context of mass extinctions. If asked what a mass extinction is, could you answer?

To understand what a mass extinction is, we need to understand life on Earth and the fossil record. All five existing mass extinctions are in the fossil record. The first life to appear was microbial, around 3.7 billion years ago. These microbes lived in a world that was quite different from present-day Earth. The atmosphere was almost devoid of O2 (molecular oxygen) and high in things like methane. Molecular oxygen is highly reactive and will spontaneously react with any oxidizable compounds present. The early Earth was full of oxidizable compounds, so any molecular oxygen that did appear was almost instantly removed by chemical reaction.

About 1.3 billion years later, the first cyanobacteria evolved; these were the first photosynthesizers. Over possibly hundreds of millions of years, molecular oxygen produced by the cyanobacteria reacted with compounds in the environment until all the oxidizable compounds were used up. A great example of this is banded iron deposits. Only then could molecular oxygen begin to accumulate in the environment.

After another 1.7 billion years, the first multicellular organisms, sponges, appeared in the fossil record. Around 65 million years later, a group of multicellular organisms called the Ediacaran biota joined the sponges on the seafloor. Most of these organisms disappeared around 541 million years ago. However, the loss of the Ediacaran biota is not one of the five mass extinction events. How much of an evolutionary impact the Ediacaran biota had on modern multicellular organisms is still an open question; most of them had body plans quite different from modern organisms.

The next period is especially important; it started about 541 million years ago and lasted for about 56 million years. The period is known as the Cambrian. The burst of diversification during this period is referred to as the Cambrian explosion because all existing types (phyla) of organisms we see in modern life emerged during it. The Cambrian explosion is also essential because the diverse number and types of organisms that evolved during it form the backdrop for mass extinctions.

The first mass extinction occurred 444 million years ago at the end of the Ordovician period. During this extinction event, 86% of all species disappeared from the fossil record over about 4.4 million years. Global recovery after the extinction event took about 20 million years.

The second mass extinction occurred at the end of the late Devonian period. The Devonian extinction is the event that devastated the trilobites. During this extinction event, 75% of all species disappeared from the fossil record over as much as 25 million years.

The third and largest mass extinction occurred at the end of the Permian period, 251 million years ago. During this mass extinction, 96% of all species disappeared from the fossil record over 15 million years. Research suggests that full global recovery from the Permian mass extinction took 30 million years.

The fourth mass extinction occurred 200 million years ago at the end of the Triassic period. During this mass extinction, 80% of all species disappeared from the fossil record. The Triassic mass extinction appears to have occurred over an incredibly short period, less than 5,000 years.

The fifth mass extinction occurred at the end of the Cretaceous period, 66 million years ago. This extinction is by far the most famous of the mass extinctions because it is the one associated with the meteor strike that killed the dinosaurs. During this extinction, 76% of all species disappeared from the fossil record. Research suggests this mass extinction took only 32,000 years.

Now that we have looked at mass extinctions, what about regular extinctions? The normal, or background, extinction rate is the number of extinctions per million species per year (E/MSY). Current estimates put the background extinction rate at 0.1 E/MSY. If the current extinction rate is 1,000 times the background rate, then the current extinction rate is 100 E/MSY.

The current estimate for the total number of species is 8.9 million. That means that 890 species are going extinct every year, or about 2.4 species a day. So why don't we hear more about species going extinct if the extinction rate is that high? First, the current catalog of identified species is 1.9 million, which means there are currently 7 million species (79%) that are undescribed. That means roughly 700 of the 890 extinctions a year would be in species that scientists haven't identified.
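
The arithmetic behind those figures is easy to check; here it is as a short Python sketch (all numbers are the estimates quoted above):

    background_rate = 0.1                  # extinctions per million species per year (E/MSY)
    current_rate = 1000 * background_rate  # 1,000x background = 100 E/MSY
    total_species = 8.9e6                  # estimated species on Earth
    described_species = 1.9e6              # species actually catalogued

    per_year = current_rate * total_species / 1e6        # 890 extinctions per year
    per_day = per_year / 365                             # ~2.4 per day
    undescribed = 1 - described_species / total_species  # ~79% of species unnamed
    unseen_per_year = per_year * undescribed             # ~700 unnoticed extinctions a year
    print(per_year, round(per_day, 1), round(unseen_per_year))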

The second problem is that even with identified species, it is often difficult to know if a species has gone extinct. The International Union for Conservation of Nature (IUCN) maintains the Red List of critically endangered species. One of its categories is Possibly Extinct (PE), based on the last time anyone saw an organism. For example, no one has seen the San Quintin Kangaroo Rat in 33 years, no one has seen the Yangtze River Dolphin in 17 years, and no one has seen the Dwarf Hutia in 82 years. It is likely that these three species, along with several others, are extinct.

However, not being seen is not good enough to classify a species as extinct. After all, the Coelacanth was thought to have been extinct for 65 million years until a fisherman caught one in 1938. For a species to be declared extinct, a thorough and focused search for the organism must be made. These types of searches require time, personnel, and money; therefore, they don't often happen. So, with the exception of particular cases, like Martha, the last passenger pigeon, who died on September 1, 1914, most species go extinct with a whimper, not a bang.

We don't hear more about species going extinct because, even knowing extinctions are occurring, in many cases we simply don't know about them. That returns us to the questions of what a mass extinction is, and whether a 6th could be happening.

Using the five existing mass extinctions as examples, a simple definition of a mass extinction is an event in which 75% or more of the existing species become extinct within a short (less than 30 million years) time. Using the current estimated number of species and the current estimated rate of extinction, we can calculate how long it would take to reach the 75% mark; the answer is about 7,500 years. Since 7,500 years is far less than 30 million years, we could be on course for a 6th mass extinction. However, as Doug Erwin says, we are not in the middle of the 6th mass extinction. If we were in the middle of a mass extinction, as Dr. Erwin explains, cascade failures would already have started in the ecosystem, and there would be nothing we could do. That is the good news: we still have time to do something.
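
That 7,500-year figure is simple to verify; here is a minimal sketch using the same estimates (and the simplifying assumption that the extinction rate stays constant):

    total_species = 8.9e6
    extinctions_per_year = 890  # from the calculation above
    years_to_75_percent = 0.75 * total_species / extinctions_per_year
    print(round(years_to_75_percent))  # 7500, far short of 30 million years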

What we need is an accurate count of extinct species. So, do you have a class that could do fieldwork? There is probably a critically endangered species near you. Maybe you will even be lucky and find the species; then you can help with a plan to save it.

Thanks for Listening to My Musings
The Teaching Cyborg