“The New Ph.D.” Again

“There are far, far better things ahead than any we leave behind.”
C.S. Lewis

A while ago, I wrote the blog post Re-Envisioning the PhD +13 Years.  While a graduate student, I was associated with the Woodrow Wilson Re-Envisioning the PhD project.  In that blog post, I reviewed my old notes to see if my opinion had changed. I concluded that the problem was not with the PhD degree but with people trying to hijack the degree for other uses.

Today I came across an article in the Chronicle of Higher Education, “The New Ph.D.: Momentum grows to rewrite the rules of graduate training.” While I am reluctant to dip back into the topic of changing the PhD, there is a lot going on, and I think we should give the Chronicle article a look.  As Rear Admiral Grace Hopper said, “The most dangerous phrase in the language is: We’ve always done it this way.” (By the way, if you don’t know who Grace Hopper is, shame on you; go educate yourself.)

The article starts with a story about Meg Berkobien, a graduate student in comparative literature.  Her dissertation was on 19th-century Catalan-language periodicals.  Meg was not motivated by her project and eventually decided to leave the program.  In a letter to her department chair, Meg wrote,

“Every time I sit down to write, I’m overwhelmed by a quiet despair — that our world is literally on fire and I’m not doing nearly enough to build a better world,” Berkobien wrote in an email to her department chair. “Pair these concerns with a downright awful job market, and I hope it’s clear why I think my best option is to leave.”

Instead of letting Berkobien leave, the department let her “reimagine her dissertation as a series of essays focused largely on her public-facing work, which included building a translators’ collective that prints books and creating translation workshops for immigrant high schoolers learning English.” Beyond Berkobien’s story, the authors devote a whole section of the Chronicle article to the dissertation.

One complaint is that the dissertation does not prepare students for jobs outside of academia. Since the bulk of doctoral graduates will work outside of academia, maybe the dissertation should reflect that.  Sidonie Smith argues that “the one-size-fits-all proto-book structure shackles scholarship” and that “it often yields bloated projects that don’t merit such long-form treatment.” Meanwhile, the article reports that Earl Lewis “made a much-discussed suggestion that historians should consider allowing students to pursue co-authored dissertations. This, he says, would enable them to produce better answers to really big scholarly questions.”

The Chronicle article lists several programs experimenting with alternative dissertations. It also contains several examples where alternative dissertation formats have been successful. However, the article never talks about the purpose of the dissertation.  Why is the dissertation part of the PhD?  Additionally, the dissertation is not that old.  According to DED: A Brief History of the Doctorate, a university awarded the first doctoral degree in the 12th century.  Universities awarded the first PhD in the 19th century, and Yale awarded the first US PhD in 1861.  Therefore, in the US, the PhD dissertation is at most only 159 years old.

What is the dissertation’s purpose? Why should students write anything? The PhD is predominantly a research degree. If you do a web search asking what a PhD is, somewhere in the description will be a phrase like “original research” or “contribute new knowledge to your field.” Writing the dissertation is how you show that your research answered the original research question.

I think the writers of the Chronicle article are confusing several different problems. Let’s use Meg Berkobien as an example.  Meg was not engaged by her original research into 19th-century Catalan-language periodicals.  As the article said, “What excited her was political organizing and mobilizing her translation expertise outside academe.” The department let her change her research topic to her translation work outside academia. They also changed the format of her dissertation.  Did the department have to do both?  Why couldn’t they have let Meg do a research project about her translation work outside academia while still writing a traditional dissertation?

Over the years, I have met many graduate students who have complained about their research projects.  There was an English lit major who wanted to study a 20th-century science fiction writer; the student’s advisors said no because science fiction wasn’t scholarly enough.  There was a biology student who wished to understand society’s comprehension of science; the student was told that it was not scientific enough.  I know an engineering student who wanted to understand how engineering impacted government policy; their advisor told them the department didn’t care.

In the end, these three students and many others left school.  In these cases, the problem was not with the dissertation but with what was considered “scholarly” research.  It seems to me that almost any topic can be a research project, especially if we truly believe that all knowledge is worthwhile.  Do books have to be 100, 200, or 400 years old to be worthy of research?  Isn’t it worthwhile to understand the best way to communicate scientific information?  The dissertation does not have to change to let in new and modern research questions.

The other reason given to change the dissertation is that it does not adequately prepare a student for work outside of academia.  While it is undoubtedly vital to train people so that they can be happy, contributing members of society, we also need to train people for jobs in academia and research.  Part of the problem is over-enrollment in graduate programs, coupled with schools not being transparent about job prospects.  Several faculty members have told me that the only reason their departments enroll as many graduate students as they do is to fill graduate teaching positions, not because they actually need them.

Schools should be aware of students’ futures and provide prospective students with realistic expectations.  But instead of changing the dissertation, why not allow students to create additional projects or participate in internships to complement and enhance their graduate experiences?

The last issue, brought up by Dr. Smith and Dr. Lewis, is that the current dissertation model inhibits the type of research and questions that students can ask. These are good arguments for changes to the dissertation.  If a change to the structure of the dissertation improves students’ ability to do research or opens up new kinds of research, then we should make that change.

Continuing to do something simply because we have always done it that way is foolish.  It is equally foolish to change something because of problems with something else.  It is always worth looking for a better way to do things, but just because something is not a perfect fit for everything doesn’t mean it should be changed.  After all, there are things for which a PhD is ideal.  As time and society change, schools will undoubtedly have to adapt to provide an educated society. However, as I have said before, perhaps the appropriate move is to create a new degree, not to edit the old degree out of existence.

Thanks for Listening to My Musings
The Teaching Cyborg

PS. In case you think rose-tinted glasses biased my opinion, I hate my dissertation, and not just because the company my school used to print and bind the digital files did such a horrible job that the entire document looks like a bad copy produced on a low-quality copy machine.

I suppose what gets me is that while I was worried about writing a document that large, I had a plan and was looking forward to creating the pseudo-book.  I had a story to tell: present the background, which showed where there were holes in our knowledge; then develop the experimental methods to address the gaps.  Finally, I would get to show how my data added to the models and led to new questions for future research.  Instead, my department wanted a catalog of every single experiment I did.  In the end, I felt like “my” dissertation belonged more to my committee than it did to me.

Misconceptions in Cell Biology

“Every living thing is made of cells, and everything a living thing does is done by the cells that make it up.”
L.L. Larison Cudmore

Cells are the building blocks of all biology.  Every living organism is composed of cells.  All cells come from preexisting cells.  If you are a trained biologist, you recognize the last two sentences as the Cell Theory, one of the core theories of modern biology.  Much of the Cell Theory seems basic considering what we know.  However, remember that cells are smaller than can be seen by the naked eye; until the invention of the microscope, we didn’t even know cells existed.  The word cell was first used by Robert Hooke in the 1660s while examining thin slices of cork.  Hooke used the word cell to describe the structures he observed because they reminded him of the small rooms, or cells, in which monks lived.

Additionally, it wasn’t until Louis Pasteur’s famous swan-necked flask experiment in 1859 that the idea of spontaneous generation, life spontaneously arising from nonliving organic material, was disproven; therefore, every cell must come from a preexisting cell. Given the importance of the Cell Theory, it is not surprising that students spend a lot of time learning about the structure, function, and behavior of cells.  However, because cells are not visible to the naked eye, it is also not surprising that many students have misconceptions about them.

What is a misconception? Scientific misconceptions “are commonly held beliefs about science that have no basis in actual scientific fact. Scientific misconceptions can also refer to preconceived notions based on religious and/or cultural influences. Many scientific misconceptions occur because of faulty teaching styles and the sometimes-distancing nature of true scientific texts.”  When we teach students biology, how good are we at dealing with misconceptions?  The critical questions are: what misconceptions do students hold, and how do we deal with them?

Musa Dikmenli looked at the misconceptions held by student teachers in his article Misconceptions of cell division held by student teachers in biology: A drawing analysis.  In the study, Dikmenli examined 124 student teachers’ understanding of cell division.  According to the study, these student teachers “had studied cell division in cytology, genetics, and molecular biology, as a school subject during various semesters.”  Therefore, the student teachers had already studied cell division at the college level.

At a basic level, cell division is the process of a single cell dividing to form two cells.  Scientists organize cell division (the cell cycle) into five phases: Interphase, Prophase, Metaphase, Anaphase, and Telophase.  The cell cycle is often depicted using a circle.

Figure of the cell cycle at different levels of detail. Created by PJ Bennett

Instead of answering quiz questions or writing essays, the students were “asked to draw mitosis and meiosis in a cell on a blank piece of A4-sized paper. The participants were informed about the drawing method before this application.” (Dikmenli) The use of drawing as an analysis method has several advantages, the most important of which is that it can be used across languages and by students of multiple nationalities.

After analyzing the drawings, Dikmenli found that almost half of the student teachers had misconceptions about cell division.  Some of the most common misconceptions concern when DNA synthesis occurs during mitosis and the ploidy, the number of chromosome copies, during meiosis.  These results mean that individuals who are going to teach biology at the primary and high school level are likely to pass their misconceptions along to their students.

So, where does the problem with student misconceptions start?  Students learn misconceptions about cell division from their teachers.  However, those teachers all have biology degrees from colleges, and their college faculty failed to address their misconceptions. Perhaps, though, we are not asking the correct question.  Instead of trying to decide who, K-12 or college, is responsible for correcting student misconceptions, we should ask why students get through any level of school with misconceptions at all.

I can hear all the teachers now: obviously, students get through school with misconceptions because misconceptions are difficult to correct. However, we know a lot about teaching to correct misconceptions.  Professor Taylor presents one method, refutational teaching, in the blog post GUEST POST: How to Help Students Overcome Misconceptions.  With a quick Google search, you can find other supported methods.  In all cases, for the student to overcome the misconception, the student must actively acknowledge the misconception while confronting facts that counter it.

It is unlikely that the problem is that it is hard to teach to misconceptions; let’s be honest, most teachers at any level are willing to use whatever techniques work.  No, I suspect the real problem is that most teachers don’t realize their students have misconceptions. So the real question is why instructors don’t realize students have misconceptions.  In this case, I suspect it is the method of assessment.

Most classroom assignments and assessments ask the students to provide the “right” answer.  Right-answer assessment is especially prevalent in large lecture classes, where multiple-choice questions are common.  However, the fact that a recent review article, A Review of Students’ Common Misconceptions in Science And Their Diagnostic Assessment Tools, covers 111 research articles suggests that identifying misconceptions is not complicated if teachers use the correct methods.  Therefore, incorporating the proper assessment methods alongside teachers’ standard methods will help teachers identify student misconceptions.

However, it is not enough to identify misconceptions; they must be identified early enough in the course that the teacher can address them.  Finding misconceptions is a perfect justification for course pretests, either a comprehensive pretest at the beginning of the course or smaller pretests at the start of units.  In an ideal world, pretests would be a resource that departments or schools build, maintain, and make available to their teachers, ideally as a question bank.  Until schools provide resources to identify misconceptions, think about adding a pretest to determine your students’ misconceptions.  It will help you do a better job in the classroom.
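
To give one concrete picture of what such a shared resource could look like, here is a minimal sketch in Python; the question, the misconception tags, and the function are all my own invented illustration, not a description of any existing question bank or of Dikmenli's instrument.

```python
# A tiny, hypothetical question bank: each wrong answer is tagged with the
# misconception it points to, so a pretest can report which misconceptions
# a class holds rather than just an overall score.

question_bank = [
    {
        "topic": "cell division",
        "question": "When is DNA replicated?",
        "choices": {
            "A": ("During interphase (S phase)", None),                      # correct
            "B": ("During mitosis", "thinks DNA synthesis occurs in mitosis"),
            "C": ("During cytokinesis", "thinks DNA synthesis occurs at division"),
        },
    },
]

def summarize_pretest(responses):
    """Tally how often each tagged misconception shows up in student answers."""
    counts = {}
    for student_answers in responses:
        for question, choice in zip(question_bank, student_answers):
            _, misconception = question["choices"][choice]
            if misconception:
                counts[misconception] = counts.get(misconception, 0) + 1
    return counts

# Example: three students answer the one-question pretest.
print(summarize_pretest([["B"], ["A"], ["B"]]))
# -> {'thinks DNA synthesis occurs in mitosis': 2}
```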

Thanks for Listening to My Musings
The Teaching Cyborg

Double-Blind Education

“It is a capital mistake to theorize before one has data.”
Arthur Conan Doyle (via Sherlock Holmes)

Several years ago, I was attending a weekly Discipline-Based Education Research (DBER) meeting. Two senior faculty members led and organized the weekly meetings.  Both faculty members had trained in STEM disciplines.  One had received their educational research training through a now-defunct National Science Foundation (NSF) program, while the other was mostly self-taught through multiple collaborations with educational researchers.

The group was discussing the design of a research study that the Biology department was going to conduct.  One of the senior faculty members said if they were serious, they would design a double-blind study.  The other senior faculty member said that not only should they not do a double-blind study, but a double-blind study was likely a bad idea. I don’t recall the argument over double-blind studies in education ever getting resolved. We also never found out why one of the faculty members thought double-blind studies were a bad idea in educational research.

Double-blind studies are a way to remove bias. Most people know about them from drug trials.  Educational reform is not likely to accidentally kill someone if an incorrect idea gets implemented due to a bias in the research.  However, a person’s experiences during their education will certainly have a lifelong impact.  While double-blind studies might be overkill in education research, there is the question of what is enough.  As I have said before, it is the job of educators to provide the best educational experience possible; this should extend to our research.

How do faculty know how they should teach? What research should faculty members use?  Should we be concerned with the quality of educational research? Let me tell you a story (the names have been changed to protect the useless).  A colleague of mine was looking for an initial research project for a graduate student. My colleague told me about a piece of educational “research” that was making the rounds on his campus.  Alice, a well-respected STEM (Science, Technology, Engineering, and Math) faculty member, had observed her class and noted what methods of note-taking her students were using.  At the end of the semester, she compared the method of notetaking to the students’ grades. On average, the students who used the looking glass method of notetaking had grades that averaged one letter grade lower than those of students using the other methods.

Alice told this finding to a friend, the Mad Hatter, a DBER expert.  The Mad Hatter was so impressed with the result that he immediately started telling everyone about it and including it in all his talks.  Now, because Alice did her study on the spur of the moment, she did not get research approval or signed participation agreements.  The lack of paperwork meant that Alice couldn’t publish her results.  With such a huge effect, my colleague thought that repeating the study with the correct permissions, so that it could be published, would be perfect for a graduate student.

They set up the study; this time, to assess what methods the students were using to take notes, they videotaped each class period.  Additionally, the researchers conducted a couple of short questionnaires and interviewed a selection of the students.  After a full semester of observation, the graduate student analyzed the data. The result: there was no significant difference between looking glass notetaking and all the other types.  Just a little while ago, I saw a talk by the Mad Hatter; it still included Alice’s initial results.  Now, the interesting thing is that neither Alice nor the Mad Hatter would have accepted Alice’s notetaking research methodology if it had been a research project in their STEM discipline.  However, as an educational research project, they were both willing to take the notetaking results as gospel.

While there is a lot of proper educational research, researchers have suggested that many faculty and policymakers have a low bar for what counts as acceptable educational research.  The authors of We Must Raise the Bar for Evidence in Education suggest a solution to this low bar.  Their recommendation is to change what we accept as the basic requirements of educational research.  Most of the authors’ suggestions center on eliminating bias (the idea at the core of the double-blind study). Their first suggestion is,

“to disentangle whether a practice causes improvement or is merely associated with it, we need to use research methods that can reliably identify causal relationships. And the best way to determine whether a practice causes an outcome is to conduct a randomized controlled trial (or “RCT,” meaning participants were randomly assigned to being exposed to the practice under study or not being exposed to it).”

One of the biggest problems with human research, which includes educational research, is the variability in the student population.  As so many people are fond of saying, we are all individuals.  By randomly assigning individuals to groups, you avoid the issue of concentrating traits in one group.
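
As a concrete illustration of the mechanic the authors describe, here is a minimal sketch in Python of a random assignment; the roster, the seed, and the group sizes are invented for the example, and a real study would of course also need consent and research approval.

```python
import random

# Hypothetical roster; in a real study these would be consenting participants.
students = [f"student_{i:03d}" for i in range(1, 61)]

random.seed(42)            # fixed seed only so the example is reproducible
random.shuffle(students)   # randomization is what keeps traits from clustering

midpoint = len(students) // 2
treatment = students[:midpoint]   # exposed to the practice under study
control = students[midpoint:]     # not exposed

print(len(treatment), len(control))   # 30 30
# Because the assignment ignores every student trait (GPA, major, prior courses),
# those traits tend to balance out across the two groups as the sample grows.
```

The point is not the code itself but that the split is made before anyone looks at outcomes, which is what lets a difference between the groups be read as caused by the practice rather than by who happened to be in which group.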

Their second suggestion is, “policymakers and practitioners evaluating research studies should have more confidence in studies where the same findings have been observed multiple times in different settings with large samples.”  The more times you observe something, the more likely it is to be true (there is an argument against this, but I will leave that for another time.)

Lastly, the authors suggest, “we can have much more faith in a study’s findings when they are preregistered. That is, researchers publicly post what their hypotheses are and exactly how they will evaluate each one before they have examined their data.”  Preregistration is a lot like the educational practice used with student response systems, where the student or researcher is less likely to delude themselves about the results if they must commit to an idea ahead of time.

If we are going to provide the best educational experiences for our students, we need to know what the best educational experiences are.  However, it is not enough to conduct studies; we need to be as rigorous as possible in our studies.  The next time you perform an educational research project, take a minute and ask yourself how you can make the study more rigorous.  Not only will your students benefit, so will your colleagues.

Thanks for Listening to My Musings
The Teaching Cyborg

Much Ado about Lectures

“Some people talk in their sleep. Lecturers talk while other people sleep”
Albert Camus

The point of research is to improve our knowledge and understanding.  An essential part of research is the understanding that what we know today may be different from what we know tomorrow; as research progresses, our knowledge changes.  Conclusions changing over time does not mean the earlier researchers were wrong. After all, the researchers based their conclusions on the best information they had available at the time.  However, future researchers have access to new techniques, equipment, and knowledge, which might lead to different conclusions.  Education is no different. As research progresses and we get new and improved methods, our understanding grows.

Out of all the topics in educational research, the most interesting is the lecture.  No subject seems to generate as much pushback.  A lot of faculty seem to feel the need to be offended on the lecture’s behalf.  Anyone who has trained at a university and received a graduate degree should understand that our understanding changes over time.  Yet no matter how much researchers publish about the limited value of the lecture in education, large numbers of faculty insist the research must be wrong.

I suspect part of the pushback about lectures comes from the fact that lecturing is what a lot of faculty have done for years.  If they accept that the lecture is not effective, then they have been teaching wrong.  Faculty shouldn’t feel bad about lectures; after all, lecturing is likely what they experienced in school.  I think it is faculty members’ own experience with lectures as students that leads to the problem.  Over the years, multiple faculty have told me some version of the statement, “The classes I had were lectures, and I learned everything, so lectures have to work.”

The belief that you learned from lectures when you were a student is likely faulty.  The reason this belief is flawed is that you have probably never actually had a course that was exclusively a lecture course.  I can hear everyone’s response as they read that sentence: “What are you talking about? As a student, most of my classes were lectures.  I went into the classroom, and the teacher stood at the front and lectured the whole period. So, of course, I have had lecture courses.”

Again, I don’t think most people have ever had an exclusively lecture course. Let’s break down a course and see if you really can say you learned from the lecture.  First, did your course have a textbook or other reading assignments?  Just about every course I took had reading assignments.  In most of my classes, I spent more time reading than I spent in class listening to the lecturer.  Most of my courses also had homework assignments and written reports.  Many of the courses also had weekly quizzes and one or two midterms where we could learn from the feedback.

Can you honestly say that in a lecture course, you didn’t learn anything from the course readings?  That you didn’t learn anything from the homework assignments and papers?  That you didn’t learn anything by reviewing the graded homework assignments, papers, quizzes, and midterms?  The truth is, even in a traditional lecture course, there are lots of ways for students to learn.  As a student, it is next to impossible to determine how much you learned from any one thing in a course.  So, with all these other ways to learn in a “lecture” course, can you honestly say you learned from the lecture?  In truth, the only way to have a course where you could say you learned from the lecture is a course that had only a lecture and a final: no readings, no assignments, no exams with feedback, only a lecture.

However, there is an even deeper issue with the lecture than the faculty insisting it works (without any real evidence).  As faculty members, what should our goal as teachers be?  It is quite reasonable to say that anyone teaching at a college, university, or any school should attempt to provide the best learning environment they can.  So, even if we accept the argument that students can learn from, let’s call it, a traditional lecture (I don’t), if the research says there is a better way to teach, shouldn’t we be using it?

If faculty approach teaching based on what is the best way to teach, it does not matter whether students can learn from lectures; if there is a better way to teach, we should use it.  The research says we should be using active learning when we teach because it benefits the students.  A recent article from PNAS, Active learning increases student performance in science, engineering, and mathematics, shows that students in classes that don’t use active learning are 1.5 times more likely to fail the course.  At a time when universities and the government are pushing for higher STEM graduation rates, active learning would make a big difference.

So how much of a problem is the lecture?  I know a lot of faculty that say they use active learning in their classrooms.  In a recent newsletter from the Chronicle of Higher Education, Can the Lecture Be Saved? Beth McMurtrie states, “Most professors don’t pontificate from the moment class starts to the minute it ends, but lecturing is often portrayed that way.”

However, a recent paper from the journal Science, Anatomy of STEM teaching in North American universities, might refute this statement.  The Science paper shows, at least in the STEM disciplines, that when classroom teaching methods are observed rather than reported by survey, 55% of all the courses observed are traditional lectures.  Only 18% of the courses are student-centered active learning environments.  The rest have some amount of active learning.

Regardless of whether you think the lecture works or not, it is long past time to change.  There is no reason to feel ashamed or to think poorly of faculty who used lectures in the past.  After all, for a lot of reasons, lectures were believed to work.  However, we are also long past the time when anyone should be offended for the lecture’s sake.  We need to use the best teaching methods currently available.  The best methods are the techniques called active learning, because with them students measurably learn better than in a traditional lecture.

Thanks for Listening to My Musings
The Teaching Cyborg

In Education What Does It Mean to Be Competent?

“What you know is more important than where or how you learned it.”
Tina Goodyear

While competency-based education (CBE) has been part of US education for 40 or 50 years, interest has been increasing over the last couple of years.  A faculty member I was working with once described a problem he was having at his school.  He worked with a system of schools that used a common course system across all their campuses.  A common course system can solve a lot of issues: it allows students to transfer between schools smoothly, and it lets the system office negotiate guaranteed transfer agreements with other universities for all the schools in the system, rather than each school having to negotiate individual transfer agreements.

However, the way his system maintained its common course system was causing problems.  The system office maintained a central list of the learning outcomes for the common courses.  When a school taught a class, it only needed to teach 80% of the outcomes on the common list.  If the common list had 26 learning outcomes, a school only needed to teach 21 (80% of 26 is 20.8); faculty could skip five of the learning outcomes on the common course list.

To pass a common course, a student must earn at least a C (70% of the learning outcomes taught).  That means a student can pass while learning only 15 of them (70% of 20.8 is 14.56).  Therefore, a student can pass without knowing 11 of the 26 learning outcomes on the system’s core list. Taken to the extreme, two students, each from a different school, who both earned a C and transferred to the same school might have only four learning outcomes in common between them.
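
To make that worst-case arithmetic concrete, here is a minimal sketch in Python; the outcome numbering, which five outcomes each school skips, and which fifteen each student masters are all invented purely for illustration.

```python
# Hypothetical worst case for the common-course arithmetic described above.
core_list = set(range(1, 27))                        # 26 outcomes on the system's core list

# Each school may skip 5 outcomes (80% of 26 is 20.8, so 21 must be taught).
school_a_taught = core_list - {22, 23, 24, 25, 26}   # teaches outcomes 1-21
school_b_taught = core_list - {1, 2, 3, 4, 5}        # teaches outcomes 6-26

# A C student needs only 70% of what was taught (0.7 * 20.8 = 14.56, i.e. 15 outcomes).
student_a_knows = set(range(1, 16))                  # 15 outcomes from school A's list
student_b_knows = set(range(12, 27))                 # 15 outcomes from school B's list

shared = student_a_knows & student_b_knows
print(sorted(shared))   # [12, 13, 14, 15] -- only four outcomes in common
```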

The committee my friend was working with suggested implementing competency-based education as a solution to the problems with their current common course system.  I asked how they were planning on implementing CBE.  He answered, “Well, we already have learning goals; all we need to do is turn them into competencies, then modify our assessments a little, and we will be doing CBE.”

I remember asking, “Are you changing how you assign grades?  If you’re not changing the grades, are you going to change your transcripts?”  If you don’t make changes like “grading” differently or listing mastered competencies on your transcripts, you will still have the same problem. A lot of people trying to jump on the CBE bandwagon are just rephrasing their learning goals as “competencies.”

Implementing CBE requires changes to the whole system.  One of the core ideas behind competency-based education is that, given enough time, most people can master any concept.

“Supporters of mastery learning have suggested that the vast majority of students (at least 90%) have the ability to attain mastery of learning tasks (Bloom, 1968; Carroll, 1963). The key variables, rather, are the amount of time required to master the task and the methods and materials used to support the learning process.” (How did we get here? A brief history of competency‐based higher education in the United States)

This one idea turns the current educational system on its head.  In most schools, students’ progress is measured by the number of credits earned.  Students earn credits by passing a class; if a student passes the class, they get the credits whether they earn an A or a C. Institutions assign the number of credits to a course based on the number of hours the course meets.  This system, the Carnegie Unit, was established over a century ago by the Carnegie Foundation.  Therefore, students earn credits based on time.

However, the Carnegie Unit, or credit hour, was initially created as part of a program to determine eligibility for the Carnegie pension plan (known today as TIAA-CREF).

“To qualify for participation in the Carnegie pension system, higher education institutions were required to adopt a set of basic standards around courses of instruction, facilities, staffing, and admissions criteria. The Carnegie Unit, also known as the credit hour, became the basic unit of measurement both for determining students’ readiness for college and their progress through an acceptable program of study.” (The Carnegie Unit: A Century-Old Standard in a Changing Education Landscape)

While the Carnegie Unit brought standardization to a nascent US educational system, it is possible, if not likely, that we have become too focused on what is easily measured, like the Carnegie Unit.  In the CBE system, students earn credits based on mastery of concepts. Therefore, students take as much or as little time as they need to master concepts and move forward at a pace that best suits them.  CBE makes the information learned, not the length of time spent in a course, the central component used to earn credits.

Beyond restructuring the educational experience to focus on mastery, there are questions about assessments.  It is not merely a matter of rewording learning goals into competencies.  Course designers build competencies around what students should know and be able to do.  The assessments must be carefully thought out to match the desired outcome and then ascertain whether the student has mastered the competency.  While the process of assessment creation is involved, the fact that schools like Western Governors University and the University of Wisconsin’s Flexible Option program are using CBE can provide examples and a knowledge pool for developing new programs.

I don’t know if most of the educational system will adopt CBE.  The changes needed to the standard system are enormous. After all, if students can learn at their own pace, semesters and time to degree will have to be rethought. However, the thought of competency-based education changing the focus back to learning over sorting is appealing.  The CBE system could also help alleviate student frustrations over a course moving too slowly or too fast, leading to higher completion rates.  In the long run, I suspect the degree to which CBE is adopted will depend mostly on the success of the institutions currently leading the way.  Regardless of the success or failure of CBE, it will be fun to follow the developments in CBE over the next several years.

Thanks for Listening to My Musings
The Teaching Cyborg