“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”
Charles Darwin
When can you use a title? What makes someone an expert? Over the years, I have built several pieces of furniture: tables, bookshelves, and chests. Does that make me a master carpenter? I have met several master carpenters and seen their work; I am most definitely not a master carpenter. Using the book Make Your Own Ukulele: The Essential Guide to Building, Tuning, and Learning to Play the Uke, I’ve built two ukuleles. While the author of the book says that once you’ve made “a professional-grade ukulele” you are a luthier, I don’t think I will be calling myself a luthier anytime soon.
I have a lot of “hobbies”: I have made knives, braided whips, bound books, made hard cider, and cooked more things than I can remember. The only one of my hobbies that I might be willing to use a title for is photography. I have been practicing outdoor and nature photography for 30+ years, and if you caught me in the right mood, I might call myself a photographer. What makes photography different? It’s not the time I have put into it, though I am long past the 10,000-hour mark. I’ve had my work reviewed and accepted by people in the field, not every picture, but enough to be comfortable with my skill.
I am selective when it comes to titles and proclaiming my expertise. However, there are people who are not selective about their expertise. Believing your knowledge to be greater than it is is common enough to have a name: the Dunning–Kruger effect. However, an even bigger problem than individuals overestimating their own knowledge is when they overestimate it and present themselves as experts.
The internet and self-publishing have increased our access to knowledge and different points of view. Previously, it was simply not possible, for multiple reasons, to publish everything, so editors and review boards had to decide what to publish.
While the benefits of open publication are significant, we must ask: without “gatekeepers,” how do we identify expertise? Many people may ask, “Why do we care?” Well, we have issues like GMOs, stem cell therapy, cloning, genetically engineered humans, and technology we have not even thought of yet. How will people decide what to do with these technologies if they can’t identify expertise?
A great example of this is a recent study on GMOs: “Those who oppose GMOs know the least about them, but believe they know more than experts.” In the study, most people said that GMOs are unsafe to eat, which differs from scientists, the majority of whom say GMOs are safe. People’s views of GMOs are not a surprise; news coverage of GMOs clearly shows how people feel. The interesting thing was the second point covered in the study. The people who were most opposed to GMOs thought they knew the most about them. However, when this group of self-identified experts had their scientific knowledge tested, they scored the lowest.
The difference between people’s beliefs and actual knowledge gets even more complicated when we move beyond GMOs. While the consensus is that GMOs are safe and could be beneficial, rejecting them isn’t instantly deadly. After all, we haven’t developed the GMO that will grow in any condition and solve world hunger or capture all the excess CO2 from the atmosphere. However, what about the anti-vaccination movement? I’m not going to get into all the reasons people think they shouldn’t get vaccinated. Instead, let’s talk about how their actions will affect you.
I know a lot of people say it’s just a small percentage, and they’ve been vaccinated, so they can ignore it. You may even be one of them. Let me ask you: have you heard about things like efficacy and herd immunity? Additionally, do you remember, or know, that measles can kill? Let’s look at the numbers. According to the CDC, one dose of the measles vaccine is 93% effective, and the recommended two doses are 97% effective. That still means 3 out of every 100 people who are vaccinated can get measles. Even if everyone in the US were vaccinated, there would be roughly 9.8 million people still susceptible to measles.
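To make that arithmetic concrete, here is a minimal sketch of the calculation; the US population figure of roughly 327 million is my own assumption for illustration, while the 97% two-dose efficacy is the CDC figure above.

```python
# Sketch: how many vaccinated people could still catch measles if
# (hypothetically) every person in the US received the recommended two doses.

US_POPULATION = 327_000_000   # assumed figure for illustration only
TWO_DOSE_EFFICACY = 0.97      # CDC: two doses are about 97% effective

# The vaccine does not protect (1 - efficacy) of the people who receive it.
still_susceptible = US_POPULATION * (1 - TWO_DOSE_EFFICACY)

print(f"Susceptible even with universal vaccination: {still_susceptible:,.0f}")
# -> Susceptible even with universal vaccination: 9,810,000
```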
A lot of people don’t believe this; after all, we don’t see millions of measles cases every year. Herd immunity (community immunity) is the reason we don’t see millions of cases. The idea is that if enough people in a community are immunized, illness can’t spread through the community. So even if you are one of the individuals for whom the vaccine was ineffective, you don’t catch the disease because the individuals around you have effective immunizations.
What percentage of vaccination against measles grants herd immunity? According to a presentation by Dr. Sebastian Funk, Critical immunity thresholds for measles elimination, for herd immunity to work for measles, the population needs an immunization level of 93-95%. According to the CDC, vaccination coverage among children 19-35 months old is 91.1%, while coverage among adolescents 13-17 years old is 90.2%. That is below the level needed for herd immunity. Therefore, individuals choosing not to get vaccinated are endangering not just themselves but others.
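The coverage comparison is simple enough to write out as well; below is a small sketch using the CDC coverage figures and the low end of the 93-95% threshold cited above.

```python
# Sketch: compare reported measles vaccination coverage to the low end of the
# 93-95% herd immunity threshold cited above.

HERD_IMMUNITY_THRESHOLD = 0.93   # low end of the 93-95% range

coverage = {
    "children 19-35 months": 0.911,
    "adolescents 13-17 years": 0.902,
}

for group, rate in coverage.items():
    if rate < HERD_IMMUNITY_THRESHOLD:
        gap = HERD_IMMUNITY_THRESHOLD - rate
        print(f"{group}: {rate:.1%} coverage, {gap:.1%} below the threshold")
    else:
        print(f"{group}: {rate:.1%} coverage, at or above the threshold")
```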
Fortunately, we know individuals can learn. Earlier this year, Ethan Lindenberger, an 18-year-old who got himself vaccinated against his anti-vaccination mother’s wishes, testified before Congress about how he made the decision. A lot of what he talked about was reading information from credible sources and real experts.
So how do we teach students to identify credible experts and valid information? I have heard a lot of faculty say identifying reliable experts is easy: you look at who they are and where they work. Well, it’s not quite that easy. For example, Andrew Wakefield was a gastroenterologist, a member of the UK medical register, and a published researcher. He claimed that the MMR vaccine was causing bowel disease and autism. After his research was shown to be irreproducible and likely biased and fraudulent, the General Medical Council removed him from the UK medical register. However, he continues to promote anti-vaccine ideas.
We need a better approach than where they work. Dr. David Murphy suggests we interrogate potential experts using the tools of the legal system: interrogation and confrontation. Gary Klein suggests a list of seven criteria:
- Successful performance—measurable track record of making good decisions in the past.
- Peer respect.
- Career—number of years performing the task.
- Quality of tacit knowledge, such as mental models.
- Reliability.
- Credentials—licensing or certification of achieving professional standards.
- Reflection.
While none of these criteria is a guarantee individually, taken as a whole they can give a functional assessment of expertise. However, we don’t often get to interview every individual we encounter in research. A third, and likely most applicable, approach involves reading critically and fact-checking. To borrow a phrase, “we need to teach students to question everything.”
One approach is the CRAAP test (Currency, Relevance, Authority, Accuracy, and Purpose) developed by Sarah Blakeslee of California State University, Chico. The CRAAP Test is a list of questions that the reader can apply to a source of information to help determine if the information is valid and accurate. The questions for Currency are:
- When was the information published or posted?
- Has the information been revised or updated?
- Does your topic require current information, or will older sources work as well?
- Are the links functional?
The currency questions address the age of the information. Each section of the CRAAP test has 4-6 questions. The idea behind the CRAAP test is that once the researcher/student answers all the questions, they will be able to determine if the information is good or bad.
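For students who prefer to work from an explicit checklist, one way to picture the test is as a structured list of questions to answer for each source. The sketch below is only an illustration: it uses the Currency questions quoted above and leaves the other categories as placeholders rather than inventing Blakeslee’s wording.

```python
# Sketch: the CRAAP test treated as a checklist a student could walk through
# for each source. Only the Currency questions come from the list quoted above;
# the other sections are placeholders, not Blakeslee's actual wording.

CRAAP_CHECKLIST = {
    "Currency": [
        "When was the information published or posted?",
        "Has the information been revised or updated?",
        "Does your topic require current information, or will older sources work as well?",
        "Are the links functional?",
    ],
    # Each remaining section has 4-6 questions of its own in the full test.
    "Relevance": [],
    "Authority": [],
    "Accuracy": [],
    "Purpose": [],
}

def review_source(source_name: str) -> None:
    """Print the checklist so a student can record an answer for each question."""
    print(f"Evaluating: {source_name}")
    for category, questions in CRAAP_CHECKLIST.items():
        print(f"\n{category}")
        if not questions:
            print("  (add this section's questions here)")
        for question in questions:
            print(f"  [ ] {question}")

review_source("example source")
```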
As an alternative, or perhaps a complement, we should be teaching our students to think and behave like fact-checkers. One of the most compelling arguments about fact-checkers comes from the book Why Learn History (When It’s Already on Your Phone) by Sam Wineburg. In chapter 7, Why Google Can’t Save Us, the author talks about a study where historians (average age 47) from several four-year institutions were asked to compare information about bullying on two sites. A long-standing professional medical organization maintains one site, while a small splinter group maintains the other (the issue that caused the split was adoption by same-sex couples). A group of professional fact-checkers also examined the two sites.
Many of the professional historians decided that the splinter group was the more reliable source of information. In contrast, the fact-checkers decided that the original organization was the more reliable. The difference between the two groups is what the author calls vertical (historians) versus lateral (fact-checkers) reading. The historians tend to read down the page and look at internal information. The fact-checkers jump around and leave the page to check additional information: where these two organizations came from, what others write about them, and what other groups and individuals say about the same questions.
The way information is published and disseminated has changed and will likely continue to change as the tools become easier to use and cheaper. Education needs to change how we teach our students to evaluate information. I think I will argue for a bit of lateral reading.
Thanks for Listening to My Musings
The Teaching Cyborg