Tell Me a Story

“A story has no beginning or end: arbitrarily one chooses that moment of experience from which to look back or from which to look ahead.”
Graham Greene

Story is an interesting word; like so many words in English, it has many meanings.  If you look in Merriam-Webster’s Dictionary, the word story has 18 definitions if you include the sub-definitions.  We use story a lot in the sciences.

How do I know when my research is ready for publication?  You’re ready for publication when you can tell a story.  How will I know when I’m prepared to write my dissertation?  You’re prepared to write your dissertation when you can write a complete story. The answer to many a question is when you can tell a story.

A lady telling a gripping story to young women and children. Mezzotint by V. Green, 1785, after J. Opie. Credit: Wellcome Collection, CC BY

Why a story?  A story is a very efficient way to teach something.  A properly constructed story helps us understand what is going on by logically presenting information and highlighting the links and connections between separate facts and events.  There is even a word for this: storification.  In the paper Storification in History education: A mobile game in and about medieval Amsterdam, the authors discuss the advantages of storytelling in history,

“In History education, narrative can be argued to be very useful to overcome fragmentation of the knowledge of historical characters and events, by relating these with meaningful connections of temporality and sequence (storification).” (Computers & Education, Vol. 52, Issue 2, February 2009, p. 449.)

Storification also makes sense in regard to working and short-term memory.  Working memory and short-term memory are transient; permanent information storage takes place in long-term memory.  However, both are critical to the establishment of long-term memory.  Information enters the memory system through short-term memory, and processing and connections happen in working memory.

Unlike long-term memory, both short-term and working memory have limits on their capacity.  Recent work suggests that the size of working memory is 3-5 items.  For example, I could reasonably be expected to memorize a list of letters: H, C, L, I, and Z. I know some of you were going to say seven items, as in the magical number seven; I break down the changes in our understanding of working memory in another blog post, which you can read here.

However, we can quickly see a problem with 3-5 items; I can also remember a sentence, “All the world’s a stage.” This sentence has 18 characters, 19 if I count the apostrophe, and I can hold it in short-term memory.  I can remember these 18 characters due to a process called chunking, coined by George Miller in his paper The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.  Miller describes it as “By organizing the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break (or at least stretch) this informational bottleneck.” (Psychological Review, Vol. 101, No. 2, p. 351.)

In our example, words are chunks; specifically, each word is a list of letters with a specific meaning.  If I presented that earlier list of letters to you in a different way, as zilch, it would be much easier to remember. Chunking is the same idea behind storification or storytelling; you are organizing the information into related chunks to make it easier for the mind to remember and digest.

With all the complicated information in a scientific paper, a story is a perfect format to present new scientific knowledge.  A scientific paper starts with an abstract, which gives an overview. Then the paper has an introduction, which places the new information in context with the old. Then we show the experiments (in the order that explains the information best, not necessarily chronologically). Lastly, there is a summary that reiterates the new information in context with the old and lays out what directions the research could go next.

A faculty advisor of mine once described writing a science paper as “tell them what you are going to tell them, tell it to them, then tell them what you told them.”  That might seem a bit excessive; in fact, after hearing this triple approach to paper writing, a non-science faculty member once asked me, “What, are scientists stupid?”  I think it’s a smart strategy. After all, have you ever had a teacher tell you how many times you need to hear something to commit it to memory? (I always heard it was three.)

There is one thing I find quite strange about storytelling in science education.  It seems to me that helping students make connections and tie information together is most important in the earliest stages of education, for instance, the stages of education that use textbooks.  However, most current science textbooks present information as separate chunks.

As I have said in previous blog posts, the reason for writing the modern textbook as independent chunks is so we can use the textbook in any class and in any order. However, if we want textbooks to be as useful as possible, shouldn’t they be written as a story?  We should write textbooks so that information is grouped into meaningful chunks and presented in ways that reinforce the relationships and dependencies between new information and preexisting knowledge.

What do you think: is the lack of storytelling harming modern textbooks?  Has our desire to produce textbooks (commercial and open source) that can be used in as many different classes as possible hurt the usability of the modern textbook?  Can we create textbooks that are storified, or would they be unusable in current courses?  And if a storified textbook helps students learn but we can’t use it in current courses, is the problem with the textbook or the course?

Thanks for Listening to My Musings

The Teaching Cyborg

Genetics, Sorry, It’s Actually Math

“The truth, it is said, is rarely pure or simple, yet genetics can at times seem seductively transparent.”
Iain McGilchrist

Depending on the type of biology degree a student is earning, the classes taken can vary. However, in a lot of programs, you will take a basic genetics course as the second or third course of the introductory sequence.

Sometimes I think genetics is a lot like the game of Go: simple to learn but challenging to master. Genetics relies on simple rules and principles, and these rules and principles can combine to form surprising complexity. There are only five types of genetic mutations and three laws of Mendelian inheritance. A Punnett square (a tool to analyze potential outcomes of a genetic cross) for a cross between two heterozygous (Aa) parents has four boxes. A Punnett square for a five-gene heterozygous (AaBbCcDdEe) cross has 1024 boxes.
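That jump from 4 to 1024 boxes is just exponential growth in the number of gamete combinations; a few lines of Python (my own sketch, not anything from a genetics text) make the arithmetic explicit:

```python
def punnett_square_size(n_genes: int) -> int:
    """Boxes in a Punnett square for a cross between two parents
    heterozygous at n_genes independently assorting genes."""
    gametes_per_parent = 2 ** n_genes  # each gene offers 2 allele choices
    return gametes_per_parent ** 2     # one box per gamete pairing

print(punnett_square_size(1))  # Aa x Aa -> 4
print(punnett_square_size(5))  # AaBbCcDdEe x AaBbCcDdEe -> 1024
```

Each added heterozygous gene doubles the gametes per parent and therefore quadruples the square: simple rules, rapidly growing complexity.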

However, for all the simplicity of basic genetics, many students drop out of biology during or after that first genetics class. So, if the foundation of genetics is simple, why do so many students leave or fail genetics? The reason is math; invariably, a week or two into a genetics class, I hear students say something like, “I chose biology so I didn’t have to do math.”

Thinking biology does not use math is a funny statement to anyone who has completed a science degree, because we all know science always includes some math. Most science degrees require at least some calculus to graduate. For most biology students, genetics is the first time a lot of math is part of the biology.

Beyond the fact that genetics integrates math, the bulk of that math is statistics; you could even say that genetics is statistics. Even if the students have had statistics, it was probably not embedded in biology. While students might know the basics of statistics, they might have problems with transference, the ability to take preexisting knowledge and apply it to a new situation.

If students are having problems with transference concerning the principles of statistics, or even worse have not had a statistics course, they are not going to be able to focus on the biology. Think about a simple piece of information: we tell students that the probability of a baby being a girl is 50%. Then on a quiz, we ask them this question (I have seen it used): “In a family with four children, how many are girls and how many are boys?” The answer the instructor is looking for is two girls and two boys. However, I know families that have four girls, or four boys, or three girls and one boy, or one girl and three boys. If a student put down one of these other answers, it is technically correct, because all of these outcomes happen.
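The binomial distribution shows just how misleading that quiz question is; a short Python sketch (my own illustration, not from any course material):

```python
from math import comb

# Probability of exactly k girls among 4 children, treating each birth
# as an independent 50/50 event (the simplification the quiz assumes).
for k in range(5):
    p = comb(4, k) * 0.5 ** 4
    print(f"{k} girls, {4 - k} boys: {p:.4f}")
```

“Two girls and two boys” comes up only 6/16 of the time (37.5%); a family of four children is actually more likely not to be evenly split.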

While one problem is the poorly written question, there is also a problem with understanding what a 50% probability means. One of the most important things students need to understand is that a 50% probability is a statistic based on a population. Probabilities can vary widely with small sample sizes; as the sample size gets larger, the ratio of heads to tails gets closer and closer to 50%.

A simple way to think about the sex ratio is coin flips. When we flip a coin, we say you have a 50% chance of getting heads. Now suppose I flipped a coin three times and got tails all three times; what is the probability that the fourth flip will be tails? There are two answers I hear most often: 6.25% and 50%. The correct answer is 50%. Every coin flip is an independent event, which means each coin flip has a 50% probability of coming up tails.

Coin Toss by ICMA Photos, This file is licensed under the Creative Commons Attribution-Share Alike 2.0 Generic license.

Now if we were to flip a coin 200 times in a row, the total data set would average out to be close to 50% heads to tails. However, even in this larger sample, there are likely to be several relatively long runs of heads or tails, in some cases more than seven in a row. People can quickly distinguish fake from real data from the fact that most faked data does not have long enough runs of heads or tails; you can read about it here.
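A quick simulation (plain Python; the seed is arbitrary and only makes the demo repeatable) shows both effects at once, an overall ratio near 50% and surprisingly long runs:

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(0)  # arbitrary; fixed only so the demo is repeatable
flips = [random.choice("HT") for _ in range(200)]
print(f"fraction heads: {flips.count('H') / 200:.2f}")
print(f"longest run:    {longest_run(flips)}")
```

Rerun this with different seeds and the heads fraction hovers near 0.50 while the longest run is routinely six or more, exactly the pattern hand-faked data tends to miss.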

Therefore, one of the most important things we can teach students is the principle of significance. Students need to understand that it is not enough to merely show that probabilities and averages are different; they must show that the difference between them is significant.

What does all this mean for genetics education? First, students should have a basic understanding of statistics before they take genetics. I believe that if you do not require statistics as a prerequisite for genetics, you are not seriously trying to teach genetics to everyone.

However, even if the students have a foundation in statistics, genetics lessons should be designed to help them transfer knowledge from basic statistics into genetics. This transfer is also a situation where technology can help. In many math classes, especially calculus and above, students often use software like Mathematica to solve the equations once they determine the correct approach and write the equation.

In a genetics class, students don’t need to derive or prove statistical equations. They need to know which equations to use and when to use them. Several statistical analysis software packages are available. We should let students use these tools in class; a lot of professional scientists do. With statistical analysis software available, students could focus on learning which calculations to apply and on the biology the statistics are highlighting.
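As one sketch of what “knowing which calculation to apply” looks like, here is a chi-square goodness-of-fit test for a monohybrid cross, written in plain Python with made-up counts (3.841 is the standard chi-square critical value for one degree of freedom at alpha = 0.05):

```python
# Hypothetical monohybrid cross: Mendel predicts a 3:1 phenotype ratio.
observed = [72, 28]   # dominant : recessive offspring (made-up counts)
expected = [75, 25]   # exact 3:1 ratio for the same 100 offspring

# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
critical = 3.841      # df = 1, alpha = 0.05

print(f"chi-square = {chi_sq:.2f}")
print("significant deviation from 3:1" if chi_sq > critical
      else "consistent with 3:1")
```

The biology question, “does this cross fit a 3:1 ratio?”, reduces to picking the right test and reading its result; software (or a few lines like these) handles the arithmetic.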

What do you think: should we design genetics classes to try to reach all students? Could statistical analysis tools help students taking a genetics class? Have you tried helping your students transfer knowledge from their statistics class to their genetics class? How often do we consider transference when we design new courses, and should we be doing it more?

Thanks for Listening to My Musings

The Teaching Cyborg

Let Them Run Their Own Labs

“Research is creating new knowledge.”
Neil Armstrong

I suspect that people have been arguing about teaching science since we started teaching science. There are multiple groups that each have their own models and best practices. In recent years, we have even seen the progression of specialized undergraduate majors, which suggests that some schools think content that used to be part of a foundational bachelor’s degree is no longer necessary.

One of the things that most of the groups interested in science education agree on is that the more like real science we can make the learning experience, the better the learning and understanding of science will be. Some schools, like Reed College, even require all their students to complete a senior thesis and oral defense, under a faculty member’s supervision, to earn a bachelor’s degree.

Imagine if every bachelor’s student could spend a year studying and writing about a topic in their field that interested them. Not only would students get to “geek out” about a topic that interested them, think about how much we would learn.

Chemical research lab, Beckenham. Two chemists at work, surrounded by equipment and apparatus. Archives & Manuscripts, This file comes from Wellcome Images, license CC BY 4.0

The problem with Reed’s model is that it does not scale. Reed College has an enrollment of 1400 students and a 9 to 1 student-to-faculty ratio. It’s not feasible to scale this to a Tier 1 research institution with 25-40 thousand students and nowhere near a 9 to 1 ratio. Faculty don’t have space in their research labs to support student populations in the tens of thousands.

There have also been many programs developed and tested to provide students with research experiences. Most of these programs are small, only 20-30 students. Many are also short, 8-12 weeks during the summer. Additionally, since most are small, they have become highly competitive, granting access to only the top students.

While these programs have their hearts in the right place, programs of 20-30 students are not going to provide research experiences to everyone. If our goal is to provide research experiences for all bachelor’s students, we need another approach.

I have put a lot of thought into the idea of incorporating research into required laboratory science classes. If we incorporated a year-long research project into required laboratory courses, all students would get research experience. Additionally, the class would be more coherent, because each experiment would flow from the results of previous work. However, research as a lab course is an idea for another day.

I recently came across an article that potentially presents another way to give students real research experiences. Before I get to the article, I want to show some of the background ideas that make this idea possible.
One of the most significant problems with scaling research experiences in a large university is the availability of space in faculty research labs and the availability of research mentors. It might be possible to reduce the burden on faculty by using the knowledge of the crowd.

We already use peer-to-peer instruction in large lecture classes; why not use it in research? After all, in professional research, you can’t look up the answer to your research question. In professional research, we talk to our colleagues and try out experiments until we get a direction or answer the question. Additionally, many of the groups interested in science education suggest having students work in groups.

The idea for undergraduate research comes from the article Pushing Boundaries: Undergrad launches student-driven particle astrophysics research group, published in CU Boulder Today on November 16, 2018. The article describes a research group formed by Jamie Principato that is established and run by undergraduates. The group is composed of 30 undergraduates who are designing and building an instrument to measure cosmic radiation. The group’s detector has already flown on high-altitude balloons. You can read the full article here.

From my point of view, one of the most interesting things is that the 30 members of the group had little or no previous research experience. While Jamie Principato is an exceptional student, I can’t help but think undergraduate formed research groups could be the solution or at least part of the solution to undergraduate research experiences.

Depending on the question, some of these groups could run for years, with new undergraduates joining each year. If we think of undergraduate research groups as having about 30 students, then a departmental graduating class of 250 students would need nine groups, and a class of 500 would require 17. With some proper planning and organizing, this seems a reasonable number of groups for a department.

What do you think: could student-run and student-organized research groups be the solution to undergraduate research experiences for all students? Do you think undergraduate research experiences for all students are something we should be trying to develop? I think student-run and student-organized research groups could be at least part of the solution, especially at large universities.

Thanks for Listening to My Musings
The Teaching Cyborg

Making Science

“The advance of technology is based on making it fit in so that you don’t really even notice it, so it’s part of everyday life.”
Bill Gates

There was a time when all biologists were also artists, because they had to create drawings of their observations. Even after the invention of the camera, it was easier for quite some time to reproduce line art on a printing press than photographs.

Modern chemists purchase their glassware online or through a catalog. However, there was a time when a lot of chemists were also glass blowers. After all, if you can’t buy what you need, you must make it.  When I was an undergraduate, my university still had a full glass shop.

Early astronomers like Galileo designed and built their telescopes. Early biologists like van Leeuwenhoek, the discoverer of microorganisms, made their microscopes.  The development of optics for both telescopes and microscopes is a fascinating story in and of itself.

In a lot of ways, the progression of science is the progress of technology. The use of new technology in scientific research allows us to ask questions and collect data in ways that we previously could not, leading to advancements in our scientific understanding.

There are still fields, like physics and astronomy, where building instruments is a standard part of the discipline. However, in many areas, new technology is most often acquired at conference booths or out of catalogs.

There is a problem with the model of companies providing all the scientific instrumentation. While standard equipment is readily available, because companies know about it and can make money on it, companies rarely invest in equipment with a tiny market.  It just so happens that rare and nonexistent instrumentation is where innovation can move science forward. Unfortunately, only the scientists working at the cutting edge of their fields know about these needs.

Historically, building new equipment has been a costly and challenging process. The equipment used to make a prototype was expensive and took up a lot of space. Depending on the type of equipment created, the electronics and programming might also be complicated.

However, over the last couple of decades, this has changed. There are now desktop versions of laser cutters, vinyl cutters, and multi-axis CNC machines; I even recently saw an ad for a desktop water jet cutter. There is also the continuously improving world of 3-D printers. On the electronics side, both the Arduino and Raspberry Pi platforms allow rapid electronics prototyping using off-the-shelf components. Together, these tools allow the rapid creation of sophisticated equipment.

This list only represents some of the equipment currently available. The one thing that we can say for sure is that desktop manufacturing tools will become more cost-effective and more precise with future generations.

However, right now I could equip a digital fabrication (desktop-style) shop with all the tools I talked about for less than the cost of a single high-end microscope. If access to desktop fabrication tools becomes standard, how will it change science and science education?

There are currently organizations like Open-Labware.net and the PLoS Open Hardware Collection making open-source lab equipment available. These organizations design and organize open-source science equipment. The idea is that open-source equipment can be built cheaply, allowing access to science at lower cost. Joshua Pearce, the Richard Witte Endowed Professor of Materials Science and Engineering at Michigan Tech, has even written a book on the open-source laboratory: Open-Source Lab, 1st Edition: How to Build Your Own Hardware and Reduce Research Costs.

Imagine a lab that could produce equipment when it needs it. It would no longer be necessary to keep something because you might need it someday. Not only would we reduce costs, but we would also free up limited space. As an example, a project I was involved with used multiple automated syringe pumps to dispense fluid over the internet; each pump cost more than $1000.  A paper published in PLOS ONE describes the design and creation of an open-source, web-controllable syringe pump that costs about $160.

Researchers can now save thousands of dollars and slash the time it takes to complete experiments by printing parts for their own custom-designed syringe pumps. Members of Joshua Pearce's lab made this web-enabled double syringe pump for less than $160. Credit: Emily Hunt

Let’s take this a step further: why stick to standard equipment? As a graduate student, I did a lot of standard experiments, especially in the area of gel electrophoresis. However, a lot of the time I had to fit my experiments into the commercially available equipment. If I could have customized my equipment to fit my research, I could have been more efficient and faster.

Beyond customization, what about rare or unique equipment, the sort of thing you can’t buy? Instead of trying to find a way to ask a question with equipment that is “financially” viable and therefore available, design and build tools to ask the question the way you want.

What kind of educational changes would we need to realize this research utopia? Many of the skills are already taught and would only require changes in focus and depth.

In my physical chemistry lab course, we learned BASIC programming so that we could model atmospheric chemistry. What if, instead of BASIC, we learned to program in the C/C++ that Arduino uses? If we designed additional labs across multiple courses that use programming to run models, run simulations, and control sensors, learning to program would be part of the primary curriculum.

In my introductory physics class, I learned basic electronics and circuit design. Introductory physics is a course that most if not all science students need to take. With a little refinement, that electronics and circuit design instruction could cover the electronics side of equipment design. The only real addition would be a computer-aided design (CAD) course so that students and researchers can learn to design parts for 3-D printers and multi-axis CNCs. Alternatively, all the training to use and run desktop fabrication equipment could be handled with a couple of classes.

The design and availability of desktop fabrication equipment can change how we do science by allowing the customization and creation of scientific instruments to fit the specific needs of the researcher. What do you think: should we embrace the desktop fabrication (maker) movement as part of science? Should the creation of equipment stay a specialized field? Or is it a good idea for which there simply isn’t space in the curriculum?

Thanks for Listening to My Musings

The Teaching Cyborg

How Deep is Deep Enough?

“Perfection is the enemy of the good”
Voltaire


Education is about depth. Generally, we start with overviews and the big picture. Then we move on, filling in the gaps and providing additional information. To fulfill one of my general education requirements, I took an Introduction to Western Civilizations course. We covered the rise of western civilization from prehistory all the way up to the modern age; the course included only the most essential points.  If I had gone on and studied western history, we would have expanded on the main points covered in that introductory course.

As an example, there were courses on the Middle Ages like The Medieval World and Introduction to Medieval People, and then going into more depth Medieval Women. Each course led to a narrower but deeper dive into the topic.

Another example of this depth occurred during my science education. In my Introductory Chemistry courses, we learned about the laws of thermodynamics; there are four laws if you include the zeroth law. The laws of thermodynamics were only a single chapter in my introductory textbook, covered in just a couple of class periods.

Several years later as part of physical chemistry, I took thermodynamics, a required course for chemistry and biochemistry majors.  We spent the entire course studying the laws of thermodynamics, including mathematically deriving all the laws from first principles.

While I have used a lot of my chemistry over the years, I’ve never used that deep dive into thermodynamics. There are fields and research areas where this information is needed, however, I wonder how many chemistry students need this deep a dive into thermodynamics.

Determining what to teach students and at what depth they need to learn each topic is a critical part of the educational design process. There has recently been a change to a topic that all (US) science students need to cover: the International System of Units, abbreviated SI for Système International d’unités, classically known as the metric system.

The SI system is the measurement system used in scientific research. It has seven base units and 22 (named) derived units (made by combining base units).  In the US, we teach students the SI system because the US is one of three countries that didn’t adopt it. Science students need to use the SI system; the question is how much they need to know about the system.

The French first established the original two units, length (the meter) and mass (the kilogram), in 1795. The system was developed to replace the hundreds of local and regional systems of measurement that were hindering trade and commerce.  The idea was to create a system based on known physical properties that were easy to understand; this way, anyone could create a reference standard.  The meter was defined as 1/10,000,000 of the distance from the North Pole to the equator along the meridian that ran through Paris. The kilogram was the mass of a 10 cm cube (1/1000 of a cubic meter) of distilled water at 4°C.

Basing the units on physical properties was supposed to give everyone the ability to create standards; in practice, difficulties in producing the standards meant the individually created standards varied widely.  In 1889, the definitions of the meter and kilogram were changed to artifact standards. An artifact standard is a standard based on a physical object, in this case a platinum-iridium bar and cylinder located just outside Paris, France.

The original Kilogram stored in several bell jars.
National Geographic magazine, Vol. 27, No.1 (January 1915), p. 154, on Google Books. Photo credited to US National Bureau of Standards, now National Institute of Standards and Technology (NIST).

The artifact standards lasted for quite a while; however, as science progressed, we needed more accurate standards, and the definitions changed again. The new idea was to define all the base units in terms of universal physical constants.  Skipping over the krypton-86 definition, in 1983 the definition of the meter was changed to the distance light travels in a vacuum in 1/299,792,458 of a second (about 3.3 nanoseconds).

The speed of light was chosen to define the meter because its value contains the meter: the speed of light is 299,792,458 m/s. This definition might seem a little strange, but it makes a lot of sense.  The speed of light is a universal constant; no matter where you are, the speed of light in a vacuum is the same. To determine the length of the meter, you measure how far light travels in about 3.3 nanoseconds. If your scientific experiment requires higher precision, you can make a more accurate standard; instead of using 3.3 nanoseconds, you could measure how far light travels in 3.33564 nanoseconds.
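The arithmetic behind the definition fits in a few lines (a Python sketch of my own, just to make the numbers concrete):

```python
C = 299_792_458  # speed of light in m/s; exact by definition

# The meter is the distance light travels in 1/299,792,458 s.
t = 1 / C                                  # the defining time interval
print(f"time interval: {t * 1e9:.5f} ns")  # ~3.33564 ns
print(f"distance:      {C * t:.12f} m")    # recovers 1 meter
```

Measuring the time interval to more digits (3.33564 ns instead of 3.3 ns) yields a more precise meter; the standard scales with your ability to measure time.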

On November 16, 2018, at the 26th meeting of the General Conference on Weights and Measures, the definition of the kilogram changed. The new definition uses Planck’s constant, which is 6.62607015×10⁻³⁴ kg·m²/s.  Like the meter, the definition of the kilogram applies a constant whose value contains the standard, and just like the meter, the precision of the kilogram depends on the accuracy of the measurements.

Up to this point, we’ve taught the kilogram as an object; the definition of the kilogram was a cylinder just outside Paris, and no matter what happened, that cylinder was the kilogram. However, with the new definitions, it becomes possible for students to derive the standards themselves. Scientists at the National Institute of Standards and Technology (NIST) created a Kibble (or watt) balance, the device used to measure the Planck constant, built out of simple electronics and Legos.

It is surprisingly accurate (±1%); you can read about it here. Using the Kibble balance, it would be possible to develop lab activities where students create a kilogram standard and then compare it to a high-quality purchased standard.

With the change to the kilogram standard, it is now possible to use the metric system to teach universal constants and have students derive all the SI standards from observations and first principles. The real question is: should we? For the bulk of science students, and scientists for that matter, how deep does their knowledge of the SI system need to be? Most are not going to become metrologists, the scientists who study measurement and measurement science. With the ever-growing amount of scientific information, we need to think about not only what we teach but how deeply we teach it. What do you think: students can now derive the standards of the SI system from first principles, but should they? We can’t teach everything; how do we determine what to teach and how much to teach?


Thanks for Listening to My Musings

The Teaching Cyborg