Making Science

“The advance of technology is based on making it fit in so that you don’t really even notice it, so it’s part of everyday life.”
Bill Gates

There was a time when all biologists were also artists because they had to create drawings of their observations. Even after the invention of the camera, it was still easier for quite some time to reproduce line art on a printing press than photographs.

Modern chemists purchase their glassware online or through a catalog. However, there was a time when a lot of chemists were also glass blowers. After all, if you can’t buy what you need, you must make it.  When I was an undergraduate, my university still had a full glass shop.

Early astronomers like Galileo designed and built their telescopes. Early biologists like van Leeuwenhoek, the discoverer of microorganisms, made their microscopes.  The development of optics for both telescopes and microscopes is a fascinating story in and of itself.

In a lot of ways, the progress of science is the progress of technology. The use of new technology in scientific research allows us to ask questions and collect data in ways that we previously could not, leading to advancements in our scientific understanding.

There are still fields, like physics and astronomy, where building instruments is a standard part of the work. However, in many areas, new technology is most often acquired at conference booths or out of catalogs.

There is a problem with the model of companies providing all the scientific instrumentation. While standard equipment is readily available because companies know about it and can make money selling it, companies rarely invest in equipment with a tiny market. Yet it is precisely this rare or nonexistent instrumentation that can move science forward, and unfortunately, only the scientists working at the cutting edge of their fields know about these needs.

Historically, building new equipment has been a costly and challenging process. The equipment used to make a prototype was expensive and took up a lot of space. Depending on the type of equipment created, the electronics and programming might also be complicated.

However, over the last couple of decades, this has changed. There are now desktop versions of laser cutters, vinyl cutters, and multi-axis CNC machines; I even recently saw an ad for a desktop water jet cutter. There is also the continuously improving world of 3-D printers. On the electronics side, there are both the Arduino and Raspberry Pi platforms, which allow rapid electronics prototyping using off-the-shelf equipment. Together, these tools allow the rapid creation of sophisticated equipment.

This list only represents some of the equipment currently available. The one thing that we can say for sure is that desktop manufacturing tools will become more cost-effective and more precise with future generations.

However, right now I could equip a digital fabrication (desktop-style) shop with all the tools I have talked about for less than the cost of a single high-end microscope. If access to desktop fabrication tools becomes standard, how will it change science and science education?

There are currently organizations, like Open-Labware.net and the PLoS Open Hardware Collection, that design and curate open-source lab equipment. The idea is that open-source equipment can be built cheaply, allowing access to science at lower cost. Joshua Pearce, the Richard Witte Endowed Professor of Materials Science and Engineering at Michigan Tech, has even written a book on the open-source laboratory, Open-Source Lab: How to Build Your Own Hardware and Reduce Research Costs.

Imagine a lab that could produce equipment when it needs it. It would no longer be necessary to keep something because you might need it someday. Not only would we reduce costs, but we would also free up limited space. As an example, a project I was involved with used multiple automated syringe pumps, controlled over the internet, to dispense fluid; each pump cost more than $1,000. A paper published in PLOS ONE describes the design and creation of an open-source, web-controllable syringe pump that costs about $160.

Researchers can now save thousands of dollars and slash the time it takes to complete experiments by printing parts for their own custom-designed syringe pumps. Members of Joshua Pearce’s lab made this web-enabled double syringe pump for less than $160. Credit: Emily Hunt
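To give a flavor of what "web-controllable" means at the firmware level, here is a deliberately simplified sketch (my own illustration, not the published PLOS ONE design): a host computer, which could sit behind a web service, sends a step count over the serial port, and the microcontroller pulses a stepper driver to move the syringe plunger.

```cpp
// Deliberately simplified illustration of a serially controlled pump;
// NOT the published PLOS ONE design. Pin numbers are placeholders.
const int STEP_PIN = 3;
const int DIR_PIN  = 4;

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  if (Serial.available() > 0) {
    long steps = Serial.parseInt();      // e.g. "200" dispenses, "-200" withdraws
    digitalWrite(DIR_PIN, steps >= 0 ? HIGH : LOW);
    long n = steps >= 0 ? steps : -steps;
    for (long i = 0; i < n; i++) {
      digitalWrite(STEP_PIN, HIGH);      // one pulse = one motor step
      delayMicroseconds(500);
      digitalWrite(STEP_PIN, LOW);
      delayMicroseconds(500);            // roughly 1,000 steps per second
    }
  }
}
```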

Let’s take this a step further: why create only standard equipment? As a graduate student, I did a lot of standard experiments, especially in the area of gel electrophoresis. However, a lot of the time I had to fit my experiments into the commercially available equipment. If I could have customized my equipment to fit my research, I could have been faster and more efficient.

Beyond customization, what about rare or unique equipment, the sort of thing that you can’t buy? Instead of trying to find a way to ask a question with equipment that is "financially viable" and therefore available, design and build tools to ask the questions the way you want.

What kind of educational changes would we need to realize this research utopia? Many of the skills are already taught and would only require changes in focus and depth.

In my physical chemistry lab course, we learned BASIC so that we could model atmospheric chemistry. What if, instead of BASIC, we learned the C/C++ that Arduino uses? If we designed additional labs across multiple courses that used programming to run models, simulations, and control sensors, learning to program would become part of the primary curriculum.
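For context, Arduino programs are short C/C++ sketches. A lab exercise of the kind I am imagining might look something like this hypothetical example, which reads an analog sensor once a second and streams the values to a computer; the pin and the sensor are placeholders.

```cpp
// Hypothetical lab sketch: read an analog sensor once per second and
// stream the raw values over the serial port.
const int SENSOR_PIN = A0;   // placeholder: any analog sensor works the same way

void setup() {
  Serial.begin(9600);        // open the serial connection to the computer
}

void loop() {
  int reading = analogRead(SENSOR_PIN);  // 0-1023 on the 10-bit ADC
  Serial.println(reading);               // log the value for later analysis
  delay(1000);                           // sample once per second
}
```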

In my introductory physics class, I learned basic electronics and circuit design. Introductory physics is a course that most, if not all, science students need to take. With a little refinement, the electronics and circuit design portion could cover the electronics side of equipment design. The only real addition would be a computer-aided design (CAD) course so that students and researchers can learn to design parts for 3-D printers and multi-axis CNCs. All told, the training needed to use and run desktop fabrication equipment could be handled with a couple of classes.

The design and availability of desktop fabrication equipment can change how we do science by allowing the customization and creation of scientific instruments to fit the specific needs of the researcher. What do you think: should we embrace the desktop fabrication (Maker) movement as part of science? Should the creation of equipment stay a specialized field? Or is it a good idea for which there simply isn’t space in the curriculum?

Thanks for Listening to My Musings

The Teaching Cyborg

How Deep is Deep Enough?

“Perfection is the enemy of the good”
Voltaire

 

Education is about depth. Generally, we start with overviews and the big picture. Then we move on, filling in the gaps and providing additional information. To fulfill one of my general education requirements, I took an Introduction to Western Civilizations course. We covered the rise of western civilization from prehistory all the way up to the modern age; the course included only the most essential points. If I had gone on to study western history, I would have expanded on the main points covered in the Introduction to Western Civilizations course.

As an example, there were courses on the Middle Ages, like The Medieval World and Introduction to Medieval People, and then, going into more depth, Medieval Women. Each course led to a narrower but deeper dive into the topic.

Another example of this depth occurred during my science education. In my Introductory Chemistry courses, we learned about the laws of thermodynamics; there are four laws if you include the zeroth law. The laws of thermodynamics were only a single chapter in my introductory textbook, covered in just a couple of class periods.

Several years later as part of physical chemistry, I took thermodynamics, a required course for chemistry and biochemistry majors.  We spent the entire course studying the laws of thermodynamics, including mathematically deriving all the laws from first principles.

While I have used a lot of my chemistry over the years, I’ve never used that deep dive into thermodynamics. There are fields and research areas where this information is needed; however, I wonder how many chemistry students need this deep a dive into thermodynamics.

Determining what to teach students and at what depth they need to learn each topic is a critical part of the educational design process. There has recently been a change to a topic that all (US) science students need to cover: the International System of Units, abbreviated SI from the French Système International d’unités, or classically, the metric system.

The SI system is the measurement system used in scientific research. It has seven base units and 22 named derived units (made by combining base units). In the US, we teach students the SI system because the US is one of only three countries that never adopted it. Science students need to use the SI system; the question is how much they need to know about it.

The French first established the original two units, length (the meter) and mass (the kilogram), in 1795. The system was developed to replace the hundreds of local and regional systems of measurement that were hindering trade and commerce. The idea was to create a system based on known physical properties that were easy to understand; this way, anyone could create a reference standard. The definition of the meter was 1/10,000,000 of the distance from the North Pole to the equator along the meridian that ran through Paris. The kilogram was the mass of a 10 cm cube (one cubic decimeter, or 1/1000 of a cubic meter) of distilled water at 4°C.

Basing the units on physical properties was supposed to give everyone the ability to create standards; in practice, difficulties in producing the standards meant the individually created standards varied widely. In 1889, the definitions of the meter and kilogram were changed to artifact standards; an artifact standard is a standard based on a physical object, in this case, a platinum-iridium rod and cylinder located just outside of Paris, France.

The original kilogram, stored under several nested bell jars.
National Geographic magazine, Vol. 27, No.1 (January 1915), p. 154, on Google Books. Photo credited to US National Bureau of Standards, now National Institute of Standards and Technology (NIST).

The use of the artifact standards lasted for quite a while; however, as science progressed, we needed more accurate standards, and the definitions changed again. The new idea was to base all the units on universal physical constants. Skipping over the krypton-86 definition of 1960, in 1983 the definition of the meter was changed to the distance light travels in a vacuum in 1/299,792,458 of a second (about 3.3 nanoseconds).

The speed of light was chosen to define the meter because its value contains the meter: the speed of light is 299,792,458 m/s. This definition might seem a little strange, but it makes a lot of sense. The speed of light is a universal constant; no matter where you are, the speed of light in a vacuum is the same. To determine the length of the meter, you measure how far light travels in 3.3 nanoseconds. If your scientific experiment requires higher precision, you can make a standard with higher accuracy: instead of using 3.3 nanoseconds, you measure how far light travels in 3.33564 nanoseconds.
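Spelled out as a worked equation (my arithmetic, using the numbers above):

```latex
d = c \, t = 299{,}792{,}458\ \tfrac{\mathrm{m}}{\mathrm{s}} \times \frac{1}{299{,}792{,}458}\ \mathrm{s} = 1\ \mathrm{m},
\qquad \frac{1}{299{,}792{,}458}\ \mathrm{s} \approx 3.33564\ \mathrm{ns}
```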

On November 16, 2018, at the 26th meeting of the General Conference on Weights and Measures, the definition of the kilogram changed. The new definition of the kilogram uses Planck’s constant, which is 6.62607015×10⁻³⁴ kg m²/s. Like the meter, the definition of the kilogram uses a constant that contains the standard. And just like the meter, the precision of the kilogram depends on the accuracy of the measurements.
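One way to see how the constant "contains" the kilogram (my rearrangement, not official CGPM wording) is to solve for the kilogram, since the meter and second are already defined by the speed of light and the cesium frequency:

```latex
h = 6.62607015 \times 10^{-34}\ \mathrm{kg\,m^2\,s^{-1}}
\quad \Longrightarrow \quad
1\ \mathrm{kg} = \frac{h}{6.62607015 \times 10^{-34}\ \mathrm{m^2\,s^{-1}}}
```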

Up to this point, we have taught the kilogram as an object; the definition of the kilogram was a cylinder just outside of Paris, and no matter what happened, that cylinder was the kilogram. However, with these new definitions, it becomes possible for students to derive the standards themselves. Scientists at the National Institute of Standards and Technology (NIST) created a Kibble (or watt) balance, the device used to measure the Planck constant, built out of simple electronics and Legos.

It is surprisingly accurate (±1%); you can read about it here. Using the Kibble balance, it would be possible to develop lab activities where students create a kilogram standard and then compare it to a high-quality purchased standard.

With the change to the kilogram standard, it is now possible to use the metric system to teach universal constants and have students derive all the SI standards from observations and first principles. The real question is, should we? For the bulk of science students, and scientists for that matter, how deep does their knowledge of the SI system need to be? Most are not going to become metrologists, the scientists who study measurement and measurement science. With the ever-growing amount of scientific information, we need to think not only about what we teach but how deeply we teach it. What do you think: students can now derive the standards of the SI system from first principles, but should they? We can’t teach everything, so how do we determine what to teach and in how much depth?

 

Thanks for Listening to My Musings

The Teaching Cyborg

Obviously, They Should Read 40 Pages, Right?

“No two persons ever read the same book.”
Edmund Wilson

 

Designing a course is about more than what happens in the classroom. A course also includes homework, papers, and reading assignments, to name a few. According to the Carnegie unit recommendation, all the out-of-class work should fit into a period equal to two hours for every credit. Therefore, a 3-credit course would have 6 hours of work outside the classroom each week; how should that time be divided? A question often asked is, how much reading should I assign?

What this usually means is: how much reading is reasonable, considering all the other learning obligations the students have? In the book Academically Adrift: Limited Learning on College Campuses, Richard Arum and Josipa Roksa state that students who had at least 40 pages of reading a week showed more substantial gains on the Collegiate Learning Assessment. Since the information on the reading is self-reported, we don’t know what kind of reading this represents. There are multiple types of reading: one set of categories is skimming, scanning, intensive, and extensive; another, used by the Center for Teaching Excellence at Rice University, is surveying, understanding, and engaging.

When students read to survey, they are just trying to find the main points. Reading for understanding requires the student to attempt to understand all of the text, down to the level of single sentences. Finally, engaging with the book requires all the skills of reading for understanding while also using the book to solve problems and build connections.

A book viewed through a magnifying glass. Image by Monica Velazquilo (CC BY-SA 3.0).

One way to estimate how much time it will take students to read a specific number of pages is the course workload calculator on the Reflections on Teaching & Learning blog, part of the Center for Teaching Excellence site at Rice University. Using the workload calculator, if a student reads 40 pages in survey mode, it takes 1.43 hours; understanding takes 2.86 hours; and engaging takes 5.71 hours. If a three-credit class has an out-of-class workload of 6 hours, reading 40 pages for engagement would take up nearly all of a student’s out-of-class time. Therefore, if the point of your reading assignments is engagement, either 40 pages is too heavy, or reading is the only thing the students should be doing.
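This arithmetic is easy to put in code. Below is a minimal sketch of such a calculator (my own illustration, not the Rice tool itself; the pages-per-hour rates are simply back-calculated from the 40-page figures quoted above):

```cpp
#include <cstdio>

// Pages-per-hour rates back-calculated from the numbers quoted above
// (40 pages -> 1.43 h, 2.86 h, 5.71 h). These are illustrative values,
// not the Rice calculator's published parameters.
const double SURVEY_PPH     = 40.0 / 1.43;  // ~28 pages/hour
const double UNDERSTAND_PPH = 40.0 / 2.86;  // ~14 pages/hour
const double ENGAGE_PPH     = 40.0 / 5.71;  // ~7 pages/hour

// Carnegie recommendation: two out-of-class hours per credit per week.
double weeklyBudgetHours(int credits) { return 2.0 * credits; }

double readingHours(double pages, double pagesPerHour) {
    return pages / pagesPerHour;
}

int main() {
    const int credits = 3;
    const double pages = 40.0;
    std::printf("Weekly out-of-class budget: %.1f h\n", weeklyBudgetHours(credits));
    std::printf("Survey:     %.2f h\n", readingHours(pages, SURVEY_PPH));
    std::printf("Understand: %.2f h\n", readingHours(pages, UNDERSTAND_PPH));
    std::printf("Engage:     %.2f h\n", readingHours(pages, ENGAGE_PPH));
    return 0;
}
```

At the understanding rate of roughly 14 pages per hour, 3 hours of reading works out to about 42 pages, which matches the figure used below.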

There are other factors beyond the type of reading that affect how long the reading takes, like the complexity of the text. The greater the amount of new information in a book, the longer it is going to take to read.

While the 40-page suggestion from Academically Adrift is one of the few research-based examples I have seen, there are additional suggestions. In one case, for a course that meets on Tuesdays and Thursdays, the instructor suggests assigning 80–120 pages for the period between Thursday and Tuesday and 30–40 pages for the period from Tuesday to Thursday, the argument being that the weekend adds 48 hours, so the students have more time and can read more.

I don’t like this argument: the students have additional time, so they should do more reading? The main point of reading assignments is to prepare for in-class activities or to reinforce them. In this example, the two class periods are the same length, so the amount of material used to prepare for each class should be the same.

So, how many pages should be assigned for each class period? It should be clear that this is not a simple or straightforward question. Let’s start with a 3-credit class that meets Monday, Wednesday, and Friday: 3 credit hours times 2 hours per credit means the course has 6 hours per week for reading and assignments. If we assume we are talking about an introductory course that uses a textbook, and we devote half of the students’ total time to reading (reading for understanding), then according to the Rice tool the students can read 42 pages in those 3 hours. The 42 pages suggested by the tool match the reading recommendation from Academically Adrift.

Dividing the 42 pages by three, students should read approximately 14 pages for each class period. In a regular semester, excluding exams and holidays, there are 40 class periods, which gives us a maximum of 560 pages per semester.

How does 560 pages compare with what courses are doing? Looking at the reading lists for some introductory science courses, the total numbers of pages assigned are 261, 256, 338, 463, 475, and 347. The average is 357 ± 87 pages. If we divide the average by the total number of class periods (40), students would be reading about 8.9 pages for each class, or 26.8 pages per week.

So, what does this mean; are introductory science courses underperforming? I don’t think so. For instance, the estimation tool I have been using lists different word densities for different types of books: a paperback is listed at 450 words per page, while a textbook has 750 words per page. If we went by word count, then 40 pages of a paperback would equal only 24 pages of a textbook.

Beyond word count, we should also ask about the number of new concepts. Additionally, is the student reading to prepare for a discussion, to get a general overview of a topic, or to gain a deeper understanding? While I would love a rule, or a set of rules, to help us design the best learning experiences, I don’t think we are there yet.

Is course design by word count the way we should go? Again, I don’t think straight numbers, whether pages or words, are the way to go. Because of variables like words per page, the number of new concepts, and the type of reading, I’m not sure we will ever have a single rule that determines the optimal number of pages to read.

Just using a number does not consider the reason for the reading assignment or the number of new topics in the text. Since learning new concepts and long-term learning are affected by things like working memory and short- and long-term memory, I think the number of new ideas and the complexity of the text may end up being the most critical factors when determining the length of reading assignments.

To determine the amount of reading appropriate for a course, we definitely need more research. However, I’m not sure this is something that is really on the research radar. If your students are having trouble, do you ever think about changing the amount of reading? How important do you think reading assignments are to your students’ learning? Do you think we are too concerned with how much reading we assign to students?

 

Thanks for Listening to My Musings

The Teaching Cyborg

But I Thought I Knew That!

“We are infected by our own misunderstanding of how our own minds work.”
Kevin Kelly

 

Over the last several decades, we have learned a lot about teaching and learning. One of the most critical things with regard to education is the addition of new information to memory. The storage of new information in memory, and our understanding of that information, is dependent on what we already know. According to Jean Piaget’s cognitive theory, three critical components of learning depend on preexisting knowledge: equilibrium, assimilation, and accommodation.

In Piaget’s model, assimilation occurs when the new information matches a learner’s preexisting views and can be incorporated into their view without changing it. Accommodation happens when new knowledge conflicts with the learner’s preexisting view of the world; in this case, the learner’s view must change to incorporate the new knowledge. Equilibrium is the condition where most new knowledge can be dealt with by the student’s existing view.

In simpler terms, preexisting knowledge can either help or hinder a student’s learning. If the new information aligns with the student’s existing knowledge, it helps; when it does not align, it hinders.

Modified From: Exploring Research-based Principles of Learning and Their Connection to Teaching, Dr. Susan Ambrose

Since no student is a blank slate, they will always have a view based on their own life experiences.  When a student learns something that does not fit their view, either their view must change (accommodation), or the new information is altered to fit their view (incorrect assimilation).

In modern education, we call these incorrect views misconceptions. To overcome a misconception so that accommodation can occur, students must actively acknowledge the misconception. Misconceptions can be especially impactful in science education, where many of the ideas taught can’t be touched or physically observed.

In chemistry, we teach students about atoms and molecules, which are too small to see or feel. In astronomy, we teach students that the earth is orbiting around the sun at 67,000 miles per hour.  However, do we feel that speed on the surface of the planet?

Beyond misconceptions derived from observations, students can also acquire misconceptions from language. In the field of genetics, a common misconception is that a dominant mutation is the most likely one to be found in the population. This misconception likely comes from the word dominant, which has six definitions according to the Merriam-Webster dictionary.

Dominant

  1. a: commanding, controlling, or prevailing over all others (“the dominant culture”)
     b: very important, powerful, or successful (“a dominant theme,” “a dominant industry,” “the team’s dominant performance”)
  2. overlooking and commanding from a superior position (“a dominant hill”)
  3. of, relating to, or exerting ecological or genetic dominance (“dominant genes,” “dominant and recessive traits”)
  4. biology: being the one of a pair of bodily structures that is the more effective or predominant in action (“dominant eye,” “used her dominant hand”)
  5. music: the fifth tone of a major or minor scale
  6. a: genetics: a character or factor that exerts genetic dominance
     b: ecology: any of one or more kinds of organism (such as a species) in an ecological community that exerts a controlling influence on the environment and thereby largely determines what other kinds of organisms are present (“dominant conifers”)
     c: sociology: an individual having a controlling, prevailing, or powerful position in a social hierarchy (“a dominant individual in a social hierarchy”)

Most of the definitions have to do with importance, power, and control, which is likely why students think a dominant mutation is the most likely one to be found in a population. However, there is another genetic term for the most common allele in a population: wild type. In genetics, the term dominant must always be used relative to something else; for example, the phenotype of the dominant allele B is expressed instead of that of the recessive allele b.

I have always preferred to use the five terms established by Hermann Muller (amorph, hypomorph, hypermorph, antimorph, and neomorph) to classify the specific types of genetic mutations over general terms like dominant and recessive. Regardless of the words used, students need to understand that we are discussing mutations that change the function of genes, which has nothing to do with a mutation’s frequency in a population.

Another common genetic misconception is that all mutations are harmful. At the DNA level, a mutation is simply a change to the DNA, and a lot of mutations have no effect. As an example, if a mutation occurs in a coding region, there is a good chance it will not change the final product. If a mutation occurred in the third position of the alanine codon GCT, turning it into GCC, the codon would still code for alanine; in fact, all four GCx codons, GCT, GCC, GCA, and GCG, code for alanine. That means any change in the third position of this codon will not affect the protein formed.
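To make the degeneracy concrete, here is a toy sketch (mine, covering only the codons mentioned above; a real genetic code table has 64 entries) that mutates the third position of GCT and looks each result up in a minimal codon table:

```cpp
#include <iostream>
#include <map>
#include <string>

// Toy lookup covering only the codons discussed above.
const std::map<std::string, std::string> kCodonTable = {
    {"GCT", "Ala"}, {"GCC", "Ala"}, {"GCA", "Ala"}, {"GCG", "Ala"},
};

int main() {
    const std::string original = "GCT";
    // Try every possible base in the third position.
    for (char base : {'T', 'C', 'A', 'G'}) {
        std::string mutant = original;
        mutant[2] = base;
        std::cout << original << " -> " << mutant << ": still "
                  << kCodonTable.at(mutant) << '\n';  // always alanine
    }
    return 0;
}
```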

There are a lot of other misconceptions in genetics, but that is a discussion for another day. When it comes to helping students deal with their misconceptions, it can help to try to understand where the misconceptions came from and what might be influencing them. As a faculty member once said, “If you want to understand what a student is thinking, ask them.” If a student does not comprehend new information, it might be because of preexisting notions. Learning what the student’s assumptions are and how those assumptions are interfering with the student’s learning will only make you a better teacher.

 

Thanks for Listening to My Musings

The Teaching Cyborg

It’s All in the Primes

“The greatest single achievement of nature to date was surely the invention of the molecule DNA.”
Lewis Thomas

 

When you’re an undergraduate student, two words mean a lot to you: prerequisite and corequisite. These two words let you know whether you must take courses one after the other or at the same time. Ever since my undergraduate days, I have found these terms fascinating. As a student, I often thought of the words differently: prerequisite meant “we believe you need this information to understand our class,” while corequisite indicated “this information might be useful, but we don’t care.”

That may seem a bit harsh, but that is the way it seemed to me when I was an undergraduate, and to be honest, it still seems that way to me. My experience in the first couple of years as a biology major was a little different from that of several of my classmates. As a high school student, I had been fortunate enough to attend a school with a robust Advanced Placement (AP) and International Baccalaureate (IB) program; because of this, I tested out of first-year biology and chemistry. Then, in a fit of madness, I took a full year’s worth of organic chemistry, with labs, over the summer.

Biology students would take organic chemistry at the same time they would take the second-year introductory biology courses, i.e., as a corequisite. The first biology class I took was Molecular Biology. One day we were sitting in class, and the professor was talking about DNA replication. If you know anything about DNA, you know the terms 5’ and 3’ (also written 5 prime and 3 prime) get used a lot. DNA is composed of two directional strands: if one strand runs 5’ to 3’ left to right, the other strand runs 3’ to 5’ left to right. DNA replication is carried out by DNA polymerase III, which synthesizes new DNA from 5’ to 3’. I could go on, but that should make the idea clear enough.

DNA replication or DNA synthesis is the process of copying a double-stranded DNA molecule. This process is paramount to all life as we know it.
DNA Replication Image by Mariana Ruiz
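As an aside, the antiparallel arrangement is easy to demonstrate in a few lines of code. Here is a minimal sketch (purely illustrative, with a made-up sequence) that builds the complementary strand and shows that, read left to right against the top strand, it runs 3’ to 5’:

```cpp
#include <iostream>
#include <string>

// Complement a single base (A<->T, G<->C).
char complement(char base) {
    switch (base) {
        case 'A': return 'T';
        case 'T': return 'A';
        case 'G': return 'C';
        case 'C': return 'G';
        default:  return 'N';  // unknown base
    }
}

int main() {
    // Top strand written 5' to 3', left to right.
    const std::string top = "GCTAGGCA";
    std::string bottom;
    for (char base : top) bottom += complement(base);
    // The paired strand, read left to right, runs 3' to 5'.
    std::cout << "5'-" << top    << "-3'\n";
    std::cout << "3'-" << bottom << "-5'\n";
    return 0;
}
```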

One day my classmate turned to me and said, “I don’t understand anything he’s talking about; what the hell does all this 5’ and 3’ stuff mean?” It took me a second to figure out what my classmate was saying; the terms had been obvious to me. I told him the names came from organic chemistry; they reference the 3rd and 5th carbons on the deoxyribose ring. Specifically, the 5’ carbon of one nucleotide binds to the 3’ carbon of another, forming the DNA backbone. “Didn’t they cover numbering carbons in your organic chemistry course?” I asked. It turned out they had not gotten to that yet.

Many times during my undergraduate education, corequisite courses did not cover material before it was needed. It was this tendency of separate classes not to line up that led me to start thinking of corequisite courses as “we really don’t care.” As a student, I usually assumed corequisite courses would be no help in a class I was taking.

As a professional, I understand the constraints that impact educational choices. Ideally, we are trying to fit all the courses needed for a degree into four years, and that is four years minus summers. I suspect that if we made every corequisite a prerequisite, we would not fit all the courses into a four-year program. Interestingly, according to the Merriam-Webster dictionary, the first known use of corequisite as we use it in education was circa 1948. The fact that the word wasn’t needed until 1948 suggests to me that we used to fit all the courses into a four-year degree without corequisites; I wonder what changed? I would assume it has to do with the growth in the amount of material covered in a bachelor’s program while maintaining the time to degree.

The other thing that impacts the usability of corequisite courses is that they are taught by different faculty, sometimes in other departments. We hire faculty because of their expertise in a field; to take full advantage of this expertise, faculty are given the freedom to design and teach subject matter in the way they determine is best. I wonder if schools are doing enough to promote communication between faculty members who teach courses related by corequisites.

Then again, is a corequisite essential enough for a faculty member to change how they teach their course? When thinking about curriculum design and degrees, I often wonder where the line is between the needs of the degree and the design freedom of a faculty member; is there a line? With the constant changes in many, if not most, fields and the growing amount of knowledge we must teach, we must rely on the experts in the field to keep the content of individual courses relevant. With the continual work to keep course content relevant, is it even possible to create a completely unified curriculum?

It may be that corequisite is the best we can do with respect to a degree’s curriculum. However, I do know that anytime I deal with the curriculum of either a single course or a whole degree, I always remember, “What the hell does all this 5’ and 3’ stuff mean?”

 

Thanks for Listening to My Musings

The Teaching Cyborg