Making Science

“The advance of technology is based on making it fit in so that you don’t really even notice it, so it’s part of everyday life.”
Bill Gates

There was a time when all biologists were also artists because they had to create drawings of their observations. Even after the invention of the camera, it was still easier to reproduce line art on a printing press than photographs for quite some time.

Modern chemists purchase their glassware online or through a catalog. However, there was a time when many chemists were also glassblowers. After all, if you can’t buy what you need, you must make it. When I was an undergraduate, my university still had a full glass shop.

Early astronomers like Galileo designed and built their telescopes. Early biologists like van Leeuwenhoek, the discoverer of microorganisms, made their microscopes.  The development of optics for both telescopes and microscopes is a fascinating story in and of itself.

In many ways, the progress of science is the progress of technology. New technology allows us to ask questions and collect data in ways that we previously could not, leading to advances in our scientific understanding.

There are still fields, like physics and astronomy, where building instruments is a standard part of the work. However, in many areas, new technology is most often acquired at conference booths or out of catalogs.

There is a problem with the model of companies providing all the scientific instrumentation. While standard equipment is readily available, because companies know about it and can make money selling it, companies rarely invest in equipment with a tiny market. Yet rare or nonexistent instrumentation is exactly where innovation can move science forward. Unfortunately, only the scientists working at the cutting edge of their fields know about these needs.

Historically, building new equipment has been a costly and challenging process. The tools used to make a prototype were expensive and took up a lot of space. Depending on the type of equipment being created, the electronics and programming might also be complicated.

However, over the last couple of decades, this has changed. There are now desktop versions of laser cutters, vinyl cutters, and multi-axis CNC machines; I even recently saw an ad for a desktop water jet cutter. There is also the continuously improving world of 3-D printers. On the electronics side, the Arduino and Raspberry Pi platforms allow rapid electronics prototyping using off-the-shelf components. Together, these tools allow the rapid creation of sophisticated equipment.

This list only represents some of the equipment currently available. The one thing that we can say for sure is that desktop manufacturing tools will become more cost-effective and more precise with future generations.

However, right now I could equip a digital fabrication (desktop style) shop with all the tools I talked about for less than the cost of a single high-end microscope. If access to desktop fabrication tools becomes standard, how will it change science and science education?

There are currently organizations like Open-Labware.net and the PLoS Open Hardware Collection making open-source lab equipment available. These organizations design and curate open-source science equipment. The idea is that open-source equipment can be built cheaply, allowing access to science at lower cost. Joshua Pearce, the Richard Witte Endowed Professor of Materials Science and Engineering at Michigan Tech, has even written a book on the open-source laboratory, Open-Source Lab: How to Build Your Own Hardware and Reduce Research Costs.

Imagine a lab that could produce equipment when it needs it. It would no longer be necessary to keep something because you might need it someday. Not only would we reduce costs, but we would also free up limited space. As an example, a project I was involved with used multiple automated syringe pumps, controlled over the internet, to dispense fluid; each pump cost more than $1,000. A paper published in PLOS ONE describes the design and creation of an open-source, web-controllable syringe pump that costs about $160.

Researchers can now save thousands of dollars and slash the time it takes to complete experiments by printing parts for their own custom-designed syringe pumps. Members of Joshua Pearce’s lab made this web-enabled double syringe pump for less than $160. Credit: Emily Hunt

Let’s take this a step further: why stop at standard equipment? As a graduate student, I did a lot of standard experiments, especially gel electrophoresis. However, a lot of the time I had to fit my experiments into the commercially available equipment. If I could have customized my equipment to fit my research, I could have been more efficient and faster.

Beyond customization, what about rare or unique equipment, the sort of thing you can’t buy? Instead of trying to find a way to ask a question with equipment that is “financially viable” and therefore available, design and build the tools to ask the question the way you want.

What kind of educational changes would we need to realize this research utopia? Many of the skills are already taught and would only require changes in focus and depth.

In my physical chemistry lab course, we learned BASIC programming so that we could model atmospheric chemistry. What if, instead of BASIC, we learned to program in the C/C++ that Arduino uses? If we designed additional labs across multiple courses that use programming to run models and simulations and to control sensors, learning to program would become part of the core curriculum.
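As a sketch of what that might look like (this assumes the Arduino core and a generic analog sensor wired to pin A0; the pin choice and the one-second sample interval are purely illustrative), reading a sensor and streaming the data takes only a few lines of Arduino C/C++:

```cpp
// Minimal Arduino-style sketch: read an analog sensor once per second
// and stream the raw value over the serial port for logging or modeling.
const int SENSOR_PIN = A0;   // generic analog sensor on pin A0 (illustrative)

void setup() {
  Serial.begin(9600);        // open the serial connection to the computer
}

void loop() {
  int reading = analogRead(SENSOR_PIN);  // 10-bit value (0-1023) on most boards
  Serial.println(reading);               // one reading per line
  delay(1000);                           // wait one second between samples
}
```

Students could log the serial output into a spreadsheet or feed it into the same kind of model they wrote for the atmospheric chemistry lab.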

In my introductory physics class, I learned basic electronics and circuit design. Introductory physics is a course that most, if not all, science students need to take. With a little refinement, that electronics and circuit design material could cover the electronics side of equipment design. The only real addition would be a computer-aided design (CAD) course, so that students and researchers can learn to design parts for 3-D printers and multi-axis CNC machines. Altogether, the training needed to use and run desktop fabrication equipment could be handled with a couple of classes.

The design and availability of desktop fabrication equipment can change how we do science by allowing the customization and creation of scientific instruments to fit the specific needs of the researcher. What do you think: should we embrace the desktop fabrication (maker) movement as part of science? Should the creation of equipment stay a specialized field? Or is it a good idea for which there simply isn’t space in the curriculum?

Thanks for Listening to My Musings

The Teaching Cyborg

How Deep is Deep Enough?

“Perfection is the enemy of the good”
Voltaire

 

Education is about depth. Generally, we start with overviews and the big picture. Then we move on, filling in the gaps and providing additional information. To fulfill one of my general education requirements, I took an Introduction to Western Civilizations course. We covered the rise of Western civilization from prehistory all the way up to the modern age. This course included only the most essential points. If I had gone on to study Western history, later courses would have expanded on the main points covered in Introduction to Western Civilizations.

As an example, there were courses on the Middle Ages like The Medieval World and Introduction to Medieval People, and then, going into more depth, Medieval Women. Each course was a narrower but deeper dive into the topic.

Another example of this depth occurred during my science education. In my Introductory Chemistry courses, we learned about the laws of thermodynamics; there are four laws if you include the zeroth law. The laws of thermodynamics were only a single chapter in my introductory textbook, covered in just a couple of class periods.

Several years later as part of physical chemistry, I took thermodynamics, a required course for chemistry and biochemistry majors.  We spent the entire course studying the laws of thermodynamics, including mathematically deriving all the laws from first principles.

While I have used a lot of my chemistry over the years, I’ve never used that deep dive into thermodynamics. There are fields and research areas where this information is needed; however, I wonder how many chemistry students need this deep a dive into thermodynamics.

Determining what to teach students and at what depth they need to learn each topic is a critical part of the educational design process. There has recently been a change to a topic that all (US) science students need to cover: the International System of Units, abbreviated SI for Système International d’unités, or classically the metric system.

The SI system is the measurement system used in scientific research. It has seven base units and 22 named derived units (made by combining base units). In the US, we have to teach students the SI system explicitly because the US is one of only three countries that have not adopted it. Science students need to use the SI system; the question is how much they need to know about the system itself.

The French first established the original two units, length (the meter) and mass (the kilogram), in 1795. The system was developed to replace the hundreds of local and regional systems of measurement that were hindering trade and commerce. The idea was to create a system based on known physical properties that were easy to understand; that way, anyone could create a reference standard. The definition of the meter was 1/10,000,000 of the distance from the North Pole to the equator along the meridian that ran through Paris. The kilogram was the mass of one liter, a 10 cm cube or 1/1000 of a cubic meter, of distilled water at 4 °C.

Basing the units on physical properties was supposed to give everyone the ability to create standards; in practice, difficulties in producing the standards meant that individually created standards varied widely. In 1889, the definitions of the meter and kilogram were changed to artifact standards; an artifact standard is a standard based on a physical object, in this case a platinum-iridium rod and a platinum-iridium cylinder kept just outside of Paris, France.

The original Kilogram stored in several bell jars.
National Geographic magazine, Vol. 27, No.1 (January 1915), p. 154, on Google Books. Photo credited to US National Bureau of Standards, now National Institute of Standards and Technology (NIST).

The use of the artifact standards lasted for quite a while; however, as science progressed, we needed more accurate standards, and the definitions changed again. The new idea was to define all the base units in terms of universal physical constants. Skipping over the krypton-86 definition, in 1983 the definition of the meter was changed to the distance light travels in a vacuum in 1/299,792,458 of a second (about 3.3 nanoseconds).

The speed of light was chosen to define the meter because its value contains the meter: the speed of light is 299,792,458 m/s. This definition might seem a little strange, but it makes a lot of sense. The speed of light is a universal constant; no matter where you are, the speed of light in a vacuum is the same. To determine the length of the meter, you measure how far light travels in 3.3 nanoseconds. If your experiment requires higher precision, you can make a more accurate standard: instead of using 3.3 nanoseconds, you could measure how far light travels in 3.33564 nanoseconds.
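Written out, the definition is just distance equals speed times time, with the travel time fixed by the constant itself:

```latex
\[
1\ \text{m} \;=\; c\,\Delta t \;=\; 299{,}792{,}458\ \tfrac{\text{m}}{\text{s}} \times \frac{1}{299{,}792{,}458}\ \text{s},
\qquad \Delta t \approx 3.33564\ \text{ns}
\]
```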

On November 17, 2018, at the 26th meeting of the General Conference on Weights and Measures, the definition of the kilogram changed. The new definition of the kilogram uses the Planck constant, which is 6.62607015 × 10⁻³⁴ kg m²/s. Like the meter, the definition of the kilogram uses a constant whose units contain the standard. And just like the meter, the precision of a realized kilogram depends on the accuracy of the measurement.
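Rearranging the definition shows how, with the meter and second already fixed, pinning the numerical value of the Planck constant pins down the kilogram:

```latex
\[
h = 6.62607015\times10^{-34}\ \tfrac{\text{kg}\,\text{m}^2}{\text{s}}
\quad\Longrightarrow\quad
1\ \text{kg} = \frac{h}{6.62607015\times10^{-34}\ \text{m}^2\,\text{s}^{-1}}
\]
```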

Up to this point, we’ve taught the kilogram as an object; the definition of the kilogram was a cylinder just outside of Paris, and no matter what happened, that cylinder was the kilogram. However, with these new definitions, it becomes possible for students to derive the standards themselves. Scientists at the National Institute of Standards and Technology (NIST) created a Kibble (watt) balance, the device used to measure the Planck constant, built out of simple electronics and Legos.

It is surprisingly accurate (±1%); you can read about it here. Using the Kibble balance, it would be possible to develop lab activities where students create a kilogram standard and then compare it to a high-quality purchased standard.

With the change to the kilogram standard, it is now possible to use the metric system to teach universal constants and have students derive all the SI standards from observations and first principles. The real question is, should we? For the bulk of science students, and scientists for that matter, how deep does their knowledge of the SI system need to be? Most are not going to become metrologists, the scientists who study measurement and measurement science. With the ever-growing amount of scientific information, we need to think about not only what we teach but how deeply we teach it. What do you think: students can now derive the SI standards from first principles, but should they? We can’t teach everything, so how do we determine what to teach and how much to teach?

 

Thanks for Listening to My Musings

The Teaching Cyborg

Obviously, They Should Read 40 Pages, Right?

“No two persons ever read the same book.”
Edmund Wilson

 

Designing a course is about more than what happens in the classroom. A course also includes homework, papers, and reading assignments, to name a few components. According to the Carnegie unit recommendation, all the out-of-class work should fit into a period equal to two hours for every credit. Therefore, a 3-credit course would have 6 hours of work outside the classroom each week; how should that time be divided? A question often asked is, how much reading should I assign?

What this usually means is, how much reading is reasonable considering all the other learning obligations the students have? In the book Academically Adrift: Limited Learning on College Campuses, Richard Arum and Josipa Roksa report that students who had at least 40 pages of reading a week showed more substantial gains on the Collegiate Learning Assessment. Since the information on the reading is self-reported, we don’t know what kind of reading this represents. There are multiple types of reading: for example, skimming, scanning, intensive, and extensive; another set of categories, surveying, understanding, and engaging, is used by the Center for Teaching Excellence at Rice University.

When students read to survey, they are just trying to find the main points. Reading for understanding requires the student to attempt to understand all of the text, down to the level of single sentences. Finally, engaging with the text requires all the skills of reading for understanding while also using the text to solve problems and build connections.

Book viewed through a magnifying glass. Image by Monica Velazquilo (CC BY-SA 3.0).

One way to estimate how much time it will take students to read a specific number of pages is the course workload calculator on the Reflections on Teaching & Learning blog, on the Center for Teaching Excellence site at Rice University. Using the workload calculator, if a student reads 40 pages in survey mode it takes 1.43 hours; understanding takes 2.86 hours; engaging takes 5.71 hours. If a three-credit class has an out-of-class workload of 6 hours per week, reading for engagement would take up nearly all of a student’s out-of-class time. Therefore, if the point of your reading assignments is engagement, either 40 pages is too heavy or it is the only thing the students should be doing.
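Those hours imply roughly constant per-page reading rates, which make the arithmetic in the rest of this post easier to follow:

```latex
\[
\frac{40\ \text{pages}}{1.43\ \text{h}} \approx 28\ \tfrac{\text{pages}}{\text{h}}\ (\text{survey}),\qquad
\frac{40}{2.86} \approx 14\ \tfrac{\text{pages}}{\text{h}}\ (\text{understanding}),\qquad
\frac{40}{5.71} \approx 7\ \tfrac{\text{pages}}{\text{h}}\ (\text{engaging})
\]
```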

There are other factors beyond the type of reading that affect how long the reading takes, like the complexity of the text. The greater the amount of new information in a text, the longer it will take to read.

While the 40-plus-page suggestion from Academically Adrift is one of the few research-based examples I have seen, there are additional suggestions. In one case, for a course that meets on Tuesdays and Thursdays, the instructor suggests assigning 80–120 pages for the period between Thursday and Tuesday and 30–40 pages for the period from Tuesday to Thursday. The argument is that the weekend adds 48 hours, so the students have more time and can read more.

I don’t like this argument; having more time does not mean students should do more reading. The main point of reading assignments is to prepare for in-class activities or to reinforce them. In this example, the two class periods are the same length, so the amount of material used to prepare for each class should be the same.

So, how many pages should be assigned for each class period? It should be clear that this is not a simple or straightforward question. Let’s start with a 3-credit class that meets Monday, Wednesday, and Friday: 3 credit hours times 2 hours per credit means this course has 6 hours per week for reading and assignments. If we assume we are talking about an introductory course that uses a textbook, and we devote half of the students’ total time to reading (reading for understanding), then using the Rice tool the students would read 42 pages in 3 hours. The 42 pages suggested by the tool match the reading recommendation from Academically Adrift.

Dividing the 42 pages by three, students should read approximately 14 pages for each class period. In a regular semester, excluding exams and holidays, there are about 40 class periods; this gives us a maximum of 560 pages per semester.

How does 560 pages compare with what courses are doing? Looking at the reading lists for some introductory science courses, the total numbers of pages assigned are 261, 256, 338, 463, 475, and 347. The average is approximately 357 ± 87 pages. If we divide that average by the total number of class periods (40), students would be reading about 8.9 pages for each class, or roughly 27 pages per week.
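A few lines of code make this back-of-the-envelope arithmetic easy to rerun with different assumptions. The sketch below uses the roughly 14-pages-per-hour “understanding” rate implied by the Rice calculator and the syllabus page counts listed above; everything else comes straight from this post.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    // Carnegie-style workload: 3 credits x 2 hours of outside work per credit.
    const double outside_hours_per_week = 3 * 2.0;
    // Assume half of that time goes to reading "for understanding"
    // at roughly 14 pages per hour (implied by the Rice calculator figures).
    const double reading_hours_per_week = outside_hours_per_week / 2.0;
    const double pages_per_hour = 14.0;

    const double pages_per_week = reading_hours_per_week * pages_per_hour;  // ~42
    const double pages_per_class = pages_per_week / 3.0;                    // ~14 (MWF course)
    const double pages_per_semester = pages_per_class * 40;                 // ~560

    // Assigned reading totals from the introductory science syllabi listed above.
    const std::vector<double> assigned = {261, 256, 338, 463, 475, 347};
    double sum = 0;
    for (double p : assigned) sum += p;
    const double mean = sum / assigned.size();

    double sq = 0;
    for (double p : assigned) sq += (p - mean) * (p - mean);
    const double sd = std::sqrt(sq / assigned.size());  // population SD, ~87

    std::cout << "Pages/week: " << pages_per_week
              << "  Pages/class: " << pages_per_class
              << "  Pages/semester: " << pages_per_semester << "\n";
    std::cout << "Observed syllabi: mean " << mean << " +/- " << sd
              << " pages (" << mean / 40 << " pages per class)\n";
    return 0;
}
```

Running it reproduces the 42-page week, the 14 pages per class, the 560-page ceiling, and the roughly 357 ± 87 pages actually assigned.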

So, what does this mean; are introductory science courses underperforming? I don’t think so. For instance, the estimation tool I have been using lists different word densities for different types of books. For a paperback book, it lists 450 words per page, while a textbook has 750 words per page. If we go by word count, then 40 pages of a paperback equal about 24 pages of a textbook.
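Put in terms of words per assignment, the two page counts are roughly equivalent:

```latex
\[
40\ \text{paperback pages} \times 450\ \tfrac{\text{words}}{\text{page}}
= 18{,}000\ \text{words}
= 24\ \text{textbook pages} \times 750\ \tfrac{\text{words}}{\text{page}}
\]
```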

Beyond word count, we should also ask about the number of new concepts. Additionally, is the student reading to prepare for a discussion, to get a general overview of a topic, or to gain a deeper understanding? While I would love to have a rule or a set of rules that would help us design the best learning experiences, I don’t think we are there yet.

Is course design by word count the way we should go? Again, I don’t think straight numbers, whether pages or words, are the way to go. Because of variables like words per page, the number of new concepts, and the type of reading, I’m not sure we will ever have a single rule that determines the optimal number of pages to read.

Just using a number does not consider the reason for the reading assignment or the number of new topics in the text.  Since new concepts and long-term learning are impacted by things like working memory, and short- and long-term memory, I think the number of new ideas and the complexity of the text may end up being the most critical aspects when determining the length of reading assignments.

To determine the amount of reading appropriate for a course, we definitely need more research. However, I’m not sure this is something that is really on the research radar. If your students are having trouble, do you ever think about changing the amount of reading? How important do you think the reading assignments are to your students’ learning? Do you think we are too concerned with how much reading we assign to students?

 

Thanks for Listening to My Musings

The Teaching Cyborg

But I Thought I Knew That!

“We are infected by our own misunderstanding of how our own minds work.”
Kevin Kelly

 

Over the last several decades, we have learned a lot about teaching and learning. One of the most critical things with regard to education is the addition of new information to memory. The storage of new information in memory, and our understanding of that information, depends on what we already know. According to Jean Piaget’s cognitive theory, three critical components of learning depend on preexisting knowledge: equilibrium, assimilation, and accommodation.

In Piaget’s model, assimilation occurs when new information matches a learner’s preexisting views and can be incorporated into their view without changing it. Accommodation happens when new knowledge conflicts with the learner’s preexisting view of the world; in this case, the student’s view must change to incorporate the new knowledge. Equilibrium is the condition where most new knowledge can be handled by the student’s existing view.

In simpler terms, preexisting knowledge can either help or hinder a student’s learning. If the new information aligns with the student’s existing knowledge, it helps; when it does not align, it hinders.


Modified From: Exploring Research-based Principles of Learning and Their Connection to Teaching, Dr. Susan Ambrose

Since no student is a blank slate, they will always have a view based on their own life experiences.  When a student learns something that does not fit their view, either their view must change (accommodation), or the new information is altered to fit their view (incorrect assimilation).

In modern education, we call these incorrect views misconceptions. To overcome a misconception so that accommodation can occur, students must actively acknowledge it. Misconceptions can be especially impactful in science education, where many of the ideas taught can’t be touched or physically observed.

In chemistry, we teach students about atoms and molecules, which are too small to see or feel. In astronomy, we teach students that the earth is orbiting around the sun at 67,000 miles per hour.  However, do we feel that speed on the surface of the planet?

Beyond misconceptions derived from observations, students can also acquire misconceptions from language. In the field of genetics, a common misconception is: a dominant mutation is the most likely one to be found in the population. This misconception likely comes from the word dominant, which has six definitions according to the Merriam-Webster dictionary.

Dominant

  1. a: commanding, controlling, or prevailing over all others (“the dominant culture”)
     b: very important, powerful, or successful (“a dominant theme,” “a dominant industry,” “the team’s dominant performance”)
  2. overlooking and commanding from a superior position (“a dominant hill”)
  3. of, relating to, or exerting ecological or genetic dominance (“dominant genes,” “dominant and recessive traits”)
  4. biology: being the one of a pair of bodily structures that is the more effective or predominant in action (“dominant eye,” “used her dominant hand”)
  5. music: the fifth tone of a major or minor scale
  6. a: genetics: a character or factor that exerts genetic dominance
     b: ecology: any of one or more kinds of organism (such as a species) in an ecological community that exerts a controlling influence on the environment and thereby largely determines what other kinds of organisms are present (“dominant conifers”)
     c: sociology: an individual having a controlling, prevailing, or powerful position in a social hierarchy

Most of the definitions have to do with importance, power, and control, which is likely why students think a dominant mutation is the most likely one to be found in a population. However, there is another genetic term for the most common allele in a population: wild type. In genetics, the term dominant must always be used relative to something else; for example, the phenotype of the dominant allele B is expressed instead of that of allele b.

I have always preferred the five terms established by Hermann Muller to classify the specific types of genetic mutations over general terms like dominant and recessive. Regardless of the words used, students need to understand that we are discussing mutations that change the function of genes, which has nothing to do with a mutation’s frequency in a population.

Another common genetic misconception is that all mutations are harmful. At the DNA level, a mutation is simply a change to the DNA, and many mutations have no effect. As an example, if a mutation occurs in a coding region, there is a good chance it will not change the final product. If a mutation occurred in the third position of the alanine codon GCT and changed it to GCC, it would still code for alanine; in fact, all four GCx codons (GCT, GCC, GCA, and GCG) code for alanine. That means any change in the third position of this triplet will not affect the protein formed. There are a lot of other misconceptions in genetics, but that is a discussion for another day.
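A tiny lookup table makes the point concrete; this is just an illustrative C++ sketch over the four GCx codons of the standard genetic code (DNA sense-strand spelling):

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // Third-position ("wobble") changes in the GCx family never change the
    // amino acid: all four codons encode alanine in the standard genetic code.
    const std::map<std::string, std::string> codon_table = {
        {"GCT", "Ala"}, {"GCC", "Ala"}, {"GCA", "Ala"}, {"GCG", "Ala"},
    };

    for (const char third : std::string("TCAG")) {
        const std::string codon = std::string("GC") + third;
        std::cout << codon << " -> " << codon_table.at(codon) << "\n";
    }
    return 0;
}
```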

When it comes to helping students deal with their misconceptions, it can help to try to understand where the misconceptions came from and what might be influencing them. As a faculty member once said, “If you want to understand what a student is thinking, ask them.” If a student does not comprehend new information, it might be because of preexisting notions. Learning what the student’s assumptions are, and how those assumptions are interfering with the student’s learning, will only make you a better teacher.

 

Thanks for Listening to My Musings

The Teaching Cyborg

The Use of Technology Requires Flexibility

“Any sufficiently advanced technology is indistinguishable from magic.”
Arthur C. Clarke

 

At one time, Best Buy was the largest reseller of CDs; this year, Best Buy announced it was going to stop selling them. I’m a little sad about that; I rather like CDs and own hundreds of them. Don’t get me wrong, I like the convenience of MP3s; I still own an iPod and have a flash drive with over 3,000 songs plugged into my car radio. However, I still buy CDs, and it looks like I am going to have to give that up.

This decision from Best Buy got me thinking about audio technology in general. Have you ever thought about how audio technology has changed in the last half-century? Imagine telling someone in 1968 that they could have a device smaller than a deck of cards that let them listen to thousands of songs. Think about all the different types of audio technology that have been available over the last 50 or so years:

  • Vinyl Albums – early 1900s to the 1980s (peak usage)
  • 8-Track Tapes – 1964 to 1988
  • Cassette Tapes – 1962 to the early 2000s (still made in minimal amounts)
  • Compact Discs – 1982 to present (phasing out)
  • MP3s – 1993 to present
  • Streaming Services – 2005 to present

Types of audio technologies: vinyl records, 8-track tapes, cassette tapes, CDs, MP3s, and streaming services. Image from PJ Bennett

Over the last 50 years, several different types of audio technology have been developed and mostly discarded. And audio is just one area of technological development. Add phones, movies, computers, cameras, and so on, and the overall pace of development has been astounding.

Something else I find interesting is the tendency to forget about technology as it develops. I have had an interest in photography for most of my life, and in that time I have seen a lot of changes. The most prominent has been the conversion from film to digital. Somewhere between three and five years after digital pictures became the predominant form of photography, I started noticing ads for new photo-editing tools. When I first saw these tools, my thought was, “Why would you want that?” The purpose of these tools was to add grain, to give your photos the traditional film look.

You might ask why this surprised me. Well, for many years (maybe even decades), each month I would receive photography magazines in the mail. In each of these magazines, there was always a review of the newest film, and one of the questions asked about each film was how big you could enlarge a picture before the grain became evident. That’s right: for decades, the goal of film development was to reduce and eliminate visible grain. Now that digital cameras have finally given us that dream, people want to put the grain back. People have forgotten that in the days of film, grain was the enemy.

In addition to all the new technologies we live with, the rate at which devices and technologies get replaced by new ones has increased. There is even a term for this rate of change, the velocity of obsolescence: the speed at which a newer or different technology replaces an older one. In a Forbes article from several years ago, one of the things discussed was the time to obsolescence of web-enabled services: in 1998 the lifetime was 3–5 years, while in 2013 it was 14–18 months. New smartphones come out every 12 months. With subscription plans, the software update cycle has changed drastically. As an example, Adobe software used to update every 18 months, with a new version every three years; now they seem to add new features every couple of months. With this rate of change, there is no way to say what technology, our daily lives, and by extension society will look like in 20 or 30 years.

What does this increasing speed of technological development mean for schools? The most important thing is flexibility; schools need to develop a mindset that focuses not on specific technologies but on the teaching and operational needs of the school. Schools also need to realize we are entering a time when they can’t take years examining, testing, and adopting new technology and expect it to stay current. While it is essential to think critically about educational technology, we need to shorten the selection and implementation process so that we can get the most out of the life expectancy of the technology.

With the increasing impact of Bring Your Own Device (BYOD), schools are also going to need technologies that are device-agnostic. As an example, many schools have adopted clickers (student response systems). While the stand-alone devices work just fine, to reduce the number of things students need to purchase, schools have started using smartphone apps. Cost savings are always a good idea; the problem with apps is the upkeep. As I said earlier, new phones come out every year, and operating systems update multiple times a year; the maintenance could amount to two or more versions of each app per year.

Why two or more versions a year? The mobile release cycle means you, or the company you purchase your app from, will need to maintain at least two apps, one for iOS and one for Android. In case you are one of the people who think you only need iOS, the current US market share is 53.7% iOS and 45.96% Android (plus a few smaller platforms). With updates and end-of-support cycles on operating systems, you will need a new version of the app for each operating system at least once a year.

However, even if you go with mobile apps, this is still limiting. All mobile apps give you is access to smartphones; you can probably cover tablets with the same app, but laptops and netbooks/Chromebooks are likely out of the question.

There is another option that schools could be using: web apps, applications that live and run on a web page. A web app is accessible through a web browser, which means it is available on any web-enabled device. That gives you access to phones, tablets, laptops, desktops (labs, distance education), netbooks/Chromebooks, and many emerging smart devices. Also, since the app is a web page, you only need to maintain a single app, and operating system updates have little effect on it.

Additionally, schools have turned to apps so that they could add functionality to the clickers. Schools now want the ability to have students type out long answers, do complicated math, and so on. They have forgotten that the reason for clickers was to collect quick, short feedback. That feedback was then used to spark peer-to-peer discussion or to steer the direction of a lecture.

With the current rapid pace of technological development, schools need a streamlined method of assessing and choosing technology. Schools also need to think about multiple platforms so that they don’t exclude students. Because of the rate of development and the diversity of devices, it is entirely possible that schools will need to do more and more of their own maintenance and development. After all, the rules and design considerations used to make the latest killer app might not be usable in an app developed for education.

 

Thanks for Listening to My Musings

The Teaching Cyborg