Hypotheses, parent hypotheses and instrumentation

The pursuit of knowledge begins with a hypothesis.

Whether or not we are conscious of it, the act of posing a hypothesis itself rests on a hypothesis (or assumption): that the truth value of a hypothesis can be ascertained, that the hypothesis can be verified or refuted. This must be assumed, for otherwise posing the hypothesis would be a vain pursuit. One asks a question with the implicit assumption that an answer exists. I do not ask, “Why is the garbage can sprinkler farm house?”, because I do not believe that to be a question. I do not believe it to be a question because there exists no answer, and there is a specific logical structure to questions and answers: questions have answers. If a question has no answer, then it is not really a question; it is just a string of words I can utter that has no meaning.

I also do not believe “Why does the farm house cook?” to be a question, because even though I can form some concept of what the sentence means, it is still a meaningless question. Questions, in order to be questions, must make sense, be meaningful, and belong to the category of things that have answers.

No hypothesis can be posed without another hypothesis being contained within it. For example, when I pose the hypothesis that expressing a certain protein in liver cells will have causal influences on the levels (and thus the function (note: that protein levels correlate with function is itself a hypothesis)) of the other proteins expressed in that cell, this hypothesis carries within itself a vast multitude of preceding and necessary hypotheses. Hypotheses such as: that proteins exist, that proteins carry out functions, that proteins interact, that our model of the cell is true, that our instrumentation works the way we think it does (from the accuracy of a pipette that draws our chemical solutions, to the mechanical integrity of a centrifuge, to the accuracy of our electronic thermostats, to the software we use, and so on), and many others. Essentially every model currently held to be “true” within science is a hypothesis that has yet to be refuted. It can never be proven true (as this would be inductive and not deductive), so it remains a hypothesis yet to be refuted. And so any hypothesis H stated at time t(n) rests on all other non-refuted hypotheses at times t(n-1), t(n-2), and so on, until it finally rests on the axioms that provide the very logical structure (rules, assumptions, postulates, definitions) of the field of knowledge that the hypothesis exists in.

Now, I am curious about the logical structure behind the explanation/interpretation of instrumentation in quantum mechanical experiments. Actually, about instrumentation in general.

A couple of months ago I was watching a scientific documentary on Netflix in which there was “imaging” of the surfaces of materials such as metals. The images were something like this:

And so we are told a story. The interpretation of the image we are viewing (note: the image itself is rendered via atoms, whether as a printout from the computer used to create it or, in our case, as the atoms composing our laptop screens) is that atoms really are physically extended things in space, things we can almost visualize. The amazing thing about this image is the meaning conveyed: that atoms really are physical, geometric things that exist. It is interpreted as confirmation that beneath what we can see are smaller, real, physical things that we cannot see, and that those things make up the things we can see. Atoms. Matter.

But here is the problem with an experiment like this. What you get from this image is that atoms have three-dimensional shapes. There are parts of an atom that are closer to one point in space (you, say, or something one nanometer away from a given GPS location), and there are parts of that very same atom that are farther from that very same point. That is to say, a single atom is not a dimensionless point; it has a physical geometry, with regions that differ over space. The atoms on the surface of the substance depicted above clearly have dimension: there are parts of the atoms that appear closer to the observer's perspective than other regions of the very same atom. Pick any object in the room you are in. Comprising that object are atoms; let us pick one of them. If we were to measure the distance from, say, your nose, or any other single point in space, to that atom, we could get multiple measurements, because we could be measuring from that point (your nose) to different regions of the same single atom. Just like measuring your distance from a ball: that distance depends on which point on the ball you measure to, the front, the side, the back, the top, or the bottom.
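The ball analogy can be put in numbers. A minimal sketch, modeling the atom as an idealized sphere (the function name and the specific coordinates are my own illustration, not anything from the documentary):

```python
import math

def distance_range_to_sphere(observer, center, radius):
    """Nearest and farthest distances from an external point to a sphere's surface.

    The nearest surface point lies along the line to the center, at
    (center distance - radius); the farthest at (center distance + radius).
    """
    d = math.dist(observer, center)
    if d <= radius:
        raise ValueError("observer is inside the sphere")
    return d - radius, d + radius

# An observer 10 units from the center of a unit-radius "atom":
near, far = distance_range_to_sphere((0, 0, 0), (10, 0, 0), 1.0)
print(near, far)  # 9.0 11.0
```

The two valid measurements to the "same" object differ by a full diameter, which is exactly the ambiguity the paragraph describes.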

Ok. This might seem obvious. Why am I paying such close attention to it? Because if this is true, then we have to see what causal effects, what logical relations, this new truth has for all the preceding hypotheses that this hypothesis is founded upon. In this case, what stood out to me was instrumentation. The instruments that perform these imaging techniques, which tell us that atoms have geometry, involve measuring: they have to be able to measure distances, and thus geometries. There is a source, which exists in space and thus has a (hypothesized) very real space it occupies, with a very real distance separating it from the sample to be measured. The method of measurement is to send, from the source, a laser (electromagnetic radiation) moving at a constant and known (Maxwellian/Einsteinian) speed, the speed of light; it hits the surface of the object and bounces back to a detector. We can measure the time between the laser beam being emitted from the source and being registered by the detector. Since we know the time the laser traveled and how fast it was traveling, we can tell how far it traveled. If we can be incredibly precise with this distance, we can compute the geometry of the surface the light reflects from. Imagine you were blindfolded and given a device, say a cannon or a gun, that you knew shot something at a constant, defined speed. You shoot in a direction, and all you learn is the length of time it took to hit something. By moving slowly left and right, shooting over and over and collecting data points, you could deduce the shape of the thing you were shooting at. You could do this in the very room you are in now: closer things would get hit earlier, farther things later.
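The time-of-flight idea above can be sketched as a toy model, assuming the very things the later sections call into question: a pulse at a constant, known speed and a perfectly point-like detector. The profile values here are invented for illustration:

```python
# Toy time-of-flight scan: recover a 1-D surface profile from round-trip times,
# assuming a constant, known pulse speed and an ideal point detector.
SPEED = 3.0e8  # pulse speed in m/s (the speed of light)

def round_trip_time(distance):
    """Time for a pulse to reach a surface `distance` meters away and bounce back."""
    return 2.0 * distance / SPEED

def inferred_distance(t):
    """Distance implied by a measured round-trip time t."""
    return SPEED * t / 2.0

# A hidden 1-D surface: true distance (in meters) at each scan position.
true_profile = [5.0, 4.8, 4.8, 5.2, 5.0]

# "Shoot" at each position, record the times, then reconstruct the profile.
times = [round_trip_time(d) for d in true_profile]
recovered = [inferred_distance(t) for t in times]
print(recovered)  # matches true_profile (up to floating-point rounding)
```

Closer points return sooner, exactly as in the blindfolded-cannon picture; the whole reconstruction rests on the timing being trustworthy.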

Now, returning to the experiment itself: the entire instrument is made up of atoms, including the detector. If we were to visualize the surface of the detector, it should (by hypothesis) look like the surface of the material in the image above, since it too is made up of atoms. If that is the case, then the detector itself has a geometrical shape, with regions of any given atom that are closer to or farther from any given point in space.

This is important, because the detector detects once the laser reaches (excites) it. The laser is assumed to be so precise (so small in physical extent, in terms of the wave traversing space, which involves concepts such as frequency, amplitude, and wavelength) that it contacts a point on the material (a collection of atoms) smaller than an atom. I find this hard to get across. What I mean is this: if you use your right index finger to touch your leg, the thing we would call the point of contact is not really much of a point. It probably occupies an area of several millimeters. When you try to be precise with your big clunky fingertip, it still touches an area of your leg. Our laser, like our fingertip, is going to touch the surface of the material. In order for it to produce atomic resolution in an image, the laser (fingertip) has to have an area of contact smaller than the area of an atom. It must be smaller than an atom. That is a prerequisite for being able to tell that one point on an atom is closer or farther away than another, which, after all, is the entire pursuit of this imaging.
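The prerequisite stated above, that the probe's contact area must be smaller than the feature it resolves, can be written as a simple check. The sizes below are illustrative assumptions of mine, not measured values:

```python
def can_resolve(spot_diameter, feature_diameter):
    """A probe can distinguish parts of a feature only if its
    contact spot is smaller than the feature itself."""
    return spot_diameter < feature_diameter

ATOM_DIAMETER = 1e-10     # roughly 0.1 nm, a typical atomic scale
FINGERTIP = 5e-3          # a fingertip contact patch, ~5 mm
SUB_ATOMIC_PROBE = 5e-11  # hypothetical probe spot smaller than an atom

print(can_resolve(FINGERTIP, ATOM_DIAMETER))         # False
print(can_resolve(SUB_ATOMIC_PROBE, ATOM_DIAMETER))  # True
```

The fingertip fails the test by about eight orders of magnitude; only a hypothetical sub-atomic spot passes it.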

So, this laser is so precise that it can touch one part of an atom while not touching another part of that very same atom. Ok. Now, let us try to visualize this experiment occurring. Shrink yourself down to the size of the atoms. The laser is emitted from the source, travels across space, and hits an area of an atom that makes up the “surface” of the material; then the light reflects and moves toward the detector, where the laser again touches an area on the surface of another geometrically three-dimensional atom. Is this not a problem? The reason it would be a problem stems from the fact that we take this image to be a truth statement. It is a visual statement, and it can be verbalized by stating that it is true, and representative of reality, that atoms are three-dimensionally extended things. We make this statement because we assume that our hypothesis that our instrumentation functions “properly” is true. Our concept of the instrumentation functioning properly is that it is representative of reality, that it functions as we think it does.

But now, if we recognize that a physical detector is made up of physical atoms which all have a three-dimensional geometry, that the laser can hit any of various parts of such an atom and still be touching that same atom, and that each of these parts can be physically closer to or farther from the path along which the laser has been traveling, then we cannot know whether the laser hit a region of the detector's atoms that was closer to, or farther from, the point the laser struck on the atoms of the material.

Maybe something like this:


In this image you can see that the light hits a specific point on an atom of the material, then moves toward the detector and hits a specific point on the detector. Now, the detector can tell when it was excited (when the ‘detection’ occurred). It cannot tell, by default, which area of a single atom was activated, since it is the very same atom, just one thing. The detection is binary, either activated or not; no more information can be contained within the activation. The activation/excitation/detection within the detector would be exactly the same whether it occurred at one point of the atom or another. It is still the same atom. And so the light source could hit a single point on the material, but the laser, being a wave that spreads over an area as it travels, could strike different regions of the very same detector atom. Why does this matter? Because different areas of the same atom necessarily occupy different regions of three-dimensional space; relative to each other, these areas will be closer to or farther from a straight line drawn from the laser's point of contact on the surface of the material. Meaning, one laser pulse will travel a bit farther and another a bit less. Moving at a constant and equal speed, this means that in one scenario the detected wave will have traveled over a shorter time span, and in the other scenario over a longer one.

Why does this matter? In these scenarios we have a source that emits at the same time point, time = 0, and in both cases the laser hits the same point on the atom of the surface material being imaged, then reflects and moves toward the detector. In both instances the laser strikes the same identical detector atom (in non-simultaneous events, of course!). In one instance the atom is contacted at a point closer to, and in the other at a point farther from, the point on the surface material. This means that the time recorded in one instance will be longer than in the other. Most importantly, they will be different! Yet in both scenarios the same point on the surface being imaged is being measured by the laser. It is the same point of contact, and thus our hope, our hypothesis for the instrument we are using, is that when the laser hits a specific point on the surface, it will reliably be detected as an accurate and truthful representation of what the actual surface material is like in reality. But this cannot be.
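The near-side versus far-side scenario above can be put in numbers. The atomic radius and path length below are illustrative assumptions, chosen only to show that the two recorded times must differ:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s
ATOM_RADIUS = 5e-11     # illustrative atomic radius, ~0.05 nm
PATH = 1e-2             # illustrative point-of-contact-to-detector path, 1 cm

# Striking the near side vs. the far side of the same detector atom
# changes the path length by up to one atomic diameter.
t_near = (PATH - ATOM_RADIUS) / SPEED_OF_LIGHT
t_far = (PATH + ATOM_RADIUS) / SPEED_OF_LIGHT
print(t_far - t_near)  # a nonzero time difference for the "same" measurement
```

The detector, being binary, cannot say which of the two times corresponds to which contact point on its own atom, so the same surface point yields two different data points.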

This cannot be the case, because we can have two very different pieces of data for a single measurement. Is that point in space, the thing we are trying to image, closer or farther? Did it take the laser 0.0000000000000000000000000000000000000000000000001 or 0.00000000000000000000000000000000000000000000000011 seconds to travel to the detector? It might seem like a small difference, but in reality it is a 10% difference, which is highly significant. If you looked at any object you can see right now and could not tell whether it was where it actually is, or 10% closer, or 10% farther away, then the print on my wall is either 8 feet away, or close to 7, or close to 9 feet away. I do not mean seemingly; I mean truthfully, in reality. Because we take that image to be a truth statement, and I think that the print on my wall is truthfully a specific distance away from me.
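The 10% arithmetic can be checked directly. The exponent below is just one reading of the written-out decimals; what matters is that one time is 1.1 times the other, so the inferred distances differ by 10% regardless of scale:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def inferred_distance(t):
    """Distance implied by a one-way travel time t at constant speed."""
    return SPEED_OF_LIGHT * t

# Two candidate travel times for the same measurement, one 10% longer.
t1 = 1.0e-49
t2 = 1.1e-49

d1, d2 = inferred_distance(t1), inferred_distance(t2)
relative_error = (d2 - d1) / d1
print(relative_error)  # ~0.1: a 10% uncertainty in where the surface is

# The print-on-the-wall example: an 8-foot distance with 10% uncertainty.
print(8 * 0.9, 8 * 1.1)  # 7.2 8.8
```

Because distance is proportional to time at constant speed, the 10% ambiguity in the time carries over unchanged to the distance.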

So we cannot truly tell, from the data, what the true image, the true three-dimensional geometry of the material, could be. We cannot, because we start with the hypothesis that the image produced is true. If it is true, then we must take that new piece of data and see how it affects all the preceding hypotheses upon which that hypothesis (that the image is true) rests. In doing so, we find that the hypothesis that the image is true becomes an impossible contradiction: it cannot be true, since it refutes itself, as argued above. The result of the hypothesis implies a direct refutation of a hypothesis on which the parent hypothesis relies. So the imaging is meaningless. Yet we take it to be true.

I am also interested in this line of reasoning and looking at preceding hypotheses when it comes to instrumentation for experiments in quantum mechanics. But I am tired and this thought will have to wait for another time.