The 6 precepts for nanoscience

When people hear the words nanoscience or nanotechnology, many imagine the secret laboratory of a government-controlled mad scientist where very, very small robots are developed to be injected into our blood and control us from the inside. That is, the miniaturization of macroscopic structures.

Nanoscience studies the physical, chemical and biological properties of atomic, molecular and macromolecular structures, that is, structures with a size between 1 and 100 nanometres. One nanometre is equivalent to 10^-9 m, or 0.000000001 m.

The study of nanoscience and the development of nanotechnology offer many advantages, such as medicines encapsulated in molecules that release their active component only where it is needed. This is the case in the treatment of cancer patients, where it could be possible to release the drug only in the areas affected by the tumour instead of affecting other body tissues. Another example is the study of materials with better electrical conduction properties, or of new methods to transmit information through materials in which at least one dimension is within the nanometric scale.

Perhaps the most famous material to come out of nanoscience is graphene. In fact, Andre Geim and Konstantin Novoselov were awarded the 2010 Nobel Prize in Physics for their experiments with graphene.

Artistic representation of graphene (Source: Wikimedia Commons)

One of the most important ways of obtaining materials at the nanometric scale is to use techniques from chemistry, because the properties that allow atoms and molecules to bond together can be exploited to create nanometric structures.

But the question here is whether the miniaturization of macroscopic structures down to nanometric scales can be considered nanoscience or nanotechnology. The answer is no. In fact, not everything is nanoscience or nanotechnology, and there is a set of six principles or precepts that define what this emerging branch of science is.

First precept: Bottom-up building approach

This implies that miniaturizing, that is, reducing the size of something, is not nanoscience. Nanoscience is using the fundamental building blocks, atoms and molecules, and exploiting their properties to build nanometric structures that perform specific functions.

Second precept: Cooperation

This is not about different institutions cooperating with each other to develop nanostructures, which is also important, but about developing different nanostructures with different functionalities that cooperate with each other to give rise to more complex nanodevices with better functionalities.

Third precept: Simplicity

Simplify the problems faced by nanotechnology developments so that only the necessary scientific laws are used, avoiding unnecessary complexity.

Fourth precept: Originality

Coming back to the example of the robot at the beginning of this post: the aim is not to take things that already exist and simply reduce their size; what is sought are genuinely different structures. Reducing the scale has more implications than one might think. For instance, the volume depends on the cube of the length while the area depends on its square, which can make a straightforward scale reduction unfeasible, as the example below illustrates. It is therefore necessary to be original and creative with the developments.
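
To put a number on it (my own illustration, not part of the original precept): shrinking an object by a factor of 10 in every dimension divides its surface area by 100 but its volume, and hence its mass, by 1000, so the surface-to-volume ratio grows tenfold. Surface effects that were negligible at the large scale can then dominate the behaviour, and a design that works when big may simply stop working when small.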

Fifth precept: Interdisciplinary nature

We mentioned earlier that cooperation between institutions is also important, but cooperation between different areas of science is even more so. For this reason, cooperation between biologists, chemists, physicists and engineers is essential. In nanoscience, being purely a physicist, a chemist or a biologist does not provide complete knowledge, because one has to face problems that will not be solved unless the field of knowledge is widened.

Sixth precept: Observation of nature

Nature offers us many examples of nanotechnology. The molecules that make up our tissues and organs, as well as the way they are organized and interact with one another, are the best example of nanotechnology. If we observe and study them, our developments will be much more innovative and efficient, and will improve our lives.

It is hard to find examples that follow all these precepts simultaneously, but that is why science and scientists exist: to develop nanostructures following these precepts, using the laws that nature imposes.

References

"The Nobel Prize in Physics 2010". Nobelprize.org. Nobel Media AB 2014. Web. 14 Aug 2014.

El nanomundo en tus manos. Las claves de la nanociencia y la tecnología. José Ángel Martín-Gago, Carlos Briones, Elena Casero y Pedro A. Serena. Editorial Planeta S.A. Junio 2014

More flavors

The subatomic particle zoo has been growing over the years since the discovery of the electron as one of the fundamental constituents of atoms. In the beginning, these new particles were discovered accidentally, through the use of the first particle accelerators or through the study of cosmic rays, that is, the high-energy particles coming from outer space that impact the atmosphere, where they collide with atoms in the air and produce new particles. There was no theoretical model predicting these particles, so everything came as a surprise.

One example occurred in 1936, when Carl Anderson and Seth Neddermeyer, then at Caltech, were studying cosmic rays with a cloud chamber placed in a magnetic field and found particle tracks that curved differently from those of electrons. From the direction of their curvature it was clear that these particles were negatively charged, but their radius of curvature was larger than that of electrons. Assuming the new particle had the same charge as the electron, and comparing particles moving at the same speed, such a radius of curvature meant it had to be heavier than the electron. Its curvature was also compared to that of proton tracks (even though these have positive charge): the radius of curvature of the new particle was smaller than that of protons, so its mass had to be smaller.
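
The reasoning relies on a relation the post does not state explicitly: for a particle of charge q and mass m moving at speed v perpendicular to a magnetic field B, the radius of curvature of its track is r = mv/(qB). At equal charge and equal speed, a larger radius therefore means a larger mass, which is how the new particle was placed between the electron and the proton.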

The existence of this new particle added further complexity to the particle zoo that was appearing at the time. It was initially given the name mesotron, and it was even thought to be the carrier of the strong force predicted by Yukawa, so it was renamed the mu meson. After the discovery of the pion (or pi meson) and other mesons (a meson is a particle made of two quarks with an integer value of spin), it became clear that the mu meson did not share the properties of mesons: it did not interact through the strong force. Besides, it was discovered that mu mesons decayed into neutrinos and antineutrinos. Because of this, the name was changed again and it was given the name muon.

Muon neutrino (Source: Particle Zoo)

The appearance of neutrinos and antineutrinos raised an important question: are they the same neutrinos as those associated with electrons in beta decay? It was clear that physicists had to continue the task initiated by Reines and Cowan as neutrino hunters and try to solve the mystery.

A good way of studying the nature of the neutrinos associated with muons is through the decay π → μν. The problem was that obtaining pions in quantities large enough to conduct the research required energies that could not be reached by studying pions generated in the atmosphere by cosmic-ray collisions. It was therefore necessary to use particle accelerators, plus a group of bright researchers who realized that the problem could be investigated through the study of this decay.

This conjunction of an accelerator and bright researchers took place at the Brookhaven particle accelerator in 1962, where Leon Lederman, Mel Schwartz and Jack Steinberger were working.

In his book The God Particle (let's leave aside the story of the book's name), Lederman tells the story of how the experiment was conceived and built.

Using Brookhaven's Alternating Gradient Synchrotron, which in 1960 had reached unprecedented energies by accelerating protons to 33 GeV, Lederman, Schwartz and Steinberger accelerated protons to an energy of 15 GeV. The proton beam was then directed onto a beryllium target where the collisions produced pions. After 70 feet of free flight, during which the pions decayed into muons and neutrinos, the particles hit a shield more than 13 m thick weighing 5000 tons, made of steel from battleship plates, where everything was stopped except the neutrinos. The result was a beam of (muon-associated) neutrinos with energies of up to 1 GeV.

What they detected were the tracks of 34 muons (over a background of about 5 muons from cosmic rays). If the neutrinos from pion decay were the same as those from beta decay, theory predicted they should have observed about 29 electron tracks, whose signature was well known to the team; if they were different, at most one or two electrons would have been observed, coming from kaon decays such as K+ → e+ + νe + π0. No electrons were observed.

For the discovery of the muon neutrino, Lederman, Schwartz and Steinberger received the Nobel Prize in Physics in 1988.

Leon Lederman (Source: Nobelprize.org)

Mel Schwartz (Source: Nobelprize.org)

Jack Steinberger (Source: Nobelprize.org)

We now know that there are three types of neutrinos. The third is associated with the tau lepton, which is like the muon and the electron but even heavier. However, the discovery of the tau neutrino did not solve all the unknowns about neutrinos. We still have a lot to learn, but we had better leave that for another occasion.

References

Discovery of the Muon-Neutrino

T2K Experiment

The God Particle. Leon Lederman and Dick Teresi.

Seth H. Neddermeyer and Carl Anderson. Note on the Nature of Cosmic-Ray Particles. Phys. Rev., Vol. 51, 884.

Determinism, indeterminism and chaos

Physical phenomena in nature occur for a reason; they follow specific patterns or laws. But can we predict with total certainty what the result will be?

Depending on the phenomenon, we may be able to predict the exact outcome, or only an outcome with some uncertainty. It is even possible that the only thing we get is a probable value according to a statistical criterion.

In the history of science we have gone through different stages. There was a time when it was believed that everything could be exactly predicted: a deterministic period. However, the discovery of new phenomena led to the idea that it was impossible to know their exact outcome, and a non-deterministic stream of scientific thinking appeared. Later, the study of non-linear dynamical systems opened a new field: the study of systems with completely erratic and unpredictable behaviour even though, in principle, their formulation can be deterministic. This field is known as chaos.

Scientific determinism holds that, although the world is complex and unpredictable in many ways, it always evolves according to principles or rules that are totally determined, chance being only apparent.

In the middle of the 19th century, determinism began to fall apart piece by piece. There were two reasons for this.

First, a complete and detailed knowledge of the initial conditions of the system under study was needed in order to introduce them into the equations governing the system's evolution and obtain a result.

Second, the dynamics of systems made of a large number of particles was far too complex to solve.

This second reason made it necessary to introduce concepts from probability and statistics to solve such problems, giving rise to a new field of mechanics, statistical mechanics, and with it a change in the scientific paradigm from a deterministic to a non-deterministic one.

The discovery of quantum mechanics also had consequences for the deterministic view of the world, because from Heisenberg's uncertainty principle follows the impossibility of applying deterministic equations to the microscopic world: the values of two conjugate variables (e.g. position and momentum) cannot be known at the same time.

In many people's minds, indeterminism is associated with quantum mechanics and determinism with classical physics but, as the Nobel laureate Max Born showed, determinism in classical mechanics is not real either, because it is not possible to establish the initial conditions of an experiment with infinite accuracy.

For his part, Feynman said in his famous lectures that indeterminism does not belong exclusively to quantum mechanics but is a basic property of many systems.

Almost all physical systems are dynamical systems: systems described by one or more variables that change with time.

There are dynamical systems with periodic behaviour and others without it. When the motion is not periodic, depends on the initial conditions and is unpredictable over long time intervals (although it can be predictable over short ones), the motion is said to be chaotic.

In other words, chaos is a type of motion that can be described by equations, sometimes very simple ones, and that is characterised by:

  • irregular motion in time, with neither periodicities nor superpositions of periodicities;
  • unpredictability in time, because it is very sensitive to the initial conditions;
  • complexity, but with order in phase space.

For example, when three different masses move under the action of gravity (say, three planets), the study of their evolution in time is really complex because of its sensitivity to the initial conditions, the positions and velocities of the three masses. Poincaré showed that it is not possible to find an exact general solution to the problem.

Lorenz attractor. Does it not resemble the wings of a butterfly? (Source: Wikimedia Commons)

One of the most famous cases in the study of these non-linear dynamical systems took place in 1963, when Edward Lorenz developed a model with three ordinary differential equations to describe the motion of a fluid under the action of a temperature gradient (in other words, he was studying the behaviour of the atmosphere). Using a computer, he searched for numerical solutions to the system and found that it was extremely sensitive to the initial conditions. It was James Yorke who recognised Lorenz's work and introduced the term chaos.
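
As a rough illustration of that sensitivity, here is a minimal Python sketch of the Lorenz system with the standard parameter values sigma = 10, rho = 28 and beta = 8/3 (this is my own example, not Lorenz's original code): two trajectories that start almost at the same point end up completely different.

    import numpy as np

    def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # The three ordinary differential equations of the Lorenz model
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def evolve(state, dt=0.01, steps=3000):
        # Simple fourth-order Runge-Kutta integration over 'steps' time steps
        for _ in range(steps):
            k1 = lorenz(state)
            k2 = lorenz(state + 0.5 * dt * k1)
            k3 = lorenz(state + 0.5 * dt * k2)
            k4 = lorenz(state + dt * k3)
            state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        return state

    a = evolve(np.array([1.0, 1.0, 1.0]))
    b = evolve(np.array([1.0, 1.0, 1.000001]))  # a change of one part in a million
    print(a)
    print(b)  # after 30 time units the two trajectories no longer resemble each other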

It is often thought nowadays that, after the discovery of quantum mechanics and Einstein's relativity, all of physics revolves around those fields. However, chaos is a very broad field that keeps gaining followers, not only among physicists and mathematicians but also in other areas such as biology, genetics and neuroscience. This interdisciplinary nature is remarkable and shows how much people can learn from one another to advance science towards a greater knowledge of the world.

References

Las matemáticas y la física del Caos. Manuel de León, Miguel A. F. Sanjuán. CSIC

Caos. La creación de una ciencia. James Gleick.

E. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences. Volume 20

Neutrino Hunters

The Universe is mysterious and exciting. There are so many things to discover that we will very likely only ever know a negligible portion of everything around us during the lifetime of humanity. In the universe there are things we can directly observe: we can develop a theory about them, apply all our mathematical knowledge and say that the portion of the universe under observation behaves in a specific way. With such a theory we can also make predictions about future behaviour. But there are also things we cannot directly observe and, in principle, cannot develop a theory about. Nevertheless, human inventiveness and the will to know have no limits, and we manage to unveil the invisible universe.

As in many areas of science, physics has things that cannot be observed, which makes them even more interesting for physicists. We are driven to learn about what we cannot see. One example is particle physics. When we start to investigate what there is inside atoms, which we do not see directly, or what happens when two atoms collide at high speed (and high energy), we obtain an amount of information that is in many cases disconcerting. Particles smaller than the constituents of atoms appear, and their behaviour differs greatly from what we are used to in our daily experience. New mysteries appear, and we pile them up while waiting to solve them. Neutrinos are one of these mysteries.

Electron neutrino (Source: Particle Zoo)

Shortly after the discovery of radioactivity, Ernest Rutherford found, in 1899, that one of the forms radioactivity could take was the emission of negatively charged particles with a charge equal to that of the electron. They were initially called beta particles, after this kind of radioactivity, which was known as beta decay, until they were finally identified as electrons.

The discovery of this kind of radioactivity opened a new research area.

Besides the efforts to understand this kind of radioactivity, some of the missing ingredients in our knowledge of atomic nuclei still had to be discovered, and this happened after the discovery of beta decay.

Atomic nuclei are made of protons and neutrons. The proton was discovered by Rutherford in 1919. The existence of the neutron was proposed by Rutherford a year later to explain why atomic nuclei did not disintegrate due to the electric repulsion between protons. Many other scientists theorized about the existence of the neutron after Rutherford, and it was finally found experimentally in 1932 by James Chadwick.

Once all the elements of the atomic nucleus were known, it was possible to start developing a theory of beta decay. Observations implied that the electron was emitted by the nucleus, but it was known that nuclei are made only of neutrons and protons, so the electron could not have been sitting inside the nucleus. The accepted explanation is that a neutron transforms into a proton, emitting an electron at the same time.

However, from Einstein's equation E = mc^2 it was expected that the electron would carry off, in the form of kinetic energy, the energy corresponding to the mass difference between the initial nucleus and the nucleus after the emission; that is, energy was expected to be conserved. But this did not happen: the emitted electrons came out with a continuous range of energies, most of them below the expected value. Conservation of energy is one of the basic principles of physics, and when it appears to be violated either we are doing something wrong or there is something new that we do not yet know. The latter was the solution.

In 1930, Wolfgang Pauli proposed the existence of a new particle without electric charge, emitted together with the electron, so that the total energy was conserved. This new particle had never been detected. Pauli called it the neutron, but after the discovery of the (real) neutron in 1932 it was Fermi who renamed it the neutrino when he incorporated it into his theory of beta decay.

The problem was that the neutrino remained undiscovered. Even Pauli thought he had postulated a particle that nobody could ever detect: it was so small, in fact it was thought to be massless, and without electric charge, that it seemed impossible for it to interact with any kind of matter, even with the most sophisticated instruments of the time.

Everything changed with the advent of nuclear fission reactors. Nuclear fission uses very heavy elements; when fission occurs, the resulting lighter elements are isotopes whose nuclei contain so many neutrons that they cannot be stable, and they disintegrate emitting electrons and (anti)neutrinos, that is, they emit beta radioactivity. Although a single neutrino is very difficult to detect, when there are a lot of them the likelihood of detecting at least one increases.

The now-closed Zorita nuclear power plant (Source: mine)

Larger and larger experiments began to be built, with increasingly sophisticated detectors, with the aim of detecting the neutrino. Many years passed between the postulation of its existence and the description of beta decay and the discovery of the neutrino itself. It was in 1956 that Reines and Cowan managed to find a clear signal confirming that the mysterious neutrino had been detected… and they did it by looking for the inverse beta decay.

As we saw before, beta decay consists of a neutron transforming into a proton while emitting an electron and a neutrino. Strictly speaking it is an antineutrino: when the quantum formalism is combined with the relativistic one, it turns out that every particle has its own antiparticle, the same particle but with opposite electric charge. In the case of neutrinos, which have no electric charge, it is not yet clear whether the neutrino and the antineutrino are the same particle (Majorana particles) or different ones (Dirac particles), but let's leave that for another moment. The inverse beta decay consists of an antineutrino colliding with a proton, giving as a result a neutron and a positron (the electron's antiparticle, which is like an electron but with positive electric charge), as written out below.
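
In symbols, the two processes described above are the beta decay n → p + e- + ν̄e and the inverse beta decay ν̄e + p → n + e+, where ν̄e denotes the electron antineutrino.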

In their experiment, Reines and Cowan dissolved 40 kg of cadmium chloride (CdCl2) in 400 litres of water held in tanks. The tanks were 12 metres underground, to shield them from cosmic rays that could interfere with the data taking, and 11 metres away from the Savannah River reactor, which was the antineutrino source. Above the water there were liquid scintillators, and below them photomultiplier tubes were installed to detect the scintillation light. The positron was detected as it slowed down and annihilated with an electron in the tank contents, emitting two gamma rays that were picked up by the liquid scintillators and the photomultiplier tubes. The neutron was also slowed by the water and captured by the cadmium a few microseconds after the positron signal. In this neutron capture several gamma rays were emitted and detected by the liquid scintillators just after the gamma rays produced by the annihilation of the positron. This delay had been predicted theoretically, so if the experimental measurement matched the prediction it could be shown that a neutrino had produced the reaction.

Reines and Cowan in the control room (Source)

In this way began an era of research into the unknown, the small and invisible: the neutrinos. But there were still many surprises in store, because what they had detected was only one of the varieties of neutrinos, the electron neutrino. Later, other experiments would detect the other varieties.

But let’s not get ahead of ourselves…

References:

T2K Experiment

First Detection of the Neutrino by Frederick Reines and Clyde Cowan

Neutrino. Frank Close. RBA Divulgación

 

 

 

X-rays, energy quantization and Planck's constant

There are products that claim to use X-rays to make our lives better and more fun, such as X-ray glasses to see under other people's clothes, and many superheroes use them to defeat supervillains, just as doctors and radiologists use them to diagnose diseases. But what are X-rays? In this post I want to clarify what they actually are and how, thanks to them, it is possible to show that energy is quantized and to calculate Planck's constant precisely. For the latter I will use a few formulas, but don't be afraid: they are easy and there are not many of them.

First of all, let’s do a bit of history.

In 1895, Wilhelm Konrad Röntgen was working in Würzburg, Germany, in a new research field: cathode rays. Using a cathode-ray tube, specifically a Hittorf-Crookes tube, covered with black paper, he observed that a transversal line appeared on an indicator screen coated with barium platinocyanide, located near the tube, whenever a current circulated through the tube. He found this line on the indicator screen strange. On the one hand, according to the state of research at the time, the effect could only be due to light; on the other hand, the light could not be coming from the tube, because the black paper cover did not let light through. Röntgen called this radiation X-rays because he did not know its origin. Barely two months later he had already prepared a communication announcing the results, to which he even attached a series of pictures that have become famous, such as that of the hand of his wife, Anna Bertha Röntgen.

Radiograph of the hand of Anna Bertha Röntgen

Röntgen could not explain the X-rays, but now we know what they are. A cathode-ray tube has two electrodes at its opposite ends. One of them, the cathode, is heated until it emits electrons, and by means of an electric potential of some tens of thousands of volts the electrons are accelerated towards the electrode at the other end of the tube, the anode. When the electrons hit the anode, a continuous spectrum of electromagnetic radiation with wavelengths of around 1×10^-10 m is observed. What actually happens at the anode is that the electrons pass close to the nuclei of the atoms of the anode material, are deflected by the electric field of the nuclei and slowed down, and thus emit radiation (a photon), because whenever a charged particle is accelerated or decelerated, in other words whenever its velocity changes with time, it emits radiation.

Not all the electrons slow down in the same way; that is, not all of them undergo the same deceleration, because not all of them pass at the same distance from a nucleus. Since they do not feel the same intensity of the electric field, they do not experience the same deceleration. This is why the continuous spectrum appears: a different value for each electron and, as there are many of them, the spectrum looks continuous.

X-ray continuous spectrum

However, an odd phenomenon occurs. For each value of the applied electric potential there is a minimum wavelength in the continuous spectrum below which no radiation is emitted. This phenomenon could not be explained with classical physics.

We come now to the second part of the title of this post: energy quantization.

By the end of the 19th century it was known experimentally that the energy emitted by a body within a given frequency interval varied in a characteristic way: at low frequencies the emitted energy increased as the frequency increased, until it reached a maximum, and then decreased as the frequency kept increasing. Rayleigh and Jeans tried to explain this energy density distribution using the physics existing at the time, but their result was an energy that kept increasing with frequency without limit, which contradicted the observations. This failure was named the ultraviolet catastrophe.

Max Planck proposed that energy was quantized, that is, that it came in very small packets, named quanta, each packet having an energy proportional to the frequency. Mathematically this is written as E = hν, where h is Planck's constant. With this approach, the Rayleigh-Jeans problem was solved and the calculated energy density distribution matched what was observed experimentally.

Emitted energy as a function of wavelength at different temperatures

Back to the X-rays: the kinetic energy of the electrons is given by their charge e and the electric potential V that accelerates them, thus:

E = eV

In the case where the electron is completely stopped after interacting with the nucleus, and bearing in mind that energy is neither created nor destroyed, the energy before the interaction (E = eV) equals the energy of the photon emitted as the electron is stopped (E = hν), so we have:

eV = hν

Solving for ν and using the fact that the speed of light c equals the frequency times the wavelength (c = νλ), we obtain:

λ = hc/eV

We thus obtain a minimum wavelength for each electric potential. Here we see that something that could not be explained with classical physics can be explained with energy quantization, that is, with quantum physics.
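
As a quick numerical check (my own example, with an assumed accelerating potential of 30 kV, a typical value for an X-ray tube), this is what the formula gives in Python:

    h = 6.626e-34   # Planck's constant, in J*s
    c = 3.0e8       # speed of light, in m/s
    e = 1.602e-19   # electron charge, in C
    V = 30e3        # accelerating potential, in V (assumed for the example)

    lambda_min = h * c / (e * V)
    print(lambda_min)   # about 4.1e-11 m, i.e. roughly 0.04 nm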

In this last formula we see Planck's constant.

Now let's go to the last part of the title of the post: Planck's constant.

The speed of light and the charge of the electron are constants with very well known values (c = 300000 km/s and e = 1.602×10^-19 C). When we use a cathode-ray tube to generate X-rays, we apply a fixed electric potential. If for that potential we plot the continuous spectrum of the generated X-rays, we can read off the minimum wavelength at which X-rays are generated. Once the minimum wavelength is known, we can put all the values into the equation, solve it for h and establish the value of Planck's constant:

h = 6.626×10^-34 J·s

The value of h calculated in this way is very precise, because of the precision with which we know the other parameters in the equation.
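
Continuing the illustrative example above (the cut-off wavelength below is an assumed reading, not a real measurement), inverting the same relation recovers h:

    e = 1.602e-19          # electron charge, in C
    c = 3.0e8              # speed of light, in m/s
    V = 30e3               # accelerating potential, in V (assumed)
    lambda_min = 4.14e-11  # measured minimum wavelength, in m (assumed)

    h = lambda_min * e * V / c
    print(h)               # about 6.63e-34 J*s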

Have the mathematics of this post been painful?

References:

Marie Curie y su tiempo. José Manuel Sanchez Ron

Anna Bertha Roentgen (1832-1919): La mujer detrás del hombre. Daniela García P., Cristián García P. Revista Chilena de Radiología Vol II Nº4, año 2005; 1979-1981

Física Cuántica. Carlos Sanchez del Río (Coordinador)

http://en.wikipedia.org/wiki/Ultraviolet_catastrophe

 

The colour of things

What is the colour of things? And if we are in a completely dark room, what is their colour? Let's change the situation: we are in a completely dark room and there are objects in it that we have never seen before. What is their colour? Do they have one at all? The first question is difficult to answer, because if we have never seen an object the only thing we can do is imagine a colour for it. The second answer can get a bit philosophical; in my opinion, though, they do have a colour, and it is the same as that of everything else in the room: black. If they have another colour, I am not able to say which. But I don't want to talk about philosophy here, I want to talk about physics, specifically about the interaction between radiation and matter.

Anything we see, touch or breathe is made of atoms, and atoms are made of a nucleus, containing protons and neutrons, and an outer shell of electrons (here I talk a bit more about atoms). Electrons are the ones in charge of giving things their characteristic colours. But they cannot do it on their own: they need the energy supplied by photons, that is, by light.

In 1911, Rutherford published his atomic model, in which he proposed that electrons orbit the nucleus much as planets orbit the Sun. The problem was that electrons orbiting in this way would emit radiation and therefore lose energy until they fell into the nucleus; the atom would not be stable. In 1913, Niels Bohr took this idea, together with the quantum hypothesis of Max Planck, and proposed that electrons orbit the nucleus in circular orbits, which is the content of the first postulate of his model, but that not all orbits are allowed: electrons can only occupy certain quantized orbits. This is Bohr's second postulate, which states that the only allowed orbits are those whose radius is such that the angular momentum of the electron is n times h/2π, with n an integer and h Planck's constant. In these orbits the electron does not emit radiation and the atom is stable.

But then, is the electron always in the same orbit, in the same way that planets are always in theirs? For a planet to stay in its orbit, there must be no perturbation that gives the planet enough energy to push it out of its orbit, as a meteorite could do; and even then, there can be meteorites without enough energy to knock the planet out of its orbit. The same happens with electrons when there is no perturbation with enough energy to make the electron jump to another orbit. What kind of perturbation can make the electron jump? Here is the connection with the colour of things: this perturbation is light, more specifically the photons of light. A photon has a definite energy that depends on its wavelength, in other words on the colour of the light. If the light has little energy, its wavelength is towards the red; if it has much more energy, its wavelength is towards the blue.

When a photon collides with an electron in its stable orbit, it gives it energy to jump to another orbit. But it cannot be just any orbit: it has to be one that satisfies Bohr's second postulate, that is, a quantized one.

Bohr's atomic model

The electron cannot remain in the new orbit forever unless there is a continuous source of energy, so it will jump back to its initial orbit. Since the initial orbit has less energy than the excited one, to go back the electron has to shed the excess energy by emitting a new photon whose energy is the difference between the energies of the two orbits. This new photon has a wavelength that depends on that energy and therefore a specific colour. It is this new photon that reaches our eyes and makes us see things in a certain colour.
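
As a rough numerical illustration (my own example, using the Bohr-model hydrogen energy levels E_n = -13.6/n^2 eV, which the post does not mention explicitly), the photon emitted when an electron falls from the third orbit to the second comes out red:

    h = 6.626e-34    # Planck's constant, in J*s
    c = 3.0e8        # speed of light, in m/s
    eV = 1.602e-19   # one electronvolt, in J

    E3 = -13.6 / 3**2                # energy of the n = 3 orbit, in eV (Bohr hydrogen)
    E2 = -13.6 / 2**2                # energy of the n = 2 orbit, in eV
    photon_energy = (E3 - E2) * eV   # energy of the emitted photon, in J

    wavelength = h * c / photon_energy
    print(wavelength)   # about 6.6e-7 m, i.e. ~660 nm: red light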

Where does the initial photon, the one that makes the electron jump, come from? From the light of the Sun, from light bulbs, from a fire… This is why we cannot see the colour of things in the dark: in the absence of light there are 'no' photons making the electron jump out of its orbit (photons do keep arriving and colliding with electrons, but they do not have enough energy to make the electron jump to an orbit from which it would emit another photon of a given colour, e.g. green, that reaches our eyes).

I should say that I have used the concept of an orbit as Bohr defined it, that is, assuming it is like the orbit of a planet. Reality is always more complex, and I should really have talked about energy levels, or even used more specifically quantum and rigorous terminology, but then probably nobody would have read beyond the first paragraph.

References

Bohr’s atomic model – Wikipedia

A quick look at the Standard Model of Particle Physics

If you have ever watched The Big Bang Theory, you have probably noticed that every time Sheldon, Leonard, Raj or Howard are at the university, this poster is hanging on the walls of the corridors.

The Standard Model of fundamental particles and their interactions

This image represents everything we know, and that has been experimentally verified, about the structure of the matter we are made of and of everything we have observed in the universe, with the precision we are able to reach using the instruments we have.

Let’s try to explain the image.

Inner structure of the atom

Basically, all of us know that atoms have two differentiated parts: an outer shell where the electrons are and the nucleus, which is made of protons and neutrons.

Electrons have negative charge and are responsible, for instance, for conducting electricity (when they are free) or for making things have one colour or another (due to transitions between the different possible energy levels of the atom, but that is another story). Protons have positive electric charge, in the same quantity as the electrons, so that the atom is electrically neutral. Neutrons have no electric charge; they are neutral.

Electrons are fundamental particles in themselves; they cannot be broken down into more elementary particles. Protons and neutrons, however, can be broken down into smaller particles. These particles are the quarks, specifically two of the six that exist: the up quark and the down quark. For some years it has been proposed, at a theoretical level, that quarks and electrons are not point-like particles but tiny vibrating strings of pure energy; depending on how they vibrate, they exhibit the properties of the electron or of the quarks (and of the rest of the particles we will see later). However, this theory has not yet been verified by experiment and is, in fact, beyond the Standard Model we are dealing with here.

Electrons, together with their heavier cousins the muons and taus and their lighter cousins the neutrinos, are known as leptons, and together with the quarks they are known as fermions. The reason for this name is that they obey Fermi-Dirac statistics and therefore satisfy the Pauli exclusion principle, which says that two fermions cannot be found in the same quantum state simultaneously.

It should be noted that every fermion has an associated antiparticle, which is the same particle but with the opposite charge. For instance, the antiparticle of the electron is the positron (which is not the same thing as the proton) and the antiparticle of the up quark is the up antiquark. Antiparticles are represented with the same symbol as the particle but with a bar on top.

Fermions and their properties

Each charged lepton, that is, the electron, the muon and the tau, has a lighter cousin: the electron has the electron neutrino, the muon the muon neutrino and the tau the tau neutrino. As can be seen in the table above, the only difference between the electron, the muon and the tau is their increasing mass; all of them have negative electric charge. Neutrinos have no electric charge and a very small mass (but they do have mass, and this is one of the reasons why they change flavour: when, for instance, they leave the Sun on their way to the Earth they are electron neutrinos, but when we detect them on Earth we measure fewer electron neutrinos than expected, because during the journey they have changed flavour and turned into muon or tau neutrinos).

Above we talked about two types of quarks, the up and the down quark, which are the ones that make up protons and neutrons, but there are also the charm, strange, top and bottom quarks. They have these names because physicists are funny people, even though it may look like the opposite, and they like giving strange names to these things. They are better known, however, as the u, d, c, s, t and b quarks.

The c, s, t and b quarks are not part of ordinary matter by themselves; they appear as the result of high-energy collisions between other particles (for example between two protons, as is done at the LHC, mainly thanks to the famous Einstein equation E = mc^2) or in nuclear decays.

One of the peculiarities of quarks is that they are never found alone in nature, only in groups, as in the case of the proton and the neutron. Apart from the particles that make up the atomic nucleus, they can also be found forming other particles.

A few baryons

Baryons are made of three quarks or three antiquarks. In the latter case they are known as antibaryons.

Mesons are made of two quarks, and necessarily one is a quark and the other an antiquark.

A few mesons

On the other hand, we have the bosons.

Bosons

They are called bosons because, unlike fermions, they obey Bose-Einstein statistics, which allows many bosons to exist in the same quantum state at the same time (remember that no two fermions can share the same quantum state). Some bosons have the particular role of being the carriers of the fundamental forces of nature: every time two particles interact, what they are doing is exchanging a boson. These forces are electromagnetism, the weak force and the strong force. A boson has also been hypothesized for the gravitational force, known as the graviton, but the Standard Model does not describe gravity and the graviton is not part of it.

These forces or interactions are represented hereafter.

Fundamental interactions

The weak force is responsible for radioactive decays, which occur when a particle transforms into another particle through the emission of one or more additional particles. This interaction is mediated by the W+, W- and Z0 bosons. These bosons, unlike the other force carriers, have mass.

The strong force keeps together the quarks that make up the particles of the atomic nucleus, so that they do not break apart spontaneously. The boson in charge of this task is the gluon.

The electromagnetic force is the one best known to all of us, because it comprises the electric and magnetic forces (in fact it is a single force that shows up in two different ways, which is why it is called the electromagnetic force). The boson that carries this force is the photon. Our daily experience is based on this force: every time we see light, feel heat, cook a meal in the microwave, and so on, we are interacting with photons of different energies.

As we have said before, particles are interacting with each other and they are doing it permanently.

Particle interactions

The left-hand image represents how a neutron decays to produce a proton, an electron and an electron antineutrino. This decay is known as beta decay.

The middle image shows a collision between an electron and a positron that annihilates matter into pure energy, again through Einstein's equation E = mc^2. The energy then transforms back, by the same equation, into different particles; in this case a B0 meson and an anti-B0 meson are formed.

Lastly, the right-hand image shows a collision between two protons (like those that occur at the LHC at CERN) producing two Z0 bosons and a number of assorted hadrons, which can be mesons or baryons.

These are not the only interactions that can happen; there are many more, and they follow strict conservation rules (for instance, conservation of energy, conservation of momentum, etc.), but they are a good example.

At the mathematical level, the Standard Model is quite complex and difficult to understand, but at the level of the fundamental particles that make it up and their interactions it is much easier and can be explained on a poster that can be hung on the wall of any university corridor.

References:

The Particle Adventure

The beginning of the research on cosmic rays

Everything in life has a beginning, and science, and each of its areas, has one too. This is also the case for cosmic-ray research. But what are cosmic rays? Put simply, they are subatomic particles, smaller than atoms, such as their constituents, protons for example, that come from outer space moving at speeds close to the speed of light.

Victor Hess

While studying radioactivity at the beginning of the 20th century, it was found that when an electroscope, a device used to determine whether a body has electric charge and its sign (positive or negative), was placed close to a radioactive source, the air was ionized, that is, the atoms and molecules of the air became electrically charged. If the electroscope was placed far from the radioactive source, the air was found to be ionized as well, so it was thought that this was due to natural radioactive sources on the surface or in the interior of the Earth, and that this ionization should decrease at higher altitudes.

How an electroscope works

In 1910, the physicist and Jesuit priest Theodor Wulf carried an electroscope to the top of the Eiffel Tower in Paris to try to determine at what altitude the ionization became negligible or disappeared. The result was surprising: the ionization decreased far less with height than expected if it came only from the ground. As with any scientific result, which has to be supported by multiple pieces of evidence and by experiments repeated, whenever possible, under different conditions, the Austrian physicist Victor Hess took the measurements much higher, up to an altitude of 5000 m! To do so, in 1912 he went up in a balloon, this time carrying ionization chambers.

An ionization chamber is basically an instrument containing a gas between two metallic plates to which a voltage is applied. When the gas inside the instrument is hit by, for instance, a cosmic ray, the ions generated in the gas move towards the metallic plates because of the voltage, so that an electric current is generated that can be measured.

The results Hess obtained pointed the same way, even more clearly: the ionization increased with altitude. He therefore concluded that the radiation causing the ionization of the air was not coming from the ground but from above. The name cosmic rays does not date from those days but from the 1920s, when Robert Millikan gave this name to the radiation coming from outer space, as he thought it consisted of gamma rays, the most penetrating electromagnetic radiation known at the time, although it was later discovered that it is not electromagnetic radiation but mostly particles with mass.

Since Hess's discovery, the study of cosmic rays has advanced enormously.

Dmitri Skobeltsyn used the cloud chamber to detect the first traces of the products of cosmic rays in 1929, and Carl Anderson used it in 1932 to discover the positron, the electron's antiparticle (which is not the proton: the proton also has positive charge, but it is about 2000 times heavier than the electron, whereas the positron has the same mass as the electron).

Later, in 1938, Pierre Auger, having placed detectors at points separated by large distances in the Alps, observed that particles arrived at the detectors simultaneously, and so found that the impact of high-energy particles on the upper layers of the atmosphere generates showers of secondary particles.

Secondary particle shower generated in the atmosphere by the impact of a cosmic ray

The detectors currently used to study cosmic rays are more sophisticated and, because the intensity of the particles coming from space is higher at higher altitudes, they are located in mountains and elevated areas. This is the case of the Pierre Auger Observatory in the Pampa Amarilla in Argentina, at an average altitude of 1400 m above sea level, and of the MAGIC experiment at the Roque de los Muchachos observatory on the island of La Palma in the Canary Islands (Spain).

Telescopes of the MAGIC experiment at the Roque de los Muchachos

Our detectors are even in space, like the Alpha Magnetic Spectrometer, also known as AMS-02, installed on the International Space Station, whose objective is to measure the antimatter content of cosmic rays in the search for evidence of dark matter.

References:

Arqueros F. Rayos Cósmicos: Las Partículas más Energéticas de la Naturaleza. Revista “A Distancia (UNED), 1994.

http://visitantes.auger.org.ar/index.php/historia/historia-de-los-rayos-cosmicos.html

http://www.biografiasyvidas.com/biografia/h/hess_victor.htm

Afraid of Maths? Why?

If I remember correctly, 5 + 1 = 2 x 3, right? You may wonder what this represents. Well, it is an equality, a mathematical equation, where the key word is 'mathematical'. Now, what is the difference with the following equations?

Maxwell's equations: there is not a single day when I don't write them somewhere just for fun. (Note: for practical reasons, the dot plays the same role as the x, that is, a product; in the first case it is a scalar product and in the second a vector product, but for us, right now, it doesn't matter.)

They are a bit scary, aren’t they?

Let's do the following: if in the equation 5 + 1 = 2 x 3 we replace the 5 by an A, the 1 by a B, the 2 by a C and the 3 by a D, we can write it as

A + B = C x D

Let's go back to Maxwell's equations and pay attention to the last one. Is there any difference? Apart from the arrows over the symbols, the inverted triangle and the quotient involving something that looks like a 6 reflected in a mirror, if we look at it carefully it has the shape 'something times something equals something plus something'. And since we said that our sum and product of somethings was just 5 + 1 = 2 x 3, we come to the conclusion (at least I do) that Maxwell's equations are not as scary as they may seem at first: they are just the additions and products we have known since we were children.

Actually, Maxwell's equations, apart from being beautiful, are a bit scary, especially when one has to solve them in an exam with limited time, bearing in mind that you have not studied enough.

There is a widespread terror of Maths among all kinds of people, and it is very common to hear that Maths is difficult (in fact, it is), that it is useless (a lie!), and 'why should I learn Maths when the only thing I need is to add and multiply prices in the supermarket?' (if you know how to add and multiply, you can already read Maxwell's equations, and yes, they are really useful).

Although Maths is often difficult, it is not true that it is useless. Maxwell's equations (remember, four equations made of additions and products) are in fact an explanation of everything we see! They explain why light is the way it is; they explain all the electricity we use every day from the moment we wake up until we go to sleep; they explain why, when we sit down, despite the force of gravity pulling us towards the floor, we do not pass through the chair and fall down; they explain why magnets attract and why the motors of our fridges and washing machines work; they explain… well, I will stop giving examples, otherwise I will never finish this post.

Even if I chose Maxwell's equations as the example (only because they are my favourite equations), the usefulness of Maths is not limited to them. I could have started with something easier, such as the famous Einstein equation E = mc^2, compared it with the equation 8.2×10^-14 = 9.1×10^-31 x (3×10^8)^2, and said that 8.2×10^-14 is E, 9.1×10^-31 is m and 3×10^8 is c, where they represent the rest energy of the electron in joules (about 0.51 MeV), the rest mass of the electron in kilograms and the speed of light in metres per second respectively, but I have already written about this equation here and don't want to repeat myself.

In any case, Maths is fundamental at every moment of our lives. For instance, the following equations

Equations of the geostrophic wind approximation

represent the geostrophic wind approximation, which explains the clockwise turn of high-pressure systems in the atmosphere and the anticlockwise turn of low-pressure systems (in the northern hemisphere), as well as how meteorologists are able to establish the wind direction by looking at isobar maps (it is easy: the rule is that the wind blows leaving the low pressures to its left).
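
In a standard textbook form (which may differ in notation from the figure above), with x pointing east, y pointing north, p the pressure, ρ the air density and f = 2Ω sin(φ) the Coriolis parameter at latitude φ, the geostrophic wind components are u_g = -(1/(ρf)) ∂p/∂y and v_g = (1/(ρf)) ∂p/∂x.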

But this is not the end: there are many more examples outside the world of physics and the natural sciences. For example, in economics, the equation

Equation of the change in the value of money

represents the change in the value of money when the price index at the beginning and at the end of a specific period is known.

Even in Medicine, the equations

SIR model for the development of a disease over time

represent the SIR model, which indicates how a disease evolves over time; a minimal sketch of it is shown below.
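
This is my own toy version of the SIR model in Python, with made-up parameter values and a crude integration step, just to show that behind the scary-looking equations there are again only sums and products:

    # S = susceptible, I = infected, R = recovered; N is the total population.
    beta = 0.3    # infection rate per day (made-up value)
    gamma = 0.1   # recovery rate per day (made-up value)
    N = 1000.0
    S, I, R = 999.0, 1.0, 0.0

    dt = 0.1      # time step in days
    for _ in range(int(100 / dt)):          # simulate 100 days
        dS = -beta * S * I / N              # dS/dt = -beta*S*I/N
        dI = beta * S * I / N - gamma * I   # dI/dt = beta*S*I/N - gamma*I
        dR = gamma * I                      # dR/dt = gamma*I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt

    print(S, I, R)   # after 100 days most of the population has been infected and recovered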

And there are many more examples that we pay no attention to and that give practical results we use every day.

Maths is extremely useful; without Maths we would probably still be living in caves (the Egyptians were already using Maths to build pyramids and even for agricultural purposes, and a long time has passed since then). And yes, it is difficult. If you don't believe me, just look at the Standard Model Lagrangian (nice word!) of particle physics, which describes all the forces of nature we feel except gravitation (the electromagnetic force, which includes Maxwell's equations; the weak force, which explains nuclear decays and radioactivity; and the strong force, which explains why atomic nuclei are the way they are) and, basically, explains how the world that surrounds us is. Let's see how many additions and products like those at the beginning of the post you are able to spot…

The Standard Model Lagrangian of particle physics

References:

How to write Maxwell’s Equations on a T-Shirt

http://en.wikipedia.org/wiki/Geostrophic_wind

Luis E. Rivero. La medición del valor del dinero

http://en.wikipedia.org/wiki/Epidemic_model

The Standard Higgs. Richard Ruiz. Quantum Diaries. http://www.quantumdiaries.org/author/richard-ruiz/

 

Galaxies, distances and the expansion of the Universe

When we look at the sky on a dark night, far from the city lights, we can see so many stars that we may feel overwhelmed by their sheer number. Looking towards certain areas, we can see an almost continuous whitish band, similar to the trace left when someone spills a bottle of milk. This trace of milk is our galaxy, the Milky Way. However, the Milky Way does not cover everything that exists; the Universe extends far beyond our galactic home.

The Milky Way is one of the hundreds of billions of galaxies that share the Universe with us, each of them an enormous collection of stellar systems in its own right.

From the Earth, with the naked eye and depending on the region of the sky we are observing, we can easily see three external galaxies, all of them members of our Local Group: the Large Magellanic Cloud, the Small Magellanic Cloud, and the third member of the trio, the Andromeda galaxy, in the constellation of the same name.

Magellanic Clouds and the Milky Way

In the 18th century, the Frenchman Charles Messier, who was a comet hunter, scrutinized the sky with his telescope (less than 20 cm in diameter) looking for blurry spots. When he found one, he noted its position on a star map. The following night he aimed his telescope at the same point to see whether the spot was still there: if it had moved, it was a comet; if not, it was something else. At the time those spots were known as nebulae, a Latin word meaning 'mist' or 'cloud'. By 1774 Messier had catalogued 45 nebulae together with their celestial coordinates, and by 1784 the catalogue already included 103 objects.

A German-born musician, William Herschel, who devoted the second half of his life to building large telescopes together with his sister Caroline, aimed his instruments at the objects catalogued by Messier and, given the 'power' of his telescope (four times larger than Messier's), discovered more than 2000 objects in seven years.

With this catalogue, Herschel tried to make a celestial map including all these objects. From his study of the nebulae, Herschel proposed that if the Milky Way were observed from far enough away, it would look like a nebula itself.

Thanks to the power of his telescope, Herschel was able to resolve some of the blurry spots into globular clusters. In the 1840s, William Parsons started to build a telescope 16 m long with a 2 m diameter mirror, bigger than Herschel's.

Parsons aimed his telescope at one of the objects in the Messier catalogue, namely M51, and his surprise was immense when he saw a spiral structure; the object was later named the Whirlpool galaxy because of this feature. He could not make out individual stars in it, but he discovered other nebulae with the same spiral structure.

M51, the Whirlpool galaxy

At this point, the question arose: did these nebulae belong to the Milky Way? To answer it, it was necessary to know the size of the Milky Way and the distance to the nebulae.

Shortly before this discovery, astronomers already knew the parallax method for measuring the distance to nearby stars, but because of the large distances to the nebulae this method was useless for them. After the initial development of spectroscopic methods, the English astronomer William Huggins aimed his telescope, equipped with a spectroscope, at the brightest star in the night sky in 1867 and, applying the Doppler effect theory developed by the Austrian Christian Doppler twenty years earlier, found a slight redshift in the spectrum of the star. He calculated that Sirius was moving away from us along the line of sight at a speed of 50 km/s. In the same way, he calculated the speeds of a large number of stars. It was only the beginning of the use of the Doppler technique in astronomy; some years later a way was found to use this method to calculate the distances of stars.

In the early 20th century, the Harvard College Observatory was carrying out tedious stellar measurements from photographic and spectroscopic plates. The measurements and calculations were done by women, who in the sexist view of the time were considered suited to tedious, repetitive work and who were paid less than men. Several of these women made important contributions, and among them Henrietta Swan Leavitt stood out.


Henrietta Swan Leavitt

In a number of photographic plates of the Small Magellanic Cloud, Leavitt observed a multitude of stars whose brightness changed periodically because they ‘pulse’, i.e. they expand and shrink regularly. These stars are known as Cepheid variables because the first Cepheid discovered lies in the constellation of Cepheus, which gives them their name.

Leavitt compiled more than one thousand Cepheids in the Small Magellanic Cloud, and at least 16 of them appeared on several photographic plates, which enabled her to calculate their periods.

She found that the stars were brighter when their periods were longer, established that period and brightness were related, and showed that the relation between period and luminosity could be plotted. In other words, Leavitt had found a relation between the apparent magnitude of a variable star and a quantity that is independent of the distance to the star: the period of its change in brightness. Leavitt had discovered a connection between the period and the absolute magnitude, i.e. the star’s actual luminosity.
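Schematically, and in the modern way of writing it rather than Leavitt’s own notation, the period-luminosity relation takes the form

M \approx a \, \log_{10} P + b

where P is the pulsation period and a and b are calibration constants that have to be determined empirically.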

Since these stars all lie in the same region of the Small Magellanic Cloud, it could be assumed that they were all at roughly the same distance from Earth.

The difference between the absolute magnitude of a Cepheid in the Small Magellanic Cloud and its apparent magnitude could then be used to calculate the distance to the star using the inverse-square law: a star, like any other light source, appears only a quarter as bright if the distance to the observer is doubled, a sixteenth as bright if the distance is increased fourfold, and so on.
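In formula form (standard physics rather than something stated explicitly in the original sources), a source of luminosity L seen from a distance d has an observed flux

F = \frac{L}{4\pi d^{2}}

which is exactly where the factors of one quarter and one sixteenth come from.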

Since the relation discovered by Leavitt applies to Cepheids in general, determining the absolute magnitude of just one of them would calibrate the period-luminosity scale; the scale could then be used to find the absolute magnitude of any Cepheid variable, and thus the distance to the star.
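In modern terms, the step from magnitudes to distance is the distance modulus, m - M = 5 \log_{10} d - 5 with d in parsecs. Purely as an illustration (the magnitudes below are hypothetical, not values from the text), it is easy to evaluate:

def cepheid_distance_parsecs(apparent_mag, absolute_mag):
    # Distance modulus: m - M = 5*log10(d) - 5, with d in parsecs
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical Cepheid with apparent magnitude 18 and absolute magnitude -4:
d_pc = cepheid_distance_parsecs(18.0, -4.0)
print(d_pc)          # about 2.5e5 parsecs
print(d_pc * 3.26)   # roughly 0.8 million light-years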

The problem was how to build a distance scale from the behaviour of the Cepheids, because even the closest Cepheid was too far away for its distance to be measured with the parallax method.

Leavitt was pulled away from this work because the director of the observatory thought her job was to gather data, not to make calculations, but Ejnar Hertzsprung, at an observatory near Berlin, took up the problem.

Hertzsprung studied the proper motions, that is, the motion through space relative to our Sun, of thirteen Cepheids close to the Sun and, using statistics, calculated an ‘average’ distance to these local Cepheids as well as an ‘average’ apparent magnitude. With these values he was able to calculate an ‘average’ absolute magnitude for a Cepheid of ‘average’ period.

Perhaps there were too many ‘averages’, but what Hertzsprung did next was to choose a Cepheid in the Small Magellanic Cloud with the same period as his ‘average’ star. He compared the photographic brightness of the Cepheid in the Cloud with the absolute magnitude it should have and calculated the distance: 3000 light-years. This distance would place the Small Magellanic Cloud within the Milky Way. It is thought that this was a typo and that the distance should have read 30000 light-years; even so, the figure was well below the actual distance.

Why the discrepancy? It was in fact an experimental error. The Cepheids in the Small Magellanic Cloud had been photographed on plates sensitive to blue light, while the local Cepheids had been photographed on plates sensitive to red light. This produced a difference in apparent brightness, making the Cepheids in the Small Magellanic Cloud look brighter and therefore closer.

The American astronomer Harlow Shapley grasped the astronomical significance of the Cepheids. Working at the Mount Wilson Observatory near Los Angeles with its 1.5 m telescope, Shapley studied the globular clusters and discovered Cepheids in them. Using Hertzsprung’s technique, and refining it, he determined the distances to the clusters, which ranged between 50000 and 220000 light-years. The clusters were thought to lie within the Milky Way, but the Milky Way was also thought to be only 30000 light-years across, so its real diameter had to be larger than believed. Shapley estimated a diameter of 300000 light-years for the Milky Way, with the galactic centre in the direction of Sagittarius.

Astronomers were cautious about this result, partly because they considered Hertzsprung’s method unreliable.

At the same time, telescopes were being aimed at the spiral nebulae, and many astronomers suggested that they were galaxies comparable to the Milky Way, full of stars, because when their light was passed through a spectroscope it resembled the light of stars rather than that of a gas cloud.

In 1912, Vesto Slipher, at the Lowell Observatory, took a detailed look at the spiral nebula in the constellation of Andromeda and managed to measure its Doppler shift. The value he found impressed everybody: it was approaching at a speed of 300 km/s. Later, Slipher observed 15 more spirals and discovered that 13 of them were moving away from the Earth even faster than Andromeda was approaching.


M31. Andromeda galaxy

In 1919, after having trained as a lawyer, earned a Ph.D. in astronomy and returned from the war, Edwin Hubble set about classifying nebulae. Using the new 2.5 m telescope at Mount Wilson he hoped to resolve stars in the spiral nebulae, in particular in Andromeda.

Hubble focused his attention on points of light known as novae, stars that undergo recurrent explosions of material that make their luminosity change (not to be confused with supernovae, where the whole star explodes).

By comparing photographic plates showing the same region of the sky, he realised that a star he had initially taken for a nova was in fact increasing and decreasing its brightness periodically. It wasn’t a nova but a Cepheid!


Plate on which Hubble noted that the object was a Cepheid variable and not a nova

Using Hertzsprung’s technique, as refined by Shapley, he calculated the distance to Andromeda and got a value of 900000 light-years, larger than the size of the Milky Way as calculated by Shapley. Andromeda was a galaxy in its own right!

By finding Cepheids in spiral galaxies, Hubble made the known Universe considerably larger. He used the Cepheids to develop distance indicators for galaxies in the same way Shapley had done for globular clusters.

While this was happening at Mount Wilson, at Lowell Slipher was still measuring Doppler shifts for spiral galaxies, including the ones whose distances Hubble had calculated with his technique.

Milton Humason joined Mount Wilson as an assistant thanks to his father-in-law. One night when the telescope operator was ill he filled in so successfully that he was appointed permanent operator as well as permanent assistant to Hubble. Humason gathered enough additional Doppler shifts from other galaxies, and Hubble combined all these data to establish a relation between the red shifts and the distances. The relation was simple: except for the closest galaxies, the farther away a galaxy is, the faster it moves away. The rate is now known as the Hubble constant.
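In modern notation (the numerical value below is today’s rough figure, not one taken from the sources of this post), the relation is written

v = H_0 \, d

with H_0 of the order of 70 km/s per megaparsec, so a galaxy some 100 megaparsecs away (about 330 million light-years) recedes at roughly 7000 km/s.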

Although the values that Shapley and Hubble found in their time were rough, precision has since improved, and we now know that the Milky Way has a diameter of about 100000 light-years and that the Andromeda galaxy is 2.5 million light-years away from us. Even though the figures have changed, the important thing to remember is that the effort to understand the Universe drove the development of techniques and methods that modern astronomers and astrophysicists still use today.

As Hubble said:

But with increasing distance our knowledge fades, and fades rapidly, until at the last dim horizon we search among ghostly errors of observations for landmarks that are scarcely more substantial. The search will continue. The urge is older than history. It is not satisfied and it will not be suppressed.

References:

Galaxias. Time Life Folio

Astrofísica. Manuel Rego, María José Fernández