The 6 precepts for nanoscience

When people hear the words nanoscience or nanotechnology, many think of the secret laboratory of a mad scientist controlled by a government, where tiny robots are developed to be injected into our blood and control us from the inside. That is, the miniaturization of macroscopic structures.

Nanoscience studies the physical, chemical or biological properties of atomic, molecular or macromolecular structures, that is, structures with a size between 1 and 100 nanometres. One nanometre is equivalent to 10⁻⁹ m, or 0.000000001 m.

The study of nanoscience and the development of nanotechnology offer many advantages, such as medicines encapsulated in molecules that release their active component only where it is needed. This is the case in cancer treatment, where the drug could be released only in the areas affected by the tumour instead of affecting other body tissues. Another example is the study of materials with better electrical conducting properties, or of new methods to transmit information through materials in which at least one dimension lies within the nanometric scale.

Maybe the most famous material developed in nanoscience is graphene. In fact, Andre Geim and Konstantin Novoselov were awarded the Nobel Prize in Physics for their experiments with graphene.

Artistic representation of graphene (Source: Wikimedia Commons)

One of the most important ways to obtain materials at the nanometric scale is to use techniques coming from chemistry, because the properties that allow atoms and molecules to bond together can be exploited to create nanometric structures.

But the question here is whether the miniaturization of macroscopic structures down to nanometric scales can be considered nanoscience or nanotechnology. The answer is no. In fact, not everything is nanoscience or nanotechnology, and there is a set of six principles or precepts that define what this emergent branch of science is.

First precept: Bottom-up building approach

This implies that miniaturizing, that is, reducing the size of something, is not nanoscience. Nanoscience means using the fundamental building blocks, atoms and molecules, and exploiting their properties to build nanometric structures that perform specific functions.

Second precept: Cooperation

This is not about diverse institutions cooperating with each other to develop nanostructures, which is also important, but about developing different nanostructures with different functionalities that cooperate with each other to give rise to more complex nanodevices with better functionalities.

Third precept: Simplicity

The problems faced by nanotechnology developments should be simplified so that only the necessary scientific laws are used, avoiding unnecessary complexity.

Fourth precept: Originality

Coming back to the example of the robot at the beginning of this post: the aim is not to take things that already exist and simply reduce their size, but to look for different structures. Reducing the scale has more implications than one might think; for instance, volume scales with the cube of a length while area scales with its square, which can make a straightforward scale reduction unfeasible. It is therefore necessary to be original and creative with the developments.

Fifth precept: Interdisciplinary nature

We mentioned above that cooperation between institutions is also important, but cooperation between different areas of science is even more so. For this reason, cooperation between biologists, chemists, physicists and engineers is essential. In nanoscience, being purely a physicist, a chemist or a biologist does not provide complete knowledge, because researchers face problems that will not be solved unless they widen their field of knowledge.

Sixth precept: Observation of nature

Nature offers us many examples of nanotechnology. The molecules that make up our tissues and organs, as well as the way they are organized and interact with each other, are the best example of nanotechnology. If we observe and study them, our developments will be much more innovative and efficient, and will improve our lives.

It is hard to find examples that follow all these precepts simultaneously, but that is why science and scientists exist: to develop nanostructures following these precepts within the laws that nature imposes.

References

"The Nobel Prize in Physics 2010". Nobelprize.org. Nobel Media AB 2014. Web. 14 Aug 2014.

El nanomundo en tus manos. Las claves de la nanociencia y la tecnología. José Ángel Martín-Gago, Carlos Briones, Elena Casero and Pedro A. Serena. Editorial Planeta S.A. June 2014


(Inter)stellar chemistry

When we want to know what a material is made of, the first thing we need is, obviously, a certain amount of the material we want to study. Once we have it, we turn to that area of scientific knowledge, often hated by students, which is chemistry. Chemistry, as Linus Pauling defined it, is the science that studies substances, their structure, their properties and the reactions that transform them into other substances.

Within chemistry, there is an area in charge of telling us what the chemical composition of the substance we want to study is: analytical chemistry. To achieve this objective, analytical chemistry uses a set of methods that, depending on their nature, can be purely chemical methods, based on the reactions a substance undergoes in the presence of others, or physico-chemical methods, which depend on how substances physically interact with one another.

But how do we study the chemical composition of something we cannot hold any amount of in our hands? This is the situation that arises, without exception, when we want to know the composition of stars or of the interstellar medium. It might seem impossible, yet we know it better and better. As proof of that, ethanethiol has recently been searched for in the Kleinmann-Low region of the Orion nebula.

Kleinmann-Low region in the Orion nebula (Source: NASA APOD, CISCO, Subaru 8.3 m telescope, NAOJ)

So how do we do it? We need to turn to different areas of science, among them chemistry (analytical chemistry), astrophysics and astronomy (specifically, astronomical instrumentation).

We mentioned before that analytical chemistry is in charge of studying chemical composition and that it uses different methods for that purpose. One of them is the spectrometric method, which consists in studying the interaction of electromagnetic radiation (at every wavelength of the electromagnetic spectrum) with the matter it strikes. Spectrometry uses spectrometers (also known as spectroscopes or spectrographs), which are devices that separate light into the wavelengths it is made of. The simplest spectroscope, on whose working principle all the others are based, is a simple prism. Using a prism, Newton managed to split the white light coming from the Sun into the colours it was made of, obtaining in this way the first spectrum in history. However, it was Kirchhoff and Bunsen who invented the first spectroscope, by adding a graduated scale that allowed them to identify the wavelengths of the spectral lines observed when the light passed through the prism. When the image is recorded on a device, either electronic or a simple photographic plate, we usually speak of a spectrograph.

Chemical substances can be found in the form of atoms or molecules. In the first case, individual atoms are not bonded to other atoms. In the second case, atoms, which can be of the same or different kinds, are joined through chemical bonds, giving rise to molecules. The electrons inside atoms, or those involved in the bonds that make up a molecule, can be in different energy states. If no radiation strikes them (whatever its wavelength), the electrons sit in their lowest energy state. When radiation strikes them, the electrons jump to a higher energy state. However, because electrons tend to return to their lowest energy state, once the radiation stops they fall back down, and to do so they have to get rid of the excess energy provided by the incoming radiation, emitting it in the form of radiation. The difference in energy between the initial and final states tells us the wavelength of the emitted radiation.
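As an illustration, the relation between the energy gap of the two states and the wavelength of the emitted photon can be written compactly; this is just the standard Planck-Einstein relation, not something specific to this post:

```latex
% Energy of the emitted photon equals the gap between the two states
E_{\gamma} = E_{\mathrm{upper}} - E_{\mathrm{lower}} = h\nu = \frac{hc}{\lambda}
\quad\Longrightarrow\quad
\lambda = \frac{hc}{E_{\mathrm{upper}} - E_{\mathrm{lower}}}
```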

Each atom or molecule has its own characteristic set of energy states, so depending on the incoming energy the transitions between states will be different and the emitted radiation will be different too. These energy states can be of different types, including vibrational states (due to the vibration of the molecule) or rotational states (due to its rotation). On the other hand, a given energy state need not be a single level: it can split into several states in the presence of, for instance, a magnetic field, enabling additional transitions and therefore the emission of the excess energy at additional wavelengths.

Transitions between energy states that give rise to spectra (Source: monografias.com)

Analytical chemists use the spectrometric method, among others, to study the structure of these atoms and molecules and their interaction with radiation. Because each atom and molecule has a specific structure of energy levels, different from all the rest, and interacts with radiation in a specific way depending on the type of radiation (and on environmental conditions, such as the presence of magnetic fields), a catalogue of spectra can be built, so that the next time we see the same spectrum somewhere else we can identify the substance we are dealing with.

Some of the places where we can find spectra are stars and the interstellar medium. The problem is that we cannot directly access a star, gather some amount of matter and take it back to the laboratory to study it. What we can do is use our optical telescopes or radio telescopes, equip them with spectrographs that let us observe the spectra, with sensors appropriate for the type of radiation (wavelength) we want to observe, and point them towards the region of the sky we want to study. The analysis of the spectra, through comparison with spectra obtained in the laboratory, can give us the chemical composition of the object under study. And not only that: we can also obtain much more information, such as the rotation and translation speeds of the object (through the Doppler effect, because the spectral lines appear displaced, with respect to their laboratory position, towards longer or shorter wavelengths depending on whether the object is receding from us or approaching us) or even the intensity of any magnetic field present in the region we are looking at.
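As a rough sketch of how that last measurement works, the snippet below estimates a radial velocity from the shift of a single spectral line using the non-relativistic Doppler approximation; the observed wavelength is a made-up value chosen only for illustration:

```python
# Non-relativistic Doppler shift: v ≈ c * (λ_observed - λ_rest) / λ_rest
# Positive v means the source is receding (line shifted to longer wavelengths).

C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_observed_nm: float, lambda_rest_nm: float) -> float:
    """Return the radial velocity in km/s implied by a shifted spectral line."""
    return C_KM_S * (lambda_observed_nm - lambda_rest_nm) / lambda_rest_nm

# Hypothetical example: the H-alpha line (rest wavelength 656.28 nm)
# observed at 656.50 nm implies the source is receding at roughly 100 km/s.
print(radial_velocity(656.50, 656.28))
```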

As always, reality is much more complex, but we can rely on human will and on the capacity of scientists to look for solutions to the problems the universe presents. And, as we have seen, and this for me is the most important thing, we can rely on the collaboration between different scientific areas in the search for those solutions, with new scientific fields sometimes appearing as a result of that collaboration. This is the case of astrochemistry, which is basically the scientific area this post has been about.

References:

http://www.espectrometria.com/

More flavors

The subatomic particle zoo has kept growing over the years since the discovery of the electron as one of the fundamental constituents of atoms. In the beginning, these new particles were discovered accidentally, through the use of the first particle accelerators or through the study of cosmic rays, that is, the high-energy particles coming from outer space that impact the atmosphere and collide with atoms in the air, producing new particles. There was no theoretical model predicting these particles, so everything was a surprise.

One example occurred in 1936, when Carl Anderson and Seth Neddermeyer, at Caltech at the time, were studying cosmic rays using a cloud chamber placed in a magnetic field, and found particle tracks that curved differently from those of electrons. It was clear from the sense of their curvature that these particles were negatively charged, but their radius of curvature was larger than that of electrons. It was assumed that the new particle had the same charge as the electron and therefore, to produce such a radius of curvature, it had to be heavier than the electron, for particles moving at the same speed. The tracks were also compared with the curvature of proton tracks (even though protons have positive charge), and the radius of curvature of the new particle was seen to be smaller than that of protons, so its mass had to be smaller.
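A back-of-the-envelope sketch of why the radius of curvature orders the masses this way: for a charged particle moving perpendicular to a magnetic field, r = p/(qB) = γmv/(qB), so at equal speed a heavier particle curves less. The field and speed below are arbitrary illustrative values, not the ones Anderson and Neddermeyer used:

```python
import math

# Radius of curvature r = gamma * m * v / (q * B) for motion perpendicular to B.
# At the same speed, a heavier particle curves less (larger radius).

Q = 1.602e-19          # elementary charge, C
C = 2.998e8            # speed of light, m/s
MASSES_KG = {          # rest masses
    "electron": 9.109e-31,
    "muon": 1.884e-28,
    "proton": 1.673e-27,
}

def curvature_radius(mass_kg: float, speed_fraction_of_c: float, b_tesla: float) -> float:
    """Radius of curvature in metres for a singly charged particle."""
    v = speed_fraction_of_c * C
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction_of_c**2)
    return gamma * mass_kg * v / (Q * b_tesla)

# Illustrative numbers: all three particles at 0.9c in a 0.1 T field.
for name, m in MASSES_KG.items():
    print(f"{name:8s}: r = {curvature_radius(m, 0.9, 0.1):.3f} m")
```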

The existence of this new particle added further complexity to the particle zoo that was appearing at the time. Initially it was given the name mesotron, and it was even thought to be the particle carrying the strong force predicted by Yukawa, so it was renamed the mu meson. After the discovery of the pion (or pi meson) and other mesons (a meson is a particle made of a quark and an antiquark, with integer spin), it was seen that the mu meson did not share the properties of the mesons: it did not interact through the strong force. Besides, it was discovered that mu mesons decayed into electrons, neutrinos and antineutrinos. Because of this, the name was changed again, and it was given the name muon.

Muon neutrino (Source: Particle Zoo)

The appearance of neutrinos and antineutrinos raised an important question: are they the same neutrinos as those associated with electrons in beta decay? It was clear that someone had to continue the task initiated by Reines and Cowan as neutrino hunters and try to solve the mystery.

A good way of studying the nature of the neutrinos associated with muons is through the decay π → μν. The problem was that producing pions in a quantity sufficient to carry out the research required energies that could not be reached by studying pions generated in the atmosphere by cosmic-ray collisions. It was therefore necessary to use particle accelerators, along with a group of bright researchers who realised that this decay was the way to investigate the problem.

This conjunction, an accelerator and bright researchers, took place at Brookhaven in 1962, where Leon Lederman, Melvin Schwartz and Jack Steinberger were working.

In his book The God Particle (let's leave aside the story about the book's name), Lederman tells the story of how the experiment was conceived and built.

Using Brookhaven's Alternating Gradient Synchrotron, which in 1960 had reached unprecedented energies by accelerating protons to 33 GeV, Lederman, Schwartz and Steinberger accelerated protons to an energy of 15 GeV. The proton beam was then directed onto a beryllium target, where the collisions produced pions. After about 70 feet of free flight, during which they decayed into muons and neutrinos, the particles hit a shield more than 13 m thick and weighing about 5,000 tons, made of steel from battleship plates, where everything was stopped except the neutrinos, giving as a result a beam of (muon-associated) neutrinos with energies up to 1 GeV.
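As a rough, hedged estimate of what that 70-foot flight path means, the sketch below uses the charged-pion lifetime (τ ≈ 2.6×10⁻⁸ s, cτ ≈ 7.8 m) to compute how likely a pion of a given energy is to decay before reaching the shield; the chosen pion energy is illustrative, not a figure quoted by Lederman:

```python
import math

# Fraction of charged pions that decay within a flight path L:
#   P(decay) = 1 - exp(-L / (gamma * beta * c * tau))
# where gamma * beta * c * tau is the mean decay length in the lab frame.

C_TAU_M = 7.804          # c * tau for the charged pion, metres
PION_MASS_GEV = 0.13957  # pion rest mass, GeV/c^2

def decay_fraction(pion_energy_gev: float, path_m: float) -> float:
    gamma = pion_energy_gev / PION_MASS_GEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    mean_decay_length = gamma * beta * C_TAU_M
    return 1.0 - math.exp(-path_m / mean_decay_length)

# Illustrative: a 3 GeV pion over the ~21 m (70 ft) flight path.
print(f"{decay_fraction(3.0, 21.0):.1%} of such pions decay before the shield")
```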

What they detected were the tracks of 34 muons (with a background of about 5 muons coming from cosmic rays). If the neutrinos were the same in pion decay and beta decay, theory said that about 29 electron tracks should have been observed, tracks the team knew well; if they were different, at most one or two electrons would have been observed, coming from kaon decays such as K+ → e+ + νe + π0. No electrons were observed.

Because of the discovery of the muon neutrino, Lederman, Schwartz and Steinberger received the Nobel Prize in 1988.

Leon Lederman (Source: Nobelprize.org)

Melvin Schwartz (Source: Nobelprize.org)

Jack Steinberger (Source: Nobelprize.org)

Now we know that there are three types of neutrinos. The third is the one associated with the tau lepton, which is like the muon and the electron but even heavier. However, the discovery of the tau neutrino did not solve all the unknowns about neutrinos. We still have a lot to learn, but we had better leave that for another occasion.

References

Discovery of the Muon-Neutrino

T2K Experiment

The God Particle. Leon Lederman and Dick Teresi.

Seth H. Neddermeyer and Carl Anderson. Note on the Nature of Cosmic-Ray Particles. Phys. Rev., Vol. 51, 884.

Determinism, indeterminism and chaos

Physical phenomena in nature occur for a reason; they follow specific patterns or laws. But can the result be predicted with total certainty?

Depending on the phenomenon, we may be able to predict the exact outcome or only know it with some uncertainty. It is even possible that the only thing we get is a probable value according to a statistical criterion.

In the history of science we have gone through different stages. There was a time when everything could be exactly predicted, a deterministic period. However, the discovery of new phenomena led to the conclusion that it was impossible to know the exact outcome of such phenomena, and a non-deterministic stream of scientific thinking appeared. Later, the study of non-linear dynamical systems opened a new field: the study of systems with completely erratic and unpredictable behaviour even though their formulation may, in principle, be deterministic. This field is known as chaos.

Scientific determinism holds that, although the world is complex and unpredictable in many ways, it always evolves according to principles or rules that are totally determined, chance being something that occurs only in appearance.

Towards the middle of the nineteenth century, determinism began to fall apart piece by piece. There were two reasons for it.

Firstly, a complete and detailed knowledge of the initial conditions of the system under study was needed in order to introduce them into the equations governing the system's evolution and obtain a result.

Secondly, the dynamics of systems made of a large number of particles were far too complex to solve.

This second reason made it necessary to introduce concepts from probability and statistics to solve the problems, giving rise to a new field of mechanics, statistical mechanics, and with it a change in the scientific paradigm from a deterministic to a non-deterministic one.

The discovery of quantum mechanics also had consequences for the deterministic view of the world, because from Heisenberg's uncertainty principle it follows that deterministic equations cannot be applied to the microscopic world: it is impossible to know the values of two conjugate variables (e.g. position and momentum) at the same time.

In the minds of many people, indeterminism is associated with quantum mechanics and determinism with classical physics. But, as Nobel laureate Max Born argued, determinism in classical mechanics is not real either, because it is not possible to establish the initial conditions of an experiment with infinite accuracy.

On the other hand, Feynman, in his famous lectures, said that indeterminism does not belong exclusively to quantum mechanics; it is a basic property of many systems.

Almost all physical systems are dynamical systems, that is, systems described by one or more variables that change with time.

Some dynamical systems have periodic behaviour and others do not. When the motion is not periodic, depends sensitively on the initial conditions and is unpredictable over long time intervals (although it may be predictable over short ones), the motion is said to be chaotic.

In other words, chaos is a type of motion that can be described by equations, sometimes very simple ones, and that is characterised by:

  • An irregular movement in time that has neither periodicities nor superpositions of periodicities.
  • Unpredictability over time, because it is very sensitive to the initial conditions.
  • A complex but ordered structure in phase space.

For example, when three different masses move under the action of gravity (say, three planets), the study of their evolution in time is really complex because of its sensitivity to the initial conditions, the positions and speeds of the three masses. Poincaré showed that it is not possible to find an exact general solution to the problem.

Lorenz attractor. Does it not resemble the wings of a butterfly? (Source: Wikimedia Commons)

One of the most famous cases in the study of these non-linear dynamical systems took place in 1963, when Edward Lorenz developed a model with three ordinary differential equations to describe the motion of a fluid under the action of a temperature gradient (in other words, he was studying the behaviour of the atmosphere). Using a computer he searched for numerical solutions to the system and found that it was very sensitive to the initial conditions. It was James Yorke who later recognised the significance of Lorenz's work and introduced the term chaos.
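A minimal sketch of that sensitivity, using the Lorenz system with its classic parameter values and a simple Euler integration (the step size, duration and initial perturbation are arbitrary choices made only for illustration):

```python
# Minimal sketch of the Lorenz system with the classic parameters
# (sigma=10, rho=28, beta=8/3), integrated with a simple Euler step.
# Two trajectories that start almost identically end up far apart,
# which is the sensitivity to initial conditions described above.

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # first trajectory
b = (1.0 + 1e-8, 1.0, 1.0)   # second trajectory, perturbed by one part in 10^8

for step in range(40_000):   # 40 time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)

separation = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
print(f"Separation after 40 time units: {separation:.3f}")
```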

It is often thought that, after the discovery of quantum mechanics and Einstein's relativity, all of physics revolves around these fields. However, chaos is a very broad field that keeps gaining followers, not only among physicists and mathematicians but also in other areas such as biology, genetics and neuroscience. This interdisciplinary nature is amazing and shows how much some people can learn from others to push science towards a greater knowledge of the world.

References

Las matemáticas y la física del Caos. Manuel de León, Miguel A. F. Sanjuán. CSIC

Caos. La creación de una ciencia. James Gleick.

E. Lorenz. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences. Volume 20

Neutrino Hunters

The Universe is mysterious and exciting. There are so many things to discover that, within the lifetime of humanity, we will very likely only ever know a negligible portion of everything around us. In the Universe there are things we can directly observe; we can develop a theory about them, apply all our mathematical knowledge, and say that the portion of the Universe under observation behaves in a specific way. With such a theory we can also make predictions about future behaviour. But there are things we cannot directly observe or, in principle, develop a theory about. Nevertheless, humanity's inventiveness and will to know have no limits, and we manage to unveil the invisible Universe.

As in many areas of science, physics has things that cannot be observed, which makes them even more interesting to physicists. We are driven to learn about what we cannot see. One example is particle physics. When we start to investigate what there is inside atoms, which we do not see directly, or what happens when two atoms collide at high speed (and high energy), we obtain an amount of information that in many cases is disconcerting. Particles smaller than the constituents of atoms appear, and their behaviour differs greatly from what we are used to in our daily experience. New mysteries appear, which we pile up while waiting to solve them. Neutrinos are one of these mysteries.

Electron neutrino (Source: Particle Zoo)

Shortly after the discovery of radioactivity, Ernest Rutherford found, in 1899, that one of the ways radioactivity showed up was through the emission of negatively charged particles with a charge equal to that of the electron. They were initially named beta particles, following the name of this kind of radioactivity, which was known as beta decay, until they were finally identified as electrons.

The discovery of this kind of radioactivity opened a new research area.

Besides the efforts to understand this kind of radioactivity, some of the missing ingredients in our knowledge of atomic nuclei still had to be discovered, and that happened after the discovery of beta decay.

Atomic nuclei are made of protons and neutrons. The proton was discovered by Rutherford in 1919. The existence of the neutron was proposed by Rutherford one year later, to explain why atomic nuclei did not disintegrate due to the electric repulsion between protons. Many other scientists theorized about the existence of the neutron after Rutherford, and it was finally found experimentally in 1932 by James Chadwick.

Once all the constituents of the atomic nucleus were known, it was possible to start developing a theory of beta decay. Observations implied that the electron was emitted by the nucleus, but it was known that nuclei are made only of neutrons and protons, so the electron could not have been sitting inside the nucleus. The explanation that was accepted is that a neutron transforms into a proton, emitting an electron at the same time.

From Einstein's equation E = mc², it was expected that the electron would carry off, in the form of kinetic energy, the mass difference between the initial nucleus and the nucleus after the electron emission; that is, energy was expected to be conserved. But this did not seem to happen. Conservation of energy is one of the basic principles of physics, and when energy appears not to be conserved, either we are doing something wrong or there is something new that we do not yet know. The latter was the case.

In 1930, Wolfgang Pauli proposed the existence of a new particle without electric charge, emitted together with the electron, so that the total energy was conserved. This new particle had never been detected. Although Pauli gave it the name neutron, it was Fermi who renamed it the neutrino, after the discovery of the (much heavier) neutron in 1932, when he incorporated it into his theory of beta decay.

The problem was that the neutrino remained undiscovered. Even Pauli thought he had postulated a particle that nobody could ever detect: it was so small, in fact it was thought to have no mass at all, and had no electric charge, so it seemed impossible for it to interact with any kind of matter, even with the most sophisticated instruments of the time.

Everything changed with the advent of nuclear fission reactors. Nuclear fission uses very heavy elements; when fission occurs, the resulting lighter elements are isotopes whose nuclei carry such an excess of neutrons that they cannot be stable, and they disintegrate by emitting electrons and (anti)neutrinos, that is, beta radioactivity. Although a single neutrino is very difficult to detect, when there are a lot of them the likelihood of detecting at least one increases.

Already closed Zorita nuclear power plant (Source: author's own)

Larger experiments, with more and more sophisticated detectors, were developed with the aim of detecting the neutrino. Many years passed between the postulation of its existence and the description of beta decay and the actual discovery of the neutrino. It was in 1956 when Reines and Cowan managed to find a clear signal confirming that the mysterious neutrino had been detected… and they did it by looking for the inverse beta decay.

As we have seen, beta decay consists of a neutron transforming into a proton while emitting an electron and a neutrino. Actually, it is an antineutrino: when the quantum formalism is combined with the relativistic one, it turns out that every particle has its own antiparticle, the same particle but with opposite electric charge. In the case of neutrinos, because they have no electric charge, it is not yet clear whether the neutrino and the antineutrino are the same particle (Majorana particles) or different ones (Dirac particles), but let's leave that for another moment. The inverse beta decay consists of an antineutrino colliding with a proton, giving as a result a neutron and a positron (the electron's antiparticle, which is like an electron but with positive electric charge).
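For reference, the two processes described above can be written compactly; these are the standard reactions, not anything specific to Reines and Cowan's notation:

```latex
% Beta decay: a neutron becomes a proton, an electron and an electron antineutrino
n \;\rightarrow\; p + e^{-} + \bar{\nu}_{e}
% Inverse beta decay: an electron antineutrino hits a proton, giving a neutron and a positron
\bar{\nu}_{e} + p \;\rightarrow\; n + e^{+}
```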

In their experiment, Reines and Cowan dissolved 40 kg of cadmium chloride (CdCl2) in 400 litres of water in tanks. These tanks were 12 metres underground, to shield them from cosmic rays that could interfere with the data taking, and 11 metres away from the Savannah River reactor, which was the antineutrino source. Above the water there were liquid scintillators, and below them photomultiplier tubes were installed to detect the scintillation light. The positron was detected through its slowing down and annihilation with an electron in the tank, emitting two gamma rays that were picked up by the liquid scintillators and the photomultiplier tubes. The neutron was also slowed by the water and captured by the cadmium a few microseconds after the positron annihilation. In this neutron capture several gamma rays were emitted and detected by the liquid scintillators just after the gamma rays produced by the annihilation of the positron. This delay had been theoretically predicted, so if the experimental measurement matched the prediction it could be shown that a neutrino had produced the reaction.

Reines and Cowan in the control centre (Source)

In this way an era of research into the unknown, the small and the invisible, the neutrinos, began. But there were still many surprises to come, because what they had detected was only one of the varieties of neutrinos, the electron neutrino. Later, other experiments would detect the other varieties.

But let’s not get ahead of ourselves…

References:

T2K Experiment

First Detection of the Neutrino by Frederick Reines and Clyde Cowan

Neutrino. Frank Close. RBA Divulgación

Stars’ home

A few weeks ago we talked about radio telescopes and said that they were very important for the study of different astrophysical phenomena. Today, we are going to talk about the interstellar medium and will see that, in some cases, radio telescopes are useful to study it.

When we look at the sky, we see many stars with the naked eye, and there are many more still. In some cases we can distinguish other objects bright enough to be seen with the naked eye, like nebulae or galaxies, but if we look at them without knowing what we are looking at, we may mistake them for unremarkable stars because we cannot distinguish their shape and extent. And when we look at a region where we don't see anything, we may think it is empty, but actually it is not empty at all.

Between the stars there exists what is called the interstellar medium and, although we don't see it, it is impressive and deserves to be studied because of what it implies: it is the place where stars are born.

The interstellar medium is primarily made of gas, specifically hydrogen gas, which is the main component, although it also contains traces of other "heavier" chemical elements such as helium, carbon, nitrogen or oxygen, which are present in very small quantities. The reason these heavier elements exist is that the interstellar medium is not only the place where stars are born, but also the place where they die. As a star evolves, it generates heavier elements in its interior through nuclear fusion. When the star dies, for instance as a supernova, it spreads these elements, which are incorporated into the interstellar medium.

The hydrogen we find in the interstellar medium can be in three different states: neutral hydrogen or HI, molecular hydrogen or H2, and ionized hydrogen or HII. To understand these three states, we have to remember that hydrogen is the simplest atom: it has a nucleus made of one proton with one electron bound to it. When hydrogen has this simple structure it is called neutral hydrogen; when the atom has been given enough energy to release the electron from the electrical attraction of the proton, it is called ionized hydrogen. The third state, molecular hydrogen, is formed when two hydrogen atoms bond by sharing their respective electrons.

The presence and abundance of these states determine the existence of three types of regions: atomic gas regions or HI regions, molecular gas regions or H2 regions, and ionized gas regions or HII regions.

HI regions are very cold areas (with minimum temperatures around 30 K) which are studied using the 21 cm line of the electromagnetic spectrum; this lies in the range of radio wavelengths and is therefore observed with radio telescopes. There are regions in the sky where, observing at visible wavelengths, we see nothing, but if we observe in the 21 cm line we find that wherever we point the radio telescope we always detect a signal.

This signal corresponds to a photon emitted when the spins of the electron and the proton return to the state in which they are not aligned, after having been aligned, for instance, by a collision between atoms. The fact that this line can be observed regardless of the direction in which we look is proof that atomic hydrogen is everywhere.

We can also use the Doppler effect to determine how HI regions move. If the 21 cm line is shifted towards the part of the spectrum with longer wavelengths, it means the region is moving away from us; if it is shifted towards shorter wavelengths, it means it is approaching us. These observations provide us with information, for example, about the rotation of the Galaxy around its centre.
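A minimal sketch of that measurement, assuming a made-up observed frequency: the 21 cm line has a rest frequency of about 1420.406 MHz, and the non-relativistic Doppler formula turns the observed frequency offset into a radial velocity (negative here means the cloud is approaching):

```python
# Radial velocity of an HI cloud from the observed 21 cm line frequency.
# v ≈ c * (f_rest - f_observed) / f_rest ; positive v = receding.

C_KM_S = 299_792.458          # speed of light, km/s
F_REST_MHZ = 1420.40575       # rest frequency of the HI 21 cm line, MHz

def hi_radial_velocity(f_observed_mhz: float) -> float:
    return C_KM_S * (F_REST_MHZ - f_observed_mhz) / F_REST_MHZ

# Hypothetical observation: the line seen at 1420.90 MHz (higher frequency,
# i.e. shorter wavelength) implies a cloud approaching at roughly 100 km/s.
print(f"{hi_radial_velocity(1420.90):.1f} km/s")
```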

The sky in the 21 cm line (Source: NASA APOD. Credits: J. Dickey (UMn), F. Lockman (NRAO), SkyView)

Also, when we observe the sky in the visible, we see areas densely populated with stars, but between them there seem to be empty, completely dark spaces. These regions are actually clouds of molecular hydrogen and dust, the dust being responsible for the darkness. Molecular hydrogen regions are even colder than HI regions (with minimum temperatures around 10 K) but denser. These regions are very important because it is inside them that stars are born. Unfortunately, there is no specific line to observe them, as there is for HI regions. In fact it is quite difficult to observe molecular hydrogen, because H2 is a molecule without a dipole moment and does not present lines similar to the 21 cm line. Specifically, it has no rotational lines; it does have vibrational lines, but a very high energy is needed to produce the transitions that generate them, and those conditions exist only near stars that are being formed, which tells us very little about the rest of the cloud.

If dust prevents us from observing in the visible part of the spectrum and there is no clear line to observe at radio wavelengths, how can we observe these regions?

As we have mentioned before, there are other, heavier elements in the interstellar medium, and these elements form molecules that, although less abundant, let us observe the interior of these clouds. One of these molecules is carbon monoxide (CO), which has a net dipole moment and thus emits rotational lines that can be observed using radio telescopes. Ammonia (NH3) also helps us look inside these clouds. In this way we can study the environment where stars are born, through its density and temperature, for example.

Barnard 68, a molecular cloud (Source: NASA APOD. Credits: FORS Team, 8.2-meter VLT Antu, ESO)

When stars are being formed in the interior of molecular clouds, they are young and very energetic, so they emit very high energy radiation (in the ultraviolet range) that ionizes the hydrogen in the clouds, turning it into HII. HII regions are therefore very hot. Even so, the interstellar dust does not let us observe the interior of the clouds in the visible, and we again have to turn to other wavelengths. The radiation these brand-new stars emit gives rise, in HII regions, to bremsstrahlung radiation (braking radiation). This radiation appears when an electron passes close to an ionized hydrogen atom and is deflected from its trajectory, emitting radiation; because there are many electrons passing many protons at different distances, a continuous spectrum of radiation appears. In this case the bremsstrahlung radiation is studied in the X-ray range, so radio telescopes are not used for it; however, combining the X-ray information with the radio information obtained when studying the molecular clouds yields very valuable results.

Messier 17 or the Omega Nebula, an HII region (Source: NASA APOD. Credits: Subaru Telescope (NAOJ), Hubble Space Telescope; Color data: Wolfgang Promper; Processing: Robert Gendler)

As we have seen, there are many things our eyes cannot see, even using conventional telescopes. The interstellar medium is, in many respects, an unsolved mystery. Whether with radio telescopes or any other type of detector, we still have a long way to go. Meanwhile we can enjoy some of the beautiful images that other telescopes have gathered throughout the years, like one of my favourites, the Orion Nebula.

M42 or the Orion Nebula (Source: NASA APOD. Credits: NASA, ESA, M. Robberto (STScI/ESA) et al.)

Antennae to observe the Universe

Who has not looked at the sky on a clear night, far from the city lights, and wondered whether there is anything more beautiful than a starry sky? It is almost certain that all of us who decided to study the Universe started that way and, beyond being delighted by the beauty of the sky, began to wonder why everything is the way it is and not some other way.

It is also almost certain that we all started asking our parents for a telescope, and the one we got was always smaller than the one we asked for.

Later on, besides the telescope, we wanted books about how to observe the sky, what objects could be observed and when to observe them, books that, in addition, had cool pictures of galaxies, nebulae, globular clusters and, all in all, any object out there.

Those books also had pictures of telescopes and, why not say it, we surely wanted them all: refractors, reflectors… But for many of us the surprise was to learn that there were telescopes without any hole to look through. In fact, they did not look like conventional telescopes at all. They looked like the parabolic antennae some people had for watching certain television channels. What was that? Was it possible to observe the Universe with those antennae?

These antennae are actually radio telescopes, and yes, the Universe can be observed with them. In fact, it is a must to observe the Universe with them.

40 m radio telescope of the IGN in Yebes (Source: IGN)

Conventional telescopes, the ones with a hole to look through, usually observe the Universe in the visible part of the spectrum. Of the whole electromagnetic spectrum, they can only see the wavelengths corresponding to visible light, the same ones we can see with our eyes. Radio telescopes, however, are capable of detecting other wavelengths, longer than those seen by conventional telescopes. These wavelengths lie in the radio part of the spectrum.

A radio telescope is, in general terms, a large parabolic surface (a paraboloid of revolution) that acts as a radio-wave collector. Because of its parabolic shape, the incoming waves are reflected by the surface and concentrated at a point known as the primary focus. At this point two things can happen. One option is that a receiver sits at the primary focus and sends the collected radiation to the instruments that will measure it. The other is that the primary focus holds a subreflector that reflects the radiation back towards a receiver located at the dish and, from there, to the instruments. Both options are feasible, but the second one allows the receiver to be accessed when maintenance is needed and also allows it to be heavier.

Elements of a radio telescope (Source: Wikimedia Commons)

Radio telescopes are antennae that can be very large, reaching diameters of 100 m or even more, like the 300 m of the Arecibo radio telescope. The size determines the resolution of the information gathered: the larger the antenna, the higher the resolution. The main problem is that it is practically impossible to build antennae several kilometres across to achieve very high resolution. This does not mean that smaller radio telescopes, 100 metres and below, are useless for lack of resolution; several important discoveries have been made with them. But like any other scientists, astronomers and astrophysicists always want more, especially when every answer raises new questions.

100 m radio telescope of the Max Planck Institute in Effelsberg (Source: Wikimedia Commons)

305 m radio telescope in Arecibo (Source: Wikimedia Commons)

To answer these new questions, not only are larger radio telescopes built, but several smaller radio telescopes are built and connected to one another, either physically, so that the radiation gathered by all of them is sent to the same analysis centre at the moment of reception, or "virtually", so that each radio telescope records its own data and later sends them to remote centres where they are analysed together with the data gathered by the other radio telescopes.

This is possible thanks to interferometry techniques. Interferometry consists in combining the radiation gathered by several sources (several radio telescopes) in such a way that the resolution of the information received is increased. Interferometry is based on the fact that radiation is an electromagnetic wave. To understand what interferometry is, let's recall a classical experiment in the history of physics: the double-slit experiment.

When a source of light is placed in front of a screen, with a plate between them that does not let light through except for two thin slits cut into it, the light passing through the slits is diffracted and follows different paths. When the light reaches the screen, the diffracted light coming from each slit interferes, because it arrives from different directions and at different moments. This interference produces dark lines where the light has interfered destructively and bright lines where it has interfered constructively. It can also be seen that when the light source is point-like (a small source) the contrast between bright and dark lines is high, and when the light source is wide the contrast is washed out.

Double-slit experiment (Source: Wikimedia Commons)

Interferometry with radio telescopes follows the same principle. Radio waves arrive at radio telescopes separated by a certain distance at slightly different moments (the time difference is very small but measurable with precise timing systems). As a result, the signals measured by all the radio telescopes generate an interference pattern. By studying the pattern and the contrast between the bright and dark signals measured, the shape and characteristics of the source of radio waves can be reconstructed.

Artist rendering of ALMA (Atacama Large Millimeter/submillimeter Array), used for long baseline interferometry (Source: ESA)

Using interferometry techniques we can increase the resolution of the image because, even though we have small radio telescopes separated by a few metres or kilometres when they are physically connected (known as long baseline interferometry) or by many kilometres when they are "virtually" connected (known as very long baseline interferometry), the final outcome is as if we had a radio telescope with the size of the maximum separation between the smaller ones. This technique can even be used with radio telescopes in orbit.
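An order-of-magnitude sketch of why baselines matter: the diffraction-limited angular resolution scales roughly as λ/D, with D either the dish diameter or the maximum baseline of the array. The dish size and baseline below are illustrative choices, not those of any particular instrument:

```python
import math

# Diffraction-limited angular resolution: theta ≈ lambda / D (in radians),
# where D is the dish diameter for a single antenna or the maximum
# baseline for an interferometer.

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    theta_rad = wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

WAVELENGTH = 0.21  # the 21 cm hydrogen line, in metres

# Illustrative comparison: a 100 m single dish versus a 1000 km baseline.
print(f"100 m dish      : {resolution_arcsec(WAVELENGTH, 100.0):8.1f} arcsec")
print(f"1000 km baseline: {resolution_arcsec(WAVELENGTH, 1_000_000.0):8.4f} arcsec")
```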

Radio telescopes are very useful for studying phenomena occurring in the Universe that we cannot observe with conventional telescopes, such as the birth of stars and the interstellar medium where stars are born. How radio telescopes are used, and the physics behind the phenomena they observe, will be explained in another post.

References:

PARTNeR Project: http://partner.cab.inta-csic.es/

http://www.upv.es/satelite/trabajos/pracGrupo11/radio/

X-rays, energy quantization and Planck's constant

There are products that use X-rays to make our lives better and more fun, such as X-ray glasses to see underneath other people's clothes; many superheroes use them to defeat supervillains, just as doctors and radiologists use them to diagnose diseases. But what are X-rays? In this post I want to clarify what they actually are and how, thanks to them, it is possible to show that energy is quantized and to calculate Planck's constant precisely. For the latter I will use a few formulas, but don't be afraid, they are easy and there are not many of them.

First of all, let’s do a bit of history.

In 1895, Wilhelm Konrad Röntgen was working in Würzburg, Germany, in a new research field: cathode rays. Using a cathode-ray tube, specifically a Hittorf-Crookes tube, covered with black paper, he observed that a transverse line appeared on an indicator screen made of barium platinocyanide, located beneath the tube, whenever a current circulated through the tube. He found this line on the indicator screen strange. On the one hand, according to the state of research at the time, the effect could only be due to light radiation; on the other hand, it was impossible for the light to come from the tube, because the black paper cover did not let light through. Röntgen gave this radiation the name X-rays because he did not know its origin. Almost two months later he had already prepared a communication announcing these results, and he even attached a series of pictures that have become famous, such as the one of the hand of his wife, Anna Bertha Röntgen.

Radiography of the hand of Anna Bertha Röntgen

Röntgen could not explain the X-rays, but now we know what they are. A cathode-ray tube has two electrodes at opposite ends. One of them, the cathode, is heated until it emits electrons, and through an electric potential of some tens of thousands of volts the electrons are accelerated towards the electrode at the other end of the tube, the anode. When the electrons hit the anode, a continuous spectrum of electromagnetic radiation is observed, with wavelengths of the order of 1×10⁻¹⁰ m. What actually happens at the anode is that the electrons pass close to the nuclei of the atoms of the anode material, are deflected by the electric field of the nuclei and are slowed down, and thus they emit radiation (a photon), because whenever a charged particle is accelerated or decelerated, in other words whenever its speed changes with time, it emits radiation.

Not all the electrons slow down in the same way; that is, not all of them suffer the same deceleration, because not all of them pass at the same distance from a nucleus. Since they do not feel the same intensity of the electric field, they do not undergo the same deceleration. This is why a continuous spectrum appears: a different value for each electron and, as there are many of them, the spectrum looks continuous.

X-ray continuum spectrum

However, an odd phenomenon occurs. For each value of the applied electric potential there is a minimum wavelength in the continuum spectrum below which no radiation is emitted. This phenomenon could not be explained by classical physics.

We come now to the second part of the title of this post: energy quantization.

By the end of the nineteenth century it was known experimentally how the energy emitted by a body is distributed in frequency: at low frequencies bodies emit an amount of energy that increases with frequency up to a maximum, and beyond that maximum the emitted energy decreases as the frequency keeps increasing. Rayleigh and Jeans tried to explain this energy density distribution using the physics available at the time, but they could only predict an energy that increased continuously with frequency, which contradicted the observations. This failure was named the ultraviolet catastrophe.

Max Planck proposed that energy is quantized, that is, that it comes in very small packets, named quanta, each packet having an energy proportional to the frequency. Mathematically this is written E = hν, where h is Planck's constant. Using this approach, the Rayleigh-Jeans problem was solved and the calculated energy density distribution matched what was observed experimentally.

Emitted energy as a function of wavelength at different temperatures

Back to the X-rays: the kinetic energy of the electrons is given by their charge e and by the electric potential V that accelerates them, thus:

E=eV

If the electron is completely stopped after interacting with the nucleus and, bearing in mind that energy is neither created nor destroyed, the energy before the interaction (E = eV) equals the energy of the photon emitted when the electron is stopped (E = hν), we have:

eV = hν

Solving for ν and knowing that the speed of light c equals the frequency times the wavelength (c = νλ), we obtain:

λ=hc/eV

We thus obtain a minimum wavelength for each electric potential. Here we can see that something that could not be explained using classical physics can be explained using energy quantization, that is, quantum physics.

In this last formula we see Planck's constant.

Now let's go to the last part of the title of the post: Planck's constant.

The speed of light and the charge of the electron are constants with very well known values (c = 300,000 km/s and e = 1.602×10⁻¹⁹ C). When we use a cathode-ray tube to generate X-rays, we apply a fixed electric potential. If for that potential we plot the continuous spectrum of the generated X-rays, we can read off the minimum wavelength at which X-rays are produced. Therefore, once the minimum wavelength is known, we can put all the values into the equation, solve it for h and establish the value of Planck's constant:

h = 6.626×10⁻³⁴ J·s

The value of h calculated in this way is very precise, thanks to the precision with which we know the rest of the parameters in the equation.
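A minimal numerical sketch of that last step, with a made-up accelerating potential and cut-off wavelength chosen only so the numbers come out consistently:

```python
# Estimate Planck's constant from the short-wavelength cut-off of an
# X-ray tube spectrum: e * V = h * c / lambda_min  =>  h = e * V * lambda_min / c

E_CHARGE = 1.602e-19   # electron charge, C
C_LIGHT = 2.998e8      # speed of light, m/s

def planck_from_cutoff(voltage_v: float, lambda_min_m: float) -> float:
    return E_CHARGE * voltage_v * lambda_min_m / C_LIGHT

# Hypothetical measurement: a 30 kV tube showing a cut-off at 0.0414 nm.
h = planck_from_cutoff(30_000.0, 4.14e-11)
print(f"h ≈ {h:.3e} J·s")   # about 6.6e-34 J·s
```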

Have the mathematics of this post been painful?

References:

Marie Curie y su tiempo. José Manuel Sanchez Ron

Anna Bertha Roentgen (1832-1919): La mujer detrás del hombre. Daniela García P., Cristián García P. Revista Chilena de Radiología Vol II Nº4, año 2005; 1979-1981

Física Cuántica. Carlos Sanchez del Río (Coordinador)

http://en.wikipedia.org/wiki/Ultraviolet_catastrophe

 

The colour of things

What is the colour of things? And, if we are in a completely dark room, what is their colour? Let's change the situation: we are in a completely dark room with a number of objects we have never seen. What is their colour? Do they have one? The first question is hard to answer because, if we have never seen an object, the only thing we can do is imagine a colour. The answer to the second can get a bit philosophical; in my opinion they do have a colour, which is in fact the same as that of everything else in the room: black. If they have another colour, I am not able to say which. But I don't want to talk about philosophy, I want to talk about physics, specifically about the interaction between radiation and matter.

Everything we see, touch or breathe is made of atoms, and atoms are made of a nucleus, with protons and neutrons, and an outer shell of electrons (I talk a bit more about atoms here). Electrons are the ones in charge of giving things their characteristic colours. But they cannot do it on their own; they need the energy provided by photons, that is, by light.

In 1911, Rutherford published his atomic model, in which he proposed that electrons orbit the nucleus in a similar way as planets orbit the Sun. The problem was that electrons orbiting this way emit radiation, so they lose energy until they fall into the nucleus: the atom would not be stable. In 1913, Niels Bohr took this idea, together with the quantum hypothesis made by Max Planck, and proposed that electrons orbit the nucleus in circular orbits, which is the content of the first postulate of his model, but that not all orbits are allowed; electrons can only occupy certain quantized orbits. This is the second postulate of Bohr's atomic model, which says that the only allowed orbits are those whose radius is such that the angular momentum of the electron is n times h/2π, with n an integer and h Planck's constant. In these orbits the electron does not emit radiation and the atom is stable.
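Written out, the second postulate and the energy levels it leads to for hydrogen look as follows; the numerical value is the standard textbook result, quoted here only as an illustration:

```latex
% Bohr quantization of the angular momentum
m_e v r = n\,\frac{h}{2\pi}, \qquad n = 1, 2, 3, \dots
% Resulting energy levels of the hydrogen atom
E_n = -\frac{13.6\ \text{eV}}{n^2}
```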

But then, is the electron always in the same orbit, in the same way that planets stay in theirs? For a planet to stay in its orbit, there must be no perturbation that gives it energy and pushes it out of the orbit, such as a meteorite; and even then, a meteorite might not have enough energy to knock the planet out of its orbit. The same happens with electrons when there is no perturbation with enough energy to make the electron jump to another orbit. What kind of perturbation can make the electron jump? Here is the relation with the colour of things: this perturbation is light, more specifically the photons of light. Photons have a specific energy that depends on their wavelength, in other words, on the colour of the light. If the light has little energy, its wavelength is close to the red end; if it has much more energy, its wavelength is close to the blue end.

When a photon collides with an electron in its stable orbit, it gives it energy to jump to another orbit. But it cannot be just any orbit; it has to be one that satisfies Bohr's second postulate, that is, it has to be quantized.

Bohr's atomic model

The electron cannot remain in the new orbit forever unless there is a continuous source of energy, so it will jump back to its initial orbit. The problem is that the initial orbit has less energy than the final one; therefore, to go back, the electron has to lose the excess energy by emitting a new photon whose energy is the difference between the energies of the two orbits. This new photon has a wavelength that depends on that energy and therefore a specific colour. It is this new photon that arrives at our eyes and makes us see things with a certain colour.
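As a sketch of how an energy difference turns into a colour, the snippet below uses the Bohr-model energy levels of hydrogen (E_n = -13.6 eV / n²) to find the photon emitted in the jump from the third to the second orbit; the resulting wavelength, about 656 nm, corresponds to the red H-alpha line:

```python
# Wavelength of the photon emitted when a hydrogen electron falls from
# orbit n_upper to orbit n_lower, using the Bohr-model energy levels.

H_PLANCK = 6.626e-34     # Planck's constant, J*s
C_LIGHT = 2.998e8        # speed of light, m/s
EV_TO_J = 1.602e-19      # one electronvolt in joules
RYDBERG_EV = 13.6        # hydrogen ground-state binding energy, eV

def emitted_wavelength_nm(n_upper: int, n_lower: int) -> float:
    e_upper = -RYDBERG_EV / n_upper**2
    e_lower = -RYDBERG_EV / n_lower**2
    delta_e_j = (e_upper - e_lower) * EV_TO_J    # energy carried by the photon
    return H_PLANCK * C_LIGHT / delta_e_j * 1e9  # metres -> nanometres

# The 3 -> 2 jump gives the red H-alpha line, roughly 656 nm.
print(f"{emitted_wavelength_nm(3, 2):.0f} nm")
```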

Where do the initial photons, the ones that make the electrons jump, come from? For example, from sunlight, from light bulbs, from a fire… That is why we cannot see the colour of things in the dark: in the absence of light there are "no" photons making the electrons jump out of their orbits (photons do keep arriving and colliding with electrons, but they do not have enough energy to push an electron into an orbit from which it would emit another photon of a given colour, e.g. green, that reaches our eyes).

I should say that I have used the concept of orbit as Bohr defined it, that is, assuming it is like the orbit of a planet. Reality is, as always, more complex, and I should have spoken about energy levels, or even used more rigorous quantum terminology, but then probably nobody would have read beyond the first paragraph.

References

Bohr’s atomic model – Wikipedia

A quick look at the Standard Model of Particle Physics

If you have ever watched The Big Bang Theory, you have probably noticed that every time Sheldon, Leonard, Raj or Howard are at the university, this poster hangs on the walls of the halls.

 Standard Model Particles and their interactions

The Standard Model of Fundamental Particles and their interactions

This image represents everything we know, and that has been experimentally verified, about the structure of the matter we are made of and of everything we have observed in the universe, with the precision we are able to reach using the instruments we have.

Let’s try to explain the image.

Atomo

Inner structure of the atom

Basically, we all know that atoms have two distinct parts: an outer shell where the electrons are, and the nucleus, which is made of protons and neutrons.

Electrons have negative charge and are responsible, for instance, for conducting electricity (when they are free) or for making things appear one colour or another (due to the transitions between the different possible energy levels of the atom, but that is another story). Protons have a positive electric charge and are present in the same number as the electrons, so that the atom is electrically neutral. Neutrons have no electric charge: they are neutral.

Electrons are fundamental particles: they cannot be broken down into more elementary particles. Protons and neutrons, however, can be broken down into smaller particles. These particles are the quarks, specifically two of the six that exist, the up quark and the down quark. For some years it has been proposed, at a theoretical level, that quarks and electrons are not point-like particles but tiny vibrating strings of pure energy; depending on how they vibrate, they show the properties of the electron or of the quarks (and of the rest of the particles we will see later). However, this theory has not yet been verified by experiments and, in fact, it lies beyond the Standard Model we are dealing with.

Electrons, together with their heavier cousins the muons and taus and their lighter cousins the neutrinos, are known as leptons, and together with the quarks they are known as fermions. The reason for this name is that they obey Fermi-Dirac statistics and therefore satisfy the Pauli exclusion principle, which says that it is not possible to find two fermions in the same quantum state simultaneously.

It should be highlighted that every fermion has an associated antiparticle, which is the same particle but with the opposite charge. For instance, the antiparticle of the electron is the positron (which is not the same as the proton), and the antiparticle of the up quark is the up antiquark. Antiparticles are represented with the same symbol as the particle but with a bar on top.

Fermiones

Fermions and their properties

Each charged lepton, that is, the electron, the muon and the tau, has a lighter cousin: the electron has the electron neutrino, the muon the muon neutrino and the tau the tau neutrino. As can be seen in the table above, the only difference between the electron, the muon and the tau is that the mass increases; all of them have negative electric charge. Neutrinos have no electric charge and a very small mass (but they do have mass, and this is one of the reasons why they change flavour: when, for instance, they leave the Sun on their way to the Earth they are electron neutrinos, but when we detect them on the Earth we measure fewer electron neutrinos than expected, because during the trip they have changed flavour and turned into muon or tau neutrinos).
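As a hedged aside (not something stated in the original text), in the simplified two-flavour picture the probability that a neutrino produced with one flavour is detected with another, after travelling a distance L with energy E, is

    P(\nu_e \to \nu_\mu) = \sin^2(2\theta)\, \sin^2\!\left(\frac{\Delta m^2 \, L}{4E}\right)

which vanishes if the mass difference \Delta m^2 is zero; the observed oscillations therefore imply that neutrinos do have mass.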

Above we talked about two types of quarks, the up and the down quark, which are the ones that make up protons and neutrons, but there are also the charm, strange, top and bottom quarks. They have these names because physicists are funny people (although it may look like the opposite) and like to give strange names to these things. They are usually just called the u, d, c, s, t and b quarks.

The c, s, t and b quarks are not part of ordinary matter by themselves; they appear as the result of high-energy collisions between other particles (for example between two protons, as is done at the LHC, essentially thanks to the famous Einstein equation E = mc2) or in nuclear decays.

One of the peculiarities of quarks is that they are never found alone in nature, but always grouped, as in the case of the proton and the neutron. Apart from the particles that make up the atomic nucleus, they can also be found forming other particles.

Bariones

A few baryons

Baryons are made of three quarks or three antiquarks. In the latter case they are known as antibaryons.

Mesons are made of two quarks: one of them must be a quark and the other an antiquark.

Mesones

A few mesons
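As a small check of how this quark bookkeeping works, here is a minimal Python sketch (my own example; the quark contents are the standard textbook assignments, not something given in the text) that recovers the electric charge of a few hadrons from the charges of their quarks:

    # Electric charges of a few hadrons, obtained by adding up their quark charges
    # (in units of the proton charge). Quark contents are the standard assignments.
    quark_charge = {"u": 2/3, "d": -1/3, "anti-u": -2/3, "anti-d": 1/3}

    hadrons = {
        "proton (u u d)": ["u", "u", "d"],        # a baryon
        "neutron (u d d)": ["u", "d", "d"],       # a baryon
        "pion+ (u anti-d)": ["u", "anti-d"],      # a meson
    }

    for name, quarks in hadrons.items():
        charge = sum(quark_charge[q] for q in quarks)
        print(f"{name}: charge = {round(charge)}")   # 1, 0 and 1 respectively

The same bookkeeping applies to any baryon or meson in the tables above.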

On the other hand, we have the bosons.

Bosones_juntos

Bosons

They are called bosons because, unlike fermions, they obey Bose-Einstein statistics, which allows many bosons to exist in the same quantum state at the same time (remember that no two fermions can share the same quantum state). Some bosons have the particularity of being the carriers of the fundamental forces of nature: every time two particles interact, what they are really doing is exchanging a boson. These forces are electromagnetism, the weak force and the strong force. There is also a hypothesised boson for the gravitational force, known as the graviton, but the Standard Model does not describe gravity and so the graviton is not part of it.
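For the curious reader (an aside of mine, not in the original text), the two statistics can be summarised by the average number of particles occupying a state of energy E at temperature T:

    \langle n \rangle_{\text{Fermi-Dirac}} = \frac{1}{e^{(E-\mu)/k_B T} + 1}, \qquad \langle n \rangle_{\text{Bose-Einstein}} = \frac{1}{e^{(E-\mu)/k_B T} - 1}

The +1 in the first expression keeps the occupation of each state below one particle (the exclusion principle at work), while the -1 in the second allows it to grow without limit.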

These forces or interactions are represented hereafter.

Interacciones

Fundamental interactions

The weak force is responsible for radioactive decays, which occur when a particle transforms into another through the emission of one or more additional particles. This interaction is mediated by the W+, W- and Z0 bosons which, unlike the photon and the gluon, have mass.

The strong force keeps together the quarks that make up the particles of the atomic nucleus, so that they do not break apart spontaneously. The boson in charge of this task is the gluon.

The electromagnetic force is the one best known to all of us, because it comprises the electric and magnetic forces (in fact it is a single force that shows itself in two different ways, which is why it is called the electromagnetic force). The boson that carries this force is the photon. Our daily experience is based on this force: every time we see light, feel heat, cook a meal in the microwave, etc., what we are doing is interacting with photons of different energies.

As we said before, particles interact with each other, and they do so all the time.

 Decaimientos

Particle interactions.

The left image represents how a neutron decays to produce a proton, an electron and an electron antineutrino. This decay is known as beta decay.

The middle image shows a collision between an electron and a positron that turns matter into pure energy, again through Einstein's equation E=mc2. That energy then transforms, by the same equation, into different particles; in this case a B0 meson and an anti-B0 meson are formed.
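To put a number on that E=mc2 conversion, here is a tiny Python sketch (an illustrative example of my own, not part of the original text) computing the minimum energy released when an electron and a positron annihilate at rest:

    # Minimum energy released when an electron and a positron annihilate at rest,
    # obtained from E = m c^2 (both rest masses are converted into energy).
    m_e = 9.109e-31   # electron (and positron) mass, kg
    c = 3.0e8         # speed of light, m/s
    MeV = 1.602e-13   # joules per MeV

    E = 2 * m_e * c ** 2
    print(f"{E / MeV:.2f} MeV")   # about 1.02 MeV, carried away as photons

In the middle picture the electron and positron collide with far more energy than this, which is what allows much heavier particles such as the B0 mesons to be formed.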

Lastly, the right image shows a collision between two protons (like those that occur in the LHC at CERN) that produces two Z0 bosons and a number of assorted hadrons, which can be mesons or baryons.

These are not the only interactions that can happen; there are many more, and all of them follow strict conservation rules (for instance, conservation of energy, momentum, electric charge, etc.), but they are a good example.
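As a minimal illustration of one of those rules (a toy sketch of my own, not part of the original text), electric charge must balance on both sides of the beta decay shown above:

    # Electric-charge conservation in beta decay: n -> p + e- + anti-nu_e
    # (charges in units of the proton charge; a toy check, not a full calculation)
    charge = {"n": 0, "p": +1, "e-": -1, "anti-nu_e": 0}

    before = ["n"]
    after = ["p", "e-", "anti-nu_e"]

    assert sum(charge[x] for x in before) == sum(charge[x] for x in after)
    print("electric charge is conserved in beta decay")

Energy, momentum and several other quantum numbers are checked in the same spirit, although the real calculations are of course more involved.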

At the mathematical level, the Standard Model is quite complex and difficult to understand, but at the level of the fundamental particles that make it up and their interactions it is much easier, and it can be summarised on a poster hung on the wall of any university corridor.

References:

The Particle Adventure