Subject: Can someone please explain?

Posted by: Mixamatosis
Date: Jan 21 17

I've read that it's dangerous to mix ammonia and bleach. Variously I've read that it can produce deadly cyanide gas, chlorine gas (which is said to be bad for you) and even explosions.

However swimming pools are kept fit for use with chlorine, and our urine contains ammonia but then we may clean toilets with bleach. Also many cleaning products contain either ammonia or bleach and it would be easy to use them unthinkingly in combination.

How is it that people aren't generally harmed by these dangers when swimming in swimming pools or doing daily cleaning, or are we being harmed at low level and is the harm cumulative?

526 replies. On page 26 of 27 pages.
brm50diboll
Nuclear fission is a set of reactions that produce a great deal of energy by splitting heavy nuclei with free neutrons. Nuclei are held together by binding energy. The binding energy per nucleon is *maximized* near Fe-56. For nuclei lighter than Fe-56, binding energy per nucleon generally increases as the nuclei get heavier; for nuclei heavier than Fe-56, it decreases as they get heavier. What this means is that for the lighter nuclei, *fusion* releases energy: when smaller nuclei are joined to produce a larger one, the product is more tightly bound than the reactants were, and the difference in energy is released, typically as gamma rays. Similarly, for heavy nuclei, *fission* releases energy as the large nuclei break into smaller, more tightly bound ones. Free neutrons are used to initiate fission in heavy nuclei, such as certain uranium or plutonium isotopes. When fission occurs, a nucleus rarely breaks into two equal pieces. Fission breaks nuclei apart differently each time, with dozens of possible products, but typically two unequally sized nuclei are produced along with two or three free neutrons. The fact that fission produces multiple free neutrons allows a "chain reaction" to occur, producing large amounts of energy. If this chain reaction is uncontrolled, the result is an explosion like in the atomic bombs of Hiroshima and Nagasaki. The chain reaction may be controlled using control rods to absorb some of the free neutrons and moderators to slow the rest, and nuclear reactors generate energy using this controlled process. Not all heavy nuclei are fissionable, however. I will discuss some of the more common fissionable isotopes next time.
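The shape of the binding-energy curve can be sketched with the semi-empirical mass formula. This is a standard textbook approximation (the coefficients below are typical published fits, and the formula is rough for very light nuclei), not measured data for specific isotopes, but it shows binding energy per nucleon peaking near Fe-56:

```python
# Semi-empirical (Bethe-Weizsaecker) mass formula: a rough model of
# nuclear binding energy. Coefficients (in MeV) are typical textbook
# fits, quoted here as illustrative assumptions.
def binding_energy(A, Z):
    """Approximate total binding energy (MeV) of a nucleus with
    mass number A and atomic number Z."""
    a_v, a_s, a_c, a_a = 15.75, 17.8, 0.711, 23.7
    return (a_v * A                                # volume term
            - a_s * A ** (2 / 3)                   # surface term
            - a_c * Z * (Z - 1) / A ** (1 / 3)     # Coulomb repulsion
            - a_a * (A - 2 * Z) ** 2 / A)          # asymmetry term

for name, A, Z in [("He-4", 4, 2), ("Fe-56", 56, 26), ("U-235", 235, 92)]:
    print(f"{name}: {binding_energy(A, Z) / A:.2f} MeV per nucleon")
```

Running it puts Fe-56 near the top of the curve (about 8.8 MeV per nucleon) with U-235 lower (about 7.6); that gap is the energy fission taps.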

Reply #501. Dec 12 21, 5:43 PM
brm50diboll
Probably the most famous fissionable isotope is U-235, a naturally occurring isotope of uranium with a half-life of 704 million years, which seems like a long time but is short compared to the age of the Earth (4.5 billion years). As a consequence, U-235 makes up only 0.72% of natural uranium; 99.27% of natural uranium is U-238, which, unfortunately for this discussion, is not fissile. U-238 has a much longer half-life of 4.468 billion years, comparable to the age of the Earth itself. There are no natural processes on Earth that form these heavy isotopes; they were formed in ancient supernova explosions which produced the primordial nebula from which our solar system and Earth itself originally formed. Presumably, when Earth formed, the percentage of U-235 in uranium was much higher than today, but since U-235 has a much shorter half-life than U-238, its share has greatly decreased over the eons: the vast majority of the U-235 has decayed away, while only about half the U-238 has. In any event, to produce a fission chain reaction, the uranium fuel needs a much higher concentration of U-235 relative to U-238 than exists naturally, as the non-fissile U-238 "quenches" any chain reaction that might start from the U-235. So scientists had to find a way to "enrich" the U-235 content of uranium. The process had to be physical, not chemical, since both isotopes are uranium and have essentially identical chemical properties. Electromagnetic separation (essentially large-scale mass spectrometry) works, but it is tedious and inefficient for large-scale production. Nevertheless, the Oak Ridge laboratories in Tennessee were set up to enrich uranium for this purpose. When the percentage of U-235 gets high enough (roughly 90% for a practical weapon), an uncontrolled fission chain reaction can occur. Such highly enriched uranium is called weapons grade. The Hiroshima bomb was made from this level of enriched uranium.
But a much lesser degree of enrichment (typically 3-5%) will not support an uncontrolled chain reaction but will support a controlled chain reaction that can be used to generate energy in a nuclear reactor. This level of enrichment is called reactor grade. It is impossible to get a true Hiroshima-type explosion from reactor-grade uranium; the worst that can occur is a meltdown, as at Chernobyl or Fukushima. A byproduct of the enrichment process is "depleted" uranium, uranium that is almost 100% U-238. Depleted uranium is useless for fission but has other uses, such as military armor and shells.
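As a back-of-the-envelope check on the claim that U-235 was once far more plentiful, the half-lives quoted above can simply be run backwards (a plain exponential-decay calculation, assuming no gains or losses other than decay):

```python
# Back-calculate the U-235 share of natural uranium at the Earth's
# formation from today's 0.72%, using N(t) = N0 * 2**(-t / half_life).
# Half-lives and ages are in billions of years, as quoted above.
age = 4.5
u235_half, u238_half = 0.704, 4.468
u235_now, u238_now = 0.72, 99.28     # percent of natural uranium today

# Undo the decay: multiply today's amounts by 2**(t / half_life).
u235_then = u235_now * 2 ** (age / u235_half)
u238_then = u238_now * 2 ** (age / u238_half)
pct = 100 * u235_then / (u235_then + u238_then)
print(f"U-235 was roughly {pct:.0f}% of uranium 4.5 billion years ago")
```

This gives a primordial U-235 share in the low twenties of percent - natural uranium was effectively "pre-enriched" billions of years ago.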

Free neutrons initiate fission in U-235. In fact, slow ("thermal") neutrons are *more* likely to cause U-235 to fission than fast ones, which is why reactor moderators such as carbon or heavy water are used to slow neutrons down; a minority of slow-neutron absorptions instead form U-236, which then decays according to its own 23.4 million year half-life. In a bomb there is no moderator: fast neutrons, starting with those from a "trigger" such as the polonium-beryllium mixture I described in an earlier post, hit U-235 nuclei and break them apart in fission, releasing large amounts of energy and more fast neutrons, creating a chain reaction. U-238 absorbs some of those neutrons without fissioning, which is one reason enrichment is needed. A certain minimum amount of enriched uranium must be present for the chain reaction to become self-sustaining; this amount is called the critical mass.
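The difference between a bomb, a steady reactor, and a fizzle comes down to one number: how many neutrons from each fission generation go on to cause the next. A toy illustration (the multiplication factors below are made-up round numbers, not reactor data):

```python
# Toy model of a chain reaction: each generation of fissions multiplies
# the neutron population by k (the effective multiplication factor).
# k > 1: supercritical (growth); k = 1: critical; k < 1: subcritical.
def neutrons_after(generations, k, start=1.0):
    return start * k ** generations

print(neutrons_after(80, 2.5))   # uncontrolled: astronomically many
print(neutrons_after(80, 1.0))   # controlled reactor: steady state
print(neutrons_after(80, 0.9))   # subcritical: dies out
```

With k even modestly above 1 the population explodes within a few dozen generations, which is why reactor control systems work to hold k at exactly 1.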

There are other fissionable isotopes besides U-235, and I will get into a discussion of some of those in a future post.

Reply #502. Dec 14 21, 1:13 PM
brm50diboll
One very important fissionable isotope that does not exist naturally is plutonium-239. It is not the longest-lived isotope of plutonium, but it is the most famous. Pu-239, which has a half-life of 24,100 years (very long indeed, but far too short compared with the age of the Earth for any primordial supply to have survived), is a byproduct of the uranium fission discussed previously. Even in enriched uranium high in the fissionable isotope U-235, there is still plenty of the nonfissionable isotope U-238 around. Some of the free neutrons produced by the fission reaction are absorbed by the U-238, converting it to U-239, a beta emitter with the short half-life of 23.45 minutes, which therefore rapidly converts to Np-239. Np-239 is also a beta emitter with a short half-life (2.356 days), so it in turn converts to Pu-239, which, as stated before, has a much longer half-life and so sticks around. Consequently, Pu-239 accumulates in uranium fuel rods that have been used for fission for an extended period of time. Since plutonium is a completely different element from uranium, its chemical properties are quite different, so it can be chemically separated and isolated from the uranium, and kilogram quantities of essentially pure Pu-239 are relatively easily obtained. Pu-239 is fissionable and does not have U-235's problem of inherent contamination with nonfissionable U-238, so, given a source of Pu-239, uncontrolled fission reactions (as in *bombs*) are comparatively easy to produce. Of the famous atomic bombs "Little Boy" (used on Hiroshima) and "Fat Man" (used on Nagasaki), "Little Boy" was an enriched-uranium bomb, but "Fat Man" was a Pu-239 bomb.
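The U-239 to Np-239 to Pu-239 cascade can be sketched numerically with the half-lives quoted above (a simple forward-Euler integration, purely illustrative):

```python
import math

# Numerical sketch of the U-239 -> Np-239 -> Pu-239 cascade, using the
# half-lives quoted above (forward-Euler integration; illustrative only).
def decay_const(half_life_days):
    return math.log(2) / half_life_days      # lambda = ln(2) / t_half

lam_u  = decay_const(23.45 / (60 * 24))      # U-239: 23.45 minutes, in days
lam_np = decay_const(2.356)                  # Np-239: 2.356 days

u, np_, pu = 1.0, 0.0, 0.0                   # start with pure U-239
steps, total_days = 10_000, 10
dt = total_days / steps
for _ in range(steps):
    d_u  = -lam_u * u * dt                   # U-239 decays to Np-239
    d_pu = lam_np * np_ * dt                 # Np-239 decays to Pu-239
    np_ += -d_u - d_pu
    u  += d_u
    pu += d_pu

print(f"after {total_days} days, about {pu:.0%} has become Pu-239")
```

The U-239 step is over in a couple of hours; the Np-239 step sets the pace, so within a few weeks essentially everything has landed in the long-lived Pu-239.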
International agencies attempting to stop nuclear proliferation must be very concerned about Pu-239, since any commercial nuclear reactor is a potential source of Pu-239 production, and care needs to be taken that spent fuel rods are not diverted to Pu-239 extraction, a difficult process to detect in a country motivated to hide it. Not every neutron absorbed in the plutonium causes fission, either; some captures instead build heavier products such as Pu-240, Pu-241, and a host of further artificial transuranic isotopes. For completeness' sake, I will point out that the longest-lived isotope of plutonium, Pu-244 (which is extremely difficult to synthesize, by the way), has a very long half-life of 80.8 million years, vastly longer than Pu-239's; but given the difficulty of creating Pu-244 and its lack of uses, Pu-239 remains the most abundant isotope of plutonium in existence today, despite its extreme hazards.

Reply #503. Feb 02 22, 8:32 PM
brm50diboll
Before I get into the details of my next fissionable isotope, I want to discuss something that is mostly a research tool nowadays rather than having current practical use: breeder reactors. A breeder reactor is a nuclear reactor that generates its own fuel (to some extent) as it produces power. Enriched reactor-grade uranium still has a very high percentage of nonfissionable U-238 in it. As mentioned previously, this U-238 absorbs neutrons during the operation of the reactor, and some of it gets converted into Pu-239, as well as other isotopes. Pu-239 is itself fissionable, and if the Pu-239 content of the fuel rods got high enough, the fission reaction would become self-sustaining on the Pu-239 alone. But in conventional nuclear reactors, fuel rods are removed as "spent" without the Pu-239 content ever reaching the self-sustaining level. (In fact, it won't reach that level with conventional processes; however, a modification of the process to increase neutron flux can reach sustainable levels.) A few pilot breeder reactors have been built, and the technology is workable, but the nuclear waste produced necessarily contains a higher percentage of transuranium isotopes than conventional reactors produce, and such waste is even more hazardous than conventional nuclear waste. Since nuclear waste generation and management is already the biggest technical problem with nuclear reactors, applying Pu-239 breeder technology commercially never seemed economically advantageous.

But U-238/Pu-239 breeder reactors are not the only possible breeder reactor technology. More broadly speaking, a breeder reactor takes a nonfissionable isotope and, in the normal course of its fission reaction (initiated, of course, by a sufficient amount of a fissionable isotope), converts it into enough of a fissionable isotope for the reaction to be self-sustaining without adding more fissionable material from outside. And U-238 to Pu-239 is not the only possible such process. More on that next time.
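The arithmetic behind breeding is a neutron budget. The numbers below are illustrative assumptions rather than reactor data, but they show the constraint: after one neutron sustains the chain and some are lost, more than one must be captured in U-238 for the reactor to breed.

```python
# Rough neutron budget for breeding (all figures are illustrative
# assumptions, not measured reactor data). Each Pu-239 fission releases
# roughly 2.9 neutrons; one must sustain the chain, some are lost to
# leakage and parasitic absorption, and the rest can convert U-238.
neutrons_per_fission = 2.9
sustain_chain = 1.0
losses = 0.7                       # leakage + non-productive capture

captures_in_u238 = neutrons_per_fission - sustain_chain - losses
conversion_ratio = captures_in_u238 / 1.0  # new fissile atoms per atom burned
print(f"conversion ratio = {conversion_ratio:.1f}")
print("breeding" if conversion_ratio > 1 else "not breeding")
```

A conversion ratio above 1 means the reactor ends up with more fissile material than it started with; conventional reactors sit well below 1.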

Reply #504. Mar 11 22, 10:27 PM
brm50diboll
I know, I know, I've been gone from this thread for too long. I do want to talk about Th-232 and U-233 at some point, but I just haven't found enough time for a proper discussion of that topic. Anyway, I come here now to remind any readers that tomorrow night there will be a total lunar eclipse visible from most of the US. Here in the Central Daylight Time zone, totality should begin about 10:29 pm tomorrow night (May 15). Depending on your location, some of you may find the ideal viewing time to be Monday, May 16 in the wee early hours of the morning. But here in my home state of Texas, the pre-midnight time is excellent for viewing.

Reply #505. May 14 22, 2:57 PM
brm50diboll
All right. So the other breeder reactor system that is of interest to me is the Thorium-232 - Uranium-233 system. Thorium-232 is the primary naturally-occurring isotope of thorium, making up 99.98% of natural thorium. Th-232 also has the extremely long half-life of 14.05 billion years, making it the least radioactive of the actinide elements. Thorium is three times as abundant in nature as uranium, and because natural thorium is almost completely Th-232, isotopic separation, a very complicated and expensive process, is not necessary for thorium as it is for uranium. U-233, on the other hand, does not really exist naturally at all (only a trace, due to low-level neutron flux in uranium ores). The problem is that Th-232 is not fissionable but U-233 is. However, it was discovered long ago that if small amounts of thorium were placed in a nuclear reactor, it would largely be converted into U-233 by the neutron absorption process typical of a breeder reactor. Th-232 absorbs free neutrons to become Th-233, a beta emitter with the short half-life of 21.83 minutes. After beta decay, it becomes Pa-233, also a beta emitter with the short half-life of 26.967 days. After its beta decay, it becomes U-233, which has a much, much longer half-life of 159,200 years, far longer than a human life span but also far too short to be a primeval isotope like U-238 or U-235. The combination of these factors results in the accumulation of U-233 in a reactor that has been fed thorium over a few months' time. This U-233 can be chemically separated from its thorium substrate (since uranium is a different element from thorium and thus its chemical properties are different) thus again avoiding the problematic issue of isotopic separation needed for traditional uranium-based reactors. The extracted U-233 can then be put into its own breeder reactor, periodically adding more thorium to replenish the U-233 that undergoes fission to continue the breeder reactor process indefinitely. 
As pointed out, this system has the advantage of using a much more common and less radioactive element (thorium) as its primary fuel source while avoiding the tricky isotopic separation that traditional uranium fission requires. Historically, the decision was made long ago to pursue uranium fission over the thorium process because it is much easier to produce weapons-grade material via the uranium route. Nevertheless, the thorium breeder reactor system is known to be workable. Currently, the Chinese are developing a thorium reactor, and I am much interested in following their progress with that.
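The pace of U-233 accumulation is set by the Pa-233 half-life quoted above. A one-line decay calculation shows why "a few months" is the right timescale:

```python
# The slow step in the thorium chain is Pa-233 (half-life ~27 days):
# U-233 appears only as fast as the protactinium decays. Fraction of a
# batch of Pa-233 that has become U-233 after t days:
def fraction_converted(t_days, half_life=26.967):
    return 1 - 2 ** (-t_days / half_life)

for months in (1, 3, 6):
    print(f"{months} month(s): {fraction_converted(30 * months):.0%}")
```

After one month barely half the protactinium has converted, but by six months the conversion is essentially complete.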

link https://www.nature.com/articles/d41586-021-02459-w

Nuclear power is not the bogeyman some people have characterized it as. Unfortunately, there are many situations where hydroelectric, solar, wind, or geothermal energy generation is just not practical. For one specific example, nearly all our exploratory vessels to the outer solar system (Jupiter and beyond) have used some variant of nuclear power generation (typically radioisotope generators), because the available solar energy in the outer solar system is simply too weak to operate the craft. But fission produces only a tiny fraction of the power that could potentially be produced by fusion, my topic for next time, whenever that may be.

Reply #506. Jul 02 22, 6:44 PM
brm50diboll
Nuclear fusion is essentially the opposite of nuclear fission. Instead of splitting apart heavy nuclei as in fission, fusion joins together lighter nuclei to form heavier nuclei. Both processes produce energy, since binding energy per nucleon peaks at Fe-56: nuclei significantly lighter than that release energy when they become heavier, and nuclei significantly heavier than Fe-56 release energy when they become lighter. But fusion, though it is the primary power source in the cores of stars, is much more complicated to achieve than fission. Fission can be initiated by firing neutrons (which are neutral and therefore not repelled by nuclei) at target nuclei at any temperature. But fusion involves bringing together two positively charged nuclei, which repel each other electrically. If the nuclei are brought close enough together, the strong nuclear force will overcome the electrostatic repulsion and the nuclei will fuse together and release energy. But "close enough" turns out to be extremely close due to the short range of the strong nuclear force - in the femtometer range (10^-15 meter). Nuclei have to be traveling extremely fast to approach each other that closely before being driven away by repulsion, and only extremely high temperatures, such as those found in the cores of stars, can produce such nuclear velocities. ("Cold fusion" is a myth, despite claims - and "The Flash" plotlines.) This is thermonuclear fusion. In hydrogen bombs, a nuclear fission explosion is required to achieve the temperatures necessary to initiate fusion, and the resulting fusion is an uncontrolled explosion. Since fusion converts a larger fraction of its fuel's mass into energy than fission does, a controlled nuclear fusion reactor, if it were possible, would generate vastly more energy per unit of fuel than any controlled fission reactor can.
Unfortunately, while controlling nuclear fission is straightforward through moderators and control rods, controlling nuclear *fusion* is a dauntingly difficult problem that has not been fully solved. The extremely high temperatures required instantly vaporize any container material near the fusing plasma. Hot plasma expands rapidly to destroy whatever vessel holds it, and the only thing that can oppose that expansion is extremely powerful magnetic fields, which are very complicated to design and consume a great deal of energy to generate. For decades, researchers in magnetic-confinement fusion have used experimental tokamak reactors, which confine the plasma magnetically and heat it with electric currents, particle beams, and radio waves. A separate approach, inertial confinement, uses intense laser beams focused on a target mix of deuterium and tritium (H-2 and H-3, the latter highly radioactive). Hydrogen-1, which makes up over 99% of natural hydrogen and is the primary fuel for fusion in main-sequence stars like our sun, is not used because its ignition temperatures are too high for any reactor to achieve. The need for deuterium and tritium is another serious technical problem for controlled fusion. For decades, the energy required to operate the lasers and the magnetic fields exceeded the energy produced by the fusion reaction, making these devices useless as practical energy generators. More recently, there have been documented examples of net positive energy production, where the energy from the fusion exceeds the energy delivered to the fuel. The catch is that these examples last only tiny fractions of a second. So far, no continuously stable controlled nuclear fusion has been produced.
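A rough back-of-the-envelope calculation shows why the temperatures involved are so extreme. Treating the Coulomb barrier as the electrostatic energy of two protons brought to strong-force range (the 3 fm separation is an illustrative choice):

```python
# Order-of-magnitude estimate of the Coulomb barrier between two
# protons at strong-force range, and the temperature at which average
# thermal energy is comparable. Constants are standard CODATA values.
k_e = 8.988e9        # Coulomb constant, N*m^2/C^2
e   = 1.602e-19      # elementary charge, C
k_B = 1.381e-23      # Boltzmann constant, J/K

r = 3e-15                              # ~3 femtometers separation
barrier = k_e * e * e / r              # electrostatic potential energy, J
temperature = 2 * barrier / (3 * k_B)  # from (3/2) * k_B * T = barrier
print(f"barrier = {barrier:.1e} J, T = {temperature:.1e} K")
```

This naive estimate gives billions of kelvin. Stars get by at "only" tens of millions of kelvin because quantum tunneling lets the fastest nuclei in the thermal distribution fuse without fully topping the barrier, but either way the required temperatures dwarf anything a solid container can survive.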

My reading of the state of development of nuclear fusion research is similar to my reading of the state of development of manned Mars missions - despite claims over the decades that manned Mars landings and nuclear fusion reactors are "20 years away" (for at least the last 50 years), the technical problems are still enormous. I don't see any manned Mars missions occurring in my lifetime (despite promises by each new administration), and I also don't see controlled nuclear fusion reactors that can practically and reliably generate power for consumer and industrial use coming on line in my lifetime either. Some technical problems cannot be solved no matter how much research money is devoted to them. Fundamental breakthroughs in science are needed to overcome such obstacles, and those tend to come when they come, if ever at all. Too bad - Dr. Emmett Brown's "Mr. Fusion" that eats garbage to power the flying DeLorean time machine would certainly be a help.

Reply #507. Aug 24 22, 4:56 PM
brm50diboll
I think my next topic in Physics should be an overview of some basic principles of thermodynamics. There are a lot of misconceptions in the general media's reports on science that betray the writers' misunderstanding of these concepts. There is a fundamental difference between the field of thermodynamics and the field of kinetics, for example, but it seems this is often misunderstood; claims that catalysts can lower the energy input required for endothermic processes are frequently floated out there. Just defining the proper terminology is arduous enough, but I will still try.

Let me begin with the idea of heat. Heat is the flow of thermal energy, which at the atomic level can be visualized as vibrations of atoms and molecules. The higher the temperature of a material, the faster those atoms and molecules vibrate. Two different materials in contact are said to be in thermal equilibrium with each other if they are both at the same temperature; that is, there is no *net* flow of heat between them. Heat is a form of energy which in the metric system is measured in joules just like other forms of energy, but it is a diffuse form of energy that is harder to harness for practical processes (so-called work) than more tightly organized forms like electrical current. When two materials in contact are at *different* temperatures, however, there is a natural tendency for heat to flow from the hotter material to the cooler one, causing the hotter material to cool and the cooler material to warm over time. Eventually, thermal equilibrium may be established if the two materials reach the same temperature. Restating this in slightly different terms: the *natural* or *spontaneous* tendency is for heat to flow from a warmer material to a cooler one. It is possible to transfer heat the other way (fortunately; otherwise air conditioners wouldn't work), but transferring heat from a cooler place (like indoors) to a warmer one (like outside air on a hot summer's day) requires a not-insignificant input of some external source of energy (like electricity) to drive the heat flow in the opposite direction, using carefully chosen phase changes of matter in the right places. Heat and temperature are not synonymous.
Heat is a form of energy measured in joules in the metric system, whereas temperature is read on a calibrated scale (the official metric scale being the Kelvin scale). Temperature reflects the average kinetic energy of the individual particles of the material being measured, not the total thermal energy of the system. That is, heat is an *extensive* property, which depends on the *amount* of material involved (more heat flows between a ton of materials in contact that differ by 100 °C than between an ounce of the same materials at the same temperature difference), while temperature is an *intensive* property, which does *not* depend on the amount of matter present (a liter of boiling water and a milliliter of boiling water are both at 100 °C at standard pressure). Heat and temperature are definitely *related*, and numerous equations express that relationship mathematically, but it is important to state for the record at the very beginning of any discussion of thermodynamics that they are very definitely *NOT* the same thing, and care must be taken not to be sloppy with terminology when discussing them.
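The extensive/intensive distinction is easy to see numerically with the standard relation q = m·c·ΔT (the two masses below are arbitrary examples):

```python
# Heat is extensive, temperature is intensive: cooling two different
# masses of water through the same 10-degree drop moves very different
# amounts of heat, even though the temperature change is identical.
c_water = 4.186            # specific heat of water, J/(g*K)
dT = 10.0                  # same temperature change for both masses

for grams in (1_000_000, 28):          # ~a metric ton vs ~an ounce
    q = grams * c_water * dT           # q = m * c * dT
    print(f"{grams:>9} g: {q:,.0f} J")
```

Same ΔT, wildly different heat: that is exactly the sense in which heat depends on the amount of matter while temperature does not.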

With this introduction, next time (whenever that may be), I will begin discussing the Laws of Thermodynamics.

Reply #508. Sep 06 22, 10:10 PM
brm50diboll
Before I return to my topic, a piece of astronomy news caught my eye. On September 26 (tomorrow), the giant planet Jupiter will have an unusually close opposition to Earth, the closest approach in almost 60 years. I have been watching Jupiter the past few nights as it approaches its opposition. To the naked eye, it appears as an exceptionally bright "star" (brighter than all the true stars of the night sky), but even with just binoculars, its true planetary nature can be seen, along with its four largest moons (the Galilean moons). Jupiter is the second-brightest planet (behind Venus), but it is easy to see, even in the light-polluted skies of cities. At opposition, it rises in the east at sunset and sets in the west at sunrise, and therefore is visible all night long.

link https://m.youtube.com/watch?v=vJZGxESGQQ4

Reply #509. Sep 25 22, 11:23 AM
brm50diboll
So I will begin my discussion of Thermodynamics by talking about the so-called Zeroth Law of Thermodynamics. "Zeroth Law"? Yes. This is the sort of thing that happens in science when a numerical list of laws has already been created and then someone points out something even more fundamental than what was on the existing list that really should have been included, so it gets added on as the "Zeroth Law".

The Zeroth Law says that if System A is in thermal equilibrium with System B and System B is in thermal equilibrium with System C, then System A is in thermal equilibrium with System C. A law of transitivity for thermal equilibrium.

A few comments. What is thermal equilibrium? When two systems (however defined) are in contact with each other and are *not* in thermal equilibrium, energy (in the form of heat, as we shall see) is transferred from one system to the other, though such transfer may occur very slowly. If no such transfer is occurring, even very slowly, we can say the two systems are in thermal equilibrium with each other. Not every relation in science is transitive: A relating a certain way to B, and B relating that same way to C, does not always mean A relates that same way to C. But the property of thermal equilibrium *is* transitive, and this fact has an important consequence which could not hold otherwise. What is that important consequence?

We can define a temperature scale. Systems A, B, and C are all in thermal equilibrium with each other *because* they are all at the same temperature. Whew! If thermal equilibrium wasn't transitive, we couldn't define a temperature scale. We could have a weird situation where A is "hotter" than B, and B is "hotter" than C, but somehow C was "hotter" than A. Fortunately, that can't happen because of the Zeroth Law.

Which leads us to the next issue: what do "hotter" and "colder" actually mean? It is important that we be clear about that, not just arm-wave.

Topic for next time.

Reply #510. Jan 26 23, 12:03 AM
brm50diboll
So let's try to be specific about what temperature really is. There are multiple different temperature scales, each with its uses and various pros and cons, but thanks to the Zeroth Law, we can say that if System A has a higher temperature than System B in one scale, it has a higher temperature than B in every scale, and scales can be interconverted using relatively simple mathematical expressions. If two systems are in thermal equilibrium with each other, there will be no net spontaneous transfer of thermal energy between the systems. But if the systems are *not* in thermal equilibrium with each other, then there will be a spontaneous flow of thermal energy from the system with the higher temperature (warmer) to the system with the lower temperature (cooler). Because of this spontaneous thermal energy flow, the warmer system will begin to cool (temperature drops) and the cooler system will begin to warm (temperature rises). In the absence of other factors, given enough time (which may be very long), the two systems will eventually arrive at a common temperature and will be in thermal equilibrium.
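For the simple case of two bodies exchanging heat only with each other, the final common temperature follows directly from conservation of energy: heat lost by the warmer body equals heat gained by the cooler one. A sketch using water's specific heat (the masses and temperatures are arbitrary examples):

```python
# Final equilibrium temperature of two bodies in thermal contact with
# no outside exchange: a heat-capacity-weighted average of the two
# starting temperatures.
def equilibrium_T(m1, c1, T1, m2, c2, T2):
    return (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)

# 200 g of water at 90 C poured into 800 g of water at 10 C:
T_final = equilibrium_T(200, 4.186, 90, 800, 4.186, 10)
print(f"common final temperature: {T_final:.0f} C")
```

The smaller hot sample barely budges the large cold one: the final temperature sits much closer to 10 °C than to 90 °C.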

But the time involved may be immense. It depends on insulation, among other things. And we are speaking of the *spontaneous* situation. What, in thermodynamics, does *spontaneous* really mean? It means there is no net input (or removal) of energy to any of the systems under consideration from outside those systems. Spontaneity is a very important consideration, and there are many non-spontaneous processes. Is it possible to transfer thermal energy from a cooler system to a warmer system? Yes, it is definitely possible, and this is in fact what happens in refrigeration and air conditioning systems. But doing so is *not* spontaneous and requires input of energy from outside the systems. Much of modern society depends on this non-spontaneous transfer of thermal energy: we need air conditioning and refrigeration, but they require energy, generally in the form of electricity, and large amounts of power are needed for refrigeration and air conditioning in modern industrial society.

Even in spontaneous processes, the rate of thermal energy flow and temperature change is very complicated. I could go into Newton's Law of Cooling here, but I think it is too much a departure from the real points I want to make for me to go into that. Let's just say that thermal equilibrium isn't always possible in realistic time scales. In geothermal processes, some cooling takes many centuries.

So what causes the spontaneous flow of thermal energy between systems of different temperatures? It isn't necessarily that the warmer system has more thermal energy than the cooler system - a common mistake. Thermal energy depends on the mass and other properties of the system; we say it is an *extensive* property because of that. But temperature does not depend on the mass of the system and is said to be an *intensive* property. A large cooler system may well contain more thermal energy than a small warmer system. Temperature, at a *molecular* level, is related to the average kinetic energy of the component molecules of the system. I say average because in any system some molecules have greater kinetic energies than others; there is a distribution of kinetic energies in a system at a given temperature. I will spare you the actual equations here.

But there is a limit to how low molecular kinetic energies can go. Kinetic energy is related to velocity, and in the hypothetical case where the velocities of all the molecules were zero, their kinetic energies would be zero too. This puts a lower bound on any temperature scale - the lowest possible temperature - called *absolute zero*. Now, due to quantum effects (uncertainty, etc.), completely zero velocities for all molecules in a system are not achieved even at absolute zero, but the details of why are again too much of a digression from where I want to go with this. As a first approximation, though, molecules at absolute zero are in their lowest energy state and *almost* motionless. As molecules heat up, they move faster. For extremely high temperatures, such as in stars, no thermometer exists that can tolerate such conditions, so those temperatures are in fact calculated from the velocities of the constituent particles - so-called "kinetic temperatures".
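The kinetic picture can be made concrete with the standard result that the average translational kinetic energy per molecule is (3/2)·k_B·T (nitrogen is used below purely as a familiar example):

```python
import math

# Kinetic picture of temperature: average translational kinetic energy
# per molecule is (3/2) * k_B * T, so the root-mean-square speed is
# sqrt(3 * k_B * T / m).
k_B = 1.381e-23                 # Boltzmann constant, J/K
m_N2 = 28 * 1.661e-27           # mass of one N2 molecule, kg

def v_rms(T):
    return math.sqrt(3 * k_B * T / m_N2)

print(f"N2 at 300 K: {v_rms(300):.0f} m/s")
print(f"N2 at   3 K: {v_rms(3):.0f} m/s")   # colder -> slower
```

Room-temperature air molecules move at around half a kilometer per second; cool the gas a hundredfold and the speeds drop by a factor of ten, heading toward near-motionlessness at absolute zero.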

There are some limitations to the applicability of kinetic temperature measurements, leading to certain apparent paradoxes, like why the corona of the sun has a higher temperature than the surface of the sun even though it is farther from the core. Again, dealing with problems like this is beyond the scope of where I want to go with this for now. The lasers used in nuclear fusion research are particularly subject to the vagaries and extreme intricacies of kinetic temperature analysis. So when you read a report that some experimental fusion reactor achieved a temperature of 100 million °C, it is true, but it is not exactly the same thing as what is going on in the cores of stars. That's all I want to say about that for now; the gory details are not worth getting into here.

Now I want to look at mechanisms of spontaneous energy flow, and to begin that I will discuss the First Law of Thermodynamics next time.

Reply #511. Jan 31 23, 12:49 PM
brm50diboll star


player avatar
The First Law of Thermodynamics is an adaptation of the Law of Conservation of Energy, which states that energy can be neither created nor destroyed, only interconverted from one form to another. Those familiar with Einstein's famous equation know that energy and matter can be interconverted and will substitute "mass-energy" where that process is relevant, but for non-nuclear processes, mass-energy interconversion does not contribute a numerically significant amount, and mass and energy can be treated as separate entities. Nevertheless, even scientists who are well aware of the First Law and its implications will frequently speak of "energy loss". I do that myself. What I actually mean by this, though I do not take the time to re-explain it on each occasion, is energy lost to human uses, or energy that has left the system or systems under consideration at the time. But let's look at this so-called "energy loss" carefully this time.
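To see just how negligible the mass-energy contribution is for a chemical process, here is a back-of-the-envelope Python sketch (the ~46 MJ/kg figure for gasoline is a rounded textbook value):

```python
c = 2.998e8  # speed of light, m/s

# Burning 1 kg of gasoline releases roughly 46 MJ (rounded textbook figure).
chemical_energy = 46e6  # J
mass_equivalent = chemical_energy / c**2  # E = m*c^2, solved for m
print(f"Mass converted burning 1 kg of gasoline: {mass_equivalent:.2e} kg")
# About half a nanogram - far below what any lab balance could detect,
# which is why chemists treat mass and energy as separately conserved.
```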

Energy has numerous different forms. Among them: kinetic, gravitational potential, spring potential, electrical, work, heat, and others encountered in introductory physics classes, usually with equations demonstrating their usual applications (such as kinetic energy = (1/2)mv^2). Some forms of energy are more useful for human applications and purposes than others. Electrical energy is highly useful, which is why entire semesters of physics courses are devoted specifically to its study. Heat, on the other hand, is one of the least useful forms of energy there is. It is not that heat has *no* uses; heat can certainly be used by humans. It is just that heat has a great tendency to escape from any apparatus designed to contain and manage it, and once it leaves a system under consideration, it is generally lost to human uses forever. Insulation helps, but no insulation is perfect, and heat escapes from nearly every real process. A term used in physics, efficiency, is defined as the energy output divided by the energy input (of a system under consideration), multiplied by 100. Efficiencies are invariably less than 100% for real processes because, whatever the process may be, some of the energy undergoing interconversion converts to heat and escapes the system. So measured output energies are almost always less than input energies.
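That efficiency definition is simple enough to write down directly (a trivial Python sketch; the motor numbers are made up for illustration):

```python
def efficiency(energy_out, energy_in):
    """Efficiency as defined above: output over input, times 100."""
    return 100.0 * energy_out / energy_in

# A motor that turns 500 J of electrical energy into 450 J of useful work
# has "lost" the other 50 J, mostly as heat:
print(efficiency(450.0, 500.0))  # 90.0 (percent)
```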

An example would be the conversion of gravitational potential energy to kinetic energy by a falling body. In an introductory physics course, students would set (1/2)mv^2 = mgh and solve for the variable wanted in the problem. As a body falls, it loses gravitational potential energy (because h decreases) but gains kinetic energy (because v increases). But if someone actually measured the velocity of a body that has fallen, say, 2000 feet, the measured velocity would be a little less than the velocity calculated from the above expression. Why? Because not all of the gravitational potential energy of the body gets converted into kinetic energy. Due to friction with the air as the body falls, a small portion of that energy gets converted to heat and "lost" from the system under consideration. For the purposes of an introductory physics course, such a loss may not be detectable to three significant digits, a common target in such problems. But for more complex problems, frictional heat loss may well be a very significant issue.
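Here is a Python sketch of that comparison: the ideal textbook answer versus a crude numerical integration with quadratic air drag. The mass and drag coefficient are assumed values for a dense, compact object, chosen purely for illustration:

```python
import math

g = 9.81   # m/s^2
h = 609.6  # 2000 feet, in meters

# Ideal case: all gravitational potential energy becomes kinetic energy,
# so (1/2)*m*v^2 = m*g*h  =>  v = sqrt(2*g*h).
v_ideal = math.sqrt(2 * g * h)

# With air friction: crude Euler integration of m*dv/dt = m*g - b*v^2,
# using assumed (made-up but plausible) parameters.
m = 10.0   # kg (assumed)
b = 0.004  # quadratic drag coefficient, kg/m (assumed)
v, y, dt = 0.0, 0.0, 0.001
while y < h:
    a = g - (b / m) * v * v  # net downward acceleration
    v += a * dt
    y += v * dt

print(f"ideal: {v_ideal:.1f} m/s, with drag: {v:.1f} m/s")
# The drag result comes out lower: the missing kinetic energy
# was converted to heat in the surrounding air.
```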

In complicated physics and engineering problems, ignoring the effects of heat "losses" is not advisable. Some widely used industrial processes may in fact have very low efficiencies, even in the 20% range, simply because the process is so convenient to human uses and other processes with higher efficiencies have drawbacks that make them less useful to humans than the less efficient process. Efficiency is not the only consideration in what process is used. There are other issues. Burning wood in a fireplace is vastly less efficient at heating a room than electrical central heating, but people still do it, some for esthetic reasons, and some, occasionally, because of power outages. But there are problems much more complicated and thornier out there than just heating a room.

Next time I will get into the Second Law of Thermodynamics.

Reply #512. Feb 14 23, 8:35 PM
brm50diboll star


player avatar
To understand the Second Law of Thermodynamics, one needs to understand the concept of entropy. Entropy is a measure of the amount of disorder present in a system. By "measure", I mean it can be quantified and tabulated. This may be hard to understand. We know that a shuffled deck of cards is more disordered than a brand-new factory-packaged deck, but it isn't intuitive how we can quantify that level of disorder. It can be explained, but I would prefer not to get into the nitpicky details. Suffice it to say that the more disordered a system is, the higher its entropy value. In chemical systems, entropy is essentially a statistical phenomenon. The arrangement of atoms and molecules in a substance can progress from highly ordered crystals (low entropy) to random gas particles flying around (high entropy). Statistically, higher states of disorder are more probable than lower states of disorder, just as drawing 5 random cards is more likely to yield a poker hand of no pairs than a royal flush. Thus, any process that can change the order or arrangement of matter is more likely to produce a state of higher entropy (more disorder) than a state of lower entropy (less disorder). It is not, strictly speaking, *impossible* to go to a state of lower disorder, it is just very improbable at the macroscopic level, when there are upwards of 10^23 particles of some observable material involved. So a gas escaping from a pinhole in a balloon to move to a larger space is very probable and is, in fact, what we observe. But we do not observe outside air rushing into the balloon pinhole to expand the balloon (even though a few gas particles actually will move in that direction) because there is more disorder (higher entropy) in gas filling a larger volume than a smaller one.
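The "improbable, not impossible" point can be made with a one-line probability calculation (my own Python sketch): if each particle independently ends up in either half of a box with probability 1/2, the chance that *all* of them are in the left half at once is (1/2)^N.

```python
# Probability that all N gas particles occupy one half of a box at once.
for n in (4, 20, 100):
    print(n, 0.5**n)
# With 4 particles it happens about 6% of the time; with 100 it is
# already ~8e-31; with ~10**23 particles it simply never happens
# on any humanly observable timescale. Entropy decreases are not
# forbidden, just overwhelmingly improbable at macroscopic scales.
```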

So the Second Law of Thermodynamics states that processes in a closed system (one in which energy does not enter or leave the system) proceed in a direction of increasing entropy for that system.

A couple of notes here: First is the so-called Arrow of Time. We are used to things moving spontaneously in the direction of increased entropy. You drop a vase; it falls to the floor and shatters. We are not used to seeing things move in the direction of decreased entropy. If you saw a video of the fragments of a broken vase coming together and then jumping off the floor onto the shelf as an intact vase, you would say the video was running backwards, because that sort of thing doesn't happen spontaneously. We often "intuitively" know which processes can occur spontaneously and which can't. Time only moves forward, not backward.

The second point is not to take this too far: the Second Law does *not* say entropy can't decrease. It says entropy increases spontaneously in a *closed system*. If a system isn't closed, then its entropy can certainly decrease due to a flow of energy into or out of the system. In other words, we definitely can *make* entropy decrease in a system if we wish, but there is a "cost" to doing so.

As an example, frost can form overnight on a window. Where did that frost come from? From water vapor forming solid ice crystals. But doesn't ice (a solid) have a lower entropy than water vapor (a gas)? Absolutely. The entropy of those water particles definitely did decrease, and it did so spontaneously. But the water particles aren't in a closed system. That's the catch. As the frost formed, heat flowed from the water particles into the cool air, warming the air slightly and increasing the *air particles'* entropy. Thus there is actually a net increase in entropy if we consider the air as well as the water vapor. So again, the Second Law does *not* say entropy can't decrease. There are plenty of cases where entropy decreases. We need to pay attention to that "closed system" part. I have heard people argue that evolution is prohibited by the Second Law because more evolved species are more highly ordered, with less entropy, than less evolved species. Even if we assume we can actually define what "highly evolved" or "less highly evolved" means (an interesting topic in and of itself), the argument is *still spurious*. It is spurious because *Earth is NOT a closed system*.
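The bookkeeping can be made concrete with rounded textbook numbers. This Python sketch uses liquid water freezing at -10 °C rather than frost (vapor to solid) because its values are the standard ones; the logic for frost is identical, just with different numbers:

```python
# Entropy bookkeeping for water freezing at -10 C (263 K),
# rounded textbook values for the heat of fusion and entropy of fusion.
dH_fus = 6010.0    # J/mol of heat released to the surroundings on freezing
dS_system = -22.0  # J/(mol*K): the water itself becomes more ordered

T = 263.0
dS_surroundings = dH_fus / T  # released heat raises the surroundings' entropy
dS_total = dS_system + dS_surroundings
print(f"surroundings: +{dS_surroundings:.2f} J/(mol*K), total: {dS_total:+.2f} J/(mol*K)")
# The total is positive: the Second Law is satisfied even though
# the system's own entropy went down, because the system is not closed.
```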

Next time I will talk about "driving forces" and the "battle" between entropy and enthalpy.

Reply #513. Mar 20 23, 10:50 AM
pennie1478
Reading this for the first time I am instantly reminded of Peggy Hill telling Arlen, Texas to use the ingredients for mustard gas to clean with on "King of the Hill".

Reply #514. Mar 20 23, 11:06 AM
brm50diboll star


player avatar
If you're gonna visit Arlen, you should check out Tom Landry Middle School. Hank Hill is the representative for Strickland Propane for all of TLMS' propane and propane accessory needs. And remember, due to the high critical temperature of propane, unlike natural gas (methane), propane can be liquefied by pressure alone, without refrigeration. So those tanks of propane contain liquid propane even though propane is a gas under ordinary room conditions (25 °C and 1 atm pressure).

Reply #515. Mar 20 23, 6:44 PM
brm50diboll star


player avatar
I hope anyone reading this doesn't think I think Arlen actually exists. I actually live about 70 miles from Garland, Texas (the suburb of Dallas that Mike Judge reportedly based the name of Arlen on.)

Reply #516. Mar 20 23, 6:49 PM
brm50diboll star


player avatar
I have had difficulty finding enough time to add to this thread for some months now. Thermodynamics is an area shared by both physics and chemistry. My undergraduate degree was in chemistry and, although I have tried to be fair to physics, I am more familiar with the chemistry presentation of it, so for the next part I will blatantly go into that chemistry presentation.

So there are things called "state functions", which are standardized properties that are actually tabulated. Enthalpy is the state function that corresponds to heat. Heat varies under all sorts of conditions, but enthalpy changes can be calculated from standard enthalpies of formation for compounds, which are well-established tabulated values. Essentially, an enthalpy change is negative (heat is released - the reaction is exothermic) when the chemical bonds formed by a reaction are stronger than the bonds broken by it. Combustion reactions in particular are highly exothermic, with large negative enthalpy changes. If a reaction has a positive enthalpy change, then heat is absorbed in the reaction and it is said to be endothermic.
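Here is what that calculation looks like in practice (a Python sketch using rounded textbook formation enthalpies): for a reaction, the standard enthalpy change is the sum over products minus the sum over reactants.

```python
# Standard enthalpy change for CH4 + 2 O2 -> CO2 + 2 H2O(l),
# from tabulated standard enthalpies of formation (kJ/mol, rounded
# textbook values; elements in their standard states are zero).
dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O(l)": -285.8}

dH_rxn = (dHf["CO2"] + 2 * dHf["H2O(l)"]) - (dHf["CH4"] + 2 * dHf["O2"])
print(f"dH = {dH_rxn:.1f} kJ/mol")
# Comes out near -890 kJ/mol: combustion of methane is strongly exothermic.
```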

There are two major "driving forces" for spontaneous chemical reactions: 1) the tendency for enthalpy changes to be negative and 2) the tendency for entropy changes to be positive. If only one of these two "forces" favors spontaneity, then whether the reaction is spontaneous or not depends on a "war" between entropy and enthalpy, the outcome of which is temperature dependent and was quantified over a century ago into a new state function, which became known as Gibbs free energy. Strongly endothermic spontaneous reactions are relatively rare, but they can occur when there is a large positive entropy change. For example, a mixture of barium chloride and ammonium thiocyanate can get very cold - a very nice classroom demonstration of a spontaneous endothermic reaction.
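The "war" is scored by dG = dH - T*dS: negative dG means spontaneous. A Python sketch with rounded textbook values for ice melting (endothermic, entropy-increasing) shows temperature deciding the winner:

```python
# Gibbs free energy for ice melting: dG = dH - T*dS.
# Rounded textbook values; per mole of water.
dH = 6010.0  # J/mol (endothermic: enthalpy opposes melting)
dS = 22.0    # J/(mol*K) (entropy favors melting)

for T in (263.0, 273.0, 283.0):  # -10 C, ~0 C, +10 C
    dG = dH - T * dS
    verdict = "spontaneous" if dG < 0 else "not spontaneous"
    print(f"T = {T:.0f} K: dG = {dG:+.0f} J/mol ({verdict})")
# Below the melting point enthalpy wins and ice is stable; above it
# the T*dS term dominates and melting becomes spontaneous. With these
# rounded values the crossover lands at dH/dS = ~273 K, as it should.
```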

But most endothermic reactions are nonspontaneous and can still be driven by putting energy directly into the system, usually in the form of electricity. So, as an example, the extremely stable compound water can be broken down into hydrogen and oxygen gases by running an electric current through it, a process known as electrolysis. A great deal of electricity is needed to break up water on a large scale, and improving the efficiency of the process cannot reduce the energy requirement below a minimum fixed by the laws of thermodynamics. It turns out that the energy required to break up a fixed amount of water into hydrogen and oxygen will *always* be greater than the energy released by recombining that hydrogen and oxygen back into water, whether by combustion or by a hydrogen fuel cell.
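That thermodynamic floor is easy to put a number on (a Python sketch; the +237.1 kJ/mol standard free energy of formation of liquid water is a textbook value):

```python
# Minimum (reversible) energy to electrolyze water at standard conditions:
# dG = +237.1 kJ per mole of H2O (rounded textbook value).
dG = 237.1e3            # J/mol
molar_mass = 18.015e-3  # kg/mol of water

moles_per_kg = 1.0 / molar_mass
energy_per_kg = dG * moles_per_kg
print(f"Floor for electrolyzing 1 kg of water: {energy_per_kg/1e6:.1f} MJ "
      f"({energy_per_kg/3.6e6:.2f} kWh)")
# Real electrolyzers need more than this; recombining the H2 and O2
# in a fuel cell returns at most this much, and in practice less.
```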

The old chemistry joke is that the First Law declares you can't win in energy transformations, you can only break even, while the Second Law declares you can't even break even, you can only lose, since some energy is always going to be "lost" to human uses in any process due to entropy effects. So no perpetual motion machines. Any process that claims to produce more energy than it takes in is hiding a source of energy input. Yes, it is easy in a high school chemistry lab to make hydrogen gas from water by dropping sodium metal in water (a great lab demonstration), but what is hidden here is the *enormous* amount of energy (per unit mass) needed to get the sodium metal in the first place, as sodium metal does not exist free in nature and has to be produced from sodium compounds through electrolysis.

Reply #517. Jun 09 23, 9:31 PM
brm50diboll star


player avatar
Correction: I said barium chloride, but I meant barium hydroxide (the dihydrate, actually) in my endothermic reaction I mentioned with ammonium thiocyanate. It's getting late.

Reply #518. Jun 09 23, 10:08 PM
brm50diboll star


player avatar
The approach to actually quantifying entropy numerically is complicated and involves an integral expression (for those of you familiar with calculus). As entropies are essentially relative, it is necessary to assign a zero value somewhere. This is where the last law of thermodynamics comes in, the Third Law of Thermodynamics. It states that the entropy of a material decreases toward a fixed lower limiting value as the temperature of that material approaches absolute zero (-459.67 °F, -273.15 °C, or 0 K). In chemistry, we assign the value of zero (in units of J/K, or joules per kelvin) to the entropy of any pure crystalline substance (element or compound) at absolute zero. If you consult a table of standard entropy values for chemical substances, you will see that (with very few exceptions) those entropy values are all positive, unlike standard enthalpies or standard free energies, both of which are more often negative than positive. The reason is that the standard thermodynamic temperature (the temperature these values are tabulated at) is *not* absolute zero but 25 °C (298.15 K), and since entropies increase with temperature, the values will be positive, having started from zero at absolute zero. The rare exceptions involve standard entropies *of solution* of aqueous *ions*, where the entropy change of the ionic species is measured relative to the ionic crystal before it dissolves in the water as compared with the final aqueous solution. Most solvation entropies are still positive, but a few ions, especially those with high charge/volume ratios, have a pronounced "ordering" effect on the water molecules that form hydration spheres around them, resulting in tabulated entropy values that are negative. These are not "exceptions" to the Third Law; rather, a completely different type of entropy is being tabulated, even if it is listed in the same table as the others. I know, too much information, but I *was* a Chemistry major.
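Those tabulated standard entropies get used exactly like formation enthalpies: products minus reactants. A Python sketch with rounded textbook values for methane combustion (note every S value is positive, as described above):

```python
# Standard entropy change for CH4 + 2 O2 -> CO2 + 2 H2O(l),
# from tabulated standard molar entropies (J/(mol*K), rounded
# textbook values - all positive, per the Third Law convention).
S = {"CH4": 186.3, "O2": 205.2, "CO2": 213.8, "H2O(l)": 70.0}

dS_rxn = (S["CO2"] + 2 * S["H2O(l)"]) - (S["CH4"] + 2 * S["O2"])
print(f"dS = {dS_rxn:.1f} J/(mol*K)")
# Negative: three moles of gas become one mole of gas plus liquid water,
# a large loss of disorder. The reaction is still spontaneous because
# its huge negative enthalpy change wins the entropy/enthalpy war.
```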

Now that I have, at long last, finished listing all the Laws of Thermodynamics, I am ready to give a few examples of their applications in some common problems (steering clear of most of the math, by the way), and show a few results that may seem counterintuitive and surprising to people not familiar with thermodynamics. On my next post, whenever that may be.

Reply #519. Aug 20 23, 10:01 PM
brm50diboll star


player avatar
Before I continue my story, I will point out that there is an annular solar eclipse passing through the western US on October 14, including my home state of Texas. Although I live a bit north of the actual path of annularity, it will still be a deep partial eclipse here, so weather permitting, it should be quite a sight.

link https://www.greatamericaneclipse.com/october-14-2023

Reply #520. Sep 17 23, 7:33 PM

