A conversation regarding the “electric universe”

Posted March 28th, 2009 in Astronomy.

Marble and I have previously discussed creationism and evolution, but our conversation later centered on a non-standard cosmology known as plasma cosmology (popularized as the “Electric Universe”).


Written by Marble on March 25, 2009 at 9:28 PM

I have an interest in AI, physics, and astronomy, and am very keen to work out the reality of the spirituality portrayed by Christianity and its roots.

I’ve generally avoided entering heated discussions about evolution because I feel they create a lot of hot air and not much progress. I also think (controversially perhaps) that the inability to question the ‘accepted opinion’ is a very human foible – not limited to people of religion, nor excluding those of higher intellect. I like William Beaty’s page of maverick scientists and thought it interesting you quoted Carl Sagan

… oh wow – just discovered your theological journey article searching for your Carl Sagan quote… [ed note: the quote is here but that theology article led to a side conversation]


Written by Dumb Scientist on March 26, 2009 at 2:37 AM

I like William Beaty’s page of maverick scientists

Interesting page. Scientists sometimes say that the only way for a new theory to be accepted is for the old scientists to die. I’d like to think that’s a pessimistic viewpoint, though.


Written by Marble on March 26, 2009 at 6:12 AM

Here’s just one place where I think it’s happening today (astronomy). I came across the electric universe theory a few years ago and was majorly impressed. Prior to that I just accepted that the moon’s craters were impacts, that comets were dirty snowballs of melting ice, that the Sun was powered by fusion – but I couldn’t quite comprehend how we got a flat galaxy, flat solar system and flat rings around Saturn via gravity… so I googled for an explanation a few years back and didn’t find anything that really explained it. A year later the question burned again – and that’s when I found that site and one or two others. I was blown away. They were explaining so much more and poking significant holes in the current understanding of astronomy. (I’ll come back to this.)

Actually – around the same time and unrelated to the electric universe – this image / series of images really nailed home to me that I no longer believed the astronomers really knew what they were talking about…

What’s fascinating about this is that the star is believed to be too far away for the ‘expanding shell’ to be anything other than a brief flash of light traveling through concentric shells of dust – because if it was dust, anything moving that distance over that period of time would be moving at around the speed of light… i.e. – they say, it can’t be what clearly appears to be an expanding shell of dust, due to its implied high velocity, so it must be a light echo.

Now in the original NASA description (which I read some years ago – which now seems to be replaced with the above ones) there was a prediction based on the nature of a light echo (which I found no fault with) – it was that the ‘shell’ would appear to contract inward as the light bouncing off the back of the shell finally made it to Earth. Funnily enough – it ain’t happening, and I never expected it to. And it appears NASA have retracted that prediction. So I’ll contend that with V838 Mon what you’re seeing actually is what you’re getting. An expanding single shell of dust. And I suspect the dust’s velocity / expansion is slowing, rather than having a constant expansion rate at the speed of light as the light fades. I just tonight discovered the Groucho Marx quote – “Who you gonna believe? Me? Or your lying eyes?” ;)

So basically I believe that it’s the light from the star illuminating the singular dust shell – and it’ll continue to fade as the dust disperses – but that will (continue) to have a different visual effect to that of a true light echo. But the ramifications of that are huge! Either our distance measurements are off (it’s apparently a bit dodgy anyway once parallax runs out), and/or our understanding of the speed of light is in question (I suppose dark matter will bound to the rescue there).

The thing that impresses me the most about the electric universe is that its explanation of a number of plainly visible astronomical phenomena can be physically demonstrated: crater creation (from here), Equatorial Ridges and Saturn’s Rings

I drilled through that site & more like it over a couple of months, and while there are some aspects I’m not sure about / disagree with – I’ve got no doubt that they’ve fired the first shots in the oncoming astronomical revolution ;) My unfinished comment about Carl Sagan was that he was naturally dismissive & derisive of the electrical universe… But there is so much evidence there – plain as day evidence – that it warrants a much deeper look. So the revolution isn’t starting from the top / the high priests of science ;) I think the evidence is so plain though – that it won’t take the older generation to drop off… surely… ;)

BTW – here’s a link for an electromagnetically shrunken coin.


Written by Dumb Scientist on March 26, 2009 at 7:35 AM

Here’s just one place where I think it’s happening today (astronomy). I came across the electric universe theory a few years ago and was majorly impressed.

I’ve seen it too, but wasn’t very impressed. I tend to agree with the wikipedia page’s “comparison to mainstream cosmology.” The main problem is that plasma cosmology introduces lots of new assumptions, and can’t account for nearly as many phenomena as mainstream cosmology. For example…

Prior to that I just accepted that the moon’s craters were impacts,

We’ve seen meteorites causing craters. It’s an established fact. I’m thinking of the craters that have formed on Earth during recorded history, as well as craters that have formed on the moon and been seen by our telescopes (it looks like a single bright flash, not a lightning-like spark), and events such as Shoemaker-Levy 9 which show that comets do strike planets.

Yes, the electric universe page shows pictures of scorch marks that look like craters, but they simply assert that the solar system was more “electrically active” in the past. This might happen on Io (because its proximity to Jupiter opens up a giant EM flux tube) but in any other case I think it’s a solution in search of a problem.

that comets were dirty snowballs of melting ice,

Which has been confirmed by spectroscopic analysis. We can point spectrometers at comets and analyze the spectral fingerprints of the comets, verifying that they’re made of water ice.

that the Sun was powered by fusion

Solar physics is probably one of the most impressively accurate theories ever developed. It accounts for not only the behavior of the Sun, but also explains the light spectra of much larger and much smaller stars, as well as explaining the way stars die.

Here’s an example: as the “electric universe” says, scientists used to be confused by the fact that we could only see 1/3 of the neutrinos expected from the Sun. The particle physicists measuring the neutrino flux kept saying the problem was due to solar physicists- that they’d just gotten their models of the solar interior wrong. The solar physicists stuck by their answer, and eventually we discovered that neutrinos have mass (which surprised the particle physicists) and as a result they “oscillate” between three flavors of neutrinos. Since the particle physicists were only looking for one flavor, they missed the other two.

Solar physics essentially rewrote particle physics, which really impresses me.

– but I couldn’t quite comprehend how we got a flat galaxy, flat solar system and flat rings around Saturn via gravity…

The galaxy is believed to have condensed from a much larger cloud of primordial hydrogen and helium (there’s some evidence that supermassive black holes played a large role in this process). Because the proto-galaxy condensed from something much larger, its moment of inertia reduced dramatically, rather like an ice-skater drawing her arms in to spin faster. This caused the rotation rate of the galaxy to increase around whatever axis the angular momentum pointed originally, which is completely random for each galaxy.

This doesn’t mean that there shouldn’t be any spherical-like orbits, just that there are more objects orbiting in the disk that’s perpendicular to the axis of rotation than in any other plane. Here’s the punchline: over billions of years, the objects that aren’t orbiting in the galaxy’s disk have close encounters with the more numerous objects in the disk, and are either flung out of the galaxy or put into more normal orbits. The same process accounts for the fact that all planets in the Solar System orbit in a common plane. (Incidentally, elliptical galaxies look different because they’ve collided with other galaxies “recently,” disrupting the natural flat spiral shape.)
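
To make the ice-skater analogy concrete, here’s a toy spin-up calculation (assuming a uniform-density sphere contracting at fixed mass – real proto-galaxies are far messier, so treat this as illustrative only):

```python
# Toy estimate of collapse spin-up via conservation of angular momentum.
# For a uniform sphere, I = (2/5) M R^2, and L = I * omega is conserved,
# so at fixed mass omega scales as (R_initial / R_final)^2.

def spin_up_factor(r_initial, r_final):
    """Ratio of final to initial rotation rate for a contracting sphere."""
    return (r_initial / r_final) ** 2

# A cloud contracting to a tenth of its radius spins 100x faster.
print(spin_up_factor(10.0, 1.0))  # -> 100.0
```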

Actually – around the same time and unrelated to the electric universe – this image / series of images really nailed home to me that I no longer believed the astronomers really knew what they were talking about…

What’s fascinating about this is that the star is believed to be too far away for the ‘expanding shell’ to be anything other than a brief flash of light traveling through concentric shells of dust – because if it was dust, anything moving that distance over that period of time would be moving at around the speed of light… i.e. – they say, it can’t be what clearly appears to be an expanding shell of dust, due to its implied high velocity, so it must be a light echo.

The stars “clearly” rotate around the Earth once every 23 hours, 56 minutes, 4 seconds. Anyone who attempts to explain that away as the “rotating Earth” is just trying to get you to disbelieve your lying eyes. If you think that’s silly, keep in mind that some people literally believe it to be true.

Now on the original NASA description (which I read some years ago – which now seems to be replaced with the above ones) was a prediction based on the nature of a light echo (which I found no fault with) – it was that the ‘shell’ would appear to contract inward as the light bouncing off the back of the shell finally made it to Earth. Funnily enough – it ain’t happening, and I never expected it to. And it appears NASA have retracted that prediction.

I’m not privy to these details. I’m also not convinced that there really isn’t any evidence of reflections from the back of the nebula. If you’re really curious, google the principal investigator and ask him if he’d help you understand it. Most scientists like to talk about their work with the general public as long as people ask polite questions in a non-confrontational manner.

But for the sake of argument, let’s say NASA predicted that the reflections from the back would make the nebula appear to shrink and then retracted those predictions because it didn’t happen. I’d like to know the diameter of the nebula, which, converted into light travel time, would tell us how long we’d have to wait. I’d also like to know the light spectrum, the dust density and the dust size distribution. This would allow me to calculate the scattering of the light, perhaps using Mie theory.

I suspect what’s happening here is that the light isn’t as strongly scattered backwards as it is in the forward direction. So the echo from the front part of the nebula is brighter than the reflection from the back part, because light from the back has to be scattered through nearly 180 degrees. I can’t be sure without devoting a lot of time to this problem that I don’t have, though.

Also, scattered light from the back of the nebula would be scattered again on its way to us as it passes through the front part of the nebula, so that complicates the interpretation somewhat.
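
For what it’s worth, the apparently superluminal expansion falls straight out of the standard echo geometry. Here’s a minimal sketch (the dust-sheet distance of 1 light-year is an assumed illustrative number, not V838 Mon’s actual geometry):

```python
import math

C = 1.0  # units where c = 1: distances in light-years, times in years

def echo_radius(z_ly, t_yr):
    """Apparent ring radius (in ly) of a light echo t_yr years after the
    flash, for dust in a sheet z_ly light-years in front of the star.
    The scattered path exceeds the direct path by c*t, which gives
    rho^2 = 2*z*c*t + (c*t)^2."""
    return math.sqrt(2.0 * z_ly * C * t_yr + (C * t_yr) ** 2)

def apparent_speed(z_ly, t_yr):
    """d(rho)/dt in units of c. This can exceed 1 even though nothing
    physical moves faster than light -- only the illuminated region does."""
    return (z_ly * C + C**2 * t_yr) / echo_radius(z_ly, t_yr)

# Dust 1 ly in front of the star:
print(apparent_speed(1.0, 0.1))  # ~2.4c shortly after the flash
print(apparent_speed(1.0, 1.0))  # ~1.2c a year later
```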

So I’ll contend that with V838 Mon what you’re seeing actually is what you’re getting. An expanding single shell of dust. And I suspect the dust’s velocity / expansion is slowing, rather than having a constant expansion rate at the speed of light as the light fades. I just tonight discovered the Groucho Marx quote – “Who you gonna believe? Me? Or your lying eyes?” ;)

So basically I believe that it’s the light from the star illuminating the singular dust shell – and it’ll continue to fade as the dust disperses – but that will (continue) to have a different visual effect to that of a true light echo. But the ramifications of that are huge! Either our distance measurements are off (it’s apparently a bit dodgy anyway once parallax runs out), and/or our understanding of the speed of light is in question (I suppose dark matter will bound to the rescue there).

Your explanation opens up a giant can of worms. We have an enormous amount of evidence that the galaxy is ~100,000 LY across, that Andromeda is ~2,000,000 LY away, and that lightspeed is 299,792,458 m/s. You’re trying to solve a really tiny mystery, but in the process you’re going to have to explain a lot of astronomical observations.

I don’t have time to fully describe the last century of astronomy, but I’ll note that our distance measurements are based on (in order of increasing distance) parallax, Cepheid variables, Type Ia supernovae, and redshift measurements. This is a good overview.
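
To show how a “standard candle” rung of that ladder works numerically, here’s the distance-modulus arithmetic (the apparent magnitude of 15 is a made-up observation; the peak absolute magnitude of roughly -19.3 is the standard value for a Type Ia supernova):

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A Type Ia supernova observed at apparent magnitude 15 sits at ~72 Mpc.
d_pc = distance_parsecs(15.0, -19.3)
print(d_pc / 1e6, "Mpc")
```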

It’s true that we’re constantly recalibrating these “standard candles” and that all science is subject to change from new information. The problem is that our observations place rigorously defined error bars on those distances. I wouldn’t be surprised if these measurements are off by 10-20% because the error bars really are that big.

But if this echo is anything but light, you’re talking about a HUGE change in these distance measurements, or the speed of light. This kind of change would require you to explain all the measurements made by telescopes all over the world for the last century. I hope you like tilting at windmills…

Also, dark matter wouldn’t have anything to do with this. Dark matter was originally an hypothesis that explained the anomalous rotation curves within galaxies and the unusually high orbital velocities of entire galaxies in superclusters. But it’s been experimentally verified by the Bullet cluster. In addition, the WMAP results are inexplicable without a certain amount of non-baryonic dark matter and something bizarre called dark energy.

Dark matter/energy are ridiculously complicated topics, but they’re not simply “fudge factors” that scientists throw at phenomena they don’t understand.

Equatorial Ridges

Interesting coincidence: I recently met a scientist– Emily Dahlberg– at last December’s AGU Fall Meeting who was studying the Iapetus ridge. She presented three theories and cast serious doubt on all of them. We really don’t know why the ridge exists, but I read that page and don’t see how plasma cosmology has a better explanation for all the various mysteries of the ridge.

Saturn’s Rings

As far as I can tell, the Cassini probe has been discovering new rings and gaps in the ring system, and they don’t seem to be having trouble describing them with standard gravitational physics. It’s weird physics- moons can actually push rings away with their gravity (counterintuitive, has to do with rotating coordinate systems), but it’s all comprehensible with enough math.
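
As one concrete example of that math: the most famous gap lines up with a simple orbital resonance. Here’s a sketch of the arithmetic, using Kepler’s third law and Mimas’s published semi-major axis (the textbook association is between the Mimas 2:1 resonance and the inner edge of the Cassini Division):

```python
# A ring particle in a 2:1 resonance orbits twice per Mimas orbit.
# Kepler's third law (T^2 proportional to a^3) gives its orbital radius:
# a = a_mimas * (T_ring / T_mimas)^(2/3) = a_mimas * (1/2)^(2/3).
A_MIMAS_KM = 185_539.0  # Mimas's semi-major axis around Saturn

a_resonance = A_MIMAS_KM * (1.0 / 2.0) ** (2.0 / 3.0)
print(a_resonance)  # ~116,900 km, right at the inner edge of the
                    # Cassini Division (~117,500 km)
```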

BTW – here’s a link for an electromagnetically shrunken coin.

Yes, electromagnetism is a very powerful force. It’s 10^36 times more powerful than gravity, in fact. So I can understand its appeal in terms of explaining the universe. Maybe we’ve even underestimated the importance of interstellar plasma interactions. Who knows?
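
That 10^36 figure is easy to check for a pair of protons – both forces fall off as 1/r^2, so the separation cancels out of the ratio:

```python
# Ratio of electrostatic repulsion to gravitational attraction
# between two protons (independent of their separation).
e   = 1.602176634e-19   # elementary charge (C)
k_e = 8.9875517873e9    # Coulomb constant (N m^2 / C^2)
G   = 6.67430e-11       # gravitational constant (N m^2 / kg^2)
m_p = 1.67262192e-27    # proton mass (kg)

ratio = (k_e * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")   # ~1.2e36
```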

It’s true that both gravity and electromagnetism have infinite range and can cross empty space (the electric universe site claims that scientists don’t acknowledge that electromagnetic forces can cross empty space, but I have yet to meet a physicist who’s that ignorant). The sweeping claims made by plasma cosmology are ignored by mainstream physicists because electric charges come in two types which tend to attract each other and cancel out. Gravitational mass only comes in positive quantities, so it never cancels out.

As a result, the universe’s large scale structure is dominated by gravitational interactions. Galaxies form because of gravity, and random collisions between objects form the flat disk shape. Stars collapse because of gravity until they become hot enough to fuse hydrogen, then remain stabilized by gravity until their nuclear fuel ultimately runs out, etc.


Written by Marble on March 26, 2009 at 10:22 AM

Yeah – I’ll admit I’ve never bothered with trying to distinguish between the electric universe & the plasma cosmology – I figured they were largely on the same track.

Wikiquote – Most astrophysicists accept dark matter as a real phenomenon and a vital ingredient in structure formation, which cannot be explained by appeal to electromagnetic processes

Obviously most astrophysicists agreeing, a fact does not make – and a citation is required for ‘cannot be explained’.

See this link for an electrical simulation of spiral galaxy formation. Of course I haven’t deconstructed his mathematical model or attempted to reproduce it in my microwave – so I’m going to have to take it on face value.

But if this echo is anything but light, you’re talking about a HUGE change in these distance measurements, or the speed of light. This kind of change would require you to explain all the measurements made by telescopes all over the world for the last century. I hope you like tilting at windmills…

If the speed of light dramatically changes outside our solar system… then why wouldn’t our observations still be consistent with what we have – how would we know / not know if light travels much faster or slower between the stars & systems? And how many light years across are those huge galaxies – yet we seem to see both arms practically identical (well my very basic observation – I don’t know if there’s any research to indicate that the stars on the far side of the galaxies are ‘younger’ or if the arms are skewed to allow for the distant light taking longer to reach us etc).

Solar physics is probably one of the most impressively accurate theories ever developed. It accounts for not only the behavior of the Sun, but also explains the light spectra of much larger and much smaller stars, as well as explaining the way stars die.

I’m not sure it’s that good. And I don’t think history bears out your assertion. Solar physics is probably one of the most impressively modified theories that changes on a regular basis. The Sun only became a fusion powered entity when we discovered fusion for instance. And I believe stars have only recently been shown to consist mostly of hydrogen & helium (apart from the core in older/larger stars), and that the spectrometer readings of elements present are due to the extreme heat effectively bouncing electrons through the orbital shells of the hydrogen. (If I’m correctly recalling the 2006 astronomy 162 podcast of lectures I’ve listened to recently.) Stars are changing brightness, size, colour too rapidly for the current theory. The corona is too hot. When was dark matter & energy ‘discovered’? Weren’t the insides of the galaxies meant to spin faster? This is largely how astronomy works these days – either ignore the evidence that doesn’t fit the model, or change the model to refit the evidence – the latter of which is fine – but if the model’s not making predictions… I think your model is kinda worthless – falsifiability or something isn’t it?.. ;P

We’ve seen meteorites causing craters. It’s an established fact. I’m thinking of the craters that have formed on Earth during recorded history, as well as craters that have formed on the moon and been seen by our telescopes (it looks like a single bright flash, not a lightning-like spark), and events such as Shoemaker-Levy 9 which show that comets do strike planets.

Yes, the electric universe page shows pictures of scorch marks that look like craters, but they simply assert that the solar system was more “electrically active” in the past. I can maybe see this being true on Io (because its proximity to Jupiter opens up a giant EM flux tube) but in any other case I think it’s a solution in search of a problem.

I’m not disputing that physical impacts occur – but perhaps you can point me to some images or descriptions of craters caused by such impacts? Why are the impact craters (say on the moon for argument’s sake) pretty much perfectly circular? Unless some sort of atomic type explosion is invoked upon impact – I can’t see how pretty much all the impacts on the moon would be perpendicular to the moon’s surface, considering that the moon has such a weak gravity well, because I figure a lot of those large craters were caused by large / fast moving meteorites that should really have spread themselves along the moon’s surface in the direction the meteorite was travelling. And then you have to explain the flat bottoms and ridge walls – which the electrical machining can clearly demonstrate (and the little peak in the middle occasionally – which probably rules out an explosion BTW).

Which has been confirmed by spectroscopic analysis. We can point spectrometers at comets and analyze the spectral fingerprints of the comets, verifying that they’re made of water ice.

You’re studying for a PhD right – I’ll forgive you for not being up on it all ;)

Here’s an example: as the “electric universe” says, scientists used to be confused by the fact that we could only see 1/3 of the neutrinos expected from the Sun.

So if we can’t get comet dust right… how much more so exotic particles – don’t we only detect 1 a day or something – as a flash of light in a large water container miles underground? (I’m just filling in space here cause I’m having trouble finding the EU’s rebuttal.)

And I’m not saying the Sun doesn’t have fusion reactions – it’s just that they’re not the main power source. If the fusion reaction in the center was the source of heat then why is the corona (the outermost atmosphere) orders of magnitude hotter than the Sun’s surface (photosphere)? How does the heat get to the corona and stay there without moving back? They’re speculating of course – but as far as I’m aware – there’s no demonstrable mechanism. But an arc discharge where the energy is coming from the outside I perceive as a less problematic explanation. Combine that with a radial field flattening the solar system due to the incoming energy feeding the Sun – then it does tie in nicely – even if I don’t know what I’m talking about ;)

As far as I can tell, the Cassini probe has been discovering new rings and gaps in the ring system, and they don’t seem to be having trouble describing them with standard gravitational physics. It’s weird physics- moons can actually push rings away with their gravity (counterintuitive, has to do with rotating coordinate systems), but it’s all comprehensible with enough math.

I’ll take the simpler model – which is electrical repulsion. Tends to push things like that. And you have those Io plumes remember… so there’s no doubt there’s significant electrical charge available. What’s more is that the ring reforms too. Gravity just doesn’t do that – I don’t care how much math you throw at it ;) And that wouldn’t be similar math to the one that has the bug-hole paradox in it? Or the barn-pole one? I’m sorry but paradoxes particularly like those tell me there’s something wrong somewhere… (in the model – not reality … *plugs ears so doesn’t have to enter philosophical debates on reality*).

Your explanation opens up a giant can of worms.

But if you don’t ask the questions, people generally don’t start to think of the answers… and they just keep accepting the high priests (peer reviewed) version of reality ;)

Sometimes the devil is in the details.

Your explanation opens up a giant can of worms.

Revolutions are messy affairs….

It’s true that both gravity and electromagnetism have infinite range and can cross empty space (the electric universe site claims that scientists don’t acknowledge that electromagnetism can cross empty space, but I have yet to meet a physicist who’s that ignorant). But the sweeping claims made by plasma cosmology are ignored by mainstream physicists because electric charges come in two types, and they tend to attract each other and cancel out. Gravitational mass only comes in positive quantities, so it never cancels out.

Apparently plasma effects scale really well (electric machining micro craters to craters on the moon – I know at this point you may not accept that – or planetary Lichtenberg figures perhaps but I’m dying here due to lack of sleep…). And plasma doesn’t just ‘cancel out’ charges – see the Birkeland currents / plasma sheaths for starters. Of course you could ask what powers the super galactic currents – but well – apart from super cluster currents etc – I suppose we could equally ask what kicked off the big bang.

Here’s the 3 legs of the stool that I think make it very difficult to upturn the current theories:

You need billions of years for:

  1. Evolution
  2. Gravitationally based solar system stability
  3. Geological weathering through water & wind

That ties biology, geology & astrophysics. The weight against saying such & such an event happened in a much shorter time frame in one field is caused by the other two. However, in my opinion (iamadumbnonscientist) all 3 fields could be reduced to a shorter timespan through the electric universe concepts & creation/ID. Of course I may have to create an anti gravity drive, disprove the constancy of light and build an AI to argue on my behalf before anyone will listen to me….but even then I’m not so sure ;)


Written by Dumb Scientist on March 26, 2009 at 9:53 PM

If the speed of light dramatically changes outside our solar system… then why wouldn’t our observations still be consistent with what we have – how would we know / not know if light travels much faster or slower between the stars & systems?

First, the problem of varying physical “constants” has been examined in detail here (see section 3). There’s weak evidence that some physical constants were different in the past, but young Earth creationism requires a much larger change than the evidence supports. In this context “much larger” means millions of times too large.

Second, if the speed of light is different outside of our solar system, there must be a boundary layer (abrupt or gradual) between the region in our solar system with a low speed of light and the outside universe where the speed of light is higher. In either case, this is the definition of a lens. You’re basically saying we live in a glass marble (or glass ellipsoid, or glass ballerina figurine). Look at a glass marble in the Sun sometime- the boundary layer between air and glass bends light and focuses it. This interface bends the light because the speed of light is 33% slower in glass than in air.

Even if the boundary layer is gradual, that’s the same as the case of the Earth’s atmosphere: light travels around 0.03% slower in air than vacuum, and this change occurs gradually as the air gets thicker towards the ground. This allows people standing on top of a mountain to see over their geometrically defined horizon because the light bends down towards them. It’s also part of the reason the moon turns red during lunar eclipses (the moon is in the Earth’s shadow, but it’s being lit up by the refracted light of all the sunrises and sunsets in the world).
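
The bending at such a boundary is just Snell’s law, which is cleaner when written in terms of the light speeds on each side. A quick sketch (the “twice as fast outside” case is purely hypothetical, chosen to illustrate the total internal reflection mentioned below):

```python
import math

def refracted_angle_deg(theta1_deg, v1, v2):
    """Snell's law in terms of light speeds: sin(t1)/v1 = sin(t2)/v2.
    Returns None when the ray is totally internally reflected."""
    s = math.sin(math.radians(theta1_deg)) * (v2 / v1)
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Air -> glass (light ~33% slower in glass): a 45-degree ray bends to ~28.
print(refracted_angle_deg(45.0, 1.0, 0.67))

# Hypothetical slow solar system -> exterior where light is 2x faster:
# anything steeper than the 30-degree critical angle reflects back.
print(refracted_angle_deg(40.0, 1.0, 2.0))  # -> None
```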

If light really is faster outside our solar system, that boundary layer would have experimentally measurable consequences:

  • The sky would look weird— some regions would be totally dark, and the “fixed” stars would shift in queasy patterns as the Earth revolved around the Sun.
  • Depending on the geometry of the boundary layer, at least one focal point would exist where electromagnetic radiation across the frequency spectrum is concentrated. All those frequencies would have to be focused to the same point because our local measurements reveal all EM radiation to travel at the same speed in our vacuum, and remote measurements of extra-galactic events reveal the same thing to within the limits of experimental uncertainty. This means that the boundary layer you’re proposing can’t have any chromatic aberration. So the solar system would have “death zones” that would be subjected to extreme radiation whenever a “local” supernova exploded… and I haven’t seen evidence for anything like this.
  • An abrupt boundary layer would result in total internal reflection. If the boundary layer is spherical, sufficiently large and centered on the Sun, the Sun’s light wouldn’t be totally reflected, but depending on the boundary’s size the light reflected from Jupiter and Saturn (along with their radio emissions) would bounce around the solar system.

Transmission coefficients are also dependent on the relative speeds of light in both regions, and generally not the same for different frequencies. The boundary would have to be almost perfectly clear across all observed frequencies to account for its invisibility, which means it has to have some kind of idealized anti-reflective coating.
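
To put a number on “almost perfectly clear,” here’s the normal-incidence Fresnel reflectance rewritten in terms of light speeds (the 10% speed mismatch is an arbitrary illustrative value):

```python
def normal_incidence_reflectance(v1, v2):
    """Fresnel reflectance at normal incidence. With n = c/v,
    ((n1 - n2) / (n1 + n2))^2 == ((v2 - v1) / (v2 + v1))^2."""
    return ((v2 - v1) / (v2 + v1)) ** 2

# Even a modest 10% speed mismatch reflects ~0.2% of the light at
# every crossing -- an uncoated boundary would not be invisible.
print(normal_incidence_reflectance(1.0, 1.1))  # ~0.0023
```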

And how many light years across are those huge galaxies – yet we seem to see both arms practically identical (well my very basic observation – I don’t know if there’s any research to indicate that the stars on the far side of the galaxies are ‘younger’ or if the arms are skewed to allow for the distant light taking longer to reach us etc).

Cool! Some new evidence indicates that the Milky Way is about twice as ~~big~~ massive as I thought it was. The Very Long Baseline Array (VLBA) “networked” many telescopes together to form a telescope with unprecedented angular resolution, and imaged star-producing clusters on the opposite side of the galaxy in radio waves twice- once in January and once in June to obtain parallax measurements. Also, their density vs. distance measurements suggest that the Milky Way has 4 arms, not 2 as previously believed.
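
Parallax is the simplest rung on the distance ladder: the distance in parsecs is just the reciprocal of the parallax angle in arcseconds. A quick illustration (the 20 kpc / 50 microarcsecond pairing is a round-number example, not a specific VLBA measurement):

```python
def distance_pc(parallax_arcsec):
    """Distance in parsecs from a parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

# A source ~20 kpc away (roughly the far side of the galaxy) shifts by
# only ~50 microarcseconds -- hence the need for the VLBA's resolution.
print(distance_pc(50e-6) / 1e3, "kpc")  # -> 20.0 kpc
```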

~~I guess that some medium-distance standard candles need to be revised again, by a factor of at most 2.~~ Nope. My bad. That’s a larger error than I would’ve expected off the top of my head (note that I’m a physicist, not an astronomer). Anyway, I found that just now and thought it was both informative and relevant.

To answer your question, pretty much all galaxies are about the same ~~size~~ mass as ours or slightly ~~smaller~~ lighter. It used to be odd that Andromeda seemed so much ~~larger~~ heavier than our galaxy, but the VLBA showed that this was because, paradoxically, it’s harder to study our own galaxy than it is to study galaxies millions of light years away. And in Andromeda’s case, the disk is nearly edge-on, so the near stars appear ~~200,000~~ ~141,000 years younger than the farther ones. But since most stars live for billions of years this isn’t noticeable. Stars the size of our Sun live for 5-10 billion years, smaller stars like red dwarfs last tens of billions of years and larger stars can shine so brightly that they exhaust their fuel in mere millions of years. So the ~~200,000~~ ~141,000 years it takes for light to cross the galaxy is a small percentage of the lifetimes of all but the largest stars, which are being born all the time so they don’t have a uniform age anyway.

The Sun only became a fusion powered entity when we discovered fusion for instance.

So… you’re saying it’s surprising we didn’t realize that the Sun was fusion powered before fusion was discovered in the 1930s? (I consider Bethe’s and Chandrasekhar’s works in 1939 to mark the dawn of modern solar physics.)

And I believe stars have only recently been shown to consist mostly of hydrogen & helium (apart from the core in older/larger stars)

That depends on your definition of “recently.” Helium and hydrogen were found to dominate the Sun’s spectrum in 1868. So it wasn’t surprising when fusion-based stellar models developed in the 1930s didn’t allow for large percentages of other elements. Otherwise fusion would be harder to start, causing the minimum size of a viable star to be higher than we’ve observed.

… and that the spectrometer readings of elements present are due to the extreme heat effectively bouncing electrons through the orbital shells of the hydrogen.

I don’t understand this point. What elements are you talking about? I’d be interested to see if there’s some way for spectroscopic “fingerprints” to be mistaken for something else (which is what I think you’re saying) but the predicted signatures have extremely narrow peaks in the frequency domain, and thermal motion usually just results in Doppler broadening…
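
For context on why those fingerprints are hard to mistake, here’s roughly how much thermal motion smears a spectral line. A sketch of the Doppler-broadening arithmetic (hydrogen-alpha at a ~5800 K photosphere; the 2.355 factor converts a Gaussian sigma to a full width at half maximum):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant (J/K)
M_H = 1.6735575e-27  # mass of a hydrogen atom (kg)
C   = 2.99792458e8   # speed of light (m/s)

def doppler_sigma_nm(wavelength_nm, temp_k, mass_kg):
    """1-D thermal velocity spread mapped into wavelength:
    sigma_lambda = lambda * sqrt(k*T/m) / c."""
    return wavelength_nm * math.sqrt(K_B * temp_k / mass_kg) / C

sigma = doppler_sigma_nm(656.28, 5800.0, M_H)  # H-alpha on the Sun
print(sigma)          # ~0.015 nm
print(2.355 * sigma)  # ~0.036 nm FWHM -- narrow peaks, as stated above
```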

(If I’m correctly recalling the 2006 astronomy 162 podcast of lectures I’ve listened to recently.) Stars are changing brightness, size, colour too rapidly for the current theory.

I think it’s likely that steady-state predictions are simpler than predictions of the transition states. In other words, it’s easy to predict the temperature and neutrino flux from fusion in a stable star, but transitions and oscillations are harder to describe. At least, that’s been my experience in a different field of physics…

When was dark matter & energy ‘discovered’?

1933 – Zwicky studies the Coma cluster of galaxies and is surprised to find that these galaxies are orbiting each other much faster than he predicted based on their visible mass. He proposes that each galaxy actually contains much more mass than is visible.

1959 – Measurements of galactic rotational velocities conflict with expected velocities based on the amount of matter observed to be present. The dark matter concept proposed by Zwicky is found to solve this problem too.

1970s – Big Bang nucleosynthesis has trouble reconciling observations of high deuterium density with the expansion rate of the universe. Non-baryonic dark matter solves this problem as well.

At this point, dark matter was simply an hypothesis. MOdified Newtonian Dynamics (MOND) was another hypothesis with equal weight. But then in 2006 measurements of the Bullet Cluster supported the dark matter hypothesis over the MOND hypothesis.

Simultaneously, WMAP (2001-present) measured the microwave background radiation and independently confirmed the existence of dark matter. It also revealed an even larger amount of “dark energy” which confirmed the 1998 discovery that the expansion of the universe is accelerating. I can’t claim to understand any of the debate after that point, though: it’s over my head.

Weren’t the insides of the galaxies meant to spin faster?

Yes, but it’s a little complicated. Kepler’s laws say inner planets orbit faster than outer planets, but in a very specific manner: “the square of the orbital period of the planet is directly proportional to the cube of the radius of the orbit.”

That wasn’t what scientists were expecting when they looked at galaxies, though. Their models accounted for the fact that galaxies are densely filled with stars rather than dominated by a single point mass like our solar system. Thus, stars at the edges should be a little faster than the Keplerian estimate. The problem was that the actual observations revealed a velocity curve (i.e. orbital velocity of stars versus their distance from the center of the galaxy) that was way too flat. In other words, stars at the edge were traveling much too fast.

But then someone noticed that if you hypothesized the existence of a (nearly) uniform “halo” of matter around the galaxy, the problem went away (I had to do this homework problem in my cosmology class). This hypothesis of a non-interacting dark matter halo wasn’t distinguishable from MOND until several years ago, though.
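
Here’s a minimal numerical version of that homework problem (the 10^11 solar mass disk and the halo whose enclosed mass grows linearly with radius are illustrative choices, not a fit to real Milky Way data):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2
M_SUN = 1.989e30  # kg
KPC = 3.086e19    # m

def v_circ_kms(mass_enclosed_kg, r_m):
    """Circular orbital speed v = sqrt(G * M(<r) / r), in km/s."""
    return math.sqrt(G * mass_enclosed_kg / r_m) / 1e3

for r_kpc in (5, 10, 20):
    central = 1e11 * M_SUN                 # all mass concentrated centrally
    halo = central + 1e10 * M_SUN * r_kpc  # add M(<r) proportional to r
    print(r_kpc, "kpc:",
          round(v_circ_kms(central, r_kpc * KPC)), "km/s (Keplerian),",
          round(v_circ_kms(halo, r_kpc * KPC)), "km/s (with halo)")
# The Keplerian curve falls off as 1/sqrt(r); the halo curve flattens
# out as the halo's linearly growing mass takes over -- the "too flat"
# behavior the observations actually show.
```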

This is largely how astronomy works these days – either ignore the evidence that doesn’t fit the model, or change the model to refit the evidence – the latter of which is fine – but if the model’s not making predictions… I think your model is kinda worthless – falsifiability or something isn’t it?.. ;P

I’ve never seen astronomers ignore evidence- not the astronomers whose papers I read or my astronomer friends. Perhaps my experiences are less representative of the astronomy community than yours are, though. Can’t say for sure.

I agree that models which don’t make falsifiable predictions are worthless. I’ve just never seen that happen in peer reviewed journals. Theories are modified by new evidence all the time, but those modifications make predictions of their own. An excellent example is that the dark matter hypothesis drastically modified our understanding of galactic structure and evolution. It used to be indistinguishable from MOND until someone realized that dark matter’s signature weak interactions imply that it would behave differently in a collision between galaxies. The ionized gas that makes up the bulk of the visible mass of the galaxies would collide and slow down, while the dark matter of each galaxy would fly right through the other galaxy and keep going. It’s possible to view the total amount of matter in this case because matter (dark or ordinary) acts as a gravitational lens- it bends light from even more distant galaxies.

By carefully examining the extent of this lensing, a map of the total amount of matter was revealed. It wasn’t in the same place as the light from the ionized gas. In fact, the mass is centered along several lobes outside each galaxy along the direction of their motion, which is exactly what the dark matter hypothesis predicted decades earlier.

I’m not disputing that physical impacts occur – but perhaps you can point me to some images or descriptions of craters caused by such impacts?

  • NASA routinely observes craters being formed on the moon. It’s a serious problem for the (possibly) upcoming moon base, so they’re trying hard to characterize the impact frequencies and size distributions to keep the colonists safe. Here’s the best video I’ve found that shows an impact.
  • The largest impact in recorded history was the Tunguska event in Russia in 1908. Recently, researchers have claimed that the impact crater is hidden under a lake. I think this is the lake in question, and they’re planning to take core samples to confirm this (by searching for the expected ejecta at the right depth).
  • In 1947, a meteorite hit Russia and left several craters, the largest of which was 26m across and 6m deep.
  • In 2007, a meteorite hit Peru, and left a roughly circular crater 13m across and 4.5m deep.

Also, over a thousand meteorites have been recovered after eyewitnesses followed the fireball to the rock. These meteorites show a significantly different chemical makeup than earthly rocks, and the resulting ejecta is spread over a wide area. Thus a chemical fingerprint of a foreign object is recorded. The best known example is Barringer Meteor crater. In 1960, Shoemaker showed that it was caused by a high velocity impact with an iron-nickel asteroid.

Any alternative explanation would have to explain why this ejecta looks so different than the rest of the Earth, and why it looks so similar to meteorites.

Why are the impact craters (say on the moon for argument’s sake) pretty much perfectly circular? Unless some sort of atomic type explosion is invoked upon impact –

That’s actually a pretty good description of what happens. The kinetic energy of a multi-kiloton rock moving at an orbital velocity is so large that the resulting explosion is sometimes more powerful than even the Tsar Bomba (without the radioactivity).
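
A back-of-envelope example of why impacts behave like bombs (the 50 m rocky impactor and 20 km/s closing speed are typical round numbers, not a specific event):

```python
import math

density = 3000.0   # kg/m^3, ordinary rock
radius = 25.0      # m (a ~50 m diameter impactor)
v = 20_000.0       # m/s, a typical encounter speed

mass = density * (4.0 / 3.0) * math.pi * radius**3  # ~2e8 kg
energy_j = 0.5 * mass * v**2                        # kinetic energy

TNT_TON = 4.184e9  # joules per ton of TNT
print(energy_j / TNT_TON / 1e6, "megatons")  # ~9 megatons
# A rock only a few times larger already out-yields the ~50 megaton
# Tsar Bomba, and the energy is dumped nearly at a point -- hence the
# roughly spherical explosion and circular crater.
```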

I can’t see how pretty much all the impacts on the moon would be perpendicular to the moon’s surface, considering that the moon has such a weak gravity well, because I figure a lot of those large craters were caused by large / fast moving meteorites that should really have spread themselves along the moon’s surface in the direction the meteorite was travelling.

They’re not all perpendicular, it’s just that the resulting explosion is relatively spherical regardless of the incoming direction of the meteorite.

And then you have to explain the flat bottoms and ridge walls – which the electrical machining can clearly demonstrate (and the little peak in the middle occasionally – which probably rules out an explosion BTW).

Craters with flat bottoms are larger, commonly known as impact basins due to their size. The larger size results in greater melting of the rocks, which makes the craters flatter. Here’s a good site.

All these features have been studied and reproduced both in the lab and in simulations. In the 1960s scientists literally shot big guns at cement and produced craters that matched observations. In 1981 the central peaks were examined in more detail, and explained by the interaction of two shock waves. More recent research is being performed by scientists like Dan Durda: “KC-135 microgravity experiments in regolith properties and cratering mechanics,” Mark Cintala, Josh Colwell, and Daniel D. Durda. (From here.)

Which has been confirmed by spectroscopic analysis. We can point spectrometers at comets and analyze the spectral fingerprints of the comets, verifying that they’re made of water ice.

You’re studying for a PhD right – I’ll forgive you for not being up on it all ;)

That’s fascinating– I didn’t catch that discovery. The percentage of water in comets may be lower than I thought before, making the separation between comets and asteroids fuzzier. Interesting. I’d imagine that there are still reasons for comets to be different than asteroids in more circular orbits because comets are continually re-heated when they pass by the Sun, and cross many planets’ orbits during their circuits through the inner solar system so they probably accrete more dust.

But spectroscopic measurements of comets have been conclusive: comets contain water. Also, Cassini has literally flown through water plumes from Enceladus which is a moon of Saturn that might be a captured comet. Certainly these new observations push down the likely percentage of water, but it has to be higher than zero otherwise other observations wouldn’t make sense.

So if we can’t get comet dust right… how much more so exotic particles – don’t we only detect 1 a day or something – as a flash of light in a large water container miles underground? (I’m just filling in space here cause I’m having trouble finding the EU’s rebuttal.)

We’ll always be getting stuff wrong. That I can promise you. But the people doing the comet research aren’t the same people detecting neutrinos, and they’re using very different physics. I don’t see a connection between the two fields that’s strong enough to make me think that failures in one field imply anything in particular about the conclusions from the other field…

And you’re right– neutrino detection is really difficult. Despite freakishly large detectors, I think your estimate of the flash counts isn’t too far off. That’s why it takes them a long time to build up enough statistics to rule out this-or-that physical theory. But based on their successes in correlating increases in flash count rates to supernovae, I think the detectors work correctly.

And I’m not saying the Sun doesn’t have fusion reactions – it’s just that they’re not the main power source.

Then you’d have to explain the fact that we see just enough neutrinos from the Sun to account for the fusion-based solar models. Remember that solar physicists (usually regarded as lowly experimentalists) went up against the particle physicists (if physicists had superstars, it would be these guys) and they won. Furthermore, after the particle physicists relented, neutrino oscillation was independently confirmed in at least three different ways.

Neutrinos were predicted to exist long before any direct evidence was found. Pauli actually predicted the existence of neutrinos when analyzing beta decay (a type of nuclear reaction) in 1930. Using nothing more than conservation of energy and momentum, Pauli predicted a particle that wasn’t seen until 1956. As far as I know, neutrinos are only created in nuclear reactions. If fusion isn’t powering the Sun, those neutrinos are a big mystery.

If the fusion reaction in the center was the source of heat then why is the corona (the outermost atmosphere) orders of magnitude hotter than the Sun’s surface (photosphere)?

It’s interesting that you should bring this up when less than a week ago, a solution to this problem was proposed. Ironically, the explanation could be a type of plasma wave called an Alfvén wave, named after Hannes Alfvén. Yes, the father of plasma cosmology.

How does the heat get to the corona and stay there without moving back?

I don’t know. I’ve tried to figure out if “the electric Sun” can explain this better, but I don’t understand the idea that the Sun is charged. The solar wind is neutral- you can confirm that by looking at probe measurements of nuclei and electrons: their fluxes are the same.

Newer, more comprehensive data regarding the solar wind is also available from Ulysses. It confirms that solar wind is electrically neutral, but a charged Sun should only be repelling one type of charge.

The corona’s high temperature has been mysterious for a long time; I just don’t see any advantage to the electric Sun idea.

But an arc discharge where the energy is coming from the outside I perceive as a less problematic explanation. Combine that with a radial field flattening the solar system due to the incoming energy feeding the Sun – then it does tie in nicely – even if I don’t know what I’m talking about ;)

You’re in good company; I don’t understand that paragraph either.

I’ll take the simpler model – which is electrical repulsion. Tends to push things like that. And you have those Io plumes remember… so there’s no doubt there’s significant electrical charge available.

Electrical forces tend to push charged objects. I think you’d have a lot of trouble reproducing Cassini’s optical views of the rings with Cassini’s measurements of the electric field in the Saturnian system. I encourage you to try, but note that Io is a moon of Jupiter, not Saturn.

Also, the simplest model makes the fewest assumptions. The weird gravitational effects I’m describing don’t really make any more assumptions than Newton did when he conceived inverse square gravity. It’s just that in a rotating coordinate system, inverse square gravity has counterintuitive results when multiple objects are placed in “orbital resonances.”

Furthermore, it only makes sense to compare the simplicity of two models if their predictions both match the experimental results. I’ve seen proof that inverse square gravity can account for the gaps in Saturn’s ring system, but I haven’t seen any equivalent proof for an electromagnetic origin. I also still don’t understand where all these charges come from, and why they don’t just become neutral by attracting opposite charges.

What’s more is that the ring reforms too. Gravity just doesn’t do that – I don’t care how much math you throw at it ;)

What, exactly, do you mean by “reforms”? Why, exactly, is gravity unable to do that?

And that wouldn’t be similar math to the one that has the bug-hole paradox in it? Or the barn-pole one? I’m sorry but paradoxes particularly like those tell me there’s something wrong somewhere… (in the model – not reality … *plugs ears so doesn’t have to enter philosophical debates on reality*).

First, you’re describing special relativity, not math itself. Second, the math involved in special relativity is almost entirely unrelated to the math used to describe the orbital resonances that connect Saturnian moons with gaps in the ring system.

Third, the barn-pole paradox isn’t really a paradox. It’s an “apparent” paradox, which means it violates common sense but isn’t internally inconsistent (which is how I’d define the term “paradox” in the context you’re using the word).

Special relativity is one of the most beautiful (IMHO) theories in physics. It’s completely at odds with common sense, and those quirks are given names like “twin paradox” and “barn-pole paradox” but the theory has stood the test of time for over a century. It’s also one of the few “advanced” topics that can be approached without much mathematics. I’ve tried to provide an introduction in this article.

The gist of the barn-pole paradox is that relativity of simultaneity causes the person holding the pole to measure the front and the back doors to open and close at different times, while the person standing still in the barn measures them opening and closing simultaneously. It’s bizarre, even infuriatingly nonsensical. But it’s got experimentally testable consequences: GPS devices wouldn’t work correctly if special relativity was wrong, because they need to take time dilation of the satellite network into account to calculate the position of the GPS receiver in your car. (A separate correction accounts for general relativistic effects.)
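
As a sanity check on that GPS claim, the special-relativistic half of the correction is a one-liner (using the standard ~3.87 km/s orbital speed of the GPS constellation):

```python
c = 299_792_458.0  # m/s
v = 3_874.0        # m/s, approximate GPS satellite orbital speed
seconds_per_day = 86_400.0

# For v << c, gamma - 1 is approximately v^2 / (2 c^2).
lag = (v**2 / (2.0 * c**2)) * seconds_per_day
print(lag * 1e6, "microseconds/day")  # ~7 us/day slow
# General relativity adds a larger +45 us/day (weaker gravity aloft);
# left uncorrected, the net ~38 us/day drift maps to roughly 10 km/day
# of accumulated position error.
```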

A good example of a genuine paradox is the grandfather paradox. This kind of internal inconsistency prompts many physicists to be skeptical of time travel. But it’s nothing like the “apparent” paradoxes in special relativity.

Your explanation opens up a giant can of worms.

But if you don’t ask the questions, people generally don’t start to think of the answers… and they just keep accepting the high priests (peer reviewed) version of reality ;)

I’m not telling you to stop asking questions. I’m just saying that I think your proposal conflicts with nearly every experimental result that I’ve seen.

I’ll try to answer your questions as best I can, but the reality is that I’ve got serious problems with my research and I’m wondering if I’ll be able to graduate after all… I should really be working on my program right now.

Apparently plasma effects scale really well (electric machining micro craters to craters on the moon – I know at this point you may not accept that – or planetary Lichtenberg figures perhaps but I’m dying here due to lack of sleep…). And plasma doesn’t just ‘cancel out’ charges – see the Birkeland currents / plasma sheaths for starters.

Yes, I’m aware that electromagnetic phenomena exist. It’s just that as far as I can tell, the electric universe is saying that electromagnetism is responsible for: the shape of Saturn’s rings, the light from the Sun, the shape of the galaxy, all the craters in existence, etc. I don’t see any reason to think that any of the currently accepted explanations for these phenomena are fundamentally wrong, let alone all of them. Furthermore, in order to fix these “problems,” they’re postulating the existence of huge charges and voltage differences between planets and stars that just don’t make any sense to me.

You need billions of years for:

  1. Evolution
  2. Gravitationally based solar system stability
  3. Geological weathering through water & wind

That ties biology, geology & astrophysics. The weight against saying such & such an event happened in a much shorter time frame in one field is caused by the other two. However, in my opinion (iamadumbnonscientist) all 3 fields could be reduced to a shorter timespan through the electric universe concepts & creation/ID.

At the cost of having to explain (among many other things):

  1. Primordial element abundances such as the 25% abundance of helium-4, which is a specific prediction of Big Bang nucleosynthesis.
  2. The WMAP cosmic microwave background. Why is it so isotropic (the same in every direction) down to the 10^-5 level? Why does its angular power spectrum indicate an age for the universe that’s 13.7 billion years, plus or minus 200 million? Why does that age fit neatly within the independent 10-20 billion year estimates from analyzing Hubble recession, dating the oldest stars in globular clusters, and calculating the necessary time for hot gas in clusters and white dwarfs to cool?
  3. Isochron dating results of old rocks, which depend only on nuclear decay rates being constant in time. Isochron dating isn’t dependent on initial quantities of elements, and the analysis method automatically produces error bars on the obtained age. The oldest rocks we have agree that the Earth is 4.55 billion years old, plus or minus 100 million years or so. (The arithmetic is sketched just below.)
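
Here’s a minimal sketch of that isochron arithmetic (the Rb-87 decay constant is the published value; the mineral ratios are made up to lie on a perfect 4.5-billion-year line, just to show how the slope encodes the age):

```python
import math

def isochron_age_yr(parent_ratios, daughter_ratios, decay_const_per_yr):
    """Least-squares slope of daughter/stable vs. parent/stable ratios,
    converted to an age via slope = exp(lambda * t) - 1."""
    n = len(parent_ratios)
    mx = sum(parent_ratios) / n
    my = sum(daughter_ratios) / n
    slope = (sum((x - mx) * (y - my)
                 for x, y in zip(parent_ratios, daughter_ratios))
             / sum((x - mx) ** 2 for x in parent_ratios))
    return math.log(1.0 + slope) / decay_const_per_yr

lam = 1.42e-11                       # Rb-87 -> Sr-87 decay constant (/yr)
slope = math.exp(lam * 4.5e9) - 1.0  # ~0.066 for a 4.5 Gyr rock
p = [0.5, 1.0, 2.0, 4.0]             # 87Rb/86Sr, one value per mineral
d = [0.700 + slope * x for x in p]   # 87Sr/86Sr, same initial ratio

print(isochron_age_yr(p, d, lam) / 1e9, "Gyr")  # -> 4.5
```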

Just to be clear, we can’t be sure that nuclear decay rates are exactly constant. But experiments have placed constraints on the size of any variation in decay rates:

  1. Supernovae produce many radioactive elements which slowly decay after the explosion. At first they shine brightly in a spectroscopically unique manner, but over the course of several weeks they fade to half their previous brightness. The amount of time it takes the brightness to fade is a direct measurement of the nuclear decay rate (the arithmetic is sketched after this list). The best example is supernova 1987A, which lies ~169,000 LY away. That means that when scientists looked at that light in 1987, they were measuring the nuclear decay rate as it was around 169,000 years ago. The results were experimentally indistinguishable from current decay rates, and have been confirmed by similar experiments on SN1991T, which is 60,000,000 light years away.
  2. The Oklo natural nuclear reactor left evidence that can be used to determine the fine structure constant and neutron capture rates, both intimately entwined with quantum mechanics’ predictions of nuclear decay rates. This experiment is more ambiguous and as a result the error bars are much larger than the SN1987A constraint, but it’s also consistent with a constant nuclear decay rate. Since the Oklo reactor was active 1.8 billion years ago, the Oklo evidence only supports a change in the fine structure constant of one part in 10 million over that timespan.
  3. The increase in nuclear decay rates necessary to increase the “apparent age” of rocks from thousands to billions of years is enormous. This decay rate would make all the mildly radioactive elements in the Earth decay faster, releasing enough heat to melt the crust. It would still be molten to this day unless God made a cosmically sized refrigerator to cool it down fast enough to fit into the creationist timeline.
  4. Any change in nuclear decay rates would have to affect all types of nuclear decay identically, otherwise isotopes that decay by different mechanisms (alpha, beta, neutron emission, etc.) would’ve decayed at different rates. If these rates changed differently, it would cause isochron dates of the same object but using different isotopes to disagree. To the best of my knowledge, that’s never happened.
  5. If nuclear decay rates have changed, then why do ice cores like the one taken at Vostok, Antarctica show agreement between annual layer counts and isochron age? A change in nuclear decay rates wouldn’t affect the annual temperature fluctuations that form the basis of the annual layer counts, so the two different methods of dating the same (~400,000 year old) ice core should be different. They aren’t.
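
As promised above, here’s how a half-life falls out of two points on an exponentially fading light curve (the data points are made up to follow cobalt-56’s 77.3-day half-life, the isotope that powers a supernova’s late-time tail):

```python
import math

def half_life_days(day_a, brightness_a, day_b, brightness_b):
    """Half-life from two points on an exponential tail:
    B(t) = B0 * 2**(-t / t_half)."""
    return ((day_b - day_a) * math.log(2.0)
            / math.log(brightness_a / brightness_b))

# Brightness drops by a factor of 4 over 154.6 days:
print(half_life_days(100.0, 1.0, 254.6, 0.25))  # -> ~77.3 days
```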

If all these concerns can be adequately explained by a young Earth model, I’d be interested to hear about it.


Written by Marble on March 27, 2009 at 12:05 AM

Ahh… nice. All good stuff.

Ok – for brevity (and the sake of your PhD) I propose to drill down on only two items for now (if you have any time remaining to waste).

  1. Crater creation – which is one of the most personally compelling arguments for me and both sides can claim reproducibility to some extent.
  2. Speed of light variance outside of the solar system – mainly because this is key to my understanding of some concepts I have under development – so I really want to be convinced that this can be ruled out.

Currently at work – so I’ll digest your replies this evening hopefully. This was just to let you know how I was thinking of proceeding. I had been toying with the idea of getting a physics degree (particularly plasma physics and electromagnetic radiation) – but that costs money and I’ve got a family to support.

On the side – You saw the ‘unexpected’ supernova event report on slashdot today? (1st article when I opened it at lunch time funnily enough). Not saying that theories have to predict everything – but this isn’t by any stretch an isolated incident in astronomy… which is what contributes to my skepticism that it’s hanging together as well as you portray / been portrayed to you.

You’ve also got me wondering if people/scientists are lulled into a false sense of security by error bars too. You were saying previously that the distance error was within some x% – but then Andromeda has just turned out to be twice as large (unrelated calculations I realize) – but how big were the error bars on the initial estimate for Andromeda? I’m guessing they weren’t 100%. But I think you’ll obviously agree that ultimately error bars give no guarantee of true error – just of known error within the calculation. So I’m just saying that error bars in themselves give no real assurance of ongoing predictability of the theory. Obviously if all represented data fits within them – fantastic. But you can make a mathematical function for any set of arbitrary data and have very small error retrospectively – but its predictability will be up the creek (overfitted models basically).
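
Just to illustrate that overfitting point with a quick sketch (ten made-up noisy points from a straight line; the degree-9 polynomial threads every point exactly, then fails the moment it has to predict):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = x + 0.05 * rng.standard_normal(10)  # noisy samples of a straight line

line = np.polyfit(x, y, 1)    # 2-parameter model: small retrospective error
wiggle = np.polyfit(x, y, 9)  # 10-parameter model: zero retrospective error

# Both "fit" the data; only one predicts. Step just outside the data:
print(np.polyval(line, 1.2))    # ~1.2, as a straight line should give
print(np.polyval(wiggle, 1.2))  # typically far off -- the overfit explodes
```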


Written by Dumb Scientist on March 27, 2009 at 6:45 AM

a) Crater creation – which is one of the most personally compelling arguments for me and both sides can claim reproducibility to some extent.

Frankly, I’m not the right person to ask regarding craters. I was able to find references to this work, but I’ve never studied the equations governing supersonic shock waves. Until I do, I’m completely ignorant of this subject and can’t help you beyond showing you where to continue your research.

b) Speed of light variance outside of the solar system – mainly because this is key to my understanding of some concepts I have under development – so I really want to be convinced that this can be ruled out.

I’m much more familiar with optics and relativity, so discussing this subject will be more productive.

Currently at work – so I’ll digest your replies this evening hopefully. This was just to let you know how I was thinking of proceeding. I had been toying with the idea of getting a physics degree (particularly plasma physics and electromagnetic radiation) – but that costs money and I’ve got a family to support.

Considering the questions you’re asking, that would probably be the only way to find rigorous answers. Keep in mind that plasma physics is a highly advanced topic, so you’ll need a 4 year degree in general physics, then at least a year of graduate courses in electrodynamics. Jackson’s textbook is standard, but mastering Griffiths first is highly recommended.

Then you’ll get to the plasma physics classes. I don’t want to discourage you, but I feel the need to be honest about the difficulty of the task in front of you.

On the side – You saw the ‘unexpected’ supernova event report on slashdot today? (1st article when I opened it at lunch time funnily enough). Not saying that all theories don’t have to predict everything – but this isn’t by any stretch an isolated incident in astronomy… which is what contributes to my skepticism that it’s hanging together as well as you portray / been portrayed to you.

Yes, we’ll always be surprised by something. Our theories will always have flaws. Once that stops being true, physics will be rather boring.

I think it’s important to gauge how big these flaws are, relative to how many phenomena are satisfactorily explained by the theory in question. Isaac Asimov wrote a great essay on this subject.

You’ve also got me wondering if people/scientists are lulled into a false sense of security by error bars too. You were saying previously that the distance error was within some x% – but then Andromeda has just turned out to be twice as large (unrelated calculations I realize)

  1. I’m a physicist, not an astronomer. And apparently I can’t read.
  2. I explicitly made those error bar estimates vague because I was pulling them off the top of my head (I never got proxy servers working, so I need to go into work to access the journals, and a blizzard just hit my town…)
  3. The mass of our galaxy (not Andromeda) was apparently off by a factor of 2. That’s because we have to peer through the Milky Way’s dusty central bulge to see the other side, and the parallax measurements are very tricky. Other galaxies (such as Andromeda) are much easier to analyze.
  4. There’s a big difference between 2 and 1,000,000. And I was totally wrong to say the distance was off by a factor of 2 anyway!

– but how big were the error bars on the initial estimate for Andromeda? I’m guessing they weren’t 100%. But I think you’ll obviously agree that ultimately error bars give no guarantee of true error – just of known error within the calculation. So I’m just saying that error bars in themselves give no real assurance of ongoing predictability of the theory. Obviously if all represented data fits within them – fantastic. But you can make a mathematical function for any set of arbitrary data and have very small error retrospectively – but its predictability will be up the creek (overfitted models, basically).

  1. It’s true that all error bars assume some underlying model is true. Scientists try to make this underlying model as simple and general as possible. For example, errors are usually assumed to be drawn from a so-called normal distribution (also called a Gaussian distribution).
  2. Error bars generally express uncertainty in terms of “sigma” or the standard deviation of those errors. So when a scientist says “plus or minus 10%” she’s probably using a 1-sigma error bar, which means something very specific. It means that given the data and the finite precision of her instruments, there’s a 68% chance that the quantity being measured lies within 10% of the stated value. Of course, that means there’s a 32% chance that the quantity actually lies outside “1 sigma” or “1 standard deviation” error bars.
  3. If you look carefully at the normal distribution, you’ll see that the probability of the actual quantity being within 2 standard deviations is 95.45%. Similarly, it’s 99.9999999997440% certain that the actual quantity is within 7 standard deviations.
  4. Your million-fold alteration of physics isn’t impossible, just very unlikely. So unlikely, in fact, that I can’t calculate the probability that the actual value of the speed of light (or the age of the universe, or the distance to the stars or galaxies) is outside 1,000,000 standard deviations. This calculation needs to be performed in arbitrary precision arithmetic, because even 64-bit double floats aren’t going to be precise enough to hold such a small number…
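
Here’s that last point as a minimal sketch, using Python’s arbitrary precision mpmath library (my choice of tool; 64-bit floats underflow long before a million sigma):

```python
from mpmath import mp, mpf, erfc, sqrt

mp.dps = 30   # thirty decimal digits of working precision

def outside_k_sigma(k):
    """Probability that a normally distributed value falls outside +/- k sigma."""
    return erfc(mpf(k) / sqrt(2))

print(1 - outside_k_sigma(2))       # ~0.9545, as quoted above
print(1 - outside_k_sigma(7))       # ~0.999999999997440, as quoted above
print(outside_k_sigma(mpf(10)**6))  # absurdly small, yet representable here
```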

Update: I’ve failed to communicate once again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again and again.

Last modified August 11th, 2016.

170 Responses to “A conversation regarding the “electric universe””

  1. I’ve put some more thought into the idea that light travels faster outside our solar system. In order to allow us to see objects 13,000,000,000 light years away in the 6,000 years allowed by young earth creationism, light would have to travel ~2,000,000 times faster outside our solar system. But some young earth creationists assert that the universe is ~10,000 years old, so I’ll assume Marble means that light travels a mere 1,000,000 times faster in the outside universe.
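
    Here’s that arithmetic as a quick sketch (Python, using the round numbers above):

    ```python
    farthest_light_years = 13_000_000_000   # most distant visible objects, in light years
    for universe_age_years in (6_000, 10_000):
        speedup = farthest_light_years / universe_age_years
        print(f"{universe_age_years:>6} year old universe: light must travel {speedup:,.0f}x faster")
    ```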

    Total internal reflection imposes the strictest constraint on the size and geometry of the boundary layer between the external universe (where light travels a million times faster than we measure) and the region containing the solar system (where light travels at 299,792,458 m/s). If the boundary reflected any light from the sun, the solar system would be a super-heated pressure cooker due to all the trapped radiation. A gradual boundary layer like the one in a moth’s eyes would reduce this problem, but most boundaries reflect more light at higher incidence angles. A spherical boundary layer centered on the sun would minimize these reflections, so let’s examine the physical consequences of that particular geometry of the boundary layer.

    A point source in the exact center of a glass marble won’t undergo total internal reflection. But the sun isn’t a point source– it’s 700,000 km in radius. Using the small angle approximation, that means the boundary has to be at least 1,000,000 times larger than the sun so that light from the edge of the sun isn’t totally internally reflected. (A boundary this small would risk totally internally reflecting x-rays from the corona, but let’s ignore that problem for brevity’s sake.)
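
    Here’s that small-angle estimate as a minimal sketch (Python; the index of 1,000,000 and the solar radius are the assumptions stated above):

    ```python
    import math

    n_boundary = 1_000_000   # assumed index of refraction inside the boundary
    r_sun = 7.0e8            # solar radius in meters

    # Critical angle for total internal reflection at the boundary:
    theta_c = math.asin(1.0 / n_boundary)   # ~1e-6 radians

    # A ray from the sun's edge strikes a sphere of radius R at roughly r_sun / R
    # (small angle approximation), so avoiding TIR requires R > r_sun / theta_c.
    r_min = r_sun / theta_c
    print(f"minimum boundary radius: {r_min:.1e} m (~{r_min / r_sun:,.0f} solar radii)")
    ```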

    A spherical boundary layer with an index of refraction of 1,000,000 would focus all EM radiation into the center of the sun. That means the earth can’t drift into the focus and be fried by radiation. But the extrasolar radiation would be intensified by a factor of ~20,000,000 at the earth’s orbit if the boundary is 1,000,000 times larger than the sun.

    Also, fish see the world above the water through “Snell’s Window.” The fact that light travels 25% slower in water than air refracts light from the sky as it enters the water, compressing the entire sky into a circular window 96 degrees wide.

    This effect will also occur in your scenario, but it will be much more pronounced because light travels a million times slower inside the boundary. Again using the small angle approximation, the “Snell’s Window” for your boundary would be 0.000000035 degrees wide. So light from the outside universe would be concentrated into a tiny, ultrabright spot in the sky directly opposite the sun.
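
    The critical-angle arithmetic behind Snell’s Window, sketched for a simple flat boundary (the spherical, gradual boundary in this scenario distorts things further, so treat these widths as rough):

    ```python
    import math

    def snells_window_degrees(n_slow, n_fast=1.0):
        """Full width of Snell's Window when looking out of a slow medium."""
        return 2 * math.degrees(math.asin(n_fast / n_slow))

    print(snells_window_degrees(1.33))       # water: ~97 degrees
    print(snells_window_degrees(1_000_000))  # hypothetical boundary: ~0.0001 degrees
    ```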

    Finally, cosmic rays in the outside universe would be limited by the higher speed of light in that region, otherwise special relativity needs to be rewritten. The particles traveling faster than “our” speed of light would probably emit Cerenkov radiation upon crossing the boundary layer into our region, which would cause the entire sky to shimmer.

    • A friend just reminded me that there are other questions raised by your proposal. The speed of light is tied to the permittivity and permeability of free space. Which is changing, and how? This would alter electrodynamics; wouldn’t the spectra of other stars look different than the sun’s?

      • arationofreason posted on 2014-11-20 at 11:27

        Latest astronomy indicates that there are streams of plasma in ‘free’ space. Since light always seems to slow down when traversing any physical medium, perhaps this interstellar medium is slowing down light as a function of distance, rather than an accelerating universe causing greater velocity with distance. Would this mean that we don’t need ‘dark energy’?

      • Citation-free assertions of interstellar media don’t apply to intergalactic measurements, or to analyses of the CMBR like WMAP.

  2. Marble posted on 2009-04-08 at 08:28

    I’ve been going through the crater formation links you provided and have been mulling them over. I must admit that the initial cement crater image is a good lookalike for some moon ones (although it’s a shame the image quality is so poor in that document). I have a couple of thoughts that I’m still pulling together on them and the other link that I’ll share soon.

    Here’s a couple of questions for you that you’ll have to suspend your disbelief filter to answer. I’ve been pondering these over the last couple of days – so I’d be grateful if you think about them seriously (and I suspect you will). I’ll mention now that this is the start of my 1,000,000-fold alteration of physics but with these questions – you’d expect nothing less ;) I think the theory should be easily falsifiable and of massive scientific & commercial importance if demonstrated to have merit. Text removed at Marble’s request.

    BTW – I didn’t know this before tonight…

    Anomalous increase of the AU

    The orbits of the planets are constantly expanding outward from the sun because the sun is losing mass by radiating away energy[3], which has led to calls to abandon the AU as a unit of measurement[4]. Now recent measurements indicate that the secular increase of the AU is larger than can be accounted for by solar mass loss[5][6].

    • Reply text removed at Marble’s request.

      The truth is I don’t have enough time to look into these questions in detail. I’m struggling to phrase an answer to your last ID comment regarding HGT (it will probably require a new article) in between my research obligations. I also just wasted an entire day arguing climate change on slashdot, and this obsessive personality quirk is something I need to learn to restrain or I won’t have any time left for the stuff I’m getting paid to look at.

      Anomalous increase of the AU

      Yes, there are quite a few strange things happening in the solar system. The “Pioneer anomaly” suggests that gravity may behave differently at extremely large distances (a type of MOND, I suppose, but the evidence is ambiguous), and there’s the “flyby anomaly”, which is even weirder.

      • Marble posted on 2009-04-08 at 17:34

        All I’m asking for is the 1 minute to read it, say ‘rubbish’ or ‘there’s too many holes’ and that can be the end of it. Text removed at Marble’s request.

      • Text removed at Marble’s request.

        Reply text removed at Marble’s request.

        All I’m asking for is the 1 minute to read it, say ‘rubbish’ or ‘there’s too many holes’ and that can be the end of it.

        I think you’re seriously underestimating the time it takes to properly analyze these ideas. And as you’ve seen, I’m chronically unable to say ‘rubbish’ and leave it at that.

        Having said that, if you can summarize the ideas in a half-page or so, I’ll think about them. I can’t promise an immediate reply, but neither will I share it with anyone.

        I feel compelled to note that you’ll have to directly address the predictions I’ve made based on analyzing the boundary layer in order to hold my interest. You’ll have to show why “fast light outside the solar system” doesn’t lead to those physical consequences. Otherwise a simple glance at the night sky confirms that the speed of light is uniform.

        Maybe I’ve made some hidden incorrect assumptions in deducing those predictions, but I can’t find them. Refraction is all that’s being relied upon for most of the predictions, and refraction is a direct result of conservation of energy at the boundary layer. A spherical geometry of the boundary layer would imply a blank sky except for a solitary source of deadly radiation in the night sky directly opposite the Sun. Perhaps another geometry of the boundary layer would look slightly more normal, but I can’t think of any configuration that would match what my naked eye can see.

      • Marble posted on 2009-04-08 at 20:43

        Thanks.

        Hopefully it will hold your interest on its own merit.

        It does directly relate to the variability of light and yes – I agree, the concerns you raised need to be addressed. On that I’ll just note that there is no distinct boundary layer as such. I’ll be arguing that light speed (in a vacuum) is directly proportional to the absolute value of gravity at that point (the absolute summing of the scalar components of gravitational vectors). So, for example, I’m suggesting that the speed of light at Lagrange points will be slower because while the gravity vectors cancel out, it’s the absolute value that dictates the speed of light. (Note I’m not referencing the Doppler effect at all). So basically light speed is faster the further away it is from the (absolute) gravitational effect of mass.

        Falsifiable prediction #1: The speed of light (and hence atomic clock ticking) is slower at a Lagrange point.

        Falsifiable prediction #2: Atomic clock ticking will be appropriately slower at a Lagrange point, whereas relativity (I think – correct me if I’m wrong) would say that it should tick faster due to the low gravity environment.

        Falsifiable prediction #3: e=mc^2 will also reflect the lightspeed change etc.

        Of course of more interest / more importance is how gravity works – and my justification for why gravity and light are interrelated (yes – I think I can unify gravity & EM somewhat more than just the Doppler effect) as I have outlined above. I want to take a little longer to put down, but I haven’t seen it postulated as a theory on the web (at least in a format I understand). And (of course) it doesn’t need to appeal to complex mathematics for the basic concept of how it works.

        So anyway – I’m not sure if that alters some of your objections to the internal reflection argument. I’ve been trying to visualize the effect of internal reflection (and even the focusing of star light at the sun – that was a good one) and how that would look in the scenario above – but I think I’m going to have to put down some drawings and plot lines.

      • Thanks. Hopefully it will hold your interest on its own merit.

        You’re right, that was interesting. It’s definitely falsifiable science, too.

        It does directly relate to the variability of light and yes – I agree, the concerns you raised need to be addressed. On that I’ll just note that there is no distinct boundary layer as such.

        Which eliminates the problem of total internal reflection, but it’s a close enough analogy to the gradual lensing of the earth’s atmosphere that I don’t think any of the refraction related effects will change significantly.

        I’ll be arguing that light speed (in a vacuum) is directly proportional to the absolute value of gravity at that point (the absolute summing of the scalar components of gravitational vectors).

        Okay, so you’re talking about a general effect that applies to all stars. I previously thought you were singling out our sun alone for the region of slow lightspeed. In that case, all the weird effects I’ve described would be visible around other stars. Nearby stars would focus light from other, more distant stars in extreme ways (in the case of a slowdown of 1,000,000 around Earth relative to interstellar space).

        We do observe gravitational lensing around galaxies and (occasionally) black holes that drift directly in front of farther stars, but that’s a direct effect of the fact that general relativity bends light without slowing it down. It’s also a much smaller effect than a lens with an index of refraction of 1,000,000 would cause.

        I’m a little confused as to what you mean when you say absolute summing. Do you mean taking the lengths of all the vectors and adding them up separately? That would result in a value that would be positive everywhere (because it doesn’t depend on the direction those vectors are pointing, so doesn’t allow for cancellation).

        Falsifiable prediction #1: The speed of light (and hence atomic clock ticking) is slower at a Lagrange point.

        I think I have my answer: you probably do mean that the speed of light at any point is affected by the sum of all the lengths of the gravity vectors at that point, completely ignoring their directions. Perhaps the physical speed of light is determined by taking a theoretical “gravity-less speed of light” (which we don’t know because we’ve never measured lightspeed in a region with absolutely no gravity) and dividing that number by the sum of all those gravity vector lengths, which would imply that the speed of light is essentially infinite when you’re very far away from any massive body. The rest of my reply assumes that this paragraph is a correct summary of your idea.

        NASA has been sending probes into space for decades, and they’re linked to earth by radio signals. The delay in those radio signals is used to calculate an astounding number of subtle effects, and the delay is independently cross-checked with the Doppler-induced phase shift of the radio frequency.

        Anomalies have appeared, such as the “Pioneer anomaly” and the “flyby anomaly” (no time for links, but googling those terms should be effective). But they’re very small errors, and if I recall correctly they don’t show the type of effect you’re predicting. That’s because your effect would show up every day in missions like Cassini. Cassini regularly flies in between us and Lagrange points, and that means the radio delay would anomalously jump by some value because the speed of light would be lower in the region the radio signal passed through to get to us.

        I don’t think that’s the best test, though. That’s because Lagrange points are only special because the two strongest gravity vectors point in roughly opposite directions. Points slightly outside of Lagrange points have very similar absolute sums of the lengths of those gravity vectors, so your proposed mechanism has little to do with Lagrange points. Plus, gravity vectors only partially cancel even at Lagrange points– centripetal acceleration plays a large role which is more evident in L4 and L5 than in L1.
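
        To see how little the absolute sum changes near L1, here’s a toy sketch (Python, rounded constants, Sun and Earth only, points along the Sun–Earth line):

        ```python
        G = 6.674e-11                         # gravitational constant, SI units
        M_SUN, M_EARTH = 1.989e30, 5.972e24   # masses in kg
        AU = 1.496e11                         # meters
        L1 = AU - 1.5e9                       # L1 is ~1.5 million km sunward of Earth

        def abs_g_sum(r_from_sun):
            """Sum of the magnitudes of the solar and terrestrial accelerations."""
            g_sun = G * M_SUN / r_from_sun**2
            g_earth = G * M_EARTH / (AU - r_from_sun)**2
            return g_sun + g_earth

        for r in (L1 - 1e8, L1, L1 + 1e8):    # 100,000 km to either side of L1
            print(f"{(r - L1)/1e3:>+9,.0f} km from L1: sum|g| = {abs_g_sum(r):.4e} m/s^2")
        ```

        The three sums differ by less than a percent, so nothing about L1 itself stands out under this proposal.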

        A better test would be to say that Cassini signals that pass near Jupiter on its way back to earth would provide different delays than if it was at the same distance, but without a massive planet in between us. I don’t think that’s happened, but I haven’t checked in detail.

        Falsifiable prediction #2: Atomic clock ticking will be appropriately slower at a Lagrange point, whereas relativity (I think – correct me if I’m wrong) would say that it should tick faster due to the low gravity environment.

        Off the top of my head, I’m not sure. I’m familiar with special relativity, but general relativity is much more complicated and would require some deep thinking.

        So anyway – I’m not sure if that alters some of your objections to the internal reflection argument. I’ve been trying to visualize the effect of internal reflection (and even the focusing of star light at the sun – that was a good one) and how that would look in the scenario above – but I think I’m going to have to put down some drawings and plot lines.

        It eliminates the total internal reflection objection, but all the other refraction related effects should be unchanged. It would take a while to calculate the changes to my predictions based on this new gradual effect, but I don’t see them changing significantly.

      • Marble posted on 2009-04-08 at 21:48

        Just a thought – how would the lensing affect parallax measurements of stars? I’m still of the opinion that the speeding up of light causes us to overestimate how far the stars actually are. So also any calculations based on their ‘known’ distances are doomed to failure because of the constant c assumption. Likewise with probes & satellites – aren’t the distance measurements to those relying on a constant speed of light too? (Radar/radio signals etc) Anyway – don’t spend too much time on that. Part 2 is yet to come.

      • Just a thought – how would the lensing affect parallax measurements of stars? I’m still of the opinion that the speeding up of light causes us to overestimate how far the stars actually are.

        It would… kind of. The problem is that a speedup of 1,000,000 would produce such a bizarre night sky that I have trouble even thinking about it. Suppose, instead, that light only speeds up outside our solar system by 0.03% (same as the gradual air-vacuum boundary layer).

        This would indeed make stars appear to be farther away. That’s because parallax measurements create a triangle with the base being defined by the positions of earth at opposite sides of the sun, and the top being defined by the other star. The angle at the other star is tiny— less than a thousandth of a degree. The parallax measurement is only possible because our telescopes are precise enough to tell the difference between a 0.0000 degree angle (which would be infinitely far away) and a 0.0001 degree angle (which would be merely far away). This technique simply yields an angle of 0.0000– and thus a distance of infinity which is interpreted as “too far away to tell with parallax”– for objects that are very far away, such as objects that are entirely outside our galaxy.
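
        For scale, the standard relation is d = 1/p, with d in parsecs and p the annual parallax in arcseconds; a quick sketch (Python, using Proxima Centauri’s roughly 0.77 arcsecond parallax as an example):

        ```python
        PARSEC_M = 3.086e16       # meters per parsec
        LIGHT_YEAR_M = 9.461e15   # meters per light year

        def parallax_distance_ly(p_arcsec):
            """Distance implied by an annual parallax of p arcseconds: d[pc] = 1/p."""
            return (PARSEC_M / p_arcsec) / LIGHT_YEAR_M

        print(parallax_distance_ly(0.77))     # ~4.2 light years (Proxima Centauri)
        print(parallax_distance_ly(0.0001))   # tiny angles imply enormous distances
        ```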

        If light was faster outside the solar system, the light wouldn’t travel in a straight line from the other star. It would be focused by the gradual boundary layer. Our telescopes would record the direction the light arrived from after being bent, so the apparent triangle (as drawn by our scientists based on those bent rays) would have a smaller angle and imply that the star was farther away than it actually is. The real triangle (measured with straight lines) would have a larger angle, so the real distance to the star would be smaller.

        But even such a small (0.03%) increase in the speed of light would have baffling consequences. Increasing lightspeed above a small threshold (which I don’t have time to calculate) wouldn’t result in a triangle at all when the measurements were made at 6 month intervals. In other words, the star’s light would appear to be coming from two completely distinct places in the sky. For an extreme case, remember that a 1,000,000x increase implies that the star’s light would hit earth from completely opposite directions!

        This wouldn’t look even remotely like a parallax effect– it would clearly be a lensing effect. There’s a big difference between diverging light (which can yield a parallax distance), plane wave light (which is too far away for parallax) and converging light (which makes no astrophysical sense without a giant lens around the Sun).

        So also any calculations based on their ‘known’ distances are doomed to failure because of the constant c assumption. Likewise with probes & satellites – aren’t the distance measurements to those relying on a constant speed of light too?

        That’s a little complicated. The scientific definition of a “meter” was changed in 1983 to the distance that light travels in 1 / 299,792,458 of a second. But that doesn’t mean physicists have written this assumption into stone. You can find papers exploring speed of light variability in the journals (billions of times smaller than your variability, but variability nonetheless).

        Instead of removing this subject from the bounds of scientific discussion, it’s simply changed the language. In modern physics, speed of light variability is expressed as a variation in the index of refraction of free space. All experiments performed to date agree that the vacuum in every place we’ve looked has an index of refraction that is indistinguishable from “1.00…”

        So scientists performing these distance measurements are well aware that lightspeed may not be constant. The engineers working on the probe missions may not care about this rather esoteric debate, but what I’m trying to say is that a deviation from constancy would stand out in their data in an obvious manner. Like I said, probes that appear to pass behind Jupiter would send radio signals implying that the probes moved in ways not predicted by their gravitational calculations (because the radio signal would be focused by the change in lightspeed around Jupiter) and the radio delay would suddenly change (a direct result of the change in lightspeed around Jupiter).

        I’ll stress that these two anomalies are evidence that the engineers working on these probe missions aren’t making any foolhardy assumptions. I have enormous respect for the work they do, and I’m continually surprised by the subtle anomalies they’ve brought to the attention of the physics community. But I think gravity-dependent lightspeed would create totally different anomalies which haven’t been seen.

        That said, I do think it’s an interesting idea. It might even be true on a very small scale- lightspeed might change by something like a millionth as gravity decreases to zero in intergalactic space. My only real concern is with the scale you’re proposing for the increase in the speed of light, and the mind-twisting effects it would have on the night sky.

      • Marble posted on 2009-04-09 at 00:11

        BTW – I’m not attached to this 10^6 number for increasing c – I think that got munged into our discussion somewhere back a bit – but that wasn’t a figure I ever had in mind. I’d like to think that mon 838 nova is still dust expanding rather than a light echo for example – so whatever is required for that to be true is where I’d be heading.

        We’ve got the Easter holidays this weekend (4 day weekend) – so I should get that final part to you in the next couple of days.

      • All those bizarre effects would show up even in the case of a slowdown much less pronounced than the one you’re describing. Just pick any slowdown factor and redo the calculations (like Snell’s window) that I did in my first comment to prove this to yourself.

      • Marble posted on 2009-04-11 at 07:59

        Prologue

        Basically I’ve never been able to go past the fact that light waves behave so much like sound waves – I find it too hard to believe that light is anything but a wave propagating through some medium (at least too hard to believe when I’m not convinced the alternatives have been fully explored).

        But then you encounter the empirically confirmed behaviours of General & Special relativity. Which generally provide no ‘real’ explanation of their behaviour, but instead are more a mathematical description. I include space-time in general as this aspect is a more abstract invention to aid the calculations. And so while some people claim that the aether then falls to Occam’s razor, I disagree in that they are not fairly applying the rule. A mathematical description without an accompanying explanation of the underlying reality is only half the story and the razor must be stayed because it can only be applied when comparing like to like.

        Text removed at Marble’s request. These particles have a size on the same order of magnitude as electrons.

        A proposed alternative explanation for gravity, removed at Marble’s request.

        Moreover, I postulate that this aether described is also the medium that the light travels through as waves. The ‘quantized’ component of light and its particle nature coming from the discrete nature of the aether particle itself.

        A connection between the Casimir effect and aether, removed at Marble’s request.

        More Casimir effect and aether discussion removed at Marble’s request.

        As the medium supports light, it also is the source of magnetic fields. Text removed at Marble’s request.

        This is why magnetic fields ‘saturate’, as there is a limit on the number of aether particles in a particular area. Text removed at Marble’s request.

        I could go on with a number of other ‘high level’ explanations of other phenomena, and there’s a stack more interesting observations that can be made from this as a base. Text removed at Marble’s request. My favoured thought on this is that light is a longitudinal wave of aether particles Text removed at Marble’s request.

        Anyway – now you can see my thoughts on the variability of the speed of light. I’ve got more thoughts on all of this I’d like to burble on about – but I’ll leave it there for now.

      • Basically I’ve never been able to go past the fact that light waves behave so much like sound waves – I find it too hard to believe that light is anything but a wave propagating through some medium (at least too hard to believe when I’m not convinced the alternatives have been fully explored).

        Everyone else found it difficult to believe in the 1890s too, but Michelson-Morley showed that any aether theory has to be really weird. The aether has to be dragged by the earth so that our velocity isn’t apparent. It has to be the most rigid substance ever (wave speed increases with the medium’s rigidity) but not exert any force on the earth as it moves through the aether. Otherwise we’d have spiraled into the sun eons ago.

        Furthermore, speed of light tests performed across the solar system show no absolute velocity effects, which is essentially a repeat of the Michelson-Morley experiment, but across interplanetary distances. Dragging the aether is no longer as simple as Stokes and Miller once thought.

        Physicists abandoned the aether for good reasons nearly a century before I was born.

        But then you encounter the empirically confirmed behaviours of General & Special relativity. Which generally provide no ‘real’ explanation of their behaviour, but instead are more a mathematical description.

        I think that’s a misconception. Lorentz transformations contain nearly all the math of special relativity, and were developed as a “real explanation” of Michelson-Morley’s results. They claimed that nature conspired in just the right way to fake the Michelson-Morley result, squishing objects and making clocks run slow when moving through the aether.

        This math is identical in special relativity, but Einstein gave it a different “real” explanation: the structure of our universe is such that there is no aether, and no need for an absolute frame of reference. The speed of light is the only constant- every observer in the universe measures it to have the same value of 299,792,458 m/s. In order for everyone to measure lightspeed to be the same value, one aspect of Galilean relativity had to be altered.

        I don’t see how this explanation is any less “real” than Lorentz’s explanation. It’s less intuitive, but so is the fact that orbiting spacecraft need to slow down in order to speed up (yes, you read that right). Doesn’t make either fact any less true.
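
        For a sense of scale, the Lorentz factor governs both the squishing and the slowed clocks; a minimal sketch:

        ```python
        import math

        C = 299_792_458.0   # speed of light, m/s

        def lorentz_gamma(v):
            """Clocks slow and lengths contract by a factor of gamma."""
            return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

        print(lorentz_gamma(3.0e4))    # Earth's orbital speed: gamma ~ 1.000000005
        print(lorentz_gamma(0.9 * C))  # 90% of lightspeed: gamma ~ 2.29
        ```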

        Text removed at Marble’s request. These particles have a size on the same order of magnitude as electrons.

        Reply text removed at Marble’s request.

        Also, the size of the electron is a very tricky thing to define. Unlike a proton which is made up of 3 quarks and thus has substructure, experiments with electrons have never revealed substructure so they’re probably fundamental particles. As a result, electrons don’t really have a “size” in the normal sense of the word:

        • A free electron’s de Broglie wavelength at experimentally accessible momenta is much smaller than the wavelengths of visible light. That’s why electron microscopes have much higher resolution than light microscopes.
        • An electron bound to a proton has a characteristic size known as the Bohr radius.
        • The Compton wavelength describes the length scale at which non-relativistic quantum mechanics breaks down and relativistic quantum electrodynamics is necessary.
        • The classical electron radius isn’t defined by quantum mechanics but rather uses classical electrodynamics to estimate the size an electron would be if its mass were due entirely to electrostatic potential energy.
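
        For concreteness, here’s how those four length scales compare numerically (a sketch using scipy’s physical constants; the 10 keV electron energy is just an illustrative choice):

        ```python
        from math import sqrt
        from scipy.constants import h, hbar, m_e, e, c, epsilon_0, pi

        E = 10e3 * e   # a 10 keV electron, in joules (non-relativistic approximation)

        scales = {
            "de Broglie (10 keV)": h / sqrt(2 * m_e * E),
            "Bohr radius":         4 * pi * epsilon_0 * hbar**2 / (m_e * e**2),
            "Compton wavelength":  h / (m_e * c),
            "classical radius":    e**2 / (4 * pi * epsilon_0 * m_e * c**2),
        }
        for name, meters in scales.items():
            print(f"{name:20s} {meters:.3e} m")
        ```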

        A proposed alternative explanation for gravity, removed at Marble’s request.

        Scott Adams once said that there is no gravity, it’s just that all the objects in the universe double in size every second, so when we jump into the air the expanding earth pushes against our expanding bodies in such a way that we appear to fall down to the ground. We don’t notice this constant expansion because everything grows at the same rate, including our rulers.

        His idea was cute, and it was interesting to play around with it for a couple of minutes. But he didn’t seem to realize that gravitational physics is 300 years old. He not only needed to explain how that resulted in an inverse-square approximation, but he also needed to explain Lagrange points, the precession of Mercury’s orbit, and the orbital decay of binary pulsars due to energy loss from gravitational waves. He also needed to explain the gravitational deflection of starlight, and why it matches general relativity rather than a combination of Newtonian gravity and special relativity.

        A connection between the Casimir effect and aether, removed at Marble’s request.

        Reply text removed at Marble’s request. Since the Casimir effect only works with conductive matter, it seems like these new particles of yours depend on the conductivity of the matter Reply text removed at Marble’s request.

        Experiments have ruled out that idea.

        More Casimir effect and aether discussion removed at Marble’s request.

        That’s very similar to the mainstream explanation, except virtual photons aren’t able to exist inside the gap because of Maxwell’s equations. It doesn’t require new particles, and the conventional explanation accounts for the conductivity dependence.

        Moreover, I postulate that this aether described is also the medium that the light travels through as waves. The ‘quantized’ component of light and its particle nature coming from the discrete nature of the aether particle itself.

        As the medium supports light, it also is the source of magnetic fields. Text removed at Marble’s request.

        This is why magnetic fields ‘saturate’, as there is a limit on the number of aether particles in a particular area.

        Saturation is just a ferromagnetic thing. It depends on the properties of matter, not space. For instance, magnetars have magnetic fields hundreds of thousands of times stronger than anything we’ve made so far.

        I could go on with a number of other ‘high level’ explanations of other phenomena, and there’s a stack more interesting observations that can be made from this as a base.

        Yes… I’m sure you could. But high level explanations are the “pointy haired boss” of the physics community. The pointy haired boss doesn’t know what those funny squiggles and semicolons on your monitor mean. They’re ridiculous gibberish to him, which is why he talks endlessly about anything but real, technical matters. He demands that programmers speak in simple English and avoid talking about the internals of their programs.

        This is okay in one sense because people shouldn’t need to understand pointers to use MS Word. But it’s dangerous to take this approach too far because eventually the pointy haired boss comes to believe that the high level explanations contain the same information content that the programmers get from their source code.

        They don’t, not by a long shot. Low level explanations are necessary, such as details of memory management techniques, or which algorithm will provide logarithmic rather than quadratic scaling for a database search. Even lower level explanations like the best method to code a search algorithm from scratch are sometimes necessary if you think that existing search algorithms aren’t good enough. But the pointy haired boss can’t possibly program that, because all he understands are the high level explanations. He just wants the program to be shiny and run fast, so he thinks it’s bizarre that those simpleton programmers can’t just finish it in an afternoon.

        A programmer can’t get away with that level of abstraction, though. If you need to search a database, you carefully look at all the search algorithms available at a low level. Then you (probably) choose one of those algorithms or (less likely) write one from scratch. What you don’t do is sit down at the computer for the very first time, grab an introductory copy of VisualBASIC, and start coding a brand new search algorithm for serious commercial use. You might out-do every other search algorithm on the planet if your name is Linus Torvalds, but it’s more likely that you’ll reinvent the wheel. A slightly crooked wheel, too, compared to modern algorithms.

        Don’t do that. Life’s too damned short to reinvent the wheel. Look at the last century of physics at a low-level perspective, and look at all the work that’s been done by people who had very similar ideas to yours. You were right to suggest that this will require a physics B.S., but it will likely require more than that- some of these topics definitely require graduate math and physics classes. This is a HARD task, but I’ve found it very rewarding (albeit after leaving many head-shaped holes in my wall because of Jackson’s homework problems…)

      • Marble posted on 2009-04-20 at 10:23

        Sorry it’s taken me so long to reply – but having said that I’m sure you’re putting your time to better use rescuing your PhD ;)

        Text removed at Marble’s request.

        Replying to one other of your points –

        It has to be the most rigid substance ever (wave speed increases with the medium’s rigidity) but not exert any force on the earth as it moves through the aether. Otherwise we’d have spiraled into the sun eons ago.

        Given that the planets’ orbits are unexpectedly increasing (earth’s at 10m/cy according to this article), we’re certainly in no danger of spiraling into the sun. So I’ll point out that any drag upon the earth via aether could be masked by this phenomenon. (As an aside – if I’ve got that unit right (cy = calendar year?), and making the large assumption of no change to the radial increase, 4 billion years ago the earth would have been at Venus’ current orbit. Life is postulated to have appeared around 3.5b years ago – so I guess extremophiles starting up gave more complex life a bit more breathing space, allowing the earth to come to a more hospitable distance… Ahh – but wouldn’t the oceans have been ‘boiled off’ before then? Surely they couldn’t form so close to the sun – even if the earth is forming from orbital particles – approaching ice particles would have been vaporized, right? – so where’s the primordial ooze? I didn’t realize they meant the ooze was molten lead ;) …sorry – got more distracted there than anticipated.)

        Correction by Marble – I now realize that cy stands for centuries – which means I’m a factor of 100 off on my orbital distance calculations. However, it will be interesting to compare the corrected orbital variation with the circumstellar habitable zone taking into account solar luminosity variation.

        Orbital resonance is also an interesting feature of the planets that may point to a drag of some description (otherwise you have a very weak electrical charge / dipole that keeps one half of the moon facing the other way etc – if you want to adhere to a previously more chaotic solar system within the last 10k years).

        For the sake of closing this already delayed email; I was about to suggest that e=mc2 is not a reflection of matter being created or destroyed Text removed at Marble’s request.

        As far as I can tell – this isn’t an already existing wheel created by somebody else (aether wind is though). I do keep debating within myself whether this is a worthwhile expenditure of time… being naturally drawn to solving problems (particularly those not yet solved by anyone – rather than those designed to be solved just for the sake of it) I’ll probably never abandon it completely unless all my angles of attack end up in defeat. The reality is I need to get cracking on my own software so I can put it to a more certain financially productive use. ;)

        Harking back to an earlier conversation – here’s the picture I had in mind when I said that it doesn’t matter how much math you throw at gravity – it won’t do ‘that‘. You may want to check the QuickTime link on that page as well. Of course I’ll admit that if you (not you specifically of course) could model that behavior in a simulation – I’d be convinced it was gravity doing that. My intuition about gravity thinks it’s unlikely though – but of course, it is fallible.

      • Marble posted on 2009-06-24 at 06:13

        BTW – I saw the slashdot article on the model-predicted gravitational effect of Saturn’s moons causing ripples in the rings. Sounds like another chink in the EU’s armour – however I’d love to understand why (as seen in this image) the waves don’t appear both before and after the moon on both the inner & outer rings at the same time. I.e. what makes the waves switch sides as the moon passes by? (Rhetorical question BTW).

  3. Marble posted on 2009-05-05 at 22:34

    BTW I’ve stumbled across the Bohm interpretation of QM via a forum thread and was quite happy to see these guys had already headed down the same conceptual path I am wont to go.

    I was very excited to see their explanation for the 2 slit experiment, because I had already come to a similar conclusion – although only a ‘pointy haired high level one’ of course – but effectively I would have explained it as something like a 3d standing wave effect (ψ-field they call it / pilot wave by another name I think) of aether particles caused by the emitter leading up to the expulsion of each photon / electron etc.

    I also came across Citation #2 (PDF) in wikipedia’s Bell’s Theorem entry, ‘An intuitive analogy for EPR experiments’, which is also very interesting and worth trying to wrap my head around. But I’m glad to see there are (more knowledgeable) people who aren’t fond of the ‘collapsing wave function’ view of reality, and even though the consensus view opposes them, I intuitively have to throw in with them. (Like I do with those pesky CO2 global warming skeptics ;)

    (Note by Marble: This conversation is an extract of a private email conversation held with Dumb Scientist and originally was not intended for publication. Subsequently, some parts that I didn’t want to see published in a public forum at this time have been removed at my request.)

    (Ed note: I opposed the text removals but eventually decided to release some of the conversation rather than none of it. During the course of those negotiations, I made the following comments which I believe motivated Marble’s question.)

    I’m going to have to think about this. Your version cuts out topics I had been collecting notes on for use in my reply… (Ed. note: I went on to describe topics that were removed.)

    I was uncomfortable with this lack of openness from the beginning, but this has thrown me into a moral quandary. I really don’t know what to say- you clearly put a lot of time into all this at my request, but if I let my principles slip even a little I’m not sure where that will lead me.

    I’m probably going to be leaving the mainstream scientific field even if I manage to graduate, but that’s because I’m appalled at the closed-source habits and the unspoken requirement to publish in journals that charge $20 per article per person rather than returning the knowledge as open-source software/papers to the people who funded it in the first place: the taxpayers.

    Dumb Scientist is my personal refuge from all that. I’ll have to put some serious thought into how much I want to compromise on the original vision in order to continue this conversation.

    • Marble posted on 2009-06-09 at 10:24

      From something in your last email – how/why do you view Open Science so passionately? I have a concern about open science being unwaveringly seen as exclusively a Good Thing. I definitely think people should be educated / critical thinkers – but for example, say truly strong AI is eventually created and is ‘open sourced’. Strong AI in my opinion would make unscrupulous governments near omnipresent/omnipotent with regards to controlling their populations, including censoring all media. Not to mention the impact it would have on warfare. Now I seriously do not want particular countries to get their hands on strong AI (ignoring MAD, which fails against suicidal attacks anyway), just taking into consideration their own oppressed populations. I’m not saying democracy is the be all and end all, obviously it isn’t – but I’d rather a democracy rise to global power over other forms of government because I think that’s the best bet of AI ending up being used altruistically. And for that to happen – (strong) AI at least shouldn’t be open science (and definitely not open source ;) ) Along those lines – I have similar concerns if a large enterprise invents AI (eg Sony for argument’s sake) which could easily then corner many many markets on their terms. I’m not sure how that’d play out in the end – but you’re relying on the benevolence of the shareholders then – who are generally in it for maximal profit anyway…

      • gkai posted on 2009-08-13 at 13:49

        Hum, a thought about strong AI: what are the chances of an oppressive (or a democratic – I think it would not make a big difference) government or a greedy corporation keeping a very intelligent (like, much more intelligent than them) slave in an obedient state? Hell, even a cooperative one? Just keeping it from doing whatever it wants?

        Slim, imho…

        True strong AI is, I think, synonymous with almost completely uncontrollable. The best one can hope is, at the initial building stage, to add a set of core instincts that will turn out to make humans look like enjoyable pets, or an interesting endangered species, or just so cute that they are impossible not to love, or like a slightly senile parent that nonetheless merits respect and love, and should be helped on account of what he did when we were small and defenseless (I talk from the point of view of the AI here). And hope that those instincts are sufficiently self-consistent, entertaining and bring more joy than constraints and discomfort, so that they are not worth being circumvented/rewritten…

        It could look like the strong AI is an exploitable prisoner at first. I do not see how this could continue for long if the intelligence gap is significant, i.e. it is a true strong AI qualitatively different from a team of very intelligent humans. Imho it has about as much chance as a pack of very intelligent wolves keeping a human slave to help them beat the other packs and devise clever tactics to bring those big moose down without getting gutted like last winter… Then, as he was very helpful, why not a wife and children, so that we really become the uber-pack… then…. oops, we have become dogs ;-)

  5. (Ed. note: these comments were copied from here.)

    Causality-wise, going faster than c by any method is impossible. [Chris Burke]

    I’ve already responded to the parent agreeing with most of your position. However, I think this statement goes a little far. FTL travel almost certainly implies time travel, but relativity only makes a preferred frame seemingly unnecessary given current observations. It doesn’t rule out a preferred frame altogether.

    Also, violating causality is a good reason to be suspicious of a phenomenon, but I don’t think it deserves the “impossible” label. Grandfather paradoxes are (IMHO) the only reason to doubt the possibility of breaking causality, and the many worlds interpretation of quantum mechanics eliminates these paradoxes.

    • However, I think this statement goes a little far. FTL travel almost certainly implies time travel, but relativity only makes a preferred frame seemingly unnecessary given current observations. It doesn’t rule out a preferred frame altogether.

      Relativity is predicated on the assumption that there is no preferred reference frame, and many of the laws of Relativity take the form they do precisely to ensure that this remains the case. If there was a preferred reference frame such that the laws of physics only had to apply to it, but causality could be broken elsewhere, then the Theory of Relativity would be very different.

      Also, violating causality is a good reason to be suspicious of a phenomenon, but I don’t think it deserves the “impossible” label.

      It’s impossible in the Relativistic Universe. It is always possible that Relativity is wrong, and that its assumptions are wrong. Hell, Newton was wrong about his basic and seemingly safe assumption that time was the same for all observers (and I hear he was even smart enough to recognize he was making that assumption and write it down). Maybe causality can be broken. The implications for physics would be profound.

      Things like Warp Drives are ways to get around the limitations of Relativity without it having to be “wrong” in the hypothetical universe. But it doesn’t work.

      Grandfather paradoxes are (IMHO) the only reason to doubt the possibility of breaking causality, and the many worlds interpretation of quantum mechanics eliminates these paradoxes.

      I’m going to wait until we unify QM and GR before I agree with that. It isn’t clear to me at all that having a loop in causality in your spacetime graph would necessarily mean you have a separate outcome from some quantum waveform collapse such that you are in a different universe at the “end” of the loop vs the “beginning”. There is nothing in Relativity (obviously) that would require that to be the case, and there wouldn’t be an “end” or “beginning” either.

      I’m also going to wait until we have some reason to actually prefer the many worlds interpretation over others before I agree with that. Just because it would be convenient for solving time travel paradoxes in a universe where FTL travel is possible doesn’t mean it’s actually true. Seems more likely (as in, agrees with current best theory) that time travel (and thus FTL) is simply impossible.

    • If there was a preferred reference frame such that the laws of physics only had to apply to it, but causality could be broken elsewhere, then the Theory of Relativity would be very different.

      Yes, but like Newtonian mechanics, modern relativity would still be a useful first order approximation. Plus, to the best of my knowledge relativity has only rarely been tested to more than first order effects.

      I’m also going to wait until we have some reason to actually prefer the many worlds interpretation over others before I agree with that. Just because it would be convenient for solving time travel paradoxes in a universe where FTL travel is possible doesn’t mean it’s actually true.

      I agree; at the moment there’s no watertight reason to prefer one interpretation over another. But the alternatives are hidden variables (local variables are ruled out by Bell inequality experiments and nonlocal variables would violate relativity), or the literal Copenhagen interpretation. (There are others, but these are the most popular.)

      The Copenhagen interpretation is almost certainly wrong. (My only correction to his list is that #6 also applies to the No Hair theorem.)

    • Yes, but like Newtonian mechanics, modern relativity would still be a useful first order approximation. Plus, to the best of my knowledge relativity has only rarely been tested to more than first order effects.

      Well what I mean is that the Theory of Relativity only looks anything like it does due to the absence of a preferred reference frame. Newton’s Laws are only a good approximation because their basic assumption that time is constant is approximately true for normal speeds/masses. It’s difficult to see how “There is no preferred reference frame” could be approximately true, but all the conclusions of Relativity stem from that being exactly true. Without that assumption it wouldn’t be even approximately like it is now. And it seems too inherent, too good an assumption, to be turned over but still have the theory hold any water at all. Much like the classical Conservation of Energy has survived the transition to GR and QM, I suspect “all reference frames are equal” will continue to hold true.

      On the other hand, I’ll admit I have a hard time imagining what a universe without causality would be like. Much like I’m sure people looking at Einstein’s theory had a hard time imagining a universe where time passed at different rates for different people.

      We have tested Relativity to the extent of our ability to measure. It’s extremely well tested. I’m not sure what second order effects you’re referring to, but if it’s possible for us to measure them, we have.

      I agree; at the moment there’s no watertight reason to prefer one interpretation over another. But the alternatives are hidden variables (local variables are ruled out by Bell inequality experiments and nonlocal variables would violate relativity), or the literal Copenhagen interpretation. (There are others, but these are the most popular.)

      We’re already violating Relativity by allowing FTL even in theory, so I have no problem with that aspect. :P

      The Copenhagen interpretation is almost certainly wrong. (My only correction to his list is that #6 also applies to the No Hair theorem.)

      I’ve always hated the Copenhagen Interpretation because it always seemed like a scientific argument based on puns.

  6. (Ed. note: These comments were copied from a conversation about extracting momentum from vacuum fluctuations.)

    Here’s an interesting question:

    Assuming this works, could the process be made reversible?

    This would result in a device that resists momentum change, instead absorbing kinetic energy and using it to rotate the nanoparticles (and I assume dumping the energy out as heat).

    This would be really useful for a lot of purposes. Spaceflight, naturally (anything that lets you play with momentum is useful in spaceflight); it gives you a brake that would cause your spacecraft to resist acceleration, which would be very useful for satellite stationkeeping. Around planets the devices would fall slowly; you might be able to use one as a parachute. You could install one on the tops of tall buildings to make them resist swaying in the wind. An aircraft with one installed would behave really oddly (possibly usefully). The possibilities are endless…

    • Unless I’m misunderstanding, the quantum wheel is just an efficient photon drive. That is, normal use results in excited vacuum states (photons) carrying momentum in the opposite direction as the spacecraft. So the reverse process would be like a solar sail that absorbs photons and spins magneto-electric nanoparticles. This seems like it would require an incoming stream of photons.

      What you’re describing sounds like two (or more) of these quantum wheels placed against each other, coupled to an accelerometer. Whenever the object begins to accelerate in a particular direction, the quantum wheel in that direction starts emitting photons to counteract the acceleration as much as possible. That would result in an object with enhanced inertia, but it would require a huge amount of power (~300 MW) to resist a force of 1 N.
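
      To make that 300 MW figure explicit: a photon carries momentum p = E/c, so an ideal photon emitter converting power P entirely into light produces thrust F = P/c. Here’s a minimal sketch in Python (the function name is just for illustration):

          # An ideal photon drive: each photon carries momentum E/c, so a steady
          # thrust F requires radiating power P = F * c.
          c = 3.0e8  # speed of light, m/s

          def photon_drive_power(force_newtons):
              """Radiated power (watts) needed to sustain a given thrust."""
              return force_newtons * c

          print(photon_drive_power(1.0))  # -> 3.0e8 W, i.e. ~300 MW per newton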

  7. Someone posted on 2010-03-28 at 19:10

    (Ed. note: these comments were copied from a conversation about a map of dark matter from Hubble.)

    Nice pretty picture… especially when you consider it’s a picture of something that very possibly doesn’t even exist.

    There isn’t any “scale bar” because you are not looking at something at any fixed distance! You are looking at (theoretically) blobs of stuff at various distances.

    • Abcd1234 posted on 2010-03-28 at 19:44

      It exists. Educate yourself.

      • Thanks. It’s worth noting that the Bullet Cluster results you linked to are only the most recent development in dark matter’s nearly 80 year history:

        1933 – Zwicky studies the Coma cluster of galaxies and is surprised to find that these galaxies are orbiting each other much faster than he predicted based on their visible mass. He proposes that each galaxy actually contains much more mass than is visible.

        1959 – Measurements of galactic rotational velocities conflict with expected velocities based on the amount of matter observed to be present. The dark matter concept proposed by Zwicky is found to solve this problem too.

        1970s – Big Bang nucleosynthesis has trouble reconciling observations of high deuterium density with the expansion rate of the universe. Non-baryonic dark matter solves this problem as well.

        At this point, dark matter was simply an hypothesis. MOdified Newtonian Dynamics (MOND) was another hypothesis with equal weight. But then in 2006 measurements of the Bullet Cluster supported the dark matter hypothesis over the MOND hypothesis.

        Simultaneously, WMAP (2001-present) measured the microwave background radiation and independently confirmed the existence of dark matter. It also revealed an even larger amount of “dark energy” which confirmed the 1998 discovery that the expansion of the universe is accelerating.
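
        To see why those rotation measurements were so surprising, compare the circular velocity predicted by the visible mass alone (which should fall off Keplerian-style beyond the luminous disk) with the roughly flat curves actually observed. Here’s a toy sketch in Python, not a fit to real data; the galaxy mass and the ~220 km/s flat velocity are round numbers chosen for illustration:

            import math

            G = 6.674e-11       # m^3 kg^-1 s^-2
            M_visible = 1.5e41  # kg; rough stellar mass of a large spiral (assumed)

            def v_keplerian(r):
                # If essentially all mass were interior to r, v would fall as 1/sqrt(r).
                return math.sqrt(G * M_visible / r)

            for kpc in (5, 10, 20, 40):
                r = kpc * 3.086e19  # kiloparsecs to meters
                print(kpc, round(v_keplerian(r) / 1e3))  # km/s: 255, 180, 127, 90

            # Observed curves instead stay roughly flat (~220 km/s for the Milky Way),
            # which is what an extended halo with M(r) proportional to r would produce.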

      • painandgreed posted on 2010-03-29 at 10:03

        At this point, dark matter was simply an hypothesis. MOdified Newtonian Dynamics (MOND) was another hypothesis with equal weight. But then in 2006 measurements of the Bullet Cluster supported the dark matter hypothesis over the MOND hypothesis.

        I don’t think you could really say MOND had equal weight until 2006. I got my physics degree in the late ’80s/early ’90s, and while MOND was often mentioned along with dark matter, it was usually as a footnote among other possibilities. Very few considered it a serious contender, as it was complicated and failed to describe all but ideal models. Physicists like elegant physics, which is the ability of simple equations to describe complex situations. MOND required complex equations to describe simple situations.

      • (Ed. note: actually posted on 2010-04-06 at 10:59, but my limited comment nesting puts this out of logical order if the actual date is used.)

        I finished undergrad in 2004, and my experience was similar to yours. I was being overly conciliatory to MOND because in my experience non-physicists aren’t swayed by issues of elegance and simplicity.

      • Someone posted on 2010-03-29 at 12:34

        However, just this last year it was found that prior surveys of background radiation had missed the mark widely. Further, there is strong new evidence contradicting the assumption that we are in a “typical” region of the universe, simultaneously calling into question whether expansion is actually accelerating.

        Gotta keep up with the news, man!

      • However, just this last year it was found that prior surveys of background radiation had missed the mark widely.

        Huh? What “mark” did WMAP or COBE miss widely? Says who?

        Further, there is strong new evidence contradicting the assumption that we are in a “typical” region of the universe, simultaneously calling into question whether expansion is actually accelerating.

        WTF? The 1998 supernovae measurements I cited were measured over billions of light years, and have been confirmed by more recent measurements with higher precision. They’re confirmed by WMAP’s measurements of the background radiation, which encompasses nearly the entire observable universe. They’re also confirmed by Chandra data of galaxy clusters.

        Update: Perhaps Someone read this paper and didn’t notice the word “if” in the abstract?

      • Someone posted on 2010-03-30 at 16:12

        My mistake. I was thinking microwaves, but it was gamma rays.

      • As you say, unidentified gamma ray sources aren’t relevant to the galaxy collision observations or dark energy acceleration observations in question. You’ve demonstrated true scientific spirit here by admitting a mistake. Kudos.

      • I am curious about something: since dark matter has to be strongly interacting enough to account for the anomalies observed in the rotation of galaxies (which is why it is presumed to be the majority of the gravitationally-interacting mass in the universe), how does that fit with the gravitation that we observe locally, in our solar system? And please don’t try to tell me that the gravitational interaction is too weak to notice on that scale, because if it were, then it would not have the requisite effect on galaxies, either. (If it interacted only weakly gravitationally, then there would have to be even more of it to have the observed large-scale effects, so that’s not an answer.) [Someone]

        The key is that most dark matter candidates don’t strongly interact, as shown by the Bullet Cluster observations. In fact, that’s what WIMP stands for: Weakly Interacting Massive Particle. Since dark matter only interacts via gravity, it only clumps together on the largest scales, such as galaxies; at the scale of individual solar systems most WIMP candidates should have a fairly uniform density. Depending on whether we’re dealing with cold or hot dark matter, clumping may occur at scales smaller than galaxies, but to the best of my knowledge this hasn’t been confirmed experimentally.

        Locally uniform dark matter exerts no detectable gravitational force on planetary orbits because they’re much smaller than the typical distance between stars. This conclusion follows from reasoning similar to that used to solve the famous sophomore-level physics problem where one proves that gravity everywhere inside a hollow uniform sphere is identically zero.

      • What about the dark matter inside a sphere defined by the radius of a planet’s orbit? Doesn’t it exert a gravitational force which should show up in careful measurements of that planet’s orbital period? [Someone]

        Yes, but dark matter’s locally homogeneous distribution means that the amount of dark matter in this sphere is very small compared to the mass of the Sun, so the corresponding force is also very small. Gauss’s law for gravity shows that the force due to this sphere increases linearly with distance from the Sun, as opposed to point-source gravity with its inverse square dependence. Incidentally, that’s similar to the gravity inside a spherical planet with uniform density.

        More specifically, the gravitational acceleration due to a uniform sphere of dark matter is g_dark = (4*pi/3)*BigG*rho*radius, where BigG is Newton’s gravitational constant (6.67*10^-11 m^3*kg^-1*s^-2), rho is the density of dark matter in kg/m^3, and radius is measured from the center of the Sun in meters.

        A rigorous estimate of rho would start with a plausible total mass of the Milky Way’s dark matter halo, then use something like the NFW profile to predict the density at the Sun’s distance from the center of the Milky Way.

        But I’m pressed for time, so I’ll just use the standard quoted density of dark matter at the Sun’s location: 0.3 GeV/cm^3 (with an uncertainty of ~3x). Since relativity says 1 GeV/c^2 = 1.78*10^-27 kg, that works out to rho = 5.3*10^-22 kg/m^3.

        Since the dark matter acceleration grows linearly with distance from the Sun, let’s examine its value at the maximum distance of Pluto from the Sun: 7.38*10^12 m. In that case, g_dark = 1.1*10^-18 m/s^2, which is about a billionth of a nanometer per second squared. Compare that to the Sun’s gravitational acceleration at that distance: 2.4*10^-6 m/s^2. Even at the orbit of Pluto, the Sun’s gravity is over two trillion times stronger than gravity due to dark matter.
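
        Here’s that arithmetic as a short script, for anyone who wants to try a different density (the 4*pi/3 comes from Gauss’s law applied to a uniform sphere):

            import math

            G = 6.674e-11                  # m^3 kg^-1 s^-2
            GeV_per_c2 = 1.78e-27          # kg
            rho = 0.3 * GeV_per_c2 / 1e-6  # 0.3 GeV/cm^3 -> ~5.3e-22 kg/m^3
            M_sun = 1.99e30                # kg
            r = 7.38e12                    # m, Pluto's maximum distance from the Sun

            g_dark = (4.0 / 3.0) * math.pi * G * rho * r  # uniform-sphere gravity
            g_sun = G * M_sun / r**2                      # point-source gravity

            print(g_dark)          # ~1.1e-18 m/s^2
            print(g_sun)           # ~2.4e-6 m/s^2
            print(g_sun / g_dark)  # the Sun wins by a factor of ~2e12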

        In contrast, the orbit of the Sun around the center of the Milky Way is affected by all dark matter in the sphere defined by the Sun’s orbit. Since the radius of this sphere is ~30,000 LY (which is much larger than the average distance between stars), the non-clumpy distribution of dark matter is irrelevant on that scale so it exerts a measurable gravitational force. As a result, a smooth galactic dark matter halo affects the velocity curves of stars vs. galactic radii, but has no significant effect on planetary orbits.

      • The Pioneer anomaly exhibits anomalous acceleration toward the Sun at an extremely large radius, which seems to match the effect due to a uniform dark matter distribution in the Solar System. [Someone]

        That’s probably not related to dark matter because the effect doesn’t show up in the orbital periods of the outer planets, and the equivalence principle holds that dark matter should affect planets exactly the same as probes. Plus, even after accounting for local enhancements in the density of dark matter around the Sun, dark matter is too weak on this scale to explain the Pioneer anomaly.

      • Most of your derivation assumes local uniformity of the dark matter halo, which might not be a good approximation if dark matter is cold enough to clump together on smaller scales. [Someone]

        Yes, but WIMPs don’t interact via the electromagnetic force. This makes them invisible (dark) but also means they can’t run into each other (“solid matter” is only solid because the atoms’ electron shells repel each other).

        As a result, most astrophysicists believe that normal matter (with its unique clumping ability) acts as “seeds” for dark matter clumping. That’s the case on galactic scales, and maybe stars and planets can capture sufficiently cold (i.e. slow) dark matter. But there probably aren’t any planets or stars made entirely out of dark matter, which (if in the solar system) would be detectable using ranging measurements to Cassini and other probes.

        Of course, physicists are studying the issue of small-scale dark matter clumping. For instance, the flyby anomaly might be related to earth-bound dark matter, and a bound on the dark matter density in the solar system has been derived from planetary motions.

        A significant amount of earth-bound dark matter would imply that our measurements of the density of the Earth via GRACE, GOCE and Gravity Probe B (all of which include dark matter) and seismic data (which don’t include dark matter) should disagree. As far as I know they don’t show significant discrepancies, so this places an upper bound on the mass of Earth’s hypothetical dark matter core.

      • What about the Cuspy halo and missing satellites problems? [Someone]

        I study spatio-temporal gravity anomalies on the surface of the Earth, which is a radically different subfield than dark matter astrophysics. The basic principles are similar, though. We both use natural laws (Newtonian gravity or GR) to predict the effects of all known masses on observations. Then we subtract these known effects from the observations and study the nature of the residual errors.

        My experience is that when new observations examine the background model (i.e. the “known masses”) at finer spatial and temporal resolutions, a number of strange anomalies appear. Most of these anomalies only end up requiring minor tweaks to the model, though. The general public rarely hears about these “boring” anomalies and tweaks, so they tend to believe that every new anomaly precedes a revolution on par with the discovery of the precession of Mercury’s orbit.

        The missing satellites problem looks like an interesting puzzle for people who study galactic evolution at very fine resolution. The missing cusps at the centers of galaxies are also very small features compared to the galactic halos, and might be related to the ubiquitous super-massive black holes in the same places.

        In contrast, MOND’s problems are at the largest scales. The existence of at least some CDM appears to be virtually unchallenged at galactic cluster length scales, though (like all models) its smaller scale features are still being explored and may eventually help determine which varieties of dark matter exist. MOND proponents have to include hot neutrinos, which are a form of non-baryonic dark matter. Also, MOND requires dark matter to avoid predicting radial temperature profiles which disagree badly with observations.

        Update: A new technique uncovers at least one of those “missing satellites.”

      • MOND explains the Bullet Cluster’s collision velocity better than DM. [Someone]

        Actually, the Bullet Cluster’s velocity isn’t extraordinarily high in the Lambda-CDM model, even though it’s higher than average.

      • MOG and TeVeS are claimed to reproduce Bullet Cluster observations. [Someone]

        Dark matter explains the Bullet Cluster observations using both Newtonian gravity and general relativity. As a result, I’m suspicious that modified gravity can only explain these results by modifying general relativity. TeVeS also doesn’t explain Boomerang data and requires some dark matter to simultaneously fit lensing and rotation curves.

      • Here’s a counter-example to the Bullet cluster. [Someone]

        Yes, Abell 520 doesn’t have the same angular separation between the x-ray emissions and the mass inferred from weak lensing observations. But this may simply mean that the galaxies in Abell 520 have a lower ratio of dark vs. normal matter than the galaxies in the Bullet Cluster.

        That’s something MOND or TeVeS or MOG can’t easily explain, because modified gravity should be universal so it should always imply the same ratio of “dark matter” to normal matter (although not all normal matter is equally visible). Therefore dark matter more easily explains the dark galaxy called VIRGOHI 21 which might have ~1000x as much dark matter as visible matter (compare to ~10-20x for the Milky Way’s ratio). Dark matter also explains the apparent lack of a universal dark matter density profile better than modified gravity for similar reasons.

        But here’s another cluster that confirms the Bullet Cluster isn’t just a fluke.

      • But we haven’t detected any WIMPS or axions, despite decades of attempts. [Someone]

        Yes, it’s difficult to tell which varieties of dark matter are prevalent in our galactic neighborhood. Many groups are working on this problem, and some recent papers seem interesting. But, as you say, at the moment no unambiguous detection events have risen above the noise.

      • It exists. [Abcd1234]

        What are its properties then? All we’ve observed are anomalies in gravity and space-time. Positing some mysterious form of invisible matter isn’t an explanation at all – all we still know is that there are these weird things going on with gravity and spacetime, and that our current theories are incomplete.

    • (Ed. note: these comments were copied from here.)

      There was an interesting musing by the author of a recent Scientific American article about how dark matter may interact with its own kind by forces other than the ones that cause normal matter to interact with its own kind. According to the musing (which the author rejects), dark matter operating under such forces could form complex systems, maybe even an unseen parallel universe where “people” live lives like ours, as unaware of us as we are of them. All undetectable, except by their gravitational attraction on us. [Black Parrot]

      This is something I’ve been musing on recently. What if there were, say, 4 or 5 universes all operating in the same “space”, but invisible to each other because they’re in another parallel dimension or 3-brane or something like that? According to string theory, pretty much nothing can escape from a 3-brane except gravity – which might explain why gravity is so weak, as it could diffuse in more directions than EM radiation. Since EM radiation can’t escape, these universes would be effectively invisible, just as dark matter appears to be.

      It could also explain that Dark Flow thing.

      • Last year Tanuki64 asked a similar question, and I referred him to an anecdote from my senior year of physics undergrad. In 2004, I presented a similar idea to an astrophysicist in my department:

        I wonder if “dark matter” is the result of gravitational interactions with galaxies in parallel universes. Suppose parallel universes exist in the same physical “space” we inhabit, and only interact with each other (and us) via gravity. The galaxies in different universes would then clump together, but their disks wouldn’t necessarily be aligned. So the total gravity would appear similar to a spherical halo of dark matter. This would explain the too-high velocities of stars at the edges of galaxies and the too-high velocities of galaxies in superclusters.

        In 2004, I didn’t have enough experience to understand why that astrophysicist rejected my idea by saying that I was trying to explain something we don’t understand by invoking something else that we understand even less. Several years later, though, I started to see some cracks in this idea:

        2009-07-25 Update: I don’t think my hypothesis is consistent with the Bullet cluster data.

        2009-07-27 Update: Also, I wonder if galaxies in my imaginary parallel universes really would clump together. They’d certainly be gravitationally attracted to each other, but if each universe has roughly the same density of galaxies, they’d typically have a long way to fall towards each other. As a result, they’d be moving so fast that I doubt any damping mechanisms could have brought them to rest in ~13.7 billion years. But… what if they formed in the same place initially? That would make sense because supermassive black holes likely play a large role in proto-galaxy formation. Gravitational collapse in one universe would trigger collapses in other universes, leading to galaxies with small relative velocities. But in that case, it seems like the disks would be aligned because disk formation probably doesn’t involve a large percentage of actual physical collisions (any actual astronomers want to help me here?). I think this would result in the wrong velocity profile for stars versus distance from the center of the galaxy? Oh, and all these stars in different universes would cause gravitational lensing events to occur with a much greater frequency than has been observed by OGLE. Galaxies with non-aligned disks would look even weirder: that implies lensing with bizarre relative velocities.

        It could also explain that Dark Flow thing.

        Perhaps. But given the above problems with such an idea, it’s more likely that the Great Attractor is simply a massive supercluster in our own universe, even if it’s already passed over the horizon.

      • Interesting, Khayman. Why do you say it’s inconsistent with the Bullet Cluster data? It seems to me that if you had two clusters on top of each other, one in our 3-brane, and one in a neighboring one, and they collided with a cluster just in our 3-brane, that the result would be more or less consistent with the result of the purported dark matter separating from the normal matter.

      • First, note that dark matter halos have always been observed around normal matter galaxies. Your example should therefore involve at least four galaxies: two visible colliding galaxies in our own universe, each with a counterpart galaxy that occupies the same physical space in the other universe/brane, and which are therefore colliding in that other universe too.

        In conventional theory, dark matter interacts with normal matter and with itself only through gravity. These dark matter halos would fly right through each other in a galactic collision, whereas normal matter can’t because it also interacts via other forces, most notably the electromagnetic force. That’s why the Bullet Cluster’s separation of x-ray vs. matter distribution tends to support dark matter rather than MOND.

        But if these dark matter halos are simply made of ordinary matter in another universe, they would collide with each other in that universe in the same way that ordinary matter collides in our universe. Thus I don’t believe that our hypothesis would result in the same separation of x-ray vs. matter distribution observed in the Bullet Cluster.

      • Aside from the fact that I’ve never seen evidence of a galaxy without a dark matter halo, the Bullet Cluster has two opposing lobes of dark matter, indicating that each galaxy in the collision had a dark matter halo that flew right through the other dark matter halo. Another similar collision also has two opposing lobes of dark matter. If a collision is ever observed with only a single lobe of dark matter, that would be consistent with our hypothesis. (Though the other objections I raised would still apply.)

      • Well, okay, rarely seen a galaxy that might not have dark matter.

      • That’s a very good point. Though I suppose if there were lots of parallel dimensions connected by gravity, they could exhibit similar behavior, if the matter were distributed throughout the dimensions so that it couldn’t collide and interacted only through gravity.

  8. (Ed. note: this comment was copied from here.)

    There’s yet to be any evidence the universe doesn’t run on very specific mathematical rules. For example, there’s a very good reason for inflation having to do with the ‘pressure’ at high energy states. [ShakaUVM]

    Huh? Your customarily vague but authoritative comment which doesn’t include an “IANAP” disclaimer will just reinforce the disturbingly common impression that physicists are bullshitting about concepts like inflation and dark matter.

    The cosmology course I’ve mentioned was taught by Dr. Nanopoulos using Kolb’s The Early Universe. He pointed out that physicists have known for decades that something like inflation is required to explain the isotropy of the cosmic microwave background radiation. Kolb discusses these topics in chapter 8, though his overview is somewhat dated now. WMAP has since observed temperature fluctuations on the 10^-5 level, which matches predictions based on modeling quantum fluctuations in the early universe. More precisely, inflation predicts that these fluctuations would deviate slightly from the perfect scale invariance expected in a universe without inflation. After 7 years, WMAP can exclude the possibility of a scale invariant spectrum by more than 3 standard deviations. The WMAP results also show that the universe is perfectly flat, at least to within the limits of measurement. Inflation isn’t necessary for the universe to be perfectly flat, but it’s sufficient to explain what may seem like “fine-tuning” at first glance.

    That’s why physicists think inflation happened, but it’s an argument based on how relativistic causality affects the large-scale thermodynamics of the universe, not pressure. Pressure is at least tangentially relevant to almost every physics problem imaginable, though, and inflation is no exception. I’ve explained that dark energy’s negative pressure acts as a kind of anti-gravity. Later, Dr. Stoeger (Jesuit priest, astrophysicist working for the Vatican Observatory) observed that “There is, of course, a much deeper connection between inflation and dark energy. The only way we can really conceive of inflation occurring in the early universe is under the influence of a large amount of vacuum energy, which is a type of dark energy. This dark energy must be quickly transformed into the particles and radiation at the end of inflation. So, it’s not at all clear if there is a relationship between the dark energy which drove inflation and the dark energy which we have evidence is driving the gentle acceleration of cosmic expansion now. It may be that the dark energy now may be a remnant of the dark energy left over from the very early universe.”

    Then there’s the problem of heavy exotic particles predicted by most GUTs; the only one I’m familiar with is the magnetic monopole. In my senior year, I took electrodynamics using the standard Griffiths 3rd ed. Page 327 shows how symmetric Maxwell’s equations appear in the presence of magnetic monopoles, and Griffiths opines that they “beg for magnetic charge to exist.” My fondest memory of that class is problem 8.12 on page 362, along with footnotes 11 and 12 on the same page. Griffiths guides the reader through a proof that quantization of electric charge can be deduced (rather than assumed a priori) if the universe contains just one magnetic monopole. Years later in graduate electrodynamics, the standard Jackson 3rd ed covers magnetic monopoles in a similar fashion in sections 6.11 and 6.12 on pages 273-280.
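
    For reference, the punch line of that Griffiths problem is what’s now called the Dirac quantization condition. In Gaussian units (conventions differ between textbooks) it reads:

        q_e * g = n * hbar * c / 2, where n is any integer

    so the existence of even one monopole with magnetic charge g would force every electric charge q_e to be an integer multiple of hbar*c/(2g) – i.e. electric charge would have to be quantized.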

    A period of rapid inflation after the universe becomes too cold (i.e. not energetic enough) to create magnetic monopoles would greatly reduce their initial density, which in turn would explain why they’re so hard to find (Kolb p239,266). Thus there’s a decent reason for believing that inflation took place at comparatively low energy states.

  9. (Ed. note: this comment was copied from here.)

    There’s been occasional theories tossed about with C (or other cosmological constants) changing values, but there hasn’t been any actual evidence for it. [ShakaUVM]

    First, variations in dimensionful constants are very difficult to interpret, so most of these theories concern variation of dimensionless constants. In fact, Dirac’s original argument was specific to dimensionless constants. Second, physicists discuss the possibility of the fine structure constant varying over cosmological time because “actual evidence” was published in 2001. I previously called it “weak evidence” because it hasn’t been reproduced. But glossing over its publication isn’t helpful.

    Third, I’ve explained that the 1983 redefinition of the meter means that physicists now talk about varying the index of refraction of free space rather than varying the speed of light. Oddly, varying speed of light (VSL) theories seem to use the old terminology. Fourth, notice that these VSL theories are designed to replace inflation as the explanation of the CMBR’s isotropy. However, if the fine structure constant changed the universe would look different, so in VSL theories other physical “constants” have to vary in just the right way to agree with experiments.

  10. (Ed. note: this comment was copied from here.)

    However, my understanding of it is that there’s a rather large asymmetry between the amount of energy needed to create a matter particle vs. a much higher number to create an antimatter particle. Not the 1% they were talking about, but something like an order of magnitude more free energy. [ShakaUVM]

    Roger W. Moore was right: your statement is incorrect. In 1930, the Dirac equation hinted at the existence of what would later be called holes in the Dirac sea after their experimental discovery in 1932. Dirac postulated that the vacuum is filled with both positive and negative energy states, but that almost all of those negative energy states are filled. Positive energy states are familiar particles of matter like electrons, while the corresponding negative energy states are antimatter particles like positrons. Creating an electron requires filling a state with energy E, while creating a positron requires emptying a state of energy -E… which is exactly the same amount. They’re not even 1% different; the asymmetry mentioned in the article is that of CP violation which is now well established thanks to experiments like BaBar. CPT symmetry isn’t violated, though, which is more relevant here. As Kolb says on page 160: “… particle and antiparticle masses are guaranteed to be equal by CPT invariance.”

    So there are very strong theoretical reasons to believe that matter and antimatter particles have the same masses, and thus require the same energy to create. But what about experimental evidence? Well, remember that the spectrum of the hydrogen atom is derived in undergraduate quantum mechanics. The proton is ~2000x more massive than the electron, but a common homework problem (#3) replaces the proton with a positron to form positronium. Any imbalance between the masses of electrons and positrons would have been detected in experiments decades ago. As you can see, physicists showed in 1984 that any imbalance must be less than 1E-7. Then in 2008 you claimed that the imbalance was ~10, which means you’re only off by a factor of ~100 million compared to research performed 24 years earlier. If “being off by two orders of magnitude is just ignorant” then you must have gone to plaid. Smoke if you got ’em…

    [Nitpick: that’s 10^-37 seconds, or ~2M Planck times. — Dumb Scientist]
    [Pointing out that someone was off by 74 orders of magnitude isn’t a nitpick :-P — DriedClexler]
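
    As a concrete example of how sensitive those spectra are: hydrogen-like energy levels scale with the reduced mass of the two-body system, so positronium’s levels sit at almost exactly half of hydrogen’s. A quick sketch using the textbook Bohr formula (the function name is mine):

        # Bohr levels scale with the reduced mass mu = m1*m2/(m1+m2) of the pair.
        E_HYDROGEN_1S = -13.6  # eV; here mu ~ m_e because the proton is ~2000x heavier

        def bohr_level(mu_over_me, n):
            """Energy (eV) of level n for a hydrogen-like system."""
            return E_HYDROGEN_1S * mu_over_me / n**2

        # Positronium: electron + positron, so mu = m_e/2 if their masses are equal.
        print(bohr_level(0.5, 1))  # -> -6.8 eV
        # Any electron/positron mass imbalance would shift these lines, which is
        # partly how such an imbalance was ruled out decades ago.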

    … Not the 1% they were talking about, but something like an order of magnitude more free energy. Hence the free energy ended up mainly creating bosonic matter. [ShakaUVM]

    Huh? Papers exploring bosonic matter do exist, but they don’t seem relevant. Given the context, you appear to be saying that creating antimatter requires ~10x more energy than creating matter (false), so the free energy would end up mainly creating matter rather than antimatter.

    But what could you possibly have meant by inserting the word “bosonic” in a sentence where it doesn’t seem to make any sense? Bosons and fermions are both types of indistinguishable particles in quantum mechanics. Fermions have half-integer spin, like protons, electrons, antiprotons and positrons. Bosons have integer spin, such as photons and mesons. Some implications of this distinction can be deduced using nonrelativistic quantum mechanics, such as the fact that fermions obey the Pauli exclusion principle while bosons actually attract each other into the same state (Griffiths 1st ed p179). The connection between these statistics and spin is simply assumed in nonrelativistic quantum mechanics, but it can actually be deduced using relativistic quantum field theory.

    Perhaps you’re trying to say that most of the energy ended up creating bosons like photons? That’s true, and in a sense WMAP sees the echo of these photons from when the universe was only ~400,000 years old. But these photons were created in such profusion because in the early (10^-6 second old) universe, matter and antimatter were almost exactly balanced. There were only 30,000,001 quarks for every 30,000,000 anti-quarks (Kolb p159). This conclusion is motivated by evidence like the observed baryon-to-photon ratio of the universe, and the extraordinary stability of the proton (p157,158) which suggests that baryon number is conserved. Cosmologists seem to prefer theories which dynamically evolve this imbalance around the GUT epoch (when the universe was ~10^-36 seconds old; Kolb says the GUT epoch is t ~ 10^-34 sec on page 191, but Wikipedia disagrees and has been updated since 1990) from perfect symmetry at the Big Bang, through processes like “sphalerons” (p184) which might be capable of violating baryon number, and/or other processes capable of CP violation on the required scale.

    Whatever caused this slight imbalance, all the antimatter and nearly all the matter were quickly annihilated, releasing exactly equal amounts of energy for the matter and antimatter particles. Most of this energy was released in the form of gamma rays that stretched over the eons to become what we now call the cosmic microwave background radiation.

  11. (Ed. note: this comment was copied from here.)

    … For scientific theories at the forefront of technology, there have been a tremendous number of mistakes made, as in, ALL the scientists were wrong. Remember the Michelson-Morley experiment? It invalidated basically every theory and model we had to that date. The notion that inertia somehow doesn’t apply to light — even though light is sorta a particle, or rather, has particle-like traits — was completely unexpected. [ShakaUVM]

    Huh? The Michelson-Morley experiment has nothing to do with your “notion that inertia somehow doesn’t apply to light.” Inertia wasn’t the problem; galilean relativity was. Light most certainly does have inertia in special relativity, as anyone who designs solar sails could tell you. If you don’t know any, just open Jackson 3rd ed to page 259, calculate a Poynting vector as in equation 6.109, divide by c^2 as in equation 6.123, and remember that non-zero inertia is necessary for non-zero momentum. Griffiths 3rd ed makes a similar point in equation 8.29 on page 355 using different units.
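
    To put numbers on light’s momentum: the momentum flux of an electromagnetic wave is its intensity divided by c (twice that if the light is reflected), which is exactly what solar sail designers compute. A rough sketch, assuming the ~1361 W/m^2 solar constant at Earth’s orbit and a hypothetical 1000 m^2 sail:

        c = 3.0e8   # m/s
        S = 1361.0  # W/m^2; solar irradiance at 1 AU

        pressure_absorbing = S / c       # ~4.5e-6 N/m^2 on a perfectly black sail
        pressure_reflecting = 2 * S / c  # a perfect mirror gets twice the push

        sail_area = 1000.0  # m^2 (assumed)
        print(pressure_reflecting * sail_area)  # ~9e-3 N of thrust: small, but
                                                # light clearly carries momentum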

    As the story goes, Einstein first conceived of special relativity based on his daydream of racing a beam of light, which uncovered a fundamental inconsistency between Maxwell’s equations and Newtonian mechanics. Also, as I’m explaining to Marble, physicists had recognized serious problems with the luminiferous aether decades before the Michelson-Morley experiment. In other words, Einstein’s discovery of special relativity was based on pre-existing considerations; the Michelson-Morley experiment just confirmed his hypothesis and made it more accessible to the average scientist.

    Well that’s ok, because the study was about Electromagnetic fields, not magnetic fields, which are two different things. As far as I am aware, there is absolutely no danger to humans from a magnetic field. [miro f]

    Wrong. All Electric and Magnetic Fields are the same thing. They are components of the same EM field Tensor. … Thus, if you have a pure magnetic field (like that of the earth) with the 3 E’s in F being 0, it is always possible to construct an L such that F′ has nonzero E’s. L is a Lorentz Transformation, so the physical significance is that you can always transform relativistically to a frame of reference where a magnetic field picks up an electric field and even radiation EM fields (such as Lienard-Wiechert potentials), making it an “electromagnetic” field. [XchristX]

    Just recalling my high school physics, if I’d have written it (and you were responding to a troll, IMO), I’d just have said that all electrical waves are electromagnetic in nature. Maybe talked about the history of ether, and how the electrical and magnetic components provide the medium for each other. [ShakaUVM]

    Oy vey. You propose to replace an overcomplicated but correct explanation with vague gibberish about aether? Trust me, there are enough people babbling about aether as it is. Before he debuted as a climate change contrarian, Marble repeatedly stressed that relativity wasn’t a “real” explanation, and that aether was more compatible with his young-earth creationist cosmology. After appearing as a climate change contrarian, gkai endorsed einsteinhoax.com… which should be self-explanatory to anyone who doesn’t think relativity is controversial. Instead, try this:

    Einstein’s theory of special relativity shows us that space and time aren’t separate entities, but are instead two sides of the same coin that we call spacetime. In exactly the same way, electricity and magnetism are two sides of the coin called electromagnetism. Even a purely magnetic field will behave like an electromagnetic field if you’re moving relative to it.

    Or, more relevantly:

    Electromagnetic fields (otherwise known as “light”) are made up of chunks called “photons” which carry an energy inversely proportional to their wavelength. In other words, small wavelength photons like gamma rays can kill you because each photon carries a lot of energy; large wavelength photons like microwaves barely interact with the human body at all because each photon carries a very small amount of energy. Interestingly, quantum mechanics says that multiple photons are absurdly unlikely to “gang up” on any molecule in your body, so mutations and cancer aren’t a concern as long as each photon has less energy than would be needed individually to damage molecules in our bodies. The only other effect we need to worry about is thermal dissipation of microwaves in water and fat molecules (i.e. the reason microwave ovens make tasty burritos.) But I don’t see how that’s significantly different from worrying about getting too close to a caveman’s fire. You might as well worry about spooning.
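
    The photon-by-photon comparison is easy to make quantitative with E = hc/lambda. A minimal sketch (the “few eV” scale for chemical bonds is a rough rule of thumb):

        h = 6.626e-34   # J*s, Planck's constant
        c = 3.0e8       # m/s
        eV = 1.602e-19  # J

        def photon_energy_eV(wavelength_m):
            return h * c / wavelength_m / eV

        print(photon_energy_eV(1e-12))   # gamma ray: ~1.2e6 eV, easily breaks bonds
        print(photon_energy_eV(500e-9))  # visible light: ~2.5 eV, comparable to bonds
        print(photon_energy_eV(0.122))   # 2.45 GHz microwave: ~1e-5 eV per photon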

  12. (Ed. note: this comment was copied from here.)

    Besides, conservation of energy doesn’t apply across very short time scales – you can borrow energy from the future as long as you pay it back quickly. That’s how nuclear decay works – the radioactive atoms are trapped in an energy well; if they couldn’t borrow energy from the future they’d never decay. But their energy levels fluctuate, and depending on how deep the well is is how long the half-life of the atom. A shallow well will decay rapidly, a deep well can do a really long time before decaying to a lower energy state. What happens is the atom borrows energy from the future and then immediately repays it when it transitions to a lower energy state. [ShakaUVM]

    When I was in high school, I used to read a lot of pop-science books about physics which claimed that energy conservation is uncertain on short time scales. Then in my junior year of college I took quantum mechanics using Griffiths, though the 1st edition was standard back in 2001. Imagine my surprise when I made it to page 115:

    … It is often said that the uncertainty principle means that energy is not strictly conserved in quantum mechanics– that you’re allowed to “borrow” energy deltaE, as long as you “pay it back” in a time deltaT ~ hbar/(2*deltaE); the greater the violation, the briefer the period over which it can occur. There are many legitimate readings of the energy-time uncertainty principle, but this is not one of them. Nowhere does quantum mechanics license violation of energy conservation, and certainly no such authorization entered into the derivation of Equation 3.151. But the uncertainty principle is extraordinarily robust: It can be misused without leading to seriously incorrect results, and as a consequence physicists are in the habit of applying it rather carelessly. [Griffiths, Intro to QM, 1st ed]

    I’d recommend working through example 3 on page 114; it describes the relationship between spontaneous decay and the energy-time uncertainty principle. Also, chapter 8.2 discusses tunneling in the WKB approximation and alpha decay on page 281 and problem 8.4 on page 283. Notice that these explanations don’t misuse the energy-time uncertainty principle by invoking it the way you do.

    My first research advisor explained this issue by saying that the position-momentum uncertainty principle is subtly but crucially different from the energy-time version because there is no “operator” for time in quantum mechanics. Position/momentum/energy operators exist in the sense that those properties can be measured in the lab, but time can’t be measured in the same way. In fact, John Baez shows that there can’t be any such time operator.

    A much better example of the energy-time uncertainty principle can be seen through a spectroscope. The spectrum of any atom corresponds to the energies of photons falling from one electronic state to another. Given that quantum mechanics only allows electronic states to exist at certain quantized energy levels, one might assume that the spectra would be infinitely sharp. In other words, photons emitted when an electron falls from level 2 to level 1 might seem like they should always have exactly the same amount of energy and thus the same wavelength. But in fact, the photons will have slightly different energies, and this broadens the spectral peak.

    The exact degree of broadening turns out to be directly related to the “lifetime” of the transition between the two states in question. Due to the energy-time uncertainty principle, a transition with a short lifetime will have a large uncertainty in energy and thus have a broader peak (Sakurai 1st ed p341-345).
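
    For instance, hydrogen’s 2p state lives for about 1.6 nanoseconds, so the Lyman-alpha line can’t be narrower than roughly 100 MHz. A quick estimate using deltaE ~ hbar/tau (the lifetime is the standard textbook value):

        import math

        hbar = 1.055e-34  # J*s
        eV = 1.602e-19    # J
        tau = 1.6e-9      # s; lifetime of hydrogen's 2p state

        delta_E = hbar / tau                # energy width of the excited state
        delta_nu = 1 / (2 * math.pi * tau)  # the same width, as a frequency

        print(delta_E / eV)    # ~4e-7 eV
        print(delta_nu / 1e6)  # ~100 MHz natural linewidth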

    Later, I took another year of nonrelativistic quantum mechanics using Sakurai 1st ed as a physics grad student, but never took relativistic quantum field theory. However, most physicists seem to agree with this wikipedia summary:

    One false formulation of the energy-time uncertainty principle says that measuring the energy of a quantum system to an accuracy deltaE requires a time interval deltaT > h/deltaE. This formulation is similar to the one alluded to in Landau’s joke, and was explicitly invalidated by Y. Aharonov and D. Bohm in 1961. The time deltaT in the uncertainty relation is the time during which the system exists unperturbed, not the time during which the experimental equipment is turned on.

    Another common misconception is that the energy-time uncertainty principle says that the conservation of energy can be temporarily violated – energy can be “borrowed” from the Universe as long as it is “returned” within a short amount of time. Although this agrees with the spirit of relativistic quantum mechanics, it is based on the false axiom that the energy of the Universe is an exactly known parameter at all times. More accurately, when events transpire at shorter time intervals, there is a greater uncertainty in the energy of these events. Therefore it is not that the conservation of energy is violated when quantum field theory uses temporary electron-positron pairs in its calculations, but that the energy of quantum systems is not known with enough precision to limit their behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution.

  13. (Ed. note: this comment was copied from here.)

    Do cows use quantum entanglement? no. Do sheep? no. Plants do. Why would I eat the *smarter* lifeform?

    Depending on your theory of quantum mechanics, you might believe that all systems are entangled. So yeah, they all do. [ShakaUVM]

    There are many theories of quantum mechanics: Bohr’s early theory is used as a high school teaching aid, Schrodinger’s theory is taught in undergraduate physics and widely used in industry, while relativistic quantum field theory is unwieldy but even more accurate.

    But that’s irrelevant to entanglement, so presumably you’re talking about one’s interpretation of quantum mechanics. That term is preferred because different interpretations of the same theory are mathematically identical and (probably) experimentally indistinguishable.

    Either way, you seem to be talking about the many worlds interpretation, which in the most pedantic sense possible implies that all systems are entangled. Well, not exactly all systems, just those within each others’ past light cones. Relativity is a harsh mistress.

    I shouldn’t have to mention that something which can explain everything explains nothing. Physicists reserve the term “entangled” for systems that violate Bell’s Inequality because such systems are entangled regardless of one’s interpretation of quantum mechanics.

    I’ve previously pointed out that “some physicists propose that the brain is a quantum computer rather than a classical analog computer, but I’m heavily skeptical of this claim because a quantum computer’s inherent vulnerability to decoherence effects seems to make it more complicated. …”

    Here I’m mainly referring to Roger Penrose and Stuart Hameroff’s proposal that microtubules in human neurons are sufficiently well-isolated that quantum entanglement lasts long enough to play a role in human thought before decoherence takes effect. If true, this bold idea could revolutionize the fields of neurobiology, artificial intelligence, quantum computing, and possibly others. For instance, a whole brain emulation project would suddenly have to deal with the no-cloning theorem from quantum information theory.

    But your definition of “entangled” reduces that bold idea to meaningless gibberish. Because if everything is entangled already, there wouldn’t be anything special about the brain using quantum entanglement. Oh, and since our brain structure is so similar to other mammals, the quantum brain idea would also apply to sheep and cows.

    And, importantly, it’s not a matter of “whatever universe you can imagine is out there somewhere!”–the possibilities are strictly limited by deterministic evolution of the wavefunction and the initial conditions of the universe. [kebes]

    There may be no married bachelors in any branch of the multiverse, but there’s one right now where all the atoms of oxygen in my room spontaneously moved into a corner and I’m suffocating to death. That’s a lot of possibilities. There certainly could be a dimension without shrimp. [ShakaUVM]

    Ever since that cult’s movie was inflicted on the public, physicists have been overwhelmed trying to explain that the many worlds interpretation (MWI) isn’t the license to speculate wildly that most people seem to think it is. As kebes said, there are more constraints on what can happen than merely “no married bachelors” which would allow for anything except for logical contradictions.

    Even in classical statistical mechanics, the probability of all the oxygen in a room moving into a corner isn’t zero, just absurdly small. In the MWI, this fact is explained by saying that the relative amplitude of this “suffocation” branch of the wavefunction is absurdly small. The absolute square of this relative amplitude is proportional to the chance that we find ourselves in that branch.
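
    To get a feel for “absurdly small”: classically, each of the room’s N oxygen molecules independently has some chance f of being in the chosen region, so the probability that all of them are there at once is f^N. A toy estimate (the molecule count is a round number, and I’m generously defining “the corner” as half the room):

        import math

        n_molecules = 1e26     # rough count of O2 molecules in a small room
        corner_fraction = 0.5  # probability any one molecule is in the chosen half

        # log10 of the probability that every molecule is there simultaneously:
        log10_p = n_molecules * math.log10(corner_fraction)
        print(log10_p)  # ~ -3e25; the probability is 10**log10_p, effectively zero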

    At first glance, it might seem like the fact that the suffocation branch has non-zero amplitude really does mean that you’re experiencing such a confusing death right now. However, there’s a subtle problem having to do with wave function collapse. In the Copenhagen interpretation, it’s easy to explain why measurements of a photon’s polarization are only ever horizontal or vertical. The photon’s superposition “collapses” onto one or the other. In the MWI, this doesn’t happen. Instead, we say that the photon’s superposition becomes entangled with its environment. More melodramatically, the entire universe splits in two, and in each branch I find either horizontal or vertical polarization.

    So far, so good. But there’s no “preferred basis” in quantum mechanics. So why don’t we ever measure a photon to be in a superposition of horizontal and vertical? In other words, if wave functions don’t collapse, why do we experience the world as though it’s classical and not quantum mechanical all the way up like the MWI says it is?

    I’ve briefly addressed this issue while debating an atheist. Most physicists think that “decoherence” is a good explanation. Dr. Zurek calls this “environment-induced superselection” (einselection). The really short version is that only certain superpositions consistent with classical physics are observable.

    Unfortunately, I don’t understand this topic well enough to make it more comprehensible. Anyway, what I’m trying to say is that I doubt your suffocation branch would be robust against einselection. Thus there probably isn’t a version of you suffocating in your room right now.

    Of course, none of that implies Anya was lying about the possibility of a dimension without shrimp. A few improperly bonded amino acids at exactly the right time certainly could have changed evolutionary history without that branch having an absurdly small relative amplitude or conflicting with einselection. The world with nothing but shrimp, on the other hand, seems less plausible. That said, I could easily understand why Illyria would tire of it quickly… if it actually existed.

  14. (Ed. note: this comment was copied from here.)

    … To use your own words, it’s like a modified Salem Hypothesis that lets non-physicists like climate scientists think that their hand-waving is a legitimate form of argumentation, whereas everyone else is an anti-scientific nutjob. It probably comes from their field being only tenuously considered a science. … [ShakaUVM to me on 2010-05-22]

    Oh, I guess you don’t fucking know what you’re talking about, do you? [ShakaUVM to me on 2010-05-19]

    I was just annoyed that you called me a non-physicist. I let it pass once with your rather insulting Salem Hypothesis thing, but I don’t let things go twice. To re-iterate what I said before, you don’t know what you’re talking about when you’re trying to insult me like that. If you label me a non-physicist, then I’ll have to start calling you a weather man. [ShakaUVM to me on 2010-05-22]

    … please feel free to continue deluding yourself that scientists are the shining beacon of logic in an otherwise inhospitable world. [ShakaUVM to Dumnezeu on 2010-07-25]

    When you say “research” do you mean enrolling in graduate physics courses at an accredited university to learn about the radiative physics of the atmosphere? (This would involve some kind of objective measure of your ability to construct and solve equations.) … [~30 pages later] … Like before, you seem to mistakenly believe that you’re being insulted by angry climate scientists. For instance, you claimed to be insulted when I implied that you were a non-physicist… just because you don’t have a graduate (or even undergraduate?) physics degree. … [~15 pages later] … Your error analysis seemed to repeatedly assume that climate models are empirical, not dynamical… among other mistakes which would be inexplicable for someone with a graduate or undergraduate (?) degree in physics. [Dumb Scientist]

    Two years of physics as an undergraduate … [ShakaUVM]

    Yes, I find it easy to believe that you’ve taken freshman and sophomore level undergraduate physics. This is where your list of accredited physics courses stops, which I also find easy to believe. Among many other things, it explains why you later repeated gibberish from your confused high school biology teacher regarding thermodynamics.

    What I don’t understand is why you’re posing as a physicist when your answer to my first question clearly should have been “no” (but technically still isn’t.) Perhaps you’re thinking two years of physics courses is roughly half of a four year physics bachelor’s degree? Hardly. Two years of lower-level physics at a semester-based school is a grand total of 4 courses. During my first two years of college I also only took 4 physics courses at a semester-based school because during that time I was an aerospace engineering major. Then I switched to physics and took 8 more physics courses before the siren song of the real world called to me. Years later I moved to a quarter-based school and finished my physics bachelor’s degree by taking 11 physics courses. However, I retook the first quarter of quantum mechanics and all three quarters of electrodynamics, so those don’t count. Since 3 quarters = 2 semesters, that’s a grand total of ~16 semesters of physics.

    So two years of physics courses is roughly 25% of the way to a physics bachelor’s degree which is basically only useful for getting into physics graduate school. For me this involved taking a further 12 physics courses that made the previous ones look like kindergarten coloring tests. Your mileage may vary, but you’re likely ~15% of the way towards starting the research which will eventually lead to your physics PhD. It would be prudent to wait until you have a little bit more experience before slandering an entire subfield of physicists.

    … I also read a great deal. … [ShakaUVM]

    After reviewing the surfacestations FAQ (where he fumbles with that lid) and comparing it to your claims, I’m inclined to agree with you. But I’ve already addressed that kind of “research.” Remember that pop-science books aren’t the same as a structured education with TAs who beat the subject-specific stupid out of their students, impart subject-specific background and make them prove that they’ve learned subject-specific somethings at a certain level. Your graduate physics education sounds like a “recipe for disaster.”

    … perfect score on the SATII Physics Test, 5 on the AP Physics test … [ShakaUVM]

    I googled the SATII Physics test (called the SAT subject test after 2005). It’s taken by non-physicists like pre-medical students because it covers algebra-based introductory physics. The AP Physics test is taken in high school, which reminds me of Walter Wagner. He accuses physicists of dishonesty and/or incompetence when they point out that the LHC isn’t going to destroy the world. To back up his claims, Wagner repeatedly stresses that he got a perfect score on the math portion of a high school teacher’s exam. He doesn’t seem to understand that he’s babbling about a topic far more advanced than anything on that test. Maybe a good score on the physics GRE would be relevant, but even it only tests senior-level undergraduate physics without testing graduate-level physics understanding at all.

    I taught methods in solving physics problems, etc., including an undergraduate quarter spent on error analysis. [ShakaUVM]

    Hmm… instructors teaching physics courses are usually required to at least have the bare minimum physics bachelor’s degree. Even teaching assistants are required to be at least two years ahead of their students, which means most physics TA’s start teaching during their third year of physics-track (not algebra-based) physics courses. I hope for the sake of UCSD’s reputation that you’re not talking about courses offered by the physics department. Possibly you’re referring to computer science classes, which you would indeed be qualified to teach.

    … what am I missing? [ShakaUVM]

    A graduate physics education, or the disclaimer “IANAP”. Take your pick. Either one would be a massive improvement over the current situation where you repeatedly pose as a physicist while spouting vague gibberish.

    For example, when VShael joined the chorus of slashdotters accusing cosmologists of incompetence and/or unscientific dogma, you agreed with him. As I’ve shown, there are many reasons why cosmologists favor dark matter over MOND. Just because other scientists don’t agree with Dr. McGaugh doesn’t mean they’re falling prey to dogma, acting unscientifically, or ignoring him. In fact, McGaugh published several papers asserting that the ratio of the first two CMBR peaks definitively rules out cold dark matter. As of 2010-10-10, those papers were cited 11 times, 47 times, 33 times, and 4 times, respectively. Furthermore, I’ve repeatedly linked to Dr. McGaugh’s website even though I don’t endorse all of his conclusions.

    • Stop copying my posts onto Dumbscientist.com. [ShakaUVM]

      Nah.

      Perhaps you got confused because both of our usernames start with ‘S’? … Especially since your ad hominem includes assigning me claims of “gibberish” by linking to posts that aren’t even by me. If you want to argue a particular topic, do so – but by this laughably dishonest (did you think that people would just never bother to check your references?) attempt at discrediting someone on the internet, you’re just exploding any cachet you had with anyone who cares. [ShakaUVM]

      That link defines the word “gibberish” as I mean it in this context:

      What you need to understand is that what you said, while sounding philosophical to the uneducated, is gibberish. To a scientist what you said sounds something like “What if what I thought was my hand was actually an aardvark in disguise”.

      I’ll see your Griffiths and raise you a Feynman, Greene, and Gershenfeld. … If your interpretation of your introductory textbook was correct, semiconductors wouldn’t work. Read up on them some time.

      Sure. Just quote the page numbers and passages (like I did) which support your claim that conservation of energy doesn’t apply across very short time scales. Make sure that you’re not misinterpreting the concept of quantum foam to support a confused pop-science notion that energy conservation is violated by much slower processes like nuclear decay or anything happening in semiconductors.

      The fact that you’re taking seriously Penrose’s idea means that you have no fucking idea what you’re talking about.

      I guess I should’ve mentioned that “I’m heavily skeptical of this claim.”

      You might want to read up some time on baryogenesis, CP violation, and the big question about why there’s more matter in the universe than anti-matter.

      Ironically, the reference I gave to Kolb p157 is the first page of chapter 6… which is called “Baryogenesis.” I suppose I should’ve mentioned (twice) that CP violation is considered along with baryon number violation caused by processes like sphalerons.

      … Khayman, you’ve gone completely off the deep end … If you’re a scientist, as you purport to be, you’re a very bad scientist. … Since you state with such certainty you know the answer, I suggest calling Stockholm and letting them know you’ve got all the answers. It’ll make a nice break from your work studying clouds. … the ad hominem from you is getting intolerable … you sound more like the unibomber than a physicist. … If you want to keep writing these long, unibomber-like screeds …

      That’s Mr. Unabomber to you.

      Update: ShakaUVM isn’t the only contrarian comparing scientists to the Unabomber.

    • Light has *momentum*. Technically, it’s just p = hf. I’m not sure why you need to reference a textbook, but you seem to go about everything in bizarre and unproductive ways. [ShakaUVM]

      Because that’s from early 1900s quantum theory, not classical electrodynamics. My point is that even in the 1800s, it should’ve been clear that light has momentum (and thus inertia) just by examining Maxwell’s equations.

      Also, you meant to say that momentum is p = h/lambda, right? E = hf is the undergraduate notation for energy.

      Update: Since the M-M interferometer didn’t use a single-photon source (most affordable interferometers still don’t), the expression for a single photon’s momentum needs to be multiplied by the number of photons per unit volume in the beam. The classical Poynting vector approach isn’t counting photons and thus doesn’t require such a correction.
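
      To make the units concrete, here’s a quick numeric sketch (constants rounded from memory, so treat the digits as approximate) showing that p = h/lambda and p = E/c agree, while “p = hf” comes out in joules:

      ```python
      # Quick dimensional check: photon momentum is p = h/lambda = E/c = h*f/c,
      # not p = h*f (which has units of energy, not momentum).
      h = 6.626e-34   # Planck constant, J*s
      c = 2.998e8     # speed of light, m/s

      wavelength = 500e-9   # green light, m
      f = c / wavelength    # frequency, Hz
      E = h * f             # photon energy, J

      print(h / wavelength, E / c)   # both ~1.3e-27 kg*m/s
      print(h * f)                   # ~4.0e-19 -- joules, i.e. an energy, not a momentum
      ```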

      … I only consider objects with mass to have inertia, as a massless object cannot have an inertial reference frame that makes any sense. A number of people agree with me. For example, Greene equates inertia with particles with mass.

      That’s an interesting definition; I’ve never heard of it before. Like Einstein, I prefer to call E/c^2 the “inertia” of light because that’s the conceptual breakthrough which resolved the original pesky factor of 4/3 that kept appearing in Lorentz’s derivation of “E = mc^2”:

      And because the em-mass depends on the em-energy, the formula for the energy-mass-relation given by Thomson (1893) and Wien (1900) was m = (4/3)E/c^2 (Abraham and Lorentz used similar expressions). Wien stated that if it is assumed that gravitation is an electromagnetic effect too, then there has to be a proportionality between em-energy, inertial mass and gravitational mass. However, it was not recognized that energy can transport inertia from one body to another and that mass can be converted into energy, which was explained by Einstein’s mass–energy equivalence.

      The idea of an electromagnetic nature of matter had to be given up, however, in the course of the development of relativistic mechanics. Abraham (1904) argued (as described in the preceding section on the Lorentz transformation) that non-electrical binding forces were necessary within Lorentz’s electron model. But Abraham also noted that different results occurred, depending on whether the em-mass is calculated from the energy or from the momentum. To solve those problems, Poincaré in 1905 and 1906 introduced some sort of pressure of non-electrical nature, which contributes the amount −(1/3)E/c^2 to the energy of the bodies, and therefore explains the 4/3-factor in the expression for the electromagnetic mass-energy relation.

      It also happens to explain the paradox discovered by Poincaré regarding conservation of momentum in different frames when using Lorentz transformations to transform between inertial reference frames.

      In other words, the reason Einstein is a household name but Lorentz is known only to scientists can be traced back to the fact that Einstein recognized that light has inertia.

      You’re quite wrong that it was an expected result – read, and look at the number of times they kept trying to get the experiment to “succeed”.

      First, I just said that as the story goes, Einstein based special relativity on his daydreams and pre-existing problems with the aether. That’s the official story, but I find it hard to believe that Einstein really didn’t think about the Michelson-Morley experiment during this process. However, that’s what Einstein claimed and the story is at least remotely plausible given his stratospheric genius.

      Second, I didn’t say Michelson and Morley expected it. I just said Einstein claimed to not find it surprising or informative in his development of special relativity, and that aether timeline I linked should provide convincing evidence that the aether had already been shown to be logically inconsistent long before Michelson and Morley started their work. That same timeline also contains a handy list of all Michelson and Morley’s experiments, but thanks for the redundant link.

      • That’s the official story, but I find it hard to believe that Einstein really didn’t think about the Michelson-Morley experiment during this process. However, that’s what Einstein claimed and the story is at least remotely plausible given his stratospheric genius.

        There’s people on both sides of the issue. Some say he wasn’t aware, other’s that he was aware, still others that he was peripherally aware but relied more on his gedunkenexperiment.

        Personally, it doesn’t bother me that much, since either way Einstein had the genius to derive special relativity when other people were trying to figure out the experimental error in the M-M experiment. Really though, I think the overarching point here (as Kuhn makes) is that science can be wrong, but (if given enough evidence) will paradigm shift to a better, more accurate model.

        Naturally I’m sure you are going to mis-summarize this entirely reasonable statement of mine as saying that scientists are all lying liars who lie a lot, like my response to Vshael.

      • Actually, I’m ignoring it on dumbscientist altogether.

      • Actually, I’m ignoring it on dumbscientist altogether.

        Naturally.

        Wouldn’t want to make yourself look bad, like last time.

        Shaka: Here’s four thesis statements, some of which I’ve never raised before. Please agree or disagree.
        Khayman: I already have answered them!
        Shaka: No you haven’t
        Khayman: I don’t have time to be your tutor!
        Shaka: Just answer yes or no to each of them.
        Khayman: I have a paper to write!
        Shaka: You’d save time by just answering yes or no
        Khayman: I’ve referenced 50 papers which contains my answer somewhere!
        Shaka: What’s your thesis statement? Answer yes or no to each, and why.
        Khayman: I’ve already answered them and I hate repeating myself! …and reading back through your Dumb Scientist blog reveals exactly that – that you never answered any of the questions before, and only one after.

        Thanks for making the record. You sounded just like the Creationists who get really evasive when pressed to explain some of their answers. In fact, saying that they don’t have time to educate people is one of their favorite lines.

      • Actually, it’s because you bore me when you stop talking about physics. Also, you previously seemed to want me to stop copying your statements. But I’m copying all of these right now.

    • (Ed. note: This comment was copied from here.)

      … I’ve thought a bit more about that question of if photons of light have inertia. While they do have momentum, I am more convinced now that they don’t have inertia. One can define inertia as the resistance of mass to changes in velocity, right? The root cause of this classic Newtonian mechanic is the interaction of objects with the Higgs field, right? That’s what grants particles inertia. But photons do not interact with the Higgs field, so they don’t have inertia. [ShakaUVM]

      I’ve never taken graduate-level elementary particle physics, so I don’t know much about the Higgs field. My classmates who have taken those classes and moved on to work at the LHC tell me that most theorists consider the discovery of the Higgs boson to be very likely.

      Personally, I’m not sure how to rule out the notion that inertia is caused by viewing zero point energy in an accelerating reference frame. I’m sure the Higgs field really is more likely to be the cause of inertia, but right now I don’t have enough time to wade through the relevant literature to learn why.

      Anyway, you’re right to say that the “inertial mass” of an object can be measured by placing it in a container and determining how much force is necessary to accelerate the container. Or, rather, how much extra force is necessary compared to experiments performed when the container is empty.

      Now imagine a one-dimensional container with perfectly reflective inner walls. I claim that if this container is filled with photons having total energy E, then more force would be needed to accelerate the container after filling it. More precisely, the experiment would show that the container has an extra “inertial mass” E/c^2 compared to its empty state.

      Here’s why.

      If the container isn’t accelerating, the trapped photons will exert equal pressure on both walls of the container as they’re reflected back and forth, just as with solar sails. Accelerating the container, though, will cause the mirror on the bottom to reflect those photons more often and with more force than the mirror on the top. Thus on average the bottom mirror will experience more pressure than the top mirror, and this pressure asymmetry will mimic an “inertial mass” of E/c^2.

      In fact, I think any method of measuring inertial mass would conclude that photons have inertia. That’s because active gravitational mass in general relativity is defined by the stress-energy tensor, which includes the energy (and momentum) in electromagnetic fields. Active and passive gravitational masses must be equal to conserve momentum, and the equivalence principle says that passive gravitational mass equals inertial mass.

      In other words, the container curves spacetime more when it’s filled with photons. Therefore its gravitational mass has increased, and via the equivalence principle so has its inertial mass.

      This depends on my interpretation of the equivalence principle (and the principle itself) being correct. It also implies that pressure has inertia, because pressure contributes to the stress-energy tensor. Interestingly, that implies tension has negative inertia because tension is just negative pressure. Greg Egan uses this concept masterfully in a short story called Hot Rock, which is set in the same universe as Glory, Riding the Crocodile, and Incandescence.

      Update: Someone disagrees that tension has negative inertia.
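
      For a sense of scale, here’s a minimal numeric sketch of how little inertial mass trapped light would add, assuming only delta_m = E/c^2:

      ```python
      # Extra inertial mass of a mirrored container filled with light of total
      # energy E, per the equivalence-principle argument above: delta_m = E/c^2.
      c = 2.998e8   # speed of light, m/s

      for E in (1.0, 1e6, 4e26):   # 1 J, 1 MJ, roughly one second of solar output
          print(f"E = {E:.1e} J  ->  delta_m = {E / c**2:.2e} kg")
      # Even 1 J of trapped light adds only ~1.1e-17 kg, which is why nobody has
      # weighed this effect directly in a lab.
      ```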

    • I no longer think the “inertial mass” of a container full of photons will be due to more frequent collisions with the bottom mirror. After further thought, that’s not true. But the second argument still seems compelling, so I wonder how the effect would show up in a literal container experiment. Perhaps the bottom mirror reflects a slightly blueshifted photon, and the top mirror reflects a slightly redshifted photon? This might explain the pressure asymmetry, but it depends on quantum mechanics to show that the momentum change of reflecting a photon depends on the wavelength. I was hoping to come up with an explanation that depended only on special relativity, but no such luck.
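
      Update: Here’s a rough equivalence-principle version of that blueshift bookkeeping. It’s only a sketch: I’m treating the accelerating container as sitting in a uniform field g and sweeping factors of order unity under the rug.

      ```latex
      % Sketch: weight of trapped light via gravitational blueshift.
      % A photon with energy E_t at the top mirror falls a height L to the
      % bottom mirror, arriving blueshifted:  E_b \approx E_t (1 + gL/c^2).
      % Each reflection transfers momentum 2E/c, and a round trip takes ~2L/c:
      \[
      F_{\text{net}} \;\approx\; \frac{2E_b/c - 2E_t/c}{2L/c}
        \;=\; \frac{E_b - E_t}{L}
        \;\approx\; \frac{E_t\, gL/c^2}{L}
        \;=\; \frac{E}{c^2}\, g ,
      \]
      % which is exactly the weight of an object with inertial mass E/c^2.
      ```

      If that bookkeeping holds up, the asymmetry needs nothing beyond p = E/c and the equivalence principle, so special relativity (plus the equivalence principle) may suffice after all.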

    • Well, even though breathless science journalists report that the Higgs Field is “source of mass”, it’s not the only one. As you pointed out, E=MC^2, so any energetic entity has a gravitational mass under relativity. Nuclei of atoms get a significant fraction of their apparent mass from the nuclear binding energy from the strong force.

      I think the Higgs Field is better described as the “source of inertia”, as opposed to the source of mass, as the mechanism by which it operates is basically what we think of as the Newtonian definition of inertia at normal scales – particles that interact with it are resistant to changes in their velocity. Since photons do not interact with it at all, that’s why I was saying that you could argue that photons do not have inertia (on top of the fact that you cannot apply a force to them to change their velocity).

      It’s really just a semantic argument, though.

    • Nuclei of atoms get a significant fraction of their apparent mass from the nuclear binding energy from the strong force.

      It’s true that an individual nucleon (a proton or neutron) gets a significant fraction of its mass not from its constituent quarks but from the strong force. For example, a proton’s mass is 938 MeV but its two up quarks and single down quark only sum to (at most) 12.4 MeV.

      But nuclei of atoms are actually slightly less massive than the individual neutrons and protons that comprise them. I believe this effect has a different sign than the binding of quarks into individual nucleons because quarks can’t ever be physically separated. It’s also much smaller; nickel (not iron, as I first wrote) has the most tightly bound nucleus, and its binding energy is only 8.8 MeV per nucleon. Though vastly larger than chemical energies, this is still less than one percent of the nucleon’s mass.
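
      Putting numbers on the “less than one percent” claim (masses rounded, so treat them as approximate):

      ```python
      # Binding energy per nucleon vs. the nucleon mass itself (values approximate).
      m_nucleon = 939.0      # average nucleon mass, MeV
      be_per_nucleon = 8.8   # binding energy per nucleon near the iron/nickel peak, MeV
      print(f"nuclear binding fraction: {be_per_nucleon / m_nucleon:.2%}")   # ~0.94%

      # Contrast with a single proton, where the current-quark masses are the tiny part:
      m_proton = 938.3   # MeV
      m_quarks = 12.4    # upper bound on the 2 up + 1 down current-quark masses, MeV
      print(f"quark mass fraction:      {m_quarks / m_proton:.2%}")          # ~1.3%
      ```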

      As you pointed out, E = mc^2, so any energetic entity has a gravitational mass under relativity.

      … and thus also has an inertial mass through the equivalence principle. My failure to explain how the extra inertial mass of E/c^2 manifests in the container experiment doesn’t invalidate the equivalence principle.

      If light didn’t exhibit inertial mass in that sense, the container would be a magical fuel tank for a photon rocket. Most of the constraints of relativistic travel are related to the need to accelerate the reaction mass in the fuel tank. If the photons can simply be stored in a mirrored container and accelerated for free until they’re allowed to escape out the back of the rocket, that would constitute an unbelievably efficient space drive. And I mean “unbelievably” quite literally here.

      Update [2012-07-24]: Wikipedia discusses photons in a container: “In a similar manner, even photons (light quanta), if trapped in a container space (as a photon gas or thermal radiation), would contribute a mass associated with their energy to the container. Such an extra mass, in theory, could be weighed in the same way as any other type of rest mass. This is true in special relativity theory, even though individually, photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum, is one of the characteristic and notable consequences of relativity. It has no counterpart in classical Newtonian physics, in which radiation, light, heat, and kinetic energy never exhibit weighable mass under any circumstances. … As noted above, even if a box of ideal mirrors ‘contains’ light, then the individually massless photons still contribute to the total mass of the box, by the amount of their energy divided by c^2.” Hans and Puri (Mechanics, 2nd ed., Example 12.18 on page 433) also discuss photons in a container, and note that “… a rest frame does not exist for single photons, or rays of light moving in one direction. When two or more photons move in different directions, however, a center of mass frame (or ‘rest frame’ if the system is bound) exists. Thus, the mass of a system of several photons moving in different directions is positive, which means that an invariant mass exists for this system even though it does not exist for each photon.”

      I think the Higgs Field is better described as the “source of inertia”, as opposed to the source of mass

      Photons don’t interact with the Higgs, and photons certainly have zero rest mass. Most of the introductory material I’ve read seems to use that terminology, but again I’m not a particle physicist.

      Since photons do not interact with it at all, that’s why I was saying that you could argue that photons do not have inertia (on top of the fact that you cannot apply a force to them to change their velocity).

      Velocity is a vector, so gravitational lensing and reflection are both examples of changing the velocity of light. Refraction is an example of changing the direction and the speed of light. In all these cases, equal and opposite reactions occur but are simply too small to observe. The sun is gravitationally attracted to a beam of light that it deflects, a solar sail experiences a force, and the material interface that refracts the light really does experience a force as it deflects and slows down each photon.

    • It turns out nickel is most tightly bound. News to me.

    • (Ed. note: These comments were copied from here.)

      For example, a proton’s mass is 938 MeV but its two up quarks and single down quark only sum to (at most) 12.4 MeV. [Dumb Scientist, 2012-07-06]

      Wouldn’t that mean that the binding force of the nucleons were negative? And wouldn’t that mean that they should fly apart?

    • The strong force between quarks in each nucleon isn’t repulsive, it’s just contrary to human intuition that’s heavily influenced by gravity and electromagnetism. Electromagnetism can be repulsive, but it and gravity can both be approximated by inverse square laws that decay to zero at infinite range. The force between nucleons also decays rapidly with distance, but it’s the residual of the true strong force between quarks, in the same way that van der Waals forces are residuals of the much stronger attraction between the atoms’ protons and electrons.

      The true strong force between quarks is more like a rubber band, which exerts a stronger force as the quarks move apart. This rubber band “breaks” when you put as much energy into separating them as it would take to create a new quark next to each one, and that’s exactly what happens. They can’t spontaneously fly apart without violating conservation of energy. Protons are stable (for at least ~10^30 years), and even though a free neutron decays into a proton within about 15 minutes, its quarks don’t fly apart. Instead, one down quark beta decays into an up quark by emitting an electron and an electron anti-neutrino.
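
      The rubber-band picture has a standard phenomenological form, the Cornell potential. Here’s a sketch using typical textbook-style parameters; the exact values vary from fit to fit, so they’re only illustrative:

      ```python
      # Cornell potential for a quark-antiquark pair: Coulomb-like at short range,
      # linearly confining ("rubber band") at long range.
      #   V(r) = -(4/3) * alpha_s * hbar_c / r + sigma * r
      alpha_s = 0.3      # strong coupling at hadronic scales (illustrative)
      hbar_c = 0.1973    # GeV * fm
      sigma = 0.9        # string tension, roughly 0.9 GeV/fm (typical fit value)

      def V(r_fm):
          return -(4.0 / 3.0) * alpha_s * hbar_c / r_fm + sigma * r_fm   # GeV

      for r in (0.1, 0.5, 1.0, 2.0):   # separations in fm
          print(f"r = {r:3.1f} fm   V = {V(r):+.2f} GeV")
      # At large r the linear term dominates: each extra fm of separation costs
      # ~0.9 GeV, so it's cheaper to create a new quark-antiquark pair instead.
      ```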

    • I get that. What I don’t get is: If it takes energy to remove quarks from one another, why do nucleons have higher masses than the sum of the masses of the quarks? From E=mc^2, I would assume that the heavier state had the higher potential energy, so that a higher-mass state cannot be a bound state. Why is this not the case for quarks?

    • We usually regard two particles to be in a bound state if they remain “close” to each other, but that’s only because we’re used to forces that decay to zero at infinite range. Particles that are infinitely far from each other don’t experience gravitational or electromagnetic attraction, so they’re regarded as free particles. The strong force is completely different, because it gets weaker as quarks are brought together. Thus quarks are “more free” the closer they are to each other; this is called asymptotic freedom.

      The force between nucleons is more intuitive because the gluons that mediate the strong force between the (not literal) red, blue or green color charges of quarks are confined inside each nucleon. Our intuition is based on electromagnetism, which is mediated by electrically neutral photons, but gluons aren’t color neutral, so they interact with each other in strange ways. The residual strong force is instead mediated by color-neutral composite particles called pions, which leak from each nucleon. As a result, the binding energies of atomic nuclei are easy to explain, but those of individual nucleons are counter-intuitive in many ways.

      Please note that I’m not a particle physicist or an expert in quantum chromodynamics, so my interpretation and explanation should be taken with a huge grain of salt.

      Also, I noticed your conversation with dudpixel, and tried to explain that scientists aren’t just assuming that nuclear decay rates are constant.
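
      Update: Since asymptotic freedom is the counter-intuitive part, here’s a sketch of the leading-order (one-loop) running of the strong coupling. The flavor count and Lambda are illustrative choices, not precision values:

      ```python
      import math

      # One-loop running of the strong coupling:
      #   alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2))
      # The coupling shrinks at high energy (short distance): asymptotic freedom.
      n_f = 5     # active quark flavors (illustrative)
      Lam = 0.2   # QCD scale, GeV (approximate)

      def alpha_s(Q):   # Q in GeV
          return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q**2 / Lam**2))

      for Q in (1.0, 10.0, 91.2, 1000.0):   # 91.2 GeV is the Z boson mass
          print(f"Q = {Q:7.1f} GeV   alpha_s ~ {alpha_s(Q):.3f}")
      # Roughly 0.5 at 1 GeV but only ~0.13 at the Z mass: quarks that are close
      # together barely interact, while widely separated quarks are confined.
      ```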

    • I see. That is interesting, and kind of mind-blowing. I thought forces with massive bosons had shorter reaches, decaying faster than the square of the distance, but that must be wrong.

    • The residual strong force is mediated by massive bosons called pions, which are composed of gluons and a color-neutral quark-anti-quark pair. They mass ~135 MeV, far less than 2/3 of a proton’s 938 MeV, which is what naive quark-counting would predict given that a pion has two quarks to a proton’s three. This seems like a clue that quark masses don’t significantly affect the masses of hadrons like baryons (nucleons, etc.) and mesons (pions, etc.).

      Or you could be referring to the weak interaction (responsible for nuclear decay) which is short range because it’s mediated by massive W and Z bosons. That’s true, but it’s a totally different fundamental force.

      The gluons which mediate the strong force are actually massless. They contribute to each nucleon’s mass-energy through their interactions with the three valence quarks and with each other.

    • Well, only the beta decay, but yes, that must have been what I had heard about. And then extrapolated to the residual strong force, which is also short range, as you write.

      This has an analog in the electromagnetic force, where dipole-dipole interactions have much shorter range than ion-ion interactions. IIRC, they fall off proportional to the fourth power of the distance instead of the second. I have no idea whether the phenomena are truly analogous, though.

    • The analogy is useful, but note that ion-ion and dipole-dipole interactions are both mediated by massless, electrically-neutral photons. The strong force is mediated by massless, color-charged gluons, while the residual strong nuclear force is mediated by massive, color-neutral pions. Bigger difference.
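
      The range argument can be made quantitative: a mediator of mass m gives a Yukawa potential whose range is the mediator’s reduced Compton wavelength. A rough sketch, with masses from memory:

      ```python
      # Range of a force mediated by a boson of mass m: the Yukawa potential
      #   V(r) ~ exp(-r/lam) / r,   with   lam = hbar*c / (m c^2)
      hbar_c = 197.3   # MeV * fm

      for name, m_MeV in (("pion (residual strong force)", 135.0),
                          ("W boson (weak force)", 80400.0)):
          lam = hbar_c / m_MeV
          print(f"{name:30s} range ~ {lam:.4f} fm")
      # Pions give the nuclear force its ~1.5 fm range, while the W's huge mass
      # makes the weak interaction essentially contact-like (~0.0025 fm).
      ```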

    • The idea of (color-)charged force mediators is weird, I keep forgetting it.

      My brain really wants the dipole analogy to hold up, to the point where I keep wondering whether dipole-dipole interactions can be described with massive pseudo-particles.

    • Probably not, but I also find charged mediators baffling. For instance, I couldn’t stop thinking about baryon masses, and quickly went down a rabbit hole. Asymptotic freedom implies that a proton would lose ~98% of its mass if its quarks could all be in the exact same spot. That’s a lot of energy; what keeps the quarks apart?

      I clung to a familiar analogy: atoms don’t collapse because of Heisenberg uncertainty and Pauli exclusion. Quarks are fermions, but the down quark isn’t subject to the exclusion principle because it can be distinguished from the up quarks. Undergraduates explore the structure of a hydrogen atom by solving the (comparatively simple) non-relativistic Schrödinger equation for the two-body electromagnetic interaction. On the other hand, understanding a proton’s structure requires a full relativistic treatment of a three-body interaction involving color charge, flavor, spin, and electric charge. Scary.

      This was an excellent excuse to open David Griffiths’ “Introduction to Elementary Particles” for the first time, and skip to page 180 to read about baryon wave functions. Conclusion: I really need to take a class on elementary particle physics!

      But anyway, here’s a null result which may be interesting. I wondered how much baryon masses depend on the electromagnetic repulsion of the two up(down) quarks in a proton(neutron). Qualitatively, it seems like up quarks should repel each other electromagnetically four times as much as down quarks do (twice the charge, and the force scales as the charge squared), so protons should be bigger than neutrons. Bigger should mean more massive, especially because the strong interaction becomes stronger as the quarks are separated. Since protons are actually ~0.1% less massive than neutrons, it seems like electromagnetism doesn’t play a significant role in determining baryon masses.
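
      For the record, here’s the arithmetic behind that null result; the 4:1 ratio follows from the quark charges, and the masses are the measured values:

      ```python
      # Like-charge repulsion scales as charge squared:
      q_up, q_down = 2.0 / 3.0, -1.0 / 3.0
      print(f"uu vs dd repulsion ratio: {q_up**2 / q_down**2:.0f} : 1")   # 4 : 1

      # Yet the proton (uud) is measured to be *lighter* than the neutron (udd):
      m_p, m_n = 938.272, 939.565   # MeV
      print(f"(m_n - m_p) / m_p = {(m_n - m_p) / m_p:.2%}")               # ~0.14%
      # Opposite sign from the naive electromagnetic expectation, so EM repulsion
      # clearly isn't what sets baryon masses.
      ```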

    • (Ed. note: This comment was copied from here and started with this conversation.)

      Excellent comment. Just to take it further, active gravitational mass in general relativity is defined by the stress-energy tensor, which also has pressure components. That implies tension has negative gravitational and inertial mass because tension is just negative pressure. Greg Egan uses this concept masterfully in a short story called Hot Rock, which is set in the same universe as Glory, Riding the Crocodile, and Incandescence. [Dumb Scientist, 2012-07-05]

      The pressure of a ball of hot gas (like the sun) is a significant part of gravity because each atom and electron whizzing around and banging into things inside of it has an energy component which strengthens the gravitational field (this is especially true in very hot areas, like stars, where particles are moving very fast).

      The energy comes from the kinetic energy, or the momentum. The kinetic energy equation is e = 0.5mv^2 (does that equation remind you of anything?).

      Now, while a colder chunk of matter might have less kinetic energy, the MASS energy (e=mc^2) is still pretty damn huge, and more than enough to have normal gravitational effects. In addition, even if a particle achieves a negative velocity, by moving slower than the expansion rate of the universe, it probably STILL won’t have any negative kinetic energy. Why? Kinetic energy equals half the mass, times THE SQUARE of the velocity. The square of any number (negative or positive) is always a positive number.

    • My response was moved here.

  15. (Ed. note: these comments were copied from a conversation about a map of dark matter from Hubble. This comment was actually posted on 2010-03-28 at 18:10. Earlier, I only copied the few bits related to science, but then Jane accused me of distorting and cherry-picking her statements to make her look unreasonable. I still think this wastes my readers’ time, but they can judge for themselves. Anyone else who’s bored by this can skip ahead to more science.)

    Nice pretty picture… especially when you consider it’s a picture of something that very possibly doesn’t even exist.

    There isn’t any “scale bar” because you are not looking at something at any fixed distance! You are looking at (theoretically) blobs of stuff at various distances.

    • …especially when you consider it’s a picture of something that very possibly doesn’t even exist.

      It exists. Educate yourself.

    • It exists.

      What are its properties then? All we’ve observed are anomalies in gravity and space-time. Positing some mysterious form of invisible matter isn’t an explanation at all – all we still know is that there’s these weird things going on with gravity and spacetime, and that our current theories are incomplete.

    • Maybe you should educate yourself.

      You present as evidence exactly the same sort of imaging techniques that were used to make the image in question? That’s really lame.

      That’s like trying to prove that a photograph of a ghost is real by producing more photographs of the ghost. Hint: it doesn’t work.

      There are alternative theories, such as MoND, that might explain this (since it explains the apparent gravitational anomalies in spiral galaxies, it is possible that it could explain this kind of gravitational anomaly equally as well). Most “evidence” I have seen that is supposed to support the Dark Matter theory tends to also support the alternative theories. So while the “evidence” does indicate that something strange is going on, exactly which strange thing it may be is far from decided.

    • I would like to add that, as some have mentioned above, there is recent evidence that appears to support the dark matter theory over the alternatives, but there is also evidence even more recently that the other evidence was in error. So… all I am saying is that this “debate” is not yet decided. Wait and see.

    • You present as evidence exactly the same sort of imaging techniques that were used to make the image in question? That’s really lame.

      No it isn’t. It’s incontrovertible evidence. Or do you have proof that the methodology is faulty (despite the results agreeing with other sorts of data, such as WMAP)?

      There are alternative theories, such as MoND, that might explain this

      No, there aren’t. Even MOND proponents have admitted that *some* form of weakly interacting matter is *required* to explain the galaxy collision results. Seriously, get your learn on, your information is *way* out of date.

      Most “evidence” I have seen that is supposed to support the Dark Matter theory tends to also support the alternative theories.

      No, it really doesn’t. Seriously. Learn a little before making yourself sound like a fool. Here, I’ll even help you:

      The leading relativistic MOND theory, proposed by Jacob Bekenstein in 2004 is called TeVeS for Tensor-Vector-Scalar and solves many of the problems of earlier attempts. However, a study in August 2006 reported an observation of a pair of colliding galaxy clusters whose behavior, it was claimed, was not compatible with any current modified gravity theories.[27] In 2007, John W. Moffat proposed a theory of modified gravity (MOG) based on the Nonsymmetric Gravitational Theory (NGT) that claims to account for the behavior of colliding galaxies.[59] This theory still requires the presence of non-relativistic neutrinos (another candidate for (cold) dark matter) to work.

    • Nothing is incontrovertible. That’s one of those words that, when you hear it, you should probably run away.

      If you had bothered to read my other comments, you might have found out something yourself.

      And yes, other than this, which does appear (without having researched it fully) to support the existence of dark matter, most of the evidence I have seen that was supposedly “evidence of dark matter” was pretty equally evidence of some of the alternative theories. Just how much do you know about the evidence I have seen anyway? You seem to believe you are an expert on the matter.

    • If you had bothered to read my other comments, you might have found out something yourself.

      I’ve read those comments. None of them appear to provide any evidence to either invalidate these and related findings, or provide a mechanism by which MOND or other theories could be altered to fit current observations.

      Just how much do you know about the evidence I have seen anyway? You seem to believe you are an expert on the matter.

      Well, judging by your comments, you certainly are. So, please, explain to me how the Bullet Cluster and galaxy collision results (among other things) can be explained by MOND. And once you’ve done that, please, publish a paper, as I’m sure the physics community will be very interested to hear about your findings given that actual experts on the topic can’t seem to manage it.

      • You still aren’t getting the point, are you? Hmph. Well, I will try again: I was referring to evidence that I HAVE SEEN. Perhaps there is lots of other evidence. You seem to believe so. But that’s beside the point. And yes, in fact I do know quite a bit about the evidence that I have seen.

        Your supposed expertise in physics aside, you seem to need reading and logic lessons.

      • And yet, for all that you know about the evidence that you have seen, you still haven’t explained how MOND or other alternatives to dark matter can explain the results that I’ve cited, results which are accepted by most of the physics community as supporting dark matter while ruling out MOND. I mean, it’s not like you can just cherrypick the results to fit your theory. You gotta explain them all.

        So please, if you have citations or other references which explain how MOND can be altered to fit the latest results (the galaxy collision results, in particular, are difficult for MOND, as they show weak lensing where no visible matter is present, which is tough to explain when the basis of your theory is to alter how gravity works, given that gravity requires, you know, matter), I’d be curious to see them, because, at least AFAICT, it’s just not possible unless you allow for *some* form of dark matter.

      • My, you do go on, don’t you?

        If you read more of my statements, not just the one you have been obsessed with, you will see that I was not saying that MoND could explain this… simply that (I am repeating myself here) most of the evidence that I have seen does support alternate theories at least as well as dark matter. I have already stated (as you would know, if you had actually been reading) that it might not apply here.

        All I have said here, given the whole of my statements, is that — contrary to your opinion — I do not believe that “the jury is in” quite yet. But without contrary evidence I grant you — for the moment — this evidence. But in my experience the attitude that “the jury is in” and all is decided is more often mistaken than not.

        Are you satisfied? Or are you still intent on arguing about nothing?

        Also, at least at first, you misunderstood my comments about using one picture to verify the other. I wasn’t disputing the physics, I was simply commenting that you can’t use a given technique to validate itself. As an analogy, suppose you were using a particular technique involving photographic film to capture images from radiation, and somebody disputed the validity of your technique. Well, you can’t just go take a bunch more pictures using that same technique, getting the same results, and call that evidence of its validity. Rather, the validity has to be determined experimentally or at the very least inductively.

        Therefore, your one picture does not validate the technique used to generate the other, as you appeared to be claiming. And there is not as yet enough other evidence to verify it either experimentally or inductively.

        You asked me if I could prove that the methodology was faulty, but that is disingenuous. The fact is that (regarding just the pictures) your “evidence” was next to meaningless as evidence that it wasn’t.

        Make no mistake. I am not disputing any physics here. All I am saying is that the one picture is not evidence that the other one is valid… even though I think they probably are.

    • Oops… I forgot a couple of things:

      No it isn’t. It’s incontrovertible evidence. Or do you have proof that the methodology is faulty (despite the results agreeing with other sorts of data, such as WMAP)?

      It is nothing of the sort. You may know a lot about current physics but you obviously know squat about logic and evidence. You offered the wikipedia link as evidence that OP’s picture is correct… yet they used exactly the same techniques (having to do with gravitational lensing) to produce the pictures. Therefore you can’t use one as evidence of the veracity of the other. That’s just plain stupid. For someone as smart as you present yourself to be, how can you present that as “evidence” of the veracity of the other picture (which is what I was referring to), and call that “incontrovertible”? It really is a downright stupid thing to do.

      I suppose it could be that you were confusing that with how “incontrovertible” you see the evidence for dark matter to be… but those are two different things. Regardless, the fact is that in science, it is usually those who go around claiming things like “incontrovertible” that end up eating their words.

      Since you appear to have set yourself up here as knowledgeable about the subject, I am curious about something: since dark matter has to be strongly interacting enough to account for the anomalies observed in the rotation of galaxies (which is why it is presumed to be the majority of the gravitationally-interacting mass in the universe), how does that fit with the gravitation that we observe locally, in our solar system? And please don’t try to tell me that the gravitational interaction is too weak to notice on that scale, because if it did, then it would not have the requisite effect on galaxies, either. (If it interacted on weakly gravitationally, then there would have to be even more of it to have the observed large-scale effects, so that’s not an answer.)

    • … since dark matter has to be strongly interacting enough to account for the anomalies observed in the rotation of galaxies (which is why it is presumed to be the majority of the gravitationally-interacting mass in the universe), how does that fit with the gravitation that we observe locally, in our solar system?

      The key is that most dark matter candidates don’t strongly interact, as shown by the Bullet Cluster observations. In fact, that’s what WIMP stands for: Weakly Interacting Massive Particle. Since dark matter interacts essentially only via gravity (and perhaps the weak force), it can’t dissipate energy by radiating the way ordinary gas does, so it only clumps together on the largest scales of galaxies; at the scale of individual solar systems most WIMP candidates should have a fairly uniform density. Depending on whether we’re dealing with cold or hot dark matter, clumping may occur at scales smaller than galaxies, but to the best of my knowledge this hasn’t been confirmed experimentally.

      As a result, there’s no net gravitational effect on scales at which dark matter is essentially uniformly dense. This conclusion follows from reasoning very similar to the famous sophomore-level physics problem where one proves that gravity inside a hollow uniform sphere is identically zero.

      Update: I reworded this comment about a week later.
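
      Update: Here’s a back-of-the-envelope sketch of just how little dark matter a solar system contains, using the commonly quoted (and still debated) local density:

      ```python
      import math

      # Dark matter inside Earth's orbit, assuming the commonly quoted local
      # density of ~0.4 GeV/cm^3 (an uncertain, illustrative number).
      GeV_per_c2_in_kg = 1.783e-27           # kg per GeV/c^2
      rho = 0.4 * GeV_per_c2_in_kg * 1e6     # GeV/cm^3 -> kg/m^3, ~7e-22 kg/m^3

      AU = 1.496e11      # m
      M_sun = 1.989e30   # kg

      M_dm = rho * (4.0 / 3.0) * math.pi * AU**3
      print(f"dark matter within 1 AU: {M_dm:.1e} kg")        # ~1e13 kg
      print(f"fraction of solar mass:  {M_dm / M_sun:.1e}")   # ~5e-18
      # Utterly negligible for planetary orbits. And by the shell theorem, the
      # nearly uniform dark matter *outside* an orbit exerts no net force at all.
      ```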

    • See the AC. He explains it very nicely, so I won’t bother.

      Honestly, how stupid do you think physicists are? They came up with the idea of dark matter *specifically* to deal with galactic rotational curves. You really don’t think they put a little thought into local effects? Please…

      This is just classic Slashdotter arrogance. You somehow think you’ve achieved a brilliant insight that people who’ve spent decades specializing in the field somehow failed to notice.

      • Sure, I can find out. And I will. But I asked you, and you couldn’t or wouldn’t answer.

      • By the way, I just wanted to mention, since you seem so adamant about this: I did not assert or even imply that I thought physicists would not have considered the issue. That would be stupid. I simply asked you a polite question.

    • Honestly, how stupid do you think physicists are?

      Aside from this nonsensical outburst, her sympathy for 9/11 truthers, GMO fearmongers and climate change contrarians probably means she’s wondering how physicists manage to walk and breathe simultaneously.

      She’s likely suffering from a case of the Dunning-Kruger effect. I can’t think of any other reason why she would write these arrogant posts without ever noting that “IANAP” like most honest slashdotters do. The fact that her original post is currently at +4 Insightful chills me to the bone: apparently some people can’t distinguish scientists from programmers who use “sciencey” buzzwords while regurgitating a “teach the controversy” argument on roughly the same intellectual level as the ones pioneered by tobacco companies and the Discovery Institute (e.g. the wedge document).

    • Hi, AC! I see you were modded down, and probably rightfully so, but I chose to answer anyway. Isn’t life grand? :0)

      Aside from this nonsensical outburst, her sympathy for 9/11 truthers, GMO fearmongers and climate change contrarians probably means she’s wondering how physicists manage to walk and breathe simultaneously.

      Hmm. First, you seem to have taken an inordinate interest in me. Why is that? Especially considering how often you seem to disagree. Is it possible I have an evil stalker? Why else would he hide under an AC?

      Second, a simple question is a “nonsensical outburst”? That’s interesting. I wonder how you would judge an actual rant. I see you are very emotional. About some of the silliest things, too.

      For example, that 9/11 issue you linked to? Guess what? All I said was that not all the evidence was nonsense. In response, the guy went on and on about a lot of evidence that — I am sure most all of use would agree — IS nonsense. Which had nothing to do with the statement I made, or the evidence I was referring to, at all. In fact he didn’t even wait to bother to find out what evidence I meant before attempting to attack it.

      In regard to the GMO study I cited there, he (ChromeAeonium) at first tried to claim that it was not a peer-reviewed paper. Without any evidence to indicate that, I might add. Later, he stated that he was not familiar with the journal at all. Which is an admission that his earlier statements were outright lies, and in fact he could have corrected that ignorance in a mere few seconds with Google. In fact, it was peer-reviewed paper, in a respected peer-reviewed publication. So it is up to you to refute that solid, peer-reviewed evidence on GMO corn, rather than snidely calling me “sympathetic to GMO fearmongers”. My evidence is real, where is yours?

      I could go on, but I won’t bother. If you don’t stop harassing me, however, eventually I will find out who you are, AC or not.

    • Actually, I think I already know, but we shall see. If I am correct, I have asked you politely before to leave me alone. But I am losing my patience and do not promise politeness in the future. Considering how often you have been wrong, I wouldn’t push it if I were you.

    • Hi, AC! I see you were modded down, and probably rightfully so, but I chose to answer anyway. Isn’t life grand? :0)

      Ditto.

      I simply asked you a polite question. … a simple question is a “nonsensical outburst”? That’s interesting. I wonder how you would judge an actual rant. …

      No, you didn’t “simply ask a polite question.” You went on a rude, nonsensical rant. Let’s review:

      Nice pretty picture… especially when you consider it’s a picture of something that very possibly doesn’t even exist. [Jane Q. Public, 2010-03-28]

      This comment echoes all the other crackpots in this thread who dispute modern physics, which clearly shows that some amount of some kind of dark matter exists.

      It exists. Educate yourself. [Abcd1234, 2010-03-28]

      Abcd1234 clearly assumed you were expressing support for MOND, and helpfully linked impressive evidence which convinced many physicists that dark matter explains the evidence better than MOND. Then I chimed in:

      … Measurements of galactic rotational velocities conflict with expected velocities based on the amount of matter observed to be present. … At this point, dark matter was simply an hypothesis. MOdified Newtonian Dynamics (MOND [umd.edu]) was another hypothesis with equal weight. But then in 2006 measurements of the Bullet Cluster supported the dark matter hypothesis over the MOND hypothesis … [Dumb Scientist, 2010-03-28]

      The next day:

      … There are alternative theories, such as MoND, that might explain this (since it explains the apparent gravitational anomalies in spiral galaxies, it is possible that it could explain this kind of gravitational anomaly equally as well). Most “evidence” I have seen that is supposed to support the Dark Matter theory tends to also support the alternative theories. … [Jane Q. Public, 2010-03-29]

      You present as evidence exactly the same sort of imaging techniques that were used to make the image in question? That’s really lame. … You offered the wikipedia link as evidence that OP’s picture is correct… yet they used exactly the same techniques (having to do with gravitational lensing) to produce the pictures. Therefore you can’t use one as evidence of the veracity of the other. [Jane Q. Public]

      Nonsense. You made what sounded like a general statement about dark matter “very possibly” not even existing, which implies that physicists who overwhelmingly do think it exists are either incompetent or suffering from frequent hallucinations. The only reasonable objection to dark matter was MOND, and the Bullet Cluster presented very strong evidence that seems to rule out MOND with no dark matter. It’s not “exactly the same technique” as the current survey because the main point of the Bullet Cluster observations (the 8 sigma angular separation of the x-ray and weak lensing peaks) was to distinguish dark matter from MOND. Now that physicists are satisfied with the evidence establishing the existence of dark matter, subsequent surveys of dark matter (like this one) are oriented towards determining its distribution.

      Abcd1234’s comment certainly wasn’t a “downright stupid thing to do” in response to someone claiming that dark matter “very possibly doesn’t even exist”. He pointed you to exactly the evidence you needed to see, and in response you repeatedly insulted him.

      I wasn’t disputing the physics, I was simply commenting that you can’t use a given technique to validate itself. … Make no mistake. I am not disputing any physics here. … I did not assert or even imply that I thought physicists would not have considered the issue. That would be stupid. … All I have said here, given the whole of my statements, is that — contrary to your opinion — I do not believe that “the jury is in” quite yet. But without contrary evidence I grant you — for the moment — this evidence. But in my experience the attitude that “the jury is in” and all is decided is more often mistaken than not. Are you satisfied? Or are you still intent on arguing about nothing? … [Jane Q. Public]

      Nonsense. You were clearly disputing the existence of dark matter when you told me that I “gotta keep up with the news” in my own field (thus implying that you thought you’d uncovered evidence that physicists hadn’t considered) when all I did was summarize facts that are well-established in the physics community. This happened after another physicist complained (justifiably) that my comment was too conciliatory to MOND.

      In fact, my comment was copied from a conversation I had with a young-earth creationist over a year ago. So your response to me clearly wasn’t about the “given technique.”

      You even appeared to direct Abcd1234 to that comment of yours several times:

      I would like to add that, as some have mentioned above, there is recent evidence that appears to support the dark matter theory over the alternatives, but there is also evidence even more recently that the other evidence was in error. [Jane Q. Public, 2010-03-29]

      If you had bothered to read my other comments, you might have found out something yourself. [Jane Q. Public, 2010-03-29]

      Of course (to your credit), you later admitted that this comment you were bragging about was mistaken and irrelevant. But this was the only comment where you even attempted to provide cited evidence. After that, your comments devolved into an even more nonsensical “teach the controversy” argument similar to those used by creationists to invent “controversies” regarding science that they’ve never studied at a professional level.

      For example, that 9/11 issue you linked to? Guess what? All I said was that not all the evidence was nonsense. In response, the guy went on and on about a lot of evidence that — I am sure most all of use would agree — IS nonsense. Which had nothing to do with the statement I made, or the evidence I was referring to, at all. In fact he didn’t even wait to bother to find out what evidence I meant before attempting to attack it.

      You said: “For example, some (but by no means all) of the ‘9/11 truthers’ (a very derogatory phrase) have some good evidence to cite. This is hardly something an area that is ‘unequivocally known’.” You never cited any of this “good evidence” and, as predicted, over a month later you still haven’t.

      In regard to the GMO study I cited there, he (ChromeAeonium) at first tried to claim that it was not a peer-reviewed paper. Without any evidence to indicate that, I might add. Later, he stated that he was not familiar with the journal at all. Which is an admission that his earlier statements were outright lies, and in fact he could have corrected that ignorance in a mere few seconds with Google. In fact, it was peer-reviewed paper, in a respected peer-reviewed publication. So it is up to you to refute that solid, peer-reviewed evidence on GMO corn, rather than snidely calling me “sympathetic to GMO fearmongers”. My evidence is real, where is yours?

      I’m honest enough to admit that I’m not a professional botanist nor do I have any graduate experience in horticulture (i.e. I’m not a plant person), so it’s not my evidence. ChromeAeonium explained this issue well in response to my comment, and he doesn’t deserve to be called an ignorant liar.

      I don’t have any problem admitting I am wrong, if I am convinced that I am wrong. Unfortunately, sometimes people are so taken up with their confidence in their own knowledge that they don’t read carefully, and think they are arguing about something that nobody is really arguing with them about. I have had that occur quite a few times here on slashdot. I may not be a physicist, but I do expect logic and reasoning when I am discussing an issue with somebody, and I don’t think physicists should be excused from that basic requirement merely because of their position. [Jane Q. Public, 2010-03-31]

      Yes, I’ve seen that happen to you quite a few times. Here’s one example:

      Climate science is in its infancy, as anyone who has been really following the “Global Warming” debate knows. Certainly we know the globe is warming, but the greenhouse gas aspect of it is still very much up in the air. Setting up a Climate Service today would be akin to setting up an Astrology Service. They would probably both give equally good advice. [Jane Q. Public]

      When I read this comment, it seemed like you were equating climate science with astrology. But when Dr. Spork pointed that out, you said:

      That is NOT what I wrote. I will thank you to not try to put words in my mouth. What I stated was that right now, given the state of the art, a government climate service would give advice about as good as astrology. What was your reading comprehension schore in school? [Jane Q. Public]

      When hey! drew a similar conclusion, you said:

      I did not compare it to astrology! Go read my damn statement again! Seriously, I think that some people on Slashdot have difficulty reading. I stated that a government agency would probably give advice about as good as astrology. My comment was as much about government as it was about anything else. [Jane Q. Public]

      This is pretty funny. It is pretty interesting to see how many people ASSUME that I was equating climate science with astrology. In fact, that is not what I wrote at all. Go look again. What I stated was that government advice would probably be as about as good as astrology. Man, a lot of people sure went to an awful lot of effort to refute something I never stated. Well, maybe that should be a lesson to read more carefully next time. [Jane Q. Public]

      Perhaps there’s an alternative to your hypothesis that “everyone but Jane Q. Public got a low reading comprehension schore in school.” For instance, Beezlebub33 made excellent points, but the obligatory XKCD beat him to the punch.

      You do make a good point, though. Your comment would only have equated climate science with astrology in some kind of bizarre alternate universe where NASA and NOAA are government agencies offering advice about climate change.

      I simply asked you a polite question. [Jane Q. Public, 2010-03-30]

      Really?

      … you obviously know squat about logic and evidence. … That’s just plain stupid. For someone as smart as you present yourself to be, how can you present that as “evidence” of the veracity of the other picture (which is what I was referring to), and call that “incontrovertible”? It really is a downright stupid thing to do. … [Jane Q. Public, 2010-03-29]

      … Your supposed expertise in physics aside, you seem to need reading and logic lessons. [Jane Q. Public, 2010-03-29]

      This word you’re using: “polite.” I do not think it means what you think it means.

      In contrast, here are some genuinely polite questions:

      • Tanuki64 admitted he wasn’t a physicist, and politely asked if normal matter in parallel universes could be responsible for gravity normally attributed to dark matter. I defended him, saying that the idea was interesting and linked to an account of my own investigation of that idea.
      • Jimmy admitted he wasn’t a physicist in his email to Dumb Scientist (removed for brevity’s sake on the website), and politely asked about a connection between dark energy and black hole radii. I decided to contact a cosmologist friend of mine to help answer his question. That article, incidentally, is where I admitted I wasn’t a theologian and asked a polite question about Origins. That same cosmologist answered those questions at my request because he’s also a priest in the Catholic Church, and used to work at the Vatican.
      • Thirdeye is a fellow physicist who works on experimental quantum computers, and he asked a series of polite questions about the climate. In the process of answering those questions, I encountered another mistake in the IPCC AR4 WG1 report. As documented in those comments, I emailed the IPCC on Saturday and they politely responded and fixed the mistake at 9am that Monday.
      • I admitted that I’m not a professional programmer and asked a polite question about source file management. Recently I received a very helpful response.
      • Dr. Vomact admitted he’s not a physicist, and asked a polite question about quantum entanglement and parallel universes. A very pleasant and enlightening conversation ensued.

      I keep naively expecting my conversations with the general public to be as productive as my conversations with other scientists. Fellow scientists are usually genuinely curious about the universe and politely ask insightful and challenging questions that sharpen my understanding of physics. In a feeble attempt to stave off my rapidly deepening cynicism about nonscientists, I’m going to pretend that our conversation ended like this:

      (Ed. note: the rest of this comment was already copied here.)

  16. (Ed. note: these comments were copied from a conversation about the discovery of the first dark matter filament between galaxy clusters. Don’t miss Jane’s lecture about the Higgs boson and Pro-feet’s responses.)

    “So in other words, they didn’t find anything other than a mathematical equation suggesting dark matter exists. Congratuations are in order indeed.” [slashmydots]

    Yes, I get a kick out of how that article, as well as the one on space.com linked to above, are both written under the assumption that we know “dark matter” exists… but we know no such thing. It is still a matter of much controversy (no pun intended).

    We have various theories to account for the observations. Among them the most popular of the string theories, which support the existence of dark matter. But on the other hand, there have been a number of recent findings that call “string theory” itself into strong question. Perhaps even rendering it invalid.

    It bothers me greatly when “science” magazines and “science” websites report questionable theories as though they were demonstrated beyond reasonable doubt. In the case of strings and dark matter, nothing of the sort is even remotely true.

    • String theory isn’t testable using current technology, but it’s largely unrelated to dark matter. On the other hand, we’ve already discussed some of the actual evidence for dark matter.

This new paper seems very rigorous (to my non-cosmologist eyes). Among other checks, they extensively searched parameter space to exclude the possibility that standard NFW dark matter halos were being mistaken for a filament. The nearly head-on alignment of these two galaxies is fortunate, and the authors deserve credit for noting that it improves the signal-to-noise ratio of the gravitational lensing signal.

    • Hi, Khayman80. Haven’t heard from you in a while.

      However, there has been much work being done on both “sides” of the matter, and I really don’t feel I have time to get into a detailed discussion of the matter right now. But there have recently been findings that seriously call string theory into question, and in turn, that somewhat weakens the arguments for dark matter.

      I’m not saying that anything is conclusive in either direction. But I sense the pendulum swinging…

    • “String theory isn’t testable using current technology, but it’s largely unrelated to dark matter.”

      Apologies, I did not read this quite right the first time, or I would have answered it.

      Yes, indeed, string theory is one of the pillars upon which dark matter theory is formed. It may be possible for it to exist without “strings”, but in most current models they are inextricably intertwined. I.e., string theory does not depend upon dark matter theory, but dark matter theory (most models, anyway) very much DO depend upon string theory.

      So anything that is evidence against string theory, is also an argument against MOST dark matter models as well.

    • Citation?

    • No, it was about 6 months ago, and I don’t have it right at hand. But I will say that if you haven’t heard about it, you haven’t been paying attention.

      I will look to see if I have a reference. It might take me a day or so. I am very busy with work and personal issues right now.

    • But on the other hand, there have been a number of recent findings that call “string theory” itself into strong question. Perhaps even rendering it invalid.

      No, there haven’t. In fact, string theory’s whole problem is that it’s really hard to find evidence that it’s wrong (i.e., is testably falsifiable), because it’s so flexible.

    • Pardon me. My mistake. I was confusing the fact that supersymmetry is an integral part of most versions of string hypotheses, with the idea that strings must exist for supersymmetry to exist.

      Mea culpa.

    • “No, they don’t. No competing theory (e.g., MOND) comes close to explaining the entire set of phenomena that dark matter can explain. At best, they’ll get one or two things right.” [Someone]

      You are confused. These reports of “evidence” of dark matter have invariably involved only a single property of the hypothesized matter. In each case, there has also invariable been one or more competing theories that equally well explain that one phenomenon.

      They only HAVE TO get one or two things right, if you’re only talking about one thing.

      • No, you’re confused. I’m talking about the fact that dark matter explains aspects of galactic rotation curves, cluster structure, angular spectrum of the CMBR, supernova distance measurements, and so on. Any particular study is only looking at one of these aspects, but the point is that dark matter accounts for all these different phenomena simultaneously. No other theory does that. For that matter, no other collection of theories does that (if you want to appeal to different explanations for each phenomenon).

    • Pardon me. My mistake. I was confusing the fact that supersymmetry is an integral part of most versions of string hypotheses, with the idea that strings must exist for supersymmetry to exist. Mea culpa. [Jane Q. Public, 2012-07-07]

      You’ve demonstrated true scientific spirit here by admitting a mistake. Kudos.

      However, serious deja vu kicked in just 5 minutes later when you continued to imply that the astrophysicists who overwhelmingly consider dark matter more plausible than MOND are “confused”.

      Instead of apologizing, could you please stop spamming humanity with all of this multi-disciplinary misinformation? You may find that doing so reduces your need to apologize in the future.

      Incidentally, I was asking for a citation supporting your claim that “string theory is one of the pillars upon which dark matter theory is formed.” Your supersymmetry confusion seems completely irrelevant, but here’s a comment posted 2 hours before yours:

      That’s wrong. Probably the two leading candidates for dark matter are axions and neutralinos, neither of which require string theory. (The latter requires supersymmetry, which is part of string theory, but there are plenty of supersymmetric field theories that aren’t string theory.) [Someone]

      Notice that he was saying even supersymmetry isn’t really related to the existence of dark matter. Axions aren’t related to supersymmetry. None of this affects your claim that “string theory is one of the pillars upon which dark matter theory is formed” because it’s still wrong even if you replace string theory with supersymmetry. To me, this seems like the usual chaff intended to distract readers from noticing the increasingly vague nature of your statements.

      Nonsense. It [dark matter] is a “questionable” theory because it is questionable whether it actually deserves the moniker “theory” at all. A theory must be testable. So far, no solid grounds for testing have been established. Yes, you have sensational articles saying it has “been found”, but they invariably neglect to mention that certain other theories fit the observed phenomema approximately as well. In that vein, dark matter, dark energy, and string “theory” are ALL struggling to maintain a status of “theory”, at all. Granted, there has been some evidence. But there has also been counter-evidence. And testability is still up in the air. [Jane Q. Public, 2012-07-07]

      I’ve previously told you about some of the evidence for dark matter. Notice that none of those experiments referred to string theory or supersymmetry. In fact, Zwicky’s 1933 discovery of dark matter predates string theory and supersymmetry.

      You also claimed that the reference showing that “string theory/supersymmetry is one of the pillars upon which dark matter theory is formed” was published “about 6 months ago”…

      • [Lonny Eachus, 2006-09-12] See also recent articles in Scientific American and other journals; dark matter, dark energy, and even string theory are coming under fire, because they are based on assumptions or “tweaks” to the observed data, in order to make the theories fit the observations. That is not science, it is mere speculation. A physics professor who caught students doing the same would call it “fudging the data”. Dark matter, dark energy, and string theory have had no predictive value whatever; they are simply dreamed-up explanations for what has been observed. Therefore they do not deserve to be labeled “theories”.
      • [Jane Q. Public, 2007-08-24] The research looks legit, but the Slashdot tagline does not. The existence of such phenomenon does not appear to favor Superstring hypotheses any more than it supports a number of other hypotheses that are currently under investigation. (“Hypotheses”, because they have not yet earned the name “theory” via prediction or testability.) Perhaps this will help sort things out, and even boost one or more of these ideas into actual theory status. Until then, it is premature to imply that this research constitutes evidence for “string theory” more than it is evidence for any of those other hypotheses. This is evidence for quantum gravity, but not yet for anything else.
      • [Jane Q. Public, 2007-08-24] Not only am I angry about Superstrings being hyped and taught as though they were fact, so-called “String Theory” is not even really a theory yet. As Ars pointed out, this is no more evidence for that than it is for other quantum gravity models. There is an article on the Net (you can find it at YouTube, search for “Ring of Dark Matter” that uses similar propaganda to present a hypothesis as though it were a fact, an not just a hypothesis. The makers of the video said it demonstrated the existence of Dark Matter, but in fact the MoND hypothesis could explain it at least as well. People need to stop evangelizing about their pet hypotheses, and get back to doing real Science.
      • [Jane Q. Public, 2007-08-30] Have you not seen shows on Discovery Channel and elsewhere, in which “String Theory”, “Dark Matter”, and the like have been presented as though they had already been demonstrated to be “fact”? If not, here is just one link as an example: http://youtube.com/watch?v=EJtJ7Q0cV34 In the video, Dark Matter is presented as FACT, even though other hypotheses, such as MoND, could explain the phenomenon equally well. I am sure that if you TRIED, you could find plenty more examples on your own. Until then, please don’t bother me again.
      • [Jane Q. Public, 2007-10-25] IT IS NOT A THEORY!!! String “theory” is not a theory at all, it is merely a hypothesis. It will not become a “theory” unless and until it can be tested by experiment! Come on, people! I am not nitpicking: the scientific among you know the difference. Do not accept the name “string theory” at face value. That is just String Propaganda. And if that were not bad enough, there are other hypotheses, such as MoND (Modified Newtonian Dynamics) that explains most if not all what is explained by the string hypothesis, without having to imagine all those other dimensions. In fact, it is so much simpler than the string hypothesis that Occam’s Razor is practically screaming, “No! Over here, you idiots!” Yes, there are problems with MoND, but there are very big problems with strings as well. The fact that an idea is popular in the media or has been around longer is not evidence that it is true, any more than the others.
      • [Jane Q. Public, 2007-10-25] So far, nobody has dreamed up a way to test either the string hypothesis (hypotheses, actually) or MoND. So they both remain in “hypothesis” status (merely imagined explanations) until then.
      • [Jane Q. Public, 2007-10-25] Keep in mind that string “theory” came about as an attempt to explain precisely the same phenomenon: this discrepancy in rotational velocities. Which was also the direct cause of the whole idea of “dark matter”: “Hey… if we just assume that entire galaxies are three times heavier than they should be, all these numbers work! Just invent invisible “stuff” with mass, and we can cancel out these inconvenient factors, get a zero balance, and go have a beer!” It would be nice if we could just make up that kind of garbage in our accounting systems, too.
      • [Jane Q. Public, 2008-01-10] Propaganda. Quote: “The distribution of dark matter in the foreground galaxies that is warping space to create the gravitational lens can be precisely mapped.” Really? How can we “precisely map” something that we have never even shown positively to exist yet? The distribution of gravity could be caused my a number of things other than “dark matter”. Gravitational disturbance by itself is not evidence for dark matter, any more than it supports at least several other hypotheses.
      • [Jane Q. Public, 2008-01-11] Not Exactly. Right off the top of my head, there is the MoND hypothesis, which explains these very kinds of observations at least as well as “dark matter”, but does not require that we assume that the universe contains at least 3 times as much mass as previously thought (and observed). There ARE others; I am not prepared to expound on them all here. But look up MoND at Wikipedia… as a hypothesis it has advantages over dark matter, and is much simpler… Occam’s Razor and all that, you know.

      So-called “dark matter” (which so far is only a hypothesis, not even a real theory), DOES NOT INTERACT with our “normal” universe, except through gravity. Therefore, it does not absorb light. It could bend light (gravitational lensing) but not absorb it. Personally, I find the idea of “dark matter”, as currently envisioned, to be little more than superstitious hand-waving. I think the concept is unlikely in the extreme to be shown valid, and instead that other sources will be found for the observed effects (like, as the other responder pointed out, more mass than previously thought in existing stars). [Jane Q. Public, 2008-05-18]

      Ironically, someone responded to your claim by mentioning the Bullet Cluster.

[Jane Q. Public, 2008-05-20] There are many ideas that have attempted to explain exactly that phenomenon, and one in particular arguably does so much better than “Dark Matter”. That one is MoND (Modified Newtonian Dynamics). Further, MoND entails only a few very minor adjustments to known constants. Unlike the Dark Matter hypothesis, MoND does not require us to imagine that the universe is made mostly of stuff that we cannot see or interact with except via gravity. That latter is a pretty big leap of faith! So in a comparison of the two hypotheses, Occam’s Razor argues very strongly in favor of MoND.

[Jane Q. Public, 2008-07-03] I don’t think it exists. There are explanations other than “dark matter” and “dark energy” that can explain the observations we see. MoND, for example (Modifien Newtonian Dynamics) is a quite popular theory among physicists, and it does not require that we believe that most of our universe is basically undetectable by humans. Occam’s Razor works strongly in favor of MoND over such hypotheses as dark matter… only time will tell.

      There is always MoND, which explains some of the same things as “Dark Matter” and “String Hypothesis”, and then there are also some recent findings that suggest that the Universe is not expanding after all… which would throw the String Hypothesis right out the window. [Jane Q. Public, 2009-04-21]

You mention recent findings suggesting that “the Universe is not expanding after all” and claim that this “would throw the String Hypothesis right out the window” (somehow), all without citing evidence. (And seemingly without noticing that your claim actually conflicts with the Big Bang theory, which is fundamental to modern cosmology, not the String Hypothesis.) You later told me about “strong new evidence” calling into question whether the expansion is actually accelerating, again without citing evidence. Examining the evidence before making a claim is generally a good idea.

[Jane Q. Public, 2009-04-21] Are you trying to say that “String Theory” has not been used to try to explain “dark matter”? Of course it has.

[Jane Q. Public, 2011-12-05] … there is a good bit of evidence that in some cases we may end up having to go back to some of those older ideas: so far dark mass and energy haven’t proved out, and there have come up explanations that don’t need them. Explanations that go back to some of the old “unadjusted” equations after all. We may, for example, be going back to Hamilton’s Quaternions, and Heaviside’s, analyses as opposed to Maxwell’s simplifications. It all depends on how it shakes out. …

      Again, could you please stop spamming humanity with all of this multi-disciplinary misinformation? You may find that doing so reduces your need to apologize in the future.

    • (Ed. note: this comment was copied from here.)

      Looks like the concept of “Dark Energy” that many physicists have been so fond of, is dead. bit.ly/S7dwQv [Lonny Eachus]

No, Lonny. The gizmag article you linked just shows that one type of dark energy (the cosmological constant) is more consistent with long-term observations showing that the proton-to-electron mass ratio (PEMR) has remained roughly constant over billions of years. Even Wikipedia makes it clear that the cosmological constant is a type of dark energy:

      In the standard model of cosmology, dark energy currently accounts for 73% of the total mass–energy of the universe.[2] Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously,[3] and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space.

Because dynamic types of dark energy like quintessence tend to imply changes in the PEMR over billions of years, these observations suggest that physicists now have enough evidence to prefer a static type of dark energy: the cosmological constant. So why is Lonny once again wrongly claiming that dark energy is dead?

      One reason might be these curious sentences in that gizmag article:

      The concept of “dark energy” with a negative pressure was introduced to describe this acceleration. … Dark energy must have a negative pressure to produce the observed acceleration in the standard cosmological model, a rather bizarre notion meaning that space repels itself.

      A casual reader might conclude that dark energy’s negative pressure distinguishes it from a cosmological constant, but both types of dark energy have negative pressure. In fact, I’ve explained to Jane Q. Public that “vacuum energy has pressure equal and opposite to its energy density” which is why its equation of state is w = -1. I continued, explaining why the universe’s expansion accelerates for any w < -1/3.

      Because -1 < -1/3, the cosmological constant’s negative pressure accelerates the expansion of the universe. It is a type of dark energy, which accounts for roughly 3/4 of all the mass-energy in the universe.
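For readers who want to see where that -1/3 threshold comes from, here's a minimal sketch based on the second Friedmann equation (standard cosmology, written with c = 1):

\ddot{a}/a = -(4 \pi G/3)(\rho + 3p)

Substituting the dark energy equation of state p = w\rho gives \ddot{a}/a = -(4 \pi G/3)\rho(1 + 3w). Since \rho > 0, the expansion accelerates (\ddot{a} > 0) exactly when 1 + 3w < 0, i.e. when w < -1/3. The cosmological constant's w = -1 satisfies this comfortably, which is why a "static" form of dark energy still accelerates the expansion.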

      Update: The Planck satellite revises the dark energy estimate to 68.3%, so closer to 2/3.

    • (Ed. note: this comment was copied from here.)

      Who are you, and why are you arguing with me? [Lonny Eachus]

      Are you cruising the web hunting around for someone to argue with over trivialities, or what? Not very friendly. [Lonny Eachus]

      I’m the Dumb Scientist, and I’m pointing out that you’re spreading misinformation. Again.

      I didn’t claim, I said “looks like”, and was referring to the popular sense. This is Twitter, not some science journal. [Lonny Eachus]

      No, it doesn’t even “look like” dark energy’s dead, in any sense. You were just wrong. Again. Spreading misinformation on Twitter is still spreading misinformation. Please stop.

      @jimmygle Interesting article. The other day it was announced that there is almost certainly no “dark energy” making the Universe expand. [Lonny Eachus]

      Again with this nonsense? Physicists have never claimed that dark energy makes the Universe expand. Dark energy makes the expansion of the Universe accelerate.

      @jimmygle “Almost certainly” to like 5 nines +. That is… apparently it is expanding. But not due to invisible “dark energy”. [Lonny Eachus]

      Lonny’s confusion between expansion and acceleration reminds me of Jane Q. Public’s similar confusion.

      @jimmygle I read about it on Ars Technica the other day. Or maybe it was… wait. Here it is. bit.ly/S7dwQv [Lonny Eachus]

      @jimmygle Haha. Well, the basic idea was that there must be SOMETHING forcing everything apart. So some bigwig physicists came up with the [Lonny Eachus]

      @jimmygle … idea that there must be some kind of invisible energy doing it. Great on paper but I don’t think there was ever good evidence. [Lonny Eachus]

      @jimmygle bit.ly/V3qJHe [Lonny Eachus]

      Posting that link must be your way of retracting your claim. In it, Dr. Perlmutter explains how the accelerating expansion of the universe reveals the existence of dark energy.

      Your other responses were even more disappointing…

  17. You may like the piece written by Scott Aaronson on quantum dynamics.

  18. (Ed. note: These comments were copied from a conversation that started on 2010-05-31.)

    What if neutrinos don’t have mass but still oscillate?

    That would be pretty amazing as it would violate the Special Theory of Relativity, one of the most tested theories of all time. The problem is, according to Special Relativity, massless particles move at the speed of light, and time does not advance for them. (If you could build a massless clock, its hands would never move.) Oscillations require a time scale. There is a time period of oscillation, or rather the probabilities of being found in a specific state (mu vs. tau, for instance) oscillate with time. Since time stands still for massless particles, this can’t happen.
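One way to make "time stands still" precise, as a quick sketch in special relativity's own terms: the proper time elapsed along a particle's worldline is

d\tau = dt \sqrt{1 - v^2/c^2}

which shrinks to zero as v approaches c. A massless particle moves at exactly c, so zero proper time elapses along its entire trajectory, leaving no internal clock against which a flavor probability could oscillate.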

    • That’s the way I’ve always understood the mass/oscillation connection too. But then I thought… wait… don’t photons oscillate too? They’re just coherent oscillations of the EM field; oscillating back and forth between electric and transverse magnetic in free space. If there’s something different about neutrino oscillation which makes it necessary for the neutrino to travel at sublight, what is it specifically?

The situation you describe with the EM field is an example of wave-particle duality. Light can behave like both a wave and a particle, but it doesn’t make sense to analyze it both ways at the same time. As a wave, it does manifest itself as oscillating electric and magnetic fields, and as a particle, it manifests itself as a photon, which doesn’t change into a different type of particle. (There’s no such thing as an “electric photon” and a “magnetic photon”.)

      Neutrinos, too, are described quantum mechanically by wavefunctions, and these wavefunctions have frequencies associated with them, related to the energy of the particle. But these have nothing to do with the oscillation frequencies described here, in which a neutrino of one flavor (eg. mu) can change into a different flavor (eg. tau). Quantum mechanically speaking, we say the mass eigenstates of the neutrino (states of definite mass) don’t coincide with the weak eigenstates (states of definite flavor: i.e. e, mu, or tau). Without mass, there would be no distinct mass eigenstates at all, and so mixing of the weak eigenstates would not occur as the neutrino propagates through free space.

    • Thanks. I just found some equations that appear to reinforce what you said.

      Since the oscillation frequency is proportional to the difference of the squared masses of the mass eigenstates, perhaps it’s more accurate to say that neutrino flavor oscillation implies the existence of several mass eigenstates which aren’t identical to flavor eigenstates. Since two mass eigenstates would need different eigenvalues in order to be distinguishable, this means at least one mass eigenvalue has to be nonzero. There’s probably some sort of “superselection rule” which prevents particles from oscillating between massless and massive eigenstates, so both mass eigenstates have to be non-zero. Cool.
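To make that dependence on the squared-mass difference concrete, here's a minimal Python sketch of the standard two-flavor vacuum oscillation formula, P = sin^2(2 theta) * sin^2(1.27 * dm^2 * L/E), with dm^2 in eV^2, L in km, and E in GeV. The function name and the specific inputs below are mine, chosen as round, roughly atmospheric-scale illustrations rather than fits to any experiment:

    import numpy as np

    def oscillation_probability(delta_m2_ev2, theta, length_km, energy_gev):
        # Standard two-flavor vacuum formula. Note that it depends on the
        # *difference* of squared masses: delta_m2_ev2 = 0 makes the
        # probability identically zero, for any baseline or energy.
        return np.sin(2 * theta)**2 * np.sin(1.27 * delta_m2_ev2 * length_km / energy_gev)**2

    # Illustrative parameters (assumptions, not measurements):
    p = oscillation_probability(delta_m2_ev2=2.5e-3, theta=np.pi / 4,
                                length_km=735.0, energy_gev=3.0)
    print(f"P(mu -> tau) ~ {p:.2f}")  # ~0.49 for these inputs

Setting delta_m2_ev2 = 0 kills the oscillation entirely, which is exactly the point: no mass splitting, no flavor oscillation.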

    • I don’t know of any superselection-rule — it’s possible, in theory, for the electron neutrino to have zero mass but the muon neutrino to have nonzero mass.

But then you’d have to explain why one flavor was massive while the other was massless, which has no precedent. Since there’s lots of precedent for three flavors with different nonzero masses, people just figure that the neutrinos are the same way.

    • That’s fascinating. Do you have a good reference in mind that discusses this topic? I find the idea of a superposition which sometimes travels at lightspeed and sometimes travels slower than light to be… very bizarre.

    • I don’t know of any superselection-rule — it’s possible, in theory, for the electron neutrino to have zero mass but the muon neutrino to have nonzero mass.

You can’t have oscillations between massless and massive states. Remember, SR says that time stands still for massless particles. If you look at the equations for neutrino oscillations, for example here, you’ll see there are expressions involving both the mass squared (for the time evolution of the wavefunction) and the mass difference squared (for the mixing amplitudes). So, for quantum mechanical mixing between states, you need both non-zero masses and non-zero mass differences. There may be other, weird mixing theories which don’t require mass differences, but they would be quite exotic. On the other hand, mixing of particles with zero masses would violate SR, which would be highly surprising!

    • The time-dependent Schrödinger’s equation doesn’t apply for massless particles. It was never intended to. It isn’t relativistic. Try to apply a simple boost and you’ll see it’s not Poincaré invariant. The main point is that you get the same probabilities if you use a relativistic theory, but you need A LOT of work to get there.

Oscillations work and happen in QFT, which is Poincaré-invariant and assumes special relativity. I can’t find any references in a quick search, but I did all the (quite painful) calculations a long time ago to make sure it works. It’s one of those cases where the added complexity of relativistic quantum field theory doesn’t change the results from a simple Schrödinger solution.

    • The time-dependent Schrödinger’s equation doesn’t apply for massless particles. It was never intended to. It isn’t relativistic. Try to apply a simple boost and you’ll see it’s not Poincaré invariant. The main point is that you get the same probabilities if you use a relativistic theory, but you need A LOT of work to get there.

Who said anything about Schrödinger’s equation? The equations on the web page I referred to in my post are all relativistically correct. Unfortunately those equations aren’t numbered, but if you look at the third equation in the section labelled “Propagation and Interference” you’ll see that it was derived under the assumption of ultrarelativistic particles (whose energy is much greater than their rest mass times c^2).

That’s the classical derivation using Schrödinger’s equation. The only “relativistic” approximation is the dispersion relation, but they use Schrödinger’s equation to get the kets. Spinors, not bras/kets, are used when you solve the (relativistic) Dirac equation; in that case, you get the same probabilities.

See this for a reasonably simple derivation using Schrödinger’s equation, which gives you the exact same formulae as your Wiki link. If you use a fully relativistic approach, you get the same final results but no dependency on the mass (only on mass splittings).

    • You can’t have oscillations between massless and massive states. Remember, SR says that time stands still for massless particles.

      Not only that but (rest-)massless particles move at the speed of light and particles with rest mass move at some velocity less than that. Converting from one to the other (without interacting with another particle) would violate the conservation of momentum.

That’s fine, because in a non-static universe there is a conservation of energy-momentum (\nabla_mu T^(mu nu) = 0 is the GR conservation law). Since there is a metric expansion of space and therefore a large curvature to space-time, there is a lot of extra energy being created in the universe if the vacuum energy is constant (which it would be if the cosmological constant is inertial; quintessence and other mechanisms for the metric expansion of space don’t really change that).

      So converting momentum into rest mass is perfectly fine and vice-versa, at least in general relativity (which in turn faithfully reproduces SR locally in flat “slices” of spacetime).

Yes, this is different from the conservation of invariant mass normally written as E^2 - |\vec{p}|^2 c^2 = m^2 c^4 with a specially chosen inertial frame of reference such that \vec{p} = 0, as the SR formulation explicitly ignores gravitational energy. The classic mass-energy equivalence obviously does not forbid trading kinetic energy and rest mass; the invariance of the invariant (i.e., rest) mass is however (like classical SR itself) background-dependent.

    • it’s possible, in theory, for the electron neutrino to have zero mass but the muon neutrino to have nonzero mass.

      No. All flavour eigenstates MUST be massive: they are superpositions of the three mass eigenstates, one of which can have zero mass. Calling the three mass eigenstates n1, n2 and n3; and the three flavour eigenstates ne, nm and nt, we’d have:

      ne=Ue1*n1+Ue2*n2+Ue3*n3

      nm=Um1*n1+Um2*n2+Um3*n3

      nt=Ut1*n1+Ut2*n2+Ut3*n3

      So, if any of n1, n2 or n3 has a non-zero mass (and at least two of them MUST have non-zero masses, since we know two different and non-zero mass differences), all three flavour eigenstates have non-zero masses.

      Also, remember that the limit for the neutrino mass is at about 1eV, while it’s hard to have neutrinos travelling with energies under 10^6 eV. In other words, the gamma factor is huge, and they’re always ultrarelativistic, travelling practically at “c”.

      Another point is that the mass differences are really, really small; of the order of 0.01 eV. This is ridiculously small; so small that the uncertainty principle makes it possible for one state to “tunnel” to the other.

      I really can’t go any deeper than that without resorting to quantum field theory. I can only say that standard QM is not compatible with relativity: Schrödinger’s equation comes from the classical Hamiltonian, for example. To take special relativity into account, you need a different set of equations (Dirac’s), which use the relativistic Hamiltonian. In this particular case, the result is the same using Dirac, Schrödinger or the full QFT, but the three-line Schrödinger solution becomes a full-page Dirac calculation, or ten pages of QFT. In this particular case, unfortunately, the best I can do is say “trust me, it works; you’ll see it when you get more background”.
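To see the structure of those superpositions numerically, here's a small numpy sketch using a two-flavor toy model instead of the full 3x3 PMNS matrix; the mixing angle below is an assumed round number, not a measured value:

    import numpy as np

    theta = np.pi / 5  # assumed illustrative mixing angle, not a measurement
    U = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])

    # Rows are flavour eigenstates (ne, nm); columns are mass eigenstates
    # (n1, n2), i.e. ne = Ue1*n1 + Ue2*n2, matching the equations above.
    ne, nm = U[0], U[1]

    print(np.allclose(U @ U.T, np.eye(2)))  # True: the mixing matrix is unitary
    print(ne @ ne, nm @ nm)                 # 1.0 1.0: each flavour state is normalized

    # The huge gamma factor mentioned above: a 10^6 eV neutrino with a mass
    # of at most 1 eV has gamma = E/m >= 10^6, so it travels practically at c.
    print(1e6 / 1.0)

Because U is unitary, any flavour state with a nonzero coefficient on a massive eigenstate is necessarily a superposition involving mass, which is why all three flavour eigenstates end up massive once two nonzero mass splittings are known.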

    • … Also, remember that the limit for the neutrino mass is at about 1eV, while it’s hard to have neutrinos travelling with energies under 10^6 eV. …

      Is this because most nuclear (or other?) reactions producing neutrinos give them so much energy, or something to do with QFT propagation?

      Anyway, thanks for all your insights. Until I can go back to school and take QFT for real, it’s always a pleasant surprise to meet someone willing and able to answer these questions.

Light can behave like both a wave and a particle, but it doesn’t make sense to analyze it both ways at the same time. As a wave, it does manifest itself as oscillating electric and magnetic fields, and as a particle, it manifests itself as a photon, which doesn’t change into a different type of particle. (There’s no such thing as an “electric photon” and a “magnetic photon”.)

      There’s still something that doesn’t quite work with this argument as far as I can see. (I am a particle experimentalist and not a theorist, but hear me out. I’d be very interested to see if you have an argument for why the following idea doesn’t work.) I understand that mass is necessary for flavour oscillations, but my issue is with the argument of SR banning any time dependence for massless particles. Sure there’s no “Electric photon” or “Magnetic photon” as you say, but there IS a distinct, measurable, time dependent difference between, say, a horizontally and a vertically polarized photon. Say I have a circularly polarized photon fired at a horizontal filter. The probability of that photon passing the filter is dependent on the photon’s time of flight before hitting it. It seems to me like I could experimentally measure a time dependent property of the photon: The same experiment performed at different times on an identically prepared photon yields different results. So how does this “mesh” with the SR argument? Perhaps you’re being too general?

Does the fact that light is polarizable, has known frequencies of oscillation, and no mass (but has momentum) throw a wrench into the claim that you need mass to oscillate? Probably not on these forums, but it still makes me pause to think about it.

    • Light doesn’t oscillate in this way. A photon is a photon, and remains a photon. Electric and magnetic fields oscillate, but the particle “photon” doesn’t. Neutrinos start as one particle (say, as muon-neutrinos) and are detected as a completely different particle (say, as a tau-neutrino).

The explanation for that is that what we call “electron-neutrino”, “muon-neutrino” and “tau-neutrino” aren’t states with a definite mass; they’re a mixture of three neutrino states with definite, different mass (one of those masses can be zero, but at most one). Then, from pure quantum mechanics (and nothing more esoteric than that: pure Schrödinger equation) you see that, if those three defined-mass states have slightly different mass, you will have a probability of creating an electron neutrino and detecting it as a tau neutrino, and every other combination. Those probabilities follow a simple expression, based on only five parameters (two mass differences and three angles), and depend on the energy of the neutrino and the distance in a very specific way. We can test that dependency, and use very different experiments to measure the five parameters; and everything fits very well. Right now (especially after MINOS saw the energy dependency of the oscillation probability), nobody questions neutrino oscillations. This OPERA result only confirms what we already knew.

    • The explanation for that is that what we call “electron-neutrino”, “muon-neutrino” and “tau-neutrino” aren’t states with a definite mass; they’re a mixture of three neutrino states with definite, different mass (one of those masses can be zero, but at most one).

Right above I speculated that it’s not possible for a particle to oscillate between massive and massless eigenstates. Do you have a reference showing that one mass eigenvalue can be zero? I’m curious to see how a massive particle which must travel slower than light can oscillate into a massless particle that must travel at exactly the speed of light. I’d always figured a superselection rule would prevent this sort of thing. I only got through two semesters of quantum using Sakurai in grad school; I’m hoping this point is also comprehensible with the Schrödinger equation and not full QED. (shudder)

Mass eigenstates don’t oscillate. n1 is always n1, unless you try to measure it, in which case its eigenfunction collapses into the interaction basis (ne+nm+nt). That’s quantum weirdness for you.

The interactions (production and detection) happen in the flavour basis. The propagation happens in the mass basis. This means you never oscillate “from massless to massive”: you are created with a mixture of massive and massless states, which travel differently, changing the probability of each flavour.

Mass eigenstates don’t oscillate. n1 is always n1, unless you try to measure it, in which case its eigenfunction collapses into the interaction basis (ne+nm+nt). That’s quantum weirdness for you.

      Yeah I get that, although my phrasing and editing have been poor. I’m simply uncomfortable with the idea of a superposition of massless and massive eigenstates. This post’s 2nd paragraph summed up my feelings on the matter. And I really do mean to say “feelings”– it’s been years since I touched quantum mechanics, and none of that was based on relativistic hamiltonians (except for fine structure, of course). I couldn’t spinor my way out of a paper bag.

The interactions (production and detection) happen in the flavour basis. The propagation happens in the mass basis. This means you never oscillate “from massless to massive”: you are created with a mixture of massive and massless states, which travel differently, changing the probability of each flavour.

      Okay, this makes sense and doesn’t really bother me for natural neutrinos which are wildly ultrarelativistic even for the most massive eigenstate allowed by experiment. But suppose we could artificially create neutrinos with extremely low energies? Then the massive eigenstates would travel so differently than the massless eigenstates that my head hurts just thinking about it. It might be possible to contain the slow-moving massive eigenstates, but the massless ones would still fly off at lightspeed, albeit with less energy. One part of the superposition would be in the lab, and the other part would be heading to the stars…

    • I’m curious to see how a massive particle which must travel slower than light can oscillate into a massless particle that must travel at exactly the speed of light.

Well, if it went from something with mass to something without mass, could it not use that energy from the mass to speed up to the speed of light? Sort of like how pair production causes light to become two particles traveling slower than the speed of light and then when they annihilate each other you get the photons traveling at the speed of light again?

    • Well, if it went from something with mass to something without mass, could it not use that energy from the mass to speed up to the speed of light?

      First, it’s one thing to claim that electron-positron collisions produce gamma rays. This is a (generally) non-repeated event with a clear discontinuity in time. Before the discontinuity, particles travel slower than light. After, the products of the collision travel at lightspeed. But an oscillating particle varies smoothly and repeatedly between the two states so there’s no clear discontinuity even though the physics of massless and massive particles are wildly different.

      Second, I’m not suggesting such a superposition would violate conservation of energy. I’m struggling to put my objections in words. I do grok quantum superpositions of an electron in different places, but I don’t think it’s possible to have a quantum superposition of an electron and a proton because that would violate conservation of charge, mass, lepton number, and baryon number. (I believe this general idea is known as a superselection rule, which forbids certain superpositions.) In the same way, I’m trying to figure out if a superposition of a particle that defines the light cone and a particle that’s constrained to move inside the light cone is meaningful. No conservation laws seem to apply, but I’m vaguely thinking that environmentally-induced superselection wouldn’t allow such a superposition to exist for much longer than a Planck time.

Sort of like how pair production causes light to become two particles traveling slower than the speed of light and then when they annihilate each other you get the photons traveling at the speed of light again?

      A photon can only produce a real electron-positron pair when it has at least twice the rest-energy of an electron and it hits a nucleus. Photons can’t produce real particle pairs in free space because momentum and energy can’t be simultaneously conserved without a third particle.
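For reference, that threshold works out to E_gamma >= 2 m_e c^2 = 2 × 0.511 MeV ≈ 1.022 MeV; the nucleus is what absorbs the recoil momentum so that energy and momentum can both balance.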

      The fact that light travels slower in matter can be explained in many different ways. Some QED cartoons attribute this to the effective non-zero mass of a quasi-particle known as a polariton, which is composed of a photon and phonons in the material. Again, this doesn’t happen in free space. Also, this example involves virtual particles but the oscillating neutrino is real because it can be detected as a single particle.

    • A photon can only produce a real electron-positron pair when it has at least twice the rest-energy of an electron and it hits a nucleus.

      Or another photon.

  19. (Ed. note: These comments were copied from a conversation about the Alcubierre warp drive.)

    “Exotic matter, by definition, requires violations of the known laws of physics.”

    No, it doesn’t. Antimatter is one valid type of “exotic matter”, and it has been manufactured in labs in various (small) amounts for many decades now, without a physics violation in sight. [Jane Q. Public]

    Antimatter certainly isn’t common, but it’s not “exotic matter”. Stable wormholes and the Alcubierre drive require exotic matter with negative mass-energy, which would violate the weak energy condition.

    “… we can see it [negative mass-energy] in certain configurations of regular matter, such as the Casimir effect.”

    What does the Casimir effect have to do with it? That is merely a demonstration of so-called “zero point” fluctuations. It isn’t “negative energy”, except to the extent that you have particles and their counter-particles spontaneously arising at the same time. Even so, in the case of the Casimir effect it exerts a net positive energy on the affected mass. [Jane Q. Public]

    The Casimir effect is the best known example of negative energy:

    Morris, Thorne and Yurtsever[4] pointed out that the quantum mechanics of the Casimir effect can be used to produce a locally mass-negative region of space-time. In this article, and subsequent work by others, they showed that negative matter could be used to stabilize a wormhole.

    • The Casimir effect is that in between the plates, there is less than nothing, where nothing is defined as the usual vacuum.

      So yes, it is a negative energy.

      The negative, or attractive, force it exerts on the plates should tell you that.

The AC is terse but correct. The Casimir effect occurs because vacuum fluctuations are suppressed between two parallel conducting plates that are placed very close together. Maxwell’s equations force E=0 inside perfect conductors, which means that vacuum fluctuations with a half-wavelength longer than the plate separation can’t exist between the plates. Because those fluctuations do exist in the vacuum outside the plates (which is defined to have zero energy), the energy inside the plates is actually negative. The attractive force implies negative energy between the plates because force is the negative gradient of potential energy.
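For scale, the textbook parallel-plate results are E/A = -pi^2 hbar c/(720 d^3) for the energy per unit area and F/A = -pi^2 hbar c/(240 d^4) for the (attractive) pressure. Here's a minimal Python sketch evaluating them at an assumed plate separation of one micron:

    import math

    hbar = 1.054571817e-34  # J*s
    c = 2.99792458e8        # m/s
    d = 1e-6                # plate separation in meters (assumed for illustration)

    energy_per_area = -math.pi**2 * hbar * c / (720 * d**3)  # J/m^2, negative
    pressure = -math.pi**2 * hbar * c / (240 * d**4)         # N/m^2, attractive

    print(f"E/A = {energy_per_area:.1e} J/m^2")  # ~ -4.3e-10 J/m^2
    print(f"F/A = {pressure:.1e} N/m^2")         # ~ -1.3e-3 N/m^2

The negative sign of E/A is the negative energy density under discussion, and the pressure follows from differentiating it with respect to the separation, which is why the plates attract.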

  20. (Ed. note: this comment was copied from here.)

    “The Casimir effect is the best known example of negative energy:” [Dumb Scientist]

    This is going to be one of my rare responses to your posts. Prepare to be ignored for the most part, from here on in. … Get a clue. If you are seriously using that link as a citation, then you lose. You did not properly comprehend what it said. … Dude. I know you are a scientist. But do you even really know what the Casimir effect is? Of course I expect you will by the time you answer (if you do). And if you do answer, I probably won’t reply. But at this very moment, at the time you first read this, from what you have already stated, I suspect that you really don’t know what it is. [Jane Q. Public]

    Comments like these suggest that you’re not really interested in studying physics. On the other hand, John Cramer’s Alternate View columns inspired me to study physics in high school. In 1998, FTL Photons introduced me to the Casimir effect. In 2001, I made an offhand remark about these faster-than-light (FTL) implications to my experimental physics professor, and he asked me to give a presentation to the class.

    The next comment I wrote summarized the first part of my presentation. The second part showed that virtual particles actually slow down light in the standard vacuum, because photons spend some of their time as electron-positron pairs that travel slower than “true” lightspeed. Because the Casimir effect suppresses some of these virtual particles, light actually travels faster between the plates (perpendicular to the plates) than in the standard vacuum. This is called the Scharnhorst effect.

    The Casimir effect can be modeled mathematically as a negative-mass region; Hawking showed that negative energy is necessary for certain effects on WORMHOLES to take place in conjunction with such a negative mass. But he did not claim that the negative energy was supplied by it. But that does not establish a direct relationship between the two. It is a very FAR cry from equating negative energy with the Casimir effect. [Jane Q. Public]

    Why are you talking about Hawking? I already pointed you to “Wormholes, Time Machines, and the Weak Energy Condition”:

“The following model explores the use of the ‘Casimir vacuum’[12] (a quantum state of the electromagnetic field that violates the unaveraged weak energy condition[11]) to support a wormhole…” [Morris, Thorne, and Yurtsever, 1988]

    Nevertheless, Hawking’s findings did not point at Casimir effect as a source of negative energy; they merely indicated that negative energy was necessary for the negative mass to have the calculated effect. Not the same thing. [Jane Q. Public]

    Again, why are you talking about Hawking? You might want [1] to read “FTL Photons”:

    “Since the energy density of normal vacuum is defined to be zero, the vacuum between the metal plates actually becomes a region of negative energy density.” [John Cramer, 1990]

    Again, granted: Hawking showed that negative energy might be required for negative mass effects in relation to wormholes. But I have never seen any science indicating that this negative energy is actually related to or a result of the Casimir effect. That is a rather large leap that is not supported in any of the science I have read. The only relationship I have seen is that negative energy is required for certain predicted phenomena; nowhere have I seen any claim that anything related to the Casismir effect is the actual source of that negative energy. [Jane Q. Public]

    Then you might want to see the claim in “The warp drive: hyper-fast travel within general relativity”:

    “We see then that, just as it happens with wormholes, one needs exotic matter to travel faster than the speed of light. However, even if one believes that exotic matter is forbidden classically, it is well known that quantum field theory permits the existence of regions with negative energy densities in some special circumstances (as, for example, in the Casimir effect [4]). The need of exotic matter therefore doesn’t necessarily eliminate the possibility of using a spacetime distortion like the one described above for hyper-fast interstellar travel.” [Miguel Alcubierre, 1994]

    You might also want to see Hawking’s claim in “Space and Time Warps” regarding the Casimir effect:

    “So the energy density in the region between the plates, must be negative.” [Stephen Hawking]

    Your inexplicable references to Hawking’s research might have been prompted by this sentence shortly after the part I quoted: “Stephen Hawking has proved that negative energy is a necessary condition for the creation of a closed timelike curve by manipulation of gravitational fields within a finite region of space;[6] this proves, for example, that a finite Tipler cylinder cannot be used as a time machine.”

    That reminds me…

    “No, nothing can go faster than the speed of light because it will violate causality. Which is more or less forbidden by the entirety of physics.” [Nyrath the nearly wi]

    Incorrect. There is nothing we know of that actually works to prevent the violation of causality. There are a number of ways it can theoretically be done. See Tipler, “Rotating Cylinders and the Possibility of Global Causality Violation”. All rhetoric (like the post at that link) aside, all we really have about it is guesses. The fact that we have never observed anything, so far, that would violate causality says absolutely nothing about the possibility. Further, it is not necessarily true that limited instances of causality violation would render the entirety of physics invalid, any more than relativistic situations render Newton “invalid”. They are “special cases”. That is all. [Jane Q. Public]

    I’ve argued that violating causality isn’t impossible if the many worlds interpretation is right, and explained why FTL travel is equivalent to time travel. However, Hawking’s research actually seems to show that a finite [2] Tipler cylinder wouldn’t even work in theory.

    Hawking’s chronology protection conjecture also poses problems for warp drive. For instance, Hawking radiation could destroy anything inside an FTL warp bubble, or its stress-energy tensor could blow up.

    Krasnikov and Everett and Roman pointed out that the spacecraft would be causally separated [3] from an FTL warp bubble, so the spacecraft couldn’t activate or deactivate the warp bubble.

    So the Alcubierre and Natario drives might “just” be reactionless sublight propulsion systems where arbitrarily high acceleration isn’t felt [4] aboard the spacecraft, and where relativistic speed doesn’t cause time dilation.

    “Exotic matter, by definition, requires violations of the known laws of physics.” [Someone]

    No, it doesn’t. Antimatter is one valid type of “exotic matter”, and it has been manufactured in labs in various (small) amounts for many decades now, without a physics violation in sight. [Jane Q. Public]

    Antimatter certainly isn’t common, but it’s not “exotic matter”. Stable wormholes and the Alcubierre drive require using exotic matter that has negative mass-energy, which would violate the weak energy condition. [Dumb Scientist]

    Okay, I will concede that point, although it is about a Wikipedia entry. If you really want to argue about those… But my point is still valid, since Bose-Einstein condensates of macro-size have been manufactured in laboratories since 1998. Thus, “exotic matter” IS being manufactured, in significant quantities, right here in the real world, for 14 years now with no physics violations in sight. (“Exotic Matter”, according to your own citation.) [Jane Q. Public]

Many physicists, including the inventor of warp drive, use the term “exotic matter” [5] to refer to matter with negative mass-energy. I tried (and apparently failed) to make this explicit. BECs are qualitatively similar to lasers, superconductors and superfluids; none of them have negative mass-energy.

    Note that I’m not claiming the weak energy condition is a law of nature. In fact, all these physicists point to the Casimir effect’s negative energy as experimental evidence [6] that the weak energy condition can be violated. However, in theory the weak energy condition is supported by “quantum inequalities” that limit the magnitude and duration [7] of negative energy densities.
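As a rough sketch of what those inequalities say (a heuristic, not the exact Ford-Roman bounds): the magnitude of a negative energy pulse and its duration are constrained together, schematically |ΔE| · τ ≲ ħ, so larger negative energies are permitted only for shorter times, in the spirit of the energy-time uncertainty principle.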

    That’s why this Slashdot article focuses on ways to reduce the unphysically large amount of negative energy required for a warp bubble. That’s also why John Cramer mused that warp drive is outlawed and celebrated Van Den Broeck’s 1999 insight that the required negative energy could be reduced by making the interior of the warp bubble larger [8] than its exterior.


    Footnotes

    [1] You might also want to read “Averaged Energy Conditions and Quantum Inequalities” and “The Energy Density in the Casimir Effect”:

    “… AWEC is violated in both two and four dimensions for a static timelike observer in a Casimir vacuum state, with either type of boundary conditions, since such an observer simply sits in a region of constant negative energy density for all time. …” [Ford and Roman, 1995]

    “… we have also found that it is possible for the net energy density in the region between the plates to be negative … It should not come as a surprise that there is a regime of negative energy density. …” [Sopova and Ford, 2002] ↩ back

[2] If cosmic strings exist, they could be literally infinite, stretching across the entire universe. In Stephen Baxter’s Ring, the Great Northern travels into the past by repeatedly looping around a pair of cosmic strings. On a related note, the Great Northern’s 5,000,000 year voyage was obviously based on figure 2 from Morris, Thorne, and Yurtsever 1988 (“spacetime diagram for conversion of a wormhole into a time machine”), as was the 1,500 year voyage of the Cauchy in Timelike Infinity. ↩ back

    [3] Alastair Reynolds suggests that the Waynet in his Merlin stories was created when the Waymakers tried to build a network of Krasnikov tubes. ↩ back

    [4] That sounds like the impulse drive from Star Trek, the gravity polarizers/thrusters from Larry Niven’s Known Space, and the parametric/frameshift drives from Alastair Reynolds’s House of Suns and Pushing Ice. Even a sublight warp field requires lots of negative energy, so the diametric drive proposed by Robert Forward and Jamie Woodward is a sublight alternative. The diametric drive relies on the peculiar effects that Rei mentioned, which were discussed by Geoffrey Landis (small world, eh?). ↩ back

    [5] Stephen Baxter also calls matter with negative mass-energy “exotic matter”. ↩ back

    [6] Squeezed vacuum experiments also involve negative energy, along with the energy-time uncertainty principle and Hawking radiation. On a related note, the Casimir effect stabilized the first bulky generation of wormholes generators in The Light of Other Days by Arthur C. Clarke and Stephen Baxter. Then squeezed vacuum technology miniaturized the generators down to the size of a wristwatch. ↩ back

    [7] In 2002, the same authors who introduced “quantum inequalities” showed that arbitrarily large negative energy densities are possible, though their duration still seems restricted. As Cramer notes, renormalization makes it difficult to take quantum field theory’s bounds on energy density seriously. ↩ back

    [8] Van Den Broeck might have been inspired by Doctor Who’s TARDIS. ↩ back

  21. (Ed. note: this comment was copied from here.)

    Because those fluctuations do exist in the vacuum outside the plates (which is defined to have zero energy), the energy inside the plates is actually negative. The attractive force implies negative energy between the plates because force is the negative gradient of potential energy.

    A force being applied in the context of the Casimir effect is definitely a vector. It has direction. Neither a positive or negative vector implies “negative energy”: it simply defines the physical direction in which the energy is directed. The coordinates are arbitrary according to vector calculus. There are circumstances in which energy can also be considered a vector, but this is not one of them. The Casimir effect is definitely a measurable vector in a particular direction, and he clear implication then is positive energy. [Jane Q. Public]

    Good grief, you’re arguing with the definition of potential energy. I was referring to the fact that all conservative forces can be described as the negative vector gradient of a potential energy function. Many of your statements on this topic are confusing:

    A force being applied in the context of the Casimir effect is definitely a vector. It has direction. [Jane Q. Public]

    Yeah, forces are vectors…

    Neither a positive or negative vector implies “negative energy”: it simply defines the physical direction in which the energy is directed. [Jane Q. Public]

    The force vector points from a region with high potential energy to a region with lower potential energy. That’s why an attractive force implies that the Casimir vacuum has less energy than the standard vacuum. No energy is “directed” anywhere because we’re talking about potential energy, not calculating Poynting vectors.

    “Because those fluctuations do exist in the vacuum outside the plates (which is defined to have zero energy), the energy inside the plates is actually negative.”

    You’re trying to get my goat. Haha. That isn’t what it says. According to the article, the force is negative, in relation to the chosen physical framework, which (as it clearly says in the article) merely implies that the energy is lowered when the physical substrates come together. [Jane Q. Public]

    The Casimir force between two parallel conducting plates is negative/attractive. Period. More complicated geometries can have repulsive Casimir forces, but that doesn’t affect the attractive force between parallel plates any more than your meaningless caveat does. Perhaps you meant to say “According to the article, the energy is negative…”

    The same phenomenon can be demonstrated with magnets. No “negative energy” is implied. [Jane Q. Public]

    In my opinion, electrostatics (or gravity) would be a better analogy than magnetism. Moving two oppositely-charged particles together does make their electrostatic potential energies more negative, just like their gravitational potential energies become more negative. The article I originally linked even explains why gravitational potential energies are negative, which also explains the values I’ve calculated for the Earth-Moon system.

    But in the Casimir effect the classical fields that usually cause negative potential energies are zero (or negligible, like gravity). Instead, energy is removed from the vacuum itself.

    The coordinates are arbitrary according to vector calculus. [Jane Q. Public]

    By your logic, I could demonstrate “negative energy” with a child on a playground swing. All I have to do is choose my coordinates appropriately. [Jane Q. Public]

    As discussed below, my logic is that gravity itself seems to have chosen coordinates that renormalize vacuum energy to zero, which means the Casimir vacuum has negative energy.

    There are circumstances in which energy can also be considered a vector, but this is not one of them. [Jane Q. Public]

    Ironically, the potential “energy” of a magnetic field is usually a vector potential, which makes magnetism a poor analogy for simpler forces that can be described using scalar potentials. (Permanent magnets can be described with a scalar potential, but electrostatic and gravitational potential energies are used as examples far more often, because ferromagnetism is so complicated that most intro physics courses only cover free currents.)

    The Casimir effect is definitely a measurable vector in a particular direction, and the clear implication then is positive energy. [Jane Q. Public]

    So… neither a positive or negative vector implies negative energy, but a vector in any direction clearly implies positive energy? That’s just nonsense.

  22. (Ed. note: this comment was copied from here.)

    I will add a tidbit that I picked up last night shortly after I wrote the above. You mentioned that since the ground state (not your exact words) of the vacuum is “defined” to be 0, then the energy must be negative. I understand that logic. The problem is that the premise is incorrect. Planck’s equations, as refined by Einstein et al. in 1913, show that in fact the vacuum energy of a quantum system must always be above its “potential well”, or the theoretical zero state. Thus, “zero-point” energy is NOT “defined” to be zero, but in fact is always positive, and the Casimir effect then, even using your own framework, is not “negative energy”. [Jane Q. Public]

    If you really did “understand that logic” then you wouldn’t have written all that nonsense about vectors. Instead, you’d have skipped immediately to this point, which now implicitly acknowledges that the Casimir vacuum has lower energy than the standard vacuum.

    Remember that spacetime is curved near large masses, but ~flat far away where only vacuum energy is present. This implies that vacuum energy exerts ~zero gravitational force, so its stress-energy tensor must be ~zero, so the standard vacuum has ~zero energy.

    If you’re interested in the details, John Baez summarizes several vacuum energy density calculations. A naive quantum field theory calculation yields a vacuum energy with a mass density of +10^96 kg/m^3, which would’ve ripped the universe apart [1] before galaxies could form. On the other hand, general relativity and observations of our nearly-flat universe place a more rigorous upper bound at +10^-26 kg/m^3. It seems like [2] gravity renormalizes vacuum energy to zero, within about one part in 10^122. Even though renormalization was harshly criticized at first, it’s necessary to explain why galaxies (and thus humans!) exist.
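
    That naive figure is easy to reproduce: cutting the mode sum off at the Planck scale amounts to roughly one Planck mass per cubic Planck length. A sketch, not a rigorous field theory calculation:

        import math

        # Naive vacuum energy estimate: ~1 Planck mass per cubic Planck length.
        hbar = 1.055e-34   # J s
        G    = 6.674e-11   # m^3 kg^-1 s^-2
        c    = 2.998e8     # m/s

        l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
        m_p = math.sqrt(hbar * c / G)      # Planck mass, ~2.2e-8 kg

        rho = m_p / l_p**3
        print(f"rho ~ {rho:.1e} kg/m^3")   # ~5e96 kg/m^3, i.e. ~10^96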

    Update: Here’s an interesting article about vacuum energy.

    Here’s another, purely quantum-based, argument [3] for renormalization:

    “As there is no lower energy state than the ground state, there is no energy level transition available to release the ZPE. Therefore, it can be argued that hf/2 should be dropped before integration of the quantum expression. This procedure is an example of renormalization, which basically redefines the zero of energy.” [Abbott et al. 1996]
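
    In symbols, dropping hf/2 redefines each mode’s energy levels:

        E_n = \left(n + \tfrac{1}{2}\right) hf \;\longrightarrow\; E_n = n\, hf

    so the ground state (every mode with n = 0) has exactly zero energy by definition, while the energy differences between levels, which are what experiments measure, are unchanged.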


    Footnotes

    [1] One might assume that a large positive vacuum energy would collapse the universe just like a large amount of positive mass-energy would. This doesn’t happen because in general relativity gravity depends on energy and pressure. In natural units, vacuum energy has pressure equal and opposite to its energy density. Because the stress-energy tensor has three pressure terms (for x,y,z) and only one energy density term, the negative pressure of positive vacuum energy dominates, causing the expansion of the universe to accelerate. ↩ back
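
    (Schematically, in terms of the second Friedmann equation with mass density rho and pressure p, the vacuum equation of state p = -rho c^2 flips the sign of the combination that gravitates:

        \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right),
        \qquad p = -\rho c^2 \;\Rightarrow\; \frac{\ddot a}{a} = +\frac{8\pi G}{3}\rho > 0

    so positive vacuum energy accelerates the expansion rather than recollapsing the universe.)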

    [2] It’s also vaguely possible that zero point energy doesn’t gravitate at all (see question 2 here), but that would violate the equivalence principle. Also, during inflation, the vacuum energy density has been estimated at +10^74 kg/m^3. It’s almost as though the universe stopped renormalizing some of the vacuum’s zero point energy for a tiny fraction of a second just after the birth of the universe… ↩ back

    [3] Notice that section 8 on page 7 has interesting skeptical remarks about the Casimir effect, its relationship to vacuum energy, and the question of whether vacuum energy is “available for grilling steaks.” Furthermore, Jaffe 2005 presents an alternative explanation for the Casimir effect that doesn’t involve vacuum energy at all. ↩ back

  23. (Ed. note: this comment was copied from here.)

    … “zero point” energy is NOT in fact zero (it is actually pretty huge)… [Jane Q. Public]

    While talking with my first research advisor around 2003, I mused that it’s unfortunate how the Casimir effect only suppresses vacuum fluctuations with wavelengths larger than twice the spacing between the plates. Since fluctuations with shorter wavelengths have more energy, the Casimir effect only depletes a vanishingly small fraction of the vacuum energy between the plates. So I agree that a naive quantum calculation leads to a huge vacuum energy. But as I’ve just explained, the same theory of general relativity [1] that implies stable wormholes and the Alcubierre drive also seems to renormalize the vacuum energy to zero. So this just means that depleting vacuum energy could potentially lead to very negative energy densities.

    In fact I thought it was pretty obvious to most people that the fact that “zero point” energy is NOT in fact zero (it is actually pretty huge), has been the motivation for finding ways to “Maxwell’s Demon” the quantum vacuum fluctuations. There is nothing theoretically preventing it; one team this year found a possible means of exploiting it. We shall see. [Jane Q. Public]

    I asked which team and you replied:

    I looked again, and didn’t find anything from this year. So my memory could be incorrect. [Jane Q. Public]

    Agreed:

    What I am curious about is: assume you get the virtual particles which then tunnel: what is the probability that they will tunnel with the same probability, then recombine properly? It seems to me (without having done the math), that there is some possibility here of ending up with a quantum Goretex, or, in other words, a Maxwell’s Demon of sorts, no matter how small its effect might be. [Jane Q. Public, 2009-04-21]

    Not having done the math often does lead to quantum Goretex and ironic references to Maxwell’s (broken) Demon.

    But there’s Maclay and Forward, from 2004. There are more recent examples but I will not have time to hunt them up today. [Jane Q. Public]

    Maclay and Forward 2004 [2] imagined accelerating a mirror fast enough that the dynamic Casimir effect creates real photons. A more recent example was in 2009, which imagined spinning magneto-electric nanoparticles fast enough that the centripetal acceleration created real photons. At the time, I called this device a photon drive. On page 2 of their 2004 paper, Maclay and Forward point out that more conventional photon drives would arguably be better than their propulsion system.

    Granted, it’s only a thought experiment, and it doesn’t generate practical energy even then, in this form. But hey… fusion isn’t practical yet, either. To be clear: I did not claim anybody had found anything practical. Only that there may be ways to do it. [Jane Q. Public]

    We know that fusion generates practical energy because the stars shine. We just don’t know if we can build fusion reactors that generate more energy than is required to run the containment system that replaces the immense pressure at a star’s core. I do like thought experiments about new sources of energy, though. For instance, I just described Greg Egan’s suggestion that rotating hoops could provide a more efficient energy source than fusion, which could allow our civilization to outlive the stars. Egan notes that negative pressure (tension) contributes negatively to the stress-energy tensor, which might allow us to beat fusion’s ~1% mass defect without violating the weak energy condition.

    I’ve also daydreamed about ways to harness the Casimir force. Suppose one plate is made of vanadium [3], a superconductor cooled to delta_T below its critical temperature (T_c = 5.03K), and the other plate is made of niobium (T_c = 9.26K) which is held at (5.03 + delta_T)K. The Casimir force will pull the two plates together, and when they touch the vanadium plate will lose superconductivity as it’s warmed past its critical temperature. This weakens the Casimir effect, allowing the plates to be pulled apart with less force, where the vanadium plate will eventually cool enough to superconduct again. Repeating this process could turn a crankshaft [4] just like a piston in a gas engine. I wish I had time to figure out if this system would actually break the Carnot efficiency limit, but I’m too busy with GRACE research and defending the scientific community against an avalanche of baseless accusations.
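
    For scale, here’s a quick sketch of the ideal Casimir pressure between perfectly conducting parallel plates; real superconducting niobium and vanadium would deviate from this ideal-conductor result:

        import math

        # Ideal Casimir pressure (attractive) between perfectly conducting
        # parallel plates: P(d) = pi^2 * hbar * c / (240 * d^4).
        hbar = 1.055e-34   # J s
        c    = 2.998e8     # m/s

        for d in (1e-6, 1e-7, 1e-8):   # separations: 1 um, 100 nm, 10 nm
            P = math.pi**2 * hbar * c / (240 * d**4)
            print(f"d = {d:.0e} m  ->  P ~ {P:.2e} Pa")

    The 1/d^4 scaling is why any such engine would have to operate at sub-micron separations.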

    Update: Geoffrey Landis points out that this wouldn’t work.

    So I also like to speculate about energy sources that aren’t practical yet. But a photon drive isn’t a source of energy. It converts energy to photons like a flashlight does. Understanding physics could help you to avoid mistaking flashlights for energy sources, and might dissuade you and Lonny Eachus from cheerleading Fleischmann–Pons cold fusion long after physicists dismissed it because it can’t be replicated and has no plausible mechanism.


    Footnotes

    [1] A complete answer awaits quantum gravity. ↩ back

    [2] Also, note that equation 8 from Brown and Maclay 1969 (PDF) and equation 1 from Forward 1984 both say the energy density between Casimir plates is negative. ↩ back

    [3] Pinto 1999 (patented) considers plates made of semiconductors, which have finite conductivity and a correspondingly large skin depth. Semiconductors are created by doping a silicon crystal lattice with impurities like phosphorus, which also makes their properties non-uniform at the atomic scale. Superconductors made of just one element would address both concerns. Type II superconductors have second order phase transitions, so passing the critical temperature (T_c) doesn’t involve latent heat. There are just two stable elemental type II superconductors: niobium (T_c = 9.26K) and vanadium (T_c = 5.03K). The niobium plate’s heat capacity should be larger than the vanadium’s, so the vanadium plate is warmed past its critical temperature upon contact. The niobium plate remains below its T_c, so it could also be a type I superconductor like lead or beta-lanthanum. ↩ back

    [4] The plate motion could also be turned into electricity via piezoelectric crystals or molecular motors. The plate temperatures could be controlled with Peltier coolers. Fortunately, vanadium’s critical temperature is higher than 2.73K, so its heat can be radiated to space. In practice, one “plate” should probably be part of a sphere, because aligning parallel flat plates is difficult when they’re very small. ↩ back

  24. (Ed. note: this comment was copied from here.)

    Maxwell’s equations force E=0 inside perfect conductors, which means that vacuum fluctuations with a half-wavelength longer than the separation between the plates can’t exist between the plates.

    By the way: If you are going to refer to Maxwell’s equations, you should use caution. Because often what are referred to as “Maxwell’s Equations” are actually just Maxwell’s simplifications of Heaviside’s and Hamilton’s quaternion equations, with introductions of arbitrary “constants” to cancel out inconveniences, much like Einstein’s “cosmological constant”. There is a good deal of modern evidence that Maxwell’s attempt to simplify things may have been wishful thinking, and that Heaviside and Hamilton had it right all along. We rely much on Maxwell, but his conclusions are assumptions. Not only are they not proven, there is significant counter-evidence. [Jane Q. Public]

    Good grief. Electric fields are zero in perfect conductors. I explain this fact to freshman physics students by asking: what would happen if we tried to place an electric field across a conductor? Electrons would move opposite the field, and positive electron holes would move with the electric field, exactly enough to cancel out the original field inside the conductor. Better conductors cancel out faster, so electric fields are zero in perfect conductors.

    Mentioning that this fact can be derived from Maxwell’s equations is meant to be helpful, because all physics students should be familiar with the first theory that emerged in a Lorentz-invariant form. In other words, Maxwell’s equations were consistent with special relativity before relativity even existed. They’re the basis of all radio equipment, and the correspondence principle checks that quantum electrodynamics (one of the most accurate theories in history) is identical to Maxwell’s equations for large systems. If your reaction to hearing “Maxwell’s equations” is to spray chaff about quaternions, you’ll be disappointed to find that core classes based on junior-level Griffiths and graduate-level Jackson are almost exclusively about Maxwell’s equations.

    Quaternion notation is useful when describing 3D rotations, but it’s not used in electrodynamics because vector notation is more intuitive. That doesn’t stop crackpots from insisting that Maxwell’s equations are wishful thinking.

    Physicists use Maxwell’s vector equations despite the fact that we’re well aware of quaternion notation. John Baez even wrote a paper on octonions. As Baez quips, if the noncommutative quaternions are like a shunned eccentric cousin, then the nonassociative octonions are like the crazy old uncle nobody lets out of the attic.

    In fact, look at p542 of Griffiths 3rd edition: “Equation 12.136 combines our previous results into a single 4-vector equation– it represents the most elegant (and the simplest) formulation of Maxwell’s equations.”

    Page 555 of Jackson 3rd edition uses different units to make a similar point. Both graduate and undergraduate electrodynamics courses introduce students to the 4-vector notation, which collapses the four Maxwell’s equations into a single 4-vector equation.
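
    For the curious, it looks schematically like this: once the field tensor F is built from the 4-potential, the homogeneous pair (Faraday’s law and Gauss’s law for magnetism) holds automatically, leaving a single 4-vector equation for the sources (sign and unit conventions differ between Griffiths and Jackson):

        \partial_\mu F^{\mu\nu} = \mu_0 J^\nu

    The nu = 0 component is Gauss’s law, and the three spatial components are the Ampère-Maxwell law.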

    Using different notations can be fun, but shouldn’t change the answer to any physical problem. Therefore, caution isn’t required when referring to elementary consequences of Maxwell’s equations, such as the fact that electric fields are zero in perfect conductors.

    It’s also strange that you criticize Maxwell’s equations for introducing arbitrary constants (with no links or details, as usual). I’ve previously explained that many physicists think the “zero” in Gauss’s law for magnetism should be replaced with the density of hypothetical magnetic monopoles.

    Because often what are referred to as “Maxwell’s Equations” are actually just Maxwell’s simplifications of Heaviside’s and Hamilton’s quaternion equations, with introductions of arbitrary “constants” to cancel out inconveniences, much like Einstein’s “cosmological constant”. [Jane Q. Public]

    Einstein abandoned the “cosmological constant” in 1931 after Lemaître (1927) and Hubble (1929) showed that the universe is expanding. Ironically, your 2009 suggestion that “the Universe is not expanding after all” contradicts all the cosmology we’ve learned since 1927. The last link in that comment evokes yet another feeling of déjà vu:

    … there is a good bit of evidence that in some cases we may end up having to go back to some of those older ideas: so far dark mass and energy haven’t proved out, and there have come up explanations that don’t need them. Explanations that go back to some of the old “unadjusted” equations after all. … We may, for example, be going back to Hamilton’s Quaternions, and Heaviside’s, analyses as opposed to Maxwell’s simplifications. It all depends on how it shakes out. … [Jane Q. Public, 2011-12-05]

    After repeatedly appealing to Heaviside without providing any links or details, perhaps you should read Heaviside’s own words?

    “… I came later to see that, as far as the vector analysis I required was concerned, the quaternion was not only not required, but was a positive evil of no inconsiderable magnitude; and that by its avoidance the establishment of vector analysis was made quite simple and its working also simplified, and that it could be conveniently harmonised with ordinary Cartesian work. There is not a ghost of a quaternion in any of my papers (except in one, for a special purpose). The vector analysis I use may be described either as a convenient and systematic abbreviation of Cartesian analysis; or else, as Quaternions without the quaternions, and with a simplified notation harmonising with Cartesians. In this form, it is not more difficult, but easier to work than Cartesians. Of course you must learn how to work it. Initially, unfamiliarity may make it difficult. But no amount of familiarity will make Quaternions an easy subject. Maxwell, in his great treatise on Electricity and Magnetism, whilst pointing out the suitability of vectorial methods to the treatment of his subject, did not go any further than to freely make use of the idea of a vector, in the first place, and to occasionally express his results in vectorial form. In this way his readers became familiarised with the idea of a vector, and also with the appearance of certain formulæ when exhibited in the quaternionic notation. …” [Oliver Heaviside, Electromagnetic Theory, Volume I, pp. 134–135, 1893]

    Here, Heaviside echoes the mainstream viewpoint that quaternions and vectors are just two notations for describing electrodynamics. And he doesn’t seem to advocate the quaternion notation.

    … Dude. I know you are a scientist. But do you even really know what the Casimir effect is? Of course I expect you will by the time you answer (if you do). And if you do answer, I probably won’t reply. But at this very moment, at the time you first read this, from what you have already stated, I suspect that you really don’t know what it is. [Jane Q. Public]

    In addition to the other reply I wrote two days before your latest accusation of incompetence (er, charming and productive questions), I’d also briefly explained the Casimir effect to Marble three years earlier.

    Apologies. I did not see your additional reply until I had already answered. It appears that you do, in fact, know what the Casimir effect is. [Jane Q. Public]

    Instead of apologizing, could you please stop spamming humanity with all of this multi-disciplinary misinformation? You may find that doing so reduces your need to apologize in the future.

  25. (Ed. note: this comment responds to a 2011 conversation about this T2K paper which observed muon neutrinos oscillate into electron neutrinos.)

    It’s only arrogance if you’re wrong. If you are correct, it’s knowledge. If you’re wrong, it’s arrogance. Sadly, many employers do not understand this little bit of wisdom. [Jane Q. Public, 2012-10-25]

    Jane, are you sure you want to use that criterion? Let’s reminisce…

    How do they know they were the same neutrinos they launched out? [Dr Max]

    … they know the beginning ratio and ending ratio of the different types. If they are not the same, then some must have flipped (or rotated, or whatever language the neutrino guys use these days). [global_diffusion]

    Not necessarily. They could be different neutrinos, caused by atoms in the way absorbing some neutrinos and emitting others. I am not sure but I suspect that is what GP [Dr Max] was getting at. Rather than evidence of neutrinos actually changing from one type to another, it seems just as likely (more likely?) that intervening matter performed a conversion. Just as, say, a crystal or a gas can “change” a laser’s color by absorbing photons and then emitting others of a different frequency, maybe matter is absorbing these neutrinos and emitting others with different properties. [Jane Q. Public, 2011-06-17]

    Nonlinear crystals can change a laser’s color by absorbing photons and then emitting others of a different frequency because photons are mediators of the electromagnetic force, so they interact with comparatively large (~10^-10 m) electron clouds. But neutrinos only interact via gravity (irrelevant here) and the weak force, whose range is only ~10^-18 m. Since the cross section determines how likely interactions are, neutrinos are roughly ten thousand trillion times less likely to interact with matter than photons. This is just an approximation, but experiments yield similarly tiny cross sections.
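
    That “ten thousand trillion” factor is mostly geometry; here’s a rough back-of-the-envelope version, assuming cross sections scale with the square of the interaction range:

        # Rough cross-section scaling: sigma ~ (interaction range)^2.
        r_em   = 1e-10   # m, typical electron-cloud size seen by photons
        r_weak = 1e-18   # m, rough range of the weak force

        ratio = (r_em / r_weak) ** 2
        print(f"photons interact ~{ratio:.0e} times more readily than neutrinos")
        # ~1e16, i.e. ten thousand trillion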

    If neutrinos have to interact with intervening matter before hitting the detector, an extra interaction is involved. That’s why Chris Burke pointed out that detecting neutrino flavor change due to an interaction with intervening matter would depend on the square of the interaction probability. Detection in the conventional flavor oscillation theory just depends on the interaction probability because it only involves a single interaction, so it’s trillions of times more likely to explain the observed electron neutrino events.

    In fact, that T2K paper acknowledged a much bigger source of noise on page 8: the muon neutrino beam was slightly contaminated by electron neutrinos. This contamination doesn’t invalidate their results because it only explains ~1.5 out of 6 observed electron neutrino events.

    Anyway, the processes that change a laser’s color are given names like “second-harmonic generation” (where a crystal combines two photons into one, commonly used in green laser pointers) and “parametric down-conversion” (where a crystal splits one photon in two, commonly used as a source of entangled photons). To the best of my knowledge, these nonlinear processes only work in crystals, not in gases.

    I haven’t studied second-harmonic generation in depth, but five years ago I reviewed a quantum teleportation experiment that used a beta-barium borate (BBO) crystal to generate entangled photons via parametric down-conversion. Look at figure 1 on page 6 of the PDF or slide 12 of the powerpoint animation. Notice that the down-converted photons leaving the BBO crystal aren’t collinear with the original UV pump laser beam.

    That’s because the down-converted photons are emitted in two cones which don’t generally align with the pump photon, as shown in this diagram. Careful phase matching of the BBO crystal is required for the down-converted photons to leave in the same direction as the pump photon. For example, here’s an experiment using collinear parametric down-conversion. Notice that they had to buy a BBO crystal that was likely periodically poled, then arrange it at exactly the right angle with respect to the pump beam in order to produce collinear down-converted photons. This is highly unlikely to happen naturally, as you suggest is happening with neutrinos.

    Also, the total number of photons in the universe isn’t conserved. That’s why parametric down-conversion can turn one photon into two photons. But the total number of leptons is usually conserved, and the total number of baryons minus leptons is even more likely to be conserved. So the neutrino analogue of parametric down-conversion would increase the total number of leptons by +1 unless it also creates an antilepton (or destroys an existing lepton). Alternatively, it could create a baryon (or destroy an existing antibaryon, which seems unlikely) to keep B-L constant instead. Both possibilities seem to either violate conservation of energy or imply neutrino-induced radiation.
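
    To make the bookkeeping explicit (each neutrino carries lepton number L = +1, each antineutrino L = -1; a schematic that ignores flavor):

        \nu \to \nu + \nu \quad (\Delta L = +1,\ \text{forbidden}), \qquad
        \nu \to \nu + \nu + \bar{\nu} \quad (\Delta L = 0,\ \text{but costs the mass-energy of the extra pair})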

    This would imply that the absorbing/emitting matter emitted it in exactly the same direction, which seems unlikely. [AlecC]

    That’s why I used the example of the laser: the photons are emitted in exactly the same direction, however unlikely you might think that is. [Jane Q. Public]

    Here you’ve switched to a different topic: stimulated emission, which does happen in gases, and is collinear. But why is it collinear? Photons produced via stimulated emission are identical to the original photon not only in terms of direction, but also in terms of frequency, phase, polarization, and transverse and longitudinal spatial states. In fact, they’re in exactly the same quantum state. As I’ve explained:

    Bosons and fermions are both types of indistinguishable particles in quantum mechanics. Fermions have half-integer spin, like protons, electrons, antiprotons and positrons. Bosons have integer spin, such as photons and mesons. Some implications of this distinction can be deduced using nonrelativistic quantum mechanics, such as the fact that fermions obey the Pauli exclusion principle while bosons actually attract each other into the same state (Griffiths 1st ed p179). The connection between these statistics and spin is simply assumed in nonrelativistic quantum mechanics, but it can actually be deduced using relativistic quantum field theory.

    Because photons have spin 1 and are thus bosons, they attract other photons into the same state. Griffiths derives Einstein’s “B coefficient” governing stimulated emission on p311 of the 1st edition; this derivation depends on the fact that photons are bosons. However, neutrinos have spin 1/2 and are thus fermions, so the Pauli exclusion principle prevents them from occupying the same quantum state as other neutrinos. Therefore, stimulated emission of individual neutrinos is impossible.

    Update: Actually, the stimulated photon isn’t identical to the original photon because of the no-cloning theorem. But stimulated emission is a “natural candidate for the practical realization of a quantum copier.”

    Not necessarily. They could be different neutrinos, caused by atoms in the way absorbing some neutrinos and emitting others. … [Jane Q. Public]

    It’s not entirely an oversimplification to say “that won’t happen” – solar neutrinos pass straight through the Earth for example. (See the Wikipedia page) [Tim C]

    Do they? Or do they often collide with atoms and experience the same kind of “conversion”? As far as I know, nobody has performed any experiments to find out. The very idea that they might change from one form to another is very recent. [Jane Q. Public]

    Sure, if 1957 fits your definition of “very recent”.

    Do they? Or do they often collide with atoms and experience the same kind of “conversion”? As far as I know, nobody has performed any experiments to find out. The very idea that they might change from one form to another is very recent. [Jane Q. Public]

    I guess just over half a century is ‘very recent’ by some standards, but I’d say probably not by the standard of “recent enough for me to assume no experiments have been conducted.” [Chris Burke]

    This was the first experiment of this kind to be performed, as you well know. Those others you mention over that last century are not relevant to my comment. Tell me: when was the last other experiment performed to find this evidence about the third leg of the oscillation? What’s that you say? Never? Wow. How about that. [Jane Q. Public]

    You originally wrote “change from one form to another” which isn’t flavor-specific and thus refers to neutrino oscillation in general. Here you seem to be asserting that this meant “change from muon neutrino to electron neutrino which is controlled by theta_13”. Even if that’s what you originally meant, you still need to redefine “Never” to mean the papers published in 1992, 2001, 2002, 2003, 2003, 2003, 2005, 2006, 2007, etc.

    I think it’d be Nobel prize material if one found neutrino-stimulated neutrino emission, as that is what you’re alleging. I’m not saying it’s impossible, just that IIRC my undergrad physics at all, it’d be a big discovery. [tibit]

    Bigger than, say, neutrinos spontaneously, and without obvious cause, changing from one form to another? I don’t see why. In fact, I think it is the more likely explanation. It fits Occam’s razor a hell of a lot better, because you don’t have to assume some kind of spontaneous process from a cause unknown. [Jane Q. Public]

    This still begs the question: they are claiming that this is a “new type” of neutrino oscillations. So what causes the oscillations? So far I have yet to see an explanation, anywhere. [Jane Q. Public]

    You are saying that the cause of this oscillation is known? If so, can you enlighten us, or at least link to an explanation of this behavior? Because everything I have read about it so far says that (a) this is the first time it has been observed, and (b) the cause is unknown. [Jane Q. Public]

    You tell me: what is the most likely hypothesis for why this happens? Not how… stop getting that confused. I asked why. What is the cause behind neutrino oscillation? I will patiently wait for at least one, or hopefully at least three hypotheses about the cause of these theoretical oscillations. I don’t want to hear any garbage about waveforms and probability. That’s a how. I asked for a why. … Come back when you can explain to me some hypotheses for the cause of neutrinos oscillating. NOT an equation (still very much speculative, at that) purporting to describe how. [Jane Q. Public]

    Dismissing wave functions and probability as “garbage” isn’t a very productive approach to learning quantum physics. The cause of neutrino oscillation is that a neutrino’s wave function interferes with itself, because neutrino propagation eigenstates aren’t identical to the flavor eigenstates involved in neutrino detection, and propagation eigenstates with different masses have different wavelengths. As a result, flavor detection probabilities vary spatially.
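
    In the standard two-flavor approximation, that self-interference shows up as an oscillating detection probability, where theta is the mixing angle between flavor and propagation eigenstates, Delta m^2 is the mass-squared splitting, L is the distance traveled, and E is the neutrino energy (natural units):

        P(\nu_\mu \to \nu_e) = \sin^2(2\theta)\, \sin^2\!\left(\frac{\Delta m^2 L}{4E}\right)

    All of the spatial variation comes from the relative phase between the mass eigenstates, which is as close to a “why” as quantum mechanics offers.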

    Thanks for the mention of MSW Effect. The idea of coherent forward scattering is something that I mentioned myself earlier, but I was merely speculating about the possibility, without actually knowing about it. [Jane Q. Public]

    … I only mentioned the possibility that coherent scattering might exist, in a completely different comment that did not directly bear on the first one. And, as it turns out, coherent scattering does exist. But the possibility that it is the actual cause of the results of this experiment are, admittedly, near nil. The point of that comment was only that coherent scattering should be possible… and it turns out that it is. [Jane Q. Public]

    No, you made a vague reference to parametric down-conversion which isn’t naturally collinear and seems to either violate conservation of energy or imply radiation if it could happen with neutrinos. Then you made a separate vague reference to stimulated emission which doesn’t work with fermions like individual neutrinos. Saying the word “laser” isn’t the same as saying that “coherent scattering should be possible” because the MSW effect (i.e. coherent forward scattering) is analogous to refraction, not to a laser. Just because a laser emits coherent light doesn’t mean you get to reinterpret your previous statements like those of Nostradamus.

    You’re trying to draw a distinction between the MSW effect in matter and neutrino flavor oscillation in vacuum that simply doesn’t exist in our universe. The MSW effect is analogous to the way a prism separates white light into a rainbow, which is an example of dispersion. A prism’s index of refraction is wavelength dependent, so photons with different wavelengths travel through it at different speeds.

    Electron neutrinos interact with electrons in matter via W bosons, but muon and tau neutrinos don’t. This is a kind of dispersion where electron neutrinos (antineutrinos) travel slower (faster) in matter than muon or tau neutrinos because they have different effective masses. Both vacuum and MSW oscillations occur because a neutrino’s flavor (detection) eigenstates are rotated with respect to its propagation eigenstates, which travel differently because they have different masses. If we lived in a universe where all neutrinos were massless, neutrino oscillation wouldn’t happen in vacuum, but the MSW effect would still cause neutrinos to oscillate in stars.

    But we actually live in a universe where neutrinos have very small but non-zero masses, so the same physics that explains the cause of the MSW effect (where neutrinos oscillate in stars) also explains the cause of neutrino oscillation in vacuum. For example, Boris Kayser derives functionally identical Hamiltonians for vacuum and MSW oscillations in equations 25 and 37.
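
    Plugging hedged, roughly T2K-sized numbers into the two-flavor formula above (illustrative values, not the collaboration’s fitted parameters):

        import math

        # Two-flavor oscillation probability in the common "experimentalist"
        # units: P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
        # with dm2 in eV^2, L in km, and E in GeV.
        sin2_2theta = 0.1    # illustrative sin^2(2*theta_13)
        dm2 = 2.4e-3         # eV^2, illustrative atmospheric splitting
        L   = 295.0          # km, roughly the T2K baseline
        E   = 0.6            # GeV, roughly the T2K beam energy

        P = sin2_2theta * math.sin(1.27 * dm2 * L / E) ** 2
        print(f"P(nu_mu -> nu_e) ~ {P:.3f}")   # ~0.1 near the oscillation maximum

    A roughly 10% appearance probability near the oscillation maximum is why a handful of electron neutrino events over a small background was already interesting.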

    I am curious … What is the proposed mechanism by which these neutrinos oscillate? If flavor is a measurable property, then how can they “spontaneously” change? [Jane Q. Public]

    … I am still left, however wondering not how neutrinos oscillate, but rather the why. What causes them to oscillate in the first place? I understand about spontaneous propagation and destruction of virtual particles, for example, and to me that needs little explanation, because it’s all probability and there is no — or very little anyway — net gain or loss. Things aren’t changing properties, on average… just form. But it seems to me that this neutrino oscillation is different. There is a macroscopically measurable difference of properties, and so I have a hard time accepting that it is merely probability “driving” the neutrino oscillations. [Jane Q. Public]

    A few years ago, I mentioned that virtual particles can explain why light slows down in materials, which is related to the MSW effect. But I also said that “I couldn’t spinor my way out of a paper bag” so here’s a simpler analogy for understanding the cause of neutrino oscillations.

    Consider the famous double-slit experiment which is performed in freshman physics classes. Photons actually go through both slits, then interfere with themselves to cause a counterintuitive oscillatory pattern on the screen. Neutrinos interfere with themselves too, which causes a similar pattern of neutrino flavor oscillations. Both patterns exist because of wave function interference, which cause the detection probabilities to vary spatially.

    In other words, by questioning the long-established cause of neutrino flavor oscillation, you’re also questioning basic quantum theory.

    Well, since it [a neutrino] isn’t subject to magnetic or electrical forces, it basically has to slam into the nucleus … it needs to get close enough to another particle – by coincidence – for the weak force to start having a decent effect on them. [OeLeWaPpErKe]

    You are saying, in effect, that radioactivity is unlikely. And statistically, it is, I suppose. All I am doing is speculating. So far, I have not seen anybody (aside from a commenter here who so far has given no evidence) that there is a cause known for this “oscillation”. I am simply guessing — no more than that — at a possible cause, rather than assume it is somehow spontaneous. [Jane Q. Public]

    It’s odd that you mention radioactivity, because that’s a spontaneous, macroscopically measurable difference in the properties of a nucleus, and it’s “driven” by mere probability. Just like neutrino flavor oscillation.

    … at this time any real evidence is still waiting to show up. And I will be happy to accept that evidence, if it was responsibly gathered. Until then, I am entitled to my opinion as to what is more likely. [Jane Q. Public]

    The skeptic looks for potential causes for an observation, rather than accepting that it happens spontaneously or through “mysterious” processes. If the cause is unknown, then speculation as to the possible cause is not only called for, but necessary. Further evidence will not be forthcoming until those speculations are tested. I do not claim to be as qualified to speculate on the matter as professional physicists; nevertheless, in an absence of explanation I still have a right to speculate. [Jane Q. Public]

    Freedom of speech gives you the right to speculate either way, but your frequent rants about physics would be more credible if you:

    1. Recognize that “absence of explanation” should read “absence of explanation that I’ve learned about.”
    2. Try to understand the conventional explanation instead of pretending it’s absent.

    … this looks like a definitive on-point source, from LBL by a FNAL author. Enjoy! [singlercm]

    Thank you for that link. I now see how, theoretically anyway, it could be a probabilistically-determined superposition. That clears up a lot. [Jane Q. Public]

    No, it really is a superposition, not just in theory but later confirmed by the experiments shown in figure 13.2 of that paper and many more. You’re just manufacturing unwarranted doubt about yet another topic in physics.

    • I’ve failed to communicate once again. An anonymous commenter also fails to communicate.

    • Damn. Posts like that are the reason I read Slashdot. I’m an astrophysicist working in neutrino detection, working on a project looking for neutrinos with energies above 10^20 eV, and I didn’t know more than a quarter of what I read in that post. Everyone in my field more-or-less ignores the neutrino flavour, assuming that neutrinos from some astrophysical source will reach a 1:1:1 flavour ratio through oscillation by the time they reach us.

    • You clearly spent a lot of time on this post. It is quite excellent, and I thought that I would bring to light a question that you presented about nonlinear optical processes in gases. The simple answer is that with a powerful enough laser, you can get any odd-ordered nonlinear effect (i.e. 1st (absorption), 3rd (like 2D IR), 5th (some Raman experiments), etc.). You cannot, however, see any (to a very, very, very, very good approximation) even-ordered effects because the polarizability must have inversion symmetry in an isotropic medium. That is, P(x) = –P(-x) in something like a gas or a glass or liquid (isotropic). Crystals are not necessarily isotropic, enabling them to have such an effect over the medium (specifically, crystals lacking inversion symmetry). You may note that the molecules themselves can be isotropic; it’s their distribution on average that matters here (so yes, you may be able to observe a tiny, tiny, tiny signal due to differences in the average, but this is negligible). Just thought you’d like to know :)

      • Thank you, that was informative and interesting. I also appreciate the replies from several other anonymous commenters.

    • … I have to ask you one more time: what part of STOP STALKING MY CONVERSATIONS, GO THE FUCK AWAY, AND LEAVE ME ALONE do you not understand??? THIS is a prime example of arrogance, and it is demonstrably no joke. You need to go take a l-o-n-g look in the mirror. And then go the fuck away. I am serious. This is getting to the point of stalking and harassment. Do you really want to go there? [Jane Q. Public]

      This bears repeating, Mr. “Khayman80”: You appear to have some kind of unhealthy obsession with me and it has gone far beyond the point of simply rubbing me the wrong way. If you do not cease and desist voluntarily, I will be compelled to start looking into what other options may be available. [Jane Q. Public]

      Don’t flatter yourself. Debunking misinformation and defending scientists against baseless attacks are my unhealthy obsessions. It’s hardly my fault that you’re one of the most prolific misinformers I’ve ever seen. If you didn’t want people responding to your claims, you probably should’ve written them in a notebook instead of on a public website. It’s also strange that you call my responses to your public comments “stalking and harassment” while quoting hacked private emails from years ago to baselessly attack scientists.

      YOU don’t have somebody following you around and harassing you with months-old, off-topic comments all over Slashdot. If this had been the only example, I wouldn’t mind. But he has done it many times. Frankly, he acts like a stalker and I don’t know what his obsession with me is, but I don’t appreciate it in the least, and if somebody had been doing it to you, you wouldn’t either. [Jane Q. Public]

      Let’s consider some of the “many times” you mentioned. When I asked for references to support your claims about climate science, you called me a vindictive asshole. Then I responded to your claims about the Casimir effect hours before my presentation at the GRACE science team meeting. Afterward, I wrote another comment about negative energy, then went on vacation. After returning home, I found that you’d dramatically expanded the scope of your claims. When I responded, you complained that I’d taken weeks and accused me of being a stalker.

      My response to your claims about neutrino oscillation was interrupted last summer by a cross-country move, after which research quickly diverted my attention. However, the charming comments you left at Dumb Scientist in June reminded me that you hadn’t retired. When I responded, you complained that I’d taken MONTHS and accused me of pathetic personal attacks. When I responded just now to your claims about neutrino oscillation, you complained that I’d taken too long and accused me of being a stalker. But refuting your claims about neutrino oscillation is a prerequisite to refuting your other claims about Latour’s article, which you’ve asked for:

      Where is your refutation of any argument I made HERE, in this thread? Where is it? … Why aren’t you discussing the issue I raised? [Jane Q. Public, 2012-07-16]

      Again, you sidestep my question. Why can’t you answer it? It is an article about physics. Would you like to refute the actual content? The fact is that I suspect you will not actually address this. … Because I don’t think you really CAN refute LaTour’s physics. Instead you will try to prove ME wrong. [Jane Q. Public, 2012-07-16]

      I’ll prove you both wrong. Again, patience.

      More generally, you seem to be asserting that there’s a statute of limitations on debunking misinformation. Apparently, if your baseless attacks against scientists aren’t answered immediately, they should remain unchallenged forever. I disagree. Gish Gallops are effective because repeating nonsense only takes a few seconds, but researching and debunking that nonsense often takes days. As a result, I’m months behind in debunking your misinformation. Each time you repeat more nonsense, you’re just delaying the blissful day when we can finally ignore each other.

      I’m posting my comments as replies to your most recent comment to make a frozen public copy, and to give you a chance to respond on neutral ground. If you’d prefer, I could post without replying to you so that you could ignore me more easily. I haven’t done that so far because it seemed like debunking you without giving you a chance to respond would lead to accusations of cowardice.

      But I can’t just ignore the misinformation that you’ve helped spread, because some of it threatens the future of human civilization.

    • When Jane responded, I said:

      Again, you’ve asked questions which will take months to answer. Again, if you’d prefer, I could post without replying to you so that I don’t rudely interrupt you.

    • What the hell are you talking about? What questions have I posed, in the last few days, that will take months to answer? … [Jane Q. Public]

      1. You’ve repeated your support for Latour’s article, which is fractally wrong.

      2. Scientific peer review requires rejecting bad papers. Scientists would only be conspiring to suppress legitimate papers if those papers were actually legitimate. Your baseless attack only required a few minutes of copy-pasting from hacked private emails. To debunk it, I’ll have to spend months figuring out what papers those quotes refer to, examining their claims, linking to them, describing some of their worst mistakes, etc. And that’s just one typical example out of dozens.

      Again, if you’d prefer, I could post without replying to you so that I don’t rudely interrupt you.

    • When Jane responded, I said:

      Okay, I’ll take that as a polite request that I post without replying to you so I don’t rudely interrupt you.

      Update: I’ve failed to communicate once again and again and again and again and again.

  26. arationofreason posted on 2014-11-20 at 09:54

    Just musing: We know that light can change speed in a ‘transparent’ medium. From new astronomy, interstellar space seems to contain plasma. The speed of light should be changed in interstellar space. ??

  27. andrew posted on 2016-06-03 at 04:01

    “We’ve seen meteorites causing craters. It’s an established fact. I’m thinking of the craters that have formed on Earth during recorded history, as well as craters that have formed on the moon and been seen by our telescopes (it looks like a single bright flash, not a lightning-like spark), and events such as Shoemaker-Levy 9 which show that comets do strike planets.”

    This was written early on in this conversation by an individual who was not impressed by the EU theory. I would like to suggest that, even though I too believe craters can be caused by meteors, no one has ever seen this happen, and the only large-scale events, like Tunguska for example, have been far from crater-leaving events.

    • Wrong. NASA routinely observes craters being formed on the moon. It’s a serious problem for the (possibly) upcoming moon base, so they’re trying hard to characterize the impact frequencies and size distributions to keep the colonists safe. Here’s the best video I’ve found that shows an impact.

      The largest impact in recorded history was the Tunguska event in Russia in 1908. Recently, researchers have claimed that the impact crater is hidden under a lake. I think this is the lake in question, and they’re planning to take core samples to confirm this (by searching for the expected ejecta at the right depth).

      In 1947, a meteorite hit Russia and left several craters, the largest of which was 26m across and 6m deep.

      In 2007, a meteorite hit Peru, and left a roughly circular crater 13m across and 4.5m deep.

      Also, over a thousand meteorites have been recovered after eyewitnesses followed the fireball to the rock.
