User talk:Tommysun

Revision as of 07:51, 23 November 2006

Archives: 1 2 3



Here ... have a mind beer

Mind-beer ... on me ... User:QTJ/Wikipedia_Humor

Sometimes it helps to see the humor behind everything?

--QTJ 06:19, 31 October 2006 (UTC)



My letters...


http://listserv.arizona.edu/cgi-bin/wa?A2=ind0101&L=quantum-mind&T=0&P=1046

Date: Sat, 20 Jan 2001 12:49:57 -0800

Reply-To: Quantum Approaches to Consciousness

Sender: Quantum Approaches to Consciousness

From: Brian Flanagan

Subject: q-mind forum


In this issue we have a handful of fine contributions to offer, including a few new voices, a call for papers on the esoteric arts, and several outstanding links. So, thanks to all our contributors!

First up we have a highly acute essay from Gao Shan, who has clearly given the q-mind business a good deal of careful consideration, and whose insights go directly to the heart of the q-mind thesis. Further, Shan's work suggests a natural affinity with that of Henry Stapp, Eugene Wigner, and JS Bell.

[snip]

Thomas Mandel shares his learned reflections on Isaacson's work with cellular automata, and the surprising patterns to be found in nature.

-Brian Flanagan


Greetings. In response to Joel Isaacson's letter of 30 December 2000, I would like to offer the following comments. Joel believes that his discovery of a fundamental pattern in the codings of Cellular Automata may be important. He observes:

"Note 3: Hegelian dialectic is an influential, yet controversial,

philosophical doctrine that holds that the universe operates, through-and-through, according to complicated schemata of interacting opposites and their continual mediation. My discovery, in the early 1970s, of these kinds of schemata in the patterns generated by >recursive tetracoding had been totally unexpected and inexplicable.

In combination with the more recent discovery (May 1997) of baryon representations among the very same dialectical patterns, the surprise effect is compounded manyfold. End Note 3]

(A faithful copy of Joel's paper can be found at www.isss.org/2001meet/2001paper/stegano.pdf, his comments on this letter at www.isss.org/2001meet/2001paper/stegano.htm, and his referenced patent paper at www.isss.org/2001meet/2001paper/4286330.pdf.)
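
For readers unfamiliar with cellular automata, here is a minimal sketch of a generic one-dimensional cellular automaton in Python. It implements the well-known elementary "rule 90", not Isaacson's recursive tetracoding (whose rules are given in his paper above); it only illustrates how a simple recursive local rule can generate surprisingly rich global patterns.

    # Minimal 1-D cellular automaton (elementary "rule 90"; illustrative only,
    # NOT Isaacson's tetracoding). Each cell's next state depends only on its
    # two neighbors, yet the recursion draws an intricate Sierpinski pattern.
    WIDTH, STEPS = 63, 32
    row = [0] * WIDTH
    row[WIDTH // 2] = 1  # single seed cell in the middle

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in row))
        # Rule 90: new cell = XOR of its left and right neighbors (wrapping)
        row = [row[i - 1] ^ row[(i + 1) % WIDTH] for i in range(WIDTH)]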

What Joel is saying is that he has discovered a general pattern at work, and not just any pattern but the simplest pattern. The Hegelian dialectic Joel refers to is actually only one among many other ontological philosophies that express the same category of, well, wholeness. Nicholas of Cusa expressed it as coincidentia oppositorum. Heraclitus expressed it as a union of opposites, and used fire as an example of their relationship. Empedocles expressed it as a tetronic concept, with his whole composed of stuff, the boundless, the relationship, and the whole (water, air, fire, earth).

But even before them it was expressed in the Tao Te Ching, specifically in Chapter 42:

"The begot One
The One begot two
The Two begot THree
The Three begot the Ten Thousand Things..."

So this is not something we can say is "new." It's also mentioned in the Kabbala, where it is expressed as a geometrical progression from point to line to area to volume, exactly as Fuller described his tetrahedron.

See www.isss.org/primer/table.gif

This ontology, however, does require a new kind of thinking. Instead of thinking in terms of taking things apart into smaller parts, the new thinking requires putting them together into wholes. Ackoff describes this as down-and-up thinking: analytical and synthetic. In the scientific sense, this new thinking is regarded as systemic thinking, thinking in terms of systems. What Joel has found is an interpenetrative example of the general system principle at work. But remember, it is not the opposites that are ultimately important; rather, it is their relationships. As Ludwig von Bertalanffy expresses it, "Compared to the analytical procedure of classical science with resolution into component elements and one-way or linear causality as basic category, the investigation of organized wholes of many variables requires new categories of interaction, transaction, organization, teleology..."

(In short, the NCC is not a thing, it is a relationship.)

Incidentally, I have been working on this same principle since 1972, and I have invented a "tetronic notation" in order to work with it. It can be found at

http://www.newciv.org/ISSS_Primer/asem26tm.html

with a more general paper at

http://www.isss.org/primer/gensystm.htm

Hopefully, Joel's discovery of this principle in the Baryon Octet will finally bring to light what is perhaps the most re-invented wheel in our universe, the operating principle of the universe itself...

Note from Moderator: bj: The SU(3) group is thought to govern the strong nuclear force between baryons; this group is one of three which form the basis of the standard model of particle physics. So it would seem like a potentially quite fruitful avenue of research, then, to compare the mathematics of Isaacson's tetracoding with SU(3) and other groups, as well.

Let us remember, however, that the brain is an electrical machine, and so its many interactions are electromagnetic in nature, and therefore governed, not by SU(3), but by the group U(1) -- all this by way of standard physics. Now, in recent times, it has come to me that perhaps what we call color and sound and heat and cold and so forth are varieties of phase information. Consider that, upon observation, photons reliably yield these kinds of "information" to us. And so, on this view, the photon, in its role as the exchange particle of the EM interaction, as described so well by modern gauge theory, has rather more degrees of freedom than is usually countenanced. Which would lead one to ask whether U(1) might not give way to a larger group in order to accommodate these "secondary" qualities of color, sound, and so forth -- which are always and everywhere associated with photons upon observation.




JCS is the Journal of Consciousness Studies. The study of consciousness using the scientific method seeks to explain what consciousness is. It began several years ago with the search for the NCC, or Neural Correlate of Consciousness. Bit by bit the brain was dissected, and entities such as the neuron, spikes, synaptic junctions, and calcium were proposed as the NCC. But after all was said, none of them emerged as a winner. Today the list is still asking: what is consciousness?

In jcs-online@yahoogroups.com, Thommandel@... wrote:
"There is a way to look at all this information as a whole rather than a group of complicated parts. Most of the descriptions in the descriptive paragraph are in terms of "parts". Entities. But we can also describe what these parts are doing -- their relationships. Parvizi and Damazio used this approach in their paper on the The Recticular Formation, they even said so five, pages into the article, "It is important to note that these formations are described in terms of interactions."
"To make a long story short, I am enclosing a target paper by Charles Francois, Editor of the International Encyclopedia of Cybernetics and Systems writing about the newest new theory, interconnection theory, what it is and how we can develop it. The theory he is talking about is like a template theory - providing a structure and you fill in the blanks.
"The NCC, if we can still use that term, is not one neuron - it is two neurons - a system. interacting as a whole


(Alfredo) -
"I am in general agreement but not sure if the term "connection" is the right one. "Connectionist" models run in digital computers work pretty mechanistically. Jonathan and I have looked for a stronger way of information integration (related to what he calls "the information problem" in consciousness research). As the mechanistic picture of the world is "partes extra partes" (each part is statistically independent, or ontologically separated from the others), the stronger information integration we are looking for goes beyond mechanicism."
"I think that quantum entanglement is a good solution for this problem, but it is still unclear how a multiparticle entanglement could encompass informational variety.
"After a decade of theoretical research I have recently came to the conclusion that the best medium for quantum entanglement in the brain is the calcium ion population trapped in the astrocytic syncytium and interacting with the neurons' electrical fields (Local Field Potentials).
"This kind of proposal is partially similar to Penrose-Hameroff's, but in my view the quantum information that is relevant for consciousness is not processed by the microtubule/gap junctions proteins, but by the ions trapped in them. Johathan prefers a quantum "pattern" in the single neuron membrane.It seems we are entering a new phase in quantum consciousness research, with these and other neuroscientific-plausible proposals, leading to an exciting discussion in the near future.



Thom Mandel <Thommandel@aol.com> on 13 Nov 2006 wrote: >There is a way to look at all this information as a whole rather than a group of complicated parts. Most of the descriptions in the paragraph are in terms of "parts". Entities. But we can also describe what these parts are doing -- their relationships.

[S.P.] In principle, this is a very good direction of thinking. The only remark is that the description of the relationships between the parts is only a first step towards the description of the relationships between the wholes (which may be seen as the ultimate desirable task).

[Thom Mandel] >To make a long story short, I am enclosing a target paper by Charles Francois, writing about the newest new theory, interconnection theory, what it is and how we can develop it. The theory he is talking about is like a template theory - providing a structure, and you fill in the blanks.

The Primer Papers

TARGET PAPER BY CHARLES FRANCOIS

Editor: International Encyclopedia of Cybernetics and Systemics

TOWARDS A FRAMEWORK THEORY FOR SYSTEMICS PART TWO

Interconnecting Theory 1. Constructing a framework out of the empirical view

[S.P.] Well, we have here a collection of very good intuitive ideas. The most positive moment is that the author (Charles Francois) understands that describing the interconnectedness of wholes requires a principally different theoretical framework. He says: “Our first need is thus to discover efficient models for the workings of all the complex systems in which we are enmeshed.” The negative moment is that the author underestimates the whole complexity of the problem. So, he thinks that to construct the necessary framework it would be sufficient to answer the following questions:

- What is a connection?

- What produces connections?

- What are the effects of a connection on the connected elements?

- How do connections generate complexity?

- How to construct a descriptive taxonomy of the various types of connections?”

In other words, bearing in mind the Wholes, he continues to talk in the language of the Parts. In my view, answering those questions is important, but by no means sufficient, since we do not address here the key question: “Why does entropy additivity not take place in complex (including complex self-organized) systems?” Moreover, the formalization of Wholes (unlike Parts) requires the application of specially constructed theoretical models (like the model of the integrated information system (IIS), and the system of the associational (AS), dissociational (DIS), and decompositional (DEC) models; see [1]).

Thom Mandel <Thommandel@aol.com> on 14 Nov 2006 wrote: Using Baars's proposal to use visual consciousness as a model, I propose the measurement of cognitive visual consciousness.

It was mentioned that our peripheral vision differs from our focal vision. I submit that it is our focal vision that is "conscious" and our peripheral vision is unconscious. I propose that this relationship be measured.

[S.P.] Strange though it may seem, but when I look at the central letter “O” in the sequence of letters

b o w a l d n n b v l a j O k k a g h e i o a n g v a

I clearly see the letters “n b v l a j O k k a g h” together with the left-most “b” and the right-most “a”, while the regions “o w a l d n” to the left and “e i o a n g v” to the right are really blurred. So, either something is wrong with my conscious vision, or ... with the proposed theory. But, to speak seriously, Thom's idea to transform our on-line forum into a kind of scientific laboratory is very interesting and useful. As a case in point, I propose to look at the picture at http://www.geocities.com/spatlavskiy/gallery-anomalies.html (second picture; animation must be enabled in the browser's options). There are several violet dots arranged in a circle. But if you concentrate on the cross in the center of the circle for a long time, instead of many violet dots you will see a single green dot moving clockwise. I wonder whether the proposed cognitive visual consciousness theory (or Baars’s theory) can explain this visual illusion?

Thom Mandel <Thommandel@aol.com> on 3 Nov 2006 wrote: Amazing, I never told anyone about this, but I went through a five-year period in my life (I fell in love) when I had similar cloud experiences almost on a daily basis. If you want something more practical to think about, how does dowsing work?

[S.P.] In my case, it was not just some “cloud experience”, but observation of a pre-planned phenomenon. It was a scientific experiment, if you wish. I subsume this phenomenon under the more general category of R-facts. By R-fact I mean a reliable (uncontested, well-documented, indisputable) scientific fact that is unexplainable (or insufficiently explainable) on the basis of the existing scientific principles, theories, etc., or within the existing scientific paradigm. Under this category, the phenomenon of dowsing may be subsumed too. By the way, many R-facts found their explanation exactly when applying the system of the [AS-DIS-DEC] models (mentioned above).

Kindly, Serge Patlavskiy

[1] http://www.geocities.com/spatlavskiy/ElaborNewParadigm.pdf



Hi Thom,

Going offlist to let you know, fyi, I am midstream in the process of formulating a new articulation of quantum bioholography. Since I wrote this 2001 paper, I've been seriously frustrated, along with my colleagues at http://emergentmind.org, that Gariaev has been less than forthcoming about details of his equipment and protocols. He has thereby stymied efforts at replication, such as those Louis Malklaka would like to run. He and Poponin are in a feud over the discovery. Gariaev tends to 'fall off the map' periodically, especially since he is back in Russia. Nevertheless, his work is becoming 'the next big thing' in New Age thought ... a new meme.

However, the theoretics is certainly worth pursuing, though I never actually took it as a mind-over-matter proof of 'free will'. It is simply arguable, but much of that depends on WHICH school of physics is used for that train of thought. Elsewhere I have argued strongly against the new-agey buzzword 'intentionality'. [attached] Like the new 'What the Bleep' movie, they oversell this in 'The Secret' in a rather embarrassing and exploitive, materialistic way, telling the audience what they WANT to hear, for a nice fee. New Age hucksterism is a pet peeve of mine, more so in those I generally respect.

A colleague and I spent the summer going over all the equations for the various interpretations of QM, and wound up back where we began - with my dissenting colleague now convinced that plenum physics and solitons are a substantive investigative path (no pun intended). ;))) This time we will make a mathematical argument, and THEN a conceptual one that follows from it.

I was recently contacted by Dr. Mitja Perus of Slovenia, who has worked with both Pribram and Bohm and can probably shed some illumination with metaphors from optical holography. Currently, we are pursuing Resonant Holography, modeling not only brain ventricles as optical resonators, but each cell, as well.

Thx for your interest.

Best, Iona Miller




My reference article

Top Problems with the Big Bang Theory

[reprinted from Meta Research Bulletin 11, 6-13 (2002)]

Abstract. Earlier, we presented a simple list of the top ten problems with the Big Bang. [1] Since that publication, we have had many requests for citations and additional details, which we provide here. We also respond to a few rebuttal arguments to the earlier list. Then we supplement the list based on the last four years of developments – with another 20 problems for the theory.


(1) Static universe models fit observational data better than expanding universe models.

Static universe models match most observations with no adjustable parameters. The Big Bang can match each of the critical observations, but only with adjustable parameters, one of which (the cosmic deceleration parameter) requires mutually exclusive values to match different tests. [2,3] Without ad hoc theorizing, this point alone falsifies the Big Bang. Even if the discrepancy could be explained, Occam’s razor favors the model with fewer adjustable parameters – the static universe model.


(2) The microwave “background” makes more sense as the limiting temperature of space heated by starlight than as the remnant of a fireball.

The expression “the temperature of space” is the title of chapter 13 of Sir Arthur Eddington’s famous 1926 work. [4] Eddington calculated the minimum temperature any body in space would cool to, given that it is immersed in the radiation of distant starlight. With no adjustable parameters, he obtained 3°K (later refined to 2.8°K [5]), essentially the same as the observed, so-called “background”, temperature. A similar calculation, although with less certain accuracy, applies to the limiting temperature of intergalactic space because of the radiation of galaxy light. [6] So the intergalactic matter is like a “fog”, and would therefore provide a simpler explanation for the microwave radiation, including its blackbody-shaped spectrum.
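
As a rough check on Eddington's figure, the equilibrium temperature follows from the blackbody relation u = aT^4, i.e. T = (u/a)^(1/4). Here is a minimal Python sketch, assuming Eddington's own estimate of the starlight energy density (about 7.67×10^-13 erg/cm³); that input is standard physics history, not a number quoted in this article:

    # Equilibrium temperature of a body bathed in starlight, via u = a*T^4.
    # The starlight energy density is Eddington's 1926 estimate (assumed here
    # for illustration; it is not quoted in the article above).
    RADIATION_CONSTANT = 7.566e-16   # a, in J m^-3 K^-4
    u_starlight = 7.67e-14           # J m^-3 (= 7.67e-13 erg/cm^3)

    T = (u_starlight / RADIATION_CONSTANT) ** 0.25
    print(f"Equilibrium temperature: {T:.2f} K")  # ~3.2 K, close to Eddington's 3 K
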
Such a fog also explains the otherwise troublesome ratio of infrared to radio intensities of radio galaxies. [7] The amount of radiation emitted by distant galaxies falls with increasing wavelengths, as expected if the longer wavelengths are scattered by the intergalactic medium. For example, the brightness ratio of radio galaxies at infrared and radio wavelengths changes with distance in a way which implies absorption. Basically, this means that the longer wavelengths are more easily absorbed by material between the galaxies. But then the microwave radiation (between the two wavelengths) should be absorbed by that medium too, and has no chance to reach us from such great distances, or to remain perfectly uniform while doing so. It must instead result from the radiation of microwaves from the intergalactic medium. This argument alone implies that the microwaves could not be coming directly to us from a distance beyond all the galaxies, and therefore that the Big Bang theory cannot be correct.


None of the predictions of the background temperature based on the Big Bang were close enough to qualify as successes, the worst being Gamow’s upward-revised estimate of 50°K made in 1961, just two years before the actual discovery. Clearly, without a realistic quantitative prediction, the Big Bang’s hypothetical “fireball” becomes indistinguishable from the natural minimum temperature of all cold matter in space. But none of the predictions, which ranged between 5°K and 50°K, matched observations. [8] And the Big Bang offers no explanation for the kind of intensity variations with wavelength seen in radio galaxies.

(3) Element abundance predictions using the Big Bang require too many adjustable parameters to make them work.

The universal abundances of most elements were predicted correctly by Hoyle in the context of the original Steady State cosmological model. This worked for all elements heavier than lithium. The Big Bang co-opted those results and concentrated on predicting the abundances of the light elements. Each such prediction requires at least one adjustable parameter unique to that element prediction. Often, it’s a question of figuring out why the element was either created or destroyed or both to some degree following the Big Bang. When you take away these degrees of freedom, no genuine prediction remains. The best the Big Bang can claim is consistency with observations using the various ad hoc models to explain the data for each light element. Examples: [9,10] for helium-3; [11] for lithium-7; [12] for deuterium; [13] for beryllium; and [14,15] for overviews. For a full discussion of an alternative origin of the light elements, see [16].


(4) The universe has too much large scale structure (interspersed “walls” and voids) to form in a time as short as 10-20 billion years.

The average speed of galaxies through space is a well-measured quantity. At those speeds, galaxies would require roughly the age of the universe to assemble into the largest structures (superclusters and walls) we see in space [17], and to clear all the voids between galaxy walls. But this assumes that the initial directions of motion are special, e.g., directed away from the centers of voids. To get around this problem, one must propose that galaxy speeds were initially much higher and have slowed due to some sort of “viscosity” of space. To form these structures by building up the needed motions through gravitational acceleration alone would take in excess of 100 billion years. [18]


(5) The average luminosity of quasars must decrease with time in just the right way so that their average apparent brightness is the same at all redshifts, which is exceedingly unlikely.

According to the Big Bang theory, a quasar at a redshift of 1 is roughly ten times as far away as one at a redshift of 0.1. (The redshift-distance relation is not quite linear, but this is a fair approximation.) If the two quasars were intrinsically similar, the high redshift one would be about 100 times fainter because of the inverse square law. But it is, on average, of comparable apparent brightness. This must be explained as quasars “evolving” their intrinsic properties so that they get smaller and fainter as the universe evolves. That way, the quasar at redshift 1 can be intrinsically 100 times brighter than the one at 0.1, explaining why they appear (on average) to be comparably bright. It isn’t as if the Big Bang has a reason why quasars should evolve in just this magical way. But that is required to explain the observations using the Big Bang interpretation of the redshift of quasars as a measure of cosmological distance. See [19,20].
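
The arithmetic behind this point is just the inverse-square law; a minimal sketch, using the rough factor-of-ten distance ratio quoted above:

    # Inverse-square dimming for two intrinsically identical quasars, with the
    # approximate distance ratio from the text (z = 1 is ~10x as far as z = 0.1
    # under the Big Bang interpretation of redshift).
    distance_ratio = 10.0                 # d(z=1) / d(z=0.1), approximate
    flux_ratio = 1.0 / distance_ratio**2  # apparent brightness goes as 1/d^2

    print(f"Expected apparent brightness ratio: {flux_ratio}")  # 0.01, i.e. 100x fainter
    # To appear equally bright, the z = 1 quasar must be ~100x more luminous.
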
By contrast, the relation between apparent magnitude and distance for quasars is a simple, inverse-square law in alternative cosmologies. In [20], Arp shows great quantities of evidence that large quasar redshifts are a combination of a cosmological factor and an intrinsic factor, with the latter dominant in most cases. Most large quasar redshifts (e.g., z > 1) therefore have little correlation with distance. A grouping of 11 quasars close to NGC 1068, having nominal ejection patterns correlated with galaxy rotation, provides further strong evidence that quasar redshifts are intrinsic. [21]


(6) Globular clusters appear to be older than the universe.

Even though the data have been stretched in the direction toward resolving this since the “top ten” list first appeared, the error bars on the Hubble age of the universe (12±2 Gyr) still do not quite overlap the error bars on the oldest globular clusters (16±2 Gyr). Astronomers have studied this for the past decade, but resist the “observational error” explanation because that would almost certainly push the Hubble age older (as Sandage has been arguing for years), which creates several new problems for the Big Bang. In other words, the cure is worse than the illness for the theory. In fact, a new, relatively bias-free observational technique has gone the opposite way, lowering the Hubble age estimate to 10 Gyr, making the discrepancy worse again. [22,23]


(7) The local streaming motions of galaxies are too high for a finite universe that is supposed to be everywhere uniform.

In the early 1990s, we learned that the average redshift for galaxies of a given brightness differs on opposite sides of the sky. The Big Bang interprets this as the existence of a puzzling group flow of galaxies relative to the microwave radiation on scales of at least 130 Mpc. Earlier, the existence of this flow led to the hypothesis of a "Great Attractor" pulling all these galaxies in its direction. But in newer studies, no backside infall was found on the other side of the hypothetical feature. Instead, there is streaming on both sides of us out to 60-70 Mpc in a consistent direction relative to the microwave "background". The only Big Bang alternative to the apparent result of large-scale streaming of galaxies is that the microwave radiation is in motion relative to us. Either way, this result is trouble for the Big Bang. [24-28]


(8) Invisible dark matter of an unknown but non-baryonic nature must be the dominant ingredient of the entire universe.

The Big Bang requires sprinkling galaxies, clusters, superclusters, and the universe with ever-increasing amounts of this invisible, not-yet-detected “dark matter” to keep the theory viable. Overall, over 90% of the universe must be made of something we have never detected. By contrast, Milgrom’s model (the alternative to “dark matter”) provides a one-parameter explanation that works at all scales and requires no “dark matter” to exist at any scale. (I exclude the additional 50%-100% of invisible ordinary matter inferred to exist by, e.g., MACHO studies.) Some physicists don’t like modifying the law of gravity in this way, but a finite range for natural forces is a logical necessity (not just theory) spoken of since the 17th century. [29,30]


Milgrom’s model requires nothing more than that. Milgrom’s is an operational model rather than one based on fundamentals. But it is consistent with more complete models invoking a finite range for gravity. So Milgrom’s model provides a basis to eliminate the need for “dark matter” in the universe at any scale. This represents one more Big Bang “fudge factor” no longer needed.
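
For concreteness, here is a minimal sketch of the kind of one-parameter modification Milgrom proposed, using the "simple" MOND interpolating function with his acceleration scale a0 ≈ 1.2×10^-10 m/s². The galaxy mass and orbital radius below are hypothetical illustration values, not numbers taken from this article:

    import math

    # Simple MOND interpolation: a * mu(a/a0) = a_N with mu(x) = x/(1+x),
    # which solves to a = a_N/2 + sqrt(a_N^2/4 + a_N*a0).
    G  = 6.674e-11   # m^3 kg^-1 s^-2
    A0 = 1.2e-10     # m s^-2, Milgrom's single universal parameter

    def mond_acceleration(a_newton):
        return a_newton / 2 + math.sqrt(a_newton**2 / 4 + a_newton * A0)

    # Hypothetical example: a 1e11 solar-mass galaxy, a star orbiting 20 kpc out.
    M = 1e11 * 1.989e30   # kg
    r = 20 * 3.086e19     # m
    a_N = G * M / r**2    # Newtonian acceleration

    v_newton = math.sqrt(a_N * r) / 1000                     # km/s
    v_mond   = math.sqrt(mond_acceleration(a_N) * r) / 1000  # km/s
    print(f"Newtonian: {v_newton:.0f} km/s, MOND: {v_mond:.0f} km/s")
    # At large r the MOND curve flattens, mimicking the effect usually
    # attributed to a dark matter halo.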


(9) The most distant galaxies in the Hubble Deep Field show insufficient evidence of evolution, with some of them having higher redshifts (z = 6-7) than the highest-redshift quasars.

The Big Bang requires that stars, quasars and galaxies in the early universe be “primitive”, meaning mostly metal-free, because it requires many generations of supernovae to build up metal content in stars. But the latest evidence suggests lots of metal in the “earliest” quasars and galaxies. [31-33] Moreover, we now have evidence for numerous ordinary galaxies in what the Big Bang expected to be the “dark age” of evolution of the universe, when the light of the few primitive galaxies in existence would be blocked from view by hydrogen clouds. [34]


(10) If the open universe we see today is extrapolated back near the beginning, the ratio of the actual density of matter in the universe to the critical density must differ from unity by just one part in 10^59. Any larger deviation would result in a universe already collapsed on itself or already dissipated.

Inflation failed to achieve its goal when many observations went against it. To maintain consistency and salvage inflation, the Big Bang has now introduced two new adjustable parameters: (1) the cosmological constant, which has a major fine-tuning problem of its own because theory suggests it ought to be of order 10^120, and observations suggest a value less than 1; and (2) “quintessence” or “dark energy”. [35,36] This latter theoretical substance solves the fine-tuning problem by introducing invisible, undetectable energy sprinkled at will as needed throughout the universe to keep consistency between theory and observations. It can therefore be accurately described as “the ultimate fudge factor”.



Anyone doubting the Big Bang in its present form (which includes most astronomy-interested people outside the field of astronomy, according to one recent survey) would have good cause for that opinion and could easily defend such a position. This is a fundamentally different matter than proving the Big Bang did not happen, which would be proving a negative – something that is normally impossible. (E.g., we cannot prove that Santa Claus does not exist.) The Big Bang, much like the Santa Claus hypothesis, no longer makes testable predictions wherein proponents agree that a failure would falsify the hypothesis. Instead, the theory is continually amended to account for all new, unexpected discoveries. Indeed, many young scientists now think of this as a normal process in science! They forget or were never taught that a model has value only when it can predict new things that differentiate the model from chance and from other models before the new things are discovered. Explanations of new things are supposed to flow from the basic theory itself with at most an adjustable parameter or two, and not from add-on bits of new theory.


Of course, the literature also contains the occasional review paper in support of the Big Bang. [37] But these generally don’t count any of the prediction failures or surprises as theory failures as long as some ad hoc theory might explain them. And the “prediction successes” in almost every case do not distinguish the Big Bang from any of the four leading competitor models: Quasi-Steady-State [16,38], Plasma Cosmology [18], Meta Model [3], and Variable-Mass Cosmology [20].


For the most part, these four alternative cosmologies are ignored by astronomers. However, one web site by Ned Wright does try to advance counterarguments in defense of the Big Bang. [39] But his counterarguments are mostly old objections long since defeated. For example:

(1) In “Eddington did not predict the CMB”:

a. Wright argues that Eddington’s argument for the “temperature of space” applies at most to our Galaxy. But Eddington’s reasoning applies also to the temperature of intergalactic space, for which a minimum is set by the radiation of galaxy and quasar light. The original calculations half-a-century ago showed this limit probably fell in the range 1-6°K. [6] And that was before quasars were discovered and before we knew the modern space density of galaxies.

b. Wright also argues that dust grains cannot be the source of the blackbody microwave radiation because there are not enough of them to be opaque, as needed to produce a blackbody spectrum. However, opaqueness is required only in a finite universe. An infinite universe can achieve thermodynamic equilibrium (the actual requirement for a blackbody spectrum) even if transparent out to very large distances because the thermal mixing can occur on a much smaller scale than quantum particles – e.g., in the light-carrying medium itself.

c. Wright argues that dust grains do not radiate efficiently at millimeter wavelengths. However, efficient or not, if the equilibrium temperature they reach is 2.8°K, they must radiate away the energy they absorb from distant galaxy and quasar light at millimeter wavelengths. Temperature and wavelength are correlated for any bodies in thermal equilibrium.

(2) About Lerner’s argument against the Big Bang:

a. Lerner calculated that the Big Bang universe has not had enough time to form superclusters. Wright calculates that all the voids could be vacated and superclusters formed in less than 11-14 billion years (barely). But that assumes that almost all matter has initial speeds headed directly out of voids and toward matter concentrations. Lerner, on the other hand, assumed that the speeds had to be built up by gravitational attraction, which takes many times longer. Lerner’s point is more reasonable because doing it Wright’s way requires fine-tuning of initial conditions.

b. Wright argues that “there is certainly lots of evidence for dark matter.” The reality is that there is no credible observational detection of dark matter, so all the “evidence” is a matter of interpretation, depending on theoretical assumptions. For example, Milgrom’s Model explains all the same evidence without any need for dark matter.

(3) Regarding arguments against “tired light cosmology”:

a. Wright argues: “There is no known interaction that can degrade a photon's energy without also changing its momentum, which leads to a blurring of distant objects which is not observed.” While it is technically true that no such interaction has yet been discovered, reasonable non-Big-Bang cosmologies require the existence of entities many orders of magnitude smaller than photons. For example, the entity responsible for gravitational interactions has not yet been discovered. So the “fuzzy image” argument does not apply to realistic physical models in which all substance is infinitely divisible. By contrast, physical models lacking infinite divisibility have great difficulties explaining Zeno’s paradoxes – especially the extended paradox for matter. [3]

b. Wright argues that the stretching of supernovae light curves is not predicted by “tired light”. However, one cannot measure the stretching effect directly because the time under the lightcurve depends on the intrinsic brightness of the supernovae, which can vary considerably. So one must use indirect indicators, such as rise time only. And in that case, the data does not unambiguously favor either tired light or Big Bang models.

c. Wright argued that tired light does not produce a blackbody spectrum. But this is untrue if the entities producing the energy loss are many orders of magnitude smaller and more numerous than quantum particles.

d. Wright argues that tired light models fail the Tolman surface brightness test. This ignores that realistic tired light models must lose energy in the transverse direction, not just the longitudinal one, because light is a transverse wave. When this effect is considered, the predicted loss of light intensity goes as (1+z)^-2, which is in good agreement with most observations without any adjustable parameters. [2,40] The Big Bang, by contrast, predicts a (1+z)^-4 dependence, and must therefore invoke special ad hoc evolution (different from that applicable to quasars) to close the gap between theory and observations.
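
The difference between the two dimming laws is easy to tabulate; a minimal sketch comparing the (1+z)^-2 and (1+z)^-4 surface-brightness factors at a few redshifts:

    # Tolman surface-brightness dimming: transverse-loss tired-light prediction
    # (1+z)^-2 versus the expanding-universe prediction (1+z)^-4, as above.
    for z in (0.1, 0.5, 1.0, 2.0):
        d2 = (1 + z) ** -2
        d4 = (1 + z) ** -4
        print(f"z={z:.1f}:  (1+z)^-2 = {d2:.3f}   (1+z)^-4 = {d4:.3f}")
    # By z = 1 the two predictions already differ by a factor of 4.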


By no means is this “top ten” list of Big Bang problems exhaustive – far from it. In fact, it is easy to argue that several of these additional 20 points should be among the “top ten”:

· "Pencil-beam surveys" show large-scale structure out to distances of more than 1 Gpc in both of two opposite directions from us. This appears as a succession of wall-like galaxy features at fairly regular intervals, the first of which, at about 130 Mpc distance, is called "The Great Wall". To date, 13 such evenly-spaced "walls" of galaxies have been found! 41 The Big Bang theory requires fairly uniform mixing on scales of distance larger than about 20 Mpc, so there apparently is far more large-scale structure in the universe than the Big Bang can explain.

· Many particles are seen with energies over 60×10^18 eV. But that is the theoretical energy limit for anything traveling more than 20-50 Mpc because of interaction with microwave background photons. [42] However, this objection assumes the microwave radiation is as the Big Bang expects, instead of a relatively sparse, local phenomenon.

· The Big Bang predicts that equal amounts of matter and antimatter were created in the initial explosion. Matter dominates the present universe apparently because of some form of asymmetry, such as CP violation asymmetry, that caused most antimatter to annihilate with matter but left an excess of matter. Experiments are searching for evidence of this asymmetry, so far without success. Other galaxies can’t be antimatter because that would create a matter-antimatter boundary with the intergalactic medium that would create gamma rays, which are not seen. [43,44]

· Even a small amount of diffuse neutral hydrogen would produce a smooth absorbing trough shortward of a QSO’s Lyman-alpha emission line. This is called the Gunn-Peterson effect, and is rarely seen, implying that most hydrogen in the universe has been re-ionized. A hydrogen Gunn-Peterson trough is now predicted to be present at a redshift z ≈ 6.1. [45] Observations of high-redshift quasars near z = 6 briefly appeared to confirm this prediction. However, a galaxy lensed by a foreground cluster has now been observed at z = 6.56, prior to the supposed reionization epoch and at a time when the Big Bang expects no galaxies to be visible yet. Moreover, if only a few galaxies had turned on by this early point, their emission would have been absorbed by the surrounding hydrogen gas, making these early galaxies invisible. [34] So the lensed galaxy observation falsifies this prediction and the theory it was based on. Another problem example: Quasar PG 0052+251 is at the core of a normal spiral galaxy. The host galaxy appears undisturbed by the quasar radiation, which, in the Big Bang, is supposed to be strong enough to ionize the intergalactic medium. [46]

· An excess of QSOs is observed around foreground clusters. Lensing amplification caused by foreground galaxies or clusters is too weak to explain this association between high- and low-redshift objects. This apparent contradiction has no solution under Big Bang premises that does not create some other problem. In particular, dark matter solutions would have to be centrally concentrated, contrary to observations that imply that dark matter increases away from galaxy centers. The high-redshift and low-redshift objects are probably actually at comparable distances, as Arp has maintained for 30 years. [47]

· The Big Bang violates the first law of thermodynamics, that energy cannot be either created or destroyed, by requiring that new space filled with “zero-point energy” be continually created between the galaxies. [48]

· In the Las Campanas redshift survey, statistical differences from a homogeneous distribution were found out to a scale of at least 200 Mpc. [49] This is consistent with other galaxy catalog analyses that show no trends toward homogeneity even on scales up to 1000 Mpc. [50] The Big Bang, of course, requires large-scale homogeneity. The Meta Model and other infinite-universe models expect fractal behavior at all scales. Observations remain in agreement with that.

· Elliptical galaxies supposedly bulge along the axis of the most recent galaxy merger. But the angular velocities of stars at different distances from the center are all different, making an elliptical shape formed in that way unstable. Such velocities would shear the elliptical shape until it was smoothed into a circular disk. Where are the galaxies in the process of being sheared?

· The polarization of radio emission rotates as it passes through magnetized extragalactic plasmas. Such Faraday rotations in quasars should increase (on average) with distance. If redshift indicates distance, then rotation and redshift should increase together. However, the mean Faraday rotation is less near z = 2 than near z = 1 (where quasars are apparently intrinsically brightest, according to Arp’s model). [51]

· If the dark matter needed by the Big Bang exists, microwave radiation fluctuations should have “acoustic peaks” on angular scales of 1° and 0.3°, with the latter prominent compared with the former. By contrast, if Milgrom’s alternative to dark matter (Modified Newtonian Dynamics) is correct, then the latter peak should be only about 20% of the former. Newly acquired data from the Boomerang balloon-borne instruments clearly favors the MOND interpretation over dark matter. [52]

· Redshifts are quantized for both galaxies [53,54] and quasars [55]. So are other properties of galaxies. [56] This should not happen under Big Bang premises.

· The number density of optical quasars peaks at z = 2.5-3, and declines toward both lower and higher redshifts. At z = 5, it has dropped by a factor of about 20. This cannot be explained by dust extinction or survey incompleteness. The Big Bang predicts that quasars, the seeds of all galaxies, were most numerous at earliest epochs. [57]

· The falloff of the power spectrum at small scales can be used to determine the temperature of the intergalactic medium. It is typically inferred to be 20,000°K, but there is no evidence of evolution with redshift. Yet in the Big Bang, that temperature ought to adiabatically decrease as space expands everywhere. This is another indicator that the universe is not really expanding. [58]

· Under Big Bang premises, the fine structure constant must vary with time. [59]

· Measurements of the two-point correlation function for optically selected galaxies follow an almost perfect power law over nearly three orders of magnitude in separation. However, this result disagrees with n-body simulations in all the Big Bang’s various modifications. A complex mixture of gravity, star formation, and dissipative hydrodynamics seems to be needed. [60]

· Emission lines for z > 4 quasars indicate higher-than-solar quasar metallicities. [61] The iron to magnesium ratio increases at higher redshifts (earlier Big Bang epochs). [62] These results imply substantial star formation at epochs preceding or concurrent with the QSO phenomenon, contrary to normal Big Bang scenarios.

· The absorption lines of damped Lyman-alpha systems are seen in quasars. However, the HST NICMOS spectrograph has searched to see these objects directly in the infrared, but failed for the most part to detect them. [63] Moreover, the relative abundances have surprising uniformity, unexplained in the Big Bang. [64] The simplest explanation is that the absorbers are in the quasar’s own environment, not at their redshift distance as the Big Bang requires.

· The luminosity evolution of brightest cluster galaxies (BCGs) cannot be adequately explained by a single evolutionary model. For example, BCGs with low x-ray luminosity are consistent with no evolution, while those with high x-ray luminosity are brighter on average at high redshift. [65]

· The fundamental question of why bound aggregates of order 100,000 stars (globular clusters) were able to form at early cosmological times remains unsolved in the Big Bang. It is no mystery in infinite universe models. [66]

· Blue galaxy counts show an excess of faint blue galaxies by a factor of 10 at magnitude 28. This implies that the volume of space is larger than in the Big Bang, where it should get smaller as one looks back in time. [67]


Perhaps never in the history of science has so much quality evidence accumulated against a model so widely accepted within a field. Even the most basic elements of the theory, the expansion of the universe and the fireball remnant radiation, remain interpretations with credible alternative explanations. One must wonder why, in this circumstance, four good alternative models are not even being comparatively discussed by most astronomers.


Acknowledgments

Obviously, hundreds of professionals, both astronomers and scientists from other fields, have contributed to these findings, although few of them stand back and look at the bigger picture. It is hoped that many of them will add their comments and join as co-authors in an attempt to persuade the upcoming generation of astronomers that the present cosmology is headed nowhere, and to join the search for better answers.



Tommysun's incompetent/biased/fanatical/lunatic mad ravings of a POV crusader

Editors are invited/challenged to read the article below and discover an editorial POV in it. This is how I would write an NPOV article. Well, at least until the end, where I get to the point that I don't know what to say.


Crop Circles

By Tommy Mandel

My example of NPOV writing:

Crop circles, circular geometrical formations of flattened cereal crop, have become a controversial subject in England and worldwide. The precisely formed shapes found in fields of grain have baffled visitors for decades. Some believe they were made by aliens, some believe they were made by the winds, and some believe they were all hoaxed on us by Doug and Dave and the many who followed them. Some have unusual explanations. But the story is a little more complicated than just a flattened shape in a wheat field. There is a lot more to it.

A few scientists have ventured into the investigations. Gerald Hawkins, the renowned astronomer who deciphered Stonehenge, investigated the circles before hoaxing became popular. He found that some of them conform to exact geometrical shapes. Some circles demonstrate the diatonic ratios found in music. And in some circles Hawkins found Euclidean theorems not yet published in the literature.


There appear to be certain characteristics found in the circles that are unusual, in that one wouldn't expect to see them in a circle of the kind Doug and Dave typically made. The crop is bent rather than broken over. Sometimes the bent crop is woven; sometimes it is woven in layers, and one circle had five different layers of weave. In one circle, a tuft of a few standing (unbent) stalks was found in every square foot of the formation. Sometimes standing stalks are found interspersed with bent stalks. In the triple Julia set circle, each of the hundreds of individual circles had a different lay pattern. Oilseed rape (canola) has a structure similar to celery: it does not bend very much; instead it breaks. Yet crop circles are found regularly in canola, and in one circle a canola plant was bent 180 degrees.

Looking closer, changes in the structure of the crop are found. The nodes show elongation on one side, bending the plant over. Sometimes the node has burst. Internally visible changes also occur in the pits (I think).

Changes in the soil are found. In some cases the soil is drier than it is outside the circle. Magnetic particles have been found; white deposits also have been found. While these were added to the soil, changes in the soil itself have also been found. Confirmed by a leading authority, the crystalline structure of clays in the soil has been measured and found to be significantly different from soil outside the circle.

It has been reported many times that batteries go dead when taken into a circle. Cameras have stopped working. Tractors have stopped running. Professional video cameras have broken. One report talks about a whole town going dead. Magnetic fields have been detected. Compasses are erratic within a circle. Energy lines aligned with the circles have been found by dowsing and by electronic equipment.

Perhaps the most elusive mystery is the BoLs: balls of light which are regularly seen around crop circles. The sightings are numerous enough to have prompted naming a hill, Golden Ball Hill, after them. The balls of light are typically described as semitranslucent, basketball-sized globes of light. They have been reported doing all sorts of things, from chasing cars on a highway to playing tag with a helicopter. Usually they are seen hovering over a circle. In one video, a ball of light is seen hovering over a circle, then veering off toward an approaching tractor. And in the video, the farmer turns his head as the BoL passes over him.

Here is where the story goes wild. The balls of light are not imagined. Hundreds have seen them. They have appeared in photographs and videos. Many were hoaxed, but many appear not to be hoaxed. They have been reported in scientific journals. And even more, they have been reported all over the world. Balls of light have been reported by astronauts, pilots, police, air traffic controllers, and in Mexico City by thousands at one time. It is not known if the balls of light of UFO fame are the same.

A ball of light looks like a ball made of light, but there is no known process which would create a ball out of light. An alternative explanation is plasma, the fourth state of matter, consisting of ions (atoms stripped of electrons) and electrons flowing apart from each other yet together. Small balls of plasma called plasmoids have been created in the laboratory. Some of these experiments purportedly have created free energy. A plasmoid, however, is typically a very hot state of matter, and the balls of light seen in the fields do not appear to be searing hot. Eyewitness accounts describe the balls of light as translucent globes of colored light. Golden Ball Hill in England is named after the golden-colored BoLs that have appeared there.

Many researchers have seriously investigated the crop circles. Peter Sorensen has videotaped nearly all of them in England. He recalls on his website how at first he didn't believe the circles were all hoaxed, but as time moved on he slowly came around to the conclusion that "probably all of them are hoaxes". And then he adds, "But the Balls of Light are real, I have seen them myself."

Perhaps the mystery of the crop circles is not about who made them, but rather: what are the balls of light seen so often around them? Are they the same as the UFOs that have been sighted? If so, are the crop circles evidence of intelligent UFOs?

Next - black helicopters and crop circles

(notes)

A recognized researcher reports on his website a sighting of two black helicopters hovering over a field, after which a crop circle was found. Interestingly, he was able to discover yet another black helicopter doing the same thing. The two articles read identically...

This is too bizarre.

to be continued