A truly Galilean-class volume, this book introduces a new method in theory formation, completing the tools of epistemology. It covers a broad spectrum of theoretical and mathematical physics by researchers from over 20 nations spanning four continents. Like Vigier himself, the Vigier symposia are noted for addressing avant-garde, cutting-edge topics in contemporary physics. Among the six proceedings honoring J.-P. Vigier, this is perhaps the most exciting, as several important breakthroughs are introduced for the first time. The most interesting breakthrough, in view of the recent NIST experimental violations of QED, is a continuation of Vigier's pioneering work on tight bound states in hydrogen. The new experimental protocol described not only promises empirical proof of large-scale extra dimensions in conjunction with avenues for testing string theory, but also implies the birth of the field of unified field mechanics, ushering in a new age of discovery. Work on quantum computing redefines the qubit in a manner in which the uncertainty principle may be routinely violated. Other breakthroughs concern the utility of quaternion algebra in extending our understanding of the nature of the fermionic singularity or point particle. There are several other discoveries of equal magnitude, making this volume a must-have acquisition for the library of any serious forward-looking researcher.
Sample Chapter(s)
Foreword (105 KB)
Laws of Form, Majorana Fermions, and Discrete Physics (347 KB)
https://doi.org/10.1142/9789814504782_fmatter
The following sections are included:
https://doi.org/10.1142/9789814504782_0001
The present paper is a discussion of how the idea of a distinction gives rise to fundamental physics. We discuss everything and nothing, and in the end we see that they are not distinct.
https://doi.org/10.1142/9789814504782_0002
Why, despite all efforts to the contrary, have attempts at unification based on the supposedly more fundamental quantum theory failed so badly? The truth is that the essential idea or concept of the quantum itself has never been fully understood. What is the quantum, or rather, what is its ultimate nature? Science may be able to work adequately with the quantum; in a sense science is quite articulate in the language of the quantum, i.e., the mathematical formulation of quantum mechanics, but science has no idea of the true physical nature of the quantum. Scientists and philosophers have wasted energy and effort on irrelevant issues such as the debate over determinism and indeterminism instead of carefully analyzing the physical source of the quantum. Only with a true understanding of the physical nature of the quantum will the unification of the quantum and relativity ever become a reality.
https://doi.org/10.1142/9789814504782_0003
Physics at the fundamental level can be effectively reduced to an explanation of the structures and interactions of fermions. Fermions appear to be singularities rather than extended objects, but there is no obvious way of creating such structures within the 3-dimensional space of observation. However, the algebra associated with the Dirac equation appears to suggest that the fermion requires a double, rather than a single, vector space, and this would seem to be confirmed by the double rotation required by spin-½ objects, and the associated effects of zitterbewegung and Berry phase shift. Further investigation of the second 'space' reveals that it is, in effect, an 'antispace', which contains the same information as real space but in a less accessible form. The two spaces effectively cancel to produce a norm-0 (nilpotent) object which has exactly the mathematical structure required to be a fermionic singularity.
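For orientation, the 'norm-0 (nilpotent) object' has a compact form in Rowlands' nilpotent formalism; the following is a hedged sketch whose conventions are our reconstruction, not spelled out in this abstract. Attaching the quaternion labels k, i, j to energy, momentum and mass gives

\[
(\pm ikE \pm i\mathbf{p} + jm)^2 \;=\; E^2 - p^2 - m^2 \;=\; 0,
\]

where the i multiplying kE is the ordinary imaginary unit, the i multiplying \(\mathbf{p}\) is a quaternion unit, and the cross terms cancel by quaternion anticommutation; the square vanishes by the relativistic energy-momentum relation, so the object is nilpotent exactly on shell.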
https://doi.org/10.1142/9789814504782_0004
In order to present the universe as a zero-totality, the key concepts of nothingness and duality are required. Diaz and Rowlands introduce processes of conjugation, complexification, and dimensionalization using a universal alphabet and rewrite system to describe a physical universe composed of nilpotents. This paper applies the concept of conjugation to the Newtonian action-reaction duality by introducing associated dual spaces called action space and reaction space. An attempt to generalize Newton's third law of motion, utilizing the concept of dual spaces, follows in a manner suggestive of the zero-totality fermion-vacuum relationship.
https://doi.org/10.1142/9789814504782_0005
The principal criteria Cn (n = 1 to 23) and grammatical production rules of a universal computational rewrite language are set out, spelling out a semantic description of an emergent, self-organizing architecture for the cosmos. These language productions already predicate: (1) Einstein's conservation law of energy, momentum and mass and, subsequently, (2) gauge-invariant relativistic spacetime (both Lorentz special and Einstein general); (3) Standard Model elementary particle physics; (4) the periodic table of the elements and chemical valence; and (5) the molecular biological basis of the DNA/RNA genetic code; so enabling the Cybernetic Machine Specialist Group's Mission Statement premise (6) that natural semantic language thinking at the higher level of the self-organized emergent chemical molecular complexity of the human brain (only surpassed by that of the cosmos itself!) would be realized (7) by this same universal semantic language via (8) an architecture of a conscious human brain/mind and self which, it predicates, consists of its neural/glial and microtubule substrates respectively, so as to endow it with (9) the intelligent semantic capability to specify, symbolize, spell out and understand the cosmos that conceived it; (10) provide a quantum physical explanation of consciousness; and show how (11) the dichotomy between first-person subjectivity and third-person objectivity, the 'hard problem', is resolved.
https://doi.org/10.1142/9789814504782_0006
This paper presents a toolkit of new ways of understanding fundamental physics and the relationships between parameters. The novel insights and predictions include: a self-contained, consistent new Planck unit set of maximal-sized parameters against which all observed values can be compared and easily combined in equations; a self-contained, consistent new Planck unit set of electron-charge-based parameters, some of which are directly observable in experiments; the interpretation of the gravitational constant G as a dimensionless ratio, which eliminates the need to test the equivalence of gravitational and inertial mass; that all parameters can be displayed in terms of only h and c for the Planck maximal-parameter set, and additionally in α for the electron-charge-based set, previously considered impossible; a new dimensional analysis employed to describe parameter dimensionality and to uncover any law of nature or universal constant; that all electron-charge-based Planck parameters can be described solely in terms of ratios of the von Klitzing constant R_K and the Josephson constant K_J, and so will benefit from the precision of measurement of these two parameters; and that most electromagnetic parameters can be reinterpreted in terms of mechanical parameters. By adjusting currently misaligned SI units to be self-consistent and consistent with these Double-adjusted-Planck units, greater clarity will ensue.
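As a concrete illustration of the claimed R_K/K_J dependence, here is a minimal sketch using the standard metrology definitions K_J = 2e/h and R_K = h/e² (these relations are standard; the paper's own Double-adjusted-Planck construction is not reproduced here):

```python
# Recover h and e from the Josephson and von Klitzing constants,
# using the standard relations K_J = 2e/h and R_K = h/e^2.
K_J = 4.835978484e14   # Josephson constant, Hz/V
R_K = 25812.80745      # von Klitzing constant, ohm

e = 2 / (K_J * R_K)       # elementary charge ~1.602e-19 C
h = 4 / (K_J**2 * R_K)    # Planck constant   ~6.626e-34 J s

print(f"e = {e:.4e} C, h = {h:.4e} J s")
```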
https://doi.org/10.1142/9789814504782_0007
We explore properties of a 'Least Cosmological Unit' (LCU) as an inherent spacetime raster tiling or tessellating the unique backcloth of Holographic Anthropic Multiverse (HAM) cosmology as an array of programmable cellular automata. The HAM vacuum is a scale-invariant HD extension of a covariant polarized Dirac vacuum with 'bumps' and 'holes' typically described by extended electromagnetic theory, corresponding to an Einstein energy-dependent spacetime metric admitting a periodic photon mass. The new cosmology incorporates a unique form of M-Theoretic Calabi-Yau-Poincaré Dodecahedral-AdS5-dS5 space (PDS) with mirror symmetry best described by an HD extension of Cramer's Transactional Interpretation when integrated also with an HD extension of the de Broglie-Bohm-Vigier causal interpretation of quantum theory. We incorporate a unique form of large-scale additional dimensionality (LSXD) bearing some similarity to that conceived by Randall and Sundrum, and extend the fundamental basis of our model to the Unified Field, UF. A Sagnac Effect rf-pulsed incursive resonance hierarchy is utilized to manipulate and ballistically program the geometric-topological properties of this putative LSXD space-spacetime network. The model is empirically testable, and it is proposed that a variety of new technologies will arise from ballistic programming of tessellated LCU vacuum cellular automata.
https://doi.org/10.1142/9789814504782_0008
We make a preliminary exploratory study of higher dimensional (HD) orthogonal forms of the quaternion algebra in order to explore putative novel Nilpotent/Idempotent/Dirac symmetry properties. Stage 1 transforms the dual quaternion algebra in a manner that extends the standard anticommutative 3-form, i, j, k, into a 5D/6D triplet. Each is a copy of the others, and each is self-commutative and believed to represent spin or different orientations of a 3-cube. The triplet represents a copy of the original that contains no new information other than rotational perspective, and maps back to the original quaternion vertex or to a second point in a line element. In Stage 2 we attempt to break the inherent quaternionic property of algebraic closure by stereographic projection of the Argand plane onto rotating Riemann 4-spheres. Finally, we explore the properties of various topological symmetries in order to study anticommutative-commutative cycles in the periodic rotational motions of the quaternion algebra in additional HD dualities.
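As a baseline for the transformations described, here is a minimal sketch verifying the Hamilton relations of the standard anticommutative triple i, j, k; the 2×2 complex-matrix representation is an illustrative assumption, and the paper's HD extension is not reproduced:

```python
import numpy as np

# Standard 2x2 complex-matrix representation of the quaternion units.
I  = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = np.array([[0, 1j], [1j, 0]])

# Hamilton relations: i^2 = j^2 = k^2 = ijk = -1.
assert np.allclose(qi @ qi, -I) and np.allclose(qj @ qj, -I) and np.allclose(qk @ qk, -I)
assert np.allclose(qi @ qj @ qk, -I)

# Anticommutativity of the 3-form, and the cyclic product i j = k.
assert np.allclose(qi @ qj, -(qj @ qi))
assert np.allclose(qi @ qj, qk)
print("Hamilton relations i^2 = j^2 = k^2 = ijk = -1 verified")
```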
https://doi.org/10.1142/9789814504782_0009
A macroscopic object equipped with synchronized clocks is examined. General physical relations are directly derived from Lorentz transformations for the case of one-dimensional motion (along the X axis): the uncertainty relation between the object's x coordinate and the projection of its momentum along the X axis, p_x, and the uncertainty relation between the object's observation time, t, and its energy, E. The uncertainty relations take the form Δp_x·Δx > H and ΔE·Δt > H. The value H in these relations has the dimensions of action and depends upon the precision of the rod's clocks and its mass. It is shown that if the macroscopic object in and of itself performs the function of an ideal physical clock, the uncertainty relations derived take, in the limiting case, the usual form Δp_x·Δx ≥ h and ΔE·Δt ≥ h, where h is the Planck constant.
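Restated in display form (notation reconstructed from the abstract, with Δ denoting the uncertainty of each quantity):

\[
\Delta p_x\,\Delta x > H, \qquad \Delta E\,\Delta t > H,
\]

where H is an action-valued quantity set by the clock precision and the object's mass, reducing for an ideal physical clock to the familiar

\[
\Delta p_x\,\Delta x \ge h, \qquad \Delta E\,\Delta t \ge h.
\]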
https://doi.org/10.1142/9789814504782_0010
Based on pre-Einstein classical mechanics, a theoretical model is constructed that describes the behavior of objects in a liquid environment which conduct themselves in accordance with the formal laws of the special theory of relativity. This model reproduces Lorentz contraction, time dilation, the relativity of simultaneity, the Doppler effect in its symmetrical relativistic form, the twin-paradox effects, the Bell effect, and the relativistic addition of velocities. The model makes it possible to obtain the Lorentz transformations and to simulate Minkowski four-dimensional space-time.
https://doi.org/10.1142/9789814504782_0011
In this paper we examine the analytical production of the Lorentz Transformation with regard to its fundamental conclusion, i.e., that the speed of light in vacuum is the uppermost limit for the speed of matter, hence that superluminal speeds are unattainable. This examination covers the four most prominent relevant sources in the literature: Albert Einstein's historic paper (1905) "On the Electrodynamics of Moving Bodies", on which his Special Relativity Theory is founded; his famous textbook "Relativity: The Special and General Theory"; A. P. French's textbook "Special Relativity"; and Wolfgang Rindler's textbook "Essential Relativity". Special emphasis is placed on the critical analysis of Einstein's gedanken experiment as presented in his original paper, where he considers a moving, straight, rigid rod at the ends of which there are two clocks, whose synchronization is checked according to his own definition as given in Part 1 of his paper. By applying the second fundamental hypothesis (principle) of SRT, we arrive at the conclusion that this noetic experiment can be concluded only if the rod's speed V with respect to the stationary system, and measured from it, is less than the speed of light c, also with respect to the stationary system and measured from it. In the opposite case, said noetic experiment would be meaningless, as it could never be concluded for the observer of the stationary system, at least in Euclidean space. Finally, we show that in all four cases under examination the relationship v < c is not a conclusion of the Lorentz Transformation, but rather a hidden precondition of its analytical production; in other words, the LT can only be valid for v < c. Consequently, there does not exist a definite and rigid law of physics forbidding matter to travel with superluminal velocity in vacuum.
https://doi.org/10.1142/9789814504782_0012
Minkowski spacetime predates quantum mechanics and is frequently regarded as an extension of the classical paradigm of Newtonian physics, rather than a harbinger of quantum mechanics. By inspecting how discrete clocks operate in a relativistic world we show that this view is misleading. Discrete relativistic clocks implicate classical spacetime provided a continuum limit is taken in such a way that successive ticks of the clock yield a smooth worldline. The classical picture emerges, but does so by confining unitary propagation to spacetime regions between ticks that have zero area in the continuum limit. Clocks that allow a continuum limit which does not force inter-event intervals to zero satisfy the Dirac equation. This strongly suggests that the origin of quantum propagation is to be found in the shift from Newton's absolute time to Minkowski's frame-dependent time, and is ultimately relativistic in origin.
https://doi.org/10.1142/9789814504782_0013
With a few arguments (half a page) it is proven that general relativity (GRT) makes contradictory predictions about the total energy of a particle resting in a gravitational field. With a few further arguments (one page) it is proven that these contradictions are resolved by expanding general relativity. The other side of the situation: though it is not the aim of the author to reject general relativity but to expand it, he has been treated as some uncritical anti-relativist since the start of his considerations, now for more than 20 years. My public question: are relativists, on account of their many famous results, unable to admit imperfections of general relativity? General relativity is contradictory in energy questions, since on one side the total energy of a particle resting in the gravitational field is lower than its rest mass (energy is needed to pull the particle out of the gravitational field), while on the other side it is equal to its rest mass (a consequence of the equivalence principle).
https://doi.org/10.1142/9789814504782_0014
General relativity is not yet consistent. Pauli misinterpreted Einstein's 1916 equivalence principle, from which a valid field equation can be derived. The Wheeler School has distorted Einstein's 1916 principle into his 1911 assumption of equivalence, and created new errors. Moreover, errors on dynamic solutions have allowed the implicit assumption of a unique coupling sign that violates the principle of causality. This leads to the space-time singularity theorems of Hawking and Penrose, who "refute" applications to microscopic phenomena and obstruct efforts to obtain a valid equation for the dynamic case. These errors also explain the mistakes in the press release of the 1993 Nobel Committee, which was unaware of the non-existence of dynamic solutions. To illustrate the damage to education, the MIT Open Course Phys. 8.033 is chosen. Rectification of the errors confirms that E = mc^2 is only conditionally valid, and leads to the discovery of the charge-mass interaction, which is experimentally confirmed, and subsequently to the unification of gravitation and electromagnetism. The charge-mass interaction, together with the unification, predicts the weight reduction (instead of increase) of charged capacitors and heated metals, and helps to explain NASA's Pioneer anomaly and potentially other anomalies as well.
https://doi.org/10.1142/9789814504782_0015
Physicists' understanding of relativity, and the way it is handled, is at present dominated by the interpretation of Albert Einstein, who related relativity to specific properties of space and time. The principal alternative to Einstein's interpretation is based on a concept proposed by Hendrik A. Lorentz, which uses knowledge of classical physics to explain relativistic phenomena. In this paper we show, on the one hand, that the Lorentz-based interpretation provides a simpler mathematical way of arriving at the known results for both Special and General Relativity; on the other hand, it is able to solve problems which have remained open to this day. Furthermore, a particle model based on Lorentzian relativity is presented which explains the origin of mass from the finiteness of the speed of light, without the use of the Higgs mechanism, and which provides the classical results for particle properties that are currently only accessible through quantum mechanics.
https://doi.org/10.1142/9789814504782_0016
We propose an experiment based on statistical analysis to test the currently accepted statement that "a photon cannot be split." The essential statistical phenomenon pertains not only to beam splitters, but also to the generation of correlated photon pairs used extensively in quantum optics experiments. A consequence of this analysis is that, arguably, the tactic of reducing the window defining a coincidence has the unexpected effect of preferentially selecting uncorrelated photon pairs, thereby undermining an essential prerequisite for the conclusions drawn from such experiments.
https://doi.org/10.1142/9789814504782_0017
Newton claimed the influence of gravity is instantaneous; Einstein insisted no influence could propagate faster than the speed of light. Recent experiments to test the speed of gravity have been controversial and inconclusive on technical grounds. Considerable effort is currently expended in the search for a quantum gravity, but there is no a priori reason there should be one. We propose that this is not the regime of integration, which instead occurs in the arena of the Unified Field, UF; further, that a completed model of geometrodynamics inherently includes a Newton/Einstein duality which introduces shock effects in certain arenas. The unified theory predicts that there is no graviton of the usual phenomenal form (an artifact of the incompleteness of gauge theory, i.e., gauge theory is only an approximation suggesting new physics). A new Large-Scale Additional Dimensional (LSXD) M-Theoretic topological charge alternative is presented. We also attempt to show how the Titius-Bode Law for solar and exoplanetary configurations appears to provide indicia of this multiverse gravitational model. Applications of the dual geometrodynamics formulation include an interpretation of quasar luminosity as the result of gravitational shock waves, in a manner countering explanations of large redshift Z in Big Bang cosmology putatively based on Doppler recession. Instead, redshift occurs as the result of a periodic minute photon mass anisotropy caused by periodic coupling to a covariant polarized Dirac vacuum.
https://doi.org/10.1142/9789814504782_0018
In the part of physics called "classical mechanics" the Galilean principle of relativity is accepted by everyone. But according to the theory of special relativity (TSR) this principle cannot be applied to light. By analyzing several experiments (Fizeau's experiment, Michelson's experiment, de Sitter's observation and Alväger's experiment), this paper concludes that the reliance of TSR's postulates on those experiments is unsustainable. The essence of the review of these experiments is that some of them do not have the same conditions as the motion to which the Galilean principle refers (Fizeau's experiment, de Sitter's observation and Alväger's experiment), and in some of them the Galilean principle is not applied correctly (Michelson's experiment). It is also concluded that light obeys the Galilean principle like all other bodies. Finally, an idea is proposed for conducting an experiment which would prove that the velocity of light is not constant in all reference frames.
https://doi.org/10.1142/9789814504782_0019
All physicists who contributed to the explanation of relative motion before Einstein considered, in their calculations, the velocity of light to be a constant magnitude in all reference frames. But to derive the Lorentz transformations it is not enough simply to assert that the velocity of light is constant in all reference frames. The derivation requires a "thought experiment", which must then contain certain inconsistent physical assumptions, as well as some non-transparent mathematical equations; only after these do the Lorentz transformations appear. In this paper we present a critical review, in physical and mathematical terms, of some steps along the path of constructing special relativity's basic equations.
https://doi.org/10.1142/9789814504782_0020
A generalization of Einstein's special theory of relativity (SRT) is proposed. In this model the possibility of unifying scalar gravity and electromagnetism into a single unified field is considered. Formally, the generalization of SRT is that instead of (1+3)-dimensional Minkowski space, a (1+4)-dimensional extension G is considered. The interval S is used as the fifth, additional coordinate. This value is conserved under the usual Lorentz transformations in Minkowski space M, but it changes under transformations in the extended space G. We call this model the extended space model (ESM). From a physical point of view our expansion means that processes in which the rest mass of particles changes are now admissible. If the rest mass of a particle does not change and the physical quantities do not depend on the additional variable S, then the electromagnetic and gravitational fields exist independently of each other. But if the rest mass is variable and there is a dependence on S, then these two fields are combined into a single unified field. In the extended space model a photon can have a nonzero mass, and this mass can be either positive or negative. Gravitational effects such as escape velocity, gravitational redshift and the deflection of light can be analyzed in the frame of the extended space model. In this model all these gravitational effects can be found algebraically, by rotations in the (1+4)-dimensional space. It now becomes possible to predict some future results on the visible size of supermassive objects in our Universe, owing to the new stage of experimental astronomy opened by the RadioAstron Project, and to analyze phenomena such as the explosion of the star V838 Mon.
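A minimal sketch of the extended interval, assuming the natural (1,4) signature on G (the signature choice is an assumption here, not stated in the abstract): promoting the Minkowski interval S to a coordinate gives

\[
(ct)^2 - x^2 - y^2 - z^2 = S^2
\quad\Longrightarrow\quad
(ct)^2 - x^2 - y^2 - z^2 - S^2 = 0,
\]

so every event lies on the null cone of G, and rotations in G that mix S with t or x change the Minkowski interval, i.e., the rest mass.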
https://doi.org/10.1142/9789814504782_0021
An analysis of the net proper time difference for a Twin Paradox scenario is shown to require an asymmetric construct such as a special physics frame and also shows that Special Relativity’s time dilation equation does not describe proper time. An analysis of GPS confirms the need for a special physics frame and is sharply at odds with Special Relativity despite claims that GPS uses it.
https://doi.org/10.1142/9789814504782_0022
Yarman et al. proposed a new Dirac-like equation confining the change experienced by the bound electron, not in the field, but within the electron itself, as an alternative to the standard field approach. Solving this equation, they showed that for the ground state of the hydrogen atom the two approaches produce the same result, which supports previous efforts towards a description of the micro and macro worlds based on the same philosophy, thus providing a unique fusion of these worlds. In this work we study the energies of the excited states of the hydrogen atom at the possible total angular momenta j. For the transition between any pair of j's, we find practically the same energy difference as that predicted by the standard Dirac equation, but surprisingly with the opposite sign. Since not the energy levels alone, but the energy differences between levels, are what is measured, we come to the amazing bifurcation point of choosing between the change-in-the-field concept and the change-in-the-particle concept. It is further interesting to note that levels such as 2S(1/2) and 2P(1/2), which are identical in the standard approach, turn out also to be identical in our approach. We accordingly happen to predict the same Lamb shift as the difference of the magnitudes of the lines 2P(3/2)-2S(1/2) and 2P(3/2)-2P(1/2), assuming though that the 2S(1/2) level is (Lamb-shift-wise) displaced downward (and not upward), in contrast with what the common approach, rather deliberately, supposes. While the standard approach allows no unification of the celestial and atomic worlds, our approach, on the contrary, provides an easy fusion of them through simple means and the same change of metric, further allowing an immediate quantization of the celestial world. The main problem in the atomic world, according to our approach, turns out to be the choice between the following two statements: 1) the greater j, the higher the energy level (common approach); 2) the greater j, the lower the energy level (our approach). A heuristic assessment favors our approach over the common approach.
https://doi.org/10.1142/9789814504782_0023
Here the basic ideas behind Landauer’s introduction of his principle supposedly linking information theory and thermodynamics are examined afresh. Since this examination was prompted by the recent claims of an experimental verification of that principle, the details of the actual experimental verification are re-examined also in the light of these new thoughts concerning the supposed link between information theory and thermodynamics.
https://doi.org/10.1142/9789814504782_0024
The physical basis of the canonical and grand canonical distributions is questioned. In particular, we question the usual methods by which these distributions are derived, namely that fluctuations in entropy around energy and particle number are assumed to occur when the entropy depends only on variables which cannot themselves fluctuate. We show, starting from the Maxwellian velocity distribution, that the probability that a classical ideal gas at a fixed temperature occupies a given energy state corresponds not to the canonical ensemble of classical statistical mechanics but to the Gamma distribution. Computer simulations of a hard-sphere fluid demonstrate the principles. The analysis is extended to open systems in which the number of particles fluctuates and we show that for a system connected to a particle reservoir the Poisson distribution governs the probability of finding a given number of particles. The resulting probability distributions are used to calculate the Shannon information entropy which is then compared with the thermodynamic entropy. We argue that information theoretic entropy and thermodynamic entropy, whilst related, are not necessarily identical and that the information entropy contains non-thermodynamic components.
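A minimal numerical sketch of the central claim, with illustrative parameters (unit mass and temperature, not the paper's hard-sphere simulation): sampling Maxwellian velocities at fixed temperature and checking that the total kinetic energy follows a Gamma distribution:

```python
import numpy as np
from scipy import stats

# Illustrative parameters (assumptions for this sketch): N particles,
# unit mass m, unit temperature kT, many independent configurations.
N, m, kT = 100, 1.0, 1.0
samples = 20000

# Maxwellian velocities: each Cartesian component is Gaussian with variance
# kT/m, so the total kinetic energy of N particles in 3D is a sum of 3N
# squared Gaussians, i.e. Gamma-distributed with shape 3N/2 and scale kT.
rng = np.random.default_rng(0)
v = rng.normal(0.0, np.sqrt(kT / m), size=(samples, N, 3))
E = 0.5 * m * (v**2).sum(axis=(1, 2))

shape, loc, scale = stats.gamma.fit(E, floc=0.0)
print(f"fitted shape = {shape:.1f} (expected {1.5 * N:.1f})")
print(f"fitted scale = {scale:.3f} (expected {kT:.3f})")
```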
https://doi.org/10.1142/9789814504782_0025
Clausius’ original papers covering the re-working of Carnot’s theory and the subsequent development of the concept of the entropy of a body are reviewed critically. We show that Clausius’ thinking was dominated by the then prevalent idea that a body contained heat and argue that his concept of aequivalenzwerth, the forerunner of entropy, should be understood as the quantity that measures the ability of the heat in a body to be transformed into work. Although this view of heat had been rejected by the mid-1870s, the concept of the entropy of a body not only survived but went on to dominate thinking in thermodynamics. We draw attention to several contradictions within Clausius’ papers, among them the idea that only infinitesimal quasi-static processes are reversible even though reversibility is integral to the theory of motive power. In addition, Clausius developed his theory of entropy to account for internal work done in overcoming the forces between particles but then applied the theory to the ideal gas in which there are no such forces. We show how this conflicts with the First Law and discuss whether entropy is the state function Clausius claimed it to be.
https://doi.org/10.1142/9789814504782_0026
Based on projective geometry, a quantum holographic approach to the orbiton/spinon dynamics of quantum blackholography and clinical magnetic resonance tomography is mathematically described. Crucial applications of the conformal steady-state free-precession modality and automorphic scattering theory include the evidence for a supermassive central black hole in the Milky Way galaxy and the modalities of clinical cardiovascular magnetic resonance tomography and diffusion-weighted magnetic resonance tomography in non-invasive radiological diagnostics.
https://doi.org/10.1142/9789814504782_0027
Boscovich proposed a unified theory in the 18th century, from which both quantum physics and Einstein's relativity can be derived. Rather than looking back at the history of how modern physics developed, the mainstream seems to press blindly onwards looking for a unified theory while ignoring this particular one.
https://doi.org/10.1142/9789814504782_0028
A third regime is proposed in the natural progression of the description of the physical world: Classical to Quantum to Unified Field (UF) Mechanics. We describe the new conceptual panoply and propose an experimental method to falsify the new UF hypotheses. Like Penzias and Wilson wondering whether bird droppings affected their antenna, we describe a serendipitous insight into wavepacket dispersion of 1800 MHz telecommunication em-waves in an arena where signal strength attenuates periodically by factors attributed to perceived properties that, we postulate, can only be mediated by UF mechanics. Salient suggested elements include extended geometrodynamics (a duality of Newton's instantaneous and Einstein's relativistic models), solar dynamo activity, geomagnetic phenomena, seasonal precession of the Earth's axis, near-far field coupling between the geomagnetic core dynamo and solar scale-invariant wavepacket dispersion, and longitudinal em components. This UF model putatively also provides an indirect measure of photon mass.
https://doi.org/10.1142/9789814504782_0029
The phenomenology of fields is a major part of physical science, and we briefly outline the most important aspects here. Photon structure derives from the quantum fluctuation of the photon into a fermion and anti-fermion in quantum field theory, and has been an experimentally established feature of electrodynamics since the discovery of the positron. In hadronic physics, the observation of factorisable photon structure is similarly a fundamental test of the quantum field theory Quantum Chromodynamics (QCD). An overview of measurements of hadronic photon structure in e+e- (electron-positron) and ep (electron-proton) interactions is presented, and comparison is made with theoretical expectation, drawing on the essential features of photon fluctuation into quark and anti-quark in QCD.
https://doi.org/10.1142/9789814504782_0030
In this work we extend Vigier's recent theory of 'tight bound state' (TBS) physics and propose empirical protocols to test not only for their putative existence, but also for the claim that their existence, if demonstrated, provides the first empirical evidence of string theory, because it occurs in the context of large-scale extra dimensionality (LSXD) cast in a unique M-Theoretic vacuum corresponding to the new Holographic Anthropic Multiverse (HAM) cosmological paradigm. Physicists generally consider spacetime as a stochastic foam containing a zero-point field (ZPF) from which virtual particles, restricted by the quantum uncertainty principle (to the Planck time), wink in and out of existence. According to the extended de Broglie-Bohm-Vigier causal stochastic interpretation of quantum theory, spacetime and the matter embedded within it is created, annihilated and recreated as a virtual locus of reality with a continuous quantum evolution (de Broglie matter waves) governed by a pilot wave, a 'super quantum potential', extended in HAM cosmology to be synonymous with a 'force of coherence' inherent in the Unified Field, UF. We consider this backcloth to be a covariant polarized vacuum of the Dirac type (generally ignored by contemporary physicists). We discuss open questions of the physics of point particles (fermionic nilpotent singularities). We propose a new set of experiments to test for TBS in a Dirac covariant polarized vacuum LSXD hyperspace, suggestive of a recently tested special case of the Lorentz Transformation put forth by Kowalski and Vigier. These protocols reach far beyond the recent battery of atomic spectral violations of QED performed at NIST.
https://doi.org/10.1142/9789814504782_0031
The quantum extension of causal analysis has shown a rich picture of subsystem causal connections, where the usual intuitive approach is more commonly hampered. The direction of a causal connection is determined by the direction of irreversible information flow, and the measure of this connection, called the course of time c_2, is determined as the velocity of that flow. The absence of causality corresponds to infinite c_2; accordingly, the degree of causal connection is inversely related to c_2. This formal definition of causality is valid for either time direction. The possibilities of causal analysis have been demonstrated before by a series of examples of two- and three-qubit states. In this paper we consider new applications. The first is the application of quantum causal analysis to an asymmetric entangled state under decoherence. Three models of decoherence are studied: dissipation, depolarization and dephasing. For all the models the strength and the direction of the induced causality have been computed. It turns out that decoherence acting along the original causality destroys entanglement to a lesser degree than decoherence acting against it. The second application is the interaction between a two-level atom and an infinite-dimensional quantized field mode in the Jaynes-Cummings model. An analytical solution of the von Neumann equation for different initial states is examined. The field is considered initially to be in a thermal mixed state, while the atom is sequentially in excited, ground or thermal states. Negativity, mutual information and causal characteristics for different temperatures are computed. It is found that at high temperatures the distinction between the behaviors of the different initial states smooths over, and the state turns out to be causal, entangled and "classical" in the entropic sense. The third application is teleportation (the three-particle protocol). Counterintuitively, the teleported qubit is not an effect of the original one; it proves to be the common effect of the two other ones. But at the same time the result of the Bell measurement constitutes a cause with respect to every qubit of the entangled pair from the moment of their birth. The latter is a manifestation of causality in reverse time.
https://doi.org/10.1142/9789814504782_0032
A new explanation is given for correlated measurements on widely separated entangled particles. This explanation uses relativity, whereby events in the same proper frame can be local to one another. Proper frames are shown to be possible, even for light waves, because of diffraction. Waves Ψ_A and Ψ_B, with distinct proper frames Γ_p and Γ_n, travel in opposing directions to reach well-separated observers A and B. Consideration of the formality of combined time-frequency reversal leads to the concept of adjoint waves Ψ_A^a and Ψ_B^a, with phase velocities still travelling to A and B respectively. Measurements are now associated with Ψ_A^a*Ψ_A and Ψ_B^a*Ψ_B rather than Ψ_A*Ψ_A and Ψ_B*Ψ_B. It is shown how the waves Ψ_A and Ψ_B^a* can have the same proper frame Γ_p, while Ψ_A^a* and Ψ_B can have the same proper frame Γ_n. Consequently the waves Ψ_A and Ψ_B^a* can be said to be relativistically local, and could have correlated properties without any violation of the principle of locality. Similar remarks apply to Ψ_A^a* and Ψ_B. It is no longer surprising if widely separated particles, represented by measurements of Ψ_A^a*Ψ_A and Ψ_B^a*Ψ_B, have correlated properties, though the theory's generality prevents detailing specific correlations. Similar arguments are advanced for Dirac waves. However, essential quantum aspects of entanglement lie in carefully timed measurements that choose the correct waves with the correct proper frames. Classical measurements at A and B contain random sets of waves from different proper frames averaged over time, and this generally returns uncorrelated measurements. The paper provides satisfying links between relativity, entanglement, quantum wave collapse, and the classical independence of non-local observations.
https://doi.org/10.1142/9789814504782_0033
With the help of light cone coordinates and light cone field representations of Maxwell’s classical equations, quantum polarization entanglement is explained using the relativistic results of a companion paper that shows how conventional or reference waves can have an adjoint wave, travelling in phase with the reference wave, but in a proper relativistic frame that travels in the opposing direction to the proper frame of the reference wave. This subsequently allows waves, travelling in opposite directions, to have the same proper frame and consequently such waves can be regarded as relativistically local. The light cone coordinates offer a classical form of a quantum wave function and demonstrate a classical equivalent of a mixed quantum state.
https://doi.org/10.1142/9789814504782_0034
We postulate that bulk universal quantum computing (QC) cannot be achieved without surmounting the quantum uncertainty principle, an inherent barrier by empirical definition in the regime described by the Copenhagen interpretation of quantum theory, and the last remaining hurdle to bulk QC. To surmount uncertainty with probability 1, we redefine the basis for the qubit utilizing a unique form of M-Theoretic Calabi-Yau mirror symmetry cast in an LSXD Dirac covariant polarized vacuum with an inherent 'Feynman synchronization backbone'. This also incorporates a relativistic qubit (r-qubit) providing additional degrees of freedom beyond the traditional Bloch 2-sphere qubit, bringing the r-qubit into correspondence with our version of Relativistic Topological Quantum Field Theory (RTQFT). We present a 3rd-generation prototype design for simplifying bulk QC implementation.
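For reference, the 'traditional Bloch 2-sphere qubit' that the proposed r-qubit is said to extend is the standard parametrization (standard textbook material, not taken from the paper):

\[
|\psi\rangle = \cos\tfrac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\tfrac{\theta}{2}\,|1\rangle,
\qquad 0 \le \theta \le \pi,\; 0 \le \varphi < 2\pi,
\]

i.e., two real degrees of freedom on the unit 2-sphere; the r-qubit would add further degrees of freedom beyond (θ, φ).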
https://doi.org/10.1142/9789814504782_0035
By assuming that a fermion de-constitutes immediately at source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with the gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms emerge from the model's deterministic foundations.
https://doi.org/10.1142/9789814504782_0036
Quantum mechanics associated with new logics such as multivalued logic and fuzzy logic has progressed in different ways, and their applications can be found in many fields of science and technology. All the concepts attached to this theory are far from the classical view. Classical mechanics can be viewed as the crisp limit of a fuzzy quantum mechanics. This leads to the following interpretation: it is the consequence of an assumption that a quantum particle "resides" in different places, or in every path of the continuum of paths, which collapse into the single "unique" trajectory of an observed classical motion. Reality is "fuzzy" and nonlocal, not only in space but also in time. In this sense, the idealised pointlike particles of classical mechanics, corresponding to the ultimate sharpness of the fuzziness density, emerge in a process of interaction between different parts of a fuzzy wholeness. This process is viewed as a continuous process of defuzzification: it transforms a fuzzy reality into a crisp one. It is clear that the emerging crisp reality, as the final step of a measurement, carries less information than the underlying fuzzy reality. This means that there is an irreversible loss of information, usually called "collapse of the wave function". It is not so much a "collapse" as a realization of one of the many possibilities existing within a fuzzy reality. Any measurement rearranges the fuzzy reality, leading to different detection outcomes.
https://doi.org/10.1142/9789814504782_0037
The hypothesis of a transition from a chaotic Dirac Sea, via highly unstable positronium, into a Simhony Model of a stable face-centered cubic lattice structure of electrons and positrons securely bound in vacuum space is considered. 13.75 billion years ago, the new lattice, which, unlike a Dirac Sea, is permeable by photons and phonons, made the Universe detectable. Many electrons and positrons ended up annihilating each other, producing energy quanta and neutrino-antineutrino pairs. The weak force of the electron-positron crystal lattice, bombarded by the chirality-changing neutrinos, may have started capturing these neutrinos, thus transforming from cubic crystals into a quasicrystal lattice. Unlike a cubic crystal lattice, clusters of quasicrystals are "slippery", allowing the formation of centers of local torsion, where gravity condenses matter into galaxies, stars and planets. In the presence of quanta, in a quasicrystal lattice, the Majorana neutrinos' rotation flips to the opposite direction, causing natural transformations in a category comprised of three components, the two others being the positron and electron. In other words, each particle-antiparticle pair "e-" and "e+", in an individual crystal unit, could become either a quasi-component "e- ve e+" or a quasi-component "e+ -ve e-". Five to six billion years ago, continuous stimulation of the quasicrystal aetherial lattice by the same, similar, or different astronomical events could have triggered Hebbian and anti-Hebbian learning processes. The Universe may have started writing script into its own aether in a code most appropriate for the quasicrystal aether "hardware": eight three-dimensional "alphabet" characters, each corresponding to an individual quasicrystal unit shape. They could be expressed as quantum Turing machine qubits or, alternatively, in a binary code. The code numerals could contain terminal and nonterminal symbols of the Chomsky hierarchy, wherein the showers of quanta forming the cosmic microwave background radiation may re-script a quasi-component "e- ve e+" (in the binary-code case, the same as numeral "0") into a quasi-component "e+ -ve e-" (numeral "1"), or vice versa. According to both Chomsky's logic and the rules applicable to Majorana particles, the terminals "e-" and "e+" cannot be changed using the rules of grammar, while the nonterminals "ve" and "-ve" can. Under "quantum" showers, the quasi-unit cells re-shape, resulting in recombination of the clusters that they form, with the affected pattern becoming the same as, similar to, or different from other pattern(s). The process of self-learning may have occurred as a natural response to various astronomical events and cosmic cataclysms: the same astronomical activity in two different areas resulted in the emission of the same energy, forming the same secondary quasicrystal pattern; different but similar astronomical activity resulted in the emission of a similar amount of energy, forming a similar secondary quasicrystal pattern; and different astronomical activity resulted in the emission of a different amount of energy, forming a different secondary quasicrystal pattern. Since quasicrystals conduct energy in one direction and do not conduct it in the other, control over quanta flows allows the aether to scribe a script onto itself by changing its own quasi-patterns. The paper, as published below, is a lecture summary. The full text is published on the website: www.borisvolfson.org
https://doi.org/10.1142/9789814504782_0038
Historical aether models are placed in context with the electron-positron lattice (epola) model of space due to M. Simhony. A brief outline of the model as an aggregation state of matter, intermediate between the nuclear state and the atomic aggregation state, includes reference to its derivation of physical laws and fundamental constants. The broad application of the epola model is appraised for its validation against a range of physical laws, experiments and constants. Simhony declared a specific dependence of the stability of atomic matter upon speed through the epola, suggesting a test for falsification. This theme is further developed by the same logic to suggest practical experimental and theoretical tests of the epola model. A formula for the inverse fine-structure constant of space, providing the accepted CODATA value, is derived from Simhony's explanation of the Bohr-de Broglie model of the ground-state electron orbital of the hydrogen atom by including a term for speed through the Cosmos. A theoretical solution of the Michelson-Morley experiment is applied as evidence for the concept. The mechanism of motion through the epola is considered further for possible implications of speed, including a dependency of the decay rates of radionuclides, and the results of former and ongoing experiments are considered.
https://doi.org/10.1142/9789814504782_0039
In recent years, more and more problems have arisen over the usual interpretation of the observed redshift. Much of this originated with observations made by Halton Arp, but the questions raised by his work, although dismissed perfunctorily by many at the time and since, have not disappeared; the questions still demand answers. On top of this worry for orthodox astrophysics/cosmology, more and more questions seem to be arising concerning the observed presence of magnetic fields and electric currents in space. The latter questions are being addressed by laboratories working on experimental plasma physics but, considering the many years of work by researchers such as Anthony Peratt, it might be wondered whether all this work is necessary or, indeed, whether much of it has been done already.
https://doi.org/10.1142/9789814504782_0040
Guth's inflationary universe model has been widely accepted in modern physics. Expanding upon this concept, Linde introduced the chaotic and eternal inflationary models. The inflationary universe structure allows for multiple universes, or various bubble universes, connected through scalar and tensor fields, making the structure of space self-similar on larger scales. In this paper we briefly examine these and other theories associated with a multiverse, including M-Theory, which considers that our universe and others are created by collisions between membranes in an 11-dimensional space.
https://doi.org/10.1142/9789814504782_0041
We have developed a hyperdimensional geometry, Dn or Descartes space, of dimensionality n > 4; for our consideration n = 10. This model introduces a formation, in terms of the conditions of constants, as the space that allows us to calculate a unique set of scaling laws from the lower-end scale of the quantum vacuum foam to the current universe. A group-theoretical matrix formalism is given for the ten- and eleven-dimensional models of this space. For the eleven-dimensional expressions of this geometry, a fundamental frequency is introduced and utilized as an additional condition on the topology. The constraints on the Dn space are imposed by the relationship of the universal constraints of nature expressed in terms of physical variables. The quantum foam picture can be related to the Fermi-Dirac vacuum model. Consideration is given to the lower limit of universal size scaling from the Planck length, l = 10^-33 cm, the temporal component, t = 10^-44 s, the density, 10^93 g/cm^3, and additional Planck units of quantized variables. The upper limit of rotational frequency in the Dn space is given as 10^43 Hz. These conditions or constraints, which apply to the early universe, are expressed uniquely in terms of the universal constants: h, Planck's constant; G, the gravitational constant; and c, the velocity of light. We have developed a scaling law for cosmogenesis from the early universe to our present-day universe. We plot the physical variables of the ten- and eleven-dimensional space against the temporal evolution of these parameters. From this formalism, in order to maintain the compatibility of Einstein's General Relativity with the current model of cosmology, we replace Guth's inflationary model with a matter-creation term. We have also developed a fundamental scaling relationship between the "size scale" of organized matter and its associated fundamental frequency.
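For orientation, a minimal sketch (standard Planck-unit arithmetic from h, G and c, not the paper's Dn formalism) reproduces the quoted scales:

```python
import math

# CODATA-style constants in SI units; hbar = h / (2*pi).
hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 2.99792458e8      # m/s

l_P = math.sqrt(hbar * G / c**3)   # Planck length ~1.6e-35 m (1.6e-33 cm)
t_P = math.sqrt(hbar * G / c**5)   # Planck time   ~5.4e-44 s
m_P = math.sqrt(hbar * c / G)      # Planck mass   ~2.2e-8 kg
rho = m_P / l_P**3                 # Planck density, kg/m^3
f   = 1.0 / t_P                    # upper rotational frequency ~1.9e43 Hz

print(f"l_P = {l_P * 100:.2e} cm, t_P = {t_P:.2e} s")
print(f"rho = {rho * 1e-3:.2e} g/cm^3, f = {f:.2e} Hz")  # kg/m^3 -> g/cm^3
```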
https://doi.org/10.1142/9789814504782_0042
Relativity Theory (RT) incorporates serious inconsistencies: (1) embracing the function of transverse e.m. (TEM) waves as perfect messengers but denying the presence of a Maxwell's-equations aether lest it invalidate that perfection, despite it being essential for their existence; (2) assuming the physical absurdity that the external physical properties (mass, magnetic moment) of fundamental particles can be developed in zero volume ("spatially infinitesimal singularities"), despite powerful evidence that they are of finite size. It thereby overlooks that if two electromagnetically defined objects are of finite size, the force communication between them is progressively velocity-limited, falling to zero at c [Heaviside 1889]. So this is what happens in electromagnetic accelerators, not mass increase. For more than a century these defects have hampered progress in understanding the physics of the mass property of particles, thus compelling it to be regarded as 'intrinsic' to those specific infinitesimal points in space. A rewarding substitute, Continuum Theory (CT), outlined here, (A) implements Maxwell's aether as a massless, all-pervasive, quasi-superfluid elastic continuum of (negative) electric charge, and (B) follows others [Clerk Maxwell, both Thomsons, Larmor, Milner] in seeing mass-bearing fundamental particles as vortical constructs of aether in motion, not as dichotomously different from it. To encompass that motion, these cannot be infinitesimal singularities. Electron-positron scattering provides guidance as to that size. For oppositely-charged particles, one sort contains more aether and the other less, so particle-pair creation is 'easy', and abundantly observed, but has been attributed to 'finding'. This electron-positron relationship defines the mean aether density as >10^30 coulomb·cm^-3, thus constituting the near-irrotational reference frame of our directional devices. Its inherent self-repulsion also offers an unfathomable force capability should the means for displacing its local density exist; that, we show, is the nature of gravitational action, and it brings gravitation into the electromagnetic family of forces. Under (B) the particle mass is measured by the aether-sucking capability of its vortex, positive-only gravitation arising because the outward-diminishing force developed by each makes mutual convergence at any given point the statistically prevalent expectation. This activity maintains a radial aether (charge) density gradient, the Gravity-Electric (G-E) Field, around and within any gravitationally retained assemblage. So Newton's is an incomplete description of gravitation; the corresponding G-E field is an inseparable facet of the action. The effect on c of that charge density gradient yields gravitational lensing. We find that G-E field action on plasma is astronomically ubiquitous. This strictly radial outward force on ions has the property of increasing the orbital angular momentum of material by moving it outwards, but at constant tangential velocity. Spiral galaxies no longer require Cold Dark Matter (CDM) to explain this. The force (maybe 30 V·m^-1 at the solar surface) has comprehensive relevance to the high orbital angular momentum achieved during solar planet formation, to the planets' prograde spins and to exoplanet observations. The growth of high-mass stars is impossible if radiation pressure rules, whereas G-E field repulsion is low during dust-opaque infall, driving their prodigious mass-loss rates when infall ceases and the star establishes an ionized environment.
Its biggest force-effect (~10^12 V·m^-1) is developed at neutron stars, where it is likely the force behind supernova explosions, and it leads to a fertile model for pulsars and the acceleration of 10^19 eV extreme-energy cosmic rays. Our only directly observed measure of the G-E field is recorded, at about 1 V·m^-1, in the ionosphere-to-Earth electric potential. And temporary local changes of ionospheric electron density, monitored by radio and satellite, have been discovered to act as earthquake precursors, presumably, we suggest, by recording the change of G-E field and gravitational potential at the Earth's surface when its elastic deformation occurs, even when this is deep below electrically conducting ocean water. The paper concludes by noting experimental evidence of the irrelevance of the Lorentz transformations in CT, and with a discussion of CT's competence in such matters as perihelion advance and the Sagnac effect, widely regarded as exclusively RT attributes. Finally we broach the notion that the aether is the site of inertia. This could explain the established equality of gravitational and inertial masses. In an accompanying paper we explore the cosmological and other aspects of 'making particles out of aether'. This link undermines the expectation of fully distinct dynamical behaviour by particles and aether which motivated the Michelson-Morley experiment.
https://doi.org/10.1142/9789814504782_0043
Our preceding paper “Implementing Maxwell’s aether.……” (Paper I) concluded:- (A) Maxwell’s aether, ignored in Relativity, is a massless, quasi-superfluid continuum of extremely high negative charge density; (B) Fundamental particles are not infinitesimal singularities within the aether but develop their mass by being ‘made out of it’ (hence the name Continuum Theory) as finite-sized vortical constructs of its motion. So reproduction (‘auto-creation’) of more of them requires only the addition of suitable dynamical energy, with Ampere’s law providing charge-coupling in shear to get rotations. (C) In the resulting gravitational process, generating the Newtonian force simultaneously also generates a radial electric field, the Gravity-Electric (G-E) field, whose action on astronomical plasmas could explain the flat tangential velocity profiles of spiral galaxies without resort to Cold Dark Matter (CDM) if outward disc flow is present. One of the objectives here is to provide that flow by axial infall and to examine its consequences. But first, if particles are ‘made out of aether’ the associated random aether-charge motion will generate radiation (the CMB) and impose four distance-cumulative, wavelength-independent transmission effects upon electromagnetic waves. One of these – a redshift – we see here as the cosmic redshift, plus intrinsic redshifts in stellar and galaxy ‘atmospheres’. Such a redshift appears to have been reliably observed with caesium clocks over long ground-level paths in 1968 but, lacking an appreciation of its mechanism, its wide significance was doubted. In fact, our extrapolation to intergalactic conditions dispenses with the BigBang. The other 3 transmission effects are:- spectral line broadening, scattering and attenuation, each of which has significant astronomical/cosmological expression. If the cosmic redshift is not a velocity, the reason for Dark Energy vanishes. In the resulting no-expansion cosmology the Universe was originally equipped with randomly moving aether, from whose motion and energy content the entire mass content of the Universe has grown over time by auto-creation, the local rate of which experiences positive feedback and acceleration as gravitational accumulations drive energy levels higher. Hence the clumpiness of galaxy distributions. The infall of cosmogonally young material from the auto-creation auras of clusters has 3 major implications. (1) It completely inverts the Big Bang perspective that lowmetallicity material, widespread in galaxy haloes, is very ancient. (2) Quasi-axial infall of such broadly neutral material (mostly H) onto a Spiral will spread out in the galactic plane, driven radially from the ionizing bulge by the G-E field, maintaining constant tangential velocity; all without CDM. This pattern means that the arms, although trailing, are actually being blown outward (unwrapping). See Paper I for detail. For such ongoing disruption of Spirals to prevail so widely means that originally each must have started life as an a.m.-conserving, tightly-wound spiral of mostly neutral, cosmogonically young material (mainly H), in which G-E field action was minimal until star formation and ionization had set in. (3) In cluster interiors, other cluster members may deflect the two infall streams as they converge onto a Spiral, introducing a dynamical rotational couple near the centre, with an axis roughly in the galactic plane, to produce a Barred Spiral. 
Cessation of infall then results in endwise collapse of that bar, yielding a fattened Elliptical. Ellipticals are indeed typically concentrated in the centres of clusters and show a dearth of active star formation, consistent with their being deprived of young infall. We present images and diagrams in support and elaboration of (2) and (3). The CT model for quasars provides large intrinsic redshift via the CT analogue of the Transverse Doppler Effect, and offers light-element synthesis through the evolutionary precipitation of a runaway rotational shrinkage, with mass annihilation and emission of a GRB. Of special interest, relative to the arm's-length nature of Big Bang cosmology, is that continuous auto-creation (CAC) cosmology is in principle available nearby for direct study and the development of strong observational constraints. In the context of (1), the very low metallicity (Pop II) of the globular (star) clusters abundantly present in the haloes of galaxies points to their being (infallen?) local concentrations of quite young auto-creation. In that case the 'blue straggler' stars more recently formed in their core regions may be our youngest examples of ongoing auto-creation. In summary, CT offers a much more directly observable Universe, with no Big Bang, CDM, or Dark Energy, and with a CMB that records the true temperature of intergalactic space along the path taken by the radiation. Its near-cavity-radiation character arises from the random aether's transmission-related opacity of the infinite CT Universe (resolving Olbers' Paradox). Fundamentally, the aether's random motion constitutes an all-penetrating random electromagnetic excitation at the atomic scale that may offer the accommodation between classical physics and stochastic quantum electrodynamics so long obstructed by Relativity Theory.
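As background to the rotation-curve problem that the G-E field is invoked above to solve without CDM: with the visible mass centrally concentrated, Newtonian orbits give v ∝ r^(−1/2), whereas observed disc velocities stay roughly flat with radius. A minimal illustrative sketch follows (all numerical values are assumed for illustration only, not taken from the paper):

```python
# Contrast between a Keplerian rotation curve (all mass central) and the
# roughly flat profile observed in spiral-galaxy discs -- the anomaly that
# CT's G-E field, rather than Cold Dark Matter, is invoked above to explain.
from math import sqrt

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_bulge = 1.5e41     # assumed central (bulge) mass, kg (~7.5e10 solar masses)
kpc = 3.086e19       # one kiloparsec in metres

for r_kpc in (2, 5, 10, 20):
    # Keplerian prediction: v = sqrt(GM/r), falling as r^(-1/2)
    v_kepler = sqrt(G * M_bulge / (r_kpc * kpc)) / 1e3  # km/s
    print(f"r = {r_kpc:2d} kpc : Keplerian v = {v_kepler:5.0f} km/s "
          "(observed discs stay near ~200 km/s)")
```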
https://doi.org/10.1142/9789814504782_0044
An internally transluminal model of the hypothesized unique cosmic quantum of the very early universe is proposed. It consists of a closed-loop photon model which has the total mass-energy of the atomic matter and dark matter of our observable universe that will develop from it. The closed-loop photon model is composed of a rapidly circulating point-like transluminal energy quantum (TEQ). This TEQ circulates in a closed helical path, with a maximum speed of c√5 and a minimum speed of c, around the photon model's one-wavelength closed circular axis. The transluminal energy quantum model of the cosmic quantum is a boson. The cosmic quantum model may help shed some light on several fundamental issues in cosmology, such as the nature of the cosmic quantum, the predominance of matter over antimatter in our universe, the possible particles of dark matter, the quantum interconnectedness of the universe, and the very low entropy of the very early universe. The cosmic quantum may be the first particle of dark matter, from which all other particles are derived.
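The quoted speed extremes are consistent with a simple toroidal-helix reading of the closed helical path. As a sketch (the abstract does not specify the radii, so a single winding with helix radius r equal to the circular-axis radius R, with 2πR = λ, are assumptions here):

```latex
% Toroidal helix around a circular axis of radius R (2\pi R = \lambda),
% helix radius r, N windings, toroidal angle \theta = \Omega t:
\mathbf{r}(\theta) = \big( (R + r\cos N\theta)\cos\theta,\;
                           (R + r\cos N\theta)\sin\theta,\;
                           r\sin N\theta \big),
\qquad
|\dot{\mathbf{r}}| = \Omega \sqrt{(R + r\cos N\theta)^2 + r^2 N^2}.
```

With N = 1, r = R, and ΩR = c, the speed runs from c (at cos θ = −1) up to √(2² + 1)·c = c√5 (at cos θ = +1), matching the quoted extremes.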
https://doi.org/10.1142/9789814504782_0045
A photon is modeled by an uncharged transluminal energy quantum (TEQ) moving at c√2 along an open 45-degree helical trajectory with radius R = λ/2π (where λ is the helical pitch or wavelength of the photon). A transluminal spatial model of an electron is composed of a charged point-like transluminal energy quantum circulating at an extremely high frequency of 2.5 × 10²⁰ Hz in a closed, double-looped helical trajectory whose helical pitch or wavelength is one Compton wavelength h/mc. The transluminal energy quantum has energy and momentum but no rest mass, so its speed is not limited by c. The TEQ's speed is superluminal 57% of the time and subluminal 43% of the time, passing through c twice in each trajectory cycle. The TEQ's maximum speed in the electron's rest frame is 2.516c and its minimum speed is c/√2 ≈ 0.707c. The electron model's helical-trajectory parameters are selected to produce the electron's spin ħ/2 and approximate (without small QED corrections) magnetic moment eħ/2m (the Bohr magneton), as well as its Dirac equation-related 'jittery motion' (zitterbewegung) angular frequency 2mc²/ħ, amplitude ħ/2mc, and internal speed c. The two possible helicities of the electron model correspond to the electron and the positron. With these models, an electron is like a closed circulating photon. The electron's inertia is proposed to be related to the electron model's circulating internal momentum mc.
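The electron-model figures quoted above are standard combinations of CODATA constants and can be checked directly; a minimal Python sketch (variable names are ours; scipy.constants supplies the constants):

```python
# Check of the electron-model figures quoted above, from CODATA constants.
from math import cos, pi
from scipy.constants import hbar, h, c, m_e, e

f_zbw = 2 * m_e * c**2 / h    # zitterbewegung frequency 2mc^2/h  -> ~2.47e20 Hz
a_zbw = hbar / (2 * m_e * c)  # zitterbewegung amplitude hbar/2mc -> ~1.93e-13 m
lam_C = h / (m_e * c)         # Compton wavelength h/mc           -> ~2.43e-12 m
mu_B  = e * hbar / (2 * m_e)  # Bohr magneton e*hbar/2m           -> ~9.27e-24 J/T
v_teq = c / cos(pi / 4)       # speed along a 45-degree helix whose
                              # axial component is c: c/cos(45 deg) = c*sqrt(2)

print(f"zitterbewegung frequency : {f_zbw:.3e} Hz")
print(f"zitterbewegung amplitude : {a_zbw:.3e} m")
print(f"Compton wavelength       : {lam_C:.3e} m")
print(f"Bohr magneton            : {mu_B:.3e} J/T")
print(f"photon-model TEQ speed   : {v_teq / c:.3f} c")
```

The printed frequency, ≈2.47 × 10²⁰ Hz, is the '2.5 × 10²⁰ Hz' quoted above.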
https://doi.org/10.1142/9789814504782_0046
For many years now, the so-called 'Big Bang' model for the beginning and evolution of the Universe has been accepted almost without question. In fact, anyone who has dared question its authenticity has been condemned to a sort of scientific outer darkness. However, what is the truth in this matter? Here some brief thoughts on this problem will be laid out for people to ponder at leisure.
https://doi.org/10.1142/9789814504782_0047
It is shown in this paper that the Big Bang Cosmology has its basis in theology, not in science; that it pertains to a Universe entirely filled by a single spherically symmetric, continuous, indivisible, homogeneous body and therefore models nothing; that it violates the physical principles of General Relativity; that it violates the conservation of energy; and that General Relativity itself violates the usual conservation of energy and momentum and is therefore in conflict with experiment on a deep level, rendering Einstein's conception of the physical Universe and the gravitational field invalid.
https://doi.org/10.1142/9789814504782_0048
The neurogenetic correlates of consciousness (NgCC) constitute a new field of consciousness studies that focuses on genes that affect or are involved in the continuum of neuron-based consciousness. A framework of consciousness based on the neural correlates of consciousness (NCC) has already been established by Francis Crick and Christof Koch. In this work I propose that there are NgCC underlying the NCC, both being active during the conscious experience. So how are genes involved? There are two significant connections between DNA and neurons that are involved in the conscious experience. First, any brain system can be adversely affected by underlying genetic abnormalities, which can be expressed in an individual at birth, in adulthood, or later in life. Second, the DNA molecule does not lie dormant while the neuron runs on autopilot: DNA is actively transcribed into RNA and translated into protein products that are utilized during neuron functioning. Without these products being continuously produced during a conscious experience, the neurons would cease to function correctly and be rendered unable to provide a continuum of human consciousness. Consequently, in addition to the NCC, the NgCC must be factored in when appreciating a conscious event. In this work I will discuss and explain some NgCC, citing several examples.
https://doi.org/10.1142/9789814504782_0049
First, the scientific arguments from astrophysics, astronomy, and modern astrobiology in favor of the existence of life, consciousness, and intelligence in the Universe will be presented and assessed (e.g. the Nobel Laureate Christian de Duve's arguments). The part played in this kind of scientific debate by the Copernican principle will be stressed from both the scientific and the philosophical point of view. Since there are also philosophers and theologians who argue in favor of the existence of life in the Universe, their arguments will be briefly presented and assessed as well.
https://doi.org/10.1142/9789814504782_0050
The author proposes a holoinformational view of the observer based on the holonomic theory of brain/mind function and quantum brain dynamics developed by Karl Pribram, Sir John Eccles, R.L. Amoroso, Hameroff, Jibu and Yasue, and on the quantum-holographic and holomovement theory of David Bohm. This conceptual framework is integrated with the nonlocal information properties of the Quantum Field Theory of Umezawa; with the concepts of negentropy, order, and organization developed by Shannon, Wiener, Szilard and Brillouin; and with the theories of self-organization and complexity of Prigogine, Atlan, Jantsch and Kauffman. Wheeler's "it from bit" concept of a participatory universe, and the developments in the physics of information made by Zurek and others with the concepts of statistical entropy and algorithmic entropy, related to the number of bits being processed in the mind of the observer, are also considered. This new synthesis gives a self-organizing, quantum-nonlocal, informational basis for a new model of awareness in a participatory universe. In this synthesis, awareness is conceived as meaningful quantum nonlocal information interconnecting the brain and the cosmos through a holoinformational unified field integrating the nonlocal holistic (quantum) and local (Newtonian) domains. We propose that the cosmology of the physical observer is this unified nonlocal quantum-holographic cosmos manifesting itself through awareness, interconnecting the human mind-brain in a participatory, holistic, and indivisible way to all levels of the self-organizing holographic anthropic multiverse.
https://doi.org/10.1142/9789814504782_0051
In the 'clockwork' Newtonian world the concept of time was irrelevant, and it has generally been ignored until recently by the several generations of physicists since the advent of quantum mechanics. We will set aside the utility of time as a property entering physical calculations of events, whether through a metric's line element, as an aspect of the transformation of a particle's motion/interaction in a coordinate system, or in relation to thermodynamics, etc.; that is, we will discard all the usual uses of time as a concept employed to circularly define physical parameters in terms of other physical parameters, concentrating instead on time as an aspect of the fundamental cosmic topology of our virtual reality, especially as it inseparably relates to the nature and role of the observer in natural science.
https://doi.org/10.1142/9789814504782_0052
Physics has been slowly and reluctantly beginning to address the role and fundamental basis of the 'observer', which has until now been considered metaphysical and beyond the mandate of empirical rigor. It is suggested that the fundamental premise of the currently dominant view of 'Cognitive Theory', that 'Mind Equals Brain', is erroneous, and that the associated belief that the Planck scale, the so-called 'basement level of reality', is an appropriate arena from which to model psycho-physical bridging is also in error. In this paper we delineate a simple, inexpensive experimental design to 'crack the so-called cosmic egg', thereby opening the door to large-scale extra dimensions (LSXD) tantamount to the regime of the unified field and thus awareness. The methodology surmounts the quantum uncertainty principle in a manner violating Quantum Electrodynamics (QED), a cornerstone of modern theoretical physics, by spectrographic analysis of newly theorized Tight-Bound State (TBS) Bohr orbits in 'continuous-state' transition frequencies of atomic hydrogen. If one wonders why QED violation in the spectra of atomic hydrogen relates to solving the mind-body (observer) problem, consider this a first wrench in a forthcoming toolbox of Unified Field Mechanics (UF) that will soon enough, in retrospect, cause the current tools of Classical and Quantum Mechanics to appear as stone axes. Max Planck is credited as the founder of quantum mechanics with his 1900 quantum hypothesis that energy is radiated and absorbed discretely according to the formula E = hν. Empirically implementing this next paradigm shift, utilizing parameters of the long-sought associated 'new physics' of the 3rd regime (classical-quantum-unified), allows access to LSXD of space, thus pragmatically opening the domain of mental action for the first time in history. This rendering constitutes a massive paradigm shift to Unified Field Theory, creating a challenge for both the writer and the reader!
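Since the proposed protocol turns on distinguishing anomalous TBS lines from the conventional hydrogen spectrum, the conventional side of that comparison is easy to state. A minimal Python sketch of the standard Bohr/Rydberg transition frequencies (the TBS frequencies themselves are not given in this abstract; the function name is ours):

```python
# Conventional Bohr/Rydberg baseline for atomic-hydrogen transition
# frequencies -- the reference spectrum against which any anomalous
# Tight-Bound State (TBS) lines would have to be distinguished.
from scipy.constants import Rydberg, c  # Rydberg constant, 1/m

def h_line_frequency(n_lower: int, n_upper: int) -> float:
    """Hydrogen transition frequency (Hz) from the Rydberg formula."""
    return Rydberg * c * (1 / n_lower**2 - 1 / n_upper**2)

# Example: the Balmer-alpha line, n = 3 -> 2 (~656 nm)
f = h_line_frequency(2, 3)
print(f"H-alpha: {f:.4e} Hz  ({c / f * 1e9:.1f} nm)")
```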
https://doi.org/10.1142/9789814504782_bmatter
The following sections are included: