387) Future of CMNS

Ludwik Kowalski

Montclair State University, New Jersey, USA
January 31, 2010


1) The future of cold fusion has been discussed on the private discussion list for CMNS researchers. Last night Ed Storms wrote:

“. . . Physicists seem to think they are the only evaluators of reality and they have applied their approach to cold fusion.  This approach involves ad hoc assumptions about ideal conditions and application of mathematical models. Such an approach cannot be tested because it does not describe actual reality in this case.  If they had been the evaluators of early chemistry, early genetics, or even the invention of the light bulb, these fields would have been slow to develop as well.  While you might think this conclusion rather extreme, nevertheless, no theory will be accepted because none can be tested until the nature of the required material is understood.  Once Fleischmann and Pons claimed a nuclear process, which let physics into the discussion, they were doomed.  The field will continue to be slow to develop until this attitude changes or until someone finds the active material by accident.”

2) Responding to this I wrote: “Another very powerful trigger would be a single reproducible-on-demand demo of a strong nuclear effect, such as undeniable transmutation or emission of tritons, resulting from an atomic (chemical) process.

What would I do if I had the power and the means to proceed toward such a goal? I would start by selecting (in consultation with others) several of the most promising CMNS protocols. The protocols would be ranked as #1, #2, #3, etc. Then several teams of competent researchers would be asked to verify the claim made by the authors of Protocol #1. Sooner or later they would come up with the answer. Their "yes" would mean that the goal has been achieved; their "no" would mean that Protocol #2 should be tried, by selected groups of competent researchers. They too would deliver a clear yes or no answer, sooner or later. Then the same for Protocol #3, and so on, until the last protocol is tried, as schematically illustrated in Figure 1. Yes, I know that this would not be easy in practice. But is it possible in principle? I think so. Do you agree?


Figure 1
Desirable strategy for making progress in the CMNS field.

3) Responding to the above, Larens Imanyuel wrote: "I find your approach too rigidly methodical. A successful scientific debate in a difficult field is more like the case study of a Sun Tzu battle." [According to Wikipedia, Sun Tzu (6th century BC) wrote: "It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle."]

4) Responding to Larens, I asked:

(a) What kind of battle are we fighting?
(b) Who are our enemies?
(c) What should we know about them?
(d) What should we know about ourselves?

5) Responding to the above, Larens Imanyuel wrote:

(a) “The battle is to have LENR recognized as a fundamental physics breakthrough within the international scientific policy process, so that LENR get the scientific resources they deserve.
(b) Scientists who have not been open-minded about LENR.
(c) We need to know which scientists to win over with which arguments, so as to destroy the alliance against us.
(d) We need to know those biases within us which have prevented us from winning the battle.”

6) One researcher wrote that CMNS “cannot be explained by conventional physics.” Another wrote that “nothing about CF requires new physics. I agree, some physics will have to be modified in ways that have not been considered.” This led to an argument. Referring to the second researcher, the first one wrote: “You are asserting your belief as fact here. It is your obligation within the discussion to present some type of proof that new physics is not required.”

The proof will be in the pudding. A theory predicting reproducible experimental data will be accepted. Will it be based on conventional physics or will it be based on something totally new? This remains to be seen. 

7) The most important task, for the time being, is to find at least one reproducible-on-demand protocol yielding a strong nuclear process resulting from an atomic process. A challenge to theoreticians will be not only to explain that nuclear process but also to show that a successful theory predicts new reproducible phenomena. That is what I learned from those who described the scientific methodology of validation and acceptance. I strongly believe that the discovery of the first reproducible protocol will speed up the process of understanding CMNS phenomena.

8) No one has contributed so far. That shows that what is on my mind is not necessarily on the minds of others. Here is how this unit was announced, about 15 hours ago: Capturing what we think about the future of CMNS (for future generations) is important. With this in mind I started composing a new unit:

        http://pages.csam.montclair.edu/~kowalski/cf/387future.html

So far this link is "secret;" please do not share it with others. I will let you know when the unit becomes public. Ed already gave me permission to quote. I hope Larens will do the same. Please contribute to this thread. Or send me a private message with something to insert. In that way no one on this list will know who the author is. Unless I have permissions, names will be replaced with X, Y, Z, etc., if you prefer. But let me know.

Added on 2/2/2010

9) A question about resonant tunneling was asked by X. Not being a theoretician, I usually stay away from mathematical analysis of CMNS problems. What follows is quoted to show that, contrary to what many outsiders believe, disagreements and personal friction also exist in the CMNS community. We are not a mutual admiration society. Responding to X, Professor Y wrote:

In Professor Li's work, unlike mine, he solves Schroedinger's Equation for a complex wave function, which has a phase rather than merely an amplitude as I have. Therefore he can compute a reaction RATE rather than get a simple YES/NO answer.

Every one of the half-dozen theorists whom I mentioned in my MIT Colloquium presentation is doing some aspect of the problem in greater detail than am I and "my" answer could be improved if they would examine the same scenario that I did but with more detail.

For example, Drs. Talbot & Scott Chubb usually consider a 3D lattice whereas I [like Schwinger] have simplified the problem to a 1D approximation in order to be able to get an answer more easily.

In Prof. Yeong Kim's work, he is solving the actual Schroedinger equation EXACTLY rather than just using the widely accepted standard approximate solution which is easier to use and is what I used.  I have told Prof. Kim that if he considered the boundary conditions of the lattice (rather than just the two isolated nuclei) he would get an improvement in the theory which I have been advocating.

Likewise "my" theory could be improved if one included the phonon aspects, as does Prof. Peter Hagelstein, in order to include the subject of the post-fusion energy being transferred to the lattice in the form of lattice excitations or heat.

But NONE of the other theorists in the field has ever even cited Y’s work, nor any of my four "proofs" of varying degrees of simplification, much less improved upon it as I have often expressed the hope that they would!


Added on 2/3/2010

10) X2 wrote: “
There is a conflict here between political styles. I see the necessity of an initial political phase of building personal relations, before trying to win a political battle in a large forum. These behind the scene personal relations are what is going to insure victory in the large forum. If newcomers feel a "get to the back of the bus, we old-timers are running the show" attitude, the necessary positive social relations will not be built. That is why I am working first to build working groups that can move our agenda up the scientific bureaucracy.”

11) X3 responded:
I agree with you, building trust and inclusion is essential. As far as I can tell, this is the general policy of the field, including my own.  Relationships are currently being built between a wide variety of institutions.  In the process, two other dynamics are operating.  Efforts are being made to bring a higher standard of experiment and theory to the field. These efforts involve criticisms that can cause resentment. At the same time, some people care only about their own self-interest, which they promote by criticizing efforts to improve the quality of the work.  Therefore, some conflict in the field is to be expected, as exists in all fields of human effort.   The solution is to collaborate with those people who are working to advance the general field and who take a completely objective approach to the effort.

12) Another exchange between X3 and X2 (X3 in blue and X2 in red).

The November 13 DIA technology assessment makes clear that LENR experimentation has been rapidly expanding, and that this is bringing a "higher standard of experiment".

True, but I'm talking about the field policing itself by peer review. This process is happening as people exchange preprints and ask for critiques.  Unfortunately, everyone does not do this and some take the criticism personally.


A comparable increase in resources has yet to occur on the theoretical side. Since Julian Schwinger resigned from the APS over their hostile CF policy, people wanting to pursue the fundamental physics of LENR have had to be self-supporting.

True, with a few exceptions.

By "fundamental" I mean new simpler mathematical formulations that are correct by Occam's Razor. These do not require any new experimental LENR breakthroughs, though cleaning up the existing results will be useful both for theory and for being more persuasive to the larger scientific community. The theoretical breakthrough, can only occur, however, if the "newcomers" are granted enough resources to effectively collaborate on cleaning up relevant known fundamental theory. Their efforts to obtain these resources is currently at the stage of negotiating on the "shape of the table and the height of the chairs".

I agree, not much resource is being applied to theory.  However, for a person to make a useful contribution, they need to master what is known, which is not an easy job.

When you talk about people "who take a completely objective approach" you are talking about an empty set. You by your own statement are strongly in the camp that new experimental results will be necessary.

By an objective approach I mean not being in love with your own ideas to the exclusion of other possibilities.  I find many theoreticians are so sure they are right that no argument, no matter how well researched, has any effect on their approach.  Of course, a person should be an advocate for an idea, but this has to have limits.

13) The issue of money was raised. Responding to this, X2 wrote: “If I were given $10M to set up a laboratory and 5 years of guaranteed support, I would have no problem getting the people. This economy and the situation at LANL would provide many well trained and talented people if the lab were set up in Los Alamos or Santa Fe. This effort could also involve creating a department of LENR at UNM. No, the problem is the willingness to risk money on this subject. Money will flow into the field cautiously and it will be focused on the big laboratories such as NRL and SRI. Eventually, the so-called national laboratories will want a piece of the action. As this money produces success, more will enter the field and be directed by the conventional scientific establishment because they have the experience and the trust to properly direct money. However, a lot will be wasted because, as X3 says, a special knowledge set is required. The conventional money managers will not recognize this requirement. Talent is available, only awareness at the management and funding levels is lacking, at least in the US.”

Added on 2/4/2010

14) As I often repeat, money and administrative support will probably materialize very quickly after at least one reproducible-on-demand demo of a nuclear process due to an atomic process is offered. An example of a nuclear process is emission of energetic photons or projectiles, changes in isotopic ratios, transmutation of elements, etc. An example of an atomic process is electrolysis or another chemical reaction, diffusion, glow discharge, etc.

The best candidate for such a demo seems to be creation of helium during electrolysis. This kind of nuclear transmutation was independently reported by at least three teams of highly qualified researchers, two in the US and one in Italy. These are the only studies in which excess heat and the nuclear effect were shown to be correlated. More than several MeV of excess heat is generated for each atom of produced helium. In one study (M. McKubre) the reported value was close to 23 MeV per helium atom. If I had the expertise needed to perform such investigations, and access to the required instruments, I would certainly try to independently verify the results reported by McKubre, Miles, and De Ninno. Focusing on reproduction experiments seems to be more desirable, at this stage, than searching for new CMNS phenomena.
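
For reference, 23 MeV is essentially the mass-energy released when two deuterons combine into one helium-4 nucleus; the Q-value follows from the standard atomic masses (2 x 2.014102 u - 4.002602 u = 0.0256 u):

    Q = [2 m(D) - m(4He)] c^2 ≈ 0.0256 u × 931.5 MeV/u ≈ 23.8 MeV

This is why the measured heat-to-helium ratios are usually compared with 23.8 MeV per helium atom.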

Sustained commercial success (for example, in generation of excess heat), even without understanding of the mechanism, would also precipitate interest and support. Such a scenario is possible, at least in principle. But it is very unlikely. I am thinking about the proverbial blind chicken trying to find a grain of corn by pecking randomly. Most new applications are now based on scientific understanding of underlying processes.

Added on 2/5/2010

15) The last comment above made me think about two kinds of research projects, replication-oriented and discovery-oriented. The first kind is more likely to be successful than the second kind. Why is it so? Because in the first kind of situation, success depends only on the skill of the researchers. To succeed in the second kind of situation, one must be not only skillful but also lucky (in choosing the topic). My thesis adviser told me this in France, nearly five decades ago.

My definition of “replication experiments” is broader than “strict replications” of what has already been done. It includes “analytical experiments” in which the goal is to learn something about a phenomenon known to exist, or to measure something. Trivial examples are (a) measuring the resistivity of a new alloy or (b) learning how that resistivity depends on temperature. What we need in the CMNS field are strict replications verifying reported results, preferably in different laboratories (to confirm reproducibility).

By the way, most experiments performed in school laboratories belong to the first kind. The same is true for most experiments performed by professional researchers. A question is asked and an experiment is performed to answer it. For example, (a) “how does the cross section of a particular nuclear reaction depend on the kinetic energy of projectiles?” or (b) “does a theoretically predicted particle (such as the neutrino) exist?” Success in performing such experiments depends only on the qualifications of the experimentalists; success means producing a convincing yes or no answer. Discoveries of unexpected things usually occur during analytical experiments. Most experiments in which I participated were analytical. Only once was I lucky to participate in a discovery of something unexpected.

Added on 2/6/2010

16) Larens Imanyuel wrote:
“Since people have been accustomed to thinking in 4-D physics, they generally ask, "Where are all those new dimensions?" when presented with a higher dimensional theory. In this posting I will present the answer to that question intuitively before I get into any formal mathematics.

3-D is exceptional, because the number of degrees of rotation equals that of translation. Instead of points, we can take helices to be the fundamental geometrical object, each with an axis of translation, a plane of rotation, and a phase. We may then use a three dimension array of numbers to develop the dynamics, which gives a 3 cubed = 27-D space. This is best decomposed into subspaces as 27 = 8 + 8 + 8 + 3. One 8-D space is the double of our conventional 4-D space, arising because helices, unlike points, exist in both left and right handed chiralities. Time and temperature are a new pair of dimensions. This pair easily admits a thermodynamic direction to time, unlike in conventional physics. Since in 27-D space there are multiple degrees of freedom for time, observed time is an average over them - the same as temperature is the average over degrees of freedom of energy. Time and energy are conjugate, of course, because their product is action.

The existence of three 8-D spaces may be associated with the temporal and philosophical triality of actors who exist in the present, observe the past, and act upon the future. The 16-D projection into the past and the future exhibits strong dualities in which there are equivalent ways of expressing the same physics. One of these is the particle/wave duality of conventional quantum mechanics. There is a cosmic one in which in one frame the observed universe is always the same size, and in another the universe is expanding from a Big Bang. A fundamental one is that the change from future to past may be represented by either the change in the sign of a number, or by switching between two completely different metrics.

The structure of one 8-D space describes the length preserving transformations in its associated 16-D space. Within a 16-D space one 8-D subspace represents a transformation of the other one. The Standard Model of Particle Physics may be derived from this structure. The triality appears alternatively as three generations of particle, and as the tripolar charges of the strong force.

The 3-D subspace of the 27-D space is a nonparticle space in which scaling effects more than one dimension at a time. For instance, scaling the area of the plane of rotation of a helix, will give you one nonparticle dimension in a set of three. This 3-D space is essential for the simple description of condensed matter science. Each dimension admits solutions to the general quantum three-body problem giving fractal binding energies. Canonical states of matter may be constructed using the known properties of isotopes and matched to important physical features. One of these relates to the geometrical mean between the largest and smallest times in the universe, i.e., the age of the Big Bang and the reduced Planck time. This ratio may be calculated from information theoretic considerations and matches observation to the accuracy of the condensed matter physics involved. Virtual states are needed to construct the cold fusion reaction. The "magic numbers" for atomic number and mass may be calculated from the number of elements in the 8-D space structure. LENR extends beyond these magic numbers, however, so a higher order theory needs to be constructed. Presumably this will use virtual light isotopes arising from heavier isotopes.

This theory differs from conventional string theory in using commutative operators over 8-D division algebra, rather than anticommutative operators over "geometric" algebra. By a long known result 27-D physics is an extension of conventional quantum physics. The closest conventional string theory is the exceptional 16-D compactification of the heterotic string.”

17) Responding to the above I wrote: “I agree with Ed Storms that such mathematical speculations should not be part of the CMNS struggle for acceptance. They should first be used to explain non-controversial experimental data. Mixing two not-yet-accepted considerations is likely to be counterproductive.”

18) Larens Imanyuel responded:
“Ludwik, Science is not a popularity contest. All the declarative sentences of my "intuitive" explanation of 27-D physics are backed up by some type of more formal mathematical explanation. It is more likely to be proven correct than is the Higgs boson likely to be discovered. The double standard whereby the consideration of my theory is dismissed as "speculation" and the hypothesis of the Higgs boson presented as "fact" must be rejected itself for scientific progress to proceed. As I explained the 3-D nonparticle subspace of 27-D theory is essential for CMNS; and it does not exist in the old theory.

In case you haven't followed, the existence of the Higgs boson is being challenged by nonparticle Higgs mechanisms and the "un-Higgs" pseudo-particle, so conventional particle physics is having to get into this realm. There is no other serious, useful fundamental theory of physics other than the one I gave, because I have based it on fundamental mathematics. Another theory may use a somewhat different mathematical form, but must have basically the same physical content. Some of my intuitive statements will probably have to be modified once all the proofs have been worked out, but there is no good reason to think that the basic content will change much.

Only trying to first explain "non-controversial experimental data" would be a strategic disaster. To gain acceptance a new theory must explain evidence that conflicts with old theory. All such evidence is by nature "controversial", because it does conflict with the old theory.”

19) Ludwik responded: “I am probably not the only one on this list who knows very little about the Higgs boson or string theory. That is why I think that such topics do not belong to CMNS. The same applies to other exotic topics: vacuum energy, magnetic monopoles, etc. Such interesting topics are, or should be, debated outside of CMNS. Let us focus on four nuclear claims: (a) transmutation of elements, (b) changes in isotopic ratios, (c) emission of energetic radiations (alpha, beta, gamma, protons, etc.), and (d) generation of unexplained excess heat. What is wrong with this suggestion? What can be gained by making string theory part of CMNS?”

Responding to a comment made by another researcher, I added: “Yes, we do need some kind of a limit. But where should the limit be placed, and how should it be implemented in practice? By the way, I do understand creative people's desire to share. Seeking readers and listeners is part of human nature. And I do not mind reading what they have to say, provided I am able to follow it. The message to which I was responding was indeed quite interesting.”

= = = = = = = = = = = = = = = = = = = = = =

Added on February 12, 2010

20) The discussion initiated several days ago restarted when X5 asked “Larens, how about educating an old man a little. Can you define for me each of the 27 dimensions which are involved?” Responding to this I wrote: “I know that there is a deep answer. But let me give you a naive one; Larens will probably reject it.

Suppose we use Newton's frame of reference; it is fixed with respect to distant stars. You need only three coordinates to identify the position of the center of gravity of an object. To deal with velocity and acceleration of the center of gravity you also need time; that is the fourth coordinate. You also need coordinates associated with possible orientations of a rigid object. Suppose the object is not rigid, like two rigid spheres connected with a spring. You need to describe possible oscillations. Suppose the object is biological, for example a human being. Then you need a coordinate for the body temperature, a coordinate for the heart rate, a coordinate for the rate of sweating, and coordinates for the rates of breathing, digesting, thinking, etc.”
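
To make this naive counting concrete, the standard textbook numbers are: N free point particles need 3N position coordinates; a single rigid body needs six (three for translation, three for orientation); and two point masses joined by a spring also need six,

    6 = 3 (center of mass) + 2 (orientation of the axis) + 1 (vibration),

while every additional internal variable (temperature, heart rate, and so on) simply adds one more coordinate.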

21) Responding to someone else Larens wrote: “Part of the problem is the difference in the meaning of "dimension" between math and physics. What physicists call "degrees of freedom" are often called "dimensions" in math. In mathematical physics the number of dimensions of a system is generally the number required to get a "true representation" of it.

A similar confusion occurred when Einstein extended the use of the word "dimension" from 3-D "space" to 4-D "spacetime". "Temperature" is a collectively defined variable, which may be included as a "dimension" along with time when we extend from "particle" physics to "condensed matter" physics. We need to double the size of 3-D space when we replace translations by rotations to get a realistic local action. Each rotation is coupled with a nonzero differential translation to get a helix. The nonzero condition arises, because with QM no particle with a finite wavelength can have zero momentum. The space is doubled, because helices come in two different chiralities.

The resulting 8-D space is tripled to 24-D by the "strong triality" of division algebra, but the 24 dimensions cannot be labeled "exactly" in conventional terms, because of multiple interpretations. One of the simplest to understand is that there are three "generations" of particles differing only in mass. The nonparticle subspace of 27-D physics is a 3-D translational space, the same as conventional 3-D space, but applies only to condensed matter physics and not to particle physics.”


22) X6 responded: “Thanks, I now understand. In this definition, an equation of the form X = A + By + Cy^2 + Dy^3 + Ey^n would have n dimensions. By 27-D you mean that 27 individual quantities are needed to define all aspects of a material. Being a chemist, I suggest this is a gross underestimation.”

23) Larens responded: “You are coming from a position of dogmatic Western dualism. This is a temporary cultural phenomenon arising from the fact that the physics of Western culture has not yet been mathematically sophisticated enough to address the existence of consciousness in the universe. It is time to move on.

Western physicists have also insisted on putting the labels of "length" and "time" on mathematics that is intrinsically "dimensionless". Correcting this problem results in multiple interpretations of the mathematical dimensions - which are in turn denied, because the physicists are not yet philosophically aware of their problem. To give an "exact" definition of all the dimensions will probably take mathematical physicists several decades. . . .

I have stated that my theory includes "actors that observe the past, act upon the future, and relate to one another in the present." This statement includes the concepts of cognition and volition that define consciousness. . . . I stated earlier that 27 is the necessary and sufficient number. To get into a detailed explanation would require more mathematics than I can get into in an "intuitive" explanation of the physics, for my audience is not familiar with the mathematics. . . . I explained earlier that I am using the definition implicit in modern geometry, but that these dimensions may relate to different meanings in terms of conventional physics.”

24) Ludwik wrote:
“Conflicts can often be avoided when different meanings are not assigned to already-used words. In this forum the term ‘dimension’ should be used as defined in physical sciences. We refer to four essential dimensions: L, M, T and I. In the currently used system of units (SI) these four dimensions are represented by the meter, kilogram, second and ampere. Introducing another meaning of the word ‘dimension’ can create confusion.”

25) Larens responded: “By introducing another standard usage of the word ‘dimension’, different from either the way Ed and I were using it, you have highlighted one of the fundamental problems of language. No one can demand that their particular use of a word with multiple definitions is the ‘correct’ one. Everybody needs to be aware of potential confusion and to define their terms.”

26) X6 responded: “No, the definition of words is universal. It does not matter what "dogmatic" view a person has, concepts cannot be communicated unless the words have shared meaning. I find that people who have no idea what they are talking about want to keep the meanings ambiguous so that their ignorance is not revealed. I'm not suggesting this applies to you.  I agree, many things about this world are not easy to define because so little is known.  However, simply throwing out a collection of arbitrary words does not help this situation.   For example, if you use the word consciousness, then I assume you know the meaning you intend this word to have.   I can understand what you mean only if you give a definition.  To me, the word means being aware of self and being able to demonstrate that awareness. A rock, for example, is not conscious because it has no means to be aware of self.  Even a dead person is not conscious even though he might be having an after-death experience because he cannot communicate that experience.  What is your definition of consciousness?

I take your point and an exact definition of all dimensions is, I agree, too much to ask. However, some basic definitions would help.  Simply stating that 27 dimensions exist is not helpful unless you have some reason to believe this is true.  Why choose 27? Why not 50? No matter what number you chose, each example has to be described by the same general definition of dimension. What is the general definition?”

27) Ludwik added (referring to his last message above): "We often use x and y axes to show how one quantity depends on another, for example, acceleration versus time. In that case the dimensions are different along each axis [(L/T^2) for acceleration and (T) for time]. But this is no longer true when coordinates are used to represent space, either 2D or 3D. In this case all dimensions must be (L).

The fourth dimension, introduced in the theory of relativity, is said to be time. This is not true. The fourth dimension is c*t, where c is the speed of light. In other words, all four quantities are dimensionally identical; their dimension is (L). Is the same true when one deals with the 27-D space?"
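
To be explicit about the relativity example (a standard textbook formula), the invariant interval is

    s^2 = (ct)^2 - x^2 - y^2 - z^2,

and each of the four terms carries the dimension (L)^2, so each coordinate carries the dimension (L).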

28) Larens responded: In 27-D physics dimensions are labeled by factors of action or their inverses. Multiplying all the dimensions together should give you the number one. You may also interpret all these labels as coming with constants of proportionality in the manner you did with c, so that each dimension is equivalent.

29) Ludwik: I have no idea what a factor of action is. It is obvious that I am not prepared to understand Larens. The same is probably true for most people on this list. He should address people with the same theoretical background as his own. Why is he wasting so much effort on us?

30) Dean Sinclair wrote. “ . . . As to different Chemistry and Physics approaches, the theoreticians of both "Fields" tend to cloak their ideas in abstruse mathematics.  The more abstruse it is, apparently the more prestigious it is.   Actually, the separation of the field of science into many disciplines obscures the fact that all are examining the same thing, the mystery of existence.”

31) Larens responded: “Ludwik, I am sure you understand what a "factor of action" is. It is just that you have not seen the phrase before. Length, area, time, mass, velocity, momentum, energy, action, etc. are all factors of action, with action most often being factored as momentum times length or energy times time.

32) Ludwik: Why does he need a new term for what is usually called physical quantity?

33) Larens: " ‘Physical quantity’ is a more comprehensive term than ‘factors of action’. Electromagnetic units are physical quantities, for instance, but not factors of action.”

34) Ludwik: As far as I know, electromagnetic units (such as the volt, ampere, ohm, tesla, or V/m) are not physical quantities. They are used to evaluate physical quantities (how much, how strong, etc.).

35) Larens: “As long as we are splitting nanowires - unit quantities are a subclass of quantities.”

36) Ludwik (not posted): I know what units are, but the meaning of the new term “unit quantity” is not obvious to me. Why do we need it? I do not want to argue about definitions. That is why I am giving up responding. What the CMNS field urgently needs, at this time, are reproducible experimental results demonstrating the reality of a chemically induced strong nuclear process. Speculations should be based on such results.

37) Responding to another researcher, Ludwik wrote:
“A mathematician does not need experimental data; s/he can make any set of assumptions and derive resulting consequences. Consequences are valid, unless there is a mathematical (logical) error in the derivation.
But absence of logical errors in a derivation is not sufficient in physical sciences. In that area theoretical derivations are evaluated by reproducible experimental data. Theoretical physicists make predictions and experimentalists test them, in order to validate theories. That is what X7 is saying. I agree with him. We are not mathematicians.”

I made this observation before and no one on the list objected. I wish the CMNS forum were dominated by comments about new and old experiments, and by testable theoretical predictions. Unfortunately this is not the case.

Added on February 20, 2010

Larens continues posting messages which I do not understand. But that does not mean that his observations are not valid. I would be even more confused by reading observations of well-known theoretical physicists, such as Pauli and Dirac.

38) Responding to X8, Larens wrote:
"The 'strength' of a particular reaction is a function of the 'range' and 'coupling constant' of the relevant gauge boson. 'Range' and 'coupling constant', however, are mathematically independent concepts."

This prompted me to ask the following question: “I suppose that the term 'strength' is the same as cross section, usually expressed in barns. But what is the 'coupling constant' and in what units is it expressed? How is it determined for typical well known reactions?” I hope this question will be answered tomorrow. I know what the spring constant is (expressed in N/m), for example, when it couples two macroscopic objects. Perhaps Larens’ model consists of nucleons connected by linear springs. That would be a reasonable model. But he did not say this.

Meanwhile Larens posted another message: “OK, here is a question for the experimenters: What should theory provide for the experimenters, so that they may increase the reproducibility of LENR enough to support the design of commercial equipment? Your answers may help the theorists set their priorities in a field where theory is quite difficult.” That is a good question. Pauli, for example, predicted the existence of particles, named neutrinos, and described their properties. Knowing what to expect, experimentalists were able to design experiments confirming the existence of the predicted particles. Hundreds of experimentalists, all over the world, have been studying properties of neutrinos for many decades. Neutrino experiments, for example at BNL, are reproducible-on-demand.

Responding to Larens' question (about how to help experimentalists), X10 suggested that theoreticians should tell us how to maximize the reaction rates. That would indeed be very useful if experimental results were reproducible. Unfortunately, we are still waiting for this luxury. For the time being the most important task of a theoretician (who wants to help) is to tell us what to do to make experiments reproducible. In other words, to use the term NAE (Nuclear Active Environment) introduced by Ed Storms, a theoretician should tell us what the NAE is and how to create it. Naming something is not the same as knowing what it is.

Added on February 21, 2010

Not surprisingly, Storms responded:
“This is a good question, Larens, and at the risk of wasting your time, I will provide an answer.  A theory must do the following at least:

1. Identify the materials and real-world conditions required for the LENR mechanism to operate.

2. Describe a mechanism that is consistent with all observed behaviors both with respect to LENR and to all other fields of science.

3. Describe a mechanism that does not violate any basic laws of Nature.

4. Identify behaviors uniquely related to the mechanism that can be clearly tested in the laboratory.

5. Provide a clear description of the mechanism with a rational justification of its reality.  This is necessary to justify the investment of time and money to test the claims.

Without #1 being satisfied, experiments cannot be made reproducible.  Without #5 being satisfied, no one will take the time to test a theory.   So far, no theory has satisfied these two essential requirements.  Most violate some or all of the other requirements.

I say this after having taken the time to evaluate all published theories.  This has resulted in a paper that might be published someday. Meanwhile, Brian and I are searching for a mechanism that does satisfy all of these requirements. This is a difficult task unless a very open mind is used to explore ALL possibilities.  I suggest, professional theoreticians have largely failed to find a useful explanation because of their obsessive commitment to a particular approach, even in the face of their failure to explain essential behavior.  Theory needs the same open mind that we in the LENR field insist skeptics show when evaluating the claims based on experiment.  In addition, theoreticians need to explain their ideas with tolerance and care so that their ideas can be understood and applied by experimentalists.”

39) Another experimentalist, Dennis Cravens, wrote:
As an experimenter, I look for definite predictions of theories. Predictions connected to my “knobs” that I can turn. Theories that say that the reaction is via particle X, band structure Y, nuclear reaction Z do not help me too much. I want to know things like: does it predict to turn up my temperature, increase currents, make smaller points on my cathode, make materials with magnetic properties, add some materials that allow for spin exchanges, use light of a given frequency, avoid material X, make particles of size Y, pulse the current, impose a temperature gradient, impose a magnetic field, ……

After most theories I read, I see nothing that helps me know what to do in the lab.  Most don’t seem to give me anything I can use in the lab. Give me nuts and bolts.

In short, give me a theory that makes predictions and is connected with physically real and obtainable conditions and materials.  Which knob do I turn and how far.

40) Edmund Storms wrote:
“. . . Physics is based on a mathematical view of reality from which easily verified predictions can be made, i.e. either a behavior, event or consequence exists or it doesn't.  Chemistry or more exactly materials science is different. Such systems are so complex that mathematical predictions are very limited.  Nevertheless, the mathematicians make an effort.  The result, especially in LENR, is not very useful because it is not related to the real world of knobs, as Dennis says.

Math is based on assumptions, which some people simply ignore. The mathematical equations simply extend and connect these assumptions. If the assumptions are wrong, the math has no meaning.  So, I suggest we start examining the assumptions for a real connection to reality.”

Unless reproducibility on demand is achieved, the future of cold fusion can be described as in the little poem below. Yes, I know that not everything that rhymes can be called a poem; I am not a poet. Note that in this context (as opposed to the context of my just-published autobiography, from which it is extracted), “good” stands for “promising expectations” while “evil” stands for “disappointments.” Both theoreticians and experimentalists should focus on the absence of reproducibility on demand.

  Both good and evil will survive.
  To fight each other and contrive,
  To show and hide, and to refuse,
  To offer something and confuse.

  To give and take, to kiss and bite,
  To make and break, and to excite.
  To promise something in the sky,
  To ruin hopes and say good-bye.

  To feed and starve, to love and hate,
  To burn, to smash and to create.
  To wreck, to torture, to destroy
  To build, to cherish and enjoy.

Ludwik Kowalski
December, 2009.

41) Larens wrote (responding to my question):
" ‘Coupling constants’ are dimensionless numbers that express the strength of fundamental reactions with "sizes" being determined by QM relations. The problem is that they are not really "constants", but are functions of energy with different functions for different forces. There is presumed to be a grand unification energy at which the functions for EM, weak, and color forces converge to a single coupling constant. This is far above the energy of any existing instrument, so values of different functions extrapolated to zero are generally given. EM is particularly simple, so extrapolation only gives the fine structure constant.”

I am lost again. In any case, the adjective “constant” should not be used to describe something that is not constant. I still think that the strength of a reaction is represented by its cross section.
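
If I understand him correctly, the best-known example of such a dimensionless number is the fine structure constant of electromagnetism,

    α = e^2 / (4 π ε0 ħ c) ≈ 1/137,

which characterizes the strength of the electromagnetic interaction without referring to any particular reaction.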

Larens’ 27-D theory should first be tested by comparing its results with reproducible-on-demand data, such as fission cross sections, scattering of neutrons, nuclear transmutations at high energies, etc. Consider n+U235 fission. The cross section, for neutrons of very low energy, is several hundred barns. For neutrons of higher energies, such as 2 MeV, it becomes close to one barn. How would Larens’ theory explain this? Similar questions can be asked about the energy dependence of cross sections (i.e., strengths) of other nuclear reactions. That is what I would do to promote a theory. I do not think that promoting it by using CMNS results will be successful among typical theoretical physicists. I would discuss the theory on their websites, not on a website where most people are not theoretically oriented.
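
For the record, the low-energy behavior is well known; to a first approximation the cross section follows the 1/v law,

    σ(E) ≈ σ(E0) √(E0/E),

reflecting the fact that slower neutrons spend more time near the nucleus. Any competing theory should at least reproduce this kind of energy dependence.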

42) Responding to the Storms message quoted in item 40 above, Larens wrote:
a) Ed, You have made a good summary. A good theory can make rough phase diagrams for ideal materials. The material scientist can then take these, learn how to compensate for non-idealness, and measure specific values for the phase transitions. These can then be used to refine the theory to improve its usefulness. A theory that can explain the 0.875 loading threshold for PdDx has an excellent point of connection with LENR reality, for instance, if it also has good reality connections to the rest of the universe. With at least one such excellent reality connection, there is good hope for it to be predictive of previously unknown phenomena.

b) One cannot expect too much to come out of the theorists rapidly, however. To give an example, in 1955 a group of mathematicians set out to classify all the finite simple groups. After about 500 papers by about 100 authors totaling tens of thousands of pages they announced success. Some holes were discovered. These were patched by 2004. A simplified theorem of only about 5000 pages is now being worked on. The classification includes 27 "sporadic" simple groups (if one includes the Tits group). "Sporadic" means "not in an infinite family" and is equivalent to the word "exceptional" elsewhere in math. It is conjectured that the number "27" is not an "accident", but relates to the 27-D physics I mentioned earlier. This suggests that what we are likely to see in the near future will not be a complete "proof" of principle, but will be a hierarchy of theories that are increasingly ad hoc as they approach the laboratory situation.

c) Chemically assisted nuclear transformations clearly involve energy transfer between different nuclear reactions, because otherwise some of the reactions would violate the conservation of energy. What we need is a joint accounting of all the isotopes before and after a run along with the energy. The complexity of this is comparable with that of contemporary high energy physics, but the equipment for LENR will be much less expensive. The complexity of results should be a good match for the complexity of the theory, so reasonably rapid discovery and development should occur. The LENR community needs to discuss a general plan to obtain and utilize the necessary resources. The offer of resources in the 2004 DOE report might be a point of leverage for discussing the matter with the larger scientific community.

43) Tom Barnard wrote (responding to an earlier message of Storms):
Ed, I have been doing this a long time too. I have read your book. I make judgments, based on my own long experience, on what to believe. You may not like my judgments, I don't like many of yours; we're even. Most experiments are not vetted and not repeatable, so one has to be extremely leery. There aren't too many experimental results in this field that I would call "facts"; just me and my hard won experience talking. I appreciate the criticism. Keep up the good work.


Ed Storms wrote (responding to Larens):
“ a) The PdD0.875 threshold has no relationship to the mechanism because this is the average composition of the cathode, not the composition where the effect actually occurs. The challenge is to identify which property is important to a mechanism.  Is it the deuterium content, the availability of important energy levels, or a magnetic effect, to give a few examples? Unless the important basic property is identified, too many materials become possible candidates.

b) I agree, the process will be slow and filled with false starts. However, some approaches are useless and should not be used to waste valuable talent and money.

c) We now know that helium production is the main nuclear product and that the energy released is close to that known to result from D-D fusion. The big issue is the unique condition that initiates the reaction and how energy is released to the outside world.  Unfortunately, money is not available to explore all of the possibilities as would be the case if this were an accepted phenomenon.”

Added on February 22, 2010
There was an avalanche of messages, most of them about theories. It is not my role to record them all. Future investigators will be able to find them in the list’s archive. But how can I resist quoting four messages posted by experimentalists?
44) Jack Dufour wrote:
“Dear Andrew, I think the point you raised (coupling constant of the strong force) is well documented. For distances between nucleons higher than 1 fm, experimental data are well fitted with g2/hc = 14.5. This has been fully accepted by the referees I am discussing with. See also "Le monde subatomique" by Luc Valentin (Hermann, 1986), tome 1, p. 62. For lower distances, this is no longer valid, but d/d nuclear reactions imply distances higher than 1 fm. The coupling constant of the interaction mediated by the neutral and virtual electron-positron pair (range h/2mec = 193 fm) experimentally determined in the alpha disintegration case (which I did) is 3.8x10-6 g2 (# 8x10-3 e2). You cannot invoke higher values, otherwise you get trouble with the very well experimentally documented alpha disintegration case. So the (hypothetical) Yukawa plays only a role for energies of the deuteron of a few keV (same order of magnitude as the electrons screening). Its role is totally negligible at eV (room temperature) levels.  (NB In the formulas I wrote, h stands for h bar.)”


X9 wrote:
I'm at a loss to understand your attitude, Larens. I simply told you a fact.  The value you quote is the average composition of the cathode and it is a fact that the reaction is not occurring at a composition equal to the average. It does not matter to me if you believe this or not. It only matters that you understand what I actually said. Also, I have no theory and I offered none. I simply suggested several examples of properties that might be important. I'm willing to discuss CF with you, but I expect in return you will trying to understand what I'm actually saying.

P.S.
Excuse me for being unclear.

My comment concerned the statement, "The reality is that there is a tight correlation between heat  output and loading above the 0.875 threshold", which was then disputed. The statement is fairly accurate, albeit I do not believe this is known to three significant figures. As to the 27-D theory, it or any other theory will have to ‘put its neck on the chopping block’ based upon experimental data.

Ed Storms wrote:
Tom, I share your definition of like and dislike. Of course, some data is better than others and some data is clearly wrong.  That is why I rely on patterns of behavior that is supported by many studies.  In addition, I expect people will have different opinions about what to accept and what to reject.  I only ask that these opinions be based on logic and fact, not on emotion.  My role has been to examine all the published information and apply objective criteria to its evaluation as much as possible.  In my comments to you, I'm only trying to share this evaluation. Of course, you are free to accept or reject the information.  Nevertheless, people in the field and especially theoreticians need to agree on what is real and what is imagined in both theory and experiment.  Otherwise, we will all be going off in scattered directions with no hope of arriving at agreement.

I agree, the meaning of the word ‘fact’ is sometimes in the mind of the beholder. However, we need some way of distinguishing information that has good support and consistency with scientific understanding from opinion.  In my case, I try to use information that can be well defined and I call this ‘fact’.  If you disagree, please explain why you think something I said is not "fact" and I may change my mind.

Michael McKubre wrote:
“All, The correlation between the ability of a cathode to attain and maintain "high" loading (>0.85-0.90) and the capacity to produce excess heat was first reported simultaneously and independently by K. J. Kunimatsu's group at IMRA Japan and my group at ICCF3 in Nagoya.  This result has significance both as a means of explaining earlier null results and, as Larens notes, as an insight into potential mechanisms.

For the latter it is worthwhile perhaps exploring a little further what has been reported and what that might mean for the lattice condition that facilitates or hosts excess energy production.  The resistance ratio measurement that we co-opted for use to determine the D/Pd loading necessarily measures a bulk average value.  In the SRI experiments (for the most part) we kept the cathode as uniform as possible* and the diffusion coefficient of D in Pd is very high (>>10^-7 cm^2 s^-1).  For both reasons, at the time of interest (after long, slow loading), the activity of D in the Pd is rather uniform throughout the length and depth of the cathode material so that an average value has general and specific significance**.

Although I am not going to cite any data or sources here it seems clear that the energy producing reaction occurs not homogeneously throughout electrodes (no matter how uniform their environment) but rather in small, discrete, "special" places***.  What causes these "special zones" to become specially effective after very long times at effectively uniform deuterium activity?  Is it something intrinsic to the lattice - already built in?  Or is it something that grows in the lattice or on the surface with the long initiation time?  

I have some ideas but do not have the answers.  I do appreciate the "new thinking" provided particularly by Andrew, Tom & Larens, under pedagogical probing from Ed.  It is crucial to understand the environment in which the effect occurs, and then to understand what triggers it.  Given both, theory will seem (in hindsight) to be obvious.  But it is precisely in getting to that place of understanding that theory can help most in guiding experiment.
-Mike
- - - - - - - - - -
*  Ed mentioned lattice geometry effects, and there does seem to be a difference in threshold (and consistency of heat production) between wire and foil cathodes that I attribute largely to the intrinsic inhomogeneity of current density and loading (and deuterium flux) of foil cathodes in "sandwich" anodes compared with wires with axially symmetric anodes.

** This is easily modeled.  Anyone in doubt should calculate (say) the deuterium flux needed to sustain a compositional gradient of  ±0.01 D/Pd for a  50 µm thick foil (or even 1 mm dia. wire) - and then recognize what that flux is doing to load or unload the Pd.

*** Thermal imaging, helium production (and partial retention), and "common sense" place this zone at or near the surface; Ed has called it the "Nuclear Active Environment" or NAE.”
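
McKubre's second footnote invites the reader to do the arithmetic. Here is a rough back-of-the-envelope sketch of that estimate; the palladium atomic density is a nominal value I am assuming, and the diffusion coefficient is simply the lower bound he quotes:

    # Rough estimate (Fick's first law) of the deuterium flux needed to sustain
    # a +/-0.01 D/Pd gradient across a 50 micrometer Pd foil, and of how fast
    # that flux would change the average loading.  Nominal, assumed values.
    D_diff = 1e-7        # diffusion coefficient of D in Pd, cm^2/s (McKubre: >> 1e-7)
    thickness = 50e-4    # foil thickness in cm (50 micrometers)
    n_Pd = 6.8e22        # Pd atomic density, atoms/cm^3 (assumed nominal value)
    delta_x = 0.02       # D/Pd difference across the foil (+/- 0.01)

    delta_c = delta_x * n_Pd                # D concentration difference, atoms/cm^3
    flux = D_diff * delta_c / thickness     # Fick's law: J = D * dc/dx, atoms/(cm^2 s)
    pd_per_area = n_Pd * thickness          # Pd atoms per cm^2 of foil
    rate = flux / pd_per_area               # change of the average D/Pd ratio per second

    print(f"flux ~ {flux:.1e} D atoms per cm^2 per second")
    print(f"loading/unloading rate ~ {rate * 3600:.2f} D/Pd per hour")

With these numbers the flux comes out near 3 x 10^16 atoms per cm^2 per second; sustaining even a ±0.01 gradient would load or unload the foil at roughly 0.3 D/Pd per hour. Since the actual diffusion coefficient is larger, any internal gradient relaxes quickly, which, if I read him correctly, is why the bulk-average loading is representative after long, slow loading.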

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

ADDITIONAL MESSAGES FROM LARENS (INSERTED LATER AT HIS REQUEST)

On Sun, 21 Feb 2010 18:07 -0700, "Edmund Storms" wrote:

The PdD0.875 threshold has no relationship to the mechanism because this is the average composition of the cathode, not the composition where the effect actually occurs. The challenge is to identify which property is important to a mechanism.  Is it the deuterium content, the availability of important energy levels, or a magnetic effect, to give a few examples? Unless the important basic property is identified, too many materials become possible candidates.

Larens wrote:
You provide a great example of disconnection of theory from reality. The reality is that there is a tight correlation between heat output and loading above the 0.875 threshold. You declare as a "fact", however, that this correlation is not related to causality. You insist without any supporting evidence that there is a "mechanism" elsewhere where the "effect actually occurs". Your theory moreover does not even offer a clue as to whether "deuterium content, the availability of important energy levels, or a magnetic effect", or something else is important to this hypothetical mechanism elsewhere.

On Sun, 21 Feb 2010 22:10 -0700, "Edmund Storms" wrote:

I'm at a loss to understand your attitude, Larens. I simply told you a fact. The value you quote is the average composition of the cathode and it is a fact that the reaction is not occurring at a composition equal to the average. It does not matter to me if you believe this or not. It only matters that you understand what I actually said. Also, I  have no theory and I offered none. I simply suggested several examples  of properties that might be important. I'm willing to discuss CF with  you, but I expect in return you will trying to understand what I'm  actually saying.

Larens wrote:

The problem is that you are assuming a conventional 3-D geometry, and then assuming your conclusions thereof are a "fact". I choose to use the 3-D "nonparticle subspace" of 27-D physics. In this case the average composition of an electrode controlling behavior apparently at its surface is not a contradiction, but is the preferred interpretation of events. My choice of 27-D physics is not to fit this particular case, but is to provide explanations for many phenomena that do not have a plausible explanation using conventional geometry and physics. Your assumption of conventional physics leads you to no theory for LENR and therefore no way of optimizing it as a technology.

Added on March 2, 2010

I am skipping a lot of messages, nearly all of them theoretical and without an obvious connection to the CMNS field. Some of these messages contained personal insults. One prominent CMNS contributor declared that he would stop posting comments. Here is my contribution; it was a reply to X10:

45) Ludwik Kowalski wrote:
1) As often emphasized by Ed, in order to be accepted, a new theory must not conflict with those theories that have been successful in explaining and predicting real things. The special theory of relativity, as you certainly know, is in perfect agreement with Newtonian mechanics at low velocities. I agree with Ed that everyone who has a new theory should first use it to explain well-known facts (such as elastic and inelastic scattering, alpha decay, thermonuclear reactions, fission, etc.).
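
A one-line illustration of that agreement: the relativistic kinetic energy

    E_k = m c^2 (γ - 1),   γ = 1 / √(1 - v^2/c^2),

reduces to the Newtonian (1/2) m v^2 when v << c.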

2) Elastic scattering is not the only phenomenon demonstrating the reality of Coulomb barriers. Why do you think the half-life of alpha decay, T, depends on the energy of the emitted particles? Typically, T is longer when energies are lower. I am saying "typically" because T also depends on other factors (spin, parity, etc.). Without Coulomb barriers, all alpha-unstable nuclei would decay immediately after being formed. For high-energy decays (emission of alpha particles with higher energies) the effective barrier that the particle must tunnel through is thinner. That is why the tunneling probabilities are higher than when the decay energies are low. This tendency, known as the Geiger-Nuttall rule, was recognized before Gamow's theory of tunneling was formulated.
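[In its standard textbook form, not quoted in the posted message, the Geiger-Nuttall rule relates the half-life to the decay energy as

\log_{10} T_{1/2} \approx \frac{a\,Z}{\sqrt{Q_{\alpha}}} + b

where T_{1/2} is the half-life, Z is the atomic number of the daughter nucleus, Q_{\alpha} is the energy released in the decay, and a and b are empirical constants fitted to data. The 1/\sqrt{Q_{\alpha}} dependence is precisely what Gamow's barrier-penetration calculation later explained.]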

3) As I said in point 1, a new theory should first be used to explain well known facts. I see three advantages in doing this:

a) You would be sure that experimental data are more reliable.

b) It would be much easier to promote your theory, i.e. to publish papers in widely circulating journals. Editors would not reject your papers without sending them to referees, as they often do with CMNS-related papers.

c) You would be addressing people who know theoretical physics, and mathematics, much better than most of us on the CMNS list.

Added on March 8, 2010

Once again I am skipping a lot of messages.


45) Ed Storms, responding to Larens, wrote:
I see we come at the problem from different directions. You look at the problem from the bottom-up like a basic scientist and I look at it from the top-down like an engineer. I identify the environment first because the effect is obviously very sensitive to the environment. In contrast, the nuclear world that you use as your basis is not sensitive to the environment, hence the environment can be ignored. Cold fusion is an anomaly because it combines nuclear reactions with the chemical environment.  As a result, nuclear physicists and chemists are needed but are incapable of communication with each other because they use  entirely different concepts and words.

You probably have not spent much time in the laboratory trying to study the CF phenomenon, so that you don't appreciate the role of materials as much as I do.  It is obvious, no matter what you want to measure, an instrument must be used for the measurement to be made. But first, you need to decide which behavior should be measured.  So, I first ask you, what behavior should people measure?  People have measured the deuterium composition, the power production, the various radiations, helium-4, helium-3 and tritium. People have looked for superconductivity, magnetic effects, and interaction with laser light. What property should we look at next? What property do you think is most important to understand?

But, to justify setting up expensive equipment, a person must be confident the phenomenon can be made to occur. That is the most serious problem right now. You say that "The characteristics of materials seem mainly to be whatever is necessary to create the necessary densities and fluxes". Yes, but what are these materials? That is the question I'm trying to answer. You ask for an accounting of energy. How would you suggest this accounting be done? What instruments should be used? These are the kinds of answers I need in the laboratory before I can provide you with the answers you need. I'm most happy to provide information so that you can answer your questions, but I need my answers first.


Larens responded:
I look at the problem comprehensively like a basic scientist. You look at it with a narrow focus like an engineer. I defined the problem as LENR, with its many reactions. You redefined the problem as "cold fusion", a narrower concept that you are personally interested in. I use the entire universe as my basis, since I must maintain consistency with all proven math and physics. You, with your tunnel vision, can only see the nuclear part of my universe.

I understand the importance of your question, "What behavior should people measure?" Before that, however, is the question, "Of which phenomenon should we measure the behavior?" LENR is too complex for us to build a comprehensive model without setting priorities. Using existing data we should build a phenomenological model until we see that there are key questions that need to be answered with new experiments. With the new data we can further build the model, thus iterating the research process. At the moment the existing data is incoherently scattered about, because too many people have been pushing their own pet ideas to the point of suppression of more important ideas. Before I can answer your questions with confidence, I must stop finding so many dusty corners with important data. In the meantime, people's suggestions as to what they think is most important for model building are most welcome.


Ludwik wrote:
Neither theoreticians nor experimentalists should think in terms of their own superiority. Mutual respect and mutual help are needed. It is difficult to become a good experimentalist; it is difficult to become a good theoretician. Theories must be validated in terms of experimental data and experimental data must be explained in terms of theoretical considerations. We need each other.


Responding to Larens, Storms wrote:
Once again Larens, you can't seem to restrain your judgmental compulsion. First of all, I don't define the problem with a narrow focus. To me, the title "cold fusion" is only a convenient description and not meant to be limiting. My focus is as wide as yours. The difference is one of practicality. In my mind, we have to learn how to crawl before we can walk. A broad basic theory, similar to a philosophy that can't be applied, is useless. We need to start with ideas that have a direct relationship to observation. Second, I agree, the explanation must have mathematical consistency and be consistent with what is already known, as I have said many times. The challenge is to get to this universal goal. Our conflict is over the best path. So, please suggest a useful path and drop these irrelevant judgments. You have a lot of scientific knowledge that this field could use if you are willing to debate and teach. So, please help us find a useful path.

Agreeing with the above, X wrote:
". . . For a man with a hammer, every problem looks like a nail. . . .".

Responding to X, Storms wrote:
I agree, no published theory has been useful. However, this is not the whole story. Everyone who does experimental work always has a model in their head that guides their work and is used to understand the results. These models are never published until the observations demonstrate that the final version is correct. Generally speaking, the person who has the most adaptive model that is close to reality wins the race. In my case, I'm often amazed how often the results are totally unpredicted and, as a result, send the work in a new and unexpected direction. My analogy is the exploration for gold. We are prospectors who are searching in random directions. Occasionally, largely by chance, we find a nugget. Once this happens, all the other prospectors rush to the spot and start digging to find the ore body. So far, the ore body remains elusive. Hopefully, someone will find a map to its location. So far, all the maps have sent people in the wrong direction and many have even ignored the locations of nuggets that were found. To carry the analogy further, some people even want to suggest that the discovered nuggets are not even real gold. (Sigh.)

Larens wrote:
I see that Ed is taking my "modest proposal" satire in earnest by expanding it to the hijacking and derailing of this thread. Per my satire he rephrases my ideas and claims them as his, then creates a "conflict" by claiming there is a basic "difference" between our two statements. He does this well enough to get Bill to agree with "him", even though Bill is really agreeing with the basic ideas that I first presented. Having claim-jumped my ideas, Ed can then assert that I am not presenting anything "useful". He then presents a gold-digging analogy to focus on the incoherence of the situation, thus derailing my attempt to have a thread which presents a "useful path" of coherence to CMNS by which to build a phenomenological theory.

Ed has previously stated his motives for doing this. He believes that LENR can be explained by conventional physics and that discussing the proposition that it requires new fundamental physics will inhibit the acceptance of LENR phenomena by the scientific establishment. As an editor of scientific journals he feels justified in engaging in rhetorical warfare on CMNS to block the discussion of such new physics. He has a highly committed position, for he is willing to ignore the fact that, to the degree that new physics is required to explain LENR, he is destroying the possibility of the scientific establishment accepting LENR as theoretically well founded.

To solidify his position he takes my invitation for people to offer "what they think is most important for model building" and twists it into a demand that I, as the theorist, must offer all the suggestions. He offers no suggestions, instead demanding that I tell him how energy "accounting be done" and "What instruments should be used?", even though research engineers who have hands-on experience with contemporary equipment are generally the ones best able to handle these details of experiment design.

Ludwik wrote:
Let us face reality: the CMNS field is still waiting for a recognized reproducible-on-demand demo of a strong nuclear process due to a chemical process. That is why we have no choice but to lean on already-validated theories. Doing anything else would amount to combining two unverified things: experimental claims and theories. This seems to be a bad strategy; it is like the blind leading the blind.

Unverified theories should FIRST be tested on experiments which are reproducible. That is what I would do first if I had a new theory and wanted to offer it as a guide for experimentalists in the CMNS field. What is wrong with this position?

Larens wrote:
Ludwik, The reality is that when establishment mathematical physicists see the type of explanations on CMNS that "lean on already-validated theories" they refer to them as "handwaving", or perhaps some more derogatory term, and to the people who believe the explanations as "idiots". You might as well be saying that we have "no choice" of strategy but to shoot ourselves in the head.

I have been pointing out that the successful extension of fundamental theory is probably too difficult to get good short-term results, and that the proper way to deal with this is to build a good phenomenological theory. Establishment mathematical physicists will be able to understand this problem and respect this strategy, because they well understand the complexity of modern physics.

There are two things wrong with your proposition that new theory "should FIRST be tested on experiments which are reproducible", and not in the CMNS field:

1) CMNS experiments ARE largely reproducible, so they are a valid first target for a new theory. To think otherwise is to concede a lack of confidence in the field.

2) The ONLY tests that validate a new theory are ones in which the predictions differ from the old theory. The relevant fields of testing are a small set, and all suffer problems of acceptance because they differ from the conventional wisdom. One of the best ways to pick a field is to find one where there are strong economic rewards for success. CMNS is clearly the best field to choose in this regard.


Ludwik, responding to Larens:

*) I wrote: "the CMNS field is still waiting for a recognized reproducible-on-demand demo of a strong nuclear process due to a chemical process."

*) You wrote: " CMNS experiments ARE largely reproducible." 

1) Which demo, according to you, is recognized (by mainstream scientists) as "reproducible on demand?" 

2) What is wrong with the idea of FIRST validating your theory by using  recognized reproducible-on-demand data?

3) What is wrong with FIRST presenting and defending your proposed theory in journals for theoretical physicists (where it is more likely to be understood)?  
Ed Storms wrote:
Since, thanks to Larens, the discussion has drifted from science to philosophy, I would like to add my two cents to what you have described. You raise the issue of whether theory at some level has to have a relationship to reality or can stand on its own as a consistent mathematical construction. Once this construction has been created, it is maintained as a representation of reality only because the theoreticians who accept this construction will not allow any other construction to take its place. They use denial of publication and personal attack, as Larens is good at doing, to discourage debate.

I suggest the solution, as has been the case in all other fields, is to obtain so much experimental information that the accepted construction cannot stand. A change in attitude is not based on a proof, as Larens describes, but on an accumulation of so much consistent information supporting a different conclusion that rational people gravitate to the new idea.  This gravitation is gradually occurring in CF. For example, the idea that clusters of deuterons are involved was not considered initially, but now is being accepted as producing fusion as well as transmutation.  This is progress and it is one small step toward a basic theory.  This means that a theory must examine the nature of the cluster in addition to addressing the nature of the nuclear process.  Once this idea is accepted, all theory of LENR will take a different path and be more successful.


Larens responding to Ludwik (point by point) :
1) None of them [demos] are, because mainstream scientists have been mostly seduced into a fictitious world where all CMNS experiments are the results of incompetence or fraud, and are never published in peer reviewed journals. When dealing with such fantasy, one must focus on the best means psychologically for disintegrating the fantasy. Pointing out clearly unjust machinations of editors is much better than trying to find "perfect" experiments, because experiments are relatively expensive and more difficult to interpret.

2) Because there are none, per my answer to question 1).

3) To penetrate the PURE theoretical physics world is even more difficult than promoting CMNS. Clubs of dozens of paid theorists who do not have to produce any testable results have developed in the last couple of decades around Planck scale string theory. An unpaid outsider cannot compete in such a contest, because he will not be able to produce the necessary volume of "elegant" mathematics. Indeed, because he "is too small to succeed", he will not even be able to get through the door far enough to get the feedback necessary to find the best formulation of his work for future acceptance.

On the other hand, if he goes the more realistic route of doing combined PURE and APPLIED physics by showing that his theory gives successful results where the old theory does not, that takes us right back to the problem of dealing with experiments of questionable acceptability.


Ludwik wrote, addressing Larens:
1) I am surprised by your answer to the third question. Is this a well known fact or is it only your personal opinion?

2) The CMNS field has a reasonably well organized community fighting the discrimination against it. Does something like this exist among string theory practitioners? Please elaborate.

3) I suppose most string theorists are Ph.D.-level researchers. Is this correct?

Larens wrote (responding point by point):
1) Lee Smolin and Peter Woit have each written popular books saying that string theory hegemony is destructive for theoretical physics. I was just giving it my personal spin.

2) There is not yet an organized opposition, because, even though many people are concerned, there has been the lack of any clearly superior alternative to organize around.

3) Yes, [most of them are Ph.D.-level researchers].

Ludwik (not posted):
1) I still think that attempts to validate theories should be made in suitable platforms (lectures, journals, discussion lists, etc.). Most people on our CMNS list, including myself, are totally unqualified to provide the feedback needed by a string theorist. Larens is probably well aware of this.

He wrote: "To penetrate the PURE theoretical physics world is even more difficult than promoting CMNS. Clubs of dozens of paid theorists who do not have to produce any testable results have developed in the last couple of decades around Planck scale string theory." This looks like a division within the string theory community (the Plank scale group versus others), not a conflict between mainstream theoretical physicists and string theory specialists. In that sense, the situation is different from what we have in CMNS.

I do not know what PURE means here. Does it mean "not APPLIED"? Does it mean "mathematics only"? "PENETRATE" probably means to have a theory accepted (recognized as valid). Mathematicians would validate any logically derived theory; the only cause for rejection would be a mathematical error. Physicists, by contrast, reject theories whose conclusions conflict with reality.