Friday, October 10, 2025

How We Got Misinformed About "Grandmother Cells"

Each neuron fires between about 1 and 200 times per second, with the firing rate varying unpredictably. Neurons are therefore a noisy, unpredictable signal source; and that kind of source provides opportunities for noise mining and pareidolia, the occasional finding of some desired pattern by people scanning noisy, variable data looking for such a pattern. Similarly, at a restaurant that makes 200 pieces of toast every day using different types of bread, there is an opportunity for noise mining: someone checking each piece of toast may eventually claim to see the face of Jesus in a slice of toast. 

Let us look at the history of claims of "grandmother cells." The term refers to some neuron that might allegedly respond only when a person sees some particular type of visual stimulus, such as a picture of the person's grandmother. The 2002 article "Genealogy of the 'Grandmother Cell'" by the late Charles G. Gross gives us some background on how the idea of such a cell got started. Gross tells us this:

"The term originated in a parable Jerry Lettvin told in 1967. A similar concept had been systematically developed a few years earlier by Jerzy Konorski who called such cells 'gnostic' units."

So according to Gross, the concept of a "grandmother cell" arose independently of observations, without any empirical warrant.  But then Gross starts telling an unwarranted self-serving tale that evidence was found supportive of such an idea. He claims, "In the early 1970s, my colleagues and I working at M.I.T. in Cambridge, Massachusetts, reported visual neurons in the inferior temporal cortex of the monkey that fired selectively to hands and faces (Gross and others 1969, 1972; Gross 1998a)." Gross is here engaging in self-citation. Let us look at the papers Gross refers to, and see whether they actually gave any evidence to back up such a claim. 

  • The 1969 paper "Visual Receptive Fields of Neurons in Inferotemporal Cortex of the Monkey" by Gross and others, which you can read here. It offers no specific data backing up any claim that anything like a neuron responding only to some particular image had been found. We merely have this vague statement: "There were several units that responded most strongly to more complicated figures. For example, one unit that responded to dark rectangles responded much more strongly to a cutout of a monkey hand, and the more the stimulus looked like a hand, the more strongly the unit responded to it." The paper gives no data backing up such a claim. 
  • The 1972 paper by Gross is the paper "Visual properties of neurons in inferotemporal cortex of the macaque." Only the first page of the paper is publicly available here. That page makes no claims backing up anything like a grandmother cell. 
  • The "Gross 1998a" citation is a citation of the book "Brain, Vision, Memory" by Gross, which you can read here.  On page 198 Gross claims that "he did not publish a full account of a face-selective neuron until 1982," which shows that the previous two citations were inappropriate. On the same page Gross misspoke by claiming that "soon thereafter, a flood of papers on such cells appeared." No such flood occurred. He mentions a 1982 paper by Perrett, Rolls and Cann ("Visual Neurones Responsive to Faces in the Monkey Temporal Cortex"), which you can read here

Nothing in any of these citations supports the claim that anything like a face-selective cell or a hand-selective cell was discovered. If we look at the 1982 paper by Perrett, Rolls and Cann ("Visual Neurones Responsive to Faces in the Monkey Temporal Cortex"), which you can read here, we also find nothing impressive. The paper claims that "Of the 497 cells recorded in the STS region there was a sub-population of at least 48 cells which gave responses to the sight of faces that were two to ten times as large as the responses to other stimuli tested." There is no claim that these cells fired only when faces were shown, and Figure 3 (cherry-picked as the strongest evidence of a "face responsive cell") shows the cell firing many times when things other than faces were shown. The graphs in the diagram are examples of cherry-picking, showing results from a few cells that seemed to fire the most when the subject was shown faces. 

Some mathematical analysis will show how unimpressive the result discussed above is. In the study there were five types of sensory stimuli: faces, gratings/geometric stimuli, complex 3D stimuli, somatosensory stimuli, and auditory stimuli. Let us imagine that we are recording how 497 cells respond when a subject is exposed to one of a small number of categories of stimulus, such as five. Given a high random variability in how the cells respond, with the firing rates varying randomly between 1 and 200 times per second, and given a relatively small number of trials and only a small number of types of stimulus (such as only five), we would expect that by chance about 10% of these cells would fire twice as often or more when a subject is exposed to one of the five types of stimulus. So the reported result that "there was a sub-population of at least 48 cells which gave responses to the sight of faces that were two to ten times as large as the responses to other stimuli tested" is not something unexpected, assuming purely chance results, and no actual "face sensitivity" or "face selectivity" going on in the cells. 
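To see concretely why such a result would be expected from chance alone, consider the following simulation sketch. It is a minimal illustration rather than a reconstruction of the actual experiment: I assume 497 purely random cells, five stimulus categories, and one noisy measured response per category per cell, drawn from a right-skewed distribution spanning roughly 1 to 200 spikes per second (real cortical firing-rate distributions are heavy-tailed, with most cells firing slowly most of the time). The exact percentage printed depends on these assumptions.

```python
import random

# Minimal sketch (my assumptions, not the procedure of the Perrett paper):
# purely random cells with no genuine face selectivity, five stimulus
# categories, one noisy measured response per category per cell.
N_CELLS = 497
N_CATEGORIES = 5  # faces, gratings, 3D objects, somatosensory, auditory

def noisy_response():
    """One measured mean firing rate: skewed, mostly low, occasionally high."""
    return min(200.0, 1.0 + random.expovariate(1 / 25.0))

random.seed(1)
apparent_face_cells = 0
for _ in range(N_CELLS):
    responses = [noisy_response() for _ in range(N_CATEGORIES)]
    face, others = responses[0], responses[1:]
    # The paper's criterion: the response to faces is at least twice as
    # large as the responses to the other stimuli tested.
    if face >= 2 * max(others):
        apparent_face_cells += 1

pct = 100 * apparent_face_cells / N_CELLS
print(f"Cells that look 'face selective' by chance alone: {apparent_face_cells} ({pct:.1f}%)")
```

Whatever the exact figure such a simulation prints under any particular set of assumptions, the point stands: with hundreds of noisy cells and only a handful of stimulus categories, some subpopulation will always look "selective" for one category purely by chance.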

The claim by Gross to have discovered neurons that "fired selectively to faces and hands" was false. Neither he nor anyone else discovered any such thing. All that was going on was noise-mining. Monkeys were being shown different visual stimuli, including faces and things that were not faces. The firings of hundreds of neurons were recorded, and researchers were drawing attention to the cells that happened to have the highest firing rate when faces were shown. No evidence was presented that neurons fired more during face observation than we would expect from a random set of randomly firing cells with high variability. 

Later in the 2002 article "Genealogy of the 'Grandmother Cell'" by Charles G. Gross, Gross makes this claim: "Starting 10 years later, these finding were replicated and extended in a number of laboratories (e.g., Perrett and others 1982; Rolls 1984; Yamane and others 1988) and were often viewed as evidence for grandmother cells." The references do not actually refer to any papers providing evidence for grandmother cells. The 1982 Perrett paper is discussed above, and did not find any such evidence, but merely claimed "Of the 497 cells recorded in the STS region there was a sub-population of at least 48 cells which gave responses to the sight of faces that were two to ten times as large as the responses to other stimuli tested."  The Rolls 1984 paper is the paper "Neurons in the cortex of the temporal lobe and in the amygdala of the monkey with responses selective for faces."  It is merely another paper picking out some cells out of hundreds that fired more often when faces were shown, while also firing when things other than faces were shown. 

None of the papers that Gross has cited could intelligently be interpreted as evidence for grandmother cells, so Gross misleads us badly by claiming that such papers "were often viewed as evidence for grandmother cells." Later Gross confesses, "However, most of the reported face-selective cells do not really fit a very strict criteria of grandmother/ gnostic cells in representing a specific percept, that is, a cell narrowly selective for one face and only one face across transformations of size, orientation, and color (Desimone 1991; Gross 1992)." At the end of the paper, Gross deceives us by trying to make it sound like these alleged "face selective" cells may be something like "grandmother cells." But no evidence he has presented or cited has given any evidence for such "grandmother cells." 

The next big development on this topic occurred when scientists started reading the firings of neurons in individual humans. This is something that cannot be done by simply having a person wear an EEG cap on his head. The reading of firings of individual neurons in humans requires the implanting of electrodes into the brain. Some people with drug-resistant epilepsy have electrodes implanted in their brains so that doctors can figure out the best place to do surgery to help treat their epilepsy. Neuroscientists have tried to leverage the implanting of such electrodes to study the firing of individual neurons in the human brain. 

This has often been a morally objectionable type of activity by neuroscientists. The type of electrodes implanted in a brain to evaluate a patient for epilepsy are called macroelectrodes.  The type of electrodes implanted to record the firing of individual neurons are called microelectrodes. There is never any medical justification for implanting microelectrodes in addition to macroelectrodes. A scientific paper tells us, "Sixty-five years after single units were first recorded in the human brain, there remain no established clinical indications for microelectrode recordings in the presurgical evaluation of patients with epilepsy (Cash and Hochberg, 2015)." In other words, there is no medical justification for implanting microelectrodes in the brains of epilepsy patients. Here is a quote from a scientific paper:

"The effects of penetrating microelectrode implantation on brain tissues according to the literature data...  are as follows:

  1. Disruption of the blood–brain barrier (BBB);
  2. Tissue deformation;
  3. Scarring of the brain tissue around the implant, i.e., gliosis 
  4. Chronic inflammation after microelectrode implantation;
  5. Neuronal cells loss."

What is going on with attempts to find something like grandmother cells in humans is typically a morally objectionable affair in which very sick people are being put at unnecessary risk for the sake of scientists seeking fame and glory. Such affairs are so morally dubious that we should have a natural tendency to distrust the statements of scientists doing such research, just as we should have a natural tendency to distrust the statements of any person engaged in a reckless or shady activity. 

Similar to claims of a "grandmother cell" are claims of a "Jennifer Aniston neuron" that was activated only when an epileptic subject was shown a picture of Jennifer Aniston. The claim is unfounded, and does not match the data in the original paper. For a discussion of the shady business that went on when claims like this were made, see the last seven paragraphs of my post here.

plan for becoming famous scientist

At the link here, a vision scientist describes some of what is going on in studies like those mentioned above:

"Neuroscience, as it is practiced today, is a pseudoscience, largely because it relies on post hoc correlation-fishing....As previously detailed, practitioners simply record some neural activity within a particular time frame; describe some events going on in the lab during the same time frame; then fish around for correlations between the events and the 'data' collected. Correlations, of course, will always be found. Even if, instead of neural recordings and 'stimuli' or 'tasks' we simply used two sets of random numbers, we would find correlations, simply due to chance. What’s more, the bigger the dataset, the more chance correlations we’ll turn out (Calude & Longo (2016)). So this type of exercise will always yield 'results;' and since all we’re called on to do is count and correlate, there’s no way we can fail. Maybe some of our correlations are 'true,' i.e. represent reliable associations; but we have no way of knowing; and in the case of complex systems, it’s extremely unlikely. It’s akin to flipping a coin a number of times, recording the results, and making fancy algorithms linking e.g. the third throw with the sixth, and hundredth, or describing some involved pattern between odd and even throws, etc. The possible constructs, or 'models' we could concoct are endless. But if you repeat the flips, your results will certainly be different, and your algorithms invalid...As Konrad Kording has admitted, practitioners get around the non-replication problem simply by avoiding doing replications.” 

Later in the same scientist's blog, we read this comment from 2023: "Articles published during the past decade bemoaning the inability of mainstream neuroscience to generate replicable or even reproducible outcomes are too many to count."  In the same post the scientist states this:

"If we weren't living it, it would be hard to imagine how a research culture could have strayed so far from the path of rationality as has the culture of neuroscience. Fundamental problems in theory and method have long been flagged (e.g. Teller, 1984; Jonas & Kording, 2018; Brette, 2019), but critiques have left barely a trace on the hard-beaten track of routine, mainstream practice."

Monday, October 6, 2025

Professors Acting Spooky-Stupid Outnumber Professors Acting Spooky-Smart

"Discovery commences with the awareness of anomaly, i.e. with the recognition that nature has somehow violated the paradigm-induced expectations that govern normal science. It then continues with a more or less extended exploration of the area of anomaly. And it closes only when the paradigm theory has been adjusted so that the anomalous has become the expected.”

― Thomas Kuhn, The Structure of Scientific Revolutions

The average person may occasionally read about the paranormal, and may get the impression that it is some extremely rare thing, based on how infrequently it is reported. But there are reasons for thinking that what you read about the paranormal is just the tip of the tip of the iceberg. Instead of being a “blue moon” type of thing, the paranormal may be extremely common.

  • In Arcangel's study of 827 people, 596 (72%) responded that they had had an "afterlife encounter." We read: "69% of respondents listed some form of visual encounter (Question 4), 19% were Visual only, 13% were a combination of Visual/Auditory, 8% Visual/Sense of Presence and 8% Visual/Auditory/Sense of Presence."
  • Erlendur Haraldsson surveyed 902 people in Iceland in 1974, finding that 31% reported seeing an apparition or having an encounter with a dead person.  He did another survey in Iceland  in 2007 with a similar sample size, finding that 42% reported seeing an apparition or having an encounter with a dead person, with 21% reporting a "visual experience of a dead person,"  along with 21% reporting an out-of-body experience. 
  • According to the paper "Psychic Experiences in the Multinational Human Values Study: Who Reports Them?" here: "Three items on personal psychic experiences (telepathy, clairvoyance, contact with the dead) were included in a survey of human values that was conducted on large representative samples in 13 countries in Europe and in the U.S. (N = 18,607). In Europe, the percentage of persons reporting telepathy was 34%; clairvoyance was reported by 21%; and 25% reported contact with the dead. Percentages for the U.S. were considerably higher: 54%, 25% and 30% respectively.".  
  • A 1973 survey of 434 persons in Los Angeles, USA ("Phenomenological Reality and Post-Death  Contact" by Richard Kalish and David Reynolds) found that 44% reported encounters with the deceased, and that 25% of those 44% (in other words, 11% of the 434) said that a dead person "actually visited or was seen at a seance."
  • As reported in the 1894 edition of the Proceedings of the Society for Psychical Research (Volume X, Part XXVI), an 1890's "Census of Hallucinations" conducted by the Society for Psychical Research asked, "Have you ever, when believing yourself to be completely awake, had a vivid impression of seeing or being touched by a living being or inanimate object, or of hearing a voice ; which impression, so far as you could discover, was not due to any external physical cause?"  As reported in Table 1 here (page 39), the number answering "Yes" was about 10%.  Because the question did not specifically refer to the dead, ghosts or apparitions, the wording of the question may have greatly reduced the number of "yes" answers from people experiencing what seemed to be an apparition of the dead or a sense of the presence of the dead. 
  • In the March-April 1948 edition of the Journal of the Society for Psychical Research, page 187, there appeared the result of a survey asking the same question asked in 1894: "Have you ever, when believing yourself to be completely awake, had a vivid impression of seeing or being touched by a living being or inanimate object, or of hearing a voice ; which impression, so far as you could discover, was not due to any external physical cause?"  According to page 191, 217 out of 1519 answered "Yes." This was a 14% "yes" rate, higher than the rate of about 10% reported in 1894. 
  • A 1980 telephone survey of 368 participants found that 29% reported "post-death communication." 
  • The British Medical Journal published in 1971 a study by Rees entitled "The Hallucinations of Widowhood," which involved almost 300 subjects. Rees reported that 39% in his survey reported a sense of presence from a deceased person, 14% reported seeing the deceased, and 13% reported hearing the deceased.
  • A 2015 Pew Research poll found that 18% of Americans said they've seen or been in the presence of a ghost, and that 29% said that they've felt in touch with someone who died. 
  • A survey of 1510 Germans found (page 12) that 15.8% reported experience with an apparition, and more than 36% reported experience with ESP. 
  • A Groupon survey of 2000 people found that more than 60% claim to have seen a ghost.
  • A 1976 survey of 1467 people in the US asked people if they had ever "felt as though you were really in touch with someone who had died?" 27% answered "Yes."  
  • On page 123 of the 1954 Proceedings of the American Society for Psychical Research (Volume 48), which you can read here, we read of a poll done of 42 students who were asked: "Have you ever actually seen your physical body from a viewpoint completely outside that body, like standing beside the bed and looking at yourself lying in the bed, or like floating in the air near your body?” 33% answered "Yes." 
  •  A  study found that "Of the 30 interviewable survivors of cardiac arrest, 7 (23 percent) described experiences classified as NDEs by scoring 7 or more points on the NDE Scale." Of these reporting a near-death experience in this study (11), 90% reported out-of-body experiences. 
  •  A Dutch study found 18% of cardiac arrest survivors reporting a near-death experience, but with only a minority of these reporting an out-of-body experience. 
  • In a survey of 300 students and 700 non-student adults in Charlottesville, Virginia (not at all a hotbed of New Age thinking), more than half of the respondents claimed an extraordinary ESP experience. 
  • A survey of family members of deceased Japanese persons found that 21% reported deathbed visions. A study of 103 subjects in India reports this: "Thirty of these dying persons displayed behavior consistent with deathbed visions-interacting or speaking with deceased relatives, mostly their dead parents." A study of 102 families in the Republic of Moldova found that "37 cases demonstrated classic features of deathbed visions--reports of seeing dead relatives or friends communicating to the dying person." 
  • The paper "Out-of-Body Experiences" by Carlos S. Alvarado tells us that, according to 5 surveys of the general population, 10% of the population report out-of-body experiences. A larger number of surveys of students show they report out-of-body experiences at a rate of about 25%.   
  • A study on after-death communication (ADC) states, "Results indicated that, regarding prevalence, 30-35% of people report at least one ADC sometime in their lives and, regarding incidence, 70-80% of bereaved people report one or more ADC experiences within months of a loved one's physical death."
  • A survey about near-death experiences in Australia found that nearly 9% of Australians reported them.  
  • We read the following on a page of the Psi Encyclopedia: "In 2017, Una MacConville carried out a study with Irish health care professionals. The carers reported that 45% of their patients spoke of visions of deceased relatives, often joyful experiences that bring a sense of peace and comfort." 
    Various factors may have caused you to think of the paranormal as being something extremely uncommon, when it actually may be very common. Let's look at what some of these factors may be. One factor is that probably the overwhelming majority of people who have paranormal experiences do not publicly report them. There are several reasons why someone having a paranormal experience may not report it publicly. He may fear being ridiculed, or he may fear that if he reports a paranormal experience he may be thought of as weird or flaky or a liar, and that this may hurt his job prospects. Or someone may not report a paranormal experience simply because there was not any physical evidence he can present to show the incident occurred. 

    Of the people who do publicly report their paranormal experiences, probably the great majority simply make some social media entry that you are very unlikely to ever hear about. My guess is that 99% of all paranormal experiences are not reported in a way that would be likely to end up in a news story that you might ever read. Corporations are masters of milking the media for news coverage, but what is the chance that some person having a paranormal experience will then spam the news media (or issue a press release) in the right way to get good news coverage? Almost zero.

    Another reason why the paranormal may be vastly more common than you might imagine is that your college or university probably failed to teach you anything about it. Modern colleges and universities are bastions of materialist thinking that like to exclude and denigrate the paranormal. When you took that psychology course in college, you should have learned all about the years of very substantial and methodical observational reports on the paranormal, particularly ESP, clairvoyance, medium activity and apparition sightings. But you probably learned very little or nothing on the topic, leaving you with the impression that there isn't much there.

    The problem lies with our science professors. Science professors are often members of a conformist belief community in which there are hallowed belief dogmas and very strong taboos.  We fail to realize how often science professors are members of tradition-driven church-like belief communities, because so many of the dubious belief tenets of such professor communities are successfully sold as "science," even when such tenets are speculative or conflict with observations. Fairly discussing reports of the paranormal is a taboo for science professors, who are typically men whose speech and behavior is dominated by moldy old customs and creaky old taboos.  There are many other socially constructed taboos such as the taboo that forbids saying something in nature might be a product of design, no matter how immensely improbable its accidental occurrence might be. The main reason why science professors shun reports of the paranormal is that such reports tend to conflict with cherished assumptions or explanatory boasts of such professors. Also, reports of the paranormal clash with the attempts of vainglorious science professors to portray themselves as kind of Grand Lords of Explanation with keen insight into the fundamental nature of reality. 

    One of the rules of today's typical science professor is: shun the spooky. So when people report seeing things that scientists cannot explain, the rule of today's scientists is: pay no attention, or if you mention it, try to denigrate the observational report, often by shaming, stigmatizing or slandering the observer. Following the "shun the spooky" rule, science professors typically fail to read hundreds of books they should have read to help clarify the nature of human beings and physical reality, books discussing hard-to-explain observations by humans.  

    Our colleges and universities train professors to be spooky-stupid rather than spooky-smart. Here are the characteristics of spooky-stupid professors:

    • When they hear about reports of some type of spooky phenomena, they say or think something such as "that can't be right" or "that's impossible" or "that must have just been a hallucination," and they don't do anything to seriously study the report or similar reports. 
    • They don't bother collecting reports of spooky phenomena. When they hear about such reports, they make no effort to add the report to some collection of reports of the unexplained. 
    • They don't bother to seriously study the literature documenting the paranormal.  
    • When they write about the types of things that humans experience, and the types of events that occur, they ignore reports of the spooky. 
    • Very stupidly, they throw away what may be some of the most important clues about reality ever reported. 

     Here are some characteristics of spooky-smart professors:

    • When they hear about reports of some type of spooky phenomena, they do their best to preserve such reports, and investigate them further. 
    • They do in-depth study trying to discover whether anyone else made similar reports of such a phenomenon. 
    • They do their best to classify, quality-check and analyze such reports. 
    • They do in-depth study reading about all reports of phenomena that cannot be explained. 
    • They act according to the rule of "don't throw away clues, if there's a chance in a thousand they might be important."

    It is a gigantic mistake to assume that when a science professor speaks against the paranormal, he is stating an educated opinion.  Based on their writings, it seems that 99% of today's science professors have never bothered to seriously study the paranormal.  A physics professor denigrating the paranormal no more states an educated opinion than a taxi driver offering an opinion on quantum chromodynamics. The fact that a person has studied one deep subject requiring the reading of hundreds of long volumes for a fairly good knowledge of the subject is no reason for thinking that the same person has studied some other deep subject (such as paranormal phenomena) requiring the reading of hundreds of long volumes for a fairly good knowledge of the subject, particularly when studying such a subject seriously is a taboo for that type of person. Serious scholars of paranormal phenomena can tell when someone speaking or writing on a topic has never studied it in depth, and low-scholarship indications are typically dropped in abundance when science professors write about the paranormal (things such as a failure to reference or quote the most relevant original source materials).   

    The spooky-stupid scientist following a "shun the spooky" rule is rather like Sherlock Holmes wearing handcuffs behind his back. Sherlock Holmes was the most famous fictional detective in literary history. In a series of stories by Arthur Conan Doyle, Sherlock Holmes would attempt to uncover the truth behind a crime, using every tool he could muster. Like Sherlock Holmes, a scientist attempts to uncover the truth, using a variety of tools and methods. But imagine if Sherlock Holmes tried to solve crimes wearing handcuffs that prevented him from using his hands.  He would probably fail to solve many of his harder crime cases, and would often come up with wrong answers. 

    The scientist following a "shun the spooky" rule is like a man wearing handcuffs that prevent him from using his hands. A large fraction of the most important clues that nature offers appear to us as spooky, because we cannot understand them. A scientist refusing to examine such clues will be likely to reach wrong conclusions about some of the most important issues a scientist can study. 

    A professor acting spooky-stupid

    It is a great mistake to think that a scientist following a "shun the spooky" rule will merely end up getting wrong ideas about paranormal topics. Following such a rule, the scientist will tend to also end up with wrong ideas about important topics that are not normally thought of as paranormal. The person who fails to study the paranormal will tend to end up with wrong ideas on topics such as the relation between the brain and the mind and the origin of man.  Similarly, he who fails to properly study mathematics may end up with wrong ideas on topics outside of mathematics, such as physics and biology; and he who fails to study history may end up with bad ideas about politics, current affairs and public policy.

    The "shun the spooky" rule causes neglect of all kinds of important things beyond what is considered paranormal. So, for example, scientists may avoid studying John Lorber's cases that included cases of above-average intelligence and only a thin sheet of brain tissue, finding such results too spooky. Such results are "wrong way" signs nature is putting up, telling neuroscientists some of their chief  assumptions are wrong. The "shun the spooky" rule may lead to wasted billions and bad medical practices. Doctors and scientists may focus on ineffective treatments stemming from incorrect assumptions, while neglecting effective treatments because the results are too spooky for them.    

    professor discarding unwanted observations
    Another professor acting spooky-stupid

    When I was a small child, younger than 10, I would read in a children's magazine a series of educational cartoons called Goofus and Gallant. The Goofus and Gallant cartoons would try to teach small children good principles of behavior, by showing bad behavior by Goofus and good behavior by Gallant. I can use the Goofus and Gallant approach to illustrate some of the differences between spooky-stupid behavior and spooky-smart behavior. Here is one attempt:

    bad professor and good professor

    Here is another such attempt:

    good professor and bad professor


    Here is one more such attempt:

    And here is the last such attempt:

    bad professor and good professor


    Very sadly, the science departments of our universities are all stuffed with spooky-stupid guys like Professor Goofus. To these self-shackled Sherlocks, I say: ditch your shackles, and start studying all of the evidence relevant to the claims you make, including the things discussed in my hundreds of posts here and the list of books given at the beginning of the post here.

    Thursday, October 2, 2025

    How Guys Write Groundless "This Is How Your Brain Stores Memories" Articles

    A type of article that shows up periodically in the literature of neuroscience is an article with some title such as "How the Brain Stores Memories." All such stories are bogus examples of groundless claims and hand-waving. No one has any understanding of how a brain could store memories. There is nothing in the brain that bears any resemblance to a device for writing memories; there is nothing in the brain that bears any resemblance to a device for storing memories for many years; and there is nothing in the brain that bears any resemblance to a device for retrieving memories. 

    Let's look at a recent example of this type of misleading article, and try to derive from it some principles about how people go about writing articles of this type. The article is one published in Forbes magazine. It is entitled "Timing Is Everything: How Our Experiences Become Memories." The author (William A. Haseltine) gives us an appalling  example of someone pretending to understand things he does not understand.  The article describes him as someone who "covers genomics and regenerative medicine."  That's already a reason for suspecting the accuracy of his article. The author is apparently not an expert in the field of cognitive neuroscience or memory. 

    The first sentence of the article is: "Memories are created in a matter of seconds." That's correct; humans can form permanent new memories instantly (such as when a son or daughter learns that their parent has died). But Haseltine fails to put two and two together by realizing that this fact of instant memory creation rules out every explanation he attempts to give. The processes he describes (mainly synapse strengthening) are mostly sluggish processes requiring many minutes, hours or days. Processes so sluggish cannot be an explanation for the creation of a new memory, which can occur instantly. And since synapses are not long-lasting structures, being built from proteins which have an average lifetime of less than two weeks, such unstable synapses cannot be an explanation for human memories that can reliably persist for 50 years. 

    Trying to explain how a brain could create a memory, Haseltine gives us this vacuous bit of hand-waving:

    "From the moment the brain receives a sensory input (i.e. sight, sound, smell, etc.), neurons across the brain activate. Connections formed between these neurons give rise to dynamic neural networks called engrams. For example, when exploring a new city, an engram forms and continuously updates as you walk down various streets and turn corners. The moment you finally encounter the landmark you have been searching for, there is a burst of neural activity. Neurons that were activated seconds prior also increase their firing. Your brain consolidates this information into a mental map of how to get to the landmark. Engram formation, therefore, depends not only on neurons firing simultaneously but also on those that activate immediately before and after. This is known as behavioral timescale learning. "

    The term "engram" is a vacuous bit of speculation that does not correspond to any well-established scientific reality.  The term means an alleged place where a memory is stored in the brain. Microscopic examination of brain tissue has never revealed the existence of any such thing as engrams. No one has ever found information someone learned in school by microscopically examining tissue from that person's brain. When biologists use the term "engram" they are speculating as wildly as when astrophysicists use the terms "dark matter" and "dark energy." 

    The account that Haseltine is giving here makes no sense, given the reality of instant memory creation. Neural connections are the synapses between neurons. It takes at least days for a new synapse to appear between neurons. So it is misleading bluffing for Haseltine to be telling us a story of the arising of "dynamic neural networks" as an explanation for memory creation. 

    Haseltine's next paragraph is just a mention of what goes on all the time in the brain, something that does nothing at all to explain memory creation. He says this:

    "When a neuron activates, an action potential is generated. First, an electrical or chemical input stimulates a dendritic branch on the neuron. If the stimulus is strong enough, a branch becomes activated. The signal travels through the cell body and into the neuron’s axon. The activated axon releases chemical messengers called neurotransmitters to activate other cells in the network. Neural activation lasts just two milliseconds before the cell resets to allow another action potential to be generated."

    Haseltine's next paragraph begins with a statement of fact, and two statements of utterly unproven speculation, wrongly  stated as if they were fact. He states this:

    "Generating action potentials is the basis of all brain activity. During learning, action potentials transmit signals that encode new experiences. A key region involved in this process is the hippocampus. Here, the brain consolidates short-term memories into long-term memory." 

    Yes, generating action potentials is the basis of all brain activity. No, there is no evidence that "action potentials transmit signals that encode new experiences."  No one understands how things that humans see and hear could ever be encoded or translated into some format that would allow memories to be stored as synapse states or neural states.  And there is no evidence that "the brain consolidates short-term memories into long-term memory" in the hippocampus or any other place.  We merely know that human beings can have short-term memories that don't last for long and long-term memories that are permanent.  Neuroscientists have no understanding of how a brain could create either short-term memories or long-term memories. 

    Haseltine goes on and on, mainly mentioning sluggish things that go on in the brain that are way too slow to credibly account for instant memory formation. For example, he states this:

    "Repeated stimulation from a presynaptic neuron (the neuron sending the signal) to a postsynaptic neuron (the neuron receiving the signal) triggers molecular changes. First, neurotransmitters released by the presynaptic neuron bind to receptors on the postsynaptic neuron. When neurotransmitters bind to these receptors, channels open that allow calcium ions to enter the neuron." 

    Using Haseltine's article as an example, I can give a general outline for how people write bogus groundless articles with titles such as "This Is How Your Brain Stores Memories."  The steps are basically these:

    Step 1: Accumulate a list of things that are constantly going on in the brain, things that occur at timescales ranging from seconds to hours, days or weeks.  

    Such a list may include:

    (a) the transmission of action potentials across chemical synapses, something that is constantly occurring billions of times every second in the brain;

    (b) the transmission of neurotransmitter chemicals across such synaptic gaps, something that constantly occurs;

    (c) the strengthening of existing synapses, something that takes many hours or days, and that goes on constantly regardless of whether anyone is learning or having sensory experiences;

    (d) the creation of new synapses between neurons, something requiring days or weeks;

    (e) the growth of new dendritic spines, which takes days or weeks;

    (f) the changing in the size of dendritic spines, which takes  days or weeks.

    Step 2: Write some account of people acquiring a new memory or learning something, blending the account with as many items as possible from this list of types of neural activity, while paying no attention to the time required for those types of neural activity. 

    This is exactly what Haseltine has done in his article. He has left out any discussion of the hours and days required for the processes he mentions, so that the reader will not notice that what he is discussing is way, way too slow to explain instant human learning. 

    Step 3: Do a little science-jargon sprinkling, by doing things such as using the speculative term "engrams," by making vacuous uses of the terms "encoding" and "consolidation," and by maybe referring to some part of the brain such as the hippocampus or referring to some type of protein, claiming that it "plays a role" in memory formation. 

    Haseltine's only use of the word "encode" or "encoding" is vacuous,  when he claims that  "during learning, action potentials transmit signals that encode new experiences." The failure of neuroscientists to articulate any credible theory of neural encoding of memories -- and their failure to show any robust evidence of such a thing occurring -- are huge reasons for rejecting claims of the brain storage of memories.  If memories were to be stored in brains, there would have to be some gigantically complicated encoding scheme a million times more complicated than the genetic code, something capable of converting the huge variety of things humans can learn into synapse states or brain states.  There is no evidence that any such thing exists, and no neuroscientist has even stated a credible detailed theory of how such a thing could work. Haseltine also uses the neuroscience jargon word "consolidation" or "consolidate,"  but all of his references are vacuous, and never refer to any evidence for consolidation or a theory of consolidation. 

    Step 4: Refer to some recent paper, typically some poorly designed study using way too few subjects and poor methods, almost always something merely involving mice. 

    This is exactly what Haseltine does. He refers us only to the poorly designed study "Dendritic, delayed, stochastic CaMKII activation in behavioural time scale plasticity," a mouse study that fails to even tell us how many mice were used. Whenever that happens, it is invariably because some way-too-small study group size was used, such as only 7 mice. The study makes no mention of the use of blinding, meaning it is a Questionable Research Practices affair we should not trust. A study that fails to tell how many subjects it used should not be trusted about anything. 

    This scientific paper says the following:

    "Previous models have suggested that CaMKII functions as a bistable switch that could be the molecular correlate of long-term memory, but experiments have failed to validate these predictions....The CaMKII model system is never bistable at resting calcium concentrations, which suggests that CaMKII activity does not function as the biochemical switch underlying long-term memory."

    This recent scientific paper says on page 9, "Overall, the studies reviewed here argue against, but do not completely rule out, a role for persistently self-sustaining CaMKII activity in maintaining" long term memory. 

    Step 5: Skip the issue of the lifelong persistence of memories, because scientists have no clue as to how that could occur in a brain with such high molecular turnover.  

    Step 6: Skip the issue of instant memory recall, since the brain has nothing like any of the things that enable that in machines, things such as addressing, sorting and indexing. 

    neuroscientist explanation of memory

    An example of a vacuous "brain explanation" article is the Mayo Clinic's page entitled "How Your Brain Works." The page fails to tell us how a brain could perform any aspect of human mentality. What we mainly have is a discussion of different parts of the brain, and how neurons transmit chemical and electrical impulses. We have the thinnest of localization claims about the function of different parts of the brain, which are each one-sentence affairs completely lacking in details that might explain things. For example, we are told "The frontal lobes help control thinking, planning, organizing, problem-solving, short-term memory and movement." We have no explanation of how that might happen. We are told "the occipital lobes process images from your eyes and connect them to the images stored in your memory." But where does the writer think that memories are stored in the brain? He does not say. And how could such a storage of memory ever occur? The writer does not say. And how could a brain ever instantly recall something as soon as you hear a name or see an image? The writer does not say. 

    All that is going on in the Mayo Clinic's page entitled "How Your Brain Works" is hand-waving and description of parts of the brain, without any explanation at all as to how the brain could produce any cognitive effect. It's just the kind of article we might expect to get if claims of brains producing minds and brains storing memories were misconceptions, kept afloat by a constant repetition of socially constructed myths. 

    There's a general rule of thumb about credibly explaining any very impressive result in biology, which is: in general, it is enormously and exponentially more difficult than you might think at first, because of the failure of the human mind to conceive all the difficulties involved.  Consider the case of trying to explain the origin of the human body. 

    Darwin started out by trying to explain the human body by the childishly simple idea that organisms undergo random changes, and that the luckier changes survive better. But a close consideration of the problem reveals a plethora of difficulties with that idea:

    • A simple variation in just the body of one member of an organism would not explain how the species of that organism got some feature, because the variation would have to be an inheritable variation; and it is generally believed that acquired characteristics are not inherited. 
    • Any variation would require not just variation in the structure and internal arrangement of one organism, but presumably a variation in some schema that controlled such a structure and internal arrangement of the organism; and any such change would tend to be diluted in offspring that had a mixture of inheritance from male and female. 
    • Almost any lucky variation would be lost in subsequent generations, as the offspring of such generations would mainly have inheritance from organisms not having the lucky variation. 
    • There does not even exist within the body of any organism some schema that specifies the appearance, behavior traits, structure and internal arrangement of the organism.  The only known unit of inheritance is DNA and its genes, and such things are not a blueprint, recipe or program for making the body of any organism, contrary to the lies and misleading statements that biologists have told on this topic. DNA and its genes specify only low-level chemical information such as which amino acids go in particular proteins, not high-level instructions for anatomical assembly. 
    • Because of factors discussed here such as the general uselessness of early stages, nonfunctional intermediates and the interdependence of extremely complex components all required for many types of biological function, random variations in organisms or in some genetic material they use do not stand as a credible theory explaining the origin of any very complex creature (such as humans) requiring a very hierarchical arrangement of a huge number of well-arranged and interdependent parts. 

    Just as there are "show stoppers" and unsolvable dilemmas all over the place in trying to describe some random natural process producing great wonders of physical functionality, there are "show stoppers" and unsolvable dilemmas all over the place in trying to describe how a brain could produce the wonders of human memory. Here is just one of them: given that the human brain has not changed substantially since 2000 B.C., and given that the English alphabet and language is only centuries old rather than thousands of years old, how could there ever occur by brain action an event such as the well-verified event of an old man memorizing the entire text of Milton's Paradise Lost, a poem of almost 80,000 words? 

    And how many words or characters of the English language have been found by a microscopic examination of brain tissue? Not a single word or character, even though very much freshly extracted tissue from living persons has been microscopically examined, and even though the entire brains of many recently dying people have been microscopically examined by neuroscientists eager to find a trace of some learned knowledge, without having any success. 

    Sunday, September 28, 2025

    Irredeemable: Reproducibility and Statistical Power in Neuroscience Are Very Bad, and Not Getting Any Better

     A recent study offers some encouraging news about psychology research. The paper is entitled "Increasing Sample Sizes in Psychology Over Time." The paper reports this:

    "We collected data from 3176 studies across six journals over three years. Results show a significant increase in sample sizes over time (b=44.83, t(6.25)=4.48, p=.004, 95%CI[25.23,64.43]), with median sample sizes being 40 in 1995, 56.5 in 2006, and 122.5 in 2019. This growth appears to be a response to the credibility crisis....The increase in sample sizes is a promising development for the replicability and credibility of psychological science."

    The credibility crisis referred to is the widely reported reproducibility crisis in fields such as psychology and neuroscience.  For decades it has been reported that experimental studies in psychology and neuroscience tend to be unreliable and poorly reproducible, largely because the sample sizes used were way too small.  This was commonly called a "reproducibility crisis in psychology," although it was very much a reproducibility crisis in both psychology and neuroscience. A tendency to produce studies with too-small sample sizes was just as prevalent in neuroscience as psychology. 

    Psychology experiments typically involve humans, and advances in internet technology may have been a factor helping to lead to increased study group sizes in psychology. Decades ago a scientist might have found it necessary to recruit subjects to come into some laboratory where an experiment can be done. But now there are online platforms that allow people to sign up to be subjects in psychology experiments, while being paid for their efforts. This provides a very large pool of potential test subjects. A psychologist can now run experiments using subjects from across the USA or even multiple countries, by designing some experiment that subjects can participate in over the internet, while the subjects stay in the comfort of their homes.  

    But while there may have been an increase in study group sizes used in psychology experiments, there has apparently been no such increase in the field of neuroscience. How could you honestly describe the state of experimental neuroscience? You might honestly describe it as an irredeemable cesspool consisting mostly of junk science studies that continue to have the same old fatal defects such as the use of way-too-small study group sizes. Well-designed studies in cognitive neuroscience seem to be in the minority, and are outnumbered by junk science studies guilty of very bad Questionable Research Practices. 

    Scientific studies that use small sample sizes are typically unreliable, and often present false alarms, suggesting a causal relation when there is none. Such small sample sizes are particularly common in neuroscience studies, which often require expensive brain scans, not the type of thing that can be inexpensively done with many subjects. In 2013 the leading science journal Nature published a paper entitled "Power failure: why small sample size undermines the reliability of neuroscience." Statistical power is the probability that a study will detect a real effect of a given size; the lower the power, the larger the share of reported positive results that will be false alarms. The Nature paper found that the statistical power of the average neuroscience study is between 8% and 31%. With such a low statistical power, false alarms and false causal suggestions will be very common. 

    A scientific study with a statistical power of 50% is one that will have about a 50% chance of being successfully reproduced when someone attempts to reproduce it. Even when a statistical power of 50% is reached, the statistical power is not high enough for robust evidence to be claimed. In order to provide robust evidence for an effect, a study must reach a higher statistical power such as 80%. When that power is reached, there is about an 80% chance that an attempt to reproduce the results will be successful. 
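    To make these power figures concrete, here is a minimal simulation sketch. The effect size and group sizes are illustrative choices of mine, not figures from the Nature paper: it estimates statistical power as the fraction of simulated two-group experiments, each containing a genuine effect, that reach p < 0.05.

```python
import numpy as np
from scipy import stats

def estimated_power(n_per_group, effect_size=0.5, n_sims=20000, alpha=0.05):
    """Fraction of simulated two-group experiments with a real effect
    (Cohen's d = effect_size) that reach p < alpha."""
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:
            hits += 1
    return hits / n_sims

for n in (8, 32, 64):
    print(f"n = {n:3d} per group -> estimated power ~ {estimated_power(n):.2f}")
```

    Under these particular assumptions (a medium effect of d = 0.5), about 8 subjects per group gives power of roughly 15%, about 32 per group is needed just to reach 50%, and about 64 per group to reach 80%. Many neuroscience experiments use far fewer subjects than that.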

    The Nature paper said, "It is possible that false positives heavily contaminate the neuroscience literature." 

    An article on this important Nature paper states the following:

    "The group discovered that neuroscience as a field is tremendously underpowered, meaning that most experiments are too small to be likely to find the subtle effects being looked for and the effects that are found are far more likely to be false positives than previously thought. It is likely that many theories that were previously thought to be robust might be far weaker than previously imagined."

    Scientific American reported on the paper with a headline of "New Study: Neuroscience Gets an 'F' for Reliability."

    So, for example, when some neuroscience paper suggests that some part of your brain controls or mediates some mental activity, there is a large chance that this may simply be a false positive. As this paper makes clear, the more comparisons a study makes, the larger the chance of a false positive. The paper has an example: if you test whether jelly beans cause acne, you'll probably get a negative result, but if your sample size is small, and you test 30 different colors of jelly bean, you'll probably be able to say something like "there's a possible link between green jelly beans and acne" -- simply because the more types of comparisons, the larger the chance of a false positive. So when a neuroscientist tries to look for some part of your brain that causes some mental activity, and makes 30 different comparisons using 30 different brain regions, with a small sample size, he'll probably come up with some link he can report as "such and such a region of the brain is related to this activity." But there will be a high chance this is simply a false positive.  
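    The arithmetic behind the jelly-bean example is simple: if each of 30 independent comparisons is tested at the usual 5% significance threshold, the chance of getting at least one false positive is 1 - 0.95^30, or roughly 79%. A one-line sketch of that calculation:

```python
# Chance of at least one false positive among 30 independent tests at alpha = 0.05
alpha, n_tests = 0.05, 30
print(f"P(at least one false positive) = {1 - (1 - alpha) ** n_tests:.2f}")  # ~0.79
```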

    bad neuroscience lab

    The 2013 "Power Failure" paper discussed above was widely discussed in the neuroscience field, but a 2017 paper indicated that little or nothing had been done to fix the problem. Referring to an issue of the Nature Neuroscience journal, the author states, "Here I reproduce the statements regarding sample size from all 15 papers published in the August 2016 issue, and find that all of them except one essentially confess they are probably statistically underpowered," which is what happens when too small a sample size is used. 

    A 2017 study entitled "Effect size and statistical power in the rodent fear conditioning literature -- A systematic review" looked at what percentage of 410 experiments used the standard of 15 animals per study group (needed for a moderately compelling statistical power of 50 percent).  The study found that only 12 percent of the experiments met such a standard.  What this basically means is that 88 percent of the experiments had low statistical power, and are not compelling evidence for anything.


    low statistical power in neuroscience


    The 2017 scientific paper "Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature" contains some analysis and graphs suggesting that neuroscience is less reliable than psychology. Below is a quote from the paper:


    "With specific respect to functional magnetic resonance imaging (fMRI), a recent analysis of 1,484 resting state fMRI data sets have shown empirically that the most popular statistical analysis methods for group analysis are inadequate and may generate up to 70% false positive results in null data. This result alone questions the published outcomes and interpretations of thousands of fMRI papers. Similar conclusions have been reached by the analysis of the outcome of an open international tractography challenge, which found that diffusion-weighted magnetic resonance imaging reconstructions of white matter pathways are dominated by false positive outcomes  Hence, provided that here we conclude that FRP [false report probability] is very high even when only considering low power and a general bias parameter (i.e., assuming that the statistical procedures used were computationally optimal and correct), FRP is actually likely to be even higher in cognitive neuroscience than our formal analyses suggest.

    The paper draws a shocking conclusion that most published neuroscience results are false. The paper states the following: "In all, the combination of low power, selective reporting, and other biases and errors that have been well documented suggest that high FRP [false report probability] can be expected in cognitive neuroscience and psychology. For example, if we consider the recent estimate of 13:1 H0:H1 odds, then FRP [false report probability] exceeds 50% even in the absence of bias." The paper says of the neuroscience literature, "False report probability is likely to exceed 50% for the whole literature." 
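    The kind of arithmetic behind such false report probability estimates can be sketched in a few lines. Assuming the 13:1 prior odds the paper mentions, the conventional significance threshold of 0.05, no bias term, and the 8% to 31% power range reported in the 2013 "Power Failure" paper (with 80% power added for comparison), the probability that a reported positive result is false works out as follows:

```python
def false_report_probability(prior_odds_h0, power, alpha=0.05):
    """P(the null is true | a 'significant' result was reported), no bias term.
    prior_odds_h0 is the H0:H1 odds, e.g. 13.0 for 13:1."""
    false_positives = prior_odds_h0 * alpha  # relative rate of false positives
    true_positives = 1.0 * power             # relative rate of true positives
    return false_positives / (false_positives + true_positives)

for power in (0.08, 0.31, 0.80):
    frp = false_report_probability(prior_odds_h0=13.0, power=power)
    print(f"power = {power:.2f} -> false report probability ~ {frp:.0%}")
```

    At the 8% to 31% power typical of neuroscience, this works out to a false report probability of roughly 68% to 89%; even at 80% power it would still be close to half, which is consistent with the paper's conclusion that false report probability likely exceeds 50% for the literature as a whole.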

    In June of 2025 I searched on Google Scholar, trying to find some paper reporting on an improvement of sample sizes in neuroscience research. I could find no such paper. The sample sizes used in neuroscience research are very bad, and are not getting any better. Today's neuroscience research is a cesspool of dysfunction and misleading claims. There are no signs that it is improving its horribly dysfunctional ways. 

    Why does this situation persist? There are two main reasons: economics and ideology. 

    The economic reasons for bad science practices are explained rather well in the paper "The Natural Selection of Bad Science" by Paul E. Smaldino and Richard McElreath. In that paper we read this:

    "Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery....We first present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more ‘progeny,’ such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates."

    The paper includes a shocking confession by a scientist who has served on search committees that decide which scientists get hired. The scientist states this:

    "I’ve been on a number of search committees. I don’t remember anybody looking at anybody’s papers. Number and IF [impact factor] of pubs are what counts."

    This is a description of an economic ecosystem in which what determines a scientist's career advancement is not the quality and reliability of the papers he has published, but the mere quantity of such papers, and the impact factor of the journals in which they appear. 

    The paper ("The natural selection of bad science") states this: "In fields such as psychology, neuroscience and medicine, practices that increase false discoveries remain not only common, but normative." In this context "normative" means "more the rule than the exception." The paper states, "Some of the most powerful incentives in contemporary science actively encourage, reward and propagate poor research methods and abuse of statistical procedures." Later the paper gives us some insight on the economics that help to increase the likelihood of scientists producing lots of low-quality research papers:

    "If researchers are rewarded for publications and positive results are generally both easier to publish and more prestigious than negative results, then researchers who can obtain more positive results—whatever their truth value—will have an advantage. ...One way to better ensure that a positive result corresponds to a true effect is to make sure one’s hypotheses have firm theoretical grounding and that one’s experimental design is sufficiently well powered. However, this route takes effort and is likely to slow down the rate of production. An alternative way to obtain positive results is to employ techniques, purposefully or not, that drive up the rate of false positives. Such methods have the dual advantage of generating output at higher rates than more rigorous work, while simultaneously being more likely to generate publishable results. Although sometimes replication efforts can reveal poorly designed studies and irreproducible results, this is more the exception than the rule. For example, it has been estimated that less than 1% of all psychological research is ever replicated  and failed replications are often disputed. Moreover, even firmly discredited research is often cited by scholars unaware of the discreditation. Thus, once a false discovery is published, it can permanently contribute to the metrics used to assess the researchers who produced it....Campbell’s Law, stated in this paper’s epigraph, implies that if researchers are incentivized to increase the number of papers published, they will modify their methods to produce the largest possible number of publishable results rather than the most rigorous investigations."

    What the paper is suggesting is that junk science is strongly incentivized in today's science research ecosystem. A scientist is more likely to succeed in academia if he produces a high quantity of low-quality research papers than if he produces a smaller quantity of high-quality research. There are several online sources that keep track of the number of papers a scientist wrote or co-wrote, and the number of citations such papers got. There are no online sources that keep track of the quality and reliability of the papers such a scientist produced. In such an environment, a scientist will be more likely to get ahead if he produces many low-quality papers rather than a smaller number of papers that are more reliable and truthful in the results they report. 
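
    To make the logic of that selection process concrete, here is a toy simulation loosely in the spirit of the dynamic model Smaldino and McElreath describe. Every parameter in it is my own illustrative assumption, not a value from their paper.

# A toy simulation, loosely in the spirit of the dynamic model described by
# Smaldino and McElreath; every parameter here is my own illustrative assumption.
import random

random.seed(1)

N_LABS, GENERATIONS = 100, 50
labs = [random.uniform(0.1, 1.0) for _ in range(N_LABS)]   # "rigor" of each lab, 0..1

for _ in range(GENERATIONS):
    # Less rigorous labs run more (cheaper, sloppier) studies and get more
    # "positive", publishable results, whatever their truth value.
    output = []
    for rigor in labs:
        studies = int(20 * (1.1 - rigor))                  # low rigor -> more studies
        positives = sum(1 for _ in range(studies)
                        if random.random() < (0.1 + 0.5 * (1 - rigor)))
        output.append(positives)
    # The most "productive" half of the labs are copied (their methods are
    # culturally transmitted to new labs), with a little mutation in rigor.
    ranked = sorted(range(N_LABS), key=lambda i: output[i], reverse=True)
    survivors = [labs[i] for i in ranked[:N_LABS // 2]]
    labs = [min(1.0, max(0.1, r + random.gauss(0, 0.02)))
            for r in survivors for _ in (0, 1)]

print(f"Mean rigor after {GENERATIONS} generations: {sum(labs)/len(labs):.2f}")
# Rigor collapses toward its minimum: selection on publication count alone
# drives out careful methods, with no lab ever deliberately "cheating".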

    junk science practices

    The economic motivations of badly behaving neuroscientists and similar bad actors are sketched in my diagram below, which is explained in the post here. At the top left corner is the starting point: "quick and dirty" experimental designs with way too few subjects. The diagram charts how various types of people in various industries benefit from such malpractice. 

    academia cyberspace profit complex

    Another huge factor that helps explain the massive persistence of junk neuroscience studies is ideology. What we should never forget is that neuroscientists are members of a belief community. That belief community is dedicated to promoting various dubious dogmas, such as the dogma that the brain is the source of the human mind, and the dogma that the brain is the storage place of human memories. So in many cases a junk science study that a peer reviewer or an editor would normally be ashamed to approve will be approved for publication, because the study appears to support some dogma or narrative that is cherished by members of the neuroscientist belief community. 

    church of academia

    The main beliefs of the neuroscientist belief community are false beliefs. For the innumerable reasons discussed on this blog, there is no credibility in the claim that the brain is the source of the human mind, and no credibility in the claim that the brain is a storage place of human memories. When the beliefs of a belief community are true, the community does not need to rely on studies involving bad science practices or bad scholarly practices. But when its beliefs are false, a belief community may need to keep producing such studies, so that it can try to maintain an illusion that the evidence favors its cherished beliefs.