Why is there no substantive discussion at the GRG list? Three example topics that received no substantive discussion (excluding my own posts) are mouse intelligence, practical methionine reduction in humans, and Predictors of Exceptional Longevity (a correlation/causation issue). Another topic was matching donation campaigns, which are a big problem; a productive discussion of that never happened either.
I’ve included the first three posts I referred to, below.
Begin forwarded message:
Subject: Mouse Intelligence? (was: Major Mouse Testing Program — slides to be sent for the New Year!)
From: Elliot Temple
Date: January 11, 2015 at 4:39:48 PM PST
To: Gerontology Research Group
Hi Mike! I’ve enjoyed reading some of your explanations about cryonics, e.g. at:
Perhaps some others here would be interested too.
Replies about mouse intelligence below:
On Jan 9, 2015, at 11:31 AM, Mike Darwin wrote:
> How any educated person familiar with biological evolution (and living in the West in the 21st Century) could believe that any vertebrate, let alone any mammal, is an automaton behaving in some computer-like algorithmic fashion is unfathomable to me.
I hope you’ll be interested in understanding some different ideas, which you report you were previously unaware of.
They are relevant to GRG because understanding mice can help with mouse experiments.
> Isolating animals, placing them in barren environments and depriving them of the degree of social interaction required for their species has a devastating impact on health and longevity. Such animals experience large and lifelong elevation of serum cortisol levels (an immunosuppressive and pro-brain aging condition), altered cellular and humoral immunity and major adverse changes in brain chemistry. The literature is so vast on this subject that it would run to many pages just to reproduce the cites. What I have done below is to pick a very few representative papers.
I’m not disputing these facts, but I don’t think they have the implications you claim.
There are other reasons animals could react like this, besides them being intelligent, non-algorithmic, human-like and having emotions.
Suppose mice do work by non-intelligent algorithm. Why would you expect to put them in different situations and get the same results? Algorithms often produce different results given different inputs. It isn’t surprising that a mouse algorithm would work badly in environments it isn’t evolutionarily adapted to.
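To illustrate (a toy sketch with hypothetical, made-up rules – not a model of real mice): a fixed, non-intelligent algorithm can work well on the inputs it was “adapted” for and degrade badly on inputs it never evolved to handle.

```python
# Toy illustration: a fixed algorithm, different inputs, different results.
# The rules and behaviors here are made up for the sake of the example.

def forage(food_sites, companions):
    """Hard-coded behavior rule tuned for a natural-like setting:
    some food sites and at least one companion."""
    if not food_sites:
        # Barren environment: an input the rule was never tuned for.
        return "repetitive pacing"
    if companions == 0:
        # Isolation: likewise outside the adapted input range.
        return "erratic, unhealthy behavior"
    return "normal foraging"

# Same algorithm, different inputs, very different results:
print(forage(food_sites=["seeds", "grain"], companions=3))  # normal foraging
print(forage(food_sites=[], companions=3))                  # repetitive pacing
```

Nothing about the degraded behavior in the barren case requires emotions or intelligence; it follows mechanically from running the same rule on inputs outside its adapted range.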
> Animals would not explore, play, and otherwise engage with their environment and with each other unless they were biologically rewarded for doing so.
That doesn’t make sense to me. Why can’t an algorithm specify doing those actions, so then the animal does them?
I do think mouse algorithms are complex and involve something you could call a reward system. Algorithms can do things like specify releasing (reward) chemicals in some situations, and algorithms can react differently to the presence of (reward) chemicals.
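A minimal sketch of that point (hypothetical names and numbers, purely illustrative): an algorithm can both release a “reward chemical” in certain situations and react differently depending on that chemical’s level. It’s just internal state; no feelings are required.

```python
# Hypothetical sketch: a reward system as plain algorithmic state.

class RewardDrivenAgent:
    def __init__(self):
        self.reward_chemical = 0.0  # internal state, like a hormone level

    def experience(self, event):
        if event == "found food":
            self.reward_chemical += 1.0  # release the reward chemical
        self.reward_chemical *= 0.9      # the chemical decays over time

    def choose_action(self):
        # Behavior depends on the chemical's current level:
        return "explore more" if self.reward_chemical > 0.5 else "rest"

agent = RewardDrivenAgent()
print(agent.choose_action())   # rest
agent.experience("found food")
print(agent.choose_action())   # explore more
```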
> In order to function in a complex and changing world animals must be able to process complex experiential information, store the results and integrate them will feelings, such as pleasure, fear and anxiety.
It sounds like the issue is that – in my view – you have dramatically underestimated the possible complexity and capability of non-intelligent algorithms. Why can’t algorithms process complex information, store the results, and later take that stored information into account? They can.
So because you view algorithms as limited, you think they can’t explain mice. Perhaps you can comment on why you think algorithm complexity is inherently limited well below mice, if that’s our disagreement.
A typical reason for this belief I’ve encountered is basically that human programmers aren’t really very good yet. People sometimes estimate the capability of algorithms a little above what human programmers currently accomplish. For example, people see chess playing algorithms and correctly identify those as far more limited and simplistic than mice (though note chess algorithms do process and store information, and later use it – that isn’t hard). But that doesn’t put a cap on what better-written and more complex algorithms could accomplish.
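To see how little code “process, store, and later use information” requires (a toy example with made-up moves, not a real chess engine):

```python
# Minimal sketch: store the results of processing past inputs, then
# integrate that stored information into later behavior.

from collections import Counter

history = Counter()  # stored results of processing past inputs

def observe(opponent_move):
    history[opponent_move] += 1  # store

def predict():
    # Later, take the stored information into account:
    return history.most_common(1)[0][0] if history else None

for move in ["e4", "d4", "e4", "e4"]:
    observe(move)
print(predict())  # e4
```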
> By the implantation of complex multi-electrode arrays in the brain it is now possible to actually visualize the cognitive and emotional dynamics of the rodent brain: http://ift.tt/1IxCPlW and, just as one might expect, it functions very much like the human brain. The parts of the human brain that give us consciousness and the ability to experience pleasure, pain, fear, anxiety and to experience anticipation are evolutionarily ancient and that is why they are referred to as the “reptilian brain”.
This technology is pretty cool. But your way of using it strikes me as analogous to the following:
You attach complex measuring equipment to two different computers. You notice some broad similarities in the movement of electrons between different subsystems. And then, based on this hardware monitoring, you reach the conclusion that the computers are running similar software, and even claim some specific features are the same.
All of the evidence brought up so far is compatible with mice functioning by non-intelligent algorithm. I’m not disputing this evidence; it doesn’t contradict my position.
If I’m correct, then many people – including scientific researchers – have been telling fantasy stories about the human-like characteristics of mice, and misinterpreting some actions as involving emotions, intelligence, etc. Somewhat similar to what many people do with their pets. Misunderstanding what one is observing hinders research progress.
PS If you think mice have emotions and intelligence, maybe you shouldn’t put them in cages and do experiments on them – or eat meat.
Begin forwarded message:
From: Elliot Temple
Date: January 14, 2015 at 3:20:26 PM PST
Subject: Re: [GRG] Practical Methionine Reduction in Humans
To: Gerontology Research Group
> Methionine is an amino acid. Research shows low methionine diets increase lifespan in animals. Some recent research: http://ift.tt/1xqWHir
This is not a scholarly way to do citations. Citations should be to specific articles. That will present clear targets for criticism: if the specific articles cited are refuted, then the material citing them would need to be revised or rejected.
Linking to search engine results doesn’t work because search results change over time, and results vary based on the searcher’s IP address and browser cookies. There is no way for me to reliably know which articles were found in the original search, and this situation will get much worse over time.
This makes criticism problematic because someone might read (and perhaps criticize) an article that wasn’t actually intended as a source. It’s important to be clear and specific about what one’s sources are.
This is relevant to the GRG mission because doing scholarship correctly is necessary for research to be effective.
Also note that the PDF leans heavily on its cites. There are significant points, such as low-Met diets being good, for which it presents no argument of its own and relies only on cites/external information. The only arguments for those points come via citing, so the cites have to be top notch (but aren’t).
I went ahead and took a look at the top search result. My comments here are intended to be illustrative – many of the same ideas could be applied to other papers:
> Methionine restriction increases blood glutathione and longevity in F344 rats
This paper has many large flaws, both intrinsic ones and ones specific to its use as a source for “Practical Methionine Reduction in Humans”. I present a list including both types of issues.
This was published in 1995; it is not “recent research” as was claimed when providing the source.
– Will It Work In Humans?
Does the same thing work for humans? This source isn’t about that. The “Practical Methionine Reduction in Humans” PDF should either explain reasoning that it would work in humans, or raise the issue and then give a specific source which contains reasoning about it working with humans.
– Mischaracterizing The Research
Now consider the claim quoted above that, “Research shows low methionine diets increase lifespan in animals.”
Does this research show that? No. At best it shows that low methionine diets are *correlated with* increased lifespan (and correlated with increased blood glutathione). That’s different. What they did is feed different rats different diets and then compare results, in order to establish correlations between diet and some measured results.
– Not Reproducible
Is this research reproducible? No. It doesn’t provide adequate information to reproduce it. For example, it doesn’t specify enough details about the cages – e.g. whether there were wheels. To reasonably attempt to reproduce these results (and they should be reproduced, among other things, before humans change their diets) would require knowing details of what they did which are not provided.
Note these details also matter to things other than reproducing the results. For example, if the rats had no exercise, it could be that the observed correlation only happens with creatures that don’t exercise. Maybe humans have to avoid exercise for the dietary changes to work. Or maybe the opposite: the rats had wheels and it only works if you do exercise. You simply can’t know when dealing with inadequately controlled or inadequately reported correlational experiments.
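The exercise point can be made concrete with a simulation (a made-up model with hypothetical numbers, not the paper’s data): an unreported factor like exercise can produce a diet/lifespan correlation even when the diet has no causal effect on lifespan at all.

```python
# Hypothetical confound simulation: exercise drives BOTH diet and lifespan;
# the diet itself does nothing, yet a diet-lifespan correlation appears.

import random

random.seed(1)

rows = []
for _ in range(1000):
    exercises = random.random() < 0.5
    # In this made-up model, exercising animals happen to eat the low-Met
    # diet more often AND live longer; the diet has no effect of its own.
    low_met_diet = exercises or random.random() < 0.2
    lifespan = 30 + (6 if exercises else 0) + random.gauss(0, 1)
    rows.append((low_met_diet, lifespan))

low = [life for diet, life in rows if diet]
high = [life for diet, life in rows if not diet]

def mean(xs):
    return sum(xs) / len(xs)

# The low-Met group "lives longer", purely because of the hidden factor.
print(f"low-Met mean lifespan: {mean(low):.1f}, other: {mean(high):.1f}")
```

An experiment that doesn’t report or control the hidden factor can’t distinguish this scenario from genuine causation.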
– Inability To Count
In the “MATERIALS AND METHODS” section it says animals were housed in groups of 5, but it also says there were 16 control rats. 16 is not divisible by 5, so how is this possible? This is a HUGE problem. What is going on? Either they miscounted their rats, lied about using a 5-per-cage method, or mixed control and non-control rats in the same cages. This shows a level of extreme carelessness which means all results of the paper basically need to be ignored. At best you could think maybe they were onto something and try replicating it, but you can’t trust the conclusions of work like this.
– Inadequate Controls
As recently discussed at the GRG forum, it’s important to have calorie-restricted control rats because if you change anything (e.g. feed rats something different) they might eat less as a result and therefore get calorie restriction benefits. But this experiment says it gave all the rats unlimited food. (BTW if you do have calorie-restricted controls, how much do you restrict their calories? That seems problematic too.)
– No Sources Of Error Section
The paper doesn’t have a sources of error section. That’s very unscientific. You have to carefully consider all the ways your results could be false and share them along with your results.
– Unclear Statistical Methods
> Data are expressed as means ± SEM. Differences between groups were considered significant at P < 0.05 by Student’s t test or by ANOVA with Scheffe’s posthoc test.
Why are there two different statistical methods? Which was used when, and why? This sounds like a potential source of error: maybe they used whichever one would give them a bigger result. It’s very important to decide on statistical methods before doing the experiment and stick to them, or else the research is invalidated.
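The danger of leaving the method choice open can be demonstrated with a stdlib-only simulation (made-up null data, hypothetical threshold): if you run several comparisons on the same data and keep whichever looks best, “significant” gaps appear even when no real effect exists, far more often than the nominal 5%.

```python
# Hypothetical simulation: cherry-picking among comparisons inflates
# false positives. All three groups are drawn from the SAME distribution,
# so any "significant" gap found is spurious.

import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

trials = 2000
false_positives = 0
for _ in range(trials):
    # Three groups, identical distribution: no real effect exists.
    groups = [[random.gauss(0, 1) for _ in range(10)] for _ in range(3)]
    # Cherry-pick the pair of groups with the largest mean difference:
    diffs = [abs(mean(a) - mean(b))
             for i, a in enumerate(groups)
             for b in groups[i + 1:]]
    # 0.9 is roughly the gap a two-sample test would call "significant"
    # at n = 10 per group (an illustrative threshold, not exact).
    if max(diffs) > 0.9:
        false_positives += 1

print(f"cherry-picked 'significant' rate: {false_positives / trials:.0%}")
```

The cherry-picked rate comes out well above the 5% an honest, pre-committed single test would allow.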
– Correlation vs. Causation and Explanation
In the discussion of GSH, most of the reasoning given relates to correlations rather than causes or explanations. And the work itself focuses on correlation – which it then falsely presents as causation in the title.
Causes and explanations of what’s going on and how things work are what’s important. Correlations should be used in order to learn about causes and explanations, rather than being the focus. Each correlation should be presented along with clear acknowledgement that it’s a correlation, and a statement about what useful thing we can learn from it (regarding causes or explanations of what’s going on). This isn’t done.
They say their result is remarkable: they restricted Met, the only precursor to GSH the rats had, and then GSH levels were *higher*. That’s interesting, but note it’s a correlation. Then what do they do? Say this correlation “clearly suggest[s]” one particular conclusion, even though it’s logically compatible with many different explanations of the causation. This is very sloppy and an unacceptable way of trying to figure things out. To understand a correlation like this, you have to critically consider many hypotheses about what’s causing it, not just present one and falsely claim the correlation “clearly suggest[s]” it.
Also note that, given the remarkable result that Met restriction *raises* GSH, we should not assume this result will apply in all animals and humans with all kinds of other factors different. We should be extra careful with it.
– Scientific Method
The authors don’t understand the scientific method. The basic purpose of scientific experiments is to rule things out, but they present it as if they were positively establishing things. (For information on scientific methodology, see the books of Karl Popper and David Deutsch, such as _Conjectures and Refutations_ and _The Beginning of Infinity_.)
A good thing to consider when looking at science is what hypotheses they are trying to test, which may be ruled out by the experiment. What you need in order to learn something is two (or more) hypotheses which make contradictory predictions. Then you do an experiment and at least one is ruled out.
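That testing approach can be sketched in a few lines (made-up hypotheses and a pretend measurement, purely illustrative): two hypotheses make contradictory predictions, so one observed result must rule at least one of them out.

```python
# Minimal sketch of learning by ruling out: contradictory predictions,
# one observation, at least one hypothesis eliminated.

hypotheses = {
    "Met restriction raises GSH": lambda gsh_change: gsh_change > 0,
    "Met restriction lowers GSH": lambda gsh_change: gsh_change < 0,
}

observed_gsh_change = 0.8  # pretend measurement from an experiment

surviving = [name for name, predicts in hypotheses.items()
             if predicts(observed_gsh_change)]
ruled_out = [name for name in hypotheses if name not in surviving]

print("ruled out:", ruled_out)  # ruled out: ['Met restriction lowers GSH']
```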
This research isn’t done using that approach to scientific testing. So what good is it?