Saturday, January 15, 2011
Changes
I'm not happy with the current design of this blog. Expect more changes to the background, text color, and format as soon as Blogger starts cooperating.
More rambling about science
1. (No) Fear
With all my senseless ranting about ESP, I forgot to mention another psychology paper making the rounds in the popular press. Like the ESP paper, the authors and journal are both prominent within the field of psychology. Unlike the ESP paper, this one, The Human Amygdala and the Induction and Experience of Fear, has already been published.
The paper describes a single subject with a specific genetic condition that resulted in a lesion of her amygdala. The authors go on to describe how she can no longer experience fear. The account is interesting and not without scientific merit, but the authors (and the popular press) have, in my mind, overstepped the data. For one thing, the subject has damage to a variety of structures surrounding the amygdala. Also, the paper doesn't describe experimental data, but rather a series of interview responses and anecdotes. I don't mean to imply that this sort of data is invalid, but rather that it doesn't support the strong conclusions made in the paper and in the popular press.
2. Strong Claims in Science
I mentioned in the ESP post that I think this sort of thing is a big problem in psychology, and indeed I think it's a systemic problem that affects every branch of science. Publications (and citations) are the primary currency in academics, with careers literally decided by the number (and quality) of published works. However, publication does not occur in a vacuum. Papers that make very strong claims (ESP exists!) will be more heavily cited, more broadly discussed, and thus more valuable as academic currency than papers that make relatively weaker claims. The problem is that the latter papers often feature better scientific methods than the former.
This all depends on how you define proper scientific methodology (and science itself), of course, but it is becoming increasingly clear to me that the way to become an eminent scientist is to make strong claims that may (or may not) be based on evidence, rather than applying scrupulous methods and getting into the nitty-gritty aspects of research. I haven't done any formal work on this, but I can't even count the number of papers I've read in top-tier journals that feature shoddy methods.
3. So what?
There really isn't another metric for judging an academic other than publications and citations, so I don't think the situation is going to change. However, I think the increasing prevalence of science blogs and other forums for discussion may do a lot to reveal the flaws in scientific papers. Perhaps this will lead researchers to be more cautious about making strong claims and more diligent about applying good methodology, but probably not.
Black Swan
I recently realized that Darren Aronofsky makes genre films. I guess this is sort of obvious, given that his films are about professional wrestlers and neuroscientist space knights, but somehow I missed it until I saw Black Swan.
Though most of Aronofsky's previous work (Requiem for a Dream aside) can be loosely categorized as science fiction, Black Swan is essentially a horror film. The plot has a lot of standard horror-movie tropes, elevated by an intensely claustrophobic atmosphere and a great performance by Natalie Portman.
The story is an (even more) melodramatic take on Swan Lake with added psychodrama and body horror. The film isn't for the faint of heart, with some incredibly disturbing visuals. Really, everything about Black Swan is unrelentingly dark. Even the ballet sequences, which with a few notable exceptions are the least fantastical parts of the film, are hard to watch. Despite the darkness and melodrama, the film is tightly made and incredibly engaging. If you aren't bothered by disturbing images and a plot made up of unrelenting darkness, I highly recommend it.
Aronofsky's next film is The Wolverine. Given his distinct directorial vision and the intense darkness of his previous films, I'm interested to see how he's going to handle a mainstream superhero movie.
Thursday, January 13, 2011
There is no such thing as ESP
There is something of a major controversy brewing over an article, set to be published in a highly respected social psychology journal, that purportedly demonstrates scientific evidence for the existence of extra-sensory perception. Now, I've read the actual paper... and I don't buy it, not for a second. Here is a decidedly strong opinion from someone who's trained not to have them about potential behavioral phenomena: There is no such thing as ESP.
Though the author of the paper has published work on ESP before, the catalyst of the current controversy seems to be an article published in the New York Times. The issue at hand isn't the existence of ESP, which most people dismiss outright, but the use of significance testing as the primary statistical tool in psychology. I can't really get into the ins and outs of significance testing here, but it is the method most commonly used to determine whether the differences observed in psychology studies are due to experimental manipulation or simply chance. The vast majority of my formal statistics training has been in learning how to properly apply these methods, and much of the results section of my master's thesis is devoted to reporting how my experiments failed to show statistical differences using significance testing.
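To make that concrete, here's a minimal sketch of the kind of significance test I'm talking about: compare two groups and ask how often a difference that large would show up by chance alone. The groups, effect size, and sample sizes here are all invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical reaction times (ms) for a control and a treatment group.
control = rng.normal(loc=500, scale=50, size=30)
treatment = rng.normal(loc=480, scale=50, size=30)

# Two-sample t-test: the p-value estimates how often a difference this large
# would arise if both groups were actually drawn from the same distribution.
t_stat, p_value = stats.ttest_ind(control, treatment)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```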
A follow-up article in the Times discusses this in more detail and ends up condemning an entire field for using a form of statistics that's been widely accepted for almost a century. Hilariously, this same article implies that psychology should adopt statistical methods similar to those used in medical studies as well as newer methods such as Bayesian statistics. This is hilarious for a few reasons. First, and I'm not going to cite specific people/departments/papers here as a professional courtesy, but in my experience the closer a study is to examining anything medically related, the worse the statistics. Second, any psychology researcher worth his or her salt is already using a variety of statistical tools, including both significance testing AND Bayesian methods. Psychologists may be behind in a lot of things, but learning how to use (or abuse) new methods in statistics is not one of them (especially for those of us who use fMRI). Also. Bayesian statistics, for all their current popularity, are not without a myriad of problems of their own.
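For anyone curious what the Bayesian alternative actually looks like in practice, here's a rough sketch: instead of computing a p-value, put a prior on the quantity of interest and look at the posterior. The 53-hits-out-of-100 figure is a number I made up, not anything from the actual paper.

```python
from scipy import stats

# Hypothetical "better than chance" performance on a forced-choice task.
hits, trials = 53, 100

# Flat Beta(1, 1) prior on the true hit rate, updated with the data.
posterior = stats.beta(1 + hits, 1 + (trials - hits))

# Posterior probability that the true hit rate exceeds chance (0.5).
p_above_chance = 1 - posterior.cdf(0.5)
print(f"P(hit rate > 0.5 | data) = {p_above_chance:.2f}")
```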
In terms of statistics in psychology (or really, statistics in anything), the issue isn't which method is better than another, but when any given method should be applied. More succinctly, you need to know how to use statistics to properly use statistics. Different methods are useful for different things. Standard significance testing may be wholly inappropriate for some medical studies, but Bayesian statistics is not at all useful for answering questions in most branches of psychology (I actually apply it in fMRI analysis, but that's beside the point). Also, and I realize this may be particularly jarring to people in "hard" sciences, no statistics are objective. This may be especially evident with significance testing, but any number generated in any scientific pursuit is subject to interpretation. In significance testing, where p=0.05 is supposedly the threshold denoting statistical significance, results are commonly reported as significant when they are as "insignificant" as p=0.1. It's not the math that's potentially problematic, it's the scientists using it. Incidentally, that previous sentence is why I think philosophy of science is so interesting.
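To show how slippery that 0.05 cutoff can be in practice, here's a small simulation of my own (not anything from the papers discussed): the same modest effect, run repeatedly with the same sample size, lands on both sides of the threshold depending on nothing but sampling noise.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate many replications of an experiment with a small but genuine effect.
p_values = []
for _ in range(1000):
    control = rng.normal(0.0, 1.0, size=20)
    treatment = rng.normal(0.4, 1.0, size=20)   # modest true difference
    p_values.append(stats.ttest_ind(control, treatment).pvalue)

p_values = np.array(p_values)
print(f"replications with p < 0.05:        {np.mean(p_values < 0.05):.0%}")
print(f"replications with 0.05 <= p < 0.1: {np.mean((p_values >= 0.05) & (p_values < 0.1)):.0%}")
```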
Back to the ESP paper... I don't think the statistical methods are the source of the result. From what I can tell, the proper statistics were used given the experimental design. ESP obviously isn't real, so where do the results come from? Reading between the lines of the actual paper, it seemed to me that a lot of the experiments featured rather lax experimenter controls (poor RA oversight, subjects who were not blind to the purpose of the study) that likely contributed to the statistical error that pushed performance ever so slightly above chance level, which could then be interpreted as possible evidence for a phenomenon that may or may not be ESP. So really the controversy should be about poor laboratory controls in research published in top-tier journals and overstating what your results actually mean, phenomena that I feel are startlingly widespread, rather than the (mis)use of a particular statistical methodology.
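As a toy illustration of that "ever so slightly above chance" pattern (again, my own sketch with invented numbers, not anything taken from the paper): if some procedural leak nudges the true hit rate from 50% to, say, 53%, a large enough number of trials will reliably turn that into a "significant" result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

n_trials = 1000
true_rate = 0.53   # chance is 0.50; the extra 3% stands in for a procedural bias
hits = rng.binomial(n_trials, true_rate)

# One-sided binomial test against chance performance.
result = stats.binomtest(hits, n_trials, p=0.5, alternative="greater")
print(f"hit rate = {hits / n_trials:.3f}, p = {result.pvalue:.4f}")
```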
Also. There is no such thing as ESP.
Obviously.
Snow
Lots to write about, but I'm occupied digging out my car/house/shoes from the latest snow debacle. Something resembling content is coming soon.