Hadoblado
This is the quick and dirty filtering system to interface with the scientific literature directly (cutting out the media middleman), reliably (filtering out the garbage), quickly (focusing you on the parts that matter), and honestly (filtering out your own biases when conclusions disagree). This is my method, and it is not the most strict, but it is a happy medium between finding strong evidence and respecting your time.
This process is the most valuable thing I know. While it's a bit of effort to understand, it changed my life and could change yours.
Use Google Scholar to filter out everything that is not science. If you are a student you might have access to databases; when I'm a student I tend to prefer databases, and when I'm not, Google Scholar will do.
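As a concrete sketch, you can build a Scholar search that skews toward review-level evidence straight from the query (the topic terms and year cutoff here are just invented examples):

```python
from urllib.parse import urlencode

# Hypothetical query: topic terms plus review-level keywords.
query = 'creatine ("meta-analysis" OR "systematic review")'
params = {"q": query, "as_ylo": 2015}  # as_ylo = earliest publication year to include
url = "https://scholar.google.com/scholar?" + urlencode(params)
print(url)
```

Paste the resulting URL into a browser and the first page is already filtered toward the evidence worth your time.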
If an article is behind a paywall, you can look around to find alternate versions. Otherwise, you can copy-paste the DOI or title into Sci-Hub, and if it's in their database you'll have access. Sci-Hub is not legal, but many scientists use it and I've never met anyone who would take issue with you accessing their articles that way. Scientific journals are similar to record companies in that they take the bulk of the credit for the labour of the artists/scientists they represent.
Now you have access to most scientific conclusions, but there will often be many articles with contradictory results. Science is a liar sometimes, and most people will accept the first article they find that substantiates their views. This is perhaps the greatest pitfall of scientific discourse. The way around it is to use objective standards to converge on the best papers.
Any papers you read, save. Have a filing system for them that makes them findable.
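The exact filing convention doesn't matter, only that you stick to one. As an example, here is a throwaway helper that builds a sortable, findable filename from year, author, and title (the convention itself is just one possibility):

```python
import re

def paper_filename(year, first_author, title, ext="pdf"):
    """Build a sortable filename like '2020 - doe - milk intake and cancer risk.pdf'."""
    def clean(s):
        # Drop characters that are unsafe in filenames, normalise case and spacing.
        return re.sub(r"[^\w\s-]", "", s).strip().lower()
    short_title = " ".join(clean(title).split()[:5])  # keep the first few title words
    return f"{year} - {clean(first_author)} - {short_title}.{ext}"

print(paper_filename(2020, "Doe", "Milk intake and cancer risk: a systematic review"))
```

Sorting a folder of these by name groups papers chronologically, and the title fragment makes them searchable.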
I personally limit myself to ejournals where I can because academic referencing is a chore and these are the most convenient medium. I avoid books because while accessible, they often skirt peer review.
The first filter is the level of evidence.
Systematic reviews and meta-analyses (metastudies) are the gold standard for giving you a comprehensive understanding of the state of the scientific evidence. The next step down, the randomised controlled trial (RCT), gives you insight into a single study with a strong design, which is an enormous step down:
Every red dot in this picture represents the result of an entire study on the relationship between a specific consumable (y axis) and its estimated effect on cancer risk (x axis). The leftmost article on milk suggests drinking milk halves your likelihood of cancer; the rightmost suggests it multiplies your risk by five. Both are scientific conclusions, but it would be irresponsible to assume the veracity of either given the greater context. This is what metastudies are for.
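To make that concrete, here is a toy version of the arithmetic a fixed-effect meta-analysis runs over those scattered dots (every number below is invented for illustration): each study is weighted by the inverse of its variance, so one noisy outlier claiming a five-fold risk barely moves the pooled estimate.

```python
import math

# Hypothetical single-study results: (relative risk, standard error of log RR).
# One study says milk halves cancer risk; another says it multiplies it by five.
studies = [(0.5, 0.40), (1.1, 0.15), (0.9, 0.10), (1.2, 0.20), (5.0, 0.60)]

# Fixed-effect inverse-variance pooling on the log scale:
# each study is weighted by 1/SE^2, so precise studies dominate noisy outliers.
weights = [1 / se**2 for _, se in studies]
pooled_log_rr = sum(w * math.log(rr) for (rr, _), w in zip(studies, weights)) / sum(weights)
pooled_rr = math.exp(pooled_log_rr)
print(f"Pooled relative risk: {pooled_rr:.2f}")  # lands near 1.0 despite the outliers
```

The extreme studies are also the imprecise ones, so the pooled estimate sits close to "no effect", which is exactly the correction a single cherry-picked study can't give you.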
After the level of evidence, the most important aspects are its relevance to your question (read the abstract), and its age (look at the publication date).
Article relevance can be less clear than you might expect. Scientists write to a technical audience and often use terms that are loaded with additional meaning specific to their area that you (presumably not a highly specialised researcher in this field) may not pick up on.
I've heard many aspiring scientists give rules of thumb for what an acceptable range of years is (usually 5-20 years), but there is no hard rule. The reason to take the most recent evidence is that science does not stand still: most theories, while perhaps the best guess at the time, are later disproven or improved upon. The further back your evidence comes from, the more likely it is that this has happened and you are now relying on outdated information. Citing outdated theories is a trick the media will play to support their material interest in a given conclusion, but if you talk to people in the field such references will stand out like a sore thumb.
As a rule, I will prioritise newer studies over older ones unless there is a difference in the aforementioned evidential level. I will accept a paper from 50 years ago if there hasn't been a more relevant one since.
There are other filters, like citation counts and impact factor, but they tend to be more complex and can be forgone if you find a metastudy. These metrics are useful for gauging how relevant a paper is to scientists in the field, but the number alone is misleading. The primary reason I bother with them is when I can't find a metastudy, because a metastudy is already an overview of the field. If a paper has zero citations, it's worth looking up the quality of the journal it's from in case it's avoided peer review.
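Put together, the filters above amount to a sort order: evidence level first, then recency within the same level. A sketch (the level labels and rankings are my own shorthand for the hierarchy, and the papers are made up):

```python
# Lower rank = stronger evidence, per the hierarchy discussed above.
LEVEL_RANK = {"meta-analysis": 0, "systematic review": 0, "rct": 1,
              "cohort": 2, "case-control": 3, "case report": 4}

papers = [
    {"title": "Single RCT on X", "level": "rct", "year": 2023},
    {"title": "Meta-analysis of X", "level": "meta-analysis", "year": 2018},
    {"title": "Old cohort study", "level": "cohort", "year": 1995},
]

# Evidence level first, then recency: a newer paper wins only within the same level.
ranked = sorted(papers, key=lambda p: (LEVEL_RANK[p["level"]], -p["year"]))
for p in ranked:
    print(p["year"], p["title"])
```

Note the 2018 meta-analysis outranks the 2023 RCT: recency only breaks ties, it never trumps the level of evidence.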
Now you've converged on the papers worth reading.
Scientists write to each other, not to you. They want other scientists to read their work and replicate it or build on it. But you don't need to know how to conduct the experiment in order to be informed, so most of this information is of limited use to you.
Here are the sections in your typical science report:
- Abstract
- Introduction and Literature Review
- Method
- Results
- Discussion
- Conclusion and Recommendations
- References
And this is the same information minus everything that is designed for replicability:
- Abstract
- Introduction and Literature Review
- Discussion
- Conclusion and Recommendations
Of this remaining information, and contrary to the mindset of a good scientist, you want to start from the conclusion and work your way backward if you feel you still don't understand it. A part of you might feel like this is laziness, but it's not. By taking in as much of the big picture as possible as fast as possible, you are calibrating your brain to be more receptive to pertinent information. By understanding conclusions you are better positioned to understand premises.
With this in mind, the order I suggest reading articles in is:
- Conclusion and Recommendations
- Discussion (read if you didn't understand conclusion)
- Introduction and Literature Review (read if you didn't understand discussion)
Now you understand what experts think.
Finally, and this is the important bit: accept the conclusion unless you've got good reason to think you know better than someone whose life is dedicated to this line of work. You don't need to assume it's 100% correct, but accepting that the evidence does not currently agree with you is a workable compromise. An understanding of statistics helps here, in that you will understand the degree to which the evidence agrees or disagrees with you. A lot of the time in the softer sciences, a metastudy will conclude that the evidence is mixed.
Historically, the only reasons I reject the conclusion of a paper are that its stats are weak (I set an alpha of .01; .05 is too unreliable), that the design is systematically biased, or that superior papers by the metrics outlined above sharply contradict its conclusions. Many readers compromise their efforts by trusting scientists to conduct a study but somehow not trusting them to interpret the results.
Ultimately, a Bayesian view of the evidence is required, but there are so many additional layers of bias in the publication process that weak evidence can be dismissed in the presence of strong evidence (both because weak evidence is over-represented, and because it's easier).
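As a toy illustration of that Bayesian view (all the numbers here are invented): each study multiplies your odds by a likelihood ratio, so one strong supportive result can outweigh a couple of weak contrary ones.

```python
# Hypothetical likelihood ratios: how many times more probable each study's
# result is under "the effect is real" than under "no effect".
prior_odds = 1.0                      # start agnostic: 1:1 odds
likelihood_ratios = [9.0, 0.8, 0.7]   # one strong supportive, two weakly contrary

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr              # Bayesian updating: odds multiply

posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"Posterior probability the effect is real: {posterior_prob:.2f}")
```

The two weak contrary studies drag the odds down a little, but the strong result dominates, which is the numerical version of "weak evidence can be dismissed in the presence of strong evidence".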