
Guide: Shortcuts to Scientific Literacy

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
This is a quick and dirty filtering system for interfacing with the scientific literature directly (cutting out the media middleman), reliably (filtering out the garbage), quickly (focusing you on the parts that matter), and honestly (filtering out your own biases when conclusions disagree). This is my method, and it is not the strictest, but it is a happy medium between finding strong evidence and respecting your time.

This process is the most valuable thing I know. While it's a bit of effort to understand, it changed my life and could change yours.

Use Google Scholar to filter out everything that is not science. If you are a student, you might have access to academic databases. When I'm a student I tend to prefer databases; when I'm not, Google Scholar will do.
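
If you want to script this step, a Scholar search is just a URL. A minimal sketch in Python (the as_ylo year-filter parameter is copied from URLs Scholar itself generates; treat it as an assumption rather than a documented API):

from urllib.parse import urlencode

def scholar_url(query, since_year=None):
    # Build a Google Scholar search URL. "as_ylo" is the "results since
    # this year" parameter observed in Scholar's own URLs; it may change.
    params = {"q": query}
    if since_year:
        params["as_ylo"] = since_year
    return "https://scholar.google.com/scholar?" + urlencode(params)

print(scholar_url("creatine cognition meta-analysis", since_year=2015))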

If an article is behind a paywall, you can look around for alternate versions. Otherwise, you can copy-paste the DOI or title into Sci-Hub and, if it's in their database, you'll have access. Sci-Hub is not legal, but many scientists use it and I've never met anyone who would take issue with you accessing their articles that way. Scientific journals are similar to record companies in that they take the bulk of the credit for the labour of the artists/scientists they represent.

Now you have access to most scientific conclusions, but there will often be many articles with at times contradictory results. Science is a liar sometimes and most people will accept the first article they find that substantiates their views. This is perhaps the greatest pitfall of scientific discourse. The way to get around this is to use objective standards to converge on the best papers.

Any papers you read, save. Have a filing system for them that makes them findable.

I personally limit myself to e-journals where I can, because academic referencing is a chore and these are the most convenient medium. I avoid books because, while accessible, they often skirt peer review.

The first filter is the...

hierarchy of evidence.png

Systematic reviews and metastudies (meta-analyses) are the gold standard for giving you a comprehensive understanding of the state of the scientific evidence. The next step down, the RCT (randomised controlled trial), will give you insight into a single study with a strong design, which is an enormous step down:

cancer.png


Every red dot in this picture represents the result of an entire study on the relationship between a specific consumable (y axis) and its estimated effect on cancer risk (x axis). The leftmost article on milk suggests drinking milk halves your likelihood of cancer. The rightmost article suggests it multiplies your risk by five. These are both scientific conclusions, but it would be irresponsible to assume the veracity of either given the greater context. This is what metastudies are for.

After the level of evidence, the most important aspects of a paper are its relevance to your question (read the abstract) and its age (check the publication date).

Article relevance can be less clear than you might expect. Scientists write to a technical audience and often use terms that are loaded with additional meaning specific to their area that you (presumably not a highly specialised researcher in this field) may not pick up on.

I've heard many aspiring scientists give rules of thumb for what an acceptable range of years is (usually 5-20 years), but there is no hard rule. The reason it is important to take the most recent evidence is that science does not stand still: most theories, while perhaps the best guess at the time, are later disproven or improved upon. The further back you take your evidence from, the more likely it is that this has happened and you are now relying on outdated information. This is a trick the media will play to support their material interest in a given conclusion, but if you talk to people in the field, references to outdated theories stand out like a sore thumb.

As a rule, I will prioritise newer studies over older ones unless there is a difference in the aforementioned evidential level. I will accept a paper from 50 years ago if there hasn't been a more relevant one since.

There are other filters like citation counts and impact rating, but they tend to be more complex and can be forgone if you find a metastudy. These metrics are useful for figuring out how relevant a paper is to scientists in the field, but the number alone is misleading. The main time I bother with them is when I can't find a metastudy, since a metastudy is already an overview of the field. If a paper has zero citations, it's worth looking up the quality of the journal it's from in case it's avoided peer review.
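
To make the ordering concrete, here is a toy sketch of the ranking logic above: evidence level first, then recency, then citations as a tie-breaker. Every paper, field name, and weight below is invented purely for illustration; the real work is still reading the abstracts.

# Toy ranking: evidence level beats recency, recency beats citation count.
# All data below is made up for illustration.
EVIDENCE_RANK = {"meta-analysis": 3, "systematic review": 3, "rct": 2, "cohort": 1, "case report": 0}

papers = [
    {"title": "Paper A", "type": "rct",           "year": 2021, "citations": 40},
    {"title": "Paper B", "type": "meta-analysis", "year": 2016, "citations": 310},
    {"title": "Paper C", "type": "case report",   "year": 2023, "citations": 2},
]

ranked = sorted(papers, key=lambda p: (EVIDENCE_RANK[p["type"]], p["year"], p["citations"]), reverse=True)
for p in ranked:
    print(p["title"], p["type"], p["year"])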

Now you've converged on the papers worth reading.

Scientists write to each other and not to you. They want other scientists to read their work and replicate it or build on it. But you don't need to know how to conduct the experiment in order to be informed, so most of this information is of limited use.

Here are the sections in your typical science report:
Sections.png

And this is the same information minus everything that is designed for replicability.
SectionsLess.png

Of this remaining information, contrary to the mindset of a good scientist, you want to start from the conclusion and work your way backward if you feel you still don't understand it. A part of you might feel like this is laziness, but it's not. By taking in as much of the big picture as possible as fast as possible, you are calibrating your brain to be more receptive to pertinent information. By understanding conclusions you are better positioned to understand premises.

1654129716895.png

With this in mind, the order I suggest reading articles in is:
  1. Conclusion and Recommendations
  2. Discussion (read if you didn't understand conclusion)
  3. Introduction and Literature Review (read if you didn't understand discussion)
After this, you can read the methods if you're curious about how things are measured, the results section if you're interested in the statistical process, and the references if you're looking for more (albeit older) relevant articles.

Now you understand what experts think.

Finally, and this is the important bit: accept the conclusion unless you've got good reason to think you know better than someone whose life is dedicated to this line of work. You don't need to assume it's 100% correct, but accepting that the evidence does not currently agree with you is a workable compromise. An understanding of statistics helps here, in that you will understand the degree to which the evidence agrees or disagrees with you. A lot of the time in the softer sciences, a metastudy will conclude that the evidence is mixed.

Historically, the only reasons I reject the conclusion of a paper are that its stats are weak (I set an alpha of .01; .05 is too unreliable), that the design is systematically biased, or that superior papers by the metrics outlined above harshly contradict its conclusions. Many readers compromise their efforts by trusting scientists to conduct a study, but somehow not trusting them to interpret the results.

Ultimately, a Bayesian view of the evidence is required, but there are so many additional layers of bias in the publication process that weak evidence can be dismissed in the presence of strong evidence (both because weak evidence is over-represented, and because it's easier).
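
For what a "Bayesian view" can look like in the simplest case, here is a sketch of updating prior odds with a Bayes factor. The numbers (prior, Bayes factors) are invented purely to show the arithmetic, not taken from any real study.

# Minimal Bayesian update on the odds scale:
# posterior odds = prior odds * Bayes factor. All numbers are invented.

def update(prior_prob, bayes_factor):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)  # back to a probability

# Sceptical prior (20% that the effect is real), then two pieces of evidence:
# a strong meta-analysis (BF = 10) and a weak single study (BF = 1.5).
belief = 0.20
for bf in (10, 1.5):
    belief = update(belief, bf)
    print(f"belief after BF={bf}: {belief:.2f}")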
 

dr froyd

__________________________________________________
Local time
Today 12:32 PM
Joined
Jan 26, 2015
Messages
1,485
---
i agree that looking at meta-analyses and systematic reviews is the most principled approach, but it's not immune to bias in the scientific consensus itself. If you looked at the scientific consensus in 1850, you would conclude that Semmelweis was an idiot for thinking that handwashing by surgeons prevented infections.

my method is even simpler: if you have a hypothesis, find papers that support and refute it, then try to judge which side has the highest quality of research.

i use the same method for finding good books: read the most positive and most negative reviews, and figure out which side sounds like they know what they are talking about.
 

Minuend

pat pat
Local time
Today 1:32 PM
Joined
Jan 1, 2009
Messages
4,142
---
What's your thought on corruption, tampering, and financially driven motivation in published science? And the bias toward publishing studies that show results vs those that do not? And asking biased questions that lead to biased results?

 

dr froyd

__________________________________________________
Local time
Today 12:32 PM
Joined
Jan 26, 2015
Messages
1,485
---
yeah, as long as prestige, financial rewards, and career prospects hinge on interesting statistical results, there will be survivorship bias in published results and one will get a massively higher rate of false positives than what the p-values suggest. That's the case even if there is no p-value hacking going on.
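
A back-of-the-envelope version of that point, with entirely invented inputs: if only "positive" results get published, the share of published findings that are false positives can be far larger than the nominal alpha.

# Toy calculation of how publication bias inflates the false-positive share
# among published results. All inputs below are invented for illustration.
n_hypotheses = 1000
prior_true   = 0.10   # fraction of tested hypotheses that are actually true
alpha        = 0.05   # false-positive rate per test
power        = 0.80   # chance of detecting a true effect

true_positives  = n_hypotheses * prior_true * power          # 80
false_positives = n_hypotheses * (1 - prior_true) * alpha    # 45

published = true_positives + false_positives
print(f"false positives among published findings: {false_positives / published:.0%}")  # ~36%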

the idea of publishing the hypothesis before doing the actual research, and then publishing the result regardless of the outcome, is a good one, but then again there are no journals that are gonna fill their pages with negative results. It's also not immune to misleading p-values, since it doesn't prevent the researcher from forming the hypothesis after having already seen some data that support it.
 

EndogenousRebel

Even a mean person is trying their best, right?
Local time
Today 6:32 AM
Joined
Jun 13, 2019
Messages
2,252
---
Location
Narnia
What a great public service.

The grievances people mention can be mitigated by avoiding certain errors. People only want to avoid errors when it comes to things that impact them, or things they believe will impact them.

1. Putting all your eggs in one basket.
-Non-peer reviewed material, unreferenced and uncited.

2. Mindlessly accepting the dominant conclusions/theories
-Even with peer-reviewed papers, if everyone does the same experiment, that doesn't rule out the experiment itself being biased. The solution is to look at different experiments/methodologies addressing the same or similar phenomena, which is what most meta-analyses feature.

3. The human brain is a very simple organ
-When it comes to things so complex that we are likely to run into things we don't know we don't know, it's safest to take ANYTHING ANYONE says about the cause of something with a grain of salt, especially an article by a journalist who has no skin in the game if the science is proven wrong.

A paper came out recently saying that people who smoke only marijuana and people who smoke only nicotine each show their own distinct impacts on the connections of the brain, but that users of both marijuana and nicotine together have brains similar to people who use neither substance.

People will parade this finding around even though, EVEN IF we accept the premise, there is still a multitude of other changes the substances can make to the brain. But it doesn't matter: science said something that some people want to hear, and others want to make money from. The paper may itself mention the implications of this finding, but that part will never be heard.
 

Ex-User (9086)

Prolific Member
Local time
Today 12:32 PM
Joined
Nov 21, 2013
Messages
4,758
---
Super useful, thanks Hado :) I struggle with scientific literacy and only recently made a habit of doing step 1. Reading the following steps is very helpful with making and improving my own systematic approach.
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
@dr froyd
Yeah, it's not foolproof; the zeitgeist shifts. Models are wrong but some are useful. You accept a margin of error going in and basically assume you're not going to outcompete specialists in their own field. This approach is about striking a balance between quality and quantity by outsourcing all of the hard work.

The scientists do the job of collecting the data and interpreting it. You take on their position as your own. The people you talk to then do the job of having a brain and criticising it, helping you identify gaps in the study or in how you understood it.

You need to file down your gyri to make your brain more aerodynamic ;)
 

BurnedOut

Your friendly neighborhood asshole
Local time
Today 6:02 PM
Joined
Apr 19, 2016
Messages
1,457
---
Location
A fucking black hole
Damn, thanks a lot for this post. I kept using Google's search operators to get my shit but Google Scholar is much more convenient. I guess another part of scientific literacy is to know basic statistics. Without knowledge of statistics, you will fuck up. For example, I stumble upon lots of studies on interesting topics but with very small sample sizes, many times below 100.
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
Regarding posting hypotheses prior to testing, I am highly in favour of this method, but froyd is right: journals won't publish them unless it's in their best interest.

I'd like it if there were some sort of webbed structure to papers, so that responses and replications are inseparable from a published paper. This would increase the visibility of these unpublished results. I consider the institution of scientific journals to be an instance of "toxic" capitalism in that it undermines the incentive structure for good research.
 

BurnedOut

Your friendly neighborhood asshole
Local time
Today 6:02 PM
Joined
Apr 19, 2016
Messages
1,457
---
Location
A fucking black hole
Alternative approaches using only search engines:
1. https://developers.google.com/search/docs/advanced/debug/search-operators/overview
2. Append 'PDF' to your search query to filter out useless expert opinion pieces and other popsci balderdash.
3. Use minimal grammar. Use key terms instead of full sentences.
4. Do use grammar when your query is very specific. For example, I wanted to research misogynistic attitudes among women; a terse query like 'misogyny in women' yielded irrelevant results.

Lastly, and the most, most, most important thing: the phrase 'significantly correlated' is never to be taken in a literal sense. It only has meaning in the context of statistics. It simply means that an observable correlation exists. Therefore, whenever a pop-psych article talks about 'correlations', take it with a grain of salt. The best examples are the articles on intelligence. They all say that 'there is a significant correlation between height and intelligence'. This means that height and intelligence are related, but the extent of that relationship is something you will have to check for yourself. Most of the time, serious research yields inconclusive results. If this is the case with the research you have chosen, it is likely that the authors took care to avoid making exciting conclusions. If a study of a relatively novel phenomenon features correlations that are too strong (especially anything related to psychology), there is probably something wrong with the research.

Now, go ahead young bird, test your skills: https://iheartintelligence.com/surprising-signs-of-intelligence/

Classic pop-psych bullshit. If you manage to find all the follies in this one, you have already learned the basics.
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
Google-fu: "misogyny" would be a better search term because almost all psychology studies will break results down by gender anyway.

Re: Significant Correlation
Yep, but this itself is complicated. Significance is not just "observability". Almost everything has an observable correlation but not that many are statistically significant.

A correlation coefficient's meaningfulness is relative to what you expect it to be, and your expectations should be based on a historical understanding of the field. So for personality psychology, .1 would actually be a pretty impressive effect size iirc because personality is complex with many factors (despite being objectively "small"). But if this were the correlation between testosterone and strength, maybe not.
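
A quick way to see the distinction between significance and effect size (requires numpy and scipy; the simulated data is obviously made up): with a big enough sample, a "small" correlation of around .1 is easily statistically significant.

# Significance and effect size are different questions: with n = 5000,
# a correlation of ~.1 gets a tiny p-value despite being "small".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 0.1 * x + rng.normal(size=n)   # true correlation of roughly .1

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.1e}")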

This is one of the areas where people from different fields often show over-confidence by generalising the norms from the fields they understand to the ones they don't. //siderant

These are some good red flags you've outlined. That PDF trick is a clever workaround too.

re: sample size
This is another one where people trip up. A sample under 100 can be perfectly okay depending on the design of the study. My thesis used only 20 participants, but for many of the variables p < .00001 because we took 840 data points from each. It's better to be concerned about sample size than to accept studies willy-nilly, but you will throw out a lot of valid results this way.

If the results are highly significant (check the p-value) then the sample size is probably justified.
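
To show roughly how 20 participants can still yield tiny p-values when each contributes many trials, here is a simulation under assumptions I've made up (effect size, noise levels, trial counts); a proper analysis of real repeated-measures data would use something like a mixed model rather than this shortcut. Requires numpy and scipy.

# Sketch: 20 participants, 840 trials each. Averaging trials within each
# participant gives 20 precise per-participant scores; a one-sample t-test
# across those means can then be extremely significant. All numbers invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_participants, n_trials = 20, 840
true_effect, between_sd, trial_sd = 5.0, 3.0, 40.0

participant_means = []
for _ in range(n_participants):
    participant_effect = rng.normal(true_effect, between_sd)
    trials = rng.normal(participant_effect, trial_sd, size=n_trials)
    participant_means.append(trials.mean())

t, p = stats.ttest_1samp(participant_means, 0)
print(f"t = {t:.1f}, p = {p:.1e}")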

By the same token, even extremely unlikely results can be p-hacked up to 5-6 sigma. IIRC this occurred multiple times when people were excited about the Higgs boson.
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
Would people be interested in doing a bookclubesque thread for sharing articles and interpretations? Collectively going through our process and discussing ups and downs?
 

dr froyd

__________________________________________________
Local time
Today 12:32 PM
Joined
Jan 26, 2015
Messages
1,485
---
Would people be interested in doing a bookclubesque thread for sharing articles and interpretations? Collectively going through our process and discussing ups and downs?
hell yeah
 

dr froyd

__________________________________________________
Local time
Today 12:32 PM
Joined
Jan 26, 2015
Messages
1,485
---
re: sample size
This is another one where people trip up. A sample under 100 can be perfectly okay depending on the design of the study. My thesis used only 20 participants, but for many of the variables p < .00001 because we took 840 data points from each. It's better to be concerned about sample size than to accept studies willy-nilly, but you will throw out a lot of valid results this way.

in theory even a sample size of 1 can be enough, depending on the distribution of the outcome under the null hypothesis.

the actual sampling process is much more important than the sample size
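
froyd's point in code form, under made-up assumptions: if the null hypothesis pins the distribution down tightly, a single observation far outside it is already decisive.

# One observation can be enough when H0 specifies a tight distribution.
# Suppose under H0 the measurement is N(0, 1) and we observe a value of 10.
from scipy import stats

observation = 10.0
p_one_sided = stats.norm.sf(observation)   # P(X >= 10 | H0)
print(f"p = {p_one_sided:.2e}")            # about 7.6e-24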
 

BurnedOut

Your friendly neighborhood asshole
Local time
Today 6:02 PM
Joined
Apr 19, 2016
Messages
1,457
---
Location
A fucking black hole
A larger sample size improves the accuracy of the test of your hypothesis, because even if you take 1000 data points from the same participants, they risk being autocorrelated (the same person applies the same cognitive biases to multiple factors at the same time).
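
That intuition can be quantified with the standard design-effect formula for clustered or repeated observations; the numbers plugged in below are invented, but the formula itself is textbook.

# Kish design effect: with m observations per person and intra-class
# correlation icc, the effective sample size shrinks accordingly.
def effective_n(n_total, m_per_person, icc):
    design_effect = 1 + (m_per_person - 1) * icc
    return n_total / design_effect

# 20 people x 50 measures each = 1000 data points, moderately correlated:
print(round(effective_n(1000, 50, 0.3)))   # ~64 effectively independent points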

About the bookclub, definitely in.
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
Yeah, you're right BurnedOut, but that's a second problem that sample size addresses.

The first issue is variance. Dr Froyd is right in that a sample size of one can be enough with respect to this. If we're testing the atomic structure of gold, we can have a sample size of one because the variance is zero.

The second issue that you're talking about is diluting systematic bias with a diverse sample. This is generally a good thing to do but again it depends on what you're measuring and whether there is a meaningful difference between participants.

Again, drawing on my thesis (the only meaningful science experience I have), we were measuring the influence of virtual lesions on motor evoked potentials and reaction time. We knew from prior studies what the variance between participants should look like and could assess from there whether our sample was unrepresentative of the greater population. We also screened out potential complications like neurological conditions, handedness, drug use, vision impairment, and even lab anxiety. Even then, the study was still classified as "exploratory", meaning that we weren't aiming to draw strong conclusions from our findings. Rather, the purpose of the whole experiment was to identify potential future research directions.

In fact, within-subject designs have a lot of value compared to between-subject designs in that, while your criticism regarding autocorrelation is correct, you also get to hold many other variables constant over many trials. There's a cost and a benefit.
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
What would make for a good first topic for the science book club? I'm happy to do one of the topics relevant to some of the forum discussions but we'll need to focus on the science and not our political positions.
 

BurnedOut

Your friendly neighborhood asshole
Local time
Today 6:02 PM
Joined
Apr 19, 2016
Messages
1,457
---
Location
A fucking black hole
I completely agree with all your points, but my concern is about a new reader. As a basic rule of thumb, I believe that until the reader is comfortable with the hoopla around statistical testing, generally looking out for a bigger sample size is an easy method. The trade-off is not that bad, because genuinely bad studies are weeded out with ease. In the case you mentioned, I don't disagree at all, but given how much crappy research gets featured in pop articles, these 3 rules can help a beginner:
1. Larger sample size
2. No very high correlations
3. Check the test methodology
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
Fair enough.

I guess we're sort of addressing different concerns. I tend not to read pop-sci unless someone refers me to it because I see it as increasing the number of things that have to go right for me to be correctly informed (the author has to understand what they're talking about and present it honestly, which is far from a given).

But yeah, there's certainly something to be said for screening these red flags.
 

EndogenousRebel

Even a mean person is trying their best, right?
Local time
Today 6:32 AM
Joined
Jun 13, 2019
Messages
2,252
---
Location
Narnia
When a study synthesizes other studies with the intent of making a model, I usually trust it, because if your intent is just to make a model, then your focus is entirely on whether the inputs that inform and justify the components of the model are good.

This is more or less trustworthy because the study is attached to the person's name, and that person is likely way more versed in the literature than you will ever be. This doesn't mean you shouldn't question their model or the evidence they use to substantiate it. It's just good to see the language of how they substantiate it, what words they use, and how much they rely on each reference to get the points across in the paper they are writing.

So synthesis papers that build models are a good way to begin, I think. There aren't many incentives behind getting them wrong at all... unless you think things like this graphic were intended to go viral or whatever.

conceptual framwork for emotions.png

--

I think a book club is exactly the point: getting around our deficiencies in knowledge the quickest way possible.

Assuming you aren't afraid of looking like a dunce or whatever, looking at different people's interpretations, the logic behind why they hold them, and how you came to your own interpretation is key.

We as humans are more invested in learning through discussions than in learning through one-dimensional text.

Also, https://www.researchgate.net/ is great for getting around restricted access to articles, because they usually include the references an article has even if you would normally have to pay for it.
 

BurnedOut

Your friendly neighborhood asshole
Local time
Today 6:02 PM
Joined
Apr 19, 2016
Messages
1,457
---
Location
A fucking black hole
Let us start it. I am definitely brimming with ideas that I want to share, especially my thoughts on the usage of algorithmic thinking.
 

Hadoblado

think again losers
Local time
Today 10:02 PM
Joined
Mar 17, 2011
Messages
7,065
---
Okay. I'll start off using the recent discussion of guns as a starter. I'll also make a thread for deciding what the next topic will be.
 