


Why does finding small effects in large studies indicate publication bias?



Several methodological papers (e.g. Egger et al 1997a, 1997b) discuss publication bias as revealed by meta-analyses, using funnel plots such as the one below.
[Figure: funnel plot of beta blockers in myocardial infarction]



The 1997b paper goes on to say that "if publication bias is present, it is expected that, of published studies, the largest ones will report the smallest effects." But why is that? It seems to me that all this would prove is what we already know (small effects are only detectable with large sample sizes), while telling us nothing about the studies that remained unpublished.



Also, the cited work claims that asymmetry visually assessed in a funnel plot "indicates that there was selective non-publication of smaller trials with less sizeable benefit." But, again, I don't understand how any features of the studies that were published can possibly tell us anything (allow us to make inferences) about works that were not published!



References

Egger, M., Smith, G. D., & Phillips, A. N. (1997). Meta-analysis: principles and procedures. BMJ, 315(7121), 1533-1537.



Egger, M., Smith, G. D., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ, 315(7109), 629-634.










Tags: meta-analysis, publication-bias

Asked yesterday by z8080; edited yesterday by Alexis.

Comments:












  • I do not think you have this the right way round. Perhaps the answer to this Q&A might help: stats.stackexchange.com/questions/214017/… – mdewey, yesterday








  • For a small study to get published at all, it will have to show a large effect no matter what the true effect size is. – einar, yesterday
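
A quick numerical check of einar's point, as a minimal sketch (the sample sizes and the unit SD are illustrative assumptions, not from the thread): the smallest mean a one-sample t-test can declare significant at alpha = .05 shrinks with the square root of the sample size, so very small studies can only ever publish large estimates.

# Smallest significant |mean| for a one-sample t-test at alpha = .05,
# assuming the sample SD is 1 (illustrative):
n <- c(10, 100, 1000)
qt(.975, df = n - 1) / sqrt(n)  # approx. 0.72, 0.20, 0.06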
















3 Answers


















Answer by einar (score 18; answered 22 hours ago):

The answers here are good, +1 to all. I just wanted to show how this effect might look in funnel plot terms in an extreme case. Below I simulate a small effect as $N(.01, .1)$ and draw samples between 2 and 2000 observations in size.



The grey points in the plot would not be published under a strict $p < .05$ regime. The grey line is a lowess regression of effect size on sample size including the "bad p-value" studies, while the red one excludes them. The black line shows the true effect.



As you can see, under publication bias there is a strong tendency for small studies to overestimate effect sizes and for the larger ones to report effect sizes closer to the truth.





# Requires the plyr package. Simulate 1000 studies of a true effect of
# 0.01 (SD 0.1), with sample sizes drawn between 2 and 2000.
set.seed(20-02-19)

n_studies <- 1000
sample_size <- sample(2:2000, n_studies, replace = TRUE)

studies <- plyr::aaply(sample_size, 1, function(size) {
  dat <- rnorm(size, mean = .01, sd = .1)
  c(effect_size = mean(dat), p_value = t.test(dat)$p.value)
})

studies <- cbind(studies, sample_size = log(sample_size))

# "Published" studies are those reaching p < .05.
include <- studies[, "p_value"] < .05

plot(studies[, "sample_size"], studies[, "effect_size"],
     xlab = "log(sample size)", ylab = "effect size",
     col = ifelse(include, "black", "grey"), pch = 20)
# Grey: lowess over all studies; red: over the "published" studies only.
lines(lowess(x = studies[, "sample_size"], studies[, "effect_size"]), col = "grey", lwd = 2)
lines(lowess(x = studies[include, "sample_size"], studies[include, "effect_size"]), col = "red", lwd = 2)
abline(h = .01)  # the true effect


Created on 2019-02-20 by the reprex package (v0.2.1)






Comments:
  • Excellent point, really helps understanding this intuitively, thanks! – z8080, 21 hours ago






  • +1 This graphic is worth a thousand words and summarizes the problem well. This type of bias can even be found when the true effect size is 0. – Underminer, 15 hours ago



















Answer by Cliff AB (score 27; answered yesterday, edited yesterday):

First, we need to think about what "publication bias" is and how it will affect what actually makes it into the literature.



A fairly simple model for publication bias is that we collect some data and, if $p < 0.05$, we publish; otherwise, we don't. So how does this affect what we see in the literature? Well, for one, it guarantees that $|\hat\theta| / SE(\hat\theta) > 1.96$ (assuming a Wald statistic is used). Now, one point being made is that if $n$ is really small, then $SE(\hat\theta)$ is relatively large and a large $|\hat\theta|$ is required for publication.



Now suppose that in reality $\theta$ is relatively small. Suppose we run 200 experiments: 100 with really small sample sizes and 100 with really large sample sizes. Of the 100 small-sample experiments, the only ones that will get published under our simple publication-bias model are those with large values of $|\hat\theta|$, just due to random error. However, in our 100 experiments with large sample sizes, much smaller values of $\hat\theta$ will be published. So if the larger experiments systematically show smaller effects than the smaller experiments, this suggests that perhaps $|\theta|$ is actually substantially smaller than what we typically see from the smaller experiments that actually make it into publication.



Technical note: it's true that either a large $|\hat\theta|$ or a small $SE(\hat\theta)$ will lead to $p < 0.05$. However, since effect sizes are typically thought of as relative to the standard deviation of the error term, these two conditions are essentially equivalent.
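
A minimal simulation of this 200-experiment thought experiment (the true effect of 0.05, the sample sizes of 20 and 2000, and the unit SD are illustrative assumptions, not taken from the answer):

# 100 small (n = 20) and 100 large (n = 2000) experiments with a small
# true effect theta = 0.05; "publish" only when p < .05.
set.seed(1)
theta <- 0.05
run <- function(n) {
  dat <- rnorm(n, mean = theta, sd = 1)
  c(est = mean(dat), p = t.test(dat)$p.value)
}
small <- t(replicate(100, run(20)))
large <- t(replicate(100, run(2000)))
# Mean published |estimate| in each group:
mean(abs(small[small[, "p"] < .05, "est"]))  # roughly 0.5: about ten times the truth
mean(abs(large[large[, "p"] < .05, "est"]))  # close to the true 0.05

The handful of small studies that clear the significance bar do so only by overshooting, which is exactly the asymmetry a funnel plot picks up.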






share|cite|improve this answer











$endgroup$













  • "Now, one point being made is that if $n$ is really small, then $SE(\hat\theta)$ is relatively large and a large $|\hat\theta|$ is required for publication." This is not, technically speaking, necessarily true: $SE(\hat\theta) = \frac{SD(\theta)}{\sqrt{n}}$, so if $SD(\theta)$ is very small, a small $SE$ may result even for a small sample size, right? EDIT: Oh wait! Just read your closing sentence. :) +1 – Alexis, yesterday





















Answer by Bryan Krause (score 19; answered yesterday, edited 5 hours ago by Neil G):

Read this statement a different way:



If there is no publication bias, effect size should be independent of study size.



That is, if you are studying one phenomenon, the effect size is a property of the phenomenon, not the sample/study.



Estimates of effect size could (and will) vary across studies, but if there is a systematically decreasing effect size with increasing study size, that suggests there is bias. The whole point is that this relationship suggests there are additional small studies showing low effect sizes that have not been published; if they were published and could therefore be included in a meta-analysis, the overall impression would be that the effect size is smaller than what is estimated from the published subset of studies.



The variance of the effect size estimates across studies will depend on sample size, but you should see an equal number of under and over estimates at low sample sizes if there were no bias.
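
To make that benchmark concrete, here is a minimal sketch with no selection at all (the true effect of 0.2 and the sample sizes are illustrative assumptions): when every study is "published", estimates stay centred on the truth at every $n$, and over- and under-estimates balance.

set.seed(42)
ns <- c(10, 100, 1000)
# 2000 studies per sample size, none filtered out:
ests <- lapply(ns, function(n) replicate(2000, mean(rnorm(n, mean = 0.2))))
sapply(ests, mean)                       # approx. 0.2 at every n: centred on the truth
sapply(ests, function(e) mean(e > 0.2))  # approx. 0.5 at every n: equal over- and under-estimates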






Comments:









  • But is it really correct to say that "If there is no publication bias, effect size should be independent of study size"? This is true of course when you refer to the true underlying effect, but I think they are referring to the estimated effect. An effect size that is dependent on study size (suggesting bias) amounts to a linear relationship in that scatter plot (high correlation). This is something I have personally not seen in any funnel plots, even though of course many of those funnel plots did imply that a bias existed. – z8080, yesterday






  • @z8080 You're right; only if the mean and standard deviation estimates are unbiased will the estimated effect size be completely independent of study size when there is no publication bias. Since the sample standard deviation is biased, there will be some bias in the effect size estimates, but that bias is small compared to the level of bias across studies that Egger et al. are referring to. In my answer I'm treating it as negligible, assuming the sample size is large enough that the SD estimate is nearly unbiased, and therefore considering it to be independent of study size. – Bryan Krause, yesterday








  • @z8080 The variance of the effect size estimates will depend on sample size, but you should see an equal number of under- and over-estimates at low sample sizes. – Bryan Krause, yesterday






  • "Estimates of effect size could (and will) vary across studies, but if there is a systematic relationship between effect size and study size": that phrasing is a bit unclear about the difference between dependence and effect size. The distribution of effect size will be different for different sample sizes, and thus will not be independent of sample size, regardless of whether there's bias. Bias is a systematic direction of the dependence. – Acccumulation, yesterday










  • @Acccumulation Does my edit fix the lack of clarity you saw? – Bryan Krause, yesterday










