P-curve Resources

Hi everyone,

I mentioned I would share a few resources on the p-curve to supplement my recent “Practical Skills for Navigating the Crisis” lecture; they should also be useful for the homework. I have organized the resources into three categories.

Interpretation:
To build a better understanding of the motivation behind the p-curve, its inferential tests, and its correct interpretation, please first read the original paper, “P-curve: A Key to the File Drawer” (2014).

In a follow-up paper, “Better P-curves: Making P-curve Analysis More Robust to Errors, Fraud, and Ambitious P-hacking, a Reply to Ulrich and Miller (2015),” the authors extended the original method so that its inferences are not misled by what they term “ambitious p-hacking”: continuing to p-hack even after obtaining a p-value below .05, which is considerably less likely and more difficult, but plausible. Their proposed half p-curve test attempts to detect this by testing for right skew (and for low power) using only the p-values between 0 and .025 (see the sketch below).

You can find both papers on the authors’ website.
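To make the full- and half-curve right-skew tests concrete, here is a minimal sketch in R using made-up p-values. It applies Stouffer’s method to the “pp-values” (each significant p-value rescaled by the significance cutoff); note that the actual p-curve app recomputes p-values from the reported test statistics and runs additional tests (e.g., the 33%-power test), so treat this only as an illustration of the core idea.

```r
# Minimal sketch of the full- and half-curve right-skew tests (Stouffer's method).
# The p-values below are hypothetical example values, not from any real study.
pvals <- c(0.004, 0.011, 0.023, 0.001, 0.041)

stouffer_skew_test <- function(p, cutoff) {
  p  <- p[p < cutoff]                      # keep only p-values below the cutoff
  pp <- p / cutoff                         # "pp-values": p-values conditional on p < cutoff under H0
  z  <- sum(qnorm(pp)) / sqrt(length(p))   # Stouffer's combined z
  c(z = z, p_right_skew = pnorm(z))        # small p => evidence of right skew (evidential value)
}

stouffer_skew_test(pvals, cutoff = 0.05)   # full p-curve: all p < .05
stouffer_skew_test(pvals, cutoff = 0.025)  # half p-curve: only p < .025, robust to ambitious p-hacking
```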

Finally, for a concise overview of the p-curve method and its relation to a broader set of meta-analysis techniques, such as the previously mentioned funnel plot, you can refer to the chapter on Publication Bias from “Doing Meta-Analysis in R” (Harrer et al. 2021).

Conducting a P-curve analysis:

Please read and follow the p-curve user guide. It describes the procedure, the appropriate statistical tests to include in the analysis, and the importance of generating a complete disclosure table.

Moreover, the authors make a number of suggestions that I outline below:
1.) Decide on your selection rule in advance.
2.) Disclose this selection rule.
3.) If a study is ambiguous, report the p-curve both with and without that study, which they term a robustness check.
4.) If you are doing a p-curve of a single article with multiple studies, try to also include a direct replication of those studies in your analysis.

To compute the p-curve (step 4 from the user guide), you can use any of the following resources.

1) P-curve app 

2) Code from the p-curve section in “Doing Meta-Analysis in R”

3) The p-curve library for R*


To run the R code, you can download RStudio locally (along with the miniconda package manager), use RStudio Cloud, or use Replit.

* This resource gives you an estimate of the heterogeneity (I²) of your selected studies and warns you if it is higher than the recommended threshold for applying a p-curve (I² ≤ 50%).
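If you go the R route, a minimal sketch of the workflow might look like the following. It assumes the ‘meta’ and ‘dmetar’ packages used in the Harrer et al. book, and the effect sizes, standard errors, and study labels are made up for illustration; check the book’s p-curve section for the exact arguments and interpretation.

```r
# Minimal sketch, assuming the 'meta' and 'dmetar' packages (see Harrer et al. 2021).
# The effect sizes (d) and standard errors (se) below are hypothetical.
library(meta)
library(dmetar)

dat <- data.frame(
  study = c("Study 1", "Study 2", "Study 3", "Study 4"),
  d     = c(0.42, 0.30, 0.55, 0.18),   # hypothetical standardized mean differences
  se    = c(0.15, 0.12, 0.20, 0.10)    # hypothetical standard errors
)

# Pool the effects first; pcurve() takes the resulting meta-analysis object.
m <- metagen(TE = d, seTE = se, studlab = study, data = dat, sm = "SMD")

# Check heterogeneity before trusting the p-curve (rule of thumb: I^2 <= 50%).
m$I2

# Run the p-curve analysis (right-skew and flatness tests, power estimate, plot).
pcurve(m)
```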

Caveats of the P-curve:

The authors note a series of limitations of the p-curve. For example, it “does not yet technically apply to studies analyzed using discrete test statistics” and is “less likely to conclude data have evidential value when a covariate correlates with the independent variable of interest.” Figure 3a of the original 2014 paper, for instance, shows that you can get a right-skewed p-curve even with p-hacking if the findings also contain fraud, evidential value, or confounds.

Problems have also been noted with inconsistent disclosure of studies and with quantities computed using the p-curve, such as the “average power” of the studies, the effect size estimated after correcting for publication bias, and incorrect effect size estimates when the underlying effects are heterogeneous.

The statistical problems are discussed in this blog post by Andrew Gelman, a statistician at Columbia University.

For a better and more comprehensive understanding of each problem, read the following resources:
Average power:
“Average Power: A Cautionary Note” (McShane et al. 2020)

Adjusted effect size:
“Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods” (2019)

Heterogeneity:
The section on p-curve heterogeneity in “Doing Meta-Analysis in R”


Importance of disclosure of studies:
The p-curve section, in light of the power-posing research, from “Transparent and Reproducible Social Science Research: How to Do Open Science” (Christensen et al. 2018)

These resources should help you interpret the p-curve accurately. Finally, if you would like to quantify how much evidential value is in the studies you selected (the adjusted effect size), then in addition to reading the paper above, I would encourage you to look through the book by Harrer et al. and to incorporate other meta-analytic methods into your final analysis, such as Ulrich Schimmack’s z-curve (a rough sketch follows below).
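As a starting point, here is a minimal, hedged sketch of running a z-curve analysis in R. It assumes the ‘zcurve’ package and uses simulated, hypothetical p-values; treat the function arguments as an approximation and consult the package documentation before relying on it.

```r
# Minimal sketch, assuming the 'zcurve' package; the p-values are simulated, not real.
library(zcurve)

set.seed(1)
z_true <- rnorm(50, mean = 2, sd = 1)      # 50 hypothetical studies with a modest true effect
pvals  <- 2 * pnorm(-abs(z_true))          # their two-sided p-values
z      <- qnorm(1 - pvals / 2)             # convert two-sided p-values back to |z| scores

fit <- zcurve(z = z)   # fit the z-curve model
summary(fit)           # expected discovery and replicability rates
plot(fit)              # visualize the fitted z-curve
```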


For further discussion of these caveats, refer to the resources below.

Discussions of right-skewed vs. left-skewed p-curves:

Left skew vs. right skew prevalence


Finally, I will add an updated set of slides (including new resources on journals that accept preregistrations/registered reports, as well as improved takeaways) to the website for your review.


Thanks, and feel free to reach out to me, David, or Matt if you have any questions regarding the homework.


Best,

Noah