When presenting p-values, journals tend to request a lot of finessing.
table_pvalue()
is meant to do almost all of that finessing for you.
The part it does not do is interpret the p-value. For that,
please see the guidance on interpretation of p-values from the American
Statistical Association (Wasserstein and Lazar, 2016). The six main statements
on p-value usage are included in the "Interpreting p-values" section
below.
table_pvalue(
  x,
  round_half_to = "even",
  decimals_outer = 3L,
  decimals_inner = 2L,
  alpha = 0.05,
  bound_inner_low = 0.01,
  bound_inner_high = 0.99,
  bound_outer_low = 0.001,
  bound_outer_high = 0.999,
  miss_replace = "",
  drop_leading_zero = TRUE
)
x  a vector of numeric values. All values should be > 0 and < 1. 

round_half_to  a character value indicating how to break ties when the rounded unit is exactly halfway between two rounding points. See round_half_even and round_half_up for details. Valid inputs are 'even' and 'up'. 
decimals_outer  number of decimals to print when p < bound_inner_low or p > bound_inner_high (but p is still within the outer bounds).
decimals_inner  number of decimals to print when bound_inner_low <= p <= bound_inner_high.

alpha  a numeric value indicating the significance level, i.e., the probability of mistakenly rejecting the null hypothesis when it is true.
bound_inner_low  the lower bound of the inner range. 
bound_inner_high  the upper bound of the inner range. 
bound_outer_low  the lowest value printed. Values lower than the threshold will be printed as <threshold. 
bound_outer_high  the highest value printed. Values higher than the threshold will be printed as >threshold. 
miss_replace  a character value that replaces missing values. 
drop_leading_zero  a logical value. If TRUE, the leading zero is dropped from the printed p-value (e.g., 0.04 is printed as .04).
a character vector of formatted p-values, with one element per element of x.
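The tie-breaking behavior selected by round_half_to can be illustrated with base R alone: round() in base R already rounds half to even (IEC 60559, "banker's rounding"), and a hand-rolled round_half_up() below (a minimal stand-in sketch, not the package's helper of the same name) shows the alternative:

```r
# Round half to even: base R's round() follows IEC 60559, so values
# exactly halfway between rounding points go to the even neighbor.
round(c(0.5, 1.5, 2.5))
#> [1] 0 2 2

# Round half up: ties always move to the larger rounding point.
# Minimal stand-in sketch for illustration only.
round_half_up <- function(x, digits = 0) {
  floor(x * 10^digits + 0.5) / 10^digits
}
round_half_up(c(0.5, 1.5, 2.5))
#> [1] 1 2 3
```

The values 0.5, 1.5, and 2.5 are chosen because they are exactly representable in binary floating point, so the ties are genuine; arbitrary decimal inputs may sit slightly off the halfway point once stored as doubles.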
The American Statistical Association (ASA) defines the p-value as follows:
A p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value.
It then provides six principles to guide p-value usage:
P-values can indicate how incompatible the data are with a specified statistical model. A p-value provides one approach to summarizing the incompatibility between a particular set of data and a proposed model for the data. The most common context is a model, constructed under a set of assumptions, together with a so-called "null hypothesis". Often the null hypothesis postulates the absence of an effect, such as no difference between two groups, or the absence of a relationship between a factor and an outcome. The smaller the p-value, the greater the statistical incompatibility of the data with the null hypothesis, if the underlying assumptions used to calculate the p-value hold. This incompatibility can be interpreted as casting doubt on or providing evidence against the null hypothesis or the underlying assumptions.
P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone. Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither. It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself.
Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold. Practices that reduce data analysis or scientific inference to mechanical "bright-line" rules (such as "p < 0.05") for justifying scientific claims or conclusions can lead to erroneous beliefs and poor decision making. A conclusion does not immediately become "true" on one side of the divide and "false" on the other. Researchers should bring many contextual factors into play to derive scientific inferences, including the design of a study, the quality of the measurements, the external evidence for the phenomenon under study, and the validity of assumptions that underlie the data analysis. Pragmatic considerations often require binary, "yes-no" decisions, but this does not mean that p-values alone can ensure that a decision is correct or incorrect. The widespread use of "statistical significance" (generally interpreted as "p < 0.05") as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process.
Proper inference requires full reporting and transparency. P-values and related analyses should not be reported selectively. Conducting multiple analyses of the data and reporting only those with certain p-values (typically those passing a significance threshold) renders the reported p-values essentially uninterpretable. Cherry picking promising findings, also known by such terms as data dredging, significance chasing, significance questing, selective inference, and "p-hacking," leads to a spurious excess of statistically significant results in the published literature and should be vigorously avoided. One need not formally carry out multiple statistical tests for this problem to arise: Whenever a researcher chooses what to present based on statistical results, valid interpretation of those results is severely compromised if the reader is not informed of the choice and its basis. Researchers should disclose the number of hypotheses explored during the study, all data collection decisions, all statistical analyses conducted, and all p-values computed. Valid scientific conclusions based on p-values and related statistics cannot be drawn without at least knowing how many and which analyses were conducted, and how those analyses (including p-values) were selected for reporting.
A p-value, or statistical significance, does not measure the size of an effect or the importance of a result. Statistical significance is not equivalent to scientific, human, or economic significance. Smaller p-values do not necessarily imply the presence of larger or more important effects, and larger p-values do not imply a lack of importance or even lack of effect. Any effect, no matter how tiny, can produce a small p-value if the sample size or measurement precision is high enough, and large effects may produce unimpressive p-values if the sample size is small or measurements are imprecise. Similarly, identical estimated effects will have different p-values if the precision of the estimates differs.
By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis. Researchers should recognize that a p-value without context or other evidence provides limited information. For example, a p-value near 0.05 taken by itself offers only weak evidence against the null hypothesis. Likewise, a relatively large p-value does not imply evidence in favor of the null hypothesis; many other hypotheses may be equally or more consistent with the observed data. For these reasons, data analysis should not end with the calculation of a p-value when other approaches are appropriate and feasible.
Wasserstein, Ronald L., and Nicole A. Lazar. "The ASA statement on p-values: context, process, and purpose." The American Statistician 70.2 (2016): 129-133. DOI: https://doi.org/10.1080/00031305.2016.1154108
Other table helpers:
table_ester(), table_glue(), table_value()
# Guideline by the American Medical Association Manual of Style:
# Round p-values to 2 or 3 digits after the decimal point depending
# on the number of zeros. For example,
# - Change .157 to .16.
# - Change .037 to .04.
# - Don't change .047 to .05, because it will no longer be significant.
# - Keep .003 as is because 2 zeros after the decimal are fine.
# - Change .0003 or .00003 or .000003 to <.001
#
# In addition, the guideline states that "expressing P to more than 3
# significant digits does not add useful information." You may or may not
# agree with this guideline (I do not agree with parts of it),
# but you will (hopefully) appreciate `table_pvalue()` automating these
# recommendations if you submit papers to journals associated with
# the American Medical Association.

pvals_ama <- c(0.157, 0.037, 0.047, 0.003, 0.0003, 0.00003, 0.000003)
table_pvalue(pvals_ama)
#> [1] ".16"   ".04"   ".047"  ".003"  "<.001" "<.001" "<.001"

# `table_pvalue()` will fight valiantly to keep your p-value < alpha if
# it is < alpha. If it's >= alpha, `table_pvalue()` treats it normally.

pvals_close <- c(0.04998, 0.05, 0.050002)
table_pvalue(pvals_close)
#> [1] ".04998" ".05"    ".05"
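To make the AMA rule above concrete, here is a minimal base-R sketch of it. This is an illustration only, not the package's implementation: it hard-codes the default bounds and deliberately omits the alpha-preserving behavior, tie-breaking options, and other arguments that table_pvalue() provides.

```r
# Format p-values per the basic AMA rule: 2 decimals for p >= .01,
# 3 decimals for p < .01, "<.001" below .001, leading zero dropped.
# Illustrative sketch only -- unlike table_pvalue(), it does not
# protect p-values near alpha (e.g., it would turn .047 into .05).
format_pvalue_ama <- function(p) {
  vapply(p, function(x) {
    if (is.na(x)) return("")
    if (x < 0.001) return("<.001")
    out <- if (x < 0.01) sprintf("%.3f", x) else sprintf("%.2f", x)
    sub("^0", "", out)  # drop the leading zero, per AMA style
  }, character(1))
}

format_pvalue_ama(c(0.157, 0.037, 0.003, 0.0003))
#> [1] ".16"   ".04"   ".003"  "<.001"
```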