Correction re. Jeffreys' pseudocounts

parent 9a987c22be
commit a1b5eb7b90
@@ -25,7 +25,7 @@
 #TOC> ==========================================================================
 #TOC>
 #TOC>
 #TOC>   Section  Title                                                  Line
 #TOC> -----------------------------------------------------------------------
 #TOC>   1        Introduction                                             49
@@ -42,7 +42,7 @@
 #TOC>   4.2.1    An example from tossing dice                           452
 #TOC>   4.2.2    An example from lognormal distributions                574
 #TOC>   4.3      Kolmogorov-Smirnov test for continuous distributions   616
 #TOC>
 #TOC>
 #TOC> ==========================================================================
@@ -449,7 +449,7 @@ chisq.test(countsL1, countsG1.9, simulate.p.value = TRUE, B = 10000)
 # be applied to discrete distributions. But we need to talk a bit about
 # converting counts to p.m.f.'s.

 # === 4.2.1 An example from tossing dice

 # The p.m.f. of an honest die is (1:1/6, 2:1/6, 3:1/6, 4:1/6, 5:1/6, 6:1/6). But
 # there is an issue when we convert sampled counts to frequencies, and estimate
@@ -482,7 +482,7 @@ pmf
 # for ordered data one could substitute the average values of the two bracketing
 # outcomes. But a simple and quite robust solution is to add "pseudocounts".
 # This is called adding a Laplace prior, or a Jeffreys prior: in our case,
-# simply add 0.5 to every value that the two functions don't share.
+# simply add 0.5 to every category.

 # pmf of an honest die
 pmfHD <- rep(1/6, 6)
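The next hunk's header calls a `pmfPC()` function whose body is not part of this diff. As a sketch only: the name and the `pmfPC(counts, categories)` signature are inferred from that call, and the pseudocount of 0.5 from the corrected comment above; the body is an assumption, not the script's actual definition.

```r
# Sketch (assumption): turn observed counts into a p.m.f. with Jeffreys
# pseudocounts: add 0.5 to every category, then normalize to sum to one.
pmfPC <- function(counts, categories) {
  x <- numeric(length(categories))   # one slot per possible outcome
  names(x) <- categories
  x[names(counts)] <- counts         # fill in observed counts (e.g. from table())
  x <- x + 0.5                       # Jeffreys prior: no category stays at zero
  x / sum(x)                         # normalize to a probability mass function
}

# Usage: 30 die rolls in which outcome "6" happens never to occur; a naive
# frequency estimate would assign it probability zero, which breaks the
# KL-divergence. With pseudocounts every category gets positive mass.
rolls <- c(1, 1, 2, 3, 3, 3, 4, 5, 5, 2, 1, 4, 2, 3, 5,
           1, 2, 4, 3, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5)
pmfPC(table(rolls), 1:6)
```

Note that `table()` drops categories it never sees, which is exactly why the function takes the full set of possible outcomes as its second argument.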
@@ -571,7 +571,7 @@ abline(v = KLdiv(rep(1/6, 6), pmfPC(counts, 1:6)), col="firebrick")
 # somewhat but not drastically atypical.


 # === 4.2.2 An example from lognormal distributions

 # We had compared a set of lognormal and gamma distributions above; now we
 # can use KL-divergence to quantify their similarity:
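`KLdiv()` likewise appears only in a hunk header here. As a sketch, assuming the `KLdiv(p, q)` signature from that call and the standard discrete Kullback-Leibler divergence (the script's actual implementation may differ):

```r
# Sketch (assumption): discrete Kullback-Leibler divergence D(p || q), in nats,
# for two p.m.f. vectors of equal length. q must be strictly positive wherever
# it is compared against p, which the pseudocounts above guarantee.
KLdiv <- function(p, q) {
  stopifnot(length(p) == length(q), all(q > 0))
  sum(ifelse(p > 0, p * log(p / q), 0))   # terms with p == 0 contribute nothing
}

KLdiv(rep(1/6, 6), rep(1/6, 6))                          # identical p.m.f.s: 0
KLdiv(rep(1/6, 6), c(0.05, 0.15, 0.15, 0.15, 0.2, 0.3))  # different: > 0
```

The divergence is zero exactly when the two distributions coincide and positive otherwise (Gibbs' inequality), which is what makes it usable as a similarity measure for the lognormal/gamma comparison.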