We hear within us the perpetual call: There is the problem. Seek its solution. You can find it by pure reason, for in mathematics there is no ignorabimus. Though not a writer himself, David Hilbert had a flair for the literary. And, at the moment in history when he spoke these words, he had every right to indulge it. When he delivered his … (14 Jan 2019, 26 minute read)

This post follows the table at the end of the Conjugate prior Wikipedia page to derive posterior distributions for the parameters of a range of likelihood functions. Many resources for learning the mechanics of posterior inference under conjugate priors already exist, so there’s nothing particularly new to be seen here. However, maybe others learning … (13 Dec 2018, 14 minute read)
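As a minimal sketch of the kind of update that table encodes, here is the Beta–Bernoulli case: a Beta(\(\alpha, \beta\)) prior on a Bernoulli parameter updates in closed form to Beta(\(\alpha + \sum x_i,\; \beta + n - \sum x_i\)). The particular prior parameters and true \(\theta\) below are illustrative assumptions, not values from the post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Beta(alpha, beta) prior on the Bernoulli success probability theta
alpha, beta = 2.0, 2.0

# Simulated data: n Bernoulli draws with an assumed true theta of 0.7
theta_true = 0.7
x = rng.binomial(1, theta_true, size=1000)

# Conjugate update: posterior is Beta(alpha + sum(x), beta + n - sum(x))
alpha_post = alpha + x.sum()
beta_post = beta + len(x) - x.sum()

# Posterior mean shrinks the sample mean toward the prior mean
post_mean = alpha_post / (alpha_post + beta_post)
print(post_mean)
```

With 1,000 observations the prior's pull is slight, so the posterior mean sits close to the empirical frequency.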

Define a sequence of random variables indexed by \(n\) as follows:

- Divide the unit interval into sub-intervals of size \(1/3^n\).
- Let the random variable be equal to \(1\) if \(\omega\) falls within the “middle” third of its subinterval, and let it be equal to \(0\) otherwise.

More formally, let \[X_n := \mathbf{1}_{B_n}\]… (28 Nov 2018, 2 minute read)
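Reading the truncated definition as stated, \(\omega \in [0,1]\) is in the middle third of its length-\(1/3^n\) subinterval exactly when the \((n+1)\)-th ternary digit of \(\omega\) equals 1; the simulation below is a sketch under that assumption.

```python
import numpy as np

def X(n, omega):
    """Indicator that omega lies in the middle third of its length-1/3**n subinterval.

    The middle third of [k/3**n, (k+1)/3**n) is [(3k+1)/3**(n+1), (3k+2)/3**(n+1)),
    i.e. the (n+1)-th ternary digit of omega equals 1.
    """
    return int(np.floor(omega * 3 ** (n + 1)) % 3 == 1)

# P(X_n = 1) = 1/3 for every n; check by Monte Carlo
rng = np.random.default_rng(0)
omegas = rng.random(100_000)
for n in range(3):
    print(n, np.mean([X(n, w) for w in omegas]))
```

Each \(X_n\) has the same marginal distribution, which is what makes the sequence a useful test case for different modes of convergence.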

For a symmetric \(2 \times 2\) matrix \(\Sigma\), consider the following problem: \[\begin{aligned} & \underset{x}{\text{minimize}} & & x^T \Sigma x \\ & \text{subject to} & & x \geq 0, \; \mathbf{1}^T x = 1 \end{aligned}\] Let \[\Sigma = \left[ \begin{array}{cc} a & c \\ c & b \end{array} \right]\] and note that … (06 Nov 2018, 1 minute read)
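In the \(2 \times 2\) case the problem reduces to one variable: substituting \(x = (t, 1-t)\) gives \(f(t) = a t^2 + 2 c t (1-t) + b (1-t)^2\), with stationary point \(t^\star = (b - c)/(a + b - 2c)\) clipped to \([0, 1]\). A sketch of that closed form (the function name is mine, and the corner-case handling is my reading of the degenerate case, since the post's derivation is truncated):

```python
import numpy as np

def min_simplex_quadratic(a, b, c):
    """Minimize x^T Sigma x over the simplex {x >= 0, 1^T x = 1},
    for Sigma = [[a, c], [c, b]] symmetric 2x2."""
    # One-variable objective: f(t) = a t^2 + 2 c t (1-t) + b (1-t)^2
    denom = a + b - 2 * c
    if denom > 0:
        # Convex in t: stationary point, projected onto [0, 1]
        t = np.clip((b - c) / denom, 0.0, 1.0)
    else:
        # Concave or linear in t: the minimum sits at a corner
        t = 0.0 if b <= a else 1.0
    x = np.array([t, 1 - t])
    return x, x @ np.array([[a, c], [c, b]]) @ x
```

For example, with \(a = 2, b = 1, c = 0\) the minimizer is \(x = (1/3, 2/3)\), the familiar inverse-variance weighting from minimum-variance portfolios.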

The bias-variance tradeoff is usually discussed in terms of the mean squared error (MSE) of a predictor. However, it can also be applied to estimates of coefficients in a linear model. Below we examine how bias and variance figure into the MSE of coefficient estimates under a \(g\)-prior. Assume a linear model of the form \[Y = X \beta + \epsilon\]… (05 Nov 2018, 4 minute read)
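Under Zellner's \(g\)-prior the posterior mean of \(\beta\) is the OLS estimate shrunk by \(g/(1+g)\), so the decomposition \(\text{MSE} = \text{bias}^2 + \text{variance}\) can be checked coordinate-wise by simulation. The design, true \(\beta\), and \(g\) below are illustrative assumptions, not the post's values.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, g, sigma = 100, 3, 10.0, 1.0
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5])
shrink = g / (1 + g)  # g-prior posterior mean shrinks OLS toward zero by g/(1+g)

# Repeatedly redraw the noise and record the shrunk coefficient estimates
reps = 2000
estimates = np.empty((reps, p))
for r in range(reps):
    y = X @ beta + sigma * rng.standard_normal(n)
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    estimates[r] = shrink * beta_ols

# Empirical bias^2, variance, and MSE per coefficient
bias2 = (estimates.mean(axis=0) - beta) ** 2
var = estimates.var(axis=0)
mse = ((estimates - beta) ** 2).mean(axis=0)
print(bias2, var, mse)
```

The decomposition holds exactly (up to floating point) because it is an algebraic identity of the empirical moments; shrinkage trades a little squared bias, of order \(\beta_j^2/(1+g)^2\), for a \(g^2/(1+g)^2\) reduction in variance.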