Jordan Bryan

Incompleteness and the Underground

We hear within us the perpetual call: There is the problem. Seek its solution. You can find it by pure reason, for in mathematics there is no ignorabimus. Though not a writer himself, David Hilbert had a flair for the literary. And, at the moment in history when he spoke these words, he had every right to indulge it. When he delivered his ... Read more

Conjugate prior bootcamp

This post follows the table at the end of the Conjugate prior Wikipedia page to derive posterior distributions for parameters of a range of likelihood functions. Many resources for learning the mechanics of posterior inference under conjugate priors already exist, so there’s nothing particularly new to be seen here. However, maybe others learnin... Read more
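As a taste of the mechanics the post works through, here is a minimal sketch of one entry from that Wikipedia table: a Beta prior on a Bernoulli success probability. The function name and the toy data are illustrative, not from the post; the update rule itself is the standard conjugate result.

```python
# One conjugate update from the Wikipedia table: a Beta(alpha, beta)
# prior on the Bernoulli success probability p. After observing
# s successes in n trials, the posterior is Beta(alpha + s, beta + n - s).

def beta_bernoulli_update(alpha, beta, data):
    """Return posterior Beta parameters after observing 0/1 data."""
    s = sum(data)   # number of successes
    n = len(data)   # number of trials
    return alpha + s, beta + n - s

# Usage: a uniform Beta(1, 1) prior updated with 7 successes in 10 trials
post_alpha, post_beta = beta_bernoulli_update(1, 1, [1] * 7 + [0] * 3)
# posterior is Beta(8, 4), with posterior mean 8 / 12
```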

Pairwise independence does not imply countable mutual independence

Define a sequence of random variables indexed by \(n\) as follows:

- Divide the unit interval into sub-intervals of size \(1/3^n\)
- Let the random variable be equal to \(1\) if \(\omega\) falls within the “middle” third of each subinterval, and let it be equal to \(0\) otherwise

More formally, let \[X_n := \mathbf{1}_{B_n}\... Read more
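The construction in the excerpt can be sketched numerically. The indicator below implements "\(\omega\) lies in the middle third of its size-\(1/3^n\) subinterval"; since the full post is not shown here, the exact indexing convention is an assumption. Whatever the convention, the middle thirds cover total length \(1/3\), so each \(X_n\) should equal \(1\) with probability \(1/3\), which a quick Monte Carlo check confirms.

```python
import math
import random

def middle_third_indicator(omega, n):
    """X_n(omega): 1 if omega lands in the "middle" third of its
    size-1/3^n subinterval of [0, 1), else 0. (A sketch of the
    definition quoted above; the indexing convention is assumed.)"""
    # Scale so each third of each subinterval has unit length,
    # then check whether omega sits in a middle one.
    return 1 if math.floor(omega * 3 ** (n + 1)) % 3 == 1 else 0

# Spot checks: 0.5 lies in [4/9, 5/9), the middle third of [1/3, 2/3)
assert middle_third_indicator(0.5, 1) == 1
assert middle_third_indicator(0.05, 1) == 0

# The middle thirds have total length 1/3, so P(X_n = 1) = 1/3 for every n
random.seed(0)
freq = sum(middle_third_indicator(random.random(), 2) for _ in range(100_000)) / 100_000
# freq is close to 1/3
```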

Minimize a quadratic form on the 2d probability simplex

For a symmetric \(2 \times 2\) matrix \(\Sigma\), consider the following problem: \[\begin{aligned} & \underset{x}{\text{minimize}} & & x^T \Sigma x \\ & \text{subject to} & & x \geq 0, \; \mathbf{1}^T x = 1 \end{aligned}\] Let \[\Sigma = \left[ \begin{array}{cc} a & c \\ c & b \end{array} \right]\] and note t... Read more
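On the 2d simplex the problem is one-dimensional: substituting \(x = (t, 1-t)\) gives \(f(t) = a t^2 + 2 c t (1-t) + b (1-t)^2\), whose stationary point is \(t^* = (b-c)/(a+b-2c)\), clipped to \([0,1]\) when the curvature \(a + b - 2c\) is positive. The sketch below derives this closed form independently of the truncated post, so it may not match the post's own route to the answer.

```python
def min_quadratic_simplex_2d(a, b, c):
    """Minimize x^T Sigma x over the 2d probability simplex, where
    Sigma = [[a, c], [c, b]], via the substitution x = (t, 1 - t).
    (A closed-form sketch; the post's derivation may differ.)"""
    f = lambda t: a * t**2 + 2 * c * t * (1 - t) + b * (1 - t)**2
    curvature = a + b - 2 * c  # f''(t) up to a factor of 2
    if curvature > 0:
        # interior stationary point, clipped to the feasible interval
        t = min(1.0, max(0.0, (b - c) / curvature))
    else:
        # concave or flat along the simplex: optimum is at a vertex
        t = 0.0 if f(0.0) <= f(1.0) else 1.0
    return (t, 1.0 - t), f(t)

# Usage: Sigma = diag(2, 1) puts more weight on the smaller-variance coordinate
x, val = min_quadratic_simplex_2d(2.0, 1.0, 0.0)
# x = (1/3, 2/3), objective value 2/3
```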

The bias-variance circuit under g-priors

The bias-variance tradeoff is usually discussed in terms of the mean squared error (MSE) of a predictor. However, it can also be applied to estimates of coefficients in a linear model. Below we examine how bias and variance figure into the MSE of coefficient estimates under a \(g\)-prior. Assume a linear model of the form \[Y = X \beta + \epsi... Read more
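A small sketch of the shrinkage that drives the tradeoff: under a zero-mean Zellner \(g\)-prior, \(\beta \sim N(0, g \sigma^2 (X^T X)^{-1})\), the posterior mean is the OLS estimate scaled by \(g/(1+g)\), trading bias toward zero for reduced variance. This is the textbook form of the \(g\)-prior; the truncated post may use a different parameterization.

```python
import numpy as np

def g_prior_posterior_mean(X, y, g):
    """Posterior mean of beta under a zero-mean Zellner g-prior,
    beta ~ N(0, g * sigma^2 * (X^T X)^{-1}): the OLS estimate
    shrunk toward zero by the factor g / (1 + g). (Standard result;
    the post's setup is assumed, not quoted.)"""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (g / (1.0 + g)) * beta_ols

# Usage on simulated data from a known linear model
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.standard_normal(100)

beta_g = g_prior_posterior_mean(X, y, g=9.0)
# beta_g is the OLS estimate scaled by 9/10: biased toward zero, lower variance
```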