Jordan Bryan

Conjugate prior bootcamp

This post follows the table at the end of the Conjugate prior Wikipedia page to derive posterior distributions for parameters of a range of likelihood functions. Many resources for learning the mechanics of posterior inference under conjugate priors already exist, so there’s nothing particularly new to be seen here. However, maybe others learnin... Read more
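As a minimal instance of the pattern that table encodes, here is a sketch of the Beta-Bernoulli case (the pair chosen here for illustration; the post itself works through the full table):

```python
# Sketch: Beta-Bernoulli conjugate update.
# Prior: theta ~ Beta(a, b); likelihood: x_i ~ Bernoulli(theta), i.i.d.
# Posterior: theta | x ~ Beta(a + sum(x), b + n - sum(x)).

def beta_bernoulli_posterior(a, b, xs):
    """Return posterior Beta parameters after observing 0/1 data xs."""
    s = sum(xs)
    return a + s, b + len(xs) - s

# Uniform prior Beta(1, 1), three successes in four trials:
a_post, b_post = beta_bernoulli_posterior(1, 1, [1, 0, 1, 1])
print(a_post, b_post)  # → 4 2
```

The update is just bookkeeping: successes accumulate in the first shape parameter, failures in the second, which is exactly what makes the pair conjugate.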

Pairwise independence does not imply countable mutual independence

Define a sequence of random variables indexed by \(n\) as follows:

- Divide the unit interval into sub-intervals of size \(1/3^n\).
- Let the random variable be equal to \(1\) if \(\omega\) falls within the “middle” third of each subinterval, and let it be equal to \(0\) otherwise.

More formally, let \[X_n := \mathbf{1}_{B_n}\]... Read more
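The indicator described above can be sketched directly; the helper below is my own illustration of the stated construction, not code from the post:

```python
def X(n, omega):
    """Indicator that omega lies in the middle third of its
    length-1/3**n subinterval of the unit interval."""
    # Rescale omega's position within its subinterval to [0, 1):
    pos = (omega * 3**n) % 1.0
    return 1 if 1/3 <= pos < 2/3 else 0

# For n = 1 the subintervals are [0, 1/3), [1/3, 2/3), [2/3, 1];
# omega = 0.5 sits in the middle third of [1/3, 2/3):
print(X(1, 0.5))  # → 1
```

Sampling \(\omega\) uniformly and evaluating `X(n, omega)` for several `n` gives a quick empirical check that each \(X_n\) is Bernoulli with success probability \(1/3\).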

Minimize a quadratic form on the 2d probability simplex

For a symmetric \(2 \times 2\) matrix \(\Sigma\), consider the following problem: \[\begin{aligned} & \underset{x}{\text{minimize}} & & x^T \Sigma x \\ & \text{subject to} & & x \geq 0, \; \mathbf{1}^T x = 1 \end{aligned}\] Let \[\Sigma = \left[ \begin{array}{cc} a & c \\ c & b \end{array} \right]\] and note t... Read more
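Parameterizing the simplex as \(x = (t, 1-t)\) turns the problem into a one-dimensional quadratic in \(t\), which can be solved in closed form. The sketch below follows that reduction; the endpoint rule for the non-convex case is my own handling of the edge cases:

```python
def min_quadratic_on_simplex(a, b, c):
    """Minimize [t, 1-t]^T Sigma [t, 1-t] over t in [0, 1],
    where Sigma = [[a, c], [c, b]] is symmetric 2x2."""
    # Objective along the simplex: f(t) = a t^2 + 2c t(1-t) + b (1-t)^2.
    # f'(t) = 2 t (a + b - 2c) + 2 (c - b), so the stationary point
    # is t = (b - c) / (a + b - 2c) when the denominator is positive.
    denom = a + b - 2 * c
    if denom > 0:
        t = min(max((b - c) / denom, 0.0), 1.0)  # clip to the simplex
    else:
        # Concave or linear along the simplex: minimum at an endpoint.
        # f(0) = b and f(1) = a, so pick whichever vertex is smaller.
        t = 0.0 if b <= a else 1.0
    val = a * t * t + 2 * c * t * (1 - t) + b * (1 - t) * (1 - t)
    return t, val

t_star, val = min_quadratic_on_simplex(2, 1, 0)
print(t_star, val)  # diagonal Sigma = [[2, 0], [0, 1]]
```

For the diagonal example, the stationary point is \(t^* = (1-0)/(2+1-0) = 1/3\), with objective value \(2/3\).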

The bias-variance tradeoff under g-priors

The bias-variance tradeoff is usually discussed in terms of the mean squared error (MSE) of a predictor. However, it can also be applied to estimates of coefficients in a linear model. Below we examine how bias and variance figure into the MSE of coefficient estimates under a \(g\)-prior. Assume a linear model of the form \[Y = X \beta + \epsi... Read more


The Cancer Data Science team and others at the Broad Institute came out with an article in Nature Communications today detailing a new method for normalizing and integrating multiple large RNAi screening datasets. It’s called DEMETER2, named after the Greek goddess of the harvest. DEMETER2 uses a hierarchical Bayesian model to estimate gene dep... Read more