Undocumented Machine Learning (VI): Logistic Regression

In this blog post, I will discuss the logistic regression model and share some thoughts of mine on the relation between logistic regression and naive Bayes.

1. Logistic Regression

The binary logistic regression classifier assumes that the label {y\in\{0,1\}} follows the Bernoulli distribution

\displaystyle  p(y|\mathbf{x},\mathbf{w},w_0)= \pi^{y}(1-\pi)^{1-y}

where

\displaystyle  \pi=\sigma(\mathbf{w}^T\mathbf{x}+w_0)

The logistic sigmoid function is

\displaystyle  \sigma(a)=\frac{1}{1+\exp(-a)}

Given a data set {(\mathbf{y},\mathbf{X})} of independent samples, the solution of the model can be found by maximizing the log-likelihood with respect to the parameters {\mathbf{w}} and {w_0}

\displaystyle  \max_{\mathbf{w},w_0} ~\ln p(\mathbf{y}|\mathbf{X},\mathbf{w},w_0)

where

\displaystyle  \ln p(\mathbf{y}|\mathbf{X},\mathbf{w},w_0)=\sum_{i=1}^n\ln p(y_i|\mathbf{x}_i,\mathbf{w},w_0)

The classification boundary of the logistic classifier is a linear hyperplane. However, the optimization problem can be ill-posed. For example, if the data lie in a subspace, there are infinitely many hyperplanes in the ambient space cutting through the same linear boundary inside that subspace. To overcome this problem, a prior can be introduced for {\mathbf{w}}, and a Gaussian prior is the most commonly used choice

\displaystyle  p(\mathbf{w})=\mathcal{N}(\mathbf{w}|\mathbf{0},\lambda^{-1}\mathbf{I})=\left(\frac{\lambda}{2\pi}\right)^{d/2}\exp\left(-\frac{\lambda}{2}\|\mathbf{w}\|^2\right)

The posterior then is

\displaystyle  \begin{array}{rcl}  p(\mathbf{w}|\mathbf{X},\mathbf{y},w_0)&\propto &p(\mathbf{y}|\mathbf{X},\mathbf{w},w_0)p(\mathbf{w})\\ &\propto & \prod_{i=1}^n \pi_i^{y_i}(1-\pi_i)^{1-y_i}\exp\left(-\frac{\lambda}{2}\|\mathbf{w}\|^2\right) \end{array}

where {\pi_i=\sigma(\mathbf{w}^T\mathbf{x}_i+w_0)}.

The solution can be found from the MAP estimation

\displaystyle  \max_{\mathbf{w},w_0} ~\ln p(\mathbf{w}|\mathbf{y},\mathbf{X},w_0)

which is equivalent to minimizing such a regularized error function

\displaystyle  -\sum_{i=1}^n[y_i\ln \pi_i+(1-y_i)\ln(1-\pi_i)]+\frac{\lambda}{2}\|\mathbf{w}\|^2

Note that here we place the penalty only on {\mathbf{w}}, not on {w_0}. For a fixed {\lambda}, this objective ensures that the solution for {\mathbf{w}} is translation invariant. Much of the literature overlooks this fact and also places a prior over {w_0}, which is wrong.
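To make this concrete, here is a minimal sketch (in Python/NumPy, not from any particular library) of MAP estimation for this model by plain gradient descent; the learning rate, iteration count, and regularization strength are arbitrary illustrative choices, and the penalty is applied to {\mathbf{w}} only.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fit_logreg_map(X, y, lam=1.0, lr=0.1, n_iter=2000):
    # X is d x n (columns are samples), y holds labels in {0,1}
    d, n = X.shape
    w, w0 = np.zeros(d), 0.0
    for _ in range(n_iter):
        pi = sigmoid(w @ X + w0)        # predicted probabilities
        err = pi - y                    # gradient of the cross-entropy term
        grad_w = X @ err + lam * w      # the Gaussian prior penalizes w ...
        grad_w0 = err.sum()             # ... but not the bias w0
        w -= lr * grad_w / n
        w0 -= lr * grad_w0 / n
    return w, w0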

For the multiclass problem, we use the indicator vector {\mathbf{y}=[y_k]_{k=1}^K,y_k\in\{0,1\}} as the label. {\mathbf{y}} is assumed to follow a categorical distribution (a special case of the multinomial distribution)

\displaystyle  p(\mathbf{y}|\mathbf{x},\mathbf{W},\mathbf{w}_0)=\prod_{k=1}^K\pi_k^{y_k}

where {\mathbf{W}=[\mathbf{w}_k]_{k=1}^K}, {\mathbf{w}_0=[w_{k0}]_{k=1}^K} and

\displaystyle  p(y_k|\mathbf{x})=\pi_k=\frac{\exp(\mathbf{w}_k^T\mathbf{x}+w_{k0})}{\sum_k \exp(\mathbf{w}_k^T\mathbf{x}+w_{k0})} \ \ \ \ \ (1)

The model in eq. (1) is also called a log-linear model.

2. Discussion

I don’t want to open the deep generative-vs-discriminative discussion here (maybe in a later post), just share some of my thoughts. From the previous post, we can see that the decision function of naive Bayes is the same as that of logistic regression, eq. (1). Both of them are log-linear models, which means both methods operate in the same function space; the difference is that they use different procedures to find the solution point in that space. Logistic regression minimizes an error function derived from eq. (1), whereas naive Bayes computes an integral derived from eq. (1). It is usually believed that logistic regression makes less restrictive assumptions about the data and is therefore more general than naive Bayes. However, from what I have observed, this belief is not necessarily true. Since the functional form of the pdf they use is the same, they should have similar discriminative power.

In the naive Bayes model, the fact that the actual data violate the assumptions does not mean that the solution is invalid for the data. Different assumptions about the data (e.g., Gaussian and multinomial) may lead to the same decision boundary, so an invalid assumption may still lead to a valid solution. For example, suppose your data are actually generated from a Gaussian, but you make the multinomial assumption to get a solution. This solution may turn out to be (approximately) the same as the solution obtained when the Gaussian assumption is made in your naive Bayes model. That is why, although the assumptions of naive Bayes are not quite right, you can still get a good classifier for your data. For the same reason, we can also argue that even if the data violate the feature-independence assumption, the naive Bayes solution might still be a valid one. This phenomenon can be attributed to the fact that, although naive Bayes starts from the feature-independence assumption, in the end it is a log-linear model, which covers more than feature-independent distributions. Though starting from something wrong to get something right is not the right thing to do per se.

On the other hand, although logistic regression does not assume feature independence, the log-linear assumption already restricts it to the subset of data whose features are correlated in a particular way. For example, given two classes of Gaussian data, the optimal decision boundary is linear only if the features of the two classes are correlated in the same way (equal covariance matrices). For such correlated data, naive Bayes might still work.

The conclusion here is that we cannot tell which one is better simply by looking at the assumptions. What really matters is the fact that both models are log-linear, eq. (1).


Undocumented Machine Learning (V): Naive Bayes

In this blog post, I will show that the decision function of naive Bayes is the same as that of logistic regression for observations from the exponential family.

1. Naive Bayes with Gaussian

The basic assumption of Naive Bayes is {p(\mathbf{x}|y_k)=\prod_jp(x_j|y_k)}. The posterior is

\displaystyle p(y_k|\mathbf{x})=\frac{p(y_k)p(\mathbf{x}|y_k)}{\sum_k p(y_k)p(\mathbf{x}|y_k)}=\frac{p(y_k)\prod_j p(x_j|y_k)}{\sum_kp(y_k)\prod_j p(x_j|y_k)}

Usually in books, {p(\mathbf{x}|y_k)} is a multinomial distribution. However, there is nothing preventing us from using other distributions, as long as the variables of different dimensions are independent. For example, for continuous variables, we can use a Gaussian

\displaystyle p(x_j|y_k)=\mathcal{N}(x_j|\mu_{kj},\sigma_{kj}^2)=\frac{1}{\sigma_{kj}\sqrt{2\pi}}\exp\left(-\frac{(x_j-\mu_{kj})^2}{2\sigma_{kj}^2}\right)

\displaystyle p(y_k)=\pi_k

\displaystyle  \begin{array}{rcl}  	p(y_k)p(\mathbf{x}|y_k)&=&p(y_k)\prod_j p(x_j|y_k)=\exp\left(\ln p(y_k)+\sum_j\ln p(x_j|y_k)\right)\\ 	&=&\exp\left(\ln \pi_k-\sum_j\frac{\mu_{kj}^2}{2\sigma_{kj}^2}+\sum_j\frac{\mu_{kj}}{\sigma_{kj}^2}x_j\right)\prod_j\frac{1}{\sigma_{kj}\sqrt{2\pi}}\exp\left(-\frac{x_j^2}{2\sigma_{kj}^2}\right)\\ 	&=&\exp\left(\ln\pi_k-\frac{1}{2}\sum_j\sigma_{kj}^2w_{kj}^2+\mathbf{w}_k^T\mathbf{x}\right)p_k(\mathbf{x}) \end{array}

where

\displaystyle w_{kj}=\frac{\mu_{kj}}{\sigma_{kj}^2}

and

\displaystyle p_k(\mathbf{x})=\prod_j\frac{1}{\sigma_{kj}\sqrt{2\pi}}\exp\left(-\frac{x_j^2}{2\sigma_{kj}^2}\right)

The posterior is

\displaystyle p(y_k|\mathbf{x})=\frac{p(y_k)p(\mathbf{x}|y_k)}{\sum_k p(y_k)p(\mathbf{x}|y_k)}

If {\sigma_{kj}=\sigma_j} for all {k}, then the factors {p_k(\mathbf{x})} are the same for all classes and cancel. The posterior becomes

\displaystyle p(y_k|\mathbf{x})=\frac{\exp(w_{k0}+\mathbf{w}_k^T\mathbf{x})}{\sum_k\exp(w_{k0}+\mathbf{w}_k^T\mathbf{x})}

where

\displaystyle w_{k0}=\ln\pi_k-\frac{1}{2}\sum_j\sigma_j^2w_{kj}^2

We can see that naive Bayes is a linear model whose decision function has the same form as that of logistic regression, given that the variances are shared across all classes.
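As a small illustration (my own sketch, not the only way to organize it), the shared-variance Gaussian naive Bayes classifier can be coded directly in this linear form; the pooled variance estimate below is a crude choice made purely for simplicity.

import numpy as np

def fit_gaussian_nb_linear(X, y, K):
    # X is d x n, y holds class indices 0..K-1; variances are shared across classes
    d, n = X.shape
    var = X.var(axis=1)                      # crude shared estimate of sigma_j^2
    W, w0 = np.zeros((K, d)), np.zeros(K)
    for k in range(K):
        Xk = X[:, y == k]
        pi_k = Xk.shape[1] / n
        mu_k = Xk.mean(axis=1)
        W[k] = mu_k / var                    # w_kj = mu_kj / sigma_j^2
        w0[k] = np.log(pi_k) - 0.5 * np.sum(var * W[k] ** 2)
    return W, w0

def nb_posterior(W, w0, X):
    a = W @ X + w0[:, None]                  # K x n linear activations
    a -= a.max(axis=0)                       # for numerical stability
    e = np.exp(a)
    return e / e.sum(axis=0)                 # softmax, the same form as logistic regression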

2. Naive Bayes with Exponential Family

Actually, the result in the previous section is not an accident. In general, we can use any distribution from the exponential family of the following form for naive Bayes

\displaystyle  	p(\mathbf{x}|y_k)=\exp(\boldsymbol{\theta}_k^T\mathbf{x}-A(\boldsymbol{\theta}_k))p_o(\mathbf{x})	 \ \ \ \ \ (1)

This is a sub-family of the general exponential family introduced in the previous blog post. It is also widely used, for example in generalized linear models. This family of distributions has some special properties. First, the variables of all dimensions are independent, which meets the requirement of naive Bayes. Second, the sufficient statistic is just the first-order moment

\displaystyle E[\mathbf{x}]=\nabla A(\boldsymbol{\theta}_k)=\mu(\boldsymbol{\theta}_k)

The maximum likelihood estimate of {\boldsymbol{\theta}_k} is simply

\displaystyle \boldsymbol{\theta}_k=\mu^{-1}\left(\frac{1}{n_k}\sum_{i\in y_k}\mathbf{x}_i\right)

The posterior again is

\displaystyle p(y_k|\mathbf{x})=\frac{\exp(w_{k0}+\mathbf{w}_k^T\mathbf{x})}{\sum_k\exp(w_{k0}+\mathbf{w}_k^T\mathbf{x})}

where

\displaystyle \mathbf{w}_k=\boldsymbol{\theta}_k

and

\displaystyle w_{k0}=\ln\pi_k-A(\boldsymbol{\theta}_k)

From these results we can see that, for naive Bayes, the decision boundary is always linear whenever the class-conditional distributions of the observations belong to the exponential family of form (1).


Undocumented Machine Learning (IV): Exponential Family

1. Exponential Family

The exponential family of distributions over {x}, given the parameter {\theta}, is defined to be the set of distributions of the form

\displaystyle p(x|\theta)=h(x)g(\theta)\exp\left(\theta^T\phi(x)\right)

or equivalently

\displaystyle p(x|\theta)=p_0(x)\exp\left(\theta^T\phi(x)-A(\theta)\right)

Here {\theta} is called the natural parameter of the distribution, and {\phi(x)} is some function of {x} called the sufficient statistic. The function {g(\theta)} is called the partition function; it is a normalization coefficient such that

\displaystyle g(\theta)\int h(x)\exp\left(\theta^T\phi(x)\right)dx=1

where the function

\displaystyle -\ln g(\theta)=\ln\int h(x)\exp\left(\theta^T\phi(x)\right)dx

is a convex function and has the property

\displaystyle  \begin{array}{rcl}  -\nabla_{\theta}\ln g(\theta)&=&\frac{\int \phi(x)h(x)\exp\left(\theta^T\phi(x)\right)dx}{\int h(x)\exp\left(\theta^T\phi(x)\right)dx}\\ &=&\int \phi(x)h(x)g(\theta)\exp\left(\theta^T\phi(x)\right)dx=E[\phi(x)] \end{array}

Therefore we have

\displaystyle  \begin{array}{rcl}  	E[\phi(x)]&=&-\nabla_{\theta}\ln g(\theta)=\mu(\theta)\\ 	\mathrm{Cov}[\phi(x)]&=&-\nabla_{\theta}^2\ln g(\theta) \end{array}

The likelihood of the distribution given data {X=\{x_i\}_{i=1}^n} is

\displaystyle p(X|\theta)=\prod_{i=1}^n h(x_i)g(\theta)^n\exp\left(\theta^T\sum_{i=1}^n \phi(x_i)\right)

The maximum likelihood estimation of the natural parameters is equivalent to solving the following convex optimization problem

\displaystyle \min_{\theta}~~~ -\left<\frac{1}{n}\sum_{i=1}^n \phi(x_i),\theta\right>-\ln g(\theta)

The global optimum is the solution of the following equation

\displaystyle \theta=\mu^{-1}\left(\frac{1}{n}\sum_{i=1}^n \phi(x_i)\right)
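As a quick sketch of this moment-matching view (my own illustration, assuming the natural-parameter forms stated in the comments), the MLE is just the inverse mean map applied to the sample mean of {\phi(x)}:

import numpy as np

# Bernoulli in natural form: phi(x) = x, mu(theta) = sigmoid(theta),
# so mu^{-1} is the logit function.
def bernoulli_mle_natural(x):
    m = np.mean(x)
    return np.log(m / (1.0 - m))

# Poisson in natural form: phi(x) = x, mu(theta) = exp(theta), so mu^{-1} is the log.
def poisson_mle_natural(x):
    return np.log(np.mean(x))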

For members of the exponential family, there exists a conjugate prior that can be written in the form

\displaystyle p(\theta|\chi^{\mathrm{o}},\nu^{\mathrm{o}})=f(\chi^{\mathrm{o}},\nu^{\mathrm{o}})g(\theta)^{\nu^{\mathrm{o}}}\exp\left(\nu^{\mathrm{o}}\theta^T\chi^{\mathrm{o}}\right)

where {f(\chi^{\mathrm{o}},\nu^{\mathrm{o}})} is a normalization coefficient. The posterior distribution is

\displaystyle p(\theta|X,\chi^{\mathrm{o}},\nu^{\mathrm{o}})\propto p(X|\theta)p(\theta|\chi^{\mathrm{o}},\nu^{\mathrm{o}})

which is in the same parametric form as the prior distribution

\displaystyle p(\theta|X,\chi^{\mathrm{o}},\nu^{\mathrm{o}})=p(\theta|\chi,\nu)=f(\chi,\nu)g(\theta)^{\nu}\exp\left(\nu\theta^T\chi\right)

where

\displaystyle  \begin{array}{rcl}  \nu&=&\nu^{\mathrm{o}}+n,\\ \chi&=&\frac{\nu^{\mathrm{o}} \chi^{\mathrm{o}}+\sum_{i=1}^n \phi(x_i)}{\nu^{\mathrm{o}}+n} \end{array}

The parameters {\nu^{\mathrm{o}}} and {\nu} can be interpreted as the effective numbers of pseudo-observations in the prior and the posterior respectively. {\chi^{\mathrm{o}}} and {\chi} are the averages of the effective observations. We will use the term prior hyperparameters to refer to {\eta^{\mathrm{o}}=\{\chi^{\mathrm{o}},\nu^{\mathrm{o}}\}}, and the term posterior hyperparameters to refer to {\eta=\{\chi,\nu\}}.
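In code, the conjugate update of the hyperparameters is just a weighted average. The sketch below is generic; the numbers in the Bernoulli example are arbitrary, and the Beta correspondence mentioned in the comment follows from a change of variables on the natural parameter.

import numpy as np

def conjugate_update(chi0, nu0, phi_sum, n):
    # phi_sum is sum_i phi(x_i) over the observed data
    nu = nu0 + n
    chi = (nu0 * chi0 + phi_sum) / nu
    return chi, nu

# Bernoulli example: phi(x) = x, so phi_sum is the number of ones. The conjugate
# prior on the natural parameter corresponds to a Beta(nu0*chi0, nu0*(1 - chi0))
# prior on the success probability, whose posterior mean is simply chi.
chi, nu = conjugate_update(chi0=0.5, nu0=2.0, phi_sum=7.0, n=10)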

For the distribution of the exponential family with the conjugate prior, given observations {X=\{x_i\}_i}, we can also analytically evaluate the marginal likelihood (also known as model evidence) as

\displaystyle  \begin{array}{rcl}  p(X)&=&\int p(X|\theta)p(\theta|\chi^{\mathrm{o}},\nu^{\mathrm{o}})d\theta\\ &=&\int \prod_{i=1}^n h(x_i)g(\theta)^n\exp\left(\theta^T\sum_{i=1}^n \phi(x_i)\right)f(\chi^{\mathrm{o}},\nu^{\mathrm{o}})g(\theta)^{\nu^{\mathrm{o}}}\exp\left(\nu^{\mathrm{o}}\theta^T\chi^{\mathrm{o}}\right) d\theta\\ &=&f(\chi^{\mathrm{o}},\nu^{\mathrm{o}})\prod_{i=1}^n h(x_i)\int g(\theta)^{\nu^{\mathrm{o}}+n}\exp\left(\theta^T\left(\sum_{i=1}^n \phi(x_i)+\nu^{\mathrm{o}}\chi^{\mathrm{o}}\right)\right)d\theta\\ &=&f(\chi^{\mathrm{o}},\nu^{\mathrm{o}})\prod_{i=1}^n h(x_i)\int g(\theta)^{\nu}\exp\left(\nu\theta^T\chi\right)d\theta\\ &=&\frac{f(\chi^{\mathrm{o}},\nu^{\mathrm{o}})}{f(\chi,\nu)}\prod_{i=1}^n h(x_i) \end{array}

The predictive likelihood of a new observation {x^*} is given by

\displaystyle  \begin{array}{rcl}  p(x^*|X)&=&\int p(x^*|\theta)p(\theta|X,\chi^{\mathrm{o}},\nu^{\mathrm{o}})d\theta=\int p(x^*|\theta)p(\theta|\chi,\nu)d\theta\\ &=&\int h(x^*)g(\theta)\exp\left(\theta^T\phi(x^*)\right) f(\chi,\nu)g(\theta)^{\nu}\exp\left(\nu\theta^T\chi\right)d\theta\\ &=&f(\chi,\nu)h(x^*)\int g(\theta)^{\nu+1}\exp\left(\theta^T(\phi(x^*)+\nu\chi)\right)d\theta\\ &=&f(\chi,\nu)h(x^*)\int g(\theta)^{\nu^*}\exp\left(\nu^*\theta^T\chi^*\right)d\theta\\ &=&\frac{f(\chi,\nu)}{f(\chi^*,\nu^*)}h(x^*) \end{array}

where

\displaystyle  \begin{array}{rcl}  \nu^*&=&\nu+1\\ \chi^*&=&\frac{\nu\chi+\phi(x^*)}{\nu+1} \end{array}

The marginal distribution is usually not in the exponential family.

2. Conjugate Gaussian Distribution

The density function of the Gaussian distribution is

\displaystyle p(x|\mu,\Lambda)=\mathcal{N}(x|\mu,\Lambda^{-1})=\frac{|\Lambda|^{1/2}}{(2\pi)^{d/2}}\exp\left(-\frac{1}{2}(x-\mu)^T\Lambda(x-\mu)\right).

where {\mu} and {\Lambda} are the mean and the precision of the Gaussian distribution. The likelihood is

\displaystyle p(X|\mu,\Lambda)=\prod_{i=1}^{n}p(x_i|\mu,\Lambda)=\left(\frac{|\Lambda|}{(2\pi)^{d}}\right)^{n/2}\exp\left(-\frac{1}{2}\sum_{i=1}^n(x_i-\mu)^T\Lambda(x_i-\mu)\right)

The maximum likelihood estimates of the parameters {\mu} and {\Lambda^{-1}} are the sample mean and the sample covariance

\displaystyle  \begin{array}{rcl}  \bar{x}&=&\frac{1}{n}\sum_{i=1}^n x_i\\ \Lambda^{-1}&=&\frac{1}{n}\sum_{i=1}^n(x_i-\bar{x})(x_i-\bar{x})^T \end{array}

The conjugate prior for the parameter {\mu} of {\mathcal{N}(x|\mu,\Lambda^{-1})} is a Gaussian

\displaystyle p(\mu|m^{\mathrm{o}},\kappa^{\mathrm{o}})=\mathcal{N}(\mu|m^{\mathrm{o}},(\kappa^{\mathrm{o}}\Lambda)^{-1})

The posterior is

\displaystyle  \begin{array}{rcl}  p(\mu|X)&\propto& p(X|\mu)p(\mu|m^{\mathrm{o}},\kappa^{\mathrm{o}})\\ &\propto& \exp\left(-\frac{1}{2}\sum_{i=1}^n(x_i-\mu)^T\Lambda(x_i-\mu)\right) \exp\left(-\frac{\kappa^{\mathrm{o}}}{2}(\mu-m^{\mathrm{o}})^T\Lambda(\mu-m^{\mathrm{o}})\right)\\ &\propto&\exp\left(-\frac{1}{2}\left[\sum_{i=1}^n(x_i-\mu)^T\Lambda(x_i-\mu)+\kappa^{\mathrm{o}}(\mu-m^{\mathrm{o}})^T\Lambda(\mu-m^{\mathrm{o}})\right]\right)\\ &\propto&\exp\left(-\frac{\kappa^{\mathrm{o}}+n}{2}(\mu-m)^T\Lambda(\mu-m)\right) \end{array}

where

\displaystyle m=\frac{\kappa^{\mathrm{o}}m^{\mathrm{o}}+n\bar{x}}{\kappa^{\mathrm{o}}+n}

The posterior is again a Gaussian

\displaystyle p(\mu|X)=\mathcal{N}(\mu|m,(\kappa\Lambda)^{-1})

with the parameters

\displaystyle  \begin{array}{rcl}  \kappa&=&\kappa^{\mathrm{o}}+n\\ m&=&\frac{\kappa^{\mathrm{o}}m^{\mathrm{o}}+n\bar{x}}{\kappa^{\mathrm{o}}+n} \end{array}
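A minimal sketch of this update (the precision {\Lambda} is assumed known and is not even needed for the hyperparameter bookkeeping):

import numpy as np

def gaussian_mean_posterior(X, m0, kappa0):
    # X is d x n; returns the hyperparameters of the posterior N(mu | m, (kappa*Lambda)^{-1})
    n = X.shape[1]
    xbar = X.mean(axis=1)
    kappa = kappa0 + n
    m = (kappa0 * m0 + n * xbar) / kappa
    return m, kappa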

The conjugate prior for both the parameters {\mu} and {\Lambda} of {\mathcal{N}(x|\mu,\Lambda^{-1})} is the Gaussian-Wishart distribution

\displaystyle p(\mu,\Lambda|m^{\mathrm{o}},\kappa^{\mathrm{o}}, T^{\mathrm{o}},\nu^{\mathrm{o}})=p(\mu|\Lambda)p(\Lambda)=\mathcal{N}(\mu|m^{\mathrm{o}},(\kappa^{\mathrm{o}}\Lambda)^{-1})\mathcal{W}(\Lambda|(T^{\mathrm{o}})^{-1},\nu^{\mathrm{o}})

where {\mathcal{W}(\Lambda|W,\nu)} is the Wishart distribution with the density function given by

\displaystyle \mathcal{W}(\Lambda|W,\nu)=B(W,\nu)|\Lambda|^{(\nu-d-1)/2}\exp\left(-\frac{1}{2}\mathrm{Tr}(W^{-1}\Lambda)\right)

where

\displaystyle B(W,\nu)=|W|^{-\nu/2}\left(2^{\nu d/2}\pi^{d(d-1)/4}\prod_{i=1}^d \Gamma\left(\frac{\nu+1-i}{2}\right)\right)^{-1}

The posterior is of the same parametric form as the prior

\displaystyle p(\mu,\Lambda|X,m,\kappa,T,\nu)=\mathcal{N}(\mu|m,(\kappa\Lambda)^{-1})\mathcal{W}(\Lambda|T^{-1},\nu)

where

\displaystyle  \begin{array}{rcl}  \kappa&=&\kappa^{\mathrm{o}}+n\\ m&=&\frac{\kappa^{\mathrm{o}}m^{\mathrm{o}}+n\bar{x}}{\kappa^{\mathrm{o}}+n}\\ \nu&=&\nu^{\mathrm{o}}+n\\ T&=&T^{\mathrm{o}}+nS+\frac{\kappa^{\mathrm{o}}n(\bar{x}-m^{\mathrm{o}})(\bar{x}-m^{\mathrm{o}})^T}{\kappa^{\mathrm{o}}+n}\\ \end{array}

and {S=\frac{1}{n}\sum_{i=1}^n(x_i-\bar{x})(x_i-\bar{x})^T} is the sample covariance.

The posterior predictive distribution of the conjugate Gaussian-Wishart model is given by

\displaystyle  \begin{array}{rcl}  p(x|X)&=&\int\int p(x|\mu,\Lambda)p(\mu,\Lambda|X)d\mu d\Lambda\\ &=&\int \int\mathcal{N}(x|\mu,\Lambda^{-1})\mathcal{N}(\mu|m,(\kappa\Lambda)^{-1})d\mu \mathcal{W}(\Lambda|T^{-1},\nu) d\Lambda\\ &=&\int\mathcal{N}(x|m,(1+\kappa^{-1})\Lambda^{-1})\mathcal{W}(\Lambda|T^{-1},\nu) d\Lambda\\ &=&\mathcal{T}(x|m,\frac{(\kappa+1)}{\kappa(\nu-d+1)}T,\nu-d+1) \end{array}

where {\mathcal{T}(x|\mu,\Sigma,\nu)} is a Student's t distribution with the density function

\displaystyle \mathcal{T}(x|\mu,\Sigma,\nu)=\frac{\Gamma((\nu+d)/2)}{\Gamma(\nu/2)}\frac{1}{(\nu\pi)^{d/2}|\Sigma|^{1/2}}\left[1+\frac{1}{\nu}(x-\mu)^T\Sigma^{-1}(x-\mu)\right]^{-(\nu+d)/2}
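Putting the Gaussian-Wishart update and the Student-t predictive together, here is a minimal sketch of the hyperparameter bookkeeping (the prior hyperparameters are whatever the user assumes; the density evaluation itself is not shown):

import numpy as np

def gaussian_wishart_posterior(X, m0, kappa0, T0, nu0):
    # X is d x n; returns the posterior hyperparameters (m, kappa, T, nu)
    d, n = X.shape
    xbar = X.mean(axis=1)
    Xc = X - xbar[:, None]
    S = Xc @ Xc.T / n                                   # sample covariance
    kappa, nu = kappa0 + n, nu0 + n
    m = (kappa0 * m0 + n * xbar) / kappa
    diff = (xbar - m0)[:, None]
    T = T0 + n * S + (kappa0 * n / kappa) * (diff @ diff.T)
    return m, kappa, T, nu

def predictive_t_params(m, kappa, T, nu):
    # parameters of the Student-t predictive T(x | m, Sigma, dof) derived above
    d = len(m)
    dof = nu - d + 1
    Sigma = (kappa + 1) / (kappa * dof) * T
    return m, Sigma, dof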


Undocumented Machine Learning (III): From Probabilistic Regression to Gaussian Process

In this post, I will show how to derive the Gaussian process (GP) formulation of nonlinear regression directly from the probabilistic linear regression model. Doing so will clear up some common misunderstandings of GPs.

First, for reference purposes, let's write down the probabilistic formulation of linear regression.

0.1. Probabilistic Linear Regression

For a linear model

\displaystyle y=\mathbf{w}^T\mathbf{x}+\epsilon

where the noise {\epsilon} follows {\mathcal{N}(\epsilon|0,\beta^{-1})}, the likelihood is

\displaystyle p(y|\mathbf{w},\mathbf{x})=\mathcal{N}(y|\mathbf{w}^T\mathbf{x},\beta^{-1})

Assume the prior is

\displaystyle p(\mathbf{w})=\mathcal{N}(\mathbf{w}|0,\alpha^{-1}\mathbf{I})

The posterior then is

\displaystyle p(\mathbf{w}|\mathbf{y},\mathbf{X})=\mathcal{N}(\mathbf{w}|\mathbf{m}_n,\mathbf{S}_n)

where

\displaystyle  \begin{array}{rcl}  \mathbf{m}_n&=&\beta\mathbf{S}_n\mathbf{X}\mathbf{y}^T\\ \mathbf{S}_n&=&(\alpha \mathbf{I}+\beta\mathbf{X}\mathbf{X}^T)^{-1} \end{array}

Given a new sample {\mathbf{x}}, the predictive distribution of the value {y} is

\displaystyle  \begin{array}{rcl}  p(y|\mathbf{y},\mathbf{x},\mathbf{X})&=&\int p(y|\mathbf{x},\mathbf{w})p(\mathbf{w}|\mathbf{y},\mathbf{X})d\mathbf{w}\\ &=&\mathcal{N}(y|m(\mathbf{x}),\sigma^2(\mathbf{x})) \end{array}

where

\displaystyle  \begin{array}{rcl}  m(\mathbf{x})&=&\mathbf{m}_n^T\mathbf{x}=\mathbf{x}^T(\beta^{-1}\alpha\mathbf{I}+\mathbf{X}\mathbf{X}^T)^{-1}\mathbf{X}\mathbf{y}^T\\ \sigma^2(\mathbf{x})&=&\beta^{-1}+\mathbf{x}^T\mathbf{S}_n\mathbf{x} \end{array}

These are textbook results which can be found in PRML.
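For reference, a minimal sketch of these textbook formulas (X is stored as d x n with samples in columns; {\alpha} and {\beta} are assumed to be given):

import numpy as np

def blr_posterior(X, y, alpha, beta):
    d = X.shape[0]
    Sn = np.linalg.inv(alpha * np.eye(d) + beta * X @ X.T)
    mn = beta * Sn @ X @ y
    return mn, Sn

def blr_predict(x, mn, Sn, beta):
    mean = mn @ x                      # m(x) = m_n^T x
    var = 1.0 / beta + x @ Sn @ x      # sigma^2(x)
    return mean, var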

To deal with the bias term {w_0} for a linear model

\displaystyle p(\mathbf{y}|\mathbf{X},\mathbf{w},w_0)=\mathcal{N}(\mathbf{y}|\mathbf{w}^T\mathbf{X}+w_0\mathbf{1}^T,\beta^{-1} \mathbf{I})

one way is to use the variable augmentation trick {\tilde{\mathbf{x}}=[1,\mathbf{x}^T]^T}. Applying the above formulation is then equivalent to using a prior {p(w_0)=\mathcal{N}(w_0|0,\alpha_0^{-1})} with {\alpha_0=\alpha} for {w_0}. It does not make much sense to do so: as indicated before, by incorporating this prior you lose the translation-invariance property. To fix this, we can use a non-informative prior for {w_0}, which simply means setting {\alpha_0=0}, and then derive the posterior accordingly.

0.2. Gaussian Process

Textbooks often present the Gaussian process directly and then show that the predictive distribution of a GP is equivalent to that of linear regression when a linear kernel is used. However, since there is an equivalence, we should be able to derive one from the other. Usually your textbook does not tell you how.

Here is the PRML way of presenting the GP (where we switch the symbols for consistency's sake)

\displaystyle p(\mathbf{y}|\mathbf{t})=\mathcal{N}(\mathbf{y}|\mathbf{t},\beta^{-1}\mathbf{I})

where

\displaystyle  p(\mathbf{t})=\mathcal{N}(\mathbf{t}|0,\mathbf{K}) \ \ \ \ \ (1)

Then marginalizing out {\mathbf{t}} you get

\displaystyle  p(\mathbf{y})=\int p(\mathbf{y}|\mathbf{t})p(\mathbf{t})d\mathbf{t}=\mathcal{N}(\mathbf{y}|0,\mathbf{K}+\beta^{-1}\mathbf{I}) \ \ \ \ \ (2)

Then inference becomes

\displaystyle  p(y|\mathbf{y})=\frac{p([y,\mathbf{y}])}{p(\mathbf{y})}=\mathcal{N}(y|m(\mathbf{x}),\sigma^2(\mathbf{x})) \ \ \ \ \ (3)

where

\displaystyle  \begin{array}{rcl}  m(\mathbf{x})&=&\mathbf{k}^T(\mathbf{K}+\beta^{-1}\mathbf{I})^{-1}\mathbf{y}^T\\ \sigma^2(\mathbf{x})&=&c-\mathbf{k}^T(\mathbf{K}+\beta^{-1}\mathbf{I})^{-1}\mathbf{k} \end{array}

where {c=\kappa(\mathbf{x},\mathbf{x})+\beta^{-1}}. However, this argument is not strictly correct, since the kernel matrix {\mathbf{K}} will almost certainly be singular, and therefore the distribution (1) will be ill-defined. Actually, we can derive the Gaussian process from linear regression without involving this ill-defined distribution. In Bayesian inference, we all know that, given the likelihood {p(\mathbf{X}|\theta)}, we assume a prior {p(\theta)} to derive the posterior {p(\theta|\mathbf{X})}, and then do inference by marginalizing out the parameter

\displaystyle p(\mathbf{x}|\mathbf{X})=\int p(\mathbf{x}|\theta)p(\theta|\mathbf{X})d\theta

Actually, there is another way: we can first marginalize out the parameter {\theta} via

\displaystyle p(\mathbf{X})=\int p(\mathbf{X}|\theta)p(\theta)d\theta

By integrating out the parameter {\theta}, the samples are no longer independent; this is called the explaining-away effect. Then we can do inference by

\displaystyle p(\mathbf{x}|\mathbf{X})=\frac{p([\mathbf{x},\mathbf{X}])}{p(\mathbf{X})}

To derive the GP formulation of regression from linear regression, one way is to first marginalize out {\mathbf{w}} from {p(\mathbf{y}|\mathbf{X},\mathbf{w})=\mathcal{N}(\mathbf{y}|\mathbf{w}^T\mathbf{X},\beta^{-1} \mathbf{I})}. The marginal likelihood is

\displaystyle  \begin{array}{rcl}  p(\mathbf{y}|\mathbf{X})&=&\int p(\mathbf{y}|\mathbf{X},\mathbf{w})p(\mathbf{w})d\mathbf{w}\\ &=&\int \mathcal{N}(\mathbf{y}|\mathbf{w}^T\mathbf{X},\beta^{-1} \mathbf{I})\mathcal{N}(\mathbf{w}|0,\alpha^{-1} \mathbf{I})d\mathbf{w}\\ &=&\mathcal{N}(\mathbf{y}|0,\alpha^{-1}\mathbf{X}^T\mathbf{X}+\beta^{-1}\mathbf{I}) \end{array}

This marginal likelihood is equivalent to (2). Therefore you can see that, in order to establish the exact equivalence, the kernel matrix should be {\mathbf{K}=\alpha^{-1}\mathbf{X}^T\mathbf{X}}. The predictive distribution then is

\displaystyle p(y|\mathbf{y},[\mathbf{x},\mathbf{X}])=\frac{p([y,\mathbf{y}]|[\mathbf{x},\mathbf{X}])}{p(\mathbf{y}|\mathbf{X})}=\mathcal{N}(y|m(\mathbf{x}),\sigma^2(\mathbf{x}))

where

\displaystyle  \begin{array}{rcl}  m(\mathbf{x})&=&\mathbf{x}^T\mathbf{X}(\mathbf{X}^T\mathbf{X}+\beta^{-1}\alpha\mathbf{I})^{-1}\mathbf{y}^T\\ \sigma^2(\mathbf{x})&=&\beta^{-1}+\mathbf{x}^T\mathbf{S}_n\mathbf{x} \end{array}

Then you can substitute the inner products in the above formulation with a kernel to get nonlinear regression.
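A small numerical sanity check of this equivalence (a sketch with arbitrary random data; with {\mathbf{K}=\alpha^{-1}\mathbf{X}^T\mathbf{X}} the GP predictive mean matches the weight-space predictive mean):

import numpy as np

rng = np.random.default_rng(0)
d, n, alpha, beta = 3, 20, 2.0, 10.0
X = rng.normal(size=(d, n))
y = rng.normal(size=n)
x = rng.normal(size=d)

# weight-space predictive mean
Sn = np.linalg.inv(alpha * np.eye(d) + beta * X @ X.T)
m_weight = beta * x @ Sn @ X @ y

# function-space predictive mean with the linear kernel K = X^T X / alpha
K = X.T @ X / alpha
k = X.T @ x / alpha
m_gp = k @ np.linalg.solve(K + np.eye(n) / beta, y)

print(np.allclose(m_weight, m_gp))   # True up to numerical error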

Be careful when you see a Gaussian process {\mathrm{GP}(f(\mathbf{x})|m(\mathbf{x}),\kappa(\mathbf{x},\mathbf{x}'))}: the kernel here is not the usual kernel you use in other kernel methods. It has to be positive definite, otherwise the marginal distribution won't be a valid distribution. To be clearer, maybe we should write something like {\mathrm{GP}(f(\mathbf{x})|m(\mathbf{x}),\lambda\kappa(\mathbf{x},\mathbf{x}')+\epsilon\delta(\mathbf{x},\mathbf{x}'))}, where {\lambda=\alpha^{-1}}, {\epsilon=\beta^{-1}}, and the kernel is then the usual kernel used in non-probabilistic kernel methods.

Dealing with the bias in a GP is very tricky. One way is to use the augmented kernel {\tilde{\kappa}(\mathbf{x},\mathbf{x}')=\kappa(\mathbf{x},\mathbf{x}')+1}, as suggested in the previous post. Then define a Gaussian process {\mathrm{GP}(f(\mathbf{x})|0,\lambda\tilde{\kappa}(\mathbf{x},\mathbf{x}')+\epsilon\delta(\mathbf{x},\mathbf{x}'))} and regress with it as in (3). Again, you cannot expect translation invariance by using this.

Another way is to use the centered kernel

\displaystyle  \begin{array}{rcl}  \bar{\kappa}(\mathbf{x},\mathbf{x}')&=&\kappa(\mathbf{x},\mathbf{x}')-\frac{1}{n}\sum_i\kappa(\mathbf{x},\mathbf{x}_i)\\ &&-\frac{1}{n}\sum_i\kappa(\mathbf{x}',\mathbf{x}_i)+\frac{1}{n}\sum_i\frac{1}{n}\sum_j\kappa(\mathbf{x}_i,\mathbf{x}_j) \end{array}

then define a Gaussian process {\mathrm{GP}(f(\mathbf{x})|\bar{y},\lambda\bar{\kappa}(\mathbf{x},\mathbf{x}')+\epsilon\delta(\mathbf{x},\mathbf{x}'))} and regress with it as in (3). This method gives you translation invariance. However, this definition of the GP depends on the training data, since the center depends on the data; the kernel function itself therefore also depends on the data.

One might think that we could derive the GP from a probabilistic linear regression formulation that is translation invariant. However, it is not easy. To see why, we explicitly marginalize out {w_0} as

\displaystyle  \begin{array}{rcl}  p(\mathbf{y}|\mathbf{X})&=&\int\!\!\int p(\mathbf{y}|\mathbf{X},\mathbf{w},w_0)p(\mathbf{w})p(w_0)\,d\mathbf{w}\,dw_0=\int p(\mathbf{y}|\mathbf{X},w_0)p(w_0)dw_0\\ &=&\int \mathcal{N}(\mathbf{y}|w_0\mathbf{1}^T,\alpha^{-1}\mathbf{X}^T\mathbf{X}+\beta^{-1}\mathbf{I})\mathcal{N}(w_0|0,\alpha_0^{-1})dw_0\\ &=&\mathcal{N}(\mathbf{y}|0,\alpha^{-1}\mathbf{X}^T\mathbf{X}+\alpha_0^{-1}\mathbf{1}\mathbf{1}^T+\beta^{-1}\mathbf{I}) \end{array}

As indicated in the previous section, data centering is equivalent to setting {\alpha_0=0}. However, by doing so the covariance term {\alpha_0^{-1}\mathbf{1}\mathbf{1}^T} diverges, so {p(\mathbf{y}|\mathbf{X})} won't be a valid Gaussian, and you cannot construct a GP from it.

Constructing a GP for regression that is translation invariant is surprisingly non-trivial. How to solve this dilemma is still bothering me: I cannot figure out a way to define a GP whose regression solution is translation invariant. I hope there is an easier way to do it. Maybe the reality is that you cannot! Any suggestions are welcome.


Undocumented Machine Learning (II): Kernel Regression

Last time I talked about linear ridge regression. You might think the trick in the last post is trivial. However, applying the trick to nonlinear regression is not so obvious any more.

It is natural to extend linear ridge regression to a nonlinear one using the kernel trick. To use kernels, we assume a (nonlinear) feature map {\phi_i=\phi(\mathbf{x}_i)} that maps {\mathbf{x}_i} from the input space to {\phi_i} in the feature space, and use the kernel function {\kappa(\mathbf{x}_i,\mathbf{x}_j)=\left<\phi_i,\phi_j\right>} as the inner product in the feature space.

To derive the kernel regression, let me repeat the textbook solution of ridge regression here

\displaystyle \mathbf{w}=(\Phi\Phi^T+\lambda \mathbf{I})^{-1}\Phi\mathbf{y}^T

0.1. Kernel Regression

The usual way a textbook derives kernel regression is to apply the Woodbury identity. The solution of ridge regression can be rewritten as

\displaystyle \mathbf{w}=( \Phi \Phi^T+\lambda \mathbf{I})^{-1} \Phi\mathbf{y}^T= \Phi( \Phi^T \Phi+\lambda \mathbf{I})^{-1}\mathbf{y}^T

The prediction {y} for a future input {\mathbf{x}} is

\displaystyle  \begin{array}{rcl}  y&=&\mathbf{w}^T \phi\\ &=& \phi^T \Phi( \Phi^T \Phi+\lambda \mathbf{I})^{-1}\mathbf{y}^T\\ &=&\mathbf{k}(\mathbf{K}+\lambda \mathbf{I})^{-1}\mathbf{y}^T \end{array}

where the elements of the vector {\mathbf{k}} and matrix {\mathbf{K}} are {k_i=\kappa(\mathbf{x},\mathbf{x}_i)} and {K_{ij}=\kappa(\mathbf{x}_i,\mathbf{x}_j)}. We also can rewrite the solution as

\displaystyle y=\sum_i\alpha_i\kappa(\mathbf{x},\mathbf{x}_i)

where

\displaystyle \alpha=(\mathbf{K}+\lambda \mathbf{I})^{-1}\mathbf{y}^T
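In code, this standard (bias-free) kernel ridge predictor looks like the following sketch; the RBF kernel and the hyperparameters are arbitrary choices made purely for illustration.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # A is d x n, B is d x m; returns the n x m kernel matrix
    sq = (A * A).sum(0)[:, None] + (B * B).sum(0)[None, :] - 2 * A.T @ B
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)      # alpha

def kernel_ridge_predict(alpha, X, Xnew, gamma=1.0):
    return alpha @ rbf_kernel(X, Xnew, gamma)                # sum_i alpha_i kappa(x, x_i)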

Again, up to now, nothing is new. And again, this derivation does not take the bias term into account. One thing textbooks usually forget to mention is how to model the bias in the kernel version. The variable augmentation trick from the linear case does not work here!

You might want to try to augment the input directly as {\tilde{\mathbf{x}}=[1,\mathbf{x}^T]^T} and then compute the kernel as {\kappa(\tilde{\mathbf{x}}_i,\tilde{\mathbf{x}}_j)}. By doing this, you are first augmenting the variable in the input space and then mapping it to the feature space. This is not right! Besides, for some kernels that are not applied to vectors (for example, graph kernels, string kernels, etc.), you cannot even augment the input. In principle, what you really should do is augment the variable {\phi} (instead of {\mathbf{x}}) in the feature space, which leads to a nonlinear model that is equivalent to a linear model in the feature space with a bias term

\displaystyle  y=\mathbf{w}^T \phi+w_0

These two methods are not equivalent for most kernels. The question then is how to incorporate the bias in a kernel formulation.

We can start from the solution in the previous post for ridge regression that models the bias explicitly, and then apply the Woodbury identity as

\displaystyle  \begin{array}{rcl}  \mathbf{w}=&(( \Phi-\bar{ \phi}\mathbf{1}^T)( \Phi-\bar{ \phi}\mathbf{1}^T)^T+\lambda \mathbf{I})^{-1}( \Phi-\bar{ \phi}\mathbf{1}^T)(\mathbf{y}-\bar{y}\mathbf{1}^T)^T\\ =&( \Phi-\bar{ \phi}\mathbf{1}^T)(( \Phi-\bar{ \phi}\mathbf{1}^T)^T( \Phi-\bar{ \phi}\mathbf{1}^T)+\lambda \mathbf{I})^{-1}(\mathbf{y}-\bar{y}\mathbf{1}^T)^T \end{array}

With this, the prediction becomes

\displaystyle  \begin{array}{rcl}  y&=&\mathbf{w}^T \phi+w_0=\mathbf{w}^T \phi+\bar{y}-\mathbf{w}^T\bar{ \phi}=\mathbf{w}^T( \phi-\bar{ \phi})+\bar{y}\\ &=&( \phi-\bar{ \phi})^T ( \Phi-\bar{ \phi}\mathbf{1}^T)(( \Phi-\bar{ \phi}\mathbf{1}^T)^T( \Phi-\bar{ \phi}\mathbf{1}^T)+\lambda I)^{-1}(\mathbf{y}-\bar{y}\mathbf{1}^T)^T+\bar{y} \end{array}

which is pretty dense and still not in the form of a kernel formulation. The trick to reduce it to a simpler form is kernel centering (that is, centering the data in the feature space using the kernel trick).

0.2. Data centering in feature space

In general, we can get a new kernel {\bar\kappa(\mathbf{x}_i,\mathbf{x}_j)} that computes the inner product in the feature space for data that are centered in the feature space as

\displaystyle  \begin{array}{rcl}  \bar{\mathbf{K}}&=&( \Phi-\bar{ \phi}\mathbf{1}^T)^T( \Phi-\bar{ \phi}\mathbf{1}^T)\\ &=&( \Phi-\frac{1}{n} \Phi\mathbf{1}\mathbf{1}^T)^T( \Phi-\frac{1}{n} \Phi\mathbf{1}\mathbf{1}^T)\\ &=&(\mathbf{I}-\frac{1}{n}\mathbf{1}\mathbf{1}^T) \Phi^T \Phi(\mathbf{I}-\frac{1}{n}\mathbf{1}\mathbf{1}^T)\\ &=&\mathbf{H}\mathbf{K}\mathbf{H} \end{array}

where

\displaystyle \mathbf{H}=\mathbf{I}-\frac{1}{n}\mathbf{1}\mathbf{1}^T

is called the centering matrix. Or you can expand the new kernel as

\displaystyle \bar{\mathbf{K}}=\mathbf{K}-\frac{1}{n}\mathbf{1}\mathbf{1}^T\mathbf{K}-\frac{1}{n}\mathbf{K}\mathbf{1}\mathbf{1}^T+\frac{1}{n}\mathbf{1}\mathbf{1}^T\mathbf{K}\mathbf{1}\mathbf{1}^T\frac{1}{n}

which means

\displaystyle  \begin{array}{rcl}  \bar\kappa(\mathbf{x}_i,\mathbf{x}_j)&=&\kappa(\mathbf{x}_i,\mathbf{x}_j)-\frac{1}{n}\sum_k\kappa(\mathbf{x}_i,\mathbf{x}_k)\\ &&-\frac{1}{n}\sum_k\kappa(\mathbf{x}_j,\mathbf{x}_k)+\frac{1}{n^2}\sum_k\sum_l\kappa(\mathbf{x}_k,\mathbf{x}_l) \end{array}

To compute {\bar{\mathbf{k}}}, whose elements are {\bar{k}_i=\bar\kappa(\mathbf{x},\mathbf{x}_i)}, where {\mathbf{x}} is a future sample that has not been seen in the training data, we can use

\displaystyle  \begin{array}{rcl}  \bar{\mathbf{k}}&=& (\phi-\bar{ \phi})^T( \Phi-\bar{ \phi}\mathbf{1}^T)\\ &=&( \phi-\frac{1}{n} \Phi\mathbf{1})^T \Phi\mathbf{H}\\ &=&( \phi^T \Phi-\frac{1}{n}\mathbf{1}^T \Phi^T \Phi)\mathbf{H}\\ &=&(\mathbf{k}^T-\frac{1}{n}\mathbf{1}^T\mathbf{K})\mathbf{H} \end{array}

This expands to

\displaystyle  \begin{array}{rcl}  \bar\kappa(\mathbf{x},\mathbf{x}_i)&=&\kappa(\mathbf{x},\mathbf{x}_i)-\frac{1}{n}\sum_k\kappa(\mathbf{x},\mathbf{x}_k)\\ &&-\frac{1}{n}\sum_k\kappa(\mathbf{x}_i,\mathbf{x}_k)+\frac{1}{n^2}\sum_k\sum_l\kappa(\mathbf{x}_k,\mathbf{x}_l) \end{array}

Update:

In general, the inner product of two vectors {\phi} and {\phi'} in a feature space whose origin is the center of the sample set {\Phi} is

\displaystyle  \begin{array}{rcl}  \bar{\kappa}(\mathbf{x},\mathbf{x}')&=&\left<\phi-\bar{\phi},\phi'-\bar\phi\right>=\left<\phi-\frac{1}{n}\sum_i\phi_i,\phi'-\frac{1}{n}\sum_i\phi_i\right>\\ &=&\kappa(\mathbf{x},\mathbf{x}')-\frac{1}{n}\sum_i\kappa(\mathbf{x},\mathbf{x}_i)\\ &&-\frac{1}{n}\sum_i\kappa(\mathbf{x}',\mathbf{x}_i)+\frac{1}{n}\sum_i\frac{1}{n}\sum_j\kappa(\mathbf{x}_i,\mathbf{x}_j) \end{array}

With these results, we can simplify the solution as

\displaystyle y=\bar{\mathbf{k}}(\bar{\mathbf{K}}+\lambda\mathbf{I})^{-1}(\mathbf{y}-\bar{y}\mathbf{1}^T)^T+\bar{y}

which can also be rewritten as

\displaystyle y=\sum_i\alpha_i\bar{\kappa}(\mathbf{x},\mathbf{x}_i)+\alpha_0

where {\alpha_0=\bar{y}} and

\displaystyle \alpha=(\bar{\mathbf{K}}+\lambda \mathbf{I})^{-1}(\mathbf{y}-\bar{y}\mathbf{1}^T)^T
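A sketch of the centered-kernel solution, following the formulas above (the RBF kernel is again just an illustrative choice; the cross-kernel centering uses the expression for {\bar{\mathbf{k}}} derived above):

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq = (A * A).sum(0)[:, None] + (B * B).sum(0)[None, :] - 2 * A.T @ B
    return np.exp(-gamma * sq)

def centered_kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
    Kbar = H @ K @ H
    ybar = y.mean()
    alpha = np.linalg.solve(Kbar + lam * np.eye(n), y - ybar)
    return alpha, ybar, K, H

def centered_kernel_ridge_predict(alpha, ybar, K, H, X, Xnew, gamma=1.0):
    k = rbf_kernel(Xnew, X, gamma)                      # rows are kappa(x, x_i)
    kbar = (k - K.mean(axis=0)) @ H                     # centered cross-kernel
    return kbar @ alpha + ybar                          # alpha_0 = ybar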

The centering trick is actually mentioned in the kernel PCA paper. However, nobody seems to use it in kernel regression, as far as I have seen. Another trick, which I have never seen elsewhere and found out by myself, is that you can simply augment the kernel as

\displaystyle \tilde{\kappa}(\mathbf{x}_i,\mathbf{x}_j)=\kappa(\mathbf{x}_i,\mathbf{x}_j)+1

and then directly apply

\displaystyle y=\sum_i\alpha_i\tilde\kappa(\mathbf{x},\mathbf{x}_i)

where

\displaystyle \alpha=(\tilde{\mathbf{K}}+\lambda \mathbf{I})^{-1}\mathbf{y}^T

The reason is that if you directly augment the variable in the feature space as {\tilde\phi=[1,\phi^T]^T} and then compute the inner product with the augmented variables, you get

\displaystyle \tilde\kappa(\mathbf{x}_i,\mathbf{x}_j)=\left<\tilde\phi_i,\tilde\phi_j\right>=\left<\phi_i,\phi_j\right>+1=\kappa(\mathbf{x}_i,\mathbf{x}_j)+1

Using this trick, you don't have to bother with the centering problem. Although this method is simpler to use, it does not solve exactly the same problem as the centered-kernel version shown above. The augmented-kernel solution solves

\displaystyle \|\mathbf{y}-(\mathbf{w}^T\Phi+w_0\mathbf{1}^T)\|^2+\lambda\|\mathbf{w}\|^2+\lambda_0w_0^2

with {\lambda_0=\lambda}, whereas the centered-kernel solution solves the case {\lambda_0=0}, which is more reasonable, as indicated in the previous post.

0.3. Regularization Effect

One might wonder what optimization problem these solutions correspond to. Let's take the centered-kernel solution as an example. Substituting all those quantities into the ridge regression objective, we have

\displaystyle \min_{\alpha}~\sum_i\|y_i-\sum_{j}\alpha_j\bar{\kappa}(\mathbf{x}_i,\mathbf{x}_j)-\alpha_0\|^2+\lambda\sum_i\sum_j\alpha_i\alpha_j\bar{\kappa}(\mathbf{x}_i,\mathbf{x}_j)

From this we can see that, in order to control the model complexity, we have to control the dual form of the norm of {\mathbf{w}} in the feature space, that is, the kernel-weighted norm of {\alpha}. This is expected, since the kernel itself can make our function arbitrarily complex if we do not control it.

Some of the literature might tell you that kernel regression solves

\displaystyle \min_{\alpha}~\sum_i\|y_i-\sum_{j}\alpha_j\kappa(\mathbf{x}_i,\mathbf{x}_j)\|^2+\lambda\|\alpha\|

This is wrong! You cannot build the duality relationship with linear ridge regression from this formulation, even without considering the centering effect. And with this formulation, you won't be able to control the model complexity that the nonlinear kernel brings in.

BTW, does anyone know how to use bold Greek symbols in WordPress? I usually use the bm package in LaTeX, which does not seem to work here.


Undocumented Machine Learning (I): Linear Regression

In this post, I will discuss a seldom-documented aspect of, and a trick for, one of the simplest models: linear regression. I will show how to make the solution of ridge regression translation invariant and what the bias term means. Some people might find it obvious; however, it is important for practical purposes. And later, when we deal with nonlinear models, this trick will not be obvious.

In general, given data {\{\mathbf{X},\mathbf{y}\}}, we are interested in fitting the following linear model

\displaystyle y = \mathbf{w}^T\mathbf{x}+w_0

Most textbooks will tell you that we don't have to worry about the bias term {w_0}, since we can always augment the variables by adding an extra dimension: {\tilde{\mathbf{x}}=[1,\mathbf{x}^T]^T} and {\tilde{\mathbf{w}}=[w_0,\mathbf{w}^T]^T}. Then we only need to consider

\displaystyle y=\mathbf{w}^T\mathbf{x}

with augmented variables. Your textbook will tell you that to fit the linear model we minimize

\displaystyle  Q=\|\mathbf{y}-\mathbf{w}^T\mathbf{X}\|^2+\lambda\|\mathbf{w}\|^2 \ \ \ \ \ (1)

which is called ridge regression. The solution is simply

\displaystyle  \mathbf{w}=(\mathbf{X}\mathbf{X}^T+\lambda \mathbf{I})^{-1}\mathbf{X}\mathbf{y}^T \ \ \ \ \ (2)

Nothing is new up to now. However, what the textbooks miss, the bias term {w_0}, is actually important. Here we pursue another derivation to take a better look at what {w_0} means. We can equivalently rewrite (1) as

\displaystyle  Q=\|\mathbf{y}-(\mathbf{w}^T\mathbf{X}+w_0\mathbf{1}^T)\|^2+\lambda\|\mathbf{w}\|^2+\lambda_0w_0^2 \ \ \ \ \ (3)

What (1) really does is set {\lambda_0=\lambda}. However, it actually makes little sense to use a non-zero {\lambda_0}, which puts a penalty on the bias {w_0}. For example, if we translate our data, the solution for {\mathbf{w}} will be different.

If we want our solution to be invariant w.r.t. translation, we should set {\lambda_0=0}, which means we should minimize

\displaystyle  Q=\|\mathbf{y}-(\mathbf{w}^T\mathbf{X}+w_0\mathbf{1}^T)\|^2+\lambda\|\mathbf{w}\|^2 \ \ \ \ \ (4)

To derive the solution, we first solve {\frac{\partial Q}{\partial w_0}=0} (details omitted here)

\displaystyle \frac{\partial Q}{\partial w_0}=2nw_0-2(\mathbf{y}-\mathbf{w}^T\mathbf{X})\mathbf{1}=0

to get

\displaystyle  w_0=\frac{1}{n}(\mathbf{y}-\mathbf{w}^T\mathbf{X})\mathbf{1}=\bar{y}-\mathbf{w}^T\bar{\mathbf{x}} \ \ \ \ \ (5)

Substituting (5) back into the objective (4), we have

\displaystyle  \begin{array}{rcl}  Q&=&\|\mathbf{y}-(\mathbf{w}^T\mathbf{X}+(\bar{y}-\mathbf{w}^T\bar{\mathbf{x}})\mathbf{1}^T)\|^2+\lambda\|\mathbf{w}\|^2\\ &=&\|(\mathbf{y}-\bar{y}\mathbf{1}^T)-\mathbf{w}^T(\mathbf{X}-\bar{\mathbf{x}}\mathbf{1}^T)\|^2+\lambda\|\mathbf{w}\|^2 \end{array}

The solution for {\mathbf{w}} then is

\displaystyle  \mathbf{w}=((\mathbf{X}-\bar{\mathbf{x}}\mathbf{1}^T)(\mathbf{X}-\bar{\mathbf{x}}\mathbf{1}^T)^T+\lambda I)^{-1}(\mathbf{X}-\bar{\mathbf{x}}\mathbf{1}^T)(\mathbf{y}-\bar{y}\mathbf{1}^T)^T \ \ \ \ \ (6)

This solution means that we first center the data {\mathbf{X}} and {\mathbf{y}}, then regress the centered data in the usual way as in (2) to get {\mathbf{w}}, and then substitute {\mathbf{w}} back into (5) to get {w_0}. The bias term is just the average of the differences between your targets {y_i} and your regression values {\mathbf{w}^T\mathbf{x}_i}. In this way, we have a translation-invariant solution for {\mathbf{w}}, since we always center the data first.
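In code, the recipe is short (a minimal sketch; X is stored as d x n with samples in columns, and {\lambda} is an arbitrary illustrative value):

import numpy as np

def ridge_with_bias(X, y, lam=1e-2):
    xbar = X.mean(axis=1, keepdims=True)        # center the inputs
    ybar = y.mean()                             # center the targets
    Xc, yc = X - xbar, y - ybar
    d = X.shape[0]
    w = np.linalg.solve(Xc @ Xc.T + lam * np.eye(d), Xc @ yc)   # eq. (6)
    w0 = ybar - w @ xbar.ravel()                                # eq. (5)
    return w, w0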

It is also interesting to see that the general solution for a non-zero {\lambda_0} comes from solving

\displaystyle \frac{\partial Q}{\partial w_0}=2nw_0-2(\mathbf{y}-\mathbf{w}^T\mathbf{X})\mathbf{1}+2\lambda_0w_0=0

to get

\displaystyle  \begin{array}{rcl}  w_0&=&\frac{1}{n+\lambda_0}(\mathbf{y}-\mathbf{w}^T\mathbf{X})\mathbf{1}\\ &=&\bar{y}-\mathbf{w}^T\bar{\mathbf{x}} \end{array}

Here we use a pseudo version of the average notation

\displaystyle  \begin{array}{rcl}  \bar{y}&=&\frac{1}{n+\lambda_0}\mathbf{y}\mathbf{1}\\ \bar{\mathbf{x}}&=&\frac{1}{n+\lambda_0}\mathbf{X}\mathbf{1} \end{array}

This result means that, before we see the data, we assume there are some pseudo-samples {\{\mathbf{x},y\}} sitting at the origin, and the number of these samples equals {\lambda_0}. Again, why would one make such an assumption? In my opinion, you should not, unless you really know what you are doing.


Undocumented Machine Learning

Having worked on machine learning for a while, I realize that many models work only if you understand them deeply and use them correctly, even the simplest ones. Most of the time, the literature that describes them misses many details that are sometimes important for practical usage. Even worse, some authors might not understand them deeply or correctly, sometimes even the inventors of the models. In this series of blog posts, I will share some of my understanding of popular models that might not be found in the literature.

Since I am not a famous guy in this area, I can use a more critical style to evaluate these models. It is certainly possible that my understanding or criticism is wrong; therefore, I also keep an open mind toward criticism of my posts. The hope is that with this series, the audience (if there is any) and I can both learn something.

This page serves as a table of contents for the series and will be updated regularly. I actually have a very long list in mind, from very basic models such as linear regression to more complicated ones such as Bayesian nonparametrics and deep learning. I hope I can squeeze out enough time to finish them. Here are the topics I will discuss in upcoming posts:

(I) Linear Regression
(II) Kernel Regression
(III) From Probabilistic Regression to Gaussian Process
(IV) Exponential Family
(V) Naive Bayes
(VI) Logistic Regression
(VII) Discriminant VS Generative Models (Maximum Entropy)
(VIII) Bayesian View of Semi-supervised Learning
(IX) Dirichlet Process
(X) Variational (Empirical) Bayesian Inference
(XI) TBD

Since these blog posts are not research papers, I will omit some detailed derivations from time to time and try to keep the posts succinct when the details can easily be found in a textbook. One excellent machine learning textbook that I like very much is Pattern Recognition and Machine Learning by Dr. Bishop. I will try to use a notation similar to that book's. If you find my posts confusing, it is better to first take a look at the corresponding part of that book. Of course, you can always leave a comment; I will be happy to discuss with you.
