Change Point Test: Test Statistic Computation

on Wednesday, 12 December 2012
This post will not go through the details of hypothesis testing for the problem; instead, it will focus on computing the test statistic of the Change Point test by hand.

Problem:
In a study of the effect of a change in payoff in a two-choice probability learning task, the payoff or reward given to a subject was changed (or not changed) after the individual's performance had stabilized at an asymptote (a steady performance level). The hypothesis was that a change in payoff for correct responses would affect the subject's level of responding. The experiment consisted of 300 trials, on each of which the subject made a binary response. Since a subject's response pattern cannot be considered stable until some learning has taken place, only the last 240 trials are analyzed here. At trial 120 (trial 180 in the original sequence), one-half of the subjects experienced a change in payoff. The experimenter wished to determine whether there was a change in the parameter of the binary response sequence over the last 240 trials. If there was a change for those subjects who experienced a change in payoff, then it might be concluded that the change in payoff induced a change in response level.
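Before working through the numbers by hand, here is a minimal computational sketch. It assumes the statistic being computed is the maximum absolute CUSUM deviation $D = \max_j \left|S_j - \frac{j}{N} S_N\right|$, where $S_j$ is the number of successes in the first $j$ trials, a common choice for change-point tests on dichotomous sequences; the function name and the simulated data below are illustrative only, not the study's actual responses.

```python
import numpy as np

def change_point_statistic(responses):
    """Maximum absolute CUSUM deviation for a binary (0/1) sequence.

    D = max_j | S_j - (j / N) * S_N |, where S_j is the number of
    successes in the first j trials.  A large D suggests the success
    probability changed somewhere in the sequence.
    """
    y = np.asarray(responses)
    n = len(y)
    s = np.cumsum(y)                      # S_1, ..., S_N
    j = np.arange(1, n + 1)
    deviations = np.abs(s - j * s[-1] / n)
    return deviations.max(), deviations.argmax() + 1  # statistic, peak trial

# Illustrative data: 240 Bernoulli trials whose success probability
# shifts from 0.4 to 0.6 at trial 120, mirroring the design above.
rng = np.random.default_rng(0)
y = np.concatenate([rng.binomial(1, 0.4, 120), rng.binomial(1, 0.6, 120)])
print(change_point_statistic(y))
```

The trial at which the deviation peaks is also a natural rough estimate of where the change occurred.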

The Pigeonhole Principle

on Saturday, 1 September 2012
Theorem. Let $k$ and $n$ be any two positive integers. If at least $kn+1$ objects are distributed among $n$ boxes, then one of the boxes must contain at least $k+1$ objects. In particular, if at least $n+1$ objects are to be put into $n$ boxes, then one of the boxes must contain at least two objects.

Proof. If no box contains $k+1$ or more objects, then every box contains at most $k$ objects. This implies that the total number of objects put into the $n$ boxes is at most $kn$, a contradiction. $\blacksquare$
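As a quick illustration (a toy sketch, not part of the original argument), the case $k = 1$ says that among any $n + 1$ integers, two must leave the same remainder upon division by $n$, since there are only $n$ possible remainders. The snippet below finds such a pair by treating remainders as boxes; the function name and sample inputs are only for demonstration.

```python
import random
from collections import defaultdict

def same_remainder_pair(numbers, n):
    """Among n + 1 integers there are only n possible remainders mod n,
    so by the pigeonhole principle two of them must share a remainder;
    return one such pair."""
    boxes = defaultdict(list)             # remainder -> numbers with that remainder
    for x in numbers:
        boxes[x % n].append(x)
        if len(boxes[x % n]) == 2:        # a box just received its second object
            return tuple(boxes[x % n])
    raise AssertionError("pigeonhole principle violated?!")

nums = random.sample(range(1000), 11)     # 11 objects, 10 boxes (remainders mod 10)
print(nums, "->", same_remainder_pair(nums, 10))
```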

Maximum Likelihood Estimation

on Friday, 31 August 2012
Suppose that the likelihood function depends on $k$ parameters $\theta_1 , \theta_2 , \cdots , \theta_k$ . Choose as estimates those values of the parameters that maximize the likelihood $L(y_1 , y_2 , \cdots , y_n | \theta_1 , \theta_2 , \cdots , \theta_k )$.

To emphasize the fact that the likelihood function is a function of the parameters $\theta_1, \theta_2, \cdots, \theta_k$, we sometimes write the likelihood function as $L(\theta_1, \theta_2, \cdots, \theta_k)$. It is common to refer to maximum-likelihood estimators as MLEs. We illustrate the method with an example.
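As a computational aside (a minimal sketch, not part of the original text), the same maximization can be carried out numerically by minimizing the negative log-likelihood. The binomial setup below anticipates Example 1; the use of SciPy's minimize_scalar and the sample data are assumptions made only for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def negative_log_likelihood(p, y):
    """Negative log of L(p) = p^y (1 - p)^(n - y) for 0/1 observations y."""
    y = np.asarray(y)
    return -(y.sum() * np.log(p) + (len(y) - y.sum()) * np.log(1 - p))

y = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]        # 7 successes in 10 trials
result = minimize_scalar(negative_log_likelihood, bounds=(1e-6, 1 - 1e-6),
                         args=(y,), method="bounded")
print(result.x)                            # close to 7/10, the closed-form MLE
```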

Example 1. A binomial experiment consisting of $n$ trials resulted in observations $y_1 , y_2 , \cdots , y_n$ , where $y_i = 1$ if the $i$th trial was a success and $y_i = 0$ otherwise. Find the MLE of $p$, the probability of a success.

Solution: The likelihood of the observed sample is the probability of observing $y_1, y_2, \cdots, y_n$. Hence,$$L(p) = L(y_1, y_2, \cdots, y_n \mid p) = p^y (1-p)^{n-y}, \quad \text{where } y = \sum_{i=1}^n y_i.$$We now wish to find the value of $p$ that maximizes $L(p)$. If $y = 0$, then $L(p) = (1-p)^n$, and $L(p)$ is maximized when $p = 0$. Analogously, if $y = n$, then $L(p) = p^n$ and $L(p)$ is maximized when $p = 1$. If $y = 1, 2, \cdots, n-1$, then $L(p) = p^y (1-p)^{n-y}$ is zero when $p = 0$ and $p = 1$ and is continuous for values of $p$ between $0$ and $1$. Thus, for $y = 1, 2, \cdots, n-1$, we can find the value of $p$ that maximizes $L(p)$ by setting the derivative $\frac{d\,L(p)}{dp}$ equal to $0$ and solving for $p$.
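Carrying the calculation one step further (the standard log-likelihood argument), since maximizing $L(p)$ is equivalent to maximizing $\ln L(p)$, we have$$\ln L(p) = y \ln p + (n-y)\ln(1-p), \qquad \frac{d\,\ln L(p)}{dp} = \frac{y}{p} - \frac{n-y}{1-p}.$$Setting the derivative equal to zero and solving gives$$\frac{y}{p} = \frac{n-y}{1-p} \iff y(1-p) = (n-y)p \iff \hat{p} = \frac{y}{n},$$so the MLE of $p$ is the sample proportion of successes, which also agrees with the boundary cases $y = 0$ and $y = n$ noted above.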