Rapidly Computing the Standard Deviation for a Fixed Period (Length of Data)

Wikipedia’s page for Standard Deviation presents a formula for rapidly calculating the standard deviation of a growing set of numbers:

[1] Q_k = Q_{k-1} + (x_k − A_{k-1})(x_k − A_k), where x_k is the k-th element in the set and A_k is the mean of the first k numbers.

The variance of the first n numbers of a set (the square of their standard deviation) is Q_n / n.

From this, it is easy to compute the standard deviation of a growing set of numbers without having to read back through the whole set. Recomputing the mean requires only adding (x_k − A_{k-1}) / k to the prior mean A_{k-1}, so it does not require looking back through all the elements of the set.

Recomputing Q likewise does not require looking back through the set, since Q_k depends only on x_k, Q_{k-1}, A_k, and A_{k-1}. We just showed that A_k can be derived from A_{k-1} without revisiting earlier elements, so new values of Q can be computed the same way.
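As a concrete illustration, here is a minimal Python sketch of this single-pass computation (the function name running_std and its structure are mine, not from the Wikipedia article; it assumes a non-empty input):

```python
import math

def running_std(values):
    """Single-pass (population) standard deviation of a growing set, per [1].
    Assumes values is non-empty."""
    mean = 0.0  # A_k: running mean
    q = 0.0     # Q_k: running sum of squared deviations
    for k, x in enumerate(values, start=1):
        old_mean = mean
        mean += (x - old_mean) / k         # A_k = A_{k-1} + (x_k - A_{k-1}) / k
        q += (x - old_mean) * (x - mean)   # Q_k = Q_{k-1} + (x_k - A_{k-1})(x_k - A_k)
    return math.sqrt(q / len(values))

print(running_std([2, 3, 5, 8]))  # sqrt(21 / 4) ≈ 2.2913
```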

Now how would one rapidly calculate the standard deviation if the set were not growing, but were instead a moving window of fixed length?  This is useful in finance or science when we want to keep an up-to-date standard deviation over a set of constant size whose numbers change one at a time.  For example, in finance, when calculating Bollinger Bands, we always want to know the standard deviation of stock prices over the last 20 periods.  The set of stock prices isn't growing; it stays a constant size, discarding the oldest price and adding the latest one at every tick interval.  We can adapt formula [1] for growing sets to meet our needs here by realizing that a moving window is a set that grows by one element at one end and shrinks by one at the other.

Without loss of generality, let's say the first element of our window is element a of the given set (this is the element we will discard when the window moves by one), the second element is b, the last element of the window is c, and the element to be added is d.  We are moving from a window that covers elements a through c to a window, one position over, that covers elements b through d.  Our window length is constant; that is, d − b = c − a.

So if our data set looks like … 1, 2, 3, 5, 8, 11, 13 … and our period is 4, the a-th element might have the value 2, the b-th 3, the c-th 8, and the d-th 11.  We moved from a window of [2, 3, 5, 8] to [3, 5, 8, 11].

Write Q_{bc} for the value of Q over the window covering elements b through c, and similarly A_{bc} for its mean.  If we grow the window b through c into the window b through d by adding the d-th element, formula [1] gives:

[2] Q_{bd} = Q_{bc} + (x_d − A_{bc})(x_d − A_{bd})

This has the same form as growing the window b through c into the window a through c by adding the a-th element at the front (the order of the elements does not matter to Q or to the mean, only their values do):

[3] Q_{ac} = Q_{bc} + (x_a − A_{bc})(x_a − A_{ac})

Rearranging [3], we find that shrinking a window, in this case from a through c to b through c, simply reduces Q by the product term on the right-hand side:

[4] Q_{bc} = Q_{ac} − (x_a − A_{bc})(x_a − A_{ac})

Substituting [4] into [2], we find how to get Q (and consequently the standard deviation) for the new window b through d from the Q of our old window a through c:

[5] Q_{bd} = Q_{ac} − (x_a − A_{bc})(x_a − A_{ac}) + (x_d − A_{bc})(x_d − A_{bd})

We arrive at [5], a formula for computing new values of Q for a moving window (and we can easily calculate the standard deviation from that) without a second pass through the numbers in the window.  The derivation is simple after realizing that a moving window is simply a set that grows by one element and shrinks by another.  Note that all the means in [5] are cheap to maintain as well: for a window of length n, A_{bd} = A_{ac} + (x_d − x_a) / n, and the mean of the overlap is A_{bc} = (n·A_{ac} − x_a) / (n − 1).
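Here is a minimal Python sketch of formula [5] in action (the class name RollingStd and its layout are my own; it assumes a period of at least 2, since the overlap mean A_{bc} is undefined for a one-element window).  The usage at the end reproduces the example above:

```python
import math
from collections import deque

class RollingStd:
    """Standard deviation over a fixed-length moving window, O(1) per update."""

    def __init__(self, period):
        assert period >= 2, "overlap mean A_bc is undefined for period 1"
        self.period = period
        self.window = deque()
        self.mean = 0.0  # A over the current window
        self.q = 0.0     # Q over the current window

    def update(self, x_d):
        n = self.period
        if len(self.window) < n:
            # Window still filling: ordinary growing-set update, formula [1].
            self.window.append(x_d)
            old_mean = self.mean
            self.mean += (x_d - old_mean) / len(self.window)
            self.q += (x_d - old_mean) * (x_d - self.mean)
        else:
            # Full window: discard x_a, add x_d, per formula [5].
            x_a = self.window.popleft()
            self.window.append(x_d)
            a_ac = self.mean
            a_bc = (n * a_ac - x_a) / (n - 1)  # mean of the overlap b..c
            a_bd = a_ac + (x_d - x_a) / n      # mean of the new window b..d
            self.q += (x_d - a_bc) * (x_d - a_bd) - (x_a - a_bc) * (x_a - a_ac)
            self.mean = a_bd

    def std(self):
        return math.sqrt(self.q / len(self.window))

# The example from the text: period 4 over ... 1, 2, 3, 5, 8, 11, 13 ...
roll = RollingStd(4)
for x in [1, 2, 3, 5, 8, 11, 13]:
    roll.update(x)
    print(roll.std())  # for window [3, 5, 8, 11], prints sqrt(36.75 / 4) ≈ 3.031
```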

An efficient but effective artificial intelligence for Big 2

(tl;dr: You can play against an AI player I designed for an online Big 2 game here.  The bots can very quickly calculate the expected score for every possible line of play, making them rather worthy opponents for even the most seasoned Big 2 players.)

The dynamics of Big 2 make programming an AI for it an interesting problem.  The usual “game tree” approach used for solving games like Chess or Reversi can still be applied to some extent.  However, as when programming an optimal AI for backgammon, there is an element of the unknown to account for.  We know the cards we hold in our hand, but how might the rest be distributed amongst the other concealed hands?

Human players have many other factors to their advantage.  They can spot and exploit other players’ weaknesses and habits, bluff, and draw logical conclusions by putting those two weapons together.

Emulating this behaviour while analyzing a fairly large “game tree” makes designing a fast yet effective AI for Big 2 a challenge.  Since the AI is controlled by the server (for obvious reasons it is server-side, not client-side), we also have to be careful not to burn too many cycles per AI decision.  The assumption is that multiple computer players will be running at once, so on a multiplayer online gaming site, the CPU cost of the AI should be minimal.

So instead of a game tree where we guess which cards will be played (and where they lie) and determine the optimal play by looking ahead, the approach taken is one giant heuristic: calculate the expected value of a play using only the cards we have seen and the cards we have not seen up to that point.  Ultimately the goal of Big 2 is to get rid of all your cards, so we simplify the calculation by estimating how many cards will be shed by playing a certain hand.

The process is simple: when it’s the computer’s turn, what are all the possible plays?  If a single was played, we can play a higher single, or pass.  If the computer is leading, then anything is possible except passing.  Whichever the case, we have a small finite set of moves.  Now, for each move, what will our hand look like afterward, and what will our possible plays be?  To answer this, we treat it as if the computer is leading, and determine the set of all plays it could make, whether single, double, triple, or five-card poker hand.  Then, given the cards concealed in everyone else’s hands (we know the exact set of unseen cards, since the computer easily tracks what has already been played; we just don’t know how those cards are distributed among the players, and while we could guess and assign per-player probabilities, for efficiency we opt not to), we calculate how many cards we expect to shed by the end of the game.  If one of the plays in our “future” hand is a poker hand, how likely is it to beat all the other possible poker hands out there?  And if it beats all of them (or a high percentage of them), how likely is a poker hand to even appear in play?  That factor is nil if the remaining players hold five or fewer cards each.
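To give a flavour of the idea, here is a toy Python sketch of the expectation-driven move scoring.  Everything here, from the integer rank encoding to the beats_fraction stand-in, is my own simplification rather than the actual engine; the real calculation handles pairs, triples, and poker hands with proper combinatorics:

```python
# Toy sketch of the expected-value heuristic: score each legal play by the
# cards it sheds now plus a crude estimate of what the remaining hand can shed.

def beats_fraction(card, unseen):
    """Fraction of unseen cards this card outranks (singles only)."""
    if not unseen:
        return 1.0
    return sum(1 for u in unseen if card > u) / len(unseen)

def expected_shed(hand, unseen):
    """Crude expected number of cards we can still shed: each remaining
    card counts in proportion to how likely it is to win a trick."""
    return sum(beats_fraction(c, unseen) for c in hand)

def choose_play(hand, legal_plays, unseen):
    """Pick the legal play maximizing cards shed now plus expected shed later."""
    def score(play):
        remaining = [c for c in hand if c not in play]
        return len(play) + expected_shed(remaining, unseen)
    return max(legal_plays, key=score)

# Cards as integer ranks (higher beats lower; in Big 2, deuces rank highest).
hand = [3, 7, 11, 14]             # the computer's concealed cards
unseen = [2, 5, 6, 9, 12, 13]     # cards not yet seen, spread across opponents
legal_plays = [[7], [11], [14]]   # say a 5 was led as a single; these beat it
print(choose_play(hand, legal_plays, unseen))  # prints [7]: shed the weakest winner
```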

Just by answering the questions above, you have a relatively good gauge of expected value, and interestingly enough, figuring out these probabilities is fast by applying some good finite mathematics formulae.  One thing human players do well is form a game plan to run out their cards: waiting for a deuce to be played so that one of their own deuces is promoted, say, giving them an unbeatable exit.  To emulate this on the AI side, we do a small probability assignment ourselves: when we know a certain line of play contains hands that are unbeatable (or close to it), we boost the probability to 1 (or close to 1) for a weaker hand that had no chance of being played unless the computer leads with it itself.

So with the original probabilities calculated and this “fudging” of probabilities to emulate a game plan along a certain line of play, we have a rather decent AI for Big 2 that can give even the most seasoned players a challenge.  The computer still does funny things from time to time, but it makes up for its shortcomings with fast, pinpoint probability calculations and the inhuman ability to track exactly which cards are left, letting it accurately determine the strength of its hand.

So how does it play?  You be the judge!