Randomness with temperature

Wednesday, February 19, 2020

Tagged: softmax, random, python, storylets, temperature, math, algorithm

If you've messed around with neural net packages, or even read about them, you've probably encountered the idea of a "temperature parameter". This is usually described as some kind of chaos level. If the temperature is low, the algorithm is boring and sticks close to the source material. If the temperature is high, the algorithm can jump wildly in unexpected directions.

I think this is pretty cool -- no pun intended. It seems like it would be useful in all sorts of systems! For example, in a storylet-based narrative structure, you might want a temperature parameter in the selection engine. Lower temperatures mean the player gets storylets that are highly relevant to recent events. Higher temperatures mean the player gets a more random selection, more prone to non sequiturs and topic shifts.
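
For instance, the selection step might look like this sketch. (The storylet ids and weights here are invented for illustration, and weighted_choice is the function given at the end of this post.)

# Each storylet gets a relevance weight from whatever scoring the
# engine already does; these particular entries are made up.
storylets = [
    ('argue_with_captain', 5.0),   # strongly tied to recent events
    ('letter_from_home', 2.0),
    ('strange_dream', 0.5),        # a potential non sequitur
]

# Low temperature: stick close to the most relevant storylet.
print(weighted_choice(storylets, temp=0.5))

# High temperature: wander off-topic more often.
print(weighted_choice(storylets, temp=5.0))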

In AI research this is called the softmax function (or "softargmax" if you want to be even nerdier). You can find lots of example code, but it's usually meant to run in the context of an AI algorithm. I couldn't find a version that worked on a weighted list of options.

So I wrote one. Here it is in Python 3. (Attached at the end of this post, or see this gist snippet.)

The underlying math is not complicated. You have a bunch of options, each with a weight (base probability). You take the probabilities, divide by the temperature, run them through exp(), and then normalize. The result has the two properties we want:

  • As the temperature approaches zero, the highest-weighted option approaches 100% likelihood. (The randomness "freezes out" into a deterministic choice.)
  • As the temperature approaches infinity, all options become equally likely. (The wildest, lowest-weighted outcome is just as likely as any other.)

I do a little more work to ensure a third, common-sense property:

  • If the temperature is 1.0, then the likelihood of an option is roughly proportional to its weight.

So if you put in options weighted 2, 3, and 5, at a temp of 1.0, they'll have probabilities of about 20%, 30%, and 50% respectively. (Not exactly that, because the math doesn't work that way. But close enough to feel right.)
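
Here is that computation worked through by hand, as a standalone sketch of the arithmetic. (The extra factor of len(weights) is the same trick the full function below uses to keep temp 1.0 feeling proportional.)

import math

weights = [2, 3, 5]
temp = 1.0

# Normalize the weights so they sum to 1.0.
total = sum(weights)
adjusted = [ w / total for w in weights ]    # [0.2, 0.3, 0.5]

# Softmax with temperature: exp(p / T), with an extra factor of the
# option count so that temp 1.0 stays roughly proportional.
exps = [ math.exp(p * len(weights) / temp) for p in adjusted ]

# Normalize again to get the final probabilities.
total = sum(exps)
probs = [ e / total for e in exps ]

print([ '%.4f' % p for p in probs ])
# ['0.2079', '0.2807', '0.5114'] -- about 21%, 28%, and 51%.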

Here's a chart showing the probabilities for a range of temperatures. These are the option probabilities computed by weighted_choice([ ('A', 2), ('B', 3), ('C', 5) ], temp=T) for various values of T.

Temp    Opt A   Opt B   Opt C
0.01    0.0000  0.0000  1.0000
0.1     0.0001  0.0025  0.9974
0.2     0.0105  0.0469  0.9426
0.5     0.1127  0.2054  0.6819
0.8     0.1807  0.2629  0.5565
1.0     0.2079  0.2807  0.5114
1.25    0.2312  0.2939  0.4749
2.0     0.2681  0.3115  0.4204
5.0     0.3068  0.3258  0.3674
10.0    0.3200  0.3298  0.3502
100.0   0.3320  0.3330  0.3350
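
If you want to regenerate those numbers, the verbose flag on the function below prints the computed probabilities. A loop like this sketch would do it; note that below temp 0.05 the function switches to a deterministic pick instead of printing probabilities.

for T in [ 0.1, 0.2, 0.5, 0.8, 1.0, 1.25, 2.0, 5.0, 10.0, 100.0 ]:
    print('Temp', T)
    # verbose=True prints the adjusted probabilities; the returned
    # choice is discarded here.
    weighted_choice([ ('A', 2), ('B', 3), ('C', 5) ], temp=T, verbose=True)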

#!/usr/bin/env python3

import math
import random

def weighted_choice(ls, temp=1.0, verbose=False):
    """The argument to this function should be a list of tuples, representing
    a weighted list of options. The weights must be non-negative numbers.
    For example:

    weighted_choice([ ('A', 0.5), ('B', 2.0), ('C', 2.5) ])

    This will return 'A' roughly 15% of the time, 'B' roughly 36% of the
    time, and 'C' roughly 49% of the time. (Close to, but not exactly,
    proportional to the weights.)

    The temperature parameter adjusts the probabilities. If temp < 1.0,
    the weights are treated as extra-important. As temp approaches zero,
    the highest-weighted option becomes inevitable.

    If temp > 1.0, the weights become less important. As temp becomes high
    (more than twenty), all options become about equally likely.

    Set verbose to True to see the probabilities go by.
    """
    count = len(ls)
    if count == 0:
        if verbose:
            print('No options')
        return None

    values = [ tup[0] for tup in ls ]
    origweights = [ float(tup[1]) for tup in ls ]

    if count == 1:
        if verbose:
            print('Only one option')
        return values[0]

    if temp < 0.05:
        # Below 0.05, we risk numeric overflow, and the chance of the highest-
        # weighted option approaches 100% anyhow. So we switch to a
        # deterministic choice.
        if verbose:
            print('Temperature is close to zero; no randomness')
        bestval = values[0]
        bestwgt = origweights[0]
        for ix in range(1, count):
            if origweights[ix] > bestwgt:
                bestwgt = origweights[ix]
                bestval = values[ix]
        return bestval

    # Normalize the weights (so that they add up to 1.0).
    totalweight = sum(origweights)
    adjustweights = [ val / totalweight for val in origweights ]

    # Perform the softmax operation. I throw in an extra factor of "count"
    # in order to keep the behavior sensible around temp 1.0.
    expweights = [ math.exp(val * count / temp) for val in adjustweights ]

    # Normalize the weights again. Yes, we normalize twice.
    totalweight = sum(expweights)
    normweights = [ val / totalweight for val in expweights ]

    if verbose:
        vals = [ '%.4f' % val for val in normweights ]
        print('Adjusted weights:', ', '.join(vals))

    # Select according to the new weights.
    val = random.uniform(0, 1)
    for ix in range(0, count):
        if val < normweights[ix]:
            return values[ix]
        val -= normweights[ix]
    return values[-1]
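
A quick sanity check on the random selection itself, assuming the function above is in scope: sample it a bunch of times and compare the observed frequencies to the chart.

from collections import Counter

options = [ ('A', 2), ('B', 3), ('C', 5) ]
tally = Counter(weighted_choice(options, temp=1.0) for _ in range(10000))

for key in sorted(tally):
    print(key, tally[key] / 10000)
# Should land near the temp-1.0 row above: about 0.21, 0.28, 0.51.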