Quick Start

Fluctuation Theory is key to understanding many solubility and solubilization issues. But I've never really understood what all the angled bracket stuff, e.g. 〈δxδx〉, means. Here we use a simple function to demonstrate the core ideas of variance and covariance (which are simple), plus the notation used to show them (which is the confusing bit). This RDF link lets you see the connection with Kirkwood-Buff theory.


The app is based on the patient explanations from Dr Seishi Shimizu at U York, a world-class fluctuation theory expert.


〈δxδx〉 = σ²x = Var(x) = Covar(x,x)
〈δyδy〉 = σ²y = Var(y) = Covar(y,y)
〈δxδy〉 = σxy = Covar(x,y)
//One universal basic required here to get things going once loaded
window.onload = function () {
    //restoreDefaultValues(); //Un-comment this if you want to start with defaults
    Main(); //Run an initial calculation with the current inputs
};
//Any global variables go here

//Main is hard wired as THE place to start calculating when input changes
//It does no calculations itself, it merely sets them up, sends off variables, gets results and, if necessary, plots them.
function Main() {

    //Send all the inputs as a structured object
    //If you need to convert to, say, SI units, do it here!
    const inputs = {
        a: sliders.Slidea.value,
        b: sliders.Slideb.value,
        yrel: sliders.Slideyrel.value,
    };

    //Send inputs off to CalcIt where the names are instantly available
    //Get all the responses as an object, result
    const result = CalcIt(inputs)

    //Set all the text box outputs
    document.getElementById('Comments').value = result.Comments
    document.getElementById('xv').value = result.xv
    document.getElementById('yv').value = result.yv
    document.getElementById('cv').value = result.cv
    //Do all relevant plots by calling plotIt - if there's no plot, nothing happens
    //plotIt is part of the app infrastructure in app.new.js
    if (result.plots) {
        for (let i = 0; i < result.plots.length; i++) {
            plotIt(result.plots[i], result.canvas[i]);
        }
    }

    //You might have some other stuff to do here, but for most apps that's it for Main!
}

//Here's the app calculation
//The inputs are just the names provided - their order in the curly brackets is unimportant!
//By convention the input values are provided with the correct units within Main
function CalcIt({ a, b, yrel }) {

    let Curves = []
    for (let k = 0; k < 9; k++) Curves.push([])
    let Comments = ""
    let p = 0, xav = 0, yav = 0, xyav = 0, x2av = 0, y2av = 0, N = 0, pTot = 0
    //Scan the x,y plane in polar coordinates, accumulating the probability-weighted
    //sums needed for the averages and collecting the points that lie on each of the
    //nine probability contours. The accumulation code itself is elided in this listing.
    for (let phi = 0; phi < 360; phi += 1) {
        for (let r = 0; r <= 2; r += 0.005) {
            //p would be computed here from the distribution (elided)
            for (let k = 0; k < 9; k++) {
                if (Math.abs(p - (0.9 - k * 0.1)) < 0.005) {
                    //...contour-point collection elided...
                }
            }
        }
    }
    //The computationally simple forms: Covar = 〈xy〉-〈x〉〈y〉, Var = 〈x²〉-〈x〉²
    let cv = xyav - xav * yav, xv = x2av - xav * xav, yv = y2av - yav * yav
    //Snap tiny floating-point residues to a clean zero for display
    if (Math.abs(xav) < 1e-6) xav = 0; if (Math.abs(yav) < 1e-6) yav = 0; if (Math.abs(xyav) < 1e-6) xyav = 0
    if (Math.abs(x2av) < 1e-6) x2av = 0; if (Math.abs(y2av) < 1e-6) y2av = 0
    if (Math.abs(cv) < 1e-6) cv = 0; if (Math.abs(xv) < 1e-6) xv = 0; if (Math.abs(yv) < 1e-6) yv = 0

    Comments = "〈x〉 = " + xav.toPrecision(3) + " : 〈y〉 = " + yav.toPrecision(3) + " : 〈xy〉 = " + xyav.toPrecision(3) + " : 〈x²〉 = " + x2av.toPrecision(3) + " : 〈y²〉 = " + y2av.toPrecision(3)
    const prmap = {
        plotData: Curves,
        lineLabels: ["0.9", "0.8", "0.7", "0.6", "0.5", "0.4", "0.3", "0.2", "0.1"],
        hideLegend: true,
        //dottedLine: [false, true, false, true, false,true],
        xLabel: "x& ", //Label for the x axis, with an & to separate the units
        yLabel: "y& ", //Label for the y axis, with an & to separate the units
        //y2Label: "Cumulative", //Label for the y2 axis, null if not needed
        yAxisL1R2: [], //Array to say which axis each dataset goes on. Blank=Left=1
        logX: false, //Is the x-axis in log form?
        xTicks: undefined, //We can define a tick function if we're being fancy
        logY: false, //Is the y-axis in log form?
        yTicks: undefined, //We can define a tick function if we're being fancy
        legendPosition: 'top', //Where we want the legend - top, bottom, left, right
        xMinMax: [-1, 1], //Set min and max, e.g. [-10,100], leave one or both blank for auto
        yMinMax: [-1,1], //Set min and max, e.g. [-10,100], leave one or both blank for auto
        y2MinMax: [,], //Set min and max, e.g. [-10,100], leave one or both blank for auto
        xSigFigs: 'F3', //These are the sig figs for the Tooltip readout. A wide choice!
        ySigFigs: 'F3', //F for Fixed, P for Precision, E for exponential
    };

    //Now we return everything - text boxes, plot and the name of the canvas, which is 'canvas' for a single plot
    return {
        Comments: Comments,
        xv: xv.toPrecision(3),
        yv: yv.toPrecision(3),
        cv: cv.toPrecision(3),
        plots: [prmap],
        canvas: ['canvas'],
    };
}
//Mudholkar, Govind S.; Hutson, Alan D. (February 2000). "The epsilon–skew–normal distribution for analyzing near-normal data". Journal of Statistical Planning and Inference. 83 (2): 291–309. doi:10.1016/s0378-3758(99)00096-8. ISSN 0378-3758.
function Skew(x, W, H, S, O) {
    x = x - O

    if (x < 0) {
        return H * Math.exp(-Math.pow(x / (W * (1 + S)), 2))
    } else {
        return H * Math.exp(-Math.pow(x / (W * (1 - S)), 2))
    }
}
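To see what the skew parameters do, the function can be exercised on its own. Here is a minimal sketch, re-stating Skew so the snippet is self-contained; the parameter meanings (W = width, H = height, S = skew, O = offset) are inferred from the call signature, not confirmed by the app:

```javascript
// Re-statement of Skew for standalone use. Assumed meanings (inferred):
// W = width, H = height, S = skew (0 = symmetric), O = offset of the peak.
function Skew(x, W, H, S, O) {
    x = x - O;
    if (x < 0) {
        return H * Math.exp(-Math.pow(x / (W * (1 + S)), 2));
    }
    return H * Math.exp(-Math.pow(x / (W * (1 - S)), 2));
}

const peak = Skew(0, 1, 1, 0, 0);          // at x = O the value is simply H
const left = Skew(-0.5, 1, 1, 0, 0);       // with S = 0 the peak is symmetric...
const right = Skew(0.5, 1, 1, 0, 0);       // ...so left and right values match
const skewLeft = Skew(-0.5, 1, 1, 0.3, 0); // S > 0 widens the x < 0 side
```

With S = 0 the two branches are identical Gaussians; a non-zero S makes one side wider and the other narrower, which is what generates the asymmetric distributions in the app.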


What do we mean by "fluctuations"? Let's, for simplicity, have a 50:50 mix of two components, e.g. solvents, called X and Y. If we ask how much X or Y there is overall, the answer is simple: 50%. But if you could look down a molecular microscope you would see that in some areas, at some moment in time, you have 51% of X, while in another it might be 47%. Now add up all the snapshots and you get a probability distribution of how much X there is. If we define 50% as the centre, calling it x=0, and define the probability at that point as p=1, then as you move either side of x=0 the probability falls off towards p=0 as the x value differs more from 0. We could imagine it as a normal distribution. These statistical distributions about the mean, averaged over some relevant time, are "fluctuations".

But what about Y? We have the same rules: at Y=50% we have y=0 and p=1 (it's the same point as x=0), with a similar type of falling off as you deviate away from y=0.

Suppose that X doesn't much like Y. In regions where there is more X there will automatically be less Y as they try to get out of each other's way. So there is a negative correlation between fluctuations in X and in Y. But suppose they really rather like each other. Then as X locally increases, so does Y.
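This like/dislike picture can be put into numbers with a tiny simulation. The snippet below is illustrative only (the snapshot values are made up, nothing to do with the app's distribution): the covariance of the local fluctuations comes out negative when X and Y avoid each other and positive when they cluster together.

```javascript
// 〈δxδy〉 from a list of snapshots: the average of (x - 〈x〉)(y - 〈y〉)
function covar(xs, ys) {
    const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
    const mx = mean(xs), my = mean(ys);
    return mean(xs.map((x, i) => (x - mx) * (ys[i] - my)));
}

// Made-up local deviations of the X fraction from its 50% mean
const dx = [0.01, -0.02, 0.005, 0.015, -0.01];
// X and Y dislike each other: wherever X is up, Y is down
const dyAnti = dx.map(v => -v);
// X and Y like each other: Y rises and falls with X
const dyLike = dx.map(v => 0.8 * v);

const cAnti = covar(dx, dyAnti); // negative: anti-correlated
const cLike = covar(dx, dyLike); // positive: correlated
```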

To understand what follows we need to know one key trick. In this world, instead of saying Average(x) or using those tricky bars along the top to show an average, x̅, we use angled brackets. So the average of x is shown as 〈x〉.

Capturing the fluctuations with numbers

The probability function used in the app is not at all solubility-related. It is a simple function that allows us to see what is going on and to understand what the answers mean.


We have x fluctuating around a mean with, as a default, everything symmetrical, so the mean of x, 〈x〉 = 0, using the angled brackets. If you change the offset in the distribution, 〈x〉 becomes non-zero so you can see what happens. The same goes for 〈y〉. As we know from statistics, you can capture how much variation there is about the mean via the Variance, which can be defined in two ways that are arithmetically equivalent. The first way is intellectually clearer, the second is computationally simpler. Confusingly, we see different nomenclatures for the variance, each included here:

`"〈"δx^2"〉" = "〈"δxδx"〉" = (σ^2)_x = Var(x) = "〈"(x-"〈"x"〉")(x-"〈"x"〉")"〉"`

The curious δx terms are simply δx = x-〈x〉, i.e. δx is the difference between x at that moment and the mean 〈x〉, so the equation is really just the definition restated. The other way to calculate it (expand the product then average: 〈x²-2x〈x〉+〈x〉²〉 = 〈x²〉-2〈x〉〈x〉+〈x〉² = 〈x²〉-〈x〉²) is:

`"〈"δx^2"〉" = "〈"δxδx"〉" = (σ^2)_x = Var(x) = "〈"x^2"〉"-"〈"x"〉""〈"x"〉"`
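The arithmetic equivalence of the two definitions is easy to check numerically. A sketch with made-up fluctuation data (not the app's distribution):

```javascript
const mean = a => a.reduce((s, v) => s + v, 0) / a.length;

// Made-up x values fluctuating about a mean
const xs = [0.1, -0.3, 0.2, 0.05, -0.05];
const xav = mean(xs); // 〈x〉

// Intellectually clearer form: 〈δxδx〉 with δx = x - 〈x〉
const var1 = mean(xs.map(x => (x - xav) * (x - xav)));
// Computationally simpler form: 〈x²〉 - 〈x〉〈x〉 (needs only running sums)
const var2 = mean(xs.map(x => x * x)) - xav * xav;
// var1 and var2 agree to floating-point precision
```

The second form is the one the app's CalcIt uses, because 〈x²〉 and 〈x〉 can be accumulated in a single pass through the data.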

We can do the same thing for y. But what's of most interest is the Covariance, how X and Y are interrelated.

`"〈"δxδy"〉" = (σ)_(xy) = Covar(x,y) = "〈"(x-"〈"x"〉")(y-"〈"y"〉")"〉"`

and, of course, the equivalent:

`"〈"δxδy"〉" = (σ)_(xy) = Covar(x,y) = "〈"xy"〉"-"〈"x"〉""〈"y"〉"`
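And the same numerical check works for the covariance; again the numbers are invented purely for illustration:

```javascript
const mean = a => a.reduce((s, v) => s + v, 0) / a.length;

// Invented paired fluctuations, loosely tracking each other
const xs = [0.1, -0.2, 0.15, -0.05, 0.0];
const ys = [0.08, -0.15, 0.1, -0.02, 0.01];
const xav = mean(xs), yav = mean(ys);

// First form: 〈(x - 〈x〉)(y - 〈y〉)〉
const cov1 = mean(xs.map((x, i) => (x - xav) * (ys[i] - yav)));
// Second form: 〈xy〉 - 〈x〉〈y〉
const cov2 = mean(xs.map((x, i) => x * ys[i])) - xav * yav;
// Positive here because xs and ys move up and down together
```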

If you make the a parameter large then the distribution co-varies: positive x goes with positive y and negative x with negative y. If you make b large then an increase in x gives a decrease in y (and vice versa), so they are anti-correlated. In the first case 〈δxδy〉 is positive, in the second it is negative. The magnitudes of these effects depend on the absolute values of a, b and yrel.

After all this we need one more fact. What everyone else calls Variance and Covariance, statistical thermodynamicists call Fluctuation. I suppose they could have called it all Covariance, but although it's respectable to talk about an X,X covariance, it's a bit odd. So the single term "fluctuation" ended up doing the job.

Solubility/solubilization theory

We can now look at the definition of a Kirkwood-Buff integral, a number that describes how much or how little something likes itself (or another species) compared to the average, with the mysterious nomenclature removed. Here we see GXX and GXY: how much X likes itself compared to the average, and how much X and Y (dis)like each other. These are expressed in terms of N, the number of molecules of each species present. For simplicity some pesky volume and Kronecker delta terms are omitted:

`G_(XX) ~ ("〈"δN_XδN_X"〉")/("〈"N_X"〉""〈"N_X"〉")` and `G_(XY) ~ ("〈"δN_XδN_Y"〉")/("〈"N_X"〉""〈"N_Y"〉")`

Translated into normal language this means that the size of the effect is the (co)variance normalized by the means. The bigger the absolute (co)variance, the larger the magnitude of the Gij value. Or, to put it more usefully, a large positive Gij means that i and j prefer to be together more than the average, and a large negative value means they prefer to keep apart.
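As a toy illustration of that normalization (the particle counts below are invented, nothing from the app): take snapshots of the number of X and Y molecules in a small observation volume and form the Gij-style ratio.

```javascript
const mean = a => a.reduce((s, v) => s + v, 0) / a.length;

// Invented snapshots of molecule counts in a small observation volume
const NX = [100, 104, 97, 101, 98];
const NY = [50, 47, 52, 49, 52]; // deliberately anti-correlated with NX

const mX = mean(NX), mY = mean(NY); // 〈N_X〉 and 〈N_Y〉

// 〈δN_X δN_Y〉 = 〈N_X N_Y〉 - 〈N_X〉〈N_Y〉
const fluctXY = mean(NX.map((n, i) => n * NY[i])) - mX * mY;
// G_XY ~ 〈δN_X δN_Y〉 / (〈N_X〉〈N_Y〉)
// Negative here: where X is up, Y is down, so X and Y prefer to keep apart
const GXY = fluctXY / (mX * mY);
```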