Fast, elegant way to calculate empirical/sample covariogram


Does anyone know a good method to calculate the empirical/sample covariogram, if possible in Python?

This is a screenshot of a book that contains a good definition of the covariogram:

[book screenshot: definition of the covariogram]

If I understand it correctly, for a given lag/width h I'm supposed to take all pairs of points that are separated by h (or by less than h), multiply their values, and, for each point involved, compute its mean m(x_i), which is defined as the average of the values located within distance h of x_i. For example, to compute m(x1) I need to average the values of all points within distance h of x1. This looks like a very intensive computation.
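Since the screenshot is not reproduced here, my reconstruction of the estimator as described above would be (the book's exact notation may differ): for the set N(h) of point pairs separated by (at most) the lag h,

$$\hat{C}(h) \;=\; \frac{1}{|N(h)|} \sum_{(x_i,\, x_j) \in N(h)} \Big( z(x_i)\, z(x_j) \;-\; m(x_i)\, m(x_j) \Big), \qquad m(x_i) \;=\; \frac{1}{|\{k : \lVert x_k - x_i \rVert \le h\}|} \sum_{k :\, \lVert x_k - x_i \rVert \le h} z(x_k),$$

where each local mean m(x_i) has to be recomputed for every lag h, which is exactly what makes the naive computation expensive.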

First of all, am I understanding this correctly? If so, what is a good way to compute this in a two-dimensional space? I tried to code it in Python (using numpy and pandas), but it takes a couple of seconds and I'm not even sure it is correct, which is why I'll refrain from posting that code here. Here is another attempt, with a very naive implementation:

import numpy as np
from scipy.spatial.distance import pdist, squareform

distances = squareform(pdist(np.array(coordinates)))  # coordinates is an n x 2 array of point locations
z = np.array(z)  # z holds the observed value at each point
cutoff = np.max(distances)/3.0 # somewhat arbitrary cutoff
width = cutoff/15.0
widths = np.arange(0, cutoff + width, width)
Z = []
Cov = []

for w in np.arange(len(widths)-1): # for each width
    # for each pairwise distance
    for i in np.arange(distances.shape[0]): 
        for j in np.arange(distances.shape[1]): 
            if distances[i, j] <= widths[w+1] and distances[i, j] > widths[w]:
                m1 = []
                m2 = []
                # when a distance is within a given width, calculate the means of
                # the points involved
                for x in np.arange(distances.shape[1]):
                    if distances[i,x] <= widths[w+1] and distances[i, x] > widths[w]:
                        m1.append(z[x])

                for y in np.arange(distances.shape[1]):
                    if distances[j,y] <= widths[w+1] and distances[j, y] > widths[w]:
                        m2.append(z[y])

                mean_m1 = np.array(m1).mean() 
                mean_m2 = np.array(m2).mean()
                Z.append(z[i]*z[j] - mean_m1*mean_m2)
    Z_mean = np.array(Z).mean() # calculate covariogram for width w
    Cov.append(Z_mean) # collect covariances for all widths

However, I have now confirmed that there is an error in my code. I know this because I also computed the covariogram from the variogram (using covariogram(h) = covariogram(0) - variogram(h); see the sketch after the figures below) and I get a different plot:

[plot: covariogram from my calculation]

And it is supposed to look like this:

[plot: what the covariogram is supposed to look like]
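For reference, this is roughly how that cross-check can be set up. It is only a minimal sketch: empirical_semivariogram is my own helper name, and it assumes the distances, z and widths arrays defined in the code above.

import numpy as np

def empirical_semivariogram(distances, z, widths):
    # gamma(h) = 0.5 * mean of (z_i - z_j)^2 over pairs whose distance
    # falls in the bin (widths[w], widths[w+1]]
    gamma = []
    for w in range(len(widths) - 1):
        in_bin = (distances > widths[w]) & (distances <= widths[w + 1])
        i, j = np.where(np.triu(in_bin, k=1))  # count each pair only once
        if len(i) == 0:
            gamma.append(np.nan)               # empty bin
        else:
            gamma.append(0.5 * np.mean((z[i] - z[j]) ** 2))
    return np.array(gamma)

gamma = empirical_semivariogram(distances, np.asarray(z), widths)
cov_from_variogram = np.var(z) - gamma  # covariogram(h) = covariogram(0) - variogram(h)

Under second-order stationarity, cov_from_variogram should roughly reproduce the directly estimated covariogram bin by bin, which is the comparison attempted above.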

Finally, if you know of a Python/R/MATLAB library that calculates empirical covariograms, let me know; that way I can at least verify what I did.

Accepted answer:

One could use numpy.cov, but if one does the calculation directly (which is very easy), there are more ways to speed it up.

First, make some fake data that has some spatial correlation. I'll do this by first building a spatial map with the desired correlations, and then drawing random data points that are positioned according to the underlying map and also take on its values.

Edit 1:
I changed the data-point generator so the positions are purely random, but the z-values are proportional to the spatial map. I also shifted the left and right halves of the map relative to each other to create a negative correlation at large h.

from numpy import *
import math
import random
import matplotlib.pyplot as plt

S = 1000
N = 900
# first, make some fake data, with correlations on two spatial scales
#     density map
x = linspace(0, 2*pi, S)
sx = sin(3*x)*sin(10*x)
density = .8* abs(outer(sx, sx))
density[:,:S//2] += .2
#     make a point cloud motivated by this density
random.seed(10)  # so this can be repeated
points = []
while len(points)<N:
    v, ix, iy = random.random(), random.randint(0, S-1), random.randint(0, S-1)
    if True:  # positions are purely random; use v < density[ix, iy] to sample by density instead
        points.append([ix, iy, density[ix,iy]])
locations = array(points).transpose()
print(locations.shape)
plt.imshow(density, alpha=.3, origin='lower')
plt.plot(locations[1,:], locations[0,:], '.k')
plt.xlim((0,S))
plt.ylim((0,S))
plt.show()
#     build these into the main data: all pairs into distances and z0 z1 values
L = locations
m = array([[math.sqrt((L[0,i]-L[0,j])**2+(L[1,i]-L[1,j])**2), L[2,i], L[2,j]] 
                         for i in range(N) for j in range(N) if i>j])

Which gives:

[plot: density map with the sampled point locations overlaid]

The above is just the simulated data, and I made no attempt to optimize its production. I assume this is where the OP starts, with the task below, since in a real situation the data already exists.


Now calculate the "covariogram" (which is much easier than generating the fake data, by the way). The idea here is to sort all the pairs and their associated values by h, and then index into these arrays using ihvals. That is, summing up to index ihval is the sum over N(h) in the equation, since this includes all pairs with h below the desired value.

Edit 2:
As suggested in the comments below, N(h) now contains only the pairs between h - dh and h, rather than all pairs between 0 and h (where dh is the spacing of the h-values in ihvals -- i.e., S/1000 was used below).
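Stated as a formula (my restatement, written with the variable names from the code below), each pass of the loop computes, for the bin N_k of pairs whose separation falls in (h_k - dh, h_k],

$$\hat{C}(h_k) \;=\; \underbrace{\frac{1}{|N_k|} \sum_{(i,j) \in N_k} z_i z_j}_{\texttt{szz}} \;-\; \underbrace{\Bigg( \frac{1}{|N_k|} \sum_{(i,j) \in N_k} z_i \Bigg)}_{\texttt{mnh}} \, \underbrace{\Bigg( \frac{1}{|N_k|} \sum_{(i,j) \in N_k} z_j \Bigg)}_{\texttt{mph}}.$$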

# now do the real calculations for the covariogram
#    sort by h and give clear names
order = argsort(m[:,0])  # sort all pairs by their separation h
h = m[order, 0]
zh = m[order, 1]    # z value at the first point of each pair
zsh = m[order, 2]   # z value at the second point of each pair
zz = zh*zsh         # precomputed products z_i*z_j

hvals = linspace(0,S,1000)  # the values of h to use (S should be in the units of distance, here I just used ints)
ihvals = searchsorted(h, hvals)
result = []
for n, ihval in enumerate(ihvals[1:]):
    start, stop = ihvals[n], ihval  # pairs whose separation falls in this h bin
    N = stop-start
    if N>0:
        mnh = sum(zh[start:stop])/N   # mean of z over the first points of the pairs in the bin
        mph = sum(zsh[start:stop])/N  # mean of z over the second points of the pairs in the bin
        szz = sum(zz[start:stop])/N   # mean of the products z_i*z_j in the bin
        C = szz-mnh*mph
        result.append([h[ihval], C])
result = array(result)
plt.plot(result[:,0], result[:,1])
plt.grid()
plt.show()

[plot: empirical covariogram C(h) versus h]

which looks reasonable to me, since one can see bumps and troughs at the expected h values, but I haven't done a careful check.

The main speedup here over numpy.cov is that one can precalculate all of the products, zz. Otherwise, one would feed zh and zsh into cov for every new h, and all the products would be recalculated. The calculation could be sped up even more by doing partial sums, i.e., accumulating from ihvals[n-1] to ihvals[n] at each step n, but I doubt that will be necessary.
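For completeness, here is one way to write that partial-sum idea (my own vectorization, not part of the answer above). It reuses the sorted arrays h, zh, zsh, zz and the bin edges ihvals from the code above, in the same from numpy import * namespace, and replaces the Python loop with cumulative sums:

# cumulative sums with a leading zero, so that sum(x[start:stop]) == csum[stop] - csum[start]
csum_zh  = concatenate(([0.], cumsum(zh)))
csum_zsh = concatenate(([0.], cumsum(zsh)))
csum_zz  = concatenate(([0.], cumsum(zz)))

starts, stops = ihvals[:-1], ihvals[1:]
counts = stops - starts
valid = counts > 0                      # skip empty bins

mnh = (csum_zh[stops]  - csum_zh[starts])[valid]  / counts[valid]
mph = (csum_zsh[stops] - csum_zsh[starts])[valid] / counts[valid]
szz = (csum_zz[stops]  - csum_zz[starts])[valid]  / counts[valid]
C   = szz - mnh*mph                     # covariogram value per non-empty bin
hs  = h[stops[valid] - 1]               # a representative h for each non-empty bin

Plotting hs against C should reproduce the curve from the loop above (up to the choice of the representative h per bin).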