Possible to compute a correlation value for a single element?

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
So I was asked by a professor to write a program that would read in two (black and white) images, and then, on a pixel-by-pixel basis, generate a new image representing the correlation values between the first two.

Now, just based on what the wikipedia page says about correlation, it is a single value computed between two sets, so I can't see how you could get a pixel by pixel (element by element) value. I've already discussed this with the professor, and he insists it is possible, just compute it for each element. Is wikipedia just not all-knowing? Is perhaps the professor thinking of some similar statistical measure that's not called correlation? Maybe I'm just thinking about it wrong?

Any ideas would be appreciated.
 

Onund

Senior member
Jul 19, 2007
287
0
0
You might want to ignore the word 'correlation' and get the professor to explain to you in other words what he wants. Have him describe his goal instead of using some all-encompassing word.

From what you say, it almost sounds like the prof wants you to find a correlation value for each pair of pixels between the two images. Not quite sure how you turn that into an image though. I would guess in this case he just wants a conversion factor to go from one picture to the next? Really, you should get more details from the prof.
 

Fox5

He wants a color overlay representing how the two images are changing together. I could make every adjacent group of pixels a subset and compare that way perhaps?
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
There are a number of ways "correlation" could be interpreted here. If I were given this assignment, I'd write a simple program that compared the value of pixel (x,y) to the corresponding pixel in the other image. Repeat for all pixels in the images. Then, create a linear mapping where 1 is a perfect match and -1 is a maximally imperfect match.* The image correlation could then be the sum of these scores divided by the number of pixels, which would give a number between -1 and 1, which is standard for correlation coefficients.

*Simplest linear mapping I can come up with without turning my brain on would be
s=(I1-I2)/(max(I_max-I1,I1-I_min)),
where I1 is the intensity of the pixel in image 1
I2 is the intensity of the pixel in image 2
I_max is the maximum intensity
I_min is the minimum intensity
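
In code, that per-pixel mapping might look like the following sketch (Python/NumPy, assuming 8-bit grayscale arrays; all names here are just illustrative):

```python
import numpy as np

def pixel_scores(img1, img2, i_min=0.0, i_max=255.0):
    """Element-wise s = (I1 - I2) / max(I_max - I1, I1 - I_min).

    Note: as written this gives s = 0 for a perfect match and |s| = 1
    for a maximal mismatch; something like 1 - 2*abs(s) would remap
    that onto the 1 (match) .. -1 (mismatch) scale described above.
    """
    i1 = img1.astype(float)
    i2 = img2.astype(float)
    denom = np.maximum(i_max - i1, i1 - i_min)  # never 0 when i_max > i_min
    return (i1 - i2) / denom

# hypothetical 2x2 grayscale images
a = np.array([[0, 128], [255, 64]], dtype=np.uint8)
b = np.array([[0, 128], [0, 192]], dtype=np.uint8)
s = pixel_scores(a, b)
```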
 

Fox5

Originally posted by: CycloWizard
There are a number of ways "correlation" could be interpreted here. If I were given this assignment, I'd write a simple program that compared the value of pixel (x,y) to the corresponding pixel in the other image. Repeat for all pixels in the images. Then, create a linear mapping where 1 is a perfect match and -1 is a maximally imperfect match.* The image correlation could then be the sum of these scores divided by the number of pixels, which would give a number between -1 and 1, which is standard for correlation coefficients.

*Simplest linear mapping I can come up with without turning my brain on would be
s=(I1-I2)/(max(I_max-I1,I1-I_min)),
where I1 is the intensity of the pixel in image 1
I2 is the intensity of the pixel in image 2
I_max is the maximum intensity
I_min is the minimum intensity

The values being compared are supposed to be in different units, though I suppose the two images could be normalized between -1 and 1 and then have a scaling factor computed. That doesn't sound like correlation to me though.


And uec0, cross-correlation doesn't seem to work either. It mentions a time delay, but neither image is concerned with time. The modification for working with images appears to be comparing a subset (a mask) to all the points on a single image, from which I could see doing either:
1. Make each pixel in one image a mask and compare it to the corresponding pixel only in the other image. This could possibly be right, but I could see it running into the same problem as regular correlation on an element-by-element basis, where all output values would be the same.
2. Make the entire image a mask and compare it to the other one, but this would only result in a single value.

 

CycloWizard

Originally posted by: Fox5
The values being compared are supposed to be in different units, though I suppose the two images could be normalized between -1 and 1 and then have a scaling factor computed. That doesn't sound like correlation to me though.
:confused: How can pixels have values with different units? Now I'm totally lost. There is only one type of information stored in a grayscale bitmap: intensity.
 

Fox5

Originally posted by: CycloWizard
Originally posted by: Fox5
The values being compared are supposed to be in different units, though I suppose the two images could be normalized between -1 and 1 and then have a scaling factor computed. That doesn't sound like correlation to me though.
:confused: How can pixels have values with different units? Now I'm totally lost. There is only one type of information stored in a grayscale bitmap: intensity.

Well, they're spectrographic images, so each one is an intensity in a different wavelength.
 

CycloWizard

Originally posted by: Fox5
Well, they're spectrographic images, so each one is an intensity in a different wavelength.
You do realize that if you actually gave us this information up front, we probably could have helped you before your assignment was due, right? I don't understand why people come to this forum to ask for help, then refuse to divulge any of the details of what they want help with.
 

Fox5

Originally posted by: CycloWizard
Originally posted by: Fox5
Well, they're spectrographic images, so each one is an intensity in a different wavelength.
You do realize that if you actually gave us this information up front, we probably could have helped you before your assignment was due, right? I don't understand why people come to this forum to ask for help, then refuse to divulge any of the details of what they want help with.

It's not due yet, and I didn't think it would really matter. Mathematically, it shouldn't matter what your two data sets are.

I'm thinking perhaps a rolling average would work for the correlation. I'll define something like a 5x5 grid of pixels, slide it across the image one pixel at a time by row and then by column, computing a correlation value at each position, then average the correlation values for each pixel (sum of all correlation values from windows that overlapped it / number of samples). There would be less data at the edges of the image, but it seems reasonable to me.
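
A rough sketch of that rolling-window idea in Python/NumPy, assuming Pearson correlation per window (the function name and the choice of r = 0 for flat windows are just my assumptions):

```python
import numpy as np

def local_correlation(img1, img2, win=5):
    """Sliding-window Pearson correlation, averaged back onto pixels.

    Each output pixel is the mean of the correlation coefficients of
    every win x win window that covered it (fewer windows near edges).
    """
    img1 = img1.astype(float)
    img2 = img2.astype(float)
    h, w = img1.shape
    acc = np.zeros((h, w))    # sum of r-values covering each pixel
    count = np.zeros((h, w))  # number of windows covering each pixel
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            a = img1[y:y + win, x:x + win].ravel()
            b = img2[y:y + win, x:x + win].ravel()
            # flat windows have undefined correlation; define r = 0 there
            if a.std() == 0 or b.std() == 0:
                r = 0.0
            else:
                r = np.corrcoef(a, b)[0, 1]
            acc[y:y + win, x:x + win] += r
            count[y:y + win, x:x + win] += 1
    return acc / np.maximum(count, 1)
```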
 

uec0

Junior Member
Jun 14, 2004
10
0
0
Maybe this is just for posterity, but it may help someone else, someday.

The "time delay" is just a variable -- It doesn't mean anything. It's just that introductory signal processing classes usually start with functions over time. Realistically, a function is a function -- The variable's real-world counterpart doesn't matter... For our purposes, f(t) could be a function over time or space or whatever.

Now an image is kind of like a two-variable discrete function. We know how to do convolution of single-variable discrete functions. It's a simple extension to perform convolutions on images -- In fact, this is the basis for some Photoshop filters.

Then from the Wikipedia article, we can take the next step to find cross-correlation between images since cross-correlation is very similar to convolution (in fact, it might be simpler).

You don't have to invent anything -- I believe this is a fairly common process...

To find the (cross) correlation for two images, simply apply image offsets, multiply, then "integrate". In the resulting correlation "image" C(x, y), the (x, y) would represent the applied shift, and C(x, y) is the "integral" (summation) over the multiplied image intensities. Of course the usual convolution issues apply (like dealing with edges).
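
For what it's worth, that shift-multiply-"integrate" recipe could be sketched in Python/NumPy like this (treating pixels outside the image as zero; `max_shift` and the names are just assumptions for illustration):

```python
import numpy as np

def cross_correlate(f, g, max_shift=2):
    """Raw 2D cross-correlation by shift, multiply, and sum.

    C[i, j] = sum over (y, x) of f(y, x) * g(y + dy, x + dx),
    where (dy, dx) ranges over -max_shift..max_shift, and shifted
    pixels that fall outside the image are treated as zero.
    """
    f = f.astype(float)
    g = g.astype(float)
    shifts = range(-max_shift, max_shift + 1)
    C = np.zeros((len(shifts), len(shifts)))
    for i, dy in enumerate(shifts):
        for j, dx in enumerate(shifts):
            shifted = np.roll(np.roll(g, -dy, axis=0), -dx, axis=1)
            # zero out the rows/columns that wrapped around
            if dy > 0:
                shifted[-dy:, :] = 0
            elif dy < 0:
                shifted[:-dy, :] = 0
            if dx > 0:
                shifted[:, -dx:] = 0
            elif dx < 0:
                shifted[:, :-dx] = 0
            C[i, j] = (f * shifted).sum()
    return C
```

For two identical images the peak lands at zero shift, which is one quick sanity check.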

Intensities do not have different units in different bands... At most, you may want to normalize the images before correlating them.

I could be wrong since, again, I don't really do this day-to-day and I haven't done signals in almost five years.
 

Hulk

Diamond Member
Oct 9, 1999
5,118
3,662
136
Scan both images to the same resolution in color. Convert to greyscale. Or scan in grayscale. Make sure the scanner can be set manually to not adjust values depending on the image. Might even be better to photograph the image using a high quality dSLR camera so you can set manual exposure and control lighting on both shots. Gotta get the data in right or as you know, "garbage in garbage out" and all of the analysis will be meaningless.

For 8 bit the images will have values ranging from 0 to 255. Subtract the values of corresponding pixels to create new pixel values and display the "difference" image. Dark areas are areas of correlation, light areas are areas that do not correlate.

The resulting image is a visual representation of correlation between the original two images. You could add up all of the intensity values in the "difference" image, divide by the total number of pixels, and then again by 256 to arrive at a "correlation" number, from zero to one, for the two images. Zero would be a perfect match, as if you compared the image to itself.
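
A minimal sketch of this difference-image procedure in Python/NumPy (assuming 8-bit grayscale arrays and taking the absolute difference so values stay in 0..255):

```python
import numpy as np

def difference_image(img1, img2):
    """Absolute per-pixel difference: dark = correlated, light = not.

    Also returns the 0..1 summary number described above: the mean
    difference divided by 256, where 0 means a perfect match.
    """
    diff = np.abs(img1.astype(int) - img2.astype(int)).astype(np.uint8)
    score = diff.mean() / 256.0
    return diff, score
```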
 

Fox5

Oh, boy, I'm confused again.
UEC's explanation sounds good, but I'm not quite understanding it.

Hulk's explanation sounds good, but it doesn't sound like it meets the mathematical definition of correlation.
 

uec0

I tried to leave a few blanks for the reader to fill in in the hope of promoting some independent research, but I may have been too ambiguous ;)

Assuming the professor actually wants the (mathematical) cross-correlation (I think you've had plenty of time to ask for clarification?), this explanation should suffice:
http://local.wasp.uwa.edu.au/~...llaneous/correlate/#2d

It's definitely enough to implement in code easily (double summation...double for-loop).

Note that this formulation includes image normalization. Also note that they look at image cross correlation from the perspective of pattern-matching so their "mask" image is generally smaller than their base image. For you, it seems that both images are similar in size. This just means you need to carefully consider the edge conditions -- See convolution articles for common procedures (defining pixels outside the image as zero is one possible option).
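
The double for-loop might look roughly like this in Python/NumPy -- a sketch of the general normalized formulation, scoring only positions where the mask fits entirely inside the image rather than dealing with the edge conditions (so it won't match the linked page's exact indexing conventions):

```python
import numpy as np

def ncc(image, mask):
    """Normalized cross-correlation of a mask slid over an image.

    At each valid position: subtract the window mean and the mask mean,
    multiply and sum, then divide by the product of the two norms,
    giving a value in [-1, 1] per position.
    """
    image = image.astype(float)
    mask = mask.astype(float)
    mh, mw = mask.shape
    h, w = image.shape
    m = mask - mask.mean()
    m_norm = np.sqrt((m * m).sum())
    out = np.zeros((h - mh + 1, w - mw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + mh, x:x + mw]
            d = win - win.mean()
            denom = m_norm * np.sqrt((d * d).sum())
            # flat window or flat mask: correlation undefined, use 0
            out[y, x] = 0.0 if denom == 0 else (d * m).sum() / denom
    return out
```

Cutting a patch out of an image and using it as the mask should put a peak of (nearly) 1 at the patch's location, which is an easy way to check the implementation.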

This explanation pretty much tells you how to solve your problem, but I think it may get you to your solution without really touching all the necessary background and theory :(