
two image correlation

Started by kaz 8 years ago · 6 replies · latest reply 8 years ago · 217 views

We use 2D cross correlation on an FPGA platform to find the x/y offset between two images. The images are meant to be the same but have minor differences due to small movement of the object (in the x or y direction, or in fact even a slight rotation). For now we can measure the x/y offset reliably from the peak of the correlation.

The problem is that we also want to evaluate the confidence of the technique in the running system, for each such correlation. The current algorithm uses some sort of peak power over the mean of the correlation output, but it doesn't convince me that it checks the overall correlation surface accurately: I get inconsistent values and it doesn't account for other, lower peaks.
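To make that concrete, here is roughly the kind of metric I mean, written as an offline sketch in Python (the function name and the corr input are just illustrative, not our actual FPGA pipeline):

import numpy as np

def peak_to_mean(corr):
    # Crude "peak power over mean" confidence: location of the strongest
    # correlation value and its ratio to the mean of the whole surface.
    corr = np.abs(corr)
    peak_idx = np.unravel_index(np.argmax(corr), corr.shape)  # (row, col) of the peak
    ratio = corr[peak_idx] / corr.mean()                      # larger -> more dominant peak
    return peak_idx, ratio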

It looks to me that this issue may fall into the area of surface measurement in general, i.e. the peak gradient relative to the whole surface, and that doesn't look easy on a platform like an FPGA.

But here is the confusing bit. In practice, maybe we are effectively already looking only at the centre part around the peak, but fellow engineers reject this idea as it does not check all of the output we have struggled to compute (even though their current metric isn't really doing that either).

Any input welcome. Thanks

kaz

#ImageProcessing

Reply by Fred Marshall, November 7, 2016

I once worked on a system that did something like correlation - the details don't matter.  What was interesting to me was that the temporal output could be noisy or not noisy. If it was not noisy then that indicated confidence in the output.  If it was noisy then that indicated low or zero confidence in the estimate.  In this case, the noise was measured and thresholded so that if it fell below some value, the confidence was deemed "OK" and the output could be used for something.
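Very roughly, the gating logic amounted to something like the sketch below (only an illustration of the idea as I've described it; the window length and threshold are arbitrary placeholders, not values from that system):

import numpy as np
from collections import deque

class NoiseGate:
    # Keep a short history of offset estimates and accept the result only
    # when the estimates are "quiet" (low spread) over the window.
    def __init__(self, window=8, max_std=0.5):
        self.history = deque(maxlen=window)
        self.max_std = max_std

    def update(self, offset_xy):
        self.history.append(offset_xy)
        if len(self.history) < self.history.maxlen:
            return False                                 # not enough frames yet to judge
        spread = np.std(np.array(self.history), axis=0)  # per-axis std over the window
        return bool(np.all(spread < self.max_std))       # quiet -> confident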

Perhaps that notion would be useful here...?


Fred

Reply by Tim Wescott, November 7, 2016

I'm having problems sorting out just what you're asking for here. Are you sure that it's not checking the overall correlation, or do you just think that might be the case? Is the issue that you're not checking the overall correlation, or that you're afraid you're getting false peaks, or something else?

If it's group psychodynamics, then I suggest that you do an analysis of what the algorithm is currently actually doing compared to what people think it's doing.  Then use that as a launching point for figuring out what the group really wants to do.

(Non-engineering professionals say things about engineering like "oh, it's data-driven, and purely rational".  I just laugh and laugh.)

Reply by drmike, November 7, 2016

I don't understand the problem.  You know the x/y offset and you have some reference that also allows you to know rotation.  I assume the rotation is around the z axis - yes?  Then you know how things changed from one image to the next - what else do you need to know?

Mike

Reply by kaz, November 7, 2016

The x/y offset is no problem in principle. I am asking how best to get a measure of confidence in the result in a running system where noise is also likely.

Using software-based modelling we can see the peak and the full surface nice and tidy. The problem is that in the running FPGA system we also want a single value to tell us whether the result is reliable or must be rejected. The target proposed is that some check needs to be done involving all the values, and I am not sure how this is possible. I have run tests on the current company algorithm and have proved it is inconsistent, rather than just suspecting it is unreliable.

Kaz

Reply by ombz, November 7, 2016

Cross-correlation can produce a very narrow peak (perhaps only one pixel wide) indicating the true offset, while other peaks are likely to appear as well in a real scenario, which may lead to false positives.

False positives can be accounted for by additional measures: multi-peak detection, and tracking and evaluation over several frames. For sure this will complicate your algorithm, and definitely its implementation on an FPGA.
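As one illustration of a simple multi-peak check (the exclusion size and the way the main lobe is blanked out are made up for the example, not a recommendation for your exact system):

import numpy as np

def peak_ratio(corr, exclude=3):
    # Ratio of the main correlation peak to the strongest secondary peak,
    # after blanking a small neighbourhood around the main lobe.
    corr = np.abs(corr).copy()
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    main = corr[r, c]
    r0, r1 = max(r - exclude, 0), min(r + exclude + 1, corr.shape[0])
    c0, c1 = max(c - exclude, 0), min(c + exclude + 1, corr.shape[1])
    corr[r0:r1, c0:c1] = 0.0                        # remove the main lobe
    second = corr.max()                             # strongest remaining peak
    return main / second if second > 0 else np.inf  # large ratio -> less risk of a false positive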

Besides, cross-correlation is computationally expensive. Other, possibly simpler, measures might provide more robust and computationally cheaper results. You might want to have a look at https://en.wikipedia.org/wiki/Template_matching for a start.
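For example, a plain sum-of-absolute-differences search is one of the simpler measures in that family (purely illustrative: the search range, image shapes and float cast are assumptions):

import numpy as np

def sad_search(ref, cur, max_shift=8):
    # Brute-force SAD over +/- max_shift pixels; lower score is better.
    ref = ref.astype(np.float32)
    cur = cur.astype(np.float32)
    h, w = ref.shape
    n = 2 * max_shift + 1
    scores = np.empty((n, n), dtype=np.float32)
    for i, dy in enumerate(range(-max_shift, max_shift + 1)):
        for j, dx in enumerate(range(-max_shift, max_shift + 1)):
            a = ref[max_shift:h - max_shift, max_shift:w - max_shift]
            b = cur[max_shift + dy:h - max_shift + dy, max_shift + dx:w - max_shift + dx]
            scores[i, j] = np.abs(a - b).mean()
    best = np.unravel_index(np.argmin(scores), scores.shape)
    return (best[0] - max_shift, best[1] - max_shift), scores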

Most techniques don't easily account for rotation between frames. Histogram matching or more sophisticated techniques can help here.

Concerning your question on how to get an estimate of the reliability of the method in a real case: sorry, but the only way to get real results is to actually measure the real case by building a test platform. If that is too much to ask: you might try to measure the noise and then simulate your algorithm with synthetic inputs plus "real" noise. If that is still too much or too complicated, just try some noise models and see how your algorithm performs. If the results scare you, then reconsider your test or simulation setup: is the noise model too strong or unrealistic? Are there bugs in the test or measurement? Are there bugs in the algorithm implementation (did you test that)? Or is it simply not a good method?
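If it helps, the synthetic test I have in mind is nothing more than this kind of loop (estimate_offset is a stand-in for whatever your FPGA algorithm does, and the additive Gaussian noise model is just one assumption to start with):

import numpy as np

def run_trials(ref, estimate_offset, true_shift=(3, -2), noise_sigma=5.0, n_trials=100):
    # Shift a reference image by a known offset, add noise, and count how
    # often the estimator recovers the exact offset.
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n_trials):
        shifted = np.roll(ref, true_shift, axis=(0, 1)).astype(np.float32)
        noisy = shifted + rng.normal(0.0, noise_sigma, ref.shape)
        if tuple(estimate_offset(ref, noisy)) == true_shift:
            hits += 1
    return hits / n_trials  # fraction of trials with the exact offset recovered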

Cheers

Andy

Reply by kaz, November 7, 2016

Thanks Andy, your reply is what I needed to hear.