DSPRelated.com
Forums

Setting prominence in signal.find_peaks

Started by bigalnz 4 months ago · 3 replies · latest reply 3 months ago · 157 views

I am processing radio signals in Python on a Raspberry Pi 5 (Rpi5) with a Software Defined Radio (SDR), and need to adaptively set the prominence parameter of scipy's signal.find_peaks.

The SDR is sampling at 768k samples/sec and I am processing in 256k-sample chunks.

Each 256k chunk enters the processing queue, where a PSD is computed and signal.find_peaks determines which peaks (and hence frequencies) in the chunk are potential signals. The chunk of samples is then channelized into 1024 channels, and only those channels with power are processed further.

Note: the further processing can determine which frequencies have valid signals vs noise.
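For context, here is a minimal sketch of what that per-chunk detection step might look like, assuming scipy.signal.welch for the PSD. The function name, parameter values, and the synthetic test tone are all illustrative, not the poster's actual code:

```python
import numpy as np
from scipy import signal

FS = 768_000     # SDR sample rate (from the post)
CHUNK = 256_000  # chunk size (from the post)
N_CHAN = 1024    # channelizer size (from the post)

def detect_peaks(iq_chunk, prominence=1e-5):
    """Estimate a two-sided PSD of the complex IQ chunk, then locate
    candidate carriers with find_peaks at a fixed prominence."""
    freqs, psd = signal.welch(iq_chunk, fs=FS, nperseg=N_CHAN,
                              return_onesided=False)
    idx = signal.find_peaks(psd, prominence=prominence)[0]
    return freqs[idx], psd[idx]

# Usage with synthetic data: one tone at 50 kHz buried in weak noise.
rng = np.random.default_rng(0)
t = np.arange(CHUNK) / FS
iq = np.exp(2j * np.pi * 50_000 * t) + 0.01 * (
    rng.standard_normal(CHUNK) + 1j * rng.standard_normal(CHUNK))
f, p = detect_peaks(iq)
```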

p = signal.find_peaks(spec, prominence=0.000010)[0]

An example plot with prominence 0.00001 and a single signal of interest correctly identified at 160.377 is shown below. Note that the other peaks are noise or DC spikes.

signal1_463.png


If a different set of samples from the SDR is then processed with signal.find_peaks at the same prominence setting of 0.00001, I get so many peaks that the Rpi5 can no longer keep up in real time and samples are dropped. (Note: the signal of interest is still at 160.377.)


signal2_10120.png

I have considered leaving the prominence set very low, so that I get a lot of peaks, and then only processing the 20 strongest, which is approximately the maximum number of signals the Rpi5 can process at once.
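That "detect low, keep only the N strongest" idea can be sketched using the prominences dictionary that find_peaks already returns. This is a hedged illustration, not the poster's code; the function name and the floor value are made up:

```python
import numpy as np
from scipy import signal

def top_n_peaks(spec, n=20, floor_prominence=1e-6):
    """Detect with a deliberately low prominence floor, then keep
    only the n most prominent peaks, returned in index order."""
    peaks, props = signal.find_peaks(spec, prominence=floor_prominence)
    if len(peaks) <= n:
        return peaks
    # Rank detections by their prominence and keep the strongest n.
    order = np.argsort(props["prominences"])[::-1][:n]
    return np.sort(peaks[order])
```

One nice property of this approach is that the per-chunk workload is bounded by n regardless of how busy the band is.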

Question: Is there a way to adaptively set the prominence?

Note: Please let me know if further information is required and I will update the question. Links to sample data can be provided.

Reply by Slartibartfast, October 31, 2024

It sounds like you're running up against the usual tradeoff between probability of detection and probability of false alarm. The easier you make it to detect something, the more likely false alarms become; reducing the false alarm rate may in turn reduce the probability of detection. Characterizing your system behavior so that this tradeoff can be optimized against the application criteria always takes some effort.

That said, there are methods that attempt to maintain a constant false alarm rate, so that changes in the overall input power or power level of a signal don't result in excessive false alarms.   Usually this is accomplished by creating a detection trigger threshold that compares the power level of the candidate signal with the surrounding power levels.   This selects signals that protrude above the surrounding power density by some minimum amount.   If the noise or adjacent signal levels increase, this means that a candidate signal must still exceed those levels by some minimum amount.
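A minimal sketch of that kind of constant-false-alarm-rate threshold in Python, assuming a 1-D PSD array: each bin is compared against the average power of nearby "training" bins, skipping "guard" bins immediately around it. The guard/training sizes and the dB offset are illustrative, not tuned values:

```python
import numpy as np

def cfar_detect(psd, guard=2, train=16, offset_db=10.0):
    """Declare a detection wherever a bin exceeds the mean of its
    surrounding training bins by offset_db (cell-averaging style)."""
    n = len(psd)
    hits = []
    for i in range(n):
        # Training cells on each side, excluding the guard cells.
        lo = psd[max(0, i - guard - train): max(0, i - guard)]
        hi = psd[i + guard + 1: i + guard + 1 + train]
        neigh = np.concatenate([lo, hi])
        if len(neigh) == 0:
            continue
        noise = neigh.mean()
        if psd[i] > noise * 10 ** (offset_db / 10):
            hits.append(i)
    return np.array(hits)
```

Because the threshold tracks the local background, a rise in the overall noise floor raises the threshold with it, which is exactly the behavior described above.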

I'm not familiar with the library that you're using or what the "prominence" level does; maybe it's already doing that. If it is, it sounds like you just need to adjust it to give you a tradeoff level that you can live with. If it isn't, consider implementing something that selects only the peaks with the characteristics that you need.



Reply by bigalnz, October 31, 2024

Thanks Slartibart - yes that does summarize the situation.

I presume you were referring to CFAR, which I have not fully got my head around yet. I could easily average my noise floor across a few chunks of PSD so it's smoother.
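Averaging the noise floor across chunks and tying the prominence to it could look something like this sketch, which uses the per-chunk median as a robust noise estimate and smooths it with an exponential moving average. The class name, the multiplier k, and the smoothing factor are all illustrative assumptions:

```python
import numpy as np
from scipy import signal

class AdaptiveProminence:
    """Track the noise floor across PSD chunks and set the
    find_peaks prominence as a multiple of that estimate."""
    def __init__(self, k=10.0, alpha=0.2):
        self.k = k          # prominence = k * noise-floor estimate
        self.alpha = alpha  # EMA smoothing factor across chunks
        self.floor = None

    def __call__(self, spec):
        med = np.median(spec)  # robust per-chunk noise level
        if self.floor is None:
            self.floor = med
        else:
            self.floor = (1 - self.alpha) * self.floor + self.alpha * med
        return signal.find_peaks(spec, prominence=self.k * self.floor)[0]
```

Because the prominence threshold scales with the estimated noise floor, the same detector should behave consistently across chunks with different overall power levels.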

The best explanation I have found of the topographical prominence that signal.find_peaks uses is here: Prominence - MATLAB & Simulink. From what I can understand, it should work independently of the noise floor.

Since I posted the question, I have used code that keeps the prominence very sensitive (giving me about 20-80 peaks) and then filters to process only the top 20 of those peaks. This seems to work well, but I always look for better ways :-)

The less processing I can do of channels that are just noise - the better.




Reply by jbrower, October 31, 2024

Hi BigAl NZ-

Slartibart is a top 0.1% radio expert, so he's giving you a sound theoretical basis to explore.

Brower, on the other hand, knows about audio and video and less about radio, but likes to think about practical solutions, haha. Since you seem to be using (stuck with?) a woefully underpowered Raspberry Pi 5, let me ask: have you explored overclocking? If you are using the GPU, you can overclock that too. Maybe you can boost your horsepower enough to handle a higher rate of prominent peaks.

Probability-estimation tradeoffs in engineering can often be addressed by faster processing and more memory (more data to decide, a consensus approach, etc.). At some point you might explore a small-footprint AI model that recognizes spectrogram patterns occurring before each peak, to help identify which ones are more likely to be real signals -- but I suspect that would take a lot more than your Rpi5 can handle.

-Jeff