Sensing beyond the Shannon limit


Mark R

Diamond Member
Oct 9, 1999
I went to a scientific convention recently, and one of the talks discussed new sensing technology being applied to MRI scanners that would deliver information beyond the 'Shannon limit' of the equipment in use.

Unfortunately, this was more of a general 'what's happening' talk than a scientific paper being presented. As a result, it was thin on actual math and heavy on vague language.

Nevertheless, the point was that the Shannon limit might not be all that hard-and-fast, and that it is possible to get around it in the right circumstances.

Is this heresy? Or am I missing something?
 

esun

Platinum Member
Nov 12, 2001
"Under the right circumstances" is the key phrase there. I don't know the specifics of what you're talking about, but if you add some assumption about the information or the channel that wasn't present in the original theorem, then the limit of course can be greater than what the original theorem predicts.
 

CycloWizard

Lifer
Sep 10, 2001
The only way I could see this being feasible is if the errors in an MRI signal are non-Gaussian, or are correlated with some other parameter that could be monitored. I'm not familiar enough with modern MRI to say how it processes the signal or how the Shannon limit might otherwise apply, but these seem like the most likely "loopholes."
 

Born2bwire

Diamond Member
Oct 28, 2005
Dammit, we talked about this general topic previously in terms of antenna arrays. I seem to recall, vaguely, that we can beat the Nyquist limit using irregular sampling. This was applied to the spacing of antenna arrays, since the array response can be treated like a Fourier transform... something involving Prony's method... at this point my memory starts to merge it with the topic of gingerbread houses. I'll pick a colleague's brain and try to remember how it worked.
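In the meantime, here's roughly how I remember the Prony trick working - a toy numpy sketch of the classical method (my own made-up example, assuming noise-free uniform samples of a sum of complex exponentials). Two tones at 0.21 and 0.23 cycles/sample would need ~50 samples to separate with a DFT, but the two-exponential model recovers them from 8:

```python
import numpy as np

def prony(x, p):
    """Classical Prony's method: fit x[n] ~= sum_k c_k * z_k**n."""
    N = len(x)
    # Linear prediction: x[n] = -(a1*x[n-1] + ... + ap*x[n-p])
    rows = np.array([x[n - 1::-1][:p] for n in range(p, N)])
    a, *_ = np.linalg.lstsq(rows, -x[p:], rcond=None)
    # Roots of the prediction polynomial give the exponentials z_k
    z = np.roots(np.concatenate(([1.0], a)))
    # Amplitudes c_k from a final least-squares fit
    V = np.vander(z, N, increasing=True).T  # V[n, k] = z_k**n
    c, *_ = np.linalg.lstsq(V, x, rcond=None)
    return z, c

# Two close tones, only 8 uniform samples
n = np.arange(8)
x = 2.0 * np.exp(2j * np.pi * 0.21 * n) + np.exp(2j * np.pi * 0.23 * n)
z, c = prony(x, p=2)
print(np.angle(z) / (2 * np.pi))  # ~[0.21, 0.23] (order may vary)
```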
 

Mark R

Diamond Member
Oct 9, 1999
It may well help - an MRI scanner is basically a Fourier transform machine. The detected signal is the Fourier transform of the actual structure data: a 2D FT if the scanner is operated in 'image acquisition' mode, or a 3D FT if it is in 'volume acquisition' mode.

The scanner acquires the data point by point in the Fourier domain.
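As a toy illustration of that relationship (a synthetic numpy 'phantom' standing in for real structure data - not actual scanner code):

```python
import numpy as np

# Crude 2D "phantom": a bright rectangle standing in for anatomy
img = np.zeros((64, 64))
img[24:40, 20:44] = 1.0

# What the scanner detects is (a sampled version of) the 2D Fourier
# transform of the object - one Fourier-domain point per acquisition
kspace = np.fft.fft2(img)

# Given the full Fourier-domain grid, the image is recovered exactly
# by the inverse 2D FFT
recon = np.fft.ifft2(kspace).real
print(np.allclose(recon, img))  # True
```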

The latest scanners use phased-array antennas (e.g. with 32 or 64 channels) to detect the signal. This allows them to undersample in the Fourier domain, using synthetic-aperture techniques to resolve the resulting Nyquist aliasing in the spatial domain - but this isn't a new technique, so I don't think this is what was being discussed here.

I wonder if this has something to do with the fact that the high spatial frequencies contain little useful, or visible, information - so if you sense them incompletely, you only get a slight blurring of the image (in much the same way that JPEG compression shaves off the high spatial frequencies in the DCT domain).
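To illustrate (same toy phantom as above, keeping only a central block of low spatial frequencies and zeroing the rest):

```python
import numpy as np

img = np.zeros((64, 64))
img[24:40, 20:44] = 1.0

# Shift so that zero frequency sits at the centre of the grid
kspace = np.fft.fftshift(np.fft.fft2(img))

# Keep only the central 16x16 low-frequency block - a crude analogue
# of JPEG discarding high-frequency DCT coefficients
mask = np.zeros((64, 64))
mask[24:40, 24:40] = 1.0
recon = np.fft.ifft2(np.fft.ifftshift(kspace * mask)).real

# The rectangle is still clearly visible, just with softened edges
print(f"mean absolute error: {np.abs(recon - img).mean():.3f}")
```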

Approaches similar to this have been used before (e.g. scanners can acquire the Fourier domain in a trajectory spiralling out from zero, so that if the patient moves during the scan, only the high-spatial-frequency data gets trashed and the resulting image is likely to remain of adequate quality). But as far as I'm aware, they've always needed a full set of data.

Are there new techniques that can recover a signal from a Fourier transform whose coefficients have been sampled irregularly and incompletely?
 

CycloWizard

Lifer
Sep 10, 2001
So is the idea to improve temporal resolution at minimal cost to spatial resolution?

The only thing I can think of that might affect the ability to improve on the Shannon limit is that you might have some information from the FT that allows you to know that the error is non-Gaussian, presumably due to a known power distribution. Unfortunately, I'm just speculating at this point. :(
 

CP5670

Diamond Member
Jun 24, 2004
I went to a scientific convention recently, and one of the talks discussed new sensing technology being applied to MRI scanners that would deliver information beyond the 'Shannon limit' of the equipment in use.

This sounds like something related to compressed sensing, which has become a big trend in DSP circles in the last few years. There are some tutorial articles on it here. Basically, if the signal you're trying to recover is sparse in some sense and you take that sparsity into account, there are alternative approaches that allow you to get around many of the traditional limitations in signal acquisition.
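As a toy illustration of the idea (my own sketch - random Gaussian measurements plus plain iterative soft-thresholding, i.e. an l1-regularised fit; practical systems use more sophisticated solvers): a length-256 signal with only 5 nonzero entries, recovered from 64 measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal: N = 256 samples, only K = 5 of them nonzero
N, K, M = 256, 5, 64
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# M = 64 random measurements - far fewer than N = 256
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true

# ISTA: gradient step on ||y - A x||^2, then soft-thresholding,
# which pushes the estimate toward sparse solutions
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(N)
for _ in range(2000):
    x = x + A.T @ (y - A @ x) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

# Close to x_true, up to a small l1 shrinkage bias
print(np.max(np.abs(x - x_true)))
```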

I seem to recall, vaguely, that we can beat the Nyquist limit using irregular sampling.

The compressed sensing stuff involves nonuniform (typically random) sampling, but the main assumption is really the sparsity of the underlying signal. For a general signal that is not necessarily sparse in any way, nonuniform sampling doesn't let you beat the (appropriately defined) Nyquist rate.
 

canis

Member
Dec 10, 2007
I went to a scientific convention recently, and one of the talks discussed new sensing technology being applied to MRI scanners that would deliver information beyond the 'Shannon limit' of the equipment in use.

Unfortunately, this was more of a general 'what's happening' talk than a scientific paper being presented. As a result, it was thin on actual math and heavy on vague language.

Nevertheless, the point was that the Shannon limit might not be all that hard-and-fast, and that it is possible to get around it in the right circumstances.

Is this heresy? Or am I missing something?

I think you are confusing the Shannon limit with the sampling theorem. The sampling theorem is only a sufficient condition, so there is nothing new in what you are saying.
 

Mark R

Diamond Member
Oct 9, 1999
This sounds like something related to compressed sensing, which has become a big trend in DSP circles in the last few years. There are some tutorial articles on it here. Basically, if the signal you're trying to recover is sparse in some sense and you take that sparsity into account, there are alternative approaches that allow you to get around many of the traditional limitations in signal acquisition.
Ah. Yes, I think that's exactly it. I'd not remembered the term, but now that you mention it, I'm pretty sure that's what he called it. The stuff about lack of information in the higher spatial frequencies also fits with the need for sparsity in the data.

So is the idea to improve temporal resolution at minimal cost to spatial resolution?

This is the main aim. In MRI, if you want higher resolution, you need more pixels, which means more acquisitions in the Fourier domain - and therefore more time. Phased-array techniques can accelerate acquisition (if you undersample the Fourier space by a factor of 4, your scan takes 1/4 the time), and by using the phased-array data you can preserve spatial resolution - but something has to be compromised, and in this example it is SNR, which is cut in half because SNR scales with the square root of the acquisition time.

I think the appeal of compressive sensing here is that if you assume sparsity in the high spatial frequencies, you needn't spend much time sampling them - so you save time without degrading the SNR the way you would by uniformly undersampling the whole Fourier domain.
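Something like this sampling pattern, I'd guess - a toy numpy sketch of the mask idea only (a real compressed-sensing scanner would follow it with a sparsity-regularised reconstruction, not the zero-filled inverse FFT shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

img = np.zeros((64, 64))
img[24:40, 20:44] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(img))

# Variable-density mask: keep the 16x16 low-frequency centre fully,
# and only a random 25% of the high-frequency periphery
mask = rng.random((64, 64)) < 0.25
mask[24:40, 24:40] = True
print(f"sampled fraction: {mask.mean():.2f}")  # ~0.30 of full Fourier space

# Zero-filled reconstruction, just to show most structure survives
recon = np.fft.ifft2(np.fft.ifftshift(kspace * mask)).real
print(f"mean absolute error: {np.abs(recon - img).mean():.3f}")
```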
 

CP5670

Diamond Member
Jun 24, 2004
Ah. Yes, I think that's exactly it. I'd not remembered the term, but now that you mention it, I'm pretty sure that's what he called it. The stuff about lack of information in the higher spatial frequencies also fits with the need for sparsity in the data.

Yeah, the most common setting for compressed sensing involves a signal that has a large bandwidth but relatively few active frequencies within that band. You only know their total number, not their exact locations.

For a finite-dimensional signal, if the entire DFT band has length N and there are K nonzero frequencies in it, the traditional theory says that you still have to sample at the Nyquist rate N to recover the signal, while the compressed sensing framework allows for a sampling rate of roughly K*log(N/K), which can be much smaller than N if the frequency set is very sparse.
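To put rough numbers on it (constants omitted, so only the scaling is meaningful):

```python
import numpy as np

N, K = 4096, 32
print(N)                         # Nyquist: 4096 samples
print(round(K * np.log(N / K)))  # K*log(N/K): ~155 samples
```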

I think you are confusing the Shannon limit with the sampling theorem. The sampling theorem is only a sufficient condition, so there is nothing new in what you are saying.

I thought that's what he meant too. Note, though, that the Nyquist-rate condition in the classical sampling theorem is sharp unless you make extra assumptions about the signal, such as sparsity.
 