One software package I use a lot performs, as a key part of its work, a deconvolution. Given the high noise in the input data, it uses the generally accepted technique of linear algebraic deconvolution via the truncated singular value decomposition (TSVD) of a Toeplitz matrix A built from the measured convolution kernel.
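To make the setup concrete, here is a minimal sketch of the kind of TSVD deconvolution I mean (NumPy/SciPy; the kernel, data, and truncation level are placeholders I made up, not anything taken from the package):

```python
# Minimal sketch of truncated-SVD (TSVD) deconvolution with a Toeplitz matrix.
# The kernel shape and truncation level k are illustrative only.
import numpy as np
from scipy.linalg import toeplitz

def tsvd_deconvolve(kernel, data, k):
    """Estimate x from data ~= A @ x, keeping only the k largest singular values."""
    n = len(data)
    # A is Toeplitz: its first column is the (zero-extended) kernel, its first
    # row is zero apart from the leading kernel sample.
    col = np.r_[kernel, np.zeros(n - len(kernel))]
    row = np.r_[kernel[0], np.zeros(n - 1)]
    A = toeplitz(col, row)
    U, s, Vt = np.linalg.svd(A)
    # Invert only the k dominant singular values; the rest are treated as noise.
    return Vt[:k].T @ ((U[:, :k].T @ data) / s[:k])

# Example: blur a spike train with a decaying kernel, then deconvolve it.
rng = np.random.default_rng(0)
kernel = np.exp(-np.arange(25) / 5.0)
x_true = np.zeros(60)
x_true[[10, 30, 45]] = 1.0
A_full = toeplitz(np.r_[kernel, np.zeros(35)], np.r_[kernel[0], np.zeros(59)])
data = A_full @ x_true + 0.01 * rng.standard_normal(60)
x_est = tsvd_deconvolve(kernel, data, k=20)
```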
In practice, the measured convolution kernel needs considerable zero-padding, to at least 2x or even 3x the size of the measurements (20-30 points). However, like most commercial packages, it actually implements the zero padding by making A a block matrix.
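Concretely, the padding I'm describing looks something like this (sizes and kernel values are illustrative; `A_padded` is my guess at what the package builds, with the circulant version alongside for comparison):

```python
# Zero-pad a ~25-point measured kernel to 3x its length, then build the
# convolution matrix A two ways: as a zero-padded Toeplitz (block) matrix,
# and as a circulant matrix.
import numpy as np
from scipy.linalg import toeplitz, circulant

m = 25                                   # measured kernel points (20-30 in practice)
n = 3 * m                                # padded length, 2x-3x the measurement size
h = np.exp(-np.arange(m) / 5.0)          # stand-in for the measured kernel
h_pad = np.r_[h, np.zeros(n - m)]        # zero-padded kernel

A_padded = toeplitz(h_pad, np.r_[h_pad[0], np.zeros(n - 1)])  # what I think the package builds
A_circ = circulant(h_pad)                                     # the circulant alternative
```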
So I was wondering what the reason for that might be, given the increased cost of computing the SVD of a block matrix compared with a simple circulant matrix, whose SVD can be read off from the DFT of its first column.
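The reason the circulant case looks "simple" to me is that no dense SVD is needed at all; a quick check (my own illustration, nothing to do with the package):

```python
# A circulant matrix is normal and its eigenvalues are the DFT of its first
# column, so its singular values are just the magnitudes of those DFT values.
# The "SVD" therefore costs O(n log n) via the FFT instead of O(n^3).
import numpy as np
from scipy.linalg import circulant

c = np.random.default_rng(1).random(60)          # first column of a 60x60 circulant
A = circulant(c)

sv_dense = np.linalg.svd(A, compute_uv=False)    # O(n^3) dense SVD
sv_fft = np.sort(np.abs(np.fft.fft(c)))[::-1]    # O(n log n) via FFT
print(np.allclose(sv_dense, sv_fft))             # True
```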
I was wondering whether computational cost matters here: one data set may need 1,000,000 deconvolutions, and I could imagine that if each deconvolution required the SVD of a 60x60 matrix, this could become an issue.
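To put a rough number on that worry, this is how I'd estimate the cost (timings are obviously machine-dependent; the 60x60 size just comes from padding a 20-30 point kernel by 2-3x):

```python
# Rough cost estimate: time a dense SVD of a 60x60 matrix, then scale up to
# the ~1,000,000 deconvolutions one data set may need. Illustrative only.
import time
import numpy as np

A = np.random.default_rng(2).random((60, 60))

n_trials = 2000
t0 = time.perf_counter()
for _ in range(n_trials):
    np.linalg.svd(A)
per_svd = (time.perf_counter() - t0) / n_trials

print(f"~{per_svd * 1e6:.0f} us per 60x60 SVD")
print(f"~{per_svd * 1_000_000 / 60:.1f} minutes for 10^6 of them")
```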
Presumably there is a good reason for this, as the block matrix technique seems to be almost ubiquitous.