CMOS sensor chip, what limits it from saving 2400p 120fps video

Status
Not open for further replies.

paperwastage

Golden Member
May 25, 2010
1,848
2
76
Let's say I have a 13mpix CMOS sensor chip

http://www.sony.net/SonyInfo/News/Press/201208/12-107E/

I know that it can take 13mpix pictures (4208x3120).

What limits it from sampling 13mpix continuously, to make a 30fps 4208x3120 (13mpix) video?

If it can sample 1920x1080 at 30fps, why not 1920x1080 at 240fps?

Is there any limitation besides the data-transfer bottleneck (between the pixels, the chip, and the output)?
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
There's probably a limit to how fast each pixel can accurately 'sample' incident light. I don't know enough about optics to know the right word, but I understand the concepts well enough to think this may be one reason the 'sample rate' is limited. That's not to say there's no way around a slow sample rate, but it's probably situation-dependent. Using more power is often a way to increase the rate at which a circuit operates, for example. There may also be inherent limitations in a CMOS sensor for correctly storing and digitizing the energy from a photon.
 
Last edited:

Brian Stirling

Diamond Member
Feb 7, 2010
3,964
2
0
The electronics are not fast enough (readout from the CMOS imager, etc.). Also, to be useful the images would need to be processed into video, and there's no way the µP is up to that task at that resolution.

But, in a few years 4K video will be the norm and many will be playing with 8K video with resolution of about 32MP.

What's impossible today will be commonplace tomorrow...


Brian
 

AD5MB

Member
Nov 1, 2011
81
0
61
There is a finite rate at which data can be reliably transferred off the chip. Think of a square wave as rise time, dwell, then decay time. You need some stable dwell time so the device the data is transferred to can reliably read it.

If you reduce the dwell to zero, all you have is rise time and decay time. If you reduce those further, the signal never rises to its full potential, it decays from that reduced level, and the receiving circuit can never trust the data.

CMOS does not have speedy rise and decay times.

You can play tricks like reading out a reduced portion of the chip, or unusual windows like 200 horizontal by 50 vertical pixels (or 50 by 200), but there just isn't any way to read out the entire chip faster than some X bits per second.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
The electronics are not fast enough (readout from the CMOS imager, etc.). Also, to be useful the images would need to be processed into video, and there's no way the µP is up to that task at that resolution.

He didn't ask about realtime playback. Capturing and viewing are completely independent. Besides, you're wrong anyway about not being able to read data out fast enough.

1920 * 1080 * 3 (pixels) * 8 (bits per pixel) = 49,766,400 bits per frame

50Mbpf * 240 fps = 12Gbps

12Gbps is a fairly low transfer rate and certainly well below the computing capability of many modern CPUs especially when you convert the raw bits into words. Like I said before, this comes down to power. CMOS logic can definitely switch fast enough to make this possible, but it requires well behaved channels with low losses and a lot of power.

I interpreted part of the question to be beyond simple electronics. Inherent bottlenecks in a camera sensor are far more interesting than designing high frequency data paths. At 240fps, each pixel is read every 4.17ms, which means the bottleneck is probably in the data path, but I don't know that for sure.
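The raw-rate arithmetic above can be checked in a few lines. This is just an illustration; the function name is mine, not from any camera API:

```python
# Back-of-the-envelope check of the raw data rate quoted above
# (uncompressed 1080p RGB, 8 bits per channel, 240 fps).

def raw_bandwidth_bps(width, height, channels, bits_per_channel, fps):
    """Raw uncompressed video bandwidth in bits per second."""
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame * fps

bits_per_frame = 1920 * 1080 * 3 * 8          # 49,766,400 bits
rate = raw_bandwidth_bps(1920, 1080, 3, 8, 240)
print(f"{bits_per_frame:,} bits per frame")
print(f"{rate / 1e9:.1f} Gbps")               # ~11.9 Gbps
```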
 
Last edited:

paperwastage

Golden Member
May 25, 2010
1,848
2
76
but there just isn't any way to read out the entire chip faster than some X bits per second

So R&D for image sensors is focused on increasing that "X bits/s" throughput, along with increasing megapixel count, improving quality, etc.?
 

Brian Stirling

Diamond Member
Feb 7, 2010
3,964
2
0
He didn't ask about realtime playback. Capturing and viewing are completely independent. Besides, you're wrong anyway about not being able to read data out fast enough.

1920 * 1080 * 3 (pixels) * 8 (bits per pixel) = 49,766,400 bits per frame

50Mbpf * 240 fps = 12Gbps

12Gbps is a fairly low transfer rate and certainly well below the computing capability of many modern CPUs especially when you convert the raw bits into words. Like I said before, this comes down to power. CMOS logic can definitely switch fast enough to make this possible, but it requires well behaved channels with low losses and a lot of power.

I interpreted part of the question to be beyond simple electronics. Inherent bottlenecks in a camera sensor are far more interesting than designing high frequency data paths. At 240fps, each pixel is read every 4.17ms, which means the bottleneck is probably in the data path, but I don't know that for sure.


That's an amazingly convoluted analysis...

First you go on about the data rate being a piece of cake (12Gbps) then state that the bottleneck is probably the data path -- wow, just wow...


Brian
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
That's an amazingly convoluted analysis...

First you go on about the data rate being a piece of cake (12Gbps) then state that the bottleneck is probably the data path -- wow, just wow...


Brian

I'm currently designing a 16Gbps transmitter over a much more complicated channel than a CMOS sensor would see. So, yes, it's not technically very hard and I have enough experience to say that. With that said, the data rate can also be the bottleneck. They aren't mutually exclusive thoughts. Nice try, though.

You completely missed the point of the analysis. The data can be read out fast enough, but not knowing the performance characteristics of a CMOS sensor means the bottleneck could still be there. You're forgetting that CMOS sensors may not even be able to physically (optically) capture or transfer data through an individual pixel this fast. I solved the data path part, but the sensor analysis is outside of my area of expertise.
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
CMOS image sensors use a row/column grid construction. You activate a row, and each pixel in that row outputs its analog voltage onto its respective column. An ADC on each column (or, more usually, an analog MUX and a smaller number of ADCs) then performs the conversion.

The total number of ADCs, MUX channels and their operation rate are chosen for the application of the sensor. E.g. a low-end digital camera sensor with 8 Mpx might use 2 ADCs, with a sample rate of 48MHz. This will allow a full resolution image to be read out in about 0.1 seconds. Alternatively, the MUX could be configured to skip pixels, or to average pixels, prior to ADC, for lower resolution/higher frame rate modes. Often, the data output is just presented as a parallel bus output from the ADC.
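That readout-time estimate is just total pixels over aggregate ADC sample rate. A quick sketch (the 8 Mpx / 2 ADC / 48 MHz figures are the example numbers from the paragraph above, not any real part's datasheet):

```python
# Rough readout-time model for the row/column architecture described
# above: total pixel count divided by the combined ADC sample rate.

def readout_time_s(total_pixels, n_adcs, adc_rate_hz):
    """Seconds to read the whole sensor through n_adcs converters."""
    return total_pixels / (n_adcs * adc_rate_hz)

t = readout_time_s(8_000_000, 2, 48_000_000)
print(f"{t:.3f} s per full-resolution frame")   # ~0.083 s
```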

Faster sensors designed for high-speed imaging use a larger array of ADCs to perform the conversion, and frequently end up with a massive pin count to handle the large number of ADC buses. I suppose it would be possible to build a high-speed transceiver onto the sensor die, but I don't know how amenable the sensor processes are to GHz digital circuits. I suspect that as these are low-volume specialist products, it's not worth the R&D to build and test high-speed on-die transceivers; customers will generally accept a slow 256-bit bus connected to a suitable FPGA or ASIC.
 

endrebjorsvik

Junior Member
Sep 14, 2004
10
0
61
The capacitors (pixels) in CMOS sensors are usually read row by row. This reduces the number of ADCs required (smaller chip) and minimizes the space between pixels (the ADCs can sit at the edge of the chip).
Because of this, the data converters do not perform 120 conversions per second in a 120 fps camera, but 120 * the number of rows (e.g. 2400), i.e. 288 kS/s. And within this tiny conversion time, the converted value should be accurate to a given number of bits, e.g. 12 bits. What you get then is an ADC that should be close in performance to an audio ADC (e.g. 96 kS/s, 16 bit). But you do not need only two of these audio ADCs as in an audio system; you need 2400 of them, since an entire row should be converted in parallel. This could in theory make the converter array 1200 times more expensive than a stereo audio system's pair of ADCs. If a good audio ADC sells for $1 (not a very unreasonable price, though it should probably be closer to $0.10), the image sensor's converters alone would run $2400. And then you still need the camera around the sensor.
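The per-converter rate works out as rows × frame rate. A tiny sketch using the poster's assumption of one ADC per column (the helper name is mine):

```python
# With row-by-row readout and one ADC per column, each converter must
# digitize one sample per row per frame: rows * fps samples/second.

def column_adc_rate(rows, fps):
    """Samples/s each column ADC must sustain for row-parallel readout."""
    return rows * fps

rate = column_adc_rate(2400, 120)
print(f"{rate / 1e3:.0f} kS/s per column ADC")   # 288 kS/s
```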

And you also have a whole lot of other considerations in the analog circuitry. Switched-capacitor bandwidth, noise, power, cooling, linearity, etc. All these must be taken care of before the signal even gets digital.

Conclusion: Development cost, chip size (chip cost) and power are the largest obstacles.
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
I took this question to be more theoretical than practical. I just thought I'd mention that.
 

Brian Stirling

Diamond Member
Feb 7, 2010
3,964
2
0
One other thing from a technical standpoint: the image chip is almost certainly a Bayer-pattern sensor, and in that case each pixel records only one color (each 2x2 block has two green, one red, and one blue photosite); the missing colors are interpolated from nearby pixels. The processor must take this raw data and construct an RGB image or RGB video -- that's non-trivial, particularly at greater than 4K resolution!
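For illustration, here is a minimal nearest-neighbor demosaic for an RGGB Bayer mosaic. Real camera pipelines use much better interpolation (bilinear at minimum, usually edge-aware), so treat this purely as a sketch of why every output pixel costs reconstruction work:

```python
# Minimal nearest-neighbor demosaic for an RGGB Bayer mosaic.
# Each output pixel gets one measured channel and two copied from
# the nearest neighboring photosites.

def demosaic_nn(raw, width, height):
    """raw: flat list of sensor values in RGGB layout. Returns RGB rows."""
    def at(x, y):
        # Clamp to the edge so border pixels have a neighbor to copy.
        x = min(max(x, 0), width - 1)
        y = min(max(y, 0), height - 1)
        return raw[y * width + x]

    rgb = []
    for y in range(height):
        row = []
        for x in range(width):
            even_row, even_col = y % 2 == 0, x % 2 == 0
            if even_row and even_col:      # red photosite
                r, g, b = at(x, y), at(x + 1, y), at(x + 1, y + 1)
            elif even_row:                 # green photosite on red row
                r, g, b = at(x - 1, y), at(x, y), at(x, y + 1)
            elif even_col:                 # green photosite on blue row
                r, g, b = at(x, y - 1), at(x, y), at(x + 1, y)
            else:                          # blue photosite
                r, g, b = at(x - 1, y - 1), at(x - 1, y), at(x, y)
            row.append((r, g, b))
        rgb.append(row)
    return rgb
```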


Brian
 

Knavish

Senior member
May 17, 2002
910
3
81
You can, in fact, buy CMOS cameras that run at well over 1000 frames/second. From a random web search, Photron apparently sells 1024x1024, 12-bit cameras that run at 10,000 frames/sec.

I can identify two major drawbacks here. First, think about how long your camera exposure is in a dark room -- sometimes it can be a good fraction of a second. If you limit your maximum exposure time to less than 1/1000 of a second (i.e. a 1000 frames/sec camera), you had better have a very bright scene. Second, sensor read-out noise typically goes up as the read-out rate goes up. Again, you had better have a really bright scene to overwhelm the additional noise with lots of signal.
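The exposure ceiling is just the frame interval, so a one-liner makes the scaling concrete (a sketch that ignores per-frame readout overhead, which shortens the usable window further):

```python
# Upper bound on per-frame exposure: the shutter can stay open at
# most one frame interval, so exposure shrinks as frame rate grows.

def max_exposure_s(fps):
    """Longest possible per-frame exposure time at a given frame rate."""
    return 1.0 / fps

for fps in (30, 240, 1000, 10_000):
    print(f"{fps:>6} fps -> max exposure {max_exposure_s(fps) * 1e3:.3f} ms")
```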
 

MrDudeMan

Lifer
Jan 15, 2001
15,069
94
91
You can, in fact, buy CMOS cameras that run at well over 1000 frames/second. From a random web search, Photron apparently sells 1024x1024, 12-bit cameras that run at 10,000 frames/sec.

I can identify two major drawbacks here. First, think about how long your camera exposure is in a dark room -- sometimes it can be a good fraction of a second. If you limit your maximum exposure time to less than 1/1000 of a second (i.e. a 1000 frames/sec camera), you had better have a very bright scene. Second, sensor read-out noise typically goes up as the read-out rate goes up. Again, you had better have a really bright scene to overwhelm the additional noise with lots of signal.

The resolution isn't as high as I'd want, but for 10kfps it would be plenty. I think I've actually seen that camera in the past, but I can't remember.

Here are the bandwidths that camera requires for each resolution in the feature list:
Code:
X       Y       FPS         Bit   Bandwidth
1024    1024    1.25E+04    12    1.57E+11
1024    1000    1.35E+04    12    1.66E+11
640     488     4.00E+04    12    1.50E+11
384     264     1.00E+05    12    1.22E+11
128     8       1.00E+06    12    1.23E+10

166Gbps max. Pretty crazy.
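The bandwidth column above is just X * Y * FPS * bits-per-pixel; recomputing it (the helper name is mine):

```python
# Recomputing the bandwidth column of the table above: raw output
# rate is simply X * Y * FPS * bits per pixel.

def raw_rate_bps(x, y, fps, bits):
    """Uncompressed sensor output rate in bits per second."""
    return x * y * fps * bits

modes = [
    (1024, 1024,    12_500, 12),
    (1024, 1000,    13_500, 12),
    ( 640,  488,    40_000, 12),
    ( 384,  264,   100_000, 12),
    ( 128,    8, 1_000_000, 12),
]

for x, y, fps, bits in modes:
    bw = raw_rate_bps(x, y, fps, bits)
    print(f"{x:>4} x {y:<4} @ {fps:>9,} fps -> {bw / 1e9:6.1f} Gbps")
```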
 