How do digital cameras work? | James May Q&A | Head Squeeze



You’re at the best concert ever and all you can think of is taking as many pics of your favourite superstar as you can on your trusty smartphone. You’ve spent more time taking photos than actually enjoying the concert, but who cares? You’ve got the proof to show all your workmates you were there!

But how exactly does a digital camera work?

It’s all very simple really. The camera’s sensor is covered with tiny light-sensitive cells, each of which can measure the amount of light that falls on it. The cells act like the old photosensitive film, reacting to the light which falls on them and then reporting to the camera’s microprocessor brain. The camera doesn’t just look at an individual pixel on the sensor; it also looks at the pixels around it to come up with an informed guess of what the true colour of that pixel is.
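That “informed guess” is called demosaicing. Below is a minimal sketch of the idea in Python – a toy 4×4 sensor and plain nearest-neighbour averaging, not the far more sophisticated algorithm any real camera firmware uses:

```python
import numpy as np

# Toy 4x4 Bayer mosaic: each cell holds ONE measured value (0-255),
# laid out RGGB. 'channel' records which colour each cell measured.
mosaic = np.array([[120,  90, 130,  88],
                   [ 95,  60, 100,  58],
                   [118,  92, 128,  86],
                   [ 93,  62,  99,  61]], dtype=float)
channel = np.array([['R', 'G', 'R', 'G'],
                    ['G', 'B', 'G', 'B'],
                    ['R', 'G', 'R', 'G'],
                    ['G', 'B', 'G', 'B']])

def estimate(y, x, want):
    """Guess the 'want' channel at (y, x) by averaging the
    nearest cells that actually measured that channel."""
    if channel[y, x] == want:          # we measured it directly
        return mosaic[y, x]
    neighbours = [mosaic[j, i]
                  for j in range(max(0, y - 1), min(4, y + 2))
                  for i in range(max(0, x - 1), min(4, x + 2))
                  if channel[j, i] == want]
    return sum(neighbours) / len(neighbours)

# Full RGB value for the blue-filtered cell at (1, 1):
print([round(estimate(1, 1, c)) for c in 'RGB'])   # -> [124, 94, 60]
```

Real demosaicing algorithms weight neighbours by edge direction and channel correlations; the straight averaging here is only the simplest possible version.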

While taking that awesome shot of your dinner for Instagram and making sure you have just enough light, have you ever wondered how light bulbs work? http://www.youtube.com/watch?v=-MYB8butQwQ&list=PLMrtJn-MOYmfqNgyPxx6NYMZnd25y4shc&index=6

Getting your eyebrows waxed ahead of your photo shoot? Greg Foot explains why we have eyebrows: http://www.youtube.com/watch?v=6BUvEH42EpA&list=PLMrtJn-MOYmcMLEnLKQjjxJqled29hSLB&index=3

What question do you want Head Squeeze to answer next? Join our G+ Community and tell us! https://plus.google.com/communities/115682880183087388642

http://www.youtube.com/user/HeadsqueezeTV
http://www.youtube.com/subscription_center?add_user=HeadsqueezeTV

 


213 thoughts on “How do digital cameras work? | James May Q&A | Head Squeeze”

  1. A sensor does not have pixels. No big lies, but not good enough. Pixels – as in displays – have subpixels in red, green and blue, or a native RGB value. Digital camera sensors have photosites, and these are either red, or green, or blue. Meaning the photodiode cells in the sensor – aka photosites – are monochrome in either red, green or blue, and not pixels. They are generally arranged in squares of one red, one blue and two green specialized photosites, in the pattern devised by Mr. Bayer, across the mega quantity of photosites.
    When we take a photo, the main electronics “reads” (measures) the photosites – sequentially. These “read” values are stored in memory, but each has only a red, a green or a blue value. This is the rawest version of our digital photos, which we shall never see. The computer in the camera, in firmware (software) – alluded to by James – now processes the photosite readouts and extrapolates – by interpolation with the surroundings of a photosite – the hypothetical colour values to complement the one that was measured. Thus the value of an R photosite is complemented by a G and a B value. Because of the regular arrangement of colour specialization in a Bayer sensor, and because of naive firmware and hardware, we have seen a lot of photos with moiré patterns. These are pure digital artifacts from the algorithms applied. Demosaicing algorithms were extended to hunt for moiré patterns and take them away again. But then, what do you do with certain photos that have moiré patterns because the subject has them?
    When all this computational effort is done, the result is written to the camera’s memory card – as the raw file. I’d say, computationally, that file is not raw at all, but extremely well done. Even if a camera is not equipped to save raw files, the raw data is still created in the camera, and in the case of the JPEG format the file is processed another time to compress the dynamic range from the camera’s bit depth down to JPEG’s 8 bits per colour channel – meaning we lose gradation, or nuance levels. We also lose detail because of data compression in the JPEG format. The story would have been more difficult still had it covered where the A/D conversion is positioned, the bit depth, the sequential reading of cells and the time parallax this causes, and so on and so forth. Just not good enough.
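On the bit-depth point above: a minimal sketch of the gradation loss when a camera’s raw values are squeezed into JPEG’s 8 bits per channel, assuming a 12-bit sensor and a straight rescale (real converters apply a tone curve first):

```python
# A 12-bit sensor distinguishes 4096 levels per channel; 8-bit JPEG
# keeps only 256. Rescaling collapses every 16 raw levels into one.
RAW_BITS, JPEG_BITS = 12, 8

def to_jpeg_level(raw_value):
    return raw_value >> (RAW_BITS - JPEG_BITS)   # drop the low 4 bits

# Sixteen distinct raw gradations become a single JPEG level:
print(sorted({to_jpeg_level(v) for v in range(2048, 2064)}))  # -> [128]
```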

  2. Daily reminder that I fancy myself smart but am in fact not even half as well educated as people in STEM today. No worries, I’m content staring blankly in awe at the achievement of others.

    1. Jeff Russ it is fine to pronounce it that way, as you are not an English teacher. But most engineers are not English teachers either.
      The fact that “they invented the things” makes no difference. Germans invented the car, but they don’t call it a “car”.

    2. Nina Z All electrical engineers say “see moss”, and they invented the things. If you don’t believe me, just search for a video on “CMOS” – literally every professor, scientist and engineer will say it that way.

    3. Jeff Russ formally, every letter in an initialism like this should be pronounced.
      You never call the US – the country – “us”.

  3. I’ve been watching Top Gear for years and it is only now that I discover that May has had a show on YouTube this whole time? What sorcery is this?!?!?

  4. Hi @Brit Lab I have one. Why and how are lips the way they are? Like, why are they split, half inside the mouth and the other half outside? How does the skin on them dry out – and why does it happen; don’t they lubricate themselves like the rest of the body? Why are some people’s lips bigger than others’? And why is the lower lip always bigger?

  5. Thanks for a great video! Well explained and entertaining 🙂 Showing this to my photography students.

  6. It felt like I was watching some alien transcript about some super super advanced technology beyond human comprehension.

  7. James May can you please explain to us why many people take very annoying vertical video instead of nice horizontal video?

  8. Well James, this is from the Dominican Republic. My family and I are fans of you and your Top Gear partners, and we are missing that glorious trio on Top Gear. The new show is as bad as Top Gear USA. We are really sorry, and we miss the real programme and yourselves.

  9. Around minute 4, he says the human eye is most sensitive to green and that it’s perceived as brightness; that’s wrong. The human eye is most sensitive to BLUE, and yes, it is perceived as brightness. I know it because I do TV calibrations. This is why in stores TVs are pumped up on blue. And if you use your phone’s or laptop’s calibration settings and take away all blue, you’ll see a brownish-looking picture.

    1. It was Kodak, but the first CMOS cameras with digital signal processing came out of the Soviet spy satellite programme.
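For reference on the green-versus-blue brightness question above: the standard luma formulas used in video weight green most heavily when converting RGB to perceived brightness. A quick sketch with the Rec. 601 coefficients:

```python
# Rec. 601 luma: green contributes the most to perceived brightness,
# blue the least.
def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma_601(0, 255, 0))   # pure green -> ~149.7
print(luma_601(0, 0, 255))   # pure blue  -> ~29.1
```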

  10. I like James May head squeeze. 🙂 Lots of interesting stuff. Here is to more head squeezing with James May in the future!

  11. There aren’t enough green pixels to make the image bright enough for Hammond’s suspiciously white teeth either.

  12. So why do we call the camera “digital”? It uses digits, but how? What’s behind a pixel? Does it work through the binary system? Thanks a lot.

    1. +Sevak Kirakosyan Because the processed electric signal is in 0s and 1s – digits. Also known as binary code.

    2. @Sevak Kirakosyan “Digital” is a polysemantic word. You call it a digital camera because digital electronics are “electronic circuits representing signals by discrete bands of analog levels, rather than by a continuous range”. You can think of a digital camera or clock or whatever digital electronic device as a device that contains a microchip.
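A minimal sketch of those “discrete bands of analog levels”, assuming a hypothetical 3-bit analog-to-digital converter (real camera ADCs are typically 12 or 14 bits):

```python
# A 3-bit ADC maps a continuous 0-1 V signal onto 8 discrete codes,
# which is what makes the camera 'digital'.
LEVELS = 2 ** 3

def adc(voltage):
    code = int(voltage * LEVELS)          # quantize
    return min(code, LEVELS - 1)          # clamp full-scale input

for v in (0.0, 0.13, 0.5, 0.99):
    print(f"{v:.2f} V -> code {adc(v)} = {adc(v):03b}")
```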

  13. A 16 MP camera will not allow you to produce large images with no loss of perceived quality unless the pixels come from a large enough sensor, in the same way you could not do so with 110 film no matter how fine-grained a film you used; you can’t cheat the system.

    1. Because once upon a time, things that startled us were likely to kill us and eat us. Thus, moving gives you a couple of moments’ head start on getting out of the way.

  14. That is so funny: this is a British-produced show, but it is very transatlantic in its cultural references. I wonder if people high up are trying to merge the cultures of Britain and America to make Oceania?

  15. I’ve got a question for you James… Can you actually swim in space to move towards a desired location??

    1. @MoonEyes2k I thought so!! So the stereotypical idea of people moving through space by swimming, like in cartoons, is not real at all!! But then, how does an astronaut move about in space??

    2. Well, I’m not James, but the answer is no. To ‘swim’ in space, you’d need some form of gas or similar to create resistance to your motions, thus allowing you to exploit Newton’s third law of motion. With nothing to push against, you’ll just flail around.

  16. The constant flickering of the elapsed-time display’s position was a bit distracting; the text probably shouldn’t have been auto-aligned in every frame.

  17. Sorry James (or, more properly, Head Squeeze researchers), Big Mouth Billy Bass came out in 1998, while I definitely had a digital camera in 1996 (the Kodak DC25; I’ve still got it packed away somewhere) and my middle-school buddy had the Apple QuickTake 100 at least a year earlier (it was released in 1994). And these were just the early COMMERCIALLY available digital cameras; extremely expensive professional digital cameras existed before that.

    Also worth noting that the primary reason CMOS (pronounced SEE-moss) sensors are so ubiquitous is that they’re far less expensive and less power-hungry than the CCD varieties. The downside is that a CMOS sensor actually has to observe the image one line at a time (kinda like a scanner) which, on particularly cheap cameras like those found on phones, can result in distortion effects when the camera and subject are moving in relation to each other (see http://en.wikipedia.org/wiki/Rolling_shutter ). A CCD, on the other hand, takes a snapshot of the scene so you don’t get this effect (though you do get other effects: when a point light source is bright enough you can get a vertical smear as the electrons spill over into neighbouring cells).

    Of course the ubiquity, low cost, and low power use of CMOS mean that there has been far more R&D on it than on CCD, which has resulted in some very high quality CMOS sensors that don’t have nearly as much distortion as their cheaper siblings – which is why you can even find them in high-end professional DSLRs.
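A toy illustration of the rolling-shutter effect described in the comment above: each row is read at a slightly later time, so a vertical edge moving sideways comes out slanted. The row timing and speed here are made-up numbers, not any real sensor’s behaviour:

```python
# Simulate a vertical edge moving right at SPEED pixels per row-readout.
# A global shutter (CCD-style) samples every row at t=0; a rolling
# shutter (cheap CMOS) samples row y at t=y, so the edge skews.
ROWS, COLS, SPEED = 6, 12, 1

def render(rolling):
    for y in range(ROWS):
        t = y if rolling else 0          # readout time of this row
        edge = 3 + SPEED * t             # where the edge is at time t
        print("".join("#" if x < edge else "." for x in range(COLS)))

print("global shutter:");  render(rolling=False)
print("rolling shutter:"); render(rolling=True)
```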

  18. Who wants to see James May on camera?

    Driving a Koenigsegg Agera R.

    At 270+ MPH.

    While Clarkson and Hammond shit themselves silly.

    And The Stig, just stands there in the background. Being “The Stig”. And nothing else…

  19. Finally a video with James May again! I signed up for James May, you know? And I will not watch any videos without James May! Exclamation Mark!

  20. Err, plus “10 years ago you’d struggle to get a megapixel”???

    The Canon EOS 300D was revolutionary, came out in 2003 with a 6.3 MP sensor, and was under a grand.

    The people who write his script really need to search the interweb before Mr May gets to read it out; it makes him look like he doesn’t really know his subject.

  21. The digital camera came after the Big Mouth Billy Bass singing fish???
    That was released in 1998. So that clears that up then.

  22. Sorry, I can’t watch this video without getting furious that they didn’t use a monospaced font for the time shown on the camera!! :S

    1. I was annoyed because the white slider bar for the time was above the stencil, not under it, giving it a really annoying effect.

  23. It’s not true that ten years ago you’d struggle to get a digital camera with more than 1 megapixel. I bought my first digital camera in 2004, an HP Photosmart R707, which I still have, and it had 5.1 megapixels.

  24. Megapixels are great. But they’re also misleading. Did you know 35 mm film has detail equivalent to about 35 megapixels? For that matter, ‘full frame’ film cameras produce images exceeding an effective resolution of 100 megapixels. But this doesn’t yet take into account that the ‘megapixels’ of a digital camera aren’t proper pixels… a colour pixel needs to have an RGB value, and the ‘megapixel’ rating counts each red, green and blue pixel element as an individual thing. (Which would be correct for a black and white sensor, but is wrong for a colour one.)
    This means the actual ‘resolution’ of a digital camera, when compared to, say, a computer screen, is only 1/3 of its ‘megapixel’ rating (or 1/2 at most if you consider the image processing). I’m staring at a computer screen with the (for current computers) rather low resolution of about 1 million pixels. But, to get a good quality picture that takes up the whole screen without any missing detail, it would require a 3 megapixel camera…

    1. @malvinmalvin Technically yes. And for black and white images it makes no difference. Interpolating data can, by some estimates and under ideal circumstances, give you about half the resolution you’d expect (largely because there are usually two green pixels in a group of 4), but what I was getting at overall is that in spite of the output resolution, you aren’t actually getting the resolution you think you are in the image itself. (By contrast, a 3-CCD video camera gives you exactly the resolution it claims, without interpolation, because it has independent sensors for each of the RGB channels.)

    2. @KuraIthys Indeed there are some drawbacks, and using a Bayer array the way you described might yield better results, but with far fewer pixels. I only meant to point out that a 3 MP camera actually outputs 3 MP, not 1 MP.

    3. @malvinmalvin You can’t just reuse a photosite without consequence though; because photosites generally do not overlap, if there is a green photosite, the red and blue information is missing for that location. Therefore, it is interpolated.
      – Interpolated data is NOT the same as proper RGB pixel data. You’re generating 12 million pixels from 12 million photosites, where you should be generating them from 36 million. Thus 2/3 of all your image information is made up, and is not actually something the camera ever captured…

    4. @KuraIthys Also, a 12.2 MP DSLR actually outputs 12 million pixels. A Bayer array (or Bayer filter) doesn’t use every group of four “pixels” (photosites) only once; it generates an RGB value for every point between the photosites. So every photosite contributes to four adjacent pixels. Source: http://www.cambridgeincolour.com/tutorials/camera-sensors.htm

    5. No, you’re making an incorrect assumption there. A standard HD screen is not what I mentioned in the slightest. I was referring to the display on the laptop I typed it with – which is 1280 by 800, or pretty much EXACTLY 1 million pixels.
      Your comment seems to have missed the point though.
      While your DSLR may well output at 4272×2848, this is an interpolated resolution.
      The 12 megapixels of your camera consist of just 4 million ‘real’ pixels. Any additional resolution is entirely down to interpolation.
      The 1280×800 display on my laptop contains 1 million RGB pixels. Which, in fact each contain 3 sub-pixels.
      By comparison, your ‘12’ megapixel DSLR does NOT contain 12×3 million sub-pixels. It has 12 million sub-pixels TOTAL.
      To create a single colour pixel value thus requires combining 3 sub-pixels.
      Thus the actual resolution (as in the definition of ‘resolution’ used in telescopes and the like: the minimum size at which two points can be seen as separate from one another) of a 12 megapixel camera is only sufficient for a display with 4 million pixels.

      Unfortunately, this means if your camera is outputting files with 12 megapixels, it is ‘making up’ the content of at least half of them.
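Putting plain numbers on the thread above, counting measured versus interpolated channel values for a hypothetical 12 MP Bayer sensor (just counting; no position taken on which definition of ‘resolution’ is right):

```python
PHOTOSITES = 12_000_000          # one measured value each (Bayer RGGB)
OUTPUT_PIXELS = 12_000_000       # demosaiced RGB output

measured = PHOTOSITES                        # 12M single-channel samples
needed = OUTPUT_PIXELS * 3                   # 36M channel values in output
interpolated = needed - measured             # 24M are estimated
print(f"measured {measured:,}, interpolated {interpolated:,} "
      f"({interpolated / needed:.0%} of the output)")   # -> 67%
```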

  25. It is quite sad that, for the sheer number of pictures humans take, only a tiny fraction of us appreciate and understand this fascinating process.

    1. Thanks!! I watched a lot of his videos and I really liked them all!! I thought he just made TOP GEAR. Thanks again!!

    2. There’s a whole playlist of videos on here with James! http://www.youtube.com/playlist?list=PLMrtJn-MOYmfqNgyPxx6NYMZnd25y4shc

    1. With the greatest of respect for your phone… I don’t make phone calls with my camera, so why would I take pictures with my phone?

    2. @asnowboards1 Yes, quite true. The camera is actually technically better than its younger sibling’s camera because of the physically bigger sensor, although the Lumia has optical image stabilization, which makes it more usable in low light without a tripod. For videos I still prefer the 808 because you can turn the digital stabilization off when you don’t want it 🙂 You can take pictures at up to 38 MP with it, but obviously you will lose the lossless zoom 🙂

    3. It’s the Nokia 808 PureView; it was released during summer 2012, I believe, and – strangely – the camera actually was decently good. I read a test in Digital Photo Magazin (Germany), and it scored quite well. The aperture is f/2.2 and it uses a 1/1.2-inch sensor. Basically the picture quality is about equivalent to mid-price compact cameras. The pictures are downsized to 8 megapixels, by the way… otherwise every file would be roughly 15 MB.

  26. Most CCDs in consumer technologies also work the way James described the CMOS, using the Bayer pattern to record colour information. The real differences between the two come at the fundamental electronics level and in the mechanics of how they actually handle the light-to-charge-to-voltage-to-digital-value conversions, on the sensor itself or within a second complementary circuit.

  27. Thanks everyone who replied with useful info on this tech. Yes, I know that it does 38 MP + 5 MP or 34 MP + 5 MP depending on aspect ratio; I was amazed by the low-light performance on my Lumia 920 & 1020. I was genuinely asking why top-end point-and-shoots and DSLRs don’t have a high pixel count like the 1020, and I am aware MP is not the main factor in picture quality. Once again, thank you all who replied with useful info.

    1. The only pro DSLR body to have a high MP count is the Nikon D800, which has 36 megapixels. The flagship DSLRs do not, mainly because of how pixel density affects low-light/high-ISO performance. When this isn’t a factor, such as for medium format cameras that are used only in a studio with controlled lighting, the megapixel count is increased. Check out the Hasselblad H4x: it’s an 80 megapixel camera.

    1. @TechLaboratories @Máté Magyar Thanks a lot for your answers, thanks tech labs for explaining that in such detail. 

    2. @kara88bg While what @Máté Magyar says is technically true, from a practical construction standpoint the real differences between CCD and CMOS sensors in cameras are a little bit more complicated than that.

      Largely it has to do with the charge-to-voltage conversion methodology and the location of processing circuitry.  In a CMOS sensor, each pixel converts the charge gained from exposure to light to a voltage, which is then addressed individually and passed to an analog to digital converter (ADC) to get a numerical value for the pixel.

      With a CCD, however, there is only one common charge-to-voltage converter.  Pixels are not individually addressed, but are addressed by column.  The bottom-most pixel’s charge is passed to the converter, and each pixel above it in the same column passes its charge down to the pixel below it, until all of the pixels in that column are read.  Then the process starts over with the next column of pixels.

      In terms of results of the construction, this means two things.  First, CMOS sensors can easily ‘scale down’ in resolution, i.e. digitally zoom in, by windowing only a small region.  Focusing on only one region of the sensor allows them to have a higher frames-per-second refresh rate which is why many professional cameras offer higher frame rates at a slightly lower resolution (slow motion).  On the other hand, CMOS sensors have no global electronic shutter: if there is no physical shutter between them and light, they are always gaining charge, unlike CCDs that require a global electronic shutter to prevent smear as they pass the charge down the column to the common charge-to-voltage converter.  CCDs cannot be windowed, and the entire sensor must be read at once for every frame.

      While both types of sensors were invented around the same time (and by the same people), CCDs quickly became the de facto standard for imaging because without an ADC they can be used as an analog video sensor, at much lower power consumption than the video tubes used before them.  The CCD made electronic news gathering (field news recording on video instead of film) possible.  For decades CMOS sensors suffered from variability in the black and white points of the charge-to-voltage converters – slight differences between them led to wildly different voltage values.  Modern CMOS sensor systems compensate for the black point variability by storing a black point calibration, factory set for each CMOS sensor and only changeable in the highest-end digital cinema cameras.
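A toy sketch of the column-by-column “bucket brigade” readout described above, with made-up charge values rather than any real sensor’s behaviour:

```python
# CCD-style readout: charge packets are shifted down each column,
# one cell at a time, into a single charge-to-voltage converter.
column = [10, 42, 7, 99]          # charge collected by one column of pixels
                                  # (index 0 = top, index -1 = bottom)
readout = []
while column:
    readout.append(column.pop())  # bottom packet reaches the converter
    # every remaining packet has just shifted one cell down the column

print(readout)                    # digitized bottom-to-top: [99, 7, 42, 10]
```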

    3. CCD refers to NMOS sensors; that means that instead of transistor pairs (the “complementary” in CMOS) you only have n-channel transistors. CMOS is the newer technology, with lower power consumption among other benefits. These terms are not camera/sensor specific; they are basic building blocks in electronics.

  28. Nice video art!! I loved the fact that the timer in the camera is in perfect sync with the actual video length 😀

  29. Was the chip first pioneered for space exploration? All those old images from the ships sent to Mars, Jupiter, Saturn etc… it facilitated sending them back digitally. And the first ones would definitely have been low-res and clunky. And they were.

  30. It’s been far too long since your last one of these, @Head Squeeze. Glad to see the channel is still alive and well.

    1. The makers of professional cameras only raise the pixel density when they are sure that they can keep the noise at a reasonable level.

      The smaller the pixel sensors become, the more susceptible they are to electrical interference from neighbouring pixel sensors, AKA noise. The sensors you find in professional cameras, like Hasselblad’s, are huge, with huge price tags on them as well.

    2. Higher MP isn’t equal to better picture quality.
      Besides: http://www.gizmag.com/canon-120-megapixel-cmos-sensor/16128/

  31. IMO the problem with Hammond’s hair is caused by the anti-aliasing (AA) filter, which is nothing but a slightly matted piece of glass in front of the sensor that makes everything a tad blurred. Combine it with the mess on Hammond’s head and you get a visual disaster. Try removing this filter: you will get nasty moiré on regular objects, but Hammond’s head (and maybe even his jokes) will be rendered a little bit sharper. Sorry for my poor English.

    1. I think the trend among manufacturers these days is removing the AA filter to give more detail to the image, and even though it doesn’t have the filter, the moiré is almost nonexistent *at least to me*.

      The technology has improved a lot compared to the early digital cameras.
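The moiré under discussion is ordinary aliasing: detail finer than the pixel grid is sampled below its Nyquist rate and reappears as a coarse false pattern, which is what the optical low-pass (AA) filter blurs away. A toy one-dimensional sketch:

```python
import math

# A fine stripe pattern (period 1.25 px) sampled on a 1 px grid
# aliases into a slow false pattern (period 5 px) - moire.
fine_period = 1.25
samples = [math.cos(2 * math.pi * x / fine_period) for x in range(20)]
print("".join("#" if s > 0 else "." for s in samples))  # repeats every 5 px
```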

    1. Nemanja Milosevic I thought it was gonna cut him off before he finished, like in The Grand Tour and Top Gear. Sorry for the late reply; I only just discovered this series.

    1. @TheDoubleBee Less data for the processor to deal with. I would hate to see how big the heat sink and battery would have to be to manage 36 MP raw files at 10 fps (see the sketch after this thread).

    2. Although top-range pro bodies, namely the Canon 1D X and Nikon D4, still rock “only” 18 and 16 MP respectively. But that’s because of the better low-light performance and incredible burst rate they can achieve.

    3. @Luke Clark Did I say anything about 24 making people better photographers? I was referring to what he was saying about 12 to 16 megapixels being commonplace in high-end cameras, when most professional cameras, and even cheaper ones, are using 2× and sometimes 3× more pixels than that. Not including Hasselblad or other medium format cameras.
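A back-of-envelope check of the data-rate point made earlier in this thread, assuming 14 bits per photosite (a typical raw bit depth) and no compression:

```python
# Raw throughput for 36 MP at 10 fps, 14 bits per photosite.
PIXELS, BITS_PER_SITE, FPS = 36e6, 14, 10

bytes_per_frame = PIXELS * BITS_PER_SITE / 8
print(f"{bytes_per_frame / 1e6:.0f} MB per frame")           # ~63 MB
print(f"{bytes_per_frame * FPS / 1e6:.0f} MB/s sustained")   # ~630 MB/s
```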
