John Sadowsky

Charlestown, MD

John S. Sadowsky is a retired wireless system and signal processing engineer. After 15 years as a professor of Electrical Engineering (Purdue and Arizona State), in 1998 he moved to industry to participate in the rapid development of digital wireles...

Articles

Noise, ISO and Dynamic Range Explained

This article examines the noise characteristics of a modern digital camera.  The primary spec for a camera’s noise performance is dynamic range (DR).  Most expositions on DR tend to get highly technical very quickly.  OK, this article gets technical here...

[Article preview image – Figure 3: Gamma level histograms with logarithmic vertical axis.]

A Better Histogram

The photo-histogram is probably the most ubiquitous exposure tool in digital photography; that is, short of light metering itself.  It has been with us for more than 25 years, and it hasn’t changed much.  The histograms we are familiar with are calculated from transformed...


Forum Topics Started

Forum Replies Created

  • John Sadowsky
    Participant
    Posts: 49
    School Safety Rules
    on: February 22, 2020 at 12:10 am

    That’s cool – uh, I meant cold.

    JSS

    John Sadowsky
    Participant
    Posts: 49
    Re: Does pixel-shift increase resolution?
    Reply #1 on: February 21, 2020 at 5:51 pm

    I would expect the practical issues would overwhelm the 16-way pixel shift, starting with the fact that you have to have a rock-solid tripod and the high-resolution glass.  And what about shutter shake?  I’ve tried some 4-way pixel shift on my a7RM3 – I really couldn’t see much difference even under close pixel peeping.  Compare pixel-shift to HDR merging, or even just image stack averaging of the same exposure for noise reduction.  Layer-alignment in Photoshop (and other software) is quite accurate.  But you can’t use that with pixel-shift.  Pick your poison – sharpness or noise?  Pixel-shift is limited to indoor studio photography, for studios located far from highways, rail tracks, or anything else that might produce a micro-shake.
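
    To make the averaging point concrete, here is a minimal numpy sketch (the frame count and noise level are made-up numbers, and it assumes the frames are already perfectly aligned): averaging N frames of the same exposure cuts the random noise by roughly a factor of sqrt(N).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical flat gray patch captured N times with additive noise.
    true_signal = 0.5                  # normalized sensor level (made up)
    noise_sigma = 0.02                 # per-frame noise std (made up)
    N = 16                             # frames in the stack

    frames = true_signal + noise_sigma * rng.normal(size=(N, 256, 256))

    single = frames[0]                 # one exposure
    stacked = frames.mean(axis=0)      # stack average (assumes perfect alignment)

    print(f"single-frame noise std: {single.std():.4f}")   # ~0.02
    print(f"stacked noise std:      {stacked.std():.4f}")  # ~0.02 / sqrt(16) = 0.005
    ```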

    I would gladly swap pixel-shift for focus stacks (as in the Nikons).  Why doesn’t Sony give us something we can actually use?  (Not to mention that the Sony software used to merge the pixel-shift images is pathetic, IMHO.)

    Having said that, as a signal processing engineer, I was not at all satisfied with that article.

    The underlying assumption is that pixels have a 100% fill factor, meaning 100% of the pixel area is active in detecting light.  That just can’t be true.  The surface area of a pixel includes the light-sensing photodiode and several other components, including at least 4 CMOS transistors.  Modern sensors do have micro-lenses on top of the sensor pixels, and BSI moves the metal layers to the dark side of the sensor.  Those things do improve fill factor, but it is still a pretty big stretch of the imagination to think they achieve close to 100% fill.
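
    Just to pin down what I mean by fill factor, here is a back-of-the-envelope illustration; the pixel pitch and photodiode dimensions below are purely hypothetical, not measurements of any real sensor.

    ```python
    # Fill factor = active photodiode area / total pixel area.
    # All numbers here are made up for illustration only.
    pixel_pitch_um = 4.0                       # hypothetical 4 µm pixel
    photodiode_area_um2 = 3.0 * 3.2            # hypothetical active area (µm²)
    fill_factor = photodiode_area_um2 / pixel_pitch_um**2
    print(f"fill factor ≈ {fill_factor:.0%}")  # 60% for these made-up numbers
    ```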

    If it were true (that these pixels have 100% area efficiency), then it is almost immediately obvious that 16-way pixel-shift would have no resolution advantage over 4-way pixel-shift.  The resolution would be determined by the “sampling aperture” of the square pixel area.  Extra 1/2-pixel offsets of those squares don’t add anything to resolution over 4-way pixel-shift.  4-way still makes sense, as it eliminates the need for demosaicing interpolation.
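
    Here is a minimal numpy sketch of that claim (theory below), using the article’s own 100% fill-factor assumption: the square pixel’s sampling-aperture MTF is a sinc that has already dropped to zero at the Nyquist frequency the extra half-pixel offsets are supposed to unlock.

    ```python
    import numpy as np

    # Assume (as the article does) a 100% fill-factor square pixel of pitch p.
    # Its sampling-aperture MTF is |sinc(p * f)|, where np.sinc(x) = sin(pi*x)/(pi*x).
    p = 1.0                                   # pixel pitch, arbitrary units
    nyquist_4way = 1.0 / (2.0 * p)            # 4-way shift: full RGB sample at every pixel site
    nyquist_16way = 1.0 / p                   # 16-way shift: half-pixel sample spacing

    print(f"aperture MTF at 4-way Nyquist  (f = 1/2p): {abs(np.sinc(p * nyquist_4way)):.3f}")
    print(f"aperture MTF at 16-way Nyquist (f = 1/p) : {abs(np.sinc(p * nyquist_16way)):.3f}")
    # The aperture MTF is ~0.64 at f = 1/2p and exactly 0 at f = 1/p, so the
    # half-pixel offsets buy essentially nothing beyond what 4-way already gives.
    ```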

    I’d be happy to back up what I just said with more details from Fourier transform and Shannon-Nyquist sampling theory.  Here it is in a nutshell.  MTF = the Fourier transform of the PSF = point spread function.  He uses the MTF because he can combine the lens MTF and the sampling aperture MTF by multiplication (as opposed to the convolution of PSFs in the spatial domain).  But who thinks in terms of spatial frequency?  He should have at least pressed the button in his Matlab software (or whatever he’s using) to transform back to the spatial domain and show us the PSF.  Then we could measure the parameter that photographers actually use to quantify resolution = the circle-of-confusion = the diameter of the PSF.  MTF – WTF?
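
    For example, here is roughly what that button-press looks like as a 1-D numpy sketch.  The Gaussian lens MTF is a stand-in (I am not reproducing the article’s lens model), and the “circle of confusion” here is just a crude full-width-at-half-maximum of the resulting PSF.

    ```python
    import numpy as np

    p = 1.0                                 # pixel pitch (arbitrary units)
    n = 4096
    dx = p / 64.0                           # fine spatial sampling for the PSF
    f = np.fft.rfftfreq(n, d=dx)            # spatial frequency axis (cycles per pitch)

    lens_mtf = np.exp(-(f / 0.6) ** 2)      # stand-in lens MTF (made-up rolloff)
    aperture_mtf = np.abs(np.sinc(p * f))   # 100% fill-factor square pixel aperture
    system_mtf = lens_mtf * aperture_mtf    # multiply MTFs in the frequency domain
                                            # (= convolving the PSFs in space)

    psf = np.fft.fftshift(np.fft.irfft(system_mtf, n=n))  # back to the spatial domain
    psf /= psf.max()

    x = (np.arange(n) - n // 2) * dx
    fwhm = np.ptp(x[psf >= 0.5])            # crude circle of confusion: PSF width at half max
    print(f"system PSF width (FWHM) ≈ {fwhm:.2f} pixel pitches")
    ```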


    JSS

    John Sadowsky
    Participant
    Posts: 49
    Re: New Article Announcements & Discussions
    Reply #2 on: February 19, 2020 at 9:10 pm

    This is the coolest article I’ve read on photography blogs in a very long time.  Thank you, Jeff Schewe!  I especially liked the video demo of Photoshop 1.0.  I really think it is important to not just track the most recent technology (although that is fun), but also appreciate the history of how we got to where we are today.  Again, thanks to Jeff for sharing his memories of these great contributions.

    JSS

    John Sadowsky
    Participant
    Posts: 49
    Re: General
    Reply #3 on: February 13, 2020 at 11:40 am

    Does anybody use a pen tablet for post-processing?  This video, Scott Williams on HS610 Tablet, raves about using a tablet with Capture One.  Does anybody have good or bad experiences with these things?  Recommendations?

    JSS

    John Sadowsky
    Participant
    Posts: 49
    Re: Scott Williams Tutorials
    Reply #4 on: February 12, 2020 at 10:08 am

    Thanks, Mike – that was a cool video.  I’ve been using Luma masks for several things, but mostly for isolating regions, like the sky in landscapes.  This video opened up a whole new technique using broad soft transitions in the luma mask.  I’ll have to watch more of Williams’ stuff.
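
    For anyone curious what a “broad soft transition” looks like in code, here is a minimal numpy sketch; the luminance thresholds and the darkening adjustment are hypothetical, not taken from the video.

    ```python
    import numpy as np

    def luma_mask(rgb, lo=0.4, hi=0.8):
        """Soft luminosity mask: 0 below lo, 1 above hi, smooth ramp in between."""
        luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
        t = np.clip((luma - lo) / (hi - lo), 0.0, 1.0)
        return t * t * (3.0 - 2.0 * t)            # smoothstep gives the broad, soft edge

    rng = np.random.default_rng(1)
    image = rng.random((4, 4, 3))                 # stand-in for a float RGB image in [0, 1]
    mask = luma_mask(image)

    darkened = image * 0.7                        # example adjustment: pull the highlights down
    result = image * (1.0 - mask[..., None]) + darkened * mask[..., None]
    ```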

    JSS