(see the updates below)

I was watching the Rittenhouse trial. For those interested in commentary on it, I recommend you read Branca's summaries at Legal Insurrection, watch the recaps on Viva Frei's YouTube channel, or, if you want all of the insight and commentary, the epic streams on Rekieta's channel.

Instead, I want to point out something I think was overlooked, or at least not well presented, by the defense in objecting to the evidence being offered by the prosecution for rebuttal.

Now - Corey Chirafisi did a good job, and I expected no less, as he appears to be the most competent lawyer in the room when it comes to explaining things. He laid out exactly why "blowing up" an image digitally alters and distorts it, but he didn't really nail down why that matters.

Here's what he was trying to explain. Things are somewhat simplified, but not to the point of irrelevance.

Look at the image below. Each square is a pixel. It is, as the expert Mr Black noted, the smallest unit you can have in an image.

[Image: a four-pixel (2x2) image]

What happens when you blow it up?

Now, if you look at it through a magnifying glass, you are literally bending the light to make the image appear larger, but you are NOT modifying the image that is being looked at.

The problem comes up when you want to enlarge the image, or zoom in, on an LCD or similar display. Let's increase it by 50% so that the image is now 3x3.

As the image is "blown up" to 50% more than its original size, there are now "blank" pixels that the application has to interpolate, or, more simply, guess at. Several methods are used - the exact methodology is beyond my knowledge, and irrelevant to the point I'm making anyway, other than that in some situations, some algorithms are better than others.
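To make that concrete, here's a minimal sketch of the simplest method, nearest-neighbor scaling, where each "guess" is just a copy of the closest original pixel. The 2x2 colors are hypothetical stand-ins, not the exact colors of the image above.

```python
# A minimal sketch of nearest-neighbor upscaling on plain lists of RGB tuples.

def upscale_nearest(pixels, w, h, new_w, new_h):
    """Enlarge a row-major list of pixels by copying the closest original pixel."""
    out = []
    for y in range(new_h):
        src_y = y * h // new_h          # map the new row back to an original row
        for x in range(new_w):
            src_x = x * w // new_w      # map the new column back to an original column
            out.append(pixels[src_y * w + src_x])
    return out

# A 2x2 image: red, blue on the top row; green, white on the bottom (stand-in colors)
small = [(255, 0, 0), (0, 0, 255), (0, 255, 0), (255, 255, 255)]

# "Blow it up" by 50% to 3x3: five of the nine output pixels are now guesses
big = upscale_nearest(small, 2, 2, 3, 3)
for row in range(3):
    print(big[row * 3:(row + 1) * 3])
```

Run it and you'll see the top-left red pixel duplicated into a 2x2 block, while blue gets stretched into a column and green into a row; the enlarged picture contains pixels the original simply never had.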

In either case, whether an in-between color is chosen (purple between red and blue?) or one of the colors is simply copied over, as above, the image is now distorted. This is, incidentally, why non-4k/retina displays look like crap when not run at their native resolution.
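The "in-between color" case is just as easy to demonstrate. A rough stand-in for what bilinear-style filtering does is a plain per-channel average:

```python
# Averaging red and blue per channel yields a purple that was never in the
# original image -- a rough stand-in for bilinear-style blending.
red, blue = (255, 0, 0), (0, 0, 255)
blend = tuple((r + b) // 2 for r, b in zip(red, blue))
print(blend)  # (127, 0, 127), a dark purple
```

Either way, the enlarged image now contains colors nobody ever captured.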

Now, a lot of work has been put into these algorithms. For larger swathes of color, encompassing multiple pixels, or with relatively gradual changes of color, little is visibly ruined. Most of the time, they work.

The real problem is the edge cases.

A lot of people don't realize how limited 1080p imagery from a standard GoPro or drone is. Sure, it looks sharp up close, or with a fairly narrow field of view. The issue is that any camera looking over a larger area, say, a car lot, can easily end up in a situation where the head of someone only a hundred or more feet away is just a handful of pixels tall. Smaller or thinner objects may measure a pixel or two, if that. Or disappear utterly.
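Some rough back-of-the-envelope math bears that out. The field of view and distance below are assumptions for illustration, not the specs of the actual drone:

```python
import math

# Roughly how many pixels tall is a head in a wide-angle 1080p frame?
frame_width_px = 1920       # horizontal resolution of a 1080p frame
horizontal_fov_deg = 120    # typical wide action-cam setting (assumed)
distance_ft = 150           # distance to the subject (assumed)
head_height_ft = 0.75       # roughly nine inches

# Width of the scene the frame spans at that distance, using a simple pinhole model
scene_width_ft = 2 * distance_ft * math.tan(math.radians(horizontal_fov_deg / 2))
px_per_ft = frame_width_px / scene_width_ft

print(f"~{px_per_ft:.1f} px per foot, so a head is only ~{head_height_ft * px_per_ft:.0f} px tall")
```

With those assumed numbers you get roughly three or four pixels per foot, so a head is a few pixels tall and a rifle barrel, seen at an angle, may be a single pixel wide, if it shows up at all.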

On top of that, a lot of modern digital cameras do a lot of processing, especially in poor lighting, to "guess" at what's there.

In short, when you've got a long-range shot at night, even at "1080p" (and the drone shot in question wasn't), critical details that would tell us exactly where a gun barrel was pointed, or exactly how a head was turned, are already a "best guess" by the camera, sensor, and electronics. Fine details are also small enough that any interpolation at all is significant compared to the little real data that exists in the picture to begin with.

As far away as that drone was, in those lighting conditions, the details of exactly where Kyle and his gun were facing, or even if he had it raised, are likely subject to distortion equal to or exceeding the original data when zoomed in.

Update - the images in question via Legal Insurrection. Go read the article.

Assuming that's a fair representation of what was shown in court, neither the original nor the "enhanced" version is worth a damn.