(PetaPixel)   If you're reading this on a screen instead of a printout, you can thank this inventor   (petapixel.com)
    More: Hero, Invention, Computer science, Computer scientist Russell A. Kirsch, Computer graphics, Digital imaging, Digital photography, pixel image of his son Walden, IMAGE  

1114 clicks; posted to Geek » on 14 Aug 2020 at 12:35 PM (5 weeks ago)



19 Comments
 
 
2020-08-14 12:06:13 PM  
I'm pretty sure that pixels were first commercialized in 1865 or so.
 
2020-08-14 12:54:41 PM  
He's seen quite a few pixels in his lifetime.
 
2020-08-14 12:55:58 PM  
No I can't.  He's dead.
 
2020-08-14 1:02:24 PM  
I read it on a screen. I could tell from the pixels.
 
2020-08-14 1:27:30 PM  
Sure he did.
[sothebys-com.brightspotcdn.com image 850x478]
 
2020-08-14 1:32:10 PM  
[Fark user image]
 
2020-08-14 1:44:59 PM  
Are we still using pixels (one dot, one color and brightness)?  I hope not, but I've never seen any documentation from Nvidia or AMD suggesting that GPUs calculate images as anything but pixels.  It isn't like any hardware in use actually wants pixels these days.  I know Windows has been doing tricks to go beyond pixels for fonts (for something like 10+ years), but I don't think it is typically done for much else.

What uses pixels?  CRTs did, and DMDs (digital micromirror devices) still do, especially the expensive projectors with three DMDs, one per color.  That's pretty much it.

Everything else, from the color cameras that take the pictures, to the LCD/QDOT/OLED displays that show them, to the JPEG, MPEG, and related (lossy) formats that store them, is either locked into chroma subsampling or leaves chroma subsampling as a default that is rarely changed, for good reason (a sketch of what that subsampling does follows this comment).

Granted, displays intentionally built around chroma subsampling (the PenTile matrix) have given the whole idea a bad name by claiming far higher resolution than they actually provide, but does that really justify deliberately introducing error by giving two thirds of the color "dots" a chroma sampled a third of a pixel away?  And why throw nearly half of your resolution away anyway (don't try to make it exactly half; that's one of the many things wrong with the PenTile matrix)?

Pixels were great in the 20th century.  But tech has moved on.

/oddly enough, you had to pry my CRT from me
//LCDs were popular a long, long time before they could match a 21" 1600x1200 Trinitron
///finally on my second LCD "monitor".
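
For the curious, here is a minimal sketch of what the 4:2:0 chroma subsampling mentioned above actually does: keep luma at full resolution and average each 2x2 block of chroma.  The function names and the random stand-in image are illustrative only; the matrix coefficients are the standard BT.601 ones.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 transform; rgb is an (H, W, 3) float array in 0..1."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b           # luma: kept at full resolution
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b       # blue-difference chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b       # red-difference chroma
    return y, cb, cr

def subsample_420(chroma):
    """4:2:0: average each 2x2 block, quartering the chroma sample count.
    H and W are assumed even."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.rand(8, 8, 3)                         # stand-in for a real image
y, cb, cr = rgb_to_ycbcr(rgb)
cb_s, cr_s = subsample_420(cb), subsample_420(cr)
print(y.size, cb_s.size, cr_s.size)                   # 64 16 16
```

Four luma samples survive per block but only one Cb and one Cr, which is the rarely changed default the comment refers to.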
 
2020-08-14 2:06:29 PM  

yet_another_wumpus: Are we still using pixels (one dot, one color and brightness)?  I hope not, but...


[Fark user image]

(iPhone screen)

[Fark user image]

(camera sensor)

/What modern pixels might look like.
 
2020-08-14 2:19:32 PM  
[Fark user image]


That MacBook has terrible resolution.
 
2020-08-14 2:22:36 PM  

Fursecution: yet_another_wumpus: Are we still using pixels (one dot, one color and brightness)?  I hope not, but...

/What modern pixels might look like.


My whole rant was that that is exactly what modern pixels look like.  And media stores data in a similar format.  But going from YUV data to LCD "pixels" requires layer after layer to be changed to work in YUV and not RGB.  If you think that is simple, riddle me this: why do we need G-Sync/FreeSync?  You'd think the GPU would just send the image to the LCD and the LCD would display it.  NOOOOO..... It "just doesn't work", and Free/G-Sync helps make it "just work".  Going from RGB to YUV is even worse.
/rant
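
A note on the conversion itself: the Y'CbCr-to-RGB step is a fixed per-pixel affine transform, so the pain described above lives in the surrounding layers, not in the math.  A sketch assuming full-range BT.601 (the function name is illustrative):

```python
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    """Invert the full-range BT.601 transform; Cb/Cr are centered on 0."""
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    # Clamp to the displayable range; out-of-gamut values are clipped.
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

# A neutral grey should survive the round trip untouched.
print(ycbcr_to_rgb(np.float64(0.5), np.float64(0.0), np.float64(0.0)))  # [0.5 0.5 0.5]
```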
 
2020-08-14 3:12:03 PM  
Vectors rule, pixels drool!
 
2020-08-14 3:14:46 PM  

dittybopper: No I can't.  He's dead.


Well dig him up and shake his hand.
 
2020-08-14 3:17:07 PM  

SpectroBoy: Sure he did.
[sothebys-com.brightspotcdn.com image 850x478]


If you're ever in Chicago, I encourage you to visit the Art Institute and see the real thing, it's beautiful.
 
2020-08-14 3:25:23 PM  

Tyrone Slothrop: SpectroBoy: Sure he did.
[sothebys-com.brightspotcdn.com image 850x478]

If you're ever in Chicago, I encourage you to visit the Art Institute and see the real thing, it's beautiful.


I surely will.
 
2020-08-14 3:27:49 PM  

yet_another_wumpus: Are we still using pixels (one dot, one color and brightness)?  I hope not, but I've never seen any documentation from Nvidia or AMD suggesting that GPUs calculate images as anything but pixels.  It isn't like any hardware in use actually wants pixels these days.  I know Windows has been doing tricks to go beyond pixels for fonts (for something like 10+ years), but I don't think it is typically done for much else.


Font smoothing is just an algorithmic way to use the pixels to make a font look smoother.  Instead of "black box" jaggies, you add some grey pixels to soothe the eye.  Still pixels.

[bitmap2lcd.com image]
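
The "grey pixels" trick can be sketched as coverage-based anti-aliasing: draw the shape at higher resolution, then box-filter down so edge pixels become fractional greys.  Everything below (the helper name, the diagonal edge standing in for a glyph) is a made-up illustration:

```python
import numpy as np

def coverage_aa(mask, factor=4):
    """Box-filter a high-res boolean shape mask down by `factor`, turning
    partial edge coverage into grey levels in 0..1 (dims assumed divisible)."""
    h, w = mask.shape
    blocks = mask.astype(float).reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A hard diagonal edge rendered at 4x resolution stands in for a glyph.
yy, xx = np.mgrid[0:32, 0:32]
print(coverage_aa(xx > yy).round(2))  # 8x8 grid with greys along the edge
```

Still pixels, as the comment says; the smoothing lives entirely in the values, not the grid.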
 
2020-08-14 3:33:36 PM  

Tyrone Slothrop: SpectroBoy: Sure he did.
[sothebys-com.brightspotcdn.com image 850x478]

If you're ever in Chicago, I encourage you to visit the Art Institute and see the real thing, it's beautiful.



[Fark user image]
 
2020-08-14 3:57:02 PM  
Wait a minute.   This guy was the first to digitize a picture.  He wasn't involved in developing Video Display Units.

So yet another misleading Fark headline.
 
2020-08-14 4:41:17 PM  
[i.pinimg.com image]
 
2020-08-14 6:02:11 PM  

yet_another_wumpus: Are we still using pixels (one dot, one color and brightness)?  I hope not, but I've never seen any documentation from Nvidia or AMD suggesting that GPUs calculate images as anything but pixels.  It isn't like any hardware in use actually wants pixels these days.  I know Windows has been doing tricks to go beyond pixels for fonts (for something like 10+ years), but I don't think it is typically done for much else.

What uses pixels?  CRTs did, and DMDs (digital micromirror devices) still do, especially the expensive projectors with three DMDs, one per color.  That's pretty much it.

Everything else, from the color cameras that take the pictures, to the LCD/QDOT/OLED displays that show them, to the JPEG, MPEG, and related (lossy) formats that store them, is either locked into chroma subsampling or leaves chroma subsampling as a default that is rarely changed, for good reason.

Granted, displays intentionally built around chroma subsampling (the PenTile matrix) have given the whole idea a bad name by claiming far higher resolution than they actually provide, but does that really justify deliberately introducing error by giving two thirds of the color "dots" a chroma sampled a third of a pixel away?  And why throw nearly half of your resolution away anyway (don't try to make it exactly half; that's one of the many things wrong with the PenTile matrix)?

Pixels were great in the 20th century.  But tech has moved on.

/oddly enough, you had to pry my CRT from me
//LCDs were popular a long, long time before they could match a 21" 1600x1200 Trinitron
///finally on my second LCD "monitor".


Damn.  I have the option of "disable chroma color subsampling" when saving something as a JPG, but have never checked that box.  Does this specifically refer to the Bayer mosaic filter used by digital cameras?  Because I can think of three situations where it would be false to assume that the original image was taken using a Bayer filter:

1) Before consumer digital cameras with Bayer filters, there were color digital images making the rounds that were the result of multiple scans with different color filters.  These images had full color data for every location, and the first color digital image formats were designed for them.  The color images sent back from the Viking and Voyager missions were taken this way.  Many classic images now in digital form were digitized by placing photographs or even original artwork on a color scanner.

2) If you take a picture with a consumer digital camera with a Bayer filter and then shrink it (often necessary for size constraints), the Bayer mosaic is essentially gone and the smaller image will have true color information for every pixel (albeit a smaller number of pixels; see the sketch after the links below).  Yet the smaller image may still need to be saved as a JPEG.

3) Some models of digital cameras can use their image stabilization mechanics to make the sensor do a little "square dance" and capture multiple exposures that are combined into a single image that actually has Red, Green, and Blue data for each pixel.  Serious photographers save all their photographs in Raw format of course, but ultimately these enhanced resolution images are converted into TIFFs or JPEGs for printing or distribution.

https://www.hasselblad.com/h6d-multishot/

https://support.d-imaging.sony.co.jp/support/ilc/psms/ilce7rm3/en/

https://www.olympuspassion.com/2018/07/16/high-res-mode-olympus-om-d-e-m1-mark-ii/
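
To make point 2 concrete, here is a toy sketch of why shrinking kills the mosaic, assuming an RGGB layout (the function name and the random "raw" frame are illustrative):

```python
import numpy as np

def bayer_cells_to_rgb(raw):
    """raw: (H, W) sensor readout with an RGGB mosaic, H and W even.
    Collapsing each 2x2 cell to one pixel uses only measured values:
    one R, the average of the two G sites, and one B per cell."""
    r = raw[0::2, 0::2]                             # top-left site of each cell
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0   # the two green sites
    b = raw[1::2, 1::2]                             # bottom-right site
    return np.stack([r, g, b], axis=-1)

raw = np.random.rand(16, 16)              # stand-in for a RAW frame
print(bayer_cells_to_rgb(raw).shape)      # (8, 8, 3): true color per pixel
```

No chroma is interpolated from neighbouring cells, which is exactly why the half-size image has real color at every pixel.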
 

This thread is closed to new comments.
