
Because VGA is fundamentally an analogue standard built around how a CRT works. Each part of the signal serves a well-defined purpose rooted in how a CRT, and thus the electron beam inside a vacuum tube, is driven.

This is also why VGA is so hard to capture digitally: there are a lot of signals that are "VGA" but well outside the IBM standard. CRT monitors often don't care, because as long as they can lock onto the sync pulses, something they can do by detecting the blanking intervals, they can display the signal. A CRT doesn't fundamentally have a "resolution", it has a dot pitch, which is a different thing. But a digital display like an LCD has to detect the signal, verify its stability, and then try to decode it into something that makes sense, which can take multiple attempts to get right. Most hardware also makes assumptions based on the common resolutions, because the signal carries no resolution information, just sync and blanking. So without EDID, VGA capture can be a real challenge. IIRC you can even have multiple resolutions at approximately the same pixel clock because of how VGA works. To a CRT that's fine, but digital capture needs to know exactly how many times per second to sample the signal so it can make pixels.
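To make the sampling problem concrete, here's a minimal sketch (assuming the standard IBM 640x480@60 mode, whose visible area sits inside an 800x525 total raster): the pixel clock a capture device must recover is determined by the *total* raster size including blanking, not the visible resolution. Function names here are illustrative, not from any real capture API.

```python
# Sketch: why a digital capture device must know the full raster timing,
# not just the visible resolution, to sample a VGA signal correctly.

def pixel_clock_hz(h_total: int, v_total: int, refresh_hz: float) -> float:
    """Pixel clock = total pixels per frame (visible + blanking) x refresh rate."""
    return h_total * v_total * refresh_hz

# Standard 640x480@60 timings: visible 640x480, but the beam traces an
# 800x525 raster; the extra columns/lines are the blanking intervals.
clk = pixel_clock_hz(800, 525, 59.94)
print(f"{clk / 1e6:.3f} MHz")  # ~25.175 MHz, the canonical VGA pixel clock

# A capture device that guesses the wrong totals samples at the wrong
# rate and smears pixels, even though a CRT would display the same
# signal just fine.
```

Nothing in the analogue signal itself carries these totals; the capture side has to measure the sync pulses and guess the mode, which is exactly why modes with nearly identical pixel clocks are ambiguous without EDID.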

https://en.wikipedia.org/wiki/Cathode-ray_tube#Construction_...

https://en.wikipedia.org/wiki/Video_Graphics_Array

https://en.wikipedia.org/wiki/Extended_Display_Identificatio...


