I've never understood why browsers expose APIs to read back the contents of a canvas or WebGL results, or the color depth of the screen. How often are they used for anything other than fingerprinting?
Note: at least for my profile, those are the only things that truly seem to give significant data. Everything else is in the 1-in-200-to-300 range, just from using FF on Android with uBlock.
Ok, so that's one use case. How many web apps are actually doing image manipulation (not to mention plain sites)?
There should really be a prompt whenever a site tries to access this information, especially in privacy-conscious browsers (I wouldn't expect Chrome to want such an anti-feature for their customers, the ad companies).
I've used more than I can recall. The browser is a creative tool, and being able to use all the creative features is great, actually.
Even just editing your profile picture is worth doing on the client side; why waste your server's CPU, for example? I do a lot of photo editing in the browser.
This is a feature that benefits everybody, and the fact that it can be abused is unfortunate, but not as tragic as you paint it.
Photo editing could be designed in such a way that the JS code never gets read access to the canvas; it would just specify transformations.
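A rough sketch of what that could look like (everything here is hypothetical; no browser ships an API like this today): the page builds a declarative list of edit operations and hands it off, so the script describes the transformations but never touches pixel data.

```javascript
// Hypothetical write-only editing API: the script assembles a declarative
// edit spec; the browser applies it out of the script's reach, so nothing
// like getImageData()/toDataURL() is needed for the editing step.
const ALLOWED_OPS = new Set(['crop', 'rotate', 'brightness', 'resize']);

function buildEditSpec(ops) {
  // Keep only known, declarative operations; anything that would require
  // pixel readback (e.g. a custom per-pixel callback) is dropped.
  return ops.filter((op) => ALLOWED_OPS.has(op.type));
}

const spec = buildEditSpec([
  { type: 'crop', x: 0, y: 0, w: 256, h: 256 },
  { type: 'customShader' }, // unknown op: rejected
  { type: 'rotate', degrees: 90 },
]);
// A privileged call like hypotheticalCanvas.applyEdits(spec) would then
// render the result without ever exposing pixels to JS.
```

Filters, crops, and rotations cover most casual photo editing, so a spec-only API would keep that use case while closing the readback channel fingerprinters rely on.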
It's becoming very clear, I think, that giving web apps this level of control is more detrimental than it is a positive. Leave rich apps to the OS, and keep the web as untrackable as possible.