Keep a history of the biometrics of the devices, either on the account or in a cookie. Use sufficiently secure and obfuscated code to capture and upload the data (maybe using steganography in an image, which would explain the weird image uploads). Prompt when the biometrics don't match a known user of the device/account (according to ML). If the tracking data doesn't exist (incognito), then prompt using a backup model trained on the difference between the biometrics of known bots and known humans. Keep a very loose threshold here.
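The flow described above could be sketched roughly like this. Everything here is illustrative: the feature choice (inter-key intervals), the thresholds, and the function names are assumptions for the sake of the sketch, not anything reCAPTCHA actually does.

```python
# Sketch: compare a session's keystroke-timing features against the
# account's stored history; with no history (incognito), fall back to a
# loose generic bot-vs-human check. All thresholds are made up.
import math

def features(key_down_times):
    """Inter-key intervals (ms) -> (mean, stddev) feature pair."""
    gaps = [b - a for a, b in zip(key_down_times, key_down_times[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return (mean, math.sqrt(var))

def should_prompt(session_times, history, match_threshold=40.0, bot_threshold=5.0):
    """Return True if the user should get a challenge."""
    mean, std = features(session_times)
    if history:  # compare against known users of this device/account
        best = min(math.dist((mean, std), h) for h in history)
        return best > match_threshold  # no known user is close enough
    # No tracking data: very loose check -- near-zero timing variance
    # looks scripted rather than human.
    return std < bot_threshold
```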
That's how I'd do it if I were Google (and no one else, because only Google can afford that). All the browser fingerprinting stuff mentioned is great, but it doesn't really work as well as you'd hope in practice.
The basic theory behind that checkbox is to attach an unmovable cookie to your browser.
The majority of the client-side reCAPTCHA code is fingerprinting, to make it impossible for spammers to steal cookies from legitimate users.
Once you have the immovable cookie, it's easy to run regular reCAPTCHA challenges until you're sure that browser is being used by an actual human.
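One plausible way to tie the cookie to the fingerprint, so a stolen cookie is useless from a different browser, is to HMAC the token with the fingerprint server-side. This is a guess at the general idea, not reCAPTCHA's actual mechanism; all names here are invented.

```python
# Sketch: bind a session token to a browser fingerprint with an HMAC.
# Replaying the cookie from an environment with a different fingerprint
# fails validation even though the token itself is valid.
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # illustrative server-side secret

def issue_cookie(fingerprint: str) -> str:
    token = secrets.token_hex(16)
    sig = hmac.new(SERVER_KEY, (token + fingerprint).encode(), hashlib.sha256).hexdigest()
    return f"{token}.{sig}"

def validate_cookie(cookie: str, fingerprint: str) -> bool:
    token, sig = cookie.split(".")
    expected = hmac.new(SERVER_KEY, (token + fingerprint).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```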
You'll notice that if you ever move to a fresh OS install, or a different browser, reCAPTCHA suddenly starts showing you image challenges again, which lasts for several weeks.
Keyboard/mouse biometrics is a nice theory. But that's all it is. It doesn't work as a general CAPTCHA solution because it's so easy for bots to fake human-looking input.
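To illustrate how cheaply a bot can fake "human" input: a few lines produce a mouse path with the curved trajectory and small jitter that naive human-vs-bot checks look for. Purely illustrative.

```python
# Sketch: a quadratic Bezier curve from start to end with a randomized
# control point (to bow the path like a real hand movement) plus Gaussian
# jitter on each point.
import random

def fake_mouse_path(start, end, steps=30):
    """Generate a human-looking mouse path as a list of (x, y) points."""
    (x0, y0), (x2, y2) = start, end
    x1 = (x0 + x2) / 2 + random.uniform(-100, 100)
    y1 = (y0 + y2) / 2 + random.uniform(-100, 100)
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * x1 + t ** 2 * x2
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * y1 + t ** 2 * y2
        path.append((x + random.gauss(0, 1), y + random.gauss(0, 1)))
    return path
```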
Great points. I agree the critical and biggest innovation is building a secure environment inside the browser. When I explored keyboard/mouse biometrics it was for detecting account theft which is a bit different.
If they have a way to create a secure, immovable cookie across browser sessions even in incognito mode then they don't need biometrics. In the absence of persistence, biometrics could serve as the cookie. Even with a naive approach in a hackathon, a member of my team was able to get very high precision identifying users based on a small sample of keyboard and mouse movements. I'm sure Google can do better.
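A toy version of that hackathon experiment: enroll each user's average timing profile, then identify a new sample by nearest centroid. Real systems use much richer features (key dwell times, digraph latencies, mouse velocity curves); this is only meant to show why identifying a *specific* known user from a small sample is feasible.

```python
# Sketch: nearest-centroid identification over per-user timing features.
def profile(samples):
    """Average each timing feature across a user's enrollment samples."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def identify(sample, profiles):
    """Return the enrolled user whose profile is closest to the sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(profiles, key=lambda user: dist(sample, profiles[user]))
```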
So it's not really about attackers being able to look like any human. It's about being able to look like a specific human. Which is much harder.
But maybe you have more experience? We abandoned it because it seemed intrusive and because we knew we couldn't invest in the secure environment. Without that, it doesn't matter; and with it, maybe there's an easier solution. But given that Google built it, I figured they would be using keyboard/mouse movements for user identification.