userbinator a day ago

Making it easier to reject "unapproved" or "unsupported" browsers and take away user freedom. Trying to make it harder for other browsers to compete.

  • ajross 20 hours ago

    That can be done already based on User-Agent, though. Other browsers don't spoof their agent strings to look like Chrome, and never have (or, they do, but only in the sense that everyone still claims to be Mozilla). And browsers have always (for obvious reasons) been very happy to identify themselves correctly to backend sites.

    The purpose here is surely to detect sophisticated spoofing by non-user-browser software, like crawlers and robots. Robots are in fact required by the net's Geneva Convention equivalent (robots.txt and the Robots Exclusion Protocol) to identify themselves and respect limitations, but obviously many don't.

    I have a hard time understanding robot detection as an issue of "user freedom" or "browser competition".

    • jml7c5 20 hours ago

      >I have a hard time understanding robot detection as an issue of "user freedom" or "browser competition".

      The big one is that running a browser other than Chrome (or Safari) could come to mean endless captchas, degrading the experience. "Chrome doesn't have as many captchas" is a pretty good hook.

      • hedora 17 hours ago

        Concretely: Google Meet blocks all sorts of browsers / private tabs with a vague "you cannot join this meeting" error. They let mainstream ones in, though.

      • BolexNOLA 19 hours ago

        Not to mention how often you can get stuck in an infinite loop where it just will not accept your captcha results and keeps making you do it over and over, especially if you're using a VPN. It's maddening sometimes. You can't even do a basic search.

      • jherskovic 17 hours ago

        I use Safari (admittedly, with iCloud Private Relay and a few tracking-blocking extensions) and get bombarded with Cloudflare's 'prove you are human' checkbox several times an hour.

        It's already a pretty degraded experience.

    • Sayrus 20 hours ago

      > I have a hard time understanding robot detection as an issue of "user freedom" or "browser competition".

      In the name of robot detection, you can lock down devices, require device attestation, prevent users from running non-standard devices/OSes/software, and prevent them from accessing websites (Cloudflare dislikes non-Chrome browsers and hates non-standard ones; reCAPTCHA locks you out if you're not on something Chrome-like, Safari, or Firefox). Web Environment Integrity[1] is also a good example of where robot detection ends up affecting the end user.

      [1] https://en.wikipedia.org/wiki/Web_Environment_Integrity

      • ajross 18 hours ago

        Aren't all those solutions even more impactful on the user experience though? Someone who cares about user freedom would think they're even worse, no?

    • jsnell 18 hours ago

      The purpose here isn't to deal with sophisticated spoofing. This is setting a couple of headers to fixed and easily discoverable values. It wouldn't stop a teenager with curl, let alone a sophisticated adversary. There's no counter-abuse value here at all.

      It's quite hard to figure out what this is for, because the mechanism is so incredibly weak. Either it was implemented by some total idiots who did not bother talking at all to the thousands of people with counter-abuse experience that work at Google, or it is meant for some incredibly specific case where they think the copyright string actually provides a deterrent.

      (If I had to guess, it's about protecting server APIs only meant for use by the Chrome browser, not about protecting any kind of interactive services used directly by end-users.)
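
      To make the point concrete, here is a rough sketch of how little effort spoofing would take (Python; the header names beyond X-Browser-Validation and all values below are illustrative placeholders, not Chrome's real strings):

        import requests  # any HTTP client will do; the headers are just strings

        # Placeholder values for illustration. A real spoof would copy whatever
        # fixed strings Chrome actually sends, which are trivially observable
        # in a proxy or packet capture.
        headers = {
            "User-Agent": "Mozilla/5.0 ... Chrome/999.0.0.0 Safari/537.36",
            "X-Browser-Validation": "<value copied from a real Chrome request>",
            "X-Browser-Copyright": "<Google's copyright string, copied verbatim>",
        }

        resp = requests.get("https://www.google.com/", headers=headers)
        print(resp.status_code)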

      • Sophira 14 hours ago

        I would imagine that this serves the same purpose as the way that early home consoles would check the inserted cartridge to see that it had a specific copyright message in it, because then you can't reproduce that message without violating the copyright.

        In this case, you would need to reproduce a message that explicitly states that it's Google's copyright, and that you don't have the right to copy it ("All rights reserved."). Doing that might then give Google the legal evidence it needs to sue you.

        In other words, a legal deterrence rather than a technical one.

    • soulofmischief 17 hours ago

      It's easy to change the User Agent and we cannot handwave this fact away for the sake of argument.

Avamander 20 hours ago

> Why do you think Chrome bothers with this extra headers. Anti-spoofing, bot detection, integrity or something else?

Bot detection. It's a menace to literally everyone. Not to piss anyone off, but if you haven't dealt with it, you don't have anything of value to scrape or get access to.

  • motorest 18 hours ago

    > Bot detection. It's a menace to literally everyone. Not to piss anyone off, but if you haven't dealt with it, you don't have anything of value to scrape or get access to.

    What leads you to believe that bot developers are unable to set a request header?

    They managed fine to set Chrome's user agent. Why do you think something like X-Browser-Validation is off limits?

    • Sophira 14 hours ago

      Because you would need to reproduce an explicit Google copyright statement which states that you don't have the right to copy it ("All rights reserved.") in order to do it fully.

      That presumably gives Google the legal ammunition it needs to sue you if you do it.

      • userbinator 10 hours ago

        Companies like SEGA have tried doing stuff like that in the past, and lost.

      • tomsonj 14 hours ago

        It seems like the requirement to reproduce this copyright header alone, never mind the validation hash, would be enough to scare off scrapers?

        • Sophira 14 hours ago

          I'm no lawyer, but my take on it is that by reproducing this particular value for the validation header, you are stating that you are the Chrome browser. It's likely that this has been implemented in such a way that other browsers could use it too if they so choose; the expected contents of the copyright header can then change depending on what you have in the validation header.

          To me, it seems likely that the spec is for a legally defensible User-Agent header.

    • Avamander 8 hours ago

      > They managed fine to set Chrome's user agent. Why do you think something like X-Browser-Validation is off limits?

      It's not off-limits technically. But do you think it'll remain this simple going forward? I doubt that.

  • lxgr 18 hours ago

    Do you mean bot and non-Chrome-using human detection?

  • IshKebab 14 hours ago

    Bots can easily copy the header though so I don't see how that helps?

    • Avamander 9 hours ago

      Only if they know to implement it, and only while it uses this more trivial approach. I expect it to gradually become more difficult. It's also yet another way to make mistakes and make it entirely obvious that one is forging Chrome.

  • ohdeargodno 20 hours ago

    Bullshit. You don't have anything of value either. Scrapers will ram through _anything_, and figure out if it's useful later.

twapi 3 days ago

Seems like they are using these headers only for google.com requests.

  • xnx a day ago

    Yes, I think it is part of their multi-level testing for new version rollouts. In addition to all the internal unit and performance tests, they want an extra level of verification that weird things aren't happening in the wild.

  • AznHisoka 19 hours ago

    My theory is that they're using it to block bots from scraping Google results.

wernerb a day ago

Is it not likely that it protects against AI bots like Llama?

  • wut42 16 hours ago

    I don't see how you can "protect" against a large language model that cannot do browsing.

exiguus a day ago

I have two questions:

1. Do I understand it correctly and the validation header is individual for each installation?

2. Is this header only in Google Chrome or also in Chromium?

  • gruez a day ago

    >1. Do I understand it correctly and the validation header is individual for each installation?

    I'm not sure how you got that impression. It's generated from fixed constants.

    https://github.com/dsekz/chrome-x-browser-validation-header?...
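
    If that repo's analysis is right, the value is derived only from constants baked into the build, roughly like this (a sketch of the scheme the repo describes, not Chrome's actual code; the strings below are placeholders):

      import base64
      import hashlib

      # Per the analysis linked above, the header appears to be
      # base64(SHA-1(api_key + user_agent)), where both inputs are fixed
      # strings compiled into the browser -- nothing per-installation.
      API_KEY = "<default API key embedded in Chrome>"  # placeholder
      USER_AGENT = "Mozilla/5.0 ... Chrome/999.0.0.0 Safari/537.36"  # placeholder

      digest = hashlib.sha1((API_KEY + USER_AGENT).encode()).digest()
      print(base64.b64encode(digest).decode())  # candidate X-Browser-Validation value

    Two installations with the same channel and user agent would produce the same value, which is why it doesn't look per-installation.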

    • exiguus 21 hours ago

      It's still not clear to me, because it's called the default API key. To me, "default" means it's normally overridden. And if it's overridden, does that happen at build time or at install time? That's what I'm asking myself.