And since you won’t be able to modify web pages, it will also mean the end of customization, whether for looks (e.g. DarkReader, Stylus), convenience (e.g. Tampermonkey) or accessibility.

The community feedback is… interesting to say the least.

  • diffuselight@lemmy.world · 1 year ago

    This has to be seen in context of AI - Google will offer this to companies to ‘protect their pages from being scraped’

    • BigDanishGuy@sh.itjust.works · 1 year ago

      Actually it would make some sense, not that I like it though.

      What can you do to prevent scraping? A lot of people are screaming about their IP being used to train AI, but have they actually done anything to tell the world that their texts can’t be used to train AI? Does copyright alone protect against use for AI training? To the best of my knowledge there’s no case law either way. But if you have to circumvent DRM to train an AI, then you’ll have a hard time with the “I did not know that I couldn’t do that” defense.
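      (For context: the only opt-out that exists today is advisory. Sites can list AI crawlers in robots.txt — GPTBot and Google-Extended are real user-agent tokens published by OpenAI and Google — but nothing technically enforces it; a crawler is free to ignore the file. A minimal sketch of such an opt-out, which is exactly the gap DRM-style enforcement would claim to fill:)

      ```
      # robots.txt — advisory only; compliant crawlers honor it, nothing enforces it
      User-agent: GPTBot
      Disallow: /

      User-agent: Google-Extended
      Disallow: /
      ```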

      So some news outlets get to protect their precious little articles from the big bad AI, which will probably destroy news as we know it anyway even more than it already has, while the rest of us get force-fed advertisements.

      Allow me to sarcastically quote Timbuk 3:

      Things are going great, and they’re only getting better

      • johnnixon@lemmy.world · 1 year ago

        So some news outlets get to protect their precious little articles from the big bad AI, which will probably destroy news as we know it anyway

        I was thinking about this. What happens when all the big outlets are having AI write their news? You can’t get answers about today’s news without feeding the model today’s news. Therefore, somebody has to create the data source.

        I see a few scenarios:

        • Google scrapes, aggregates, and summarizes to the point that nobody reads the article/sees the ads and the news site goes under. Then Google has nothing to scrape but press releases and government sources. Or…
        • News sites block access to scrapers and charge for it, but may be wary of crossing their customers (the news aggregators) in their coverage
        • The above creates a tiered system where premium news outlets (AI-assisted writing, but with human insight) are too expensive for ad-supported Google to scrape, so Google gets second-tier news from less reliable, more automated sources, or simply makes it itself. Why not cut out the middleman?
        • Rogue summarizers will still scrape the real news outlets and summarize stories to sell to Google. This will again make paid news a luxury, since someone with a subscription will summarize and distribute the main point (okay) or their spin (bad).

        I’m failing to see where this will go well. Is there another scenario?

    • volodymyr@lemmy.world · edited · 1 year ago

      Yeah, the whole idea of an open web democratizing knowledge empowers the learning of people and the learning of AI alike, and for the same reasons. Blocking and fragmenting this powerful space is reactionary; I think ownership of AI is the bigger issue.