• SavedKriss@lemmy.world
    6 months ago

    What a surprise! A traditional outfit appears statistically significant to a large statistical model and shows up more frequently. What a novel finding. I’m flabbergasted! What will be next? CEOs in jacket and tie? Dogs with fur? Why doesn’t my 512x512 picture of an Inuit in a snowfield portray the subject wearing a bikini? Why can’t Meta read my mind? WHY, MARK? WHHHHY?

    • 0x0@programming.dev
      6 months ago

      A traditional outfit

      How traditional? How statistically relevant is it? Most Indians I know do not wear turbans at all.

      If these stats are trustworthy (and I think they are), the only Indians that wear turbans are Sikhs (1.7%) and Muslims (14.2%). I’d say 15.9% is not statistically significant.

      • catsarebadpeople@sh.itjust.works
        6 months ago

        I think you’re looking at it wrong. The prompt is to make an image of someone who is recognizable as Indian. The turban is indicative clothing of that heritage and therefore will cause the subject to be more recognizable as Indian to someone else. The current rate at which Indian people wear turbans isn’t necessarily the correct statistic to look at.

        What do you picture when you think of a guy from Texas? Is he wearing a hat? What kind? What percentage of Texans actually wear that specific hat you might be thinking of?

        • sugar_in_your_tea@sh.itjust.works
          6 months ago

          A surprising number of Texans wear cowboy and trucker hats (both stereotypical). A surprising number of Indians don’t wear turbans, since turban wearers are by far a minority.

  • VirtualOdour@sh.itjust.works
    6 months ago

    Articles like this kill me because they nudge that it’s kinda sorta racist to draw images like the ones they show, which look exactly like the cover of half the Bollywood movies ever made.

    Yes, if you want a certain type of person in your image you need to choose descriptive words. Imagine going to an artist and saying ‘I need a picture, and almost nothing matters besides the fact that they look Indian’. Unless they’re bad at their job, they’ll give you a Bollywood movie cover with a guy from Rajasthan in a turban, just like their official tourist website does.

    Ask for a businessman in Delhi or an Urdu-speaking shopkeeper with an Elvis quiff if that’s what you want.

    • UnderpantsWeevil@lemmy.world
      6 months ago

      the ones they show, which look exactly like the cover of half the Bollywood movies ever made.

      Almost certainly how they’re building up the data, but that’s more a consequence of tagging. It’s the same reason you’ll get Marvel’s Iron Man when you ask an AI generator to “draw me an iron man”. It’s not as though there’s a shortage of metallic-looking people in commercial media, but by keyword (and thanks to aggressive trademark enforcement) those terms will pull back a superabundance of a single common image.
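      The tagging effect can be sketched with a toy example (all data and numbers below are hypothetical, just to show how one dominant depiction per keyword swamps everything else when you sample by raw tag frequency):

      ```python
      # Toy sketch of keyword-tagged training data: when one depiction dominates
      # a tag, frequency-weighted sampling reproduces it almost every time.
      import random
      from collections import Counter

      # Hypothetical tag -> image-description corpus; the skewed counts mimic how
      # a single trademarked or stereotyped image floods a keyword in scraped media.
      tagged_corpus = {
          "iron man": ["marvel_tony_stark"] * 95 + ["generic_metal_figure"] * 5,
          "indian":   ["turbaned_bollywood_poster"] * 70 + ["everyday_street_scene"] * 30,
      }

      def sample_images(tag, n=1000, seed=0):
          """Draw n training examples for a tag, weighted by raw frequency."""
          rng = random.Random(seed)
          return Counter(rng.choice(tagged_corpus[tag]) for _ in range(n))

      # The dominant image wins roughly in proportion to its share of the tag.
      print(sample_images("iron man"))
      ```

      Nothing here is specific to any real model; it just illustrates why a generator tuned on tag frequency alone converges on the single most common image for a keyword.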

      imagine going to an artist and saying ‘I need a picture, and almost nothing matters besides the fact that they look Indian’

      I mean, the first thing that pops into my head is Mahatma Gandhi, and he wasn’t typically in a turban. But he’s going to be tagged as “Gandhi”, not “Indian”. You’re also very unlikely to get a young Gandhi, as there are far more pictures of him later in life.

      Ask for a businessman in Delhi or an Urdu-speaking shopkeeper with an Elvis quiff if that’s what you want.

      I remember when Google got into a whole bunch of trouble by deliberately engineering their prompts to be race-blind. Consequently, you could ask for “picture of the Founding Fathers” or “picture of Vikings” and get a variety of skin tones back.

      So I don’t think this is foolproof either. It’s more just how the engine generating the image is tuned. You could very easily get a bunch of English bankers when querying for “businessman in Delhi”, depending on where and how the backlog of images is sourced. And “Urdu shopkeeper” will inevitably give you a bunch of convenience stores and open-air stalls in the background of every shot.