Grok has been launched as a benefit to Twitter’s (now X’s) expensive X Premium+ subscription tier, where those who are the most devoted to the site, and in turn, usual...
Wouldn’t trying to train an AI to be politically neutral from twitter be a pretty lost cause considering the majority of the site is very left leaning? Like sure it wouldn’t be as bad for political bias as say truth social( or whatever it’s called), but I hope they’re using a good amount or external data or at least trying to pick more unbiased parts of twitter to train it with. If they’re goal is to be politically neutral.
The majority of the site was left leaning in the past, but the extent has been exaggerated. There was always a sizable right wing presence of the “PATRIOT who loves Jesus and Trump and 2A!” variety, and some of the most popular accounts were people like Dan Bongina and Ben Shapiro. Many people who disagree with Musk and fascists have left the site since then at the same time as its attracted more right wingers, so I don’t know what the mix is at this point.
This is similar to Facebook. FB was “censoring conservatives” and “shadow banning” them when Tucker Carlson, Dan Bongino, and Trump posts had the highest engagement on the site.
Yep. It’s part of their persecution/victim complex. It’s an easy way to escape accountability for anything - their orange chimplord does the same thing. “Very unfair”. They can’t figure it out, but they get silenced and banned because (drumroll…) they break the rules more, usually by calling for violence.
Similar on reddit. Reddit is predominantly left, but much less so than 10 years ago when it was mainly college students. Conservatives whine it being censorship when they’re downvoted, and complain basically that everyone doesn’t agree with them (‘omg! /r/politics reflects the belief of the majority of subscribers!!’). On their subs, such as conservative or the complete shithole that was ‘the donald’, dare to disagree and your account will instantly be permanently banned, complete with vulgar insults from the mod. Plus they use the system where only users with mod-given flair can comment.
Decidedly mixed and increasingly right-leaning but I’m pleasantly surprised at my own experience having voice chats with diverse people who agree on one thing but disagree on just about everything else.
I’m just gonna share a theory: I bet that to get better answers, Twitter’s engineers are going to silently modify the prompt input to append “Answer as a political moderate” to the first prompt given in an conversation. Then, someone is going to do a prompt hack and get it to repeat the modified prompt to see how the AI was “retrained”.
Wouldn’t trying to train an AI to be politically neutral from twitter be a pretty lost cause considering the majority of the site is very left leaning? Like sure it wouldn’t be as bad for political bias as say truth social( or whatever it’s called), but I hope they’re using a good amount or external data or at least trying to pick more unbiased parts of twitter to train it with. If they’re goal is to be politically neutral.
“Reality has a well-known liberal bias.” - Stephen Colbert
The majority of the site was left leaning in the past, but the extent has been exaggerated. There was always a sizable right wing presence of the “PATRIOT who loves Jesus and Trump and 2A!” variety, and some of the most popular accounts were people like Dan Bongina and Ben Shapiro. Many people who disagree with Musk and fascists have left the site since then at the same time as its attracted more right wingers, so I don’t know what the mix is at this point.
This is similar to Facebook. FB was “censoring conservatives” and “shadow banning” them when Tucker Carlson, Dan Bongino, and Trump posts had the highest engagement on the site.
Yep. It’s part of their persecution/victim complex. It’s an easy way to escape accountability for anything - their orange chimplord does the same thing. “Very unfair”. They can’t figure it out, but they get silenced and banned because (drumroll…) they break the rules more, usually by calling for violence.
Similar on reddit. Reddit is predominantly left, but much less so than 10 years ago when it was mainly college students. Conservatives whine it being censorship when they’re downvoted, and complain basically that everyone doesn’t agree with them (‘omg! /r/politics reflects the belief of the majority of subscribers!!’). On their subs, such as conservative or the complete shithole that was ‘the donald’, dare to disagree and your account will instantly be permanently banned, complete with vulgar insults from the mod. Plus they use the system where only users with mod-given flair can comment.
Decidedly mixed and increasingly right-leaning but I’m pleasantly surprised at my own experience having voice chats with diverse people who agree on one thing but disagree on just about everything else.
I’m just gonna share a theory: I bet that to get better answers, Twitter’s engineers are going to silently modify the prompt input to append “Answer as a political moderate” to the first prompt given in an conversation. Then, someone is going to do a prompt hack and get it to repeat the modified prompt to see how the AI was “retrained”.