We’ve had some trouble recently with posts linking to aggregators like Google AMP, MSN, and Yahoo.
We’re now requiring that links go to the OG source, not a conduit.
In cases like these, the aggregator link can cause the MBFC bot to attribute the article to the wrong outlet and report a rating more or less reliable than the original source’s, and it also makes it harder to run down duplicates.
So anything not linked to the original source, but stuck on Google AMP, MSN, Yahoo, etc., will be removed.
You’re wrong. Tons of peer-reviewed research says you’re wrong. There just isn’t any that says you’re right.
Do you have an explanation for why this bias you claim is so pervasive cannot be found when anyone looks for it? Is it… paranormal bias? Is it just really shy bias that hides when it gets scared?
How can that be true while MBFC is in broad consensus, across thousands of news sites, with academics, journalists, and other bias-monitoring organizations using different tools? Both things cannot be true. In fact, whenever someone compares MBFC to any other resource, they find near-perfect correlation, not bias. I’d love for you to explain where that bias disappears to when it’s put under a microscope.
Is there a conspiracy among bias-monitoring organizations, journalists, and academics that you have evidence of? Are the prestigious journals that published them in on it too? I can’t wait to sketch out this vast global conspiracy to pull the wool over our eyes and convince us that Democracy Now is just… highly factual. Those bastards!
That’s not saying what you want it to say. It’s a top-level picture that takes great pains to speak in general terms. So no, it’s not a guard against MBFC having a bias where it rates conservative outlets higher.
We’ve found concrete examples of bias in MBFC that would be very hard to see if you’re just smashing 11,000 data points against each other. Finding them requires checking the actual sites by hand, basically redoing their self-appointed job and checking their work, then checking it against MBFC’s other ratings for internal consistency.
No. If there were serious, pervasive bias impacting scores, it would lower the correlation, and MBFC would be an outlier in the group because it would agree with the others less often. If something is happening at a level so low that it doesn’t impact the correlation, it’s a handful of outlier entries, not pervasive bias. Multiple researchers conclude that the differences between monitors are too small to impact downstream analysis, which is hard to square with your claim. And each entry represents about 0.01% of their content, so what percentage of that data is being used to draw sweeping conclusions about the whole?
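To make that concrete, here’s a minimal sketch (all numbers invented; this is nobody’s real data or methodology) of why a systematic lean can’t hide from a correlation check: give several simulated raters the same underlying truth plus independent noise, then tilt one of them against half the sites.

```python
# Hypothetical simulation, NOT real MBFC or NewsGuard data: inject a
# systematic partisan shift into one of several otherwise-identical
# raters and watch its agreement with the rest of the group drop.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 11_000                          # ballpark size of MBFC's catalog

truth = rng.uniform(0, 100, n_sites)      # latent "true" reliability score
conservative = rng.random(n_sites) < 0.5  # made-up partisan label per site

def rate(partisan_shift=0.0, noise_sd=8.0):
    """One rater: truth plus independent noise, plus an optional shift
    applied only to conservative-labeled sites (the injected bias)."""
    scores = truth + rng.normal(0.0, noise_sd, n_sites)
    scores[conservative] += partisan_shift
    return scores

honest = [rate() for _ in range(4)]       # four unbiased monitors
biased = rate(partisan_shift=-15.0)       # one monitor dinging conservative sites

def mean_corr(candidate, peers):
    return np.mean([np.corrcoef(candidate, p)[0, 1] for p in peers])

print("honest monitor vs. peers:", round(mean_corr(honest[0], honest[1:]), 3))
print("biased monitor vs. peers:", round(mean_corr(biased, honest), 3))
```

With these made-up noise levels, the biased monitor’s average correlation with its peers comes out visibly below the honest monitors’. That’s the fingerprint: pervasive bias shows up in exactly the cross-monitor comparisons researchers keep running, and nobody finds it in MBFC.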
There is just high agreement about what constitutes high- and low-quality news sites. The notion that MBFC is somehow inferior to other bias monitors, or extremely biased, is not supported by evidence. If one of those organizations is better than the others, it isn’t much better. As this study concludes, because the level of agreement between them is so high, it doesn’t really matter which one you use. They’re all fine. Even they think so: not only do MBFC’s ratings correlate nearly perfectly with NewsGuard’s, NewsGuard’s own rating of MBFC is a perfect score. They respect each other’s work.
And, really, how could these researchers, who’ve dedicated their lives to understanding this stuff, have gotten it so wrong? Academia definitely isn’t a hotbed of conservatism. Using awful tools could destroy their careers, yet MBFC is regularly used in research. Why? How are these studies getting through peer review? How are they getting published? There are just too many failure points required.
Because a lot goes into statistics. Notice I didn’t say they would be conservative, just that they can sometimes be wrong. And they take great pains to say this is a general finding, which means there’s a lot of room in the numbers. It’s not at all saying what you’re trying to make it say.