Episode 210
The Ultimate Noise: AI and News Pollution
AI has been in the news a lot lately. But what happens when AI starts making the news? Generative AI, the type of artificial intelligence that pulls from existing data to create new content, presents a significant challenge to journalism. It can enable misinformation to spread like wildfire. How can the average consumer tell what's real and what's not? Hosts Leah Dajches and Matt Jordan find out by talking with Jack Brewster from NewsGuard, an organization that provides transparent tools to counter misinformation.
Leah Dajches: On November 6, 2023, an article appeared on a website called Global Village Space. It claimed that, quote, "A renowned Israeli psychiatrist who was celebrated for his work in curing severe mental illnesses was discovered dead in his Tel Aviv home." The article also stated that the psychiatrist left behind a, quote, "devastating suicide note" that implicated Netanyahu. The claim quickly spread on several platforms in numerous languages, and it was amplified by thousands of social media users. It was investigated by NewsGuard, an organization that provides tools to counter misinformation for readers, brands, and democracies. NewsGuard found that the article was generated using AI. Despite the fact that the psychiatrist appears to have been fictitious, the claim was featured on an Iranian TV show, recirculated on media sites in Arabic, English, and Indonesian, and spread by users on TikTok, Reddit, and Instagram. This is not an isolated incident. To date, NewsGuard has identified 750 unreliable AI-generated news and information websites spanning 15 languages. This is the work of generative AI, a type of artificial intelligence that pulls from existing data to create new content, and it presents a significant challenge to journalism. How can the average consumer tell what's real and what's not?
Matt Jordan: We're going to find out by talking with Jack Brewster, Enterprise Editor for NewsGuard, an organization that provides transparent tools to counter misinformation. Prior to working at NewsGuard, Jack was a Fulbright scholar in Munich, Germany, conducting a research project about the role of journalism in the digital age. He was a reporter for Forbes Magazine, covering politics, misinformation, and extremism. Jack has also written about politics for Time Magazine, Newsweek, Vice News, and the New York Daily News. Jack, welcome to News Over Noise.
Jack Brewster: Thank you for having me. I'm glad to be here to talk about a very important topic.
Leah Dajches: Yeah, we were excited to have you on. Actually, I was looking at the NewsGuard website earlier, and there seem to be so many tools and data points, research, articles. There's a lot available. Could we maybe just start with: what is NewsGuard, and how does it work?
Jack Brewster: So NewsGuard is a misinformation watchdog that uses human journalists to rate the reliability of websites, podcasts, and TV shows. In addition to that, we also track false narratives and how they're spreading across various platforms and sites, and we put out periodic special reports as well as some newsletters, like a consumer-focused newsletter called Reality Check, which covers the world of misinformation.
Matt Jordan: You used to be on a different beat. What does Reality Check do as a newsletter? Is it consumer-forward? Is there a consumer protection idea behind it, or what's--
Jack Brewster: No, no, it's meant for anyone and everyone who cares about our media ecosystem and the current threats facing it. This is not a fact-checking newsletter. It really covers the way that misinformation spreads and how it's impacting the pipelines that we use to get our information. So, it's for anyone and everyone who cares about our democracy, who cares about media, who cares about the way that technology is changing the way we get our news.
Leah Dajches: Yeah. And Matt, I actually think you sent me the February edition of Reality Check, because there was a prominent focus on Taylor Swift and misinformation about Taylor Swift. As a media researcher and a Swiftie, I was excited to see this. We know a lot of young people are looking to influencers and celebrities for their news, among other things. So, with this in mind, I was wondering: how does NewsGuard conceptualize or define news? Or is it a broad, wide cast in thinking about our media landscape and what news is?
Jack Brewster: I would say-- I would go back to saying, we are unique in the sense that we cover the way that misinformation is spreading. Now, that sounds like a very broad topic, but I don't really know of any other news outlet that only covers that. There are a lot of fact-checking news outlets out there, and we do a ton of fact checking. I don't want to take away from that. But we not only debunk false narratives, we track the way that they're spreading online. We have a database of over 8,000 websites that we have written long research memos on-- we call them nutrition labels, because we give them scores. And because of that, we have this massive database that we can draw upon when we're writing about the latest Russian disinformation narrative or a false narrative about Taylor Swift. We can see very easily: who are these people spreading this false claim? And I think that's a very important thing for consumers to know. I mean, a basic function of the internet should be that people have information about the sources where they're getting their news. On social media, they absolutely do not. And so, we're dedicated to trying to provide more information. That's our founding mission.
Matt Jordan: How have you found other news organizations responding to your work? You're putting out things like trust indicators and recommending their usage in search engines. Do you find other news organizations embracing your work or keeping it at arm's length?
Jack Brewster: Yeah. I mean, from anyone who cares about having more information about where they're getting their news, or about our media ecosystem in general, I think the reception has been positive. I laugh sometimes when people say that we're censors, because if anything, we're adding to speech, not taking away. We don't tell people what to do with our ratings. We put them out there and we say, look, we're the only news outlet out there right now that is providing this information. So, I think from anyone who recognizes that simple but for some reason hard-to-understand fact, the reception is very positive. Even some news organizations that got low scores in the beginning, when we started-- what is it? Six years ago-- have worked with us to get higher scores. We are not a journalism review outlet. We want news organizations to get higher scores, because our rating system is not meant to be overly strict and punitive. We're trying to capture just the basic functions of what a news outlet should offer in terms of information and say to consumers, look, does this news outlet provide that? Do they tell you who the owner is? Do they say who their content creators are? Do they frequently publish false claims, like the 2020 election was stolen or the COVID vaccine is a bioweapon? That sounds like a really low bar, and that's because it is. But you'd be surprised at how many outlets don't tell people about their ownership, or don't list content creators, or spread false claims about the election.
Matt Jordan: How about search engines? I know that you have worked some with Bing, but how has Google responded to the idea of having trust indicators on the things that pop up in somebody's search?
Jack Brewster: So, I have to dodge that question a little bit, just because I'm not on the business side. I know right now we're not on Google. I would love for us to be, I will say that. I want our ratings to be everywhere. I want them on TikTok, Instagram, Facebook, every social media platform and search engine. I mean, I think it's truly a no-brainer-- the fact that an everyday consumer has to fight and claw to find out information about a news website. It's possible right now for someone to go on Facebook and be fed a Russian-sponsored disinformation site, and they would have no idea-- on an American social media platform. That is just crazy to me. It doesn't make any sense. So, I would love for our ratings to be everywhere. It makes sense for them to be on search engines and every social media platform, but right now we're not on Google, no.
Leah Dajches: So, from my understanding, it seems like NewsGuard is really a useful tool to help users and media consumers check false narratives, misinformation, and disinformation after a news story has been released. But is there a component within NewsGuard that can help journalists as they're creating the story, at the front end of production?
Jack Brewster: So, I'll actually push back on that. I think we're more, if anything, what we would call a prebunking outlet. Your listeners will know what debunking is, which is what you just said: countering a false narrative after it's been published. But we engage in prebunking in the sense that we have this database of scores and information about news outlets. So, we're able to give people information so they are prepared to debunk things on their own when they come up. For instance, if I'm on Twitter-- or X, sorry-- and I know that DC Weekly is a Russian disinformation source, my antennas are going to be up when they say that the US has biolabs in Ukraine. I'm just making this up. So, we, I believe, engage in prebunking. We help consumers and journalists before they encounter misinformation by giving them information about the sources. If there's detailed information about a source and you know that it has published false claims previously, hopefully that news consumer or journalist is going to be more conscious about the things that news outlet or social media account produces in the future.
Matt Jordan: So, on the prebunking idea-- this misinformation fingerprint catalog that you have-- what are some of the false narratives out there, let's say your top three right now, that are starting to surface more and more, especially in relation to Ukraine or to the US elections?
Jack Brewster: Well, off the top of my head-- recently, there's been a surge in AI-related false narratives surrounding the war and elections. The big ones recently have been around that: fake images, doctored images about the war purporting to show tragedy in Gaza, which is strange because it's already an absolute tragedy without any doctored images. The real images coming out of there are just totally gruesome. But still, we're seeing a lot of those. There are a lot of COVID vaccine-related myths still popping up-- that it's changing your DNA, that it is somehow tainting your testosterone, those kinds of false claims. That's, again, still a dominant theme. And we're starting to see some 2020 election voter fraud claims come back as well, with the elections coming up. So those are the major narratives. With Ukraine, I'm always shocked by all of the things that they throw at Zelensky. It's like the kitchen sink at this point. They have said that he's gay. That he is a Nazi. That he has villas all across the world. That he buys yachts. Everything under the sun that you can think of, they have thrown at him. And by they, I mean the Russians, but also other state-run media organizations. So, I'm continually shocked at these Zelensky false narratives, and we have continued to see those two years on into the war.
Matt Jordan: So, you're describing these narratives and their origins, but how does AI impact this now? How is artificial intelligence impacting the spread or amplification of these types of false narratives?
Jack Brewster: Yeah. So, I like to say that AI has democratized the troll farm. What I mean by that is, just as the internet democratized journalism and access to information, AI has given the person in his or her basement the capacity to have hundreds if not thousands of journalists, videographers, and audio technicians at the push of a button. Anyone and everyone can start a troll farm with the push of a button. So, think about 2016, when the IRA in Russia, the Internet Research Agency, pumped out content on Facebook to try and sway the 2016 election. They had human journalists doing that. I forget off the top of my head how many they hired, but it was a good number. That staff could produce the same type of content, the same quality if not better, at 10 times, 100 times the scale if they used AI correctly. They could set up a script that combed the internet for certain keywords, ran them through a language model, and pumped out hundreds if not thousands of articles a day without any human oversight whatsoever. That is a scary potential, and that's what AI has done across the board. So that's one thing.

The second point I like to talk about is the flip side of all of this, of AI-generated images and video and audio: consumers, likely for the better, are more skeptical about the things they see online. But that means we're now seeing false narratives emerge about real images, where people say that something is AI-generated when it's actually not. And that is another very dangerous part of the effect that AI has had on our media ecosystem. I'll give an example from the Israel-Hamas War. There was an image that Benjamin Netanyahu and Ben Shapiro, the Daily Wire host, circulated of a dead baby. And somebody online swapped out the baby for a dog and said, look, Israel is doctoring images using AI, and they're trying to pass off this dog as a baby. That got millions of interactions across social media. So, there were tons of people out there who likely believed that Benjamin Netanyahu and this American right-wing media host had tried to pass off a photo of a dead baby using AI. And that shows you the effect that AI has had on our media ecosystem. It has so polluted it already that we have people out there claiming that real photos are fake.
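To make the loop Brewster describes concrete, here is a deliberately skeletal sketch of that keyword-to-article pipeline. Every function below is a hypothetical placeholder, not a real API or anyone's actual system; the point is only that nothing in the loop requires a human.

```python
# A minimal sketch of the content-farm loop described above. All three
# functions are hypothetical placeholders; the shape of the loop is the point.

def fetch_trending_keywords() -> list[str]:
    """Placeholder: comb the internet (feeds, trends) for hot keywords."""
    return ["election", "vaccine", "Ukraine"]

def generate_article(keyword: str) -> str:
    """Placeholder: run the keyword through a language-model prompt."""
    return f"(model output about '{keyword}')"

def publish(article: str) -> None:
    """Placeholder: post to a site monetized with programmatic ads."""
    print(article)

# One unattended loop like this can emit hundreds or thousands of
# articles a day, which is the scale problem Brewster is pointing at.
for keyword in fetch_trending_keywords():
    for _ in range(100):  # articles per keyword, per day
        publish(generate_article(keyword))
```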
Leah Dajches: I'm glad we're talking about AI, because I was looking at NewsGuard and thinking about how these various tools work. Jack, correct me if I'm wrong, but don't you all use AI algorithms to assist your analysts? Using AI for good, to help detect AI?
Jack Brewster: Yeah. I mean, look, AI is a broad topic, but it can absolutely be used for good, and I welcome that across the board. I think ChatGPT is an unbelievable technology. I think other AI technologies, in health and other areas, will do tremendous good for the world. NewsGuard is not anti-AI. We're anti-AI if there's no transparency; we're anti-AI if it's being weaponized by bad actors to spread misinformation at large. But we're not anti-AI. And I do think that AI can be used to help counter misinformation online in various ways. There are other things to talk about when it comes to journalism ethics and producing content using AI, but that's a different conversation, and we're not engaged in that currently. We're absolutely not anti-artificial intelligence.
Matt Jordan: So, some of what you're describing, in terms of how misinformation spreads and creates this doubt and skepticism, or cynicism, about the reliability of anything-- which is really a Russian information attack, this firehose of falsehood where nobody knows what's true anymore, and that's the point. NewsGuard has done a lot of reports on this, including one indicating that ChatGPT 3.5 generated misinformation 80% of the time and that 4.0 did so about 100% of the time. So, we know the tools are there and that they can be used for bad. But what is it about the incentives of the media system that makes this so powerful? I'm thinking in particular about unreliable AI-generated news as a category. How does that reflect the incentives of the news ecosystem?
Jack Brewster: To answer your question succinctly: follow the money. And where is the money coming from? It's often programmatic advertising on social media and through search engines. So, you mentioned AI content farms. Why are content farms popping up? There are two reasons why someone would start such a thing. One is to try and sway public opinion, to promote a cause. That's what Russia does when it pumps English-language news articles into American newsfeeds. The other is to make money. If I can set up an AI content farm, run a script to generate hundreds or thousands of articles a day, and get enough clicks on them, that's passive income. It's the same as throwing money into the stock market: you're getting cash without doing anything. I think that's a main driver behind a lot of these sites. Now, your listeners are going to say, OK, well, if it's not trying to convince me of misinformation, why does NewsGuard care, and why should I care? The answer is, we all should care about our newsfeeds and the places we're getting our news from. If they're being polluted by AI junk, that's a problem, and we're all going to suffer for it. You can easily imagine a world-- there have been reports that this is already happening-- where your Google search results, when you type in 2024 presidential election or a question about politics or the Israel-Hamas War or the Russia-Ukraine War, feed you an article from some chatbot. I mean, do you want that? And if that's happening, where is the revenue going? To that AI content farm, and not to the journalist who actually had to do the reporting to write the article. That's a huge problem. So, the incentive behind these content farms, and AI-generated content on social media in general, is typically to generate ad revenue. And they'll sprinkle in misinformation along the way, because it sells.
Matt Jordan: So, what do you think-- and you can defer this if you want, because I know you're trying to build a relationship with Google. But Google, according to your studies, accounts for 90% of that ad revenue-- I think the figure was $2.6 billion in advertising revenue. And for our listeners, programmatic ads are ads placed automatically by algorithms, the kind that pop up when somebody does a search.

Often, the brands aren't even aware that their ad is popping up on an AI site, and that's part of the problem. But what do you think Google's relationship to this should be? Because essentially, they're working as a shill or as a-- I don't know, an enabler of fraud. What do you think the best way to get a handle on this is?
Jack Brewster: Yeah. Not because of any business relationship-- I don't care about that-- but just in terms of journalism ethics, I'm not going to say what I think Google should do. What I will say is that report after report after report from us has found that Google is indirectly financing the proliferation of misinformation-superspreading sites. Full stop. So do with that what you will. The companies advertising through Google often do not know that their ads are appearing on Russian disinformation sites or COVID misinformation sites, and all these reports show that Google is not really doing much to try and stop that. Google and the companies will come back and say, well, we have keyword blockers, and we use various services to stop our ads from appearing on those sites. The problem is that those algorithms often miss blatant misinformation, because keywords are not really how misinformation spreads. And people are smart. Misinformation superspreaders are smart. They're in the business of information warfare, so it's pretty easy to trip up an algorithm.
Leah Dajches: I realize I should have asked this question earlier, but Jack, we've heard you use the phrase false narratives. For our listeners out there-- taking this opportunity to help them with their media literacy skills-- can you walk us through how false narratives fit into the broader concepts of disinformation and misinformation?
Jack Brewster: Sure. So, I'll start with a false narrative. A false narrative is any myth or myth variation. A false narrative could be that the COVID vaccine is a bioweapon, or it could be that the 2020 election was stolen. It's often a big umbrella term, and there are variations beneath it. So, a false narrative can encompass several things under one umbrella. Misinformation is the same kind of thing: a false claim. Disinformation is a little bit different. It's when a state-run media source, or a bad actor working on behalf of a government, attempts to spread misinformation to promote a certain cause. Think of what I was describing earlier about Russia's meddling in the 2016 election. That's disinformation. It's different from misinformation, though it falls under the same umbrella. So those are the big three that you just mentioned. There are also misleading narratives, and that's an important distinction. There's a difference between misleading and false, and that's something we constantly argue and debate at NewsGuard: whether something is false or just misleading. It can be a really, really important distinction. Just today, we were talking about how some left-leaning outlets framed Trump's bloodbath comments, when, in his speech-- I forget the direct quote, but it was something like, the economy is going to fall into a bloodbath. He was trying to say that, but some news outlets, especially left-leaning ones, took it as meaning that he was calling for a bloodbath, or predicting a bloodbath, if Biden was reelected. And we were discussing: is it false to say that he called for a bloodbath? We went back and forth and decided it was more misleading. That is the kind of argument we go back and forth on constantly at NewsGuard, and it's an important distinction for your listeners to know.
Matt Jordan: So, one of the ways AI is impacting the news media ecosystem, of course, is that AI-generated stories are sucking up some of the money that could be going to reporters. Just to tell the listeners a little bit about how it works: if a story comes out, AI can instantaneously scrape it, often with the same images, and repackage it in some phony-sounding newspaper thing, one of these unreliable AI-generated news and information--
Jack Brewster: You can just say AI content farm for short. I hate that acronym anyway, but we use it sometimes.
Matt Jordan: But so, it's siphoning off money that could be going to reporters. I'm a media historian, so I've looked at these moments of technological shift, when there are these kinds of openings for this type of thing. And there was a very similar moment back when the international wire was coming into practice. Hearst's International News Service was, I think wittingly, spreading German misinformation during World War I, and got banned from the wire by the British and French governments. They couldn't send stuff over the wire back home, because they had been caught red-handed spreading German talking points. So, what they did, of course, was just take somebody, put them next to the AP wire, and have them type out each story as soon as it came in, then disseminate it through their own wire to all the newspapers, who ran it. That case went all the way to the Supreme Court, which generated a new legal concept to deal with it, called quasi property. The Court said that the value of news had to do with its temporal dimension-- that if it was news, it was new-- and that somebody else was profiting off the fruit of the labor. So, there's a legal framework for dealing with this, and the current situation seems even more ripe for it. You have these things that can scrape content without any human involvement-- so there's not even any First Amendment expression-- and, thanks to Google, very quickly monetize that kind of theft of quasi property. So, do you think there need to be cases that test this and hold somebody culpable for anything to change?
Jack Brewster: Again, you're going to hate this answer. I'm about to deflect a little bit on this one, just because we're in the business of covering other news outlets. What I will say is that there are many different ways that news written and paid for by human journalists is being taken and copied by artificial intelligence. Not only are the models themselves trained on mainstream news, but also, as you mentioned-- and we've done reports on this-- the content farms are basically using AI to covertly, and possibly, plagiarize. It's basically a new way of getting around the teacher, except in this case, it's not the teacher. It's the news outlets themselves-- The New York Times, The Washington Post, The Wall Street Journal, et cetera, and local news outlets as well. So, it's a huge, huge question. When I wrote an article a few months ago about this topic, I spoke to some legal experts, and they essentially said, I don't know; it's too new for me. This is uncharted territory. I don't know if this is, quote, unquote, "plagiarism." I don't know if this is theft.

So that's going to need to be debated in the courts, and journalists should pay attention. And I think news consumers should have their antennas up about who is feeding them this information and how they want their information fed. Do we want a world where 80% of the content is just generated by chatbots, scrambled from other sources, and we have five mainstream news outlets that are actually doing journalism while everyone else just copies it? I don't know-- do we want that? That's the question we're going to need to answer, and all news consumers should care about it.
Matt Jordan: I'll answer it. I don't want that world.
Leah Dajches: I mean, I don't want it either. I have to be honest: as we're sitting here, part of the goal of the News Literacy Initiative and this podcast is figuring out how we can help our listeners and news consumers turn back to the news and stop avoiding it, which we know from research is happening.

And as we're talking about AI-generated content and this potential future world we could live in, it's very overwhelming. I worry about news consumers who are already overwhelmed by journalistic practices and the structures of the news-- what can they do about this? How can we empower listeners? Is there a first step, an introduction, or something on NewsGuard that we can point people to as a helpful tool for managing this AI world and the spread of misinformation, disinformation, and all of that?
Jack Brewster: Well, first, I would encourage them to download our browser extension. That means our ratings will pop up on search engines as they're searching. I wish I could do a demo for your listeners, but it's quite literally as easy as downloading the browser extension. It's free if you use the Microsoft Edge browser-- I know it's not the most popular browser, but trust me, it's just as good as Chrome and the others. Our ratings will show up alongside Google search results and on social media as you're browsing the internet. That's a great starting point. Beyond that, they should follow Reality Check-- subscribe to Reality Check. It's free right now. It's on Substack and available to anyone and everyone. We publish three times a week. There's commentary on Wednesdays that delves into the topics we're talking about right here, and on Mondays and Fridays, we have stories about, as I said, the spread of misinformation online. It's written by this amazing journalist named Jack Brewster. So, I really, really encourage your listeners to subscribe. And then follow us on social media, and you will get media literacy tips and other behind-the-scenes looks at how we cover the spread of misinformation.
Matt Jordan: So, you said before that the site relies on journalists, because-- one of the things that AI doesn't handle very well, that none of these networked platforms handle very well, is scale. Optimization in the logic of Silicon Valley is always about scaling up so that everything can be cheaper and bigger. But, of course, scale also becomes their biggest problem, because nobody can deal with the flood of stuff that emerges at that scale. You all use pretty analog methods to identify what is AI. So, when you're doing the AI Tracking Center work, you identify sites that generate content with little to no human oversight. What do you look for that lets you know, as a journalist: no, wait, this is a phony news site? What are the tricks of the trade that somebody who's reading through a random site that Google throws at them can keep in mind, so they might also be aware of it?
Jack Brewster: Two different answers. The first one, I'm not sure your listeners would do, but the way that we spot them is by looking for AI error messages. AI models will commonly feed error messages back to users when they can't respond to a-- not a query. I'm used to calling it a query, but--
Matt Jordan: Prompt.
Jack Brewster: A prompt, yes. And those messages, you would be surprised, end up on AI content farm sites all the time. That's usually the telltale sign that we use to try and spot AI content farms. It basically gives us a tell that there's no human editorial oversight, because a human editor, unless they're underskilled, will spot those error messages and edit them out. But often, they do not.
And that's one of the main ways that we spot AI content farms. I'm not sure if your listeners would be casually browsing and do that, but if they wanted to, they could look for things like: as an AI language model, I cannot-- dot, dot, dot-- perform whatever task. That's one of them, but there are others as well. The second thing that we do is, when we identify a site that's using AI, or spreading misinformation for that matter, we're pretty skilled at being able to track whether that site is part of a network. So, if a site is using AI to pump out content, we can usually use tracking tools to see, OK, the person behind this site also runs X, Y, and Z. It's the same WordPress template. It shares the same Google AdSense codes. There are other things we look for as well. But for everyday consumers, it's more difficult, because oftentimes these sites try to make the content look like it's coming from a local news outlet even, and we've seen that already.
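Both of those signals lend themselves to simple automation. Below is a minimal sketch of the idea-- not NewsGuard's actual tooling-- that scans article text for leaked model boilerplate and pulls Google AdSense publisher IDs out of raw HTML, so sites sharing an ad account can be clustered. The phrase list is illustrative and assumed, not an official catalog of error messages.

```python
import re

# Boilerplate of the kind Brewster describes: model refusal/error text that
# leaks into published articles when no human edits the output. This list is
# illustrative, not NewsGuard's actual methodology.
AI_TELLS = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my knowledge cutoff",
    "i'm sorry, but i cannot",
]

# Google AdSense publisher IDs (ca-pub-<digits>) embedded in a page's HTML.
# Sites sharing one ID are monetized through the same account, a common way
# to tie a network of related sites together.
ADSENSE_ID = re.compile(r"ca-pub-\d{10,16}")

def ai_tells_in(article_text: str) -> list[str]:
    """Return any leaked AI boilerplate phrases found in the article text."""
    lowered = article_text.lower()
    return [tell for tell in AI_TELLS if tell in lowered]

def adsense_ids_in(page_html: str) -> set[str]:
    """Return AdSense publisher IDs found in a page's raw HTML."""
    return set(ADSENSE_ID.findall(page_html))

# Example: flag an unedited model refusal that slipped into a story.
story = "As an AI language model, I cannot... perform that task."
print(ai_tells_in(story))  # ['as an ai language model']
```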
So, I would say to your listeners: if they're on Facebook and there's a source they haven't seen before, and Aunt Judy is saying that this new news outlet is great, it's wonderful, and it's telling me all the truth I need to know about COVID vaccines and how they're changing our DNA-- well, I would encourage your listeners to take the name of the outlet that Aunt Judy is posting and enter it into Google. There's a very good chance that if it's reached Aunt Judy's Facebook feed, it's been written about. That's called cross-platform checking-- cross-checking, as they usually call it in elementary and middle school. It quite literally just means checking your sources, just typing them into Google. That's an easy thing your listeners can do.

If you want to get more in the weeds, I encourage everyone to play around with language models and see how they work. If you work in journalism, you just assume that everyone has used ChatGPT. That's not true at all. I encouraged my parents to use it-- my dad's a writer, and he used it a couple of times and said, oh, it gives me general answers. I'm like, well, yeah, you need to learn how to prompt it. It's a new technological tool. You need to play around with it, see how it functions, and learn the way it responds to prompts. By doing that, you can more easily spot AI-generated content yourself. It often writes things like firstly, secondly, thirdly in paragraphs-- very generic-sounding words. There will be date cutoffs on some content, so if you're reading an article and it seems out of date and stock-generated, it might be AI. Those are often telltale signs. And this happens more than you think, where everyday readers and users are able to do this. Kate Middleton is a perfect example. Even though that photo was spotted by, I think, some journalists at the wires that were going to publish it, there were also people online who were able to just pick it apart. So, I really do think there are ways for everyday people to spot AI-generated content while they're browsing.
Matt Jordan: So, I wanted to shift to another venture that you're involved in. You're also CEO of Newsreel. Tell us a little bit about what that is.
Jack Brewster: Sure. Yeah. So, Newsreel is trying to answer another big problem that we have-- I hate to be so doom and gloom-- which is that young people are increasingly tuning out from the news. News avoidance is up; Pew Research Center has shown this, and many other tracking centers have shown it too. Because of that, young people are often getting a lot of their news and information from social media. We have created a news app to try and answer the problem of news avoidance. We believe that by leveraging brain science, we can deliver the news in a different way. Using timelines and quizzes, we can engage a new generation of consumers to stop browsing so much and maybe get their news from a different source. It's going to be a step above social media, but it's not delivering news the way, say, your parents' news apps do-- The New York Times, The Washington Post. It's going to deliver news in a different way that uses brain science to help us retain and understand news better.
Matt Jordan: So, if I were to open it, what would it look like?
Jack Brewster: Have you ever used Duolingo?
Matt Jordan: Yeah.
Leah Dajches: Yeah.
Jack Brewster: Yeah. So, think about Duolingo meets The New York Times.
Matt Jordan: OK.
Jack Brewster: And that is what you would see when you open up Newsreel. So--
Leah Dajches: Is Newsreel available yet?

Jack Brewster: No, it's about one month away. We are going to launch it in beta testing in about one month, and I encourage all of your listeners to go to newsreel.co to sign up for the waitlist. It's going to be free. We're looking for feedback from people like your listeners, who care about our current media ecosystem, want to be better informed, and believe that young people need to be engaged in our political process. So that's our mission. That's what we want to do with Newsreel. We want to create a news app that helps young people retain and understand the information they need to stay informed.
Matt Jordan: How will you interface with other news outlets? Will you share revenue with them if you get things from their sites? I mean, how are you solving that kind of political-economic question of the news media ecosystem?
Jack Brewster: For sure. Yeah-- you're talking about with Newsreel?
Matt Jordan: Yeah.
Jack Brewster: Yeah. So, in the beginning, it's going to be a lot of news from the wires and other news outlets and social media. We're going to be delivering it in a different way, but we'll be writing it ourselves-- we're not aggregating. It's not going to be like Apple News. It's going to be content where we provide value by writing it in a way that we think is accessible for younger audiences, delivered in a way that helps them retain the information. Eventually, I want to do more original reporting and be the news outlet for the younger generation, covering topics that maybe aren't covered as much as they need to be, like student loans and climate change-- things that young people really care about-- and delivering them in the Newsreel way, as I was describing. So, I want to be the news outlet for the next generation, and I want to do it by being different from just another digestible newsletter. There are so many of those out there-- Axios and Morning Brew-- and they're great, they're wonderful. They deliver news in a digestible format. But we have talked to many, many people who feel like those newsletters have become white noise. What we're trying to do is go a step beyond that and say, OK, we understand that newsletters are white noise, that social media is exhausting, and that the 24/7 news cycle is not for you. You can keep social media and those newsletters; we're going to be your sidekick. We're going to be the tool you use to get caught up on the news when you haven't read it for a while-- when you've tuned out for two or three weeks and need a timeline that catches you up on what you've missed. If you want to start a news habit, you don't really have a place to start right now other than a newsletter, and some people find that exhausting. So come to Newsreel. These are all the things we're trying to answer. News avoidance online is a huge problem, and I really think Newsreel is the only news outlet out there that's trying to answer it.
Leah Dajches: It sounds very exciting. I know personally, I'll want to check it out. And I'm curious-- within Newsreel, I'm mentally picturing my Duolingo and thinking of a timeline. Are there going to be components that, whether explicitly or indirectly, teach consumers news literacy skills-- how to identify misinformation and disinformation? Will that be built into the app?
Jack Brewster: Yes. I want it to, though at the same time, I don't want to make the news feel like homework-- that's what one of the beta testers has already said. Duolingo has done an amazing job of making language learning fun and interactive and, to be honest with you, addictive. Every day, I'm worried that my streak of learning French is going to disappear because I forgot to do it.
So, I do want to teach people the news in some ways-- I call it a news-learning app or a news-literacy app. But I want to do it in a way that makes staying informed fun and engaging. That's our mission. So, the answer is yes, but maybe we wouldn't be as explicit about it.
Matt Jordan: Well, Jack, I want to thank you for coming in and unpacking AI and making it fun and explicit for us, and wish you the best of luck with the new app.
Jack Brewster: Thank you so much. It was a pleasure coming on here. Thank you.
Leah Dajches: I feel like I learned so much talking with Jack. There really was a lot to unpack in terms of thinking about AI, but also taking a step back and refreshing our understanding of misinformation and disinformation. What I think is really cool for our listeners is how useful NewsGuard can be as a tool in our news literacy and media literacy toolboxes-- and Newsreel, too. So, I encourage everyone to check out NewsGuard and sign up to beta test Newsreel. Matt, what were some of your takeaways?
Matt Jordan: As somebody who's always convinced that there's nothing new under the sun, even when we're talking about something that is absolutely new, which is AI, it's interesting how AI intensifies and amplifies tendencies in the media ecosystem that are already there, exploiting weaknesses at a scale and at a speed that we haven't seen before. But in a way, what we're seeing are a lot of the same ways that misinformation spreads and profits on the internet.
Leah Dajches: That's it for this episode of News Over Noise. Our guest was Jack Brewster, Enterprise Editor for NewsGuard. To learn more, visit newsovernoise.org. I'm Leah Dajches.
Matt Jordan: And I'm Matt Jordan.
Leah Dajches: Until next time, stay well and well-informed.
Matt Jordan: News Over Noise is produced by the Penn State Donald P. Bellisario College of Communications and WPSU. This program has been funded by the Office of the Executive Vice President and Provost at Penn State and is part of the Penn State News Literacy Initiative.
[END OF TRANSCRIPT]
A NewsGuard look at satire
NewsGuard analysts found multiple false claims in the weeks prior to the 2024 presidential election that began innocently as satire but were deceptively spread by others as fact. See the commentary from NewsGuard's Rumor Check.
About our guest
Jack Brewster is Enterprise Editor for NewsGuard. Prior to working at NewsGuard, Brewster was a Fulbright scholar in Munich, Germany, conducting a research project about the role of journalists in the digital age. Previously, Brewster was a reporter at Forbes Magazine, covering politics, misinformation, and extremism. Brewster has also written about politics for Time Magazine, Newsweek, Vice News, and the New York Daily News.