AI-FLUENT Podcast

AI-Fluent is my new podcast where I talk with storytellers from around the world about journalism and storytelling in all its shapes and forms, its marriage with AI and other technology, and innovative thinking. Most of my guests are from the Global South - Latin America, Asia, the Middle East, and Africa - so it's a rare opportunity for those of you interested in the subject to hear people with different perspectives, different challenges, and different solutions to offer.

Listen on:

  • Apple Podcasts
  • Podbean App
  • Spotify
  • Amazon Music
  • iHeartRadio
  • PlayerFM
  • Podchaser
  • BoomPlay

Episodes

Monday Mar 10, 2025

In a new episode of AI-FLUENT, I am talking to Bruno Fávero, a journalist who became Director of Innovation at Aos Fatos, one of Brazil's leading fact-checking websites.
They have developed their own bots to tackle misinformation, along with tools that not only document digital lies and hate but also show how the "distorted algorithms" of apps and platforms contributed to their spread.
So, what can we all learn from Aos Fatos' business model with its focus on tech and fact-checking?
Main Things We've Discussed:
How to become a Director of Innovation without a tech background
Apart from investment in tech, what else contributes to Aos Fatos' success
Aos Fatos' business model and how they create their own tech products
"We create tools to solve a specific problem, not for the sake of creating a new tool"
What the Fatima bot is and how it helps to fight misinformation
What kind of relationship Aos Fatos has with its audience
How they try to reach out to Gen Z
The word "innovation" and how it has become empty
A new tool to fact-check live events/debates, etc.
Distorted algorithms and Aos Fatos' project called Golpeflix
Social media platforms, how they became unhealthy and how journalists can navigate them to distribute quality journalism
How the perception of facts and truth has changed in Brazil in recent years
How the media industry took people's trust for granted and now needs to earn it by being more transparent and diligent
Relationships with Meta and other Big Tech companies: liability, yet necessity? Can these relationships be re-negotiated?
How social media has contributed to the loss of trust in professional journalists
The biggest challenge Aos Fatos faces as a newsroom and Bruno as a Director of Innovation, and what the solutions to those challenges are
The biggest misconception of generative AI in the context of journalism/storytelling
Ways to use generative AI more creatively - creation of new user interfaces might be one of them
A lifehack from Bruno on how to use smaller generative AI models
The future of journalism

Saturday Mar 01, 2025

Tattle is one of India's pioneering civic tech organisations, using AI to combat online gender-based violence.
In this new episode of AI-FLUENT, Tattle's co-founder Tarunima Prabhakar shares insights into Tattle's innovative projects, including Uli, which helps women navigate harmful online content, and their Deepfake Analysis Unit deployed during India's recent elections. 
We explore the complex challenges of making technology accessible to less tech-savvy women while balancing relationships with Big Tech platforms and governments. 
Main Topics We Discussed in This Episode:
What is the role of civic tech organisations in India?
One of Tattle's first projects - Uli - was aimed at helping women deal with harmful and offensive online content. How does Uli work?
And what about rural, less tech-savvy women? How can they be helped?
How does Tattle leverage AI and machine learning to identify and combat abusive or harmful online content? What are the key technical challenges in building such systems?
Another of Tattle's projects is called the Deepfake Analysis Unit. It was introduced during India's 2024 elections and continues to operate today. They collaborate on it with fact-checkers and forensic experts. How does it work?
How does Tattle work with social media platforms or other online spaces to implement its tools and what are the biggest challenges in getting these platforms to adopt Tattle's solutions?
On relationships with Big Tech and how they can be re-imagined/re-negotiated
On collusion between Big Tech and governments
There's a risk that tech solutions like Uli may only reach a small, tech-savvy subset of women. How does Tattle ensure it doesn't create a bubble that excludes those who need these tools the most?
Where do they see the future of AI in addressing harmful online content? Are there emerging technologies or approaches that could revolutionise this space?
On the most painful lessons learnt as co-founder of a civic tech organisation
Lifehack from Tarunima for those who want to start their own civic tech startup.
What are Tarunima's personal criteria for impact and success?

Sunday Feb 23, 2025

We usually talk about biased data and inaccurate interpretations regarding technology that comes from the East - China, for example. Yet we rarely discuss the lack of transparency and biased data produced by Western tech companies.
So it was refreshing to have this conversation with Rana Arafat, Assistant Professor in Digital Journalism at City, University of London.
How are Arab newsrooms, especially in Egypt, Lebanon and the UAE, adapting to generative AI technologies? How do the governments of these countries regulate and control AI technology, and how do Big Tech companies operate in authoritarian countries in the Middle East?
In this new episode of AI-FLUENT, we also discussed Indian Elections and the Israel-Gaza War through the lens of AI manipulation, disinformation and algorithmic bias.
 
Main Topics We Discussed in This Episode:
The most surprising discovery for Rana whilst researching AI manipulation, generative disinformation, and algorithmic bias in the Global South
How technology in authoritarian countries oppresses people more than empowers them
The governments' involvement in regulating and controlling generative AI technology. The Saudi government, for example, has its own chatbot, as does the UAE government. How unbiased is their data?
The importance of cross-pollination between journalists, developers, product designers, etc.
Rana specifically examined Egyptian, Lebanese and United Arab Emirates newsrooms on three levels: narrative, practice and technological infrastructure. What did she discover about all three levels?
How Big Tech companies operate in authoritarian countries in the Middle East
Regarding algorithmic bias - how Rana researched it and her most important findings
Algorithms as a form of censorship
What shadowbanning is and how it was used during the Israel-Palestine war
How pro-Palestinian content creators tried to evade social media algorithms
The Indian elections and how generative AI technology was used and misused during the 2024 elections
How we as a media industry and society can protect ourselves from technology which, as we see, can be used as a weapon of propaganda and misinformation. What are possible solutions?
Regarding the teaching of future journalists - what's missing in the current education system? Is it keeping pace with all the rapid changes in the tech and media industry?

Saturday Feb 15, 2025

Marius Dragomir, the Founding Director of the Media and Journalism Research Centre, shares with me the most surprising findings from his recent research on propaganda narratives, as well as the revealing discoveries about ownership and financial aspects of the 100 AI tools most used by journalists.
Remember that 67 % of these 100 tools lack critical data on ownership and finances. Check the tools you are using.
How can the media industry become less dependent on Big Tech? One of the possible solutions lies with audiences and the private sector. We are discussing the case of the Czech Republic – what makes it so special?
Main Things We've Discussed in This Episode:
One of the Media and Journalism Research Centre's recent research articles was titled "Ownership and Financial Insights of AI Used in Journalism", and we are discussing its findings. They looked into the 100 AI tools most used by journalists and found that only 33 % of AI tool companies demonstrate sufficient transparency, with 67 % lacking critical data on ownership, finances and other basic information.
The research article mentions one AI-powered fact-checking tool called Clarity, owned by former Israeli military personnel. The owners promoted it as the best fact-checking tool for the Israeli-Palestinian war. How might this opacity affect journalistic independence and credibility in the long term?
The impact technology is having on fact-checking in general
The BBC's research on AI assistants' news accuracy: its tests of ChatGPT, Copilot, Gemini, and Perplexity exposed major flaws. 51 % of AI responses had significant issues, 19 % introduced errors when citing the BBC, and 13 % misquoted or made up BBC content entirely.
The Centre's research on disinformation and propaganda narratives in different parts of the world. Marius shares trends and examples that surprised him in terms of how these narratives are being shaped and distributed.
Big Tech and media - a healthy or toxic relationship?
How Big Tech turned from a liberating tool to an oppressive one, from a solution to a problem for many journalists
Marius shares his observations on how Big Tech companies work closely with governments, prioritising the government's information on search engines and social media. We are talking about Europe here, not only authoritarian regimes.
What are the solutions - how can the media industry become less dependent on Big Tech? One of the possible solutions lies with audiences and the private sector. The case of the Czech Republic.
On regulations and ethics: what would be Marius's top three AI-related regulations that he would issue immediately?
When regulations work and when they don't
What media organisations need to change to survive and stay relevant
On public service journalism. What is the biggest challenge that public service journalism in Europe faces today? And what are the possible solutions?
Talking about the evolving media ecosystems in Europe, Marius came up with four distinct models: the Corporate model, the Public Interest model, the Captured model (high government control) and the Atomised model (journalism for sale, private interests driven).
How relevant are all of these models in a world where generative AI is becoming a new storytelling medium? In that world where every viewer is an audience of one - what will the perception of facts as such be? And how might that type of storytelling medium change the perception of non-fiction reporting?
Given the trend towards hyper-personalised storytelling through AI, how might this affect the traditional role of public service media in creating shared national conversations and cultural touchpoints? Are we risking further social fragmentation?
What will replace the increasingly commercialised and disengaging social media?

Friday Feb 07, 2025

Do you believe you care about facts more than people in India do? For a split second, did you notice any bias in your quick mental response?
Whom do you trust more when it comes to news: a well-established media brand or your closest friend who runs a popular current affairs YouTube channel? Who do you think is more likely to spread misinformation?
As Meta and other tech giants show their allegiance to Trump's administration and become increasingly partisan, will financial connections with these platforms become a reputational liability for the media industry?
We discuss these and many other technology, storytelling, and misinformation-related issues in this episode of AI-FLUENT with Rakesh Dubbudu, founder and CEO of India's fact-checking website, Factly.
Main Topics We Discussed in This Episode:
How much do people in India care about facts?
The significant silent population - they need fact-checkers, not people on the extremes
The relationship between fact-checking websites in India: friends, competitors or enemies?
DeepSeek: what signals does this AI tool send to the media industry?
Censorship in Indian media
The most frequent questions Factly's journalists ask themselves about generative AI
Why people with generalist skills who can connect the dots will shine in the world of AI
The biggest misconception about AI in the storytelling industry
The biggest challenge for Factly in general and for Rakesh as CEO in particular
Factly's relationship with Meta as a fact-checking partner after Mark Zuckerberg announced the end of fact-checking on Facebook in the US, with likely further expansion
As Meta and other tech giants show their allegiance to Trump's administration and become increasingly partisan, will financial connections with them become a reputational liability for the media industry?
How Factly is working to decrease its reliance on external tech platforms
How Factly ensures unbiased data input when building their own AI tools
How they maintain and measure public trust in their fact-checking operations amid declining trust in traditional media globally, particularly when handling politically sensitive topics in India
The specific patterns and trends of misinformation in India
How misinformation can be addressed holistically
Factly's revenue model and their use of AI tools to reimagine their business model
The book Rakesh is currently reading to better understand generative AI
 

Saturday Feb 01, 2025

Why do we choose to read or watch something - are we manipulated by algorithms? Do we have any cognitive independence? How and where will we receive news in the near future? In a world where the majority of content will be created by AI, how will we know whom to trust?
And what does neuroscience tell us about all of that? In a nutshell, these are the questions we tried to answer in this episode with Mariano Blejman, an Argentinian media entrepreneur and the founder of the Media Party festival in Buenos Aires.
Main Topics We Discussed in This Episode:
Synthetic democracies and the role of AI in creating them
What neuroscience tells us about why we choose certain content
When truth becomes as scarce as water, it will start creating value
The broken business model of the media: opportunities to fix it or rebuild from scratch
In a world where the majority of content will be created by AI, how will we know whom to trust
Haven't heard of a BBC car? Well, it doesn't exist yet, but it might be one of the ways people receive their news in the future
Narcissism is one of the media industry's problems
How to use neuroscience to understand news consumption and audience behaviour in general. Mariano refers to Annemarie Dooling's research
Mariano explains what his startup is doing to help newsrooms understand their audiences' behaviour. He is fascinated with neuroscience and keeps returning to it. I am becoming increasingly fascinated with it too :-)
Mariano's predictions on the future of news consumption: content will flow to you without being asked. Your behaviour will activate a prompt, rather than you asking for a prompt - and other predictions
How will the relationship between media and tech industries develop: marriage, divorce, or a never-ending affair?
Examples from Latin America of how journalists use AI tools to avoid censorship
How journalists can have more agency to influence the way generative AI technology is developing
Mad cow world
And about neuroscience again

Thursday Jan 23, 2025

Technology has value but doesn't have values, Madhav Chinnappa reminds us. It's up to us humans, who have values, to define how the technology will be used and whether it will bring more good than evil.
The majority of us are now in a so-called efficiency phase of using AI tools, thinking mainly about how to optimise things. However, those who will jump to a creativity phase more quickly will have an advantage. Ask yourself which phase you are in now. And if you are still only in the first one, you really should be thinking about how to move on to the second, and quickly.
Main Topics We Discussed in This Episode:
How we use AI without realising we are doing it
From the electric bike phase of AI to the jet airplane phase - and how the jet airplane phase will transform storytelling
Free tools versus paid tools
Content creation versus business model
Licensing and how platforms can compensate content creators: what is fair?
Is Human Native a sort of eBay where the rights holders and AI developers meet?
Extractive versus sustaining technology
Interest in non-English languages and how you as a non-English speaker/data owner can use this as an advantage
Common patterns of how newsrooms use or misuse generative AI
AI won't replace you - it will augment you
Misconception of traffic and how some newsrooms will pay a price for that
What aspects of using generative AI we are not thinking about
Ethical considerations and risks
Efficiency play versus creativity, audience-focused play
How AI technology changes your audience's behaviour and what it means for the journalism industry
The audience's view of AI will not be defined by news; it will be defined by other parts of their lives
Should we label AI-generated or human-created content, or both?
Trust and interdependency of different industries and how they use generative AI
Do generative AI tools reduce the amount of intention and meaning in the world?
Do we as journalists have agency to influence how AI technology is developing, and if we feel we don't, what can we do to gain more agency?
 

Friday Jan 17, 2025

As a media organisation, they stand on three pillars: journalism, technology and the wisdom of crowds. From the very beginning, they have worked at the intersection of journalism, technology and community engagement.
It was co-founded by a woman who was awarded the Nobel Peace Prize for safeguarding freedom of expression and for her efforts to address corruption in her country.
They were the first media organisation in their country to publish guidelines on AI use in 2023.
They were among ten digital media organisations selected by OpenAI to participate in "innovative experiments around deliberative technologies".
By now, I am sure you have guessed that in a new episode of the AI-FLUENT podcast, I am talking to Gemma Mendoza. Along with Maria Ressa and other journalists, she was one of the co-founders of Rappler, the most prominent Filipino news website. Now she leads digital innovation and disinformation research at Rappler.
Main Topics We Discussed In This Episode:
How to balance the speed and efficiency that AI offers with Rappler's commitment to slow journalism and deep investigative reporting
AI's impact on the relationship between journalists and their sources
The biggest misconception about generative AI in journalism and the most surprising aspects of its development
Rappler—one of the most famous Filipino news websites in the world—stands on three pillars: journalism, technology and the wisdom of crowds. How does their newsroom rely on this wisdom?
How they utilise AI at the intersection of journalism, technology and community engagement
At Rappler, they create their own AI tools in-house. What determines whether they create their own tool rather than use existing market solutions? How do they address ethical implications when using their own data, including ensuring it isn't biased?
The role and input of Gen Z journalists in Rappler's newsroom
Required changes in journalism schools' educational systems to better prepare future journalists for the new reality
Gemma, who leads Rappler's efforts to address disinformation in digital media, shares recent examples of how AI tools have made their work more efficient
Patterns and peculiarities in how people use deepfake technologies
In 2023, Rappler became the first Filipino newsroom to publish guidelines on AI use. Gemma's recommendations for other newsrooms worldwide on approaching AI—what are the crucial aspects not to overlook?
The future of journalism in three words
Gemma's case for journalism: why should a 15-year-old become a journalist in this AI-driven world?

Thursday Jan 09, 2025

Are you a small newsroom or a one-person content creator, and are you, like this episode's guest, happy to fail fast and fail often? You might not have a designated tech team, yet you want to use AI-powered tools to speed up your work and solve certain problems.
Are you tempted to use AI to produce even more content? Pause here and reconsider: might there be better ways to use generative AI to advance your work and develop a deeper understanding of those whom you serve?
We are discussing all of this and more with Tshepo Tshabalala from LSE's JournalismAI global initiative, which helps small newsrooms devise ideas and solutions for using AI wisely and responsibly. I started this conversation by asking Tshepo what kinds of projects they work on with different newsrooms around the world.
 
Main Topics We Discussed In This Episode:
 
Why is the JournalismAI network beneficial for small newsrooms applying for its fellowships?
Quick solutions versus long-term AI strategy
Examples of AI-powered solutions from different newsrooms
Common misperceptions of AI-powered tools in newsrooms JournalismAI works with
Why does your audience come (or not) to you? Do you really know why?
Ethical implications of using AI in a newsroom: guidelines or no guidelines?
Creativity versus meaning: does generative AI reduce us to recyclers of meaning rather than creators of it?
Example of a "wow project" that Tshepo has come across recently
Blitz questions

Friday Dec 20, 2024

If journalism is valuable, as many of us think, why don't people pay for it? That's the question Alan returned to several times during our conversation. And he actually gave an answer - an existential answer - to this question.
What's the purpose of your newsroom? Why do you exist as a media organisation? What audience needs do you serve? If you are able to answer these questions honestly without any corporate fluff, you come closer to answering the money question: why should people pay for your content?
'We have gone past peak content,' Alan says. Nobody wakes up in the morning wanting more content - people want to get stuff done, they want a sense of community, to learn something new, to have fun, and so on. The problem, as Alan points out, is that many newsrooms still operate as if people wake up wanting content for the sake of content. Wake up!
Main Topics We Discussed in This Episode:
How AI Enables Media Solopreneurs
Supply versus Demand Side Thinking: How Does It Work in Journalism?
How to Use AI to Get a Deeper Understanding of What Audiences Want
No One Wakes Up in the Morning Wanting More Content - Why Do We Create So Much of It Then?
What Is a Better Use of AI-powered Tools, Apart from Creating Content?
How to Stand Out in a Crowded Content Market Wherever You Are
We All Pay for Stuff and Services, Why Not for News?
"What People Buy Is Very Different from What People Get, or What People Want"
Creativity versus Recycling Old Narratives
AI and the Nature of Originality
How to Measure AI Impact on Revenue Generation, Audience Engagement and Credibility
Where Does Obsession with Optimising Everything Lead Us?
AI as a Co-thinker and Co-founder of Your Potential Start-up
Lifehack from Alan on Using AI in the Context of Storytelling
On the Future of Journalism
Come back and listen to us on January 9, 2025 for an all-new episode.

Copyright 2024 All rights reserved.
