
November 2023

Tinius Digest

Monthly reports on changes, trends, and developments in the media industry.


About Tinius Digest

Tinius Digest gives you an overview of reports on and analyses of developments in the media industry, published once a month. Here are our most important findings from this month.

Feel free to share the content with colleagues and use it in meetings and presentations.

Digital Resistance: The Rise of Anti-Immigration Networks Online

Oslo Metropolitan University and the Centre for Social Research in Norway have published a paper exploring the formation and internal dynamics of emerging online counterpublics among immigration critics.

Download the research paper.

Key Findings:

1. Seeking Alternative Communities

Individuals critical of immigration often face condemnation for their views within their regular social networks. Consequently, they turn to alternative online communities for recognition, belonging, and shared resistance against the perceived immigration crisis.

2. Development of a Counterpublic on the Web

The study reveals that those holding critical views on immigration form loosely organized online communities, or 'counterpublics.' These groups typically feature personally curated information streams and networks, offering members a sense of belonging, moral affirmation, and viewpoint-supportive information.

3. Balancing Voice and Risk on Social Media

Interviewees in the study use various strategies to balance expressing their opinions with avoiding social risks. This frequently involves selectively sharing content and meticulously controlling who can view and interact with their posts on platforms like Facebook.

The Transparency Gap in AI Foundation Models

Researchers at Stanford University, MIT, and Princeton University have evaluated the transparency of foundation models, including large language models (LLMs), and created a 'Foundation Model Transparency Index'.

Download the research paper.

Key Findings:

1. Need for Better Transparency

The developers behind the most widely used models are largely non-transparent about them. This concerns how the models are trained and built (upstream); the limitations and risks associated with a model; and its use, consequences, and moderation (downstream).

2. Wide Range of Transparency Levels

There is a big difference in how transparent AI companies are. Some do much better than others, falling into three groups: high scorers (like Meta and OpenAI), average scorers (like Google), and low scorers (like Amazon). Index scores (0-100): Llama 2 (54), BLOOMZ (53), GPT-4 (48), Stable Diffusion 2 (47), PaLM 2 (40), Claude 2 (36), Command (34), with Jurassic-2, Inflection-1, and Titan Text scoring between 0 and 25. (A sketch of indicator-based scoring of this kind follows these findings.)

3. Lack of Clarity in Key Areas

Transparency is particularly poor in essential areas like data handling and computing resources. Some companies have very low scores here, with important details often not shared or unclear.
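For context, an index like this aggregates a developer's score from a list of transparency indicators. Below is a minimal sketch of that style of scoring, assuming simple binary (met / not met) indicators; the indicator names are illustrative, not taken from the index itself.

```python
# Minimal sketch of indicator-based transparency scoring.
# Assumes binary (met / not met) indicators; the names are illustrative,
# not the actual indicators of the Foundation Model Transparency Index.
INDICATORS = [
    "training_data_disclosed",
    "compute_disclosed",
    "model_architecture_disclosed",
    "risk_evaluation_published",
    "usage_policy_published",
]

def transparency_score(assessment: dict[str, bool]) -> float:
    """Share of satisfied indicators, scaled to 0-100 points."""
    met = sum(assessment.get(name, False) for name in INDICATORS)
    return 100 * met / len(INDICATORS)

# Example: a developer disclosing only architecture and usage policy.
print(transparency_score({
    "model_architecture_disclosed": True,
    "usage_policy_published": True,
}))  # -> 40.0
```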

AI Faces Perceived as More Real Than Human Ones

Researchers from the UK, Canada, and Australia have investigated why AI-generated faces are perceived as more real than actual human faces.

Download the research paper.

Key Findings:

1. AI Hyperrealism in White AI Faces

AI-generated faces are often judged to be more human-like than real human faces. This effect, known as AI hyperrealism, is particularly strong for white AI faces, likely because face-generation algorithms are trained disproportionately on white faces.

2. Overconfidence in Error-Making

People who frequently mistook AI faces for human ones tended to be more confident in their judgments, reflecting the Dunning-Kruger effect. This effect occurs when individuals with limited knowledge or skill mistakenly believe they are more competent than they actually are.

3. Misinterpretation of Key Facial Attributes

The study used face-space theory and participant feedback to pinpoint the facial features that differentiate AI from human faces. Participants often misread these features, reinforcing the AI hyperrealism effect. Yet when the same features were fed into a machine-learning classifier, AI faces could be recognized accurately (see the sketch below).
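As a rough illustration of that last point, a simple classifier trained on measured facial attributes can separate AI-generated from human faces. The sketch below uses scikit-learn on synthetic placeholder data; the attribute values and their distributions are assumptions for illustration, not the study's data or features.

```python
# Sketch: detecting AI-generated faces from a few face-space attributes
# (e.g. proportionality, familiarity, memorability). The data here is
# synthetic: AI faces are simulated as slightly more "average-looking".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
human = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(100, 3))
ai = rng.normal(loc=[0.8, 0.8, 0.2], scale=1.0, size=(100, 3))

X = np.vstack([human, ai])
y = np.array([0] * 100 + [1] * 100)  # 0 = human, 1 = AI-generated

clf = LogisticRegression()
print(cross_val_score(clf, X, y, cv=5).mean())  # accuracy above chance
```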

Deepfake-Fuelled Scepticism

Researchers from Ireland, the UK, and South Africa have analyzed the consequences of deepfake videos in the Russian invasion of Ukraine.

Download the research paper.

Key Findings:

1. Deepfakes and Misinformation

The study found significant interactions between deepfakes and misinformation, especially in the context of the war in Ukraine.

2. Effects on Trust in News

The research presents the first practical qualitative evidence of the harm deepfakes can cause on social media. Incorrectly labelling authentic content as fake contributes to conspiracy theories and erodes trust in information and news.

3. Distrust in Authentic Content

Paradoxically, increased awareness of fake videos and images can undermine trust in authentic content. This scepticism often leads to epistemic distrust, where people doubt the authenticity of real videos and information.

Widespread Use of Publisher Content in AI Training Data

The News/Media Alliance has released a whitepaper focusing on how content from newspapers, magazines, and digital media publishers is utilized in training generative artificial intelligence systems.

Download the whitepaper.

Key Findings:

1. Extensive Use of News and Media Content

AI developers have extensively used content from news, magazines, and digital media to train Large Language Models (LLMs), which are integral to AI products and systems.

2. Disproportionate Weight of Publisher Content in Training Data

The training data for LLMs heavily favours publisher content: compared with Common Crawl's generic web collection, content from publishers is overrepresented by a factor of five to almost 100. (A sketch of this overrepresentation arithmetic follows these findings.)

3. Key Sources in Google's C4 Training Set

In Google's C4 training set, which is used for developing AI-powered search capabilities and products like Bard, news and digital media content are the third most common source category. Half of the top ten sites in this training set are news outlets.
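The overrepresentation multiple behind the second finding is the ratio of a domain's share of the curated training set to its share of a generic web crawl. A minimal sketch of that arithmetic, with made-up counts:

```python
# Sketch: how much a publisher domain is overrepresented in a curated
# training set relative to a generic web crawl. All counts are made up.

def overrepresentation(domain_curated: int, total_curated: int,
                       domain_crawl: int, total_crawl: int) -> float:
    """Ratio of the domain's share in the curated set to its share in the crawl."""
    return (domain_curated / total_curated) / (domain_crawl / total_crawl)

# A news domain making up 1% of a curated set but only 0.02% of the
# crawl is overrepresented by a factor of 50.
print(overrepresentation(1_000, 100_000, 2_000, 10_000_000))  # -> 50.0
```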

The Global Impact of Online Disinformation

UNESCO has published a report detailing the results of a global survey on the impact of online disinformation and hate speech.

Download the report.

Key Findings:

1. Social Media as a Primary Information Source

Social media is the primary information source worldwide and in nearly every country surveyed. Nonetheless, trust in social media is lower than trust in traditional media, particularly in countries with a higher Human Development Index (HDI).

2. Disinformation's Dominance on Social Media

Around 68 percent of internet users in the 16 countries surveyed identify social media as the most common platform for disinformation, ahead of online messaging apps (38 percent) and media websites/apps (20 percent). A significant majority, 85 percent, are concerned about the impact of disinformation, and the concern is even higher in lower-HDI countries. Furthermore, 87 percent believe disinformation has significantly affected their country's politics.

3. Prevalence of Hate Speech and Call for Action

67 percent of internet users report encountering hate speech online, with Facebook cited as the leading platform for such content. There is widespread consensus that governments, regulators, and social media platforms must proactively address disinformation and hate speech, particularly during elections.
