Pura Duniya
World | 26 February 2026

Ashwini Vaishnaw flags deepfakes, misinformation, disinformation ‘barrage’ as major threat to public trust

A senior minister has sounded the alarm on a wave of fabricated videos and false information that could erode confidence in institutions and leaders. The warning comes as synthetic media tools become easier to use, allowing anyone to create convincing videos of real people saying or doing things they never did.

Rising concern over synthetic media

Deepfake technology, which uses artificial intelligence to swap faces or alter speech, has moved from a novelty to a practical weapon. In recent months, several high‑profile incidents have shown how quickly a fabricated clip can spread on social platforms, prompting panic, influencing elections, or damaging reputations. The minister highlighted a "barrage" of such content that is already circulating in the country and warned that the problem is not confined to any single region.

Experts say the cost of creating a believable deepfake has dropped dramatically. Open‑source tools and cloud‑based AI models enable users with minimal technical skill to produce videos that pass casual scrutiny. When combined with the speed of social media sharing, false narratives can reach millions before fact‑checkers have a chance to intervene.

In response, the government announced a multi‑pronged strategy aimed at curbing the spread of manipulated media. The plan includes:

Regulatory measures – New guidelines will require online platforms to label synthetic content clearly and to remove harmful deepfakes within a set timeframe.
Technical solutions – Investment in AI‑driven detection tools that can flag altered videos for review.
Public awareness campaigns – Educational programs designed to teach citizens how to spot signs of manipulation and verify sources before sharing.
Legal framework – Strengthening existing laws to penalize the creation and distribution of malicious deepfakes, especially those targeting public officials or inciting violence.

The minister emphasized that cooperation with technology firms, civil society, and international partners is essential. "We cannot tackle this alone," he said, noting that many deepfake campaigns operate across borders, using servers located in multiple jurisdictions.

Why it matters globally

The issue is not limited to one nation. Democracies worldwide face similar threats, as deepfakes can be weaponized to undermine elections, sow discord, or manipulate markets. Recent reports from Europe and North America have documented coordinated campaigns that use fabricated videos to amplify extremist viewpoints or to discredit political opponents.

When citizens lose trust in the authenticity of visual media, the broader impact can be severe. Media outlets may become hesitant to publish video content, slowing the flow of information. Law enforcement agencies could find it harder to rely on video evidence, and businesses might suffer reputational damage from false claims.

International bodies are beginning to address the challenge. The United Nations has hosted panels on digital integrity, while the European Union's Digital Services Act, adopted in 2022, imposes content‑moderation and transparency obligations on large platforms that extend to synthetic media. The United States is exploring legislation that would require clear labeling of AI‑generated content.

Potential future impact

If unchecked, deepfakes could reshape public discourse. Imagine a scenario where a fabricated speech by a world leader sparks diplomatic tension, or a fake video of a health official spreads vaccine misinformation, leading to a public health crisis. The speed at which such content can be produced means that response mechanisms must be equally swift.

On the other hand, the same AI technology that creates deepfakes can also help detect them. Researchers are developing algorithms that analyze inconsistencies in facial movements, lighting, or audio cues to flag manipulated media. As detection tools improve, a cat‑and‑mouse game between creators and defenders is likely to continue.

Steps ahead for societies

The minister’s call to action underscores three key areas for immediate focus:

1. Education – Schools and community groups need curricula that teach digital literacy, helping people question what they see online.
2. Transparency – Platforms must adopt clear policies that distinguish authentic content from synthetic material, offering users easy ways to verify sources.
3. Collaboration – Governments, tech companies, and NGOs should share threat intelligence and best practices, creating a unified front against malicious deepfakes.

By addressing the problem on multiple fronts, societies can preserve trust in information ecosystems. While technology will continue to evolve, a proactive stance—combining regulation, innovation, and public awareness—offers the best chance to keep misinformation from overwhelming public confidence.

The warning from the minister highlights a growing reality: the line between real and fabricated media is blurring, and the consequences extend far beyond entertainment. As deepfake tools become more accessible, the risk of widespread misinformation rises, threatening democratic processes, public health, and social cohesion. A coordinated effort that blends policy, technology, and education is essential to safeguard the integrity of information and to maintain the public’s trust in the digital age.