Why Internet Governance Matters

I used to think the Internet was just a place to search, scroll, and binge, something that “just works.” Then I started meeting people who actually build and protect the thing, and I realised the Internet is less a magic cloud and more a living system that needs careful tending. The rules, decisions, and institutions behind that tending are what we call Internet governance, and they quietly shape how our lives look online: what we can say, how our data is used, whether a small business can sell to the world, or whether a whole region gets cut off by a national firewall. Internet governance is not a faraway policy club; it’s the scaffolding behind every link you click.


Think about the simple things: domain names (the addresses you type), IP addresses (the numbers machines use to find each other), and the protocols that let your phone talk to a website. These things don’t manage themselves; organisations coordinate them so your blog loads, your payments go through, and your messages arrive. ICANN, for example, coordinates those unique identifiers and helps keep more than a billion websites reachable for roughly 5.8 billion Internet users worldwide. That’s not a small administrative detail; it’s the difference between an Internet that works for everyone and a fractured set of national networks.
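The name-to-address mapping described above can be sketched in a few lines. This is a toy, in-memory resolver, not real DNS (which walks a globally coordinated hierarchy of root, TLD, and authoritative servers), and the hostnames and addresses in the table are hypothetical placeholders:

```python
import ipaddress

# Toy illustration of the name -> address mapping that real DNS performs.
# This table is hypothetical; real resolution consults a global,
# hierarchically coordinated namespace rather than a local dict.
TOY_DNS_TABLE = {
    "example.org": "93.184.216.34",
    "blog.example.org": "203.0.113.7",
}

def resolve(name: str) -> ipaddress.IPv4Address:
    """Look up a hostname in the toy table and return a validated IP address."""
    try:
        return ipaddress.IPv4Address(TOY_DNS_TABLE[name])
    except KeyError:
        raise LookupError(f"no address registered for {name!r}")

print(resolve("example.org"))  # prints 93.184.216.34
```

The point the toy makes is coordination: the table only works because exactly one party controls each name, which is what ICANN-style identifier governance guarantees at a global scale.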

When governance stumbles or the wrong people get the loudest voice, the consequences are immediate and visible. “Splinternet” is not just a trendy phrase; it’s a real risk where different countries end up with their own closed-off internets. If that happens, the web ceases to be a global commons. It becomes a patchwork of walled gardens: one Internet for big-tech-dominated markets, another for heavily regulated states, and a host of isolated networks in between. This fragmentation isn’t hypothetical; research and debates in global forums document the trend and warn of its costs for trade, learning, and civic life.

Security is another place where governance shows its teeth. The web is attacked every day, from phishing and ransomware to nation-state operations. The economic cost is staggering: researchers and industry trackers have projected global cybercrime losses to reach eye-watering sums (a commonly cited projection pegs it at around $10.5 trillion annually by 2025), and governments and companies regularly report rising losses and fraud volumes. Those are not abstract numbers; they translate to blocked transactions, stolen savings, disrupted hospitals, and silenced activists. Governance matters because it is how countries, technical experts, and companies agree on standards, incident response, and norms to reduce harm. For instance, international groups that recommended norms against using cyberspace to cause large-scale civilian harm have pushed the conversation toward restraint and accountability.

Rights and inclusion are political but also deeply personal. The Internet should be a place to learn, to organize, to make money, to meet people, but those opportunities evaporate when narrow interests capture governance. When only corporations and powerful states shape the rules, privacy gets eroded, surveillance increases, and marginalised voices get drowned out. That’s why multistakeholder spaces, where civil society, technical experts, governments, businesses, and youth groups come together, matter. They’re imperfect and messy, but they keep decision-making from becoming a closed playbook that benefits the already powerful. UNESCO and other global institutions frame the Internet as a public resource with the potential to build inclusive knowledge societies, which is a nice way of saying: the Internet is too important to be left to the highest bidder.

There are practical, everyday examples that make all this obvious. Look at debates over encryption: when a government asks platforms for message traceability, the security community warns that weakening end-to-end encryption would make millions of users—and critical systems—less safe. Look at the Cambridge Analytica scandal: a single instance of data misuse shook public trust and prompted governments to enact privacy protections. Look at national bans and restrictions on apps and platforms: they show how data-flow rules and national security claims can suddenly change which services you can use. These are governance battles, and they decide whether the Internet is an engine of opportunity or a tool for control.

At the same time, governance is the mechanism that keeps innovation alive. The Internet grew because it was open: anyone could build, try, fail, and scale. Standard-setting groups, technical communities, and open protocols make that possible. But as tech like AI, biometric ID, and large-scale data analytics advance, society needs guardrails: not to stop progress but to steer it so that benefits are shared and harms minimised. Good governance tries to strike that balance: it gives startups a chance to compete while preventing unchecked monopoly power and predatory practices that stamp out newcomer innovation.

If you’re wondering where you fit in, you can be at the table. Internet governance isn’t a closed thing reserved for ministers or a small set of engineers. The whole model is supposed to be multistakeholder, and that includes youth networks, community groups, NGOs, researchers, and everyday users. Forums like the Internet Governance Forum (IGF), regional IGFs, and ICANN meetings exist precisely so more voices can shape the rules that affect them. If you’re passionate about digital rights, access, women’s safety online, or fair data practices, those are real places to make a difference.


We also need to be honest about the complex parts. Governance legitimacy is a moving target: who decides which voices matter? Enforcement across borders is messy because laws are national while networks are global. Technology evolves faster than policy, meaning norms are always catching up, and marginalised communities remain underrepresented in international fora. These are solvable problems, but they require persistent engagement rather than periodic outrage when a scandal breaks.


At the end of the day, Internet governance is not about control for its own sake; it’s about coordination and protection. It’s about making sure that the infrastructure that lets a student in a small town attend an online class, a small business sell to new markets, or an activist safely document human rights abuses, keeps working. It’s about setting norms so a state or an attacker can’t simply switch off large parts of the web or weaponise it without consequences. It’s about protecting rights while enabling innovation.


So why should you care? Because the future of the Internet will be shaped by whoever shows up: if users don’t participate, corporations and states will fill the space and write the rules. If people like us do show up, through local advocacy, by joining civil-society delegations, by following and contributing to public consultations, the Internet can remain open, secure, and fair. It’s not glamorous work, and it rarely trends on social media. Still, it’s the kind of slow, steady action that decides whether the Internet serves everyone or just the loudest and wealthiest. The outcome is not inevitable; it will be decided in rooms, on mailing lists, in meetings, and in courts. If you want the Internet to be a public resource that helps people, not a closed marketplace or a surveillance grid, start caring about governance now.

Should India Regulate Influencers the Chinese Way?

Should India’s DPDP law ask influencers for credentials? It’s a question worth your time. In the Internet era, nearly everyone on social media is influencing someone in some way, so how do we protect users, or restrain influencers, when misinformation is being spread?

What China Did

China introduced sweeping rules for online influencers. Those who provide advice in fields such as finance, medicine, law, or education must now show formal qualifications (a professional licence, a degree, or certification) before hosting content on platforms. Platforms must verify credentials and tag AI-generated content, with penalties for violations.

That move has its defenders, who argue it will tame misinformation, protect vulnerable citizens, and stop amateurs from posing as experts. And it has its critics, who warn of stifling speech, of excluding voices grounded in lived experience rather than formal certification, and of creating gatekeepers of knowledge.

India’s Digital Landscape: Familiar, Yet Different

In India, the DPDP law centres on how personal data is collected, processed, and protected: how consent is handled, how platforms are held accountable, and how user privacy is safeguarded. What it doesn’t do is ask: “Who is allowed to give advice on social media?”

We already have regulations on advertising and endorsements (via the Advertising Standards Council of India and consumer protection laws). Still, no law requires that an influencer with a million followers discussing investment strategies hold a chartered accountant’s licence, or that a “wellness guru” discussing treatment options have a medical degree.

Why This Matters — Especially in India

India is uniquely positioned as a vast, multilingual, multicultural digital environment. Many voices offering advice come from informal expertise, community networks, regional languages, and lived experience rather than Ivy League credentials. That’s a strength: it’s inclusive, diverse, rooted in context. But it’s also a vulnerability. We’ve all seen it: a viral post promising a miracle cure, a “hot tip” on an investment gone wrong. The potential for real harm is there.

So we’re faced with a dilemma: how to protect citizens from harmful advice without extinguishing the spark of diverse voices that make India’s digital discourse rich?

A Middle Path for India

Rather than a blanket credential-requirement like China’s, India might consider a tiered approach:

  • Require influencers who explicitly claim “professional expertise” in regulated fields (medicine, law, financial advice) to display or link to verified credentials (a licence or certification), or to state clearly that they are not certified.
  • Require stronger, clearer disclosures when content is paid-for, AI-generated, or high-risk (e.g., “This is for information only”, “Not certified advice”).
  • Increase platform responsibility: when content is potentially harmful, the platform should enable flagging and swift removal of misleading advice.
  • Use sectoral regulators, for example, the Securities and Exchange Board of India (SEBI) for investment advice, the Food Safety and Standards Authority of India (FSSAI) for nutrition claims, and the Reserve Bank of India (RBI) for financial advice.

The Take-Away

We must ask ourselves: What kind of Internet do we want? One where only officially credentialed voices are heard? Or one where diverse voices can speak, but with transparency, accountability, and user awareness?

India has the chance to learn from China’s experiment and also to chart its own path, one respectful of constitutional freedoms, grounded in our pluralism, yet rigorous enough to protect people from digital harm. If we tighten credentials, we risk excluding grassroots voices. If we do nothing, we risk letting misinformation flourish unchecked.


A Thought to Leave You With

India’s challenge isn’t just to prevent misinformation; it’s to protect digital expression without diluting its diversity. The DPDP Act offers a foundation for digital trust. Still, as we enter an age where social media shapes real-world decisions, we must ask: should the Internet reward only the “qualified,” or can wisdom still come from experience, storytelling, and community voices?

“Is Public Interest Technology Just a Buzzword?”


I was introduced to terms like Internet governance four years ago. Since then, one new word after another has surfaced in the tech-policy space. When I came across the phrase public interest technology, I found myself pausing: How exactly does this connect to what we often see around us? And in a world where almost everything is built around making a profit, can this idea really find a place?

Public interest technology refers to the study and application of technology expertise to advance the public interest: to generate benefits for society and promote the public good. It doesn’t mean simply building the “next app” for the mass market, but asking: Who is this technology for? What impact does it have beyond revenue? How are communities being engaged so that the technology doesn’t inadvertently exclude or harm?

When I say “inclusion and equity” in this context, I do not mean only gender or rural-versus-urban divides (though both matter). I’m talking about a broader idea: technology designed and governed so that anyone, regardless of income, language, disability, digital-skill level, location, or other circumstance, has the capacity and opportunity to engage, benefit, and not be left behind. For example: accessibility for persons with disabilities, multilingual interfaces, low-bandwidth versions of apps, and inclusive design that anticipates users with low digital literacy. These dimensions matter because the barriers to participation in tech are many and intersecting.

From one perspective, the appeal is compelling. Imagine designing a digital service with accessibility built in, community-driven features, fairness in algorithms, and transparency in how data is used. When implemented effectively, technology can expand access to essential services (such as education and healthcare), empower historically marginalized groups, and help reduce inequality. Technology becomes not just a tool for commerce, but a tool for inclusion and justice.

Yet from another view, the feasibility question looms large. Many tech firms, investors, and start-ups operate under business models that expect scale, monetisation, rapid growth, and market competition. Designing for inclusion or public interest often involves additional costs, a slower rollout, more engagement with user communities, localization, and more careful governance. The incentives may not align: underserved populations may be more complex to serve, with less immediate profit potential, and inclusive design may not be rewarded in the same way as a feature that boosts user numbers or ad revenue. There is also a measurement problem: how do you capture “fairness”, “access”, or “dignity” as business metrics? Without strong alignment of incentives, inclusive features risk being sidelined.

So where do these two paths meet? Where does the promise of public interest technology sit in a profit-driven world? One possibility is when value is redefined: if inclusive design becomes a source of new markets (for example, by reaching underserved users), or if social license, trust, and reduced risk become integral to the business strategy. Another is when business models become hybrid, combining private-sector revenue, public funding, and philanthropic support. Regulations, procurement policies, and public-sector contracts that demand or reward inclusive features also shift incentives. And when the ecosystem builds the supporting infrastructure, norms, and tools, inclusive design becomes less costly, easier, and more standardized.


When I think of my context (India and the Global South), the stakes become quite concrete. The digital divide is real. Multilingual diversity, rural-urban gaps, access issues: these are not abstract. If technology is designed with inclusion in mind, huge populations can be brought into digital services and new users reached. But the pressures of cost, scalability, and profit motive remain. The question becomes: How can inclusive design be effectively integrated into a business strategy, rather than remaining a side project? What role must policy, public procurement, regulation, and community engagement play so that inclusion is viable, not optional? How do technologists, designers, business leaders, and policymakers align so that public interest and profit aren’t at odds but intertwined?

In what ways can we expect technology companies to embed public interest values when the ecosystem rewards speed, scale, and monetisation? Conversely, what does a truly public-interest-oriented technology initiative look like when it must survive and sustain itself in a market economy? Perhaps the intersection lies not in choosing one side over the other, but in asking how the structures around technology (business models, funding mechanisms, regulation, community participation) need to shift so that public interest becomes part of the equation rather than an afterthought.

I’m not claiming I have the answer. But I believe this is one of the questions we must keep alive: Can technology designed for profit also truly serve inclusion and equity — and if so, under what conditions?

The cure becomes the disease: Karnataka’s Misinformation Bill

1) The Final Recommendation:


As a policy analyst, I recommend opposing the current draft of the Karnataka Misinformation and Fake News (Prohibition) Bill, 2025. While the government’s intent to combat misinformation is laudable, the bill ironically becomes a threat to democracy rather than its safeguard. It undermines constitutional freedoms, lacks checks and balances, and risks becoming a tool of censorship rather than a protector of truth.

CORE DEFECTS:
– Violates Article 19(1)(a) through vague and broad definitions that fail the Supreme Court’s three-pronged test: narrow tailoring, connection to harm, and judicial oversight
– Creates administrative and federal conflicts resulting in over-censorship
– Lacks judicial oversight or proportionality safeguards as per Supreme Court standards


2) THE CRISIS: Constitutional Catastrophe

This bill is a direct attack on Article 19(1)(a) of the Indian Constitution. Government officers are conferred unparalleled authority to decide truth and falsehood with minimal judicial supervision. Historical precedent has warned us: Section 66A of the IT Act was invalidated by the Supreme Court in Shreya Singhal v. Union of India for similar constitutional flaws.

The law breaks every rule the Supreme Court has set for speech limits:
– Definitions need to be narrow and exact,
– Enforcement needs to involve judicial review, and
– Punishment needs to be proportional

The bill fails on all counts. It creates a risk of criminal prosecution before any meaningful hearing and allows content to be removed within hours by administrative order. Even the appeal procedures lack specific timeframes or meaningful review. This is administrative tyranny disguised as law enforcement.

3) Evidence and Supporting Arguments (Justifications for Opposition)

  •  THE CAUSE OF CHAOS: Definitional Disaster

    The question is: what is truth? The bill leaves this critical issue vague. Information is contextual (past, present, future), it evolves, and much of it is highly technical, requiring expert-level knowledge. Yet the bill rests on the presumption that public servants have expertise across every domain of public life.

The bill defines “misinformation” and “fake news” so broadly that it could criminalise ordinary speech in a variety of contexts. For example, the initial reporting of the Balasore train tragedy, in which casualty figures and causes were later corrected, could be prosecutable under this bill. A citizen who questioned official statistics using disputed figures they found online could end up before a criminal court.

  • THE EXECUTION FLAW: Enforcement Nightmare

Karnataka’s administrative machinery lacks both the technical infrastructure and the human resources for large-scale content moderation. The volume of digital content needing evaluation would overwhelm any such system, risking either arbitrary action or complete failure of the mechanism. Germany’s NetzDG experience is instructive: the law required social media platforms to delete illegal content quickly or face hefty fines, and because the rules were unclear, platforms frequently took down even legal posts to avoid liability. The result was excessive censorship and restricted free speech.

  • THE SYSTEMIC RIFT: Federalism Fracture

Think about it. Online communications do not stop at state or even national borders, so which law applies when a tweet is posted from Delhi but consumed in Bengaluru? What happens if different states take different approaches to the same tweet? How are platforms supposed to navigate this landscape without violating one law or another? This legislation adds further legal ambiguity to an already complex system.

Digital communications in India are already governed by central legislation such as the IT Act, 2000. By layering territorial rules on top, Karnataka’s bill adds another level of complexity: it will not align neatly with existing national law, nor necessarily with whatever policies other states adopt. The question remains: who is supposed to keep track of all this?

  • ‘THE GLOBAL LESSONS’: International Warning

Worldwide experience with misinformation legislation provides clear warnings. Singapore’s POFMA, for instance, was criticised for curbing legitimate political discourse. In Thailand, the Computer Crime Act was broadened beyond its original scope and effectively silenced critics of the government. Even in democratic countries, broad misinformation laws have a troubling tendency to become instruments of censorship.

  • ‘THE FUNDAMENTAL MISUNDERSTANDING’: Democracy Misread

The bill’s binary understanding of information, true or false, allowed or controlled, is woefully incapable of handling the complexity of real information. During COVID-19, health authorities updated their official guidance at least 18 times. Under this bill’s logic, would health authorities and media channels face criminal charges for providing information that was later refined?

Facts update, official guidance changes, and early reports are refined as more information gradually emerges. Criminalising that process undermines the correction and transparency a society needs.


4) ‘THE POSSIBLE ALTERNATIVES’: Smarter Solutions Exist

Rather than take this punitive approach, Karnataka could replace it with a democratic capacity-building exercise:

  • Invest in media literacy programs that equip citizens to identify misinformation on their own. Finland, for example, consistently ranks at or near the top of fake-news resilience indices; it has pioneered national media literacy programs and integrated them into schools, teaching students to assess online information critically.
  • Adopt other democratic alternatives: mandatory transparency requirements for online platforms (similar in spirit to those in the EU Digital Services Act), support for independent fact-checking, and public funding for digital education campaigns.

Each of these alternatives helps address the underlying issues, lack of awareness, opaque algorithms, and declining civic trust, while upholding the right to freedom of expression.


5) ‘THE SLIPPERY SLOPE’: Power Without Limits

When we empower the government to define “truth” and punish what it classifies as “lies,” we invite serious risks of governmental overreach. Today’s anti-misinformation law can easily become tomorrow’s vehicle for suppressing dissent. India’s own history offers a cautionary tale: during the Emergency (1975–77), the government exercised near-unlimited powers to censor the media, shut down criticism, and control public discourse in the name of the “national interest.” More recently, authorities continued to arrest individuals for banal online posts under Section 66A of the IT Act, years after it was held unconstitutional in 2015.

These trends demonstrate that once powers to control information are granted, in any form, they rarely remain confined to their original purpose; instead, they expand, often against the voices the state finds inconvenient.

6) ‘THE STAKEHOLDER IMPACT’: Silencing Society
Civil society organisations are in danger of losing critical space for legitimate advocacy when vague definitions of “misinformation” are turned against routine communication. Journalists face a situation in which they can no longer uphold professional ethics and assure their own safety at the same time. Citizens will begin to self-censor, worried that one tweet or one blog post could trigger arbitrary sanctions. Even academic researchers might avoid critical topics unless they are confident their findings will not conflict with the official narrative. This is a comprehensive chilling effect that threatens the very foundations of an open society.


A bill designed to safeguard democracy could be transformed into a vehicle for dismantling it. Legislation that promises to ensure the truth could be applied to criminalise those seeking the truth. A law designed to empower citizens could silence them. When the cure becomes the disease, we must ask: who actually benefits?

7) ‘THE PATH FORWARD’: A Better Way

Karnataka needs to withdraw this draft bill altogether and begin a transparent and inclusive consultation process:
– Inviting constitutional experts, advocates for digital rights, media experts and technology professionals, making it a complete multistakeholder process.
– Publishing a white paper on constitutionally grounded regulation within a defined timeframe, and

– Piloting measures such as media literacy programs and platform transparency requirements before resorting to punitive lawmaking

The state still has the opportunity to lead the way in fighting misinformation, but it will need to support solutions that strengthen democratic norms rather than tearing them down.

8) Closing line:

The Karnataka Misinformation and Fake News (Prohibition) Bill, 2025 fails in every dimension:
– of its constitutional validity,
– administrative feasibility, and
– democratic integrity.

The harms associated with misinformation are real. There are no shortcuts to building an informed citizenry, certainly not ones paved with suppression, because

जो लोकतंत्र बहस से डरता है, वो पहले ही गलत जानकारी से हार चुका है।
ಜೋ ಪ್ರಜಾಪ್ರಭುತ್ವ ವಾದವಿವಾದದಿಂದ ಭಯಪಡುತ್ತದೆ, ಅದು ಈಗಾಗಲೇ ತಪ್ಪು ಮಾಹಿತಿಗೆ ಸೋತಿದೆ.
A democracy that fears debate is already losing to misinformation.

Thus, Karnataka must choose the path of open, participatory, and rights-respecting information governance, not control in the name of clarity, but clarity through collaboration.

For Digital Cats and Data Rights: Is DPDP India’s GDPR-Lite?

In 2025, even India’s street animals have a social media presence, and every chai-stall vendor has a digital footprint (speaking of my university’s renowned chaiwala, Sudhama ji). At the same time, billions of data points bounce between smartphones, digital land records, and cheeky community WhatsApp groups. Into this limitless virtual wilderness, India’s Digital Personal Data Protection Act (DPDP) saunters onto the scene, just as GDPR once did for those regulation-loving Europeans. But should Indians rest peacefully under the DPDP’s protective vigilance, or stay awake at night fearing marauding data breaches?


Visualise a cat in a Delhi colony that receives its first smartphone. You upload a video of it meowing; the video goes viral with a million views, analytics, targeted ads, and data brokers knocking around. Does the cat have rights? Doubt it. However, this improbable scenario highlights a fundamental problem: digital visibility is so pervasive in India that everything is saturated with data. Personal data is not only texts and selfies; it’s voice prints, location, and traces of behaviour. And yet, until recently, regulation lagged behind, and even those AI-generated photo apps are harvesting our data, sadly, yes!

So, Is the DPDP GDPR’s Long-Lost Cousin?

Let’s get past the jargon: the DPDP is India’s post-2023 effort to corral personal data privacy. Similar to its European inspiration, the GDPR, it seeks to give the little guy (or buffalo, these days, courtesy of bestie ‘Trump’) ownership over their digital trail. However, unlike GDPR, the DPDP is a touch of a finicky vegetarian, extremely particular about what it consumes. It safeguards only digital personal data. Un-digitised data and paper records are in the clear, or so it seems: not covered.

What GDPR Does (Very Briefly) vs. What DPDP Promises

First, the essentials, because legal geekiness is inevitable.

  • Scope / Territorial Reach: GDPR applies to all personal data (digital or paper) processed about people in the EU, even if the processing occurs outside the EU. The DPDP applies only to digital personal data (or paper-based data that is subsequently digitised), and extends extraterritorially when goods or services are offered in India.
  • Special Data Categories: GDPR gives “special categories” (health, biometric, racial/ethnic origin, political opinions, etc.) higher protection. The DPDP’s final Act has no special-category designation; all digital personal data is treated on an equal footing.
  • Data Subject Rights: GDPR’s are extremely robust: access, rectification, erasure (the “right to be forgotten”), restriction, data portability, objection, and more. India’s Act provides rights of access, correction, erasure, grievance redressal, and nomination (enabling a person to act on behalf of the data principal if they die or become incapacitated), but no express right of data portability and no robust provision for objecting to automated decisions.
  • Government/Law Enforcement Exemptions: GDPR has a few exemptions (e.g., public interest, national security), but these are often subject to oversight, judicial review, and transparency. The DPDP has comparatively broad exemptions for government agencies, police, public order, and sovereignty, with certain obligations suspended or modified by government notification.

Rights, Wrongs, and Magical Missing Powers

GDPR, with all the fanfare expected of its EU roots, grants individuals a behemoth set of rights: data portability, immunity from robotic despots (automated decision-making), and the power to make their data poof away into nothing. DPDP? It provides some of these, but with less magic. Need your data wiped? Okay, in a few instances. Need to contest an algorithm determining whether you will receive a loan? Good luck: DPDP makes no such guarantee.

And where GDPR makes everyone play fair, governments included, DPDP allows India’s government organizations to bypass much of its regulation whenever “national security” or “public order” is invoked. If that sounds like a privacy alibi, well, it is. GDPR is fixated on “special categories” (medical, biometrics, religious beliefs). India’s DPDP? One byte is (nearly) as good as another. Your doctor’s record and your pizza order receive comparable safeguarding: satire, but also fact.


GDPR believes adolescents can manage their data somewhere between 13 and 16 years old (each country sets the threshold, with parental consent required below it). DPDP, the overbearing parent, states that no one under 18 can provide valid consent, so Indian teenagers need mom’s approval to access any platform. Not precisely the digital liberty some of us dream of, though given how India’s next generation uses these platforms, I think the caution is defensible.

Under GDPR, businesses have to panic-report significant data breaches within 72 hours. DPDP says: report all breaches, large or small, to the Board and inform affected individuals. But by when, precisely? The clock is a tad imprecise. Always a recipe for delayed outrage.

Where DPDP Does Well (Some Applause Is Also Required)

All is not doom and loophole jokes; there are positives to mention:

  • It grants individuals (data principals) legal rights that were previously vague or nonexistent: access, correction, erasure, and grievance redressal.
  • It requires consent in most instances and insists that consent be a “clear affirmative action”. That is better than some uncertain past practices.
  • It is contemporary and more in line with international standards, giving India a framework rather than leaving things largely to sectoral regulations or no regulations at all.

Penalties are severe, and the obligation to establish a Data Protection Board is a good thing.

Is DPDP Enough? (Yes, No, Maybe, Depends on What You Mean)

Let me put on my think-outside-the-box hat: DPDP, on paper, gets India closer to the GDPR level than it otherwise would have been. It addresses gaps, provides rights, imposes obligations, and establishes regulatory infrastructure. It’s a giant step forward compared to previous regimes.

India does not yet have as robust a shield as GDPR in all respects. The exemptions, the absence of special categories, and the ambiguity of specific rules are genuine concerns. For most Indians, the Act will enhance privacy. But in corner cases, particularly where state agencies or influential companies are concerned, the possibility of excess remains.

So, DPDP could be “GDPR-lite” in some ways; “GDPR-adequate” if its Rules and notifications are enforced well; but not yet “GDPR-fully-protective” across all situations, and not yet the benchmark for the region.

What Needs Fixing (Recommendations)

The following are what I believe would help, and perhaps treat India’s digital cat population better:

  • Define and secure sensitive / “special” data categories: health, biometric, political, religious, etc. These must have higher safeguards.
  • Limit government exceptions and make them transparent: Well-defined rules on when government may use them; oversight; reporting; judicial review.
  • Add a right to data portability and limits on damaging automated decisions: individuals need to be able to take their data with them, and algorithmic profiling should have safeguards in place.
  • Clarity in Regulations / Notifications: Time limits for notice, withdrawal of consent; clear duties for data fiduciaries; transparency duties; how cross-border flows will function.
  • Awareness & Access: outreach programmes, easy-to-use tools for users, and improved education. At this point, even the cows might learn “privacy settings” faster than most users do.
  • Strong, independent regulator / Data Protection Board: it must be able to withstand undue political or bureaucratic interference, have adequate resources, enforce compliance, and ensure that investigations actually occur.
  • Specific rules for AI/profiling/behavioural advertising: as data science progresses, profiling, inference, and algorithmic harms need to be addressed directly. Especially required after seeing the Ghibli-style and saree photo trends.

Conclusion

Will the law shield the cat with a cell phone? Not entirely. But it might provide that cat with some room to meow without being surveilled by twenty ad networks.

What is essential now is vigilance. DPDP is not a GDPR copycat; it is a good start. India is trying to herd its huge flock across the wild steppes of the Internet. Nevertheless, the fences are slightly patchy, and the night watchman averts his eyes whenever the government rings the doorbell. The work is not done until everyone, even the cats, is trained in their rights, which, given our Supreme Court decisions and the public’s pushback, India may yet manage.

Welcome to Our Digital Fragile Lives!

Picture yourself in Mumbai, racing to join a Zoom call, when your screen freezes; or in South Korea, where someone’s Netflix marathon is interrupted, leaving them suspended at the climax of their favourite show. The outage hit several countries, including India, Pakistan, Saudi Arabia, Kuwait, and the UAE, reminding us again that in our globalised world, we’re all just one cable cut from the digital stone age.

“The cable is cut”, “the switch is down”, “the router is blinking amber”, “the LAN is gone”: lines like these have become a familiar sound, particularly if you and your job are Internet-oriented. The Internet, an interconnection of networks, has a backbone of undersea cables, satellite links, and land-based cables.

The old symphony was played once more this Sunday morning: IT departments all over Asia and the Middle East collectively groaned as their phones blew up with the same old refrain of “the Internet is down.” Red Sea undersea cable cuts caused Internet outages in some areas of Asia and the Middle East. Overnight, millions of people learned what their great-grandparents had always known, that occasionally you can’t get to everyone you want to, when you want to.

The culprit this time around? Major subsea systems like the South East Asia–Middle East–Western Europe 4 (SMW4) decided to hang up their boots, or more precisely, someone or something decided for them. The reason is as clear as your video call quality at peak hours: completely foggy.

The Usual Suspects and Finger-Pointing Olympics

In the grand tradition of Internet outages and shutdowns, the speculation game began immediately. There has been concern that the cables were targeted by Yemen’s Houthi rebels, whose broader campaign they describe as an effort to pressure Israel to end its war in the Gaza Strip. However, the Houthis have denied attacking the lines, leaving everyone to play the world’s least fun guessing game.

The timing is especially significant, as it coincides with ongoing tensions in the area. It’s a sort of digital war, with the victim being our combined capability to stream, shop, and share cat memes across the globe. Between November 2023 and December 2024, the Houthis hit over 100 vessels with missiles and drones. Suspicion therefore tends to fall their way, even though they’re playing the “wasn’t me” game more vigorously than a toddler caught with cookie crumbs on his face.

Welcome to Cable Cut Season

Here’s the thing that would be hilarious if it weren’t so crucial to modern civilisation: this isn’t exactly breaking news. Cable cuts happen with the regularity of a Swiss watch, approximately 150 times per year globally. It’s as if the universe is reminding us that our digital omnipotence has some very analog vulnerabilities.

We’ve built a world where a teenager in Tokyo can video chat with his grandparents in Tehran, a programmer in Karachi can talk to colleagues in Riyadh, and a student in Dubai can attend virtual classes run from New Delhi. All of this magic travels through cables no thicker than a garden hose, lying vulnerably on the ocean floor among anchors, fishing nets, natural disasters, and, presumably, the odd geopolitical tantrum.

The Heroes We Deserve (And Desperately Need)

As millions of users vainly refreshed their browsers and rebooted their routers (because honestly, that’s always the first thing to try), the true heroes were already on the move. Somewhere aboard one of the world’s roughly 60 cable repair ships, sailors were likely taking deep breaths, setting down their coffee, and bracing for another emergency mission to the Red Sea.

These sea-faring IT warriors keep repair kits at the ready in harbours all over the world, because in the game of keeping humankind connected, preparedness is not only a Boy Scout slogan, it’s a matter of survival. They cast off, locate the cut cables with high-tech gear, pull them up from the seafloor (picture fishing, but for the Internet), splice the ends back together, often inserting a fresh section of cable, and send them plunging back into the depths.

It’s a high-tech precision operation coupled with good old-fashioned elbow grease, all carried out by individuals who likely get seasick more than they get screen time, but keep our online world going.

The Irony of Our Connected Disconnection

While repair teams labour to re-establish full connectivity and IT support staff answer another batch of “have you tried turning it off and on again?” calls, this latest Red Sea cable cut is a gentle reminder of an elemental truth: for all our cloud computing and wireless everything, the Internet itself remains very much a physical entity, with physical weaknesses, kept up by actual people doing actual work in the real world.

So the next time your video call fails or your site takes a while to load, spare a thought for the cables quietly resting on the ocean floor, transmitting our digital fantasies and memes to the world. They’re doing their best, and when they break down, people are waiting to set sail and repair them, splice by splice.

After all, someone has to keep the cat memes and videos coming. Civilisation is counting on it.

Borders in the Cloud: How Global Tensions Impact Your Technology

In today’s hyperconnected world, the phrase “data at your fingertips” has become almost a given. But what happens when politics gets in the way and suddenly, the keys to your digital kingdom are out of reach? This isn’t a futuristic cautionary tale, it’s reality for Nayara Energy, a major India-based refiner, whose experience with tech giant Microsoft shows how geopolitics can instantly disrupt business operations.

[Image source: Reuters.com]

The Day the Cloud Went Dark
The shock to Nayara Energy was immediate: access to its emails, data, and communication tools (paid for upfront and essential to the day-to-day running of the company) just vanished. Why?

Not due to a technical error or unpaid bill, but due to sanctions against Russia, with which Nayara conducts large volumes of business. Microsoft, reacting to European Union sanctions law, shut off the tap, and Nayara was left clutching at air: frozen out of valuable digital assets without warning or an immediate fix.

Legal Fights and a Last-Minute Fix

Hoping to restore control, Nayara approached the Delhi High Court. The countdown had begun: if your business’s lifeline depends on digital infrastructure, even a few hours of outage can prove costly. Microsoft restored Nayara’s access a few days before the scheduled hearing, but doubts lingered. Would services be cut again if the winds of geopolitics shift?

The Hidden Risks Behind the Cloud

What the Nayara–Microsoft case illustrates is an under-discussed hazard of the modern corporate era. Cross-border tech transactions are increasingly beholden not just to commercial conditions, but to the whimsical tides of international law and politics. Multinationals operating in politically sensitive sectors (energy, finance, communications) run a genuine risk of abruptly losing access to tools and data because of sanctions or regulatory edicts issued half a world away.

[Image source: Economic Times]

Protecting Your Business in an Ideologized Age of Technology

Nayara’s story is a warning to companies everywhere, especially those with international reach. Don’t assume things will continue as usual: even robust service agreements may be superseded by foreign regulations or regulatory decrees. Legal preparedness is essential: insist on mechanisms for swift legal action, even in countries other than your own.

Diversify when possible: Avoid single-provider lock-in for mission-critical services, especially where geopolitical sensitivities are at play.

Monitor geopolitical events: stay ahead of regulatory updates, sanctions, and policy developments in every country where you do business.

Looking Ahead: 

  • Worldwide IT infrastructure has sped up business and driven innovation, but, as Nayara found out, it is also susceptible to government interference.
  • Digital resilience is no longer just an issue of cybersecurity or availability; it’s about anticipating and preparing for a world in which your access to technology can depend on the next geopolitical headline.
  • Embedding resilience into your digital strategy is therefore no longer optional.

Is your business prepared for digital shocks? Share your experience or comments below — and let’s build robust strategies together for the new era of global business.

To Read More:
1) https://www.thehindu.com/business/nayara-energy-moved-delhi-hc-against-microsoft-over-suspension-of-services/article69865880.ece
2) https://www.business-standard.com/companies/news/nayara-energy-microsoft-legal-case-service-suspension-eu-sanctions-india-125072801100_1.html
3) https://www.reuters.com/world/india/russia-backed-nayara-taps-indian-it-firm-after-microsoft-suspends-service-2025-07-29/

Tariffs Trumpification: Dressed Up, Dubiously Priced

In what can be counted only as an astonishing technological step since someone invented the “Are You Sure?” dialog box, President Trump has weighed in on tariffs for Indian IT services. “I know technology better than anybody else, believe me,” he said, likely while his iPhone auto-corrected “covfefe” in the background.

We are seeing a digital enlightenment from a man whose biggest tech contribution was getting his tweets trending with sheer mistakes.

The Technical Breakthrough: Applying Tariff Logic to Binary Code

A developer writes code on a laptop in front of multiple monitors in an office setting.

Our stable genius has identified the primary problem with America’s technology issues: we’ve allowed foreign nationals to control our computer infrastructure. His additional 25% tariff on Indian goods, bringing the total to as much as 50%, does not apply to IT services directly, but the ripple effects may leave American bits and bytes feeling the heat. The tariff was announced on August 6, 2025, and takes effect approximately 21 days later, around August 27, 2025. The concern centres on:

  • Software Development Services: Why employ competent programmers when you can pay extra for less talented ones? It’s like paying for a Ferrari that’s nothing more than a shopping cart with racing stripes.
  • Call Center Operations: At last! Americans can experience being on hold with someone in their time zone who is unable to assist.
  • Management of Cloud Infrastructure: Trump has solved cloud computing by making it rain money. Unfortunately, it is our money, in the wrong hands.

Trump’s Technology Credentials: A Closer Look

Wooden tiles spelling 'USA' and 'TARIFFS' on a wooden surface symbolizing trade issues.

When queried on the nitty-gritty technicalities, Trump explained from his expertise: “I know cyber more than anybody. I have a great brain for technology. My nephew attended MIT, so it runs in the family.”

This has stunned Silicon Valley, where managers are rushing to figure out how they got it all wrong all along. “We’ve been worried about things like scalability, distributed systems design, and machine learning optimization,” opined one anonymous tech CEO. “We ought to have been worrying about smart nephews.”

The Practical Implications: A Systems Analysis

For American Companies:
The tariffs will build what economists refer to as a “forced innovation environment.” Outsourcing becomes so expensive that firms have to hire locally. This presumes that the approximately 200,000 skilled developers the U.S. market would need will simply materialize.

For the Technology Stack:
Today’s U.S.–India IT collaborations manage approximately 60% of America’s enterprise software maintenance, 40% of cloud computing, and 35% of cybersecurity surveillance. The tariffs will compel American businesses to do one of the following:

  1. Pay 25% extra for the same services
  2. Rapidly train native substitutes
  3. Allow their systems to deteriorate while yelling “America First!” at their bug reports
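To make option 1 concrete, here is a toy cost model. The function name and every figure in it are my own illustrative assumptions, not real pricing data, and the tariff, as noted above, does not directly apply to IT services; this only sketches what a 25% surcharge on the same spend would mean.

```python
# Toy model of option 1: paying a hypothetical 25% surcharge for the same
# outsourced services. All figures are illustrative, not real market data.
def cost_with_surcharge(base_annual_cost: float, surcharge_rate: float = 0.25) -> float:
    """Return the annual cost after a flat percentage surcharge."""
    return base_annual_cost * (1 + surcharge_rate)

before = 1_000_000.0                 # hypothetical annual outsourcing spend (USD)
after = cost_with_surcharge(before)  # same services, 25% pricier
print(f"Extra annual cost: ${after - before:,.0f}")
```

Same bits, same bytes, a quarter more invoice.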

For Network Latency:
Moving operations back to the U.S. would decrease ping times, a technical silver lining nobody expected but one Trump will probably claim credit for. “I made the Internet faster” would be an easy next tweet.

The Digital Diplomacy Angle

Trump’s tech diplomacy strategy is like a DDoS attack on global affairs: blunt, relentless, and ultimately useless. By attacking India’s $150 billion IT services sector, the administration is effectively issuing an ultimatum to a country that:

  • Offers 24/7 support to U.S. businesses across 12 time zones
  • Provides the backend for 70% of the Fortune 500
  • Has been educating U.S. businesses on digital transformation since Y2K

The Irony Protocol

The most profound irony is that while Trump’s tariffs visibly target India’s $150 billion IT services machinery, his own platform, Truth Social, runs on American hosting (RightForge and Rumble) rather than offshore providers. So the tech behind his tweets isn’t at risk from his policy, but American consumers and IT clients very much are.

It’s like trying to perform surgery on yourself blindfolded: technically feasible, but the patient almost never survives.

Expert Analysis: The Dunning-Kruger Effect in Policy Form

Dr. Sarah Chen, a systems architect at MIT (yes, the same MIT that Trump’s nephew attended), offered this insight:
“This is what happens when someone who thinks Twitter is ‘the cyber’ tries to manage a $4 trillion global technology system. It’s like attempting to debug a complex system using just a magnifying glass and patriotic feelings.”

This policy is a textbook-level demonstration of the Dunning–Kruger effect, when individuals with little knowledge in an area overestimate their expertise. Here, we witness it played out on the stage of international trade policy in a way that would make the inventors of ARPANET weep.

The Bottom Line: Merging the Consequences

As U.S. companies prepare to either write larger checks or rebuild their technical networks within the U.S., one thing remains certain: Trump’s grasp of technology is as secure as Internet Explorer’s 1995 security model.

The real winners? American developers, now able to command better salaries as firms seek domestic alternatives. The losers? American consumers, who will eventually pay more for everything from banking services to streaming sites.

But at least we’ll have the consolation of knowing our customer service reps work in the same time zone, even if they can’t help us any better than before.

In the immortal words of every system administrator who has ever survived a catastrophic failure: “Well, this should be interesting.”

“Focus on Task Automation, Not Job Replacement”

Why the most innovative companies are automating the work, not the worker

The Great Automation Misconception

Walk into any corporate boardroom discussing digital transformation, and you’ll likely hear executives talking about “automating jobs” and “replacing positions.” But here’s the thing: this mindset is not just wrong, it’s counterproductive. The companies that truly succeed with automation aren’t trying to eliminate humans from the equation. They’re eliminating the soul-crushing, repetitive parts of work that humans shouldn’t be doing anyway.

The difference between automating tasks and automating jobs isn’t just semantic; it’s the difference between empowering your workforce and alienating them.

What Does Task Automation Look Like?


Let’s start with a real example. Jaya works as a financial analyst at a mid-sized company. Her job description says she’s responsible for “financial planning and analysis,” but if you shadowed her for a week, you’d see she spends roughly:

  • 40% of her time manually extracting data from various systems and formatting it into spreadsheets
  • 25% creating the same monthly reports with slightly different numbers
  • 20% chasing down colleagues for missing information
  • 15% analyzing trends and providing strategic insights

Now, which of these activities would you automate? The smart answer isn’t to replace Jaya entirely – it’s to eliminate the first three categories so she can spend 100% of her time on what she was hired to do: think strategically about the numbers.

The Task-First Automation Framework

When we shift from “How can we replace this person?” to “How can we eliminate the tedious parts of this person’s day?”, everything changes. Here’s how to think about it:

1. Identify the Human Elements That Matter

Before automating anything, ask yourself: What parts of this job require human judgment, creativity, or relationship-building? These are your protected zones: the activities that should never be automated, no matter how sophisticated the technology becomes.

For a customer service representative, this might be:

  • Handling complex complaints that require empathy
  • Building relationships with key clients
  • Identifying patterns in customer feedback that suggest product improvements

2. Map the Administrative Burden

Next, identify everything that falls into what I call “administrative friction” – the tedious, repetitive tasks that prevent people from doing their real work:

  • Data entry and formatting
  • Scheduling and calendar management
  • Generating routine reports
  • Following up on standard requests
  • Moving information between systems

These are your automation targets.
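As a sketch of what hitting those targets can look like, here is a minimal Python example. The data and function names (`monthly_report`, the transaction records) are hypothetical; the point is that the script absorbs the extract-and-format grunt work of a routine report, leaving the analysis to the human.

```python
# Minimal task-automation sketch: aggregate raw records into the routine
# monthly summary an analyst would otherwise assemble by hand.
from collections import defaultdict

def monthly_report(transactions):
    """Roll up raw transaction records into a category-total summary."""
    totals = defaultdict(float)
    for rec in transactions:          # the tedious extraction step, automated
        totals[rec["category"]] += rec["amount"]
    # the tedious formatting step, automated: same report, new numbers each month
    return "\n".join(f"{cat}: {amt:,.2f}" for cat, amt in sorted(totals.items()))

raw = [
    {"category": "travel", "amount": 1200.0},
    {"category": "software", "amount": 349.9},
    {"category": "travel", "amount": 800.0},
]
print(monthly_report(raw))
```

A script like this eliminates Jaya’s first two time categories; deciding what the travel number means is still her job.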

3. Design Human-AI Collaboration

The magic happens when you design systems where humans and automation complement each other. Think of automation as the world’s best research assistant – it handles the grunt work so humans can focus on the thinking work.


Why This Approach Works Better

It Reduces Resistance

When you tell employees you’re “automating their jobs,” you create fear and resistance. When you tell them you’re “automating the boring parts so they can focus on the interesting work,” you create excitement. I’ve seen teams go from actively sabotaging automation initiatives to becoming their most prominent champions, simply because of how the initiative was framed and implemented.

It Improves Job Satisfaction

Nobody gets into marketing because they love updating spreadsheets. Nobody becomes an engineer because they enjoy writing status reports. When you remove these administrative tasks, you’re not just making people more productive – you’re making their work more fulfilling.

It Creates Better Business Outcomes

Here’s what most executives miss: A human doing only human-level work is far more valuable than a human doing a mix of human work and robot work. When you automate tasks rather than jobs, you don’t just maintain your human capital – you amplify it.

The Implementation Reality Check

This approach isn’t always more straightforward than the “replace everything with robots” mentality. It requires more nuanced thinking about work design. You need to:

Invest in Change Management: People need to understand not just what’s changing, but why it benefits them personally.

Redesign Roles, Not Just Processes: When you remove 40% of someone’s tasks, you need to be intentional about what fills that space. This is an opportunity to add more strategic, creative, or relationship-focused work.

Accept That Some Jobs Will Change Dramatically: While you’re not eliminating positions, some roles will transform significantly. A data analyst who no longer needs to spend hours collecting data will require new skills in interpretation and storytelling.

The Skills Shift Strategy

When you automate tasks instead of jobs, you create what I call “skills drift” – the natural evolution of roles toward more uniquely human capabilities. This means investing in:

  • Critical thinking and problem-solving for roles that will handle more complex scenarios
  • Communication and presentation skills for people who will spend more time explaining insights rather than generating them
  • Strategic thinking for roles that will have more time to focus on long-term planning
  • Cross-functional collaboration, as people have more bandwidth to work across teams

Common Pitfalls to Avoid

The “Efficiency Trap”

Don’t measure success purely by how much faster tasks get completed. Measure it by how much more valuable work gets done. If you automate data collection but don’t create space for data analysis, you’ve missed the point.

The “Set It and Forget It” Mentality

Task automation requires ongoing refinement. As people get comfortable with their new workflows, they’ll identify additional opportunities for improvement. Build feedback loops into your process.

The “One-Size-Fits-All” Approach

Different roles will have different automation opportunities. Don’t try to force the same solution across every department. Customize your approach based on the specific mix of tasks in each role.

Looking Forward: The Human-Centric Workplace

The companies that get this right are creating workplaces where technology handles the routine and humans handle the remarkable. They’re not eliminating jobs, they’re removing the parts of jobs that were never really suitable for humans in the first place.

This isn’t just about being nice to employees (though that matters). It’s about being smart with your resources. In a world where creativity, critical thinking, and human connection are becoming increasingly valuable, why would you want your people spending time on tasks that a computer can handle?

The future belongs to organisations that see automation as a tool for human amplification, not human replacement. The question isn’t whether to automate , it’s whether you’ll automate thoughtfully, with humans at the centre of your strategy.

Because at the end of the day, the goal isn’t to build a company full of robots. It’s to create a company where humans get to do what humans do best.


What tasks in your organisation are crying out for automation? Start there, and see where it leads. You might be surprised by how much human potential you unlock when you stop trying to automate the human.

Is China’s Rare Earth Grip India’s Achilles’ Heel?

Electric scooters, smartphones, satellite internet: none of these work without a hidden layer of power, rare earth elements (REEs). These 17 obscure minerals, mined deep within the Earth, quietly power everything from TikTok and Instagram Reels to national defence systems. While we debate AI ethics, celebrate the Quantum Year, and discuss clean energy transitions, a quieter but consequential contest is unfolding beneath the surface. Most of us were either unaware of or never cared about this topic; I also didn’t realise its importance until I took a public policy course at The Takshashila Institution, which included a policy simulation on the rare earth problem, a crucial yet scarce resource. China produces 61% of the globally mined rare earths and accounts for roughly 92% of global refined output. The question isn’t whether this is just smart economics, but whether it’s a long-term strategic threat. Despite holding large reserves, India has been watching from the sidelines, a position we may not be able to afford much longer.

[Image Source: https://www.shutterstock.com/search/rare-earth]

There’s little doubt that China engineered its dominance through years of market manipulation, flooding the world with cheap REEs and absorbing the environmental costs other nations wouldn’t. But this monopoly is far from invincible.

When China restricts exports, as it did to Japan in 2010, it triggers global alarm as well as global action. The US, EU, and Australia have since invested heavily to de-risk their supply chains, and India, custodian of the third-largest rare earth reserves, has a moment to seize. So far, however, lethargy persists: over 90% of our rare earths are still imported from China, despite our efforts to promote “Digital India” and “Atmanirbhar Bharat”.

In July, New Delhi moved beyond rhetoric: the Centre announced a ₹1,345 crore incentive scheme to boost domestic production of rare-earth magnets, covering everything from oxide refining to finished magnets. Heavy Industries Minister H.D. Kumaraswamy confirmed that two lead manufacturers, Mahindra & Mahindra and Uno Minda, have already shown interest. Complementing this, the National Critical Mineral Mission, launched in April, aims to enhance exploration and processing capacities, ensuring that India’s mineral policies align with its digital ambitions.

The real threat isn’t only supply chain risk. It’s the illusion of sovereignty in a digital world where the physical foundations, minerals and materials, remain beyond our control. We are building Indian software and apps on foreign hardware powered by foreign minerals; this is technological dependence by another name.

[Image Source: http://www.ars.usda.gov/is/graphics/photos/jun05/d115-1.html]

China’s willingness to weaponise supply chains should be a wake-up call, especially for young Indians who aspire to global leadership. If India wants to own its digital, defence, and green future, it must first own its mineral base.

Yet, there is reason to resist panic. China’s export controls are forcing global diversification, making it harder for China to maintain its dominance in the rare earths market. New refining, recycling, and urban mining technologies are emerging worldwide. For India, this is not just an industrial challenge but a generational opportunity. The “old economy” of minerals underpins the new economy of semiconductors, batteries, and climate tech. Indian policy and business must invest urgently in R&D, responsible mining, and circular supply chains, turning critical minerals into a platform for both sustainability and sovereignty.


This transformation can’t happen without India’s youth. Start-ups, student-led research, and youth participation in science and foreign policy debates can redefine what resource control looks like in the 21st century. Imagine not only programming for the global stage but also having authority over the essential components that drive those programs.

[Image Source: https://www.sciencephoto.com/media/1293498/view/rare-earth-element-abundance-infographic-chart]

Critical minerals may lie buried, but their absence threatens to keep India confined as a consumer, rather than a creator, in the global order. The geopolitics of rare earths—much like the geopolitics of oil before them—may spark disruption, but also drive necessary innovation. Whether China’s grip becomes our Achilles’ heel or a catalyst for self-reliance depends on how firmly we grasp the moment beneath our feet.

India should mine it, refine it, and lead with it.

Jai Hind!