After Deepfakes, It Is DeepSeek for India Now

DeepSeek is Rising

We’re still grappling with the challenges of deepfakes, and now China has unveiled its AI giant, DeepSeek. What does this mean for the future of AI and global tech dynamics?

The global artificial intelligence (AI) landscape witnessed a seismic shift as Chinese startup DeepSeek unveiled its generative AI model, challenging the technological dominance of American giants like OpenAI’s ChatGPT and Google’s Gemini. Unlike its US counterparts, DeepSeek’s model is open-source and runs on lower-end chips, making it a cost-effective alternative at a reported training cost of just $5.6 million, barely 2% of the estimated cost of OpenAI’s o1 model. By sidestepping the high-end hardware covered by US export restrictions, DeepSeek has sent shockwaves through Silicon Valley and the wider AI ecosystem.

Despite these advancements, DeepSeek’s prospects in India remain uncertain. The country has maintained a wary stance on Chinese technology, particularly following the 2020 border clashes. This geopolitical tension has already led to the banning of Chinese apps such as TikTok and restrictions on telecom equipment from companies like Huawei and ZTE. As a result, Indian firms developing AI applications are likely to remain reliant on US technologies and Nvidia’s graphics processing units (GPUs), despite growing concerns over technological dependency.

The DeepSeek Disruption

DeepSeek’s appeal lies in its affordability, accessibility, and independence from high-end computing resources. The model’s open-source nature makes it an attractive option for businesses and developers seeking cost-effective AI solutions. However, concerns over security, data sovereignty, and geopolitical ramifications loom large.

The security risks stem from potential data transfer to external servers, a challenge that applies to all generative AI models, including ChatGPT. While local deployment could mitigate risks, trust remains a key issue.
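To make the local-deployment point concrete, here is a minimal sketch of how a prompt could be addressed to an open-weight model served inside an organization's own network via an OpenAI-compatible endpoint. The endpoint URL and model name are illustrative assumptions, not official values; the point is simply that the request never has to reach an external server.

```python
import json

# Hedged sketch: open-weight models such as DeepSeek's can be served locally
# behind an OpenAI-compatible HTTP endpoint, so prompts and data never leave
# the organization's network. The URL and model name are assumptions made
# for illustration.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_local_request(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build a chat-completions payload addressed to a local server."""
    return {
        "url": LOCAL_ENDPOINT,  # stays on localhost: no data leaves the premises
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

Whether this actually mitigates risk then becomes a question of trusting the model weights and the serving stack, which is the trust issue raised above.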

India’s AI Challenge: Dependence vs. Self-Reliance

India’s AI ambitions are at a crossroads. While global AI competition intensifies, India faces the dual challenge of ensuring technological sovereignty while keeping pace with AI advancements. The government’s efforts to bolster domestic compute infrastructure and large language models (LLMs) are in early stages, with policy discussions underway.

The question of data governance is equally critical. The recently enacted Digital Personal Data Protection (DPDP) Act, along with anticipated rules on cross-border data flow, could impose further restrictions on AI models relying on external infrastructure. Data localization requirements may hinder the adoption of models like DeepSeek if they require data transfers to China, adding another layer of complexity for Indian firms.

The Geopolitical AI Race and India’s Position

The AI race is not just about technological advancements but also about geopolitical influence. The US, through initiatives like the $500 billion Stargate project and export controls on high-end GPUs, is actively shaping the global AI landscape. The restrictions on advanced Nvidia chips, including the A100 and H100 series, have already impacted China’s AI ambitions, pushing Chinese firms to develop alternatives like DeepSeek.

For India, this intensifying AI arms race signals a crucial moment to invest in indigenous AI capabilities.


The Path Forward: Strengthening India’s AI Ecosystem

While India has ambitious plans for developing its AI capabilities, execution remains a challenge. Building an independent AI ecosystem requires substantial investment in research, compute infrastructure, and regulatory frameworks that encourage innovation while safeguarding national security.

One immediate step is enhancing India’s compute infrastructure. The country must invest in high-performance computing (HPC) resources to support large-scale AI training, reducing reliance on foreign GPUs. Collaborations between academia, industry, and government bodies can accelerate the development of indigenous LLMs, customized for India’s linguistic and socio-cultural landscape.

Furthermore, policies must strike a balance between fostering innovation and addressing security concerns. The voluntary ethics code under development should provide clear guidelines on the adoption of foreign AI models, ensuring that companies using solutions like DeepSeek implement stringent data governance practices.

India Must Secure Its AI Future


DeepSeek’s rise underscores a broader reality—AI is no longer just about technology; it is a battleground for economic and geopolitical dominance. For India, the choice is clear: either remain dependent on external AI providers, whether from the US or China, or take decisive steps toward technological self-sufficiency.

If India fails to act now, it risks falling behind in the AI revolution, ceding control over critical digital infrastructure. To truly embrace AI’s potential, India must prioritize indigenous AI development, ensure secure data governance, and build an ecosystem that aligns with its national interests. Only then can India assert its position as a global AI leader, rather than a passive consumer in the unfolding AI era.

Global Organizations and ITU Join Forces to Democratize AI Education


Artificial Intelligence | Geneva, 20 January 2025

In a groundbreaking move to address the growing global AI skills gap, the International Telecommunication Union (ITU) has launched the AI Skills Coalition, a collaborative initiative aimed at democratizing access to AI education worldwide. With founding contributors including industry giants like Amazon Web Services (AWS), Microsoft, Cognizant, and regional bodies like the East Africa Community, this coalition represents a united effort to ensure equitable access to AI training and capacity building.

The announcement was made during the World Economic Forum’s Annual Meeting in Davos, where ITU emphasized the critical role of digital technologies in shaping a sustainable future. The AI Skills Coalition will serve as an online platform offering open and accessible training in generative AI, machine learning, and the application of AI for sustainable development. This initiative aligns with the United Nations’ Pact for the Future and Global Digital Compact, addressing the urgent need for global AI capacity building.

“Let’s make sure everyone has a chance to learn the skills they need to benefit from the AI revolution,” said ITU Secretary-General Doreen Bogdan-Martin. “Our new AI Skills Coalition aims to train thousands of people this year, especially those in regions just beginning their AI journey, as part of our commitment to ensure all communities can fully participate in our shared digital future.”

The Global AI Skills Gap: A Pressing Challenge

Recent research cited in ITU and Deloitte’s AI for Good Impact Report reveals that 94% of global business leaders consider AI critical to their organizations’ success. However, the lack of technical skills, the need for upskilling and reskilling, and the challenge of building trust in new technologies remain significant barriers to AI adoption worldwide.

The AI Skills Coalition, a flagship program under ITU’s AI for Good Impact Initiative, seeks to address these challenges by providing educational resources that empower individuals and organizations. The initiative also focuses on reducing the underrepresentation of marginalized groups—such as women, youth, and persons with disabilities—in the development and use of AI technologies.

“Generative AI is rapidly transforming the workforce, with LinkedIn data showing a 142x global increase in professionals adding AI skills in just one year,” noted Kate Behncken, Global Head of Microsoft Philanthropies. “We are proud to collaborate with the ITU AI Skills Coalition to provide accessible AI training, certifications, and capacity-building for policymakers, IT professionals, and organizational leaders.”


A Collaborative Approach to AI Capacity Building

The coalition will leverage the global reach of the United Nations Development Programme (UNDP), which operates in over 170 countries and territories, to deliver AI training directly to partner nations.

“Capacity development is critical for addressing the AI equity gap, particularly in developing countries,” said Achim Steiner, UNDP Administrator. “Aligned with the vision of this coalition, we will work with our partners to deliver AI training that equips policymakers and stakeholders with the knowledge needed to responsibly adopt and use AI for sustainable development.”

Phased Implementation and Key Features

The AI Skills Coalition will roll out in phases, with a focus on underserved and marginalized communities. Founding organizations are contributing training materials, financial resources, and outreach support to build a robust platform, set to launch in March 2025.

The platform will feature:

  • A comprehensive training portfolio and a customizable digital library of AI resources.
  • Self-paced courses, webinars, and access to in-person workshops tailored to diverse learning needs.
  • Free access to foundational resources, with advanced certifications available at affordable rates.

Throughout 2025, the coalition will expand its offerings to include specialized programs for the general public, as well as government-focused training in AI governance, ethics, and policymaking—particularly for developing and least developed countries (LDCs).

AI for Good Global Summit 2025

The coalition’s efforts will culminate at the AI for Good Global Summit, scheduled for 8-11 July 2025 in Geneva. The summit will host a series of in-person workshops and skill-building sessions, further solidifying the coalition’s mission to bridge the global AI skills gap.

A Step Toward an Inclusive AI-Powered Future

The AI Skills Coalition represents a significant step toward ensuring that the benefits of AI are accessible to all, regardless of geography or socioeconomic status. By fostering collaboration between global organizations, private sector leaders, and UN agencies, this initiative aims to create a more equitable and inclusive digital future.

As the world continues to embrace AI, initiatives like the AI Skills Coalition remind us that the true potential of technology lies in its ability to empower everyone—not just a privileged few.


Reference:
1) https://aiforgood.itu.int/ai-skills-coalition/
2) https://tinyurl.com/ITU-AI-Skills
3) https://cloud.google.com/learn/what-is-artificial-intelligence

Global Digital Accord: Nurturing Sustainable Tech Innovation

The Global Digital Compact (GDC) is a pioneering international framework designed to shape an inclusive, sustainable, and secure digital future for all. Focused on harnessing the potential of digital technologies while promoting their responsible regulation, the GDC emphasizes the need for cooperation to ensure that the digital landscape is fair, open, and accessible. While ambitious in scope, the framework also faces scrutiny for certain oversights and gaps, underscoring the importance of continued dialogue and refinement to achieve a truly just and equitable digital future.

What is the Global Digital Compact, and what do we want from it?

The Global Digital Compact is a United Nations initiative aimed at establishing shared principles and guidelines for the global governance of digital technologies. It prioritizes safeguarding human rights in the digital realm and promoting the responsible use of emerging technologies like AI. With a focus on sustainable development, the GDC lays out core principles, objectives, and actionable steps to create a fair, inclusive, and secure digital environment for all.

The Association for Progressive Communications (APC), along with its advocacy partners, emphasized six key areas for the Global Digital Compact (GDC) to address:

1. Digital Inclusion:
APC advocated for greater community participation in policymaking, regulatory frameworks enabling diverse connectivity providers, and financing mechanisms for meaningful community-level connectivity. These measures aim to address digital inequity and foster inclusive access to digital technologies.

2. Human Rights Online:
Strengthening human rights law in all internet operations was a key priority. APC called for internet governance grounded in human rights standards, adhering to the principles of necessity and proportionality. Transparency and accountability from states and corporations, addressing structural inequalities, and fostering democratic values were emphasized. A rights-respecting digital future requires parity between online and offline environments.

3. Data Protection:
Concerns were raised about massive data harvesting by big tech and intrusive state surveillance, with a focus on the gendered dimensions of data exploitation. APC advocated for robust data protection regimes, increased transparency, and legal restrictions on surveillance. Intersectional and feminist approaches to data governance, along with Indigenous data stewardship and equality-driven governance, were highlighted.

4. Harmful and Misleading Content:
APC called for improved accountability for tech companies and states in addressing hate speech, misinformation, and online discrimination. Consistent industry-wide content moderation standards aligned with human rights principles were needed. Concerns were also raised about the inconsistent application of rules and the poor record of states in fostering trustworthy information and protecting freedom of expression.

5. A Gender-Just Digital Society:
A gender-inclusive digital future requires an intersectional feminist perspective. APC advocated for recognizing systemic gender-based discrimination and ensuring the inclusion of diverse genders in governance. Priorities included addressing technology-facilitated gender-based violence, ensuring privacy and digital safety, and promoting transparency and accountability in algorithms and AI systems. The work was grounded in the Feminist Principles of the Internet.

6. Earth Justice and Sustainable Development:
APC underscored the need for a precautionary principle in digitalization, advocating for a circular economy approach in technology design and production. Private companies were urged to ensure transparency regarding socio-environmental impacts, while governments and corporations were called to support community-led connectivity initiatives that respect planetary boundaries and the rights of nature.

These key focus areas aim to create a more inclusive, rights-driven, and sustainable digital ecosystem, addressing the intersecting challenges of equity, justice, and environmental stewardship.

Objectives and Actions of the Global Digital Compact
The Global Digital Compact (GDC) goes beyond principles, offering clear objectives and actionable measures to achieve its vision:

  • Strengthening Digital Infrastructure:
    Countries are encouraged to collaborate in developing robust digital infrastructure, with a focus on underdeveloped regions. Proposed actions include investments in 5G networks, digital literacy initiatives, and public-private partnerships to ensure universal connectivity.
  • Governance of AI and Emerging Technologies:
    The GDC emphasizes the need for global regulatory frameworks to guide the ethical development and use of AI and emerging technologies. These frameworks aim to prevent the reinforcement of existing inequalities and mitigate potential harms.
  • Ensuring Data Sovereignty and Privacy:
    The Compact advocates for the global protection of data privacy rights. It calls on nations to safeguard citizens’ digital footprints and empower individuals with control over their data through comprehensive regulatory measures.
  • Addressing Digital Misinformation:
    Governments and technology companies are urged to collaborate in combating misinformation, hate speech, and cybercrime. This includes implementing regulations, promoting transparency, and adopting real-time monitoring mechanisms.
  • Promoting Sustainable Technology Practices:
    The GDC underscores the importance of aligning technological advancements with environmental sustainability. It calls for green technology initiatives and digital solutions that actively contribute to combating climate change.

These objectives and actions aim to create a digital future that is inclusive, ethical, and environmentally conscious, fostering collaboration among nations, organizations, and individuals.

Gaps in the Global Digital Compact

The final text of the Global Digital Compact (GDC) falls short in addressing many critical advocacy points raised by civil society and other stakeholders. Several gaps in the Compact highlight areas of concern:

1. Human Rights

  • The GDC’s language on human rights lacks strength and consistency. References to “international law” are used instead of the more robust “international human rights law.”
  • The Compact does not adequately address state obligations to refrain from mass surveillance or ensure that targeted surveillance complies with principles of legality, legitimacy, necessity, and proportionality.
  • New technologies, such as AI, pose significant risks to human rights, yet the GDC fails to apply human rights obligations consistently throughout the technology lifecycle, from design to application.
  • Restrictions on states and companies deploying technologies incompatible with human rights principles are insufficiently outlined.
  • The UN Office of the High Commissioner for Human Rights (OHCHR) is under-supported. Its critical work on technology, business, and human rights is weakened by a reliance on voluntary funding mechanisms rather than robust institutional backing.

2. Inclusive Internet Governance

  • The GDC undermines the multistakeholder approach, a cornerstone of effective internet governance, by failing to meaningfully include civil society, academia, the private sector, the technical community, and grassroots groups in consultations, implementation, or follow-ups.
  • Proposed mechanisms risk centralising and nationalising internet governance through state structures, privileging the private sector while marginalising non-state actors.
  • The GDC does not prioritize multistakeholder input in designing new bodies or mechanisms, nor in developing financing initiatives and digital public infrastructure. This omission increases the risk of technologies being adopted primarily for data collection without adequate accountability.

These weaknesses reveal the need for stronger commitments to human rights and inclusivity in global digital governance, ensuring that the GDC fulfills its promise of a just and equitable digital future.

Addressing Deficiencies in the Global Digital Compact: Key Areas for Action

While the GDC has notable gaps, many of these can be addressed through careful and vigilant implementation. Civil society has a critical role to play in ensuring that mechanisms, bodies, and processes align with the overarching goal of an “inclusive, open, sustainable, fair, safe, and secure digital future for all.”

Cross-Cutting Areas Requiring Attention:

  1. Strengthened Multistakeholder Cooperation and Coordination
    • Enhanced Collaboration: Governments, civil society, the private sector, and the technical community must work together more effectively at all levels.
    • Streamlining Global Processes: The GDC fails to address the fragmentation among UN and other global digital initiatives. Better coordination is needed among processes like WSIS+20, IGF, and Beijing+30.
    • Civil Society’s Role: Civil society must identify gaps and links between the GDC and other critical areas, including cybersecurity, trade negotiations, consumer rights, food security, labour, intellectual property, and climate justice.
    • Emerging Fields: Big Tech’s ventures, such as investments in nuclear power for data centres and AgTech, highlight the need for cross-field advocacy that integrates digital rights into diverse forums and sectors.
  2. Human Rights as the Cornerstone of Digital Policy
    • Centrality of Human Rights: Digital policy must reaffirm human rights, focusing on development and placing people at the centre.
    • Strengthening Existing Tools: Implementation should leverage and reinforce existing human rights frameworks, particularly in the context of digital standards.
    • Collective Advocacy: Resistance from some states and businesses necessitates coordinated civil society strategies to uphold human rights in digital spaces.
  3. Equitable Distribution of Digital Benefits
    • Inclusion of Marginalized Communities: Implementation must prioritize the participation of excluded groups through mechanisms, mentoring, capacity-building programs, and regional consultations.
    • Capacity Building: Initiatives like AfriSIG and inter-regional consultations should be expanded and strengthened to support effective participation.
    • Gender Inclusion: Connecting the GDC to gender-focused initiatives, such as Beijing+30, is essential for equitable outcomes.
  4. Scrutiny of Financing Mechanisms
    • Risks of Private Sector Domination: Blended financing models must not disproportionately empower the private sector to dictate terms, especially in regions with weaker governmental contributions.
    • Market-First Model Failures: Emphasis should shift from market-first connectivity models, which have left many underserved, to inclusive and equitable financing approaches.
    • Universal Connectivity: Financing mechanisms must prioritize universal access while avoiding further marginalization of vulnerable communities.
  5. Accountability of Big Tech
    • Regulating Corporate Practices: The GDC must ensure that big tech accountability is embedded in its frameworks, focusing on harms caused by their practices.
    • Human Rights Framework: States must be challenged to regulate corporations effectively within a human rights framework.
    • OHCHR Support: Adequate funding for the OHCHR is crucial to enabling oversight of state and corporate compliance with human rights standards.

Conclusion

The success of the GDC will depend heavily on how its principles are operationalized. Vigilance and proactive advocacy by civil society are essential to ensuring that its mechanisms and processes promote an inclusive, fair, and human-rights-respecting digital future.

Feedback on DPDP Rules, by February 18th 2025: IT Ministry

The government has released the draft Digital Personal Data Protection Rules, 2025, aimed at strengthening data privacy. While the rules outline clear guidelines for consent, data retention, and breach notifications, they notably exclude penal provisions. The draft is open for public consultation until February 18, 2025, inviting feedback on its implementation and potential improvements.

On Friday, January 3, 2025, the Union government unveiled the draft Digital Personal Data Protection (DPDP) Rules, 2025, designed to implement the provisions of the Digital Personal Data Protection Act, 2023. Although the Act was enacted more than a year ago, the corresponding enforcement rules have been under development and are now open for public feedback.

The DPDP Act establishes a legal framework to regulate “data fiduciaries”—entities responsible for collecting personal data from “data principals” or individuals—and aims to safeguard this data from misuse while imposing penalties on organizations that breach data protection norms. The DPDP Rules, 2025, represent a significant milestone in building a secure, transparent, and user-focused digital environment.

The proposed rules outline the obligations of data fiduciaries when collecting user data. They require fiduciaries to inform users about the specific data being collected, the purpose of the collection, and provide a clear and detailed explanation enabling users (referred to as “Data Principals”) to give informed and explicit consent for the processing of their personal data.

The draft DPDP Rules are open for public feedback until February 18. According to the Ministry of Electronics and Information Technology (MeitY), submissions will be treated confidentially and will not be disclosed at any stage. Stakeholders can share their inputs through the MyGov portal, where the Ministry is accepting submissions.

Key Highlights:
1. The draft DPDP Rules propose the registration of “consent managers” who will assist data fiduciaries in obtaining user consent in a standardized format. The rules permit the government and its agencies to collect personal data for providing subsidies and benefits, subject to specified standards. Data collected for statistical purposes is exempt from certain restrictions.

2. The rules also mandate the deletion of user data if a service—such as an e-commerce platform, social media, or online gaming—is not used for an extended period, following a 48-hour notice to the user. Data fiduciaries must display the contact details of their data protection officer on their website.

3. The rules require that consent notices be written in clear, plain language and include essential details, such as a list of personal data being collected, to help users make informed decisions about data processing. Data fiduciaries must also provide a communication channel allowing users to withdraw consent or exercise their rights under the Act, such as requesting data erasure.

However, this requirement lacks specificity: the rules do not require mapping each piece of personal data to its exact purpose. Instead, data items must simply be listed separately, leaving room for improvement in clarity and accountability.
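As a hedged illustration of the consent-notice requirements in point 3, a fiduciary might model a notice as a small structured record. The field names and the completeness check are assumptions for illustration; the draft Rules prescribe the substance (plain language, an itemized data list, a withdrawal channel), not this shape.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentNotice:
    """Illustrative consent-notice record under the draft DPDP Rules."""
    purpose: str                                          # why the data is processed
    data_items: list[str] = field(default_factory=list)   # itemized personal data
    withdrawal_channel: str = ""                          # e.g. URL for withdrawing consent

    def is_complete(self) -> bool:
        """True only if the essentials the draft Rules call for are present."""
        return bool(self.purpose and self.data_items and self.withdrawal_channel)
```

Note that nothing in this shape ties each data item to a specific purpose; a stricter design would map every item to the purpose it serves, which is exactly the gap in the draft noted above.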

4. For Children’s Data, the rules mandate that data fiduciaries adopt appropriate technical and organizational measures to ensure verifiable parental consent before processing any personal data of minors. To achieve this, fiduciaries may rely on voluntarily provided details of identity and age, a virtual token linked to such details issued by authorized entities, or verified details available through services like Digital Locker.

5. The processing of Indian citizens’ data outside the country is subject to future requirements that the government may outline through subsequent orders, ensuring additional regulatory oversight.

6. Users must be notified if their personal data is compromised, ensuring greater transparency and accountability. Data fiduciaries are required to implement technical and operational safeguards to prevent data breaches and must submit a detailed incident disclosure to the Data Protection Board of India (DPBI) within 72 hours of any breach.
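The 72-hour timeline in point 6 reduces to a simple deadline computation. A sketch, with illustrative function names:

```python
from datetime import datetime, timedelta

# Per the draft Rules, the Data Protection Board must receive a detailed
# disclosure within 72 hours of a breach.
BOARD_NOTIFICATION_WINDOW = timedelta(hours=72)

def board_deadline(detected_at: datetime) -> datetime:
    """Latest time by which the Board must be notified."""
    return detected_at + BOARD_NOTIFICATION_WINDOW

def is_overdue(detected_at: datetime, now: datetime) -> bool:
    """True if the 72-hour disclosure window has already lapsed."""
    return now > board_deadline(detected_at)
```

A breach detected at noon on 1 February would therefore need to be reported by noon on 4 February.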

7. The Rules establish specific data retention and erasure timelines for large e-commerce platforms, online gaming services, and social media intermediaries. These platforms must delete user data if the user has not logged in for three years. While this is a significant move toward better data management, the reasoning behind limiting these requirements to these three categories remains unclear.

8. The rules clarify the processes for exercising rights under the Act, ensuring that both Consent Managers and Data Fiduciaries provide clear instructions on how users can exercise these rights on their websites or apps. This is a promising development in enhancing user control over their data. However, the requirement that Consent Managers must be Indian companies raises concerns about balancing accountability with fostering competition, potentially limiting options for users and companies.

In conclusion, the draft DPDP Rules, 2025, represent a significant step in strengthening data privacy and user rights in India. As the IT Ministry invites public feedback, stakeholders have a crucial opportunity to shape the final framework and ensure its effectiveness in safeguarding personal data.

“From Newcomer to Advocate: My ICANN-IG Journey”

Many people reach out to me on LinkedIn seeking guidance on how I became involved in the Internet Governance (IG) community, became a fellow, and navigated the action items required to engage with the Internet Corporation for Assigned Names and Numbers (ICANN) ecosystem. I always emphasize that this journey is deeply personal. While numerous resources, mentors, and inspiring journeys offer guidance, your unique skill set, dedication, and determination ultimately shape your path and bring true satisfaction to your career.

I began my journey into Internet Governance three years ago, starting as a newcomer to the field. My introduction to the ICANN world came while preparing a document related to India’s Public DNS. This initial exposure sparked my curiosity about ICANN’s pivotal role in managing the global DNS ecosystem. Driven by a desire to learn and grow, I discovered ICANN’s Fellowship Program—a gateway to deeper engagement.

However, my path wasn’t without challenges. I faced rejections, both in my applications through the APNIC portal (a fellowship platform distinct from ICANN’s) and in my ICANN applications themselves. Despite setbacks, my curiosity and determination led me to immerse myself in the Internet Governance community. Being part of the world’s largest youth population, I realized the importance of contributing to this space and advocating for youth participation. This journey has not only enriched my understanding but also strengthened my commitment to making a meaningful impact.

When I first began exploring the world of Internet Governance, I discovered my initial step into this ecosystem at the national level through the Youth Internet Governance Forum (YIGF). I applied to join the organizing committee and was fortunate to be selected as part of the sponsorship team. That opportunity marked the beginning of my journey into the IG community, where I started networking, engaging with like-minded individuals, and connecting with influential leaders.

Engaging with new people, exchanging ideas, and exploring this dynamic space helped me reconnect with a part of myself I believed was lost. It brought back memories of my college days when I was confident, outgoing, and proactive. This journey not only facilitated personal growth but also reignited my passion and enthusiasm, leaving me feeling reinvigorated and inspired.

Youth IGF was a beautiful and enriching experience for me, offering the opportunity to connect with young, innovative minds from across the nation. It fuelled my curiosity and eagerness to gain even more knowledge in the field of Internet Governance. Following this, I was fortunate to be selected for the ICANN78 Fellowship, which allowed me to attend the Annual General Meeting (AGM).

For those unfamiliar with the ICANN Fellowship, it is held three times a year, corresponding to ICANN’s Annual General Meeting (AGM), Community Meeting, and Policy Forum. Anyone over the age of 21 can apply for the AGM and Community Meeting fellowships, while the Policy Forum is reserved exclusively for fellowship alumni. These programs provide invaluable opportunities to engage with global Internet Governance stakeholders and deepen one’s understanding of the DNS ecosystem.

The ICANN Annual General Meeting (AGM) experience was truly overwhelming, especially as a newcomer. It’s a dynamic environment where you witness the full spectrum of diversity and the multistakeholder approach in action. At first, ICANN may seem highly technical, primarily dealing with the Internet’s names and numbers, but it is so much more than that. It unites individuals from diverse fields, including technical experts, civil society, academia, and non-technical professionals.

After attending ICANN 78, I was selected for the Policy Forum for ICANN 80 in Kigali, Rwanda. Though it was yet another new experience, this time I saw some familiar faces, and connecting with people from across the globe felt like a lifelong friendship. ICANN truly enriches you with opportunities to build global relationships while expanding your horizons.

I had the opportunity to be an APSIG Fellow at the 2024 event in Taipei, Taiwan. The community’s essence is the same – the way they work, the way they view the Internet. The key is engagement. One of the most effective ways to deepen your interest in this community is by developing the habit of reading. Reading is not only an excellent exercise for your mind, but it also helps stimulate your neurons, fostering growth and learning. While I’m not offering medical advice, I am encouraging you to embrace reading as a tool to further immerse yourself in the Internet Governance community and all it has to offer. It’s about fostering interest and continuously learning.

Describing ICANN comprehensively in a single article is almost impossible because it is an expansive platform filled with endless opportunities for growth, collaboration, and learning. It’s not just a meeting; it’s a deep dive into the interconnected world of Internet Governance, where every participant has a role to play.

My journey didn’t end here; in fact, it truly began at this point. I realized that beyond my technical understanding of the Internet, I could explore other critical perspectives as well. ICANN introduced me to Universal Acceptance (UA), which emphasizes Internationalized Domain Names (IDNs) and the importance of making domain names accessible in multiple languages. ICANN’s belief in “One World, One Internet” underscores the need for inclusivity, and language is a key component of this vision.

For a country like India, where languages change from state to state, the concept of IDNs holds significant importance. Recognizing this, I started contributing to UA initiatives. I also became involved in the Asia-Pacific Regional At-Large Organization (APRALO), which is part of ICANN’s At-Large Advisory Committee (ALAC). While ICANN’s structure might seem complex and overwhelming at first, diving into its ecosystem helped me build a clearer picture of its processes and impact. Through this journey, I found new ways to contribute and grow, making a meaningful impact in the community.

Inspired by these experiences, I, along with fellow Indian members of the Internet Governance community, am working on launching Local APIGA India. This initiative, part of the broader IG community, focuses on capacity building and educating youth about the ICANN ecosystem. It is envisioned as a regional capacity-building event aimed at empowering the next generation with the knowledge and skills needed to navigate the world of Internet Governance.

For more details, you can visit our website at apigaindia.in. While the event is still in the planning stages, updates and information will be shared soon. Stay connected with us on various social media platforms to keep up with the latest developments!


Our social media pages are as follows:
Linkedin: https://www.linkedin.com/company/105339260/admin/dashboard/
Facebook: https://www.facebook.com/share/49X39JS2P5LygZML/?mibextid=wwXIfr
Instagram: https://www.instagram.com/apigaindia?igsh=eG5rNmxmeHRtYjM5

Explore These Valuable References for Internet Governance:

Whether you’re interested in national, regional, or global initiatives, these resources will guide you on your journey:

For additional insights or questions, feel free to connect with me on LinkedIn or email me at [email protected].



India Questions ICANN Policy-2(ICP-2)

India’s Ministry of Electronics and Information Technology (MeitY) has raised concerns about ICANN’s proposals on how regional bodies manage and distribute Internet addresses, emphasizing the need for a more equitable governance model. As the global body overseeing the Domain Name System (DNS), ICANN is central to ongoing discussions about evolving Internet governance frameworks, and India is advocating for changes that reflect the interests of all stakeholders, especially nations from the Global South.

The concern arose in October 2024, when ICANN opened a comment period on its Internet Coordination Policy 2 (ICP-2), which sets the criteria for recognizing new Regional Internet Registries (RIRs). These RIRs play a critical role in managing the allocation and registration of IP addresses within their respective regions. A key point of contention in ICP-2 is a clause that grants significant decision-making authority to the Number Resource Organisation-Executive Council (NRO EC). This council, made up of the five existing RIRs, would have the power to propose the recognition or derecognition of new RIRs, subject to ICANN’s approval.

India’s concerns with this proposal are threefold: it highlights a lack of transparency in ICANN’s decision-making processes, calls for stronger accountability, and stresses the importance of global representation, advocating for a more inclusive governance structure that gives an equitable voice to nations from the Global South. These reforms, India believes, are essential to ensure a fair and representative future for global Internet governance.

Why It Matters:

The outcome of this debate holds significant importance as it will shape the future of global internet governance, influencing how the internet is managed, regulated, and governed worldwide. As a major player in the global digital economy, India’s stance carries considerable weight, with far-reaching implications for the country’s digital growth, security, and its role in shaping international internet policies.

What India Demands:

India has called for a revision of ICANN’s proposal to address its key concerns, including the need for greater transparency, accountability, and global representation. The government stresses the importance of adopting a multistakeholder approach to Internet governance, one that actively involves governments, civil society, and the private sector.

Instead of granting power to the NRO EC, MeitY has recommended a more inclusive, multi-stakeholder approach to recognising new RIRs. The Indian government has urged ICANN to involve its broader community in developing the evaluation process for new RIRs. This approach would help ensure that decision-making is fair, transparent, and not dominated by any group.

MeitY also suggested that ICANN establish an independent body to assess new RIR proposals. Such a body would ensure that the process remains transparent and balanced, helping to build trust among stakeholders and minimise any bias in the system.

India’s Concerns on Amendments and Derecognition of RIRs

India has voiced reservations about the proposed rules for amending ICP-2 and the process for derecognizing Regional Internet Registries (RIRs). The draft policy requires unanimous approval from all existing RIRs for any amendments, a condition India argues grants excessive control to these entities. While acknowledging the importance of involving RIRs, India stressed that no single registry or group should hold veto power.

The proposal, which is seen as shifting governance dynamics and increasing the role of specific stakeholders, has raised apprehensions about fairness, accountability, and alignment with national interests. As a key player in the global Internet ecosystem, India has consistently advocated for a more inclusive, multilateral approach to Internet governance, one that ensures equal representation of nations in decision-making processes.

“Balancing Growth and Bias: Most Indian Firms Concerned About AI Integrity”

“Explore India’s pivotal role in AI governance, addressing data bias, and fostering equitable advancements. Learn how businesses can balance innovation with responsibility to drive ethical AI leadership.”

In recent years, the world has rapidly embraced AI, transforming industries and reshaping everyday experiences. As one of the fastest-growing economies, India is emerging as a global leader in technology and AI adoption. The country’s AI market is projected to reach $17 billion by 2027, with an impressive annual growth rate of 25-30%. While many organizations view AI as a reliable driver of growth at both individual and organizational levels, others struggle with the challenges it poses, particularly data bias. A recent report suggested that 69% of Indian companies are concerned about data bias, while 55% trust AI/ML and are working to increase their reliance on it.


Data bias is a significant challenge in any system, but in AI, it demands even greater attention and action. When data used in AI systems is biased, it results in outputs that are skewed, discriminatory, or misinterpreted—particularly in critical domains such as finance, healthcare, and recruitment. The impact of AI bias extends beyond technical issues, influencing societal concerns like gender inequality. In response to this challenge, the Indian government, through NITI Aayog, has moved to address these biases and promote the development of responsible AI. However, the real test lies in how effectively organizations implement these guidelines to ensure fair and ethical AI practices.

Responsible AI practices

The responsibility for addressing AI bias largely rests with companies rather than individual users. Organizations must take proactive steps, such as investing in ethical and robust frameworks, and training data scientists to understand ethical considerations and mitigate risks associated with AI. Collaboration with stakeholders—including academia and industry peers—is crucial to staying informed about best practices and advancements in the field.

India’s NITI Aayog has outlined guidelines for responsible AI, emphasizing integrity, security, inclusivity, and alignment with international standards like the European GDPR (General Data Protection Regulation). Beyond adhering to these guidelines, companies must recognize the unique dynamics of the Indian market, balancing local and international standards to create equitable and trustworthy AI systems that not only meet requirements but also foster customer confidence.

Learning from Other International Regulations

Addressing data bias transparently and committing to responsible AI practices are crucial for building public trust and confidence in technology. By 2026, Gartner predicts that 50% of global governments will mandate the implementation of responsible AI practices, reflecting the growing recognition of AI’s transformative impact and the need for ethical oversight.

In India, the regulatory landscape is evolving rapidly, with significant measures like the Digital Personal Data Protection Act (DPDP). This act underscores critical principles such as consent, accountability, and transparency in managing personal data. It aims to balance the protection of individual rights with the lawful processing needs of organizations, ensuring ethical and equitable data practices.


As these frameworks take shape, Indian organizations must not only comply with emerging regulations but also take proactive steps to integrate these values into their AI systems. This involves adopting robust mechanisms for consent management, establishing accountability measures, and maintaining transparency throughout the data lifecycle. By aligning with both domestic regulations and global standards, India can lead in fostering responsible AI development, paving the way for innovation that is both ethical and inclusive.
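The consent-management and accountability mechanisms described above can be sketched in code. The following is a minimal, hypothetical Python sketch (the class and method names are my own, not drawn from the DPDP Act or any real library) of an append-only consent ledger in which the most recent record per user and purpose wins, so a withdrawal of consent immediately blocks further processing:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str          # e.g. "marketing", "analytics"
    granted: bool
    timestamp: datetime

class ConsentLedger:
    """Minimal append-only ledger: the latest record per (user, purpose) wins.
    Illustrative only -- a real system would add persistence and audit trails."""
    def __init__(self):
        self._records = []

    def record(self, user_id, purpose, granted):
        # Append rather than overwrite, preserving a full history for audits.
        self._records.append(ConsentRecord(
            user_id, purpose, granted, datetime.now(timezone.utc)))

    def is_permitted(self, user_id, purpose):
        # Walk backwards so the most recent decision takes precedence.
        for rec in reversed(self._records):
            if rec.user_id == user_id and rec.purpose == purpose:
                return rec.granted
        return False  # no consent on file -> do not process

ledger = ConsentLedger()
ledger.record("u1", "analytics", granted=True)
ledger.record("u1", "analytics", granted=False)  # user withdraws consent
print(ledger.is_permitted("u1", "analytics"))  # False
print(ledger.is_permitted("u2", "analytics"))  # False (never asked)
```

The append-only design is the point: it keeps the transparency and accountability trail the DPDP principles call for, since every grant and withdrawal remains on record.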

India’s role in shaping AI governance is pivotal, reflecting its proactive stance in past international negotiations. Much like its leadership in climate change discussions, India has the opportunity to champion equitable AI governance by advocating for inclusion and representation from the Global South. However, as the country experiences rapid AI-driven growth, it becomes imperative to address the pressing issue of data bias to ensure advancements are fair, inclusive, and beneficial for all.

To tackle these challenges, businesses must prioritize reliability checks to identify and mitigate AI data biases. They should also evaluate their AI practices to align with ethical standards and global best practices. By adopting a collaborative approach that balances innovation with responsibility, India can position itself as a global leader in AI governance, setting an example of how technology can drive progress while upholding ethical and inclusive values.
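As one concrete, deliberately simplified example of such a reliability check, the following Python sketch computes per-group positive-outcome rates and a disparate-impact ratio on toy shortlisting data. All names and data here are illustrative, and the 0.8 threshold mentioned in the comment is a common rule of thumb for flagging results for review, not a legal or universal standard:

```python
from collections import Counter

def selection_rates(records):
    """Positive-outcome rate per group, from (group, outcome) pairs."""
    totals, positives = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records, protected, reference):
    """Ratio of the protected group's rate to the reference group's.
    A ratio below ~0.8 is a common rule-of-thumb flag for manual review."""
    rates = selection_rates(records)
    return rates[protected] / rates[reference]

# Toy shortlisting data: (group, was_shortlisted)
data = ([("A", True)] * 60 + [("A", False)] * 40 +
        [("B", True)] * 30 + [("B", False)] * 70)

ratio = disparate_impact(data, protected="B", reference="A")
print(round(ratio, 2))  # → 0.5, i.e. group B is shortlisted at half group A's rate
```

A check like this is cheap to run on every model release; a ratio well below parity does not prove discrimination, but it tells an organization exactly where to look before deployment.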

The Ongoing Debate on Multistakeholderism

Last week, I had the opportunity to engage in a thought-provoking discussion about the widely debated concept of ‘Multistakeholderism’, a key topic within the Internet Governance community. The session, part of an ongoing program with the Internet Society, featured an open discussion where participants presented both pro and con arguments. The Zoom virtual town-hall format allowed for dynamic interaction between speakers and the audience, sparking insightful debates, supported by facts and evidence, on the strengths and challenges of the multistakeholder model.

During the discussion, I began to realize that while the multistakeholder model is often lauded for its emphasis on inclusivity, it does have its drawbacks. At its core, the model advocates for the involvement of a wide range of stakeholders—governments, private sector actors, civil society, technical experts, and others—in decision-making processes related to Internet governance. On the surface, this appears to be a democratic and inclusive approach. However, as the conversation progressed, it became clear that the model is not without its complexities. I’m reflecting on what I’ve learned personally from the discussion.

Why Multistakeholder Model?

The “multistakeholder model” is often portrayed as a single solution, but in reality, it’s not a one-size-fits-all approach. Instead, it’s a flexible framework that enables diverse individuals and organizations to come together, exchange ideas, and build consensus. It’s akin to a well-organized orchestra: while each instrument brings its own sound, they work together to create a harmonious performance. The multistakeholder approach thrives on collaboration, adaptability, and the ability to draw on various areas of expertise to address complex global issues.

For instance, consider the process of designing a public transportation system. A top-down, government-only approach might miss critical insights from local communities, engineers, and environmental experts. In contrast, a multistakeholder approach brings together city planners, citizens, transportation experts, and environmentalists to develop a system that meets the diverse needs of the population. This approach is most effective in situations where decisions affect a broad range of people, require expertise from different fields, and demand legitimacy to ensure successful implementation. Since the 2005 World Summit on the Information Society (WSIS), stakeholders have widely endorsed multistakeholderism as the preferred approach for addressing Internet governance and other complex global challenges.

Critiques of the Multistakeholder Model

The idea of “inclusivity” often implies a balance of power among all stakeholders, but in practice, this balance can be difficult to achieve. The Internet, as a global and decentralized entity, doesn’t belong to any single individual or organization, and no one stakeholder group has full control over its operation or development. However, despite the intention to ensure broad representation, certain groups—especially those with more financial, technical, or political power—can dominate discussions, potentially sidelining less powerful voices. This raises questions about the true level of inclusivity that the model offers, and whether it can truly capture the diverse interests and needs of all who are affected by the Internet, especially marginalized or underrepresented communities.

A key challenge in defining multistakeholderism is that no clear, agreed-upon definition exists for who qualifies as a “stakeholder.” While the concept suggests that anyone with an interest in an issue should have a voice, it doesn’t explain how we select stakeholders or what makes someone legitimately part of the conversation. This lack of clarity leads to different groups using various, sometimes conflicting, definitions of who counts as a stakeholder—despite the fact that the legitimacy of the entire governance process depends on this definition.

For example, in Internet governance, we could consider every user a stakeholder, but how do we actually represent these users? Additionally, one person may be seen as representing multiple identities, such as geographic region, gender, or other aspects, but does that mean they truly represent everyone within those categories? For instance, does the involvement of one woman in a multistakeholder setting truly reflect gender representation? Similarly, does having one civil society representative mean the full complexity of civil society is represented? These questions highlight the complexities and limitations of the multistakeholder model in practice.

In conclusion, while the multistakeholder model is designed to be a collaborative and inclusive framework, its implementation often requires balancing competing interests, addressing power imbalances, and ensuring that all voices are heard and valued equally. This remains a dynamic and ongoing debate that calls for continued dialogue, open forums, and active participation, particularly from younger generations. The future of multistakeholderism depends on our collective ability to engage in these discussions and work towards more inclusive, transparent, and effective governance structures.

“RBI’s Vision for Aatmanirbhar Bharat: Launching Local Cloud Services by 2025”

Addressing ongoing concerns over data privacy and the longstanding dominance of global IT giants in the cloud services domain, Bharat’s (India’s) central bank, the Reserve Bank of India (RBI), is reportedly gearing up to launch its own cloud services by 2025. The aim of this initiative is to enhance the country’s digital infrastructure and ensure the security of financial data. Described as a “first-of-its-kind initiative”, the project’s objective is to decrease reliance on foreign cloud providers: the platform will leverage solutions from domestic IT firms, offering alternatives to services like Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and IBM Cloud, and enhancing control over critical financial infrastructure.


The RBI’s public cloud service plans were first announced in December 2022. The initial development is being led by Indian Financial Technology and Allied Services (IFTAS) in collaboration with private tech companies, with consultancy firm EY appointed as an advisor. The initial funding, amounting to ₹229.74 billion ($2.72 billion), will be sourced from the central bank’s asset development fund. Over time, financial firms will be invited to invest and acquire equity in the cloud service. The RBI has restricted bidding for the project to companies incorporated in India with prior experience in developing cloud-related solutions, underscoring its focus on localizing digital infrastructure for payments and financial data storage and processing, according to the report.

The RBI’s initiative to launch its own cloud services comes with both significant opportunities and challenges, especially given the scale of providing cloud infrastructure to the world’s most populous nation.

The ‘PROS’ include:
Enhanced data sovereignty: The platform would guarantee that critical financial data is stored and processed within India, minimizing the risks linked to foreign data management.

Being self-reliant: By providing a local alternative, the RBI could challenge the dominance of companies like Google, Amazon, and Microsoft in India’s financial ecosystem. Building in-house capabilities would also help mitigate the risks posed by geopolitical tensions affecting foreign tech providers.

Growth platform for domestic IT firms: The project would create opportunities for Indian IT companies, driving growth and innovation within the country.

Improved Security and Control with Innovation: A central bank-controlled platform could provide enhanced security and ensure better regulatory compliance for critical financial data. By fostering competition, this initiative could drive innovation among both domestic and global cloud service providers.

Cost efficiency: In the long term, localized cloud services could help lower operational costs for Indian financial institutions compared to using international providers.

The ‘CONS’ include:
Major investment: The project requires a significant upfront investment, with initial funding of ₹229.74 billion ($2.72 billion).

Cloud technology challenges: Building a scalable and secure cloud service that rivals the infrastructure and reliability of global giants like Amazon, Google, and Microsoft is a considerable challenge. Moreover, Indian IT firms may face difficulties in acquiring the advanced expertise necessary to manage a national-level cloud platform capable of supporting India’s expansive financial system.

Adoption Challenges and costs: Many financial institutions are already dependent on global cloud providers, which may make them hesitant to transition to a new, untested local platform. Additionally, continuous investment will be required to stay ahead in the rapidly evolving cloud technology landscape while maintaining high security and uptime standards.

Monopoly concerns: The RBI-managed platform could dominate the market, potentially stifling competition and limiting innovation. This could create a centralized ecosystem where fewer players control the landscape, reducing the incentives for both domestic and global providers to innovate or offer competitive services. In the long run, this lack of competition could lead to inefficiencies and slower technological advancements.

Scalability and rollout challenges: Considering India’s vast size and diverse financial ecosystem, scaling the platform to meet the needs of the entire sector could take several years. The process would involve significant time and resources to ensure the platform can handle the complexities of India’s financial institutions, varying regional requirements, and growing data demands. This gradual rollout could delay widespread adoption and effectiveness, posing a challenge in meeting the rapidly evolving needs of the financial sector.

RBI’s initiative to develop a localized cloud platform offers promising benefits for data security and the growth of domestic IT, aligning with the vision of Aatmanirbhar Bharat. However, the project faces considerable challenges, particularly in terms of technical complexity, financial investment, and adoption by financial institutions. While this move can enhance self-reliance and reduce dependence on foreign providers, successfully launching such a large-scale initiative would be a monumental achievement. Yet, the obstacles it presents must not be underestimated, as it requires careful planning and execution to meet the diverse needs of India’s financial sector.




“Balancing Screens and Minds for Teens: The Challenge of Managing Social Media”

Concerns are rising over children spending too much time on social media and joining platforms at inappropriate ages. While India hasn’t addressed this issue yet, Australia is taking action by proposing a ban on social media for users under 16. This growing concern may prompt similar measures in India.

Australia is taking a bold step that few countries have dared to follow, proposing a ban on social media for those under 16. Not to be mistaken for a blanket internet restriction, this initiative specifically targets platforms like Instagram and Facebook due to their potential harmful effects on young people, including impacts on body image, social anxiety, and digital dependency. The question is whether this measure will truly “weather-proof” young Australians from these issues.

Prime Minister Anthony Albanese stated, “This one is for the mums and dads… who, like me, worry deeply about our kids’ online safety. Australian families, your government is on your side.” He also described the proposal, which is set to be introduced to parliament next week, as “world-leading” legislation to curb the “harm” social media inflicts on Australian children. While the specifics are still under discussion, it’s clear that if enacted, this legislation will have far-reaching implications, even for those young users already active on social media.

Some experts believe that banning apps like TikTok, Instagram, and Facebook only delays young people’s exposure to these platforms, rather than teaching them essential skills to safely navigate the online world. Previous efforts to restrict access, including those in the European Union, have largely been ineffective or met with pushback from tech companies. There’s also uncertainty around how such bans would be enforced, especially with the availability of tools to bypass age-verification.

In an open letter sent to the government in October, over 100 academics and 20 civil society organizations, led by the Australian Child Rights Taskforce, called on Prime Minister Albanese to prioritize “safety standards” for social media platforms instead of outright bans. They pointed to UN recommendations, which suggest that national policies should focus on providing children with safe opportunities to engage with digital spaces. At the same time, grassroots campaigners in Australia are pushing for these laws, arguing that bans are necessary to protect children from harmful content, misinformation, bullying, and other social pressures online.

Backlogs of Social Media

Facebook’s internal research, made public in 2021, revealed that Instagram was aware of its negative effects on teenage girls, especially around body image issues, self-esteem, and unrealistic standards of validation based on appearance. Studies indicate that children who spend over three hours daily on social media are twice as likely to experience mental health challenges like depression and anxiety, fueled by comparisons with others’ seemingly ideal lives. Social media usage can disrupt focus during school or homework, while browsing on mobile devices before bed is linked to poor sleep quality. Additionally, frequent social media exposure can impact brain areas related to emotions and learning—particularly significant as children and teens develop their sense of self between ages 10 and 19.

What challenges can resistance bring?

Young teens often resist restrictions, feeling entitled to the same access to technology as their peers. This resistance can lead to conflicts with parents, resulting in interpersonal stress and potential mistrust. Bans may not be effective; similar to alcohol restrictions, prohibiting access can sometimes increase desire. Tech-savvy teens are likely to find ways around limitations, which may also impact their ability to process online interactions in the future. On the other hand, reducing social media time can help lower risks of cyberbullying, online harassment, and body shaming. Furthermore, limiting app usage encourages children to spend more time outdoors, participate in physical activities, pursue hobbies, and adopt healthier habits overall. Excessive screen time often leads to a sedentary lifestyle, while balanced restrictions can promote a more active, engaged, and socially connected childhood.

Keeping both sides in view, the challenge is what, how, and when to educate young minds to channel their energy toward a better future. In a country like India, bans and restrictions alone won’t change anything; the real challenge of managing social media use among young people is complex. Simply imposing a ban or restrictions on platforms for those under a certain age might not have the desired impact, as enforcing these rules can be difficult given the sheer number of users and access points. Tech-savvy teens can find ways around restrictions, and blanket bans could push usage underground, making it harder for parents and educators to monitor online activities.

The real issue lies in determining the right approach, timing, and tools to educate young people on using digital platforms responsibly. With India’s vast youth population, the focus should ideally be on proactive digital education, helping young users understand the long-term impact of their online interactions and encouraging them to harness technology in ways that support their growth. Rather than a one-size-fits-all ban, developing age-appropriate digital literacy programs in schools, promoting open discussions at home, and setting practical boundaries can build a healthier digital culture. Empowering young users with the skills to navigate online spaces safely could be more effective and sustainable than enforcing restrictions that may lead to unintended consequences.

Like many other challenges, this issue requires ongoing discussion and a collaborative approach, particularly involving parents. A dedicated research team focused on understanding young minds and their digital behaviors is crucial. Australia has taken steps based on what they believe needs to be done, and now, it will be interesting to see what India contributes to this conversation in pursuit of meaningful change.