India’s DPDP Act – A Ticking Clock for Indian Businesses

It’s May 14, 2027. Your company experienced a data breach three weeks ago that exposed customer names, phone numbers, and payment details. Your team patched the vulnerability, changed a few passwords, and sent an internal email saying “handled.”

Nobody told the Data Protection Board. Nobody notified the affected customers. There was no formal incident response. Just an internal email and the hope that nobody would notice.

The Data Protection Board noticed. And now you’re looking at a penalty notice for ₹250 crore, for failing to implement reasonable security safeguards. Another ₹200 crore for not reporting the breach. All because a law that’s been on the horizon for years finally has teeth, and your organization wasn’t ready when it bit. This isn’t a hypothetical designed to scare you. It’s the exact scenario the DPDP Act was designed to address. And May 13, 2027, is the date those consequences switch on.

The Clock Is Already Running

India’s Digital Personal Data Protection Act was passed in August 2023. The operational rules were notified in November 2025. That notification started an 18-month countdown to full enforcement.

We are inside that window right now. May 13, 2027, is the date on which every substantive obligation under the Act becomes simultaneously enforceable: consent mechanisms, privacy notices, breach reporting systems, data retention policies, user rights, and children’s data protection. All of it, at once, with no grace period after the deadline.

Here’s the part most businesses haven’t properly absorbed: the 18-month window wasn’t meant for waiting; it was meant for building. The period from November 2025 to May 2027 is intended to be spent creating the systems, controls, contracts, and governance structures that compliance actually requires.

Most enterprises we speak with are treating May 2027 as a start date. That’s exactly backwards.

Who Does This Apply To? Almost Everyone

Before some readers convince themselves this doesn’t apply to them, let’s be clear about scope. The DPDP Act applies to any organization, regardless of size or sector, that processes the digital personal data of individuals in India. Personal data is broadly defined and includes names, phone numbers, email addresses, IP addresses, device identifiers, financial data, health information, behavioral data, and cookies.

If you run an e-commerce platform, a fintech app, a healthcare service, an HR system, a SaaS product, a logistics operation, or an edtech platform, you’re in scope. If you’re a B2B company that stores client contacts in a CRM, you’re in scope. If you’re a startup with a few thousand users, you’re in scope. And here’s what catches international businesses off guard: the Act has extraterritorial reach. If you’re a company headquartered outside India but offering goods or services to Indian users, the law applies to you too, much like its European counterpart, the GDPR. The DPDP Act follows the data, not the geography of whoever holds it. The honest question isn’t “does this apply to us?” For most businesses, it does. The question is what you’re required to do about it, and whether you’ve started.

What the Law Actually Requires

A lot of DPDP “compliance” being done right now is surface-level. A revised privacy policy, a new cookie banner, a legal team sign-off: that’s not compliance, that’s theatre. It will fail the first real audit. Here’s what the Act actually requires, in plain terms.

1) Consent means something. The DPDP Act doesn’t accept the kind of consent most Indian businesses currently collect. Bundled consent (a single checkbox covering multiple purposes) is invalid. Each processing purpose needs its own specific, informed, unambiguous consent. If you collect data for personalization, analytics, and marketing, that’s three separate consents. If you haven’t redesigned your consent flows yet, this alone is a significant piece of work.
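To make “one purpose, one consent” concrete, here is a minimal sketch of purpose-scoped consent records in Python. The store, field names, and purposes are illustrative assumptions, not anything the Act or a particular consent-management product prescribes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """One consent per user per purpose -- never bundled."""
    user_id: str
    purpose: str          # e.g. "personalization", "analytics", "marketing"
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class ConsentStore:
    def __init__(self):
        # Keyed by (user, purpose): each purpose is answered independently
        self._records: dict[tuple[str, str], ConsentRecord] = {}

    def record(self, user_id: str, purpose: str, granted: bool) -> None:
        self._records[(user_id, purpose)] = ConsentRecord(user_id, purpose, granted)

    def has_consent(self, user_id: str, purpose: str) -> bool:
        rec = self._records.get((user_id, purpose))
        return rec is not None and rec.granted


store = ConsentStore()
store.record("u1", "analytics", True)
store.record("u1", "marketing", False)
print(store.has_consent("u1", "analytics"))  # True
print(store.has_consent("u1", "marketing"))  # False: separate purposes, separate answers
```

The key design point is the composite key: consent for analytics tells you nothing about consent for marketing, so a single boolean per user can never be compliant.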

2) Privacy notices that are actually readable. Notices must be standalone documents, separate from your terms and conditions, written in plain language, available in English and regional languages, clearly explaining what data you collect, why you collect it, and what rights the user has. Most current privacy policies don’t meet this bar.

3) A 72-hour breach clock. When a data breach occurs, you have 72 hours to notify the Data Protection Board. Not 72 hours to investigate, or to decide whether it’s serious enough to report: 72 hours to notify. That requires an incident response process capable of identifying, assessing, and escalating a breach within hours, not days, and certainly not on “Indian Stretchable Time.” Most Indian organizations lack a tested incident response plan.
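The 72-hour window is easy to state and easy to lose track of mid-incident. A tiny sketch of the deadline arithmetic (the 72-hour figure comes from the Act as described above; the timestamps and function names are illustrative):

```python
from datetime import datetime, timedelta, timezone

BOARD_NOTIFICATION_WINDOW = timedelta(hours=72)


def notification_deadline(detected_at: datetime) -> datetime:
    """Latest moment the Data Protection Board can be notified."""
    return detected_at + BOARD_NOTIFICATION_WINDOW


def hours_remaining(detected_at: datetime, now: datetime) -> float:
    """How much of the 72-hour clock is left (negative = already late)."""
    return (notification_deadline(detected_at) - now).total_seconds() / 3600


detected = datetime(2027, 5, 1, 9, 0, tzinfo=timezone.utc)
now = detected + timedelta(hours=30)  # 30 hours into the incident
print(hours_remaining(detected, now))  # 42.0 hours left on the clock
```

Note the clock starts at detection, not at the end of the investigation; an incident response plan that spends three days triaging has already blown the deadline.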

4) Real infrastructure for user rights. Individuals can request to see their data, correct it, or have it deleted. They can nominate someone else to exercise those rights on their behalf. You need a process to receive these requests, verify the person, fulfill the request across every internal system that holds their data, and complete the whole thing within 90 days. Building that end-to-end capability is harder than it sounds, especially when data lives across a product database, a CRM, a support system, and three SaaS tools.
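One way to picture the fan-out problem: an erasure request has to reach every system that holds the person’s data, and every per-system outcome has to be recorded so a failure becomes a follow-up task rather than a silent gap. A hypothetical sketch, with the system names and stub delete functions as assumptions:

```python
def fulfil_erasure(user_id, systems):
    """Fan an erasure request out to every system holding personal data,
    recording per-system results so nothing silently slips through."""
    results = {}
    for name, delete_fn in systems.items():
        try:
            results[name] = delete_fn(user_id)
        except Exception:
            results[name] = False  # flag for manual follow-up
    return results


# Illustrative stand-ins for a product DB, a CRM, and a support tool
systems = {
    "product_db": lambda uid: True,
    "crm": lambda uid: True,
    "support_desk": lambda uid: False,  # e.g. the vendor API call failed
}
print(fulfil_erasure("u42", systems))
# {'product_db': True, 'crm': True, 'support_desk': False}
```

The real work, of course, is knowing that the `systems` dictionary is complete, which is exactly what the data-mapping exercise produces.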

5) Vendor contracts that reflect the new reality. Every third party that processes personal data on your behalf, like cloud providers, analytics tools, payment processors, and support platforms, must now carry compliance obligations. Most contracts signed before 2025 don’t include the security clauses required by the DPDP Act. That means reviewing and renegotiating contracts. Not a handful, every processor in your chain.

6) A named grievance officer. A contact point, whether a person or a team, must be publicly listed on your website or app before May 2027, and complaints from users must be addressed. This sounds simple, but it requires a working complaints process behind it to mean anything.

The Penalties And Why the Fine Might Not Be the Worst Part

Let’s look at the numbers, because they matter.

  • ₹250 crore, for failing to implement reasonable security safeguards
  • ₹200 crore, for failing to notify the Board or affected individuals of a breach
  • ₹150 crore, for violations involving children’s data
  • ₹50 crore, for other breaches of Data Fiduciary obligations

These are per-violation figures. A single data breach incident can trigger multiple violations simultaneously: inadequate security, failure to notify individuals, and failure to notify the Board. Cumulative exposure from one incident can reach ₹650 crore.
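As a back-of-the-envelope check on that stacking risk (figures in ₹ crore from the schedule above; whether violations actually stack this way in a given case is for the Board to decide):

```python
# Penalty caps in ₹ crore, per the schedule above
penalties = {
    "inadequate_security_safeguards": 250,
    "failure_to_notify_breach": 200,
    "children_data_violations": 150,
    "other_fiduciary_breaches": 50,
}

# A single breach that trips just the first two categories:
breach_exposure = (penalties["inadequate_security_safeguards"]
                   + penalties["failure_to_notify_breach"])
print(breach_exposure)  # 450

# Worst case, every category at its cap:
print(sum(penalties.values()))  # 650
```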

But here’s the thing most people miss when they focus on the fines: the Data Protection Board has the authority to order a halt on data processing while an investigation is underway. For a bank, for a payments platform, for any business where data processing isn’t a supporting function but the product, an operational suspension isn’t a fine you can absorb. It’s a threat to the business itself.

The fine you can budget for. The suspension you can’t always survive.

Why “We’ll Sort It Before the Deadline” Doesn’t Work

The organizations that implemented GDPR know exactly how this plays out. The timeline that looks comfortable eighteen months out compresses dramatically once the actual work begins. Data mapping alone, going system by system to understand what personal data you hold, where it lives, who touches it, and why, takes weeks, not days. And you can’t build compliant consent flows, privacy notices, or user rights infrastructure without it. Everything downstream depends on it. Then come the vendor reviews, then the technical implementation of consent management, security hardening, and breach notification protocols, then the user rights infrastructure, then testing, validation, staff training, and more: a complete Plan-Do-Check-Act (PDCA) cycle.
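Data mapping sounds abstract until you write a row of it down. A minimal, hypothetical inventory structure capturing the four questions above (field names and example systems are assumptions for illustration):

```python
from dataclasses import dataclass


@dataclass
class DataMapEntry:
    """One row of a personal-data inventory: what, where, who, why."""
    data_element: str   # what personal data is held
    system: str         # where it lives
    owner: str          # who touches it
    purpose: str        # why you hold it


inventory = [
    DataMapEntry("email_address", "product_db", "engineering", "account login"),
    DataMapEntry("email_address", "crm", "sales", "lead follow-up"),
    DataMapEntry("phone_number", "support_desk", "support", "callback requests"),
]

# The same element often lives in several systems -- exactly why mapping
# takes weeks, and why user-rights requests have to fan out everywhere
systems_holding_email = {e.system for e in inventory if e.data_element == "email_address"}
print(sorted(systems_holding_email))  # ['crm', 'product_db']
```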

Each of these is a project. They can run in parallel — but they can’t all start in March 2027. Organizations that begin serious implementation work after mid-2026 will be operating under severe time compression. Some won’t make it.

The correct starting point is not a revised privacy policy. It involves appointing a compliance owner with actual authority, mapping your data, assessing your gaps against the specific obligations that apply to your business, and beginning structured implementation in that order.

The Opportunity Nobody Talks About

Here’s the thing about the DPDP Act that doesn’t get said enough. It’s not just a compliance burden, it’s an opportunity to do something your customers will actually notice and value: give them genuine control and transparency over their own data.

The businesses that approach this honestly, building systems that actually work rather than ones that look compliant enough to pass an audit, will emerge from this with something real: customer trust. Better data hygiene. Stronger processes. A privacy posture that holds up as regulation tightens globally, because India won’t be the last place to legislate on this.

Those treating it as a box to tick will find themselves scrambling when enforcement begins. And scrambling is far more expensive than building it right the first time.

Where Does Your Organization Stand?

Some honest questions worth answering, not for anyone else, just for yourself:

  • Do you know exactly what personal data your organization holds, where it lives, and why you have it?
  • Have your consent flows been redesigned for DPDP: separate, specific, purpose-by-purpose consent?
  • Do you have an incident response process capable of moving at 72-hour speed?
  • Have your vendor contracts been reviewed against DPDP requirements?
  • Is there a named person in your organization with the authority and budget to get this done?

If the answers are unclear, that’s your signal. Not to panic, but to start. The deadline is fixed, the work is substantial, and the window is narrowing by the day.

May 13, 2027, is not a target date. It’s a cutoff. The businesses that will be fine on May 14, 2027, are those that started in 2026, not those that haven’t even scoped the work.

“Is Public Interest Technology Just a Buzzword?”


I was introduced to terms like Internet governance four years ago. Since then, one new word after another has surfaced in the tech-policy space. When I came across the phrase public interest technology, I found myself pausing: How exactly does this connect to what we often see around us? And in a world where almost everything is built around making a profit, can this idea really find a place?

Public interest technology refers to the study and application of technology expertise to advance the public interest, to generate benefits for society and promote the public good. It doesn’t mean simply building the “next app” for the mass market, but asking: Who is this technology for? What impact does it have beyond revenue? How are communities being engaged so that the technology doesn’t inadvertently exclude or harm?

When I say “inclusion and equity” in this context, I do not mean only gender or rural-versus-urban divides (though both matter). I’m talking about a broader idea: technology designed and governed so that anyone, regardless of income, language, disability, digital-skill level, location, or other circumstance, has the capacity and opportunity to engage, benefit from, and not be left behind. For example: accessibility for persons with disabilities, multilingual interfaces, low-bandwidth versions of apps, and inclusive design that anticipates users with low digital literacy. These dimensions matter because the barriers to participation in tech are many and intersecting.

From one perspective, the appeal is compelling. Imagine designing a digital service with accessibility built in, community-driven features, fairness in algorithms, and transparency in how data is used. When implemented effectively, technology can expand access to essential services (such as education and healthcare), empower historically marginalized groups, and help reduce inequality. Technology becomes not just a tool for commerce, but a tool for inclusion and justice.

Yet from another view, the feasibility question looms large. Many tech firms, investors, and start-ups operate under business models that expect scale, monetisation, rapid growth, and market competition. Designing for inclusion or public interest often involves additional costs, a slower rollout, more engagement with user communities, localization, and more careful governance. The incentives may not align: underserved populations may be more complex to serve, with less immediate profit potential; inclusive design may not be rewarded in the same way as a feature that boosts user numbers or ad revenue. There is also a measurement problem: how do you capture “fairness”, “access”, or “dignity” as business metrics? Without strong alignment of incentives, inclusive features risk being sidelined.

So where do these two paths meet? Where does the promise of public interest technology sit in a profit-driven world? One possibility is when value is redefined: if inclusive design becomes a source of new markets (for example, by reaching underserved users), if social license, trust, and reduced risk become integral to the business strategy. Another is when business models become hybrid, combining private sector, public funding, and philanthropic support. Additionally, when regulations, procurement policies, or public-sector contracts demand or reward inclusive features, incentives shift. Or when the ecosystem builds the infrastructure, norms, and tools, inclusive design becomes less costly, easier, and more standardized.


When I think of my context (India and the Global South), the stakes become quite concrete. The digital divide is real. Multilingual diversity, rural-urban gaps, access issues, these are not abstract. Suppose technology is designed with inclusion in mind. In that case, huge populations can be brought into digital services, and new users can be accessed. However, the pressures of cost, scalability, and profit motive remain simultaneously. The question becomes: How can inclusive design be effectively integrated into a business strategy, rather than remaining a side project? What role must policy, public procurement, regulation, and community engagement play so that inclusion is viable, not optional? How do technologists, designers, business leaders, and policymakers align so that public interest and profit aren’t at odds but intertwined?

In what ways can we expect technology companies to embed public interest values when the ecosystem rewards speed, scale, and monetisation? Conversely, what does a truly public-interest-oriented technology initiative look like when it must survive and sustain itself in a market economy? Perhaps the intersection lies not in choosing one side over the other, but in asking how the structures around technology, business models, funding mechanisms, regulation, community participation, need to shift so that public interest becomes part of the equation rather than an afterthought.

I’m not claiming I have the answer. But I believe this is one of the questions we must keep alive: Can technology designed for profit also truly serve inclusion and equity — and if so, under what conditions?

Quantum India: At the Edge of a New Technological Epoch

India’s quantum technology ecosystem is expected to undergo significant change around 2025. There is a noticeable uptick in conversations and advancements on quantum innovation in academic institutes, startup ecosystems, and critical policy domains. Even as artificial intelligence dominates the headlines, quantum computing and communication are subtly paving the way for a more profound technological shift that may completely alter how India protects its data, models complexity, and constructs infrastructure for the future.


The establishment of the National Mission on Quantum Technologies and Applications (NMQTA) in 2020, supported by an ₹8,000 crore budget, set the foundation for this momentum. We are now starting to see results. This year, Bengaluru-based startup QpiAI unveiled a 25-qubit quantum computer, an achievement that signals India’s growing capability in indigenous quantum hardware.

Research in quantum sensing, encryption, and simulation has accelerated at government and academic institutions such as IISER Pune, C-DAC, and IIT Madras. Infrastructure development and experimentation are being accelerated by partnerships with international collaborators. In addition to enhancing India’s technical prowess, these partnerships establish the nation as a legitimate player in the development of international quantum governance and standards.


Even with these advancements, there is still a long way to go. The lack of qualified quantum professionals is one of the most urgent issues. Because quantum development is interdisciplinary, it requires knowledge of physics, computer science, engineering, and systems design. Building such capacity at scale will take significant investment in curriculum reform, research funding, and domestic fellowships that encourage sustained involvement in the field.

Hardware independence is another important challenge. India still relies on imported parts for quantum processors, cryogenic equipment, and specialized chips, even as software algorithms and simulations are being produced domestically. This reliance may limit the speed and independence of India’s innovation cycle. Creating a fully robust quantum ecosystem will require developing domestic hardware capabilities through public-private cooperation and focused industrial support.

India, however, has a clear edge. Our experience developing scalable digital public goods, such as DigiLocker and UPI, has demonstrated that technological change need not be exclusive or top-down. When used properly, quantum technologies have the potential to improve public services, ranging from critical infrastructure protection and defense encryption to healthcare diagnostics and climate modeling. Equity and ethics must be prioritized, though, just like with any new technology. Quantum systems could lead to the concentration of political, economic, and technological power. We run the risk of reproducing current digital gaps in even more opaque and unaccountable ways if we don’t have broad access, open research, and transparent governance.


More than just celebration is required at this time; strategic patience, policy vision, and group creativity are needed. Quantum is a foundation to carefully lay, not a fad to follow. And 2025, with all of its fervor and focus on quantum, might be regarded as the year India ceased to be a spectator and began to influence the discourse.

As someone who works at the nexus of technology, policy, and community empowerment, I see this change as a chance not just to innovate, but to do so in a way that is inclusive, ethical, and deliberate. The tale of quantum technology in India is still being written. Let’s make sure it’s a story that everyone can enjoy.

Are you keeping up with India’s progress in tech policy? Leave a comment below, follow me on LinkedIn, or connect with me via email at [email protected]

The VPN Dilemma: Balancing Privacy, Security, and Digital Innovation

“Hello, I’m new to the community. I’ve been facing issues connecting to 1.1.1.1 with WARP since yesterday. It was working fine before, but the problem started after my ISP performed some maintenance. I suspect the issue might be related to the ISP. Is there any possible solution for this?” When I searched Reddit for answers about why WARP (aka 1.1.1.1) is not working, I found many similar comments, like:
“I believe that ISP has to do something with that because I am getting this issue after ISP maintenance.”

Curiosity led me to search for more articles on Reddit and other platforms, but unfortunately, I found very few, and they contained too little information.

Drawing from my five years of experience working and writing on technological aspects, I delved into understanding the dynamics of blocking services like 1.1.1.1. The reasons often seem to be tied to political and geographical factors, with the most common justification being “national security” and concerns over confidential data.

“I have been using 1.1.1.1 WARP from India, but 1.1.1.1 WARP mode is not working on the Jio network, while the normal private DNS is functioning. Reset network settings: Done. Reboot device: Done. Always-on VPN: Done. Clear cache and storage: Done. Uninstall and reinstall: Done. Reset private keys: Done. Still, WARP mode is not working. What should I do? And what is the reason behind this?” (query quoted from a community page)
Many more solutions like this have been shared in the community pages, but sadly, nothing works. I am obliged to install another VPN, as I am left with no other option due to the urgency of the work.

Searching for the exact reason behind this, I came across some information that I’m not entirely sure is legitimate but seems relatable—or at least understandable.

One random user explained:
“Basically, the rule in India is that you can operate a VPN as long as you maintain data related to the user, including their name, ID, IP accessing from, and IP accessing to. I think the 1.1.1.1 client actually operated anonymously (because if I remember, you didn’t actually need to log in to use it). iCloud+ Private browsing maintains that information (account-related, etc.) so it should be safe. Similarly, running your own Tailscale cluster and enterprise VPNs are not impacted—for example, Cloudflare for Teams is allowed, and the Cloudflare One Agent app can be downloaded and is still available.”

Another user added:
“Cloudflare stores user data on the Zero Tier corporate plan, which is tied to accounts. The free 1.1.1.1 app did not require an account, hence it was removed. I cannot answer as to why Proton VPN continues to work or has not been removed. I only gave an opinion as to why the free Cloudflare product may have been removed. For what it’s worth, you can set up your own VPN and run it, and as long as you maintain a user login and account history, you can operate a VPN.”

The list of removed VPNs includes other services like Hide.me and PrivadoVPN. Apple, citing a demand from the Indian Cyber Crime Coordination Centre—a division of the Ministry of Home Affairs—stated that these app developers had created software that contravenes Indian law.

On the other hand, several VPN providers have robustly opposed the Indian government’s mandate. When the framework was introduced, prominent developers like NordVPN, ExpressVPN, Surfshark, and ProtonVPN publicly criticized the requirements, with some even indicating plans to remove their server infrastructure from India. For example, Surfshark’s services are no longer purchasable via UPI, a payment method that was available before the rules came into effect. Despite these challenges, NordVPN, ExpressVPN, and Surfshark continue to operate in India, although they have scaled back active promotion of their apps in the country.

The Indian government’s actions against VPN service providers hold even greater significance when considering the country’s position as one of the world’s largest VPN markets, with substantial growth anticipated in the coming years.

In 2023, India’s VPN market generated an impressive $4.166 billion in revenue and is projected to reach $7.681 billion by 2030, growing at a compound annual growth rate (CAGR) of 9.1% from 2024 to 2030. With an estimated 270 million VPN users in 2021, the market remains dominated by a limited number of providers, including Surfshark, NordVPN, ExpressVPN, PureVPN, IPVanish, and others. Despite regulatory challenges, these players continue to cater to a substantial user base in India.
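As a quick sanity check on those market figures, the compound annual growth rate implied by $4.166 billion in 2023 and $7.681 billion in 2030 can be recomputed directly:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1


# Figures cited above: $4.166B (2023) -> $7.681B (2030)
rate = cagr(4.166, 7.681, 2030 - 2023)
print(f"{rate:.1%}")  # ~9.1%, matching the projection
```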

The restriction on VPN services is not unique to a major country like India; several other nations are also engaging in this “banning game” under the guise of national security and data regulations. Countries such as China, Russia, Germany, and Italy have also implemented measures to control or restrict VPN usage, citing similar justifications of safeguarding national interests and ensuring compliance with local laws.

I referenced the community-page solutions and inquiries because I haven’t found any direct comment or official report from the Ministry of Home Affairs (MHA), Government of India, regarding the removal of these apps. This raises the question: if policymakers, law experts, diplomats, and technocrats discussed these rules, as they did with the DPDP, why aren’t such policies put out for public comment before being enforced?


Why is everything being imposed in the name of national security? The challenge is that, while we advocate for encryption and data privacy, we also ask for data storage, suggesting that privacy might, in fact, be a myth. Our devices, always with us, listen even when not in use, reinforcing this paradox.

It’s a social dilemma of the Internet age. On one hand, we promote privacy and encryption, while on the other, innovators are developing AI systems that collect all our information. I’m not arguing that imposing regulations on the majority is wrong, but is there a way to balance technology, innovation, and regulation? This is simply a thought from a technical writer’s perspective.

You are under surveillance!

You search for a pair of shoes on a search engine, and suddenly, every ad you see is about shoes. You browse a housing site, and before you know it, your phone is buzzing with calls and messages about properties. You search for a nearby restaurant or explore a business idea, and bam! Your screen is overflowing with ads instead of the information you actually wanted. It feels like a hidden camera is always watching, anticipating your every move, doesn’t it? It’s like having a personal assistant—except you never asked for one! And this assistant? It’s so efficient, it even seems to work ahead of your own thoughts. Welcome to the digital world!

This type of constant surveillance is what we call surveillance capitalism. Big tech companies—let’s say the big four—use this model to turn your data into a resource, treating your searches and interests as their products. Whether you’re intentionally seeking information or just satisfying a passing curiosity, the moment you enter your data, it’s no longer just yours. Even if a website says it’s “encrypted,” that data is fuelling the encryption of their own massive datasets, which they use to craft algorithms that steer your next online experience. Search for anything, and in the background, those algorithms are quietly deciding what to show you next.

It’s not just that you’re searching the web—the web is also searching YOU. And while it may seem convenient to have such personalized suggestions, it’s important to realize that this is really about influence. These companies aren’t just catering to your needs; they’re shaping what you’ll do next.

Surveillance capitalism refers to the practice of monetizing data collected by tracking people’s online and real-world behaviors. This type of consumer surveillance is primarily used to tailor marketing and advertising strategies. The term surveillance capitalism was first introduced by John Bellamy Foster and Robert W. McChesney in a July 2014 article in Monthly Review, a socialist magazine based in New York. Their original concept centered on the U.S. military’s surveillance of citizens.

The term surveillance capitalism is more closely associated with the economic theory proposed by Harvard Business School Professor Emerita Shoshana Zuboff in September 2014. It describes the large-scale monetization of individuals’ raw personal data, used to predict and influence their behavior. Surveillance capitalism operates through steps like data collection, prediction, and the creation of behavioral markets. While it’s not tied to any specific tech or business process, it represents a business philosophy driving the massive data economy. Most people don’t realize the extent of this data collection until their privacy is breached, revealing that their confidential information has been commercialized and turned into profits—often to the tune of billions—by other companies.

There are no serious proposals for regulating the data-collecting abilities of technology companies, although Google did pay a large data privacy settlement in November 2022.

In her book, Zuboff predicted that data collection will continue to grow as it becomes increasingly central to the market and as technology becomes more embedded in daily life. She highlighted the rising use of IoT devices, like fitness trackers, which provide new opportunities for sharing user data with marketers and advertisers. Zuboff also referenced a 2016 Microsoft patent for software designed to detect users’ mental states. She warned that this type of technology could lead to a new level of privacy violations, as it would activate sensors to capture voice, speech, videos, images, and movement.

The question now is, can we regain control over our data in this system that’s so deeply ingrained in our digital lives? Or is this just the new normal? It’s something worth thinking about as we continue to navigate this always-connected world.

NetMission.Asia Ambassador: A Journey of Exploring Internet Governance through an Asia Pacific Perspective

Initially, my encounter with the term “internet governance” left me with a vague understanding, as Google’s explanation provided only a basic overview. However, my curiosity was piqued, prompting me to delve deeper into the subject. This journey into the realm of internet governance commenced last year, around mid-April, with my involvement in ‘Youth Internet Governance-INDIA’ (https://youthigf.in/). Through YIGF-India, I gained valuable insights into Internet governance, particularly from the perspective of my home country, India. Expanding my horizons to encompass the Asia Pacific region, I embarked on a new path with NetMission.Asia (https://netmission.asia/). NetMission.Asia is described as a network comprising passionate young individuals from Asia, committed to engaging and empowering youth in internet governance discourse. Their goal is to foster youth mobility and effect positive change within Asia through impactful initiatives in Internet governance.

The journey as an Ambassador and eager learner commenced in December 2023. Being selected and introduced to our supportive buddies by the NetMission team marked a warm and engaging beginning to our experience. The orientation day provided us with invaluable insights into how NetMission.Asia is actively contributing to fortifying the role of the Asia Pacific region in shaping and comprehending Internet Governance. Throughout this journey, we underwent significant learning experiences, delving into diverse topics such as the essence of Internet Governance, the pivotal role of the Asia Pacific in this domain, and nuanced concepts like Diversity, Inclusivity, Green Tech, Web 3.0, and the Digital Economy. Our exploration extended to encompass emerging technologies, cybersecurity, privacy, and fostering a safer internet, among other crucial aspects.

Participating in virtual meetings with professionals actively engaged in various levels of the Internet Governance (IG) platform, such as UNIGF, ICANN, APNIC, IETF, and ITU, proved to be highly informative, enriching, and interactive. Engaging in breakout groups for each session provided ample opportunity for brainstorming and exchanging ideas. Documenting our learnings in worksheets, summarizing viewpoints from a visionary perspective, and collaborating in diverse groups under the banner of different organizations were all integral components of this journey. However, the highlight undoubtedly was the opportunity to showcase our achievements through case studies with our respective groups, an aspect of the experience that I found particularly rewarding.

Over two months, juggling regular Thursday sessions alongside daily tasks posed a significant challenge. Despite its demanding nature, however, the experience proved immensely rewarding. I sincerely hope more individuals get the chance to engage with NetMission, enabling young minds to contribute their unique perspectives on Internet governance in their respective countries and across the Asia Pacific region. In summary, if I were to describe this journey in a few words, I would call it “wonderful, amazing, and transformational.”

Geo-fencing: Location On Work

In the world of technology, tracking is no longer a strenuous task requiring meticulous effort. Geo-fencing is one of the technological blessings we work with. But what is geo-fencing, how has it developed, how does it work, how is it useful, and where is it used? Let’s answer these questions one by one in this article.

GEO-FENCING

In the word Geo-fencing, the prefix “Geo” comes from the Greek word for “earth or land”, and “fencing” means “drawing an imaginary border”. Thus, geo-fencing is defined as setting up a fence, or virtual perimeter boundary, so that we know whenever an object enters the marked fencing zone.

As the definition above suggests, geo-fencing is a location-based service (LBS). The app (or whatever other medium uses the service) relies on GPS (Global Positioning System), Wi-Fi, cellular data, or RFID (Radio-Frequency Identification) to trigger a pre-arranged action whenever a device enters or exits the virtual boundary, or geo-fence. The alert can be delivered in whatever form the developer sets up: a text message, a pop-up, a push notification, a tracking alert, et cetera.

How Does Geo-fencing Work?

The developer sets up the virtual boundary using GPS, RFID, or in some cases even an IP address, and then configures a pre-planned alert for any device that enters or exits the fencing zone. As soon as you cross the fence, you will be tracked (if the fence was set up for tracking), receive a push notification (if it was set up for marketing or business deals), or get a message (if it serves some other personal or professional purpose). We can therefore say geo-fencing has made life easy for everyone except those in the adversary zone. The fence’s perimeter can also vary: it can be changed, reduced, or enlarged by the user or developer.

Example: If you run a salon and want customers in close proximity to know about the venue, you can set up a fencing perimeter and send alerts in whatever format you like.
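To make the salon example concrete, here is a minimal sketch of the trigger logic behind such a fence: a haversine distance check against a circular perimeter that fires an “enter” or “exit” event when the device’s state changes. The coordinates and radius are made-up illustration values, not from any real deployment.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(prev_inside, lat, lon, fence_lat, fence_lon, radius_m):
    """Return ('enter'|'exit'|None, inside) given the previous state and a new fix."""
    inside = haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m
    if inside and not prev_inside:
        return "enter", inside   # crossed into the fence: fire the alert
    if prev_inside and not inside:
        return "exit", inside    # left the fence
    return None, inside          # no boundary crossing, nothing to send

# A device roughly 100 m from the fence centre crosses into a 200 m salon fence.
event, _ = geofence_event(False, 28.6148, 77.2090, 28.6139, 77.2090, 200)
print(event)  # enter
```

A real service would run this check on each location fix and only notify on the state transitions, not on every reading, to avoid spamming the user.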

Geo-fencing Application

In this era of digitization, geo-fencing has become crucial to every sector, public or private; security or marketing; IT or business. Once a geographic fence is set, the opportunities and uses are seemingly endless, which is one reason it has become especially popular in marketing and on social media.

Some of the common Geo-fencing Applications are as follows:

Security: Geo-fencing can make your devices more secure. For instance, you can set up a geo-fence around a specific area such as your home and receive push notifications whenever someone enters it.

Social networking: With its development, geo-fencing found its way into one of the most popular platforms of the last decade, social media. Location statuses, location sharing, and location-based stories are all made possible by geo-fencing.

Human resources: Companies nowadays use geo-fencing to keep track of on-field employees and workers. It is also useful for automating time cards and employee clocking, i.e., recording when employees enter and leave the premises.

Marketing: Geo-fencing is a popular way for businesses to promote themselves through an alert or pop-up whenever you are within the company’s fencing range. One of its best uses is targeted advertising: instead of mass outreach, businesses can match the right strategy with the right audience based on the user’s location data.

Telematics: In telematics, the merging of telecommunications and informatics in a device, geo-fencing plays a very useful role by allowing companies to set virtual zones around sites, work premises, and secure areas.

Smart appliances: Smart appliances have ushered us into the smart world, and geo-fencing is one of their smartest uses. With location-aware appliances it is easier than ever to be reminded of household chores, office files, kids’ assignments, and the like.

The use of Geo-fencing in handling the COVID-19 pandemic:

While the entire nation struggles to survive the coronavirus pandemic, people in technology are working to tackle the problem with technology. Developers from different parts of the country have built geo-fencing-based apps for COVID-19 to track people at risk of being affected by the coronavirus.

The Ministry of Electronics and Information Technology (MeitY), Government of India, has developed an app called ‘AAROGYA SETU’ that lets citizens know their risk of contracting COVID-19 through a geo-fencing tracking service. Tracking is done via Bluetooth and location-generated social graphs, which can show your interactions with anyone who has tested positive. All you have to do after installation is switch on Bluetooth and location. Once you do, you are visible to the system, and if you have crossed paths with a red-zone area you will receive an alert message. Geo-fencing is thus playing a crucial role in handling the pandemic.

Geo-fencing Future

In this world of data privacy, where everyone is concerned about their data being stolen, geo-fencing faces the same criticism over the possibility of a data breach. But as Nasscom chief R. Chandrasekhar put it, ‘There is nothing called fully perfect security in IT’, so we cannot hold the data-breach risk against geo-fencing alone. According to a press release from Markets and Markets (https://www.marketsandmarkets.com/), the geo-fencing industry is expected to grow by over 27% by 2022, citing “technological advancements in the use of spatial data and increasing applications in numerous industry verticals.”

References:

https://en.wiktionary.org/wiki/Wiktionary

https://meity.gov.in

https://en.wikipedia.org/wiki/Geo-fence

HTTP V/S HTTPS

HTTP (http://) – Hyper Text Transfer Protocol is a protocol designed for communication between a client (web browser) and a server (web server). It was proposed in 1989 by Tim Berners-Lee, the inventor of the World Wide Web. It operates on port 80 and transfers data in plain text. HTTP went through a few revisions up to HTTP/1.1, released in 1997. After many loopholes were found, there was a major release, HTTP/2, in 2015. Later, HTTP/3 came out as the proposed successor to HTTP/2 and is already in use on the web; it uses UDP (via QUIC) instead of TCP as the underlying transport protocol.
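Because HTTP is a text-based protocol, both the request and the response are human-readable on the wire. As a small sketch, the snippet below builds a minimal HTTP/1.1 request and parses the status line of a canned response; example.org and the sample response text are illustrative only.

```python
# Build a minimal HTTP/1.1 request and parse a plain-text response.
# Everything here travels unencrypted when sent over port 80.

def build_request(host, path="/"):
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

def parse_status(raw_response):
    """Extract the numeric status code from the first response line."""
    status_line = raw_response.split("\r\n", 1)[0]  # e.g. "HTTP/1.1 200 OK"
    return int(status_line.split(" ")[1])

sample = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
print(build_request("example.org"))
print(parse_status(sample))  # 200
```

Anyone who can see these bytes in transit sees the full request and response, which is exactly the privacy problem listed below.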

Advantages of HTTP:-

  1. HTTP can be implemented over other networks and protocols.
  2. HTTP pages are stored on computers as internet caches.
  3. HTTP is platform-independent, allowing cross-platform porting.
  4. It can be used through firewalls.

Issues with HTTP:-

  1. HTTP is a stateless protocol: the HTTP server does not retain information or status about each user across multiple requests. Every request is treated as new, regardless of whether it is actually new or old.
  2. No privacy: the traffic is open to all, and anyone can see the content.
  3. No data integrity: since security and privacy are absent, anyone can alter the content in transit.
  4. Anybody, genuine user or not, can intercept a request and obtain the username and password.

HTTPS (https://) – Hyper Text Transfer Protocol Secure is an advanced, secured version of HTTP. It enables secure transfer with the help of SSL/TLS (Secure Sockets Layer / Transport Layer Security): HTTPS is simply HTTP combined with SSL/TLS. Data is encrypted using key-based encryption algorithms; historically the keys were 40 or 128 bits in strength, while modern TLS typically uses 128- or 256-bit session keys. It operates on port 443 and transfers data in cipher (encrypted) format.
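A minimal sketch of the client side of HTTPS, using Python’s standard ssl module: a default context verifies the server’s certificate and hostname, as a browser would before exchanging any HTTP bytes on port 443. The connection itself is only shown in comments because it needs network access; example.org is an illustrative host.

```python
import ssl

# A browser-like client context: before any HTTP bytes flow on port 443,
# the server's certificate must verify and its hostname must match.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True

# An actual connection would look like this (network required, so only sketched):
#   import socket
#   with socket.create_connection(("example.org", 443)) as raw:
#       with context.wrap_socket(raw, server_hostname="example.org") as tls:
#           print(tls.version(), tls.cipher())   # negotiated protocol and suite
```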

Advantages of HTTPS:-

  1. Sites running over HTTPS are commonly configured to redirect, meaning that even if you type http:// by mistake you are sent to the HTTPS version over a secured connection.
  2. Secured with SSL/TLS, providing full encryption of data in transit.
  3. Each SSL certificate contains unique, authenticated information about the certificate owner.

Issues with HTTPS:-

  1. The HTTPS protocol cannot prevent theft of confidential information from pages if they are saved in the browser’s cache.
  2. SSL encrypts data only during transmission over the network; text held in the browser’s memory is not protected by SSL.

Difference between HTTP and HTTPS:-

  - Full form: HTTP is Hyper Text Transfer Protocol; HTTPS is Hyper Text Transfer Protocol Secure.
  - Security: HTTP is less secure, with no encryption; HTTPS is secured and encrypted with SSL/TLS.
  - Port: HTTP uses port 80; HTTPS uses port 443.
  - Data on the wire: HTTP does not scramble data before transmission, leaving it vulnerable to eavesdroppers; HTTPS scrambles (encrypts) data before transmission.
  - Operation: HTTP runs directly over TCP/IP; HTTPS runs the same HTTP protocol over an SSL/TLS layer.
  - Certificates: HTTP requires no SSL certificate or encryption; HTTPS requires an SSL certificate and data encryption.
  - Speed: HTTP is faster; HTTPS is slower in comparison because of the encryption overhead.
  - Layer: HTTP operates at the application layer; HTTPS adds SSL/TLS, which sits between the transport and application layers.
  - Payload: HTTP transports plain-text information; HTTPS transports cipher-text information.

SSL/TLS-Secure Connection

Whenever we browse the internet, we see site URLs: in some a padlock is present, and in others it is absent. The padlock symbolizes secure communication between the user and the server. It indicates a secure communication certificate, called an SSL certificate (Secure Socket Layer), whose function is to build a secure chain of trust between the user and the server. The certificate is issued by a Certificate Authority (CA) such as Let’s Encrypt, Buypass, Comodo, or GeoTrust, which builds the chain of trust by running certificate validation in a hierarchical manner.

Most modern web browsers now flag sites without SSL/TLS as insecure or unsafe, and going forward an SSL/TLS certificate may become a mandatory website-hosting requirement. Hosting a website with an SSL/TLS certificate secures the data transferred between the website and its visitors by encrypting the communication; the certificate also helps verify the site’s identity, letting users browse over a secure, encrypted connection. An SSL certificate contains the website owner’s information, including the domain and sub-domain names, the validity period of the certificate, and the public key used for encryption.

TLS is the new, updated version of SSL; it evolved from SSL (Secure Socket Layer), which was developed by Netscape Communications in the mid-1990s. SSL 1.0 was never released; it was followed by SSL 2.0 and 3.0, and TLS 1.0 is based on SSL 3.0. TLS 1.3, published in 2018, is the latest version, and almost all CAs are using or moving to it. The presence of a secure connection, i.e., TLS, can be seen from the HTTPS in the URL, which is an implementation of TLS encryption on top of the HTTP protocol, used by all websites running web services. Hence, any website served over HTTPS is deploying TLS.
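As a small illustration of this SSL-to-TLS progression, Python’s ssl module lets a client refuse the retired protocol versions outright; pinning the minimum to TLS 1.2 is a common hardening step (a sketch, not a complete client):

```python
import ssl

# TLS succeeded SSL; SSL 2.0/3.0 are retired, and modern stacks negotiate
# TLS 1.2 or 1.3. A client context can refuse anything older outright.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # common hardening step

print(context.minimum_version.name)  # TLSv1_2
```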

                       CLIENT——–(SSL/TLS HANDSHAKE)——–SERVER

SSL CERTIFICATE VALIDATION AT DIFFERENT LEVELS:

1)    DOMAIN VALIDATED CERTIFICATE: Here only the domain name is validated, and the certificate is issued in that name alone, making it the easiest validation in the SSL certificate hierarchy. It suits servers that just want SSL for its own sake, blogs, and small enterprises not dealing in products or selling.

2)    ORGANISATION VALIDATED CERTIFICATE: Here additional details, such as the address of the organisation behind the domain name, are required for the validation check to pass, making it a bit more stringent than domain validation. The extra validated detail makes it more trustworthy from the user’s point of view.

3)     EXTENDED VALIDATION CERTIFICATE: This is the most costly, most trustworthy, and most time-consuming validation. It is required by large e-commerce sites, enterprises, and businesses to meet customer trust expectations.

TYPES OF SSL CERTIFICATES:

1)    Single Domain SSL: As the name suggests, the certificate is generated for one and only one domain name; no other domain or sub-domain can use it.

2)    Wildcard SSL certificate: The domain and all its sub-domains can use the certificate, known as a wildcard SSL certificate. The sub-domain list can be seen by clicking the padlock icon in the URL bar.

3)    Multi-domain SSL certificate: Multiple distinct domains can share a single certificate issued in the names of all of them. The domains are neither sub-domains of a single domain nor multiple pages of a single domain.

TLS/SSL HANDSHAKE:

(Image Source: https://www.geeksforgeeks.org/secure-socket-layer-ssl/)

Phase 1: Establish Connection. The client sends a ‘Hello’ message with its TLS version, a list of cipher suites, and a random client number; the server replies with a ‘Hello’ message along with its SSL certificate, the cipher suite it has chosen, and a random server number.

Phase 2: Pre-Master Secret Generation. The client sends one more random string, commonly called the ‘pre-master secret’, encrypted with the public key taken from the server’s SSL certificate. The server decrypts it with the private key of its certificate.

Phase 3: Session Key Generation. The client and the server each generate the session key from the two random numbers and the pre-master secret; the keys generated at both ends are identical.

Phase 4: Handshake Ends. The session key is verified at both ends; only if it matches is the secure connection established and data exchanged in encrypted form. If the keys differ, the connection is not established. Once the connection is up, client and server send each other a ‘Finished’ message, giving the green signal for encrypted data transfer.
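The key agreement in Phases 2–3 can be sketched as a toy derivation: both sides mix the two public random values with the secret pre-master key and arrive at the same session key. Real TLS derives keys with a standardised PRF/HKDF; the simplified HMAC construction below only stands in for that step.

```python
import hashlib
import hmac
import os

# Toy sketch of session-key generation: both ends combine the two public
# randoms with the secret pre-master key. NOT the real TLS key schedule.

def derive_session_key(pre_master, client_random, server_random):
    return hmac.new(pre_master, client_random + server_random,
                    hashlib.sha256).digest()

client_random = os.urandom(32)  # sent in the client's Hello (public)
server_random = os.urandom(32)  # sent in the server's Hello (public)
pre_master = os.urandom(48)     # sent encrypted with the server's public key

client_key = derive_session_key(pre_master, client_random, server_random)
server_key = derive_session_key(pre_master, client_random, server_random)
print(client_key == server_key)  # True: both ends hold the same session key
```

An eavesdropper sees both random values but not the pre-master secret, so it cannot reproduce the session key.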

This handshake applies up to TLS 1.2; in TLS 1.3 it has changed a little. In place of the four-phase handshake, validation now completes in just one round trip. TLS 1.3 is more secure and less time-consuming than all the previous versions.

UPGRADE IN TLSV1.3:

                              (Image Source: https://timtaubert.de/images/tls-hs-static-rsa.png)

Phase 1: Establish Connection. As in TLS 1.2, TLS 1.3 commences the handshake with a ‘Hello’ message, to which the client adds its list of supported cipher suites, its chosen key agreement protocol, and a key share for the protocol it guesses the server will choose.

Phase 2: Validation Completion. The server replies with a ‘Hello’ message containing the key agreement protocol it has chosen, its key share, its certificate, and a ‘Finished’ message.

The server’s ‘Finished’ message, which was sent in the sixth step of the TLS 1.2 handshake, is sent in the second step in TLS 1.3, completing the round trip in just two steps.

Phase 3: Finished Message. In the last step, the client validates the server certificate and generates the shared key using the server’s key share. Once all checks pass, the client sends its own ‘Finished’ message, and data encryption begins.

Cipher Suite: A complete set of cryptographic algorithms required to secure a network connection through SSL/TLS; each slot in the suite names a specific algorithm. During the handshake that builds the secure connection, the client and the web server agree on the following cipher suite components:

-  A key exchange algorithm, which determines how the symmetric keys will be exchanged during the handshake. Example: RSA (Rivest-Shamir-Adleman).

-  An authentication algorithm, which determines how authentication at both ends, client as well as server, will be carried out. Example: DSA (Digital Signature Algorithm).

-  An encryption cipher, used to encrypt the data. Example: AES (Advanced Encryption Standard).

-  A message authentication (hash) algorithm, which determines how data integrity checks are carried out. Example: SHA (Secure Hash Algorithm).
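An OpenSSL-style cipher suite name strings these components together in order. As an illustration, here is one well-known suite pulled apart by position; the parsing is naive and specific to this particular name shape.

```python
# One well-known OpenSSL-style suite, split into the four components above.
suite = "ECDHE-RSA-AES128-GCM-SHA256"
parts = suite.split("-")            # ['ECDHE', 'RSA', 'AES128', 'GCM', 'SHA256']

key_exchange = parts[0]             # ECDHE: how the symmetric keys are agreed
authentication = parts[1]           # RSA: how the server is authenticated
encryption = "-".join(parts[2:4])   # AES128-GCM: bulk encryption cipher
integrity = parts[4]                # SHA256: hash used for integrity checks

print(key_exchange, authentication, encryption, integrity)
```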

Routing: Choosing the Best Pathways since 1976!

Routing directs network traffic through routers, enabling smooth data flow. Routers use administrative distance (AD), metrics, and protocols like OSPF and BGP to select optimal paths. Routing tables and FIBs manage network efficiency. Key types like static, dynamic, and backup routes ensure secure, streamlined traffic control. In today’s hyper-connected world, effective routing underpins seamless communication across networks, influencing everything from social platforms to global data transfers.

(Image Source: https://www.cisco.com/c/en/us/products/routers/what-is-routing.html)

Routing literally means ‘to route’. The aim of the routing process is to provide a path for network traffic from a source to one or more destinations. Routing can thus be defined as the path network traffic follows from its source to its destinations, whether both are in the same network or in different networks. Routing is controlled by the router, the device that actually determines the whole path.

Routing Process:

Routing depends on various factors, such as administrative distance, ASN, interface, next hop, and principally the destination network. When traffic flows out of the source and reaches a router, the router first checks the destination IPv4 or IPv6 address and then consults the Forwarding Information Base (FIB), which consists of three main elements: destination network, next hop, and outgoing interface. The FIB is generated from the Routing Information Base (RIB), which contains prefixes, routing tables, metrics, and next-hop information. All of these are covered later in this document.

So, once a datagram reaches the router, the router checks the destination IP address and, referring to the FIB, forwards the information towards the destined IP address. The flow can be unicast or multicast in nature; it is not bound to one form only.

Routing Components:

1.) Router: A hardware device whose function is to move network traffic, whether unicast or multicast. It uses routing tables and algorithms to decide the right path and to ensure traffic reaches its correct destination.

2.) Administrative Distance (AD): A numerical value from 0 to 255 assigned to each route source or protocol, on the basis of which the preferred path is selected or rejected. It expresses the trustworthiness of routing information learned from different sources: the higher the AD value, the lower the chance of that route being selected, so trustworthiness is inversely related to the AD value. AD is one of the first things a router checks before forwarding traffic. For example, if a router receives a route to the same destination from two sources, one via RIP with an AD of 120 and one via a static route with an AD of 1 (the usual default), the router will prefer the static route because of its lower AD value.

3.) Routing Protocol: A set of rules and procedures whose function is to build and maintain routing tables. Examples: OSPF, BGP, EIGRP.

4.) Routing Table: A database in the router containing information such as destination networks, the network topology, and the routes available in the network. On the basis of this the RIB (Routing Information Base) is prepared and maintained, which in turn generates the FIB (Forwarding Information Base).

5.) Interface: A connection point on a router used to attach it to a network; each interface has its own IP address and subnet mask and is designated in a form such as G0/1. An interface can be physical or virtual. Each interface can also carry further configuration, such as a default gateway, access control lists (ACLs), and quality of service (QoS) policies.

6.) Metrics: Values incorporating factors such as hop count, bandwidth, or delay, used to determine the best route for any datagram.

7.) Path Selection Algorithm: Considering factors such as metrics, AD, and policies, the path selection algorithm chooses the best available path, after which the traffic is sent towards the destined IP.
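The AD comparison described above can be sketched in a few lines: among candidates for the same destination, the source with the lowest administrative distance wins. The AD values are common Cisco defaults; the routes and next hops are made up for illustration.

```python
# Common Cisco default AD values, used here purely for illustration.
AD = {"connected": 0, "static": 1, "eBGP": 20, "EIGRP": 90,
      "OSPF": 110, "IS-IS": 115, "RIP": 120, "iBGP": 200}

# Three sources advertise the same destination (made-up next hops).
candidates = [("RIP", "10.0.0.0/8 via 192.0.2.1"),
              ("OSPF", "10.0.0.0/8 via 192.0.2.5"),
              ("static", "10.0.0.0/8 via 192.0.2.9")]

# The router installs the candidate with the lowest administrative distance.
source, route = min(candidates, key=lambda c: AD[c[0]])
print(source, "->", route)  # static -> 10.0.0.0/8 via 192.0.2.9
```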

Types of Routing:

1.) Static Route: Added, modified, and maintained manually, by a network administrator only.

2.) Dynamic Route: Routes learned gradually by network devices from different routing protocols, which share the best available route information with each other.

3.) Default Route: The route used when the device lacks the destination in its routing table; the device forwards such traffic to the default gateway or route, which sends it on to the appropriate destination.

4.) Black Hole Route: Its main purpose is security: whenever traffic from a blacklisted segment or IP address hits the router, it falls by default into the black hole route and is discarded. This is also called a null route.

5.) Interior Route: A route that stays inside a single Autonomous System (AS) and is managed by interior gateway protocols, like a route within a corporate network.

6.) Exterior Route: Routes learned from outside the AS, for example via the internet; they follow exterior gateway protocols.

7.) Floating Static Route: Also called a backup route: whenever the primary route fails to reach the destination, the floating static route takes over and lets the datagram reach the appropriate destination. The AD of a floating static route is set higher than that of the primary route.

Forwarding Information Base or FIB:

The Forwarding Information Base (FIB) is a database table a router uses to find the next-hop address and outgoing interface for a packet; it is generated from the Routing Information Base (RIB). When a packet arrives, the router checks the destination IP address against the FIB, looking at the destination network, next hop, and outgoing interface, and forwards the packet accordingly. FIB entries are typically stored in a hash table or similar structure that allows fast lookup of the next hop and interface.
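The lookup itself can be sketched as a longest-prefix match: among all entries whose network contains the destination address, the most specific prefix wins, and the packet leaves through that entry’s interface towards its next hop. The table entries below are invented for illustration.

```python
import ipaddress

# A tiny made-up FIB: (destination network, next hop, outgoing interface).
fib = [
    (ipaddress.ip_network("10.0.0.0/8"),  "192.0.2.1",   "G0/1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.5",   "G0/2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.0.2.254", "G0/0"),  # default route
]

def lookup(dst):
    """Longest-prefix match: the most specific containing network wins."""
    addr = ipaddress.ip_address(dst)
    matches = [entry for entry in fib if addr in entry[0]]
    return max(matches, key=lambda entry: entry[0].prefixlen)

net, next_hop, iface = lookup("10.1.2.3")
print(next_hop, iface)  # 192.0.2.5 G0/2 -- the /16 beats the /8
```

Real routers implement this with specialised data structures (tries or TCAM) rather than a linear scan, but the selection rule is the same.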

Routing Information Base:

It is a database where routes and route-related metadata are stored by a routing protocol, allowing the protocol to select the ‘best’ path to a given destination. Each protocol has its own separate RIB. The RIB functions as the backbone of the FIB, without which the FIB cannot be built. It consists of routing tables, prefixes, next-hop information, and metrics.

Routing Protocols:

OSPF:

Open Shortest Path First is a link-state routing protocol used to compute the shortest path through the network. It is a dynamic interior gateway protocol that uses a link-state algorithm; OSPFv2 serves IPv4 addresses (RFC 2328) and OSPFv3 serves IPv6 (RFC 5340), with OSPFv3 also able to carry IPv4 via RFC 5838. The AD for OSPF is a fixed value of 110, and OSPF supports hierarchical routing. OSPF begins by sending a ‘Hello’ packet to neighbouring routers in the same AS, which leads to an exchange of topology information among neighbours via link-state advertisements. Once the Hellos have gone out, a topology map of the network is built as a link-state database; the best paths are computed from this database and installed in the OSPF tables. OSPF divides routers into areas, numbered from area 0 upwards. An OSPF router can be an internal router, operating within a single area, or a border router connecting different areas. OSPF also supports multiple paths to a destination, equal-cost load balancing, and authentication mechanisms to ensure the secure exchange of routing information.
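The shortest-path computation OSPF performs over its link-state database is Dijkstra’s algorithm. Here is a minimal sketch over a made-up four-router topology with illustrative link costs:

```python
import heapq

# Invented link-state database: symmetric link costs between four routers.
links = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}

def shortest_paths(source):
    """Dijkstra: lowest total cost from `source` to every other router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neigh, cost in links[node].items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

print(shortest_paths("R1"))  # R4 is reached via R2 at cost 11, not via R3 at 25
```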

Intermediate System to Intermediate System:

Intermediate System-to-Intermediate System (IS-IS) is a link-state interior gateway protocol that uses a modified version of Dijkstra’s algorithm. The AD value for IS-IS is 115. An IS-IS network has a range of components: routers, areas, and domains. Like OSPF, it organizes routers into areas, and multiple areas together form a domain. It uses two kinds of network addresses, the Network Service Access Point (NSAP) and the Network Entity Title (NET).

Routing Information Protocol:

RIP is an interior gateway protocol that runs at the application layer of the OSI model. Like OSPF it has two versions, RIPv1 and RIPv2. The former finds network paths based on IP destination and hop count, broadcasting its IP tables to all routers in the network; the latter, being more precise, sends its tables to a multicast address only. RIP’s AD is a fixed value of 120. RIP is not suitable for larger networks because it limits the hop count to 15.

Enhanced Interior Gateway Routing Protocol:

EIGRP combines distance-vector and link-state behaviour and is therefore known as a ‘hybrid protocol’. It is a Cisco proprietary protocol designed as the successor to the original IGRP. EIGRP’s metric takes in factors such as bandwidth and reliability to maximize efficiency; whenever multiple paths to the same destination are available, EIGRP selects the path with the lowest metric. An EIGRP router keeps a record of its routing information, and whenever a change or update occurs on a path it informs its neighbours, which update their tables accordingly. The AD for EIGRP is 90 for internal EIGRP routes and 170 for external EIGRP routes.

Border Gateway Protocol:

BGP is a path-vector routing protocol designed to replace the older Exterior Gateway Protocol (EGP). The AD value for BGP is 20 for eBGP (external BGP) routes and 200 for iBGP (internal BGP) routes; when the same destination is learned from multiple sources, the route with the lower AD value is selected independent of the metric. BGP uses a best-path selection algorithm. There is no automatic neighbour discovery in BGP: the user has to configure BGP peerings manually.

Routing Algorithms:

Routing algorithms implement the different routing protocols by assigning a cost to each link, calculated from various network metrics; the aim is always to deliver each data packet along the path with the lowest total cost.

1) Distance Vector Routing: Each router periodically shares its best-path information for all known destinations with its directly connected neighbours, which update their own tables from it.

2) Link State Routing: Each router discovers the topology of its network by exchanging link-state information with neighbouring routers; from this information a map is created, and the best path is then calculated.
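A single distance-vector exchange can be sketched as a Bellman-Ford relaxation: a router merges a neighbour’s advertised distances after adding the cost of its link to that neighbour, keeping whichever distance is shorter. The prefixes and costs below are invented for illustration.

```python
# One distance-vector update step (RIP-style). `advert` is what the
# neighbour claims it can reach, keyed by destination prefix.

def merge_advert(own_table, link_cost, advert):
    """Bellman-Ford relaxation: keep the shorter of old and via-neighbour."""
    for dest, dist in advert.items():
        candidate = dist + link_cost
        if candidate < own_table.get(dest, float("inf")):
            own_table[dest] = candidate  # shorter path found via the neighbour
            # (a real router would also record the neighbour as next hop)
    return own_table

table = {"10.0.1.0/24": 1}                              # what we know already
advert_from_r2 = {"10.0.2.0/24": 1, "10.0.3.0/24": 2}   # neighbour's claims
merge_advert(table, 1, advert_from_r2)                  # link to R2 costs 1
print(table)  # {'10.0.1.0/24': 1, '10.0.2.0/24': 2, '10.0.3.0/24': 3}
```

Repeating this exchange between all neighbours converges the whole network on shortest distances, which is exactly how RIP operates within its 15-hop limit.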