[Figure: timeline infographic of major privacy events and data breaches during 2025]

Another Year of Record Breaches

The breach statistics from 2025 follow a pattern that has repeated since the mid-2010s: bigger numbers, broader impact, and faster exploitation of stolen data. In January, a major US health-insurance provider disclosed that 37 million member records had been accessed through a compromised API endpoint that had been open since the previous autumn. In March, a breach at a European mobile carrier exposed names, national ID numbers, and call metadata for 14 million subscribers. By July, a credential-stuffing campaign had compromised accounts at three separate financial-services firms using passwords recycled from an older, unrelated breach.

The pattern isn't new. What's changed is the speed of exploitation. In 2018, stolen data might sit on a dark-web marketplace for weeks before being used. In 2025, automated tools parse breach dumps within hours and begin testing credentials against banking, email, and social-media platforms almost immediately. The window between "data stolen" and "accounts compromised" has shrunk from weeks to single-digit hours in many cases.

Breach fatigue is a documented psychological phenomenon. People who have been notified of five or six breaches stop reading the notifications. The "important information about your account" email goes straight to the archive folder. The credit-monitoring offer (free for twelve months, then a paid subscription) goes unused. The password-change recommendation gets ignored because the person can't remember which password they used for the breached service. The cumulative effect is a population that knows its data has been stolen but has run out of energy to respond to each individual incident.

Regulatory Progress and Its Limits

The GDPR, now seven years into enforcement, has produced significant fines and some genuine changes in corporate behaviour. The largest fines have targeted major technology companies for advertising practices, consent mechanisms, and international data transfers. Smaller companies have received enforcement actions for inadequate security, excessive data retention, and failure to honour subject access requests.

The gap between regulation and enforcement remains wide. Data protection authorities across Europe are understaffed relative to the number of complaints they receive. The Irish Data Protection Commission, which oversees many of the largest technology companies due to their European headquarters being in Ireland, has faced criticism for slow case processing and settlements that represent a fraction of the maximum penalty. A fine of 400 million euros sounds large in absolute terms. For a company with annual revenue exceeding 100 billion, it's a rounding error on the quarterly report.

In the United States, federal privacy legislation remains absent. The patchwork of state laws has grown: California's CPRA, Virginia's CDPA, Colorado's CPA, Connecticut's CTDPA, and several others are now in effect, each with slightly different definitions, thresholds, and enforcement mechanisms. A company operating nationally needs to comply with all of them simultaneously, which creates compliance overhead but doesn't necessarily produce better privacy outcomes for consumers. The laws vary in whether they include a private right of action, which determines whether individuals can sue or must rely on a state attorney general to bring enforcement.

India's Digital Personal Data Protection Act, enacted in 2023 and now entering its implementation phase, covers the world's largest population by raw numbers. Its effectiveness will depend on the rules and exemptions that the government specifies through delegated legislation, several of which remain unpublished as of mid-2025. The broad government-access exemptions in the act have drawn criticism from privacy advocates who argue that the law protects individuals from commercial data misuse but not from state surveillance.

Consent Fatigue Is a Design Problem

Cookie consent banners were supposed to give individuals control over tracking. In practice, they've become an obstacle course designed to make acceptance the path of least resistance. The "accept all" button is prominent and colourful. The "manage preferences" option leads to a multi-step process with toggles, categories, and descriptions that most people don't read. Rejecting all non-functional cookies sometimes requires clicking through multiple screens, while accepting everything takes a single tap.

This isn't an accident. It's a design choice that exploits the gap between what the law requires (meaningful consent) and what the enforcement bodies have the resources to challenge (individual implementations of consent mechanisms). The technically-compliant-but-deliberately-hostile consent banner is a form of malicious compliance. The user has a choice. The choice has been engineered to produce a specific outcome.

The alternative models are emerging but not yet dominant. Global Privacy Control (GPC) is a browser-level signal that tells websites the user doesn't want their data sold or shared. California's CPRA requires businesses to honour GPC signals. Firefox and Brave support GPC natively. Chrome doesn't, which limits the signal's reach given Chrome's market share. The browser-level approach is better than per-site consent because it removes the repetitive interaction, but it only works if websites honour it and regulators enforce compliance.
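Mechanically, GPC is simple: a participating browser attaches a `Sec-GPC: 1` header to its requests (and exposes a corresponding JavaScript property), and an honouring site treats that as an opt-out of sale or sharing. A minimal server-side sketch of what "honouring the signal" might look like:

```python
def honours_gpc(headers: dict[str, str]) -> bool:
    """Return True if the request carries a Global Privacy Control opt-out.

    Per the GPC proposal, participating browsers send `Sec-GPC: 1` when
    the user has enabled the signal; absence (or any other value) means
    no opt-out was expressed.
    """
    # HTTP header names are case-insensitive, so normalise before lookup.
    normalised = {k.lower(): v.strip() for k, v in headers.items()}
    return normalised.get("sec-gpc") == "1"


def handle_request(headers: dict[str, str]) -> str:
    # A site honouring GPC would suppress data sale/sharing for opted-out
    # visitors, e.g. by skipping third-party ad and data-broker calls.
    if honours_gpc(headers):
        return "serve page without selling or sharing data"
    return "serve page with default data practices"
```

The point of the browser-level design is visible here: the user sets the preference once, and every compliant site reads the same header, with no per-site banner interaction.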

Data Minimisation as a Competing Model

The dominant business model for consumer technology remains data maximisation: collect as much as possible, retain it indefinitely, and find monetisation opportunities later. This model treats personal data as an asset on the balance sheet, even when the data has no immediate use and the company has no specific plan for it.

Data minimisation inverts this. Collect only what's needed for the specific service being provided. Retain it only as long as necessary. Delete it when the purpose is fulfilled. This isn't a fringe position. It's an explicit requirement under GDPR (Article 5(1)(c)) and a principle in most modern privacy legislation. The gap between the legal requirement and actual corporate practice is substantial.
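In engineering terms, "delete it when the purpose is fulfilled" means every record carries a purpose and every purpose carries a retention window. A minimal sketch, with illustrative (not legally prescribed) retention periods:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: each processing purpose keeps data
# only as long as that purpose plausibly requires. Values are
# illustrative, not legal guidance.
RETENTION = {
    "order_fulfilment": timedelta(days=90),
    "fraud_prevention": timedelta(days=365),
    "marketing": timedelta(days=30),
}

@dataclass
class Record:
    purpose: str
    collected_at: datetime

def expired(record: Record, now: datetime) -> bool:
    """A record is deletable once its purpose's retention window passes."""
    limit = RETENTION.get(record.purpose)
    if limit is None:
        # Unknown purpose: a minimisation policy defaults to deletion,
        # not indefinite retention.
        return True
    return now - record.collected_at > limit

def purge(records: list[Record], now: datetime) -> list[Record]:
    """Return only the records still within their retention window."""
    return [r for r in records if not expired(r, now)]
```

The contrast with the vague-purpose problem described above is the `RETENTION` table itself: "analytics" or "improving services" with no entry means the data is deleted, whereas data-maximalist practice treats the missing entry as permission to keep everything.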

Some companies have adopted minimisation as a competitive advantage. Privacy-focused email providers like Proton Mail and Tuta don't read email content for advertising. Privacy-focused search engines like DuckDuckGo and Startpage don't build user profiles from search queries. These services trade the revenue that personalised advertising would generate for user trust and subscription revenue. The model works at the scale of privacy-conscious niche markets. Whether it scales to the mass market depends on how many consumers value privacy enough to pay for it or accept less personalised services.

Synthetic Data and the Decoupling of Identity from Utility

Synthetic data generation has moved from a development-tools niche into a broader conversation about privacy architecture. The core insight is that many interactions that currently require real personal data don't actually need it. Signing up for a newsletter doesn't require a legal name. Testing a product demo doesn't require a real address. Filling out a form to download a whitepaper doesn't require a phone number that reaches the person.

Tools like Firefox Relay provide disposable email addresses that forward to a real inbox. Apple's Hide My Email generates random addresses for the same purpose. Surfshark Alternative ID and Another.IO go further, generating complete synthetic identities with names, addresses, and phone numbers that pass validation but don't correspond to real individuals.

The privacy benefit is that the service receives functional data (a valid-looking email, a plausible address) without receiving identity data (the user's actual name and location). The data that enters the service's database, and eventually the data-broker pipeline, is fictional. It can't be used to profile the real person behind the interaction because there's no link between the synthetic identity and the real one.
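The underlying mechanism is simple enough to sketch. The alias services above generate a random, unguessable address per signup and forward mail to the real inbox; the domain below is a placeholder, not any real relay service:

```python
import secrets
import string

def disposable_alias(forwarding_domain: str = "relay.example") -> str:
    """Generate a random alias address for a single signup.

    This mirrors the principle behind Firefox Relay or Hide My Email:
    mail to the alias is forwarded to the real inbox, and the alias can
    be disabled individually if it starts receiving spam. The domain
    here is a placeholder for illustration.
    """
    alphabet = string.ascii_lowercase + string.digits
    # secrets (not random) so aliases are unguessable.
    local_part = "".join(secrets.choice(alphabet) for _ in range(12))
    return f"{local_part}@{forwarding_domain}"
```

Because each service sees a different address, a breach at any one of them leaks a functional but fictional identifier, with no link back to the real person or to the person's accounts elsewhere.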

The Insurance Industry and Privacy Risk Transfer

Cyber insurance has become a significant market, with premiums growing as breach frequency and severity increase. Insurers are starting to influence corporate privacy practices through underwriting requirements: companies that want affordable cyber insurance need to demonstrate specific security controls, incident-response plans, and data-handling practices.

This market-driven mechanism may prove more effective than regulation in some contexts. A regulator might fine a company after a breach. An insurer raises the premium before the breach happens, creating a financial incentive to invest in prevention rather than relying on post-incident response. Companies that can demonstrate data minimisation, strong access controls, and encrypted storage get better rates than companies that hoard data in unencrypted databases.

The limitation is that insurance incentivises risk reduction, not privacy protection. A company that collects excessive personal data but stores it securely might get good insurance rates despite the privacy implications. The insurer cares about whether the data will be stolen, not about whether the data should have been collected in the first place.

Children and Age-Appropriate Design

The UK's Age Appropriate Design Code (also called the Children's Code), enforced by the ICO since 2021, requires services likely to be accessed by children to provide high-privacy defaults, limit data collection to the minimum necessary, and avoid using nudge techniques to encourage children to lower their privacy settings. The code has had measurable effects: TikTok, Instagram, and YouTube all changed their default settings for accounts identified as belonging to minors.

The enforcement challenge is age verification. Requiring proof of age creates its own privacy problem: the verification process collects additional personal data (ID documents, biometric scans) that then needs protection. Age estimation technologies (using facial analysis to guess approximate age) are less invasive but less accurate, and their error rates disproportionately affect certain demographic groups.

The fundamental tension is that the internet wasn't designed with age verification in mind. Retrofitting it is difficult, privacy-invasive, and imperfect. The children who most need protection are often the most adept at circumventing age gates, and the age-verification data itself becomes a target for breaches.

Corporate Data Practices That Haven't Changed

Despite regulatory pressure and public awareness, several corporate practices remain widespread in 2025. Dark patterns in consent interfaces persist because enforcement is slow and penalties are low relative to the revenue generated by the data collected through those patterns. Data retention periods remain undefined at many companies because "delete after purpose is fulfilled" requires defining the purpose, and vague purposes ("improving services," "analytics," "future product development") justify indefinite retention.

Cross-device tracking has become more sophisticated as third-party cookies phase out. Probabilistic fingerprinting, first-party data partnerships, and authenticated traffic solutions have replaced cookie-based tracking for many advertisers. The tracking still happens. The mechanism has changed. The user sees fewer cookie banners and might assume less tracking is occurring, when in fact the tracking has moved to methods that don't require cookies and so don't trigger consent mechanisms.
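Why does fingerprinting need no cookies and trigger no consent banner? Because it stores nothing on the device: stable attributes the browser exposes anyway are hashed into a stable identifier. A deliberately simplified sketch (real fingerprinting combines dozens of signals such as canvas rendering, installed fonts, and audio-stack quirks):

```python
import hashlib

def fingerprint(attrs: dict[str, str]) -> str:
    """Hash a set of browser/device attributes into a stable identifier.

    Nothing is written to the device: as long as the attributes stay
    stable across visits, the hash stays stable, so the identifier
    survives cookie deletion entirely.
    """
    # Sort keys so attribute order doesn't change the identifier.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical visitor attributes, for illustration only.
visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "2560x1440",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
}
```

This also shows the limits of the defences mentioned later: blocking cookies does nothing here, while browsers that randomise or coarsen these attributes (as Brave does) change the hash and break the linkage.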

Employee data collection has expanded, particularly for remote workers. Monitoring software that tracks keystrokes, screenshots, application usage, and mouse movements has become common in companies that transitioned to remote work. The privacy implications for employees are significant but receive less attention than consumer privacy because employment creates a power imbalance that makes meaningful consent questionable.

What Individual Action Looks Like in 2025

Individual action can't fix systemic problems, but it can reduce personal exposure. Password managers, used by roughly 30% of internet users according to recent surveys, eliminate password reuse and make credential-stuffing attacks ineffective against the accounts they protect. Two-factor authentication, now offered by most major services, adds a second barrier that survives a password breach.

Email aliasing (using a different address for each service) limits the blast radius of a breach at any single service and prevents cross-service tracking via the email address as a join key. VPNs prevent ISP-level traffic logging and hide the user's IP address from the websites they visit, though the VPN provider itself can see that traffic; independent audits of a provider's no-logs claims offer some assurance, but no technical guarantee, that it isn't being recorded.
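The "join key" point is worth making concrete. When two breached or purchased datasets are merged, the email address is the field a broker matches on; per-service aliases leave nothing to match. A sketch with hypothetical data:

```python
def join_on_email(dataset_a: list[dict], dataset_b: list[dict]) -> list[dict]:
    """Merge two datasets on the email field, the way a data broker
    links records about the same person across unrelated services."""
    by_email = {row["email"]: row for row in dataset_b}
    return [
        {**row, **by_email[row["email"]]}
        for row in dataset_a
        if row["email"] in by_email
    ]

# Same address everywhere: the records link into one profile.
shop = [{"email": "alice@example.com", "purchases": "shoes"}]
forum = [{"email": "alice@example.com", "posts": 42}]

# Per-service aliases: nothing to join on, no combined profile.
shop_alias = [{"email": "x7f2q@relay.example", "purchases": "shoes"}]
forum_alias = [{"email": "k9m1z@relay.example", "posts": 42}]
```

The defence costs the broker nothing to circumvent if the user reuses an alias, which is why the discipline is one alias per service.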

Browser choice matters more than most people realise. Firefox with Enhanced Tracking Protection, Brave with its built-in ad and tracker blocking, and Safari with Intelligent Tracking Prevention all provide meaningfully better tracking resistance than Chrome's default configuration. Extensions like uBlock Origin and Privacy Badger add further layers; the Electronic Frontier Foundation's HTTPS Everywhere, once a staple of this list, has been retired now that major browsers ship HTTPS-only modes natively.

The trajectory of online privacy in 2025 is a slow shift from a data-maximalist default to a data-minimalist aspiration. The tools, regulations, and practices that support privacy are better than they were five years ago. The threats, incentives, and corporate behaviours that undermine privacy are also more sophisticated than they were five years ago. The trajectory matters more than the snapshot, and the trajectory, measured in regulation enacted, technical countermeasures deployed, and public awareness raised, points gradually in the right direction.