Creating an account on most platforms starts with an email address. That email becomes the thread connecting the account to a real person. Password resets go to it. Marketing emails go to it. Breach notifications go to it. Data brokers index it. The account is only as anonymous as the email behind it, which is to say, not anonymous at all.
Another.IO took a different path. Account creation requires zero personal data. Instead of an email and password, the system generates a random 16-digit code. That code is the account. No email, no name, no phone number, no address. The code is the only credential, and it contains no information about the person holding it.
This isn't a marketing gimmick. It's an architecture decision with specific trade-offs that users should understand before relying on it.
How Code-Based Accounts Work
The registration flow is deliberately minimal. The user requests a new account. The system generates a cryptographically random 16-digit alphanumeric code. The code is displayed once. The user copies it and stores it somewhere safe. All future authentication uses this code as the sole credential.
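The generation step can be sketched in a few lines of Python. Everything here is illustrative rather than Another.IO's actual implementation; it assumes a 36-symbol alphabet (digits plus lowercase letters) and draws every character from a cryptographically secure random source:

```python
import secrets
import string

ALPHABET = string.digits + string.ascii_lowercase  # 36 possible symbols per position

def generate_account_code(length: int = 16) -> str:
    """Build the sole credential: each character comes from a CSPRNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

code = generate_account_code()
print(code)  # displayed once; the user stores it, the server never does
```

The `secrets` module matters here: `random.choice` is seeded predictably and would make codes guessable, while `secrets.choice` draws from the operating system's CSPRNG.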
There's no email confirmation step because there's no email. No password creation step because the code replaces the password. No profile page because there's no profile data to display. The server stores a salted hash of the code, not the code itself. If someone gains access to the database, they see hashes. The hash can't be reversed to recover the original code, and the code can't be linked to any external identity because it was never connected to one.
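The storage side can be sketched the same way. This is a hypothetical example assuming scrypt as the slow hash (the principle is identical with bcrypt or Argon2): the server keeps only a per-account salt and the resulting digest, never the code itself.

```python
import hashlib
import hmac
import os

def hash_code(code: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); the plaintext code is discarded after this."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(code.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_code(code: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.scrypt(code.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_code("x7k2m9qp4r8w1n5t")   # illustrative code value
print(verify_code("x7k2m9qp4r8w1n5t", salt, digest))  # True
```

An attacker who steals `(salt, digest)` pairs learns nothing about who holds the account and can't feasibly recover the random code behind any digest.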
That last part is the important bit. Not "the company promises not to link data." Not "data is encrypted at rest." The data doesn't exist. There's nothing to link, nothing to encrypt, nothing to subpoena, nothing to sell.
What This Removes from the Chain
Traditional account systems create an identifiable chain. Each link is a potential leak, and the chain connecting them makes every leak worse.
An email address links the account to a real person. Password reset flows send tokens to that email, so if the email account is compromised, every account connected to it falls too. Marketing databases aggregate email addresses across services, building cross-platform profiles. A single email used on ten services gives data brokers ten data points to correlate. Breach dumps indexed by email let attackers check whether someone has an account on a specific service before even trying to access it.
Code-based authentication strips all of this away. No email to correlate. No password reset flow to intercept. No personal data in the database to leak. No cross-platform profiling. Two accounts on two different code-based services can't be linked to the same person because neither contains identifying information. The chain doesn't just have fewer links. It doesn't exist.
What a Breach Looks Like
The difference between breaching a traditional service and breaching a code-based service isn't gradual. It's categorical.
A traditional service breach yields email addresses, hashed passwords (sometimes plaintext), names, addresses, phone numbers, IP logs, and session histories. The attacker can identify specific users, attempt credential stuffing on other platforms, and sell the data to brokers who merge it with existing profiles. The harm persists long after the breach is patched. People don't get new identities because a company got hacked.
A code-based service breach yields salted hashes of random 16-digit codes. No emails to search. No names to identify. No passwords to reuse elsewhere. The hashes are useless because the codes are random (dictionary attacks don't work against random strings). The attacker can't identify who owns any account, can't contact any user, and can't use the data on any other platform. There's nothing worth stealing from the user table.
Security teams sometimes push back on this framing: "no system is breach-proof." True. But there's a meaningful difference between a system where a breach exposes millions of identities and a system where a breach exposes millions of meaningless hashes. The second one makes the news for about a day. The first one follows people for years.
Privacy by Design Versus Privacy by Policy
Most privacy protections are policy-based. The company promises not to share data. Promises to delete it on request. Promises to encrypt it at rest. These promises depend on the company keeping them, and on the company surviving long enough to keep them. Acquisitions change policies. Bankruptcies sell data as assets. New management decides the old privacy commitments aren't commercially viable.
Privacy by design eliminates the data rather than promising to protect it. If personal data was never collected, no policy change, acquisition, or bankruptcy can expose it. The protection is structural, not contractual. A promise is only as strong as the incentive to keep it. An absence is permanent.
Code-based authentication is one implementation of this principle. There are others: zero-knowledge proofs, decentralised identity systems, client-side encryption with no server-side key. But the 16-digit code approach has an advantage that the others don't: simplicity. There's nothing to explain about what data the service holds because the answer is "none." That answer doesn't require a ten-page privacy policy to communicate. It requires one sentence.
The Trade-Off Nobody Should Ignore
Every security decision involves a trade-off. Here it's explicit: if the code is lost, the account is lost.
There's no "forgot password" flow. There can't be one. A forgot-password flow requires a second channel (email, phone) to verify identity and deliver a reset token. Code-based authentication has no second channel because collecting one would defeat the entire purpose. The moment you ask for an email "just for recovery," you've recreated the exact vulnerability the system was designed to avoid.
This puts credential management entirely on the user. The code goes in a password manager, a physical note, or an encrypted file. Relying on memory isn't viable for a 16-digit random string. If the storage fails (password manager corrupted, note destroyed), the account is permanently inaccessible. There's no customer support path for lost codes. Support staff can't look up an account by email because there's no email. They can't verify identity through security questions because there are no security questions.
This trade-off isn't for everyone. Users who frequently lose credentials, who don't use password managers, or who expect customer support to recover access will find code-based authentication frustrating. The system is designed for people who prioritise privacy over convenience and who accept the responsibility that comes with that priority. That's a genuine limitation, not a disclaimer.
Pseudonymous Is Not Anonymous
Some platforms advertise "anonymous" accounts that are actually pseudonymous. The account uses a username instead of a real name but still requires an email for registration. Surface-level anonymity: other users can't see the real name. But structural anonymity? No. The platform still has the email, and the email links to a real identity.
This distinction matters because pseudonymous accounts are vulnerable to the same breach and correlation risks as fully named accounts. The email address is the weak link, and it's still in the database. An attacker who obtains the user table maps pseudonyms to real identities through the email field. The "anonymous" account was never anonymous. It was just wearing a mask over an identity the database knew the whole time.
Code-based accounts have no such weak link. The pseudonym question is irrelevant because there's no identifier of any kind connecting the account to a person. Not pseudonymous. Not hidden behind a fake name. Genuinely anonymous in the structural sense: not connected to any name at all.
Who Actually Needs This
Code-based anonymous accounts aren't the right choice for every service. They're the right choice when the threat model includes at least one of the following.
Targeted surveillance. Users in countries with authoritarian governments, whistleblowers communicating with journalists, activists organising in hostile environments. The risk scales with the amount of identifying data a platform holds. A service that stores no emails and no names gives a government agency nothing to subpoena.
Data broker aggregation. Users actively reducing their data footprint need platforms that don't contribute to the aggregation cycle. Every email address provided to a new service is another node in the broker graph. A code-based account is a dead end for enrichment algorithms. Nothing to enrich.
Credential reuse attacks. Users who reuse passwords across platforms (against advice, but common in practice) are vulnerable to credential stuffing. A code-based account is immune because the code is system-generated, random, and unique. There's no human-chosen password to reuse.
Post-breach harassment. Data breaches at email-based services expose users to spam, phishing, and targeted harassment. The breach at service A reveals the email, which gets used to find accounts on services B, C, and D. Code-based accounts break this chain at the first link.
The Physical-World Precedent
This model isn't new conceptually. Safe-deposit boxes at some banks are identified only by a key number. The bank doesn't record the holder's name. The key is the only proof of ownership. Lose the key, lose access. The bank can't help because the bank doesn't know whose box it is. Bearer bonds before electronic registration worked the same way: whoever held the certificate owned it. No ownership record. No recovery path.
Prepaid SIM cards in jurisdictions that don't require registration follow the same pattern. A SIM purchased with cash creates a phone number with no linked identity. The number works, but the carrier can't identify the subscriber. Each of these systems accepts the same trade-off: anonymity in exchange for personal responsibility over the access credential. The 16-digit code is simply the electronic equivalent of a physical bearer instrument.
Technical Details for the Curious
A 16-digit alphanumeric code with 36 possible characters per digit provides approximately 82 bits of entropy. At a million guesses per second, exhausting the keyspace would take longer than the age of the universe. Rate limiting and account lockout reduce the practical attack surface further.
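The arithmetic is easy to verify. This sketch uses the same assumptions as the text: 36 symbols per position, 16 positions, a million guesses per second.

```python
import math

ALPHABET_SIZE = 36
CODE_LENGTH = 16
GUESSES_PER_SECOND = 1_000_000

bits = CODE_LENGTH * math.log2(ALPHABET_SIZE)        # entropy in bits
keyspace = ALPHABET_SIZE ** CODE_LENGTH              # total possible codes
years = keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

print(f"{bits:.1f} bits")    # ≈ 82.7 bits
print(f"{years:.2e} years")  # ≈ 2.5e11 years, vs a ~1.4e10-year-old universe
```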
The code is hashed with a per-account salt using a slow hash function (bcrypt, scrypt, or Argon2). The slow hash adds computational cost to each guess, making offline brute-force impractical even with the salted hashes in hand. The code is transmitted over TLS during authentication. It's never logged, never stored in plaintext server-side, and never included in analytics events. It appears in plaintext at exactly two points: at generation, when it's shown to the user, and at authentication, when it's hashed and compared.
One underappreciated benefit of this architecture is audit simplicity. When a regulator asks "what personal data do you store about your users," the answer is genuinely "none." That isn't a legal fiction or a creative interpretation of what counts as personal data under GDPR. It's the literal truth. There are no names, no emails, no phone numbers, no addresses, no IP logs tied to identifiable accounts. The compliance conversation is short because there's nothing to discuss.
Users can generate a new code while authenticated, invalidating the old one. This provides credential rotation without requiring personal information. The system's simplicity is its strength. Fewer moving parts. Fewer data flows. Fewer attack surfaces than traditional email-and-password authentication. The trade-off is that the system can't do things that require knowing who the user is. Because it doesn't.