
#WhyID?

We were pleased to be invited to participate in last week's World Economic Forum workshop on Cybercrime 2025, which focused on Digital Identity.

One of the participants presenting was Access Now, who “defends and extends the digital rights of users at risk around the world” [https://www.accessnow.org]. They are running a campaign called #WhyID, and I’d urge you to add your signature of support.

They ask that, at the outset of any digital identity programme in any given region or country, the #WhyID questions must be asked:

Given that our aspiration is a global identity ecosystem, I guess responding to these questions is even more important for us as an organisation. So here goes:

1. Respond to #WhyID:

     Why do we need these foundational digital identity systems? What are their benefits?

We need foundational digital identity because we live in an increasingly digital world with little inherent trust, one in which the majority of entities rely on self-asserted identity.

In short, the benefits, if we do this correctly, are:

o   The move from self-asserted identity and identity attributes, to trusted identities with attributes from truly authoritative sources.

o   The move from identities that operate only within a locus-of-control, to identities that can be reused anywhere, by anyone, globally.

o   The move from identities that need to have a central authority at their heart, to a decentralised, privacy-enhancing ecosystem [and one that is NOT blockchain][1].

o   The move from a binary level of trust, to one where the entity taking the risk (remembering that risk is bi-directional, yet asymmetric) is able to understand the risk of every component part.

o   The elimination of billions of dollars of fraud and crime.

o   The elimination of identity theft and impersonation.

o   The ability to understand information from trusted, traceable and reputable sources, vs. untrusted, self-asserted and fraudulent entities (trolls, sock-puppets, state-sponsored misinformation etc.).

o   The ability to leverage a global ecosystem for secure and trusted IoT devices and secure and trusted communications.

    Why are such programmes deployed without sufficient evidence of the benefits that they should deliver? How do these programmes plan to reduce the risk to and safeguard the rights and data of users?

We agree; most programmes are designed to fix only one particular issue, and are therefore limited in both scope and design.
 
In contrast, we started by looking at why identity systems fail[2], used that understanding of what you need to “do differently” to build a set of principles[3], and from there designed a system[4] to meet those principles.
 
Thus the model builds in privacy by design, ensures anonymity where needed, and places the identity of an individual entity under the full control of said entity, with no intermediate systems or infrastructure that can be compromised.

    Why should it be mandatory – either explicitly or de facto – for users to enrol onto these programmes? These programmes are either mandatory through legislative mandates or through making them a precondition to essential services for users.

We feel it should not be; an entity should be able to generate its own root with 100% anonymity, and with total control over that root. Said entity should be able to generate personas (the join of said entity and an entity that is authoritative for one facet of said entity's overall identity) only when there is a benefit to said entity [you only need a passport because you want to travel across borders that require passports].
 
Most entities will see the benefit, especially as the use of a common (cryptographic) root [albeit 100% anonymous] allows multiple privacy-enhancing assertions to be made from disparate personas as a provably linked set [only the one entity could have made them]. For example: “I am over 21” & “Here is payment for alcohol”.
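As a rough illustration of what that could look like in practice, here is a minimal Python sketch (using the cryptography package's Ed25519 primitives) of an entity generating its own anonymous root, deriving persona keys from it, and proving that two assertions from different personas were made by the same root holder. The HKDF-based derivation and the names used are assumptions for illustration, not the actual ecosystem design.

```python
# Illustrative sketch only; the derivation scheme and labels are assumptions,
# not the actual ecosystem design.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_persona(root_secret: bytes, label: str) -> Ed25519PrivateKey:
    """Derive a persona signing key from the anonymous root secret."""
    seed = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=label.encode()).derive(root_secret)
    return Ed25519PrivateKey.from_private_bytes(seed)

# The entity generates its own root: random, self-generated, 100% anonymous.
root_secret = os.urandom(32)
root_key = Ed25519PrivateKey.from_private_bytes(root_secret)

# Two personas for two different facets of the entity's life.
age_persona = derive_persona(root_secret, "persona:age-check")
pay_persona = derive_persona(root_secret, "persona:payment")

# Each persona signs its own assertion.
sig_age = age_persona.sign(b"I am over 21")
sig_pay = pay_persona.sign(b"Here is payment for alcohol")

# A link proof over both persona public keys: only the root holder could
# have produced it, so the two assertions are provably linked.
link = (age_persona.public_key().public_bytes_raw() +
        pay_persona.public_key().public_bytes_raw())
link_sig = root_key.sign(link)
```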

    Why are these programmes centralised and ubiquitous? Why is one digital identity linked to multiple facets of a citizen’s life?

We see this as one of the fundamental questions; and our stance is that designing a digital identity system in this manner is fundamentally wrong, technically unnecessary and ultimately causes any digital ecosystem to fail or implode.
 
While there are great benefits to having multiple, disparate, trusted attributes all under a central “root” (after all, this is what happens in real life), you can only make this work if that root is 100% anonymous. The design must also account for the case where the entity in question decides its level of trust in the ecosystem is insufficient, and allow it to hold multiple, unconnected roots.

    Why are countries leapfrogging to digital identity programmes, especially in regions where conventional identity programmes have not worked? The scalability of digital identity programmes also makes their harms scalable.

We believe (based on historical evidence) that identity ecosystems implemented at a national level either fail outright or implode to a subset of services, and fail to federate (be trusted) outside of that particular locus-of-control.

Instead, giving away, for free, an ecosystem and a standard that needs no central infrastructure (and is therefore simple to adopt), in which a government or organisation is responsible only for its own people and only for those attributes for which it is truly authoritative, delivers all the benefits to countries and their citizens without the potential harms that come when such a system is scaled.

     Why are these digital identity programmes not following the security guidance coming out of various expert academic and technical standard-setting bodies on the use of biometrics in identity systems?

We’d go further than this and suggest that any biometric used for authentication should never be stored by any third party.
 
This does not, of course, preclude the nefarious collection of biometric information (a fingerprint lifted from a glass, say) or the (legal or illegal) use of biometric recognition systems (typically facial or gait) linked to surveillance systems.
 
Instead, a digital identity ecosystem must be designed to understand how an entity has authenticated to the digital ecosystem, and the level of trust it can place in an assertion of biometric authentication (without ever validating the raw biometric itself), in such a manner as to render replay attacks useless.
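To make the replay-resistance point concrete, here is a hedged Python sketch of one possible flow (an assumption, not a specification): the relying party issues a fresh nonce, and the entity's own device signs that nonce together with an attestation that local biometric authentication succeeded, so no biometric data ever leaves the device and a captured assertion cannot be replayed.

```python
# Assumed flow for illustration only: challenge-response with a device-held key.
import json
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # lives only on the entity's device

# Relying party: a fresh, single-use challenge.
nonce = os.urandom(32)

# Device: after a *local* biometric match, sign the challenge plus a claim
# that authentication passed; the biometric itself is never transmitted.
assertion = json.dumps({"biometric_auth": "passed", "nonce": nonce.hex()}).encode()
signature = device_key.sign(assertion)

# Relying party: verify the signature and that the nonce is the one it issued,
# which renders any replayed (old-nonce) assertion useless.
device_key.public_key().verify(signature, assertion)   # raises if tampered with
assert json.loads(assertion)["nonce"] == nonce.hex(), "stale or replayed assertion"
```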

    Why are some private sector enterprises being privileged with access and ability to access the ID systems and build their private businesses on top of them? What safeguards are being implemented to prevent the misuse of information by the private sector? What should be the role of the private sector in the identity ecosystem?

The driver for most companies is the ability to make money: either from building large identity infrastructure (traditional or, more recently, blockchain), from consultancy, or through controlling access to attributes.
 
Instead, we believe that no big infrastructure is required; organisations that are authoritative for facets of an entity's identity must be able to add the necessary service to their existing systems to sign trusted attributes that can then be held, maintained and managed by the entity to which they pertain.
 
In addition, organisations wishing to consume said trusted, authoritative attributes when proffered by said entity must be able to add the necessary service to their existing systems to accept and validate them.
 
We envisage both add-ons being open-source and royalty-free, to ensure proper security validation and widespread global take-up.
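As a hedged sketch of what those two add-on services might reduce to (the attribute fields and issuer names are illustrative assumptions, not the actual design): the authoritative organisation signs an attribute, the entity holds the signed attribute itself, and any relying party verifies it against the issuer's published public key, with no central infrastructure in the path.

```python
# Sketch only; attribute fields and issuer names are illustrative assumptions.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer add-on (e.g. a passport office, authoritative for citizenship).
issuer_key = Ed25519PrivateKey.generate()
attribute = json.dumps({"subject": "persona-public-key", "claim": "citizen",
                        "issuer": "passport-office"}).encode()
signed_attribute = (attribute, issuer_key.sign(attribute))

# The entity holds signed_attribute itself and proffers it only when useful.

# Relying-party add-on: verify against the issuer's published public key.
payload, sig = signed_attribute
try:
    issuer_key.public_key().verify(sig, payload)
    print("attribute accepted:", json.loads(payload)["claim"])
except InvalidSignature:
    print("attribute rejected")
```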

Those who promote these programmes must first critically evaluate and answer these basic WhyID questions, along with providing evidence of such rationale. In addition to answering these questions, these actors must actively engage and consult all actors. If there is no compelling rationale, evidence-based policy plan, and measures to avoid and repair harms, there should be no digital identity programme rolled out.

2. Evaluate and, if needed, halt: The potential impact on human rights of all existing and potential digital identity programmes must be independently evaluated. They must be checked for necessary safeguards and detailed audit reports must be made public, for scrutiny. If the necessary safeguards are not in place, the digital identity programmes must be halted.

We would agree (and probably go further), as we believe that adopting the Identity 3.0 principles[3] and the associated global ecosystem will both protect human rights and provide greater benefits for the government and its citizens.

 3.  Moratorium on the collection and use of biometrics (including facial recognition) for authentication purposes: Digital identity programmes should not collect or use biometrics for the authentication of users, until it can be proven that such biometric authentication is completely safe, inclusive, not liable to error, and is the only method of authentication available for the purpose of the programme. The harms from the breach of biometric information is irreparable for users and the ecosystem.

Our belief is that your biometrics (as they relate to authenticating your identity) should be collected, stored and validated under your direct and exclusive control.

Any relying entity wanting to validate the level to which an entity is authenticated should receive, along with the relevant signed attributes, everything about how authentication was achieved (device, version, pass threshold, number of attempts, etc.), allowing it to make its own risk assessment of whether that is adequate, with the option to then use some form of “step-up” authentication should the biometric threshold be insufficient.
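A small Python sketch of that idea, with field names that are purely illustrative: the authenticator shares only signed metadata about how authentication happened, and the relying entity applies its own risk policy, falling back to step-up authentication when the reported context is not good enough.

```python
# Field names and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class AuthContext:
    device_model: str
    firmware_version: str
    match_threshold: float   # biometric match threshold configured on the device
    attempts: int            # attempts before a successful match

def risk_decision(ctx: AuthContext, min_threshold: float, max_attempts: int) -> str:
    """The relying entity's own call: accept, or ask for step-up authentication."""
    if ctx.match_threshold >= min_threshold and ctx.attempts <= max_attempts:
        return "accept"
    return "step-up"   # e.g. request a PIN or another factor

print(risk_decision(AuthContext("SensorX", "2.1", 0.9999, 1),
                    min_threshold=0.999, max_attempts=3))   # -> accept
```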

This way there can be no collection, and thus no breach, of an entity’s biometric information.