#WhyID?

We were pleased to be invited to participate in last week's World Economic Forum workshop on Cybercrime 2025, focusing on Digital Identity.

One of the participants presenting was Access Now, which “defends and extends the digital rights of users at risk around the world” [https://www.accessnow.org]. They are running a campaign, called #WhyID, that I’d urge you to add your signature of support to.

They ask that at the outset of any digital identity programme, in any given region or country, the #WhyID question must be asked:

Given that our aspiration is a global identity ecosystem, responding to these questions is even more important for us as an organisation. So here goes:

1. Respond to WhyID?:

     Why do we need these foundational digital identity systems? What are their benefits?

We need foundational digital identity because we live in an increasingly digital world with little trust, a world in which the majority of entities rely on self-asserted identity.

In short, the benefits, if we do this correctly, are:

o   The move from self-asserted identity and identity attributes, to trusted identities with attributes from truly authoritative sources.

o   The move from identities that operate only within a locus-of-control, to identities that can be reused anywhere, by anyone, globally.

o   The move from identities that need a central authority at their heart, to a decentralised, privacy-enhancing ecosystem [and one that is NOT blockchain][1].

o   The move from a binary level of trust, to one where the entity taking the risk (remembering that risk is bi-directional, yet asymmetric) is able to understand the risk of every component part.

o   The elimination of billions of dollars of fraud and crime.

o   The elimination of identity theft and impersonation.

o   The ability to understand information from trusted, traceable and reputable sources, vs. un-trusted, self-asserted and fraudulent entities (trolls, sock-puppets, state sponsored misinformation etc.).

o   The ability to leverage a global ecosystem for secure and trusted IoT devices and secure and trusted communications.

    Why are such programmes deployed without sufficient evidence of the benefits that they should deliver? How do these programmes plan to reduce the risk to and safeguard the rights and data of users?

We agree; most programmes are designed to fix only one particular issue, and are thus limited in both scope and design.
 
In contrast, we started by looking at why Identity systems fail[2], from there developing this understanding of what you need to “do differently” to build a set of principles[3], and from there designing a system[4] to meet those principles.
 
Thus the model builds in privacy by design, ensures anonymity where needed, and places the identity of an individual entity under the full control of said entity, with no intermediate systems or infrastructure that can be compromised.

    Why should it be mandatory – either explicitly or de facto – for users to enrol onto these programmes? These programmes are either mandatory through legislative mandates or through making them a precondition to essential services for users.

We feel it should not be; an entity should be able to generate its own root with 100% anonymity, and with total control over that root. Said entity should be able to generate personas (the join of said entity and an entity that is authoritative for one facet of said entity’s overall identity) only when there is a benefit to said entity [you only need a passport because you want to travel across borders that require passports].
 
Most entities will see the benefit, especially as the use of a common (cryptographic) root [albeit 100% anonymous] allows multiple privacy enhancing assertions to be made from disparate personas as a provably linked set [only the one entity could have made them]. For example: “I am over 21” & “Here is payment for alcohol”.
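To make the linked-persona idea concrete, here is a minimal Python sketch: per-persona keys are derived from a single secret root, so the personas are unlinkable to outsiders, yet only the holder of the root could have produced assertions under both. (The HMAC construction and persona names here are hypothetical stand-ins for the real asymmetric cryptography such an ecosystem would use.)

```python
import hashlib
import hmac
import os

def derive_persona_key(root: bytes, persona: str) -> bytes:
    # Each persona gets an independent key derived from the anonymous root;
    # outsiders cannot link the personas, but only the holder of `root`
    # can derive the keys for all of them.
    return hmac.new(root, persona.encode(), hashlib.sha256).digest()

def sign(key: bytes, assertion: str) -> str:
    return hmac.new(key, assertion.encode(), hashlib.sha256).hexdigest()

root = os.urandom(32)  # the 100% anonymous root, held only by the entity
age_key = derive_persona_key(root, "persona:age-authority")
pay_key = derive_persona_key(root, "persona:payment-provider")

# Two assertions from disparate personas, provably from the same entity:
bundle = [
    ("I am over 21", sign(age_key, "I am over 21")),
    ("Here is payment for alcohol", sign(pay_key, "Here is payment for alcohol")),
]
```

The key property is that the two derived keys look unrelated to anyone who does not hold the root, yet both trace back to the one entity.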

    Why are these programmes centralised and ubiquitous? Why is one digital identity linked to multiple facets of a citizen’s life?

We see this as one of the fundamental questions; and our stance is that designing a digital identity system in this manner is fundamentally wrong, technically unnecessary and ultimately causes any digital ecosystem to fail or implode.
 
While there are great benefits to having multiple, disparate, trusted attributes all under a central “root” (after all, this is what happens in real life), you can only make this work if that root is 100% anonymous. The design must also take into account the case where the entity in question decides their level of trust in the ecosystem is insufficient, and allow them to have multiple, unconnected roots.

    Why are countries leapfrogging to digital identity programmes, especially in regions where conventional identity programmes have not worked? The scalability of digital identity programmes also makes their harms scalable.

We believe (based on historical evidence) that identity ecosystems implemented at a national level either fail or implode to a sub-set of services, and fail to federate (be trusted) outside of that particular locus-of-control.

Instead, giving away, for free, an ecosystem and a standard that needs no central infrastructure (and is therefore simple to adopt), where a government or organisation is responsible only for its own people, and only for those attributes for which it is truly authoritative, delivers all the benefits to countries and their citizens without the potential harms that come when such a system is scaled.

     Why are these digital identity programmes not following the security guidance coming out of various expert academic and technical standard-setting bodies on the use of biometrics in identity systems?

We’d go further than this and suggest that any biometric used for authentication should never be stored by any third party.
 
This does not of course preclude the nefarious collection of biometric information (fingerprint from a glass) or the (legal, or illegal) use of biometric recognition systems (typically facial or gait) linked to surveillance systems.
 
Instead, a digital identity ecosystem must be designed to understand both the authentication of the entity to its digital device and the level of trust it can place in an assertion of biometric authentication (without validating the raw biometric itself), and in such a manner as to render replay attacks useless.
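As a rough illustration of that principle, the Python sketch below (with an assumed device secret and HMAC in place of real secure-element signatures) shows how a device can assert "a biometric match occurred" over a fresh challenge, without ever releasing the biometric; replaying a captured assertion fails because the relying party's challenge changes each time.

```python
import hashlib
import hmac
import json
import os
import time

# Hypothetical secret provisioned into the device's secure element at manufacture.
DEVICE_KEY = os.urandom(32)

def biometric_auth_assertion(challenge: bytes, match_score: float) -> dict:
    """Assert that a biometric match occurred, without releasing the biometric."""
    payload = json.dumps({
        "challenge": challenge.hex(),   # fresh nonce: replaying an old assertion fails
        "match_score": match_score,     # metadata for the relying party's risk decision
        "timestamp": int(time.time()),
    }, sort_keys=True).encode()
    mac = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

# The relying party issues a fresh challenge per attempt; an intercepted
# assertion is useless next time because the challenge will not match.
challenge = os.urandom(16)
assertion = biometric_auth_assertion(challenge, match_score=0.97)
```

Note that the raw biometric never leaves the device; only the signed statement about the match (and its metadata) does.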

    Why are some private sector enterprises being privileged with access and ability to access the ID systems and build their private businesses on top of them? What safeguards are being implemented to prevent the misuse of information by the private sector? What should be the role of the private sector in the identity ecosystem?

The driver for most companies is the ability to make money; whether from building large identity infrastructure (traditional or, more recently, blockchain), in the form of consultancy, or through controlling access to attributes.
 
Instead, we believe that no big infrastructure is required; organisations that are authoritative for facets of an entity’s identity must be able to add the necessary service to their existing systems to sign trusted attributes that can be held, maintained and managed by the entity to which they pertain.
 
In addition, organisations wishing to consume said trusted, authoritative attributes when proffered by said entity must be able to add the necessary service to their existing systems to accept and validate them.
 
We envisage both add-ons being open-source and royalty-free to ensure proper security validation and widespread global take-up.

Those who promote these programmes must first critically evaluate and answer these basic WhyID questions, along with providing evidence of such rationale. In addition to answering these questions, these actors must actively engage and consult all actors. If there is no compelling rationale, evidence-based policy plan, and measures to avoid and repair harms, there should be no digital identity programme rolled out.

2. Evaluate and, if needed, halt: The potential impact on human rights of all existing and potential digital identity programmes must be independently evaluated. They must be checked for necessary safeguards and detailed audit reports must be made public, for scrutiny. If the necessary safeguards are not in place, the digital identity programmes must be halted.

We would agree (and probably go further), as we believe that adopting the Identity 3.0 principles[3] and the associated global ecosystem will both protect human rights and provide greater benefits for the government and its citizens.

 3.  Moratorium on the collection and use of biometrics (including facial recognition) for authentication purposes: Digital identity programmes should not collect or use biometrics for the authentication of users, until it can be proven that such biometric authentication is completely safe, inclusive, not liable to error, and is the only method of authentication available for the purpose of the programme. The harms from the breach of biometric information are irreparable for users and the ecosystem.

Our belief is that your biometrics (as they relate to authenticating your identity) should be collected, stored and validated under your direct and exclusive control.

Any relying entity wanting to validate the level to which an entity is authenticated should, along with the relevant signed attributes, be able to understand everything about how authentication was achieved (device, version, pass threshold, number of attempts etc.) allowing them to make their own risk assessment of whether that is adequate for them, of course with the option to then use some form of “step-up” authentication should the biometric threshold be inadequate.

This way there can be no collection, and thus no breach, of an entity’s biometric information.
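A hedged sketch of what the relying party's side of this might look like: given the signed authentication metadata described above, it applies its own policy and falls back to step-up authentication when the biometric threshold is inadequate. (The field names and thresholds are illustrative, not part of any specification.)

```python
from dataclasses import dataclass

@dataclass
class AuthContext:
    device: str         # which sensor / firmware performed the match
    fw_version: str
    match_score: float  # how well the biometric passed validation
    attempts: int       # number of attempts before the match

def decide(ctx: AuthContext, required_score: float = 0.95, max_attempts: int = 3) -> str:
    """The relying party's own risk assessment over the authentication metadata."""
    if ctx.attempts > max_attempts:
        return "reject"
    if ctx.match_score >= required_score:
        return "accept"
    return "step-up"  # e.g. ask for a PIN or a second factor

print(decide(AuthContext("sensor-x", "2.1", 0.97, 1)))  # accept
print(decide(AuthContext("sensor-x", "2.1", 0.80, 1)))  # step-up
```

The point is that the policy lives with the entity taking the risk, not with whoever collected the biometric.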

Ten reasons blockchain may not be the solution for a global identity ecosystem!

I’ve lost count of the times that I’ve presented on the problems posed by designing a single, global identity ecosystem, only for people to come up afterwards and say “so what are you proposing – blockchain?”; to which my standard response is “blockchain may play a part in some aspects of a solution, but it is not THE solution!”.

So, what is behind that assertion?

Vint Cerf on Blockchain
First: the problem, as I see it, is the "the solution is blockchain - now what's the problem?" crowd, driven partially by VC funding, and partially by its proponents trying to find viable solutions beyond alt-currency and land registry.
Blockchain is just a database – yes, it’s a special kind of database, with some interesting properties around pseudo-privacy and provable immutability, but also with some interesting issues as it’s a public ledger – more on that later.  But the bottom line is that I’m with Vint Cerf on this one as my starting point for a debate.


Second: Blockchain does not pass the "sniff" test for a global identity solution. It does not pass the acid test of "will the Chinese use a US-run solution, or vice versa?" - remember, someone has to own, control, manage and upgrade the model, even if it's distributed. Governments want to retain a large portion of control over the identities (or more correctly, identity attributes) that matter to them, particularly citizen attributes.

Third: The locus-of-control problem - see Jericho Forum Commandment[i] #8 “Authentication, authorisation and accountability must interoperate / exchange outside of your locus / area of control”. This is the “we can only make it work if we control everything ourselves” – it’s the mentality the security and identity industry has had for over half-a-century, whether it’s “put it all into AD”, “everyone must have my product for it all to interoperate”, or the “we can only make identity work if I run the central database” (look at any government developed identity system).

This is really key, because it goes to trust and risk; how do you trust (or perform a risk calculation on) something you do not manage – and the reality is you generally don’t – you insist on doing your own identity proofing and creating an identity that YOU manage, in your identity system – which is why corporates end up with poorly managed contractors and third parties alongside (reasonably managed) staff identities; or governments end up creating dummy citizen identities so foreign nationals can pay tax.

Fourth: We've already seen the need to fork Bitcoin[ii], and Estonia (the poster child for state-mandated ID systems) found a security problem with its ID cards[iii] - can you imagine needing to do this for 7.5bn people (let alone 20bn+ IoT devices)?

Fifth: A truly distributed blockchain cannot handle the growth or transaction rate for 30bn+ (and growing) identities together with all their attributes. Think how many identity transactions need to be carried out on a global scale - unless it’s a private blockchain (but then go back and see the second problem above).

Sixth: Identity and attribute revocation - once it's on the blockchain, how do you revoke it? A total or binary revocation is often unwanted. For example, my old passport: even though it has expired (been revoked) and I cannot use it for border entry, it is still a government-issued document with my photo and (immutable) date of birth; depending on the risk assessment of the entity I assert it to, this may be perfectly adequate for proving my age. Conversely, under the GDPR "right to be forgotten", how can I completely erase any trace of an aspect (or persona) of my identity when it's stored on an immutable public ledger?

Seventh: Blockchain, or to give it its full name “public distributed ledger” can have serious problems when it comes to privacy, given its public and distributed nature. Any solution will need to store SPI (sensitive personal information) and while I agree there are technological measures to protect said attributes, often the very existence of an attribute (but not its contents), or a reference to an external organisation or system can lead to inferences being drawn. For example: a reference to a particular ethnic group may result in an entity being arrested, targeted or killed.

Eighth: Blockchain relies on the always-on, or certainly the always-accessible, nature of its design. While there are proposed solutions that allow a currency transaction to take place between two off-line parties and be uploaded later, the real-time verification of a UK driver's licence in the mid-west USA, where there is no Internet for miles, is a problem yet to be solved (or, I suspect, even thought about) in the blockchain world.

Ninth: Most of the blockchain identity solutions rely heavily on PKI to make it secure; the problem for a PKI solution is that within the short-term life-cycle of a global identity ecosystem, quantum computing will likely break PKI as it stands. Therefore, a heavy reliance on PKI may not be an optimal design solution.

Tenth (and finally): Smart contracts are cited by many as the way you make identity on the blockchain work. I like the David B. Black quote[iv]: “They’re not smart. They’re not contracts. They’re rife with security issues. And they violate the core principles that are supposed to make blockchain wonderful. Other than that, they’re great!” A smart contract is visible to all users of the blockchain, including its bugs and security holes, and may not be quickly fixed - indeed, if fixing the bug requires a fork of the blockchain, once implemented on a global scale it may be impossible to fix.

Conclusion:
I have no doubt that many of these issues can be technically solved, but in solving the problems the solution becomes increasingly complex, convoluted and difficult to understand/implement.

If I have learnt nothing from a long security career, it is that complexity is the enemy of good security. The global identity ecosystem model must be simple if it’s to stand any chance of working, let alone achieving global adoption.

I would commend the Identity 3.0 key principles[v] that we developed to try and get the fundamentals right.

There ARE better solutions - see the work out of the Jericho Forum and the Global Identity Foundation - but it all starts with getting your mindset out of "trust = a central system that I control".

References and footnotes:

Jericho Forum Commandments / Jericho Forum Identity Commandments: https://www.globalidentityfoundation.org/downloads/Identity_30_Principles.pdf

Jericho Forum Identity videos:

IoT's dirty little secret

IoT has a dirty little secret: devices tend to only work if you connect via that device's hub, generally a cloud system. Should that hub go down, or the company simply decide not to support it any more, or go bust, then all you have is a non-functional brick.

This was recently brought home to purchasers of IoT devices from Best Buy, whose Insignia 'smart' home gear became very dumb (https://www.theregister.co.uk/2019/11/05/best_buy_iot/), and more recently by "Pets 'go hungry' after smart feeder goes offline" (https://www.bbc.com/news/technology-51628795).
If that device was $20 and you got 5 years' use out of it, you may take the pragmatic view and simply buy the latest and greatest widget. But suppose you purchase a new car - and it's Internet-connected; it's effectively a very expensive IoT device. Before you collected it the salesman told you to pre-install the app on your phone and create an account, and on collection you were walked through how to connect the app to the vehicle - only you actually didn't; in effect you connected your app to the Volvo / BMW / Mercedes cloud service, and that service paired your account to the vehicle.

The problem is the same: should Volvo / BMW / Mercedes decide to discontinue support, or (however unlikely) go bust, then I've gone from having a smart vehicle to a dumb one! In essence I'm at their mercy; and the smarter these vehicles get, and the more we rely on those smart features, the more of a problem this becomes - to the point that, although buying a car may seem like good value, in effect you are just being allowed to borrow it.

The problem gets worse when you get into the home - connecting a set of disparate IoT devices requires your control centre (typically a smart speaker) to connect to the cloud service. Then, in turn, you tell that cloud service how to talk to each device, via the cloud services of each individual device manufacturer.
Firstly, all of those devices are communicating through your home router, opening up multiple avenues of attack for the bad guys. But secondly, WHY? Surely when I turn on the light, my intelligent light switch should talk directly to my intelligent light.

The challenge is that when I buy a new IoT light bulb, how do I make it "my light bulb" - or probably, and more realistically, "my home's light bulb" - such that my home's IoT-enabled light switch can control it directly (on the same network) and without needing to go out to a cloud service.

The Identity 3.0 concepts of “personas” and “context” allow you to do just that. The (digital) join between Entity:Human Myself and Entity:Device Volvo XC90 creates a unique persona for the vehicle, “My Volvo XC90”, with a set of cryptographic keys that allow me to directly and securely connect to the vehicle.
In the house, the connection between Entity:Organization House and Entity:Human Myself gives me a persona as a member of the organization. In turn, the new IoT light bulb and IoT light switch are also enrolled with personas, making them the house's IoT devices. Now anyone (just as you do today) can operate the switch and the light turns on, but as a member of “house” I can also use my voice or smart device to control that light.
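A simplified sketch of the enrolment idea, assuming HMAC-derived keys as a stand-in for the real persona cryptography: the house enrols each device with a key derived from its own root, so the switch and bulb can authenticate each other on the local network with no cloud service in the loop.

```python
import hashlib
import hmac
import os

# Enrolment: the house (Entity:Organization) holds its own anonymous root
# and issues each device a persona key derived from it - no cloud involved.
house_root = os.urandom(32)

def enrol(name: str) -> bytes:
    return hmac.new(house_root, name.encode(), hashlib.sha256).digest()

# A shared "lighting" context key lets the enrolled switch and bulb
# authenticate each other directly on the home network.
context_key = enrol("context:lighting")

def send_command(msg: str) -> tuple[str, str]:
    # The switch tags its command with the context key.
    return msg, hmac.new(context_key, msg.encode(), hashlib.sha256).hexdigest()

def accept_command(msg: str, tag: str) -> bool:
    # The bulb verifies locally; this still works if the manufacturer's
    # cloud service is offline, or the manufacturer no longer exists.
    expected = hmac.new(context_key, msg.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

Because verification happens entirely on the home network, nothing here breaks when an external service disappears.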

Not only is this more secure, it is more logical to set up and maintain; and, more importantly, it keeps working even when the manufacturer's cloud service goes off-line, or the manufacturer goes bust!

See: https://www.globalidentityfoundation.org/downloads/Briefing_-_Infrastructure+IoT.pdf

The problem(s) with using biometrics

When talking to people about Identity, usually at some point in the conversation, people say "so the future is biometrics?".  And my response is "maybe".....

So here are a few musings on the problems that biometrics face (if you pardon the pun).
  1. Biometrics are to do with authentication of an entity - NOT identity; authentication is the gateway to whatever identity system you are using. I'm constantly amazed by the number of security and identity professionals who confuse / mix / interchange these two terms.
     
  2. Biometrics, if stolen, cannot be replaced; which is sort of true, but in reality you leave your fingerprints, face and even DNA everywhere. The real issue is a replay attack against devices that have your biometric registered, from the "gummy bear" attack against fingerprint sensors to the dummy-head attack against the iPhone X.
     
  3. Biometrics cannot be revoked; if you are concerned that someone out there is spoofing your biometric information you cannot toss it away and replace it, as you would a password or a credit card. Yes, there are techniques like salting and one-way encryption that reduce the potential damage. But there will always be a poorly designed system with the potential for a leak of biometric credentials, ruining them for all other systems.
     
  4. If you rely on a device to validate biometrics, then you (as the relying party) must know the actual model the entity is using, in order to understand:
    • the technology behind the biometric match and what exploits can be used against it
    • the threshold settings on a biometric match within the device / firmware / software
    • the match confidence, or how well the biometric passed validation
        
  5. As the end user (and the owner of the biometrics) HOW DO I KNOW where my biometrics are stored? When I register my biometrics, I have no actual idea what happens with them, and no (easy) way of validating the vendor assurances of "it is secure and well designed".
    I hope that my fingerprint is stored only on my smartphone, AND in a non-reversible format, AND is not being shipped externally (even in backups). BUT I HAVE NO IDEA; for all I know my registered fingerprint could be stored and shipped externally as a plain image, and when I authenticate it could be being manually verified by a bank of humans in a low-wage country.
     
  6. Biometrics on mobile devices are not the gold standard; many app developers regard the move to biometrics, particularly fingerprint, as far superior to other authentication methods. Unfortunately the fingerprint API (Android) simply returns a binary "biometric authentication passed"; so on a smartphone where you have enrolled fingerprints from yourself, your partner, your best mate etc., any of those enrolled fingerprints will open the banking app; yet the bank regards that authentication as the current "gold standard" and applies a higher level of "certainty" that it is the account owner using the smartphone!
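To illustrate that gap, the sketch below contrasts the binary result today's API effectively gives with a richer (entirely hypothetical) result that would let a relying party such as a bank make its own risk decision.

```python
from dataclasses import dataclass

# What an Android-style fingerprint API effectively tells the app today:
# a single boolean meaning "some enrolled fingerprint matched".
def binary_fingerprint_auth() -> bool:
    return True  # could be the owner, their partner, their best mate...

# A hypothetical richer result that would let the bank reason about risk:
@dataclass
class AuthResult:
    passed: bool
    enrolled_id: str     # WHICH enrolled fingerprint matched
    match_score: float   # how well it matched
    enrolled_count: int  # how many different people could have passed

def bank_risk(result: AuthResult) -> str:
    if not result.passed:
        return "reject"
    if result.enrolled_count > 1:
        return "step-up"  # cannot assume it is the account owner
    return "accept"
```

With the binary API, the `enrolled_count` and `match_score` information simply does not exist for the bank to act on; that is the design flaw.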
So what SHOULD this look like?

For the owner of the biometrics: provable assurance that my biometrics are secure and exclusively under my control. This means:

  • The only place my biometric should be stored is on a device under my exclusive control
  • That my biometric should not be directly used outside of said device and should only be released as a cryptographic assertion of "sameness"
  • That where a device is only partially under my control (say, a smartphone) then biometrics should only unlock a cached assertion of sameness.

For the receiver of the authentication / identity / attributes (and the entity usually taking the majority of the risk in the transaction), if they are to make a good, risk-based, decision then it is critical that they are able to fully understand how well the entity is connected to the digital infrastructure they are using.

The Right to Privacy in the Digital Age?



In December 2013, the United Nations General Assembly adopted resolution 68/167[1], which expressed deep concern at the negative impact that surveillance and interception of communications may have on human rights. The General Assembly affirmed that the rights held by people offline must also be protected online, and it called upon all States to respect and protect the right to privacy in digital communication.

As the previous High Commissioner cautioned in past statements [September 2013 and February 2014], such surveillance threatens individual rights – including to privacy and to freedom of expression and association – and inhibits the free functioning of a vibrant civil society[2].
 

Yet this week we have headlines that “Facebook encryption threatens public safety[3]” from the UK Home Secretary and her US and Australian counterparts.

Now while I’m not Facebook's greatest fan (I won’t install it on my smartphone), history tells me that the moment I hear politicians talk about encryption coupled with the words “paedophiles and terrorists” as their headline justification, I start to worry; it usually means there is little valid argument, but that they would like to trample on people’s human rights on a wave of moral outrage!

Existing laws allow for orders for wiretaps of products like WhatsApp, which can yield some data (IP addresses, phone numbers, contact lists, avatar photos etc.); and while you cannot get encrypted messages and attachments, you can use this and other evidence to apply to a court and convince a judge that you have sufficient grounds for a warrant to arrest a suspect and seize their end-point device!

Having worked with the police in the 1990s to get the solid evidence so that they could arrest one of our employees for accessing indecent images of children, I know first hand that our existing laws were more than adequate to get an arrest warrant.

There are a number of root-cause problems here that have been rehashed over the many, many years I’ve been listening to this debate as it continually rears its head.

The first is the Phil Zimmermann[4] quote[5] “If privacy is outlawed, only outlaws will have privacy.”, which is often misquoted as “If encryption is outlawed, only outlaws will have encryption”. This probably applies doubly to those terrorist organisations that are well funded enough to write their own encryption products, and even use steganography[6] to hide them in plain sight.

The second is the “our government wants a back-door into your encryption” argument. The problem here (especially for international tech companies) is “which government is entitled to have a back-door key?” – because if the US demands it, then other countries will also demand it, usually as a condition of doing business in their jurisdiction – so it rapidly becomes “any legitimate government” – but legitimate does not equate to benign, or even non-repressive towards certain sections of its citizens.

The third is that international business needs to be able to ensure that its business communications are secure; I can remember the time in the 90s when France would not allow secure encryption of our corporate WAN links into the country – this can negatively affect business investment decisions if you cannot ensure the security of your business (physical or digital) in that country.

History is littered with failed and flawed attempts to get back-doors into encryption; so I would recommend that any politician who actually wants to make this suggestion goes and talks to the (white-hat) hackers at Black Hat and DEF CON, or to those of us who have been implementing security in large corporates for many years. They will tell you that the encryption genie escaped the bottle a long time ago, and that the only people you will actually harm are the 99-plus per cent of citizens who are law-abiding, and the companies and organisations that need MORE strong encryption to protect us from the evils on the Internet.

In our work on Identity we have identified the need not only for strong encryption, but also for it to be open source and peer reviewed so all parties can assure themselves that there are no back doors – this builds trust in both the identity and digital ecosystem; then this must be coupled with 100% anonymity at the root of an entity’s identity, which ensures privacy and delivers primacy and agency.

It may seem counterintuitive, but by doing this you end up with a more accountable, more trustworthy digital ecosystem.



The usability challenge

I’ve been in the information security game for 25 years; and the one thing I’ve learnt is that whereas I might care about security, privacy and identity, the average person can't be bothered. In fact it gets worse, as today the craving for a friction-less user experience trumps almost everything else.

What brought this home to me was a presentation at this year's Usenix Enigma 2018 security conference in California, where Google software engineer Grzegorz Milka revealed (presentation link) that, as of today, fewer than 10 per cent of active Google accounts use two-step authentication to lock down their services.

This free, two-step authentication service was introduced over seven years ago (September 2010), initially for Gmail accounts, but its take-up by Joe Public has been negligible.

Last summer I needed to contact Amazon after some fraudulent activity on my account; their only advice was to change my password (which I had already done – as well as deleting all credit cards from the account) – when I asked about two-factor authentication, their support line denied it existed.

However while checking my Amazon account settings in case anything had been changed I stumbled across it – and guess what; it can leverage my already existing Google two-step authentication service. [Your Account › Login & security › Advanced Security Settings if you are interested – it also supports the Microsoft authenticator.]

As we design Identity 3.0; the next generation for digital identity, the challenge has been “how do we make it simple?”

But I think, based on everything I have learnt to date, that we need to go significantly further than this if it’s to be universally adopted, and add to our design criteria:

  1. How do we make it the simplest, most friction-less, option?
  2. How do we make security, privacy and primacy near-invisible?
  3. How do we make it the default?

Because only then will we get the other 90% to adopt a security and privacy enhancing approach and start to beat the bad-guys.

Paul Simmonds, CEO Global Identity Foundation, January 2018 

Who do you trust with YOUR biometrics?

The UK Government has been promising a biometrics strategy since 2012, but it has been repeatedly delayed and is now due to be published sometime in 2018.
The committee chairman Norman Lamb has written (see link below) expressing his disappointment in the government's position and asked for more clarity on the delay.
"I remain concerned about how the Review will be implemented as well as uncertainty on the government's position on other important areas – DNA, fingerprints and so on,"
www.parliament.uk/documents/commons-committees/science-technology/Correspondence/171219-Letter-to-Baroness-Williams-Biometrics-strategy.pdf
Biometrics are seen by many (alongside blockchain) as "the holy grail" of identity; but are in fact a potential nightmare!
Some of the problems are as follows;
  1. Who do you trust with YOUR biometrics?
    We assume the fingerprint on our phone is secured "on device", never backed-up and the NSA (or other spooks) never have access; but how does Joe Public know this? Let alone verify this? But at least the phone is a device that you (sort of) own and control.
    For everything else, the question you should be asking is: “where is your biometric held and processed?” – and it is usually impossible to find out, let alone verify, even if Joe Public knew or understood the nasty questions to ask.
     
  2. We've already seen the first breaches of biometric data;
    Biometric data stolen from corporate lunch rooms system
    https://www.theregister.co.uk/2017/07/10/malware_scum_snack_on_lunchroom_kiosks/

    UIDAI says Aadhaar system secure, refutes reports of biometric data breach
    http://smartinvestor.business-standard.com/pf/pfnews-479889-pfnewsdet-UIDAI_says_Aadhaar_system_secure_refutes_reports_of_biometric_data_breach.htm

    OPM Now Admits 5.6m Feds’ Fingerprints Were Stolen By Hackers
    https://www.wired.com/2015/09/opm-now-admits-5-6m-feds-fingerprints-stolen-hackers/

     
  3. Processing and replay attacks;
    When you place your finger on a sensor, or have your photo taken by (for example) a door entry system, where is this processed?
    • Is it on a secure chip?
    • Is it passed as a raw data capture over the wire or is there some form of encoding performed?
    • Can what is passed over the wire be replayed?
    • Is it processed on a central PC or server controlling the doors, or possibly multiple doors?
    • Or (because it faster/cheaper) processed on a cloud system?
    • Or maybe it goes off to an army of people in China or India (because people are better at facial-recognition matching) who are given the two photos to compare and then click "match" or "no-match"?

    Bottom line: even if you knew to ask, there is no easy way of knowing how the system is architected, or how your data is being handled. Lifting fingerprints off a wine glass to gain entry is the stuff of "Mission Impossible" and films of a similar genre, but how many of these solutions are vulnerable to such an attack, or liable to a replay attack?
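The replay question above can be made concrete with a short sketch. This is illustrative only - the names, and the idea of a key shared between sensor and door controller at installation, are assumptions, not any real product's design. A raw capture sent over the wire verifies forever; binding each capture to a one-time challenge makes a sniffed response worthless on the next attempt.

```python
import hashlib
import hmac
import os

# SENSOR_KEY stands in for a secret provisioned into the sensor's
# secure chip and registered with the controller at installation.
SENSOR_KEY = os.urandom(32)

def sensor_response(capture: bytes, challenge: bytes) -> bytes:
    # The sensor binds the biometric encoding to the one-time challenge,
    # so the value on the wire is only valid for this attempt.
    return hmac.new(SENSOR_KEY, capture + challenge, hashlib.sha256).digest()

class DoorController:
    def __init__(self, enrolled: bytes):
        self.enrolled = enrolled
        self.challenge = os.urandom(16)

    def new_challenge(self) -> bytes:
        self.challenge = os.urandom(16)   # fresh nonce per attempt
        return self.challenge

    def verify(self, response: bytes) -> bool:
        expected = hmac.new(SENSOR_KEY, self.enrolled + self.challenge,
                            hashlib.sha256).digest()
        ok = hmac.compare_digest(expected, response)
        self.challenge = os.urandom(16)   # burn the nonce either way
        return ok

template = b"minutiae-encoding-of-enrolled-finger"
door = DoorController(template)

# Legitimate attempt: fresh challenge, fresh response - the door opens.
genuine = sensor_response(template, door.new_challenge())
assert door.verify(genuine)

# An attacker who sniffed `genuine` off the wire replays it later:
# the challenge has changed, so the replay fails.
door.new_challenge()
assert not door.verify(genuine)
```

A system that ships the raw capture (or a static encoding) over the wire has no equivalent of the nonce - which is exactly what makes the replay attack described above possible.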
       
  4. The locus-of-control problem (Jericho Forum Commandment #8 [1]);
    Biometric identity systems turn a variable ("maybe this person") into a binary ("IS this person"), according to some criteria undisclosed to the person taking the risk, and probably de-tuned to ensure minimal false positives.
    Thus, in a global identity ecosystem, if I get an "IS DEFINITELY Person" from a Kazakhstan bank's system, do I trust it? Probably not. The only solution is that you have to enrol your biometric with MY system, which I control and whose risk I therefore understand - back to the locus-of-control problem!
     
  5. The authentication ennoblement problem;
    Biometrics are often perceived as, or sold as, a more secure method of authentication, quite possibly because they are usually more expensive and difficult to implement.
    More worryingly, biometric authentication is usually implemented using a system that is in effect a black box, with the person taking the risk having no knowledge of how that binary “passed authentication” is actually arrived at; or even how many attempts it took, or the probability of the match.
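    The black-box point above can be sketched in a few lines. The class and threshold here are hypothetical, chosen purely to illustrate: internally the matcher holds a similarity score, a tunable threshold and an attempt counter, but all the relying party ever sees is the boolean.

```python
# Hypothetical matcher - not any vendor's algorithm. The score,
# threshold and attempt count all stay inside the box; only the
# boolean "passed authentication" ever comes out.
class BlackBoxMatcher:
    THRESHOLD = 0.80            # de-tuned low to minimise false rejects

    def __init__(self):
        self.attempts = 0       # invisible to the person taking the risk

    def authenticate(self, similarity_score: float) -> bool:
        self.attempts += 1
        return similarity_score >= self.THRESHOLD

matcher = BlackBoxMatcher()
assert matcher.authenticate(0.81)        # a marginal 0.81 match passes...
assert matcher.authenticate(0.99)        # ...indistinguishably from 0.99
assert not matcher.authenticate(0.40)
# The relying party cannot recover the scores, the threshold, or the
# fact that this took three attempts - it only ever saw three booleans.
```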
     
  6. The access-creep problem;
    Try this simple experiment: register your fingerprint as (say) #6 on your friend's or partner's phone (with their permission), then set them up for e-banking and get them to enable fingerprint authentication. Now simply access their e-banking app using your fingerprint!
    Yes, it's trivial, but it's access-creep: my partner was happy for me to unlock their phone (useful for me to look at traffic if they are driving, or to make a phone call), but they never understood this would also let me have full access to their bank account.
    The bank (or whoever) just sees fingerprint authentication as a "stronger" authentication method than a PIN and applies, possibly incorrectly, a higher level of trust. At least with PIN codes I can have a four-digit PIN for my phone unlock and an (enforced) six-digit - and thus different - PIN for my e-banking.
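    The access-creep can be shown in miniature. The function names below are made up, but the shape is real: the phone exposes a single fingerprint check shared by every app, so any enrolled finger clears every gate.

```python
# Hypothetical phone API - illustrative names only. One enrolment set,
# one check, no per-app scoping: that is the access-creep.
ENROLLED_FINGERS = {"owner_thumb", "partner_index"}  # finger #6, added "just to unlock"

def phone_fingerprint_ok(finger: str) -> bool:
    return finger in ENROLLED_FINGERS

def unlock_screen(finger: str) -> bool:
    return phone_fingerprint_ok(finger)

def open_banking_app(finger: str) -> bool:
    return phone_fingerprint_ok(finger)   # same gate as the lock screen!

assert unlock_screen("partner_index")     # intended: partner unlocks the phone
assert open_banking_app("partner_index")  # unintended: ...and the bank account
```

    Separate PINs give separate secrets per application; a shared fingerprint check, by construction, cannot.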
The conclusion we came to when looking at how a global identity ecosystem should be designed is that YOUR biometric should only ever be held in a dedicated device, under your control, which releases only a crypto-proof of "sameness".
Ultimately, biometrics will only be truly usable if the entity taking the risk is able to evaluate the complete risk-chain; from how the human authenticates to the device, and then all the attributes of the components in-between.
If you couple this with a device able to assert both "model" and "chain of custody" (and potentially "duress"), then you have a robust model: the entity to which you are asserting sameness can make a risk-based decision, as THEY understand the full risk-chain.
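The "crypto-proof of sameness" model can be sketched as follows. This is a minimal sketch under stated assumptions: the class names are invented, an exact-bytes comparison stands in for real biometric matching, and an HMAC over a key shared at enrolment stands in for a device-held signing key. The point is the data flow - the template never leaves the device, and the relying party verifies only a proof bound to its own nonce.

```python
import hashlib
import hmac
import os

class BiometricDevice:
    def __init__(self, template: bytes):
        self._template = template            # never leaves the device
        self.enrolment_key = os.urandom(32)  # shared with the relying party once

    def prove_sameness(self, capture: bytes, nonce: bytes):
        # Matching happens locally; on failure no proof is released at all.
        if not hmac.compare_digest(capture, self._template):
            return None
        return hmac.new(self.enrolment_key, nonce, hashlib.sha256).digest()

class RelyingParty:
    def __init__(self, enrolment_key: bytes):
        self.enrolment_key = enrolment_key   # holds a key, never a biometric

    def check(self, nonce: bytes, proof) -> bool:
        if proof is None:
            return False
        expected = hmac.new(self.enrolment_key, nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, proof)

device = BiometricDevice(b"template-held-only-on-device")
bank = RelyingParty(device.enrolment_key)

nonce = os.urandom(16)
assert bank.check(nonce, device.prove_sameness(b"template-held-only-on-device", nonce))
assert not bank.check(nonce, device.prove_sameness(b"someone-else", nonce))
```

A real design would use an asymmetric key pair (so the relying party holds only a public key), and would carry the "model" and "chain of custody" attributes alongside the proof so the risk-taker can evaluate the full chain.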


References:

  1. Jericho Forum Commandment #8 – “Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control”
    https://collaboration.opengroup.org/jericho/commandments_v1.2.pdf

Paul Simmonds, CEO Global Identity Foundation, December 2017