IoT's dirty little secret

IoT devices have a dirty little secret: they tend to only work if you connect via the device's hub, generally a cloud system. Should that hub go down, or the company simply decide not to support it any more, or go bust, then all you have is a non-functional brick.

This was recently brought home to purchasers of IoT devices from Best Buy, whose Insignia 'smart' home gear became very dumb (https://www.theregister.co.uk/2019/11/05/best_buy_iot/), and more recently to pet owners: "Pets 'go hungry' after smart feeder goes offline" (https://www.bbc.com/news/technology-51628795).
If that device cost $20 and you got five years' use out of it, you may take the pragmatic view and simply buy the latest and greatest widget. But if you purchase a new car, and it's Internet-connected, it's effectively a very expensive IoT device. Before you collected it, the salesman told you to pre-install the app on your phone and create an account, and on collection you were walked through how to connect the app to the vehicle. Only you actually didn't: in effect you connected your app to the Volvo / BMW / Mercedes cloud service, and that service paired your account to the vehicle.

The problem is the same: should Volvo / BMW / Mercedes decide to discontinue support, or (however unlikely) go bust, then I've gone from having a smart vehicle to a dumb one! In essence I'm at their mercy, and the smarter these vehicles get, and the more we rely on those smart features, the bigger the problem becomes; to the point that, although buying a car may seem like good value, in effect you are just being allowed to borrow it.

The problem gets worse when you get into the home: connecting a set of disparate IoT devices requires your control centre (typically a smart speaker) to connect to its cloud service. Then, in turn, you tell that cloud service how to talk to each device, via the cloud services of each individual device manufacturer.
Firstly, all of those devices are communicating through your home router, opening up multiple avenues of attack for the bad guys. But secondly, WHY? Surely when I turn on the light, my intelligent light switch should talk directly to my intelligent light.

The challenge is this: when I buy a new IoT light bulb, how do I make it "my light bulb", or more realistically "my home's light bulb", such that my home's IoT-enabled light switch can control it directly (on the same network) and without needing to go out to a cloud service?

The Identity 3.0 concepts of "personas" and "context" allow you to do just that. The (digital) join between Entity:Human Myself and Entity:Device Volvo XC90 creates a unique persona for the vehicle, "My Volvo XC90", with a set of cryptographic keys that allow me to directly and securely connect to the vehicle.
In the house, the connection between Entity:Organization House and Entity:Human Myself gives me a persona as a member of that organization. In turn, the new IoT light bulb and IoT light switch are also enrolled with personas, making them the house's IoT devices. Now anyone can operate the switch and the light turns on (just as today), but as a member of "house" I can also use my voice or smart device to control that light, as sketched below.
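A minimal sketch of that flow (all names here are illustrative assumptions, not the Identity 3.0 specification), using Ed25519 keys from the pyca/cryptography package: enrolment mints a persona keypair scoped to the house, and the bulb verifies commands locally, with no cloud round-trip.

```python
import json, os, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

house_members = {}                        # persona name -> public key (the bulb's view)

def enrol(persona_name):
    """Join an entity to the house: mint a persona keypair and register
    the public half with every device in the house (here, the bulb)."""
    key = Ed25519PrivateKey.generate()
    house_members[persona_name] = key.public_key()
    return key

switch_key = enrol("house:light-switch")  # the IoT switch's house persona
my_key     = enrol("house:myself")        # my persona; a voice assistant could use this

def signed_command(sender, key, action):
    """The sender signs action + nonce + timestamp so commands cannot be replayed."""
    body = json.dumps({"from": sender, "action": action,
                       "nonce": os.urandom(8).hex(), "ts": time.time()}).encode()
    return body, key.sign(body)

def bulb_handle(body, signature):
    """The bulb checks the signature against enrolled house personas - locally,
    on the home network, with no manufacturer cloud service involved."""
    sender = json.loads(body)["from"]
    try:
        house_members[sender].verify(signature, body)
        print(f"light toggled by {sender}")
    except (KeyError, InvalidSignature):
        print("rejected: not a member of this house")

bulb_handle(*signed_command("house:light-switch", switch_key, "toggle"))
```

Even if the manufacturer disappears tomorrow, nothing in this loop depends on anyone's cloud service being alive.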

Not only is this more secure, it is more logical to set up and maintain; and, more importantly, it keeps working even when the manufacturer's cloud service goes offline, or the manufacturer goes bust!

See: https://www.globalidentityfoundation.org/downloads/Briefing_-_Infrastructure+IoT.pdf

The problem(s) with using biometrics

When talking to people about identity, at some point in the conversation someone usually says, "so the future is biometrics?" And my response is "maybe".....

So here are a few musings on the problems that biometrics face (if you'll pardon the pun).
  1. Biometrics are to do with authentication of an entity - NOT identity; authentication merely provides the gateway to whatever identity system you are using. I'm constantly amazed by the number of security and identity professionals who confuse / mix / interchange these two terms.
     
  2. Biometrics, if stolen, cannot be replaced. This is sort of true, but in reality you leave your fingerprints, face and even DNA everywhere. The real issue is a replay attack against devices that have your biometric registered: from the "gummy bear" attack against fingerprint sensors, to the dummy-head attack against the iPhone X.
     
  3. Biometrics cannot be revoked. If you are concerned that someone out there is spoofing your biometric information, you cannot toss it away and replace it as you would a password or a credit card. Yes, there are techniques like salting and one-way encryption that reduce the potential damage, but there will always be a poorly designed system with the potential for a leak of biometric credentials, ruining them for all other systems.
     
  4. If you rely on a device to validate biometrics, then you (as the relying party) must understand the actual model the entity is using, including:
    • the technology behind the biometric match and what exploits can be used against it
    • the threshold settings on a biometric match within the device / firmware / software
    • the match confidence, or how well the biometric passed validation
        
  5. As the end user (and the owner of the biometrics), HOW DO I KNOW where my biometrics are stored? When I register them, I have no actual idea what happens to them, and no (easy) way of validating the vendor's assurances of "it is secure and well designed".
    I hope that my fingerprint is stored only on my smartphone, AND in a non-reversible format, AND is not being shipped externally (even in backups). BUT I HAVE NO IDEA; for all I know my registered fingerprint could be stored and shipped externally as a plain image, and when I authenticate it could be manually verified by a bank of humans in a low-wage country.
     
  6. Biometrics on mobile devices are not the gold standard. Many app developers regard the move to biometrics, particularly fingerprints, as far superior to other authentication methods. Unfortunately the fingerprint API (on Android, for example) simply returns a binary "biometric authentication passed"; so on a smartphone where you have enrolled fingerprints from yourself, your partner, your best mate etc., any of those enrolled fingerprints will open the banking app. Yet the bank regards that authentication as the current "gold standard" and applies a higher level of "certainty" that it is the account owner using the smartphone! The sketch below contrasts what such an API reports with what the risk-taker actually needs.
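A minimal sketch of that information gap (the field names are entirely hypothetical; no vendor's real API is shown):

```python
from dataclasses import dataclass

@dataclass
class TodayResult:
    passed: bool          # all the app (and thus the bank) learns today

@dataclass
class NeededResult:
    passed: bool
    enrolled_id: str      # WHICH enrolled biometric matched (me, or my mate?)
    confidence: float     # how well the match passed validation (0.0 - 1.0)
    threshold: float      # the accept threshold the device was configured with
    sensor_model: str     # the hardware/firmware that performed the match
```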
So what SHOULD this look like?

For the owner of the biometrics: provable assurance that my biometrics are secure and exclusively under my control. This means (see the sketch after this list):

  • The only place my biometric should be stored is on a device under my exclusive control
  • That my biometric should not be directly used outside of said device and should only be released as a cryptographic assertion of "sameness"
  • That where a device is only partially under my control (say, a smartphone) then biometrics should only unlock a cached assertion of sameness.
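A minimal sketch of the second bullet, assuming hypothetical names, a stand-in matcher, and Ed25519 signing from the pyca/cryptography package (this is illustrative, not the Identity 3.0 specification): the template never leaves the device, and all a verifier ever receives is a signed, nonce-bound assertion of sameness.

```python
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class BiometricDevice:
    """A device exclusively under my control; a hypothetical design."""
    def __init__(self):
        self._template = None                     # raw biometric: never exported
        self._key = Ed25519PrivateKey.generate()  # device assertion key

    def enrol(self, sample):
        self._template = sample                   # stored on-device only

    def assert_sameness(self, sample, nonce):
        """Match locally; on success, sign a 'sameness' claim bound to the
        verifier's nonce. Neither sample nor template ever leaves here."""
        confidence = self._match(sample)
        if confidence < 0.95:                     # the device's own threshold
            return None
        claim = json.dumps({"claim": "sameness", "confidence": confidence,
                            "model": "example-sensor-v1",
                            "nonce": nonce, "ts": time.time()}).encode()
        return claim, self._key.sign(claim)

    def _match(self, sample):
        # Stand-in for a real fuzzy biometric matcher.
        return 1.0 if sample == self._template else 0.0
```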

For the receiver of the authentication / identity / attributes (usually the entity taking the majority of the risk in the transaction): if they are to make a good, risk-based decision, it is critical that they are able to fully understand how well the entity is connected to the digital infrastructure they are using, along the lines of the verification sketch below.
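Continuing the sketch above (again with assumed names), the receiver verifies the device's signature and then applies its own risk policy to the disclosed confidence, model and freshness, rather than trusting an opaque boolean:

```python
import json
from cryptography.exceptions import InvalidSignature

TRUSTED_MODELS = {"example-sensor-v1"}   # matchers we have actually assessed

def accept(claim, signature, device_public_key, expected_nonce,
           min_confidence=0.98):
    """device_public_key is obtained when the device is enrolled with us."""
    try:
        device_public_key.verify(signature, claim)   # raises if forged
    except InvalidSignature:
        return False
    fields = json.loads(claim)
    return (fields["nonce"] == expected_nonce        # fresh, not a replay
            and fields["confidence"] >= min_confidence
            and fields["model"] in TRUSTED_MODELS)   # OUR risk decision
```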

The Right to Privacy in the Digital Age?



In December 2013, the United Nations General Assembly adopted resolution 68/167[1], which expressed deep concern at the negative impact that surveillance and interception of communications may have on human rights. The General Assembly affirmed that the rights held by people offline must also be protected online, and it called upon all States to respect and protect the right to privacy in digital communication.

As the previous High Commissioner cautioned in past statements [September 2013 and February 2014], such surveillance threatens individual rights – including the rights to privacy and to freedom of expression and association – and inhibits the free functioning of a vibrant civil society[2].
 

Yet this week we have headlines that “Facebook encryption threatens public safety[3]” from the UK Home Secretary and her US and Australian counterparts.

Now, while I'm not Facebook's greatest fan (I won't install it on my smartphone), history tells me that the moment I hear politicians talk about encryption coupled with the words "paedophiles and terrorists" as their headline justification, I start to worry; it usually means there is little valid argument, and that they would like to trample on people's human rights on a wave of moral outrage!

Existing laws allow for orders for wiretaps of products like WhatsApp, and these can yield some data (IP addresses, phone numbers, contact lists, avatar photos etc.); and while you cannot get the encrypted messages and attachments, you can use this and other evidence to apply to a court and convince a judge that you have sufficient grounds for a warrant to arrest the suspect and seize their end-point device!

Having worked with the police in the 1990s to gather the solid evidence they needed to arrest one of our employees for accessing indecent images of children, I know first-hand that our existing laws were more than adequate to get an arrest warrant.

There are a number of root-cause problems here that have been rehashed over the many, many years I’ve been listening to this debate as it continually rears its head.

The first is the Phil Zimmermann[4] quote[5], "If privacy is outlawed, only outlaws will have privacy", which is often misquoted as "If encryption is outlawed, only outlaws will have encryption". This probably applies doubly to those terrorist organisations that are well funded enough to write their own encryption products, and even to use steganography[6] to hide them in plain sight.

The second is "our government wants a back door into your encryption". The problem here (especially for international tech companies) is: which government is entitled to have a back-door key? Because if the US demands it, then other countries will also demand it, usually as a condition of doing business in their jurisdiction; so it rapidly becomes "any legitimate government" – but legitimate does not equate to benign, or even non-repressive towards certain sections of its citizens.

The third is that international business needs to be able to ensure that its business communications are secure. I can remember the time in the 90s when France would not allow strong encryption of our corporate WAN links into the country – and this sort of thing can negatively affect business investment decisions if you cannot ensure the security of your business (physical or digital) in that country.

History is littered with failed and flawed attempts to put back doors into encryption. So I would recommend that any politician who actually wants to make this suggestion goes and talks to the (white-hat) hackers at Black Hat and DEF CON, or to those of us who have been implementing security in large corporates for many years. They will tell you that the encryption genie escaped the bottle a long time ago, and that the only people you will actually harm are the 99-plus per cent of citizens who are law-abiding, and the companies and organisations that need MORE strong encryption to protect us from the evils on the Internet.

In our work on identity we have identified the need not only for strong encryption, but for it to be open source and peer-reviewed, so that all parties can assure themselves that there are no back doors – this builds trust in both the identity and the wider digital ecosystem. This must then be coupled with 100% anonymity at the root of an entity's identity, which ensures privacy and delivers primacy and agency.

It may seem counterintuitive, but by doing this you end up with a more accountable, more trustworthy digital ecosystem.



The usability challenge

I've been in the information security game for 25 years, and the one thing I've learnt is that whereas I might care about security, privacy and identity, the average person can't be bothered. In fact it gets worse: today the craving for a frictionless user experience trumps almost everything else.

What brought this home to me was a presentation at this year's Usenix Enigma 2018 security conference in California, where Google software engineer Grzegorz Milka revealed (presentation link) that, as of today, less than 10 per cent of active Google accounts use two-step authentication to lock down their services.

This free two-step authentication service was introduced over seven years ago (September 2010), initially for Gmail accounts, but its take-up by Joe Public has been negligible.

Last summer I needed to contact Amazon after some fraudulent activity on my account; their only advice was to change my password (which I had already done, as well as deleting all credit cards from the account). When I asked about two-factor authentication, their support line denied it existed.

However, while checking my Amazon account settings in case anything had been changed, I stumbled across it – and guess what: it can leverage my already-existing Google two-step authentication service. [Your Account › Login & security › Advanced Security Settings, if you are interested – it also supports the Microsoft authenticator.]
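The reason one authenticator can serve Google, Amazon and Microsoft alike is that they all implement the same open standard, TOTP (RFC 6238). A minimal sketch of the algorithm (the secret shown is the standard test value from documentation examples, not a real one):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    """RFC 6238: HMAC-SHA1 over the current 30-second window counter,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The same secret (shared at enrolment, usually via a QR code) yields the
# same codes in Google Authenticator, Microsoft Authenticator, and so on.
print(totp("JBSWY3DPEHPK3PXP"))
```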

As we design Identity 3.0, the next generation of digital identity, the challenge has been "how do we make it simple?"

But I think, based on everything I have learnt to date, that we need to go significantly further than this if it's to be universally adopted, and add to our design criteria:

  1. How do we make it the simplest, most friction-less, option?
  2. How do we make security, privacy and primacy near-invisible?
  3. How do we make it the default?

Because only then will we get the other 90 per cent to adopt a security- and privacy-enhancing approach and start to beat the bad guys.

Paul Simmonds, CEO Global Identity Foundation, January 2018 

Who do you trust with YOUR biometrics?

The UK Government has been promising a biometrics strategy since 2012, but it has been repeatedly delayed and is now due to be published sometime in 2018.
The chairman of the Commons Science and Technology Committee, Norman Lamb, has written (see link below) expressing his disappointment at the government's position, and asking for more clarity on the delay:
"I remain concerned about how the Review will be implemented as well as uncertainty on the government's position on other important areas – DNA, fingerprints and so on."
www.parliament.uk/documents/commons-committees/science-technology/Correspondence/171219-Letter-to-Baroness-Williams-Biometrics-strategy.pdf
Biometrics are seen by many (alongside blockchain) as "the holy grail" of identity; but they are in fact a potential nightmare!
Some of the problems are as follows:
  1. Who do you trust with YOUR biometrics?
    We assume the fingerprint on our phone is secured "on device", never backed up, and that the NSA (or other spooks) never have access; but how does Joe Public know this, let alone verify it? At least the phone is a device that you (sort of) own and control.
    For everything else, the question you should be asking is: "where is my biometric held and processed?" It is usually impossible to find this out, let alone verify it – even if Joe Public knew or understood the nasty questions to ask.
     
  2. We've already seen the first breaches of biometric data:
    Biometric data stolen from corporate lunch rooms system
    https://www.theregister.co.uk/2017/07/10/malware_scum_snack_on_lunchroom_kiosks/

    UIDAI says Aadhaar system secure, refutes reports of biometric data breach
    http://smartinvestor.business-standard.com/pf/pfnews-479889-pfnewsdet-UIDAI_says_Aadhaar_system_secure_refutes_reports_of_biometric_data_breach.htm

    OPM Now Admits 5.6m Feds’ Fingerprints Were Stolen By Hackers
    https://www.wired.com/2015/09/opm-now-admits-5-6m-feds-fingerprints-stolen-hackers/

     
  3. Processing and replay attacks.
    When you place your finger on a sensor, or have your photo taken by (for example) a door-entry system, where is this processed?
    • Is it on a secure chip?
    • Is it passed as a raw data capture over the wire or is there some form of encoding performed?
    • Can what is passed over the wire be replayed?
    • Is it processed on a central PC or server controlling the doors, or possibly multiple doors?
    • Or (because it is faster/cheaper) processed on a cloud system?
    • Or maybe it goes off to an army of people in China or India (because people are better at facial-recognition matching) who get the two photos to compare and then click "match" or "no-match"?
    Bottom line: even if you knew to ask, there is no easy way of knowing how the system is architected, or how your data is being handled. Lifting fingerprints off a wine glass to gain entry is the stuff of "Mission: Impossible" and films of a similar genre, but how many of these solutions are vulnerable to such an attack, or liable to a replay attack? (The nonce-bound proof sketched at the end of this piece is one way to defeat replay.)
       
  4. The locus-of-control problem (Jericho Forum Commandment #8 [1]):
    Biometric identity systems turn a variable (MAYBE person) into a binary (IS person), according to criteria undisclosed to the person taking the risk, and probably de-tuned to ensure minimal false positives.
    Thus, in a global identity ecosystem, if I get an "IS DEFINITELY person" from a Kazakhstani bank's system, do I trust it? Probably not. Thus:
    The only solution is that you have to enrol your biometric with MY system, which I control and whose risk I therefore understand – and we are back to the locus-of-control problem!
     
  5. The authentication-ennoblement problem:
    Biometrics are often perceived as, or sold as, a more secure method of authentication; quite possibly because they are usually more expensive and difficult to implement.
    More worryingly, biometric authentication is usually implemented as what is in effect a black box, with the person taking the risk having no knowledge of how that binary "passed authentication" is actually arrived at, or even how many attempts it took, or the probability of the match.
     
  6. The access-creep problem:
    Try this simple experiment: register your fingerprint as (say) #6 on your friend's or partner's phone (with their permission), then set them up for e-banking and get them to enable fingerprint authentication. Now simply access their e-banking app using your fingerprint!
    Yes, it's trivial, but it's access-creep: my partner was happy for me to unlock their phone (useful for me to check the traffic if they are driving, or to make a phone call), but they never understood this would also give me full access to their bank account.
    The bank (or whoever) just sees fingerprint authentication as a "stronger" authentication method than a PIN and applies, possibly incorrectly, a higher level of trust. At least with PIN codes I can have a four-digit PIN for my phone unlock and an (enforced) six-digit, and thus different, PIN for my e-banking.
The conclusion we came to when looking at how a global identity ecosystem should be designed is that YOUR biometric should only ever be held in a dedicated device, under your control, which only releases a crypto-proof of "sameness".
Ultimately, biometrics will only be truly usable if the entity taking the risk is able to evaluate the complete risk-chain: from how the human authenticates to the device, through all the attributes of the components in between.
If you couple this with a device able to assert both "model" and "chain of custody" (and potentially "duress"), then you have a robust model by which the entity to whom you assert sameness can make a risk-based decision, because THEY understand the full risk-chain. A minimal sketch of such an assertion follows.
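A sketch of what such a crypto-proof might contain (the field names are hypothetical, with Ed25519 signing from the pyca/cryptography package; this illustrates the idea, not a specification):

```python
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # provisioned into the device at manufacture

def sameness_proof(nonce, confidence, duress=False):
    """Release a signed claim of sameness - never the biometric itself."""
    claim = json.dumps({
        "claim": "same entity as enrolled",
        "confidence": confidence,                        # how well the match passed
        "model": "example-reader-v2",                    # hardware/firmware doing the match
        "custody": ["fab:2017-03", "enrolled:2017-11"],  # chain of custody
        "duress": duress,                                # user signalled coercion
        "nonce": nonce,                                  # verifier's challenge: defeats replay
        "ts": time.time(),
    }).encode()
    return claim, device_key.sign(claim)
```

The risk-taker can then weigh every field against their own policy, because THEY now see the full chain.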


References:

  1. Jericho Forum Commandment #8 – “Authentication, authorization, and accountability must interoperate/exchange outside of your locus/area of control”
    https://collaboration.opengroup.org/jericho/commandments_v1.2.pdf

Paul Simmonds, CEO Global Identity Foundation, December 2017