Erik Romijn

Vulnerability in Apple portal compromised iOS keychain access groups

When writing web applications, we make use of the many features of browsers to improve the experience, like JavaScript validation. But convenient as they may be, we can never ever rely on these features when it comes to protecting the data of other users. The browser is entirely controlled by a potentially malicious user, which means that whatever the browser does, our systems must be designed so that no harm can come to any other user. The browser is not your friend.

A little while ago, I discovered a vulnerability in the Apple provisioning portal, where developers register App IDs and provisioning profiles. The portal made the mistaken assumption that it could rely on the browser. The impact was that any developer could submit an app that would be able to read the Keychain entries created by another app if the other app used Keychain access groups, a commonly used and widely recommended feature.

The end result was that any iOS developer could create an app that passes App Store validation and reads the secrets stored by specific third-party apps, such as Dropbox, PayPal or Google Authenticator. I first noticed that this vulnerability was fixed on October 10, 2014, over a year after my initial report to Apple.

What is Keychain and what are access groups?

Keychain is a system on iOS and Mac OS X that allows secure storage of sensitive data, like passwords. It lets developers store data like your Facebook password in a secure but simple way.

By default, on iOS, items are bound to the app that created them: other apps cannot see the items. However, in some cases it is desirable for a developer to create multiple apps that share Keychain entries. In that case, they can store them under a custom Keychain access group. Every app that is part of the group can read all items stored for the group.

How does Keychain know who can join an access group?

It’s important to restrict who can use which Keychain access group. We don’t want any arbitrary app on a device to be able to access all Keychain items: this would defeat the purpose of the Keychain. Therefore, only apps from the exact same developer are allowed to share items. For clarity, there is nothing wrong with using Keychain access groups, or with the way Dropbox, PayPal or Google used them. This method is documented and recommended by Apple, and I recommend it too.

The access control to shared items is enforced through App ID prefixes and provisioning profiles. Every iOS app has an App ID, determined by the developer. This is prefixed by a short random string, the App ID prefix, which cannot be freely chosen by the developer. Multiple apps by the same developer can have the same App ID prefix, but apps from different developers cannot. App IDs and App ID prefixes are not secret.

Once an App ID is registered, the developer can create a provisioning profile, and set the Keychain access group name in the entitlements file. The access group name must include the App ID prefix, and this prefix must match the App ID prefix stored in the provisioning profile. If the access group in the entitlements does not match the provisioning profile, access is denied. If they do match, the app can then read and write items in this group.
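The enforcement described above boils down to a prefix comparison. This is a hypothetical sketch of that rule in Python, not Apple's actual implementation; the prefix ABC123XYZ0 is made up for illustration:

```python
def access_group_allowed(entitlement_group: str, profile_app_id_prefix: str) -> bool:
    """Allow a Keychain access group only if it starts with the App ID
    prefix recorded in the provisioning profile (illustrative sketch)."""
    prefix, _, _ = entitlement_group.partition(".")
    return prefix == profile_app_id_prefix

# A profile carrying the (hypothetical) prefix ABC123XYZ0 may use its own group...
assert access_group_allowed("ABC123XYZ0.com.example.shared", "ABC123XYZ0")
# ...but not a group under another developer's prefix.
assert not access_group_allowed("8KM394JM3R.com.getdropbox.DropboxKeychainFamily", "ABC123XYZ0")
```

As long as Apple only issues provisioning profiles whose prefix belongs to the requesting account, this check keeps access groups separated per developer.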

In the end, I can make a Keychain access group for my own apps, but you can’t join my access group, because you cannot select the same App ID prefix as I have. Apple will not issue you a provisioning profile with my App ID prefix.

What was the vulnerability?

When a developer registers an App ID, they select the prefix from a dropdown. This lets them choose whether the prefix should be the same as one of their existing apps, allowing them to be in the same Keychain access group.

This dropdown is implemented with a select field. Normally, that does not allow the user to select an arbitrary value. However, as the user of the browser, it is possible to edit the available choices with a basic web inspector tool or some JavaScript. That’s why it’s essential to always validate the choice on the server side as well. Unfortunately, the server did not validate what had actually been submitted for this dropdown, allowing any developer to register an App ID with any prefix they desired.
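The missing server-side check could look like the sketch below. This is purely illustrative, assuming a hypothetical handler and made-up prefixes, not the portal's actual code:

```python
def register_app_id(account_prefixes: set, submitted_prefix: str, app_id: str) -> str:
    """Hypothetical server-side handler for the App ID registration form.

    Never trust the dropdown: the browser may submit any value, so the
    submitted prefix must be revalidated against the prefixes this
    account actually owns."""
    if submitted_prefix not in account_prefixes:
        raise ValueError("App ID prefix does not belong to this account")
    return submitted_prefix + "." + app_id

my_prefixes = {"ABC123XYZ0"}  # made-up prefix owned by this account

# A prefix from the account is accepted...
print(register_app_id(my_prefixes, "ABC123XYZ0", "com.example.myapp"))

# ...but another developer's prefix, injected via the web inspector, is rejected.
try:
    register_app_id(my_prefixes, "8KM394JM3R", "com.example.evil")
except ValueError as exc:
    print("rejected:", exc)
```

The vulnerable behaviour was equivalent to omitting the `if` statement: whatever the browser sent was accepted as-is.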

The result was that any iOS developer could create a valid app, which would pass App Store validation, that could access all Keychain entries belonging to a third-party app, if that app uses a Keychain access group. Examples of apps that do are Dropbox, PayPal and Google Authenticator, which all store very sensitive data.

Proof of concept with Dropbox

I built a small proof of concept, using the Keychain entries of the Dropbox app as my target. Note that this issue is not a vulnerability in Dropbox or its app: I am simply using it as an example.

First of all, I created a very basic iPhone app that looks for all Keychain entries it can find and dumps them on the screen. We can see in the (publicly available) app entitlements of the Dropbox app that it uses a Keychain access group:

$ codesign -d --entitlements - Payload/Dropbox.app
....
<key>keychain-access-groups</key>
<array>
    <string>8KM394JM3R.com.getdropbox.DropboxKeychainFamily</string>
</array>
....
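Entitlements are an ordinary property list, so the access group can also be extracted programmatically. A sketch using Python's standard plistlib module; the plist body is reconstructed here, wrapped as a minimal well-formed document, from the codesign output above:

```python
import plistlib

# The keychain-access-groups entry as seen in the codesign output,
# embedded in a minimal XML property list.
ENTITLEMENTS = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>keychain-access-groups</key>
    <array>
        <string>8KM394JM3R.com.getdropbox.DropboxKeychainFamily</string>
    </array>
</dict>
</plist>
"""

groups = plistlib.loads(ENTITLEMENTS)["keychain-access-groups"]
print(groups)  # ['8KM394JM3R.com.getdropbox.DropboxKeychainFamily']
```

The first component of the group name, 8KM394JM3R, is the App ID prefix the portal should never have let me register.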

Then, I registered an App ID with this same prefix:

The portal accepted my chosen App ID prefix. This is the core of the vulnerability: it should have checked whether that prefix belongs to my account and rejected it if not, but no such check was performed:

Then, I created my provisioning profiles, with the prefix that belongs to Dropbox:

I then set the same Keychain access group as Dropbox in my app:

I needed my custom ad-hoc provisioning profile to test this on my device. If I tried it with my normal certificate, it would complain that I’m not allowed to use a custom Keychain access group: that’s because the App ID prefix does not match, as explained previously:

Using the provisioning profile with the matching App ID prefix, the one Apple should not have given me, I ran the app on my device, which has Dropbox installed, and it shows the Keychain entry of the Dropbox app, including the authentication key under v_Data:

With the App Store provisioning profile I created, it would also pass App Store validation:

What else is affected?

In iOS 8 the effects of this issue are probably more serious. When developers build widgets, the only way to share data between the widget and the app is through App Groups, a mechanism similar to Keychain access groups. The arrival of widgets will probably increase the number of developers using these sharing features. I haven’t tested this in iOS 8, though.

With the issue now resolved, this can’t be exploited in new scenarios. However, others could have discovered this issue as well, and used it in the past to set up App IDs that may still provide them with access to other apps’ data today.

What can we learn?

Never ever trust the browser to verify anything for you when it comes to security-sensitive issues. The browser is not your friend. JavaScript validation or select inputs can be good for user experience, but you must always revalidate everything on the server side.

If you use Django, this is the default behaviour of ChoiceField: the same list that is used to populate the field is also used in validation. If the browser sends a value that is not in your server-side list, validation fails.
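Django's check lives in ChoiceField's validation, but the behaviour amounts to a simple membership test. A stand-alone sketch without Django, with made-up choice values, might look like this:

```python
# Stand-alone sketch of what Django's ChoiceField validation amounts to:
# the same choices that render the <select> are the only accepted values.
# The choice values below are invented for illustration.
CHOICES = [
    ("ABC123XYZ0", "Use existing prefix ABC123XYZ0"),
    ("new", "Generate a new App ID prefix"),
]

def clean_choice(submitted: str) -> str:
    """Accept the submitted value only if it is in the server-side list."""
    valid_values = {value for value, label in CHOICES}
    if submitted not in valid_values:
        raise ValueError(
            f"Select a valid choice. {submitted!r} is not one of the available choices."
        )
    return submitted
```

With this in place, a prefix injected through the web inspector, such as 8KM394JM3R, fails validation because it never appeared in the server-side choice list.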

Reporting and fix timeline

I can’t pinpoint the exact date this issue was fixed, but I noticed the fix on October 10, 2014, which was 13 months after my initial report:

  • 2013-09-01: I sent the initial report of the vulnerability.
  • 2013-09-04: Apple requested a proof of concept or other example file to reproduce the issue.
  • 2013-09-07: I replied with details of a complete walkthrough in a private blog post, along with a further clarification of the impact.
  • 2013-10-07: I sent a request for a status update.
  • 2013-10-09: Apple replied that they were unable to decrypt my message.
  • 2013-10-13: I resent my request for an update.
  • 2013-10-21: Apple replied that they were unable to decrypt my message.
  • 2013-10-21: I resent my request for an update in cleartext.
  • 2013-10-22: Apple replied that they were also unable to decrypt the original report. This is a bit curious, because I received a human reply on 2013-09-04 that did not hint at this.
  • 2013-10-30: I resent the full original report along with the later clarification, in cleartext.
  • 2013-11-05: Apple sent a manual confirmation, saying that the issue is being investigated.
  • 2013-12-14: I asked for an update, whether Apple has been able to reproduce the issue, and a timeline on when it might be resolved.
  • 2013-12-16: Apple asked me whether I would like to be given credit, and in which format.
  • 2013-12-17: I replied with details for the credit page, and asked whether this meant that the issue had been resolved.
  • 2014-01-13: I asked for an update, a possible timeline, and offered to provide any additional necessary information.
  • 2014-01-18: Apple responded: “We are testing a comprehensive fix for the issue you reported.” Apple specifically noted that this information was confidential.
  • 2014-03-14: I asked for a further update, and when the issue will be resolved, as a publication has been prepared.
  • 2014-03-22: Apple responded: “We have no new status to report at this time”.
  • 2014-09-18: I asked for an update, as the issue had been open for more than a year.
  • 2014-09-30: Apple responded: “We have no new status to report at this time”.
  • 2014-10-10: I discovered the vulnerability had since been fixed.