We are developing an application in react-native. Our security team has raised a vulnerability in the iOS binary. The description of the vulnerability is given below.
The application has references to potentially risky symbols that
modify the default SSL certificate validation.
The certificate validation behavior of iOS's TLS/SSL libraries (CFNetwork, Foundation, etc.) is intended to be secure by default. These
libraries validate a number of security items such as certificate
expiration dates and known root certificates.
This application was found to reference symbols that can be used to
modify the default validation behavior for TLS/SSL certificates. When
the default validation is modified, the potential exists to inadvertently weaken the security of the TLS/SSL protocol, thereby making the traffic more susceptible to interception or modification by
attackers. The binary contains references to methods that may result
in vulnerable SSL connections:
_kCFStreamSSLValidatesCertificateChain
We have not implemented SSL pinning in the app. We are not sure how to fix this issue, and the security team has not been able to provide any additional information. They use an automated tool that identified the issue and provides only the following recommendation, which we did not find very helpful in remediating the vulnerability.
Review the application's and third party library's code for the use of
these symbols to ensure they are not being used to weaken the TLS/SSL
security. Some of the symbols detected may be deprecated or involve
non-public APIs so a review of the code is encouraged for compliance
with Apple's App Store policies.
Please note that the presence of these symbols does not indicate
insecure usage as they may be used to check for or explicitly set
default values. These symbols may also be found in code designed to
perform non-standard certificate validation, such as custom SSL
pinning implementations.
Any help provided by the community in identifying and fixing this issue would be really appreciated.
Related
Suppose I have a mobile app which makes API calls to a server using HTTPS.
Would a malicious user be able to install Wireshark + Android emulator to inspect the API calls and by doing so get access to sensitive data like an API key?
I guess my question is whether Wireshark (or some other tool) can inspect the request before it gets encrypted.
If you control the client, then of course yes. Anything the client knows, its user may also know.
Without controlling the client, no, an external attacker cannot inspect or change https traffic unless they know the session keys. For that, they would typically use a fake certificate and make the client accept it (it won't do it by itself, and we are back at controlling the client).
Would a malicious user be able to install Wireshark + Android emulator to inspect the API calls and by doing so get access to sensitive data like an API key?
I guess my question is whether Wireshark (or some other tool) can inspect the request before it gets encrypted.
Yes, this is possible if the user controls the device on which they want to intercept the API calls.
In the blog post Steal that API Key with a Man in the Middle Attack I show how a proxy tool (MitmProxy) can be used to intercept and introspect the HTTPS calls:
While we can use advanced techniques, like JNI/NDK, to hide the API key in the mobile app code, it will not impede someone from performing a MitM attack in order to steal the API key. In fact, a MitM attack is easy to the point that it can even be achieved by non-developers.
In order to protect HTTPS calls from being intercepted, introspected, and modified, the solution is to use certificate pinning:
Pinning is the process of associating a host with their expected X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a pinset (taking from Jon Larimer and Kenny Root Google I/O talk). In this case, the advertised identity must match one of the elements in the pinset.
and you can learn how to implement it in the article Securing HTTPS with Certificate Pinning on Android:
In this article you have learned that certificate pinning is the act of associating a domain name with its expected X.509 certificate, and that this is necessary to protect trust-based assumptions in the certificate chain. Mistakenly issued or compromised certificates are a threat, and it is also necessary to protect the mobile app against their use in hostile environments like public Wi-Fi networks, or against DNS hijacking attacks.
You also learned that certificate pinning should be used any time you deal with Personally Identifiable Information or any other sensitive data; otherwise, the communication channel between the mobile app and the API server can be inspected, modified, or redirected by an attacker.
Finally, you learned how to prevent MitM attacks with the implementation of certificate pinning in an Android app that makes use of a network security config file for modern Android devices, and later by using the TrustKit package, which supports certificate pinning for both modern and old devices.
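For illustration only (the linked articles do it with Android's network security config and TrustKit), the same idea can be sketched in Go: the client below pins the SHA-256 hash of the server's public key (SPKI) and refuses the connection if no certificate in the verified chain matches. The host name and the pin value are placeholders, not real values.

    // Minimal sketch of public-key (SPKI) pinning in a Go HTTPS client.
    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "crypto/x509"
        "encoding/base64"
        "errors"
        "fmt"
        "log"
        "net/http"
    )

    // Base64-encoded SHA-256 of the expected SubjectPublicKeyInfo (placeholder value).
    const pinnedSPKIHash = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="

    func verifyPin(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
        // Runs after the normal chain validation; accept the connection only if
        // some certificate in a verified chain matches the pinned public key.
        for _, chain := range verifiedChains {
            for _, cert := range chain {
                sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
                if base64.StdEncoding.EncodeToString(sum[:]) == pinnedSPKIHash {
                    return nil
                }
            }
        }
        return errors.New("certificate pin mismatch: possible MitM")
    }

    func main() {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{VerifyPeerCertificate: verifyPin},
            },
        }
        resp, err := client.Get("https://api.example.com/resource") // placeholder URL
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }

On Android, the pin-set entries in the network security config and TrustKit achieve the same effect; the Go version is only meant to make the mechanism explicit.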
While certificate pinning raises the bar, it's still possible to intercept, introspect, and modify HTTPS traffic, because pinning can be bypassed, as I demonstrate in the article Bypassing Certificate Pinning:
In this article you will learn how to repackage a mobile app in order to make it trust custom SSL certificates. This will allow us to bypass certificate pinning.
Conclusion
While certificate pinning can be bypassed, I still strongly recommend its use, because it will protect the HTTPS communication channel between your mobile app and the API server in all the other scenarios, where it is not the user themselves trying to perform the man-in-the-middle attack:
In cryptography and computer security, a man-in-the-middle attack (MITM) is an attack where the attacker secretly relays and possibly alters the communications between two parties who believe they are directly communicating with each other. One example of a MITM attack is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. The attacker must be able to intercept all relevant messages passing between the two victims and inject new ones. This is straightforward in many circumstances; for example, an attacker within reception range of an unencrypted wireless access point (Wi-Fi[1][2]) could insert themselves as a man-in-the-middle.[3]
Going the extra mile?
OWASP Mobile Security Project - Top 10 risks
The OWASP Mobile Security Project is a centralized resource intended to give developers and security teams the resources they need to build and maintain secure mobile applications. Through the project, our goal is to classify mobile security risks and provide developmental controls to reduce their impact or likelihood of exploitation.
An HTTPS request is encrypted on your host (the client) before it is sent over the network, so it is not available to Wireshark. Wireshark can see the hostname of the HTTPS web server you connect to, but not the URL.
I'm using the ClientCAs and ClientAuth options in tls.Config to do cert-based client authentication in my Go HTTP application. Is it possible to also verify the client certs against a provided CRL? I see in the x509 package there are some functions around CRLs, but I'm not sure how to configure the HTTP server to use them (i.e., there don't seem to be any options in tls.Config that would cause a CRL to also be used).
Is it possible to also verify the client certs against a provided CRL?
Yes, it is possible, by means of the functionality provided in the crypto/x509 package (as you correctly stated in your question). However, higher-level interfaces such as crypto/tls.Config (consumed by net/http) do not offer that. A good place to implement a check against a CRL is probably by inspecting net/http.Request.TLS.PeerCertificates, as sketched below.
A little bit of background: crypto/tls is maintained by security expert Adam Langley, who has an opinion on revocation checking (the original source is his blog). Though I have no evidence, one might assume that this was a deliberate design decision.
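A rough sketch of that approach, assuming the server requires and verifies client certificates as in your question; the revokedSerials set is a placeholder that would in practice be filled from a parsed CRL (see the next answer for a tls.Config-level variant), and the file names are placeholders as well.

    // Check the already-verified client certificate inside the handler via
    // net/http.Request.TLS.PeerCertificates.
    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    // Serial numbers of revoked client certificates; in practice this set would
    // be built by parsing the CRL with crypto/x509.
    var revokedSerials = map[string]bool{}

    func handler(w http.ResponseWriter, r *http.Request) {
        if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
            http.Error(w, "client certificate required", http.StatusUnauthorized)
            return
        }
        leaf := r.TLS.PeerCertificates[0] // the client's leaf certificate
        if revokedSerials[leaf.SerialNumber.String()] {
            http.Error(w, "client certificate revoked", http.StatusForbidden)
            return
        }
        w.Write([]byte("hello, " + leaf.Subject.CommonName))
    }

    func main() {
        srv := &http.Server{
            Addr:    ":8443",
            Handler: http.HandlerFunc(handler),
            TLSConfig: &tls.Config{
                // ClientCAs would be populated as in the question.
                ClientAuth: tls.RequireAndVerifyClientCert,
            },
        }
        // Certificate and key file names are placeholders.
        log.Fatal(srv.ListenAndServeTLS("server.pem", "server.key"))
    }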
You can use VerifyPeerCertificate to validate connections against your CRL file. Simply iterate over your list of revoked certificates and check whether any revoked certificate's serial number is equal to the peer certificate's serial number.
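A minimal sketch of this, assuming a DER-encoded CRL file (a PEM file would need pem.Decode first) and Go 1.21+ for x509.ParseRevocationList with RevokedCertificateEntries; the file names and address are placeholders.

    // Reject client certificates whose serial numbers appear in a CRL file,
    // using tls.Config.VerifyPeerCertificate on top of ClientCAs/ClientAuth.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "errors"
        "log"
        "net/http"
        "os"
    )

    func main() {
        caPEM, err := os.ReadFile("ca.pem") // CA used to verify client certificates
        if err != nil {
            log.Fatal(err)
        }
        clientCAs := x509.NewCertPool()
        clientCAs.AppendCertsFromPEM(caPEM)

        crlDER, err := os.ReadFile("ca.crl") // assumed DER-encoded CRL
        if err != nil {
            log.Fatal(err)
        }
        crl, err := x509.ParseRevocationList(crlDER)
        if err != nil {
            log.Fatal(err)
        }
        // Index the revoked serial numbers for quick lookup.
        revoked := make(map[string]bool)
        for _, entry := range crl.RevokedCertificateEntries {
            revoked[entry.SerialNumber.String()] = true
        }

        tlsCfg := &tls.Config{
            ClientCAs:  clientCAs,
            ClientAuth: tls.RequireAndVerifyClientCert,
            // Runs after the standard chain validation; rawCerts[0] is the leaf.
            VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                leaf, err := x509.ParseCertificate(rawCerts[0])
                if err != nil {
                    return err
                }
                if revoked[leaf.SerialNumber.String()] {
                    return errors.New("client certificate has been revoked")
                }
                return nil
            },
        }

        srv := &http.Server{Addr: ":8443", TLSConfig: tlsCfg}
        log.Fatal(srv.ListenAndServeTLS("server.pem", "server.key"))
    }

Note that the CRL is read only once at startup in this sketch; in a real deployment it would need to be reloaded periodically (or replaced with OCSP checks).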
I am almost ready to submit a Windows 8 Store app to the store. As part of this process you must answer the question:
Does your app call, support, contain, or use cryptography or encryption?
It goes on to mention these possibilities:
Any use of a digital signature, such as authentication or integrity checking
Encryption of any data or files that your app uses or accesses
Key management, certificate management, or anything that interacts with a public key infrastructure
Using a secure communication channel such as NTLM, Kerberos, Secure Sockets Layer (SSL), or Transport Layer Security (TLS)
Encrypting passwords or other forms of information security
Copy protection or digital rights management (DRM)
Antivirus protection
(emphasis mine.) There are some exemptions:
Password encryption
Copy protection
Authentication
Digital rights management
Using digital signatures
My app was originally a Windows Phone app with limited ability to store or export data locally, so we have functionality to back up to or restore from SkyDrive. (For the purposes of this question the fact that SkyDrive may soon change its name is not relevant.) We put this same capability into the Windows Store app. The connection to SkyDrive is HTTPS - in other words, we are using SSL.
Does this mean I need an Export Commodity Classification Number (ECCN)? Really?
From this page, Understanding export restrictions on cryptography, it looks like the answer is yes, SSL counts unless you are not transporting content over the wire. But I'm not a lawyer.
Does your app call, support, contain, or use cryptography or encryption?
This question helps you determine if your app uses a type of cryptography that is governed by the Export Administration Regulations. The question includes the examples shown in the list here; but remember that this list doesn't include every possible application of cryptography.
Important When you answer this question, consider not only the code you wrote for your app, but also all the software libraries, utilities and operating system components that your app includes or links to.
Any use of a digital signature, such as authentication or integrity checking
Encryption of any data or files that your app uses or accesses
Key management, certificate management, or anything that interacts with a public key infrastructure
Using a secure communication channel such as NTLM, Kerberos, Secure Sockets Layer (SSL), or Transport Layer Security (TLS)
Encrypting passwords or other forms of information security
Copy protection or digital rights management (DRM)
Antivirus protection
For the complete and current list of cryptographic applications, see EAR Controls for Items That Use Encryption.
Is the cryptography or encryption limited to one or more of the tasks listed here?
If you answered yes to the first question, then the second question lists some of the applications of cryptography that are not restricted. Here are the unrestricted tasks:
Password encryption
Copy protection
Authentication
Digital rights management
Using digital signatures
If your app calls, supports, contains, or uses cryptography or encryption for any task that is not in this list, then your answer to this question is No.
I have an OAuth2 API exposed that runs over HTTPS. Since OAuth2 relies on the security of HTTPS (it doesn't do any of its own signing), I added a note in the developer docs encouraging developers to make sure they validate the SSL certificate in their client applications.
I noticed that some apps make the crt file publicly available or include it in their client: https://github.com/stripe/stripe-ruby/tree/master/lib/data
I assume this is just to make sure it is using the right certs (and not any system-installed ones)? If so, is it a good idea to make this crt file publicly available to developers on your API page, and what is an easy command/way to generate this file?
Thanks!
When one makes the certificate public this way, he encourages clients to do a binary comparison of certificates, i.e. to validate the certificate not in the way defined by the corresponding standards (by building a certificate chain and validating it), but simply by comparing the presented certificate with the one stored in the client.
This method is broken in several ways:
binary comparison doesn't let the client know that the certificate was revoked
with binary comparison, a change of the server certificate would require updating all clients so that the new certificate is included there; failure to upgrade would make it impossible to connect.
Consequently, inclusion of the certificate and "straightforward" use of such a certificate makes no sense, either for server owners or for clients.
The only case where binary comparison is applicable is when self-signed certificates are used (in which case building and validating a chain won't work). But self-signed certificates are a bad idea in any case (for the reasons listed above, among others).
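For concreteness, a binary comparison of the kind described above looks roughly like the following Go client sketch (file name and URL are placeholders); note that it inherits exactly the limitations listed above.

    // Accept the server only if its presented leaf certificate is byte-for-byte
    // identical to a bundled copy. No revocation awareness; breaks on rotation.
    package main

    import (
        "bytes"
        "crypto/tls"
        "crypto/x509"
        "encoding/pem"
        "errors"
        "log"
        "net/http"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("server.crt") // the published certificate file
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM data in server.crt")
        }
        expectedDER := block.Bytes

        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{
                    // Still runs after the normal chain validation.
                    VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
                        if !bytes.Equal(rawCerts[0], expectedDER) {
                            return errors.New("presented certificate does not match the bundled copy")
                        }
                        return nil
                    },
                },
            },
        }
        resp, err := client.Get("https://api.example.com/") // placeholder URL
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
    }

If the server certificate is ever reissued, this client stops connecting until it ships with the new file, which is exactly the upgrade problem described above.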
Can a particular web page in a web site authenticate a web request using a client-side SSL certificate, while other pages don't?
It's possible using SSL/TLS renegotiation. The way to configure it depends on the server you're using (and whether it supports it).
Note that, at the end of last year (October/November 2009), an SSL/TLS protocol flaw was discovered regarding this feature.
SSL/TLS stacks that support renegotiation based on code from before that discovery are vulnerable to the attack. Most libraries issued an emergency security update in which they disable renegotiation altogether (thereby also removing client-certificate renegotiation). In February 2010, RFC 5746 was published with a fix to this problem, but not all stacks implement it yet.
I believe the most common way to secure a single page is to put it in a subfolder that is SSL-secured.
This article may help:
http://www.leastprivilege.com/PartiallySSLSecuredWebAppsWithASPNET.aspx