Account password transmission and PCI DSS compliance - ssl

I'm developing an Android app that must be PCI PA-DSS compliant. My question is about this requirement in the PA-DSS v3.1 document:
3.3.1 Use strong cryptography to render all payment application passwords unreadable during transmission.
Let's say I have a "change your password" feature in my app that transmits the user's account password over an SSL/TLS-encrypted connection to the server. Is this encryption sufficient to comply with the requirement, or do I need to implement some additional encryption before sending the password over SSL?
Thank you.

The PCI standard can be vague and a little open-ended at times, but in our experience, what you have is quite OK.
SSL/TLS IS the encryption; just use it for your password-change feature and you'll be fine.
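By way of illustration only (not formal PA-DSS guidance), here is a minimal Java sketch of what that looks like in practice; the endpoint URL, JSON shape and bearer token are hypothetical, and the point is simply that the password is only ever written to a TLS-protected connection:

import java.io.OutputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import javax.net.ssl.HttpsURLConnection;

public class ChangePasswordClient {
    public static int changePassword(String token, String newPassword) throws Exception {
        // Hypothetical HTTPS endpoint; the URL scheme is what makes the connection use TLS.
        URL url = new URL("https://api.example.com/account/password");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Bearer " + token);
        conn.setRequestProperty("Content-Type", "application/json");

        // Illustrative body; real code should build and escape the JSON properly.
        String body = "{\"newPassword\":\"" + newPassword + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));  // written to the TLS-encrypted stream
        }
        return conn.getResponseCode();  // e.g. 200 on success
    }
}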

Related

Does the SIP signalling part of a WebRTC call have to be encrypted with TLS?

If the signalling part of WebRTC is handled by SIP, is it mandatory that the SIP signalling be encrypted with TLS? Is it mandatory that SIP over WebSocket be encrypted with TLS?
WebRTC uses TLS sessions or QUIC for its signaling transport – both are encrypted by nature. Avenues for non-encrypted signaling don't really exist in WebRTC: theoretically they might work, but some browsers will either block them altogether or require the user to grant access to the camera and microphone on each interaction. Deploying anything serious to production without encrypted signaling is not a realistic option for the browser implementations.
WebRTC "forces" you to encrypt your signaling. What is left out of scope of WebRTC are things like authentication, authorization and identity management. You are free to do as you please in that domain – just make sure you do something there and don't leave it wide open for pranksters or worse.
For native applications on mobile, desktop or embedded – you can do whatever you like. That said, the mindset must be the same – mandatory encrypted signaling.
Reference: Best WebRTC-related blog
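In practice, if the signaling is SIP over WebSocket, "encrypted signaling" simply means connecting with a wss:// URI instead of ws://. Here is a hedged Java sketch using the JDK 11+ HTTP client; sip.example.com is a placeholder, and the "sip" subprotocol is the one used for SIP over WebSocket (RFC 7118):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class SecureSipSignaling {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        WebSocket ws = client.newWebSocketBuilder()
                .subprotocols("sip")  // SIP over WebSocket uses the "sip" subprotocol
                .buildAsync(URI.create("wss://sip.example.com/ws"), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        System.out.println("SIP message: " + data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                })
                .join();

        // Because the URI scheme is wss:// rather than ws://, the SIP signaling travels inside TLS.
        ws.sendText("\r\n\r\n", true).join();  // double-CRLF keep-alive, just to show sending on the channel
        ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
    }
}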

Is there any security issue to be expected when the MQTT client doesn't provide a public key certificate during the TLS handshake?

I am building a small IoT-like system in which MQTT devices (clients) send and receive security-critical information and commands.
I have learned that a TLS connection can optionally be established without client authentication via a public key certificate on the client side.
Normally, MQTT client devices don't have enough resources to support PKI: each device has to store a certificate and, from time to time, replace it with a newly issued one when it expires or when the original certificate has been revoked.
That is, I think, why many MQTT brokers have an option to turn client authentication during the TLS handshake on or off.
However, my concern is whether skipping the client authentication step introduces any security issue, for example the chance that some malicious device impersonating one of my devices could connect to the broker and obtain that critical information and those commands.
My question is what options and best practices I can adopt to minimize that kind of risk, given the constrained resources of the devices.
Missing client authentication means that everybody including an attacker can claim to be a valid client. There can be use cases like public services where this is not a problem and there are other use cases where the server wants to restrict access to specific known clients only.
There is no definitive answer to this question; it will always depend on the following factors, and only you as the designer can answer them:
What is the threat model you are working with? E.g. who are you trying to keep out of the system and why, and what are the consequences of somebody connecting a rogue client?
How much are you prepared to spend? If you intend to deploy a client certificate or even a unique username/password for each device (see the sketch below), how will it be protected? Does the hardware you intend to use support a secure enclave/hardware secret store? In other words, how hard would it be for an attacker to extract the client username/password or secret key from the device?
What other security measures do you have in place? Do you have Access Control Lists to protect which topics a client can publish/subscribe to? Do you have monitoring in place to detect malicious actions from clients so they can be disconnected and banned?
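As a hedged illustration of one common middle ground for constrained devices - server-authenticated TLS plus a per-device username/password, backed by broker-side ACLs - here is a sketch using the Eclipse Paho Java client; the broker address, client ID, credentials and topic are placeholders:

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class ConstrainedDeviceClient {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient(
                "ssl://broker.example.com:8883",  // TLS port; the broker's certificate is still verified
                "device-0042",                    // unique client ID per device
                new MemoryPersistence());

        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setUserName("device-0042");          // per-device credential instead of a client certificate
        opts.setPassword("unique-device-secret".toCharArray());
        opts.setCleanSession(true);

        client.connect(opts);
        // Broker-side ACLs should then restrict this identity to its own topics, e.g. devices/device-0042/#
        client.publish("devices/device-0042/status", "online".getBytes(), 1, false);
        client.disconnect();
    }
}

This does not make the secret unextractable from the device, so the questions above about secure storage, ACLs and monitoring still apply.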

Difference between HTTPS, SSL and PCI DSS compliance

Hello, can anybody explain this to me? I am always confused.
What is the difference between HTTPS, SSL and PCI compliance?
How does HTTPS work?
How does SSL work?
How does PCI work?
SSL
Definition: SSL (Secure Sockets Layer) is a security protocol commonly used in settings such as e-commerce, with certificate authorities like Verisign, to protect personally identifiable information during web transactions, as well as other sensitive data like credit card numbers and logins. SSL certificates generally need to be bought and installed on your web server.
More reading here: https://www.globalsign.com/en/ssl-information-center/what-is-an-ssl-certificate/
HTTPS
Definition: HTTPS is HTTP plus an SSL/TLS certificate on the server.
HTTPS (Hyper Text Transfer Protocol Secure) is the secure version of HTTP. The "S" means that all data sent between the browser and the server is encrypted.
Example: Google Searches
PCI
Definition: PCI DSS compliance refers to the Payment Card Industry Data Security Standard, the global data security standard for credit card payments. I agree that it doesn't make sense to "implement PCI like SSL". PCI compliance governs everything from the hardware (card reader or point of sale) to your payment gateway. It's much easier to go with a payment processor that is already PCI compliant, as adhering to the standards independently is probably not worth your time. Square has a basic guide here: https://squareup.com/guides/pci-compliance
SSL, which has since been replaced by Transport Layer Security (TLS), is basically a set of cryptographic protocols to ensure private communication from a client endpoint (e.g. a web browser) to a server. Apart from private communication, when properly implemented it also provides authentication of the server (the client verifies that it is communicating with the server it thinks it is communicating with) and, optionally via client certificates, authentication of the client (the server verifies that the client really is who it claims to be), as well as a degree of tamper resistance; these help prevent man-in-the-middle attacks and provide a partial defense against replay attacks.
HTTPS just means that you're using HTTP over TLS or SSL.
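To make "HTTP over TLS" concrete, here is a small Java sketch (the host is just a placeholder) that connects over HTTPS and prints the TLS parameters that were negotiated before any HTTP bytes were exchanged:

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class HttpsIsHttpOverTls {
    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://www.example.com/").openConnection();
        conn.connect();  // the TLS handshake happens here, before any HTTP request is sent

        System.out.println("HTTP status:  " + conn.getResponseCode());
        System.out.println("Cipher suite: " + conn.getCipherSuite());
        System.out.println("Server cert:  " + conn.getServerCertificates()[0].getType());
        conn.disconnect();
    }
}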
As I describe in my comments, PCI standards are very different from either SSL or HTTPS. The PCI standard is exactly that - a standard for data security, not a specific network or cryptographic protocol.
Here is a description of what PCI compliance means (from the PCI Compliance Guide FAQ):
The Payment Card Industry Data Security Standard (PCI DSS) is a set of
security standards designed to ensure that ALL companies that accept,
process, store or transmit credit card information maintain a secure
environment.
It's very important to note that there's a lot more to PCI compliance (and to software security in general) than just secure data exchange. In fact, the FAQ I link to above addresses that specifically; in response to the question "Am I PCI compliant if I have an SSL certificate?" they say the following:
No. SSL certificates do not secure a web server from malicious attacks
or intrusions. High assurance SSL certificates provide the first tier
of customer security and reassurance... but there are other steps to
achieve PCI compliance.
Some examples of other things you have to consider for data security:
Do you store passwords and other sensitive data properly on your server (e.g. salting and hashing them; see the sketch below)?
Do you have adequate network security (e.g. firewalls) in place? (Note that, as described in the book I link to below, even then you shouldn't assume that merely having a firewall is a complete defense against security problems).
Do you have adequate physical security in place? For example, how feasible would it be for someone to walk into your server room and gain access to the servers? Do you have to scan a badge to get in to the server room, and is access restricted to authorized employees?
Do you run code with least privilege?
Has code been reviewed and tested for common security bugs like buffer overruns and integer overflows?
There's an excellent book out there called 24 Deadly Sins of Software Security that describes common security bugs.
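As a hedged sketch of the password-storage point above, here is one way to do salted, slow password hashing with PBKDF2 from the Java standard library; the iteration count and output length are illustrative, not a recommendation:

import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordStorage {
    public static String hash(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);  // unique random salt per password

        PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec)
                                      .getEncoded();
        spec.clearPassword();

        // Store the salt and hash together; both are needed to verify a login later.
        return Base64.getEncoder().encodeToString(salt) + ":" +
               Base64.getEncoder().encodeToString(hash);
    }
}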

Authenticating a client to a server

I have a small device that contains a client program which communicates with a server over the internet. Pretty standard stuff.
I have a requirement that the server be able to authenticate messages coming from the device, meaning that all communications must come from the authentic client and not from some impostor. It's assumed that an attacker can reverse-engineer the client and also load his own programs onto the device.
I'm questioning whether this is even possible. I could certainly load a client certificate into the client, but an attacker could get to this and use it himself. The cost of the device must remain low, so no fancy hardware tricks. Any ideas on how I could do this?
Depending on the device and what kind of abuse you are talking about, you could use a scheme that needs some kind of activation, like entering a master key that is held in memory only, so it's lost if power is lost - a technique used on some crypto cards.
A way to counter stolen devices could involve some kind of key lease that needs renewal on a regular basis by supplying a secret.
A way to counter imitation or copying could be to maintain a shared state between the client and server that keeps changing, like negotiating new encryption keys regularly.
We use a similar approach with our apps and web services. We call it ApiValidation: in each request to the service, the client adds a header called ApiID that the server can decode to see whether the client is authorized.
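As a hedged sketch of that kind of scheme (the header name, secret and payload layout are made up here), the client can HMAC each request with a per-device shared secret and a timestamp, so the server verifies a signature rather than trusting a bare identifier. This raises the bar but, as discussed above, does not stop an attacker who manages to extract the secret from the device:

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ApiRequestSigner {
    public static String sign(String deviceId, String body, long timestamp, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        // Including a timestamp (or counter) in the signed data limits replay of captured requests.
        String toSign = deviceId + "\n" + timestamp + "\n" + body;
        byte[] sig = mac.doFinal(toSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sig);  // sent alongside the device ID, e.g. as an X-Api-Signature header
    }
}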

Does using SSL mean you have to say your app uses Cryptography?

I am almost ready to submit a Windows 8 Store app to the store. As part of this process you must answer the question:
Does your app call, support, contain, or use cryptography or encryption?
It goes on to mention these possibilities:
Any use of a digital signature, such as authentication or integrity checking
Encryption of any data or files that your app uses or accesses
Key management, certificate management, or anything that interacts with a public key infrastructure
Using a secure communication channel such as NTLM, Kerberos, Secure Sockets Layer (SSL), or Transport Layer Security (TLS)
Encrypting passwords or other forms of information security
Copy protection or digital rights management (DRM)
Antivirus protection
(emphasis mine.) There are some exemptions:
Password encryption
Copy protection
Authentication
Digital rights management
Using digital signatures
My app was originally a Windows Phone app with limited ability to store or export data locally, so we have functionality to backup to or restore from SkyDrive. (For the purposes of this question the fact that SkyDrive may soon change its name is not relevant.) We put this same capability into the Windows Store app. The connection to SkyDrive is https - in other words we are using SSL.
Does this mean I need an Export Commodity Classification Number (ECCN)? Really?
From this page, Understanding export restrictions on cryptography, it looks like the answer is yes, SSL counts unless you are not transporting content over the wire. But I'm not a lawyer.
Does your app call, support, contain, or use cryptography or encryption?
This question helps you determine if your app uses a type of cryptography that is governed by the Export Administration Regulations. The question includes the examples shown in the list here; but remember that this list doesn't include every possible application of cryptography.
Important: When you answer this question, consider not only the code you wrote for your app, but also all the software libraries, utilities and operating system components that your app includes or links to.
Any use of a digital signature, such as authentication or integrity checking
Encryption of any data or files that your app uses or accesses
Key management, certificate management, or anything that interacts with a public key infrastructure
Using a secure communication channel such as NTLM, Kerberos, Secure Sockets Layer (SSL), or Transport Layer Security (TLS)
Encrypting passwords or other forms of information security
Copy protection or digital rights management (DRM)
Antivirus protection
For the complete and current list of cryptographic applications, see EAR Controls for Items That Use Encryption.
Is the cryptography or encryption limited to one or more of the tasks listed here?
If you answered yes to the first question, then the second question lists some of the applications of cryptography that are not restricted. Here are the unrestricted tasks:
Password encryption
Copy protection
Authentication
Digital rights management
Using digital signatures
If your app calls, supports, contains, or uses cryptography or encryption for any task that is not in this list, then your answer to this question is No.