Setting the Azure SAS token expiry beyond 5 years and the risks involved - azure-iot-hub

My device connects to Azure IoT Hub over the MQTT protocol using SAS (shared access signature) tokens.
I would like to set the SAS token's expiry time to 5 years because the token is hardcoded into the device firmware. The device will connect to IoT Hub, and its messages will be routed to Azure Storage using a custom endpoint.
What will the risks be?
Because these devices are in remote locations, the SAS tokens and firmware cannot be updated frequently.

Unless you are running a highly sensitive use case that could be affected by another device impersonating its identity, it should not matter. The MQTT protocol by design does not allow two connections with the same identity (it disconnects the existing device if it gets another CONNECT request with that identity).
The key or token lives in the firmware, so extracting it requires the attacker to have physical access. In most cases, losing physical access to the device is worse than losing the key or token used to impersonate that device. If the key or token is stored in a secure element, it is more secure still.
If you need longer than 5 years, store the symmetric key instead and generate tokens as needed.
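A minimal sketch of that last option, following Azure's documented SAS token format (`sr`, `sig`, `se`, and optional `skn` fields); the hub and device names are illustrative:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri, device_key, expiry_epoch, policy_name=None):
    """Build an Azure IoT Hub SAS token by HMAC-SHA256-signing the
    URL-encoded resource URI and expiry with the base64-decoded key."""
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{encoded_uri}\n{expiry_epoch}".encode("utf-8")
    key = base64.b64decode(device_key)
    sig = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest())
    token = (
        "SharedAccessSignature "
        f"sr={encoded_uri}"
        f"&sig={urllib.parse.quote(sig, safe='')}"
        f"&se={expiry_epoch}"
    )
    if policy_name:  # device identities omit skn; hub-level policies include it
        token += f"&skn={policy_name}"
    return token


# Example: mint a token valid for one hour from now.
expiry = int(time.time()) + 3600
print(generate_sas_token("myhub.azure-devices.net/devices/device1",
                         base64.b64encode(b"dummy-device-key").decode(),
                         expiry))
```

Because the device holds the key rather than a fixed token, each token can be short-lived while the credential itself never needs a firmware update.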

Related

Sonos Authorization API - Client Credentials

Is there a maximum limit of client credentials my control integration can use?
I would like to control the Sonos devices via a small IoT device that has its own event callback URL, without using an external server.
Is this possible?
In the client credentials I can only configure one event callback URL.
I would build a tiny web service in the IoT device that receives event callbacks for status changes (volume changes / playback states / group states).
Every IoT device needs a different callback URL (and different client credentials).
Is this possible?
If this is possible, can we use a self-signed certificate for our IoT web service?
Is there a maximum limit of client credentials my control integration can use?
There's no maximum limit. But note that the API key and secret identify your application, not the customer, so you should not need more than one; you don't need to create a unique API key and secret for each customer. You may need to create multiple credentials if you wish to have one for your development app and one for your production app.
I would like to control the Sonos devices via a small IoT device that has its own event callback URL without using an external server. Is this possible?
We do not currently support a redirect URI to a local device or app (for example - a Redirect URI that opens an app on the device).

Multiple Gateways to handle production and sandbox requests separately

I need to configure the gateway in separate environments (production and sandbox) and I have a question:
https://docs.wso2.com/display/AM200/Maintaining+Separate+Production+and+Sandbox+Gateways#MaintainingSeparateProductionandSandboxGateways-MultipleGatewaystohandleproductionandsandboxrequestsseparately
In the store and publisher configuration I need to configure the <RevokeAPIURL>
In the document https://docs.wso2.com/display/CLUSTER44x/Clustering+API+Manager+2.0.0#ClusteringAPIManager2.0.0-ConfiguringtheAPIPublisher
<RevokeAPIURL>https://<IP of the Gateway>:8243/revoke</RevokeAPIURL>
Since I have separate production and sandbox gateways, which gateway address should I use in this configuration?
Thanks a lot.
<RevokeAPIURL> is used by the Store node to call the revoke and token APIs of the Gateway node when you (re)generate tokens (via the client credentials grant type) from the Store UI.
In this deployment pattern, however, there is a limitation: you have to pick one gateway node and configure it as <RevokeAPIURL> in the Store node's api-manager.xml.
For example, say you configured the production gateway there. When you generate keys (either production or sandbox) from the Store UI, it will call the production gateway's revoke and token APIs. Since both gateways point to the same Key Manager (or Key Manager cluster), token generation works without a problem.
The only downside is caching. When you regenerate sandbox keys from the Store UI, it calls the production gateway and clears the key cache of that gateway only, so the sandbox gateway's key cache won't be invalidated. You will therefore be able to call the sandbox gateway's APIs with the old, revoked token for about 15 more minutes, until the cache expires.
However, if you don't use the Store UI to generate keys (i.e., the client credentials grant type), you won't experience this limitation. That is usually the case in a typical production environment, where the password grant type is used and the gateway's token API is called directly.
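Concretely, the Store node's api-manager.xml would point at the gateway you picked; the hostname below is illustrative:

```xml
<!-- api-manager.xml on the Store node: pick ONE gateway (here, production) -->
<RevokeAPIURL>https://prod-gateway.example.com:8243/revoke</RevokeAPIURL>
```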

How to support secure creation of a user account over an external API via a queued job?

So this question is delving into security and encryption and the problem potentially hasn't been encountered by many. Answers may be theoretical. Let me outline the scenario...
A website frontend is driven via a backend API. The backend has an endpoint handling a generic registration form with username and password. It's using SSL.
The backend API handles registration via an async job queue. The queue does not return responses to the API server. It's a set and forget operation to queue up the registration.
Queued jobs are picked up by workers. The workers take care of creating the user account. These workers need access to the plaintext user password so that they can trigger a third-party API registration call with the password.
So the real crux of the problem is the syncing of the password to the third party API while not revealing it to prying eyes. The queue poses the problem of not having direct access to the plaintext password from global POST data anymore, meaning it needs to be stored in some fashion in the queue.
The queue can easily store the hashed password and copy it directly to the users table. This solution does not allow for syncing the password with the third-party API, however, as it's already hashed. I toyed with two-way encryption, but I am wholeheartedly concerned about leaving the password prone to decryption by an attacker.
Can anybody think of a secure way to handle this scenario of password syncing?
The queue is a requirement and it's assumed that this is readable by anyone with access to the server. The passwords don't necessarily have to be synced; the password for the third-party API could be a derivation of the original so long as there's a secure means to decrypt via the logged in user without supplying their password. This is essentially to simulate Single Sign-On with a third party API that does not support SSO.
There are a few ways to sync passwords:
1. Both auth stores use reversible encryption, so each system can extract the real values to send to the other system.
2. Both use the exact same encryption, so you can send the encrypted text through and it can be understood by both.
3. One system is the "master" through which users always authenticate, and the "slave" systems simply receive acknowledgement that the user has logged in. This can take the form of machine-generated passwords created by the master for use in account creation on the slaves.
4. One system is the "master" that all other systems call into for account validation, similar to using LDAP or MyOpenID.
There are certainly issues you can run into with multi-master password syncing, such as ensuring password changes are properly replicated when a user changes their password.
In your case, it sounds like the user never directly interfaces with the third-party API. If that's accurate, have users authenticate against your system. Generate the third-party API password when needed, store it with their account, and auto-log them into the other system as necessary. Your primary password can be stored with irreversible hashing; the third-party one, however, would have to use reversible encryption. The queue would never need the initial password; it would simply generate a new one and store it with the local account.
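A minimal sketch of that master/slave approach, assuming an illustrative queue payload shape (the function and field names are hypothetical):

```python
import hashlib
import os
import secrets


def hash_password(password, salt=None):
    """Irreversibly hash the user's real password for local storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest


def enqueue_registration(username, password):
    """Build the queue payload. The worker never sees the real password:
    it gets the local hash plus a freshly generated third-party password."""
    salt, digest = hash_password(password)
    return {
        "username": username,
        "local_salt": salt.hex(),
        "local_hash": digest.hex(),
        # Machine-generated credential for the third-party API. In a real
        # system this would be stored with reversible encryption so the
        # worker can replay it to the third-party registration call.
        "third_party_password": secrets.token_urlsafe(24),
    }


job = enqueue_registration("alice", "correct horse battery staple")
```

Anyone who can read the queue sees only the irreversible hash and a random credential that is unrelated to the password the user actually typed.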

Prevent sharing login credentials between users in WCF

I have a service hosted in a Worker Role in Azure. Clients connect over NetTcp bindings using certificates for mutual client/service authentication and with a custom username password validation.
Clients also receive event notifications that are broadcast through the Azure service bus using shared secret authentication.
I want this to be secure and not allow one person to share his/her login information with friends or anyone else; their login is for their use only. Similarly, a user who forgets to log off at one machine and then logs in to the service from another machine (i.e. tablet, work computer, etc.) should trigger an automatic shutdown of the application that was not logged off.
I am using a per-call service, and implementing a solution using sessions would require a lot of rewiring.
I figure I need to keep track of the user's context when they make an operation call and track which IPs are currently using that login/credential. I would like to have some kind of "death touch" whereby the service can send a kill command to a client when multiple logins are detected.
Any suggestions or pointers to patterns that deal with this issue would be appreciated.
Thanks.
Even if you did go with PerSession, you would still need to determine whether the same user was in more than one session, and you would have the overhead of sessions.
I have only tested this over WSHttpBinding, and not hosted in an Azure role, so please don't vote it down if it does not work on NetTcp in an Azure role; comment and I will delete it. Even with PerCall, the SessionID is durable and available on both the client and the server. More than one user could have the same IP address, but SessionID is unique to the session. Clearly you would need to record the userID and SessionID, but table storage is cheap.
Maybe update the license model for concurrent usage. By recording userID and SessionID you could write an algorithm to calculate maximum concurrent usage.
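The userID/SessionID bookkeeping can be sketched as a small registry (in-memory here for illustration; in the deployment above it would live in table storage and be consulted on each call):

```python
class SessionRegistry:
    """Track the active session per user; on a second login, report
    which session should receive the "kill" command."""

    def __init__(self):
        self._active = {}  # user_id -> session_id

    def login(self, user_id, session_id):
        """Register a login. Returns the stale session to terminate, if any."""
        previous = self._active.get(user_id)
        self._active[user_id] = session_id
        if previous is not None and previous != session_id:
            return previous  # same user, new session: kill the old one
        return None

    def logoff(self, user_id):
        """Clear the user's active session on a clean logoff."""
        self._active.pop(user_id, None)
```

On each operation call the service looks up the caller's userID; if `login` returns a stale SessionID, the service issues the kill command to that client.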

Don't want to store the secret Facebook/Twitter API key on mobile devices, design patterns?

I have an issue with letting my secret API key be all over the world on potentially thousands of mobile devices. It could easily be compromised and used for malicious purposes by a hacker.
So what are the options for me?
I would guess a private server which holds the secret API key, plus a web service that encapsulates all method calls. So instead of the mobile device having the secret key and doing something like:
List<Friend> friends = service.GetFriends(secretKey);
If my secret API key is compromised and is used for spamming/abuse purposes, I must shut down the use for all my users, leaving my application dead in the sea.
So my idea is that I can use the mobile device's unique device ID and do:
List<Friend> friends = myService.GetFriends(deviceID);
Of course, a malicious hacker could just call my web service with a fake device ID, but at least I now have the ability to blacklist device IDs. It also introduces a potential bandwidth issue, but that is of less concern.
A true PKI is probably out of the question, since the targeted device doesn't handle HTTP client certificates in its current version.
Any other good ideas?
You don't want to publish your Facebook or Twitter API key, even obfuscated within a client.
Your instincts are correct to proxy through a service you control. Limiting access to that service is up to you: how much risk is unauthorized use? A device ID is a pretty good clue to device/user identity, but it can be faked.
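The proxy pattern looks roughly like this sketch, where the key, the blacklist mechanism, and the stand-in platform call are all illustrative:

```python
PLATFORM_API_KEY = "server-side-secret"  # illustrative; never shipped to clients
blacklist = set()  # device IDs observed abusing the proxy


def call_platform(api_key, method):
    # Stand-in for the real Facebook/Twitter API call.
    return {"method": method, "authed": api_key == PLATFORM_API_KEY}


def get_friends(device_id):
    """Proxy endpoint: clients send only a device ID; the platform
    secret stays server-side and abusive devices can be blocked
    individually instead of rotating the key for everyone."""
    if device_id in blacklist:
        raise PermissionError("device blocked")
    return call_platform(PLATFORM_API_KEY, "GetFriends")
```

If one device ID starts spamming, you add it to the blacklist and every other user keeps working, which is exactly the failure mode the question is worried about.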
There are stronger authentication methods you could use (SMS auth, etc) to establish a long-term session or device key, however these are more complex and impose a higher burden on the end user.
TL;DR
Protect your platform API keys. Secure your own API enough to protect your needs.