I am working on a webserver which uses rollbarApiKey and segmentApiKey to send data analytics and error logs to the relevant hosts.
My understanding is that I have to expose the API keys, which I am currently doing in a /deploy-config.js file. Is it possible to not expose them publicly, i.e. to use the keys to communicate with Rollbar and Segment without placing them in a public directory?
Thanks,
Whenever you don't want to expose keys, the solution is always the same: drive the interaction through your own API, and connect to the protected service from your server (a sketch of this follows below).
This will increase load on your system. That is the cost of protecting the keys.
There is no way to send keys to a client and then prevent the client from reading them.
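As a rough sketch of that idea, assuming an Express-based Node server: the keys live in server-side environment variables, the browser only ever calls your own /api/* routes (the route names here are made up for illustration), and the server forwards the data onward. The Rollbar and Segment endpoints shown are their public HTTP APIs as I recall them, so double-check them against the vendor docs.

```typescript
// server.ts - a minimal sketch, assuming Express and Node 18+ (global fetch).
// ROLLBAR_API_KEY and SEGMENT_API_KEY live in server-side env vars,
// never in a file served to the browser.
import express from "express";

const app = express();
app.use(express.json());

// Browser calls /api/error; the server forwards it to Rollbar with the secret key.
app.post("/api/error", async (req, res) => {
  const rollbarRes = await fetch("https://api.rollbar.com/api/1/item/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Rollbar-Access-Token": process.env.ROLLBAR_API_KEY ?? "",
    },
    body: JSON.stringify({ data: req.body }),
  });
  res.sendStatus(rollbarRes.ok ? 202 : 502);
});

// Same idea for analytics events destined for Segment.
app.post("/api/track", async (req, res) => {
  const segmentRes = await fetch("https://api.segment.io/v1/track", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Segment's HTTP API uses basic auth with the write key as the username.
      Authorization:
        "Basic " +
        Buffer.from(`${process.env.SEGMENT_API_KEY ?? ""}:`).toString("base64"),
    },
    body: JSON.stringify(req.body),
  });
  res.sendStatus(segmentRes.ok ? 202 : 502);
});

app.listen(3000);
```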
I am using the Wordnik API from the client side. To protect the API key, I want to allow only my website (domain) to send requests with that key. For example, in Firebase we can control which IP addresses or domains can send requests with a key. Is this possible with the Wordnik API?
Unfortunately, no, we don't offer a way to limit access by domain or IP. We encourage you to call Wordnik from the server side so that your key is not made public in the client.
I need to secure the communication between two backend servers. A simple API key was rejected by our security policy, since attackers would be able to intercept it.
IP restriction was also rejected, because addresses can be spoofed.
It was suggested that I use a nonce, but wouldn't that mean every request requires two round trips? I don't really like the idea of doubling the latency.
Without a description of your setup it's hard to suggest the best way to do it.
If both servers are inside the same datacenter, you can set up a private network between them.
If that's not the case, you can use an authentication system (OAuth?) with a token that is created and validated.
You can also sign your data with private and public keys.
The nonce could be good too.
But if your servers are in a datacenter, they should keep the same IPs and not move. So why not combine an IP whitelist with something else, like a nonce or a token, as sketched below?
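For the signing/nonce option, here is a minimal sketch using Node's built-in crypto: an HMAC over the body plus a timestamp and a sender-generated nonce, so replays can be rejected without a second round trip. The shared secret and the header names are assumptions for illustration; with a private/public key pair you would use crypto.sign/crypto.verify in the same shape.

```typescript
// sign-request.ts - sketch of HMAC-signing a server-to-server request.
// SHARED_SECRET is assumed to be provisioned out of band on both servers.
import { createHmac, randomUUID, timingSafeEqual } from "crypto";

const SHARED_SECRET = process.env.SHARED_SECRET ?? "";

// Caller side: sign the payload together with a timestamp and a random nonce.
export function signRequest(body: string) {
  const timestamp = Date.now().toString();
  const nonce = randomUUID();
  const signature = createHmac("sha256", SHARED_SECRET)
    .update(`${timestamp}.${nonce}.${body}`)
    .digest("hex");
  // Sent as headers, e.g. X-Timestamp, X-Nonce, X-Signature (names are illustrative).
  return { timestamp, nonce, signature };
}

// Receiver side: recompute the HMAC and compare in constant time; reject stale
// timestamps and already-seen nonces (the nonce store is omitted here).
export function verifyRequest(
  body: string,
  timestamp: string,
  nonce: string,
  signature: string,
  maxSkewMs = 5 * 60 * 1000
): boolean {
  if (Math.abs(Date.now() - Number(timestamp)) > maxSkewMs) return false;
  const expected = createHmac("sha256", SHARED_SECRET)
    .update(`${timestamp}.${nonce}.${body}`)
    .digest();
  const given = Buffer.from(signature, "hex");
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```

Because the nonce is generated by the sender and only checked for reuse by the receiver, there is no extra round trip, which addresses the latency concern.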
Here's my scenario:
I need to host a WCF web service app that will be consumed by multiple customers. Each customer is responsible for their own client app, and they will be building their client apps with different technologies. It's likely that none of their clients will be .NET (probably Java or something else).
I need to implement Message Level Security to abide by their policies (Transport security is not sufficient).
Given the above requirements, I am having a hard time understanding how to implement Message Security in WCF in a way that can be used by clients that I do not control. Everything I've read discusses the scenario where I would be building my own client, and where the client would even be in my network's domain.
If I implement Message Security with Certificate, can I install one certificate on my server and have each client be responsible for installing their own certificates on their servers? Would we then be able to use Message Security by simply sharing the Public Keys?
Basically, what you're saying here in your last paragraph is true. You'd give the subscribers of your WCF service the public key (.cer) file that they'd install and register within the LocalMachine/My store of their client machines.
On the host side, you'd install the cert public key in your LocalMachine/TrustedPeople store and the private key (.pfx or .pvk) in the host LocalMachine/Personal store.
You can vary the locations where you install/register the public and private keys a bit, but then you'd have to configure your WCF service to find those cert elements on your server. The clients would have to do the same.
This does work. I've done it.
You can automate some of this using a .bat file and the makecert.exe and certmgr.exe command-line tools to ensure everything gets installed in the correct places.
I am working on a HIPAA cloud project and am implementing a Key Store as a central repository for all of the key pairs for PHI (Protected Health Information) encryption. I am not worried about the actual data, because it will be encrypted at rest and in transit.
However, when a worker or web role needs to work with the data, it needs to decrypt and re-encrypt it (if it does updates). That's where the Key Store comes into play. I don't want this internal service exposed, and I also need it to use SSL, because sending keys in the clear, even inside a virtual network of roles, wouldn't pass a security audit.
So any suggestions on how I can get a web or worker role to use SSL with an internal endpoint?
Thanks
I don't think you can. Internal endpoints are on a closed network branch, so https would normally be redundant (although I understand your compliance issues). I found this answer (to my question) very useful in figuring out the security of internal endpoints: How secure are Windows Azure internal endpoints? - see the more detailed post that Brent links to.
I am implementing an app where I don't have a system requiring username and password. What I do require is a name and a phone number.
The scenario is like this:
1) user opens the app for the first time
2) app makes a request to my server and gets a unique UserKey
3) from now on, every request the app makes to my REST service also has a signature. The signature is actually a SHA(UserKey:the data provided in the request Base64Encoded)
4) The server also performs the same hash to check the signature (sketched below)
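A rough sketch of what steps 3 and 4 describe, assuming SHA-256 and treating the function names as illustrative only:

```typescript
// signing-sketch.ts - rough sketch of the scheme in steps 2-4.
// The app receives a UserKey once, then signs every request with
// SHA-256 over "UserKey:" + base64(request data).
import { createHash } from "crypto";

export function makeSignature(userKey: string, requestData: string): string {
  const encoded = Buffer.from(requestData).toString("base64");
  return createHash("sha256").update(`${userKey}:${encoded}`).digest("hex");
}

// The server, which also knows the UserKey, recomputes the hash and compares.
export function checkSignature(
  userKey: string,
  requestData: string,
  signature: string
): boolean {
  return makeSignature(userKey, requestData) === signature;
}
```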
Why I don't use SSL:
not willing to pay for the certificate
I don't need to send sensitive data like passwords, so I don't see the benefit of using it
I just need a simple way to call my own WCF REST services from own app
I understand that there is a security flaw at step 2, when the UserKey comes in cleartext, but this happens only once, when the app is first opened. How dangerous do you think this is?
What would you recommend? Is there any .NET library that could help me?
Actually, there are several problems with that approach. Suppose there's a man-in-the-middle whenever you make a request to the server. By analyzing, for example, 100 captured requests, he would recognize the recurring signature pattern in them. Then he could forge his own request and attach your signature. The server checks the hash - everything looks alright, it's you and your unique UserKey. But it's not.
There's a notion of asymmetric keys in cryptography, which is currently very popular and provides strong security. The main concept is the following: the server generates two keys - public and private; the public key is used to encrypt messages, and they can be decrypted only with the private key, which the server keeps in a secure location. So the server gives the client the public key to encrypt his messages. It can also be done in both directions: the client generates a key pair and gives his public key to the server; the server then generates its own keys and sends its public key back, encrypted with the client's public key. This way it's almost impossible for a man-in-the-middle to mount an attack.
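As a concrete illustration of that idea (a sketch only, using Node's built-in crypto; RSA is chosen here just as an example):

```typescript
// asymmetric-sketch.ts - sketch of the public/private key idea described above.
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "crypto";

// Server generates a key pair; the private key never leaves the server.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Client encrypts with the server's public key...
const ciphertext = publicEncrypt(publicKey, Buffer.from("hello server"));

// ...and only the holder of the private key can decrypt it.
const plaintext = privateDecrypt(privateKey, ciphertext);
console.log(plaintext.toString()); // "hello server"
```

In practice you would typically only encrypt a small symmetric key this way and use that for the bulk of the data, which is essentially what TLS does for you.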
Better yet, since the problem is really common, you could use OAuth to authorize users on your website. It is secure, widely used (Facebook, Google+, Twitter, you name it) and already has implementations in a variety of languages.
Since you control both the application itself and the webservices, you can do this with SSL (which gets rid of the problems with your current approach) without paying for anything. You can create a self-signed certificate and install that on your webserver; configure the SSL context of your client application to only trust that one certificate. Then, create a client-side self-signed certificate and install that within your application. Set the server up to require mutually-authenticated SSL and only allow your self-signed certificate for access.
Done. Your client will only talk to your legitimate server (so no one can spoof your server and trick the client into talking to them), and your server will only talk to your legitimate clients (so no one can steal information, IDs, etc.). And it's all protected with the strong cryptography used within SSL (see the sketch below).
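The question is about WCF, so this is not the WCF configuration itself, but here is a minimal sketch of the same mutually-authenticated TLS setup in Node/TypeScript with self-signed certs pinned on both sides (file names are placeholders; the certs are assumed to have been generated ahead of time, e.g. with openssl, and the server cert to include localhost in its subject/SAN):

```typescript
// mtls-sketch.ts - mutually-authenticated TLS with pinned self-signed certs.
import https from "https";
import { readFileSync } from "fs";

// Server: present its own self-signed cert and require a client cert,
// trusting only the one known client cert.
const server = https.createServer(
  {
    key: readFileSync("server.key"),
    cert: readFileSync("server.crt"),
    ca: [readFileSync("client.crt")], // trust only this client cert
    requestCert: true,
    rejectUnauthorized: true,
  },
  (req, res) => res.end("hello, authenticated client")
);
server.listen(8443);

// Client: present its own cert and trust only the server's self-signed cert.
const req = https.request(
  {
    host: "localhost",
    port: 8443,
    key: readFileSync("client.key"),
    cert: readFileSync("client.crt"),
    ca: [readFileSync("server.crt")], // trust only this server cert
  },
  (res) => res.pipe(process.stdout)
);
req.end();
```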