I'm writing a program in Visual Studio using C#. My question is: after I publish the app's .exe, can a hacker see things in my code like the SQL user name, password, etc.? Is that possible? How can I prevent this at the development stage?
You can encrypt configuration information:
Part of securing an application involves ensuring that highly
sensitive information is not stored in a readable or easily decodable
format. Examples of sensitive information include user names,
passwords, connection strings, and encryption keys. Storing sensitive
information in a non-readable format improves the security of your
application by making it difficult for an attacker to gain access to
the sensitive information, even if an attacker gains access to the
file, database, or other storage location.
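In a .NET application, for example, the built-in protected-configuration API can encrypt the connectionStrings section in place; a minimal sketch using the DPAPI provider (run once, e.g. at install time):

```csharp
using System;
using System.Configuration; // requires a reference to System.Configuration.dll

class ProtectConfig
{
    static void Main()
    {
        // Open the configuration file that sits next to the published .exe.
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

        ConfigurationSection section = config.GetSection("connectionStrings");
        if (!section.SectionInformation.IsProtected)
        {
            // DPAPI provider: the encryption key is held by Windows on this machine.
            section.SectionInformation.ProtectSection(
                "DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }

        // ConfigurationManager.ConnectionStrings[...] keeps working as before;
        // the runtime decrypts the section transparently.
        Console.WriteLine("connectionStrings section is now stored encrypted.");
    }
}
```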
But all you are doing is making it harder to decode, not impossible.
You simply cannot hide a connection string.
You should connect to a service that authenticates the client; the service then connects to the database. The database should not even be publicly available. See WCF (Windows Communication Foundation).
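As a rough sketch of that pattern (contract and names are illustrative, not a complete WCF setup), the client assembly only ever sees the service contract, while the connection string stays on the server:

```csharp
using System.ServiceModel;

// Shared contract: this is all the client assembly ever contains.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string[] GetOrderNumbers(string customerId);
}

// Server-side implementation: the connection string lives only here,
// on a machine end users never touch.
public class OrderService : IOrderService
{
    public string[] GetOrderNumbers(string customerId)
    {
        // Authenticate/authorize the caller here, then query the database
        // using a connection string read from the server's protected config.
        return new string[0]; // placeholder result
    }
}
```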
Even encrypted files are not safe; hackers can try to access the main computers to clone the source files.
My previous question was closed so I will be more specific.
I need to create a desktop application written in C# that will ask for user credentials and, after verification, open a GUI allowing the user to work with the DB (a black box for users).
It should be usable from everywhere, not just the LAN or the SQL domain. I assume I would need to do the following:
Create a client and a server application that will deal with authentication. That would mean a lot of socket work.
Once the user is verified, the client's queries would be sent to the database (client -> server -> DB).
The server would need to send the DB data sets back to the client.
As you can see, this is just my guess, but I have no idea whether it's too complicated or completely wrong. The main thing is that it must be a desktop app (not a web-based one) and accessible from everywhere.
I am interested in the main points of how to design such a system and would be extremely grateful for any advice.
You can use a certificate server for authentication, like Apache's mod_ssl.
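If the stack is WCF rather than Apache, the equivalent certificate-based authentication might be wired up roughly like this (a sketch only; the contract, addresses, and certificate subjects are assumptions):

```csharp
using System.ServiceModel;
using System.Security.Cryptography.X509Certificates;

[ServiceContract]
public interface IRemoteDbService
{
    [OperationContract]
    string Ping();
}

class CertAuthClient
{
    static void Main()
    {
        // Message security with client certificates instead of passwords.
        var binding = new WSHttpBinding(SecurityMode.Message);
        binding.Security.Message.ClientCredentialType =
            MessageCredentialType.Certificate;

        var factory = new ChannelFactory<IRemoteDbService>(
            binding,
            new EndpointAddress("http://server.example.com/RemoteDbService"));

        // The client proves its identity with a certificate from its store.
        factory.Credentials.ClientCertificate.SetCertificate(
            StoreLocation.CurrentUser, StoreName.My,
            X509FindType.FindBySubjectName, "client.example.com");

        IRemoteDbService proxy = factory.CreateChannel();
        // ... call proxy methods; WCF performs mutual authentication.
    }
}
```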
I have Active Directory and several client computers joined to the domain.
On the client computers I have installed WCF clients.
On the server, the WCF service is hosted in IIS.
I use message security with Windows credentials.
Everything is working fine.
But I have heard that there are programs that can extract passwords from Windows (put a live CD in the drive and restart the PC).
Someone could then use the user name and password to access the WCF service from elsewhere and do damage.
Is this true, and what steps can I take to be safer?
Regards
Shiraz' advice is all valid for local (not AD) Windows accounts, but I believe the threat you're raising isn't related to the SAM-stored local Windows passwords, since you're talking about an Active Directory setup with Windows systems joined to the domain.
Presumably the message security/Windows credentials setup only allows access to the user accounts you've set up in Active Directory. [All discussion here assumes we're talking about those AD accounts, not local accounts on each Windows client.]
Assuming you've only allowed access to the WCF service for AD accounts, then the WCF service is only practically vulnerable to attackers who can retrieve (or guess) the plaintext password. Since you raised the spectre of live CD attacks, I'll further assume you're only worried about attacks on the Windows clients and not on the AD domain controller (whose physical security is presumably much stronger than the physical protections of the Windows clients).
So the threat you're raising is the possibility that an attacker could somehow find the user's AD password somewhere on the hard drive of the Windows client (or an easily-broken equivalent of their password). That is not the kind of attack for which the Live CDs are generally useful - as Shiraz indicated, they're good for digging up the password hashes out of the local SAM and helping to brute-force try many password combinations (or compare them to a local or online "rainbow table" that contains a ton of pre-calculated password values). Some of these tools also scan through local caches of such passwords, such as older browsers that saved your password for web site authentication - though modern browsers pretty much all have avoided those plaintext backdoors now.
The main cache of a user's AD password on a Windows client is the "cached domain credentials" (which allows you to logon with your domain password even if you're not connected to the network). This isn't stored as just a simple hash of your AD password - instead, it's doubly-hashed and encrypted with the local SYSKEY, making it an order of magnitude more time-consuming to try to brute force. A reasonably long or strong (or both) AD password makes brute-force attacks pretty much infeasible except for very dedicated attackers (like espionage, governments, etc.) So your most effective tool to make sure this is infeasible is to set a reasonable password policy - complex characters and a decent minimum length is fine; non-complex but very long passwords (aka passphrases) are also worthwhile.
Other caches of the password might exist, but that's dependent entirely on whether your users are using really crappy applications - there are fewer and fewer such applications on the market today, but never say never.
This will depend on the version of Windows and how up to date it is. Previously there was a problem where you could boot the PC from a Linux CD, then run a program that brute-forced the SAM file containing the login information.
But the chances of this doing any damage are very small:
It requires physical access to the machine
It does not work with strong passwords
It would require access to your office or that your service is open to the internet
To counter these threats:
Require strong passwords
Encrypt harddisks
Block access to the service from the internet
Protect your offices
I am faced with a WCF security scenario that isn't particularly well documented online.
I am developing a product licensing service in WCF that will be deployed along with our software (i.e. the service is running on the same PC as the client). This licensing service will be responsible for a number of things related to controlling use of our software and connecting to our remote licensing server for updates, revocations etc. Consequently it's not the kind of service I want spoofed, and I don't really want spoof clients communicating with it either.
As it's running on the same PC as the client can anyone suggest a security policy for this scenario? I'm particularly interested in authentication as most of the other security principles are straightforward. I'm reluctant to get into certificates if I can help it but as mutual authentication is a priority I'm beginning to think I may need to implement a custom 'challenge/verify' scheme between the service and client.
Any ideas? Thanks for reading.
Chris.
My suggestion is that no matter how much effort you put into that, there will be an attack vector that makes all of your effort null and void. One option is to use ILMerge to provide a single dll for your entire application, and store it encrypted on disk and create a loader that hits your service passing in the registration information. On your side, the service will validate the customer information and send back a decryption key. The loader would use the decryption key to decrypt the DLL in memory and load it dynamically.
The shortcoming of this approach is that a determined cracker could debug your application and when the DLL is decrypted, write the unencrypted stream to disk. Your only means of retribution would be to place some kind of marker on the DLL so that you can identify who was responsible for breaking your copy protection and bring legal action if it's found open on the Internet.
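A bare-bones version of that loader idea might look like the following (purely illustrative: the key exchange with the licensing service is stubbed out, and the AES parameters are assumptions):

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Security.Cryptography;

class Loader
{
    static void Main(string[] args)
    {
        // 1. Send the registration info to the licensing service and get a
        //    decryption key back (stubbed out here).
        byte[] key = GetDecryptionKeyFromService("registration-info");
        byte[] iv = new byte[16]; // assume the IV ships alongside the blob

        // 2. Decrypt the ILMerged application DLL in memory only.
        byte[] encrypted = File.ReadAllBytes("app.dll.enc");
        byte[] plain;
        using (var aes = Aes.Create())
        using (var decryptor = aes.CreateDecryptor(key, iv))
        {
            plain = decryptor.TransformFinalBlock(encrypted, 0, encrypted.Length);
        }

        // 3. Load the assembly from the decrypted bytes and run it; the
        //    plaintext DLL never hits the disk.
        Assembly app = Assembly.Load(plain);
        app.EntryPoint.Invoke(null, new object[] { args });
    }

    static byte[] GetDecryptionKeyFromService(string registrationInfo)
    {
        // In reality: a WCF/HTTPS call that validates the customer first.
        throw new NotImplementedException();
    }
}
```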
As long as you're deploying this software to the client, then you cannot store any kind of key inside it without risking compromise. Even if you use certificates, you cannot hide them from the client while still making them visible to your application. And if you embed the key in the assembly itself then someone will just pop it open using Reflector.
Assuming you don't care about outright cracking (i.e. patching the assembly's code to simply bypass the license checks), then there's one and only one correct way to implement this type of security and that is to mimic the way a PKI works, by using a remote server exclusively.
In a PKI, when a server needs to validate a client via a certificate, it checks that certificate against the certificate authority's CRL. If the CRL reports that the certificate is revoked then it refuses access. If the CRL cannot be contacted then the certificate is considered invalid.
If you want to implement this scenario then you need 3 logical services but not in your current configuration. What you need is a remote licensing server, a client, and an application server. The application server can, theoretically, reside on the client, but the key aspect of this app server is that it performs license checks against the remote licensing service and handles all of the important application logic. That way, "spoofing" the server becomes an almost impossible task because a casual cracker would have to reverse-engineer the entire application in the process.
This is significantly less safe than making the application server a remote server, and may not offer many advantages over simply embedding remote security checks in the client itself and scrapping the local app/licensing server completely. But if you are determined to take this 3-tier approach then the aforementioned architecture would be the way to go.
Again, this is assuming that you aren't worried about "direct" cracking. If you are, then you'll have to read up on techniques specific to that particular attack vector, and understand that none of them are foolproof; they can only slow an attacker down, never stop him completely.
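For what it's worth, the CRL-style "fail closed" check described above could be sketched like this (the endpoint is hypothetical; the point is that an unreachable or negative licensing server means no access):

```csharp
using System;
using System.Net;

static class LicenseCheck
{
    // Returns true only if the remote licensing server positively confirms
    // the key. Any failure (timeout, error, revocation) counts as invalid,
    // just as a certificate is rejected when the CRL cannot be contacted.
    public static bool IsLicenseValid(string licenseKey)
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(
                "https://licensing.example.com/check?key=" +
                Uri.EscapeDataString(licenseKey));
            request.Timeout = 5000;

            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode == HttpStatusCode.OK;
            }
        }
        catch (WebException)
        {
            return false; // fail closed
        }
    }
}
```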
Reason: We have a new client who wishes for the database containing all their info to be stored on their own personal database server. However, the web server will be located elsewhere.
Question: How can you secure the data from the time it is input until the time the external database saves it?
From some reading, it seems that SSL only covers so much, and that some sort of secure connection must be set up between the two. Or does SSL cover this connection as well? It somewhat seems that it should.
SSL provides a reasonable solution to transport security (keeping the data safe from prying eyes as it goes over the wire).
Lock down the endpoints between the two systems as far as practical. For example, in addition to encryption, our firewall blocks access to the database from everywhere except well-known IP addresses.
You still need to ensure that your web server is secure (since the data is available unencrypted there), and that their database server is secure (including encryption of sensitive data when stored in database tables).
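For SQL Server specifically, the transport-encryption part can be requested straight from the connection string; a sketch with placeholder server and credential values:

```csharp
using System.Data.SqlClient;

class SecureConnectionExample
{
    static void Main()
    {
        // Encrypt=True forces SSL/TLS on the wire between the web server and
        // the DB; TrustServerCertificate=False makes the client verify the DB
        // server's certificate instead of accepting any certificate.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "db.client-site.example.com", // placeholder
            InitialCatalog = "ClientDb",
            IntegratedSecurity = false,
            UserID = "webapp",
            Password = "********", // comes from protected config, not source
            Encrypt = true,
            TrustServerCertificate = false
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        {
            connection.Open();
            // ... queries now travel encrypted over the wire
        }
    }
}
```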
For those in the know, what recommendations do you have for storing passwords in the Windows Azure configuration file (which is accessed via RoleManager)? It's important that:
1) Developers should be able to connect to all production databases while testing on their own local box, which means using the same configuration file,
2) Since developers need the same configuration file (or a very similar one) as what is deployed, the passwords should not be legible.
I understand that even if passwords in the configuration were not legible Developers can still debug/watch to grab the connection strings, and while this is not desirable it is at least acceptable. What is not acceptable is people being able to read these files and grab connection strings (or other locations that require passwords).
Best recommendations?
Thanks,
Aaron
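One possible way to keep a shared configuration file non-legible is to store the passwords RSA-encrypted with a certificate installed on both the dev boxes and the deployed roles; a hedged sketch (the certificate subject and helper are hypothetical, and as noted above a debugger can still see the decrypted value):

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;

static class ConfigSecrets
{
    // Decrypt a base64, RSA-encrypted password stored in the config file.
    // "CN=MyAppConfig" is a hypothetical certificate subject.
    public static string DecryptPassword(string base64Cipher)
    {
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);
        try
        {
            var cert = store.Certificates.Find(
                X509FindType.FindBySubjectDistinguishedName,
                "CN=MyAppConfig", validOnly: false)[0];

            // Only machines holding the private key can recover the password.
            var rsa = (RSACryptoServiceProvider)cert.PrivateKey;
            byte[] plain = rsa.Decrypt(Convert.FromBase64String(base64Cipher), true);
            return Encoding.UTF8.GetString(plain);
        }
        finally
        {
            store.Close();
        }
    }
}
```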
Hum, devs are not supposed to have access to production databases in the first place. That's inherently insecure, no matter whether it's on Azure or somewhere else. Performing live debugging against a production database is a risky business, as a simple mistake is likely to trash your whole production environment. Instead, I would suggest duplicating the production data (perhaps as an overnight process) and letting the devs work against a non-prod copy.
I think it could be partially solved by a kind of credentials storage service.
I mean a kind of service that does not need passwords, but allows access only to machines and SSPI-authenticated users that are white-listed.
This service could be a simple Web API hosted on an SSLed server, built on principles like these:
1) Secured pieces have a kind of ACL: an IP whitelist, a machine-name-based or certificate-based whitelist per named resource, or a mix of these.
2) All changes to the stored data are made only via RDP or SSH access to the server hosting the service.
3) The secured pieces of information are accessed only via SSL, and the API is read-only.
4) The client must pre-confirm its own permissions and obtain a temporary token with a call to an API like
https://s.product.com/
5) The client must provide a certificate, and the machine identity must match the logical whitelist data for the resource on each call.
6) Requesting the data looks like this:
Url: https://s.product.com/resource-name
Header: X-Ticket: the value obtained at step 4, until it expires
Certificate: the same certificate as used for step 4
So, instead of a username and password, it is possible to store an alias for such a secured resource in the connection string; in code, the alias is replaced by the real username and password, obtained in step 6, inside a SQL connection factory. The alias can be specified as the username in a special format like obscured#s.product.com/product1/dev/resource-name
Dev and prod instances can have different credential aliases, like product1/dev/resource1 and product1/staging/resource1 and so on.
So, only by debugging the prod server, sniffing its traffic, or embedding logging/emailing code at compilation time would it be possible to learn the production credentials for the actual secured resource.
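A client-side sketch of steps 4 and 6 above (the endpoint and header come from the scheme described; the helper and its usage are assumptions):

```csharp
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

static class CredentialBroker
{
    // Resolves an alias like "product1/dev/resource-name" to the real
    // credentials by calling the SSLed credentials service described above.
    public static string GetSecret(string alias, string ticket,
                                   X509Certificate2 clientCertificate)
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "https://s.product.com/" + alias);
        request.Headers["X-Ticket"] = ticket;               // token from step 4
        request.ClientCertificates.Add(clientCertificate);  // same cert as step 4

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd(); // the read-only API returns the secret
        }
    }
}
```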