Password Encryption: 3 approaches

On one side I have:
http://forums.enterprisedb.com/posts/list/2481.page
Here the field is declared as BYTEA, encryption happens at the database level, and the password can be decrypted again.
On the other side:
https://www.owasp.org/index.php/Hashing_Java
Here the field is a VARCHAR and we only compare hashes to authorize.
Finally, Spring offers http://static.springsource.org/spring-security/site/docs/3.1.x/apidocs/org/springframework/security/crypto/password/StandardPasswordEncoder.html plus a char secret value; is the same secret applied to every password?
Which is the best approach? (I lean towards Spring, since as I understand it, it encapsulates roughly the same logic as the OWASP example in a few lines of code.)

PostgreSQL encryption:
Your application will probably depend on PostgreSQL, and you may have to rewrite this part if you ever want to use another DBMS.
If PostgreSQL runs on another machine, you should consider some form of secure communication between the application and the DBMS, because the passwords are transferred between them in plain text.
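For reference, a minimal sketch of the database-level approach, assuming the pgcrypto extension's pgp_sym_encrypt/pgp_sym_decrypt functions and a made-up users table with a BYTEA password column (the linked forum thread may use a different pgcrypto function):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PgCryptoSketch {
    public static void main(String[] args) throws Exception {
        String key = "db-side-secret"; // symmetric key used by pgcrypto

        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://dbhost/appdb", "app", "dbpass")) {

            // Encryption happens inside PostgreSQL, so the raw password travels
            // to the server as a plain bind parameter; hence the advice above to
            // secure the connection between the application and the DBMS.
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO users (name, password) VALUES (?, pgp_sym_encrypt(?, ?))")) {
                ins.setString(1, "alice");
                ins.setString(2, "s3cret");
                ins.setString(3, key);
                ins.executeUpdate();
            }

            // Decryption is also done by the database, so the application can get
            // the original password back, unlike the hash-only approaches below.
            try (PreparedStatement sel = con.prepareStatement(
                    "SELECT pgp_sym_decrypt(password, ?) FROM users WHERE name = ?")) {
                sel.setString(1, key);
                sel.setString(2, "alice");
                try (ResultSet rs = sel.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("decrypted: " + rs.getString(1));
                    }
                }
            }
        }
    }
}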
OWASP vs Spring:
They are very similar.
Both use salt.
Spring uses a secret; OWASP does not.
Of course, you could modify the OWASP code to use a secret if you need one, or you can use StandardPasswordEncoder without a secret.
Spring's encode() returns a single string that also contains the salt (as is usual on Unix/Linux), while the OWASP example requires an additional database column for the salt value.
Spring is simpler and probably better maintained than the OWASP web article from 2008.
The OWASP example mixes responsibilities: it encodes/checks passwords and also contains a lot of JDBC code.
Spring just encodes/checks passwords; password storage is your responsibility. Perhaps your framework already does that for you, or you can write it yourself.
I'd use StandardPasswordEncoder. It's simpler and does the same as the OWASP code.
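To answer the question about the secret: yes, the secret passed to the constructor is the same for every password, while a random salt is generated on each encode() call and embedded in the returned string. A minimal sketch (class name and literal values are made up):

import org.springframework.security.crypto.password.StandardPasswordEncoder;

public class SpringEncoderSketch {
    public static void main(String[] args) {
        // The secret ("pepper") is the same for every password; a random salt
        // is generated per encode() call and embedded in the returned value.
        StandardPasswordEncoder encoder = new StandardPasswordEncoder("application-wide-secret");

        String stored = encoder.encode("s3cret");
        System.out.println(stored); // one hex string containing salt + digest, fits one VARCHAR column

        // Authentication: matches() extracts the salt from the stored value,
        // re-hashes the submitted password and compares the digests.
        System.out.println(encoder.matches("s3cret", stored)); // true
        System.out.println(encoder.matches("wrong", stored));  // false
    }
}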

Related

Method to prove authenticity of downloaded files in hindsight

I'm looking for a tool or method to prove the authenticity of resources downloaded from the web and stored locally. To be clear: I don't mean SHA or MD5 checksums to verify a downloaded file. What I need is a way to download and store a web resource in such a way that I can later prove that said resource indeed originated from that web server.
In particular for the following scenario: A website published an article about a client. He would like to sue for defamation of character. I need a way to store the article without them having the possibility of simply removing it and denying they ever published it. So preferably this would be a tool that is backed by publications making it credible in court.
I have thought about storing the TLS certificate, keys and the encrypted data. That would rely on the root CA, but I think that would in itself not be a problem. I could do this using a custom program and a library like OpenSSL, but I think this is such a common problem, there probably is a relatively standard tool for it. Also, I am not entirely sure to what extent this would constitute reliable evidence. And can someone point to publications that would back this method?
Maybe I am using the wrong search terms, but everything I find is about the aforementioned SHA or MD5 checksums. Any help is much appreciated.
If I understand correctly, you need something like a signature with a timestamp, yes?
You not only need a checksum of the document (article, text value, whatever) but also proof that the document really existed at a given time.
With digital signatures, you can store such a timestamp with a certified third-party provider. You sign the document and send the checksum to the provider. Later you can ask the provider to verify that this exact document is valid and was indeed created at the given time.
https://en.wikipedia.org/wiki/Trusted_timestamping
Since this can cost money (the provider charges a fee to store the timestamps), you can create checksums for many documents (say, all documents from one hour), store them all in a single file, create a checksum of that file, and have that signed with a timestamp. This way you create one timestamp for a batch of documents rather than one per document.
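A minimal sketch of the batching step in Java (file names are made up; the actual RFC 3161 request to the timestamping provider is not shown):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.List;

public class BatchChecksumSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical input: everything captured during one hour.
        List<Path> documents = List.of(Path.of("article-1.html"), Path.of("article-2.html"));

        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        StringBuilder manifest = new StringBuilder();

        // One manifest line per document: "<sha256-hex>  <file name>"
        for (Path doc : documents) {
            byte[] digest = sha256.digest(Files.readAllBytes(doc));
            manifest.append(HexFormat.of().formatHex(digest))
                    .append("  ").append(doc.getFileName()).append('\n');
        }
        Files.writeString(Path.of("manifest.txt"), manifest.toString());

        // This single checksum covers the whole batch; it is the value you
        // would send to the timestamping provider to be signed.
        byte[] batchDigest = sha256.digest(manifest.toString().getBytes(StandardCharsets.UTF_8));
        System.out.println("batch checksum: " + HexFormat.of().formatHex(batchDigest));
    }
}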

Angular 5/6 project query parameter encryption

I'm trying to encrypt query parameters in an Angular 5/6 project. We have some sensitive data in the URL which we may need to encrypt or hash so that an outside user cannot read it.
Is there a way to do that, and is it worth doing? For example, would it actually be safe, or would it have a big impact on performance?
I've seen some routing configured as /edit/:id/:name, but I'm confused as to whether it's really safe to expose the ID or other parameters in the URL.
As #jonrsharpe suggests, you can use an EventEmitter or a Subject in a shared service to pass the data as an object between components, so there is no need to hash or encrypt query parameters in routing.

Hide JKS keystore / truststore password when running Java process

I have a number of Java applications which connect to other applications and services via connections secured with SSL. During development, I can specify the keystore/truststore to use and the password by using the JVM args:
-Djavax.net.ssl.trustStore=certificate.jks
-Djavax.net.ssl.trustStorePassword=mypassword
-Djavax.net.ssl.keyStore=certificate.jks
-Djavax.net.ssl.keyStorePassword=mypassword
-Djavax.net.ssl.keyStoreType=jks
This works perfectly. However, there is a requirement to hide the password when going to production: with JVM args, anyone who looks at the process list can see the password in clear text.
Is there a simple way to get around this? I considered importing the certificates into the JRE's lib/security/cacerts file, but my understanding is that this will still require a password. One option would be to store the password, encrypted, in a file and then have the applications read and decrypt it on the fly, but this would mean changing and re-releasing all the applications (there are quite a few of them), so I would rather avoid it if at all possible. Does the javax.net.ssl library have any native built-in support for encrypted passwords (even something as simple as Base64 encoding, or anything that makes the passwords not clear text)?
Any suggestions much appreciated.
Firstly, you could consider hiding the ps output from other users; see these questions:
I don't want other users see my processes in ps aux. I have root. It's Debian. How to use grsec?
Hide processes from other users based on groups (under Linux)?
How to make a process invisible to other users?
Secondly, importing your certificates (assuming they include private keys) into lib/security/cacerts would be pointless: it's the default truststore, but not the default keystore (for which there is no default value).
Thirdly, you can never really "encrypt" the password that's going to be used by your application (in a non-interactive mode). It has to be used, so if it was encrypted, its encryption key would need to be made available in clear at some point. Hence, it's a bit pointless.
Base64 encoding, as you suggest, is just an encoding. Again, it's quite pointless, since anyone can decode it (e.g. with base64 -d).
Some tools, like Jetty, can store the password in an obfuscated mode, but that's not much more resistant than base 64 encoding. It's useful if someone is looking over your shoulder, but that's it.
You could adapt your application to read the passwords from a file (in plain text or obfuscated). You would certainly need to make sure this file isn't readable by unauthorised parties.
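If you do decide to adapt the applications, a minimal sketch of that approach (paths and property values are made up): read the password from a file that only the service account can read, then set the system properties programmatically before any SSL context is created, so nothing sensitive appears on the command line.

import java.nio.file.Files;
import java.nio.file.Path;

public class SslBootstrap {
    public static void main(String[] args) throws Exception {
        // Hypothetical file readable only by the service account (e.g. chmod 400).
        String password = Files.readString(Path.of("/etc/myapp/keystore.pass")).trim();

        // Must be set before the first SSL connection is made.
        System.setProperty("javax.net.ssl.trustStore", "certificate.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", password);
        System.setProperty("javax.net.ssl.keyStore", "certificate.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", password);
        System.setProperty("javax.net.ssl.keyStoreType", "jks");

        // ... hand over to the real application here. The password now lives only
        // in the protected file and in this process's memory, not in the argument
        // list that ps shows to other users.
    }
}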
What really matters is making sure the keystore file itself is protected from users who are not meant to read it. Its password is meant to protect the container in cases where it is readable by others, or when you want to protect access in interactive mode. Since you can't really avoid using the password in the clear on a machine in unattended mode, there is little point in having a difficult password; it's more important to protect the file itself. (It's not clear whether your applications are interactive or not, but I guess few users should be expected to type -Djavax.net.ssl....=... interactively.)
If you can't adapt your code to read from a file, change your keystore and key passwords to a password you don't mind disclosing, like "ABCD", and make sure you restrict read access to the keystore file: that's what really matters in the end. Reading the password itself from a secondary file merely postpones the problem by one step, since the password file and the keystore file are likely to be stored next to one another (and copied together by an unauthorised party).

Make login information secure in Visual Studio

In my program, I have a simple login prompt so that only certain users may enter the program, and so that the program behaves differently depending on the user. What I would like to do is have the user login information (username, password, etc.) securely stored without going through an online database. I know that using a plain text file to store this information is a very bad idea, and I'm sure there is an easier way than building an array of login information inside my program. Could you give me some suggestions on how to do this?
Hashes are what you need. Paste a hash function into your code; MD5 implementations are available online for all major platforms. Then store your pairs of hashes in your config file. Devise a clever way to combine a password with your admittance options into another hash so that the file is edit-proof. This way you can distribute the account configuration, and as long as you don't make a trivial cryptographic mistake, it will work just as you want.
Example of the config file line (hashes truncated to 6 chars for clarity):
1a2b3c print;search;evaluate 4d5e6f
Here, 1a2b3c is obtained as MD5(username.Text+verysecret), the verbs are the account's rights and 4d5e6f is obtained as MD5(line[1]+verysecret+password.Text) where line[1] is the split result of the config line where the verbs are stored and the rest is the user's password.
Note how the password gets automatically salted by the verbs and how the verbs are protected against editing because that would invalidate the password hash. The verysecret constant is something hidden in your executable code that will prevent anybody from computing the hashes and unlocking the program.
Hashing is not an asymmetric cipher or key pair; a motivated attacker can crack your program to bypass protection altogether anyway, so going to further lengths is useless.
If you are too cheap to use an asymmetric scheme, but cunning enough, you can change a few initialization constants in that MD5 function. This will make cracking your code harder, especially forging a counterfeit account file.
EDIT: When authenticating, don't just if(hashfromconfig == computedhash)... Script kiddies know how to hook into the string comparison function. Write if(MD5(hashfromconfig) == MD5(computedhash))... instead... Then the string comparison will work just as before, only it will not see your precious key hash that goes into a wannabe-counterfeit file. Ideally, have several versions of the MD5 function scattered across your code and named differently. Use if(foo(hashfromconfig) == bar(computedhash))... for a nice effect.
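A sketch of the scheme as described, shown here in Java for illustration (the hashing steps are the same in any .NET language; the names, verbs, and the verysecret constant are placeholders):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class ConfigLineAuthSketch {
    // Hidden somewhere in the executable, as suggested above.
    private static final String VERY_SECRET = "verysecret";

    static String md5(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        return HexFormat.of().formatHex(md.digest(input.getBytes(StandardCharsets.UTF_8)));
    }

    // Builds one config line: MD5(username+secret) <verbs> MD5(verbs+secret+password)
    static String buildLine(String username, String verbs, String password) throws Exception {
        return md5(username + VERY_SECRET) + " " + verbs + " "
                + md5(verbs + VERY_SECRET + password);
    }

    // Checks a submitted username/password against one stored line.
    static boolean authenticate(String line, String username, String password) throws Exception {
        String[] parts = line.split(" ");
        String userHash = md5(username + VERY_SECRET);
        String passHash = md5(parts[1] + VERY_SECRET + password);
        // Compare hashes of the hashes, so the real values never reach the
        // string-comparison routine directly (the trick from the EDIT above).
        return md5(parts[0]).equals(md5(userHash)) && md5(parts[2]).equals(md5(passHash));
    }

    public static void main(String[] args) throws Exception {
        String line = buildLine("alice", "print;search;evaluate", "s3cret");
        System.out.println(line);
        System.out.println(authenticate(line, "alice", "s3cret")); // true
        System.out.println(authenticate(line, "alice", "wrong"));  // false
    }
}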
"without going through an online database." - do you mean on the client side?
"securely stored" and "client side" are pretty much mutually exclusive terms in this scenario.
There is absolutely no way to securely store data without touching an online (server-side) source of some kind. If you are touching a server-side source, it might as well be a DB.

Do browsers alter their behaviour based on `Server` response header?

The application I am currently working on has the following line in its response headers:
Server: Microsoft-IIS/6.0
I feel embarrassed. I am thinking about writing an HTTP module to cloak this field. However, I am a little afraid that browsers use this value to alter some aspects of their HTTP implementation in order to achieve maximum performance. So what could go wrong?
No, the Server field is purely informational, it does not concern the browser at all.
The HTTP Protocol RFC 2616 does not specify any behavior associated with this field:
14.38 Server
The Server response-header field contains information about the software used by the origin server to handle the request. The field can contain multiple product tokens (section 3.8) and comments identifying the server and any significant subproducts. The product tokens are listed in order of their significance for identifying the application.
It does note though:
Revealing the specific software version of the server might allow the server machine to become more vulnerable to attacks against software that is known to contain security holes. Server implementors are encouraged to make this field a configurable option.
Well, in short: unless the browser is some sort of robot looking for this information (mostly for bad purposes, you can bet), there is nothing wrong with removing it.
The field is mainly useful for statistics; even some large web services that could simply drop it and save real money (actually, almost nothing for a company of that size) don't bother to do so.