I am looking for advice and best practices for encrypting parts of logs, or whole logs, in various formats (say JSON or CSV). I have zero experience with encryption.
(Disclaimer: I am using NLog, so if you have experience with that library it would be great to get insight, but I am also asking about general ideas/practices/processes.)
I want to encrypt and then decrypt parts of my logs, for example the stack trace, user name and other minor things, while keeping other parts of the logs untouched (they could be encrypted, but there is no need). How do I do that? Should I encrypt parts of my logs, whole logs, or whole files?
How common is it to encrypt logs? How should it be done: using specific parts of a library to do the encryption, or simply encrypting a string before passing it to the logger? How do you manage decrypting the logs afterwards? Without encryption I can simply import my logs from JSON or CSV into Excel and do everything there, but when parts of them are encrypted it complicates the process.
How do you organize the decryption process? Where/how do you decrypt? Is encrypting/decrypting a resource-intensive process that might affect the performance of a logging library?
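To make the "encrypting a string before passing it to the logger" idea concrete, here is roughly what I have in mind (a TypeScript/Node sketch only, since my real code is C#/NLog; the key handling is a placeholder, not a vetted design):

```typescript
// Sketch: encrypt only the sensitive fields, then log the entry as usual.
// AES-256-GCM via Node's built-in crypto; key management is hand-waved here.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

const key = process.env.LOG_KEY_HEX
  ? Buffer.from(process.env.LOG_KEY_HEX, "hex") // 32 bytes for AES-256
  : randomBytes(32);                            // demo fallback only

function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // fresh nonce per value
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Pack iv + auth tag + ciphertext so the value stays a single loggable string.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

function decryptField(encoded: string): string {
  const raw = Buffer.from(encoded, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, raw.subarray(0, 12));
  decipher.setAuthTag(raw.subarray(12, 28));
  return Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]).toString("utf8");
}

// The entry stays readable except for the protected fields.
console.log(JSON.stringify({
  event: "login-failed",
  user: encryptField("jsmith"),
  time: new Date().toISOString(),
}));
```

Decrypting would then mean running something like decryptField over those columns after importing the logs, which is exactly the part I am unsure how to organize.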
These questions are rather basic, but I am a novice programmer and I have not found answers on SO or other sites. Encrypting logs seems like an "odd thing".
Many thanks for any help.
Related
I am creating a Cocoa-based Core Data application. I would like to protect the SQLite database and prevent it from being read outside the application. How?
You could use cipher algorithms to encrypt the database and decrypt it when your app uses it. CommonCrypto or Security Transforms may be your choice. Take a look at the Cryptographic Services Guide in the Apple developer docs.
The needed credentials could be stored securely in the OS X keychain.
So the user could decrypt the database at each app start/login and encrypt it again on leaving (or something similar).
Another way could be to hardcode the credentials (maybe not a good idea; it depends on the security standard your app has to meet) and do the en-/decryption on the fly per read/write into the database, so that the database itself is not encrypted but the records in it are. That could be more fault-tolerant if your app crashes.
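As a sketch of that on-the-fly idea (TypeScript stand-in for illustration only; a Cocoa app would do the same with CommonCrypto): the store only ever holds ciphertext, and a thin wrapper encrypts on write and decrypts on read, so a crash leaves nothing readable at rest.

```typescript
// Records encrypted, database not: a wrapper that encrypts per write
// and decrypts per read. The Map stands in for the SQLite table.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

class EncryptedStore {
  private rows = new Map<string, Buffer>();

  constructor(private key: Buffer) {} // 32-byte key, e.g. fetched from the keychain

  write(id: string, value: string): void {
    const iv = randomBytes(12);
    const c = createCipheriv("aes-256-gcm", this.key, iv);
    const ct = Buffer.concat([c.update(value, "utf8"), c.final()]);
    this.rows.set(id, Buffer.concat([iv, c.getAuthTag(), ct]));
  }

  read(id: string): string | undefined {
    const raw = this.rows.get(id);
    if (!raw) return undefined;
    const d = createDecipheriv("aes-256-gcm", this.key, raw.subarray(0, 12));
    d.setAuthTag(raw.subarray(12, 28));
    return Buffer.concat([d.update(raw.subarray(28)), d.final()]).toString("utf8");
  }
}

const store = new EncryptedStore(randomBytes(32));
store.write("user:1", '{"name":"Alice"}');
console.log(store.read("user:1")); // {"name":"Alice"}
```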
So there is no "right" way to do the task; it depends on what you want to achieve and how secure the data has to be.
But whatever you do, don't save any credentials in NSUserDefaults; that is absolutely insecure.
That would be like having a secured chest with the key lying right on top of it.
For the iOS side there is the iMas encrypted-core-data project on GitHub. It might help you on Cocoa, too.
The aim of the project is to:
Provides a Core Data store that encrypts all data that is persisted. Besides the initial setup, the usage is exactly the same as Core Data and can be used in existing projects that use Core Data.
Under the hood it uses SQLCipher and translates the Core Data calls into SQL. So you get encrypted storage but can keep using Core Data syntax for access; no need to know SQL.
The project looks rather promising. It is definitely worth a look.
So I have been working on a client/server application written in Java. At the moment I am looking for a way to verify that the code of the client application has not been changed and then recompiled. I've been searching Google for some time without much success. An educated guess would be to generate a hash value of the client's code during runtime, send it to the server and compare it with a database entry or a variable. However, I am not sure if that is the right way, or even how to generate a hash of the codebase during execution in a secure way. Any suggestions would be greatly appreciated.
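For concreteness, the "educated guess" would look something like this (a TypeScript stand-in for the Java client, where you would hash the JAR instead; names are mine):

```typescript
// Naive idea: hash the client's own artifact at startup and report the
// digest to the server for comparison against a stored value.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function codebaseDigest(artifactPath: string): string {
  const bytes = readFileSync(artifactPath); // the client JAR, in the Java case
  return createHash("sha256").update(bytes).digest("hex");
}

// Hash the running script itself and report it.
console.log(codebaseDigest(process.argv[1]));
```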
What would stop the nefarious user from simply having the client send the correct checksum to the server each time? Nothing.
There is currently no way to completely ensure that a client computer is not running altered software. It's simply not possible to trust the client's software without asserting control over their hardware and software. Unfortunately, this is a situation where you should focus on software features and quality, something that benefits all users, rather than on preventing a few users from hacking your software.
I am open to suggestions on the following:
have a file on S3
this file will be randomly downloaded by random people
the volume of downloads is low: maybe 200-300 per day at most, on a spike, but usually as low as 5-10.
file size is ~10-20 MB.
I need to count somehow a) how many accesses to the file happened and b) how many full (completed) downloads happened.
I believe the only good way is to have some Ruby or Node.js script. It would count accesses, then somehow serve the file, and on the final byte record the completed count (sketched below).
Unfortunately, that doesn't seem like too nice of an approach.
Any better ideas?
I was also thinking about enabling access logging on S3 and then parsing the logs, but that doesn't seem too good either, as it requires downloading and parsing the logs.
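For reference, here is the counting-proxy idea reduced to a sketch (plain Node/TypeScript, no framework; the file path and port are made up). An access is counted on every request; a download only counts as completed if the last byte was flushed to the client:

```typescript
import { createServer } from "node:http";
import { createReadStream } from "node:fs";
import { pipeline } from "node:stream";

const FILE = "./payload.bin"; // local copy of the S3 object
let accesses = 0;
let completed = 0;

createServer((req, res) => {
  if (req.url !== "/download") {
    res.statusCode = 404;
    res.end();
    return;
  }
  accesses++;
  res.writeHead(200, { "Content-Type": "application/octet-stream" });
  // pipeline() reports an error if the client disconnects mid-stream,
  // so the success branch means the whole file went out.
  pipeline(createReadStream(FILE), res, (err) => {
    if (!err) completed++;
    console.log(`accesses=${accesses} completed=${completed}`);
  });
}).listen(8080);
```

The counters would of course need persisting somewhere real, but that is the shape of it.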
I would stick with your first idea: having some sort of server-side logic handle the counting.
I don't know which type of clients are accessing your system, but with this approach you can parse additional data coming from your clients, like HTTP headers (if applicable), and that can help you identify the profile of your clients. That might not be useful to you at all, though.
Also, if you ever need to add more complicated logic (authentication, privileges, permissions, uploading files, etc), it will be much easier once you already have a backend application/script in place.
I am building a full-featured web application. Naturally, you can save to the local datastore when you are in 'offline' mode. I want to be able to sync across devices, so people can work on one machine, save, then get on another machine and load their stuff.
The questions are:
1) Is it a bad idea to store JSON on the server? Why parse the JSON on the server into model objects when it is just going to be passed back to the (other) client(s) as JSON?
2) I'm not sure if I would want to try a NoSQL technology for this. I am not breaking the JSON down; for now the only relationships in the db would be from a user account to their entries. Other than the user data, the domain model would be a String, which is the JSON. Advice welcome.
In theory, in the future I might want to do some processing on the server or set up more complicated relationships. In other words, right now I would just be saving the JSON, but in the future I might want a more traditional relational system. Would a NoSQL approach get in the way of this?
3) Are there any security concerns with this? JS injection for example? In theory, for this use case, the user doesn't get to enter anything, at least right now.
Thank you in advance.
EDIT - Thanks for the answers. I chose the answer I did because it went into the most detail on the advantages and disadvantages of NoSQL.
JSON on the SERVER
It's not a bad idea at all to store JSON on the server, especially if you go with a noSQL solution like MongoDB or CouchDB. Both use JSON as their native format (MongoDB actually uses BSON, but it's quite similar).
noSQL Approach: Assuming CouchDB as the storage engine
Baked in replication and concurrency handling
Very simple REST API; talk to the database over HTTP (see the sketch after this list).
Stores data as JSON natively, not in blobs or text fields
Powerful View/Query engine that will allow you to continue to grow the complexity of your documents
Offline mode. You can talk to CouchDB directly using JavaScript and have the entire app continue to run on the client if the internet isn't available.
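To illustrate the REST point from the list above: creating and reading a document really is just HTTP plus JSON (a sketch against a local CouchDB; the "notes" database and document are made up, and the database itself must be created once beforehand):

```typescript
// CouchDB speaks plain HTTP: PUT a JSON document, GET it back.
const base = "http://localhost:5984/notes";

async function demo(): Promise<void> {
  // Create a document under a known id (updating it later would also
  // require sending back its current _rev).
  await fetch(`${base}/doc-1`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "hello", body: "stored natively as JSON" }),
  });

  // Read it back; CouchDB adds _id and _rev fields to the document.
  const doc = await fetch(`${base}/doc-1`).then((r) => r.json());
  console.log(doc);
}

demo();
```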
Security
Make sure you're parsing the JSON documents with the browser's JSON.parse or a JavaScript library that is safe (json2.js).
Conclusion
I think the reason I'd suggest going with noSQL here, CouchDB in particular, is that it's going to handle all of the hard stuff for you. Replication is going to be a snap to set up. You won't have to worry about concurrency, etc.
That said, I don't know what kind of App you're building. I don't know what your relationship is going to be to the clients and how easy it'll be to get them to put CouchDB on their machines.
Links
CouchDB # Apache
CouchOne
CouchDB the definitive guide
MongoDB
Update:
After looking at the app, I don't think CouchDB will be a good client-side option, as you're not going to require folks to install a database engine to play Sudoku. That said, I still think it'd be a great server-side option. If you wanted to sync the server-side CouchDB instance with the client, you could use something like BrowserCouch, which is a JavaScript implementation of CouchDB for local storage.
If most of your processing is going to be done on the client side using JavaScript, I don't see any problem in storing JSON directly on the server.
If you just want to play around with new technologies, you're most welcome to try something different, but for most applications, there isn't a real reason to depart from traditional databases, and SQL makes life simple.
You're safe as long as you use the standard JSON.parse function to parse JSON strings - some browsers (Firefox 3.5 and above, for example) already have a native version, while Crockford's json2.js can replicate this functionality in others.
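The difference is easy to demonstrate (hypothetical strings, for illustration):

```typescript
const good = '{"user": "alice", "count": 3}';
console.log(JSON.parse(good).user); // "alice" -- treated as data, never executed

const evil = '({valueOf: function () { return stealCookies(); }})';
// eval(evil);       // would happily run the attacker's code
// JSON.parse(evil); // throws a SyntaxError instead: it is not valid JSON
```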
I just read your post and I have to say I quite like your approach; it heralds the way many web applications will probably work in the future, with both an element of local storage (for the disconnected state) and online storage (the master database, to keep all customers' records in one place and sync to other client devices).
Here are my answers:
1) Storing JSON on the server: I'm not sure I would store the objects as JSON. It's possible to do so if your application is quite simple, but it will hamper efforts to use the data (running reports and emailing them from a batch job, for example). I would prefer to use JSON for TRANSFERRING the information and a SQL database for storing it.
2) NoSQL approach: I think you've answered your own question there. My preferred approach would be to set up a SQL database now (if the extra resource needed is not a problem); that way you'll save yourself the work of building a data access layer for NoSQL that you would probably have to remove in the future. SQLite is a good choice if you don't want a fully-featured RDBMS.
If writing a schema is too much hassle and you still want to save JSON on the server, then you can hack up a JSON object management system with a single table and some parsing on the server side to return the relevant records. Doing this will be easier and require less permissioning than saving/deleting files.
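That single-table idea can be very small indeed. A sketch using the better-sqlite3 package (table and column names are made up):

```typescript
import Database from "better-sqlite3";

// One table of JSON blobs, keyed by user: no schema design beyond this.
const db = new Database("app.db");
db.exec(`CREATE TABLE IF NOT EXISTS documents (
  id      INTEGER PRIMARY KEY,
  user_id INTEGER NOT NULL,
  body    TEXT    NOT NULL  -- the raw JSON string
)`);

function saveDoc(userId: number, doc: unknown): number {
  const info = db
    .prepare("INSERT INTO documents (user_id, body) VALUES (?, ?)")
    .run(userId, JSON.stringify(doc));
  return Number(info.lastInsertRowid);
}

function loadDocs(userId: number): unknown[] {
  return db
    .prepare("SELECT body FROM documents WHERE user_id = ?")
    .all(userId)
    .map((row: any) => JSON.parse(row.body));
}

saveDoc(1, { puzzle: [5, 3, 0, 0, 7] });
console.log(loadDocs(1));
```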
3) Security: You mentioned there is no user input at the moment:
"for this use case, the user doesn't
get to enter anything"
However, at the beginning of the question you also mentioned that the user can
"work on one machine, save, then get
on another machine and load their
stuff"
If this is the case, then your application will be storing user data. It doesn't matter that you haven't provided a nice GUI for them to do so; you will have to worry about security from more than one standpoint, and JSON.parse or similar tools only solve half the problem (the client side).
Basically, you will also have to check the contents of the POST request on the server to determine whether the data being sent is valid and realistic. The integrity of the JSON object (or any data you are trying to save) will need to be validated on the server (using PHP or another server-side language) BEFORE saving it to your data store. This is because someone can easily bypass your JavaScript-layer "security" and tamper with the POST request even if you didn't intend them to, and then your application will be sending the evil input back out to clients anyway.
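A minimal sketch of that server-side check (TypeScript/Node as a stand-in for PHP or whatever the backend runs; the expected shape is made up):

```typescript
// Never trust the POST body: re-parse and sanity-check it on the server
// before it goes anywhere near the data store.
function validateEntry(rawBody: string): { title: string; body: string } {
  if (rawBody.length > 64 * 1024) throw new Error("payload too large");

  const data = JSON.parse(rawBody); // throws on malformed JSON

  if (typeof data !== "object" || data === null) throw new Error("not an object");
  if (typeof data.title !== "string" || data.title.length > 200) throw new Error("bad title");
  if (typeof data.body !== "string") throw new Error("bad body");

  // Whitelist: return only the fields you expect, dropping anything extra.
  return { title: data.title, body: data.body };
}
```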
If you have the server side of things tidied up, then JSON.parse becomes a bit redundant in terms of preventing JS injection. Still, it's not bad to have the extra layer, especially if you are relying on remote website APIs for some of your data.
Hope this is useful to you.
I've got an issue with WCF, streaming, and security that isn't the biggest deal but I wanted to get people's thoughts on how I could get around it.
I need to allow clients to upload files to a server, and I'm allowing this by using the transferMode="StreamedRequest" feature of the BasicHttpBinding. When they upload a file, I'd like to transactionally place the file in the file system and update the database with the file's metadata (I'm actually using SQL Server 2008's FILESTREAM data type, which natively supports this). I'm using WCF Windows authentication and delegating the Kerberos credentials to SQL Server for all my database authentication.
The problem is that, as the exception I get helpfully notes, "HTTP request streaming cannot be used in conjunction with HTTP authentication." So, for my upload file service, I can't pass the Windows authentication token along with my message call. Even if I weren't using SQL Server logins, I wouldn't even be able to identify my calling client by their Windows credentials.
I've worked around this temporarily by leaving the upload method unsecured, and having it dump the file to a temporary store and return a locator GUID. The client then makes a second call to a secure, non-streaming service, passing the GUID, which uploads the file from the temporary store to the database using Windows authentication.
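Reduced to its protocol, the workaround looks like this (a generic HTTP sketch, not actual WCF code; the endpoint names are made up):

```typescript
// Call 1: anonymous streamed upload to a temporary store, returning a locator.
// Call 2: authenticated, non-streamed commit that moves temp -> database.
async function uploadWithCommit(fileBytes: Uint8Array): Promise<void> {
  const guid = await fetch("https://server/upload", {
    method: "POST",
    body: fileBytes,
  }).then((r) => r.text());

  await fetch("https://server/commit", {
    method: "POST",
    headers: { Authorization: "Negotiate ..." }, // Windows auth only on this call
    body: JSON.stringify({ locator: guid }),
  });
}
```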
Obviously, this isn't ideal. From a performance point of view, I'm doing an extra read/write to disk. From a scalability point of view, there's (in principle, with a load balancer) no guarantee that I hit the same server with the two subsequent calls, meaning that the temporary file store needs to be in a shared location, which makes this not a scalable design.
Can anybody think of a better way to deal with this situation? Like I said, it's not the biggest deal, since a) I really don't need to scale this thing out much, there aren't too many users, and b) it's not like these uploads/downloads are getting called a lot. But still, I'd like to know if I'm missing an obvious solution here.
Thanks,
Daniel