So I have been working on a client/server application written in Java. At the moment I am looking for a way to verify that the code of the client application has not been changed and then recompiled. I've been searching Google for some time without a lot of success. An educated guess would be to generate a hash value of the client's code during runtime, send it to the server and compare it with a database entry or a variable. However, I am not sure whether that is the right way, or even how to generate a hash of the codebase during execution in a secure way. Any suggestions would be greatly appreciated.
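For reference, the hashing part itself is straightforward. Below is a minimal sketch (class and file names are made up) that locates the JAR the running class was loaded from and computes a SHA-256 digest of it. As the answers below point out, though, a modified client can simply send whatever hash the server expects, so this is not a security boundary on its own.

```java
import java.io.InputStream;
import java.net.URL;
import java.security.MessageDigest;

public class SelfHash {
    public static void main(String[] args) throws Exception {
        // Locate the JAR (or class directory) this class was loaded from.
        URL codeSource = SelfHash.class.getProtectionDomain()
                                       .getCodeSource().getLocation();
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        try (InputStream in = codeSource.openStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                sha.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : sha.digest()) {
            hex.append(String.format("%02x", b));
        }
        // This is the value the asker would send to the server for comparison.
        System.out.println(hex);
    }
}
```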
What would stop the nefarious user from simply having the client send the correct checksum to the server each time? Nothing.
There is currently no way to completely ensure that the software running on a client computer has not been altered. It's simply not possible to trust the client's software without asserting control over its hardware and software. This is a situation where you should focus on software features and quality, something that benefits all users, rather than on preventing a few users from hacking your software.
I'm hoping to automate the downloading and installation of the free GeoIP databases, and I want to know if there are any additional verification options available, given that MD5 is becoming more susceptible to pre-image attacks.
Additionally, the MD5 sums are stored on the same server, meaning any attacker breaking into that server will be able to upload a potentially malicious database and have it served without any client being the wiser.
GPG is a common verification tool, and it would be trivial for most Linux users to set up, given that their package managers already perform this sort of verification.
maxmind.com supports HTTPS on its download links (just add the 's' yourself), so keep your certificates accurate and your TLS libraries up to date and you should be about as secure as possible.
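In Java, downloading over HTTPS already gets you certificate validation for free. A quick sketch (the URL is a placeholder, not the real download link):

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class GeoIpDownload {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; substitute the actual GeoLite download link, with https://.
        URL url = new URL("https://download.maxmind.com/geoip/GeoLite2-City.mmdb.gz");
        // HttpsURLConnection (used under the hood) validates the certificate
        // chain by default, so a man-in-the-middle fails the TLS handshake.
        try (InputStream in = url.openStream()) {
            Files.copy(in, Paths.get("GeoLite2-City.mmdb.gz"),
                       StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```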
Even assuming their web server gets hijacked, there's really no point in fretting about MD5 vs. SHA vs. GPG at that point, as you would have no reasonable assurance about the width and breadth of the attack. It might as well be an inside job intentionally perpetrated by the company itself. MaxMind makes no fitness guarantees against human or automated error anyway, so take it under advisement.
For a free service (free database, free bandwidth, huge weekly updates), you can't exactly go begging for air-gapped Fort Knox-grade security. TLS is already better than you'll need.
You are welcome to perform your own sanity-checking of a newly downloaded database against the previously downloaded database, to make sure any changes or corrections are nominally insignificant. Better still, you can use their GeoIP Update program or direct-download patches. This way, you are only downloading nominally insignificant updates to begin with, and can inspect them yourself before merging them into the database. And you'll be saving bandwidth for everyone.
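A crude version of that sanity check, assuming you keep the previous file around (the file names and the 10% threshold are arbitrary choices, not anything MaxMind prescribes):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class DbSanityCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names for the previous and freshly downloaded databases.
        Path previous = Paths.get("GeoLite2-City.previous.mmdb");
        Path latest = Paths.get("GeoLite2-City.latest.mmdb");
        long oldSize = Files.size(previous);
        long newSize = Files.size(latest);
        double delta = Math.abs(newSize - oldSize) / (double) oldSize;
        // A weekly update should be a small delta; a huge jump warrants a manual look.
        if (delta > 0.10) {
            System.err.printf("Refusing to install: size changed %.1f%%%n", delta * 100);
            System.exit(1);
        }
        Files.move(latest, previous, StandardCopyOption.REPLACE_EXISTING);
    }
}
```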
We have a few applications running on Windows 2000 and 2008 servers. They are written in Java.
These applications need to perform many automation tasks, and we are having difficulty monitoring them. Sometimes, for one reason or another, an application either hangs or fails to perform its desired job, and we only find out a few days later when someone reports that a desired function hasn't been executed.
To get around this issue, we added emails for each important exception, but then a developer needs to spend time checking those 1000 emails every day, which again is neither a feasible nor an efficient solution.
Now we are looking for an alert/alarm/notification display and monitoring system. We need a remote application which can receive alarms from these Java applications and then, based on certain information/conditions/configuration, display red, orange, or green text on the screen. From red text, users can visually see that there is an issue in the system, and if required, users can be notified that there is a serious issue in the application.
Please help us identify any existing mechanism, tool, or package to achieve this goal. Any suggestion would be highly appreciated.
Thanks
There are myriad ways to achieve this, but all of them will require some effort. Which way to proceed depends on your needs and abilities. A couple of options occur to me:
Have your processes log their exceptions to a syslog daemon running on some central server. You could then have an admin read through the log file for serious problems, but there are also many ways to post-process syslog messages; a web search on the topic will give some more hints.
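If your applications use log4j, this is close to a one-liner. A sketch with a made-up host name (log4j 1.2's SyslogAppender sends to UDP port 514 by default):

```java
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;
import org.apache.log4j.net.SyslogAppender;

public class CentralLogging {
    public static void main(String[] args) {
        // Hypothetical host name; point this at your central syslog daemon.
        SyslogAppender syslog = new SyslogAppender(
                new PatternLayout("myapp: %-5p %c - %m%n"),
                "loghost.example.com",
                SyslogAppender.LOG_LOCAL0);
        Logger.getRootLogger().addAppender(syslog);

        Logger log = Logger.getLogger(CentralLogging.class);
        try {
            throw new IllegalStateException("automation task failed");
        } catch (IllegalStateException e) {
            // This arrives on the central server like any other syslog message.
            log.error("Task XYZ did not complete", e);
        }
    }
}
```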
Is there any way, when logged into the server, to observe whether the process is running properly? You could install something like Nagios on a server and write a plugin that monitors your particular process on all the servers. The plugin can basically be a shell script that checks ps, a log file, or whatever you want, as sketched below.
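A Nagios plugin is just an executable that prints one status line and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). To keep the examples in Java, here is a sketch of that shell-script logic; the process name is made up, and on the Windows servers you mention you would run tasklist instead of ps:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CheckJavaApp {
    public static void main(String[] args) throws Exception {
        // Hypothetical main class to look for in the process list.
        String needle = args.length > 0 ? args[0] : "com.example.AutomationApp";
        Process ps = new ProcessBuilder("ps", "axww").start();
        boolean found = false;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(ps.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.contains(needle)) { found = true; break; }
            }
        }
        // Nagios reads the first output line and interprets the exit code.
        if (found) {
            System.out.println("OK - " + needle + " is running");
            System.exit(0);
        } else {
            System.out.println("CRITICAL - " + needle + " not found in process list");
            System.exit(2);
        }
    }
}
```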
If you are in an IT department, your organization might already have a system like this in place (an NMS, or network management system).
I'm not sure why this question is tagged "snmp", but it is technically possible to install an SNMP agent on each server and have it send traps on certain conditions. I think that would be slightly overkill, though, because you would also have to get a good SNMP manager to receive the traps and alert a sysadmin.
I would go for a combination of the check_logfiles plugin to parse log exceptions and raise alerts, and check_jmx/jmxquery to check metrics inside the JVM such as heap usage and thread count.
check_logfiles
check_jmx
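For the curious, what check_jmx does boils down to a remote JMX read. A self-contained sketch (the host and port are made up, and the target JVM must be started with -Dcom.sun.management.jmxremote.port=9010 plus the usual auth/SSL flags):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class HeapAndThreads {
    public static void main(String[] args) throws Exception {
        // Hypothetical host/port of the monitored JVM.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://appserver.example.com:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            CompositeData heap = (CompositeData) conn.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            long used = (Long) heap.get("used");
            long max = (Long) heap.get("max");
            Integer threads = (Integer) conn.getAttribute(
                    new ObjectName("java.lang:type=Threading"), "ThreadCount");
            // These are the numbers you would compare against warning/critical thresholds.
            System.out.printf("heap %d/%d bytes, %d threads%n", used, max, threads);
        } finally {
            connector.close();
        }
    }
}
```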
First let me say I am only a novice programmer, and by no means an SQL guru. We have an app at work that has been under heavy development by the vendor for some time (2+ years). It runs as an MS SQL instance on one of our servers, and there is a client install for the desktops. The client software makes direct SQL calls to the database (it also has a local MySQL instance to handle the client settings), and 6-12 ports had to be opened up for the communication. Looking at the SQL manager, I can see direct SQL calls from various clients.
This seems to me entirely the wrong approach. The closest thing I have done to this was a webpage + PHP + MySQL: the webpage would make requests, all the processing would happen server-side, and the page would simply display the results. I think the sluggishness my users feel comes from the client-side requests plus processing of the SQL data.
PS: I realize that if they haven't done it by now, switching to another paradigm seems out of the question. I just want to know if I am way off base.
You are way off base.
The client side has much more processing power.
Consider the case of one server and 5 clients. Even if the server has 3 times the power of a client, the clients as a whole are still 5:3 more powerful.
If the application is sluggish, it was probably poorly written, and you need to investigate the root cause. Client/server is a leading design practice, so I'm guessing the architecture itself is not the root cause; it might be badly implemented, or there might be other reasons. Your comment about a local MySQL instance sounds very fishy to me -- there should be no need for that.
I'm developing a Flash game in ActionScript 2, and the issue is that this game has to count time securely.
It can't count time using the Date class, because the Flash Player takes the time from the local computer, and the user can change the local time, so the reported time would be fake.
I haven't considered taking the time from the server, because there's the 3WH (three-way handshake) delay, and it would not be practical.
What do you suggest?
You cannot perform secure computations on the user's system. They can manipulate it.
If that is a problem, your only real choice is to do it on the server. Of course, they could sandbox your app and fake a server conversation, so even that is not entirely secure from within the client, but in most cases it won't cause a big problem, since it should only affect that one user (unless the data from the manipulated/forged server connection is then sent somewhere and affects other users).
When you are developing games that run on a system you do not control, there is basically no solution. You can make it hard for people, but you can never be certain unless you move all the important parts of the game onto the server. Even if you made the game call the server for the time only, people could insert a proxy and fake the response...
So if you really want to be sure no one messes with the game, you have to make it run on the server (I know, a lot of the time this is unwanted and/or impossible). In all other cases you can make it hard (obfuscate the game code, encrypt the communication) but never impossible -- see Google for lots of suggestions on making cheating hard.
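For the time case specifically, the server-side piece can be tiny. A sketch using the JDK's built-in HTTP server (the endpoint and port are arbitrary); the AS2 client would poll this instead of trusting the local Date, and any elapsed-time checks that matter should also be recomputed server-side:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class TimeServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Returns the server's clock, which the player cannot wind forward.
        server.createContext("/time", exchange -> {
            byte[] body = Long.toString(System.currentTimeMillis())
                              .getBytes(StandardCharsets.US_ASCII);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```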
The best way of solving the issue is to remove the incentive for players to cheat, so they simply won't try it at all -- of course, a lot of the time this is really hard.
See also: Cheat Engine, in case you didn't know about that one.
I am in the process of implementing an enhancement to an existing web application (A). The new solution will provide features (charts/images/data) to application A. The enhancement will be a new project and will generate new assemblies. I am trying to identify the most elegant way to read this information:
1) Use a binary reference and read the data directly. The new assemblies live with your application, and the two are married together.
2) Make a WCF call and get the data. This will help to decouple the applications.
The new application will require me to buy some expensive licenses. So if I go with the 2nd option I can limit the license fee to a single server, or at most 2-3. My current application runs in a web farm of 8 servers.
Please share the pros and cons of both approaches.
Thanks.
If you decouple the two pieces sufficiently, you will also permit the use of clients running something other than .NET. With the first option, you could only support .NET clients. This may turn out to be important even if today you are absolutely certain that only .NET will ever be used -- tomorrow, your company may be purchased by another that is a Java or PHP shop.
Even if you never need to support a non-.NET client, coupling to the assemblies will require you to maintain version compatibility between the client and the server. If that is not necessary, use option #2.
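The question is about .NET/WCF, but the trade-off is language-agnostic. A Java-flavoured sketch of option 2's shape (all names are invented) showing how the caller depends only on a contract, so the licensed component can either live in-process on every box or sit behind a service on one or two:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.net.URI;

// The contract application A programs against.
public interface ChartService {
    byte[] renderChart(String reportId);
}

// Option 1's shape: in-process, so the licensed component (and its
// license fee) rides along on all 8 web-farm servers.
class LocalChartService implements ChartService {
    public byte[] renderChart(String reportId) {
        // ...call directly into the licensed charting assemblies here...
        return new byte[0]; // placeholder
    }
}

// Option 2's shape: the licensed component runs behind one endpoint.
class RemoteChartService implements ChartService {
    private final URI endpoint;

    RemoteChartService(URI endpoint) {
        this.endpoint = endpoint;
    }

    public byte[] renderChart(String reportId) {
        try (InputStream in =
                 endpoint.resolve("/charts/" + reportId).toURL().openStream()) {
            return in.readAllBytes();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```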
The benefit of using WCF (the decoupled approach) is that you get the deployment option of moving it off the machine if it impacts the machine too much in terms of processing or storage.
The downside is that you'll likely pay some performance hit compared to linking directly.
I'm sure you can do some dynamic linking so you don't have to deploy to all 8 servers.