I'm developing a Flash game in ActionScript 2, and the issue is that this game has to count elapsed time securely.
It can't count the time with the Date class, because Flash Player takes the time from the local computer, and the user can change the local clock, so the reported time would be fake.
I haven't considered taking the time from the server, because there's the 3WH (three-way handshake) delay, which would make it impractical.
What do you suggest?
You cannot perform secure computations on the user's system. They can manipulate it.
If that is a problem, your only real choice is to do it on the server. Of course, they could sandbox your app and fake a server conversation, so even that isn't entirely secure from within the client, but in most cases that won't cause a big problem, since it should only affect that user (unless data from the manipulated/forged server connection is then sent somewhere and affects other users).
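For illustration, here is a minimal sketch of keeping the timer server-side; Flask and the route names are my own assumptions, not anything from the question:

    # Server-authoritative timing: nothing the client reports about
    # its own clock is ever used.
    import time
    import uuid
    from flask import Flask, jsonify

    app = Flask(__name__)
    sessions = {}  # session id -> start time, kept only on the server

    @app.route("/start")
    def start():
        sid = str(uuid.uuid4())
        sessions[sid] = time.time()  # trust only the server's clock
        return jsonify(session=sid)

    @app.route("/elapsed/<sid>")
    def elapsed(sid):
        return jsonify(seconds=time.time() - sessions[sid])

The Flash client then only displays what the server reports, so changing the local clock changes nothing (though a sandboxed/faked server conversation remains possible, as noted above).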
When you are developing games that run on a system you do not control, there is basically no complete solution. You can make cheating hard, but you can never be certain unless you move all the important parts of your game to the server. Even if you make the game call the server just for the time, people can insert a proxy and fake the response...
So if you really want to be sure no one messes with the game, you have to make it run on the server (I know, a lot of the time this is unwanted and/or impossible). In all other cases you can make it hard (obfuscate game code, encrypt communication) but never impossible - see Google for lots of suggestions on making it hard, or see here and here.
The best way of solving the issue is to remove the incentive for players to cheat, so they simply won't try it at all; of course, a lot of the time this is really hard.
See also: Cheat Engine, in case you didn't know about that one.
The system my company sells is software for a multi-machine solution. In some cases, there is a UI on one of the machines and a backend/API on another. These systems communicate and both use their own clocks for various operations and storage values.
When the UI's system clock gets ahead of the backend's by 30 seconds or more, the queries start to misbehave, because the UI's timestamp is sent as key information in the REST request. There is a "what has been updated by me" query that runs every 30 seconds, and the desync causes updated data to be missed because it falls outside the timing window.
Since I do not have any control over the systems that my software is installed on, I need a solution on my code's side. I can't force customers to keep their clocks in sync.
Possible solutions I have considered:
The UI can query the backend for its system time and cache that (rough sketch after this list).
The backend/API can reach back further in time when looking for updates. This will give the clocks some room to slip around, but will cause a much heavier query load on systems with large sets of data.
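A rough sketch of the first option, assuming a hypothetical /time endpoint on the backend that returns its clock as an ISO timestamp:

    # Sketch of caching the backend's clock offset; the /time endpoint
    # and its JSON shape are assumptions for illustration only.
    from datetime import datetime
    import requests

    def clock_offset(base="https://backend.example.com"):
        local = datetime.utcnow()
        server = datetime.fromisoformat(
            requests.get(base + "/time", timeout=5).json()["utc"])
        # Add this offset to local time to approximate the backend clock.
        # (Ignores network latency; subtracting half the round trip refines it.)
        return server - local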
Any ideas?
Your best bet is to restructure your API somewhat.
First, even though NTP is a good idea, you can't actually guarantee it's in use. Additionally, even when it is enabled, OSs (Windows at least) may reject packets that are too far out of sync, to prevent certain attacks (on the order of minutes, though).
When dealing with distributed services like this, the mantra is "do not trust the client". This applies even when you actually control the client, too, and doesn't necessarily mean the client is attempting anything malicious - it just means that the client isn't the authoritative source.
This should include timestamps.
Consider: the timestamps are a problem here because you're trying to use the client's time to query the server, yet we shouldn't trust the client. Instead, the server should return a timestamp of when the request was processed, or the update stamp of the latest entry in the database, which can then be used in subsequent queries to retrieve new updates (how far back you go on the initial query is up to you).
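As a sketch, assuming a hypothetical /updates endpoint that returns both the rows and the server timestamp to echo back on the next call:

    import requests

    BASE = "https://backend.example.com"  # placeholder backend URL

    def poll_updates(since=None):
        params = {"since": since} if since is not None else {}
        body = requests.get(BASE + "/updates", params=params, timeout=10).json()
        # The server hands back the watermark for the next call,
        # so the client's clock is never involved.
        return body["rows"], body["server_timestamp"]

    rows, watermark = poll_updates()           # initial query; server picks the window
    rows, watermark = poll_updates(watermark)  # later polls resume from the watermark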
Dealing with concurrent updates safely is a little harder, and depends on what is supposed to happen on collision. There's nothing really different here from most of the questions and answers dealing with database-centric versions of the problem, I'm just mentioning it to note you may need to add extra fields to your API to correctly handle or detect the situation, if you haven't already.
We are working with a .NET 3.5 app which is fast approaching legacy status. We have an existing SOAP service which reads records from our database and saves them to a third party MS SQL database, sending all the data rows in a single batch.
This has always worked fine, but recently we've taken on a much larger client than any we've had before, and they are transmitting much larger batches, so much so that they have begun to fail. We've upped the timeout and max memory sizes in IIS, and maxed out the maxRequestLength in the web.config, but we are still bumping up against size problems.
So, I understand that long term we should consider moving away from SOAP to WCF, and plans for that are in the works. But in the meantime, we need a short-term fix for this new client. And of course, to make the business and sales people happy, we need it kinda quickly.
I'm wondering what the best-practice approach might be. Initially I'm thinking something like this, but I could be thinking inside the box too much:
Establish a benchmark for the number of records above which we don't want to attempt to sync all at once.
Before attempting to save the data, check the number of records against that benchmark.
If it's above the benchmark, break the transmission down into segments that are each below it: SELECT TOP 10000 * FROM table WHERE sent = false, etc., if the benchmark is 10000. Then update sent to true for those records once submitted. Repeat (see the sketch after this list).
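Roughly like this, as a sketch; pyodbc, the table and column names, and the submit_batch call standing in for our existing SOAP client are all placeholders:

    import pyodbc  # assumption; any ODBC-style data access works the same way

    BATCH = 10000  # the benchmark discussed above

    conn = pyodbc.connect("DSN=SourceDb")  # placeholder connection string
    cur = conn.cursor()

    while True:
        # Pull the next batch of unsent records.
        rows = cur.execute(
            f"SELECT TOP {BATCH} id, payload FROM SyncTable WHERE sent = 0"
        ).fetchall()
        if not rows:
            break
        submit_batch(rows)  # hypothetical call into the existing SOAP service
        # Mark the batch as sent so the next SELECT skips it.
        cur.executemany("UPDATE SyncTable SET sent = 1 WHERE id = ?",
                        [(r.id,) for r in rows])
        conn.commit()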
Obviously, this will slow the process down, so to handle the user experience, we may want to toss in a status bar so they can see the progress.
Am I on the right track?
In addition to the comments from John, you should consider if you are solving the problem in the most optimal way.
It looks like you are triggering a one-way sync between two databases by calling a web service. This approach leads to the timeout and memory problems you are experiencing.
If your goal is a one-way sync, you could use a free framework such as Microsoft's Sync Framework: http://msdn.microsoft.com/en-US/sync
First, let me say I am only a novice programmer, and by no means an SQL guru. We have an app at work that has been under heavy development by the vendor for some time (2+ years). It runs as an MSSQL instance on one of our servers, and there is a client install for the desktops. The client software makes direct SQL calls to the database (it also has a local MySQL instance to handle the client settings), and 6-12 ports had to be opened up for the communication. Looking at the SQL manager, I can see direct SQL calls from various clients.
It seems to me this is entirely the wrong approach. The closest thing I have done to this was a webpage + PHP + MySQL: the webpage would make requests, all the processing would be server-side, and it would simply display the results. The sluggishness my users feel comes, I think, from the client-side request plus the processing of the SQL data.
PS: I realize that if they have not done it by now, switching to another paradigm seems out of the question. I just want to know if I am way off base.
You are way off base.
The client side has much more processing power.
Consider the case of one server and 5 clients. Even if the server has 3 times the power of a client, the clients as a whole are still 5:3 more powerful.
If the application is sluggish, it was probably poorly written, and you need to investigate the root cause. Client/server is a leading design practice, so I'm guessing it is not the root cause; it might be badly implemented, or there might be other reasons. Your comment about having a local MySQL instance sounds very fishy to me: there should be no need for that.
Would it be useful for a hacker in any way to publicly display current server stats, such as average load times and memory usage?
The only issue I can foresee is that someone attempting to DDoS the server would have a visible indication of success, or would be able to examine patterns to choose an optimal time to attack. Is this much of an issue if I'm confident in the host's anti-DDoS setup? Are there any other problems I'm not seeing? (I have a bad tendency to miss wide-open security holes sometimes...)
Also useful for doing a MITM attack at the busiest time, so the attacker can acquire the most targets before possible detection.
Another thing I can think of is log-file 'obfuscation', where an attacker's requests get lost among everything else being logged.
Maybe a long shot, but it can also be used to see where your visitors are coming from (based on the time they access the website), which can be used to target your visitors in other ways.
Also, to expand on the possibility of attackers DoSing the site: they can work out the average response time at different times of day (when the stats don't reveal it directly), because they can put load on the server themselves and see when the load gets lighter.
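As a sketch of that kind of probing (the target URL is a placeholder), an attacker just times a lightweight request periodically and watches for the quiet hours:

    import time
    import requests

    def average_response(url="https://target.example.com", samples=5):
        timings = []
        for _ in range(samples):
            start = time.monotonic()
            requests.get(url, timeout=10)
            timings.append(time.monotonic() - start)
            time.sleep(1)
        # Repeated across the day, this maps out the busy/quiet cycle
        # even when no statistics are published.
        return sum(timings) / len(timings)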
Yes, it's useful.
It will help him to know when he can download a big chunk of data, like a backup, without being detected by traffic statistics ;)
He will also know when he can attack, run a penetration test, brute-force, or whatever, with a better chance of hiding his tracks in the logs.
Furthermore, if he gains access, he will know when he can collect the most credit cards or user passwords, if he had no luck with the database or it's an XSS attack, etc.
DDoS is another point, which you mentioned already. Memory and average load will show him the success status of the attack.
I want to add a feature to the trial version of my application: after first activation, I want to limit it to 90 days. But I am concerned about users changing their system date and thereby deceiving my application.
Is there any way to make it foolproof, so that even if the user sets the calendar back, the application still expires 90 days after first activation? The first activation date is saved in the database.
Thank you very much.
Furqan
In short, no, unless your application can run 24/7 and only allows itself to be started once. Even then, there'd be ways to subvert it.
#SB.101's answer is a way of checking for very simple date fiddling. It won't catch sophisticated cheats who know you're doing that and just keep setting the date to something sneaky that fools your checks. It will also annoy the odd few users who change the date on their system legitimately.
Pinging a server of yours over the internet to get the date would help, but it can still be spoofed, and it annoys your users by forcing them to be connected to the internet (unless your application already needs that).
There is no sure-fire way of doing this. It is theoretically impossible. Remember that no matter how clever you are at checking whether the trial period has elapsed, a user can always modify or delete the record of when the trial started!
I would advise you to just do something quick and simple, and rely on the fact that the small percentage of people who are both able to subvert your trial limitation and willing to bother doing so are unlikely to purchase the full version of your application anyway.
You can also save the last run date in the DB and compare it to the system date; if the saved date is newer than the system date, the clock has been set back and you are being deceived!
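A minimal sketch of that check, assuming the dates live in a local SQLite table (the schema here is illustrative):

    import sqlite3
    from datetime import datetime, timedelta

    TRIAL_DAYS = 90

    def trial_is_valid(db_path="app.db"):
        conn = sqlite3.connect(db_path)
        activated, last_run = conn.execute(
            "SELECT activated_at, last_run_at FROM trial").fetchone()
        now = datetime.now()

        if now < datetime.fromisoformat(last_run):
            return False  # clock is behind the last run: it was set back
        if now - datetime.fromisoformat(activated) > timedelta(days=TRIAL_DAYS):
            return False  # 90 days since first activation have elapsed

        # Record this run so a future rollback can be detected.
        conn.execute("UPDATE trial SET last_run_at = ?", (now.isoformat(),))
        conn.commit()
        return True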
Or, if you can use HTTP, you can query a time server for the current date and time.
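Every compliant web server already returns a Date header, so you don't even need a dedicated time service; a sketch (the host below is just a placeholder):

    from email.utils import parsedate_to_datetime
    import requests

    def network_now(url="https://www.example.com"):
        resp = requests.head(url, timeout=5)
        # The Date header reflects the remote server's clock,
        # not whatever the user set locally.
        return parsedate_to_datetime(resp.headers["Date"])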