As a software tester, I ran into an incident regarding testing on platforms with time travel (the time can be set manually to the past or future according to the requirements of the tests).
So the application time doesn't have to be the same as my local time... or should it be?
I found a bug that was caused by an inconsistency between my local time and the app time. A simple description: there are two validations. Validation #1 validates user input on the client side (using the local date), and validation #2 validates user input on the server side (using the server date). Both validations follow business rules specified in the project specification (which does not say whether validation should run locally or on the server side). When those two dates are inconsistent, it produces unexpected results.
However, development rejected the bug, saying that my test was wrong and that it's the client's responsibility to synchronize those two dates.
Honestly, I don't see what my local time has to do with application behaviour. There is a lot of functionality, and all of those rules use server time as the reference point. But because that client-side validation is done in JavaScript, its reference point is local time (that's the default behaviour, not an intentional choice).
So I am just asking for your opinion. Do you think it's a bug, or is it my misunderstanding of the importance of local time? How do you usually handle these things in your projects (as a tester or developer)? This is not just an issue of testing and server time travel; what about client "time travelling" (e.g. different time zones)? Do you put any kind of effort into handling these things, or do you just believe that "bad local time = client problem" and it's not development's problem?
In general it is going to really depend on your application, what it does, and what is required. However, as a best practice, "application time" is always UTC. As another best practice, never trust client times. There are a ton of reasons an end-user's computer's time could be completely off or out of sync.
In almost every application I've worked with, we set the application server to UTC. For end-users we have them set their timezone, which we use to determine the timezone offset. So for example if an end-user selects "Eastern Timezone" we'd store that setting along with a -5 hour offset (ignoring daylight saving time for brevity). If you aren't storing user settings to a database you can still get their timezone via a client-side preference screen or automatically via JavaScript. But the key factor is ignoring their time and just getting their timezone/offset. Then you send that timezone over to the server so you can TRANSLATE the time using the server's accurate time. This way you always have the accurate server time, plus a reference to the user's local time that you can use for reporting, logic, or display values.
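As a rough sketch of that flow (the endpoint and function names here are made up, not from the original answer): the client reports only its IANA timezone via the Intl API, and the server formats its own clock into that zone.

```typescript
// Client side: capture the user's IANA timezone (not their clock time)
// and send it along; the hypothetical /api/preferences endpoint just
// stands in for wherever you store user settings.
const userTimeZone = Intl.DateTimeFormat().resolvedOptions().timeZone;
// e.g. "America/New_York"

await fetch("/api/preferences", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ timeZone: userTimeZone }),
});

// Server side: trust only the server's clock, and translate it into the
// user's stored timezone for display.
function localizedNow(userTimeZone: string): string {
  const serverNow = new Date(); // authoritative time, ideally on a UTC-set server
  return serverNow.toLocaleString("en-US", { timeZone: userTimeZone });
}
```

Note that the client's clock is never read; only its timezone setting crosses the wire.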
So to answer your question: a bad local time in most cases needs to be accounted for, and is going to be YOUR problem. End-users aren't going to be checking their clocks to make sure they are accurate, they expect your application to work without having an IT person fix their side.
For example: we have two fields in a certain entity: created_at and updated_at. We can update those fields manually on the backend after create or update operations, or create a trigger on the DB side that will fill/update these fields for us automatically.
There are some cases to consider:
Usually, after a create or update, the backend returns the JSON of the object. In this case it'd be nice for those timestamp fields to be populated in the response; however, if a trigger does the modification for us, the backend would have to make another SELECT just to fetch the updated timestamps and return them nicely to the client.
Sometimes backend engineers can forget to update these fields manually, leading to null values.
I'm not a DBA specialist myself, but what do you think of the cost of triggers, especially at high RPS? Should I worry about the performance of triggers for such simple updates in high-load systems?
Who is responsible for modifying timestamp fields in the app: backend or db?
It depends, it can be either.
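If you go the backend route, one way to avoid both the forgotten-field problem and the extra SELECT is to set the timestamps in the write statement itself and read them back in the same round trip. A minimal sketch assuming PostgreSQL with node-postgres and an illustrative items table (none of these names come from the question):

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the environment

// Set updated_at inside the UPDATE itself using the database clock (NOW()),
// and use RETURNING to get the fresh timestamps back without a second SELECT.
async function updateItem(id: number, name: string) {
  const result = await pool.query(
    `UPDATE items
        SET name = $1,
            updated_at = NOW()
      WHERE id = $2
  RETURNING id, name, created_at, updated_at`,
    [name, id]
  );
  return result.rows[0]; // JSON-ready object with accurate timestamps
}
```

A trigger gives you the same guarantee without relying on engineers remembering NOW(), and in PostgreSQL the RETURNING clause reflects changes made by BEFORE triggers, so the trade-off is mostly about where you want the logic to live.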
There is no one "right" answer for which times to use or exclude. Depending on your system and which actors perform time-based actions (users, devices, servers, triggers), any (or all) of the list below might make sense to incorporate.
Depending on your system, you might have one (or more) of the following:
time A – when a user performs an action
this is most likely local device time (whatever the phone or computer thinks is current time)
but: anything is possible, a client could get a time from who-knows-where and report that to you
could be when a user did something (tap a button) and not when the message was sent to the backend
could be 10-20 seconds (or more) after a user did something (tap a button), and gets assigned by the device when it sends out batched data
time B – when the backend gets involved
this is server time, and could be when the server receives the data, or after the server has received and processed the data, and is about to hand it off to the next player (database, another server, etc)
note: this is probably different from "time A" due to transit time between user and backend
also, there's no guarantee that different servers in the mix all agree on time. They can and should, but that should not be relied upon as truth
time C – when a value is stored in the database
this is different from server time (B)
a server might receive inbound data at B, then do some processing which takes time, then finally submits an insert to the database (which then assigns time C)
Another highly relevant consideration in capturing time is the accuracy (or rather, the likely inaccuracy) of client-reported time. For example, a mobile device can claim to have sent a message at time X, when in fact the clock is just set incorrectly and the actual time is minutes, hours, even days away from reported time X (in the future or in the past). I've seen this kind of thing occur where data arrives in a system claiming to be from months ago, but we can prove from other telemetry that it did in fact arrive recently (today or yesterday). Never trust device-reported times. This applies to any device – mobile, tablet, laptop, desktop – all of them often have internal clocks that are not accurate.
Remote servers and your database are probably closer to real, though they can be wrong in various ways. However, even if wrong, when the database auto-assigns datetimes to two different rows, you can trust that one of them really did arrive after or before the other – the time might be inaccurate relative to actual time, but they're accurate relative to each other.
All of this becomes further complicated if you intend to piece together history by using timestamps from multiple origins (A, B and C). It's tempting to do, and sometimes it works out fine, but it can easily produce nonsense data. For example, it might seem safe to piece together history using a user time A, then a server time B, and database time C. Surely they're all in order – A happened first, then B, then C; so clearly all of the times should be ascending in value. But these are often out of order. So if you need to piece together history for something important, it's a good idea to look for secondary confirmations of the order of events rather than relying on timestamps alone.
Also on the subject of timestamps: store everything in UTC – database values, server times, and client/device times where possible. Timezones are the worst.
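To make the A/B/C separation concrete, here is a small sketch of an event record that keeps all three timestamps distinct and in UTC (the field names are illustrative, not from any particular system):

```typescript
// All values stored as UTC ISO-8601 strings.
interface EventRecord {
  deviceReportedAt: string;  // time A: claimed by the client (untrusted)
  serverReceivedAt: string;  // time B: assigned when the backend sees the event
  storedAt?: string;         // time C: assigned by the database on insert
}

// Backend handler: keep the client's claim for reference, but never let it
// stand in for B or C.
function receiveEvent(deviceReportedAt: string): EventRecord {
  return {
    deviceReportedAt,                            // reference only
    serverReceivedAt: new Date().toISOString(),  // toISOString() is always UTC
    // storedAt is left to the database (e.g. a DEFAULT on the column)
  };
}
```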
The system my company sells is software for a multi-machine solution. In some cases, there is a UI on one of the machines and a backend/API on another. These systems communicate and both use their own clocks for various operations and storage values.
When the UI's system clock gets ahead of the backend by 30 seconds or more, the queries start to misbehave because the UI's timestamp is sent as key information in the REST request. There is a "what has been updated by me" query that happens every 30 seconds, and the desync causes updated data to be missed since it falls outside the timing window.
Since I do not have any control over the systems that my software is installed on, I need a solution on my code's side. I can't force customers to keep their clocks in sync.
Possible solutions I have considered:
The UI can query the backend for its system time and cache that.
The backend/API can reach back further in time when looking for updates. This will give the clocks some room to slip around, but will cause a much heavier query load on systems with large sets of data.
Any ideas?
Your best bet is to restructure your API somewhat.
First, even though NTP is a good idea, you can't actually guarantee it's in use. Additionally, even when it is enabled, OSs (Windows at least) may reject packets that are too far out of sync, to prevent certain attacks (on the order of minutes, though).
When dealing with distributed services like this, the mantra is "do not trust the client". This applies even when you actually control the client, too, and doesn't necessarily mean the client is attempting anything malicious - it just means that the client isn't the authoritative source.
This should include timestamps.
Consider: the timestamps are a problem here because you're trying to use the client's time to query the server, except we shouldn't trust the client. Instead, what we should do is have the server return a timestamp of when the request was processed, or the update stamp for the latest entry in the database, which can be used in subsequent queries to retrieve new updates (how far back you go on the initial query is up to you).
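Sketched out, the poll then carries a server-issued cursor instead of a UI-generated timestamp; the endpoint and field names below are invented for illustration:

```typescript
// The server includes its own notion of "now" (or the latest update stamp
// it has seen) in each response, and the client echoes it back on the next
// poll instead of consulting its own clock.
interface UpdatesResponse {
  updates: unknown[];
  nextCursor: string; // server-assigned: an update stamp or server timestamp
}

async function pollUpdates(cursor: string | null): Promise<UpdatesResponse> {
  const url = cursor
    ? `/api/updates?since=${encodeURIComponent(cursor)}`
    : "/api/updates"; // initial query: the server decides how far back to go
  const response = await fetch(url);
  const body = (await response.json()) as UpdatesResponse;
  // Persist body.nextCursor for the next poll; the UI clock is never used.
  return body;
}
```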
Dealing with concurrent updates safely is a little harder, and depends on what is supposed to happen on collision. There's nothing really different here from most of the questions and answers dealing with database-centric versions of the problem, I'm just mentioning it to note you may need to add extra fields to your API to correctly handle or detect the situation, if you haven't already.
Windows Server has a registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation, where the field ActiveTimeBias gives the offset in minutes from GMT for the machine you are running on.
We have a web application that needs to present a user with their local time on an HTML page that we generate. We do this by having them set a preference for their timezone, and then fetch the above value so that we can compare server time and client time and do the correct calculation.
On servers that we have built, this works great. We are deploying our application to a tier 1 cloud provider who provides a Windows Server AMI that we configure and use. What we have found is that when you use the clock control panel on a local server, the registry entries for TimeZoneInformation are correct. When we do the same on this virtual machine, the entries are correct with the exception of ActiveTimeBias.
Microsoft cautions against diddling the individual value in their usual fashion.
Question for the community: has anyone else encountered this problem, and if so, how did you fix it?
One usually doesn't program directly against these registry keys. For example, in .Net, we have the TimeZoneInfo class. It uses the same data, but in a much more developer-friendly way.
Regarding your particular question, the ActiveTimeBias key holds the current offset from UTC for the time zone your server is set to. If your server is in a time zone that follows daylight saving rules, then this will change twice a year. If you manually update your time zone and refresh the registry, you will see it change.
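The same principle applies outside .NET: read the offset through the platform's API instead of the raw registry value. For comparison, a quick sketch in TypeScript; note that getTimezoneOffset() appears to use the same sign convention as ActiveTimeBias (minutes to add to local time to reach UTC):

```typescript
// Minutes to add to local time to get UTC (positive west of Greenwich).
const offsetMinutes = new Date().getTimezoneOffset();

// The IANA zone name, if you need the zone itself rather than the offset.
const zoneName = Intl.DateTimeFormat().resolvedOptions().timeZone;

console.log(`offset: ${offsetMinutes} minutes, zone: ${zoneName}`);
```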
The best advice I can offer is that since the timezone of the server is likely to be unimportant to anyone, you should set it to UTC.
See also: Daylight saving time and time zone best practices
I want to add a feature to the trial version of my application. After first activation, I want to limit it to 90 days. But I am concerned about users changing the system date and thereby deceiving my application.
Is there any way to make it foolproof, so that even if the user sets the calendar back, the application still expires 90 days after first activation? The first activation date is saved in the database.
In short, no, unless your application can run 24/7 and only allows itself to be started once. Even then, there'd be ways to subvert it.
SB.101's answer is a way of checking for very simple date fiddling. It won't catch sophisticated cheats who know you're doing that and just keep setting the date to something sneaky that fools your checks. It will also annoy the odd few users who change the date on their system legitimately.
Pinging a server of yours over the internet to get the date would help, but can still be spoofed, and now annoys your users by forcing them to be connected to the internet (unless your application already needs that).
There is no sure-fire way of doing this. It is theoretically impossible. Remember that no matter how clever you are at checking whether the trial period has elapsed, a user can always modify or delete the recording of when the trial started!
I would advise you to just do something quick and simple, and rely on the fact that the small percentage of people who are both able to subvert your trial limitation and willing to bother doing so are unlikely to purchase the full version of your application anyway.
You can also save the last run date in the DB and compare that to the system date; if the stored date is newer than the system date, then you are being deceived!
or
If you can use HTTP, then you can query time servers for the current date and time.
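A minimal sketch combining both ideas (the persistence and the URL are placeholders): record the last observed time on every run, treat a clock that has moved backwards as tampering, and optionally cross-check against the Date header of an HTTPS response when a connection is available:

```typescript
// Hypothetical persistence; in reality this would live in your database.
let lastSeen: Date | null = null; // load from the DB on startup

function clockLooksHonest(): boolean {
  const now = new Date();
  if (lastSeen !== null && now.getTime() < lastSeen.getTime()) {
    return false; // the system clock is earlier than a time we already saw
  }
  lastSeen = now; // persist back to the DB
  return true;
}

// Optional cross-check: most HTTPS responses carry a Date header set by
// the remote server. The URL is just an example.
async function fetchServerTime(): Promise<Date | null> {
  const response = await fetch("https://example.com", { method: "HEAD" });
  const header = response.headers.get("date");
  return header ? new Date(header) : null;
}
```

As the answers above note, none of this is tamper-proof; it only raises the bar.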
I'm developing a Flash game in ActionScript 2, and the issue is that this game has to count time securely.
It can't count time using the Date class, because Flash Player takes the time from the local computer, and the user can change the local time, so the reported time would be fake.
I haven't considered taking the time from the server, because there's a 3WH (three-way handshake) delay and it would not be practical.
What do you suggest?
You cannot perform secure computations on the user's system. They can manipulate it.
If that is a problem, your only real choice is to do it on the server. Of course, they could sandbox your app and fake a server conversation, so that's not entirely secure from within the client, but in most cases that won't cause a big problem since it should just affect that user (unless the data from the manipulated/forged server connection is then sent somewhere to affect other users).
When you are developing games that run on a system you do not control, there is basically no solution; you can make it hard for people, but you can never be certain unless you modify your game to run all the important parts on the server. Even if you make the game call the server just for the time, people can insert a proxy and fake the response...
So if you really want to be sure no one messes with the game, you have to make it run on the server (I know, a lot of the time this is unwanted and/or impossible). In all other cases you can make it hard (obfuscate game code, encrypt communication) but never impossible - see Google for lots of suggestions on making it hard, or see here and here.
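For the time-counting case specifically, the server-authoritative version is small: the server records when a session starts and is the only party that computes elapsed time; the client merely asks. A sketch (in TypeScript rather than ActionScript, with invented names):

```typescript
// Server side: the only clock that matters. The in-memory map is
// illustrative; a real game would persist this per player.
const sessionStarts = new Map<string, number>();

function startSession(sessionId: string): void {
  sessionStarts.set(sessionId, Date.now());
}

function elapsedSeconds(sessionId: string): number {
  const start = sessionStarts.get(sessionId);
  if (start === undefined) throw new Error("unknown session");
  return (Date.now() - start) / 1000;
}

// The client never measures time itself; it only displays what the server
// reports, so rolling the local clock back changes nothing that matters.
```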
The best way of solving the issue is to remove the incentive for players to cheat, so they simply won't try it at all; of course, a lot of the time this is really hard.
See also: Cheat Engine, in case you didn't know about that one.