Time-shift software like Time Machine lets you test your application against past or future dates. Time Machine® guarantees that no modifications are needed in the database or the application.
I was just curious about how this software works internally. Does anyone have any thoughts about this?
These applications intercept the date and time calls of the application and return a date and time from a virtual clock.
Time Machine intercepts your file system's date and time calls. If the caller is configured with a virtual clock and the program is not on the exclusion list, virtual time is returned; otherwise system time is returned.
Source (for the mentioned Time Machine software): http://www.solution-soft.com/sites/default/files/wysiwyg/TMDataSheet.pdf
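For illustration only, here is a minimal Java sketch of the virtual-clock idea within a single application, using java.time.Clock. This is not how Time Machine itself works (it hooks the OS-level calls as described above); it just expresses the same concept in code.

import java.time.Clock;
import java.time.Duration;
import java.time.Instant;

// A minimal sketch of a "virtual clock" inside one application.
public class VirtualClockDemo {
    public static void main(String[] args) {
        Clock realClock = Clock.systemUTC();

        // Virtual clock: the real clock shifted 30 days into the future.
        Clock virtualClock = Clock.offset(realClock, Duration.ofDays(30));

        System.out.println("Real time:    " + Instant.now(realClock));
        System.out.println("Virtual time: " + Instant.now(virtualClock));
    }
}

Code that reads time through an injectable Clock like this can be "time travelled" without touching the OS; intercepting the system calls, as Time Machine does, achieves the same effect without changing the application.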
I actually wrote one from scratch for Windows. It is an interesting endeavor, and the information needed to do it is available on the web.
The code I based my intercept on is out on CodeProject.
The three functions I hooked were:
GetSystemTime
GetLocalTime
GetSystemTimeAsFileTime
These were the only ones I needed to capture to replace all time calls within .NET.
The process isn't so much hard as it is difficult to maintain, especially if you are operating on a virtual system like Citrix.
Is there a way to listen or wait for a specific time (e.g. 11:30 am) every day? The only way I know is to set a timer that checks the current time every 60 seconds, which I have actually implemented using a BackgroundWorker. But is there a way to just wait and listen for the specified time (similar to monitoring for directory changes) and then take some action?
Thanks in advance.
Typically, rather than having a program resident in memory waiting, you would set up a Scheduled Task for this (or a cron job on Linux). The scheduled task will run the program at the appropriate time. The program can still check (validate) the expected time if needed, but it shouldn't just sit in the background using up resources if it's only going to run once per day.
The scheduled task is also better because it will recover automatically from computer reboots, crashes, etc. If something happens that interrupts your program's normal running, the scheduled task will still be able to run.
This is especially important in the .NET world, because .NET requires you to be very careful when writing long-lived programs to avoid address space fragmentation. The .NET garbage collector is good at freeing up and returning old memory to the operating system, but over time your program's virtual address space can become fragmented, and eventually you will not be able to allocate new memory at all.
Even if this is part of a larger program, where there are also other things happening based on user interactions, it's still a good idea to split this off into a separate process.
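If you do want the program itself to sanity-check that it was launched at the expected time, a minimal Java sketch might look like this (the 11:30 target from the question and the five-minute tolerance are just illustrative assumptions):

import java.time.Duration;
import java.time.LocalTime;

public class ScheduledJob {
    public static void main(String[] args) {
        LocalTime expected = LocalTime.of(11, 30);   // the time the task is scheduled for
        LocalTime now = LocalTime.now();

        // Tolerate small scheduling delays; bail out if we were started at the wrong time.
        long minutesOff = Math.abs(Duration.between(expected, now).toMinutes());
        if (minutesOff > 5) {
            System.err.println("Started at " + now + ", expected around " + expected + " - exiting.");
            return;
        }

        // ... do the actual daily work here ...
        System.out.println("Running daily job at " + now);
    }
}

The scheduler (Task Scheduler or cron) remains the thing that decides when to run; the check above is only a guard against misconfiguration.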
My company sells software for a multi-machine solution. In some cases, there is a UI on one of the machines and a backend/API on another. These systems communicate, and both use their own clocks for various operations and stored values.
When the UI's system clock gets ahead of the backend's by 30 seconds or more, the queries start to misbehave because the UI's timestamp is sent as key information in the REST request. There is a "what has been updated by me" query that runs every 30 seconds, and the desync causes updated data to be missed because it falls outside the timing window.
Since I do not have any control over the systems that my software is installed on, I need a solution on my code's side. I can't force customers to keep their clocks in sync.
Possible solutions I have considered:
The UI can query the backend for its system time and cache that.
The backend/API can reach back further in time when looking for updates. This will give the clocks some room to slip around, but will cause a much heavier query load on systems with large sets of data.
Any ideas?
Your best bet is to restructure your API somewhat.
First, even though NTP is a good idea, you can't actually guarantee it's in use. Additionally, even when it is enabled, operating systems (Windows, at least) may reject corrections that are too far out of sync, to prevent certain attacks (on the order of minutes, though).
When dealing with distributed services like this, the mantra is "do not trust the client". This applies even when you actually control the client, too, and doesn't necessarily mean the client is attempting anything malicious - it just means that the client isn't the authoritative source.
This should include timestamps.
Consider: the timestamps are a problem here because you're trying to use the client's time to query the server, yet we shouldn't trust the client. Instead, the server should return a timestamp of when the request was processed, or the update stamp of the latest entry in the database, which can then be used in subsequent queries to retrieve new updates (how far back you go on the initial query is up to you).
Dealing with concurrent updates safely is a little harder, and depends on what is supposed to happen on collision. There's nothing really different here from most of the questions and answers dealing with database-centric versions of the problem, I'm just mentioning it to note you may need to add extra fields to your API to correctly handle or detect the situation, if you haven't already.
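As a rough sketch of that pattern in Java (the type and method names here are hypothetical, not your actual API): the client never sends its own clock reading; it only echoes back the cursor the server handed it on the previous response.

import java.time.Instant;
import java.util.List;

// Hypothetical response shape: the server's "now" (or newest update stamp) plus the changes.
record UpdatesResponse(Instant serverTimestamp, List<String> updatedItems) {}

// Stand-in for the REST client; in reality this would be an HTTP call.
interface BackendApi {
    UpdatesResponse getUpdatesSince(Instant since);
}

class UpdatePoller {
    // Cursor handed back by the server; the client's own clock is never consulted.
    private Instant lastServerTimestamp = null;

    void poll(BackendApi api) {
        // First call: passing null lets the server decide how far back to look.
        UpdatesResponse response = api.getUpdatesSince(lastServerTimestamp);

        for (String item : response.updatedItems()) {
            // ... apply the update to the UI ...
        }

        // Use the server's notion of "now" as the cursor for the next poll.
        lastServerTimestamp = response.serverTimestamp();
    }
}

With this shape, a 30-second (or 30-minute) skew between the two machines no longer matters, because only one clock ever appears in the protocol.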
As a software tester I ran into an incident regarding testing on platforms with time travel (the time can be set manually to the past or future according to the requirements of the tests).
So the application time doesn't have to be the same as my local time... or should it be?
I found a bug that was caused by an inconsistency between my local time and the application time. Simple description: there are two validations. Validation #1 validates user input on the client side (using the local date), and validation #2 validates user input on the server side (using the server date). Both validations follow business rules specified in the project specification (which does not say whether they should run locally or on the server side). When those two dates are inconsistent, the result is unexpected behaviour.
However, development rejected the bug, saying my test was wrong and that it's the client's responsibility to keep those two dates in sync.
Honestly, I don't see what my local time should have to do with application behaviour. There is a lot of functionality and many rules, and server time is used as the reference point for all of them. But for that client-side validation, which is done in JavaScript, the reference point is local time (because that's the default behaviour, not because it's intentional).
So I am just asking for your opinion. Do you think it's a bug, or is it my poor understanding of the importance of local time? How do you handle these things in your projects (as a tester or developer)? This is not just an issue of testing and server time travelling; what about client "time travelling" (e.g. different time zones)? Do you put any effort into handling these things, or do you just assume that "bad local time = client problem" and not a problem for development?
In general it is going to really depend on your application, what it does, and what is required. However, as a best practice, "application time" is always UTC. As another best practice, never trust client times. There are a ton of reasons an end-user's computer's time could be completely off or out of sync.
In almost every application I've worked with, we set the application server to UTC. For end-users we have them set their timezone, which we use to determine the timezone offset. So, for example, if an end-user selects "Eastern Timezone" we'd store that setting along with a -5 hour offset (ignoring daylight saving time for brevity). If you aren't storing user settings in a database, you can still get their timezone via a client-side preference screen or automatically via JavaScript. The key is ignoring their time and just getting their timezone/offset. Then you send that timezone over to the server so you can TRANSLATE the time using the server's accurate time. This way you always have the accurate server time, plus a reference to the user's local time that you can use for reporting, logic, or display values.
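As a small sketch of that translation step using java.time (the zone ID is just an example value; a named zone also takes care of daylight saving, unlike a fixed -5 offset):

import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TimezoneDemo {
    public static void main(String[] args) {
        // The server works exclusively in UTC; this is the "accurate server time".
        Instant serverNow = Instant.now();

        // The only thing taken from the client is its zone (e.g. from a preference screen).
        ZoneId userZone = ZoneId.of("America/New_York");  // assumed example value

        // Translate for display, reporting, or logic.
        ZonedDateTime userLocal = serverNow.atZone(userZone);

        System.out.println("Stored (UTC):      " + serverNow);
        System.out.println("Shown to the user: " + userLocal);
    }
}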
So to answer your question: a bad local time in most cases needs to be accounted for and is going to be YOUR problem. End-users aren't going to be checking their clocks to make sure they're accurate; they expect your application to work without having an IT person fix their side.
I've started using a database at work that is based on SQL and Unix.
I was surprised to learn that if someone requests a change to their details at around 5 PM, or on a certain date, the person allocated the incident has to WAIT until 5 PM and make the changes manually.
I'm surprised there is no 'Apply changes later' button; there is only a 'Save' button.
I have seen complicated solutions using Java on Stack Overflow, but I am not familiar with Unix or SQL, and googling brings no results.
Would it be a simple fix?
It wouldn't have to account for any time differences, and I'm assuming it would just work off the system clock; I know Java has a calendar facility that I assume works off the PC clock.
Java
Java does indeed have a sophisticated facility for scheduling a future task to be executed. See the ScheduledExecutorService interface.
Rather than specifying a date-time, you pass the schedule method a delay expressed as a number of nanoseconds, milliseconds, seconds, minutes, hours, or days, along with a TimeUnit enum constant to indicate the granularity.
And, yes, Java depends on the host operating system for its clock to track the date-time.
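As a rough sketch, assuming the 5 PM example from the question and that computing the delay in whole seconds is acceptable:

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ApplyChangesLater {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Target: 5 PM today, or tomorrow if 5 PM has already passed.
        LocalDateTime now = LocalDateTime.now();
        LocalDateTime target = now.toLocalDate().atTime(LocalTime.of(17, 0));
        if (!target.isAfter(now)) {
            target = target.plusDays(1);
        }
        long delaySeconds = Duration.between(now, target).getSeconds();

        // schedule() takes a delay plus a TimeUnit, not a date-time.
        scheduler.schedule(
                () -> System.out.println("Applying the requested changes now"),
                delaySeconds,
                TimeUnit.SECONDS);
    }
}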
Task Master
I suggest using your database to track the jobs to be run, in conjunction with Java. If using only Java, the scheduled jobs would exist only in memory and would disappear if the Java app exits or crashes.
Instead, the Java app on launch should check the database for any pending jobs, and schedule them with an executor. Each job on completion should mark the database "task master" table row as finished.
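A minimal sketch of that launch-time rescheduling, with a hypothetical PendingJob record standing in for a row of the task-master table:

import java.time.Duration;
import java.time.Instant;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical job record; in practice this would map to a "task master" table row.
record PendingJob(long id, Instant runAt) {}

class TaskMaster {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // On launch, re-schedule everything the database still lists as pending.
    void scheduleAllPending(List<PendingJob> pendingJobs) {
        for (PendingJob job : pendingJobs) {
            long delaySeconds = Math.max(0, Duration.between(Instant.now(), job.runAt()).getSeconds());
            scheduler.schedule(() -> run(job), delaySeconds, TimeUnit.SECONDS);
        }
    }

    private void run(PendingJob job) {
        // ... perform the change, then UPDATE the task-master row to mark it finished ...
        System.out.println("Job " + job.id() + " finished");
    }

    public static void main(String[] args) {
        // Stand-in for "SELECT * FROM task_master WHERE finished = false".
        TaskMaster master = new TaskMaster();
        master.scheduleAllPending(List.of(
                new PendingJob(1, Instant.now().plusSeconds(5)),
                new PendingJob(2, Instant.now().plusSeconds(10))));
    }
}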
I have 5 servers on a LAN without Internet connection. I need them to keep the clock in sync among them.
I could configure them as NTP peers, and set a high stratum for the local clock of one of them. In this way, the other four would sync with that clock.
What I actually want is for them to agree on a time using all 5 local clocks (i.e. doing some kind of average), for reasons of robustness and precision. Is that possible with NTP?
PS: I do not want to use an external clock source.
EDIT: and no scripting outside NTP's features; that could only make precision worse :)
If you average 5 drifting clocks, the only thing you get is another drifting clock that's harder to correct. It won't be more precise. NTP uses multiple servers to increase precision because it takes network latency into account. Since all your systems are on a fast local network, you just need one server.
Set up two systems as NTP servers, one primary and, if you feel the need, one backup. Have all other systems synchronize to them. This will be significantly easier to set up than the clock-averaging solution, and you won't have to develop any crazy scripts.
You might be able to have one of them listen for the times from each computer, compute an average, set that average as its own time, and broadcast it to all the other computers. It seems a little excessive, though.
You can set up one of them as an NTP server that broadcasts its time on the local network, and the others as slaves that listen on the local network.
EDIT:
I missed the average part. Well, in that case, you can probably write a script on the local server to collect times from all the slaves, compute the average, and update its own time with that value.
You may even want to get rid of NTP in that case and just use the script to update the time on all the servers.
I wish I could give a definitive proposal, but I don't know enough about your environment. No matter what, you'll likely be doing some sort of script kung fu.
If it's Unix/Linux, I would set every machine up with SSH authorized keys to poll each other's date +%s command (to get the epoch), average those times with awk or something, and then set the machine's own local date.
Or perhaps it would be more secure (and reliable) to have one authoritative machine check everyone's time in the same manner, average it, and then provision itself and every other host with that average.
On Windows you'll probably be looking into VBScript and WMI.
EDIT:
You may run into some weird problems if anyone's clock drifts ahead of the average, and my guess is about half of them will ;). Future timestamps can be rather strange. It will be up to you to determine how frequently this synchronization needs to occur.