I have been trying to create a timer program in VB 2010 that is accurate to 0.05 seconds (if possible, 0.01 s).
I inserted a timer into the form (Timer1, Interval = 50).
The code that runs when the timer ticks:
' Each tick of Timer1 (Interval = 50) adds 5 hundredths of a second
intdsecond = intdsecond + 5
If intdsecond > 99 Then
    intdsecond = intdsecond - 100
    intsecond = intsecond + 1
End If
If intsecond > 59 Then
    intsecond = intsecond - 60
    intminute = intminute + 1
End If
Note: intdsecond, intsecond and intminute are global variables used to record hundredths of a second, seconds, and minutes.
But when I ran the timer for 1 min, the recorded time was 48.05 s.
How can I make my timer more accurate? Is there anything I have done wrong in the code?
Extra info: I am using Windows 7, VB 2010, and .NET Framework 4 Client Profile.
If this is the System.Windows.Forms.Timer, it is not accurate to 50 ms:
The Windows Forms Timer component is single-threaded, and is limited to an accuracy of 55 milliseconds. If you require a multithreaded timer with greater accuracy, use the Timer class in the System.Timers namespace.
See the Remarks section in the documentation for System.Windows.Forms.Timer.
You might also consider System.Diagnostics.Stopwatch. It doesn't raise an event when the interval elapses, but if all you care about is the total elapsed time, it does provide some convenient properties (e.g. ElapsedMilliseconds).
You shouldn't write your timer logic expecting the interval to be perfectly precise. A variety of factors can delay the timer (thread priority, latency in the code, etc.), and the shorter the interval, the greater the relative error (and 50 milliseconds is quite short). Instead, store the start time beforehand, always compare the current time against it, and use that difference for display purposes.
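For example, here is a minimal sketch of that approach for the question's 0.01 s display. The control names StartButton and Label1 are assumptions; the tick handler only refreshes the display, while a Stopwatch supplies the real elapsed time, so tick jitter no longer accumulates in the displayed value.
' Requires Imports System.Diagnostics; StartButton and Label1 are assumed controls.
Private ReadOnly watch As New Stopwatch()

Private Sub StartButton_Click(sender As Object, e As EventArgs) Handles StartButton.Click
    watch.Restart() ' record the true start time
    Timer1.Start()
End Sub

Private Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick
    ' Display minutes, seconds and hundredths from the measured time,
    ' instead of accumulating counts per tick.
    Dim elapsed As TimeSpan = watch.Elapsed
    Label1.Text = String.Format("{0}:{1:00}.{2:00}",
                                elapsed.Minutes, elapsed.Seconds, elapsed.Milliseconds \ 10)
End Sub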
I am looking to add a variable in AnyLogic that counts from 1 to 217 every hour, to use as a choice condition to set a parameters row reference.
I am assuming I either need to use an event or a statechart, but I am really struggling with the exact approach and cannot find anything online.
If you have any tips, please let me know; any help would be appreciated.
Thank you,
Tash
A state machine isn't necessary in this case, as this can be achieved with a calculation or a timed event. AnyLogic has a time() function which returns the time since model start as a double, in the model's time units.
For example: if the model time unit is seconds and the model has been running for 2 hr 2 min 10 sec, then time(SECOND) will return 7330.0 (it is always a double value). 1/217th of an hour corresponds to about 3600/217 = 16.59 seconds. Java also has a handy function, Math.floor(), which rounds a double value down, so Math.floor(8.37) = 8.0.
Assembling it all together:
// how many full hours have elapsed since the start of the model
double fullHrsFromStart = Math.floor(time(HOUR));
// how many seconds have elapsed in the current model hour
double secondsInCurrentHour = time(SECOND) - fullHrsFromStart * 3600.0;
// how many full 1/217th-of-an-hour intervals (about 16.59 s each) have elapsed;
// dividing by the exact ratio 3600.0 / 217.0 avoids the drift of rounding to 16.58
int fullIntervals = (int)(secondsInCurrentHour / (3600.0 / 217.0));
This can be packaged into a function and called at any time; it is pretty fast.
Alternatively, an Event can be created which increments a count by 1 every 1/217th of an hour (about 16.59 seconds) and then resets it back to 0 when the count reaches 217, as sketched below.
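A rough sketch of that event's action code, assuming a cyclic Event with recurrence time 3600.0 / 217.0 model seconds and an int variable count defined next to it (both names are assumptions):
// Fires once per 1/217th of an hour; "count" is an int variable on the agent.
count++;
if (count >= 217)
    count = 0; // reset after a full hour's worth of intervals has elapsed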
I have the following problem on Windows 7, using VB.NET with .NET Framework 4.0.
I have to send a buffer of bytes via a serial port. The PC acts as the master, and a device connected as a slave receives the buffer. Each byte must be separated from the next by a certain amount of time, expressed in microseconds.
This is my code snippet:
Dim t1 As New Stopwatch
Dim WatchTotal As New Stopwatch

WatchTotal.Reset()
WatchTotal.Start()
t1.Stop()

For i As Integer = 0 To _buffer.Length - 1
    SerialPort1.Write(_buffer, i, 1)
    t1.Reset()
    t1.Start()
    While ((1000000000 * t1.ElapsedTicks / Stopwatch.Frequency) < 50000) ' wait 50us
    End While
    t1.Stop()
Next

WatchTotal.Stop()
Debug.Print(WatchTotal.ElapsedMilliseconds)
Everything loops inside a thread.
Everything works correctly, but on a Windows 7 machine SerialPort.Write takes 1 ms for a single byte, so sending 1024 bytes takes 1024 ms.
This is confirmed by printing the elapsed time:
Debug.Print(WatchTotal.ElapsedMilliseconds)
The problem seems to be in the SerialPort.Write method.
The same code on a Windows 10 machine takes less than 1 ms.
The problem is more visible when we have to send many buffers of bytes; in this case we send 16 buffers of 1027 bytes each. On Windows 7 this takes a little less than 20 seconds; on Windows 10 it takes half that or less (sending one buffer of 1027 bytes takes approximately 120-150 ms, and all 16 buffers take less than 5 seconds).
Does anyone have any idea what that might depend on?
Thanks
EDIT 22/05/2020
If I remove the debug printing and the little delay that paces the communication, I still get about 1027 ms to send 1027 bytes, so I think the problem lies in the SerialPort.Write method alone, not in the timing or the Stopwatch object. This happens on a Windows 7 machine. The same executable on a Windows 10 machine runs as fast as expected.
For i As Integer = 0 To _buffer.Length - 1
    SerialPort1.Write(_buffer, i, 1)
Next
One thing: your wait code seems cumbersome; try this.
' One TimeSpan tick is 100 ns, so 10 ticks make a microsecond.
' Note: Stopwatch.ElapsedTicks is measured in Stopwatch.Frequency units,
' so use Elapsed.Ticks (always 100-ns units) for this comparison.
Const wait As Long = 50L * 10L ' wait 50us, expressed as 100-ns ticks
While t1.Elapsed.Ticks < wait
End While
Busy loops are problematic and vary from machine to machine. As I recall, serial port handling was not very good on Windows 7, but I could be mistaken.
It is hard to believe that the receiver is that time-sensitive.
If the Windows 7 workstation doesn't have, or isn't using, a high-resolution timer, that could account for the described difference.
From the Remarks section of the Stopwatch class documentation:
The Stopwatch measures elapsed time by counting timer ticks in the underlying timer mechanism. If the installed hardware and operating system support a high-resolution performance counter, then the Stopwatch class uses that counter to measure elapsed time. Otherwise, the Stopwatch class uses the system timer to measure elapsed time. Use the Frequency and IsHighResolution fields to determine the precision and resolution of the Stopwatch timing implementation.
Check the IsHighResolution field to determine if this is what is occurring.
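For example, a quick diagnostic sketch (a console app, with Imports System.Diagnostics assumed):
' Reports which timing mechanism Stopwatch is using on this machine.
If Stopwatch.IsHighResolution Then
    Console.WriteLine("Stopwatch uses the high-resolution performance counter.")
Else
    Console.WriteLine("Stopwatch falls back to the system timer (low resolution).")
End If
' Nanoseconds per tick = 1 second / Frequency.
Console.WriteLine("Resolution: {0:0.##} ns per tick", 1000000000.0 / Stopwatch.Frequency)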
I have a simple question that confused me a bit. I have 2 processors, both of which can individually do 1 billion operations in 33.0360723 seconds.
Yet together they do the same operations in 27.4996964 seconds.
This makes no sense to me: if the time for a task on one processor is X, shouldn't it be X/2 for the two of them together?
My code:
Function calc(ByVal i As Integer, ByVal result As String)
    Math.Sqrt(i) ' the result is discarded; this is just busy work
    Return True
End Function

Sub Main()
    Dim result As String = Nothing
    Dim starttime As TimeSpan
    starttime = DateTime.Now.TimeOfDay

    For i = 0 To 1000000000
        calc(i, result)
    Next
    Console.WriteLine("A single processor runs 1 billion operations in: ")
    Console.WriteLine(DateTime.Now.TimeOfDay - starttime)

    starttime = DateTime.Now.TimeOfDay
    Parallel.For(0, 1000000000, Function(i) calc(i, result))
    Console.WriteLine("All your processors run 1 billion operations in: ")
    Console.WriteLine(DateTime.Now.TimeOfDay - starttime)
    Console.ReadLine()
End Sub
PS: I wrote the code for this in VB.NET.
If a person can walk 2 miles in 30 minutes, how long will it take 2 people to walk the same 2 miles?
All jokes aside, the documentation on MSDN says: "Executes a for (For in Visual Basic) loop in which iterations MAY run in parallel." The keyword here is MAY.
You are letting the CLR do the work, and experience says that the .NET CLR does not always work the way you would expect.
In my case (I copy-pasted the code): single processor 21.495 seconds, all processors 7.03 seconds. I have an i7 870 CPU on 32-bit Windows 7.
In Parallel.For, the order of iteration is not necessarily the order of the sequential loop.
Also, what your function does is Sqrt(i), which means one processor might be computing square roots of smaller numbers and another square roots of larger numbers.
The simple answer is that the work done by each processor is not exactly half of the whole work you gave them, so they are unlikely to take equal time.
One processor might have done more work, or one might have finished its work and waited for the other. Or one of the processors might have been preempted by the operating system to do some important task while your worker thread waited.
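As an aside, DateTime.Now has coarse resolution (typically around 15 ms on Windows), so it is a blunt instrument for benchmarks. Here is a minimal sketch of the same measurement using Stopwatch instead, reusing calc and result from the question's code:
' Requires Imports System.Diagnostics and System.Threading.Tasks.
Dim sw As Stopwatch = Stopwatch.StartNew()
For i = 0 To 1000000000
    calc(i, result)
Next
sw.Stop()
Console.WriteLine("Sequential: {0} ms", sw.ElapsedMilliseconds)

sw.Restart()
Parallel.For(0, 1000000000, Function(i) calc(i, result))
sw.Stop()
Console.WriteLine("Parallel:   {0} ms", sw.ElapsedMilliseconds)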
Although you can pass sub-second times to performSelector:withObject:afterDelay:, it appears that the timer will fire as quickly as it can for any delay under 1 sec. For example, if I set the delay to 100 msec (0.100) or 10 msec (0.010), the timer will still fire in 2 or 3 msec. Is this a known limitation?
For performSelector:withObject:afterDelay:, the documentation for the delay parameter reads:
delay — The minimum time before which the message is sent. Specifying a delay of 0 does not necessarily cause the selector to be performed immediately. The selector is still queued on the thread’s run loop and performed as soon as possible.
Compare this to NSTimer, where the documentation reads:
seconds — The number of seconds between firings of the timer. If seconds is less than or equal to 0.0, this method chooses the nonnegative value of 0.1 milliseconds instead.
It appears that performSelector:withObject:afterDelay: uses its delay setting just like NSTimer's seconds setting when a negative value is provided.
Can anyone confirm that that is correct?
As a follow-up, I discovered that performSelector:withObject:afterDelay: was working just fine, and that it wasn't triggering at sub-second intervals because I was passing it an int delay, as follows:
int delay = 0.025; // intended 25 msec, but the int silently truncates 0.025 to 0
[self performSelector:@selector(blahBlah:) withObject:nil afterDelay:delay];
OK, my bad! However, this leads to another observation: I thought the compiler would report a "loss of precision" warning when converting a double to an int without an explicit cast, but it does not. Beware!
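The fix is simply to keep the delay in a floating-point type; a minimal sketch:
NSTimeInterval delay = 0.025; // 25 msec survives, since NSTimeInterval is a double
[self performSelector:@selector(blahBlah:) withObject:nil afterDelay:delay];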
If you set the delay to 100 msec (0.100) using performSelector:withObject:afterDelay:, it will NOT fire in 2 or 3 msec. It will be scheduled on the run loop after 100 msec and will wait until the run loop has a chance to perform it, so it may fire after 102 or 103 msec.
I use NSTimer to count from a certain moment.
int totalSeconds;
int totalMinutes;
int totalHours;
When totalSeconds reaches 60, totalMinutes is incremented. It's very simple and should work.
For example, I started the NSTimer together with the clock of my Mac (running on the simulator).
When I compare the clock of my Mac with the timer, for the first 10-20 seconds it counts perfectly in sync. After that it fluctuates, or runs ahead by 5 seconds or more.
So I logged my timer and found this:
2012-10-24 14:45:44.002 xxApp driveTime: 0:0:44
2012-10-24 14:45:45.002 xxApp driveTime: 0:0:45
2012-10-24 14:45:45.813 xxApp driveTime: 0:0:46
2012-10-24 14:45:46.002 xxApp driveTime: 0:0:47
The milliseconds are timed at 002, as you see, but in the third row it is 813. This happens very randomly and causes the fluctuations.
Is there a more stable way to count?
From the NSTimer documentation:
A timer is not a real-time mechanism; it fires only when one of the run loop modes to which the timer has been added is running and able to check if the timer’s firing time has passed. Because of the various input sources a typical run loop manages, the effective resolution of the time interval for a timer is limited to on the order of 50-100 milliseconds.
If your goal is to compute the total time that has passed since your program started running, this is quite easy. As soon as you want to begin keeping track of time, store the result of -[NSDate date] in a variable. Whenever you want to compute how much time has passed, call -[NSDate date] again and do the following, assuming originalDate is a property where you stored the result of the first call:
NSDate *presentDate = [NSDate date];
NSTimeInterval runningTime = [presentDate timeIntervalSinceDate:originalDate];
runningTime will be the total number of seconds that have elapsed since you started keeping track of time. To get the number of hours, minutes, seconds, and so on, an NSDateComponents object should be used, as sketched below.
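For example, a sketch of that breakdown (using the modern NSCalendarUnit constant names; older SDKs spell them NSHourCalendarUnit and so on):
// Split the elapsed interval into hours, minutes and seconds for display.
NSCalendar *calendar = [NSCalendar currentCalendar];
NSDateComponents *parts = [calendar components:(NSCalendarUnitHour | NSCalendarUnitMinute | NSCalendarUnitSecond)
                                      fromDate:originalDate
                                        toDate:[NSDate date]
                                       options:0];
NSString *display = [NSString stringWithFormat:@"%ld:%02ld:%02ld",
                     (long)parts.hour, (long)parts.minute, (long)parts.second];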
This mechanism will allow you to use a timer to update your total running time "just about once a second" without losing accuracy.