In Cocoa, NSEvent has a timestamp property that returns an NSTimeInterval representing "The time when the event occurred in seconds since system startup." I am writing an application which needs exactly this information: precisely when the user presses keys. But I am worried that the timestamp might not be accurate enough. NSTimeInterval itself has sub-millisecond precision, which is great. Is there any documentation that indicates whether or not that sub-millisecond precision is used to provide accurate timestamps for NSEvents representing keyboard and mouse input?
If I had to guess at Apple's implementation, I would hope that the timestamp on the NSEvent is generated soon after a hardware interrupt, which would be fantastic. However, I could also imagine a system which polls for keyboard/mouse inputs and only populates the timestamp field when the next poll interval comes around and reads the inputs.
Thanks very much for any insight.
You may find Apple's Cocoa Event Handling Guide: Event Architecture document a useful read. From that:
Before it dispatches an event to an application, the window server processes it in various ways; it time-stamps it, annotates it with the associated window and process port, and possibly performs other tasks as well.
This seems to indicate the Window Server, which lives between the kernel and the application layer, applies the timestamp... though it is possible it reads metadata that was generated at a lower level in order to do this.
I would think the best way to get a sense of this is to write an app that records timestamps of key down/up events and look for exact matches in adjacent events. If you see several events in a row with (nearly) identical timestamps, it's likely they were queued in some buffer before being timestamped. If there is enough of a gap between adjacent events, the stamp is more likely being applied closer to the hardware event.
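The analysis step of that experiment can be sketched as follows. This is illustrative Python, not Cocoa; the timestamp arrays are made-up sample data standing in for recorded NSEvent.timestamp values, and the 1 ms epsilon is an arbitrary choice.

```python
# Sketch: decide whether recorded event timestamps look batched (runs of
# near-identical values) or spread out per event. Sample data is invented;
# a real test would record NSEvent.timestamp values from key events.

def batch_ratio(timestamps, eps=0.001):
    """Fraction of adjacent timestamp pairs closer than eps seconds."""
    pairs = list(zip(timestamps, timestamps[1:]))
    if not pairs:
        return 0.0
    close = sum(1 for a, b in pairs if abs(b - a) < eps)
    return close / len(pairs)

# Timestamps clustered in bursts suggest buffering before stamping.
batched = [10.000, 10.0001, 10.0002, 10.016, 10.0161, 10.0162]
spread  = [10.000, 10.007, 10.013, 10.021, 10.028, 10.036]

print(batch_ratio(batched))  # high ratio -> likely batched
print(batch_ratio(spread))   # low ratio -> stamped per event
```

A high ratio on real key-event data would point toward the polling/buffering scenario; a low one toward per-event stamping.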
Related
I am a new LabVIEW user. I am trying to implement a real-time controller in LabVIEW. For that purpose, I started with an analogue input/output exercise. As part of the learning process, I tried applying an input to a system, reading the data, and feeding it back through an analogue channel. However, I noticed a significant delay of about 1 ms between input and output. I then tried the simplest possible exercise: I generated an input signal, read it through LabVIEW, and fed it back out again, so basically a task for the ADC and DAC only. It still showed the same amount of delay. I was under the impression that hardware-timed reads and writes would reduce the delay, but there was no change.
Could you help me out with this issue? Any kind of advice would be really helpful.
What you're seeing might be the result of the while loop method you're using.
The while loop picks up data from the AI once every time it goes around and then sends that to the AO. This ends up being done in batches (maybe with a batch size of 1, but still) and you could be seeing a delay caused by that conversion of batches.
If you need more control over timing, LabVIEW does provide that. Read up on the different clocks, sample clocks and trigger clocks for example. They allow you to preload output signals into a niDAQ buffer and then output the signals point by point at specific, coordinated moments.
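To see why the point-by-point loop has a latency floor, consider this back-of-the-envelope sketch (plain Python, nothing LabVIEW-specific; the rates are illustrative, not measured from any DAQ hardware):

```python
# Sketch: a read-then-write software loop can output a sample no earlier
# than one full loop period after acquiring it, so the loop rate sets a
# hard floor on input-to-output delay regardless of hardware timing.

def loop_latency_ms(loop_rate_hz):
    """Minimum input-to-output delay of a read-then-write loop, in ms."""
    period_s = 1.0 / loop_rate_hz
    return period_s * 1000.0

# At a 1 kHz loop rate the floor is 1 ms, matching the observed delay.
print(loop_latency_ms(1000))   # 1.0
print(loop_latency_ms(10000))  # 0.1
```

That is why hardware-timed reads and writes alone don't help here: as long as each sample makes a round trip through a software loop iteration, the loop period dominates.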
For example, I need a timer on the page so that I can have an action every 100 ms:
type Action = Tick Time
and I have a time field in my Model. The Model could be big, but I would need to recreate it and the whole view every 100 ms because of the time field. I think that would not be efficient performance-wise.
Is there another approach or I shouldn't worry about such thing?
The whole view isn't necessarily being recreated every time. Elm uses a Virtual DOM and does diffs to change only the bare minimum of the actual DOM. If large parts of your view are actually changing on every 100 ms tick, then that could obviously cause problems, but I'm guessing you're only making small adjustments every 100 ms, and you probably have nothing to worry about. Take a look at your developer tools to see whether process utilization is spiking.
Your model isn't being recreated every 100ms either. There are optimizations around the underlying data structures (see this related conversation about foldp internals) that let you think in terms of immutability and pureness, but are optimized under the hood.
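To make the diffing idea concrete, here is a toy sketch of it in Python (not Elm's actual algorithm or node types, just the principle): compare the old and new view trees and touch only the paths whose values changed.

```python
# Sketch of the virtual-DOM idea: diff two view trees and report only the
# nodes whose values changed; only those would get real DOM writes. The
# tree shape here is invented for illustration.

def diff(old, new, path=""):
    """Return the paths whose values differ between two nested dicts."""
    changes = []
    for key in new:
        p = f"{path}/{key}"
        if isinstance(new[key], dict):
            changes += diff(old.get(key, {}), new[key], p)
        elif old.get(key) != new[key]:
            changes.append(p)
    return changes

old_view = {"header": {"title": "App"}, "clock": {"text": "12:00:00.1"}}
new_view = {"header": {"title": "App"}, "clock": {"text": "12:00:00.2"}}

print(diff(old_view, new_view))  # only the clock text needs a DOM write
```

So even though your view function runs every tick, the expensive part (touching the real DOM) stays proportional to what actually changed.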
I'm working on an IRC bot in VB.NET 2012. I know, I know, but it's not a botnet, lol.
I wanted to ask your advice on how to manage too many timers. Right now I have a timer that rewards points at a user-specified interval and a timer that autosaves those points at a user-specified interval. I also have one that displays advertisements and one that broadcasts a collection of responses at specified intervals.
I feel like it's getting out of hand and would love to know if there is a way I could do all of these things with a single timer, or at least fewer timers.
Please keep in mind I learn as I go and don't always understand all the terminology, but I am a quick learner if you have the patience to explain. :)
Thank you.
Yes, of course you can do them with a single timer. In fact, that is what you should do—timers are a limited resource, and there's hardly ever a reason for a single application to use more than one of them.
What you do is create a single timer that ticks at the most frequent interval required by all of your logic. Then, inside of that timer's Tick event handler, you set/check flags that indicate how much time has elapsed. Depending on which interval has expired, you perform the appropriate action and update the state of the flags. By "flags", I mean module-level variables that keep track of the various intervals you want to track—the ones you're tracking now with different timers.
It is roughly the same way that you keep track of time using a clock. You don't use separate clocks for every task you want to time, you use a single clock that ticks every second. You operate off of displacements from this tick—60 of these ticks is 1 minute, 3600 of these ticks is 1 hour, etc.
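Here is the counter approach sketched out (in Python for brevity rather than VB.NET; the task names and tick counts are made up to mirror your bot's jobs):

```python
# Sketch: multiplex several intervals onto one base timer. Tick at the
# most frequent interval needed and count ticks per task; a task fires
# whenever the tick count reaches a multiple of its interval.

class MultiTimer:
    def __init__(self, tasks):
        # tasks: {name: interval measured in base ticks}
        self.tasks = tasks
        self.ticks = 0

    def tick(self):
        """Called once per base interval; returns the tasks due this tick."""
        self.ticks += 1
        return [name for name, every in self.tasks.items()
                if self.ticks % every == 0]

timer = MultiTimer({"award_points": 2, "autosave": 4, "advertise": 3})
fired = [timer.tick() for _ in range(4)]
print(fired)  # tick 2: award_points; tick 3: advertise; tick 4: both others
```

In VB.NET the `tick` method becomes the single timer's `Tick` event handler, and the counters become module-level variables.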
However, I strongly recommend figuring out a way to do as many of these things as possible in response to events, rather than at regular intervals tracked by a timer. You could, for example, reward points in response to specific user actions. This type of polling is very resource-intensive and rarely the best solution.
NOTE: This question has been ported over from Programmers since it appears to be more appropriate here given the limitation of the language I'm using (VBA), the availability of appropriate tags here and the specificity of the problem (on the inference that Programmers addresses more theoretical Computer Science questions).
I'm attempting to build a Discrete Event Simulation library by following this tutorial and fleshing it out. I am limited to using VBA, so "just switch to [insert language here] and it's easy!" is unfortunately not possible. I have specifically chosen to implement this in Access VBA to have a convenient location to store configuration information and metrics.
How should I handle logging metrics in my Discrete Event Simulation engine?
If you don't want/need background, skip to The Design or The Question section below...
Simulation
The goal of a simulation of this type is to model a process in order to perform analysis of it that wouldn't be feasible or cost-effective in reality.
The canonical example of a simulation of this kind is a Bank:
Customers enter the bank and get in line with a statistically distributed frequency
Tellers are available to handle customers from the front of the line, one by one, each taking an amount of time drawn from a modelable distribution
As the line grows longer, the number of tellers available may have to be increased or decreased based on business rules
You can break this down into generic objects:
Entity: These would be the customers
Generator: This object generates Entities according to a distribution
Queue: This object represents the line at the bank. Queues find much real-world use as a buffer between a source of customers and a limited service.
Activity: This is a representation of the work done by a teller. It generally processes Entities from a Queue
Discrete Event Simulation
Instead of a continuous tick-by-tick simulation such as one might do with physical systems, a "Discrete Event" Simulation recognizes that in many systems only critical events require processing, and the rest of the time nothing important to the state of the system is happening.
In the case of the Bank, critical events might be a customer entering the line, a teller becoming available, the manager deciding whether or not to open a new teller window, etc.
In a Discrete Event Simulation, the flow of time is kept by maintaining a Priority Queue of Events instead of an explicit clock. Time is incremented by popping the next event in chronological order (the minimum event time) off the queue and processing as necessary.
The Design
I've got a Priority Queue implemented as a Min Heap for now.
In order for the objects of the simulation to be processed as events, they implement an ISimulationEvent interface that provides an EventTime property and an Execute method. Those together mean the Priority Queue can schedule the events, then Execute them one at a time in the correct order and increment the simulation clock appropriately.
The simulation engine is a basic event loop that pops the next event and Executes it until there are none left. An event can reschedule itself to occur again or allow itself to go idle. For example, when a Generator is Executed it creates an Entity and then reschedules itself for the generation of the next Entity at some point in the future.
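The loop described above can be sketched in a few lines, here in Python with `heapq` standing in for the Min Heap (the `Simulation`/`generator` names mirror the design but the code is illustrative, not the VBA implementation):

```python
# Minimal sketch of the discrete-event loop: a min-heap of scheduled
# events, popped in chronological order; the clock jumps to each event's
# time, and an event may reschedule itself before going idle.
import heapq

class Simulation:
    def __init__(self):
        self.clock = 0.0
        self.queue = []      # min-heap of (event_time, seq, action)
        self.seq = 0         # tie-breaker so heapq never compares actions

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.clock + delay, self.seq, action))
        self.seq += 1

    def run(self):
        while self.queue:
            time, _, action = heapq.heappop(self.queue)
            self.clock = time    # jump straight to the next event
            action(self)

log = []
def generator(sim):
    """Stand-in for a Generator: log a 'birth', then reschedule itself."""
    log.append(sim.clock)
    if sim.clock < 3:
        sim.schedule(1.0, generator)

sim = Simulation()
sim.schedule(1.0, generator)
sim.run()
print(log)  # [1.0, 2.0, 3.0]
```

Note the sequence counter: with a pure `(time, action)` heap, two events at the same time would force a comparison of the actions themselves, so a tie-breaker is needed. The same issue arises in a hand-rolled VBA Min Heap.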
The Question
How should I handle logging metrics in my Discrete Event Simulation engine?
In the midst of this simulation, it is necessary to take metrics. How long are Entities waiting in the Queue? How many Activity resources are being utilized at any one point? How many Entities were generated since the last metrics were logged?
It follows logically that the metric logging should be scheduled as an event to take place every few units of time in the simulation.
The difficulty is that this ends up being a cross-cutting concern: metrics may need to be taken of Generators or Queues or Activities or even Entities. Consider also that it might be necessary to take derivative calculated metrics: e.g. measure a, b, c, and ((a-c)/100) + Log(b).
I'm thinking there are a few main ways to go:
Have a single, global Stats object that is aware of all of the simulation objects. Have the Generator/Queue/Activity/Entity objects store their properties in an associative array so that they can be referred to at runtime (VBA doesn't support much in the way of reflection). This way the statistics can be attached as needed Stats.AddStats(Object, Properties). This wouldn't support calculated metrics easily unless they are built into each object class as properties somehow.
Have a single, global Stats object that is aware of all of the simulation objects. Create some sort of ISimStats interface for the Generator/Queue/Activity/Entity classes to implement that returns an associative array of the important stats for that particular object. This would also allow runtime attachment, Stats.AddStats(ISimStats). The calculated metrics would have to be hardcoded in the straightforward implementation of this option.
Have multiple Stats objects, one per Generator/Queue/Activity/Entity as a child object. This might make it easier to implement simulation object-specific calculated metrics, but clogs up the Priority Queue a little bit with extra things to schedule. It might also cause tighter coupling, which is bad :(.
Some combination of the above or completely different solution I haven't thought of?
Let me know if I can provide more (or less) detail to clarify my question!
Any and every performance metric is a function of the model's state. The only time the state changes in a discrete event simulation is when an event occurs, so events are the only time you have to update your metrics. If you have enough storage, you can log every event, its time, and the state variables which got updated, and retrospectively construct any performance metric you want. If storage is an issue, you can calculate some performance measures within the events that affect those measures. For instance, the appropriate time to calculate delay in queue is when a customer begins service (assuming you tagged each customer object with its arrival time). For delay in system it's when the customer ends service. If you want average delays, you can update the averages in those events. When somebody arrives, the size of the queue gets incremented; when they begin service, it gets decremented. Etc., etc., etc.
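The tag-and-update pattern for queue delay looks like this (a sketch in Python; the `DelayStats` name and method shape are illustrative, not part of the design above):

```python
# Sketch: compute queue delay inside the begin-service event. Each
# customer carries its arrival time; the statistic is updated only at
# the event that ends its wait.

class DelayStats:
    def __init__(self):
        self.total_delay = 0.0
        self.count = 0

    def begin_service(self, arrival_time, now):
        """Called from the begin-service event with the current clock."""
        self.total_delay += now - arrival_time
        self.count += 1

    def average(self):
        return self.total_delay / self.count if self.count else 0.0

stats = DelayStats()
stats.begin_service(arrival_time=1.0, now=4.0)   # waited 3 units
stats.begin_service(arrival_time=2.0, now=5.0)   # waited 3 units
print(stats.average())  # 3.0
```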
You'll have to be careful calculating statistics such as average queue length, because you have to weight the queue lengths by the amount of time you were in that state: Avg(queue_length) = (1/T) integral[queue_length(t) dt]. Since the queue_length can only change at events, this actually boils down to summing the queue lengths multiplied by the amount of time you were at that length, then divide by total elapsed time.
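The time-weighted computation boils down to a short function; here is a sketch (the sample trajectory is made up):

```python
# Sketch of a time-weighted average: weight each queue length by the
# time spent at that length, then divide by total elapsed time.

def time_weighted_average(changes, end_time):
    """changes: sorted list of (event_time, new_length); length starts at 0."""
    total = 0.0
    last_t, length = 0.0, 0
    for t, new_length in changes:
        total += length * (t - last_t)   # area under the step function
        last_t, length = t, new_length
    total += length * (end_time - last_t)
    return total / end_time

# Length 0 until t=2, then 1 until t=5, then 2 until t=10:
# (0*2 + 1*3 + 2*5) / 10 = 1.3
print(time_weighted_average([(2, 1), (5, 2)], end_time=10))
```

A plain (unweighted) average of the observed lengths 0, 1, 2 would give 1.0, which is wrong precisely because the system spent half its time at length 2.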
I'm using GPS units and mobile computers to track individual pedestrians' travels. I'd like to "clean" the incoming GPS signal in real time to improve its accuracy. Also, after the fact (not necessarily in real time), I would like to "lock" individuals' GPS fixes to positions along a road network. Do you have any techniques, resources, algorithms, or existing software to suggest on either front?
A few things I am already considering in terms of signal cleaning:
- drop fixes for which num. of satellites = 0
- drop fixes for which speed is unnaturally high (say, 600 mph)
And in terms of "locking" to the street network (which I hear is called "map matching"):
- lock to the nearest network edge based on root mean squared error
- when fixes are far away from road network, highlight those points and allow user to use a GUI (OpenLayers in a Web browser, say) to drag, snap, and drop on to the road network
Thanks for your ideas!
I assume you want to "clean" your data to remove erroneous spikes caused by dodgy readings. This is a basic DSP process. There are several approaches you could take; it depends how clever you want it to be.
At a basic level, yes, you can just look for really large figures, but what counts as a really large figure? Sure, 600 mph is fast, but not if you're on Concorde. Whilst you are looking for a value which is "out of the ordinary", you are effectively hard-coding "ordinary". A better approach is to examine past data to determine what "ordinary" is, and then look for deviations. You might calculate the variance of the data over a small local window and then check whether the z-score of the current reading exceeds some threshold; if so, exclude it.
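A minimal sketch of that windowed z-score test, in Python (the window contents and the threshold of 3 are arbitrary illustrative choices, not tuned values):

```python
# Sketch: moving-window outlier rejection. Compare each new speed reading
# against the mean/std-dev of the preceding window instead of against a
# hard-coded limit.
import statistics

def is_outlier(window, value, threshold=3.0):
    """True if value is more than threshold std-devs from the window mean."""
    mean = statistics.mean(window)
    stdev = statistics.stdev(window)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

speeds = [3.1, 2.9, 3.0, 3.2, 2.8]        # walking pace, mph
print(is_outlier(speeds, 600.0))  # True: a GPS glitch, not Concorde
print(is_outlier(speeds, 3.3))    # False: plausible next reading
```

In a streaming setting you would slide the window forward as accepted readings arrive, so "ordinary" adapts to whatever the pedestrian is actually doing.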
One note: you should use 3 as the minimum satellites, not 0. A GPS needs at least three sources to calculate a horizontal location. Every GPS I have used includes a status flag in the data stream; less than 3 satellites is reported as "bad" data in some way.
You should also consider "stationary" data. How will you handle the pedestrian standing still for some period of time? Perhaps waiting at a crosswalk or interacting with a street vendor?
Depending on what you plan to do with the data, you may need to suppress those extra data points or average them into a single point or location.
You mention this is for pedestrian tracking, but you also mention a road network. Pedestrians can travel a lot of places where a car cannot, and, indeed, which probably are not going to be on any map you find of a "road network". Most road maps don't have things like walking paths in parks, hiking trails, and so forth. Don't assume that "off the road network" means the GPS isn't getting an accurate fix.
In addition to Andrew's comments, you may also want to consider interference factors such as multipath, and how they are affected in your incoming GPS data stream, e.g. HDOPs in the GSA line of NMEA0183. In my own GPS controller software, I allow user specified rejection criteria against a range of QA related parameters.
I also tend to work on a moving window principle in this regard, where you can consider rejecting data that represents a spike based on surrounding data in the same window.
Read the position fix to see if the signal is valid (it's in the $GPGGA sentence if you parse raw NMEA strings). If it's 0, ignore the message.
Besides that you could look at the combination of HDOP and the number of satellites if you really need to be sure that the signal is very accurate, but in normal situations that shouldn't be necessary.
Of course it doesn't hurt to do some sanity checks on GPS signals:
latitude between -90..90;
longitude between -180..180 (or E..W, N..S, 0..90 and 0..180 if you're reading raw NMEA strings);
speed between 0 and 255 (for normal cars);
distance to the previous measurement (based on lat/lon) matches roughly with the indicated speed;
time difference with system time not larger than x (unless the system clock cannot be trusted or relies on GPS synchronisation :-) );
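The range checks from that list collapse into one small predicate; here is a sketch in Python (the 255 speed ceiling comes from the list above, and a pedestrian tracker would of course use a far lower limit):

```python
# Sketch: basic range sanity checks on a single GPS fix, mirroring the
# checklist above. Returns False for physically impossible values.

def is_sane_fix(lat, lon, speed, max_speed=255.0):
    """True if latitude, longitude, and speed are all within valid ranges."""
    return (-90.0 <= lat <= 90.0
            and -180.0 <= lon <= 180.0
            and 0.0 <= speed <= max_speed)

print(is_sane_fix(52.37, 4.90, 3.0))     # True: plausible fix
print(is_sane_fix(91.0, 4.90, 3.0))      # False: impossible latitude
print(is_sane_fix(52.37, 4.90, 600.0))   # False: impossible speed
```

The distance-versus-speed and clock-difference checks need the previous fix and the system time as extra inputs, but follow the same pattern.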
To do map matching, you basically iterate through your road segments and check which segment is most likely given your current position, direction, speed, and possibly previous GPS measurements and matches.
If you're not doing a realtime application, or if a delay in feedback is acceptable, you can even look into the 'future' to see which segment is the most likely.
Doing all that properly is an art by itself, and this space here is too short to go into it deeply.
It's often difficult to decide with 100% confidence on which road segment somebody resides. For example, if there are 2 parallel roads that are equally close to the current position it's a matter of creative heuristics.
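The geometric core of that nearest-segment step can be sketched as follows (plain Python in a flat x/y plane for simplicity; real code would work in projected coordinates and add the direction/speed/history heuristics discussed above):

```python
# Sketch: nearest-segment map matching. Project the fix onto each
# candidate road segment (clamped to the endpoints) and pick the
# segment with the smallest distance.
import math

def distance_to_segment(p, a, b):
    """Distance from point p to segment a-b, clamping to the endpoints."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0:                      # degenerate: a == b
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def nearest_segment(p, segments):
    return min(segments, key=lambda s: distance_to_segment(p, s[0], s[1]))

roads = [((0, 0), (10, 0)),   # east-west street
         ((0, 5), (10, 5))]   # parallel street to the north
print(nearest_segment((3, 1), roads))  # ((0, 0), (10, 0))
```

With two equally close parallel roads, as in the last paragraph above, both distances come out identical and pure geometry cannot break the tie; that is exactly where heading and trajectory history have to come in.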