I'm trying to work out a use case for deploying beacons to detect how long customers stay in specific sections of a mall.
As I understand it, I can use one unique UUID declared as the region the app monitors, but didEnterRegion does not provide the major and minor values needed to identify which beacon was detected. The app only gets a short window, roughly 10 seconds after didEnterRegion fires, to do ranging and retrieve the major and minor.
What if I have beacons with overlapping coverage areas?
Let's say the space has 4 beacons; when a customer moves from one beacon to another, there won't be any region exit/enter trigger, because the UUID/region is still the same.
What would be a better implementation or solution for a scenario where I want to log how long a customer stays at each beacon?
Thanks
A few tips:
Use a single proximityUUID. Use a different major value for each zone in the mall, and a different minor value for each individual beacon.
Set up a CLBeaconRegion for each zone (major) and start both monitoring and ranging for each (iOS allows at most 20 monitored regions per app).
Extend background ranging to about 3 minutes per region entry/exit (e.g., by starting a background task when the region event fires).
When ranging, use the estimated distance to decide which beacon is closest. Whatever zone that beacon belongs to (its major value) is the customer's current mall zone. If it is not the same as the current zone, mark a timestamp for exiting the old zone and a timestamp for entering the new one.
If you get a monitoring region exit event for the zone the phone is currently in, mark a timestamp for exiting that zone.
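For illustration, here is a minimal Swift/CoreLocation sketch of that approach (using the pre-iOS 13 beacon API for brevity). The UUID, the four zone majors (1...4), and the logDwell helper are placeholders, not anything from the question:

import CoreLocation

// Minimal sketch, assuming one shared UUID, majors 1...4 for the mall zones,
// and a placeholder logDwell() that persists the dwell interval somewhere.
class ZoneTracker: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let mallUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")! // placeholder UUID
    private var currentZone: CLBeaconMajorValue?
    private var zoneEnteredAt: Date?

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()
        for major in CLBeaconMajorValue(1)...CLBeaconMajorValue(4) {   // one region per zone
            let region = CLBeaconRegion(proximityUUID: mallUUID, major: major,
                                        identifier: "mall.zone.\(major)")
            manager.startMonitoring(for: region)      // entry/exit events per zone
            manager.startRangingBeacons(in: region)   // gives major/minor + estimated distance
        }
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        // The closest beacon (smallest positive accuracy) decides the current zone.
        guard let nearest = beacons.filter({ $0.accuracy > 0 })
                                   .min(by: { $0.accuracy < $1.accuracy }) else { return }
        let zone = nearest.major.uint16Value
        guard zone != currentZone else { return }
        if let previous = currentZone, let since = zoneEnteredAt {
            logDwell(zone: previous, from: since, to: Date())   // left the old zone
        }
        currentZone = zone
        zoneEnteredAt = Date()                                   // entered the new zone
    }

    func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
        // Exit event for the zone we are currently in: close out its dwell interval.
        guard let beaconRegion = region as? CLBeaconRegion,
              let zone = currentZone,
              beaconRegion.major?.uint16Value == zone,
              let since = zoneEnteredAt else { return }
        logDwell(zone: zone, from: since, to: Date())
        currentZone = nil
        zoneEnteredAt = nil
    }

    private func logDwell(zone: CLBeaconMajorValue, from: Date, to: Date) {
        // placeholder: store (zone, from, to) for later analysis
    }
}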
I am using the OptaPlanner VRP example with time windows, and I get feasible solutions whenever I define time windows within a 24-hour range (00:00 to 23:59). But what I need is:
To manage long trips, where I know that the travel time from leaving the depot to the first visit, or between visits, will be more than 24 hours. Currently it does not give me workable solutions, because the time windows are in 24-hour format. When the scoring rule "arrivalAfterDueTime" is applied, the "arrivalTime" is always greater than the "dueTime", because the "dueTime" is in the range 00:00 to 23:59 while the "arrivalTime" falls on the next day.
I have thought about taking each Customer's time window and adding more time windows to it, one for each day that is planned.
For example, if I am planning a trip over 3 days, then each Customer would have 3 time windows. Something like this: if Customer 1 is available from [08:00-10:00], then it would also be available from [32:00-34:00] and [56:00-58:00], which are the equivalent of the same window on the following days.
Likewise, I handle the times as long values, converted to milliseconds.
I don't know if this is the right approach; I am mostly looking for ideas on how to handle this constraint. Perhaps you have faced a similar problem, and any idea would be much appreciated.
Sorry for the wording, I am a Spanish speaker. Thank you.
Without having checked the example, handling multiple days shouldn't be complicated. It all depends on how you model your time variable.
For example, you could:
Model the timestamps as a long value denoting seconds since the epoch. This is how most of the examples are modeled, if I remember correctly. It is not very human-readable, but it is the fastest to compute with.
Use a time data type, e.g. LocalTime. This is a human-readable time format, but it only works within the 24-hour range and will be slower than a primitive data type.
Use a date-time data type, e.g. LocalDateTime. This is also human-readable and works over any time range, but it will likewise be slower than a primitive data type.
I would strongly encourage you not to simply map the current day or current hour to a zero value and start counting from there. In your example you write the times as [32:00-34:00], which suggests you are using the current day's midnight as the 0th hour and counting from there. While you can do this, it will hurt the debuggability and maintainability of your code. That is just general advice; you don't have to follow it.
What I would advise is to have your own domain model and map it to OptaPlanner models, where every timestamp is a long value denoting seconds since the epoch.
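The mapping itself is independent of any particular library; as a rough illustration (written in Swift only to keep the arithmetic concrete, your OptaPlanner domain classes stay in Java), each customer's daily window can be expanded into one absolute epoch-seconds window per planned day. The horizon start and the 3-day length here are assumptions:

import Foundation

// Rough sketch: expand a per-day window (e.g. 08:00-10:00) into absolute
// epoch-second windows, one for each day of the planning horizon.
struct TimeWindow {
    let readyTime: Int64   // seconds since the epoch
    let dueTime: Int64     // seconds since the epoch
}

func expandDailyWindow(startOfHorizon: Date, planningDays: Int,
                       openHour: Int, closeHour: Int,
                       calendar: Calendar = .current) -> [TimeWindow] {
    let horizonMidnight = calendar.startOfDay(for: startOfHorizon)
    var windows: [TimeWindow] = []
    for day in 0..<planningDays {
        guard let dayStart = calendar.date(byAdding: .day, value: day, to: horizonMidnight),
              let open = calendar.date(bySettingHour: openHour, minute: 0, second: 0, of: dayStart),
              let close = calendar.date(bySettingHour: closeHour, minute: 0, second: 0, of: dayStart)
        else { continue }
        windows.append(TimeWindow(readyTime: Int64(open.timeIntervalSince1970),
                                  dueTime: Int64(close.timeIntervalSince1970)))
    }
    return windows
}

// A customer available 08:00-10:00 each day of a 3-day horizon gets three absolute
// windows, and "arrival after due time" becomes a plain comparison of longs.
let windows = expandDailyWindow(startOfHorizon: Date(), planningDays: 3,
                                openHour: 8, closeHour: 10)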
We have a bunch of devices in the field (various customer sites) that "call home" at regular intervals, configurable at the device but defaulting to 4 hours.
I have a view in SQL Server that displays the following information in descending chronological order:
DeviceInstanceId uniqueidentifier not null
AccountId int not null
CheckinTimestamp datetimeoffset(7) not null
SoftwareVersion string not null
Each time the device checks in, it will report its id and current software version which we store in a SQL Server db.
Some of these devices are in places with flaky network connectivity, which obviously prevents them from operating properly. There are also a bunch in datacenters where administrators regularly forget about them and change firewall/proxy settings, accidentally blocking the device's outbound communication. We need to proactively identify this bad connectivity so we can start investigating before hearing about it from an unhappy customer... because even if the problem is 99% certainly on their end, they tend to feel (and, as far as we are concerned, correctly) that we should know about it and bring it to their attention rather than vice versa.
I am trying to come up with a way to query all distinct DeviceInstanceId that have currently not checked in for a period of 150% their normal check-in interval. For example, let's say device 87C92D22-6C31-4091-8985-AA6877AD9B40 has, for the last 1000 checkins, checked in every 4 hours or so (give or take a few seconds)... but the last time it checked in was just a little over 6 hours ago now. This is information I would like to highlight for immediate review, along with device E117C276-9DF8-431F-A1D2-7EB7812A8350 which normally checks in every 2 hours, but it's been a little over 3 hours since the last check-in.
It seems relatively straightforward to brute-force this: loop through all the devices, compute the average interval between check-ins, look at when the last check-in was, compare that to the current time, and so on... but there are thousands of these devices, and the count grows every day. I need an efficient query that can generate this list of uncommunicative devices at least every hour... I just can't picture how to write that query.
Can someone help me with this? Maybe point me in the right direction? Thanks.
I am trying to come up with a way to query all distinct DeviceInstanceId that have currently not checked in for a period of 150% their normal check-in interval.
I think you can do:
select *
from (
    -- per device: average seconds between check-ins and the most recent check-in
    select DeviceInstanceId,
           datediff(second, min(CheckinTimestamp), max(CheckinTimestamp)) / nullif(count(*) - 1, 0) as avg_secs,
           max(CheckinTimestamp) as max_CheckinTimestamp
    from t   -- your check-in view
    group by DeviceInstanceId
) t
-- keep devices whose last check-in is older than 150% of their average interval;
-- sysdatetimeoffset() matches the datetimeoffset type of CheckinTimestamp
where max_CheckinTimestamp < dateadd(second, - avg_secs * 1.5, sysdatetimeoffset());
I am streaming data from devices and I want to use the LAG function to identify the last value received from a particular device. The data is not streamed at regular intervals, and in rare cases days can pass between receiving data from a device.
Is there a maximum period for the LIMIT DURATION clause?
Is there any down-side to having long LIMIT DURATION periods?
There is no maximum period for LIMIT DURATION in the language. However, it is limited by the amount of data the input source can hold; e.g., 1 day is the default retention policy for Event Hubs (it can be increased in the configuration).
When a job is started, Azure Stream Analytics reads up to the LIMIT DURATION amount of data from the source to make sure it has the correct value for LAG at the job start time. If the data volume is high, this can increase the job start time.
If you need to use data that is more than several days old, it may make more sense to use it as reference data (which can be updated at daily intervals, for example).
I have a certain number of elements, and each of these elements represents one day. Each time midnight occurs (i.e., the time is 00:00), I want the "current" element in the list to expire (and the next one to take its place). Now this seems easy enough, but once you start scratching the surface it's a mess (at least to me). The problems begin with time zones. If midnight occurs, and afterwards I change my time zone to one where that particular midnight has not occurred yet, then when it occurs "again" in the new time zone, I do not want to count it again (the expired element should remain expired, and the element that took its place should count as the current one). Also, when the app has been suspended or shut down for a couple of days, I want it to update itself based on the number of valid midnights that have occurred since last use (as I see it, this makes UIApplicationSignificantTimeChangeNotification pointless, as it is only sent for the most recent passed midnight).
Ideally, I would like these elements to be totally unaware of dates and time; they should simply be a list 0,1,2,3,... together with a "current element" pointer (i.e. a simple integer) that is increased for each valid midnight occurrence.
How would you suggest that I should implement this?
Base it on UTC midnight, so that no matter what time zone you're in, you're unaffected by the local time change. It eliminates the time zone issue altogether.
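A minimal Swift sketch of that idea, assuming the date the list started is stored somewhere (the start date below is just a placeholder):

import Foundation

// Index of the "current" element = number of UTC midnights since the stored start date.
// Changing the device's time zone does not change the result, and days spent
// suspended or shut down are counted automatically on the next launch.
func currentElementIndex(startDate: Date, now: Date = Date()) -> Int {
    var utcCalendar = Calendar(identifier: .gregorian)
    utcCalendar.timeZone = TimeZone(identifier: "UTC")!
    let start = utcCalendar.startOfDay(for: startDate)
    let today = utcCalendar.startOfDay(for: now)
    return utcCalendar.dateComponents([.day], from: start, to: today).day ?? 0
}

// Recompute whenever the app becomes active; the elements themselves never see dates.
let storedStartDate = ISO8601DateFormatter().date(from: "2024-01-01T00:00:00Z")! // placeholder
let index = currentElementIndex(startDate: storedStartDate)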
I am trying to write MQL4 code to find the exact price and time of all earlier crossovers of the two MAs (50 and 100) that have already taken place on my MT4 charts.
Would appreciate any pointers.
Thanks,
1. Use a for loop to cycle through all the candles on your chart.
2. Get the MA zone for each candle: if the fast iMA() > the slow iMA(), it is in the buy zone; if the fast iMA() < the slow iMA(), it is in the sell zone.
3. Get the zone from step 2 for the current candle and for the previous/next candle.
4. If the two zones don't match (i.e., one is the buy zone and the other is the sell zone), then a crossing occurred.
5. Add that candle's time to an array.
Not sure how you can get the exact price though (the crossing doesn't usually occur exactly at the start/end of a candle, so it is very difficult to determine the exact crossing time/price) unless you do the above at tick level instead of candle level. Good luck.
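Not MQL4, but as a language-agnostic sketch of steps 1-5 (written in Swift here purely for illustration; in MQL4 you would fill the two arrays with per-bar iMA(..., 50, ...) and iMA(..., 100, ...) values), including a rough price estimate by interpolating where the MA difference crosses zero:

// Illustrative sketch; the Crossover type and the interpolation step are assumptions,
// not anything from the MT4/MQL4 API.
struct Crossover {
    let barIndex: Int
    let isBullish: Bool        // fast MA crossed above slow MA
    let approxPrice: Double    // interpolated level where the two MAs meet
}

func findCrossovers(fastMA: [Double], slowMA: [Double]) -> [Crossover] {
    let n = min(fastMA.count, slowMA.count)
    guard n > 1 else { return [] }
    var result: [Crossover] = []
    // diff > 0 means "buy zone" (fast above slow); diff < 0 means "sell zone".
    for i in 1..<n {
        let prevDiff = fastMA[i - 1] - slowMA[i - 1]
        let currDiff = fastMA[i] - slowMA[i]
        if prevDiff == 0 || currDiff == 0 || (prevDiff > 0) == (currDiff > 0) { continue }
        // Zones differ between consecutive bars, so a crossing happened within bar i.
        // Approximate the crossing level by interpolating the fast MA at the point
        // where the difference would reach zero.
        let t = prevDiff / (prevDiff - currDiff)          // fraction of the way into bar i
        let approx = fastMA[i - 1] + t * (fastMA[i] - fastMA[i - 1])
        result.append(Crossover(barIndex: i, isBullish: currDiff > 0, approxPrice: approx))
    }
    return result
}

// Example: with per-bar MA values already computed, list the crossing bars.
let crossings = findCrossovers(fastMA: [1.10, 1.12, 1.15], slowMA: [1.13, 1.13, 1.13])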