iOS background polling without location services - objective-c

This is a question we've all wondered about a number of times, and no one seems to have a good answer.
How do apps like DataMan run on a regular basis in the background, indefinitely, and still get into the app store?
The app allows a user to turn on "precise data tracking" and select a frequency at which the app updates its data usage counters with zero user interaction - the intervals are once every minute, once every 10 minutes, and once every 20 minutes.
Yes, I've read all the associated Apple documentation on background processes and implemented many of them successfully. I've also explored the ins and outs of this old post, but it's old enough now that many of those "loopholes" have been patched and the documented stuff works better anyway.
While I've had great luck with registering my app as a VOIP app and requesting a keep-alive at certain intervals, that isn't App Store-acceptable unless the app actually is a VOIP app (DataMan isn't). Furthermore, registering for VOIP keep-alives doesn't actually exhibit the same behavior as DataMan: VOIP keep-alive calls come at somewhat random intervals, or at least at the frequency you select without syncing up to clock time. DataMan actually falls in line with clock-mandated intervals and updates its data counters at the :10, :20, :30 minute marks, etc.
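For reference, the keep-alive registration I've been experimenting with looks roughly like this (it requires voip in UIBackgroundModes in Info.plist, 600 seconds is the minimum timeout the API accepts, and updateDataUsageCounters is just a placeholder for whatever periodic work gets done):
// Somewhere after launch, e.g. in application:didFinishLaunchingWithOptions:
// Ask iOS to wake the app roughly every 10 minutes (600 s is the minimum allowed).
BOOL accepted = [[UIApplication sharedApplication] setKeepAliveTimeout:600 handler:^{
    // Runs in the background for a short window; do the periodic work here.
    [self updateDataUsageCounters]; // placeholder method
}];
if (!accepted) {
    NSLog(@"Keep-alive registration was rejected");
}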
Any ideas?

According to their support site, their pro version just got pulled by Apple. I would bet that their other versions are next.
Just because you manage to sneak something past the review team doesn't mean they won't catch it later, or that other people will succeed. What they're doing is clearly against Apple's guidelines if they are not also offering one of the approved background services.


ISO-8583 message processing(defining priority of messages)

I need to get an understanding of the ISO-8583 message platform. Let's say I want to perform an authorization of a card transaction, and in real time, at a particular instant, I get 100,000 requests from the network (VISA/MASTERCARD), all for authorization. How do I define the priority of these requests and responses? Can the connection pool handle it (in my case it's Hikari)? How is this done at banks/financial institutions when authorizing a request? Please provide me some insights on how to manage all these requests. Should I go for an MQ?
Tech used: Spring Boot, Hibernate, spring-tcp-starter.
Your question doesn't seem to be very well researched, as there are a ton of switch platforms out there that do this today, and many of their technology guides can be found on the web, including for major vendors like ACI, FIS, AJB, etc., if you look hard enough.
I have worked with several ISO interface specifications, commercial switches, and home-grown platforms, and they are actually pretty consistent in how they do the core realtime processing.
Information on prioritization is generally in each ISO-8583 message processing specification, and it is made explicitly clear in almost every specification I've ever read that was written by someone who is familiar with ISO-8583 and not just making up their own variant or copying someone else's.
That said, in general, at a high level, authorization/financial (0100, 0200) requests always have higher priority than force-post (0x20) messages.
Administrative messages in the 05xx, 06xx, and 08xx ranges sometimes also get bumped up above other advices, but these are still advices; auths/financials are almost always processed first, as they A) impact the customer and B) have much tighter timers than any advice, usually by double or more.
Most switches I have seen do the core authorization processing entirely in memory, without going to MQ or some other disk-based queue, though that's not to say some sort of home-grown middleware isn't sometimes involved. Non-realtime processes regularly use an MQ process or disk queuing to hand work off to processes that are not in line with the approval, e.g. for store-and-forward (SAF) processing, but many of these still use memory-only processing for the front of their queue.
It is also important to differentiate between 100,000 requests and 100,000 transactions. The various exchanges, both internal and external, make a big difference in the number of actual requests/responses in flight at any given time. A basic transaction can be accomplished in as few as two messages, but some of the more complex ones can easily exceed 20 messages just for a pre-authorization or a completion component.
If you are dealing with largely batch transaction bursts, I can see the challenge of queuing, but almost every application I have seen has a separate maximum in flight for advices and for requests, sometimes even with different timers, and the apps pumping the transactions almost always wait for the response before sending more. This tends to work fine for just about everyone, including big posting batches from retailers and card networks. So if your app doesn't have these limits, you probably need to add them.
In fact, your 100,000 requests should be sorted by (Terminal ID and/or Merchant ID) + (timestamp/local timestamp) + (STAN and/or RRN).
Duplicated transaction requests are expected to be rejected.
If you are simulating multiple requests from a single terminal (or host) with the same test card details, incrementing the STAN/RRN for each request would be the way to do it.
Please refer to previous answers about STAN and RRN ISO 8583 fields.
In ISO message, what's the use of stan and rrn ?

Getting HLS livestream in sync across devices

We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all the viewers of this stream to have the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this fact, we assumed that it would be safe to “force” the player to play at a bigger delay (15 seconds, further off the live edge), this way achieving the same (constant) delay across all the devices.
We’re using the EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content, and we also have a master clock with the current time (NTP). We’re constantly comparing the two clocks to check the current latency. We’re pausing the player until it reaches the desired delay, then we’re resuming the playback.
The problem with this solution is that the latency might get worse (accumulating delay) over time, and we have no other choice than to restart the playback and redo the steps described above if the delay gets too big (steps over a specified threshold). Before restarting the player, we also try to slightly increase the playback speed until it reaches the specified delay.
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory, DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question would be: is this even achievable with a protocol like HLS? Why does the player drift away, accumulating more and more delay?
Is there a better setup for the exoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all the playing devices? How important are the parameters on the server side in trying to achieve such a behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can’t be done with HTTP-based protocols, hence their solution is built upon UDP transport.

UWP background task location tracking

I'm trying to develop a UWP app that is able to continually track the user's location in an in-process background task. I've been looking at the Microsoft sample code on GitHub (Geolocation / Scenario3_BackgroundTask), but the problem there is that it is based on a TimeTrigger. The shortest interval that TimeTrigger allows is 15 minutes, but I need to get location updates much more frequently - let's say at least once per minute. Is that possible at all?
I've seen that there is a LocationTrigger but there's not much documentation for it. I don't understand when this trigger gets fired. In my tests, it never got fired.
LocationTrigger is used for geofencing; it is triggered when a mobile device enters or leaves a particular area.
As you say, TimeTrigger is not good for your requirement because it has an interval of 15 minutes at the minimum.
Windows 10 introduces a new mechanism called extended execution. Location tracking is one of the supported scenarios in which you ask the OS to keep your app running when another app is switched to the foreground.
Here is a sample for you.

Philips Hue command limitation

First of all I'm developing my own C# library for controlling Philips Hue, which means I'm not using the official SDK. (I'm guessing that the SDK will make sure you won't have any problems)
I'm a little confused about the limitation described on the Core Concepts page of the API documentation, which states:
We can’t send commands to the lights too fast. If you stick to around 10 commands per second to the /lights resource as maximum you should be fine. For /groups commands you should keep to a maximum of 1 per second.
I intend to respect this limitation, but does the limitation still apply when you are performing GET requests on the /lights resource, or is it only for sending actual commands with PUT requests to /lights/<id>/state that change the state of the light? Same question goes for the /groups resource.
Also is it even possible to damage anything by sending too many requests, or will it just take longer to get all responses?
Edit:
My overall question is: How should I understand the API limitation?
A more specific sub-question is: Should I wait 100 ms before sending another /lights command, relative to when I received a response, or relative to when I sent the previous command?
Another sub-question is: Should I consider this limitation only when using PUT requests on e.g. /lights/<id>/state, or on all request types GET/PUT/POST/DELETE
I don't know if anything was changed in firmware updates, but I have discovered that the bridge might not be as simple as you would think, and that the API description isn't very clear.
I've done a little testing while running firmware 01009914.
The bridge seems to have some kind of queue of incoming commands. I sent {"bri":254} to a group 9 times and 1 final command of {"bri":1}. From the first command until the light is actually dimmed takes roughly 3-4 seconds. Each time I sent a command, the bridge replied almost instantly with a success token.
I did the same small tests sending other commands, 10 of each JSON object:
{"bri":254} 3-4 seconds
{"on":true, "bri":254} 6-7 seconds
{"on":true, "bri":254, "alert":"none", "effect":"none"} 12-13 seconds
This suggests that each attribute change takes roughly 0.3 seconds for the bridge to handle.
I will claim that for each attribute we change, the bridge takes about 300 ms to finish, and the limitation on commands should be understood as: as long as you stick to changing one attribute of a group each second, you should be fine.
Note: I only tried with one group consisting of three lights, and I don't know if the bridge actually does have a queue of incoming commands, and in case it does have a queue, I don't know what the limit of items in it is.
Edit:
Now we have some official clarification of the Hue System Performance.
I'm fairly certain that the 10 commands per second is a guideline to prevent failure of the Bridge, and is a technical limitation of the hardware. Any more than that and you're apt to overload the bridge. I believe this applies to commands as well as requests.
Both approaches are reasonable. For simplicity's sake, you could just wait 100 ms before sending the next command, but I would only rely on this method if you don't plan on any other interactions with the bridge.
I consider this limitation on all request types.
You won't damage anything if you send commands too fast. However, if you send commands too fast the bridge might become unresponsive and/or some messages can be ignored.
When it comes to the bridge, the way I think of it is that the bridge is more or less single threaded, so it works best if you make sure you don't send the next command before the previous one has returned.
In practice we've found that this works much better than waiting a fixed time between each request. In fact, you can pretty much send commands as fast as you want as long as you wait for the previous one to finish.
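A minimal sketch of that pattern, assuming a plain HTTP client (shown with NSURLSession; the bridge address, username, and group number are placeholders, and the same idea carries over to any other client library): fire the next PUT only from the completion handler of the previous one.
// Send commands strictly one at a time: the next request is sent only after
// the previous response has come back from the bridge.
- (void)sendCommands:(NSArray<NSData *> *)bodies atIndex:(NSUInteger)index {
    if (index >= bodies.count) {
        return; // all commands sent
    }
    NSURL *url = [NSURL URLWithString:@"http://192.168.1.2/api/yourusername/groups/1/action"]; // placeholder
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"PUT";
    request.HTTPBody = bodies[index]; // JSON payload such as {"bri":254}
    [[[NSURLSession sharedSession] dataTaskWithRequest:request
                                     completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // Previous command has returned - only now send the next one.
        [self sendCommands:bodies atIndex:index + 1];
    }] resume];
}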
When you send a command to the bridge, the bridge then has to send it on to the lamps over Zigbee. Since it's a mesh network, in some cases the message has to make a couple of hops from lamp to lamp before it reaches the target. Depending on how many lamps you have and how many hops the signal needs to take, this can take a while. Also, it's possible that some messages randomly take much longer than others.
In general the system is not designed to handle very fast changes, but if you keep the above in mind you can make many cool effects :)

NSDate: Get precise time independent of device clock? [duplicate]

Possible Duplicate:
How can I locally detect iPhone clock advancement by a user between app runs?
Is there a way to determine the actual time and date in iOS (not the time of the device)
Is there a clock in iOS that can be used that cannot be changed by the user
Brief
I am working with an auto-renewable subscription-based app. When the app receives the latest receipt from Apple, it stores the expires_date_ms key in NSUserDefaults. Thirty days after that date, the app checks with Apple to see if the subscription is still active. The app can be considered an offline app, but it must connect to the internet once every 30 days in order to check the subscription status. This time comparison will be used to tell the user he/she must connect.
Problem
I am using the code below to compare the current time with the expires_date_ms:
// expires_date_ms from the receipt is in milliseconds since the Unix epoch
NSTimeInterval expDateMS = [[productInfo objectForKey:@"expires_date_ms"] doubleValue];
NSTimeInterval currentDateMS = [[NSDate date] timeIntervalSince1970] * 1000.0;
if (currentDateMS > expDateMS) {
    subExpired = YES;
}
This is fine and works well, but from what I can tell there's a loophole that can be exploited - if the user sets the device's clock back an hour/month/decade, the time comparison will become unreliable because [NSDate date] uses the device's current time (please correct me if I'm wrong).
Question
Is there any way of retrieving a device-independent time in milliseconds? One that can be accurately and reliably measured with no regards to the device clock?
While Kevin and H2CO3 are completely correct, there are other solutions for the purpose of checking a subscription (which I would hope does not need millisecond accuracy...).
First, watch UIApplicationSignificantTimeChangeNotification so that you get notified when the time changes suddenly. This will even be delivered to you if you were suspended (though I don't believe you will receive it if you were terminated). It is posted when there is a carrier time update, and I believe it is also posted when there is a manual time update (check). It is also posted at local midnight and at DST changes. The point is that it's posted pretty often when the time suddenly changes.
Keep track of what time it was when you went into the background and what time it is when you come back into the foreground. If time moves radically backwards (more than a day or two), kindly suggest that you would like access to the network to check things. Whenever you check in with your server, it should tell you what time it thinks it is. You can use that to synchronize the system.
You can similarly keep track of your actual runtime. If it gets wildly out of sync with apparent runtime, then again, request access to the network to sync things up.
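A minimal sketch of that bookkeeping (the selector name, the defaults key, and the two-day threshold are just placeholders):
// In application:didFinishLaunchingWithOptions:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(timeChangedSignificantly:) // placeholder handler
                                             name:UIApplicationSignificantTimeChangeNotification
                                           object:nil];

// In applicationDidEnterBackground: record the wall-clock time we last saw.
[[NSUserDefaults standardUserDefaults] setObject:[NSDate date] forKey:@"lastSeenDate"];

// In applicationWillEnterForeground: compare against the recorded time.
NSDate *lastSeen = [[NSUserDefaults standardUserDefaults] objectForKey:@"lastSeenDate"];
NSTimeInterval delta = [[NSDate date] timeIntervalSinceDate:lastSeen];
if (delta < -2 * 24 * 60 * 60) {
    // Clock apparently jumped backwards by more than two days - suggest going online to re-check.
}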
I'm certain that attackers would be able to sneak 35 days or whatever out of this system rather than 30, but anyone willing to work that hard will just crack your software and take the check out entirely. The focus here is the uncommitted attacker who is just messing with their clock. And that you can catch pretty well.
You should test this carefully, and be very hesitant to accuse the user of anything. Just connecting to your server should always be enough to get a legitimate user working again.
You need to connect to/retrieve information from a reliable, official time server and use that time data in your app. For example, here's a world time server with an easy-to-use API.
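A minimal sketch of that idea, assuming a hypothetical endpoint that returns the current Unix time as JSON (swap in whichever time service you actually trust, and the "unixtime" field name is an assumption):
// Hypothetical endpoint; replace with the time API you actually use.
NSURL *url = [NSURL URLWithString:@"https://example.com/api/current-unix-time"];
[[[NSURLSession sharedSession] dataTaskWithURL:url
                             completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
    if (error || !data) {
        return; // offline - fall back to requiring a connection before trusting any expiry check
    }
    NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
    NSTimeInterval serverSeconds = [json[@"unixtime"] doubleValue]; // assumed field name
    NSTimeInterval serverMS = serverSeconds * 1000.0;
    // Compare serverMS against expires_date_ms instead of the device clock.
}] resume];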
Here are three options I can think of:
clock_gettime(CLOCK_MONOTONIC) gets the current system uptime. This is relatively unreliable, because if the user reboots, it is reset. You could save the last value used and, at launch, use the last saved value as an offset, but the problem with this is that the time the device was powered off for won't be counted.
mach_absolute_time() counts the number of ticks of the Mach timebase since the last reboot. It can be read easily through CACurrentMediaTime(). Note that this is also reset simply by rebooting the device, so if detecting clock changes is very important, I'm not sure you would go this way.
Network Time Protocol (NTP) is a networking protocol for synchronizing the clocks of computer systems. In practice, all NTP does is query a time server. An iOS library for NTP can be found here.
So the first two methods do not require connectivity, while the third does. However, the third method is the only foolproof one.
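A minimal sketch of the second option, using CACurrentMediaTime() to measure elapsed time independently of the user-changeable wall clock (the anchor values are placeholders, they would need to be persisted, and remember the reboot caveat):
#import <QuartzCore/QuartzCore.h>

// When you last obtained a trusted time (e.g. from your server), record both clocks.
CFTimeInterval trustedEpochSeconds = 1500000000.0;   // placeholder trusted Unix time in seconds
CFTimeInterval monotonicAtTrust = CACurrentMediaTime();

// Later, estimate "now" without consulting the device clock.
CFTimeInterval elapsed = CACurrentMediaTime() - monotonicAtTrust;
CFTimeInterval estimatedNowSeconds = trustedEpochSeconds + elapsed;
BOOL expired = (estimatedNowSeconds * 1000.0) > expDateMS; // expDateMS as in the question's code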
There is no such thing as a non-mutable device clock that persists across reboots. The only way to get a trustworthy time is to contact a remote server that you trust and ask what its time is.