iOS APNS Messages not arriving until app reinstall - objective-c

I have an app that is using push notifications with Apple's APNS.
Most of the time it works fine, however occasionally (at random it seems; I haven't been able to find any verifiable pattern) the messages just don't seem to be getting to the phone.
The messages are being received by APNS but just never delivered. However, when I reinstall the app or restart the iPhone they seem to arrive.
I'm not sure if this is a problem within my app or not, as even when the app is closed (and handling of the notification should rest completely with the operating system) no notification is received until a restart/reinstall is done.
The feedback service yields nothing, and NSLogging the received notification within the app also yields nothing (like the notification never makes it to the app).
EDIT:
Some additional information, as nobody seems to know what's going on.
I am using the sandbox server, with the app signed with the developer provisioning profile, so there are no problems there. And the app receives the notifications initially.
The problem seems to be that when the app doesn't receive anything for about 90s-120s while it's in the background, it just stops receiving anything until it is reinstalled.
Even double-tapping home and stopping the app that way doesn't allow it to receive notifications in the app-closed state, which I would have thought would have eliminated problems with the app's coding entirely, since at that point it's not even running.
I timed how long it takes before it stops receiving notifications. There are 3 trials here.
Trial 1:
| Notification Number | Time since Last | Total Time | Pass/Fail |
|---------------------|-----------------|------------|-----------|
| 1                   | 6s              | 6s         | Pass      |
| 2                   | 30s             | 36s        | Pass      |
| 3                   | 60s             | 96s        | Pass      |
| 4                   | 120s            | 216s       | Fail      |

Trial 2:
| Notification Number | Time since Last | Total Time | Pass/Fail |
|---------------------|-----------------|------------|-----------|
| 1                   | 3s              | 3s         | Pass      |
| 2                   | 29s             | 32s        | Pass      |
| 3                   | 60s             | 92s        | Pass      |
| 4                   | 91s             | 183s       | Fail      |

Trial 3:
| Notification Number | Time since Last | Total Time | Pass/Fail |
|---------------------|-----------------|------------|-----------|
| 1                   | 1s              | 1s         | Pass      |
| 2                   | 30s             | 61s        | Pass      |
| 3                   | 30s             | 91s        | Pass      |
| 4                   | 30s             | 121s       | Pass      |
| 5                   | 30s             | 151s       | Pass      |
| 6                   | 30s             | 181s       | Pass      |
| 7                   | 30s             | 211s       | Pass      |
| 8                   | 30s             | 241s       | Pass      |
| 9                   | 60s             | 301s       | Pass      |
| 10                  | 120s            | 421s       | Fail      |
Does anyone have any idea what could be going on here?
Another Edit:
Just tested the problem across multiple devices, and it's happening on all of them, so it's definitely not a device issue. The notifications stop coming through even when the app has never been opened. Could the programming within the app affect how the push notifications are received even when it has never been opened?

It appears this may have been an issue outside of my control, as everything is now working fine with zero changes.
Going to blame Apple or some sort of networking problem somewhere in between.

Related

Splunk - Remove events between 1st login and last logout while user has any session open where logins and logouts can happen multiple times in a row?

The question sounds a bit confusing, so to break it down: I'm trying to find the time difference between logins and logouts. The catch is two-fold. It's possible that the time range doesn't catch the first login that's logged out during the time range, and vice versa, it's possible that the time range doesn't catch the last logout that's logged in during the time range. This can end up looking like below in a resulting table.
| Action | Action Number | Time    |
|--------|---------------|---------|
| Login  | 1             | 1:00 am |
| Login  | 2             | 1:01 am |
| Logout | 1             | 1:02 am |
| Logout | 2             | 1:03 am |
| Logout | 3             | 1:04 am |
| Logout | 4             | 1:05 am |
| Login  | 3             | 1:10 am |
| Logout | 5             | 1:11 am |
| Login  | 4             | 1:15 am |
| Login  | 5             | 1:16 am |
| Logout | 6             | 1:17 am |
| Login  | 6             | 1:18 am |
| Logout | 7             | 1:20 am |
| Logout | 8             | 1:22 am |
Here the action number is the ordinal of that login/logout within the time frame. For example, the first login will have an action number of 1, as will the first logout, and so on.
I've written the logic to get that in place, but what I need help with is "removing" the events in between the first login and last logout for each break in activity (periods when the user had no sessions logged in).
This would mean that the first login (for the time range) for this user would be Login 1, and they had a logged-in session until Logout 4. This means I would want to remove Login 2 and Logouts 1, 2 and 3. Then I can calculate the time difference between the two remaining events to find the total time they were logged in to any session in that period.
To summarize, the following is the result that I'm wanting to generate from the above table, but I can't find a good way to do this.
| Action | Action Number | Flag for Deletion |
|--------|---------------|-------------------|
| Login | 1 | False |
| Login | 2 | True |
| Logout | 1 | True |
| Logout | 2 | True |
| Logout | 3 | True |
| Logout | 4 | False |
| Login | 3 | False |
| Logout | 5 | False |
| Login | 4 | False |
| Login | 5 | True |
| Logout | 6 | True |
| Login | 6 | True |
| Logout | 7 | True |
| Logout | 8 | False |
The following 6 lines of SPL produce your raw dataset:
| makeresults count=1
| eval data="Login,1|Login,2|Logout,1|Logout,2|Logout,3|Logout,4|Login,3|Logout,5|Login,4|Login,5|Logout,6|Login,6|Logout,7|Logout,8"
| makemv delim="|" data
| mvexpand data
| rex field=data "(?<action>[^\,]+),(?<action_number>\d+)"
| fields - _time, data
The following SPL builds upon this, to produce the required result. Loosely speaking, it:
Keeps a count of the active sessions (streamstats)
Ensures this does not go below zero (streamstats, eval)
Works out the next action (reverse, streamstats)
Uses logic based on session count, current action and next action to decide whether you need to use this event in your calculations (eval)
| eval actiontype=if(action=="Login",1,-1)
| streamstats reset_after="("session_count<0")" sum(actiontype) AS session_count
| eval session_count=if(session_count==-1,0,session_count)
| reverse
| streamstats current=f global=f window=1 max(actiontype) AS next_actiontype
| eval "Flag for Deletion"=if(session_count>1 OR (session_count==0 AND next_actiontype==-1) OR (session_count==1 AND actiontype==-1),"True","False")
| reverse
| fields action, action_number,"Flag for Deletion"

Problems with using the bootstrap-datepicker in Fitnesse Tests

In my FitNesse tests I want to enter dates through datepicker elements. Sometimes it works, but most of the time a different date from the one entered appears. Here is an example:
| ensure | do | type | on | id=field_id | with | |
| ensure | do | type | on | id=field_id | with | 05.05.1997 |
| check | is | verifyValue | on | id=field_id | [28.05.1997] expected [05.05.1997] |
(To make sure that the field isn't already filled, I pass an empty String first.)
Usually the day portion differs from what was entered. Do you know the reason for this behavior? How can I solve this?
Thanks in advance!
This is related to how you wrote your fixture, not to FitNesse. The problem is that the fixture returns a different value, which also implies that the previous line didn't work: | ensure | do | type | on | id=field_id | with | 05.05.1997 |

SQL - Combining 3 rows per group in a logging scenario

I have reworked our API's logging system to use Azure Table Storage from using SQL storage for cost and performance reasons. I am now migrating our legacy logs to the new system. I am building a SQL query per table that will map the old fields to the new ones, with the intention of exporting to CSV then importing into Azure.
So far, so good. However, one artifact of the previous system is that it logged 3 times per request - call begin, call response and call end - and the new one logs the call as just one log (again, for cost and performance reasons).
Some fields are common to all three related logs, e.g. Session, which uniquely identifies the call.
For some fields I only want the first log's value, e.g. Date, which may be a few seconds different in the second and third logs.
Some fields are shared for the three different purposes, e.g. Parameters gives the Input Model for Call Begin, Output Model for Call Response, and HTTP response (e.g. OK) for Call End.
Some fields are unused for two of the purposes, e.g. ExecutionTime is -1 for Call Begin and Call Response, and a value in ms for Call End.
How can I "roll up" the sets of 3 rows into one row per set? I have tried using DISTINCT and GROUP BY, but the fact that some of the information collides is making it very difficult. I apologize that my SQL isn't really good enough to really explain what I'm asking for - so perhaps an example will make it clearer:
Example of what I have:
SQL:
SELECT * FROM [dbo].[Log]
Results:
+---------+---------------------+-------+------------+---------------+---------------+-----------------+
| Session | Date                | Level | Context    | Message       | ExecutionTime | Parameters      |
+---------+---------------------+-------+------------+---------------+---------------+-----------------+
| 84248B7 | 2014-07-20 19:16:15 | INFO  | GET v1/abc | Call Begin    | -1            | {"Input":"xx"}  |
| 84248B7 | 2014-07-20 19:16:15 | INFO  | GET v1/abc | Call Response | -1            | {"Output":"yy"} |
| 84248B7 | 2014-07-20 19:16:15 | INFO  | GET v1/abc | Call End      | 123           | OK              |
| F76BCBB | 2014-07-20 19:16:17 | ERROR | GET v1/def | Call Begin    | -1            | {"Input":"ww"}  |
| F76BCBB | 2014-07-20 19:16:18 | ERROR | GET v1/def | Call Response | -1            | {"Output":"vv"} |
| F76BCBB | 2014-07-20 19:16:18 | ERROR | GET v1/def | Call End      | 456           | BadRequest      |
+---------+---------------------+-------+------------+---------------+---------------+-----------------+
Example of what I want:
SQL:
[Need to write this query]
Results:
+---------------------+-------+------------+----------+---------------+----------------+-----------------+--------------+
| Date | Level | Context | Message | ExecutionTime | InputModel | OutputModel | HttpResponse |
+---------------------+-------+------------+----------+---------------+----------------+-----------------+--------------+
| 2014-07-20 19:16:15 | INFO | GET v1/abc | Api Call | 123 | {"Input":"xx"} | {"Output":"yy"} | OK |
| 2014-07-20 19:16:17 | ERROR | GET v1/def | Api Call | 456 | {"Input":"ww"} | {"Output":"vv"} | BadRequest |
+---------------------+-------+------------+----------+---------------+----------------+-----------------+--------------+
SELECT L1.Session, L1.Date, L1.Level, L1.Context, 'Api Call' AS Message,
       L3.ExecutionTime,
       L1.Parameters AS InputModel,
       L2.Parameters AS OutputModel,
       L3.Parameters AS HttpResponse
FROM Log L1
INNER JOIN Log L2 ON L1.Session = L2.Session
INNER JOIN Log L3 ON L1.Session = L3.Session
WHERE L1.Message = 'Call Begin'
  AND L2.Message = 'Call Response'
  AND L3.Message = 'Call End'
This would work in your sample.
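If you would rather avoid the triple self-join, here is a sketch using conditional aggregation instead. It assumes each Session has exactly the three Message values shown, and that Date, Level and Context should be taken from the 'Call Begin' row:
-- One row per Session: pick each output column from the relevant log row
SELECT MAX(CASE WHEN Message = 'Call Begin'    THEN [Date] END)        AS [Date],
       MAX(CASE WHEN Message = 'Call Begin'    THEN [Level] END)       AS [Level],
       MAX(CASE WHEN Message = 'Call Begin'    THEN Context END)       AS Context,
       'Api Call'                                                      AS Message,
       MAX(CASE WHEN Message = 'Call End'      THEN ExecutionTime END) AS ExecutionTime,
       MAX(CASE WHEN Message = 'Call Begin'    THEN Parameters END)    AS InputModel,
       MAX(CASE WHEN Message = 'Call Response' THEN Parameters END)    AS OutputModel,
       MAX(CASE WHEN Message = 'Call End'      THEN Parameters END)    AS HttpResponse
FROM [dbo].[Log]
GROUP BY Session
This scans the table once instead of three times, which may matter if you are exporting a large legacy log table.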

How to find exact time of last received transaction on asynchronous mirror (SQL Server 2005)?

I need to provide users with exact time of data integrity in case of forced service with possible data loss.
I guess that I can find last LSN using:
SELECT [mirroring_failover_lsn]
FROM [master].[sys].[database_mirroring]
But that won't give me exact time.
Read How to read and interpret the SQL Server log. You'll see that LOP_BEGIN_XACT contains a timestamp. Given an LSN, you can analyze the log and find all pending transactions (that is, all xact_ids that do not have a commit or rollback logged before the given LSN). All the pending transactions will be rolled back in case of failover; this is the data lost if a forced failover occurs. There will be a number of pending transactions that will be undone, and these various transactions started at various times. If you want to attach an 'exact time of data integrity', then you can say that no data loss will occur for anything earlier than the earliest pending LOP_BEGIN_XACT. E.g. given the following log stream:
+-----+-------------+---------+-----------+
| LSN | Operation   | xact_id | timestamp |
+-----+-------------+---------+-----------+
|   1 | INSERT      |       1 |           |
|   2 | BEGIN_XACT  |       2 | 12:00     |
|   3 | INSERT      |       1 |           |
|   4 | BEGIN_XACT  |       3 | 12:02     |
|   5 | COMMIT_XACT |       1 |           |
|   6 | INSERT      |       2 |           |
|   7 | INSERT      |       3 |           |
|   8 | COMMIT_XACT |       3 |           |
|   9 | COMMIT_XACT |       2 |           |
+-----+-------------+---------+-----------+
Let's say that the mirroring failover LSN is 8. In this case you can say that no data loss will occur earlier than 12:00, because xact_id 2 is not committed at LSN 8 and therefore it will be rolled back. Note that xact_id 3 is committed by LSN 8, so it won't be lost, even though it has a later timestamp. So your timestamp is not absolute; this is why I say 'no data loss will occur earlier than...' rather than 'data after ... will be lost'.
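To make this concrete, here is a rough sketch using the undocumented fn_dblog function to list the transactions that have a begin but no commit or rollback logged yet; the earliest Begin Time among them is your 'no data loss earlier than' point. Treat this as exploratory only: fn_dblog is unsupported, and mapping the numeric mirroring_failover_lsn onto fn_dblog's [Current LSN] string format is left out here.
-- Transactions with a LOP_BEGIN_XACT but no matching commit/abort in the active log
SELECT b.[Transaction ID], b.[Begin Time]
FROM fn_dblog(NULL, NULL) AS b
WHERE b.Operation = 'LOP_BEGIN_XACT'
  AND NOT EXISTS (SELECT 1
                  FROM fn_dblog(NULL, NULL) AS e
                  WHERE e.[Transaction ID] = b.[Transaction ID]
                    AND e.Operation IN ('LOP_COMMIT_XACT', 'LOP_ABORT_XACT'))
ORDER BY b.[Begin Time];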

Which tables/fields reveal user activity in Trac?

(This may be a webapp question.) I would like to use Trac 1.0.1 activity for time tracking: for example, closing a ticket, editing a wiki page or leaving a comment.
I was imagining output something like this:
| Time             | Ticket | Custom field | Summary      | Activity                  |
|------------------|--------|--------------|--------------|---------------------------|
| 2013-05-08 10:00 | 4123   | Acme         | Ticket title | Ticket closed             |
| 2013-05-08 10:00 | 4200   | Sierra       | Title here   | Comment left on ticket    |
| 2013-05-08 10:00 | -      | -            | -            | Edited /wiki/Acme/Coyote  |
| 2013-05-08 10:00 | -      | -            | -            | Committed /git/Apogee.txt |
I would like to include basically everything that appears in the timeline, including comment activity. For ticket-related activity, I would like to include ticket number and a custom field.
Which tables should I be looking at? (A pointer to relevant docs or code would suffice.)
I believe you are just asking for the Trac database schema, which can be viewed here; you can also view the source for the timeline here.
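As a starting point for the ticket-related rows, here is a sketch against Trac's ticket_change, ticket and ticket_custom tables. It assumes a Trac 1.0 schema on SQLite, where time columns store microseconds since the epoch, and the custom field name 'client' is just a placeholder for your actual field:
-- Ticket closes and comments, with the summary and one custom field per ticket
SELECT datetime(tc.time / 1000000, 'unixepoch') AS Time,
       tc.ticket                                AS Ticket,
       cust.value                                AS CustomField,
       t.summary                                 AS Summary,
       CASE WHEN tc.field = 'status' AND tc.newvalue = 'closed' THEN 'Ticket closed'
            WHEN tc.field = 'comment'                           THEN 'Comment left on ticket'
            ELSE 'Changed ' || tc.field
       END                                       AS Activity
FROM ticket_change AS tc
JOIN ticket AS t ON t.id = tc.ticket
LEFT JOIN ticket_custom AS cust
       ON cust.ticket = tc.ticket AND cust.name = 'client'
ORDER BY tc.time;
Wiki edits live in the wiki table and repository activity in the revision table, so similar SELECTs could be UNIONed in for the non-ticket rows.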