Erlang + Apple Push notification [Issue with invalid token] - ssl

I'm currently trying to create a push notification module for Erlang.
When the token is valid, everything works great...
The issue is when an old device token (which is invalid by now) is rejected.
I understand that an invalid token is rejected by APNS with a 6-byte socket message, which also invalidates the connection (which I think is really dumb, but whatever...).
The thing is, my module never seems to get the 6-byte socket message that APNS should give me, as if the controlling process were not listening on the socket.
Here's my code:
-module(pushiphone).
-behaviour(gen_server).

-export([start/1, init/1, handle_call/3, handle_cast/2, code_change/3, handle_info/2, terminate/2]).
-import(ssl, [connect/4]).

-record(push, {socket, pid, state, cert, key}).

start(Provisioning) ->
    gen_server:start_link(?MODULE, [Provisioning], []).

init([Provisioning]) ->
    gen_server:cast(self(), {connect, Provisioning}),
    {ok, #push{pid=self()}}.

send(Socket, DT, Payload) ->
    PayloadLen = length(Payload),
    DTLen = size(DT),
    PayloadBin = list_to_binary(Payload),
    Packet = <<0:8,
               DTLen:16/big,
               DT/binary,
               PayloadLen:16/big,
               PayloadBin/binary>>,
    ssl:send(Socket, Packet).

handle_call(_, _, P) ->
    {noreply, P}.

handle_cast({connect, Provisioning}, P) ->
    case Provisioning of
        dev -> Address = "gateway.sandbox.push.apple.com";
        prod -> Address = "gateway.push.apple.com"
    end,
    Port = 2195,
    Cert = "/apns-" ++ atom_to_list(Provisioning) ++ "-cert.pem",
    Key = "/apns-" ++ atom_to_list(Provisioning) ++ "-key.pem",
    Options = [{certfile, Cert}, {keyfile, Key}, {password, "********"}, {mode, binary}, {active, false}],
    Timeout = 1000,
    {ok, Socket} = ssl:connect(Address, Port, Options, Timeout),
    ssl:controlling_process(Socket, self()), %% Necessary ??
    gproc:reg({n, l, pushiphone}),
    {noreply, P#push{socket=Socket}};
handle_cast(_, P) ->
    {noreply, P}.

handle_info({ssl, Socket, Data}, P) ->
    <<Command, Status, SomeID:32/big>> = Data,
    io:fwrite("[PUSH][ERROR]: ~p / ~p / ~p~n", [Command, Status, SomeID]),
    ssl:close(Socket),
    {noreply, P};
handle_info({push, message, DT, Badge, [Message]}, P) ->
    Payload = "{\"aps\":{\"alert\":\"" ++ Message ++ "\",\"badge\":" ++ Badge ++ ",\"sound\":\"" ++ "msg.caf" ++ "\"}}",
    send(P#push.socket, DT, Payload),
    {noreply, P};
handle_info({ssl_closed, _SslSocket}, P) ->
    io:fwrite("SSL CLOSED !!!!!!~n"),
    {stop, normal, P};
handle_info(AnythingElse, P) ->
    io:fwrite("[ERROR][PUSH][ANYTHING ELSE] : ~p~n", [AnythingElse]),
    {noreply, P}.

code_change(_, P, _) ->
    {ok, P}.

terminate(_, _) ->
    ok.
So when I start the module and push to a valid token, the push is received on the phone; but when I push to an invalid token and then to a valid token, the valid token won't receive any push...
I am aware I should listen to the feedback service in order to remove the device token from my database, but I also need to know whether the push gateway has invalidated my connection, so that I can reconnect.
So here's the real question: why doesn't my gen_server receive the error-response packet (which should match the handle_info({ssl, Socket, Data}, P) clause)?

Your socket is configured with {active, false}. You won't receive any messages unless you set it to {active, true} (or repeatedly to {active, once}). See the documentation for ssl:setopts/2.
You also shouldn't need to call controlling_process with self(): the process that opened the socket is already its controlling process.
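A minimal sketch of the {active, once} pattern, assuming the rest of the module stays as posted (only the changed pieces are shown; untested):

```erlang
%% Connect with {active, once} so the next incoming packet is delivered
%% to the gen_server as a single {ssl, Socket, Data} message.
Options = [{certfile, Cert}, {keyfile, Key}, {password, "********"},
           {mode, binary}, {active, once}],

%% In handle_info/2, re-arm the socket after every packet so the next
%% one is delivered as well. APNS error responses start with command 8.
handle_info({ssl, Socket, <<8, Status, Id:32/big>>}, P) ->
    io:fwrite("[PUSH][ERROR]: status ~p for notification ~p~n", [Status, Id]),
    ssl:setopts(Socket, [{active, once}]),
    {noreply, P};
```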

Related

Receive message from an Elm process

I'm toying around with Elm processes in order to learn more about how they work. As part of this, I'm trying to implement a timer.
I bumped into an obstacle, however: I can't find a way to access the result of a process's task in the rest of the code.
For a second, I hoped that if I made the task resolve with a Cmd, the Elm runtime would be kind enough to perform that effect for me, but that was a naive idea:
type Msg
    = Spawned Process.Id
    | TimeIsUp

init _ =
    ( Nothing
    , Task.perform Spawned (Process.spawn backgroundTask)
    )

backgroundTask : Task.Task y (Platform.Cmd.Cmd Msg)
backgroundTask =
    Process.sleep 1000
        -- pathetic attempt to send a Msg starts here
        |> Task.map
            (always
                <| Task.perform (always TimeIsUp)
                <| Task.succeed ()
            )
        -- and ends here
        |> Task.map (Debug.log "Timer finished") -- logs "Timer finished: <internals>"

update msg state =
    case msg of
        Spawned id ->
            ( Just id, Cmd.none )

        TimeIsUp ->
            ( Nothing, Cmd.none )

view state =
    case state of
        Just id ->
            text "Running"

        Nothing ->
            text "Time is up"
The docs say:
there is no public API for processes to communicate with each other.
I'm not sure if that implies that a process can't communicate with the rest of the app.
Is there any way to have the update function receive a TimeIsUp once the process exits?
There is one way but it requires a port of hell:
make a fake HTTP request from the process,
then intercept it via JavaScript
and pass it back to Elm.
port ofHell : (() -> msg) -> Sub msg

subscriptions _ =
    ofHell (always TimeIsUp)

backgroundTask : Task.Task y (Http.Response String)
backgroundTask =
    Process.sleep 1000
        -- nasty hack starts here
        |> Task.andThen
            (always
                <| Http.task
                    { method = "EVIL"
                    , headers = []
                    , url = ""
                    , body = Http.emptyBody
                    , resolver = Http.stringResolver (always (Ok ""))
                    , timeout = Nothing
                    }
            )
Under the hood, Http.task invokes new XMLHttpRequest(), so we can intercept it by redefining that constructor.
<script src="elm-app.js"></script>
<div id=hack></div>
<script>
var app = Elm.Hack.init({
    node: document.getElementById('hack')
})

var OrigXHR = window.XMLHttpRequest
window.XMLHttpRequest = function () {
    var req = new OrigXHR()
    var origOpen = req.open
    req.open = function (method) {
        if (method == 'EVIL') {
            app.ports.ofHell.send(null)
        }
        return origOpen.apply(this, arguments)
    }
    return req
}
</script>
The solution is not production ready, but it does let you continue playing around with Elm processes.
Elm Processes aren't a fully fledged API at the moment. It's not possible to do what you want with the Process library on its own.
See the notes in the docs for Process.spawn:
Note: This creates a relatively restricted kind of Process because it cannot receive any messages. More flexibility for user-defined processes will come in a later release!
and the whole Future Plans section, eg.:
Right now, this library is pretty sparse. For example, there is no public API for processes to communicate with each other.

Erlang Tests Timeout Error

I am trying to implement a Twitter-like application in Erlang, built around the data_actor module. It allows users to register, tweet, fetch all their tweets, subscribe to the tweets of other users, and get their timeline, which includes both their own tweets and those of users they are subscribed to. The application has the data_actor loop as the main server and the worker_process loop for distributed workers. On user registration, the data_actor loop (main server) assigns each user a worker server (which runs the worker_process loop).
In addition, I have a server module, which is just the interface to the application.
I have written 7 test cases, but the last test (subscription_test) fails with the following error. I have looked at a few suggested solutions to this test timeout. Some suggest skipping the test on timeout. Firstly, I have not set a timeout for these tests. Secondly, I want all the tests to pass. It appears I have a logic error, but being new to Erlang, I can hardly point out what is amiss.
Eshell V9.0.4
(Erlang_Project_twitter#GAKUO)1> data_actor:test().
data_actor: subscription_test...*timed out*
undefined
=======================================================
Failed: 0. Skipped: 0. Passed: 6.
One or more tests were cancelled.
error
(Erlang_Project_twitter#GAKUO)2>
Below is the data_actor module with the application implementation logic and the 7 tests:
%% This is a simple implementation of the project, using one centralized server.
%%
%% It will create one "server" actor that contains all internal state (users,
%% their subscriptions, and their tweets).
%%
%% This implementation is provided with unit tests, however, these tests are
%% neither complete nor implementation independent, so be careful when reusing
%% them.
-module(data_actor).
-include_lib("eunit/include/eunit.hrl").
%%
%% Exported Functions
%%
-export([initialize/0,
         % internal actors
         data_actor/1, worker_process/1]).

%%
%% API Functions
%%

% Start server.
% This returns the pid of the server, but you can also use the name "data_actor"
% to refer to it.
initialize() ->
    ServerPid = spawn_link(?MODULE, data_actor, [[]]),
    register(data_actor, ServerPid),
    ServerPid.

% The data actor works like a small database and encapsulates all state of this
% simple implementation.
data_actor(Data) ->
    receive
        {Sender, register_user} ->
            {NewData, NewUserId} = add_new_user(Data),
            Sender ! {self(), registered_user, NewUserId},
            data_actor(NewData);
        {Sender, get_timeline, UserId, Page} ->
            {user, UserId, Worker_pid, _Tweets, _Subscriptions} = lists:nth(UserId + 1, Data),
            Worker_pid ! {Sender, get_timeline, UserId, Page},
            data_actor(Data);
        {Sender, get_tweets, UserId, Page} ->
            {user, UserId, Worker_pid, _Tweets, _Subscriptions} = lists:nth(UserId + 1, Data),
            Worker_pid ! {Sender, get_tweets, UserId, Page},
            data_actor(Data);
        {Sender, tweet, UserId, Tweet} ->
            {user, UserId, Worker_pid, _Tweets, _Subscriptions} = lists:nth(UserId + 1, Data),
            Worker_pid ! {Sender, tweet, UserId, Tweet},
            data_actor(Data);
        {Sender, subscribe, UserId, UserIdToSubscribeTo} ->
            {user, UserId, Worker_pid, _Tweets, _Subscriptions} = lists:nth(UserId + 1, Data),
            {user, UserId, SubscribedToWorker_pid, _Tweets, _Subscriptions} = lists:nth(UserId + 1, Data),
            Worker_pid ! {Sender, subscribe, UserId, UserIdToSubscribeTo, SubscribedToWorker_pid},
            data_actor(Data)
    end.

%%% data actor internal processes

add_new_user(Data) ->
    % create new user and assign user id
    NewUserId = length(Data),
    % start a worker process for the newly registered user
    Worker_pid = spawn_link(?MODULE, worker_process, [[{user, NewUserId, [], sets:new()}]]),
    % add the worker_pid to the data list
    NewData = Data ++ [{user, NewUserId, Worker_pid, [], sets:new()}],
    {NewData, NewUserId}.

%%% worker actor
worker_process(Data) ->
    receive
        {Sender, get_timeline, UserId, Page} ->
            Sender ! {self(), timeline, UserId, Page, timeline(Data, UserId, Page)},
            worker_process(Data);
        {Sender, get_tweets, UserId, Page} ->
            Sender ! {self(), tweets, UserId, Page, tweets(Data, UserId, Page)},
            worker_process(Data);
        {Sender, tweet, UserId, Tweet} ->
            {NewData, Timestamp} = tweet(Data, UserId, Tweet),
            Sender ! {self(), tweet_accepted, UserId, Timestamp},
            worker_process(NewData);
        {Sender, subscribe, UserId, UserIdToSubscribeTo, SubscribedToWorker_pid} ->
            NewData = subscribe_to_user(Data, UserId, UserIdToSubscribeTo, SubscribedToWorker_pid),
            Sender ! {self(), subscribed, UserId, UserIdToSubscribeTo},
            worker_process(NewData)
    end.

%%
%% worker actor internal Functions
%%
timeline(Data, UserId, Page) ->
    [{user, UserId, Tweets, Subscriptions}] = Data,
    UnsortedTweetsForTimeLine =
        lists:foldl(
            fun(Worker_pid, AllTweets) ->
                Worker_pid ! {self(), get_tweets, UserId, Page},
                receive
                    {_ResponsePid, tweets, UserId, Page, Subscribed_Tweets} -> Subscribed_Tweets
                end,
                AllTweets ++ Subscribed_Tweets
            end,
            Tweets,
            sets:to_list(Subscriptions)),
    SortedTweets = lists:reverse(lists:keysort(3, UnsortedTweetsForTimeLine)),
    lists:sublist(SortedTweets, 10).

tweets(Data, _UserId, _Page) ->
    [{user, _UserId, Tweets, _Subscriptions}] = Data,
    Tweets.

tweet(Data, UserId, Tweet) ->
    [{user, UserId, Tweets, Subscriptions}] = Data,
    Timestamp = erlang:timestamp(),
    NewUser = [{user, UserId, Tweets ++ [{tweet, UserId, Timestamp, Tweet}], Subscriptions}],
    {NewUser, Timestamp}.

subscribe_to_user(Data, UserId, _UserIdToSubscribeTo, SubscribedToWorker_pid) ->
    [{user, UserId, Tweets, Subscriptions}] = Data,
    NewUser = [{user, UserId, Tweets, sets:add_element(SubscribedToWorker_pid, Subscriptions)}],
    NewUser.
%%
%% Test Functions
%%
%% These tests are for this specific implementation. They are a partial
%% definition of the semantics of the provided interface but also make certain
%% assumptions of its implementation. Thus, they need to be reused with care.
%%
initialization_test() ->
    catch unregister(data_actor),
    initialize().

register_user_test() ->
    ServerPid = initialization_test(),
    % We assume here that everything is sequential, and we have simple
    % incremental ids
    ?assertMatch({0, _Pid1}, server:register_user(ServerPid)),
    ?assertMatch({1, _Pid2}, server:register_user(ServerPid)),
    ?assertMatch({2, _Pid3}, server:register_user(ServerPid)),
    ?assertMatch({3, _Pid4}, server:register_user(ServerPid)).

init_for_test() ->
    ServerPid = initialization_test(),
    {0, Pid1} = server:register_user(ServerPid),
    {1, Pid2} = server:register_user(ServerPid),
    {2, Pid3} = server:register_user(ServerPid),
    {3, Pid4} = server:register_user(ServerPid),
    [Pid1, Pid2, Pid3, Pid4].

timeline_test() ->
    Pids = init_for_test(),
    [Pid1, Pid2 | _] = Pids,
    ?assertMatch([], server:get_timeline(Pid1, 1, 0)),
    ?assertMatch([], server:get_timeline(Pid2, 2, 0)).

users_tweets_test() ->
    Pids = init_for_test(),
    [Pid1 | _] = Pids,
    ?assertMatch([], server:get_tweets(Pid1, 1, 0)),
    ?assertMatch([], server:get_tweets(Pid1, 2, 0)).

tweet_test() ->
    Pids = init_for_test(),
    [Pid1, Pid2 | _] = Pids,
    ?assertMatch([], server:get_timeline(Pid1, 1, 0)),
    ?assertMatch([], server:get_timeline(Pid2, 2, 0)),
    ?assertMatch({_, _Secs, _MicroSecs}, server:tweet(Pid1, 1, "Tweet no. 1")),
    ?assertMatch([{tweet, _, _, "Tweet no. 1"}], server:get_tweets(Pid1, 1, 0)),
    ?assertMatch([], server:get_tweets(Pid1, 2, 0)),
    ?assertMatch([{tweet, _, _, "Tweet no. 1"}], server:get_timeline(Pid1, 1, 0)), % own tweets included in timeline
    ?assertMatch([], server:get_timeline(Pid2, 2, 0)), % no subscription
    Pids.

subscription_test() ->
    [_Pid1, Pid2 | _] = tweet_test(),
    ?assertMatch(ok, server:subscribe(Pid2, 2, 1)),
    ?assertMatch([{tweet, _, _, "Tweet no. 1"}], server:get_timeline(Pid2, 2, 0)), % now there is a subscription
    ?assertMatch({_, _Secs, _MicroSecs}, server:tweet(Pid2, 2, "Tweet no. 2")),
    ?assertMatch([{tweet, _, _, "Tweet no. 2"},
                  {tweet, _, _, "Tweet no. 1"}],
                 server:get_timeline(Pid2, 2, 0)),
    done.
Below is the server module which is the interface to my application:
%% This module provides the protocol that is used to interact with an
%% implementation of a microblogging service.
%%
%% The interface is designed to be synchronous: it waits for the reply of the
%% system.
%%
%% This module defines the public API that is supposed to be used for
%% experiments. The semantics of the API here should remain unchanged.
-module(server).

-export([register_user/1,
         subscribe/3,
         get_timeline/3,
         get_tweets/3,
         tweet/3]).

%%
%% Server API
%%

% Register a new user. Returns its id and a pid that should be used for
% subsequent requests by this client.
-spec register_user(pid()) -> {integer(), pid()}.
register_user(ServerPid) ->
    ServerPid ! {self(), register_user},
    receive
        {ResponsePid, registered_user, UserId} -> {UserId, ResponsePid}
    end.

% Subscribe/follow another user.
-spec subscribe(pid(), integer(), integer()) -> ok.
subscribe(ServerPid, UserId, UserIdToSubscribeTo) ->
    ServerPid ! {self(), subscribe, UserId, UserIdToSubscribeTo},
    receive
        {_ResponsePid, subscribed, UserId, UserIdToSubscribeTo} -> ok
    end.

% Request a page of the timeline of a particular user.
% Request results can be 'paginated' to reduce the amount of data to be sent in
% a single response. This is up to the server.
-spec get_timeline(pid(), integer(), integer()) -> [{tweet, integer(), erlang:timestamp(), string()}].
get_timeline(ServerPid, UserId, Page) ->
    ServerPid ! {self(), get_timeline, UserId, Page},
    receive
        {_ResponsePid, timeline, UserId, Page, Timeline} ->
            Timeline
    end.

% Request a page of tweets of a particular user.
% Request results can be 'paginated' to reduce the amount of data to be sent in
% a single response. This is up to the server.
-spec get_tweets(pid(), integer(), integer()) -> [{tweet, integer(), erlang:timestamp(), string()}].
get_tweets(ServerPid, UserId, Page) ->
    ServerPid ! {self(), get_tweets, UserId, Page},
    receive
        {_ResponsePid, tweets, UserId, Page, Tweets} ->
            Tweets
    end.

% Submit a tweet for a user.
% (Authorization/security are not regarded in any way.)
-spec tweet(pid(), integer(), string()) -> erlang:timestamp().
tweet(ServerPid, UserId, Tweet) ->
    ServerPid ! {self(), tweet, UserId, Tweet},
    receive
        {_ResponsePid, tweet_accepted, UserId, Timestamp} ->
            Timestamp
    end.
When your worker process handles a get_timeline request, it calls timeline(...), which sends get_tweets requests to the pids stored in Subscriptions. But Subscriptions contains the worker process itself (note that in your subscribe handler both lists:nth/2 calls use UserId + 1, so SubscribedToWorker_pid is the user's own worker pid). Since that worker is still busy handling the get_timeline, it cannot handle a get_tweets message right now, so you never get an answer and your process is deadlocked.
Tip: make smaller unit tests and add them incrementally as you add functionality, so you can more easily pinpoint when your latest change broke the tests. You can even add a test before you implement the actual feature.
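One way to break the cycle, sketched under the assumption that you keep the current message protocol (collect_tweets is a hypothetical helper, not part of the original code): have the worker skip the request/reply round-trip whenever the "subscribed" pid is the worker itself, and use its local tweets instead.

```erlang
%% Hypothetical helper for timeline/3: never wait on a reply from self().
collect_tweets(Worker_pid, _UserId, _Page, OwnTweets) when Worker_pid =:= self() ->
    OwnTweets;  %% our own tweets are already in local state
collect_tweets(Worker_pid, UserId, Page, _OwnTweets) ->
    Worker_pid ! {self(), get_tweets, UserId, Page},
    receive
        {_ResponsePid, tweets, UserId, Page, Tweets} -> Tweets
    end.
```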

Race-condition while sending http request

I have a situation where I need to dispatch two POST requests, one after the other, and the second depends on the response of the first. The problem is that the second gets sent even before the first's response is stored in the model, thus sending wrong information:
update : msg -> model -> ( model, Cmd msg )
update msg m =
    case msg of
        ...

        Submit ->
            ( m
            , send FirstResp <| post "/resource1" (jsonBody <| encoderX m) int
            )

        FirstResp (Ok x) ->
            ( { m | pid = x }
            , send SecondResp <| post "/resource2" (jsonBody <| encoderY m) int
            )

        ...
I tested it several times. If the server returns 3 in the first post, the pid gets sent as 0; but if I submit again, the pid is sent as 3, and the answer from the server, 4 for example, is ignored.
How can I make the second post wait for the value to be stored?
As data structures in Elm are immutable, { m | pid = x } doesn't change m but returns a new record. So the model you pass to your second request is not updated.
Using { m | pid = x } twice would get you the result you are looking for (but it's not very pretty):
FirstResp (Ok x) ->
    ( { m | pid = x }
    , send SecondResp <| post "/resource2" (jsonBody <| encoderY { m | pid = x }) int
    )
You can use let in to store the new model in a variable before you send the request. If you modify the model now you only have to look in one place.
FirstResp (Ok x) ->
    let
        newM = { m | pid = x }
    in
        ( newM
        , send SecondResp <| post "/resource2" (jsonBody <| encoderY newM) int
        )
If you don't need the result of the first request in your model, the even better solution is to chain the requests with Task.andThen. With this you don't need two separate messages (FirstResp, SecondResp).
request1 m =
    post "/resource1" (jsonBody <| encoderX m) int
        |> Http.toTask

request2 m =
    post "/resource2" (jsonBody <| encoderY m) int
        |> Http.toTask

Submit ->
    ( m
    , request1 m
        |> Task.andThen request2
        |> Task.attempt Resp
    )

Resp (Ok res2) ->
    -- res2 is the result of your request2
If you need both results you can map them into a Tuple and extract it in the update function.
Submit ->
    ( m
    , request1 m
        |> Task.andThen
            (\res1 ->
                request2 res1
                    |> Task.map ((,) res1)
            )
        |> Task.attempt Resp
    )

Resp (Ok ( res1, res2 )) ->
    -- use res1 and res2
Elm packages - Task

Elm divide subscription?

I'm playing with Elm and WebRTC, so I made a listen port which gets some messages from js:
type alias Message =
    { channel : String
    , data : String
    }

port listen : (Message -> msg) -> Sub msg
Now I would like to be able to distribute the messages to different parts of my app. For instance, the chat uses the "chat" channel, while the game logic uses "game".
Is it possible to create a listenTo String subscription that filters for the messages on a given channel (returning only the data)? Or perhaps there is a different way of doing it?
Update:
What I currently have is something like this:
In my main.elm I have an update that looks like this. It can receive messages (from WebRTC) itself, and forward chat messages to Chat. (I would later add a ForGame as well.)
type Msg = Received WebRTC.Message | ForChat Chat.Msg

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Received message ->
            let
                _ = Debug.log ("Received message on \"" ++ message.channel ++ "\": " ++ message.data)
            in
                ( model
                , Cmd.none
                )

        ForChat msg ->
            let
                ( chatModel, chatCmd ) = Chat.update msg model.chat
            in
                ( { model | chat = chatModel }, Cmd.map ForChat chatCmd )
Then I have a subscriptions function that combines all my subscriptions:
subscriptions : Model -> Sub Msg
subscriptions model =
    Sub.batch
        [ WebRTC.listen Received
        , Sub.map ForChat <| Chat.subscriptions model.chat
        ]
In Chat.elm I have a similar structure, with an update that handles its messages. The chat's subscription listens to all messages from WebRTC, but keeps only the ones on the chat channel:
subscriptions : Model -> Sub Msg
subscriptions model = WebRTC.listen forChatMessages

forChatMessages : WebRTC.Message -> Msg
forChatMessages webrtcMessage =
    if webrtcMessage.channel == "chat" then
        let
            message = decodeMessage webrtcMessage.data
        in
            case message of
                Ok msg -> Receive msg
                Err error -> Debug.log ("Received unreadable message on chat channel \"" ++ toString webrtcMessage.data ++ "\" with error \"" ++ error ++ "\"") Ignore
    else
        Ignore
(Ignore is a Msg for chat, which just does nothing case msg of Ignore -> (model, Cmd.none). decodeMessage uses a decoder to decode a message decodeMessage : String -> Result String Message.)
I'm quite happy with this, because this way all logic for chat is in Chat.elm. So main.elm doesn't need to know what channels chat is using. Chat just follows the standard structure (Msg, update, view, subscriptions) and main forwards everything.
The only thing that's still not great is that in Chat.elm I have the forChatMessages function, used like subscriptions model = WebRTC.listen forChatMessages. I would like to make this more reusable, so it would become something like:
subscriptions model = WebRTC.listen <| for "chat" decodeMessage Receive Ignore
It would then be reusable by the game:
subscriptions model = WebRTC.listen <| for "game" decodeGameInfo UpdateInfo Ignore
Update 2:
I managed to generalize the forChatMessages function into:
for : String -> (String -> Result String d) -> (d -> msg) -> msg -> Message -> msg
for channel decoder good bad webrtcMessage =
    if webrtcMessage.channel == channel then
        let
            decoded = decoder webrtcMessage.data
        in
            case decoded of
                Ok data -> good data
                Err error -> Debug.log ("Failed decoding message on " ++ channel ++ " channel \"" ++ toString webrtcMessage.data ++ "\" with error \"" ++ error ++ "\"") bad
    else
        bad
So I think I found the solution myself, unless someone has comments on this. Perhaps there is a cleaner/nicer/better way of doing the same?
Let's say you have the following Msg definition:
type Msg
    = Listen Message
    | GameChannel String
    | ChatChannel String
Your update function could then act upon the channel value and call update again with the correct channel, ignoring all channel values except for "game" and "chat":
update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Listen message ->
            case message.channel of
                "game" ->
                    update (GameChannel message.data) model

                "chat" ->
                    update (ChatChannel message.data) model

                _ ->
                    model ! []

        GameChannel data ->
            ...

        ChatChannel data ->
            ...
Your subscription function would look something like this:
subscriptions : Model -> Sub Msg
subscriptions model =
    listen Listen
I found a solution myself, and added it to the original question.
For clarity, this is the short version:
In my main.elm:
type Msg = Received WebRTC.Message | ForChat Chat.Msg

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        Received message ->
            let
                _ = Debug.log ("Received message on \"" ++ message.channel ++ "\": " ++ message.data)
            in
                ( model
                , Cmd.none
                )

        ForChat msg ->
            let
                ( chatModel, chatCmd ) = Chat.update msg model.chat
            in
                ( { model | chat = chatModel }, Cmd.map ForChat chatCmd )

subscriptions : Model -> Sub Msg
subscriptions model =
    Sub.batch
        [ WebRTC.listen Received
        , Sub.map ForChat <| Chat.subscriptions model.chat
        ]
In Chat.elm:
subscriptions : Model -> Sub Msg
subscriptions model = WebRTC.listen <| for "chat" decodeMessage Receive Ignore
In WebRTC.elm:
type alias Message =
    { channel : String
    , data : String
    }

port listen : (Message -> msg) -> Sub msg

for : String -> (String -> Result String d) -> (d -> msg) -> msg -> Message -> msg
for channel decoder good bad webrtcMessage =
    if webrtcMessage.channel == channel then
        let
            decoded = decoder webrtcMessage.data
        in
            case decoded of
                Ok data -> good data
                Err error -> Debug.log ("Failed decoding message on " ++ channel ++ " channel \"" ++ toString webrtcMessage.data ++ "\" with error \"" ++ error ++ "\"") bad
    else
        bad

Erlang Apple Push notification not getting response-error before disconnect

I'm currently testing my push notification module a bit.
When the device token is invalid, it disconnects...
According to the Apple push notification developer documentation, I should get an error-response packet just before the Apple push server disconnects...
The thing is, I do get disconnected, but I don't get anything on the socket just before that, and I need to know whether the push failed because of a malformed push (so I can fix the bug) or an invalid device token (so I can remove it from the database).
Here's my code:
-module(pushiphone).
-behaviour(gen_server).

-export([start/1, init/1, handle_call/3, handle_cast/2, code_change/3, handle_info/2, terminate/2]).
-import(ssl, [connect/4]).

-record(push, {socket, state, cert, key}).

start(Provisioning) ->
    gen_server:start_link(?MODULE, [Provisioning], []).

init([Provisioning]) ->
    gen_server:cast(self(), {connect, Provisioning}),
    {ok, #push{}}.

send(Socket, DT, Payload) ->
    PayloadLen = length(Payload),
    DTLen = size(DT),
    PayloadBin = list_to_binary(Payload),
    Packet = <<0:8,
               DTLen:16/big,
               DT/binary,
               PayloadLen:16/big,
               PayloadBin/binary>>,
    ssl:send(Socket, Packet).

handle_call(_, _, P) ->
    {noreply, P}.

handle_cast({connect, Provisioning}, P) ->
    case Provisioning of
        dev -> Address = "gateway.sandbox.push.apple.com";
        prod -> Address = "gateway.push.apple.com"
    end,
    Port = 2195,
    Cert = "/apns-" ++ atom_to_list(Provisioning) ++ "-cert.pem",
    Key = "/apns-" ++ atom_to_list(Provisioning) ++ "-key.pem",
    Options = [{certfile, Cert}, {keyfile, Key}, {password, "********"}, {mode, binary}, {active, true}],
    Timeout = 1000,
    {ok, Socket} = ssl:connect(Address, Port, Options, Timeout),
    {noreply, P#push{socket=Socket}};
handle_cast(_, P) ->
    {noreply, P}.

handle_info({ssl, Socket, Data}, P) ->
    <<Command, Status, SomeID:32/big>> = Data,
    io:fwrite("[PUSH][ERROR]: ~p / ~p / ~p~n", [Command, Status, SomeID]),
    ssl:close(Socket),
    {noreply, P};
handle_info({push, message, DT, Badge, [Message]}, P) ->
    Payload = "{\"aps\":{\"alert\":\"" ++ Message ++ "\",\"badge\":" ++ Badge ++ ",\"sound\":\"" ++ "msg.caf" ++ "\"}}",
    send(P#push.socket, DT, Payload),
    {noreply, P};
handle_info({ssl_closed, _SslSocket}, P) ->
    io:fwrite("SSL CLOSED !!!!!!~n"),
    {stop, normal, P};
handle_info(AnythingElse, P) ->
    io:fwrite("[ERROR][PUSH][ANYTHING ELSE] : ~p~n", [AnythingElse]),
    {noreply, P}.

code_change(_, P, _) ->
    {ok, P}.

terminate(_, _) ->
    ok.
It works great when the payload and the device token are both valid. If the device token is invalid, it only gets disconnected.
Can anyone spot the issue? After 4 hours of searching, I have only found out that I clearly can't!
Here's the error-response table :
Status code Description
0 No errors encountered
1 Processing error
2 Missing device token
3 Missing topic
4 Missing payload
5 Invalid token size
6 Invalid topic size
7 Invalid payload size
8 Invalid token
255 None (unknown)
You seem to be using the simple notification format defined in Figure 5-1 of the Apple documentation you've linked to (judging by your send/3 function). When this format is used, no error response is provided when the request is malformed; you just get the disconnect.
To get the error response, you should use the enhanced notification format detailed in Figure 5-2.
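A sketch of what send/3 could look like with the enhanced format (untested; send_enhanced is a hypothetical name, and the field layout follows the binary interface described in that documentation):

```erlang
%% Enhanced notification format: command 1 carries a notification
%% identifier (echoed back in the error-response) and an expiry time.
send_enhanced(Socket, Id, Expiry, DT, Payload) ->
    PayloadBin = list_to_binary(Payload),
    Packet = <<1:8,                                %% command: enhanced
               Id:32/big,                          %% notification identifier
               Expiry:32/big,                      %% expiry (epoch seconds)
               (size(DT)):16/big, DT/binary,       %% device token
               (size(PayloadBin)):16/big, PayloadBin/binary>>,
    ssl:send(Socket, Packet).
```

The error-response you then receive is <<8, Status, Id:32/big>>, where Id matches the identifier you sent, so you can tell which notification failed.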