I added push notifications that are triggered by user events. Based on certain conditions, I send notifications from my server, and since I want them to persist, I also store them in my database. My code for receiving notifications feels strange to me, because I display them after listening:
useEffect(() => {
  notificationListener.current = Notifications.addNotificationReceivedListener(notification => {
    // handle / display the incoming notification here
  });
  // clean up the subscription when the component unmounts
  return () => notificationListener.current?.remove();
}, []);
At the same time, I always save them to the database first, so I can get them either through the listener or by fetching them from my database.
Is it common to store them in a database? Do I just listen for new ones coming in and also fetch the stored ones from the database?
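Yes, persisting server-sent notifications in your own database is common when they need to survive app restarts. The usual pattern is: fetch the stored ones on mount, listen for new ones, and merge both into one list. A minimal sketch of the merge step (the id and createdAt field names are assumptions about your schema):

```javascript
// Merge notifications fetched from the DB with ones arriving live,
// de-duplicating by id (field names are assumptions about your schema).
function mergeNotifications(fromDb, incoming) {
  const byId = new Map();
  for (const n of [...fromDb, ...incoming]) {
    byId.set(n.id, n); // later entries win, so a live copy replaces the DB copy
  }
  // newest first, assuming a numeric createdAt timestamp
  return [...byId.values()].sort((a, b) => b.createdAt - a.createdAt);
}
```

Inside the listener callback you would then call something like setNotifications(prev => mergeNotifications(prev, [newNotification])), after fetching the persisted ones once on mount.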
The app I'm currently working on is supposed to get "live" data through a WebSocket after obtaining the correct WebSocket address (access token). However, I'm not sure how I can run it in the "background" — that is, maintain the connection and keep receiving data after changing the components currently shown on screen, since I'd like to be able to move to different parts of the app without losing the data sent in the meantime.
How can I run the WebSocket in the background, and have the received data saved for use so that the component that depends on it is still able to access the data that was sent while it was not active?
I've thought about having the WebSocket save the events it receives directly to a Redux store, however I'm not sure what would be the correct way to set this up so that it runs all the time, independent of navigation between screens.
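One common approach is exactly what you describe: create the socket once in a module at app start, outside any component, and dispatch every message into the store; screens then subscribe to the store rather than to the socket, so navigation never tears the connection down. A minimal sketch, with a hand-rolled stand-in for Redux so it stays self-contained (the URL and action type are placeholders):

```javascript
// Minimal stand-in for a Redux store: the real thing would be
// createStore(reducer) from redux, but the wiring is the same.
function createTinyStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((l) => l(state));
    },
    subscribe: (l) => listeners.push(l),
  };
}

const store = createTinyStore(
  (state, action) =>
    action.type === 'WS_MESSAGE'
      ? { ...state, events: [...state.events, action.payload] }
      : state,
  { events: [] }
);

// Module-level socket: created once at app start, not inside any component,
// so switching screens never closes it. The URL is a placeholder.
function connect(url, storeRef) {
  const ws = new WebSocket(url);
  ws.onmessage = (e) =>
    storeRef.dispatch({ type: 'WS_MESSAGE', payload: JSON.parse(e.data) });
  return ws;
}
```

A component that mounts later simply reads store.getState().events and sees everything that arrived while it was not on screen.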
I tried the following code:
// remove participant tracks
vm.tc.currentVideoRoom.participants.forEach((remoteParticipant) => {
  remoteParticipant.tracks.forEach((track) => {
    console.log(track); // here I found the remote participant's video and audio tracks
    track.stop(); // but here I get a "track.stop is not a function" error
  });
});
I checked the Twilio Video documentation but didn't find a solution there. I also checked a GitHub issue (link here), where someone mentions that stopping remote participants' tracks is not possible.
So how can I stop a remote participant's tracks using Twilio?
You can't stop a remote track like that. A remote track object is a representation of the stream coming from the remote participant. A local track, by contrast, represents the stream coming from the device's own camera or microphone, so when you call stop on it, it stops interacting with that hardware.
So you cannot call stop on a remote track, because that would imply you're trying to stop the track coming from the remote participant's camera or microphone.
If you want to stop seeing or hearing a remote track, you can detach the track from the page. If you want to unsubscribe from the track so that you stop receiving the stream of it, you can use the track subscriptions API. And if you want to actually stop the remote participant's device, you would have to send a message to the remote participant (possibly via the DataTrack API) and have them execute the track.stop() locally.
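To make the last option concrete, here is a sketch of the "ask the remote side to stop" flow. The message shape ({ action, kind }) and helper names are my own convention, not part of Twilio's API; only the DataTrack's send and the local track's stop are Twilio calls.

```javascript
// Requesting side: ask a remote participant to stop their own track.
// Sent over a DataTrack, e.g. localDataTrack.send(buildStopRequest('video')).
function buildStopRequest(kind) {
  return JSON.stringify({ action: 'stop-track', kind }); // kind: 'audio' | 'video'
}

// Remote side: runs inside the DataTrack 'message' handler. It stops the
// matching *local* tracks (localTracks would come from the connected room).
// Returns true if the message was a stop request we handled.
function handleStopRequest(message, localTracks) {
  const { action, kind } = JSON.parse(message);
  if (action !== 'stop-track') return false;
  localTracks.filter((t) => t.kind === kind).forEach((t) => t.stop());
  return true;
}
```

If you only need to stop seeing/hearing the track locally, detaching is enough: track.detach().forEach(el => el.remove()).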
I'm new to WebRTC and I have a strange issue. In a one-to-one call the onaddstream event fires and I get its response, but when a third person joins the room, onaddstream does not fire. Can anyone help me resolve this? I've added my code below; please review it and help me get the event for all the remote users.
var pc = new RTCPeerConnection(configuration);

pc.onaddstream = (remoteaddstream) => {
  console.log(remoteaddstream);
};

navigator.mediaDevices.getUserMedia({
  audio: true,
  video: true,
}).then(stream => {
  var localstreamid = stream.id;
  console.log("stream id: " + localstreamid);
  pc.addStream(stream);
}, function(e) {
  console.log(e);
});
You need multiple peer connections to connect more than 2 parties, or alternatively you can make all the parties connect to a server that forwards the data.
From https://medium.com/#khan_honney/webrtc-servers-and-multi-party-communication-in-webrtc-6bf3870b15eb
Mesh Topology
Mesh is the simplest topology for a multiparty application. In this topology, every participant sends and receives its media to all other participants. We said it is the simplest because it is the most straightforward method.
Mixing Topology and MCU
Mixing is another topology where each participant sends its media to a central server and receives media from the central server. This media may contain some or all of the other participants' media.
Routing Topology and SFU
Routing is a multiparty topology where each participant sends its media to a central server and receives all other’s media from the central server.
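For the mesh approach, the key change to the code above is keeping one RTCPeerConnection per remote user instead of a single pc. A sketch with the connection factory injected so it stays independent of the browser API (in the app it would be () => new RTCPeerConnection(configuration); signaling is omitted):

```javascript
// Mesh topology sketch: one peer connection per remote user, keyed by user id.
function createPeerMap(createConnection) {
  const peers = new Map();
  return {
    // called when a user joins the room; reuses an existing connection if any
    add(userId) {
      if (!peers.has(userId)) peers.set(userId, createConnection());
      return peers.get(userId);
    },
    // called when a user leaves: close and forget their connection
    remove(userId) {
      const pc = peers.get(userId);
      if (pc) {
        pc.close();
        peers.delete(userId);
      }
    },
    size: () => peers.size,
  };
}
```

Each connection gets its own onaddstream (or, in the modern API, ontrack) handler, so every remote user's media arrives on its own peer connection.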
There is a topic subscription function in Firebase Cloud Messaging.
But how do I differentiate which received messages belong to which topic?
For example, I subscribe to a topic:
Messaging.messaging().subscribe(toTopic: "news")
And when I send a message, I receive it from the back end in the app in this format.
The full message is: [AnyHashable("google.c.a.e"): 1, AnyHashable("google.c.a.ts"): 1500271703, AnyHashable("google.c.a.udt"): 0, AnyHashable("gcm.n.e"): 1, AnyHashable("aps"): {
alert = "google is hello world";
}, AnyHashable("google.c.a.c_id"): 967226232057261708, AnyHashable("gcm.message_id"): 0:1500271704062691%515abe1d515abe1d]
As we can see, the message we receive contains no "topic" field. So how do we know whether this message was sent under the "news" topic or another topic?
Thanks
Firebase Cloud Messaging works like triggers on a database: in our code we define one receiver to receive any trigger call, and a handler to process it when a trigger call arrives. Every trigger is defined by a unique identifier that drives this whole procedure.
Firebase Cloud Messaging helps implement instant chat functionality in an application, because these triggers connect directly to the Firebase database. Whenever we do anything in the database — an addition, deletion, insertion, or update — the trigger is automatically reflected in the project.
Thanks
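A common workaround for the question above: since the delivered payload carries no topic field, include the topic yourself in the data payload when sending. A sketch of a message builder for a Node.js backend using firebase-admin (the custom "topic" data key is our own convention, not an FCM field):

```javascript
// Build an FCM topic message that also carries the topic name in its data
// payload, so the client can tell topics apart on receipt.
function buildTopicMessage(topic, title, body) {
  return {
    topic, // where FCM delivers it
    notification: { title, body },
    data: { topic }, // custom key: the client reads this to identify the topic
  };
}
```

The server would send it with admin.messaging().send(buildTopicMessage('news', 'Title', 'Body')), and the iOS app can then read the "topic" key out of the received userInfo dictionary.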
Basically I have two distinct services I wish to use (my own WCF back end service) and an Azure Mobile Service that both use push notifications. They're associated with the same app in the windows store.
In my code, I have two separate modules that call:
var newChannel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
This all seemed like fun and games, and unless I horribly misread the documentation, having multiple channels for one app should be OK.
However, when I sent a notification from the WCF service to the app, it went to the AMS handler and naturally threw an invalid format exception given that I'm using my own Raw push notification format.
So my question is this: do I need to re-engineer the structure to have only one push channel handler that divides messages based on their format and routes them to the correct handlers, or what methodology should I follow to get multiple push channels working for a single app?
The only formats supported by WNS push notifications are the XML-based format and the JSON data format. If you send some other format when communicating with WNS, it is bound to throw exceptions. Go through the demo from this link:
Push notification sample
If this does not solve the problem, please leave a comment.
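If you keep a single channel instead of two, one handler can route each raw payload to the correct module by inspecting its shape: XML for the WNS toast/tile notifications, JSON for the custom raw service. A sketch (the handler names are placeholders for your own modules):

```javascript
// Route a raw push payload to the right module based on its format.
function routePayload(payload, handlers) {
  const text = payload.trim();
  if (text.startsWith('<')) return handlers.xml(text); // WNS toast/tile XML
  try {
    return handlers.json(JSON.parse(text)); // raw JSON service
  } catch (e) {
    return handlers.unknown(text); // anything else
  }
}
```

With this, both services can share the single channel returned by CreatePushNotificationChannelForApplicationAsync, and the format mismatch exception goes away.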