Basically, I'm building a Dropbox clone that will avoid cloud storage. OK, I'm not actually building it yet; I'm trying to estimate the amount of work needed.
Been reading about different p2p options here on SO, but actually there are very few topics on centralised p2p connections and how to build them from the ground up. I'm not even sure if it's appropriate to call it p2p at all.
From my ActionScript background I know that it can establish a UDP connection between 2 different clients across the globe via a provided centralised server (RTMFP). It's highly abstracted: it doesn't even require opening ports, and the clients don't know each other's IPs. So the set of options it gives you is quite limited.
Anyway, I need to create a server-side app and a client-side app that will try to sync files between connected clients. I've read that socket connections are used for file transfers. And the questions are:
How to pair the clients?
What should server do?
What should client do?
Thank you.
NB
Solutions for establishing connections and for the file syncing itself are outside the scope of this question.
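For illustration, here is roughly the pairing step I have in mind, as a toy Node/TypeScript sketch (the port, the first-come-first-paired rule, and the message shape are all invented for the example):

```typescript
import * as net from "net";

// Toy rendezvous server: clients connect, the server pairs them up two at a
// time and tells each one the other's observed address. Everything here
// (port 9000, first-come-first-paired, the JSON message) is made up just to
// illustrate the pairing idea.
const waiting: net.Socket[] = [];

net.createServer((sock) => {
  const peer = waiting.pop();
  if (!peer) {
    waiting.push(sock); // first of a pair: park it until a partner arrives
    return;
  }
  // Introduce the two clients to each other by exchanging observed endpoints.
  sock.write(JSON.stringify({ peer: `${peer.remoteAddress}:${peer.remotePort}` }) + "\n");
  peer.write(JSON.stringify({ peer: `${sock.remoteAddress}:${sock.remotePort}` }) + "\n");
}).listen(9000);
```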
Related
I am developing a group-call app like Google Meet, using WebRTC with an SFU for routing.
My project works well, but when I open chrome://webrtc-internals to check the WebRTC connection status and compare it with Google Meet, I see a difference.
Google Meet:
only 1 peer connection is active.
My project:
1 peer connection active for broadcast.
n-1 peer connections active as consumers.
So if there are 5 users in a room, each client also has 5 active peer connections (1 as broadcaster, 4 as consumers).
So my question is: how can I use only 1 peer connection as a consumer, or use 1 peer connection for both broadcasting and consuming? Maybe my method is wrong, or I have misunderstood the SFU implementation.
Any suggestions or solutions?
I am still discovering/learning the WebRTC stack and related architectures, so take what I am saying with a grain of salt.
With an SFU architecture you can use several strategies to distribute the streams between your clients. In all cases, you save upstream bandwidth for the local user, because his streams are sent to the SFU only once.
As you state, for n users you can open 1 RTCPeerConnection with the SFU for the local user and n-1 RTCPeerConnections for the remote users.
Alternatively, you can open only one RTCPeerConnection with the SFU for any number of users in the "room". To achieve this, when a new user enters the SFU session, his streams need to be added to the tracks of the PeerConnection present at the SFU. This triggers a renegotiation through signaling, and your users will know a new track (stream) has been added. The client (JavaScript code, for example) then needs to associate the new tracks with a specific user; for that, you can include that user's information in the signaling payload. From the point of view of a given user, these new tracks (audio + video) will correspond to a new user.
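To make the client side of this second approach concrete, here is a minimal browser sketch in TypeScript. The signaling message shape ({ trackId, userId }) is my own invention, and matching on track.id assumes the SFU preserves track ids in the SDP; some implementations key on the transceiver's mid instead.

```typescript
// Single RTCPeerConnection to the SFU; a signaling message announces which
// user owns each track that is about to arrive via renegotiation.
type TrackAnnouncement = { trackId: string; userId: string };

function setupConsumer(signaling: WebSocket): RTCPeerConnection {
  const pc = new RTCPeerConnection();
  const trackOwner = new Map<string, string>(); // track id -> user id

  // The SFU tells us who owns each newly added track (assumed payload shape).
  signaling.addEventListener("message", (ev) => {
    const msg: TrackAnnouncement = JSON.parse(ev.data);
    trackOwner.set(msg.trackId, msg.userId);
  });

  // Every remote user's media arrives as new tracks on this one connection.
  pc.ontrack = (ev) => {
    const userId = trackOwner.get(ev.track.id) ?? "unknown";
    const video = document.createElement("video");
    video.autoplay = true;
    video.srcObject = new MediaStream([ev.track]);
    video.dataset.userId = userId; // tag the element with its owner
    document.body.appendChild(video);
  };

  return pc;
}
```

The renegotiation itself (onnegotiationneeded, offer/answer exchange over the same signaling channel) is omitted here.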
The first approach is simpler but takes more resources: more ICE candidates to gather, more STUN requests, more connections to the SFU, etc.
The second one is more efficient but harder to implement, both on the client and the server.
Here is a link to bloggeek.me, which provides excellent resources for WebRTC and talks about these two approaches far better than I can.
The post states that the Jitsi server uses only one peer connection with the SFU per user.
Other strategies exist. LiveKit server, an SFU implementation in Go, uses 2 PeerConnections per user: one for publishing the local user's streams and a second for receiving the streams of all other users. Here is a link to the client protocol of LiveKit server.
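That two-connection shape would look roughly like this on the client (a sketch only, not the actual LiveKit client code):

```typescript
// One send-only connection to publish local media, one receive-only
// connection on which all remote users' streams arrive. The real LiveKit
// protocol layers signaling, negotiation and reconnection on top of this.
function createTwoConnections(localStream: MediaStream) {
  const publisher = new RTCPeerConnection();
  const subscriber = new RTCPeerConnection();

  // Publisher: the local user's tracks go up to the SFU exactly once.
  for (const track of localStream.getTracks()) {
    publisher.addTrack(track, localStream);
  }

  // Subscriber: tracks of every other user arrive on this one connection.
  subscriber.ontrack = (ev) => {
    console.log("remote track:", ev.track.kind, ev.track.id);
  };

  return { publisher, subscriber };
}
```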
For approaches 2 and 3, I really don't know how SFU servers wire up all these streams correctly between each PeerConnection and a local user; it seems very specific to each project.
You have to check the API of the SFU server you are using and see what it allows. But what you are looking for is definitely possible, given the "right" project for your use case.
On the client side, it depends on the project you are using too.
If you are in the early stages of your project, you could check out LiveKit server. It is an open-source project (Apache 2.0 license) developed in Go, and it provides a lot of interesting features out of the box: auto-scaling of SFU instances through Redis, a Kubernetes setup, client libraries in JavaScript and Flutter, a server SDK to interact with SFU instances in various languages, etc. The ecosystem seems really nice and the documentation is good too.
Hope it helps a bit
Both Pipes and ASP.NET Core gRPC support local and remote IPC/RPC (with some platform limitations for gRPC)
When would I use one technology (Pipes) or the other (gRPC)?
Observations, thoughts and considerations I'm keeping in mind:
gRPC seems to be geared towards replacing WCF in some future iteration.
local deployments with machine restrictions (running as non-admin/user, machine firewalls, different platforms/OSes)
network traversal, and compatibility when moving from same-machine to multi-machine (frontend/backend arrays) for load and expansion
spanning secure zones (where a proxy is used, or another TLS cipher/order/registry setting applies) affects the ability of HTTP/2 to work
Pipes (named pipes?) have a different surface area and port (do they also use port 135, or NetBIOS over TCP? not sure of the name)... how are they scanned and secured?
"memory mapped files" seem to be a challenge to get working; however, gRPC seems to work in ASP.NET Core in the UDS configuration. Is this a correct inference?
Right now my scenario is to have two console apps communicate with each other, on the same machine or remotely. Adding an ASP.NET Core web front end is an optional alternative for my scenario.
Simple IPC
It depends on how much communication is going to happen. If your communication is limited to simple collaborative signal passing or sharing some data between two processes, you can safely use NamedPipeClientStream and NamedPipeServerStream on the local system or local network; but if you plan to do the same across different systems, then I would suggest using TcpClient and TcpListener.
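As a sketch of the "simple signal passing" case, here is the same shape written with Node/TypeScript's net module (which speaks named pipes via the \\.\pipe\ prefix on Windows, as well as TCP); the pipe name and the one-line protocol are invented for the example:

```typescript
import * as net from "net";

// Invented pipe name; on a network, replace the pipe path with a host/port
// pair and the same code runs over TCP instead.
const PIPE = "\\\\.\\pipe\\demo-signal-pipe";

// "Server" process: waits for simple newline-delimited signals.
const server = net.createServer((conn) => {
  conn.on("data", (buf) => console.log("got signal:", buf.toString().trim()));
});
server.listen(PIPE);

// "Client" process: sends one signal and hangs up.
const client = net.connect(PIPE, () => {
  client.end("WORK_DONE\n");
});
```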
Comprehensive IPC
WCF, or now its replacement gRPC, is for scenarios where a complete API/framework needs to be executed remotely. For example, I have an entire library of classes which I need to call from a different process (which mostly runs on a different system); in that case, gRPC-style solutions make more sense.
Only you can decide.
This is a design decision that is highly specific to your application, your future plans, and your system environment; a third person can only give you clues, but ultimately you are the only one who can make the right decision.
I am building a project which requires a constant connection with the server.
There are two major ways to achieve this:
Ajax pull
Ajax push
I have to decide between pinging the server (expensive) and maintaining keep-alive connections (firewalls block those).
Then I thought about live video streams: they are neither keep-alive connections nor frequent pings.
Is it possible to send data, like JSON strings, through RTMP?
It would be theoretically possible to implement RTMP's AMF3 and AMF0 message types to carry the data; see RTMP [Wikipedia].
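For example, AMF0 encodes a string value as a one-byte type marker (0x02), a 16-bit big-endian length, and the UTF-8 bytes. A sketch of just that value encoding in TypeScript (wrapping it in actual RTMP chunk-stream headers is a much bigger job):

```typescript
// Encode a JS string as an AMF0 string value: marker byte 0x02, then a
// 16-bit big-endian length, then the UTF-8 bytes.
function encodeAmf0String(value: string): Buffer {
  const utf8 = Buffer.from(value, "utf8");
  if (utf8.length > 0xffff) {
    throw new RangeError("AMF0 short strings are limited to 65535 bytes");
  }
  const out = Buffer.alloc(3 + utf8.length);
  out.writeUInt8(0x02, 0);           // AMF0 string type marker
  out.writeUInt16BE(utf8.length, 1); // payload length
  utf8.copy(out, 3);
  return out;
}

// e.g. smuggling a JSON payload as an AMF0 value:
const payload = encodeAmf0String(JSON.stringify({ user: "alice", msg: "hi" }));
```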
The problem is that using a protocol typically used for streaming video might get your connection blocked or throttled by some service providers that limit such protocols to conserve bandwidth (and prevent employees from watching internet videos at work).
This article may be of some use to you. It explains how to set up an RTMP server with nginx.
From the article:
nginx is an extremely lightweight web server, but someone wrote a RTMP module for it, so it can host RTMP streams too. However, to add the RTMP module, we have to compile nginx from source rather than use the apt package. Don't worry, it's really easy. Just follow these instructions. :)
One comment on this article, by a user named 'stefaniuk', linked to a GitHub repository for this that I think you should look into. Check it out here.
I've written a simple server application which will run distributed on several machines.
My question is: how does a network load balancer work, in general?
I've heard of round-robin and other algorithms, but what I haven't gotten an answer to is how the process really goes, in socket terms.
Does the client connect to one of the load balancer machines, ask for a "free-to-connect-to" server, and simply connect to it?
That's the simplest way I can think of.
Or does it use the load balancer as a proxy (which implies that all the LBs must stay connected to the application servers, and data is transferred through them)?
It's more of a general question. How would you do this?
Thank you all!
There are several different ways to load balance an application. Some are physical devices that sit between your router and the servers; some are software based with a bit of code that runs on each of the load balanced devices.
Microsoft has load balancing built into Windows which is all software based. It's pretty good and easy to set up.
However, I'll cover the physical route.
There are several algorithms here, but the main one is Round Robin with an option for "sticky" sessions. Sticky in this case means that the load balancer will try to keep a history of clients and forward requests from the same client to the same machine. This means the load balancer needs to keep a list of clients and where it directed those clients. Depending on cache size, clients may fall off the list and on future requests they may be forwarded to a different server.
Round Robin is a pretty simple idea: for each request that comes in, send it to the next server in the list. More complicated algorithms might take into account how many requests go to a particular server and how long those requests are taking, and then try to rebalance new requests to favor faster servers. This part is complicated, though.
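A toy version of those two ideas together, in TypeScript (the server list and the choice of client IP as the sticky key are made up for the example):

```typescript
// Plain round robin plus a "sticky" lookup keyed by client address.
const servers = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"];
let next = 0;
const sticky = new Map<string, string>(); // client ip -> chosen server

function pickServer(clientIp: string): string {
  const existing = sticky.get(clientIp);
  if (existing) return existing; // same client keeps the same backend

  const chosen = servers[next];
  next = (next + 1) % servers.length; // advance the round-robin cursor
  sticky.set(clientIp, chosen);       // remember the choice for this client
  return chosen;
}
```

A real balancer would bound the sticky map (an LRU cache, say), which is exactly why clients can fall off the list as described above.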
I want to communicate XML-serialized objects from the server to the client and the other way around. Now, it is (probably) easy to invoke methods from a mobile client (Compact Framework) using WCF, but is there a way for the server to invoke methods on the client side, or some other way to pull data from the client? I know that callback contracts are not available in the Compact Framework, as you can see here: http://blogs.msdn.com/andrewarnottms/archive/2007/09/13/calling-wcf-services-from-netcf-3-5-using-compact-wcf-and-netcfsvcutil-exe.aspx
Originally I thought of socket programming and of developing this by myself; then someone here mentioned WCF. But it seems like WCF would only work in a non-mobile environment, as I need callbacks.
Can anyone help me with this? Is it possible to develop two-way communication between a desktop server and multiple mobile clients using WCF, or will I have to do socket programming?
Thanks for any advice or any kind of help!
@ctacke
Thank you for your help. I actually stumbled across your Padarn web server before; haven't really checked it out yet, but I definitely will do that later on. Anyway, a socket solution does not seem that bad at the moment. In the meanwhile I figured that it is quite easy to send data from multiple clients to a 'socket server'. If I can manage those connections somehow, I can send data back to the clients. And then I would have to come up with some kind of protocol which handles the data or commands I send over the network... I guess the hardest part would be to make up such a protocol, as I do not have a clue about that atm...
Even if you go to sockets it might be a bit difficult due to routing, carrier filtering and NAT translations (you've not mentioned what your actual network topology is). This is the reason that most mobile applications have to poll the server, even if it is a "push" paradigm (like Exchange's push mechanism, where the client actually polls).
Generally speaking, unless you're on something like a local wireless network where you have solid, routable, unfiltered network access, the client should periodically call the server and ask if the server has data. If it does, then it pulls the data from the server.
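That polling pattern is tiny in code. A hedged TypeScript sketch, just to show the shape (the /changes and /data endpoints and the { hasData } response are assumptions for the example):

```typescript
// Client-side poll loop: ask the server whether it has data, and pull it
// only when the answer is yes.
async function pollLoop(baseUrl: string, intervalMs = 5000): Promise<void> {
  while (true) {
    try {
      const res = await fetch(`${baseUrl}/changes`);
      const { hasData } = await res.json();
      if (hasData) {
        const data = await (await fetch(`${baseUrl}/data`)).json();
        console.log("pulled:", data);
      }
    } catch {
      // Network hiccup (carrier filtering, NAT timeout, etc.): retry later.
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```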
EDIT
Now that we know a little more about your topology from your comment, I can steer you a little more. Unfortunately, Microsoft has not made it easy for Windows CE devices to host services (WCF or otherwise). There is, in theory, the required infrastructure to build up your own WCF channel and actually host a service, but it's not a trivial task. I looked into it quite some time ago and figured it was a couple months of work, and that would have been with the assistance of someone in Redmond who knew how the existing Exchange channel works.
Personally, I'd opt for hosting a REST-based web service using our Padarn web server, because it's simple to do and I've done it for quite a number of clients now. I realize that it's a little self-serving to propose Padarn as a solution, but the entire reason I implemented custom IHttpHandlers in Padarn was that I couldn't find anything else out there that provided an easy way for a CE device to host its own services, and it's a problem we often have to provide a solution for.
The other options would be things like a proprietary socket solution, hosting an FTP server on the device, using the (abhorrent, IMO) MS-provided HTTP server with ISAPI, Telnet, or something along those lines. All of them seem like either a hack, a lot of work, or both.