How does bitcoin peer discovery work after connecting to hardcoded nodes? - bitcoin

I am tinkering with the bitcoin source code and trying to understand the exact workings of the peer discovery mechanism in testnet mode, for which I have made the following changes:
Disabled DNS seed discovery in order to force bitcoind to fall back to connecting to hardcoded nodes.
Changed the default hardcoded nodes to four addresses I control, let's say A, B, C, and D, which I ensure are always online.
Now, when I run the bitcoind client (call it E), it connects to one of A, B, C, or D, each running the same modified version of bitcoind. It gets peer addresses from the hardcoded node it first connects to by exchanging getaddr and addr messages, but I am not sure how it proceeds after that. I have the following queries:
a. If a node falls back to connecting to hardcoded nodes, is it supposed to connect to only one of them, as happens in my case, or can it connect to multiple hardcoded nodes?
b. After getting peer addresses via the addr message, when will node E start connecting to those peers?
Please point me to the relevant code files/sections if possible. Thanks

A. There are no "hardcoded nodes", there are only DNS seeds of nodes; when you query them through a DNS request you'll get new nodes with every request.
B. If the node isn't connected to its maximum capacity of peers (8 outbound connections and up to 125 connections in total by default), it will try connecting to new nodes as soon as it gets the addr message.
you can find them here:
livenet: https://github.com/bitcoin/bitcoin/blob/master/src/chainparams.cpp#L102
testnet: https://github.com/bitcoin/bitcoin/blob/master/src/chainparams.cpp#L181
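For (b): the addresses learned from addr messages go into bitcoind's address manager, and the connection-opening loop (ThreadOpenConnections in src/net.cpp; the addr handling lives in src/net_processing.cpp in recent versions) keeps drawing addresses from it and opening outbound connections whenever the node is below its outbound limit, so E should start dialing those peers almost immediately. The sketch below is only a toy model of that behaviour, not Bitcoin Core code; all names in it are invented for illustration.

#include <cstddef>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Toy model only: the real code keeps addresses in CAddrMan and opens outbound
// connections from ThreadOpenConnections (src/net.cpp); these names are made up.
struct ToyNode {
    static constexpr std::size_t kMaxOutbound = 8;   // default outbound slots
    std::set<std::string> known;                     // addresses learned so far
    std::set<std::string> connected;                 // current outbound peers

    void OnAddrMessage(const std::vector<std::string>& addrs) {
        for (const auto& a : addrs) known.insert(a); // remember every address
        TryOpenConnections();                        // and try to use them right away
    }

    void TryOpenConnections() {
        for (const auto& a : known) {
            if (connected.size() >= kMaxOutbound) break;
            if (connected.insert(a).second)
                std::cout << "opening outbound connection to " << a << "\n";
        }
    }
};

int main() {
    ToyNode e;
    e.OnAddrMessage({"A:18333"});                    // first hardcoded peer answered
    e.OnAddrMessage({"P1:18333", "P2:18333"});       // addresses learned via addr
}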

Related

CoTurn Data Usage Stats on Multi User System

We want to track each user's TURN usage separately. I inspected the TURN REST API; AFAIU it is just used to authorize a user that already exists in the coturn DB. This is the point I don't fully understand. I can use an ICE server list that includes different username/credential pairs, but I would have to have created these username/credential pairs in the coturn DB beforehand. Am I right? If I am right, I have no idea how to do this. Detect the user's request to use TURN from the frontend -> generate credentials as in CoTURN: How to use TURN REST API? (which I have already achieved) -> if this is a new user, my backend should go to my EC2 instance and somehow run the "turnadmin create user" command -> then I let the WebRTC connection proceed -> then track the usage of that specific user and send it back to my backend somehow.
Is this the right scenario? If not, how should it be done? Is there another way to manage multiple users and their data usage? Any help would be appreciated.
AFAIU, to get the stats data we must use the redis database. I tried to use it: I could see the traffic data (with psubscribe turn/realm/*/user/*/allocation/*/traffic), but the other subscribe events never fired (psubscribe turn/realm/*/user/*/allocation/*/traffic/peer or psubscribe turn/realm/*/user/*/allocation/*/total_traffic, even when the allocation is deleted). So I tried to get past traffic data from the redis DB, but I couldn't find out how. In redis, the KEYS * command returns only "status" events.
Even if I get this traffic data, I don't see how to use it with multiple users. Currently in our project we have one user (in terms of coturn) and all other users use TURN through that one user.
BTW, we tried to track the usage where we create the peer connection object from the RTCPeerConnection interface. I noticed that the incoming byte counts are lower than the redis output, so I think something is missed there and I should calculate it on the TURN side.
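Regarding the credential-generation step in the flow above: with coturn's TURN REST API mode (use-auth-secret plus a static-auth-secret in turnserver.conf), the backend derives a time-limited username/password per user and coturn verifies it with the same shared secret, so no turnadmin call should be needed per user. A minimal sketch of that derivation, assuming OpenSSL is available (the username is "<expiry-unix-timestamp>:<user-id>" and the password is base64(HMAC-SHA1(secret, username)); the secret and user id below are placeholders):

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <ctime>
#include <iostream>
#include <string>

int main() {
    // Assumption: "my-shared-secret" matches static-auth-secret in turnserver.conf.
    const std::string secret = "my-shared-secret";
    const std::string user_id = "alice";                    // your application's user id
    const long expiry = static_cast<long>(std::time(nullptr)) + 24 * 3600; // valid 24h

    const std::string username = std::to_string(expiry) + ":" + user_id;

    // HMAC-SHA1 over the username with the shared secret.
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(),
         secret.data(), static_cast<int>(secret.size()),
         reinterpret_cast<const unsigned char*>(username.data()), username.size(),
         digest, &digest_len);

    // Base64-encode the raw HMAC; EVP_EncodeBlock null-terminates its output.
    unsigned char b64[EVP_MAX_MD_SIZE * 2];
    EVP_EncodeBlock(b64, digest, static_cast<int>(digest_len));

    std::cout << "username: " << username << "\n"
              << "password: " << b64 << "\n";
}

With this scheme coturn recomputes the same HMAC when the client authenticates, so, as far as I understand, nothing needs to be created with turnadmin beforehand, and the redis traffic events are then keyed by that per-user username.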

Is there a way for the TcpListener to establish a connection between two TcpClients?

Let's say S is the server (TcpListener) and it is connected to two clients (TcpClient) A and B.
Is there a way for S to establish a connection between A and B (so A <-> B) and then disconnect from them both?
Basically, I'm looking for a way for two clients to connect to each other (for private messaging, for example) without either of them needing to port forward or know the other's IP address, since this is inconvenient for the clients.
As shown above, I don't mind the initial involvement of a server (which of course has to port forward and distribute its IP address).
However, I would like the server then to be able to disconnect from the two clients (to reduce traffic).
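To make the relay setup described above concrete, here is a minimal sketch of S accepting A and B and copying bytes between them. It uses POSIX sockets and std::thread rather than TcpListener/TcpClient, and the port number and absence of error handling are purely illustrative; the open question of handing the connection off so S can drop out is not covered by this sketch.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <thread>

// Copy bytes from one connected socket to the other until either side closes.
static void pump(int from, int to) {
    char buf[4096];
    ssize_t n;
    while ((n = recv(from, buf, sizeof(buf), 0)) > 0) {
        if (send(to, buf, static_cast<size_t>(n), 0) < 0) break;
    }
    shutdown(to, SHUT_WR); // propagate the close to the other side
}

int main() {
    // S: listen on port 5000 (placeholder) and accept exactly two clients, A then B.
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5000);
    bind(ls, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(ls, 2);

    int a = accept(ls, nullptr, nullptr);
    int b = accept(ls, nullptr, nullptr);

    // Relay in both directions; S stays involved for the lifetime of the chat.
    std::thread t1(pump, a, b);
    std::thread t2(pump, b, a);
    t1.join();
    t2.join();

    close(a);
    close(b);
    close(ls);
}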

How to successfully set up a simple cluster singleton in Akka.NET

I was running into a problem attempting to set up a cluster singleton within an Akka.NET cluster, where more than one instance of the singleton was starting up and running in my cluster. The cluster consists of Lighthouse (the seed node) and x number of instances of the main cluster node, which contains cluster shards as well as this singleton.
In order to reproduce the problem I set up an example solution on GitHub, but unfortunately I'm having a different problem there, as I always get "Singleton not available" messages and my singleton never receives a message. This is sort of the opposite of the problem I was getting originally, but nonetheless I would like to sort out a working example of a cluster singleton.
[DEBUG][8/22/2016 3:06:18 PM][Thread 0015][[akka://singletontest/user/my-singleton-proxy#1237572454]] Singleton not available, buffering message type [System.String]
In the Lighthouse process I see the following messages:
Akka.Remote.EndpointWriter: Dropping message [Akka.Actor.ActorSelectionMessage] for non-local recipient [[akka.tcp://sync#127.0.0.1:4053/]] arriving at [akka.tcp://sync#127.0.0.1:4053] inbound addresses [akka.tcp://singletontest#127.0.0.1:4053]
Potentially related:
https://github.com/akkadotnet/akka.net/issues/1960
It appears that the only bit that was missing was that the actor system name specified in the actor path for my seed node did not match the actor system name specified in both the Lighthouse and my cluster node processes. After ensuring that it matches in all three places, the cluster is now behaving as expected.
https://github.com/jpierson/x-akka-cluster-singleton/commit/77ae63209042841c144f69d4cd70e9925b68a79a
Special thanks to Chris G. Stevens for his assistance.
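For anyone hitting the same thing: the name in the seed-node address has to match the name the ActorSystem is created with in every process. In the log above, the dropped message is addressed to akka.tcp://sync#127.0.0.1:4053 while the inbound address is akka.tcp://singletontest#127.0.0.1:4053, which is exactly that mismatch. A hypothetical HOCON sketch of the matching pieces (the name "sync", the address, and the Helios transport section are just illustrative):

akka {
  actor.provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
  remote.helios.tcp {            # Helios was the default transport in 2016-era Akka.NET
    hostname = "127.0.0.1"
    port = 4053                  # Lighthouse; use 0 for the ordinary cluster nodes
  }
  cluster {
    # "sync" here must equal the name passed to ActorSystem.Create(...) everywhere
    seed-nodes = ["akka.tcp://sync@127.0.0.1:4053"]
  }
}

Every process (Lighthouse and each cluster node) must then create its system with that same name, e.g. ActorSystem.Create("sync", config).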

What are ICE candidates and how does the peer connection choose between them?

I recently wrote a simple chat application, but I didn't really understand the background of ICE candidates.
When the peers create a connection, they gather ICE candidates, exchange them, and finally set them on the peer connection.
So my question is: where do the ICE candidates come from, how are they used, and are they all really used?
I have noticed that my colleague got fewer candidates when he executes the application on his machine; what could be the reason for a different number of candidates?
The answer from #Ichigo is correct, but there is a little bit more to it. Every ICE contains 'a node' of your network, until it has reached the outside. This way you send these ICEs to the other peer, so they know through which connection points they can reach you.
See it as a large building: one person is inside the building and needs to tell another (who is not familiar with it) how to walk through it. Same here: if I have a lot of network devices, the incoming connection somehow needs to find the right way to my computer.
By providing all nodes, the RTC connection finds the shortest route itself. So when you connect to the computer next to you, which is connected to the same router/switch/whatever, it uses all the ICEs and determines the shortest one, which is directly through that point. That your colleague got fewer ICE candidates has to do with the number of devices the connection has to go through.
Please note that every network adapter inside your computer that has an IP address (I have a vEthernet switch from Hyper-V) also creates an ICE candidate.
ICE stands for Interactive Connectivity Establishment. It is a technique used for NAT (network address translator) traversal, establishing communication for VoIP, peer-to-peer, instant messaging, and other kinds of interactive media.
Typically an ICE candidate provides information about the IP address and port from which the data is going to be exchanged.
Its format is something like the following:
a=candidate:1 1 UDP 2130706431 192.168.1.102 1816 typ host
Here UDP specifies the protocol to be used, and typ host specifies which type of ICE candidate it is; host means the candidate was generated within the firewall.
If you use Wireshark to monitor the traffic, you can see that the ports used for data transfer are the same as the ones present in the ICE candidates.
Another type is relay, which denotes that the candidate can be used when communication has to be done from outside the firewall.
A candidate may contain more information depending on the browser you are using.
Many times I have seen 8-12 ICE candidates generated by the browser.
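To make those fields concrete, here is a tiny sketch that splits a candidate line like the one above into its parts (foundation, component, transport, priority, address, port, and the type after "typ"). It is just an illustration of the attribute format, not part of any WebRTC API:

#include <iostream>
#include <sstream>
#include <string>

// Split one SDP candidate attribute into its fields:
// candidate:<foundation> <component> <transport> <priority> <address> <port> typ <type>
int main() {
    std::string line = "a=candidate:1 1 UDP 2130706431 192.168.1.102 1816 typ host";

    std::istringstream in(line.substr(std::string("a=candidate:").size()));
    std::string foundation, transport, address, typ_keyword, type;
    int component, port;
    long priority;
    in >> foundation >> component >> transport >> priority >> address >> port
       >> typ_keyword >> type;

    std::cout << "foundation=" << foundation << " component=" << component
              << " transport=" << transport << " priority=" << priority
              << " address=" << address << " port=" << port
              << " type=" << type << "\n";
}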
Ichigo has a good answer, but doesn't emphasise how each candidate is used. I think MarijnS95's answer is plain wrong:
Every ICE contains 'a node' of your network, until it has reached the outside
By providing all nodes, the RTC connection finds the shortest route itself.
First, he means ICE candidate, but that part is fine. Maybe I'm misinterpreting him, but by saying 'until it has reached the outside', he makes it seem like a client (the initiating peer) is the innermost layer of an onion, and suggests the ICE candidate helps you peel the layers until you get to the 'internet', where it can get to the responding peer, perhaps peeling another onion to get to it. This is just not true. If an initiating peer fails to reach a responding peer through the transport address, it discards this candidate and will try a different candidate. It does not store any nodes anywhere in the candidate. The ICE candidates are generated before any communication with the responding peer. An ICE candidate does not help you peel the proverbial NAT onion. Also, regarding the second quote I made from his answer, he makes it seem like ICE is used in a shortest-path algorithm, yet 'shortest' does not show up in the ICE RFC at all.
From RFC8445 terminology list:
ICE allows the agents to discover enough information
about their topologies to potentially find one or more paths by which
they can establish a data session.
The purpose of ICE is to discover which pairs of addresses will work. The way that ICE does this is to systematically try all possible pairs (in a carefully sorted order) until it finds one or more that work.
Candidate, Candidate Information: A transport address that is a
potential point of contact for receipt of data. Candidates also
have properties -- their type (server reflexive, relayed, or
host), priority, foundation, and base.
Transport Address: The combination of an IP address and the
transport protocol (such as UDP or TCP) port.
So there you have it, (ICE) Candidate was defined (an IP address and port that could potentially be an address that receives data, which might not work), and the selection process was explained (the first transport address pair that works). Note, it is not a list of nodes or onion peels.
Different users may have different ICE candidates because of the process of "gathering candidates". There are different types of candidates, and some are obtained from the local interfaces. If you have an extra virtual interface on your device, then an extra ICE candidate will be generated (I did not test this!). If you want to know how ICE candidates are gathered, read section 2.1, Gathering Candidates, of the RFC.
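On the "carefully sorted order": after the candidates are exchanged, the agents pair up local and remote candidates and sort the connectivity checks by a pair priority derived from the two individual candidate priorities (G for the controlling agent, D for the controlled one). A sketch of that formula as given in RFC 8445:

#include <algorithm>
#include <cstdint>
#include <iostream>

// Pair priority per RFC 8445: 2^32 * MIN(G, D) + 2 * MAX(G, D) + (G > D ? 1 : 0),
// where G and D are the candidate priorities of the controlling/controlled agents.
std::uint64_t PairPriority(std::uint32_t g, std::uint32_t d) {
    return (static_cast<std::uint64_t>(1) << 32) * std::min(g, d)
         + 2ull * std::max(g, d)
         + (g > d ? 1 : 0);
}

int main() {
    // Example: a typical host candidate priority paired with a typical
    // relayed candidate priority on the remote side.
    std::cout << PairPriority(2130706431u, 16777215u) << "\n";
}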

Receive data in multiple processes

Can I receive data from the network in multiple processes simultaneously?
For instance, I have two computers on the LAN. One computer sends a UDP packet to the other computer on port 5200. On computer number two I want to receive this packet in two processes. Can I create two sockets on the same IP and port?
I forgot to say that I can't modify process A. In other words, I want to create an application that receives the same data as process A. (Process A and process B are located on computer number two, which receives the data.)
Yes! You can. Open the socket and use setsockopt with SO_REUSEPORT and SO_REUSEADDR.
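A minimal sketch of what that looks like with POSIX sockets (Linux; SO_REUSEPORT is the option that actually lets a second socket bind the same UDP port). One caveat: for plain unicast UDP the kernel delivers each datagram to only one of the sockets sharing the port, so two processes each getting a copy generally works only for multicast/broadcast traffic.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Run this program twice on computer two; both instances bind UDP port 5200.
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)); // allow re-binding the address
    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)); // allow sharing the port (Linux)

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5200);
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    char buf[1500];
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, nullptr, nullptr);
    if (n >= 0) std::printf("got %zd bytes\n", n);
    close(fd);
}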
How about you create process A to act as middleware between the two processes B and C? Then add extra data to the packets sent to process A, which will be used to determine the final destination of the data - process B or process C.
EDIT:
To answer your question exactly: "no", for TCP/IP
"You can only have one application listening on a single port at one time."
Actually, your question has been asked by someone else before, and I have only cited the answer. The full answer can be found -> here.