Peers vs Members - Consul - API

Peer set - The peer set is the set of all members participating in log replication. For Consul's purposes, all server nodes are in the peer set of the local datacenter.
~ Quote from Official docs
What is the difference between peers and members then?
Why do we have the following two APIs then? (wouldn't one be enough?)
i. /status/peers
ii. /agent/members
Could you please shed light on the internal details?
Is there a possibility of inconsistency in the results of the above APIs?

Here is a comparison of /agent/members, /status/peers and /catalog/nodes.
The responses can differ because each API endpoint gets its data from a different source.
/catalog/nodes: A request received by any agent is forwarded to the leader, and the leader answers from the catalog.
/agent/members: The agent receives the request and returns the member information it has learned via gossip. This can differ from the catalog endpoint (which follows from the log replication mechanism; Consul uses the Raft protocol).
/status/peers: This API returns the nodes participating in log replication.
Ideally, this should be the same as /catalog/nodes. But if there is a partition in the cluster, it is possible that, until the cluster recovers, not all members take part in log replication. In that case /catalog/nodes and /status/peers can give different results.
To understand this properly, you need to know the Raft protocol. Reference.
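For a quick way to compare the three views, here is a minimal sketch (assuming a local Consul agent on http://127.0.0.1:8500 and a runtime with fetch, e.g. Node 18+) that queries all three endpoints and prints them side by side, so any divergence between gossip, the Raft peer set and the catalog becomes visible:

```typescript
// Minimal sketch: compare the three Consul views discussed above.
// Assumes a local agent on 127.0.0.1:8500; adjust the address for your setup.
const CONSUL = "http://127.0.0.1:8500";

async function compareViews(): Promise<void> {
  // Raft peer set (server nodes participating in log replication)
  const peers: string[] = await (await fetch(`${CONSUL}/v1/status/peers`)).json();
  // Gossip view of the local agent
  const members: { Name: string }[] = await (await fetch(`${CONSUL}/v1/agent/members`)).json();
  // Catalog view, answered via the leader
  const nodes: { Node: string }[] = await (await fetch(`${CONSUL}/v1/catalog/nodes`)).json();

  console.log("status/peers :", peers);
  console.log("agent/members:", members.map((m) => m.Name));
  console.log("catalog/nodes:", nodes.map((n) => n.Node));
}

compareViews().catch(console.error);
```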

Related

CoTurn Data Usage Stats on Multi User System

We want to track each user's TURN usage separately. I inspected the TURN REST API; as far as I understand, it is only used to authorize a user that already exists in the coturn DB. This is the point I don't fully understand. I can use an ICE server list that includes different username/credential pairs, but I would have to create these username/credential pairs in the coturn DB beforehand. Am I right? If I am right, I have no idea how to do this. My intended flow is: detect the user's request to use TURN from the frontend -> generate credentials as in "CoTURN: How to use TURN REST API?" (which I have already achieved) -> if this is a new user, my backend should somehow reach my EC2 instance and run the "turnadmin create user" command -> then let the WebRTC connection proceed -> then track the usage of that specific user and somehow send it back to my backend.
Is this the right scenario? If not, how should it be done? Is there another way to manage multiple users and their data usage? Any help would be appreciated.
As far as I understand, to get the stats data we must use the Redis database. I tried to use it and was able to display the traffic data (with psubscribe turn/realm/*/user/*/allocation/*/traffic), but the other subscribe events never fired (psubscribe turn/realm/*/user/*/allocation/*/traffic/peer or psubscribe turn/realm/*/user/*/allocation/*/total_traffic, even after the allocation is deleted). So I tried to get past traffic data from the Redis DB, but I couldn't find out how. In Redis, the KEYS * command only returns "status" keys.
Even if I do get this traffic data, I can't see how to use it with multiple users. Currently our project has a single user (in coturn terms) and all other users use TURN through this one user.
By the way, we also tried to track the usage on the client side, where we create the peer connection object from the RTCPeerConnection interface. I noticed that the incoming byte counts are lower than the Redis output, so I think some traffic is missed there and I should calculate it on the TURN side.
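For the credential step of the flow above, here is a rough sketch of how the ephemeral-credential ("TURN REST API") scheme is commonly implemented on a backend. The function and variable names are illustrative, not taken from the question; it assumes coturn is configured with use-auth-secret and a static-auth-secret, in which case coturn recomputes the HMAC itself and no per-user turnadmin entry is needed. Because the username carries the application user ID, the per-user traffic events in Redis can then be keyed by that ID.

```typescript
import { createHmac } from "crypto";

// Sketch of the TURN REST API (ephemeral credential) scheme, assuming coturn is
// started with `use-auth-secret` and `static-auth-secret=<sharedSecret>`.
// username   = "<expiry-unix-timestamp>:<application user id>"
// credential = base64(HMAC-SHA1(sharedSecret, username))
function makeTurnCredentials(userId: string, sharedSecret: string, ttlSeconds = 3600) {
  const expiry = Math.floor(Date.now() / 1000) + ttlSeconds;
  const username = `${expiry}:${userId}`; // per-user name, usable to key usage stats
  const credential = createHmac("sha1", sharedSecret).update(username).digest("base64");
  return { username, credential };
}

// Hypothetical usage: hand the result to the browser as an ICE server entry.
// const { username, credential } = makeTurnCredentials("alice", process.env.TURN_SECRET!);
// const iceServers = [{ urls: "turn:turn.example.com:3478", username, credential }];
```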

Is it possible to retrieve the STUN server used once the RTCPeerConnection is connected

Not sure the title makes a lot of sense. To add some context, we are building a WebRTC infrastructure and to do so we have a few STUN servers up and running.
We sometimes have users complaining about calls taking too long to connect, so we would like to get some analytics on the calls. Because we provide a list of STUN IPs (including some public STUN servers as backup), we would like to detect which STUN server successfully initiated the call.
We have collected a bunch of information thanks to RTCPeerConnection.getStats, but there is nothing related to the STUN server itself. So my questions are:
is there any JS API that allows us to retrieve the STUN server used?
is there any tool that I am not aware of that could do the job?
does the SDP contain any information related to STUN?
Hope all of this is clear, thanks for your kind replies
The statistics do contain a server url:
https://w3c.github.io/webrtc-stats/#dom-rtcicecandidatestats-url
However, that is not implemented, and since STUN servers are not involved in the actual call, that information is unlikely to be useful.
For TURN servers you can get the active candidate pair and the IP of any relay involved from getStats. See https://webrtc.github.io/samples/src/content/peerconnection/constraints/ for a sample that shows how to determine the active candidate pair.
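As a starting point, here is a small sketch (the function name is mine, not from the linked sample) of pulling the active candidate pair out of getStats and reading the local candidate's type and url field, which is where a STUN/TURN server URL would appear if the browser implements it:

```typescript
// Sketch: inspect getStats for the selected ICE candidate pair.
async function getActiveIceInfo(pc: RTCPeerConnection) {
  const stats = await pc.getStats();

  // Recent browsers expose the selected pair via the "transport" stats entry.
  let selectedPairId: string | undefined;
  stats.forEach((report) => {
    if (report.type === "transport" && report.selectedCandidatePairId) {
      selectedPairId = report.selectedCandidatePairId;
    }
  });

  const info: { localType?: string; serverUrl?: string; remoteAddress?: string } = {};
  stats.forEach((report) => {
    if (report.type === "candidate-pair" && report.id === selectedPairId) {
      const local = stats.get(report.localCandidateId);
      const remote = stats.get(report.remoteCandidateId);
      if (local) {
        info.localType = local.candidateType; // "host" | "srflx" | "prflx" | "relay"
        info.serverUrl = local.url;           // STUN/TURN URL, if the browser fills it in
      }
      if (remote) {
        info.remoteAddress = remote.address ?? remote.ip; // relay IP when a TURN server is used
      }
    }
  });
  return info;
}
```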

API Gateway High Availability

If the API gateway (the single entry point to the system) fails, none of the services can be reached. Is there an HA (High Availability) design to handle API gateway failure?
1) Depending on your project's location, you can choose one more region as your disaster recovery plan. Whenever something fails in one region, you can immediately switch to the other region just by changing the endpoint.
2) You can use services like Route 53 to divide your traffic between two regions or two API gateways. That way at least part of your traffic keeps flowing even if one API gateway fails.
3) Always keep CloudWatch alarms in place to get notified about any failures in your system.
4) It is very unlikely that an API gateway will fail. It is AWS, my friend.
"node_saini" has a great response and it's correct. I tried to comment but don't have the reputation to do so yet... the comment would say:
5) Configure your timeouts to fail fast based on your baselines, and implement retries with exponential backoff on 5xx errors to alleviate any small percentage of failures which may occur.
With all applications, temporary failures are expected but permanent failures after retry can be a sign of a real problem brewing.
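To make point 5 concrete, here is a generic sketch (not tied to any particular gateway; the parameters are illustrative) of a client-side retry with exponential backoff and jitter on 5xx responses:

```typescript
// Sketch: retry on 5xx with exponential backoff and jitter, failing fast per attempt.
async function fetchWithBackoff(url: string, maxAttempts = 4, baseDelayMs = 200): Promise<Response> {
  for (let attempt = 1; ; attempt++) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(2000) }); // per-attempt timeout
      if (res.status < 500 || attempt === maxAttempts) {
        return res; // success, client error, or retries exhausted
      }
    } catch (err) {
      if (attempt === maxAttempts) throw err; // network errors / timeouts are also retried
    }
    const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 100; // exponential backoff + jitter
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```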

Is it possible to hack the world state in Hyperledger Composer

Since the world state is stored in the database of the peer, is it possible to get all the data in it on the peer node?
If yes, how to make sure all the data in state are well access controlled?
Plus, if everyone can see the transactions in the ledger, anyone can rebuild the state from the transaction payloads. That means the world state is transparent to all participants.
If this is true, again, how to make sure only the participants with proper permission can view the state?
Anyone can build a transaction, but for it to be committed to the world state it would have to be endorsed by the endorsement system chaincode. A malicious node (trying to hack the world state) would first need to be a member of the channel (controlled by the Membership Service Provider) and have the right permissions in order for the call to be propagated to the orderer and committed to the nodes.
To restrict who can see what, you have the option of restricting the query in your smart contract (chaincode) logic, or assigning participants a write-only profile in your peer config YAML.
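As an illustration of the first option, here is a rough sketch (hypothetical contract, asset and MSP names; written against the Fabric chaincode API rather than Composer) of a query that only returns data to callers from a permitted organisation:

```typescript
import { Context, Contract } from "fabric-contract-api";

// Sketch: gate a world-state read on the caller's MSP inside chaincode.
export class AssetContract extends Contract {
  public async readAsset(ctx: Context, assetId: string): Promise<string> {
    const callerMsp = ctx.clientIdentity.getMSPID(); // organisation invoking the chaincode
    if (callerMsp !== "Org1MSP") {                   // assumed permitted organisation
      throw new Error(`organisation ${callerMsp} is not allowed to read assets`);
    }
    const data = await ctx.stub.getState(assetId);   // world-state read on this peer
    if (!data || data.length === 0) {
      throw new Error(`asset ${assetId} does not exist`);
    }
    return Buffer.from(data).toString("utf8");
  }
}
```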
Yes, you can enter CouchDB directly using the web interface and modify data. Changes made in CouchDB are not noticed by Hyperledger Fabric. I tested this (on Hyperledger Fabric 1.1.0) with 2 peers: using a query function in the chaincode, the results from the two peers were different, i.e. no error was raised indicating that the data had been modified.
Check also How your data is safe in Hyperledger Fabric when one can make changes to couchdb data directly where a similar answer is given.

how to capture bulletin messages in apache nifi

I want to know if there is a way to capture the bulletin messages (basically errors) that appear on the NiFi UI and store them in some attribute/file so that they can be looked at later. The screen gets refreshed every 5 minutes, and if there is a failure in any of the processors I would want to know the reason for it.
I am not particularly talking about the logging part here.
As you know, the bulletins reflect messages that are already logged, so all of this content is already stored in {NIFI_HOME}/logs/nifi-app.log. However, if you want to consume the bulletins directly, you have a couple of different options.
You could consume the bulletins from the REST API. There are a couple of endpoints for accessing the bulletins.
http[s]://{host}:{port}/nifi-api/controller/process-groups/{process-group-id}/status?recursive=true
This request will get the status (including bulletins) of all components under the specified Process Group. You can use the alias 'root' for the root level Process Group. The recursive flag will indicate whether or not to return just the children of that Process Group or all descendant components.
http[s]://{host}:{port}/nifi-api/controller/status
This request will get the status (including bulletins) of the Controller level components. This includes any reported bulletins from Controller Services, Reporting Tasks, and the NiFi Framework itself (clustering messages, etc).
http[s]://{host}:{port}/nifi-api/controller/bulletin-board?limit=n&sourceId={id}&message={str}
This request will access all bulletins and supports filtering by component and by message, as well as limiting the number of bulletins returned.
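As a quick illustration of the REST option, here is a minimal sketch (host, port and field names are assumptions; the exact path and JSON shape vary across NiFi versions, so adjust to match your instance) that polls the bulletin-board endpoint and logs each bulletin:

```typescript
// Sketch: poll the NiFi bulletin board over the REST API and log each bulletin.
const NIFI = "http://localhost:8080"; // assumed host/port

async function pollBulletins(limit = 20): Promise<void> {
  const res = await fetch(`${NIFI}/nifi-api/controller/bulletin-board?limit=${limit}`);
  const body = await res.json();
  for (const entry of body.bulletinBoard?.bulletins ?? []) {
    const b = entry.bulletin ?? entry; // the nesting differs between NiFi versions
    console.log(`[${b.timestamp}] ${b.sourceName} (${b.category}): ${b.message}`);
  }
}

// Poll every minute, since bulletins only stay on the board for a few minutes.
setInterval(() => pollBulletins().catch(console.error), 60_000);
```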
You could also create a Reporting Task implementation, which has access to the bulletin repository. Reporting Tasks are an extension point meant to report details from this NiFi instance. This would require some Java code but would allow you to report the bulletins however you like. Here is an example that reports metrics to Ambari [1].
[1] https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-ambari-bundle/nifi-ambari-reporting-task/src/main/java/org/apache/nifi/reporting/ambari/AmbariReportingTask.java