Slots in the load balancer algorithm? - load-balancing

While going through the documentation of the `kong` API gateway, I came across a concept called slots in upstream servers. I didn't understand what it means. Could someone please explain it?

I took a look at the Kong source code, and this relates to an external resty library:
https://github.com/Kong/lua-resty-dns-client/blob/c25166d25bb2b5cbdc2e3fa9cb4d5d510f69a2c1/src/resty/dns/balancer/round_robin.lua#L86
A number of slots (10,000 by default) are created in a ring and distributed across the upstream targets according to their weights.
A pointer is then incremented around the ring to pick the next peer to use.
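To illustrate the idea, here is a minimal Python sketch of such a slot ring (this is only a model of the concept, not Kong's actual Lua implementation; the host names and weights are made up):

```python
import random

def build_wheel(targets, total_slots=10000):
    """Distribute `total_slots` ring positions over the targets in
    proportion to their weights (a simplified model of the idea)."""
    total_weight = sum(weight for _, weight in targets)
    wheel = []
    for host, weight in targets:
        count = round(total_slots * weight / total_weight)
        wheel.extend([host] * count)
    random.shuffle(wheel)  # spread each target's slots around the ring
    return wheel

class RoundRobinRing:
    def __init__(self, targets, total_slots=10000):
        self.wheel = build_wheel(targets, total_slots)
        self.pointer = 0

    def get_peer(self):
        """Advance the pointer and return the target occupying that slot."""
        peer = self.wheel[self.pointer]
        self.pointer = (self.pointer + 1) % len(self.wheel)
        return peer

ring = RoundRobinRing([("10.0.0.1", 100), ("10.0.0.2", 50)])
picks = [ring.get_peer() for _ in range(9000)]
print(picks.count("10.0.0.1"), picks.count("10.0.0.2"))  # roughly a 2:1 split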


Is Multihop possible with LoRa?

I have a question regarding how to enable multihop in LoRa (that is, communicating between two end devices without a LoRaWAN gateway). I have tried doing it using transparent bridging, but it didn't work.
Although it works with LoRaBlink, the issue is flooding: as the number of devices increases, the channel utilization and the performance degrade rapidly.
Can someone please suggest if there is any other way to do it, or how to do it efficiently with LoRaBlink?
Thanks
If you check the wiki of the RadioHead library, you will find RHRouter and RHMesh under the Managers topic, with the following descriptions:
RHRouter Multi-hop delivery of RHReliableDatagrams from source node to destination node via 0 or more intermediate nodes, with manual, pre-programmed routing.
RHMesh Multi-hop delivery of RHReliableDatagrams with automatic route discovery and rediscovery.
There are also raw LoRa libraries for mesh networking. One is implemented on Pycom devices and is called PyMesh; the technology is based on Thread by the Thread Group.
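If it helps to picture the difference, here is a rough Python sketch (not the RadioHead C++ API; the node names and route table are invented) of the idea behind RHRouter-style manual routing, where every node holds a pre-programmed next-hop table instead of flooding the channel:

```python
# (current node, destination) -> neighbour to forward to; filled in by hand,
# which is what "manual, pre-programmed routing" means.
NEXT_HOP = {
    ("A", "C"): "B",
    ("B", "C"): "C",
}

def forward(node, packet):
    dest = packet["dest"]
    if node == dest:
        print(f"{node}: delivered payload {packet['payload']!r}")
        return
    next_hop = NEXT_HOP.get((node, dest))
    if next_hop is None:
        print(f"{node}: no route to {dest}, dropping")
        return
    print(f"{node}: relaying to {next_hop}")
    forward(next_hop, packet)  # stands in for a single LoRa transmission

forward("A", {"dest": "C", "payload": "hello"})
```

RHMesh adds automatic discovery of that table at runtime instead of requiring it to be pre-programmed.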

Peers vs Members - Consul

Peer set - The peer set is the set of all members participating in log replication. For Consul's purposes, all server nodes are in the peer set of the local datacenter.
~ Quote from the official docs
What is the difference between peers and members then?
Why do we have the following two APIs then? (Wouldn't one be enough?)
i. /status/peers
ii. /agent/members
Could you please shed light on the internal details?
Is there a possibility of inconsistency between the results of the above APIs?
Here is a comparison of /agent/members, /status/peers, and /catalog/nodes.
The responses can differ because each API endpoint gets its data from a different source.
/catalog/nodes: The request received by any agent is forwarded to the leader, and the leader provides the response from the catalog.
/agent/members: The agent receives the request and returns member information obtained from gossip. This can differ from the catalog endpoint (which is kept consistent through log replication; Consul uses the Raft protocol).
/status/peers: This API returns the nodes participating in log replication.
Ideally, this should be the same as /catalog/nodes. But if there is a partition in the cluster, it is possible that, until the cluster recovers, not all members are taking part in log replication. In this case /catalog/nodes and /status/peers can give different results.
To understand this properly, you need to know the Raft protocol. Reference.
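As a quick way to see the three views side by side, here is a small Python sketch (assuming a local Consul agent reachable at 127.0.0.1:8500 and the requests library) that queries all three endpoints:

```python
import requests

BASE = "http://127.0.0.1:8500/v1"

# Raft peer set: the server nodes participating in log replication.
peers = requests.get(f"{BASE}/status/peers").json()        # ["10.0.0.1:8300", ...]

# Gossip view: every member (servers and clients) this agent knows about.
members = requests.get(f"{BASE}/agent/members").json()     # [{"Name": ..., "Addr": ...}, ...]

# Catalog view: answered by the leader from the replicated catalog.
nodes = requests.get(f"{BASE}/catalog/nodes").json()       # [{"Node": ..., "Address": ...}, ...]

print("raft peers:    ", sorted(peers))
print("gossip members:", sorted(m["Addr"] for m in members))
print("catalog nodes: ", sorted(n["Address"] for n in nodes))
# Note: /agent/members also lists client agents, and during a partition the
# three lists can disagree until the cluster converges again.
```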

Does Service Fabric provide an API to move actors between partitions at runtime?

I stumbled upon a very interesting paper from Microsoft Research, where they discuss an algorithm to re-distribute actors between servers / partitions based on their "proximity" (defined as number of remote calls to one another) to reduce remote calls across server boundaries.
They applied their prototype to the Orleans framework.
Now I'm wondering if the Service Fabric Actors framework also provides an interface to re-distribute / balance actors at runtime.
The only remotely related information I found in the online documentation mentions that Service Fabric redistributes partitions based on reported load.
Any insight would be very interesting.
Kind regards,
Pascal
In Service Fabric, an Actor's ID determines the partition it lives in. More info here. So an Actor can't move from partition to partition. Like you said, the Actor Service replica that owns a partition (with many Actors in it) can be moved from node to node for balancing. By using placement constraints you can influence those movements.
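To make the "ID determines the partition" point concrete, here is a conceptual Python sketch; the actual hash and range split are internal to Service Fabric, so a generic hash is used purely for illustration:

```python
import hashlib

PARTITION_COUNT = 10
INT64_MIN, INT64_MAX = -2**63, 2**63 - 1

def partition_key(actor_id: str) -> int:
    """Hash the actor ID to a fixed Int64 partition key (illustrative hash only)."""
    digest = hashlib.sha256(actor_id.encode()).digest()
    return int.from_bytes(digest[:8], "big", signed=True)

def partition_index(key: int) -> int:
    """Uniform Int64 partitioning: split the Int64 range into equal slices."""
    span = (INT64_MAX - INT64_MIN + 1) // PARTITION_COUNT
    return min((key - INT64_MIN) // span, PARTITION_COUNT - 1)

for actor in ("order-42", "order-43"):
    print(actor, "->", partition_index(partition_key(actor)))
# The same ID always maps to the same partition; only the replica that owns
# that partition can be moved between nodes for load balancing.
```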

Interoperability in DDS

I am new to the DDS domain and need help with the following:
How do you publish common topics between two vendors to achieve interoperability in DDS?
The scenario is:
Suppose there are two vendor products, V1 and V2. V1 has a publisher that publishes on topic T1, and V2 wants to subscribe to this topic. How will the subscriber (V2) know that a topic T1 exists?
I have a similar doubt at the domain level: how will a subscriber know which domain it has to participate in?
I am using OpenDDS.
Thanks
Interoperability between vendors is possible, and regularly tested/demonstrated by the main vendors.
You will need to configure your DDS implementation to use RTPS (I think RTPS 2 currently), rather than any proprietary transport that vendors may use. This might be enabled by default.
In terms of which domain to participate in, you programmatically create a domain participant in a particular domain (which domain it connects to might be controlled by a config file), and all further entities (publishers, subscribers, etc.) that you create then belong to that domain participant and therefore operate in that domain.
To build on rcs's answer a bit... the actual amount of work you have to do depends on the DDS implementations involved (OpenDDS, RTI, PrismTech...), because they have different defaults. If you use the same implementation on both ends, your configuration becomes a lot simpler, since defaults such as the domain and RTPS settings will already line up.
You will need to make sure the following match:
Domain ID
Domain Partition
Transport (I recommend RTPS; FWIW, the version difference between 2.1 and 2.2 hasn't mattered in my experience)
TCP or UDP
Discovery port and data port - this matters more or less depending on which implementations you use and whether you're using the same one on both ends of the connection; if you are using the same one, they'll have the same defaults.
The topic one side publishes must match the topic the other subscribes to; this applies to both the Topic and the Type (see more here).
Serialization of the data
Discovery (unicast vs. multicast; make sure whatever setup you choose is valid, e.g. both devices are in the same multicast group)
QoS settings will need to line up, though I think the defaults will likely work (read more here)
Get the Shapes demo working between the machines you're working on first; it does some basic sanity checking to confirm that communication is possible with the given configuration and network setup. Every vendor/implementation I've seen ships a Shapes demo; for example, here is RTI's.
That's all I can think of right now; hope that helps. I have found DDS documentation to be really good, especially if you know when you can (and when you can't) apply one vendor's documentation to your own implementation (e.g., whether an answer found in RTI's docs or forum also works for your OpenDDS application). Often the solutions are similar, but you'll find RTI supports the most, and RTI + PrismTech have some of the best documentation.
The DDS RTPS protocol exchanges discovery information so that different applications participating in the same domain (!) know who is out there, and what they are offering/requesting. You need to make sure that the two applications are using the same domain ID (specified on the domain participant). Also, as some implementations allow for different transport options, make sure to use RTPS (sometimes called DDSI) networking.
The RTPS specification contains a mapping from domain ID to port numbers, so if applications from different vendors use the same ID it should just work. Implementations might, however, override port numbers via configuration.
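For reference, here is a small Python sketch of that default mapping, using the well-known constants from the RTPS specification (implementations may override these via configuration):

```python
# Default RTPS port-mapping constants from the specification.
PB, DG, PG = 7400, 250, 2          # port base, domain ID gain, participant ID gain
D0, D1, D2, D3 = 0, 10, 1, 11      # port offsets

def rtps_ports(domain_id: int, participant_id: int = 0) -> dict:
    base = PB + DG * domain_id
    return {
        "discovery_multicast": base + D0,
        "discovery_unicast":   base + D1 + PG * participant_id,
        "user_multicast":      base + D2,
        "user_unicast":        base + D3 + PG * participant_id,
    }

print(rtps_ports(0))   # {'discovery_multicast': 7400, 'discovery_unicast': 7410, ...}
print(rtps_ports(42))  # a different domain ID maps to a disjoint set of ports
```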
To maximize the chance that the applications communicate properly, ensure they use the same IDL data model. Vendors have different approaches to type evolution and to mapping types that don't exactly match, and not all of them implement the XTypes specification (yet).
Also, as some implementations are stricter than others, ensure that you stay within bounds of the specification. This means that a topic name should only contain alphanumerical characters (I sometimes see ':' to indicate scoping, that is not allowed).
Things that will definitely not work between vendors are TRANSIENT/PERSISTENT durability and communication over TCP, as neither has been standardized yet. TRANSIENT_LOCAL should work. The difference between TRANSIENT_LOCAL and TRANSIENT is that with TRANSIENT_LOCAL, data is no longer aligned after a publisher (writer) leaves the system, whereas with TRANSIENT that data will still be available.
Also note that for API-level interoperability between vendors, your best chance is to use the new isocpp API, since that one has been implemented pretty consistently across the vendor implementations I've seen.
Hope that helps!

NServiceBus hierarchy and structure

I am just starting out learning NServicebus (and SOA in general) and have a few questions and points I need clarification on regarding how the solution is typically structured and common best practices:
The documentation doesn't really explain what an endpoint is. From what I gather, it is a unit of deployment, and your service will have one or more endpoints. Is this correct?
Is it considered best practice to have one VS solution per Service you are developing? With a project for messages, then a project for each endpoint, and finally a project that is shared with the endpoints containing your domain layer?
From what I read, services are usually comprised of individual components. Can (or should) any component in the service access the same database, or should it be one database per component?
Thanks for any clarification or insight one can offer.
I will try to answer your questions the best I can...
I'm not sure about the term "best practices"; I would rather call it "best thinking" or a "paradigm".
Q1: Yes, an endpoint is effectively a deployed process that consumes the messages from its queue.
It does not have to belong to a single (logical) "Service" (in the case of a web endpoint, for example), and an endpoint can have one or more handlers deployed to it.
Q2: I would go with one solution (and later one repo) per logical domain service. Inside a solution I would create a project per message handler, because as you scale you will need to move your handlers between endpoints, or to their own endpoints. Messages, however, are contracts, so I would put them in their own solution(s), possibly split into commands and events. You may consider something like NuGet to publish your message packages.
Q3: A "Service" is a logical composition of autonomous components, each being a vertical slice of functionality. They can share the same database, but I would say that only one component should have the authority to modify its own data. I would always try to think about what will happen when you need to scale.
Does this make sense?