Upstream and Downstream definition - requirements

In my organization, people tend to use the nomenclature "downstream" and "upstream" when they talk about communication between systems. What is the definition of these concepts? Are these standard concepts in the world of IT development?

I know this is old, but I think the other answer has it the wrong way around. Think of it this way: if you are upstream from something, what you do can affect it; equally, something upstream from you affects you, but something downstream cannot.
So, to use the same framing, given a system S:
Upstream - something which S depends on (as its actions "flow down" to S)
Downstream - something which depends on S (as S's actions "flow down" to it)
For example, if S reads from a database and feeds a reporting dashboard, the database is upstream of S and the dashboard is downstream of S.

To simplify things, let's say we are talking about a system S.
Upstream - Something which depends on S
Downstream - Something which S depends on

Naming conventions: Client, Driver, Actor, Adapter, Broker, Manager, EventEmitter, PubSub, EventBus

I have a mild case of analysis paralysis when it comes to naming.
Suppose we are wrapping some google API. These all seem reasonable:
googleClient
googleDriver
googleActor
googleAdapter
googleBroker
Actor might be more suited to a more concurrent program, but then the Google API is inherently asynchronous, so maybe it is a good fit.
Suppose the API supports websockets or push messages and exposes methods like .subscribeToEventA(... then it might make sense to call it
googleEmitter
googlePubSub
googleEventBus
or even
googleWrapper
The issue is that they all seem reasonable and I have no rule of thumb for choosing between them. Is there a general style guide or rule of thumb to lean on? Maybe some authoritative glossary for terms like these?
Naming things is subjective, so there is no right or wrong answer to what something should be named.
However, you can name things based on well-established design patterns, so readers are more likely to understand why something is named as it is.
Also key to naming things is consistency. Once you have settled on a name for a type of entity, it's a good idea to add it to a domain-specific language (DSL). Having a documented vocabulary for your domain makes it easier for other authors to use consistent naming conventions in your environment.
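For instance, if the wrapper's job is to translate a third-party API into an interface your own code defines, the Adapter pattern supplies both the structure and the name. A minimal sketch, where CalendarPort, GoogleCalendarAdapter, and the client calls are hypothetical stand-ins rather than the real Google SDK:

```python
from typing import Protocol


class CalendarPort(Protocol):
    """The interface the rest of the application depends on."""

    def list_events(self, calendar_id: str) -> list[dict]: ...


class GoogleCalendarAdapter:
    """Adapter: translates our CalendarPort interface into calls on a
    hypothetical Google client object passed in from the outside."""

    def __init__(self, google_client):
        self._client = google_client

    def list_events(self, calendar_id: str) -> list[dict]:
        response = self._client.events().list(calendarId=calendar_id).execute()
        return response.get("items", [])
```

A reader who knows the pattern sees "Adapter" in the name and already expects a class that translates one interface into another; the same reasoning would point to "Client" for a thin transport wrapper or "EventEmitter" for a subscription-style surface.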

Where to create queues and exchanges?

I'm using RabbitMQ as a message broker for the first time, and I have a question: when should queues and exchanges be declared using Rabbit's own management tool, and when should this be done in the software's code? My own view is that it is much better to create queues and exchanges using the management tool, because it is a centralized place to add new queues or remove useless ones without needing to modify the actual software. I'm asking for some advice and opinions.
Thank you.
The short answer is: whatever works best for you.
I've worked with message brokers that required external tools for defining the topology (exchanges, queues, bindings, etc) and with RabbitMQ that allows me to define them at runtime, as needed.
I don't think either scenario is "the right way". Rather, it depends entirely on your situation.
Personally, I see a lot of value in letting my software define the topology at runtime with RabbitMQ. It can still get frustrating, though, because I often end up duplicating my definitions between producers and consumers.
On the other hand, moving from development to production is easier when the software itself defines the topology: there is no need to pre-configure things before moving code to production.
It's all tradeoffs.
Try it however you're comfortable. Then try it the other way. See what happens, and learn which you prefer and when. Just remember that you don't have to do one or the other. You can do both if you want.
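As a concrete illustration of the runtime option, here is a minimal sketch using the pika client for RabbitMQ; the exchange, queue, and routing-key names are made up for the example:

```python
import pika

# Connection details are assumptions; point them at your own broker.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# The application declares the topology it needs at startup.
channel.exchange_declare(exchange="orders", exchange_type="topic", durable=True)
channel.queue_declare(queue="orders.audit", durable=True)
channel.queue_bind(queue="orders.audit", exchange="orders", routing_key="order.*")

connection.close()
```

Declarations in RabbitMQ are idempotent as long as the parameters match, so producer and consumer can both run the same declarations at startup; that is what makes the duplication mentioned above tolerable, if still annoying.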

How to change the nanomsg pipeline load balancing logic?

I'm hoping to use something like nanomsg (or ZeroMQ), specifically the pipeline pattern. Instead of a round-robin method for sending out the data, I would like to load-balance based on the data. Any suggestions?
I found an answer to the ZeroMQ use case here: ZMQ sockets with custom load-balancing
Ultimately though, I think what I was really looking for was better served with a DDS solution as opposed to ZeroMQ or nanomsg. I found this question and answer very helpful: WHY / WHEN using rather DDS instead of ZeroMQ?
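If you do stay with ZeroMQ, one common way to get content-based rather than round-robin distribution is to swap the PUSH socket for a ROUTER socket and address workers explicitly. A minimal sketch with pyzmq; the worker identities and the hashing rule are assumptions:

```python
import zmq

# Workers are assumed to be DEALER sockets that set these identities
# (socket.setsockopt(zmq.IDENTITY, b"worker-0")) before connecting here.
context = zmq.Context()
router = context.socket(zmq.ROUTER)
router.bind("tcp://*:5555")

workers = [b"worker-0", b"worker-1", b"worker-2"]

def dispatch(key: bytes, payload: bytes) -> None:
    # Content-based routing: a key derived from the data, not a round-robin
    # counter, decides which worker receives the message.
    target = workers[hash(key) % len(workers)]
    router.send_multipart([target, payload])
```

Note that a ROUTER socket silently drops messages addressed to identities that have not connected yet, so in practice you would wait for the workers to check in before dispatching.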

Configuration Settings Service/Repository: Are They Used in a Real World

I’m currently enjoying reading "Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation" and the part that caught my attention is that on managing configuration settings for applications.
What's proposed in the book is that all configuration settings are externalized and centralized in a repository of some sort, be it an LDAP directory, an ESCAPE server, or some such, and then retrieved from there.
This sounds really compelling to me as this approach can provide a number of tangible benefits, but after Googling around for a bit it seems to me that this is not exactly a widespread approach.
I know there is a Twelve-Factor App article on this subject, but it suggests using environment variables instead of a centralized repository. This approach seems to be the most commonly used one, but it feels like a dirty one compared to a repository-based solution.
So, is the central-configuration-repository approach used in any significant way in the real world, and if not -- what are the reasons for this?
Apparently, Zookeeper is frequently used for managing config variables:
http://zookeeper.apache.org/doc/r3.3.3/recipes.html#sc_outOfTheBox
Real World Use of Zookeeper
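As a small sketch of what reading a setting from ZooKeeper can look like with the kazoo Python client; the znode path is an assumption and would be created by your deployment tooling:

```python
from kazoo.client import KazooClient

# Host and path are illustrative only.
zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

data, stat = zk.get("/config/myapp/database_url")
database_url = data.decode("utf-8")

zk.stop()
```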
Doozer looks interesting as well:
https://github.com/ha/doozerd
And here is a RESTful wrapper over Git, inspired by the same book you refer to:
http://www.andycaine.com/configuration-management-with-restful-git/
Finally, env vars may feel dirty, but they are actually quite a clean approach:
https://devcenter.heroku.com/articles/config-vars
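For comparison, the twelve-factor, environment-variable approach is about as small as configuration handling gets; the variable names and defaults below are illustrative only:

```python
import os

# Read configuration from the environment at startup: fail fast on required
# settings, fall back to defaults for optional ones.
DATABASE_URL = os.environ["DATABASE_URL"]                 # required
BROKER_URL = os.getenv("BROKER_URL", "amqp://localhost")  # optional, with default
DEBUG = os.getenv("DEBUG", "false").lower() == "true"
```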

Sequential coupling in code

Is sequential coupling really a bad thing in code?
Although it's an anti-pattern, the only risk I see is calling methods in the wrong order, and the documentation of an API/class library with this anti-pattern should take care of that. What other problems come from sequentially coupled code? Also, it seems this pattern could easily be fixed by using a facade (see the sketch below).
Thanks
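To sketch the facade fix mentioned above (all class and method names are hypothetical): the facade holds the required call order in one place so callers cannot get the sequence wrong.

```python
class ReportBuilder:
    """A sequentially coupled API: methods must be called in a fixed order."""

    def __init__(self):
        self._rows = []
        self._totals = {}

    def load_data(self, source: str) -> None:
        self._rows = [{"source": source, "value": 1}]  # stand-in for real loading

    def aggregate(self) -> None:
        self._totals = {"count": len(self._rows)}

    def render(self) -> str:
        return f"report: {self._totals}"


class ReportFacade:
    """Facade: hides the required call order behind a single method."""

    def __init__(self, builder: ReportBuilder):
        self._builder = builder

    def build(self, source: str) -> str:
        # The correct sequence lives in exactly one place.
        self._builder.load_data(source)
        self._builder.aggregate()
        return self._builder.render()
```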
It is an anti-pattern for a method to silently ignore a call because something that should have been called before it hasn't been.
This should be controlled using design by contract. Failed preconditions typically raise a failed-precondition exception, which is basically the software yelling at you for using the class in the wrong way. Such checks are superior to written documentation.
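A minimal sketch of such a precondition check, with hypothetical names:

```python
class Uploader:
    """Guards its sequential contract with an explicit precondition check."""

    def __init__(self):
        self._connected = False

    def connect(self) -> None:
        self._connected = True

    def upload(self, data: bytes) -> None:
        # Fail loudly instead of silently doing nothing or misbehaving.
        if not self._connected:
            raise RuntimeError("upload() called before connect()")
        # ... send the data ...
```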
Even the Wikipedia article you mention offers the opinion that
This may be an anti-pattern, depending on context.
In many cases there is no other way. Ultimately we use algorithms to solve tasks, and an algorithm is by definition
an effective method for solving a problem using a finite sequence of instructions
Sometimes it's possible to hide this sequence. But not always.
It's a minor anti-pattern: if the documentation is bad (or the API is confusing), you can get things into a bad state. It's like a recipe that only tells you to put the yolks aside after you've already beaten the eggs together.