This is more of a high-level question. Say you have a large number of applications, many of them distributed (servers/clusters), and they share configuration parameters.
What is a good way to store this application-specific configuration (preferably in a central place) without relying on a single point of failure?
By configuration I mean things like database server addresses, web service endpoints, log file names, and even some business-related constants and parameters.
Some of these parameters could be changed at runtime, so the applications also need to be able to query them dynamically.
I can think of an application storing the configuration in a local file (never mind the format) or in a central database.
But I would like to ask the community whether there are standard approaches for handling the configuration of multiple distributed systems.
Thanks.
Apache ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
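ZooKeeper runs as a replicated ensemble, so there is no single point of failure, and clients can set watches to be notified when a value changes. As an illustrative client-side sketch, assuming the community ZooKeeperNetEx NuGet package (a .NET port of the Java client, so names may differ slightly in other clients; the ensemble addresses and the /config/db-server znode are made up):

```csharp
// Read a shared configuration value from ZooKeeper and watch it for changes.
using System;
using System.Text;
using System.Threading.Tasks;
using org.apache.zookeeper;   // ZooKeeperNetEx keeps the Java package names

class ConfigWatcher : Watcher
{
    // Invoked by the client when a watched znode changes.
    public override Task process(WatchedEvent @event)
    {
        Console.WriteLine($"ZooKeeper event: {@event}; re-read and re-watch the znode.");
        return Task.CompletedTask;
    }
}

class Demo
{
    static async Task Main()
    {
        // The connection string lists the ensemble members; any of them can answer.
        var zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, new ConfigWatcher());

        // Read a shared parameter and leave a watch so updates are pushed to us.
        var result = await zk.getDataAsync("/config/db-server", true);
        Console.WriteLine($"db-server = {Encoding.UTF8.GetString(result.Data)}");

        await zk.closeAsync();
    }
}
```

Updating the znode from any admin tool then fires the watch in every connected application, which covers the "query at runtime" requirement.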
Both Pipes and ASP.NET Core gRPC support local and remote IPC/RPC (with some platform limitations for gRPC)
When would I use one technology (Pipes) or the other (gRPC)?
Observations, thoughts and considerations I'm keeping in mind:
gRPC seems to be geared towards replacing WCF in some future iteration.
local deployments and machine restrictions (running as a non-admin user, machine firewalls, different platforms/OSes)
network traversal, and compatibility with same-machine -> multi-machine (frontend/backend arrays) for load and expansion
Spanning secure zones (where a proxy is used, or other TLS cipher/order/registry settings apply) affects whether HTTP/2 works
Pipes (named pipes?) have a different surface area and port (do they also use port 135, or NetBIOS over TCP/IP?)... how are they scanned and secured?
Memory-mapped files seem to be a challenge to get working; however, gRPC in ASP.NET Core does seem to work in the Unix domain socket (UDS) configuration. Is this a correct inference?
Right now my scenario is two console apps communicating with each other, on the same machine or remotely. Adding an ASP.NET Core web front end is an optional alternative for my scenario.
Simple IPC
It depends on how much communication is going to happen. If your communication is limited to simple collaborative signal passing or to sharing some data between two processes, you can safely use NamedPipeClientStream and NamedPipeServerStream on the local system or the local network. If you plan to do the same across different systems, then I would suggest using TcpClient and TcpListener.
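For illustration, a minimal sketch of that pairing, assuming .NET (the pipe name "demo-pipe" and the message are made up; run the program twice, once as the server and once as the client):

```csharp
// Minimal named-pipe IPC sketch: one process acts as server, the other as client.
using System;
using System.IO;
using System.IO.Pipes;

class PipeDemo
{
    // Start one instance with the argument "server", then another with "client".
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "server")
        {
            using var server = new NamedPipeServerStream("demo-pipe", PipeDirection.InOut);
            server.WaitForConnection();              // block until a client attaches
            using var reader = new StreamReader(server);
            Console.WriteLine($"Received: {reader.ReadLine()}");
        }
        else
        {
            using var client = new NamedPipeClientStream(".", "demo-pipe", PipeDirection.InOut);
            client.Connect(5000);                    // wait up to 5 s for the server
            using var writer = new StreamWriter(client) { AutoFlush = true };
            writer.WriteLine("hello from the client");
        }
    }
}
```

Swapping this for TcpListener/TcpClient is mostly a matter of replacing the stream construction; the reader/writer code stays the same.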
Comprehensive IPC
WCF, or now its replacement gRPC, is for scenarios where a complete API/framework needs to be executed remotely. For example, I have an entire library of classes which I need to call from a different process (which mostly runs on a different system); in that case a gRPC-style solution makes more sense.
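As a client-side illustration only, here is roughly what a unary gRPC call looks like, assuming the stock ASP.NET Core gRPC template (Greeter and HelloRequest are stubs generated from the template's greet.proto, and the address is an assumption):

```csharp
// Assumes the Grpc.Net.Client package plus stubs generated from the standard
// greet.proto template (Greeter, HelloRequest); adjust to your own contract.
using System;
using System.Threading.Tasks;
using Grpc.Net.Client;

class GrpcClientDemo
{
    static async Task Main()
    {
        // gRPC uses an HTTP/2 channel; the address here is an assumption.
        using var channel = GrpcChannel.ForAddress("https://localhost:5001");
        var client = new Greeter.GreeterClient(channel);

        // A unary call reads like an ordinary async method call.
        var reply = await client.SayHelloAsync(new HelloRequest { Name = "console-app" });
        Console.WriteLine(reply.Message);
    }
}
```

The point of the comparison: with pipes you hand-roll the message format, while gRPC gives you a typed contract at the cost of the HTTP/2 plumbing noted above.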
Only you can decide.
This is a design decision that is highly specific to your application, your future plans, and your system environment; any third person can only give you clues, but ultimately you are the only one who can make the right decision.
My management is evaluating non-Azure Microsoft Windows Service Bus (Azure is out of consideration for security reasons). It will be used to set up a topic/subscription model with a number of WCF services with netMessagingBinding that we are building, so I have a few basic questions about it.
Are there any specific hardware requirements, like a dedicated server or a dedicated database, for WSB to run in a production environment?
It's easy to configure a WCF service to listen on a specific topic subscription. Is there any way for a WCF service to listen to multiple subscriptions?
Appreciate the answers.
You can install the service components and the databases all on one server (that is the default). However, for a number of reasons, we installed the services on a dedicated app server and created the Service Bus databases on an existing database server. The install package allows you to specify a different DB server. Check this article for the minimum server requirements.
Yes, you can get one WCF service process to listen to multiple subscriptions. You would need to create two (or more) System.ServiceModel.ServiceHost instances and run them inside one process. For example, we had one Windows service running two ServiceHosts. Each host listened on a different queue and therefore implemented a different contract. This meant that where queues were logically grouped, we didn't need a new Windows service per queue. You could do the same with subscriptions.
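A sketch of that layout, assuming two [ServiceContract] implementations (OrderService and AuditService are made-up names) whose endpoints, e.g. netMessagingBinding endpoints pointing at different subscriptions, are declared in App.config:

```csharp
// One process (e.g. a Windows service) hosting two ServiceHost instances.
using System;
using System.ServiceModel;

class DualHost
{
    static void Main()
    {
        // Each host picks up its own endpoint configuration from App.config.
        var orderHost = new ServiceHost(typeof(OrderService));
        var auditHost = new ServiceHost(typeof(AuditService));

        orderHost.Open();   // each host now listens on its own queue/subscription
        auditHost.Open();

        Console.WriteLine("Both hosts running; press Enter to stop.");
        Console.ReadLine();

        auditHost.Close();
        orderHost.Close();
    }
}
```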
For question one, you will have to go through the exercise of hardware sizing. The good news is that WCF services can scale horizontally, so you can add servers if there are issues handling the client load.
To do hardware sizing, you will have to estimate the expected load and then do performance/scalability testing to figure out the load-bearing capacity of your service bus/services.
You can find a lot of resources for load testing, like this one: http://seroter.wordpress.com/2011/10/27/testing-out-the-new-appfabric-service-bus-relay-load-balancing/
Once you have done load testing and come up with the numbers, you can do sizing using references like this one: http://msdn.microsoft.com/en-us/library/bb310550.aspx
I am working on a Silverlight project that consumes domain services, and I actually find it quite messy, with one domain service class plus its metadata. I have worked with WCF services before and found them very easy to update and maintain, but modifying a domain service (as new fields or tables are added) is a real pain.
I want to know why people prefer domain services over Silverlight-enabled WCF services. What are the advantages and disadvantages of each, and what are the performance implications?
After googling, these are the things I found you should consider:
To authenticate users in the domain faster
To authenticate resources (GPS, etc.) for users faster
Utilization of resources
Utilization of the network, decreasing the overall traffic in the network
The main benefit is user and password management, which could grow into a massive amount of work if you had to manage accounts individually on each independent server. The proposed change of migrating the whole platform to an Active Directory environment will assist in propagating changes (such as new users, password changes, and new security requirements via GPO) to the servers, which will run as domain clients; only one or two will run the primary and secondary ADC. Not all of these servers are going to host AD or be an ADC; a server OS is used due to its robustness and reliability.
Disadvantages:
Cost of infrastructure
Good planning is a must
Complex structure for users
I can imagine that a 'server' can be a machine/host but can also be a program, like an FTP server, SMTP server, etc.
A 'service', on the other hand, refers mainly to applications/programs.
Why then can, for example, SQL Server not be called 'SQL Service'? It has the same semantics. Or the other way round: why isn't the MS Azure service called 'Azure Server'? :)
I would say:
A server is expected to give a response
A service is not
Additionally, a service may include more than a server - it may well be an environment, hardware, SLA and more.
The services are features offered by the servers.
A server is a (possibly virtualized) piece of equipment that can be used to provide a service.
A service is something that you can use (usually remotely) that is provided by one or more servers.
The other difference is that these are really concepts at different levels of abstraction. Servers are concrete things. Services are abstract. Yet people mostly use services, and don't really care about what servers are used to implement them. Do you care about what servers are used to provide Google's web search service? No, you don't. Do you care about what servers are used to provide Amazon's cloud service? No, you don't.
A server is a software program, or the computer on which that program runs, that provides a specific kind of service to client software running on the same computer or on other computers on a network.
Per Microsoft - Windows Azure: operating system as an online service.
SQL Server is a server; any stored procedures or functions you write are services. (A query is a dynamic service that lives for just the duration of the call: it is sent to the database and compiled, then the server executes the compiled code and returns the results.)
I would say that there's no difference. They're used more-or-less interchangeably.
Or, if you prefer: you can come up with a definition, and someone will come up with a counter-example.
We are developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters. Each cluster will have two nodes.
Our application, or rather our system, comes in two or three parts.
Part 1: An EAR deployed to one cluster that contains third-party vendor code combined with customization code. The vendor code is EJB 2.0 compliant and has a lot of remote home interfaces.
Part 2: An EAR deployed to the same cluster as the first. This EAR contains EJB 3s that make calls into the EJB 2s supplied by the vendor and into the custom code. These EJB 3s are used by the JSF UI, also packaged in the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.
Part 3: There may be other services that do not depend on our vendor/custom-code app. These services will be EJB 3.0s and web services deployed to the other cluster.
Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI. But if we are going across clusters and/or other cells, then the communication should be web services.
That said, some of us are wondering about performance and about optimizing the communication speed of the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster-member space. If two apps are installed in the same place and call each other via a remote home interface, is WAS smart enough to make it a local call?
Are there other optimization techniques? Should we consider them, or not? What are the costs/benefits? Here is the question from one of our team members, as sent in their email:
The question is: supposing we develop our EJBs as remote EJBs, where our UI controller code talks to our EXT Java services via EJB 3, what are our options for performance optimization when both the EJB server and client are running in the same container?
As one point of reference, Google has turned up some very old WebSphere performance-tuning documentation from 2000 that explains a tuning setting you can use to enable call-by-reference for EJB communication when client and bean are in the same application server JVM. It states the following:
Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model.
WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings.
Configure "No Local Copies" by adding the following two command-line parameters to the application server JVM:
* -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
* -Dcom.ibm.CORBA.iiop.noLocalCopies=true
CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object-derived (non-primitive) method parameters can actually be changed by the called enterprise bean.
Also, we will be using Process Server 6.2 and WESB 6.2 in the future. Any ideas or recommendations?
Thanks
The only automatic optimization that can really be done for remote EJBs is if they are colocated (accessed from within the same JVM). In that case, the ORB will short-circuit some of the work that would otherwise be required if the request needed to go across the wire. There will still be some necessary ORB overhead including object serialization (unless you turn on noLocalCopies, with all the caveats it brings).
Alternatively, if you know that the UI controller is colocated, your method calls do not rely on parameter or return value copying, and your interface does not rely on the exception differences between local and remote views, then you could create and expose a local subinterface that will be much faster than remote access through the ORB.