Multiple Hangfire dashboards and processing on the same server at the same time

We have a situation where our staging and production instances are on the same server (using a single IIS). Hence we have two Hangfire servers installed on this machine: one for staging and one to manage production.
However, we can't have both the staging and the production version of Hangfire running at the same time, as there are conflicts, so we need to have one stopped at all times.
The two instances would work with different databases and process different items, configured by config files.
Is there any way to have two Hangfire dashboards, and their related processing, running on the same server?

It turns out my question was slightly off: it's not Hangfire that we needed to run on a different port, but the process being used to host it.
In this case we are running Hangfire as a Windows service:
host.RunAsService();
However, by default the host uses port 5000, so when we had two Windows services running they conflicted on this port.
What we needed to do was configure each instance to run on a different port. We were able to do this by adding a port setting to our appsettings and reading it in during startup to configure the URL the host listens on.
public static IWebHostBuilder CreateWebHostBuilder(string[] args)
{
    // Read this instance's port before the host is built, from an
    // appsettings.json section shaped like: { "ProcessingSettings": { "Port": 5001 } }
    var config = new ConfigurationBuilder()
        .AddJsonFile("appsettings.json", optional: true)
        .Build();
    var settings = new ProcessingSettings();
    config.GetSection("ProcessingSettings").Bind(settings);

    return WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls($"http://*:{settings.Port}"); // listen on the configured port instead of the default 5000
}

// Settings class bound from the "ProcessingSettings" section
public class ProcessingSettings
{
    public int Port { get; set; }
}

Related

Does Apache Ignite write to the work/marshaller directory if persistence is not enabled?

I have two different Ignite deployments. In both, the Apache Ignite server is started from a Java program. The program sets the work directory, configures the logger, and then starts the server.
I have a web application (an Apache Ignite client) which connects to the respective Apache Ignite server and performs operations on the cache.
What I am observing is that in one environment some files are created inside the work/marshaller directory, while in the other deployment the marshaller folder is empty.
Persistence is not enabled.
Can anyone explain?
Thanks
Ignite writes to the marshaller directory when a corresponding type is used. This is because it is possible to have a situation where all the nodes that knew which type corresponds to a given typeId have left the cluster, and the remaining nodes can no longer make sense of the data they possess.
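As a rough illustration of when such a mapping gets recorded, here is a minimal sketch (the class and cache names are made up for the example): the first time a custom type is put into a cache, Ignite registers a typeId-to-class-name mapping, and it is that mapping which can end up under work/marshaller.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class MarshallerDirDemo {
    // Custom type: its class name gets mapped to a typeId on first use
    static class Payload {
        int value;
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Payload> cache = ignite.getOrCreateCache("demo");
            cache.put(1, new Payload()); // first use of Payload registers its marshaller mapping
        }
    }
}
So whether the folder is populated depends on whether such types were actually used in that deployment, not on persistence being enabled.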

Infinispan: dynamically change the JGroups multicast port

We recently deployed an app using Infinispan (for the first time).
This app runs in 3 environments: test (2 nodes), pilot (2 nodes), and production (4 nodes).
My issue is that each node sees the 7 others. That's expected, because the JGroups UDP config file is the same for everyone, so they all talk over the same port.
I would like to set a specific port for each environment in code, to avoid maintaining a separate config per environment.
Our config file is stored in our custom stack, which is shared with all of our projects, and I don't want the stack to depend on the projects' environment definitions.
I found the Protocol class, but I am having difficulty seeing how it ties in with the Infinispan manager.
Do you have any solution?
You could use a variable for the mcast port, e.g. <UDP mcast_port="${my.mcast.port:15000}" ... />. Setting the system property my.mcast.port would then override the default of 15000.
You can get the UDP protocol and change the port programmatically in JGroups, but in Infinispan this doesn't make any sense: by the time the cache has been created, JGroups has already been started, and the port cannot be changed once the JChannel has connected.
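Given that, the variable approach is the practical one. A minimal sketch, assuming the property name my.mcast.port from above, a config file called infinispan.xml, and an MCAST_PORT environment variable (all three names are illustrative): the property just has to be set before the cache manager starts JGroups.
import org.infinispan.manager.DefaultCacheManager;

public class PerEnvironmentPort {
    public static void main(String[] args) throws Exception {
        // Resolve the port per environment, e.g. from an environment variable,
        // and expose it as the system property referenced in the JGroups XML.
        String port = System.getenv().getOrDefault("MCAST_PORT", "15000");
        System.setProperty("my.mcast.port", port);

        // JGroups resolves ${my.mcast.port:15000} when the UDP stack is parsed,
        // which happens when the cache manager starts.
        DefaultCacheManager manager = new DefaultCacheManager("infinispan.xml");
        System.out.println("Cluster members: " + manager.getMembers());
        manager.stop();
    }
}
This keeps the shared stack's XML identical everywhere; only the environment supplies the port.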

How to share an Ignite instance among Jetty webapps

The docs state:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/startup/servlet/ServletStartup.html
Servlet-based startup may be used in any web container like Tomcat,
Jetty and etc. Depending on the way this startup is deployed the
Ignite instance can be accessed by either all web applications or by
only one. See web container class loading architecture:
But it then points to a dead link regarding Jetty.
I'm using Jetty. How would this be done (sharing the Ignite instance among all web applications)?
Link to Jetty classloading
Link to Ignite web configuration
The latter describes web session clustering, but you don't have to enable that to use Ignite. I think these docs should cover your case.
To share an Ignite instance between web apps, you will need to:
1. Put the Ignite libraries into the server's main lib/ directory, not under your web app directory
2. Instantiate Ignite using the Jetty API, as per the documentation that you referenced
code:
// (This uses an older Jetty API, as shown in the Ignite ServletStartup Javadoc;
// adapt the server setup to your Jetty version.)
Server service = new Server();
service.addListener("localhost:8090");
ServletHttpContext ctx = (ServletHttpContext) service.getContext("/");

// Register ServletStartup so Ignite starts with the container,
// pointing it at the Ignite configuration file.
ServletHolder servlet = ctx.addServlet("Ignite", "/IgniteStartup",
    "org.apache.ignite.startup.servlet.ServletStartup");
servlet.setInitParameter("cfgFilePath", "config/default-config.xml");
servlet.setInitOrder(1);
servlet.start();
This assumes you are starting Jetty programmatically, i.e. with your own code. Your mileage may vary if you don't.
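Once Ignite is started at the container level like this (with the Ignite jars in the server's lib/ so all webapps share the same classes), each webapp can look the running instance up rather than start its own. A minimal sketch, assuming the default unnamed instance:
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class SharedIgnite {
    public static Ignite instance() {
        // Returns the already-running default instance started by ServletStartup;
        // use Ignition.ignite("name") instead if your config names the instance.
        return Ignition.ignite();
    }
}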

How to make server-sent events (SSE) work in a multiple-server-instance environment

I have a question on how to make SSE work in a multiple-server environment.
In the UI, there are two steps:
1. The UI subscribes to the event stream:
source = new EventSource('http://localhost:3000/stream');
source.addEventListener('open', function(e) {
    $("#state").text("Connected")
}, false);
2. The user posts to an API to update data; after the post, the server sends an event to the UI so it can update.
In a single-server environment this works perfectly fine, no problem at all.
But in a multiple-server-instance environment it doesn't work. For example, say I have two server instances and the UI subscribed to server 1: server 1 remembers the connection, but the data update arrives at server 2. When the data changes, server 2 has no SSE connection to the UI. In this scenario, how can server 2 send an SSE to the UI?
To make SSE work in multiple-server environments, do we need some storage solution to save the connection information, so that any server instance can send SSE to the UI accurately?
Let me clarify this more:
Yes, both server 1 and server 2 are behind a load balancer, and they do not have to have the same URL. The UI is a pure front-end application and can even be a mobile app. So if the UI sends an EventSource request that lands on server 1, then only that instance can use the connection to send events back to the UI, right? But if we have multiple instances of server 1, that means any server 1 instance other than the current one can NOT send events back to the UI.
I believe this is a limitation of SSE unless the connection can be shared among all the instances. But how?
Thanks
If you have two servers with different URLs, make one SSE connection (from each client) to each server.
Be aware of CORS restrictions, i.e. the same-origin policy. (It works identically to XHR2 CORS, so it is fairly easy to google; my book also covers it in detail, in chapter 9.)
If you have two servers behind a load balancer that presents a single URL to the clients, then you just have to make sure the load balancer is configured correctly, i.e. to always pass that socket through to the same back-end server. If a back-end server dies and needs replacing, the load balancer should close the SSE socket; the client will then auto-reconnect and get a new back-end server.
The multiple servers behind the load balancer should either have their own data-push socket connections to a master data source, or should all be polling the master data source.
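A minimal sketch of that last arrangement, with Redis pub/sub standing in for the master data source (the Jedis client, the localhost address, and the "updates" channel name are all assumptions for the example): every instance subscribes to the shared channel and forwards each message to the SSE clients connected to it, so it no longer matters which instance received the original update.
import java.io.PrintWriter;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class SseFanOut {
    // SSE clients connected to THIS instance (one writer per open response)
    private final List<PrintWriter> clients = new CopyOnWriteArrayList<>();

    public void register(PrintWriter sseResponse) {
        clients.add(sseResponse);
    }

    public void start() {
        new Thread(() -> {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        // Forward the update to every locally connected SSE client
                        for (PrintWriter out : clients) {
                            out.write("data: " + message + "\n\n");
                            out.flush();
                        }
                    }
                }, "updates"); // blocks, so it runs on a background thread
            }
        }).start();
    }
}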

New Relic API - difference between instances and hosts?

Referring to https://github.com/newrelic/newrelic_api for the New Relic API, I was wondering what the difference is between hosts and instances.
Basically, I get what an application is and what a server is (obviously). I would assume instances are instances of the application, i.e. if my app were running on Heroku, each instance would correspond to a dyno running my app. But then what is a host? And what's the difference between a host and an instance?
Thanks,
-Billy
UPDATE
Thanks for the answer!
So if I got this right, in the general case the mapping between applications and instances is 1-to-n, i.e. each app can have 1 or more instances. Also, the mapping between instances and hosts is n-to-m, i.e. each instance runs on at most one host (at any given time), but instances are distributed among the available hosts. Similarly, hosts are distributed among servers (say, m-to-s). Is that it? (Apologies if this sounds like I'm stating the obvious, but I'm unfamiliar with the terminology they use over at New Relic.)
If the above is correct, how can I get the instances-to-hosts and hosts-to-servers mappings from the API? I can see how to get applications-to-instances and applications-to-hosts, but what about the other two?
Thanks again for your help!
A host (server) can run many instances of an application. Each process that responds to requests (e.g., a Unicorn worker) is an instance from the New Relic perspective. The host/instance distinction is roughly equivalent to the difference between an IP address and a port.
If you're using Heroku, New Relic treats the entire dyno grid as a single host/server, and each dyno as an instance.
Re: the updated question
A host is a machine or VM that applications run on, and each one can run N instances of the application.
A "server", for the purposes of the NR API, is an OS+hardware that's monitored by New Relic Server Monitoring. The NR application monitoring agent can also be running on a server monitored by the Server Monitoring agent. In that case, both the host and the server should report the same name to New Relic ("server01.example.com").
There isn't a way to get the instance-host or host-server mappings explicitly from the New Relic API. But in the case of server-host, the mapping is that they share the same name. You can probably infer the instance-host mapping from the instance names, too, since they will almost always contain the host name (and possibly also the port number).
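A small sketch of that name-based inference (the instance-name format shown is an assumption; check what your account actually reports): strip the port suffix and group instances by the remaining host name.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class InstanceHostMapping {
    // Group instance names like "server01.example.com:8080" by their host part
    public static Map<String, List<String>> groupByHost(List<String> instanceNames) {
        return instanceNames.stream()
            .collect(Collectors.groupingBy(name -> name.split(":")[0]));
    }

    public static void main(String[] args) {
        System.out.println(groupByHost(List.of(
            "server01.example.com:8080",
            "server01.example.com:8081",
            "server02.example.com:8080")));
        // {server01.example.com=[...:8080, ...:8081], server02.example.com=[...:8080]}
    }
}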