Command for enabling/disabling an app context in mod_cluster (Apache)?

Can anyone provide me with a command to enable/disable a context in mod_cluster 1.0.10?
I have this
curl "http://mydomain/mod_cluster-manager?Cmd=STOP-APP&Range=CONTEXT&JVMRoute=node1&Alias=default-host&Context=/myapp"
but I am unable to understand Localhost (App or Web) and Alias (app servers running on the proxy), since I am a newbie to this environment. It would be great if someone could explain this to me or even provide a new command.
Thanks!

Noooo! Please, don't use mod_cluster 1.0.10 unless you absolutely have to. If that is the case, make sure you are on the latest version of the maintenance branch: MOD_CLUSTER_1_0_10_GA_CP
The command you are asking for is this:
http://mydomain/mod_cluster-manager?nonce=YOUR_NONCE&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=my-worker-server-1&Alias=alias&Context=/myapp
Explanation:
nonce: yes, it's exactly what the wiki says it is in this context. It must be included if CheckNonce is on.
DISABLE-APP is the command that disables one of these resources:
LBgroup (load-balancing group)
Node (worker)
Context (a particular app on a particular node)
Range=CONTEXT picks the last of the aforementioned three choices.
JVMRoute marks a particular node.
Context marks a particular context on the selected node.
Alias: if there are multiple aliases (virtual servers) on the node, this selects the right one. If you have only one virtual server per node, you can leave it as e.g. alias; it doesn't matter.
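Putting it together with the values from your question (node1, default-host, /myapp), a pair of example invocations could look like the following; quote the URL so the shell does not treat the '&' characters as command separators, and replace YOUR_NONCE as described above:
curl "http://mydomain/mod_cluster-manager?nonce=YOUR_NONCE&Cmd=DISABLE-APP&Range=CONTEXT&JVMRoute=node1&Alias=default-host&Context=/myapp"
curl "http://mydomain/mod_cluster-manager?nonce=YOUR_NONCE&Cmd=ENABLE-APP&Range=CONTEXT&JVMRoute=node1&Alias=default-host&Context=/myapp"
The second command re-enables the same context.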
HTH
karm

Related

Publishing multiple Angular apps to Cloudflare Workers with wrangler

I'm new to Cloudflare Workers and the wrangler publish system, and I can find very little information about my requirements online; perhaps my search query is wrong, so I'm hoping I can find some help here.
I have an NX workspace containing two apps. One app is deployed to the top-level worker, and the second one should be deployed to a sub-directory of the same worker, effectively creating a parent-child structure like the following:
example.com/ -> top-level app
example.com/site2/ -> child-level app
My issue is that I do not understand where and how to define the /sub-directory/ in wrangler.toml. Should I have two separate worker-sites for these? I was under the impression that I could just update the worker (index.js) file in my single worker-site to handle /site2/ and otherwise treat the request as standard.
All I would really like to know is how I can specify that my publish should go to the /site2/ sub-directory, if that is at all possible.
Thanks in advance.
There are a couple of ways to handle this. If the code/logic in the workers for the top level vs. the child level is completely different, I'd recommend using two separate workers. Then you can configure which "routes" each worker will run on -
https://developers.cloudflare.com/workers/cli-wrangler/configuration
Worker 1 could be -
routes = ["example.com/"]
Worker 2 could be -
routes = ["example.com/site2/"]
Check this out for more details on how routing / matching behaves -
https://developers.cloudflare.com/workers/platform/routes#matching-behavior
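As a rough sketch (not taken from your project), the wrangler.toml for the child-level worker could look something like the following; the name, IDs, and bucket path are placeholders, the type assumes a Workers Sites build, and a trailing * in the route matches everything under /site2/:
# hypothetical wrangler.toml for the child-level worker
name = "example-site2"
type = "webpack"
account_id = "YOUR_ACCOUNT_ID"
zone_id = "YOUR_ZONE_ID"
# a trailing * matches all paths under /site2/
routes = ["example.com/site2/*"]
[site]
# build output of the second Angular app (placeholder path)
bucket = "./dist/apps/site2"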
The other way to do it would be to have a single worker, and inspect the incoming request to behave differently depending on whether the request is at the root, or at /site2/. I'd only recommend this if there are small differences between how the two sites should behave (e.g. swapping out a variable).

Selenium Grid: Node API?

The problem:
I want to run Selenium Grid on AWS and would like to use their dynamic scaling. On scale-down, it will just terminate an instance, which means that a node can disappear just like that. That is not the behaviour I would like, but using scripts or lifecycle hooks I can try to make sure that no sessions on the node are active before it is terminated.
It seems like I can hit this API to disconnect the node from the hub: http://NODE-IP:5555/selenium-server/driver/?cmd=shutDownSeleniumServer
Ideally, I need an API on the node itself to gather data about session activity.
Alternatives? Session logs?
Note:
This answer is valid only for the Selenium 3.x series (3.14.1, which as of today is the last of the builds in the Selenium 3 series). The Selenium 4 grid architecture is a completely different one, and as such this answer will not necessarily be relevant for the Selenium 4 grid (it's yet to be released).
A couple of things. What you are asking for sounds like you need a sort of self-healing mechanism. This is not available in the plain-vanilla Selenium Grid flavor.
A Selenium node doesn't have the capability to track the sessions that are running within it.
You need to build all of this at the Selenium Hub (which is where all this information resides).
At a high level, you would need to do the following.
Build a custom proxy by extending org.openqa.grid.selenium.proxy.DefaultRemoteProxy that would have the following capabilities:
Add an API which, when used, would mark the proxy as quiesced (meaning the node has been marked for maintenance and will no longer accept any new session requests).
Override getNewSession(Map<String, Object> requestedCapability) such that it first checks whether the node is quiesced and only then facilitates a new session (a minimal sketch of such a proxy appears after the code snippet below).
Build a custom servlet which, when invoked, can do the following:
Given a node, use the API built in 1.1 to mark that node as quiesced.
Return the list of nodes that don't have any sessions running on them. If you build your servlet by extending org.openqa.grid.web.servlet.RegistryBasedServlet, within your servlet you should be able to get the list of free node URLs by doing something like the below:
List<RemoteProxy> freeProxies =
    StreamSupport.stream(getRegistry().getAllProxies().spliterator(), false)
        .filter(remoteProxy -> !remoteProxy.isBusy())
        .collect(Collectors.toList());
List<URL> urls =
    freeProxies.stream().map(RemoteProxy::getRemoteHost).collect(Collectors.toList());
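For step 1, a minimal sketch of such a proxy might look like the following; it assumes the Selenium 3.14 grid APIs referenced in this answer, and the class and field names are just illustrative:
import java.util.Map;
import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;

public class QuiescingRemoteProxy extends DefaultRemoteProxy {

    // Set to true when the node is marked for maintenance.
    private volatile boolean quiesced = false;

    public QuiescingRemoteProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    // Called by the custom servlet (step 2.1) to stop this node from
    // accepting new sessions; running sessions are left untouched.
    public void quiesce() {
        this.quiesced = true;
    }

    public boolean isQuiesced() {
        return quiesced;
    }

    @Override
    public TestSession getNewSession(Map<String, Object> requestedCapability) {
        // Refuse new sessions on a quiesced node.
        if (quiesced) {
            return null;
        }
        return super.getNewSession(requestedCapability);
    }
}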
Now that we have a custom hub enabled with the functionality to do this cleanup, you could first invoke the 2.1 endpoint to mark nodes for shutdown, then keep polling the 2.2 endpoint to retrieve the IP and port combinations of the nodes that are no longer running any test sessions, and finally invoke http://NODE-IP:5555/selenium-server/driver/?cmd=shutDownSeleniumServer on them.
That, at a high level, can do what you are looking for.
Some useful links that can help you get oriented (all of the provided links are blog posts I wrote at various points in time):
Self-healing grid - https://rationaleemotions.wordpress.com/2013/01/28/building-a-self-maintaining-grid-environment/
Building a custom proxy - https://rationaleemotions.github.io/gridopadesham/CUSTOM_PROXY.html
Building a custom servlet for the hub - https://rationaleemotions.github.io/gridopadesham/CUSTOM_SERVLETS.html

NiFi processor group: how to map all allowed paths of different headers to just one listening port

Let me come straight to the point.
I have set up NiFi on localhost. It's working well and everything seems to be perfect.
I have made many different flows, with headers of course, within the cluster as below.
(Screenshot: Cluster)
When I right-click the header, go to "View configuration", and open "Properties", I see the following.
(Screenshot: Processor details)
You can see the "Listening Port", which is 10004, and a "Hostname" as well. Then there is the "Allowed path", as can be seen.
Now, if I want to access this specific header, I have to hit 10.0.0.18:10004/spec/transform.
The issue is that I have many different headers, each with a different listening port assigned by me. NiFi is not allowing me to assign the same port to every flow I make; I have to assign a different port every time I make a new flow. I just want to assign port 10004 to every other flow and differentiate them only by the "Allowed path".
How can I make this possible? Do I really have to assign a new port to every new flow, or is there a way around it? I hope you understand what I am actually trying to achieve, and I hope to have your answers soon.
Thank you
You can have one HandleHttpRequest at the beginning of your flow listening on port 10004, and set the "Allowed Paths" property to a regular expression that matches all of the paths you want to support. HandleHttpRequest will add the path as an attribute to each flow file named "http.context.path", so you could then use a RouteOnAttribute to route each path to a different part of the flow.
As Bryan Bende said, but in NiFi 1.14.0 that attribute is http.request.uri.
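As a rough sketch (the paths and property names here are made up, not taken from your flow): HandleHttpRequest listening on port 10004 could use an "Allowed Paths" regular expression such as /spec/.*|/other/.*, and the RouteOnAttribute processor after it could carry one user-defined property per branch, written in NiFi Expression Language, e.g.:
spec.transform   ${http.request.uri:equals('/spec/transform')}
other.flow       ${http.request.uri:startsWith('/other')}
On older NiFi releases, substitute http.context.path for http.request.uri, as discussed above.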

Convert Minecraft Mod into Server Plugin

I have been developing Forge Minecraft mods for some time now. I was wondering if it were possible to actually put them on a server. I can't seem to find a direct answer. I know that this may not be the place to put this, but I am just dying to know. Please let me know if I can do it, and if so, how.
You can't turn an MC mod into a server plugin, because Bukkit and Forge are different things (take it from a plugin dev); however, you can make a mod work on Forge servers.
When making a Forge server, all of your server's players will have to have the client mod, in addition to the Forge client, installed for it to work, so be prepared.
The first step is to actually install the server. Next, go to your Forge server folder and upload the mod(s) into it. Then restart your server and boom, it's there. Server mods can be found here.
So if you didn't want to read all of this, the gist of it is: no, you cannot, but your mods can be used on a Forge server (not as plugins for a Bukkit or Spigot server), and your players will need a client-side mod pack. Hope this helped!
If you want to make your mod server-side only, without your "clients" having to download it, add acceptableRemoteVersions = "*" to your @Mod line.
That way players don't have to have the mod to be able to connect to the server, so you can have plugins/mods like dynmap or whotookmycookies without the players needing the mod as well.
If you want to put them on a server, yes, you can. But keep in mind that if you developed solely for single-player worlds, you'll run into "side" issues.
Code will break because some code is specifically client-side only and some code is server-side only.
You will have to add some syncing code with packets to make sure all your graphical things happen and user input is returned to you.
Single-player worlds are far less picky in that respect. So make sure you run all your code on the proper side, and remember that if (!world.isRemote) (a test to see whether you are running server-side) is your friend.
@Mod(modid = MyMod.MODID, name = MyMod.NAME, version = MyMod.VERSION, acceptableRemoteVersions = "*")
public class MyMod {
    ....
}
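To illustrate the world.isRemote check mentioned above, here is a minimal sketch assuming 1.12-era Forge mappings; the helper method and its contents are made up for illustration, only the isRemote guard itself comes from the answer:
// Sketch only: 'world' is the net.minecraft.world.World instance available in
// most Forge callbacks; the helper method name is hypothetical.
private void doServerOnlyWork(net.minecraft.world.World world, net.minecraft.util.math.BlockPos pos) {
    if (!world.isRemote) {
        // Logical server: safe to change game state, e.g. remove a block.
        world.setBlockToAir(pos);
    }
    // On the logical client (world.isRemote == true) only do rendering/UI work,
    // never authoritative game-state changes.
}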

Adding a second webapp

I am struggling to set up a public website in Moqui. I am trying to have (dev-)www.example.net as the public marketing site with signup forms, and the tenants on [tenant-name].example.net. I have set up a basic component, then edited MoquiDevConf.xml and modified the webapp-list as shown below:
<webapp-list>
    <webapp name="webpublic" http-port="8080" https-enabled="false">
        <root-screen host="dev-www.example.net" location="component://webpublic/screen/webpublic.xml"/>
    </webapp>
    <webapp name="webroot" http-port="8080" https-enabled="false">
        <root-screen host="^((?!dev-www.example.net).)*$" location="component://webroot/screen/webroot.xml"/>
    </webapp>
</webapp-list>
I have restarted the app for the changes to take effect, but all I get is an error 500 when I try to visit http://dev-www.example.net:8080/
org.moqui.BaseException: Could not find root screen for host [dev-www.example.net]
As far as I can tell Moqui is finding the component as I see this in the logs:
Added component [webpublic] at [file:/Volumes/MacHDD/Sources/atlas-moqui/runtime/component/webpublic]
Non-dev-www hosts still work and I get the customary login screen, so I am not sure what I am missing, as this is almost a direct copy of the existing webroot.
Thanks for any help!
Sam
You are probably using the same port number. Try a different one (e.g. 8081) for the second webapp; all used ports should be different. Please see my comment as well.
My guess as to why your particular configuration is not working is that the root-screen.#host attribute is always a regular expression, and the URL you are using contains special characters, including '-' and '.'. It should work if you escape these characters with a '\', i.e. use '\.' and '\-'.
That said, if you want to support virtual hosts with the same webapp root for multiple tenants, you shouldn't need to declare the virtual hosts this way; this is only needed if you want a different webapp root screen (which may be what you eventually want to do).
UPDATE: With the configuration snippet above, the issue is that there are multiple webapp-list.webapp elements: one with name=webroot, which is the webapp actually used (as specified by the moqui-name context-param in the web.xml file), and another with name=webpublic, which is ignored because the configuration is looked up by the name from the web.xml file.
The solution is to put both root-screen elements under the webapp element with name=webroot. The way these are looked up is not arbitrary; it is explicit for the webapp name (the moqui-name context-param). If you had multiple webapps deployed, they would need different moqui-name values to refer to different configurations. That would best be done in something other than Winstone, such as Tomcat, and it would also stray from the documented ways of deploying Moqui, so a bit more work would need to be done. There isn't really any point in doing this; it is better to run everything in the same webapp, with multiple root-screen elements and multiple root screens as needed.
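As a sketch of that suggestion (adapted from the snippet in the question, not a tested configuration), the webapp-list would keep a single webroot webapp holding both root-screen elements, with the special characters in the host patterns escaped:
<webapp-list>
    <webapp name="webroot" http-port="8080" https-enabled="false">
        <!-- public marketing site -->
        <root-screen host="dev\-www\.example\.net" location="component://webpublic/screen/webpublic.xml"/>
        <!-- every other host, e.g. the tenant subdomains -->
        <root-screen host="^((?!dev\-www\.example\.net).)*$" location="component://webroot/screen/webroot.xml"/>
    </webapp>
</webapp-list>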