Hi, I am setting up a Tomcat cluster on a Windows machine with the Apache httpd server.
I am done with the first two steps, load balancing and session affinity, and am now at the third step: session replication. In that step I need to add a multicast route first.
The tutorial gives the command for adding the route in a Linux environment:
sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
Here eth0 is the network device.
But I am not getting how to add this on Windows 10. I am trying to find it out on my machine. Any help would be appreciated.
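For reference, a rough Windows equivalent of the Linux command above (run from an elevated command prompt) would look like the following; the interface address 192.168.1.10 and interface index 11 are placeholders, and netsh interface ipv4 show interfaces lists the real indexes:

route add 224.0.0.0 mask 240.0.0.0 192.168.1.10 if 11

Add -p if the route should survive a reboot.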
I have set up the Tomcat cluster on the Windows machine using Tomcat 8. In a Windows environment there is no need to run the above command; the cluster works absolutely fine without it. Almost all blogs on the web use Linux and give the above command to set up the route, but on Windows it is not required. Still, one question remains: why is it not required?
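A likely explanation, worth verifying on your own machine: Windows normally creates an on-link route for the whole multicast range 224.0.0.0/4 on every active interface, so the route the Linux tutorials add manually is already there. You can check with route print -4; the IPv4 route table should contain an entry similar to this (interface address and metric will differ):

224.0.0.0        240.0.0.0         On-link       192.168.1.10    276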
I downloaded the Fabric server jar file to a GitHub Codespace and am able to run the server without trouble. However, I am unable to determine the IP needed to connect to the server. Starting the server automatically forwards port 25565, and I have made the port public, but I can't figure out which IP to paste into Minecraft to connect to it. How do I figure out the IP of the server?
I found an answer thanks to inspiration from this question.
Steps:
Set up the Fabric server jar as you normally would, but in the Codespace, then start the server.
Split the terminal so one is running Java (server console) and the other is running bash.
Install ngrok via npm i ngrok --save-dev.
Once the server is finished setting up, run the command ./node_modules/.bin/ngrok tcp 25565.
Copy the IP shown under Forwarding (minus the tcp:// part but including the port). It should look something like 4.tcp.ngrok.io:17063.
You now have the IP of the server!
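For reference, the relevant part of the ngrok console output looks roughly like this (the hostname and port below are just the example values from above):

Session Status                online
Forwarding                    tcp://4.tcp.ngrok.io:17063 -> localhost:25565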
Note: the free version of ngrok uses URLs that change every time, and it has usage limits, but for small-scale servers this shouldn't be an issue. You are also limited by the free Codespaces usage quota GitHub puts in place; however, you can easily get around that by creating a secondary account whose Codespaces you use only for the server.
I am using ArangoDB 2.8.5 on Ubuntu 14.04 (64-bit).
With endpoint = ssl://0.0.0.0:443 in the config file, the server fails to start, with this error message in the log: "FATAL failed to bind to endpoint 'ssl://0.0.0.0:443'. Please check whether another instance is already running or review your endpoints configuration."
I ran netstat -lnpt; only port 22 is in use, by ssh.
The server starts up and binds to port 8530 with SSL when using endpoint = ssl://0.0.0.0:8530, and the admin website is accessible at https://www.website.com:8530/.../
I want the admin UI to be accessible without the additional port 8530, i.e. at https://www.website.com/. This was possible to set up in earlier versions. What am I doing wrong, or is this not possible anymore?
This is a small application, so I am trying to avoid running another web server in front just to forward requests to the Arango apps. Thank you very much for any direction.
Regards,
Anjan
The problem occurs because ArangoDB drops its root privileges to the user specified by
[server]
endpoint = ssl://0.0.0.0:443
uid=arangodb
This may become possible again with ArangoDB 3.0; currently, however, you have to choose one of the workarounds that allow non-root processes to bind low ports:
authbind
Using the iptables REDIRECT target to redirect a low port to a high port (a minimal sketch is given at the end of this answer; note that the "nat" table is not yet implemented for ip6tables, the IPv6 version of iptables)
SELinux or AppArmor
Use the capabilities system (available as of Linux kernel 2.6.24) and the CAP_NET_BIND_SERVICE capability:
setcap 'cap_net_bind_service=+ep' /usr/sbin/arangod
Any time ArangoDB is executed thereafter, it will get the CAP_NET_BIND_SERVICE capability. setcap is in the Debian package libcap2-bin.
More details on the capabilities can be found at:
capabilities(7) man page. Read this long and hard if you're going to use capabilities in a production environment. There are some really tricky details of how capabilities are inherited across exec() calls that are detailed here.
setcap man page
"Bind ports below 1024 without root on GNU/Linux"
In Azure, I created a virtual network and then associated an Ubuntu Server virtual machine, created with Azure Resource Manager Deployment method, with the network. I then updated the associated Network Security Group and added an inbound security rule for port 80 (Source:Any, Destination:Any, Service:TCP/80). After installing Apache on the VM, I tried to access the server from my browser, but have run into a wall. I can SSH into the VM just fine, but web is a no-go, and I cannot figure out why. Any help would be appreciated.
It sometimes happens to me too because I forgot to RESTART the VM; yes, just restart it. At least that works for me. Also don't forget to add an outbound rule too.
It worked for me with this inbound rule.
Note that when a VM is created from the portal (in the ARM model), it gets automatically associated with a virtual network (vnet), a specific subnet within the vnet, and a network security group.
When creating the inbound security rule (a CLI sketch follows this list), make sure to:
identify the correct network security group associated to the VM
use a priority number lower than 65500
set the source port range as *
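As a sketch, the same rule can also be created with the cross-platform Azure CLI; the resource group and NSG names below are placeholders, and exact parameter names can differ between CLI versions (check az network nsg rule create --help):

az network nsg rule create --resource-group myResourceGroup --nsg-name myNsg --name allow-http --priority 1000 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 80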
You also need to open port 80 on the VM itself to allow web access.
I don't think that creating your Network Security Group opens the desired port on the VM automatically.
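If a host firewall such as ufw is actually enabled on the Ubuntu VM (the stock Azure Ubuntu image usually ships without one active), the port can be opened with:

sudo ufw allow 80/tcp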
By default in Azure Resource Manager (ARM), all ports are open; there is no need to create Network Security Groups (NSGs) to open ports, only to close them. Here is an example of an ARM template that deploys an Ubuntu VM with Apache:
https://github.com/Azure/azure-quickstart-templates/tree/master/apache2-on-ubuntu-vm
Alternatively, if you want an auto-scaling LAP stack using VM Scale Sets (in public preview), you can find the ARM template for that here:
https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-lapstack-autoscale
Hope this helps! :)
I have a problem setting up an IPython cluster on a Windows server and connecting to this ipcluster using an SSH connection. I tried following the tutorial at https://ipython.org/ipython/doc/dev/parallel/parallel_process.html#ssh, but I have trouble understanding what the options mean exactly and which parameters to use exactly...
Could anyone help a total noob set up an ipcluster? (Let's say the remote machine has IP 192.168.0.1 and the local machine has 192.168.0.2.)
If you scroll roughly to the middle of the page https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh you will find this:
Current limitations of the SSH mode of ipcluster are:
Untested and unsupported on Windows. Would require a working ssh on Windows. Also, we are using shell scripts to setup and execute commands on remote hosts.
That means there is no easy way to build an ipcluster with an SSH connection on Windows (if it works at all).
Do you really need to connect the machines with an SSH connection? I guess it's possible with an SSH client on each Windows machine, but if you are in a trusted local network you can also decide not to use the loopback interface and just expose the ports...
Sure, you can start the controller and engines separately! For further examples about ports (if you have problems with firewalls), see also How to setup ssh tunnel for ipython cluster (ipcluster).
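As a rough sketch of starting them separately (using the IPs from the question; file locations are the defaults and may differ for your profile): start the controller on the remote machine so it listens on its LAN address, copy the generated connection files from the controller's profile security directory to the other machine, then start the engines there; a client connects with the matching ipcontroller-client.json file.

ipcontroller --ip=192.168.0.1
ipengine --file=ipcontroller-engine.json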
I have a cluster of computers and I am using one of them as a kickstart server.
I configured DHCP/TFTP/FTP on it and it worked fine. When you boot any box in the cluster and choose to boot from the network, it reaches out to that kickstart server, leases an IP, installs the OS, etc. However, using one box dedicated to kickstart is a waste of resources, and I am wondering whether it is possible to use some level of virtualization to achieve this, so you end up with an image that is a fully functional KS server and can run on any box that has the virtualization tools set up.
I have used VirtualBox, Vagrant, and Docker before, but I am not sure whether these tools are powerful enough to do it. Can anyone give some directional guidance or resources to help me get started?
Just virtualize the kickstart server:
Use the virtualization environment's DHCP server facility and set the kickstart server's DHCP module to "proxyDHCP".
When a PXE client boots up, it will get its IP from the virtualization environment's DHCP server and the PXE boot information from the kickstart proxyDHCP server instance.
The PXE client then knows where TFTP and the rest of the kickstart facilities are located and continues the boot/install.
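One common way to provide the proxyDHCP + TFTP piece inside the VM is dnsmasq; this is only a sketch (the subnet and paths are placeholders, and it is not necessarily what the setup above uses). In /etc/dnsmasq.conf:

port=0                         # disable DNS, we only want DHCP/TFTP here
dhcp-range=192.168.0.0,proxy   # proxy mode: hand out PXE info only, no leases
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot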
Yep, I always run kickstart on a VM. A good way to do things is to have a bunch of VMs and share them across hosts. For pretty much every site I build out I have the following VMs:
Build: running Kickstart/Cobbler, DHCP, TFTP
Provision: running Puppet or Chef
Monitoring: Zenoss or Nagios
The VMs' disks all live on iSCSI, and I create the VMs with libvirt/KVM. Everything can easily live on one server. I usually have a second server prepared for the VMs, and if there is ever an outage I just bring them up on the second server.
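For what it's worth, a VM like the Build box can be created from the command line with virt-install; everything below (name, sizes, bridge, ISO path) is a placeholder, and this sketch puts the disk on a local file rather than iSCSI for simplicity:

virt-install --name build --ram 2048 --vcpus 2 --disk path=/var/lib/libvirt/images/build.img,size=20 --network bridge=br0 --cdrom /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso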