Unable to set up multiple instances of SymmetricDS on a VM - replication

I have two instances of SymmetricDS running on my VM, started with the commands:
bin/sym --port 9000
bin/sym --port 4000
but I want to be able to run both instances through the Linux service. I have tried installing the service using 'bin/sym_service install' in the respective directories, but it's not working: when I try to start the service I get an error stating that the port is in use.
How can I set up the Linux service to use different ports?

To run multiple services of SymmetricDS on the same machine, you will need to:
Set unique port numbers for http, https, and jmx in conf/symmetric-server.properties.
Set a unique service name for wrapper.name in conf/sym_service.conf.
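For example, the second instance might use something like this (the port values here are arbitrary, and the property names are the usual ones from 3.x releases of symmetric-server.properties, so verify them against your own conf files):
conf/symmetric-server.properties:
http.port=4000
https.port=4443
jmx.http.port=4001
conf/sym_service.conf:
wrapper.name=symmetricds-2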
Then you can run "bin/sym_service install" to install the init script. On a side note, consider whether it makes sense to run multiple nodes inside the same instance by placing multiple engine properties files in the "engines" directory.

Find out which program is listening on the port in use and kill it with kill -9 PROCESS_NUMBER. Then try again.
To run on a different port, execute bin/sym --port 3000 server, as described in symmetricds.org/doc/3.10/html/user-guide.html#_sym_launcher
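For example (assuming lsof is available; substitute whichever port you are checking):
# find the PID listening on port 9000
sudo lsof -i :9000
# then kill it
kill -9 PROCESS_NUMBER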

Related

Is it possible to host a Minecraft server on GitHub Codespaces?

I downloaded the Fabric server jar file to a GitHub Codespace and am able to run the server without trouble. However, I am unable to determine the IP needed to connect to the server. Starting the server automatically forwards port 25565 and I make the port public. However, I can't figure out which IP to paste into Minecraft to connect to it. How do I figure out the IP of the server?
I found an answer thanks to inspiration from this question.
Steps:
Set up the Fabric server jar as you normally would, but in the Codespace. Start the server.
Split the terminal so one is running Java (server console) and the other is running bash.
Install ngrok via npm i ngrok --save-dev.
Once the server is finished setting up, run the command ./node_modules/.bin/ngrok tcp 25565.
Copy the IP shown under Forwarding (minus the tcp:// part and including the port). This should look something like 4.tcp.ngrok.io:17063.
You now have the IP of the server!
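Put together, the bash side of the session looks roughly like this (the forwarding address is just an example; yours will differ):
npm i ngrok --save-dev
./node_modules/.bin/ngrok tcp 25565
# ngrok then displays a line like:
#   Forwarding   tcp://4.tcp.ngrok.io:17063 -> localhost:25565
# so players would connect using 4.tcp.ngrok.io:17063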
Note: The free version of ngrok uses URLs that change every time, and it has usage limits, but for small-scale servers this shouldn't be an issue. You are also limited by the free Codespaces usage quota GitHub puts in place. However, you can easily get around this by creating a secondary account that you use Codespaces on only for the server.

Jenkins selenium docker and application files

I have a Selenium Grid hub and a node, both up and running in Docker. I also have a Docker container which includes my application, up and running with the same setup as my PC. I get the following error:
[ConnectionException] Can't connect to Webdriver at http://ip:4444/wd/hub. Please make sure that Selenium Server or PhantomJS is running.
The IP is correct, since I can see the Selenium Grid there as it should be. What might be the problem? When I get inside the container that I have in Jenkins, it runs my tests as well.
Have you explicitly instructed the Hub Docker container to expose its internal port 4444 as 4444 externally?
Instructing a container to expose ports does not enforce the same port numbers to be used. So in your case, while internally it is running on 4444, externally it could be whatever port Docker thought was the best choice when it started.
How did you start your container? If via the docker command line, did you use -P or -p 4444:4444? (Note the difference in case.) -P simply exposes ports with no guarantee of number, whereas -p allows you to map them as you wish.
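For example, a hub started like this (using the public selenium/hub image as a stand-in for however you actually launch yours) is guaranteed to be reachable on 4444 externally:
docker run -d -p 4444:4444 --name selenium-hub selenium/hub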
There are many ways to orchestrate Docker which may allow you to control this in a different way.
For example, Docker Compose has the potential to allow your containers to communicate via 4444 even if those are not the actually exposed ports. It achieves this through some clever networking, but it is very simple to set up and use.
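As a sketch, a docker-compose.yml along these lines would let the node and your tests reach the hub at hub:4444 (the image names and the HUB_HOST variable are assumptions based on the SeleniumHQ docker-selenium images; check the docs for the versions you use):
version: "2"
services:
  hub:
    image: selenium/hub
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome
    depends_on:
      - hub
    environment:
      - HUB_HOST=hub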

Is there a way to access a running docker container on a remote server from my local development environment (Sublime)?

Currently I can use rsub with Sublime to edit remotely, but the container is a second layer of ssh that is only accessible from the host machine.
Just curious, how do you use your remote host machine if you even have no ssh running on it?
Regarding your question, I think you need to install openssh-server directly inside the container and map the container's port 22 to a custom port on the host. Inside your container you'll have to run some initial process that will launch all the processes you need (like openssh-server).
Consider this comprehensive example of the use of supervisord inside a Docker container.
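As a rough sketch of that approach (the image name, user, and port choices here are placeholders):
# on the remote host: map the container's sshd to port 2222
docker run -d -p 2222:22 my-image-with-sshd
# from your local machine: ssh straight into the container
ssh -p 2222 user@remote-host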

Start ipython cluster using ssh on windows machine

I have a problem setting up an ipython cluster on a Windows server and connecting to this ipcluster over an ssh connection. I tried following the tutorial at https://ipython.org/ipython/doc/dev/parallel/parallel_process.html#ssh, but I have trouble understanding what exactly the options mean and which parameters to use...
Could anyone help a total noob to set up an ipcluster? (Let's say the remote machine has IP 192.168.0.1 and the local machine has 192.168.0.2.)
If you scroll roughly to the middle of the page https://ipython.org/ipython-doc/dev/parallel/parallel_process.html#ssh you will find this:
Current limitations of the SSH mode of ipcluster are:
Untested and unsupported on Windows. Would require a working ssh on Windows. Also, we are using shell scripts to setup and execute commands on remote hosts.
That means there is no easy way to build an ipcluster with an ssh connection on Windows (if it works at all).
Do you really need to connect the machines with an ssh connection? I guess it's possible with an ssh client on each Windows machine, but if you are in a trusted local network you can also decide not to use the loopback interface and just expose the ports...
Sure, you can start the controller and the engines separately! For further examples about ports (if you have problems with firewalls), see also How to setup ssh tunnel for ipython cluster (ipcluster).
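By hand, that looks roughly like this (the connection-file location varies between IPython versions, so treat the path as approximate):
# on 192.168.0.1 (the controller machine)
ipcontroller --ip=192.168.0.1
# copy security/ipcontroller-engine.json from the controller's IPython
# profile directory to 192.168.0.2, then start an engine there:
ipengine --file=ipcontroller-engine.json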

RabbitMQ Shovel plugin stuck on "starting" status

RabbitMQ starts up just fine, but the shovel plugin status is listed as "starting".
Each broker is running on a separate AWS instance. The remote server is Windows 2008 Server; the local server is Amazon Linux.
I'm using the following rabbitmq.config:
[{rabbitmq_shovel,
  [{shovels,
    [{scrape_request_shovel,
      [{sources, [{broker, "amqp://test_user:test_password@localhost"}]},
       {destinations, [{broker, "amqp://test_user:test_password@ec2-###-##-###-###.compute-1.amazonaws.com"}]},
       {queue, <<"scp_request">>},
       {ack_mode, on_confirm},
       {publish_properties, [{delivery_mode, 2}]},
       {publish_fields, [{exchange, <<"">>},
                         {routing_key, <<"scp_request">>}]},
       {reconnect_delay, 5}
      ]}
    ]
  }]
}].
Running the following command:
sudo rabbitmqctl eval 'rabbit_shovel_status:status().'
returns:
[{scrape_request_shovel,starting,{{2012,7,11},{23,38,47}}}]
According to this question, this can result if the users haven't been set up correctly on the two brokers. However, I've double-checked that I set up the users correctly via rabbitmqctl add_user on both machines, and I've even tried it with a different set of users, to be sure.
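For reference, the user setup on each broker would look something like this (wide-open permissions shown purely for testing; tighten them for real use):
rabbitmqctl add_user test_user test_password
rabbitmqctl set_permissions test_user ".*" ".*" ".*"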
I also ran an nmap scan of port 5672 on the remote host to verify it was up and listening on that port.
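That kind of check is a one-liner (hostname masked as in the config above):
nmap -p 5672 ec2-###-##-###-###.compute-1.amazonaws.com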
UPDATE: The problem isn't solved, but this does appear to be the result of connection problems with the remote server. I changed reconnect_delay to 0 in my config file, to avoid having shovel infinitely retry the connection. I highly recommend others with this problem do this as well, as it allows you to get error messages out of rabbit_shovel_status. In my case I got the following error:
[{scrape_request_shovel,
{terminated,
{{badmatch,{error,access_refused}},
[{rabbit_shovel_worker,make_conn_and_chan,1},
{rabbit_shovel_worker,handle_cast,2},
{gen_server2,handle_msg,2},
{proc_lib,init_p_do_apply,3}]}},
{{2012,7,12},{0,4,37}}}]
Answering my own question here, in case others encounter this issue. This error (and likewise the timeout error {{badmatch,{error,etimedout}}, if you get that one) is almost certainly a communications problem between the two machines, most likely due to port access / firewall settings.
There were a couple of dumb things I was doing here:
1) I was using the wrong DNS name for my remote EC2 instance (d'oh! really dumb -- can't tell you how long I spent banging my head against the wall on this one...). Remember that stopping and starting your instance generates a new DNS name if you don't have an Elastic IP associated with the instance.
2) My remote instance is a Windows server, and I realized you have to open up port 5672 both in the Windows firewall and in the EC2 security group -- there are two overlapping levels of access control here, and opening up the port in the EC2 management console isn't sufficient if your machine is a Windows server on EC2, as you also have to configure the Windows firewall itself.
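For the Windows side, an inbound rule can be added from an elevated command prompt roughly like this (the rule name is arbitrary); the EC2 security group still needs its own inbound rule for TCP 5672:
netsh advfirewall firewall add rule name="RabbitMQ 5672" dir=in action=allow protocol=TCP localport=5672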