We are deploying several NServiceBus services to a remote machine using msbuild.
In the first step we sync all our files with msbuild and upload our self-extracting archives (each archive contains the DLLs needed for one service) plus a cmd script that stops/starts the services on the remote server. That works fine.
Then we execute the script. We have configured all the needed delegation in IIS and are executing the script with the right permissions.
The script itself:
:doUpdateForService
net stop %SERVICE%
IF %ERRORLEVEL% NEQ 0 goto onerror
echo [*] Unpacking service...
%workDir%\%SERVICE%.exe -d%TARGET_DIR% -s
echo [*] Unpacking finished
net start %SERVICE%
IF %ERRORLEVEL% NEQ 0 goto onerror
goto :eof
When we do this the first time (all services are stopped), it works just fine, but on the second run we cannot stop the services at all, even locally. It returns something like this:
The service cannot be controlled in its existing state.
Any suggestions?
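One diagnostic that may help (a sketch, not from the original post; it assumes the sc utility and the same %SERVICE% variable as above): that error usually means the service is stuck in a START_PENDING or STOP_PENDING state, so the script could wait for a stable state before calling net stop / net start.

```batch
:: Hypothetical helper: wait until %SERVICE% leaves a *_PENDING state.
:waitForStableState
sc query %SERVICE% | findstr "_PENDING" >nul
IF %ERRORLEVEL% EQU 0 (
    timeout /t 2 /nobreak >nul
    goto waitForStableState
)
goto :eof
```

Calling this before the stop/start would at least show whether the service is hanging in a pending state between runs.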
I am trying to run an Apache server on AWS App Runner from my source code repository, with Corretto 11 as the runtime, using the start command below:
https://github.com/myanees284/apprunner-jmeter/blob/main/run_apacheee.sh
I can see that the commands in the above sh script get executed and the service gets deployed successfully as running. However, after the deployment and health check, the commands are executed repeatedly.
Application log is here: https://gist.github.com/myanees284/db233e7e0d71eba4643f56c2e1bf87ec#file-application-logs2022-08-22t06_29_55-322z-2022-08-23t06_29_55-322z-json-L281
I am unable to understand why the code is executed multiple times when the service is already running.
After your start script exits, the container stops as well. That's why App Runner starts a new container afterwards.
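The usual fix is to make the start command block for the lifetime of the service, so the container's main process never exits. A minimal sketch (start_and_wait and the service command are placeholders, not from the linked script):

```shell
#!/bin/sh
# Launch the real service, then block on it so that this script
# (the container's main process) stays alive as long as the service does.
start_and_wait() {
  "$@" &               # whatever the real start command is
  service_pid=$!
  wait "$service_pid"  # propagate the service's exit status
}

# Example (hypothetical): start_and_wait httpd -DFOREGROUND
```

Alternatively, `exec` the service directly in the foreground; either way App Runner then sees one long-running process instead of a script that exits after spawning daemons.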
Monit cannot start/stop my service. If I stop the service, Monit just stops monitoring it.
I have attached the log and config for reference.
#Monitor vsftpd#
check process vsftpd
matching vsftpd
start program = "/usr/sbin/vsftpd start"
stop program = "/usr/sbin/vsftpd stop"
if failed port 21 protocol ftp then restart
The log states: "stop on user request". The process is stopped and monitoring is disabled, since monitoring a stopped (= non existing) process makes no sense.
If you restart the service (via CLI or web), it should print "'test' restart on user request" to the log, call the stop program, and continue with the start program (if no dedicated restart program is provided).
In fact, one problem can arise: if the stop script fails to create the expected state (= NOT(check process matching vsftpd)), the start program is not called. So if there is a task running that matches vsftpd, Monit will not call the start program. It is therefore always better to use a PID file for monitoring where possible.
Finally, since I don't know what system/versions you are on, an assumption: the vsftpd binary on my system is really only the daemon. It does not support any options; all arguments are configuration files, as stated in the man page. So supplying "start" and "stop" only tries to start new daemons that load a "start" or "stop" file. If this is true, the problem described above applies, since your vsftpd is never actually stopped.
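For comparison, a PID-file-based variant of the check could look roughly like this (the pidfile path and the init-script wrapper are assumptions; check what your system actually uses):

```
check process vsftpd with pidfile /var/run/vsftpd.pid
  start program = "/etc/init.d/vsftpd start"
  stop program = "/etc/init.d/vsftpd stop"
  if failed port 21 protocol ftp then restart
```

With a pidfile, Monit's notion of "stopped" matches the daemon itself rather than any process that happens to match the name.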
I have taken Windows hosting that can run ASP.NET Core 2 applications. I first published the application to a folder on my PC and then moved the files to a folder on the hosted web site. When I access the folder via HTTP, I get the following error message:
HTTP Error 502.5 - Process Failure
Common causes of this issue:
The application process failed to start
The application process started but then stopped
The application process started but failed to listen on the configured port
Troubleshooting steps:
Check the system event log for error messages
Enable logging the application process' stdout messages
Attach a debugger to the application process and inspect
This is a basic application without a database; it's the empty "Hello World" template from Visual Studio. I did this as a test.
The hosting provider says it does not depend on them. I have no way to know what is going wrong. Any idea?
You can read this blog: http://dotnet4hosting.asphostportal.com/post/How-to-Publish-ASPNET-Core-2.aspx. It seems that you haven't published your application properly.
Based on my experience, to resolve the above error you need to publish the website/project as a self-contained application. To do that, add this to the .csproj file:
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>netcoreapp2.0</TargetFramework>
  <PreserveCompilationContext>true</PreserveCompilationContext>
  <RuntimeIdentifiers>win7-x64;win7-x86;ubuntu.16.04-x64</RuntimeIdentifiers>
  <SuppressDockerTargets>True</SuppressDockerTargets>
</PropertyGroup>
Hope this helps!
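For what it's worth, the self-contained publish is then produced by passing a runtime identifier to dotnet publish, roughly like this (win7-x64 is just one of the RIDs from the snippet above):

```
dotnet publish -c Release -r win7-x64
```

The output folder then contains the runtime alongside the app, so the host doesn't need a matching .NET Core runtime installed.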
When I run the command asadmin list-instances I get the result below. Does anyone have an idea what it means?
[glassfish@mydas]$ asadmin list-instances
I1 not running [pending config changes are: _deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/admin-ear/admin-ear-13308077918078249404.0.ear; _deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/comptabilite-ear/comptabilite-ear-12940026351961817647.0.ear; _deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/comptabilite-ear/comptabilite-ear-11974752653489746292.0.ear; ]
I2 not running [pending config changes are: _deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/admin-ear/admin-ear-13308077918078249404.0.ear; _deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/comptabilite-ear/comptabilite-ear-12940026351961817647.0.ear; _deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/comptabilite-ear/comptabilite-ear-11974752653489746292.0.ear; ]
Command list-instances executed successfully.
I know that I have two instances in my cluster and that they are not running, but I mean these lines here:
[pending config changes are: _deploy
/opt/glassfish3/glassfish/domains/D/applications/__internal/admin-ear/admin-ear-13308077918078249404.0.ear;
_deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/comptabilite-ear/comptabilite-ear-12940026351961817647.0.ear;
_deploy /opt/glassfish3/glassfish/domains/D/applications/__internal/comptabilite-ear/comptabilite-ear-11974752653489746292.0.ear;
]
I checked the directory /opt/glassfish3/glassfish/domains/D/applications/__internal and removed all the files, but I get the same result.
How can I clear all this to get a clean message like this:
I1 not running
I2 not running
Thank you.
The message means that you did some configuration changes for the instances via domain admin server (DAS), but the instances have not been started since then. That means that the remote instances don't know about those configuration changes and will trigger synchronization from the DAS to apply changes when started. Until they can connect to the DAS, those changes will not be applied.
In your case it seems that you have deployed 3 EARs, and you specified either to deploy them on all targets, or the deploy targets include the 2 instances. Therefore the EARs will be deployed to both instances once the config is synchronized (after you start the instances).
Files in applications/__internal are the files of the EAR applications; removing them only corrupts the applications but doesn't undeploy them. An undeploy would be triggered only if you had deployed the applications by dropping them into the autodeploy directory, not if you deploy using asadmin or the admin console. If you open the config/domain.xml file, you should still be able to see references to all 3 applications somewhere, even after you deleted the application files.
To hide the messages in list-instances, you should properly undeploy all 3 applications to remove them from the configuration, or at least remove both instances from their deployment targets so that they only remain deployed on the DAS (but that is probably not what you usually want).
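A sketch of the undeploy commands (the application names here are guessed from the EAR file names; asadmin list-applications shows the real names):

```
asadmin list-applications
asadmin undeploy admin-ear
asadmin undeploy comptabilite-ear
```

After the undeploy, the pending _deploy entries should disappear from list-instances.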
If you want the application to be deployed on the instances, you need to start the instances to synchronize the configuration with the DAS.
Try the following:
asadmin start-instance --sync full I1
asadmin start-instance --sync full I2
This should resynchronize your instances with the DAS.
If this doesn't help you can try the following:
asadmin list-instances --long=true
This should list the failed commands in detail. You can connect to the specific instances via SSH and execute the commands manually; this should apply the pending changes. You may have to restart the instances afterwards to make them synchronize their status with the DAS.
See also:
Glassfish Admin Guide - Resynchronizing GlassFish Server Instances and the DAS
Glassfish Admin Guide - list-instances
GLASSFISH-13213 - failed commands should not be listed as pending changes
I'm trying to follow the steps in the RabbitMQ docs here to get clustering with SSL working on Windows. I'm noticing though that the "rabbitmqctl status" command starts failing after the environment variables defined in those steps are set. I'm getting the following error when executing "rabbitmqctl status":
Error: unable to connect to node 'rabbit@server1': nodedown
I've already configured RabbitMQ to use TLS 1.2 and have verified that it's working. I've ensured that my Erlang 18 cookie is the same in the user directory C:\users\me and C:\Windows on the machine, but the error persists, and is stopping other servers from clustering with it. The docs say that the Windows SSL Cluster setup is "Coming soon"... Here are the steps I've taken so far on server1. I think that Erlang wants forward slashes in the paths - this matches the rabbit.config SSL settings.
Combined the contents of my server\cert.pem and server\key.pem into rabbit.pem via the command "type server\cert.pem server\key.pem > server\rabbit.pem"
Created environment variable ERL_SSL_PATH and set it to: "C:/Program Files/erl7.0/lib/ssl-7.0/ebin"
Created environment variable RABBITMQ_CTL_ERL_ARGS and set to: -pa "%ERL_SSL_PATH%" -proto_dist inet_tls -ssl_dist_opt server_certfile C:/OpenSSL-Win64/server/rabbit.pem -ssl_dist_opt server_secure_renegotiate true client_secure_renegotiate true
Created environment variable RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS and set to same value as RABBITMQ_CTL_ERL_ARGS
Copied the Erlang cookie at C:\Windows\.erlang.cookie to my local user profile directory.
Restarted rabbit using rabbitmq-service start
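For reference, the cookie copy from the steps above can be done with a command like this (a sketch assuming the default locations):

```
copy /Y C:\Windows\.erlang.cookie "%USERPROFILE%\.erlang.cookie"
```

Both copies must be byte-identical, or rabbitmqctl cannot authenticate to the node.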
At this point, on server1, "rabbitmqctl status" no longer works. Attempts to try to join server2 to server1 result in a "node down" error.
Edit 1: I can't get the initial step in the docs working to ask Erlang to report its SSL directory on Windows in order to set ERL_SSL_PATH correctly. Erlang is installed at C:\Program Files\erl7.0 on my server.
Edit 2: Using werl.exe (at C:\Program Files\erl7.0\bin\werl.exe), I was able to issue the command "Foo=io:format(code:lib_dir(ssl, ebin))." and it reported the path as c:/Program Files/erl7.0/lib/ssl-7.0/ebin. However, this doesn't seem to be the cause of this issue, since that's already what I was using.
Thanks,
Andy
For environment changes to take effect on Windows, the service must be re-installed. It is not sufficient to restart the service. This can be done using the installer or on the command line with administrator permissions.
(source)
This will do:
rabbitmq-service.bat stop
rabbitmq-service.bat remove
rabbitmq-service.bat install
rabbitmq-service.bat start
Also, if the other cluster nodes were running while the node you're working on was down, its state might be assumed to have gone out of sync. In that case, the node might fail to start up and you might need to:
rabbitmqctl force_boot
Check the logs to confirm (at %RABBITMQ_BASE%\log\rabbit@server.log).
Late answer, but hopefully this could help a searcher...