Kestrel server doesn't work when run in background, why?

I have a project made and tested with Visual Studio. It works.
Then I uploaded it to an Ubuntu server.
Then I ran it with dotnet run. It works; remote machines see it (via an nginx proxy).
Then I tried dotnet run &. The process seems to start, but nothing listens on the specified port. Then, following an example, I tried sudo nohup dotnet run kestrel > /dev/null 2>&1 &. This time it listened for a while, then died with:
Application started. Press Ctrl+C to shut down.
fail: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[0]
      An unhandled exception has occurred: Can not find compilation library location for package 'google.protobuf'
System.InvalidOperationException: Can not find compilation library location for package 'google.protobuf'
   at Microsoft.Extensions.DependencyModel.CompilationLibrary.ResolveReferencePaths()
(A fragment of the output from nohup.out, first lines only; I skipped project details as irrelevant and private.)
Any clues what's happening? I still get no errors when running it in the foreground.
Here's what I found out: I can't run it (it gives the same error messages) when I run it as root. On my test server I have a special user account named "dotnet". When I log in as dotnet, I can run the app. As root, I can't.
I don't want to run my app with root privileges.
Next try: I ran dotnet restore as root. Then I went with nohup dotnet run kestrel > /dev/null 2>&1 & and it worked.
Nice. Now, is there a way to start my app with limited privileges?

I found the answer myself, so I'll share.
First: do not run .NET Core projects on Linux as root if you don't intend to give them full root privileges. I think web applications with root privileges are a bad idea and sort of asking for trouble.
But when something doesn't work, it's tempting to reach for sudo from time to time. And it turns out that was the cause of the problem:
dotnet restore and dotnet run must be executed with the same privileges, by the same user. When I issued dotnet restore as root, running as root worked. It's even harder the other way around: when you want to run the project as a user with lower privileges, you have to remove all the temporary files before issuing dotnet restore. So, in general, nohup dotnet run kestrel > /dev/null 2>&1 & works like a charm; no sudo is needed here, and running this as root can be harmful.
Now I always create a special dotnet user to run .NET apps on servers. It's safer this way. The user cannot sudo. When I need to perform administrative tasks, I just start a separate root session. I use the same approach with database access: the app only has the right to execute procedures; not even select is allowed.
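For reference, here is a minimal sketch of that workflow as shell commands. The path /home/dotnet/myapp and the account name dotnet are assumptions for illustration, not anything from the project:

# Clean out anything restored or built as the wrong user first (see above).
sudo rm -rf /home/dotnet/myapp/bin /home/dotnet/myapp/obj
sudo chown -R dotnet:dotnet /home/dotnet/myapp
# Restore and run as the same unprivileged user, detached from the terminal.
sudo -u dotnet bash -c 'cd /home/dotnet/myapp && dotnet restore && nohup dotnet run kestrel > /dev/null 2>&1 &'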

Related

Is the only way to start a server side (back end) to run it with a command line like "npm start"?

...or is there something like compiling the project and making it autoexecutable?
Sorry for the general question. I have been doing small server-side projects and I find that I always need to type "npm start" or similar to make the whole thing start.
My question is: do these projects need to be compiled somehow, or is it just as is: a simple command runs the code files and that works as a server side?
Also, shouldn't a server side (by definition) be able to run by itself when the system restarts? So far, I have needed to create .bat files / Startup folder entries in Windows to make them run after a restart.
According to NPM documentation:
npm start
This runs an arbitrary command specified in the package’s "start" property of its "scripts" object. If no "start" property is specified on the "scripts" object, it will run node server.js.
To start the server you have to start a process, and that process is started by npm start. Processes that are killed cannot bring themselves back to life. If the process is killed (e.g. when you restart), you have to make sure a new process is spawned automatically. You can accomplish this in multiple ways. You could use services (for example systemctl in Debian). You could also use tools like Kubernetes, which can automatically restart a container in case of a crash.
Another possible solution is to use something like Respawn, which allows you to respawn a process from NodeJS code if it crashes. Of course, it can also be accomplished with plain NodeJS.
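To make the service route concrete, here is a minimal sketch of a systemd unit for a Node app. The unit name, paths, and user are placeholders, not anything from the question:

# Write a hypothetical unit file for the app.
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=My Node.js server
After=network.target

[Service]
WorkingDirectory=/srv/myapp
ExecStart=/usr/bin/npm start
Restart=always
User=myapp

[Install]
WantedBy=multi-user.target
EOF
# Start it now and on every boot; systemd respawns it if it crashes.
sudo systemctl enable --now myapp.service

With Restart=always the process is respawned after a crash, and enabling the unit answers the restart question: the server side starts itself after a reboot, with no manual npm start needed.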

How to update a self-contained ASP.NET Core app and avoid file locks

I have a self-hosted ASP.NET Core app deployed and in use in an enterprise environment on Windows Server 2012.
I am looking for a way to automate the update process. I am currently doing this through a .bat file, but I keep getting Windows file-lock errors where a file cannot be deleted. The process I follow in the .bat file is:
1. Kill the dotnet core process for the web app.
2. Clear the directory (after sleeping for a couple of seconds).
3. Copy the updates over.
4. Restart the web app.
I am getting the errors in step 2, where I try to clear out the existing directory, which still has file locks even though I have killed the process: "Cannot delete output file - access is denied".
My question is how can I upgrade the self contained asp.net core web app in place and avoid the file locks? If the site is offline for a few seconds it is not an issue.
Thanks
There are several reasons I can think of why deleting the directory gives access-denied errors:
1. Your process isn't actually stopped yet. You can use PowerShell to wait until the process has stopped (or check whether it has stopped and otherwise wait a few more seconds).
2. Another process is still running in that folder (maybe a command line, or explorer.exe is open in the folder).
3. You need admin rights to delete the folder.
4. The .bat file you are executing runs from this directory and is itself locking it.
Try one of the following:
PowerShell Stop-Service: it should wait until the service has really stopped.
PowerShell Wait-Process: waits until the process has stopped; you can call this directly after Stop-Process.
For example, run PowerShell to wait like this (from the command line):
powershell -Command "Wait-Process -Name MyProcess"
(Warning: you might run into ExecutionPolicy problems.)
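Putting the wait into the original .bat flow, a rough sketch; the process name MyWebApp and the paths are placeholders, and the script itself should live outside the app directory (see reason 4 above):

rem Stop the app and wait until the process is really gone.
taskkill /IM MyWebApp.exe /F
powershell -Command "Wait-Process -Name MyWebApp -ErrorAction SilentlyContinue"
rem Clear the old files, copy the update, and start the app again.
rd /S /Q C:\sites\MyWebApp
robocopy C:\staging\MyWebApp C:\sites\MyWebApp /E
start "" /D C:\sites\MyWebApp MyWebApp.exe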
Tip
Use msdeploy: you can remotely execute commands and deploy your application. You can use pre- and post-sync scripts (to stop and start the app), and msdeploy itself will sync the directory for you.

Jenkins SSH remote process is getting killed as soon as the Jenkins SSH plugin returns

Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I saw this behavior: when I run the Jenkins job and call the restart script (which gets the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive indefinitely. Running the same script from Jenkins shows no errors and everything appears to work, but the new process dies as soon as the above-mentioned SSH step completes and control comes back to the next BUILD step, or the Jenkins job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables...). When the job's particular build is complete, I can see that the environment variables for that build do say BUILD_ID=dontKillMe (instead of the default timestamp value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server. (I'm not worried about that file, as restart_tomcat.sh creates its own LOG file, which I "cat" after the script completes; the cat is done in another "Execute shell script on remote host using SSH" build step and successfully shows the log file created by the restart script.)
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
The BUILD_ID=dontKillMe should be passed to the shell, and therefore it should be in your command line, not in Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
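Combining the two suggestions, the remote step would look something like this (the script path is whatever it is on your box):

BUILD_ID=dontKillMe nohup ./restart_tomcat.sh "${app}" > /dev/null 2>&1 &

The environment assignment tells the ProcessTreeKiller to spare the process, and nohup plus the trailing & detaches it from the dying SSH session.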
My solution (it worked after trying everything else) in Ubuntu 14.04 (Trusty Tahr) (Amazon AWS - Amazon EC2), Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one:
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the export line to the start of my script and the issue was resolved.

Jenkins + Phing: Build Failure - can't find build.xml

Trying to set up Jenkins on one of my servers for the first time and think I might be missing something.
Jenkins 1.545
Phing 2.6.1
Jenkins builds give me the following output.
Building in workspace /var/www/vhosts/domain.co.uk/httpdocs
looking for '/var/www/vhosts/domain.co.uk/httpdocs/build.xml' ...
looking for '/var/www/vhosts/domain.co.uk/httpdocs/build.xml' ...
looking for 'build.xml' ...
buildfile 'build.xml' not found.
Build step 'Invoke Phing targets' marked build as failure
Finished: FAILURE
If I run my build.xml on its own, it works fine.
I'm using a custom workspace at the moment. Before that I tried a symlink from the default workspace to my webroot; that way it found the build file but failed when trying to run Phing. I know it's a permissions problem, but I'm not sure exactly where.
I'm running this on a Plesk web server and have tried adding the jenkins user to the psacln and psaserv groups, but that didn't work either.
I use Hudson, but I think it is the same problem.
Provide the full path to the Ant job (advanced settings):
${WORKSPACE}/build.xml
Assuming the Jenkins user is set correctly:
RUN_AS_USER=jenkins
Go to the custom workspace and run:
chown -R jenkins:jenkins myworkspace
If that doesn't work:
chmod -R 777 myworkspace
and then tighten it up later.
I hope it helps.
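To narrow down the permissions side, a quick sketch using the custom workspace path from the question and assuming Jenkins runs as the jenkins user:

# Can the jenkins user see the buildfile at all?
sudo -u jenkins ls -l /var/www/vhosts/domain.co.uk/httpdocs/build.xml
# Can it run Phing against it? -l only lists the targets.
sudo -u jenkins phing -f /var/www/vhosts/domain.co.uk/httpdocs/build.xml -l

If either command fails, the chown/chmod advice above is the fix.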

SSIS Execute Process Task Can't Find executable

I am using the 7zip standalone .exe to unzip a file, via the Execute Process task. I have tested this over and over again on multiple machines and I know it works (at least in debug mode / Visual Studio). I have uploaded the package to the server and created a job that calls said package from the Package Store. The package is not able to find the .exe no matter where I put it.
My first thought was to put the .exe on the C:\ drive, which failed. I have also failed in my attempts to place the .exe on a network location that the account the package is running under has full control over.
Basically, has anybody else had issues getting the Execute Process Task to find an executable when the package is uploaded to the server?
The error message is
Can't find 7za.exe in directory C:\7zip
I'll risk a downvote for being wrong, but I believe you have a permission issue.
You say it runs fine on other servers from BIDS; try it without BIDS. Call it from a command line on a box where it works.
dtexec.exe /file C:\HereComesTheUnzipper.dtsx
If that works, then repeat the step on the troublesome server. RDC into the box and try again
dtexec.exe /ser localhost /sq HereComesTheUnzipper
If that still works, then you are looking at an issue with the job. What account is the SQL Agent service running as? Is the SSIS job step running as a particular set of credentials? If so, is it a SQL Server login (which wouldn't map to anything on the physical box)? Regardless of what your answer is, the resolution will be to ensure the account has access to:
- 7z.exe
- whatever scratch area 7zip may use while unpacking files (I assume %temp%)
- the output folder (C:\bin\7z.exe -e e:\data\MyThing.7z)
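If the package only fails under the Agent, a quick way to check is to inspect the ACL on the folder from the error message; the account name below is a placeholder for whatever the job step actually runs as:

rem Show who has rights on the folder from the error message.
icacls C:\7zip
rem Grant the agent/proxy account read and execute if it is missing.
icacls C:\7zip /grant "DOMAIN\SqlAgentSvc":(OI)(CI)RX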