Starting two services, but the second one fails to start because the first one is taking too long - systemctl

In my /opt/entrypoint.sh file, I am trying to start the following services:
1. mosquitto.service
2. notus-scanner
3. postgresql@14-main
4. redis-server@openvas.service
5. ospd-openvas
6. gvmd
7. gsad
However, it seems that the gvmd service fails to start because postgresql@14-main is still starting, at least according to this screenshot:
I tried adding sleep 10 after each service-start line, but that still doesn't help.
To my understanding, After= is supposed to mean that the current service (gvmd in my case) is only started after the listed services have started.
I'm basically just trying to start the gvmd service after postgresql@14-main has successfully started, but I'm not sure of the best way to do this other than repeatedly running systemctl start gvmd until it works properly.
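One common workaround is to replace fixed sleeps with a bounded readiness probe: keep polling until the dependency actually accepts work, then start the dependent unit. This is a sketch under assumptions, not from the original post: the unit names come from the question, and pg_isready must be available in the image. Also note that, as far as I know, After= only orders units that are activated together in one transaction, and a unit can count as "started" before the daemon is actually ready to accept connections.

```shell
#!/bin/sh
# wait_for: retry a readiness probe up to <tries> times, sleeping
# <delay> seconds between attempts; returns 0 as soon as it succeeds.
wait_for() {
  tries=$1; delay=$2; shift 2
  while [ "$tries" -gt 0 ]; do
    if "$@"; then return 0; fi
    tries=$((tries - 1))
    sleep "$delay"
  done
  return 1
}

# In /opt/entrypoint.sh the calls might then look like (hypothetical):
#   systemctl start postgresql@14-main
#   wait_for 30 2 pg_isready -q   # succeeds once PostgreSQL accepts connections
#   systemctl start gvmd
```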

Related

Sonar Api: After scan is finished on new pull request it’s not possible to get /api/measures/component?metricKeys=coverage

SonarQube: Enterprise Edition Version 9.2.4 (build 50792)
Sonar client: 4.7.0.2747
A scan is launched for a merge request in GitLab. I am requesting coverage for the pull request.
Immediately after the scan (using the scanner client) finishes, I try to get coverage with the following call:
http:///api/measures/component?metricKeys=coverage&component=&pullRequest=
I am getting:
404: {"errors":[{"msg":"Component \u0027\u0027 of pull request \u0027\u0027 not found"}]}
Interestingly, if I put a short sleep (1 second) after the scan finishes and before I make the call to get coverage, everything is fine.
It seems to have something to do with the fact that it's a new pull request: even though the scan is finished and it generates a link with results, it still takes some time before the API call I mentioned can return coverage. Also, if I retry the operation (scan and get results) on an already existing pull request, there are no issues like this.
Could you please elaborate on this issue? Is such behavior expected, or are there other ways I can get coverage right away after the scan is finished without adding any sleeps?
As a side observation, under the same circumstances, if I do a scan on a new pull request and call another API (/issues/search?) to get the list of detected issues, it works without any additional sleeps.
Thank you.
After the call from the scanner client completes, SonarQube executes a "background task" in the project that finalizes the computations of measures. When the background task is complete, your measures will be available. This is why adding a "sleep" appears to work for you. In reality, it's just luck that you're sleeping long enough. The proper way to do this is to either manually check the status of the background task, or use tools that check for the background task completion under the covers.
If you're using Jenkins pipelines, and you have the "webhook" properly configured in SonarQube to notify completion of the background task, then the "waitForQualityGate" pipeline step does this, first checking to see if the task is already complete, and if not, going into a polling loop waiting for it to complete.
The machinery uses the "report-task.txt" file that should be written by the scanner. This is in the form of a Java properties file, but there's only one property in the file that you care about, which is the "ceTaskId" property. That is the id of the background task. You can then make an api call to "/api/ce/task?id=", which returns a block that tells you whether the background task is complete or not.
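The flow described above can be sketched in shell. Everything here is an assumption built from the answer's description: the report-task.txt location, the SONAR_URL and SONAR_TOKEN variables, and the 30-attempt, 2-second polling budget are all placeholders to adjust for your setup.

```shell
#!/bin/sh
# extract_ce_task_id: read the background-task id the scanner wrote
# into report-task.txt (a Java properties file with one line per key).
extract_ce_task_id() {
  grep '^ceTaskId=' "$1" | cut -d= -f2
}

# wait_for_ce_task: poll /api/ce/task until the task leaves the
# PENDING/IN_PROGRESS states; only then are measures available.
wait_for_ce_task() {
  task_id=$1
  attempts=30
  while [ "$attempts" -gt 0 ]; do
    status=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_URL/api/ce/task?id=$task_id" |
      sed -n 's/.*"status":"\([A-Z_]*\)".*/\1/p')
    case "$status" in
      SUCCESS) return 0 ;;
      FAILED|CANCELED) return 1 ;;
      *) attempts=$((attempts - 1)); sleep 2 ;;
    esac
  done
  return 1
}

# Usage (hypothetical paths):
#   id=$(extract_ce_task_id .scannerwork/report-task.txt)
#   wait_for_ce_task "$id" && curl -s "$SONAR_URL/api/measures/component?..."
```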

Remote debugging from IDEA doesn't work for OpenResty

I am using MobDebug. If I run a Lua script from the command line, everything works.
But when I run it from OpenResty, IDEA doesn't stop. It only writes "Connected/Disconnected".
Configs:
location / {
    access_by_lua_block {
        local client = require("client")
    }
}
client.lua:
local mobdebug = require("mobdebug");
mobdebug.start()
local lfs = require("lfs")
print("Folder: "..lfs.currentdir())
mobdebug's debug_hook is not invoked for the needed lines, and set_breakpoints is never invoked.
IDEA shows debug logs, but nothing occurs.
IDEA catches the debugger when client.lua is run from the terminal, but misses it when it runs under nginx.
THIS IS NOT AN ANSWER. It's just that I am experiencing basically the same problem, and comment space is too small to fit all the relevant observations I would like to share:
I was actually able to stop immediately after mobdebug.start() in code running in nginx, and to step-debug - but only in code called directly from init_by_lua_block. This code of course executes once during the server startup or config reload.
I was never able to stop in worker code (e.g. rewrite_by_lua_*). mobdebug.coro() didn't help, and mobdebug.on() threw an error about "attempt to yield across C-call boundary".
I was only ever able to stop once, on the statement right after mobdebug.start(); once I hit |> (Resume program), it wouldn't stop on any further breakpoints.
Using mobdebug.loop() is not the correct way to do this, as it is meant for live coding, which is not going to work as expected with this setup. mobdebug.start() should be used instead.
Please see an example of how this debugging can be set up with ZeroBrane Studio here: http://notebook.kulchenko.com/zerobrane/debugging-openresty-nginx-lua-scripts-with-zerobrane-studio. All the details of how the paths to mobdebug and the required modules are configured should still be applicable to your environment.

Hangfire - new server appears every time code changed

I'm new to Hangfire. I have it working on a dev machine, but every time I change the code and run the app (ASP.NET Core 2 MVC), a new server appears in the list on the dashboard.
I can't find anything about this in the documentation, or in the sample files. I've read about cancellation tokens, but these seem to be for intentional shutdown requests, not code updates.
Is this expected behaviour? Am I expected to manually restart the application in IIS every time the code is updated (more important on the server than the dev machine, obviously)?
Thanks.
Found a workaround at this link which worked for me. Credit to ihockett.
TLDR
I know this is a pretty old topic at this point, but I’ve been running into a similar issue. I wanted to throw in my contribution to working around jobs which have been aborted due to server shutdown. If automatic retries are disabled (Attempts = 0), or the job fails due to server shutdown and is beyond the maximum number of attempts, you can run into this issue. Unfortunately for us, this was causing new jobs to not start processing until the aborted jobs were either manually deleted or re-queued.
Basically, I took the following approach to automatically handle aborted jobs: during startup and after initializing the BackgroundJobServer, I use the MonitoringApi to get all of the currently processing jobs. If there are any, I loop through each and call BackgroundJob.Requeue(jobId). Here’s the code, for reference:
var monitor = Hangfire.JobStorage.Current.GetMonitoringApi();
if (monitor.ProcessingCount() > 0)
{
    foreach (var job in monitor.ProcessingJobs(0, (int)monitor.ProcessingCount()))
    {
        BackgroundJob.Requeue(job.Key);
    }
}

Run automated tests on a schedule to serve as a health check

I was tasked with creating a health check for our production site. It is a .NET MVC web application. There are a lot of dependencies and therefore points of failure e.g. a document repository, Java Web services, Site Minder policy server etc.
Management wants us to be the first to know if any point ever fails. Currently we are playing catch-up when a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver-based integration tests that test the sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result, but I need to be able to run them on a loop and notify IT when any fails.
We have a TFS build server, but I'm not sure if it is the right tool for the job. I don't want to continuously build the tests, just run them. Also, it looks like I can't define a build schedule more frequently than daily.
I would appreciate any ideas on how best to achieve this. Thanks in advance.
What you want to do is called a suite of "Smoke Tests". Smoke Tests are basically very short and sweet, independent tests that test various pieces of the app to make sure it's production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide will be useful and transferable.
When you say "I don't want to continuously build the tests, just run them": any CI that you use NEEDS to build them TO run them. Basically, "building" equates to "compiling". In order for your CI to actually run the tests, it needs to compile them first.
As far as running them goes, if the TFS build system has any use whatsoever, it will have a periodic build option. In Jenkins, I can specify a cron time to run. For example:
0 0 * * *
means "run at 00:00 every day (midnight)"
or,
30 5 * * 1-5
which means "run at 5:30 every weekday" (note the day-of-week field is the fifth one; the fourth is the month)
Since you are making smoke tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time. For example:
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
A web application health check is a very important feature. Smoke tests can be very useful in working out whether your website is running, and they can be automated to run at intervals to notify you that something is wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. That is because you are making external calls as the client would, so you cannot see the internals of the application: is the database down, is it a network issue, disk space, a remote endpoint that is not functioning correctly?
Now, some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing that can tell you how your application is behaving is the application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar, and they do the following:
- Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and is not accessible externally.
- Return a JSON response containing:
  - the overall state
  - the host that returned the result (if behind a load balancer)
  - the application version
  - a set of subsystem states (these will indicate which component is not performing)
- Stay resilient: any exception thrown whilst checking should still end with a health check result being returned.
- Some sort of aggregate that can present a number of health check endpoints in one view
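As an illustration of the shape such a response might take (a hypothetical example; the field names and values here are invented, not taken from any specific library):

```json
{
  "status": "Degraded",
  "host": "web-02",
  "version": "1.4.2",
  "subSystems": [
    { "name": "Database", "status": "Good" },
    { "name": "DocumentApi", "status": "Failed" },
    { "name": "PolicyServer", "status": "Good" }
  ]
}
```

A monitoring loop can then alert on the overall status while the subSystems array tells the on-call engineer which dependency to look at first.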
Here is one I made earlier
After doing this a number of times, I have started a library that takes care of the main wiring-up of the health check and exposes it as a service. Feel free to use it as an example, or use the NuGet packages.
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy

TeamCity - How do you get a list of the last finished build of each project through rest api?

I am trying to figure out a way of returning all the last finished builds from TeamCity. Essentially I am creating a status page for TeamCity and want to show all the currently failing builds. So far I have tried various API calls. I thought for sure the following call would give me all failures since the last successful build, but it doesn't seem to work.
/guestAuth/app/rest/builds/?locator=status:failure,sinceBuild:(status:success)
Any help would be greatly appreciated. If I can get all the last finished builds, I can just filter to show only failures.
That REST call is correct. I am using TeamCity 7.1. Could it be that you simply haven't had any failures since the last successful build? Try inverting the conditions:
/guestAuth/app/rest/builds/?locator=status:success,sinceBuild:(status:failure)
This will return a list of successful builds since the last failure (the opposite). If you get results with this query, then your original query should return no results. In other words, of these two queries:
/guestAuth/app/rest/builds/?locator=status:failure,sinceBuild:(status:success)
/guestAuth/app/rest/builds/?locator=status:success,sinceBuild:(status:failure)
At any given time, given that there are completed builds, one should ALWAYS return zero builds and the other should ALWAYS return one or more builds.
According to a comment on this JetBrains' ticket, since TeamCity 8.1 it is possible to use this API call to get the latest build status for all build configurations under a project:
http://teamcity.jetbrains.com/app/rest/buildTypes?locator=affectedProject:(id:TeamCityPluginsByJetBrains)&fields=buildType(id,name,builds($locator(running:false,canceled:false,count:1),build(number,status,statusText)))
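To turn that response into a status page, you still need to filter for failures. Here is a rough shell sketch of that filtering step, using a hard-coded, simplified stand-in for the real response (the line layout and configuration ids are invented for illustration; a real script would parse the JSON or XML the server actually returns):

```shell
#!/bin/sh
# Simplified stand-in for the buildTypes response: one line per build
# configuration, in the form "<id> <status-of-last-finished-build>".
SAMPLE='Proj_Build FAILURE
Proj_Test SUCCESS
Proj_Deploy FAILURE'

# Keep only configurations whose last finished build failed.
FAILING=$(printf '%s\n' "$SAMPLE" | awk '$2 == "FAILURE" { print $1 }')
printf '%s\n' "$FAILING"
```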