DBT Docs Throwing Service Unavailable Error - dbt

I am not sure why this file cannot be found.
I am using Dagster to run the dbt docs generate command and upload the resulting files to S3, so that they can be deployed via CI/CD to an nginx server.
The documentation will sometimes load after a fresh deploy, but quickly starts throwing the above error. I can confirm, though, that the dbt debug and dbt docs generate CLI commands produce no errors, and that all the files (including catalog.json) are present on the nginx server.
I am not sure how to debug this weird issue. Thanks!
I tried running the docs generate command with the --no-compile flag, but it produces the same result.
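For context, the generate-and-publish flow amounts to roughly the following commands (a sketch; the bucket name is illustrative, and in my setup Dagster drives the generate and upload steps):
dbt debug
dbt docs generate
aws s3 sync target/ s3://my-docs-bucket/
dbt docs generate writes index.html, manifest.json, and catalog.json into the target/ directory, and those are the files synced to S3 and ultimately served by nginx.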

Related

GitLab Runner fails to upload artifacts with "invalid argument" error

I'm completely new to trying to implement GitLab's CI/CD pipelines, but it's been going quite well. In fact, for my ASP.NET project, if I specify a Publish Profile in the msbuild command that uses Web Deploy, it actually deploys the code successfully to the web server.
However, I'm now wanting to have the "build" job create artifacts which are uploaded to GitLab that I can then subsequently deploy. We're using a self-hosted instance of GitLab, for which I'm not an admin, but I can speak to the admin if I know what I'm asking for!
So I've configured my gitlab-ci.yml file like this:
variables:
  NUGET_PATH: 'C:\Program Files\Nuget\Nuget.exe'
  NUGET_SOURCES: 'https://api.nuget.org/v3/index.json'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin\msbuild.exe'
stages:
  - build
build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - '& "$env:NUGET_PATH" restore ApplicationTemplate.sln -Source "$env:NUGET_SOURCES"'
    - '& "$env:MSBUILD_PATH" ApplicationTemplate\ApplicationTemplate.csproj /p:DeployOnBuild=true /p:Configuration=Release /p:PublishProfile=FolderPublish.pubxml'
  artifacts:
    paths:
      - '.\ApplicationTemplate\bin\Release\Publish\'
The output shows that this builds the code just fine, and it also seems to successfully find the artifacts for upload. However, when it uploads the artifacts, even though the request gets a 200 OK response, the process fails. Here is the log output:
So, it finds the artifacts, it attempts to upload them and even gets a 200 OK response (in contrast to the handful of similar reports of this error I've been able to find online), but it still fails due to an invalid argument.
I've already enabled verbose debugging, as you can see from the output, but I'm none the wiser. Looking at the GitLab Runner entries in the Windows Event Log on the box where the runner is hosted doesn't shed any light on things either. The total size of the artifacts is 61.1MB, so I don't think my issue is related to that.
Can anyone see from this output what's invalid? Can I identify which argument is invalid and/or why it's invalid?
Edit: Things I've tried
Specifying a value for artifacts:expire_in.
Setting artifacts:public to FALSE, since I'm using a self-hosted GitLab environment and the default value for this setting (TRUE) is not valid in such an environment. (Both this setting and expire_in are sketched below, after the simple test job.)
Trying every format I can think of for the value of the artifacts:paths setting (this seems to be incredibly robust - regardless of the format I use, the Runner seems to have no problem parsing it and finding the files to upload).
Taking a cue from this question I created a new project with a very simple build job to upload a single file:
stages:
  - build
build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - echo "Test" > test.txt
  artifacts:
    paths:
      - test.txt
About 50% of the time this job hangs on the uploading of the artifacts and I have to cancel it. The other half of the time it fails in exactly the same way as my previous project:
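For reference, the artifacts settings from the first two items above looked roughly like this (a sketch; the expiry value is illustrative):
artifacts:
  expire_in: 1 week
  public: false
  paths:
    - '.\ApplicationTemplate\bin\Release\Publish\'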
After countless hours working on this, it seems that ultimately the issue was that our internal Web Application Firewall was blocking some part of the transfer of artefacts to the server, or the response back from it. With the WAF reconfigured not to block traffic from the machine running the GitLab Runner, the artefacts are successfully uploaded and the job succeeds.
This would have been significantly easier to diagnose if the logging from GitLab was better. As per my comment on this issue, it should be possible to see the content of the response from the GitLab server after uploading artefacts, even when the response code is 200.
What's strange - and made diagnosing the issue even harder - is that when I worked through the issue with the admin of our GitLab instance, digging through logs and running it in debug mode, the artefact upload process was uploading something successfully. We could see, for example, the GitLab Runner's log had been uploaded to the server. Clearly the WAF's blocking was selective and didn't block everything in both directions.

How do I keep Azure DevOps from treating infos and warnings as errors?

I am working on an Azure DevOps pipeline created without YAML. In the pipeline, Node.js and npm are used to build some web interfaces, and mkdocs is used to build web documentation.
My problem is that Azure DevOps treats some info and warning messages as errors:
While the build does not fail, it is marked as only partially successful. I would prefer a clean build.
How do I keep Azure DevOps from treating info and warning messages as errors? Or is there some setting I have to configure on the side of mkdocs and npm?
1. For the info messages that are treated as errors, you can uncheck the Fail on Standard Error option and then add 2>&1 | Write-Host to your mkdocs command; see PS About Redirection. You should run the command via a PowerShell task.
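For example, the inline script for that PowerShell task might look something like this (the exact mkdocs invocation is illustrative; the point is redirecting the error stream into the output stream so it is not flagged):
mkdocs build 2>&1 | Write-Host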
2. As for the error about fsevents, it seems to be an issue starting from npm v3.10.8. Use the Node.js Tool Installer task to install the latest npm version and run the pipeline again. If the issue persists, you can try joefiorini's workaround:
Add this snippet to your package.json file:
"optionalDependencies": {
"fsevents": "*"
},
It seems that the company firewall prevented npm from making ssl connections because of missing certificates or something. I added
npm set strict-ssl false
to the build pipeline, which, ironically, makes the connection less secure, but it makes all the errors go away, which I prefer to merely suppressing the errors/warnings/infos.
I don't know if unchecking Fail on Standard Error would even do anything, since the build did not fail; it was only partially successful. I prefer to have it checked in case a real error occurs.
After looking at it some more, I am not exactly sure the highlighting and classification of errors in the pipeline results is correct. Why would an info message be marked as an error anyway?

dbt deps command results in "Unable to connect to registry hub"

When running dbt deps, I get back this error message:
Running with dbt=0.17.0
Error sending message, disabling tracking
Encountered an error:
Unable to connect to registry hub
What's happening here, and how can I work around it?
First of all, it's worth understanding what's going on here. It looks like you're trying to install a package from the dbt hub site (hub.getdbt.com) — if you open up your packages.yml file, you'll find something like this:
packages:
  - package: package-owner/package-name
    version: 0.1.0
When you run dbt deps (at a high level):
1. dbt sends a request to hub.getdbt.com.
2. hub.getdbt.com sends a request to GitHub to download the package.
3. The package is copied into your project.
This error occurs if dbt cannot connect to the hub site after retrying the network request several times. First off, we recommend simply retrying the dbt deps command; sometimes it's just a blip in connectivity that goes away on the second try.
If the error persists, there may be a few different reasons for it:
hub.getdbt.com might be unavailable. This happens but is relatively rare. You can navigate to hub.getdbt.com to check if this is the case. Also check the Netlify status page to see if there are any issues.
GitHub might be down — you can check this by going to the GitHub status page.
Finally, it may be that a firewall rule or antivirus software on your computer is rejecting the request. Talk to your IT team to find out if this is the case and whether that restriction can be removed.
We generally recommend using the hub syntax for packages; however, if you need to work around it, you can consider using the git syntax (docs) or installing the package from a local directory (docs), as sketched below.
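A minimal sketch of those two alternatives in packages.yml (the repository URL, revision, and local path are illustrative):
packages:
  - git: "https://github.com/dbt-labs/dbt-utils.git"
    revision: 0.9.2
  - local: ./my-local-package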

Gitlab-CI: AWS S3 deploy is failing

I am trying to create a deployment pipeline for Gitlab-CI on a react project. The build is working fine and I use artifacts to store the dist folder from my yarn build command. This is working fine as well.
The issue is with my deployment command: aws s3 sync dist/'bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket; however, I do not know why I get an error on the deployment job.
When I run the aws s3 sync dist/'bucket-name' locally everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special device, FIFO's, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
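In a GitLab CI deploy job, that sync step would look roughly like this (a sketch; the job name and bucket are illustrative). Any non-zero exit code from the aws command, including 2 for skipped files, will fail the job:
deploy-job:
  stage: deploy
  script:
    - aws s3 sync dist/ s3://my-bucket-name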
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer was the problem. The solution was removing special characters from a couple of SVGs. I suspect uploading the dist folder as an artifact (zip) might have changed some of the file names altogether, which was confusing to S3. By removing ® and + from the filenames, the issue was resolved.

Jammit deployment not working with Heroku - 500 error received

I have been using Jammit to handle asset packaging in a Rails 3 app, hosted at Heroku, without any problems.
I have now added some new CSS and JS files to my application, and when I push the updates to Heroku the new assets are not loading. Instead, each CSS and JavaScript file produces the standard Heroku 500 error page (i.e. when I view the CSS/JS files loaded with the Firefox Web Developer add-on, I see the source code of the 500 error page).
Funny thing is that the app runs without any problems in development mode, with all the recent versions of css/js files loading independently just as they are supposed to.
Since I do not receive any error messages in development mode, I am a bit lost here and do not know where to start looking for what the issue could be.
Note: I use the 'Heroku Jammit' plugin to compile the assets and deploy to Heroku, and the compilation finishes without any error messages. (I use the 'heroku jammit:deploy' command, then 'git add .' everything, then commit the changes and push to the Heroku master git repo.)
I could really use some help here; has anyone experienced any similar issues with Jammit and Heroku?
Many thanks for your time and help!
Kind Regards,
Alex
I guess one of the reasons might be that Jammit is unable to compress your JS files. If you happen to have any syntax errors in your JS files, Jammit compression fails. Try running "jammit" on your local machine and see if it fails.
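For example, something like this from the project root (assuming Jammit's command-line binary is available through Bundler and your configuration lives in the default config/assets.yml):
bundle exec jammit
If a JavaScript file has a syntax error, the compression step should report it here rather than surfacing only as a 500 on Heroku.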