What is the use of custom-artifact in Spinnaker? It always gives the error "Custom references are passed on to cloud platforms to handle or process" (500) - spinnaker

I am trying to use a custom-artifact account in Spinnaker.
I have a pipeline where I want to pull an HTTP file (a deployment manifest) as an artifact and use it in a deployment.
I use custom-artifact and put the URL (https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml) in the reference field.
I have tried running this pipeline multiple times, but it always fails with the error: {"error": "Internal Server Error", "message": "Custom references are passed on to cloud platforms to handle or process", "status": 500}
I saw some tutorials where they just use a custom artifact with an HTTP URL to fetch files for the deploy stage.
Steps to reproduce:
1. Create a new pipeline.
2. In the configuration stage, add an artifact and choose "custom-artifact".
3. Set the reference to https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml.
4. Check "use default artifact" and fill in the same details.
5. Add a Deploy stage and use the artifact from the configuration stage.
6. Run the pipeline.
spinnaker version - 1.16.1

As of Spinnaker 1.17, the custom-artifact account type is deprecated. If possible, use an embedded/base64 artifact instead: produce an artifact in one execution and consume it in another execution.
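For context, the 500 error is expected behavior for custom artifacts: Spinnaker does not download a custom reference itself, it just passes the reference through to the cloud provider (hence the message). To have Spinnaker actually fetch a manifest over HTTP, the artifact should use the http/file type, backed by an HTTP artifact account (enabled in Halyard with `hal config artifact http enable`). A sketch of the relevant pipeline-JSON fragment, assuming such an account exists (field names are from memory and may differ slightly between Spinnaker versions):

```json
{
  "expectedArtifacts": [
    {
      "displayName": "deployment-manifest",
      "matchArtifact": {
        "type": "http/file",
        "name": "Deployment.yaml",
        "reference": "https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml"
      },
      "useDefaultArtifact": true,
      "defaultArtifact": {
        "type": "http/file",
        "reference": "https://raw.githubusercontent.com/sdputurn/flask-k8s-inspector/master/Deployment.yaml"
      }
    }
  ]
}
```

The Deploy (manifest) stage can then select this expected artifact as its manifest source, and Spinnaker will resolve the URL at execution time instead of handing it off unresolved.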

Related

How can I add the current service endpoint to the deployed lambda environment

Reading here, you can reference CloudFormation variables to set the environment, like:
environment:
BASE_URL: ${cf:${self:service}-${self:provider.stage}.ServiceEndpoint, 'reinstall'}
Unfortunately, the CF vars are not set on the initial install, so without the default value the serverless deploy will fail, and with the default the initial install will not have the correct endpoint value -- only a subsequent deploy will.
Is there a way to configure this to work on the initial install or after a serverless remove, in the serverless.yml? Is there some plugin that allows updating the environment as part of some postinstall step? Alternatively, does the handler have the service endpoint available somewhere at run time?
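On the last question: for a Lambda invoked through API Gateway, the incoming proxy event carries the host and stage, so the handler can rebuild the service endpoint at run time instead of relying on an environment variable. A minimal sketch, assuming the standard API Gateway proxy event shape (verify the field names against your actual events):

```python
def handler(event, context):
    """Rebuild the service endpoint from the API Gateway proxy event."""
    ctx = event.get("requestContext", {})
    domain = ctx.get("domainName")  # e.g. "abc123.execute-api.us-east-1.amazonaws.com"
    stage = ctx.get("stage")        # e.g. "dev"
    # Fall back gracefully for invocations that do not come through API Gateway.
    base_url = f"https://{domain}/{stage}" if domain and stage else None
    return {"statusCode": 200, "body": base_url or "endpoint unavailable"}
```

Note this only works for API Gateway-triggered invocations; events from other sources (S3, SQS, scheduled rules) carry no request context, so the environment-variable approach is still needed there.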

Failed on startup: ExpectedArtifact matches multiple artifacts

I created a pipeline (KubernetesV2 provider) with a GitHub trigger that expects multiple artifacts using a regex. First stage is a bake stage using that artifact as "overrides" artifact.
If a push event is received containing multiple artifacts, the pipeline does not start with the reason
"Failed on startup: Expected artifact ExpectedArtifact(matchArtifact=Artifact(type=github/file, name=charts/values-.*.yml... matches multiple artifacts
I would like to execute a pipeline instance for each of the artifacts. For now it seems to me that this cannot be done using Spinnaker alone. I could invoke a Jenkins job that, for each of the artifacts, triggers the pipeline again, e.g. via webhook.
Could you please comment on this?
Thanks!
Does the override artifact need to have that same naming convention? I wonder if the workaround for this is to name the override artifact something like override-blah-blah.yml, which would make the Spinnaker trigger think there is only one artifact found.
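To illustrate the workaround: the expected artifact's match pattern can be made narrow enough that only one file from the push event ever matches it, while the trigger itself still fires on the broader regex. A sketch of the relevant fragment, with hypothetical names (field layout as used by the Kubernetes V2 provider; adjust to your pipeline JSON):

```json
{
  "expectedArtifacts": [
    {
      "displayName": "override-values",
      "matchArtifact": {
        "type": "github/file",
        "name": "charts/override-values\\.yml"
      },
      "useDefaultArtifact": false
    }
  ]
}
```

Because the name is an exact-match regex rather than `charts/values-.*.yml`, a push touching several values files resolves to a single expected artifact and the "matches multiple artifacts" startup failure is avoided.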

Global variable in Jenkins Repository URL

I am trying to use a global Jenkins variable in the Repository URL field:
Repository URL: ${BUILD-PEND-SRC}
BUILD-PEND-SRC is defined in Configure System and a value of a proper URL is set. If I do a shell execution job with echo ${BUILD-PEND-SRC} it does display the correct value.
However, when I run the job, I get
ERROR: Failed to check out ${BUILD-PEND-SRC}
org.tmatesoft.svn.core.SVNException: svn: E125002: Malformed URL '${BUILD-PEND-SRC}'
Which tells me that Jenkins did not resolve ${BUILD-PEND-SRC}.
I am summarizing the SO answer that solved it for git-based Jenkins pipeline jobs but also applies to svn-based jobs: https://stackoverflow.com/a/57065165/1994888 (credits go to @rupesh).
Summary:
1. Edit your job config.
2. Go to the Pipeline section.
3. Go to the definition "Pipeline script from SCM".
4. Uncheck "Lightweight checkout".
The issue seems to be with the scm-api-plugin (see the bug report in the Jenkins issue tracker), hence, it is not specific to a version control system.

Build information in server group

The Clusters tab in the Spinnaker web UI shows my server groups and their deployment versions (V000 ... Vn). Next to the deployment version, some build information is displayed, which in my Spinnaker instance is always "(No build info)".
Is there a way to add some build info, for example a Git commit/tag or Docker tag?
Right now the build info is based on the Jenkins build information. Spinnaker derives this information from the AMI tags appversion and build_host to link back to Jenkins. appversion has to follow a defined schema; see this comment in the Rosco source code for an example.
You cannot customize these values at this point, but a pull request is welcome.

Maven deploy fails for Apache Archiva

I have a Maven project which generates a 413.06 KB jar file. I have to deploy it to an Apache Archiva managed repository. I have tried to deploy different versions, and it created the required layout and structure and uploaded some files; it even uploaded part of the jar (around 200 KB, with the uploaded size changing each run), but it always fails to upload the full 413.06 KB jar file.
Information:
I am running standalone Archiva.
I have given the guest account the Global Repository Manager and "Repository Manager - MYREPO" roles.
I have also tried a separate account in Archiva with "Repository Manager - MYREPO" rights and configured it in Maven's settings.xml file with a custom timeout.
I am getting the following error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy
(default-deploy) on project SharedshelfRepository: Error deploying artifact: Transfer error:
The server did not respond within the configured timeout. -> [Help 1]
That might be a maven-deploy-plugin issue; the resources plugin itself needs several dependencies. Try manually deploying the jar and pom.
What version of Maven are you using? You might try 3.0.4, as it has a different HTTP library. I'm also not sure if there's more context for what was happening when it timed out (it seems more request-oriented than deploy-oriented, and deploy does request some metadata).
I can't see that you'd need to alter the timeout, as none of the defaults should apply to such a small file. How long does it take to fail?
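If a longer timeout does turn out to be needed, with the classic wagon HTTP transport it can be set per server in settings.xml rather than globally. A sketch, where the server id is hypothetical and must match the repository id in the pom's distributionManagement:

```xml
<settings>
  <servers>
    <server>
      <id>myrepo</id>
      <configuration>
        <httpConfiguration>
          <all>
            <!-- milliseconds to establish the connection -->
            <connectionTimeout>60000</connectionTimeout>
            <!-- milliseconds to wait for a response -->
            <readTimeout>300000</readTimeout>
          </all>
        </httpConfiguration>
      </configuration>
    </server>
  </servers>
</settings>
```

That said, for a 413 KB file a default-timeout failure usually points at something between client and server (proxy, antivirus, or the Archiva instance itself stalling) rather than at the timeout value.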