Calling a Jenkins job from a Codefresh pipeline fails with: x509: failed to load system roots and no roots provided - ssl

I have a Jenkins job which I would like to invoke from my Codefresh pipeline.
Using the following example from the Codefresh docs, I have my Codefresh pipeline configured and ready:
https://codefresh.io/docs/docs/integrations/jenkins-integration/#calling-jenkins-jobs-from-codefresh-pipelines
The resulting build runs with the following output:
Pulling image codefresh/cf-run-jenkins-job:latest
Pulled layer '1160f4abea84'
Pulled layer '6df1582e0e0e'
Digest: sha256:a95b23c24b51d5fc1705731f7d18c5134590b4bc61b91dcf5a878faf2aec60b3
Status: Downloaded newer image for codefresh/cf-run-jenkins-job:latest
INFO[0000] Going to trigger <jenkins_job_name> job on https://<jenkins_host>:8443
ERRO[0000] Post https://<jenkins_host>:8443/job/<jenkins_job_name>/build: x509: failed to load system roots and no roots provided
Successfully ran freestyle step: Triggering Jenkins Job
Reading environment variable exporting file contents.
Reading environment variable exporting file contents.
As you can see, the build fails to successfully trigger the Jenkins job.
After some research on the Internet, I came to the conclusion that this is an SSL certificate issue.
But I have no idea how to proceed from here. What exactly is missing, and where should it be configured? I would really appreciate any help here.

Do you know what kind of SSL configuration your Jenkins server has? Is it mutual authentication or just a server-side certificate? Is the certificate self-signed or not?
Have you tried to call the Jenkins API on your own (outside of Codefresh) to check whether SSL works fine?
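For example, something along these lines, run from any machine that can reach Jenkins (the placeholders match the ones in your build output; <user> and <api_token> are a Jenkins user and API token of yours):
# Trigger the job while skipping certificate verification:
curl -k -X POST -u <user>:<api_token> "https://<jenkins_host>:8443/job/<jenkins_job_name>/build"
# The same call, but verifying the server certificate against your own CA bundle:
curl --cacert /path/to/your-ca.pem -X POST -u <user>:<api_token> "https://<jenkins_host>:8443/job/<jenkins_job_name>/build"
If the first call succeeds and the second one fails, the Jenkins certificate is self-signed or signed by a CA that is not in the default trust store, which is exactly what the x509 error in the step suggests.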
Also, I would suggest you open a support ticket (from the top-right menu in the Codefresh UI) and make sure to mention the URL of the build that has this issue.

Related

Unable to instantiate the chaincode in a multicloud setup

I am trying to achieve a multicloud architecture. My network has 2 peers, 1 orderer and a webclient, all running in Azure. I am trying to add a peer from Google Cloud Platform to the Azure channel. For this, I created the crypto-config for the 3rd peer from the Azure webclient; in the crypto-config, the peers in Azure keep their own certificates, while for the 3rd peer I placed the newly created certificates. Now I can install, instantiate, invoke and query chaincode on peers 1 and 2, and I can install the chaincode on the 3rd peer, but I am unable to instantiate the chaincode there.
Getting the following error: Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Post http://unix.sock/containers/create?name=dev-(CORE_PEER_ID)-documentCC-1: dial unix /var/run/docker.sock: connect: permission denied
Can anyone guide me on this?
Note: the peers, orderer and webclient are all running on different VMs.
@soundarya
It doesn't matter in how many places your solution is deployed.
The problem is a Docker socket permission issue: Docker is only usable via sudo on that VM. Try adding your user to the docker group instead (a command sketch follows the link below).
The link below will help you out:
https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket
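For instance, assuming a typical Ubuntu setup on the VM whose Docker daemon the peer talks to, the fix sketched in that article looks like this:
# Add your user to the docker group so /var/run/docker.sock is usable without sudo
sudo groupadd docker          # the group usually already exists
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`), then verify:
docker ps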
To learn more about docker.sock, you can refer to my answer on another question: Can anyone explain docker.sock

Pull request analysis not working in SonarQube for Bitbucket Server

I have SonarQube for Bamboo plugged in and working fine with MSBuild. I also have Sonar for Bitbucket Server which, as far as I can tell, is configured correctly, but pull request analysis is not working. I have a repository in Bitbucket configured to allow SonarQube analysis and have min. severity set to INFO. But when I open the pull request, a box on the right says:
"Sonar data unavailable. Was not able to fetch data for Sonar project "[project name]:[branch name]". Either the build is not finished yet, your pull request has not been analyzed or a non-existing Sonar project is referenced. You can configure the referenced Sonar project in the repository settings."
The referenced project is configured correctly in the repository settings, and branch analysis for this project works just fine on the SonarQube server via the Bamboo plugin. The pull request analysis just won't work while everything else does. Has anyone seen this issue? Any ideas as to why?
Bitbucket Server v4.13.0
Sonar for Bitbucket Server 1.13.1-bbs4
Could you please create a bug report at https://support.mibexsoftware.com? We can then analyze the issue in detail. It would also help if you could send us the debug logs of the plug-in. I can give you more detailed instructions after we have your bug report.
Thanks,
Michael from Mibex Software

Spinnaker built with docker-compose redirects to localhost

I built Spinnaker using docker-compose following the guide here, but it always redirects to localhost. How can I fix this?
e.g.
http://localhost:8084/auth/redirect?to=http%3A%2F%2F192.168.99.100%3A9000%2F%23%2Finfrastructure
I set host: 0.0.0.0 in spinnaker-local.yml and configured Apache2 for deck with ProxyPreserveHost On, but it's not working.
Where is the configuration for the redirect?
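(For reference, one way to narrow this down is to check which endpoint is handing back a localhost URL by looking at the Location headers directly; the hosts and ports below are the ones from the example URL above, and the exact endpoint that issues the redirect may differ in your setup:)
# Dump the response headers from deck and from gate and look at the redirect target
curl -s -o /dev/null -D - "http://192.168.99.100:9000/" | grep -i '^location'
curl -s -o /dev/null -D - "http://192.168.99.100:8084/auth/redirect" | grep -i '^location'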
All containers are running well, but fiat logs error messages like this:
WARN 1 --- [ecutionAction-1] c.n.s.fiat.roles.UserRolesSyncer : [] User permission sync failed. Server status is DOWN. Trying again in 10000 ms. Cause:(Provider: DefaultServiceAccountProvider) retrofit.RetrofitError: unexpected url: front50/serviceAccounts
I'm sure I set fiat to false; does this matter?
Thanks.
The docker-compose project linked above is not available anymore. That deployment type is no longer supported.
The easiest way I suggest for people to get started quickly is to use Armory's open source Minnaker. It runs on top of a small K3s cluster and contains a functional Spinnaker deployment.
It is a great way to get started.
I tried the Debian local deployment and it failed every time.
Enjoy your CD operations.

ERROR: The overall deployment failed because too many individual instances failed deployment

I'm trying to deploy using CircleCI -> S3 -> CodeDeploy -> EC2.
I was able to upload the deployment artifact to S3 from CircleCI, but the deployment from S3 to the EC2 instance fails. Here's the error:
The overall deployment failed because too many individual instances
failed deployment, too few healthy instances are available for
deployment, or some instances in your deployment group are
experiencing problems. (Error code: HEALTH_CONSTRAINTS)
The error comes from CodeDeploy; I can't figure out why or how to fix it.
I'd appreciate any advice.
If you are running on Ubuntu there can be plenty of reasons; here is a checklist you can verify (a consolidated command sketch follows the list).
Check that the CodeDeploy agent is installed on your EC2 instance. Please refer to this document to install the agent:
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html
$ sudo service codedeploy-agent status
If you are running Ubuntu release 20.x and you get this error:
./install:22:in `block in method_missing': undefined method `path' for #<IO:> (NoMethodError)
try running the install file like this:
sudo ./install auto > /tmp/logfile
Check that you have an EC2 instance CodeDeploy role: create a CodeDeploy service role and assign it to the instance, see https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html.
If you assign the EC2 role after the instance has already been started, restart the instance.
Check your appspec.yml file placement as per the top answer, and try to avoid any long timeouts in it.
Log into your instance and check the agent's error log:
$ tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log
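Putting the agent-related checks above together, a rough sketch for Ubuntu looks like this (the region in the S3 URL is an assumption, substitute your instance's region; on Ubuntu 20.x you may also need the ruby-webrick package):
sudo apt-get update
sudo apt-get install -y ruby-full wget
cd /home/ubuntu
# The installer bucket follows the pattern aws-codedeploy-<region>; us-east-1 is used here as an example
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto > /tmp/logfile
sudo service codedeploy-agent status
tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log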
You should be able to figure out what caused the individual instances to fail by digging into the deployment instance details:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-view-instance-details.html
These should contain more detailed information about why your application was unable to be deployed.
This error is commonly due to problems in the configuration of the appspec.yml or appspec.json file (depending on the format you are using).
If you have any hooks, I recommend removing them, checking whether the deployment works, and then adding the hooks back one by one so you can identify which one causes the error.
The appspec.yml file should be located at the root of your project:
│-- appspec.yml
│-- index.html
└-- scripts
    │-- install_dependencies
    │-- start_server
    └-- stop_server
In the scripts folder, place the scripts that you want executed for each hook.
Here is an example of the appspec.yml file:
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
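The hook scripts referenced above are plain executables. As a minimal sketch (this assumes an Apache-based sample on an Amazon Linux style instance, so yum and httpd are assumptions, not part of the original answer), they could look like:
#!/bin/bash
# scripts/install_dependencies
yum install -y httpd

#!/bin/bash
# scripts/start_server
service httpd start

#!/bin/bash
# scripts/stop_server
service httpd stop
If a hook fails with a permission error, also check that these script files are executable.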
I hope I can help you 😃👻🕺🏾
Make sure the CodeDeploy Host Agent Service is running in your target EC2 instance.
The error you are facing is a generic error message thrown when any of the lifecycle events fails, which could be BeforeBlockTraffic, BlockTraffic, ApplicationStop, etc.
If the first event, i.e. BeforeBlockTraffic, failed, the first step would be to check whether the CodeDeploy agent is running or not.
The event failure message shown in the deployment details will tell you the exact error behind it.
From the failed deployments, I can see all lifecycle events were skipped. Instance i-0bcc36e73851297f2 is currently in the Stopped state, and I can see the IAM instance profile is missing. Your Amazon EC2 instances need permission to access the Amazon S3 buckets or GitHub repositories where the applications that will be deployed by AWS CodeDeploy are stored. To launch Amazon EC2 instances that are compatible with AWS CodeDeploy, you must create an additional IAM role, an instance profile [1].
For such failures, you can always begin with the general troubleshooting checklist for a failed deployment [2] and then look at the troubleshooting guides for deployment issues and instance issues [3].
[1] http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html
[2] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-general.html
[3] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting.html
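If you prefer the CLI, attaching an instance profile to that instance could look roughly like this (the profile and role names are placeholders; the role still needs the permissions described in [1]):
# Create an instance profile, put the role into it, attach it to the instance, then start the instance again
aws iam create-instance-profile --instance-profile-name CodeDeploy-EC2-Instance-Profile
aws iam add-role-to-instance-profile --instance-profile-name CodeDeploy-EC2-Instance-Profile --role-name CodeDeploy-EC2-Role
aws ec2 associate-iam-instance-profile --instance-id i-0bcc36e73851297f2 --iam-instance-profile Name=CodeDeploy-EC2-Instance-Profile
aws ec2 start-instances --instance-ids i-0bcc36e73851297f2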
Check the status of the Code Deploy Agent. In my case, the agent wasn't up.
Please check the role given to the EC2 machine (where the agent is running). It should have S3 access as well. This resolved my issue.
"The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path 'appspec.yml'"
Please place your appspec.yml file in your root folder to solve this error
To access your after script and before script

Testlink Jenkins result integration not working

I want to sync automation results from Jenkins to TestLink. I tried the TestLink Jenkins plugin and testlink-api-client, but it did not work; I am getting an error.
Pre-setup:
$tlCfg->api->enabled
$tlCfg->exec_cfg->enable_test_automation
From the TestLink UI, enabled automation for the project.
Test code:
import testlink.api.java.client.TestLinkAPIClient;
TestLinkAPIClient testlinkAPIClient = new TestLinkAPIClient(APIKEY, "http://localhost/testlink/lib/api/xmlrpc/v1/xmlrpc.php");
testlinkAPIClient.reportTestCaseResult(projectName, testPlanName, testCaseName, buildName, notes, testStatus);
Output:
"testlink.api.java.client.TestLinkAPIException: The call to the xml-rpc client failed.".
References used: satishjohn.wordpress.com, softwaretestinghelp.com, and other Stack Overflow threads.
I browsed and tried the steps described in some of the blogs, but I am still facing the same issue. Can anyone help me resolve this issue, or suggest another approach to sync results with TestLink?
I believe you should follow the documentation (1) written by kinow, who wrote the plugin. We recently managed to sync automation results from Jenkins to TestLink by following that doc. Our automated tests are written with the TestNG framework, hence we used "testng-results.xml" and the TestNG method-name-based result seeking strategy.
We didn't come across an issue like the one you mentioned. From (2) and (3) you can get the plugin source. My advice is to debug the code after enabling debugging on the Jenkins-hosted Tomcat server, so you can find the actual cause of the issue yourself.
Reference:
(1) https://wiki.jenkins-ci.org/download/attachments/753702/jenkins.pdf
(2) https://github.com/jenkinsci/testlink-plugin
(3) https://github.com/kinow/testlink-java-api
You can run Wireshark with the filter "tcp port http" to see the exact error you get from the server. When it was not working for us, we were getting 200 OK with the text "XML-RPC server accepts POST requests only."
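A quicker check than Wireshark is to POST a minimal XML-RPC call to the endpoint yourself (a sketch only; adjust the host/path to your installation, and tl.sayHello is assumed to be exposed by your TestLink API version):
# A plain GET on this URL only returns "XML-RPC server accepts POST requests only."
curl -s -X POST -H 'Content-Type: text/xml' \
  --data '<?xml version="1.0"?><methodCall><methodName>tl.sayHello</methodName><params/></methodCall>' \
  http://localhost/testlink/lib/api/xmlrpc/v1/xmlrpc.php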
You can also check /var/log/apache2/error.log for TestLink errors.
We fixed the issue by setting the following config in config.inc.php and restarting Apache.
$tlCfg->api->enabled = TRUE;
$tlCfg->exec_cfg->enable_test_automation = ENABLED;