As the method https://spinnaker.io/reference/api/docs.html#api-Pipelinecontroller-getPipelineLogsUsingGET is not available (see https://github.com/spinnaker/spinnaker/issues/5550), is there an alternative way to get pipeline execution logs from Spinnaker? I am looking for an endpoint that, when called, returns the log of a pipeline.
I am able to get the execution log for a pipeline with this API:
https://{spinnaker-url}/pipelines/{executionID}
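For illustration, here is a minimal sketch of calling that endpoint with Python and requests; the Gate base URL and execution ID are placeholders, and any authentication your gateway requires is omitted:

import requests

# Placeholders: your Gate (Spinnaker API) base URL and a pipeline execution ID
GATE_URL = "https://spinnaker.example.com"
EXECUTION_ID = "your-execution-id"  # hypothetical

# GET /pipelines/{executionId} returns the execution as JSON,
# including per-stage status and context
resp = requests.get(f"{GATE_URL}/pipelines/{EXECUTION_ID}")
resp.raise_for_status()
execution = resp.json()

# Walk the stages and print their status; stage context/outputs carry
# most of what the UI shows in the execution details
for stage in execution.get("stages", []):
    print(stage.get("name"), stage.get("status"))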
Having set up GitLab CI and AWS Fargate resources as described in the documentation, we have a situation where the runner can trigger the Fargate task, which goes into the RUNNING state, but the master runner never seems to realize this.
Running with gitlab-runner 14.7.0 (98daeee0)
on gitlab-fargate-master DyE5BsVA
Preparing the "custom" executor
INFO[2022-01-27T13:54:49Z] Starting fargate PID=1447 version="0.2.0 (933d940)"
INFO[2022-01-27T13:54:49Z] Executing the command PID=1447 command=config_exec
Using Custom executor with driver fargate 0.2.0 (933d940)...
INFO[2022-01-27T13:54:49Z] Starting fargate PID=1452 version="0.2.0 (933d940)"
INFO[2022-01-27T13:54:49Z] Executing the command PID=1452 command=prepare_exec
INFO[2022-01-27T13:54:56Z] Starting new Fargate task PID=1452 command=prepare_exec
INFO[2022-01-27T13:54:58Z] Persisting data that will be used by other commands PID=1452 command=prepare_exec taskARN="arn:aws:ecs:us-east-1:558517226390:task/gitlab-ci-cluster/ee488fa1d7d7475fab9be01d5bad180e"
INFO[2022-01-27T13:54:58Z] Waiting Fargate task to be ready PID=1452 command=prepare_exec taskARN="arn:aws:ecs:us-east-1:558517226390:task/gitlab-ci-cluster/ee488fa1d7d7475fab9be01d5bad180e"
Within AWS, the task has created its Log Stream in CloudWatch, but there are no events in that log. It's unclear what is actually happening.
What can be done to find out?
We have reverted to using a vanilla Docker container from the GitLab documentation, registry.gitlab.com/tmaczukin-test-projects/fargate-driver-debian:latest, but exactly the same thing happens.
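One way to see what the task itself is doing is to query ECS directly; here is a minimal boto3 sketch (the cluster name and task ARN are taken from the log output above):

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Ask ECS for the task's current state; this is the same DescribeTasks call
# the fargate driver relies on while "Waiting Fargate task to be ready"
resp = ecs.describe_tasks(
    cluster="gitlab-ci-cluster",
    tasks=["arn:aws:ecs:us-east-1:558517226390:task/gitlab-ci-cluster/ee488fa1d7d7475fab9be01d5bad180e"],
)

for task in resp["tasks"]:
    print(task["lastStatus"], task.get("stoppedReason"))

# "failures" lists tasks the caller could not describe, e.g. due to
# missing permissions or a missing task
print(resp["failures"])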
Solved - the problem was a missing AWS permission, ecs:DescribeTasks, which for some reason did not produce an error message in the Runner.
(I had mistakenly added AmazonEC2_FullAccess rather than AmazonECS_FullAccess as described in the docs.)
Having run a "Generate Policy" in AWS based on CloudTrail Events (awesome new feature!), I can now confirm the permissions actually being used are:
EC2: DescribeNetworkInterfaces
ECS: StopTask, DescribeTasks, RunTask
Note the EC2 permission, which is missing from the docs.
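For reference, here is a minimal sketch of an IAM policy covering exactly those four actions, created with boto3; the policy name is hypothetical, and "Resource": "*" is for brevity only and should be scoped down in practice:

import json
import boto3

# The four actions surfaced by "Generate Policy" above
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ecs:RunTask",
                "ecs:StopTask",
                "ecs:DescribeTasks",
                "ec2:DescribeNetworkInterfaces",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="gitlab-fargate-runner",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)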
Not sure if you have solved your problem, but I noticed this question as I had the exact same issue yesterday. In my case, the GitLab manager task was using an IAM role that was limited to starting and stopping tasks but was apparently missing the permission to check whether a task is in the RUNNING state. Once I fixed my ECS execution role, it started working for me.
I am trying to connect to my Redis instance from a Groovy script (ExecuteGroovyScript) and execute arbitrary commands such as LPUSH. I currently have a RedisConnectionPoolService enabled and working fine for caching processors.
Is there any way to achieve this? Any examples are appreciated.
EDIT:
I got to the point where I can call a command, but for some reason it fails. Here is the code and the error:
// Look up the RedisConnectionPoolService by its controller service ID
def service = context.getControllerServiceLookup().getControllerService("2b841623-35ed-1e1a-0a77-46087267939d")
// getConnection() returns a Spring Data Redis RedisConnection; withCloseable returns it to the pool when done
service.getConnection().withCloseable { redis ->
    // LPUSH takes the key and the value(s) as byte arrays
    redis.listCommands().lPush("key".getBytes(), "1".getBytes())
}
If you have a RedisConnectionPoolService called service and call service.getConnection(), you get a Spring Data Redis RedisConnection instance, so you can check its API for the kinds of calls you can make.
For LPUSH specifically, you can call service.getConnection().listCommands().lPush(), passing the key and value(s) as byte arrays.
I'm developing an application that collects MapReduce job progress information for analysis. The first approach is to parse the log files, but that is ugly. Is there any mechanism, such as a hook or plugin, that can do this?
You can probably use the YARN application API to get most of the information. See the Yarn Application API documentation.
Here is an excerpt from the page:
... All query parameters for this api will filter on all applications. However the queue query parameter will only implicitly filter on unfinished applications that are currently in the given queue.
There are other YARN APIs, too, that you can use to achieve your goal. It is certainly better than scanning log files.
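For example, here is a minimal sketch against the ResourceManager's REST endpoint; the host/port and the query-parameter values are assumptions for illustration:

import requests

# ResourceManager web address; 8088 is the usual default port
RM_URL = "http://resourcemanager.example.com:8088"

# /ws/v1/cluster/apps supports query parameters such as states and
# applicationTypes; see the YARN Application API docs for the full list
resp = requests.get(
    f"{RM_URL}/ws/v1/cluster/apps",
    params={"states": "RUNNING", "applicationTypes": "MAPREDUCE"},
)
resp.raise_for_status()

# The response nests the list under apps -> app; "apps" is null when empty
apps = resp.json().get("apps") or {}
for app in apps.get("app", []):
    # "progress" is a percentage reported by the ResourceManager
    print(app["id"], app["name"], app["progress"])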
I am running a workflow on an n1-ultramem-40 instance that will run for several days. If an error occurs, I would like to catch and log the error, be notified, and automatically terminate the virtual machine. Could I use Stackdriver and gcloud logging to achieve this? How could I automatically terminate the VM using these tools? Thanks!
Let's break the puzzle into two parts. The first is logging an error to Stackdriver and the second is performing an external action automatically when such an error is detected.
Stackdriver provides a wide variety of language bindings and package integrations that result in log messages being written. You could include such API calls in the part of your application that detects the error. If you don't have access to the application's source code and it instead logs to an external file, you can use the Stackdriver agents to monitor the log files and relay the log messages to Stackdriver.
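As an illustration, here is a minimal sketch using the google-cloud-logging Python client to emit such an error entry; the log name and payload fields are hypothetical placeholders:

from google.cloud import logging

client = logging.Client()
# "workflow-errors" is a hypothetical log name for this example
logger = client.logger("workflow-errors")

# Write a structured entry at ERROR severity; the export filter defined
# in the next step can match on severity and/or these payload fields
logger.log_struct(
    {"message": "workflow failed", "instance": "workflow-vm"},
    severity="ERROR",
)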
Once you have the error messages being sent to Stackdriver, the next task is defining a Stackdriver log export. This is the act of defining a "filter" that matches the specific log entry message(s) you want to act upon. Associated with this export definition and filter is a PubSub topic: a PubSub message is written to that topic whenever a matching Stackdriver log entry is made.
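Sticking with the same client library, here is a sketch of defining such an export (sink) whose destination is a PubSub topic; the sink name, filter, project, and topic values are assumptions:

from google.cloud import logging

client = logging.Client()

# Match the ERROR entries written in the previous step (example filter)
log_filter = 'logName:"workflow-errors" AND severity>=ERROR'

# The destination must be the fully qualified PubSub topic path
destination = "pubsub.googleapis.com/projects/my-project/topics/vm-errors"

sink = client.sink("workflow-error-sink", filter_=log_filter, destination=destination)
sink.create()  # the sink's writer identity needs publish rights on the topic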
Finally, we now have our trigger to perform your action. We could use a Cloud Function triggered from a PubSub message to execute arbitrary API logic. This could be code that performs an API request to GCP to terminate the VM.
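Putting it together, here is a sketch of a PubSub-triggered Cloud Function (Python) that stops the instance through the Compute Engine API; the project, zone, and instance names are placeholders:

import base64
import json

from googleapiclient import discovery

# Placeholders identifying the VM that runs the workflow
PROJECT = "my-project"
ZONE = "us-central1-a"
INSTANCE = "workflow-vm"

def handle_error_log(event, context):
    """Triggered by a PubSub message published by the log export sink."""
    # Log sink messages carry the LogEntry as base64-encoded JSON
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    print("Received error entry from", entry.get("logName"))

    # Stop the instance; use .delete() instead if you want to terminate it
    compute = discovery.build("compute", "v1")
    compute.instances().stop(
        project=PROJECT, zone=ZONE, instance=INSTANCE
    ).execute()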
While running Mule, I am facing the error below:
Timeout waiting for mule context to be completely started
Please let me know a workaround for this. The same integration works fine on another system running Mule (the query fetch happens correctly there), but it does not work on my system. Please suggest a way to overcome this.
Thanks in advance!
Goutham, did you configure a timeout in your flow? If it is configured:
1. Is it configured in an MUnit test? If so, look into the Run and Wait scope.
2. Or is this coming during the shutdown of Mule?
You can set a timeout value to allow the current flow to complete. However, there is no built-in method or utility to check which messages are in flight. You can connect a profiler (or just take a thread dump) and inspect the active threads; this should give you an overview of what is happening at the JVM level.
To ensure all in-flight messages are processed, you can shut down Mule in two steps:
Stop the flow(s) manually (this prevents new messages from coming in)
Stop Mule
Alternatively, you can set shutdownTimeout to a value in milliseconds for a flow; however, this is not a global value.
https://docs.mulesoft.com/mule-user-guide/v/3.8/starting-and-stopping-mule-esb
http://grepcode.com/file/repo1.maven.org/maven2/org.mule/mule-core/3.7.0/org/mule/transport/AbstractMessageDispatcher.java
The second link shows the internal implementation of Mule's AbstractMessageDispatcher. Hope this helps.
Thanks