We have an environment that can't be accessed without a VPN, and we want to use Drone there, but in this situation we can't use webhooks. Is it possible to make Drone periodically check our SCM and start a build when there's a new commit?
You can use the cron feature to run builds periodically. The documentation has more details here: https://docs.drone.io/cron/.
Alternatively, if the Drone server and runner are inside the VPN, you could publish events to something like Amazon SQS from the outside, then poll the SQS queue from inside the VPN and ping the internal Drone server.
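A minimal sketch of that bridge in Python, assuming a Drone version whose API exposes a build-create endpoint; the queue URL, repo slug, server address, and token below are placeholders for your own setup:

# Sketch only: poll an SQS queue from inside the VPN and trigger a build
# on the internal Drone server. Queue URL, repo slug, and token are placeholders.
import boto3
import requests

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/drone-events"  # placeholder
DRONE_SERVER = "https://drone.internal.example.com"                          # placeholder
DRONE_TOKEN = "..."                                                          # personal token from the Drone UI

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        # Kick off a build for the repo referenced by the event (repo slug is assumed here).
        requests.post(
            f"{DRONE_SERVER}/api/repos/my-org/my-repo/builds",
            headers={"Authorization": f"Bearer {DRONE_TOKEN}"},
        )
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])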
I'm looking for advice on how to manually (i.e. without using Runtime Manager - RM) deploy a Mule application package on an on-premises Mule cluster. The official documentation suggests using RM for this, either via the GUI, CLI, or API. However, RM is not available in our environment.
I can manually deploy the package on a single node by copying it to the /apps folder. But this way the application is only deployed on a single node, not on the cluster.
I've tried using the AMC agent REST API for this purpose, with the same result: it only deploys to a single node.
So, what's the correct way of manually deploying a Mule application to a Mule server cluster without using Anypoint RM?
We are on Mule 4.4 EE.
Copy the application jar file into the apps directory of every node. Mule clusters do not transfer applications between nodes.
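A minimal sketch of doing that copy to every node, assuming SSH access; the host names, user, jar name, and Mule install path are placeholders:

# Sketch only: push the same application jar to the apps directory of every node
# over scp. Host names, user, and paths are placeholders for your environment.
import subprocess

NODES = ["mule-node-1", "mule-node-2"]           # placeholder host names
APP_JAR = "my-app-1.0.0-mule-application.jar"    # placeholder artifact

for node in NODES:
    # $MULE_HOME/apps is the hot-deployment directory; Mule picks the jar up on each node.
    subprocess.run(
        ["scp", APP_JAR, f"mule@{node}:/opt/mule/apps/"],
        check=True,
    )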
Alternatively, you can use the Runtime Manager Agent, however it also works on a per-node basis: you need to send the same deployment request to each node.
Each connector may or may not be cluster aware. Read each connector's documentation to understand how it behaves. In particular, the documentation of the VM connector states:
When running in cluster mode, persistent queues are instead backed by the memory grid. This means that when a Mule flow uses VM Connector to publish content to a queue, Mule runtime engine (Mule) decides whether to process that message in the same origin node or to send it out to the cluster to be picked up and processed by another node.
You can register the nodes with the CloudHub control plane through the AMC agent, create a server group, and deploy through Runtime Manager in the control plane; it takes care of deploying the same application to all n nodes.
I deployed Spinnaker in AWS to run a test in the same account, but I'm unable to configure server groups. If I click Create, the task just stays queued for the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly:
https://www.spinnaker.io/setup/install/storage/
I am totally new to the Spring framework. I am trying to create a project that connects to RabbitMQ, and before I publish a message I want to check whether the queues are alive. Is it possible to ping a queue to see if it is alive or not?
RabbitMQ has a management API. You can use it to check the status of queues, exchanges, and bindings.
If you are working in PHP, here is a library that can be used.
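The management API is plain HTTP, so the same check works from any language. A minimal Python sketch, assuming the default management port 15672, the default "/" vhost (encoded as %2F), guest credentials, and a placeholder queue name:

# Sketch only: check a queue via the RabbitMQ management API.
# Host, credentials, vhost, and queue name are placeholders.
import requests

resp = requests.get(
    "http://localhost:15672/api/queues/%2F/my-queue",
    auth=("guest", "guest"),
)
if resp.status_code == 200:
    queue = resp.json()
    print(queue["state"], queue.get("messages"))   # e.g. "running" and the message count
else:
    print("Queue not found or management API unreachable:", resp.status_code)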
I have a Rails app running on AWS Elastic Beanstalk on a web tier. I want to send email notifications to users, so I'm using SQS to send messages to a queue:
require 'aws-sdk' # v1 of the AWS SDK for Ruby
sqs = AWS::SQS.new
sqs.queues.named("messaging_queue").send_message("HELLO")
and then I would like to take these messages off the queue using a worker tier instance.
My issue is that when I create the worker tier instance from the console, it asks for the application version, which defaults to the latest version deployed to my web tier. I don't want to upload my entire web application to the worker, just the code responsible for sending the email.
What's the best way to do this? I could upload a zip, but I would like to just use git.
Can you refactor the code that is responsible for sending emails into a separate library? That way you can create a new web app that just wraps the email functionality in your library and runs in a worker tier environment. The worker daemon will post messages to your new worker tier app, which will then send the email. That way you do not have to deploy your entire code base to your worker tier environment.
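The worker-tier app only needs one HTTP endpoint, because the Elastic Beanstalk worker daemon takes each SQS message off the queue, POSTs its body to your app, and deletes the message when it gets a 200 back. A minimal sketch of that shape (shown here in Python/Flask purely to illustrate the pattern; in this setup it would be a small Rails/Rack app, and send_notification_email is a placeholder):

# Sketch only: the worker daemon POSTs each SQS message body to this endpoint.
from flask import Flask, request

app = Flask(__name__)

def send_notification_email(body):
    pass  # placeholder for your mailing code

@app.route("/", methods=["POST"])
def handle_message():
    body = request.get_data(as_text=True)   # e.g. "HELLO" from the web tier
    send_notification_email(body)
    return "", 200                           # 200 tells the daemon the message is done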
You can use git and eb to achieve this. Your worker tier application version and web app application version can be managed in different branches, or, in your case, it seems better to keep them in different git repositories. If you wish to use branches, you can read about the eb command "eb branch"; it may be useful.
Read more about eb here.
I have a Java API on my server, and I want it to create tasks and add them to Celery via RabbitMQ. I followed this tutorial, http://www.rabbitmq.com/tutorials/tutorial-two-python.html, using Java for the client (send.java) and Python to receive (receive.py). In receive.py, where the callback method is invoked, I call a method that I've decorated with @celery.task so that the task is added to Celery.
I'm wondering how all of this is deployed on a server, though; specifically, why there is a receive.py file. Is receive.py a process that must continually run on the server? Is there a way to configure RabbitMQ so that it automatically routes Java client tasks to Celery?
Thanks!
RabbitMQ is just a message queue. Producers put messages and consumers get them on demand. You can only restrict access for specific queues via RabbitMQ's auth options.
As for deployment: yes, receive.py needs to continuously run. It is Celery's job to do that. See the Workers Guide for info on running a worker.
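For reference, the Celery side is just a task module plus a long-running worker process. A minimal sketch with placeholder module, broker URL, and task names:

# tasks.py - sketch only; broker URL and names are placeholders.
from celery import Celery

celery = Celery("tasks", broker="amqp://guest:guest@localhost//")

@celery.task
def process(payload):
    # Placeholder for the work previously triggered from receive.py's callback.
    print("processing", payload)

# The long-lived process on the server is the Celery worker, started with:
#   celery -A tasks worker --loglevel=info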