Here is a brief description of what I am trying to achieve.
Application details:
We own an API (StartJob) that kicks off an asynchronous job in an application (not owned by us) and returns a unique id, which is later used to retrieve the status of the job.
Then we have another API (GetStatus) which internally calls the application to retrieve the status (i.e. successful/failed/running) and returns it to the user.
Goal:
Our goal is to log the number of jobs started, failed, and successful to Amazon CloudWatch.
Logging the number of jobs started is easy in the StartJob API. But logging the success/failure metric is not straightforward: multiple calls to the GetStatus API for the same succeeded/failed job would log it twice, skewing the metrics.
Does CloudWatch provide something like a metric keyed on a unique id that is not logged twice?
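As far as I know CloudWatch itself doesn't deduplicate metric data points, so the usual approach is to make the emission idempotent on your side: record the job id the first time a terminal status is observed, and only emit the metric on that first observation. Here is a minimal sketch of that idea; the `JobMetrics` class and `emit_once` method are illustrative names of mine, not a CloudWatch API, and the in-memory Set stands in for a durable store (in production you would want something like a DynamoDB conditional put shared across servers):

```ruby
require 'set'

# Sketch: emit a terminal-state metric at most once per job id.
class JobMetrics
  def initialize
    @seen = Set.new
  end

  # Returns true if the metric should be emitted for this (job_id, status),
  # false if this job id was already recorded (i.e. a repeat GetStatus call).
  def emit_once(job_id, status)
    # Set#add? returns nil when the id is already present.
    return false unless @seen.add?(job_id)
    # Here you would call CloudWatch PutMetricData with a metric name
    # such as "JobsSucceeded" or "JobsFailed" and a value of 1.
    true
  end
end
```

With this in place, only the first GetStatus call that sees the terminal state records the data point; later calls for the same job id are no-ops.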
Thanks
Related
I have a long-running job that normally takes an entire day to accomplish. There are some logs at the start and finish of the process.
When I use the AWS SDK's GetLogEvents to page through the log stream with the next forward token, I make hundreds of calls that return no event data before reaching the end.
https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_GetLogEvents.html
The log events limit per request is 100 items
I considered putting the API call in a loop with a condition on whether event data exists. However, that may run into rate limiting for the CloudWatch API (25 requests/s), so I want to find a better approach.
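For reference, the paging loop itself can stay simple: GetLogEvents signals the end of the stream by returning the same nextForwardToken you passed in, so you loop until the token stops changing, sleeping between calls to stay under the rate limit. A sketch, assuming a client object that mirrors the AWS response shape (`events` plus `next_forward_token`); the method name `fetch_all_events` and the `throttle` parameter are mine:

```ruby
# Page through a log stream until the forward token stops changing.
# `client` is assumed to respond to get_log_events(token) and return a
# hash with :events and :next_forward_token, mirroring the AWS API shape.
def fetch_all_events(client, throttle: 0.0)
  events = []
  token = nil
  loop do
    resp = client.get_log_events(token)
    events.concat(resp[:events])
    # The API returns the same token back once the end of the stream is reached.
    break if resp[:next_forward_token] == token
    token = resp[:next_forward_token]
    sleep(throttle) # crude throttle to stay under the 25 req/s limit
  end
  events
end
```

Exercised against a stub client that serves three pages (the last one repeating its token), the loop collects all events and stops without an extra call.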
I have a program that connects to a web service to pull some messages. After I receive them I have no way of reading those messages again, so I decided to save them to a persistent store to be processed by other parties.
I wrapped this request-and-persist method in Hangfire's AddOrUpdate cron-based recurring job, hoping that in case of an exception during job execution, Hangfire will attempt to execute the task later with its stored state. Is my assumption correct? I couldn't find any explanation in the documentation regarding recurring job states.
In the case of delayed, recurring, or fire-and-forget jobs, does Hangfire serialize the code of those jobs together with their state to the database?
The following article answers this question and gives some information about how background jobs are handled.
Hi, I have created a thread group which contains 10 transaction controllers; each controller has multiple HTTP request samplers.
Now, to identify the bottleneck as per the requirement, each transaction controller has to run one after another.
For example: 30 threads; the scenario is register, login, send reports, and logout.
So for each action I created one transaction controller containing the required HTTP request samplers. First I need to run register for all 30 users; only after getting the response for all 30 users should the login transaction controller run, and so on, one by one.
I tried creating multiple thread groups, but I fetch security tokens in each group, so if I create multiple thread groups I can't use variable values from one thread group in another.
So please, if anyone knows the solution, help me out; I'm a beginner with JMeter.
This can be achieved using only JMeter's Synchronizing Timer element. Here is a brief description of this element:
Synchronizing Timer: this element is used when you intentionally want to pause the users/threads at a specific step until the number of users mentioned in the element is reached.
You can create your script in following structure:
Transaction Controller for Register requests
HTTP Register Request1
HTTP Register Request2
Transaction Controller for Login requests
HTTP Login Request1
Synchronizing Timer [set 'Number of Simultaneous Users to Group by' to 30 and the Timeout based on your requirement; a recommended value is 300000, i.e. 5 minutes. Do not set the Timeout to 0, otherwise your test will remain in the running state forever if any user fails in the previous step]
HTTP Login Request2
HTTP Login Request3
Note: in the above example, the Synchronizing Timer is added as a child of the first HTTP sampler under the Login transaction controller.
When the test reaches 'HTTP Login Request1', before sending this request it will apply the Synchronizing Timer and wait for all the users to complete the register action.
For beginners, the following RedLine13 blog posts are very useful for getting started with JMeter:
https://www.redline13.com/blog/kb/
Also, the response time will definitely increase when you use the Synchronizing Timer, because it pauses the test until all users reach that step and then performs the login operation. Since all 30 users perform the login action at the same time, the response time will be higher than in the case where some users are registering while others are logging in.
Kindly let me know if you have any questions.
If you want the "login" transaction to be kicked off only when all 30 users have completed the "register" transaction, you need to:
Add a Test Action sampler between the "register" and "login" transaction controllers
Add a Synchronizing Timer as a child of the Test Action sampler and set Number of Simultaneous Users to Group by to 30
This way the Test Action sampler acts as a "rendezvous point" where all 30 threads "meet", giving you confidence that all 30 threads have completed registration prior to starting to log in.
Can anyone give me examples of how in production a correlation id can be used?
I have read it is used in request/response type messages but I don't understand where I would use it?
One example (which may be wrong) I can think of is a publish/subscribe scenario: if I have 5 subscribers and I get 5 replies with the same correlation id, then I could say all my subscribers have received the message. I'm not sure if this would be correct usage of it.
Or, if I send a simple message, I can use the correlation id to guarantee that the client received it.
Any other examples?
Consider a web application that provides an HTTP API for outsiders to perform a processing task, where you want to return the results to the caller as the response to the HTTP request they made.
A request comes in, and the frontend server pushes a message describing the task onto a queue. The frontend server then blocks, waiting for a response message with the same correlation id. A pool of worker machines listens on the queue; one of them picks up the task, performs it, and returns the result as a message. Once a message with the right correlation id comes in, the frontend server returns the response to the caller.
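That pattern can be sketched in miniature with two threads and in-process queues (the queue names and task format here are illustrative; in a real deployment the queues would live in a broker such as RabbitMQ or SQS):

```ruby
require 'securerandom'

# "Frontend" tags a task with a correlation id, a worker thread processes
# tasks from a shared queue, and the frontend matches the reply to its
# request by that id.
tasks   = Queue.new
replies = Queue.new

worker = Thread.new do
  task = tasks.pop
  # Echo the correlation id back with the result, as a real worker would.
  replies.push(correlation_id: task[:correlation_id], result: task[:n] * 2)
end

correlation_id = SecureRandom.uuid
tasks.push(correlation_id: correlation_id, n: 21)

reply = replies.pop # block until a reply arrives
# With several requests in flight you would discard or requeue replies
# whose id doesn't match; here there is only one outstanding request.
raise "mismatched reply" unless reply[:correlation_id] == correlation_id
worker.join
reply[:result] # => 42
```

The correlation id is the only thing tying the reply back to the original request; the worker never needs to know anything about the caller.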
In the context of CQRS and event sourcing, a command message's correlation id will most likely be stored together with the corresponding events from the domain. This information can later be used to form an audit trail.
Streaming engines like Apache Flink use correlation ids, much like you said, to guarantee exactly-once processing.
I am building an app that uses Twilio, and we need to call Twilio's server a certain period after the call starts.
I'm using Ruby on Rails, currently on Heroku, but we can move it elsewhere if need be.
I've looked at Delayed Job and cron jobs, but I don't know enough about them to know which route I should take.
It seems like Delayed Job and cron jobs are usually recurring (and not very accurate with timing) and are started with rake, not by a user action, right?
What tool will give me minute accuracy for this use case?
Twilio Evangelist here.
I like to approach this with Redis-backed worker queues. Redis is a key-value store which we can use with Resque (pronounced 'rescue') and resque-scheduler to provide a queueing mechanism. For example, we can have the application respond to user interaction by creating a call using the Twilio REST API, then queue a worker task that will do something with the call a specified amount of time later. Here I'm just going to get the status of the call after a few seconds (instead of waiting for the status callback when the call completes).
Heroku has a really good walkthrough on using Resque, and there is also an excellent RailsCasts episode on Resque with Rails. A data add-on called Redis Cloud is available from Heroku to provide a Redis server, although for development I run a local server on my computer.
I've created a simple app based on the Heroku tutorial and the RailsCasts episode. It has a single controller and a single model. I'm using a callback on the model to create an outbound Twilio call, such that when a new Call record is created, we initiate the outbound call:
class Call < ActiveRecord::Base
  # Twilio functionality is in a concern...
  include TwilioDialable

  after_create do |call|
    call.call_sid = initiate_outbound call.to,
                                      call.from,
                                      "http://example.com/controller/action"
    call.save
  end
end
I'm using a concern to initiate the call, as I may want to do that from many different places in my app:
module TwilioDialable
  extend ActiveSupport::Concern
  include Twilio::REST

  def initiate_outbound to, from, url
    client = Twilio::REST::Client.new ENV['TW_SID'], ENV['TW_TOKEN']
    client.account.calls.create(to: to, from: from, url: url).sid
  end
end
This simply creates the outbound Twilio call and returns the call SID so I can update my model.
Now that the call has been placed, I want to check on its status. But as we want the call to ring a little first, we'll check it after 15 seconds. In my controller I use the Resque.enqueue_in method from resque-scheduler to manage the timing for me:
Resque.enqueue_in(15.seconds, MessageWorker, :call => @call.id)
This instructs Resque to wait 15 seconds before performing the task. It expects a Resque worker class called MessageWorker, and will pass a hash with the parameters {:call => @call.id}. To make sure this happens on time, I set RESQUE_SCHEDULER_INTERVAL to 0.1 seconds, so the scheduler polls precisely. The MessageWorker itself is fairly simple: it finds the call record and uses the call SID to get the updated status and save it to the database:
class MessageWorker
  # The Resque queue this worker pulls from...
  @queue = :message_queue

  def self.perform(params)
    # Find the call record we queued the work for...
    call = Call.find(params["call"])

    # Set up a Twilio client with credentials from the environment...
    client = Twilio::REST::Client.new ENV['TW_SID'], ENV['TW_TOKEN']

    # Fetch the current status of the call from Twilio...
    status = client.account.calls.get(call.call_sid).status

    # Update my call object.
    call.status = status
    call.save
  end
end
When I ran this locally, I had to start the Redis server, the Resque worker rake task, and the resque-scheduler rake task, and I'm using Foreman and Unicorn to run the Rails app. You can configure all of this in your Procfile for running on Heroku; the Heroku tutorial is a great walkthrough for setting this up.
But now I can create a call and have it dial out to a phone. In the meantime I can refresh the calls/show/:id page and see the value magically updated when the worker process runs 15 seconds later.
I also find resque-web really useful for debugging Resque, as it makes it easy to see what is happening with your tasks.
Hope this helps!