I ran into an issue with long-running JitterBit operations that were scheduled close together, since I needed to keep data flowing. But when they took longer than expected, I would wind up with multiple instances of the operation set running at the same time, which was killing my performance.
I'll put the fix in the answer below.
To resolve this issue I added an additional script operation at the beginning of my operation set (with the schedule attached to this operation). This script simply checks whether one of the operations in the set is already running. If not, it starts the next operation; if something is running, it exits and waits until the next scheduled instance.
This is a sample of my script. This one assumes that there were originally two operations in this operation set.
<trans>
$isInQueue=GetOperationQueue("<TAG>Operations/OperationToCheck01</TAG>");
$isInQueue2=GetOperationQueue("<TAG>Operations/OperationToCheck02</TAG>");
$isRunning=$isInQueue[0][1];
$isRunning2=$isInQueue2[0][1];
if(($isRunning==1 && $isRunning!=Null()) || ($isRunning2==1 && $isRunning2!=Null()),
WriteToOperationLog("Skip for now: "+$isRunning+" / "+$isRunning2);,
WriteToOperationLog("Nothign is Running - Starting Operation Chain.");
RunOperation("<TAG>Operations/OperationToCheck01</TAG>");
);
</trans>
I am trying to understand how pipelining in Redis works. According to one blog I read, for this code:
Pipeline pipeline = jedis.pipelined();
long start = System.currentTimeMillis();
for (int i = 0; i < 100000; i++) {
pipeline.set("" + i, "" + i);
}
List<Object> results = pipeline.execute();
Every call to pipeline.set() effectively sends the SET command to Redis (you can easily see this by setting a breakpoint inside the loop and querying Redis with redis-cli). The call to pipeline.execute() is when the reading of all the pending responses happens.
So basically, when we use pipelining and execute any command like set above, the command gets executed on the server, but we don't collect the responses until we call pipeline.execute().
However, according to the documentation of pyredis,
Pipelines are a subclass of the base Redis class that provide support for buffering multiple commands to the server in a single request.
I think this implies that when we use pipelining, all the commands are buffered and only sent to the server when we execute pipe.execute(), so this behaviour is different from the behaviour described above.
Could someone please tell me what the right behaviour is when using pyredis?
This is not just a redis-py thing. In Redis, pipelining always means buffering a set of commands and then sending them to the server all at once. The main point of pipelining is to avoid extraneous network round trips, which are frequently the bottleneck when running commands against Redis. If each command were sent to Redis before the pipeline was run, this would not be the case.
You can test this in practice. Open up python and:
import redis
r = redis.Redis()
p = r.pipeline()
p.set('blah', 'foo') # this buffers the command. it is not yet run.
r.get('blah') # pipeline hasn't been run, so this returns nothing.
p.execute()
r.get('blah') # now that we've run the pipeline, this returns "foo".
I did run the test that you described from the blog, and I could not reproduce the behaviour.
Setting breakpoints in the for loop, and running
redis-cli info | grep keys
does not show the size increasing after every set command.
Speaking of which, the code you pasted seems to be Java using Jedis (which is what I also used).
In the test I ran, and according to the documentation, there is no execute() method in Jedis, only exec() and sync().
I only saw the values being set in Redis after the sync() call.
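For reference, here is a minimal sketch of the Jedis variant of that test, assuming a Redis server on localhost and using sync() instead of the blog's execute() (class and key names are illustrative):
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineTest {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Pipeline pipeline = jedis.pipelined();
            for (int i = 0; i < 100000; i++) {
                // commands are queued in the client; as noted above, I did not
                // see the keys appear in redis-cli while this loop was running
                pipeline.set("" + i, "" + i);
            }
            pipeline.sync(); // flush the buffered commands and read all the replies
        }
    }
}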
Besides, this behaviour is consistent with the pyredis documentation quoted in the question.
Finally, the Redis documentation itself focuses on the networking optimization (quoting its example):
This time we are not paying the cost of RTT for every call, but just one time for the three commands.
P.S. Could you share the link to the blog post you read?
In normal AppleScript, the script is executed from top to bottom, so any code inside a loop that runs every 5 seconds will only execute while that loop is running; as far as I know, there is no way to have a single handler run every few seconds regardless of what the script is currently doing or where it is in the script. In Cocoa-AppleScript, however, is there a way to run a handler every 5 seconds, at all times, no matter what the script is currently doing? Here is what it should be doing in my Cocoa-AppleScript app:
on checkInternetStrength()
do shell script "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | grep 'agrCtlRSSI:'" -- this being the script which returns the line containing the signal strength
set SignalStrength to result
set RSSIcount to (count of characters in SignalStrength)
set SignalStrength to ((characters 18 thru RSSIcount of SignalStrength) as string) as integer -- this to turn SignalStrength into just the number and not the whole output line
set SignalStrength to (100 + SignalStrength) as integer
SignalBar's setIntValue_(SignalStrength) -- SignalBar being the Level Indicator described below
end checkInternetStrength
Summed up, it runs the airport command to check the internet connection, turns the output into a number from 1 to 100, and uses this with an NSLevelIndicator (maximum 100) to show the current signal strength graphically. Now, there is no point having this run only once or when you hit a button; that is an option, but it would be nice if it updated itself every, say, 5 seconds with the real-time value. So is there any way to have a process that runs every 5 seconds to do this, while still keeping the rest of the script and interface fully functional, i.e. as a background process? Comment if you need more extracts from the script.
Example
In Unity C# scripting, void Update() { code } runs the code inside it every frame while everything else keeps running, so a Cocoa-AppleScript version of this might be an answer, if anyone knows of one.
I don't believe this is possible, but I had a similar problem before. What I do is use an external AppleScript application that is hidden and repeats the commands. The only problem is that it won't send anything back to your app; you'll have to make the external AppleScript app do the work itself, e.g.
display notification, etc. In the AppleScript app's "Info.plist" you can add this:
<key>LSUIElement</key>
<string>1</string>
to make the app run invisibly. But sorry, I don't think you can run a handler like that in the app itself.
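For what it's worth, a stay-open AppleScript applet with an idle handler is one way to build that kind of external helper; returning 5 makes the handler run again every 5 seconds. This is only a rough sketch, reusing the airport parsing from the question and reporting via display notification since the helper cannot talk back to your app:
-- Rough sketch of the hidden helper applet: save as an application
-- with "Stay open after run handler" enabled.
on idle
    set strengthLine to do shell script "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -I | grep 'agrCtlRSSI:'"
    set rssi to ((characters 18 thru -1 of strengthLine) as string) as integer
    set signalStrength to 100 + rssi
    display notification ("Signal strength: " & signalStrength) -- the helper reports directly
    return 5 -- seconds until the idle handler runs again
end idle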
I have a daily (09:00 AM) box containing 10 jobs. All child jobs are scheduled to run sequentially.
On Monday, jobs 1, 2 and 3 completed and job 4 failed. Because of this, the downstream is stalled and the box runs indefinitely (until some action is taken manually).
But the requirement is to run this box again on Tuesday at 09:00 AM. I have heard of a kick_start attribute that kicks off the box at its next scheduled time irrespective of the last run's status.
Can someone tell me about this kick_start attribute? Also, please suggest any other way to schedule this box daily.
TIA
Never heard of the kick_start attribute and could not find it in the R11.3.5 reference guide.
I would look at box_terminator: y, which will fail the box if a job in it fails, and job_terminator: y, which will terminate and fail a job if the box it is in fails.
box_criteria is another attribute that may help, as it lets you define what success or failure looks like. For example, if you don't care whether job4 fails, define box_criteria: s(job3).
Of course, that only sets your box to FA, so it will run again the next time its starting conditions are met. It does nothing to run the downstream jobs for the current run.
Have fun and test, test, test.
I have a waf task that runs msbuild in order to build a project, but I want to run it only if the last execution was not successful.
How should I do this?
Store build.env.MS_SUCC = 1 and retrieve the value from the previous build (for the first run you naturally have to check whether the dict item MS_SUCC exists).
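The answer above keeps the flag in build.env; one way to actually persist it between runs is to write a small ConfigSet next to the build outputs, roughly like this (an untested sketch; the marker file name, the MS_SUCC flag and the msbuild command line are illustrative):
# wscript -- rough sketch: skip the msbuild step after a successful run
from waflib.ConfigSet import ConfigSet

def build(bld):
    state = ConfigSet()
    marker = bld.bldnode.make_node('msbuild_state.txt').abspath()  # illustrative file name
    try:
        state.load(marker)        # state saved by the previous run, if any
    except Exception:
        pass                      # first run: MS_SUCC does not exist yet

    if not state.MS_SUCC:         # unset or 0 -> the last run did not succeed
        ret = bld.exec_command('msbuild MyProject.sln')  # illustrative command line
        state.MS_SUCC = 1 if ret == 0 else 0
        state.store(marker)       # remember the outcome for the next run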
I have an Erlang application. In this application I start a process with spawn(?MODULE, my_foo, [my_param1, my_param2, my_param3]).
And my_foo:
my_foo(my_param1, my_param2, my_param3) ->
...
some code here
...
ok.
When I open etop I see that the status of this my_foo/3 process is proc_lib:sync_wait/2.
Then I tried putting exit(self(), normal) at the end of my function, but I see the same behaviour: proc_lib:sync_wait/2 in etop.
How can I kill or exit the process correctly?
Thank you.
Note that exit(Pid, Reason) and exit(Reason) do NOT do the same thing if Pid is the process itself. exit/1 tells the current process to exit, from the inside if you like, while exit/2 sends an exit signal to the process, even if the target is the process itself. So when you do exit(self(), normal) you are actually sending the normal exit signal to yourself, which is ignored.
In this case putting the exit call at the end of the function should not make any difference as the process automatically dies (with reason normal) when the function with which it was started ends. It seems like the process is suspended somewhere before that.
proc_lib:sync_wait/2 is called inside proc_lib:start/start_link; it sits and waits for the spawned process to call proc_lib:init_ack/1,2 so that it can return the return value of start. It would appear that your process never calls init_ack.
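For reference, a process started via proc_lib usually has roughly this shape (a minimal sketch; the module name and loop are illustrative):
-module(my_proc).
-export([start_link/0, init/1]).

%% proc_lib:start_link/3 blocks in proc_lib:sync_wait/2 until init_ack is called.
start_link() ->
    proc_lib:start_link(?MODULE, init, [self()]).

init(Parent) ->
    %% Tell the parent we are up; without this call the starter stays in sync_wait/2.
    proc_lib:init_ack(Parent, {ok, self()}),
    loop().

loop() ->
    receive
        stop -> ok
    end.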
Based on the limited information that you give in the question I would suspect that your process hasn't finished running yet.
Normally you don't need to add exit/2 to your process. It will exit automatically when the function has finished running.
You probably have a long-running call in the "some code here" part that has not finished yet. I recommend adding logging information to see where you are stuck.