I'm trying to write a policy extension in Lua for Citrix NetScaler that calculates the Base64 encoding of a string and adds it to a header. Most of the time the function works just fine, but fairly often I see in ns.log that its execution was terminated with the following message -
terminating execution, function exceeded time limit.
Nowhere in the docs could I find what exactly this time limit is (from what I saw it's about 1 ms, which makes no sense to me) or how to configure it.
So my question is: is this property configurable and if so how?
Why do you need to go the Lua route? With policy expressions you can use text.b64encode or text.b64decode. I'm not answering your question, but you might not be aware of the built-in encoders/decoders in NetScaler.
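For example (a rough, untested sketch of the CLI side; the action/policy names, the vserver name, and the source header X-Original are placeholders you would replace with your own), a rewrite action can insert the encoded value into a header without any Lua:
add rewrite action act_b64_hdr insert_http_header X-Encoded "HTTP.REQ.HEADER(\"X-Original\").B64ENCODE"
add rewrite policy pol_b64_hdr true act_b64_hdr
bind lb vserver my_vserver -policyName pol_b64_hdr -priority 100 -type REQUEST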
Although I don't have any official document, I believe that the max execution time is 10 ms for any policy. This is also somewhat confirmed by the counters. Execute the following command on the shell:
nsconmsg -K /var/nslog/newnslog -d stats -g 10ms -g timeout
You will see all counters with these names. While your script is executing you can run
nsconmsg -d current -g 10ms -g timeout
This will let you see the counters in real time. When it fails you will see the value incrementing.
I would suggest printing the time to ns.log while your script runs to confirm this. I don't know exactly what you are doing, but keep in mind that NetScaler policies are supposed to perform very brief executions; after all, you are working on a packet, and the timescale for packet processing is on the order of nanoseconds.
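If you want to catch the termination message while you test, you can also tail the log (assuming the default log location on the appliance):
tail -f /var/log/ns.log | grep -i "exceeded time limit"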
When trying to run a load test in JMeter 5.1.1, the test always freezes at the end if the server gives up. The test completes correctly if the server does not give up. This is terrible, because the point of the test is to see at what point the server gives up, but as mentioned the test never ends and it is necessary to kill it by hand.
Example:
A test running 500 threads against a local server goes smoothly and finishes with the tidying-up message.
Exactly the same test running 500 threads against a cloud-based server at some point results in errors; the test gets to about 99% and then freezes on the summary, as in the example below:
summary + 99 in 00:10:11 = 8.7/s Avg: 872 Min: 235 Max: 5265 Err: 23633 (100.00%) Active: 500 Started: 500 Finished: 480
And that's it: you can wait forever and it will just stay stuck at this point.
I tried using different thread types without success. The next step was to change the sampler error behavior, and yes, changing it from Continue to Start Next Thread Loop or Stop Thread helps and the test ends, but then the results in the HTML report look bizarre and inaccurate. I even tried setting the timeout to 60000 ms in HTTP Request Defaults, but this also gave strange results.
That said, can someone tell me how to successfully run a load test against the server so that it always completes, regardless of issues, and is accurate? I did see a few old questions about the same issue, and they did not have any helpful answer. Or is there any other, more reliable open-source testing tool that also has a GUI for creating tests?
You have 100% errors, which looks "strange" to me in any case.
If setting the connect and response timeouts in the HTTP Request Defaults doesn't help, the reason for the "hanging" most probably lives somewhere else, and the only way to determine it is to take a thread dump and analyze the state of the threads, paying attention to the ones which are BLOCKED and/or WAITING. Then you should be able to trace this down to the JMeter test element which is causing the problem and look closely into what could go wrong.
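For example (assuming JMeter runs locally and a JDK is on the PATH; the file name is arbitrary), the standard JDK tools are enough to take and inspect a dump:
jps -l                                   # find the JMeter process id (the ApacheJMeter.jar entry)
jstack -l <jmeter_pid> > jmeter-threads.txt
grep -B 2 -A 20 -E "BLOCKED|WAITING" jmeter-threads.txt   # focus on the stuck threads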
Other hints include:
look for suspicious entries in jmeter.log file
make sure JMeter has enough headroom to operate in terms of CPU, RAM, network sockets, etc. This can be checked using, for example, the JMeter PerfMon Plugin (see the sketch after this list)
make sure to follow recommendations from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
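As a rough illustration of the headroom point above (heap sizes and file names are placeholders; the HEAP variable is honoured by the bin/jmeter startup script), run the test in non-GUI mode with a larger heap and generate the HTML report afterwards:
HEAP="-Xms2g -Xmx4g" ./bin/jmeter -n -t testplan.jmx -l results.jtl -e -o report/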
I have an issue with Lua scripts running on Redis.
When running the INFO command, it reports:
used_memory_lua_human:1.08G
The usage of Lua is not extensive (single SET and GET commands).
How can I reduce this value?
Apparently Redis caches every Lua script that runs on it, in order to avoid loading it again.
This is a good feature, provided the set of scripts is limited.
The problem is caused by the fact that we format the script with different variables on every execution, so it gets a different identifier every time.
The solution I found is to run the SCRIPT FLUSH command after every execution, in order to remove the script from the cache.
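To make that concrete (the key and value names below are only examples): every distinct script body becomes its own cache entry, whereas passing the variables through KEYS/ARGV keeps the body constant, so only one entry is ever cached; SCRIPT FLUSH is the workaround described above:
# each differently-formatted body is cached separately:
redis-cli EVAL "return redis.call('SET', 'user:1', 'alice')" 0
redis-cli EVAL "return redis.call('SET', 'user:2', 'bob')" 0
# same body reused with KEYS/ARGV -> a single cache entry:
redis-cli EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 user:1 alice
redis-cli EVAL "return redis.call('SET', KEYS[1], ARGV[1])" 1 user:2 bob
# the workaround described above: empty the script cache
redis-cli SCRIPT FLUSH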
Why this is happening is explained very well by gCoh's answer.
The only problem is that running SCRIPT FLUSH can lead to a lot of errors, and even loss of processing, if you use EVALSHA to run Lua scripts. (For example, a library like scripto runs Lua scripts using EVALSHA.) Read more about EVALSHA here.
EVALSHA keeps the script cached on the Redis server and gives out a hash when you load the script for the first time. When you run the script a second time, rather than sending the whole script to the server, the client just sends the script's hash and Redis executes the cached script.
Now if you run SCRIPT FLUSH it will remove all the cached scripts, and your SHA hashes will become stale, which leads to runtime errors when a script is executed and isn't found.
For me it was this: NOSCRIPT No matching script. Please use EVAL.
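A quick redis-cli walk-through of that failure mode (the GET script and the key name mykey are arbitrary):
SHA=$(redis-cli SCRIPT LOAD "return redis.call('GET', KEYS[1])")
redis-cli EVALSHA "$SHA" 1 mykey   # works, the script is in the cache
redis-cli SCRIPT FLUSH             # empties the script cache
redis-cli EVALSHA "$SHA" 1 mykey   # now fails with: NOSCRIPT No matching script. Please use EVAL.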
This should be handled in the scripto library: if the script isn't found, it should be loaded again. But until then, please take care that this doesn't happen to you.
We have TIMEOUT 3600 used across many execute_process calls.
Some of our slow servers often exceed the timeout and builds fail as expected.
However, this is often causing issues and we keep increasing the timeout intervals.
Now I'm thinking of removing the timeout completely.
Is there a reason for using the TIMEOUT option other than making the build fail?
Why should we have TIMEOUT at all (e.g., none of the add_custom_target commands has this feature)? And why use it only in execute_process? (This question is not about CTest.)
Turning my comment into an answer
Generally speaking I would remove the individual timeouts.
I have a timeout for the complete build process running on a Jenkins server. There, the timeout is calculated as an average over the last builds, and you can give a percentage value of how much your build may differ (I use a "Timeout as a percentage of recent non-failing builds" of 400%, so only the really hanging tasks will be terminated).
Recently I have found myself several times in situations where I need to let some operation run in a background xterm, and I'd like to be notified when my input is requested.
I know how to be notified when the command ends, but that doesn't help in the cases where the command is not 100% batch (it puts up a prompt every now and then; a common example would be apt-get) or where the command hangs (because of some network failure, for example).
So I'd like to be notified when there's been no output in the last N minutes. Is there some way to configure xterm to do that for me, or maybe some other tool (screen maybe) that could do it?
xterm doesn't notice whether the application is actually waiting for input or simply doing nothing. An application (or shell) could be modified to do this, but that seems like a lot more work than you expect (i.e., many programs would have to be modified).
I also don't know of a way to do it for applications that might be waiting for input. But if you have a batch application that should always produce log output within a certain time span, you could run an extra process that sends the notification unless it gets killed within a timeout; the process is killed whenever a new line is read. Maybe this will help you, or someone else, adapt it to processes that might wait for input:
i=0;{ while true;do echo $i;((i++));sleep $i;done }|while read line;do if [ $pid ];then sudo kill $pid;fi;bash -c 'sleep 5;notify-send boom'& pid=$!;echo $line;done
The part before the pipe sign is a process that outputs slower and slower, and once it becomes slower than the threshold, notify-send sends notifications. If you want to be warned when no output happens within 3 minutes, use sleep 3m instead of sleep 5.
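A variant of the same idea that may be easier to adapt, using bash's read timeout instead of a background killer (long_running_command and the 300-second threshold are placeholders):
long_running_command 2>&1 | while true; do
    if IFS= read -r -t 300 line; then
        printf '%s\n' "$line"                  # output arrived in time, pass it through
    elif (( $? > 128 )); then
        notify-send "No output for 5 minutes"  # read timed out
    else
        break                                  # EOF: the command has finished
    fi
done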
I have a query that sometimes takes seconds to get a key from Redis.
Redis INFO shows that used_memory is 2 times larger than used_memory_rss, and the OS starts to use swap.
After cleaning up the useless data, used_memory is lower than used_memory_rss and everything is fine again.
What confuses me is: if any query took around 10 seconds and blocked other queries to Redis, it would cause serious problems for other parts of the app, but the app seems fine.
And I cannot find any of these long queries in the slow log, so I checked the Redis SLOWLOG documentation, and it says:
The execution time does not include I/O operations like talking with the client, sending the reply and so forth, but just the time needed to actually execute the command (this is the only stage of command execution where the thread is blocked and can not serve other requests in the meantime)
So does this mean the execution of the query is normal and isn't blocking any other queries? What happens to a query when memory is not enough, leading to these long query times? Which part of these queries takes so long, given that the "actually execute the command" part is not slow enough to get into the slow log?
Thanks!
When memory is not enough, Redis will definitely slow down, as it will start swapping. You can use INFO to report the amount of memory Redis is using; you can even set a maximum limit on memory usage, using the maxmemory option in the config file, to put a cap on the memory Redis can use. If this limit is reached, Redis will start to reply with an error to write commands (but will continue to accept read-only commands).
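For instance (2gb is just an example value), you can check and cap memory usage at runtime with redis-cli; with the default maxmemory-policy of noeviction, writes start returning errors once the limit is reached:
redis-cli INFO memory | grep -E 'used_memory_human|used_memory_rss_human|maxmemory_human'
redis-cli CONFIG SET maxmemory 2gb
redis-cli CONFIG GET maxmemory-policy   # noeviction by default; eviction policies such as allkeys-lru are also available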