Wrangler publish fails because of exceeding size limits while gzip < 1 MB - Cloudflare

When I run wrangler publish, I get:
Total Upload: 2879.48 KiB / gzip: 474.38 KiB
The documentation mentions a maximum size of 1 MB. The gzip size is well below this threshold, yet I get the following error:
Script startup timed out. This could be due to script exceeding size limits or expensive code in
the global scope. [code: 10021]
The odd thing is that it sometimes does upload the Worker, but most of the time it fails with the above error message.

This failure is due to the startup time limit of 200 ms: it sounds like your code is spending more than 200 ms parsing everything and executing the global scope. Sometimes it stays under the limit purely due to random variation in timing.
This also implies that your Worker will always experience cold start times around 200 ms.
To fix this, try to eliminate unneeded dependencies and unnecessary startup-time computation. Perhaps there is work being done at startup that could instead be done lazily, when a request needs it?
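For example, instead of building an expensive object at module (global) scope, you could defer that work to the first request that needs it. A minimal sketch, assuming the modules Worker syntax; the names (buildHeavyIndex etc.) are illustrative, not from the original Worker:
// Nothing expensive runs at script startup.
let heavyIndex;

function getHeavyIndex() {
  // Built at most once, on the first request that needs it,
  // so it no longer counts against the startup time limit.
  if (!heavyIndex) {
    heavyIndex = buildHeavyIndex();
  }
  return heavyIndex;
}

function buildHeavyIndex() {
  // ...the expensive parsing/precomputation that used to run at module scope...
  return new Map();
}

export default {
  async fetch(request) {
    const index = getHeavyIndex();
    return new Response(`index entries: ${index.size}`);
  },
};
The first request that triggers the build pays the cost once; script startup itself stays cheap.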

Related

java.lang.OutOfMemoryError at report parsing in Gatling using sbt to test a gRPC service

I am trying to use Gatling to test the performance of my gRPC service.
I make just one gRPC call, but with a huge number of entities in the metadata (1000 IDs), and subscribe to server streaming that sends me some data for every entity every 15 seconds.
The test duration is about 2 minutes, and after the test completes Gatling tries to build a report by parsing the log file; at that point I get this error:
Parsing log file(s)...
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.lang.Integer.valueOf(Integer.java:1081)
at scala.runtime.java8.JFunction1$mcII$sp.apply(JFunction1$mcII$sp.scala:17)
at scala.collection.immutable.Range.map(Range.scala:59)
at io.gatling.charts.stats.StatsHelper$.buckets(StatsHelper.scala:22)
at io.gatling.charts.stats.LogFileReader.<init>(LogFileReader.scala:151)
at io.gatling.app.RunResultProcessor.initLogFileReader(RunResultProcessor.scala:52)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:34)
at io.gatling.app.Gatling$.start(Gatling.scala:93)
at io.gatling.app.Gatling$.fromMap(Gatling.scala:40)
at .load.mygrpc.GatlingRunner$.main(GatlingRunner.scala:16)
at .load.mygrpc.GatlingRunner.main(GatlingRunner.scala)
I tried to increase the heap size by editing the VM options in IDEA (set -Xms2028m -Xmx4096m).
I also tried to increase the heap size for Gatling in the build.sbt file via the javaOptions setting, and directly on the command line via flags.
Also, if I run the test from the terminal with the "sbt gatling:test" command, Gatling always does the following: Dumping heap to java_pid16676.hprof ... Heap dump file created [1979821821 bytes in 15.669 secs]
This dump is always the same size, even if I change the number of entities in my call.
Anyway, changing the Java heap size options didn't help. Is there perhaps a way to change the configuration of the simulation log, or some other way to reduce the number of objects generated on the heap? Thanks for any advice, even a recommendation of another tool (ghz is not suitable).
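For reference, raising the heap of the forked Gatling JVM through the sbt plugin is usually written roughly as follows (values illustrative; this assumes the gatling-sbt plugin and its overrideDefaultJavaOptions helper):
enablePlugins(GatlingPlugin)
// Sketch only: replaces Gatling's default JVM options for the forked test run
Gatling / javaOptions := overrideDefaultJavaOptions("-Xms2048m", "-Xmx4096m")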

Apache Flume gets stuck after ChannelFullException has occurred 500 times

I have a Flume configuration with a RabbitMQ source, a file channel, and a Solr sink. Sometimes the sink becomes so busy that the file channel fills up; at that point the file channel throws a ChannelFullException. After 500 ChannelFullExceptions have been thrown, Flume gets stuck, never responds, and never recovers by itself. I want to know where this value of 500 comes from and how I can change it. The number 500 is consistent: whenever Flume gets stuck, I count the exceptions and find exactly 500 ChannelFullException log lines every time.
You are walking into a typical producer-consumer problem, where one is working faster than the other. In your case, there are two possibilities (or a combination of both):
RabbitMQ is sending messages faster than Flume can process.
Solr cannot ingest messages fast enough so that they remain stuck in Flume.
The solution is either to send messages more slowly (i.e. throttle RabbitMQ) or to tune Flume so that it can process messages faster; the latter is probably what you want. Furthermore, the unresponsiveness of Flume is probably caused by the Java heap being full. Increase the heap size and try again until the error disappears.
# Modify java maximum memory size
vi bin/flume-ng
JAVA_OPTS="-Xmx2048m"
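Alternatively, the same option is normally kept in conf/flume-env.sh, which the flume-ng startup script sources (value illustrative):
# conf/flume-env.sh
export JAVA_OPTS="-Xmx2048m"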
Additionally, you can increase the number of agents, channels, or the capacity of those channels. That naturally increases the footprint on the Java heap, so try increasing the heap size first.
# Example configuration
agent1.channels = ch1
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.byteCapacityBufferPercentage = 20
agent1.channels.ch1.byteCapacity = 800000
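Since your setup uses a file channel rather than a memory channel, the equivalent sizing properties would look roughly like this (directories and numbers are illustrative):
# Sketch of a larger file channel
agent1.channels.ch1.type = file
agent1.channels.ch1.capacity = 2000000
agent1.channels.ch1.transactionCapacity = 10000
agent1.channels.ch1.checkpointDir = /var/flume/checkpoint
agent1.channels.ch1.dataDirs = /var/flume/data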
I don't know where the exact number 500 comes from; a wild guess is that after 500 exceptions have been thrown, the Java heap is full and Flume stops responding.
Another possibility is that the default configuration above is what makes it exactly 500, so try tweaking it and see whether the number changes or, better, the problem stops occurring altogether.

JMeter test always freezes when the tested server gives up

When trying to run a load test in JMeter 5.1.1, the test always freezes at the end if the server gives up. The test completes correctly if the server does not give up. This is terrible, because the whole point of the test is to see at what point the server gives up, but as mentioned the test never ends and has to be killed by hand.
Example:
A test running 500 threads against a local server goes smoothly and finishes with the tidying up message.
Exactly the same test running 500 threads against a cloud-based server at some point results in errors: the test gets to about 99% and then freezes on the summary, as in the example below:
summary + 99 in 00:10:11 = 8.7/s Avg: 872 Min: 235 Max: 5265 Err: 23633 (100.00%) Active: 500 Started: 500 Finished: 480
And that's it: you can wait forever and it will just stay stuck at this point.
I tried using different thread types without success. The next step was to change the sampler error behavior, and yes, changing it from Continue to Start Next Thread Loop or Stop Thread helps and the test does end, but then the results in the HTML report look bizarre and inaccurate. I even tried setting the timeout to 60000 ms in HTTP Request Defaults, but this also gave strange results.
That said, can someone tell me how to successfully run a load test against a server so that it always completes regardless of issues and is accurate? I did see a few old questions about the same issue, and none of them had a helpful answer. Or is there another, more reliable open source testing tool that also has a GUI for creating tests?
You're getting 100% errors, which looks "strange" to me in any case.
If setting the connect and response timeouts in the HTTP Request Defaults doesn't help, the reason for the "hanging" most probably lives somewhere else, and the only way to determine it is to take a thread dump and analyze the state of the threads, paying attention to the ones which are BLOCKED and/or WAITING. Then you should be able to trace it down to the JMeter test element that is causing the problem and look closely into what could be going wrong.
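A thread dump can be taken with the standard JDK tools while JMeter appears to be hanging, for example (the PID placeholder is whatever jps reports for the JMeter process):
jstack -l <jmeter-pid> > jmeter-threads.txt
# or, equivalently
jcmd <jmeter-pid> Thread.print > jmeter-threads.txt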
Other hints include:
look for suspicious entries in the jmeter.log file
make sure JMeter has enough headroom to operate in terms of CPU, RAM, network sockets, etc. This can be done using, for example, the JMeter PerfMon Plugin
make sure to follow recommendations from 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
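Regarding the last hint, giving JMeter itself more heap for a non-GUI run is typically done via the JVM_ARGS environment variable (values and file names illustrative):
JVM_ARGS="-Xms1g -Xmx4g" ./bin/jmeter -n -t test.jmx -l results.jtl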

cmake execute_process TIMEOUT uses

We have TIMEOUT 3600 used across many execute_process calls.
Some of our slow servers often exceed the timeout and builds fail as expected.
However, this often causes issues, and we keep increasing the timeout intervals.
Now I'm thinking of removing the timeout completely.
Is there a reason to use the TIMEOUT option other than to make the build fail?
Why should we have TIMEOUT at all (e.g., none of the add_custom_target commands has this feature)? And why use it only in execute_process (this question is not about CTest)?
Turning my comment into an answer
Generally speaking I would remove the individual timeouts.
Instead, I have a timeout for the complete build process running on a Jenkins server. Jenkins calculates an average over the last builds, and you can specify a percentage by which your build may differ (I use "Timeout as a percentage of recent non-failing builds" set to 400%, so only the really hanging tasks get terminated).
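For reference, the per-call timeout being discussed looks roughly like this; when the limit is exceeded the process is terminated and the result variable ends up holding an error rather than 0 (the command name is made up):
execute_process(
  COMMAND run_slow_step --all        # hypothetical slow tool
  TIMEOUT 3600                       # terminate and report failure after one hour
  RESULT_VARIABLE slow_step_result
)
if(NOT slow_step_result EQUAL 0)
  message(FATAL_ERROR "slow step failed or timed out: ${slow_step_result}")
endif()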

Policy extension quits with "terminating execution, function exceeded time limit"

I'm trying to write a policy extension in Lua for Citrix NetScaler that calculates the Base64 encoding of a string and adds it to a header. Most of the time the function works just fine, but more than a few times I see in ns.log that its execution was terminated with the following message:
terminating execution, function exceeded time limit.
Nowhere in the docs could I find what exactly this time limit is (from what I saw it's about 1 ms, which makes no sense to me) or how to configure it.
So my question is: is this property configurable, and if so, how?
Why do you need to go the Lua route? With policy expressions you can do text.b64encode or text.b64decode. I am not answering your question, but you might not be aware of the built-in encoders/decoders in NetScaler.
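For instance, an advanced policy expression along these lines should yield the Base64 value directly, without Lua (the header name is illustrative, and the exact operator syntax may vary between firmware versions):
HTTP.REQ.HEADER("X-Plain").B64ENCODE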
Although I don't have any official document for it, I believe the maximum execution time is 10 ms for any policy. This is also somewhat confirmed by the counters. Execute the following command in the shell:
nsconmsg -K /var/nslog/newnslog -d stats -g 10ms -g timeout
You will see all counters with these names. While your script is executing, you can run:
nsconmsg -d current -g 10ms -g timeout
This lets you see the counters in real time. When the script fails, you will see the value incrementing.
I would say you can print timings to ns.log while your script runs to confirm this. I don't know exactly what you are doing, but keep in mind that NetScaler policies are supposed to involve very brief executions; after all, you are working on a packet, and the timescale for packets is on the order of nanoseconds.