My production environment now has a good number of triggers and classes. These all work great and are functioning as they should. However, I am unable to deploy a couple of new triggers because the test classes are calling too many future methods. I assure you that my code is bulkified so as to only call a future method one time per run. However, when a trigger is being deployed through the IDE, every test is run and as a result the future calls are being run too many times.
I have tried placing a try/catch around all of my future calls, hoping that if it did hit the limit it would simply fall through to the catch block. The deployment still fails with the same error, though.
The main future call I am making is only referenced from one class; it is an HTTP callout that pings my website.
Are there any methods to avoid this limit, short of completely re-doing all of my test classes? As you can see below, the excess future calls are occurring on (default) and not on a specific trigger.
10:39:15.617|LIMIT_USAGE_FOR_NS|(default)|
Number of SOQL queries: 85 out of 100 ******* CLOSE TO LIMIT
Number of query rows: 1474 out of 50000
Number of SOSL queries: 0 out of 20
Number of DML statements: 19 out of 150
Number of DML rows: 23 out of 10000
Number of script statements: 2370 out of 200000
Maximum heap size: 0 out of 6000000
Number of callouts: 0 out of 10
Number of Email Invocations: 0 out of 10
Number of fields describes: 0 out of 100
Number of record type describes: 0 out of 100
Number of child relationships describes: 0 out of 100
Number of picklist describes: 0 out of 100
Number of future calls: 11 out of 10 ******* CLOSE TO LIMIT
Without looking at the specific code, it is difficult to tell what paths are causing you to reach the future calls limit during testing. It would be worth exploring why the components in the current deployment are hitting the future method limit.
If your objective is to get the tests to pass, you could use a combination of Test.isRunningTest(), Limits.getFutureCalls() and Limits.getLimitFutureCalls() so that the future methods aren't invoked during testing when they would otherwise cause the limit to be exceeded.
E.g.
if (Test.isRunningTest() && Limits.getFutureCalls() >= Limits.getLimitFutureCalls()) {
    System.debug(LoggingLevel.ERROR, 'Future method limit reached. Skipping...');
} else {
    callTheFutureMethod();
}
What is the purpose of setting the loop count? Does it just depend on how many times I want to run the test, or does it serve some other purpose? Will different loop counts affect the final test result?
"If you give loop count as 2 then every request two times to the server"
I found this online, but I don't understand what it means.
Based on my understanding, the loop count is set to 2 because I want to repeat the test twice, and no more. After the first iteration ends, the threads from the first round die before the second iteration starts, and then the new thread group sends requests to the server. So why "every request two times to the server"?
The loop count means that each thread in your thread group will run the steps inside the loop twice if the iteration count is set to 2.
The threads start based on the delay and ramp-up settings, which are unrelated to the loop count.
If your server has a concurrent-user limit, for example 100, and you want to execute more requests, say 600, you can set the loop count to 6 and execute 600 requests within the server's limits.
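The arithmetic behind that last point can be sketched as follows (a hypothetical helper for illustration, not part of JMeter itself):

```java
public class LoopCountMath {
    // Total sampler executions = threads in the group * loop count.
    // JMeter runs each thread through the samplers once per iteration.
    static int totalRequests(int threads, int loopCount) {
        return threads * loopCount;
    }

    public static void main(String[] args) {
        // 100 concurrent threads with loop count 6 -> 600 requests overall,
        // while never exceeding 100 in-flight users at any moment.
        System.out.println(totalRequests(100, 6)); // prints 600
    }
}
```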
It's the number of times each JMeter thread (virtual user) executes the Samplers inside the Thread Group.
Each JMeter thread executes Samplers top to bottom (or according to the Logic Controllers), so if there are no more Samplers to execute, the thread shuts down. It might be the case that you won't be able to achieve the desired concurrency, because some threads have already finished execution while others haven't started yet, as described in JMeter Test Results: Why the Actual Users Number is Lower than Expected. You might therefore want to increase the number of iterations, or even set it to "Infinite" and control the test duration using the "Duration" section of the Thread Group or a Runtime Controller.
My task is to optimize a pretty heavy query (~10,000 rows). I would like to use multithreading, so that each thread processes and returns a specific range of data. For example, I create 3 threads:
1st thread processes and returns first 100 rows,
2nd - next 100 rows,
3rd - next 100 rows
When a thread has finished its work, it takes the next 100 rows, and so on until there is no more data to be returned.
I've read about the TPL, but it has only been a native feature since .NET 4.0, and my project is based on 3.5. I also read about the Reactive Extensions library, which backports TPL functionality to .NET 3.5, but I was unable to get it working.
It boils down to this: how do I break the query down to pieces, which could be executed by a number of threads? (possibly in a loop)
P.S. I prefer LINQ, but a plain SQL script is acceptable as well.
So after some tinkering I found a pretty basic way to achieve multithreaded query processing without TPL on .NET Framework 3.5
My approach:
Get the total row count of the table
Batch size = row count / thread count
Create the threads so that each of them gets a specific row subset depending on the batch size. (Info for SQL Server versions before 2012 Here, and for 2012+ Here.)
(example: table has 300 rows, we use 3 threads, each thread would return a batch of 100 rows)
Start all the threads and wait for them to complete (I used a flag)
Dispose of the threads
Don't forget to add "MultipleActiveResultSets=True" (MARS) when writing your connection string or DB connection configuration. This allows multiple active batches on a single connection.
This works quite well for me. Please comment if you have a better idea on how to approach multithreaded querying on .NET 3.5.
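The partitioning steps above can be sketched like this. The sketch is in Java rather than C# (the range arithmetic is language-neutral); the actual SQL access is stubbed out as a comment, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedQuery {
    // Compute the (offset, size) window each worker thread should fetch.
    // Mirrors step 2 above: batch size = row count / thread count,
    // with the last batch absorbing any remainder rows.
    static int[][] windows(int rowCount, int threadCount) {
        int batch = rowCount / threadCount;
        int[][] result = new int[threadCount][2];
        for (int i = 0; i < threadCount; i++) {
            result[i][0] = i * batch;             // offset
            result[i][1] = (i == threadCount - 1) // size
                    ? rowCount - i * batch
                    : batch;
        }
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        int[][] w = windows(300, 3); // the 300-row / 3-thread example above
        List<Thread> threads = new ArrayList<>();
        for (int[] win : w) {
            // Each thread would run something like
            // "... ORDER BY id OFFSET win[0] ROWS FETCH NEXT win[1] ROWS ONLY"
            // on its own connection (or a MARS-enabled shared one).
            Thread t = new Thread(() ->
                    System.out.println("fetch offset=" + win[0] + " size=" + win[1]));
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) t.join(); // step 4: wait for completion
    }
}
```

Joining on every thread replaces the completion flag from step 4; disposing (step 5) is implicit once the threads terminate.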
I need to conduct a series of database performance tests using jMeter.
The database has ~32m accounts, and ~15 billion transactions.
I have configured a JDBC connection configuration and a JDBC request with a single SELECT statement and a hardcoded vAccountNum and this works fine.
SELECT col1,col2,col3,col4,col5 from transactions where account=vAccountNum
I need to measure how many result sets can be completed in five minutes for one session, then add sessions and tune until server resources are exhausted.
What is the best way to randomize vAccountNum so that I can get an equal distribution of accounts returned?
Depending on what type vAccountNum is, the choices include:
Various JMeter Functions, like:
__Random function - generates a random number within a defined range
__threadNum function - returns the current thread's number (1 for the first thread, 2 for the second, etc.)
__counter function - a simple counter which is incremented by 1 each time it is called
CSV Data Set Config - reads pre-defined vAccountNum values from a CSV file. In that case, make sure you provide enough account numbers so you won't be hammering the server with the same query, which is likely to be served from cache.
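If, for example, account numbers are contiguous integers (an assumption to verify against your data), the hardcoded value can be replaced inline with the __Random function; the range here simply mirrors the ~32m accounts mentioned above:

```sql
SELECT col1,col2,col3,col4,col5 FROM transactions
WHERE account = ${__Random(1,32000000,)}
```

The function is re-evaluated on every sampler execution, giving a roughly uniform spread across the range; if account numbers are sparse or non-numeric, the CSV Data Set Config approach is the safer choice.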
I am trying to understand the proper way to control the number of runs: is it trials or reps? It is confusing: I run the benchmark with --trial 1 and receive this output:
0% Scenario{vm=java, trial=0, benchmark=SendPublisher} 1002183670.00 ns; σ=315184.24 ns # 3 trials
It looks like 3 trials were run. What are those trials? What are reps? I can control the reps value with the --debug & --debug-reps options, but what is the value when running without debug? I need to know exactly how many times my tested method was called.
Between Caliper 0.5 and 1.0 a bit of the terminology has changed, but this should explain it for both. Keep in mind that things were a little murky in 0.5, so most of the changes made for 1.0 were to make things clearer and more precise.
Caliper 0.5
A single invocation of Caliper is a run. Each run has some number of trials, which is just another iteration of all of the work performed in a run. Within each trial, Caliper executes some number of scenarios. A scenario is the combination of VM, benchmark, etc. The runtime of a scenario is measured by timing the execution of some number of reps, which is the number passed to your benchmark method at runtime. Multiple reps are, of course, necessary because it would be impossible to get precise measurements for a single invocation in a microbenchmark.
Caliper 1.0
Caliper 1.0 follows a pretty similar model. A single invocation of Caliper is still a run. Each run consists of some number of trials, but a trial is more precisely defined as an invocation of a scenario measured with an instrument.
A scenario is roughly defined as what you're measuring (host, VM, benchmark, parameters) and the instrument is what code performs the measurement and how it was configured. The idea being that if a perfectly repeatable benchmark were a function of the form f(x)=y, Caliper would be defined as instrument(scenario)=measurements.
Within the execution of the runtime instrument (it's similar for others), there is still the same notion of reps, which is the number of iterations passed to the benchmark code. You can't control the rep value directly since each instrument will perform its own calculation to determine what it should be.
At runtime, Caliper plans its execution by calculating some number of experiments, which is the combination of instrument, benchmark, VM and parameters. Each experiment is run --trials number of times and reported as an individual trial with its own ID.
How to use the reps parameter
Traditionally, the best way to use the reps parameter is to include a loop in your benchmark code that looks more or less like:
for (int i = 0; i < reps; i++) {…}
This is the most direct way to ensure that the number of reps scales linearly with the reported runtime. That is a necessary property because Caliper is attempting to infer the cost of a single, uniform operation based on the aggregate cost of many. If runtime isn't linear with the number of reps, the results will be invalid. This also implies that reps should not be passed directly to the benchmarked code.
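Following that pattern, a minimal benchmark body might look like the sketch below. The measured operation and names are illustrative; in Caliper 0.5 a method like this would live in a Benchmark subclass (static and main are only here to keep the sketch self-contained):

```java
public class ConcatBenchmark {
    // Caliper calls this with varying reps; runtime must scale linearly
    // with reps for the per-operation cost inference to be valid.
    public static long timeStringConcat(int reps) {
        long dummy = 0;
        for (int i = 0; i < reps; i++) {
            // the single, uniform operation being measured
            dummy += ("a" + i).length();
        }
        // Return an accumulated value so the JIT cannot
        // dead-code-eliminate the loop.
        return dummy;
    }

    public static void main(String[] args) {
        System.out.println(timeStringConcat(10)); // prints 20
    }
}
```

Returning (or otherwise consuming) a value derived from each iteration is the standard guard against the JVM optimizing the whole loop away.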
I'm using Azure Table Storage.
Let's say I have a partition in my table with 10,000 records, and I would like to get records number 1000 to 1999. Next time, I would like to get records number 4000 to 4999, and so on.
What is the fastest way of doing that?
All I can find till now are two options, which I don't like very much:
1. Run a query which returns all 10,000 records, and filter out what I want after I get them.
2. Run a query which returns 1000 records at a time, and use a continuation token to get the next 1000 records.
Is it possible to get a continuation token without downloading all the corresponding records? It would be great if I could get continuation token 1, then get continuation token 2, and with CT2 get records 2000 to 2999.
Theoretically you should be able to use continuation tokens without downloading the actual data for the first 1000 records by closing the connection after the first request. And I mean closing it at the TCP level, before you read all the data. Then open a new connection and use the continuation token there. Two WebRequests will not do it, since the HTTP implementation will likely use keep-alive, which means all your data is going to be read in the background even though you don't read it in your code. Actually, you can configure your HTTP requests not to use keep-alive.
However, another way is naturally if you know the RowKey and can search on that but I assume you don't know which row keys will be in each 1000 entity batch.
Lastly, I would ask why you have this problem in the first place, and what your access pattern is. If inserts are common and reading these records is rare, I wouldn't bother making it more efficient. If this is a paging problem, I would probably get all the data on the first request and cache it (in the cloud). If inserts are rare but you need to run this query often, I would consider structuring the inserted data with one partition for every 1000 entities and rebalancing as needed (due to sorting) as entities are inserted.
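That last suggestion, one partition per 1000 entities, could be sketched with a key scheme like the one below. The naming is entirely hypothetical; zero-padding keeps Table Storage's lexical key ordering consistent with numeric order:

```java
public class PartitionScheme {
    static final int PARTITION_SIZE = 1000;

    // Map a global row index to a partition key: one partition per 1000 rows.
    static String partitionKeyFor(long rowIndex) {
        return String.format("%08d", rowIndex / PARTITION_SIZE);
    }

    // Pad the row key too, so range scans within a partition stay ordered.
    static String rowKeyFor(long rowIndex) {
        return String.format("%04d", rowIndex % PARTITION_SIZE);
    }

    public static void main(String[] args) {
        // Records 1000-1999 all land in partition "00000001", so
        // "get records 1000 to 1999" becomes a single-partition query.
        System.out.println(partitionKeyFor(1000)); // prints 00000001
        System.out.println(partitionKeyFor(1999)); // prints 00000001
        System.out.println(partitionKeyFor(2000)); // prints 00000002
    }
}
```

The rebalancing mentioned above would then amount to moving entities between these fixed-size partitions as inserts shift the sort order.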