LoadRunner Analysis threw an error and I have no idea what could have caused so many errors

We recently ran a performance test, and while loading the results of that test in Analysis I got a popup telling me that there was an error; the error trace is linked below. I am not sure what could have caused so many errors or what they even mean. I checked the Analysis report and it had all the necessary information.
One of the experts told me the errors were there because I was capturing snapshots of errors during the test run, but I am not convinced. How can taking snapshots generate so many errors?
Could one of the LoadRunner experts here help me understand what these errors mean and what could have caused them?
Log Trace is available at: https://docs.google.com/document/d/16gfaAXpGcKC8f6wfKU_dVjM-IqfbzWgdM6VvVYvo68Q/edit

Regarding the scope of the errors above, from your error log:
75004 Transaction : <OpenWebPage_Login_31>
InstanceID: 4294967299
VUser info:
+Host:HostMachine_IP,
+Group:scriptname.1,
+Script:scriptname.1,
+ID:1.
End Time: 1357937049.69605
Transaction end time is less than the scenario start time
I have cleaned up a single line from your log for clarity. Note that the line includes a couple of items which have been scrubbed, such as "HostMachine_IP." You can also examine the host for the group "scriptname.1" to find out which load generator this line applies to.
Since you have indicated that multiple load generators are involved, examine carefully the scope of the errors in the error log. Are they all tied to one particular hostname? If you have more than one group on the same host, are both groups impacted by this error (i.e., the error follows the host and not the group)? If the error is tied to a script and a group, does the same error occur with the same script in another group on another load generator, such as a single user running as a control set on a different load generator?
Break the errors apart and examine them critically. Errors on all hosts mean one thing; errors on a single host, another. Instances of a script failing across multiple load generators mean one thing; instances failing on a single load generator, another. All scripts failing on one load generator imply something else again, especially when the instances on a second or third load generator do not fail.

Make sure all of your load generators are at the same major.minor(patch) release level as your controller.

Related

In the Karate-Gatling framework, can we have multiple scenarios, each with its own assertion, in one simulation file?

How can I have a different assertion for each scenario in one simulation file?
I want to set a different maximum latency for each scenario in a single simulation file.
Code Example
setUp(
  Scenario1.inject(constantUsersPerSec(10)during(20 seconds)).assertions(
    forAll.responseTime.max.it(maxLatency1)).protocols(protocol),
  Scenario2.inject(constantUsersPerSec(10)during(20 seconds)).assertions(
    forAll.responseTime.max.it(maxLatency2)).protocols(protocol)
)
Error message: "simulation compilation failed."
I also tried keeping a separate setUp section for each scenario in one simulation file, but in that case I get the error message "setUp can only be called once."
Please let me know if there is any way to set a different latency for every scenario in a single simulation file.
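A sketch of one possible workaround, assuming Gatling's standard DSL (identifiers such as scenario1, maxLatency1, and the request names are placeholders, not from the original post): assertions are declared once on setUp rather than chained onto each injection profile, and the details(...) selector can scope a threshold to a particular request or group name used inside one scenario, which approximates a per-scenario maximum latency.

```scala
// Sketch, not a verified solution: one setUp, with each assertion scoped
// via details(...) to a request/group name that only one scenario uses.
setUp(
  scenario1.inject(constantUsersPerSec(10) during (20 seconds)).protocols(protocol),
  scenario2.inject(constantUsersPerSec(10) during (20 seconds)).protocols(protocol)
).assertions(
  details("scenario1Request").responseTime.max.lt(maxLatency1),
  details("scenario2Request").responseTime.max.lt(maxLatency2)
)
```

Depending on your Gatling version, the comparison method may be lt or lessThan; check the assertions section of the Gatling documentation for the DSL your release supports.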

SCCM - Task sequence with application failure

I have been having some issues lately when including applications in task sequences. Any task sequence that contains applications fails immediately with "The referenced package cannot be found." I've checked my distribution points and boundary groups, and verified that the application content is distributed. The logs simply state that it failed to find an application. I track down the application being referenced and redistribute it, or even remove it from the task sequence, but when I run the sequence again I get the same error, just for the next application's content IDs. Task sequences that contain only packages seem to run successfully. Has anyone else encountered this?
EDIT: I've also been seeing 'content hash value mismatch' errors in the logs.
Any help is greatly appreciated. Some extra info:
I have already restored the site server VM and rebuilt the distribution point.
Failed to find CCM_ApplicationCIAssignment object for AdvertID="***2017A", ModelName="ScopeId_E6E2F6FB-692F-4938-8AC6-146644EAE93F/Application_ce95b2ac-bf5a-4de2-b930-6f9b74b7dfd0"
"Failed to resolve selected task sequence dependencies. Code(0x80040104)"

%ABAT-W-CREPRCERR in ActiveBatch 11

Our client uses automation software called ActiveBatch (by Advanced Systems Concepts, Inc.). They're currently using ActiveBatch v8 and are now in the process of migrating their automated jobs to the newer ActiveBatch v11.
Most of the jobs have no problems coping with the newer software, and they're running OK as of this writing. However, there is one job that is unable to run, or rather, to initialize in the first place. This job runs OK on v8. Whenever it is run on v11, it produces the error message:
%ABAT-W-CREPRCERR, error creating batch process for job %1
This is quite self-explanatory; it means the process for that particular job was not created. The user manual states that the job's log file might explain more about why the error occurred. The problem is that the log file is not very helpful, as it only contains the magic numbers shown below:

Further reading suggests these bytes are the Byte Order Mark for UTF-8. I don't know much about this stuff, but since the log file contains only those characters, I'm not sure they're helpful at all.
Another thing: if I run the job manually (running the EXE via Windows Explorer), no problems are encountered and it succeeds. The job, by the way, is a PowerBuilder 9 application.
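For reference, the UTF-8 BOM is the three-byte sequence EF BB BF. A quick way to confirm that the BOM really is all the log contains (i.e., the job died before writing any actual output) is a small check like this sketch; the path is a placeholder for your job's log file:

```python
# Check whether a log file begins with the UTF-8 byte order mark (EF BB BF)
# and report how many bytes follow it. A payload length of 0 means the job
# wrote nothing beyond the BOM before it stopped.
UTF8_BOM = b"\xef\xbb\xbf"

def inspect_log(path):
    with open(path, "rb") as f:
        data = f.read()
    has_bom = data.startswith(UTF8_BOM)
    payload = data[len(UTF8_BOM):] if has_bom else data
    return has_bom, len(payload)  # (BOM present?, bytes after the BOM)
```

If the payload length is zero, the log file genuinely carries no diagnostic information, and the failure has to be chased elsewhere (for example, in the execution environment the v11 agent creates for the job).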

sqlplus hangs after being called from a batch file, without throwing up error message

Essentially, where I work we run a variety of reporting processes that follow the same basic structure...
A batch file calls a SQL script which executes a stored procedure. Another script extracts the data from Oracle and writes it to a CSV. Finally, an Excel macro runs to create the final output.
We have been encountering an issue recently where, if the procedure takes longer than approximately an hour to run, it hangs indefinitely without moving on to the next line of the batch file. No error message is thrown.
The most frustrating part is that certain procedures sometimes have the issue, and then the next day they do not.
Has anyone else ever encountered this issue? Or have any idea what could be causing this problem? I feel like it could be connection/firewall related, but it really is not my area of expertise!
You should instrument the batch file and use extended SQL tracing to reveal where ALL of your time is going. Nothing can escape proper instrumentation. You will find the source of the problem. What you do about it varies depending upon the particular problem (i.e., anti-pattern).
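As a minimal sketch of the tracing step, extended SQL trace (event 10046) can be enabled at the top of the SQL script for the session that runs the procedure; the tracefile identifier is just a label of your choosing so the resulting trace file is easy to find:

```sql
-- Enable extended SQL trace for this session.
-- Level 12 includes both bind values and wait events.
ALTER SESSION SET tracefile_identifier = 'report_hang';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
```

The trace file lands in the database server's diagnostic (trace) directory, and the wait events recorded there will show where the elapsed time is actually going.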
I see issues like this all the time. What I do is connect to the DB and see what is running by checking gv$session. The key is to identify what SQL the script is running, then see if there are any reasons for it to be "hung" (there are many possible reasons): for example, missing indexes; missing or out-of-date stats; workload on the instance; blocking locks; ...
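As a sketch of that check (run from a privileged session; the column list assumes a reasonably recent Oracle release), something along these lines shows each active session, the SQL it is running, and what it is currently waiting on:

```sql
-- Active user sessions, their current SQL, and their current wait event.
-- A non-null BLOCKING_SESSION points at a lock holder; the SQL_ID can be
-- fed to the tuning advisor or looked up in v$sql.
SELECT s.inst_id, s.sid, s.serial#, s.username, s.status,
       s.sql_id, s.event, s.seconds_in_wait, s.blocking_session
  FROM gv$session s
 WHERE s.status = 'ACTIVE'
   AND s.username IS NOT NULL;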
If you have the SQL Tuning Advisor, you can run the SQL through it to get some ideas on solutions. An ADDM report may also suggest additional fixes.

Good data - debugging a graph (grf file)

I've got a graph that isn't behaving as it should in CloudConnect.
I'm running it locally, and it's completing, but not doing its work.
In an effort to figure out why, I've added printLog calls in many places, like the following:
printLog(warn, 'transform from file ' + $in.0.fileName);
printLog(debug, 'joining etc');
The phase consists of a FileList into a SimpleCopy, into a LookupJoin, then a Reformat (which produces SQL) and a DBInsert.
However, while I see logs for the phases above, nothing is produced in the log for any part of my phase, even though all parts of the phase report running successfully in the log. I've also turned on "Enable Debugging" on all connections in this phase.
Am I missing something to enable logging? Is there a better way to debug processing in CloudConnect?
Discovered the problem: the FileList succeeds even if the source file cannot be found, but none of the subsequent steps then fire. It's somewhat unintuitive, since the log says 'succeeded'.
For debugging, after a run you can access the data by right-clicking on a connection and selecting "View Data".
Sorry for the elementary question, but the documentation didn't seem to cover this clearly, at least for a GoodData noob. I'll leave it up for anyone with the same problem!