Enable debug logging when inserting ApexTestQueueItem records via the SOAP API

I'm inserting several ApexTestQueueItem records into an Org via the Partner API to queue the corresponding Apex Classes for asynchronous testing. The only field I'm populating is the ApexClassId. (Steps as per Running Tests Using the API)
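For reference, the insert itself looks roughly like this with the Java WSC partner client (a sketch only; the credentials and the class Id are placeholders):

import com.sforce.soap.partner.Connector;
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.soap.partner.SaveResult;
import com.sforce.soap.partner.sobject.SObject;
import com.sforce.ws.ConnectorConfig;

public class QueueApexTests {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; the stub's default login endpoint is used.
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername("user@example.com");
        config.setPassword("passwordAndSecurityToken");
        PartnerConnection connection = Connector.newConnection(config);

        // One ApexTestQueueItem per test class; ApexClassId is the only field set.
        SObject item = new SObject();
        item.setType("ApexTestQueueItem");
        item.setField("ApexClassId", "01pD0000000Fxxx"); // placeholder Id

        SaveResult[] results = connection.create(new SObject[] { item });
        for (SaveResult result : results) {
            System.out.println(result.isSuccess()
                    ? "Queued: " + result.getId()
                    : "Error: " + result.getErrors()[0].getMessage());
        }
    }
}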
After the tests have run and I retrieve the corresponding ApexTestResult record(s), the ApexLogId field is always null.
For the ApexLogId field, the help documentation gives this description:
Points to the ApexLog for this test method execution if debug logging is enabled; otherwise, null
How do I enable debug logging for asynchronous test cases?
I've used the DebuggingHeader in the past with the runTests() method but it doesn't seem to be applicable in this case.
Update:
I've found that if I add the user who owns the Salesforce session under Administration Setup > Monitoring > Debug Logs as a Monitored User, the ApexLogId will be populated. I'm not sure how to do this via the Partner API, or whether it is the correct way to enable logging for asynchronous test cases.

You've got it right. That's the intended way to get a log.

Related

Pulling summary report for monitoring using reporting task in NiFi

I'm working on a piece of the project where a report needs to be generated with all the flow details (memory used, number of records processed, processes that ran successfully or failed, etc.). Most of the details are present on the Summary tab, but the requirement is to have separate reports.
Can anyone help me with a solution, steps, or examples?
Thanks very much.
Every underlying behavior of the UX/UI that Apache NiFi provides is also accessible through an API (in fact, the UI calls the API to perform each of these tasks). So you can invoke the GET /system-diagnostics API to return that information in JSON form, and then parse this data and present it in whatever form you like.
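For example, a minimal sketch in Java (the host, port, and lack of authentication are assumptions; a secured instance needs a token):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class NiFiDiagnostics {
    public static void main(String[] args) throws Exception {
        // GET the system diagnostics from a local, unsecured NiFi instance.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/nifi-api/system-diagnostics"))
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body is JSON (heap usage, processor load, etc.); parse it with any
        // JSON library and reshape it into the report you need.
        System.out.println(response.body());
    }
}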

Automated Debugging of failed API test cases

Our application runs across 35 web servers with around 100 different APIs executing on them.
These APIs call each other internally, and also execute independently.
We have automated test cases for around 30 APIs, but some of our tests fail because an API on which the API under test depends has itself failed.
So how can our automated test cases tell us the reason for each test failure?
Example Scenario:
We have a test case to validate the API that fetches a user's bank account balance.
We hit this API through REST Assured and try to assert the expected output. The request goes first to the ledger server, which internally hits an auth server to validate the request's authenticity, then hits a counter server to log the fetchBalance request, then hits several other servers to get the correct balance for the user, and finally responds to our request.
The problem is that this chain may break at any hop, and when it does, the ledger server always returns the same error string: "Something failed underhood". Debugging then becomes a challenge: we have to go to each server and search its logs to find the actual cause.
I want to write a solution which can trace complete lifecycle of this request and can report where it actually failed.
For this problem, you should be aware of the most common failure reasons, and then you can implement a strategy based on those reasons.
For example: when you send a request to the server, that API may run security validations, several processing steps, and integrations with different components.
If you can identify those failure points, you can implement checkpoints against each of them:
If the request failed at security validation, there are specific error codes for that, so write logic around them.
If there is a failure at a processing step, there will be a known set of possible causes.
If there is a failure at an integration point, there will be error codes there too; you can implement logic around them.
Also validate the state of the data before each interaction with the server, for example:
assert expression1 : expression2
where expression2 is evaluated and used as the failure message if expression1 is false. (This is a Groovy example, but you can adapt it as needed.)
An example expression2 message could be something like: "Failure occurred when trying to send 'so-and-so' request!".
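To make the checkpoint idea concrete, here is a rough sketch in Java with REST Assured (the endpoints, parameter, and response codes are hypothetical; run with -ea so the asserts fire, mirroring the Groovy form above):

import static io.restassured.RestAssured.given;

import io.restassured.response.Response;

public class FetchBalanceTest {
    // Hypothetical health endpoints for each dependency of fetchBalance.
    private static final String[] DEPENDENCIES = {
            "http://auth-server/health",
            "http://counter-server/health",
            "http://ledger-server/health"
    };

    public void fetchBalanceWithCheckpoints() {
        // Checkpoint each upstream service first, so a failure names the broken
        // hop instead of the generic "Something failed underhood" message.
        for (String url : DEPENDENCIES) {
            int status = given().get(url).statusCode();
            assert status == 200 : "Checkpoint failed: " + url + " returned " + status;
        }

        // Only now exercise the API under test.
        Response response = given()
                .queryParam("userId", "42") // placeholder parameter
                .get("http://ledger-server/fetchBalance");
        assert response.statusCode() == 200
                : "fetchBalance failed even though all dependency checkpoints passed";
    }
}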

JMeter gives false results

I'm trying to learn JMeter.
When I run a sample script that logs into a site using both valid and invalid credentials, it doesn't stop thread execution when the invalid credentials are used, and the login is not recorded in the database either.
Does it actually log in to the website, or does it only create a virtual login to simulate a similar environment? Is there any way to achieve this using Samplers?
JMeter acts like a headless browser at the protocol level.
Whatever your browser with a UI does, JMeter can also do, except executing JavaScript. So, if you recorded your script correctly, JMeter can log in to the actual application as well.
JMeter is not like QTP/Selenium: it does not know whether a credential is valid or invalid. It passes or fails a request based on the HTTP status codes. If the HTML response from the server comes back with a 200 status code, it passed as far as JMeter is concerned; if the server responds with a 500, JMeter fails the request. But JMeter also provides a way to validate the response you get: assertions. You can use a Response Assertion to check whether you are seeing the home page, to confirm the user has logged in successfully.
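The pattern a Response Assertion implements, shown outside JMeter for clarity (a sketch in plain Java; the URL, form fields, and the "Welcome" marker text are assumptions):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoginCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest login = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/login")) // placeholder URL
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("user=u&pass=p"))
                .build();
        HttpResponse<String> response =
                client.send(login, HttpResponse.BodyHandlers.ofString());

        // A failed login can still come back as HTTP 200, so check the body for
        // something only a logged-in page contains -- that is exactly what a
        // Response Assertion does.
        boolean loggedIn = response.statusCode() == 200
                && response.body().contains("Welcome");
        System.out.println(loggedIn ? "Login OK" : "Login FAILED");
    }
}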
To stop the test on error, select the appropriate "Action to be taken after a Sampler error" option in the Thread Group properties.
JMeter is a very nice tool; I have been using it for 2 years with no issues.
Good luck!
Does your script have a Config Element -> HTTP Cookie Manager? The login function needs cookies.
If your script has many transactions at the same level as the login transaction, and the option you selected in your Thread Group is Continue, all transactions will be executed regardless of whether the login transaction passed or failed.
If you want the other transactions not to execute when login fails, add a Regular Expression Extractor as a child of the login transaction to retrieve some post-login text such as Dashboard, and put the other transactions inside a Logic Controller -> If Controller. Supposing the Regular Expression Extractor stores its match in a variable named Dashboard with a Default value of NotFound, the Condition of the If Controller would be "${Dashboard}"!="NotFound".
JMeter automatically treats 2xx and 3xx HTTP response codes as successful, so it won't be able to detect a failed login unless you explicitly tell it to check for the presence or absence of some specific content in the response data.
So if you add a Response Assertion you will be able to conditionally fail the sampler, and you can choose what to do in case of failure via "Action to be taken after a Sampler error" at the Thread Group level.
See the How to Use JMeter Assertions in Three Easy Steps guide for more details on assertions.
If you're unsure what a JMeter Sampler is doing, you can check request and response details via the View Results Tree listener. If you cannot simulate the login event, in the majority of cases it is due to a missing HTTP Cookie Manager and/or failed correlation of mandatory dynamic parameter(s) like ViewState, CSRF token, etc.

Run automated tests on a schedule to serve as a health check

I was tasked with creating a health check for our production site. It is a .NET MVC web application. There are a lot of dependencies and therefore points of failure, e.g. a document repository, Java web services, a SiteMinder policy server, etc.
Management wants us to be the first to know if any point ever fails. Currently we are playing catch-up when a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver based integration tests that test the sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result, but I need to be able to run them on a loop and notify IT when any of them fails.
We have a TFS build server, but I'm not sure it is the right tool for the job. I don't want to continuously build the tests, just run them. Also, it looks like I can't define a build schedule more frequent than daily.
I would appreciate any ideas on how best to achieve this. Thanks in advance.
What you want to do is called a suite of "Smoke Tests". Smoke Tests are basically very short and sweet, independent tests that test various pieces of the app to make sure it's production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide will be useful and transferable.
When you say "I don't want to build the tests, just run them": any CI system you use needs to build them in order to run them. "Building" here basically equates to "compiling"; for your CI to actually run the tests, it has to compile them first.
As far as running them goes, if the TFS build system is of any use whatsoever, it will have a periodic build option. In Jenkins, I can specify a cron time to run. For example:
0 0 * * *
means "run at 00:00 every day (midnight)"
or,
30 5 * * 1-5
which means "run at 5:30 every weekday"
Since you are making smoke tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time. For example:
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
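Each of those can be just a few lines. Here is a sketch of testLogin() in Java with Selenium WebDriver, since that is what the question already uses (the URL and element IDs are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SmokeTests {
    public void testLogin() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // placeholder URL
            driver.findElement(By.id("username")).sendKeys("smoke-test-user");
            driver.findElement(By.id("password")).sendKeys("********");
            driver.findElement(By.id("signIn")).click();

            // Fail fast if the post-login page never appears.
            if (!driver.getTitle().contains("Dashboard")) {
                throw new AssertionError(
                        "Login smoke test failed; page title was: " + driver.getTitle());
            }
        } finally {
            driver.quit();
        }
    }
}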
A web application health check is a very important feature. Smoke tests can be very useful in working out whether your website is running or not, and they can be automated to run at intervals to notify you that there is something wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. Because you are making external calls just as the client would, you cannot see the internals of the application, i.e. whether it is the database that is down, a network issue, disk space, or a remote endpoint that is not functioning correctly.
Now some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing to tell you how your application is behaving is the application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar and they do the following:
Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and not accessible externally.
It returns a JSON response containing:
the overall state
the host that returned the result (if behind a load balancer)
the application version
a set of subsystem states (these will indicate which component is not performing)
The service should be resilient; any exception thrown whilst checking should still end with a health check result being returned.
Some sort of aggregate that can present a number of health check endpoints into one view
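As an illustration, the JSON response might look something like this (a sketch only; the field names and states here are made up, not the schema of any particular library):

{
  "state": "Degraded",
  "host": "web-03",
  "version": "2.1.0",
  "subSystems": [
    { "name": "Database",           "state": "Healthy" },
    { "name": "DocumentRepository", "state": "Healthy" },
    { "name": "AuthService",        "state": "Unhealthy" }
  ]
}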
Here is one I made earlier
After doing this a number of times, I have started a library that takes care of the main wiring-up of the health check and exposes it as a service. Feel free to use it as an example, or use the NuGet packages.
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy

Transaction processing scenario simulation in a web application

I am looking into transaction processing and I cannot find a way to simulate multiple queries (HTTP requests) to a resource (script) that act on shared data.
e.g. an HTTP GET representing access to a resource from user1 with param1, and another HTTP GET representing access from user2 with param2.
For example, 2 users trying to book a limited resource "at the same time", or accessing a URL which triggers actions that should have all the ACID properties.
Is there a way to test such scenarios in a web application?
Should I stick to a "programmable" scenario (a scenario I will code) that can run using a stress-test tool?
What method(s) do you use in such cases?
You can use Apache JMeter to set up test scripts which run multiple simulated users, vary the test content, and so on. You can even run slaves to test from more than one physical test client if you need to increase your load. The requests can be created from templates including user-specific data, picked randomly from prepared requests, or built by scripts that create the data for each request.
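If you prefer the "programmable" scenario you mention, the race itself is easy to script outside a load tool too. A minimal sketch in plain Java (the booking URL and parameters are hypothetical): both requests are released at the same instant, and the two status codes show whether the double-booking was prevented.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentBookingTest {
    public static void main(String[] args) throws Exception {
        // Two users race to book the same limited resource.
        String url = "http://localhost:8080/book?resourceId=1&user=";
        HttpClient client = HttpClient.newHttpClient();
        CountDownLatch start = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(2);

        for (String user : new String[] { "user1", "user2" }) {
            pool.submit(() -> {
                start.await(); // hold both threads until the latch opens
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(url + user))
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // Exactly one request should succeed if the booking is transactional.
                System.out.println(user + " -> " + response.statusCode());
                return null;
            });
        }
        start.countDown(); // fire both requests "at the same time"
        pool.shutdown();
    }
}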