Is there a way I can do the equivalent of "incognito mode" with PhantomJS, where all cookies, local storage, etc., is only transient and does not persist across processes?
This would be useful for UI automation as well as some back-end applications.
No, there is no such thing out-of-the-box, but at least there are things you can do for some types of data.
Cache and cookies are by default visible to only one phantom instance and are not persisted; persistence is only enabled if you point them at files from the command line (e.g. --cookies-file). So you're good here.
The other problematic features are applicationCache, localStorage and webSQLDatabase, as determined by running the Modernizr test suite from PhantomJS.
applicationCache is not really a problem, as in most cases only public data is cached, but it cannot be cleared after each run.
localStorage can be cleared after each run using localStorage.clear() (see here), provided you run your tests sequentially. This might not be the case, as you probably use multiple processes in parallel to execute faster. No real solution here.
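If you drive PhantomJS through Selenium's Python bindings (a common setup in the PhantomJS era), a minimal sketch of that sequential clearing step could look like this; the URL is just a placeholder:

```python
from selenium import webdriver

# Assumes the phantomjs binary is on PATH; webdriver.PhantomJS was the
# driver class shipped with Selenium releases contemporary with PhantomJS.
driver = webdriver.PhantomJS()
try:
    driver.get("http://example.com/")  # placeholder page under test
    # ... exercise the page ...
    # Wipe localStorage before the next sequential run reuses the profile.
    driver.execute_script("window.localStorage.clear();")
finally:
    driver.quit()
```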
webSQLDatabase still cannot be cleared: How to delete a database in Web SQL?
Sadly, there are multiple open issues regarding session handling.
Related
I have several test suites that read and write data from a dedicated database when they are run. My strategy is to assume that the DB is in an unreliable state before a test is run, and if I need certain records in certain tables or an empty table, I do that setup before the test is run.
My attitude is to not cleanup the DB at the end of each test suite because each test suite should do a cleanup and setup before it runs. Also, if I'm trying to "visually" debug a test suite it helps that the final state of the DB persists after the tests have completed.
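Concretely, that setup-before-each-test pattern might look like the following sqlite3/unittest sketch; the orders table and its columns are hypothetical stand-ins for whatever records a suite needs:

```python
import sqlite3
import unittest

DB_PATH = "dedicated_test.db"  # hypothetical dedicated test database

class OrderTests(unittest.TestCase):
    def setUp(self):
        # Assume the DB is in an unreliable state: reset only what this
        # test needs, before it runs rather than after it finishes.
        self.conn = sqlite3.connect(DB_PATH)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER, status TEXT)"
        )
        self.conn.execute("DELETE FROM orders")  # need an empty table
        self.conn.execute(
            "INSERT INTO orders (id, status) VALUES (1, 'open')"
        )  # need this specific record
        self.conn.commit()

    def test_open_orders_are_counted(self):
        (count,) = self.conn.execute(
            "SELECT COUNT(*) FROM orders WHERE status = 'open'"
        ).fetchone()
        self.assertEqual(count, 1)

    # No tearDown on purpose: the final DB state persists so it can be
    # inspected when visually debugging a failing suite.

if __name__ == "__main__":
    unittest.main()
```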
Is there a compelling reason to cleanup a DB after your tests have run?
Depends on your tests, what happens after your tests, and how many people are doing testing.
If you're just testing locally, then no, cleaning up after yourself isn't as important, so long as you're consistently employing this philosophy AND you have a process in place to make sure the database is in a known-good state before doing something other than testing.
If you're part of a team, then yes, leaving your test junk behind can screw up other people/processes, and you should clean up after yourself.
In addition to the previous answer, I'd also like to mention that this is most relevant when executing integration tests, since integrated modules work together and in conjunction with infrastructure such as message queues and databases, and each independent part must work correctly with the services it depends on.
Cleaning up a DB after a test run helps you to isolate test data. A best practice here is to use transactions for database-dependent tests (e.g., component tests) and roll back the transaction when done. Use a small subset of data to effectively test behavior. Consider it a Database Sandbox, applying the Isolate Test Data pattern: each developer can use lightweight DML to populate their local database sandbox and expedite test execution.
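A minimal sketch of that transaction-and-rollback pattern, using Python's sqlite3 as a stand-in for the sandbox database and a hypothetical accounts table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the database sandbox
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()  # the known-good baseline data

# Component test: all writes happen inside one open transaction ...
conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
(balance,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
assert balance == 70  # behavior verified against the small data subset

# ... and are rolled back when done, leaving the sandbox untouched.
conn.rollback()
(balance,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
assert balance == 100
```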
Another advantage is that you decouple your database: make sure the application is backward and forward compatible with your database so you can deploy each independently. Patterns like Encapsulate Table with View, and NoSQL databases, ensure that you can deploy two application versions at once without either of them throwing database-related errors. This was particularly successful in a project where it was imperative to access the database using stored procedures.
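For example, here is a rough sqlite3 sketch of the Encapsulate Table with View idea; the customer table and its split-name schema change are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Version 2 of the schema splits the old "customer" name column in two.
conn.execute(
    "CREATE TABLE customer_v2 "
    "(id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT)"
)
conn.execute("INSERT INTO customer_v2 VALUES (1, 'Ada', 'Lovelace')")

# Encapsulate Table with View: the old application version keeps querying
# "customer" and still sees the shape it expects, so both versions can run.
conn.execute(
    "CREATE VIEW customer AS "
    "SELECT id, first_name || ' ' || last_name AS name FROM customer_v2"
)

row = conn.execute("SELECT name FROM customer WHERE id = 1").fetchone()
assert row == ("Ada Lovelace",)
```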
All this is actually one of the concepts that is used in Virtual test labs.
In addition to the above answers, I'll add a few more points:
- The DB shouldn't be cleaned after a test, because that's where you have your test data, test results and all the history that can be referred to later on.
- The DB should be cleaned only if you changed some application setting to run your (or any) specific test, so that it doesn't impact other testers.
So here's the problem: I've created a database model. When I create the model, a = Model(args), and then perform a.put(), GAE seems to automatically update the memcache, because all the data seems up-to-date even without me hitting the database. Logging the number of elements in the cache also shows the correct number of elements. But I'm not manually updating the cache. How do I prevent this? Cheers.
You can set policy functions:
Automatic caching is convenient for most applications but maybe your application is unusual and you want to turn off automatic caching for some or all entities. You can control the behavior of the caches by setting policy functions.
Memcache Policy
That's for NDB. You don't say what language/DB you are using but I'm sure it's all similar.
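Assuming it is NDB on Python (since the question doesn't say), a minimal sketch of the policy functions would be:

```python
from google.appengine.ext import ndb

# Turn off NDB's automatic caching for everything in this request's context.
ctx = ndb.get_context()
ctx.set_cache_policy(False)     # the in-process ("in-context") cache
ctx.set_memcache_policy(False)  # the automatic memcache layer

# Policies can also be selective: a function that receives the entity's
# key and returns True/False per entity ('Model' is the question's kind).
ctx.set_memcache_policy(lambda key: key.kind() != 'Model')
```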
I'm curious how AX 2009 handles code propagation when operating in a load balanced environment.
We have recently converted our AX server infrastructure from a single AOS instance to 3 AOS instances, one of which is a dedicated load balancer (effectively 2 user-facing servers). All share the same application files and database. Since then, we have had one user who has been having trouble receiving code updates made to the system. The changes generally take a few days before this user can see them, and they don't seem to appear all at once.
For example, a value was added to an ENUM field, and they were not able to see it on a form where it was used (though others connected to the same instance were). Now, this user can see the field in the dropdown as expected, but when connected to one of the instances it will not flow onto a report as it should. When connected to the other instance it works fine, and for any other user connected to either instance it works properly.
I'm not certain if this is related to the infrastructure changes, but it does seem odd that only one user is experiencing it. My understanding was that with this setup, code changes would propagate across the servers either immediately (due to sharing the Application Files), or at least in a reasonable amount of time (<1 day). Is this correct or have I been misinformed?
As your cache problem seems to be per user, go learn about AUC files.
The files are stored on the client computer and can be tricky to keep in sync. There are other problems as well.
Start AX from a script that deletes the AUC file before launching the client.
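A rough Python sketch of such a launcher script; the AUC file pattern matches the usual AX 2009 location, and the client path is an assumption you would adjust:

```python
import glob
import os
import subprocess

# AX 2009 client caches live in the user's local app data folder as
# ax_{GUID}.auc; adjust if your environment stores them elsewhere.
for auc in glob.glob(os.path.join(os.environ["LOCALAPPDATA"], "ax_*.auc")):
    os.remove(auc)

# Launch the AX client once the cache files are gone (path is an assumption).
subprocess.call(
    [r"C:\Program Files\Microsoft Dynamics AX\50\Client\Bin\Ax32.exe"]
)
```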
There is no cache coherency between AOS instances: import an XPO on one AOS server, and it is not visible on the other. You will either have to flush the cache manually or restart the other AOS. The simplest thing is to import on each server, this is especially true for labels, as this is the only way to bring labels in sync to my knowledge.
I am sort of curious about this as well, but what I do know is that if a user has access to the AOT (is a member of the admin group or a group with developer access), the client will cache AOT elements more aggressively than it does for users without developer access.
Elements (like an Enum) might be cached at the client level, but also at the AOS level. Restarting the AOS (service) would flush out memory for that service, forcing it to reload elements upon restart.
I guess what I am suggesting is that you make sure the element is not cached client side. Either restart the client, or run "Refresh AOD" from the developer tools menu. If that doesn't help, try restarting the AOS the client connects to, and see if that helps.
I think it is safe to say that if you want to be absolutely sure every user has the most recent "copy" of any element, you should not develop on the application files shared by all of these services, but rather develop in an environment with one AOS. When you need to move things to production, take down all AOSes in production and move the changes over while the system is down.
In situations like this it is often difficult to find the exact cause of a specific issue.
I try to follow some best practices to avoid such situations:
- Use a separate environment for development
- Deploy code changes using layer files, not XPOs
- When deploying, stop all AOSs, deploy the files, delete the index files in the application directory, start one AOS, compile, sync the DB, then start the other AOSs (or even shut down all of them and start again); a rough sketch of this sequence follows this list
- Try to have the latest kernel versions for the AOSs and clients
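A Python sketch of that deployment sequence; the service names, application directory, and .aoi index pattern below are assumptions to adapt to your environment:

```python
import glob
import os
import subprocess

AOS_SERVICES = ["AOS50$01", "AOS50$02", "AOS50$03"]  # hypothetical names
APP_DIR = r"\\server\AxApplication\Appl\Standard"    # hypothetical path

# 1. Stop every AOS before touching the shared application files.
for svc in AOS_SERVICES:
    subprocess.call(["net", "stop", svc])

# 2. Deploy the prepared layer (.aod) files here, then delete the index
#    files so they are rebuilt when the first AOS starts.
for index_file in glob.glob(os.path.join(APP_DIR, "*.aoi")):
    os.remove(index_file)

# 3. Start one AOS first (run the compile and DB sync against it), then
#    bring the remaining instances back up.
subprocess.call(["net", "start", AOS_SERVICES[0]])
# ... compile + sync DB from a client against the first AOS ...
for svc in AOS_SERVICES[1:]:
    subprocess.call(["net", "start", svc])
```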
I want to analyze the performance (and hence the weak points) of a SharePoint site through a stress test. What needs to be done is to call some methods, exposed via a web service, that do the following things inside the SharePoint site:
- create a new group
- add content to the group
- add an attachment to the content
- delete the content
- delete the previously created group
What is required is a simulation of a situation where 4500 users try to do these operations concurrently (at the same time or, more realistically, within a short timespan, for example within 5 seconds).
We also want to register the execution time of each operation (each web method, for example "create new group"). I thought I could simulate these operations via a console application using threads and stopwatches. Is there anyone who has encountered a similar problem and can give me any existing solutions or hints to do it "the right way"? For example, how can I ensure that all threads start at the same instant? Thanks in advance.
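For reference, the kind of synchronized start and per-operation timing I have in mind looks like this Python sketch (threading.Barrier releases all waiting threads at once; in .NET, System.Threading.Barrier and Stopwatch are the analogous primitives, and the web-method call here is a placeholder):

```python
import threading
import time

NUM_USERS = 50  # scale toward 4500 on a proper load test rig
barrier = threading.Barrier(NUM_USERS)
results = []
lock = threading.Lock()

def call_create_new_group():
    time.sleep(0.01)  # placeholder for the real "create new group" web method

def simulated_user():
    barrier.wait()               # every thread blocks here ...
    start = time.perf_counter()  # ... and they are all released together
    call_create_new_group()
    elapsed = time.perf_counter() - start
    with lock:
        results.append(elapsed)

threads = [threading.Thread(target=simulated_user) for _ in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("max %.3fs, avg %.3fs" % (max(results), sum(results) / len(results)))
```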
I have been a user of Visual Studio Load Testing for 2 years, and I find it very powerful and easy to use. You can run integration tests, navigation in a web site, simulated database load, ... in fact, everything. Because it is an MS application, it is also fully compatible with all MS products like SharePoint: it's easier to call a WCF service from a unit test than from another technology (how else would you test a netTcpBinding?). You can also use the Visual Studio Profiler for instrumenting your code (and see which line of code is expensive, or even ADO.NET interactions). You can also easily extend the load testing through its many extensibility points.
One important thing is that VS load testing is "intrusive": it will not only collect response times, request lengths, and so on, but also all performance counters, database queries, etc. All these metrics are saved in a dedicated database such as SQL Express for reporting, and there is an add-on for Excel.
Just one important note (which applies to all load testing solutions!):
You can run load tests from a developer machine or even a single dedicated machine, but you usually can't generate enough traffic to really see how the application responds (your machine cannot simulate 500 concurrent users because of limited CPU/memory/network). In order to simulate a lot of users, you'll set up what is known as a Load Test Rig.
A test rig is made up of a Test Controller machine and one or more Test Agent machines as shown in Figure 1. The controller manages and coordinates the agent machines and the agents generate load against the application. The test controller is also responsible for collecting performance monitor data from the servers under test and optionally from the test rig machines.
Here are some links :
MSDN
Dave's introduction
That's not to say Visual Studio Load Testing is not a great tool, but there are tools, like Tsung and Eventlet (and many others), that can support many thousands of concurrent users.
Good luck.
In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting, to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHib's PersistenceSpecification object.
The trouble starts when I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects. It's not that the first setup and tear-down of the test harness fails; in fact, the test harness successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database is bootstrapped a second time, on a new in-memory database with a new session and session factory.
The error is:
NHibernate.StaleStateException : Unexpected row count: 0; expected: 1
The row count is indeed unexpected: the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around; it's that, for some reason, the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session, and I can't find one.
I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is discarded after every test teardown.
I've tried every possible variation on disposing and closing every object involved with each test teardown and it still fails on this lengthy database bootstrap. I've even messed around with current_session_context_class to no avail. But non-bootstrap related database tests appear to work fine. I'm starting to run out of options and I may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests.
Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable?
Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable for large data sessions?
Here is how we do it: http://www.gears4.net/blog/archive/14/nhibernate-integration-testing
Hope it helps.
I was able to solve this problem by not using an in-memory database, and instead saving a hard-copy file after initialization, once per test suite run. Then, instead of reinitializing the database after each test, the file-based SQLite database is copied, and that new copy is used for testing.
Essentially, I only set up the initial database data once, save that database off to the side, and copy it for each test. There is a definite possibility that the problem is on my end, but I suspect there is an issue with large in-memory SQLite databases. So I recommend using the file mode of the database if you are running into trouble with a large in-memory SQLite database.
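A minimal sketch of that copy-per-test approach, with sqlite3/shutil standing in for the NHibernate setup and a hypothetical bootstrap step:

```python
import shutil
import sqlite3

TEMPLATE_DB = "bootstrapped.db"  # built once per test suite run
WORK_DB = "current_test.db"      # throwaway copy used by one test

def initialize_template_once():
    conn = sqlite3.connect(TEMPLATE_DB)
    # ... lengthy bootstrap: create the schema, save thousands of rows ...
    conn.execute("CREATE TABLE IF NOT EXISTS domain_objects (id INTEGER)")
    conn.commit()
    conn.close()

def fresh_database_for_test():
    # Cheap per-test reset: copy the bootstrapped file instead of
    # re-running the whole bootstrap against a new in-memory database.
    shutil.copyfile(TEMPLATE_DB, WORK_DB)
    return sqlite3.connect(WORK_DB)

initialize_template_once()
conn = fresh_database_for_test()
# ... run one test against conn ...
conn.close()
```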