Can anyone suggest some good test automation strategies or design patterns for high availability and disaster recovery testing?
The best way to avoid failure is to fail constantly. Netflix uses a strategy called "Chaos Monkey"; the code can be found here: SimianArmy. It generates random failures in your cloud infrastructure so you are prepared for any random disaster.
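For flavour, a minimal Chaos-Monkey-style sketch might look like the following. This is an illustrative Python/boto3 sketch of the idea rather than SimianArmy's actual code; it assumes an AWS account with configured credentials and a hypothetical "chaos:opt-in" tag marking instances that are fair game.

import random
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

def candidate_instance_ids():
    """Return running instances that have opted in to chaos testing."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "instance-state-name", "Values": ["running"]},
            {"Name": "tag-key", "Values": ["chaos:opt-in"]},  # hypothetical tag convention
        ]
    )
    return [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]

def unleash_monkey(dry_run=True):
    """Pick one opted-in instance at random and terminate it."""
    ids = candidate_instance_ids()
    if not ids:
        print("No opted-in instances found; nothing to do.")
        return
    victim = random.choice(ids)
    print(f"Terminating {victim}")
    try:
        ec2.terminate_instances(InstanceIds=[victim], DryRun=dry_run)
    except ClientError as err:
        # With DryRun=True, AWS signals "would have succeeded" via DryRunOperation.
        if err.response["Error"]["Code"] != "DryRunOperation":
            raise

if __name__ == "__main__":
    unleash_monkey(dry_run=True)  # flip to False only in an environment built to tolerate it

The point is not the script itself but the habit: run something like this on a schedule so your recovery paths get exercised long before a real outage.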
I realize that this might not be the best platform to ask this, but I think it is the most unbiased one to put my question to.
How would you compare OpenMDAO vs. modeFrontier with regard to their optimization capabilities, application scaling, and overall software development? Which one would you pick and why?
If you know of any resources or links, please do share them.
The most fundamental technical difference is that OpenMDAO can pass data + derivative information between components. This means that if you want to use gradient-based optimization and have access to at least some tools that provide derivative information, OpenMDAO will have far more effective overall capabilities. This is especially important when doing optimization with high-cost analysis tools (e.g. partial differential equation solvers --- CFD, FEA). In those situations, making use of derivatives offers between a 100x and 10000x speedup.
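To make the derivative-passing point concrete, here is a hedged sketch of what a derivative-aware component and a gradient-based optimization look like (roughly the paraboloid example from the OpenMDAO docs; it assumes OpenMDAO 3.x and SciPy are installed):

import openmdao.api as om

class Paraboloid(om.ExplicitComponent):
    """f(x, y) = (x - 3)^2 + x*y + (y + 4)^2 - 3, with analytic partials."""

    def setup(self):
        self.add_input("x", val=0.0)
        self.add_input("y", val=0.0)
        self.add_output("f", val=0.0)

    def setup_partials(self):
        # Declaring analytic partials is what lets the framework pass
        # derivative information along with the data.
        self.declare_partials("f", ["x", "y"])

    def compute(self, inputs, outputs):
        x, y = inputs["x"], inputs["y"]
        outputs["f"] = (x - 3.0) ** 2 + x * y + (y + 4.0) ** 2 - 3.0

    def compute_partials(self, inputs, partials):
        x, y = inputs["x"], inputs["y"]
        partials["f", "x"] = 2.0 * (x - 3.0) + y
        partials["f", "y"] = x + 2.0 * (y + 4.0)

prob = om.Problem()
prob.model.add_subsystem("parab", Paraboloid(), promotes=["*"])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options["optimizer"] = "SLSQP"

prob.model.add_design_var("x", lower=-50.0, upper=50.0)
prob.model.add_design_var("y", lower=-50.0, upper=50.0)
prob.model.add_objective("f")

prob.setup()
prob.run_driver()
print(prob.get_val("x"), prob.get_val("y"), prob.get_val("f"))

In a toy problem like this the derivatives hardly matter, but the same compute/compute_partials pattern is what unlocks the large speedups when a component wraps an expensive PDE solver.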
One other difference is that OpenMDAO is designed to run natively on a distributed-memory compute cluster. Industrial frameworks can submit jobs to remote clusters and query for the results, but OpenMDAO itself can run on the cluster and has a direct, internal MPI-based distributed-memory capability. This is critical to its being able to efficiently handle derivatives of those expensive PDE solvers. To the best of my knowledge, OpenMDAO is unique in this regard. This is a low-level technical detail that most users never need to directly understand, but the consequence is that if you want to do any kind of high-fidelity coupled optimizations (aero-structural, aero-propulsive, aero-thermal) with more than one PDE solver in the loop, then OpenMDAO's architecture is going to be by far the most effective.
However, OpenMDAO does not offer a GUI. It does not have the same level of data tracking and visualization tools. Also, I know that modeFrontier offers the ability to split a single model up across multiple computers distributed across an organization. modeFrontier, along with other tools like ModelCenter and Isight, offers this kind of smooth user experience and code-free interaction that many find valuable.
Honestly, I'm not sure a direct comparison is really warranted. I think if you have an organization that invests in a commercial integration tool like modeFrontier, then you can still use OpenMDAO to create tightly coupled integrated optimizations, which you can then include as boxes inside your overall integration framework.
You certainly can use OpenMDAO as a complete integration framework, and it has some advantages in that area related to derivatives and execution in distributed memory environments. But you don't have to, and it certainly does not have to be an exclusive decision.
This might be a controversial topic, but I am concerned about the performance of the Boost Graph Library vs. commercial software such as TigerGraph, since we need to choose one.
I am inclined to choose Boost, but I am concerned about whether, performance-wise, Boost is good enough.
Disregarding anything around persistence and management, I am concerned with the core algorithmic performance of the Boost Graph Library.
If it is good enough, we can build our application logic on top of it without worry.
Also, I found the results below from the LDBC Social Network Benchmark:
LDBC benchmark
It seems that TuGraph is the fastest...
Is LDBC's benchmark authoritative in the realm of graph analysis software?
Thank you
I would say that any benchmark request is a controversial topic, as benchmarks tend to represent a single workload, which may or may not be representative of yours. Additionally, performance is only one of the aspects you should look at, as each option is built to target different workloads and offers different features:
Boost is a library, not a database, so anything around persistence and management would fall on the application to manage.
TigerGraph is an analytics platform that is focused on running real-time graph analytics, such as deep link analysis.
Amazon Neptune is a fully managed service focused on highly concurrent transactional graph workloads.
All three have strong capabilities and will perform well when used in the manner intended. I'd suggest you figure out which option best matches the type of workload you are looking to run, the type of support you need, and the amount of operational work you are willing to take on; that will make the choice more straightforward.
I am working on a project which is quite complex in terms of size (it's to build a web app). The first problem is that nobody is interested in any products which could really solve the problems surrounding the project (lack of time, no adjustments in timescales in response to ever-changing requirements). Bear in mind these products are not expensive (< $500 for a company making millions) and do not require a lot of configuration (though the project needs products like that, such as build automation tools, to free up time).
Anyway, this means that testing is all done manually because documentation is a deliverable - so the actual technical design, implementation, and testing of the site suffer (are we developers or document writers? what are we trying to do here? are questions which come to mind). The site is quite large and complex (not on the scale of Facebook or anything like that), but doing manual tests as instructed (despite my warnings) tells me this is not high-quality testing, and therefore not a high-quality product will come out of it.
What benefits can I suggest to the relevant people to encourage automated testing (which they know I can implement)? I know it is possible to change the resolution via cmd with a third-party app for Windows, so this could all be part of an automated build. Instead, I will probably have to run through all these permutations of browsers, screen resolutions, and window sizes manually. Also, where do recorded tests fall down? Do they break when windows are minimised? The big problem with this is that I am doing the work of monitoring the test, and the PC is not doing ALL of the work, which is my job (make the PC do all the work). And given a lack of resources, this clogs up a dev box - yes, used for development and then by me for testing. It would be much better to automate this for a night run when the box is free.
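To illustrate the sort of thing I want the machine to be doing overnight, here is a rough sketch of the resolution/window-size permutation loop (using Selenium's Python bindings; the URL, sizes, and output paths are placeholders, and a real suite would parameterise the browser too):

from selenium import webdriver

SIZES = [(1024, 768), (1280, 1024), (1920, 1080)]  # illustrative resolutions
URL = "https://example.com/app"                    # placeholder

# Assumes a matching chromedriver is available on the PATH.
driver = webdriver.Chrome()
try:
    for width, height in SIZES:
        driver.set_window_size(width, height)
        driver.get(URL)
        # Capture evidence for later review instead of eyeballing each run.
        driver.save_screenshot(f"app_{width}x{height}.png")
finally:
    driver.quit()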
Thanks
Talking about money is usually the best way to get management attention, so here are a few suggestions:
Estimate how long it takes you to do your current manual testing.
Get a list of critical bugs that were found by customers - ideally with an idea of the impact cost (fixing a bug after release is always much more expensive than before), but it's usually good enough just to describe one or two particularly bad bugs. Your manual testing didn't catch these customer bugs, so this is a good way to demonstrate that your manual testing is inadequate.
Come up with a pilot project where you automate testing of a certain area of the product where bugs were found in production. Estimate the cost of the pilot project - doing a restricted pilot has the advantages of being easier to scope and estimate. Then compare the ongoing cost of repeatedly running the automation versus testing every release manually; after a few releases you should break even on the cost of the automation tool plus the test development. Be careful picking the automation area - try to avoid areas like a complex UI that might change significantly between releases and thus require a lot of time to be spent on updating the automated tests.
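As a back-of-the-envelope illustration of the break-even argument (every number below is a hypothetical placeholder, not a measurement):

manual_cost_per_release = 40 * 50        # 40 tester-hours per release at $50/hour
automation_upfront = 120 * 50 + 500      # 120 hours of test development plus a tool licence
automation_upkeep_per_release = 8 * 50   # 8 hours of maintenance per release

saving_per_release = manual_cost_per_release - automation_upkeep_per_release
releases_to_break_even = automation_upfront / saving_per_release
print(f"Break even after {releases_to_break_even:.1f} releases")  # about 4 releases with these numbers

Plug in your own figures; the shape of the argument matters more than the exact values.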
Good luck to you. I screamed for all of this, and I work for a billion-dollar-plus company. We still perform manual testing (including regression testing). Automated tests are finally being instituted because some of the developers went out, got demos of some of the software you're describing, and began configuring a framework.
Your best bet is to come up with an actual dollars-and-cents documented comparison between working with a product and working without one, to prove unequivocally to the management figures in charge of spending the money and designing the processes that the ROI is not only there, but that the people who need to perform testing and/or change their existing processes will actually find their jobs a little bit easier.
Go grassroots. Talk to your team, get them on board. Talk to your business analysts, get them on board. Talk to any QA people you have and get them on board. When the villagers attack the castle with pitchforks and torches, you can bet that the wallets will open up and you'll be performing automated testing.
I would just try to automate as much as you can, whenever you can. I don't think you need to necessarily ask for permission to do things like this. Maybe your management doesn't think of these things, and often they won't see the benefit until you show them a great example.
Is it just that capital expenditures are difficult? I've seen places where the time of existing employees is already paid for, and therefore treated as essentially free in comparison to new purchases.
As for convincing managers, compare the cost of manual regression tests versus the cost to automate. If you are running lots of manual tests, this should be an easy win. If you aren't running the tests often, try the cost of a bug. However, in many companies the cost of a bug isn't attributed to the development department, so quality and the cost of bugs may not be a strong motivation (in other words, quality is just about pride and ego, not actually what it costs).
Convincing developers... if they aren't already on board... electro-shock therapy? If they aren't there, it's going to be an uphill battle.
I have been trying to do something similar on my current project... I can say there's another factor - time. There's a learning curve on automated tools and automated test development. The first release that is tested with automated tools will not be tested as quickly as it was manually, because the testers are learning the tools in addition to exercising the tests. The second release will be much faster, and every release after that will be faster still - but the first one will be a schedule hit, if not a cost hit.
The financial case is not too hard - over time, the project saves lots of money, as resources for repetitive testing are vastly reduced.
But the hard part is finding a strategy that lets you get the tool into use with a minimum of schedule drag on the first release that uses it. Testing is always squeezed at the end of the schedule, so it's the thing most sensitive to schedule stress. Anything you can do to show management how to reduce or remove the learning curve and the automated test setup and installation time is likely to increase your chances of getting to use the tool.
Can someone point out the major differences between LoadRunner and Performance Center? The little research I've done shows that both can be used for load testing and performance monitoring. What additional features are provided by Performance Center? Is VUGen a part of Performance Center?
Let me try to give a little less sales-oriented and more technical slant (without the rant).
Performance Center runs multiple copies of LoadRunner.
Performance Center adds a web app to schedule time on the various LoadRunner machines. Allocating time also implies allocating the number of "vusers" that have been purchased.
For more technical information (including an architecture map and tips on using it), see my page at http://wilsonmar.com/lr_perf_center.htm
For more sales information, see Joe or an HP Software salesperson.
As indicated by the previous post, LoadRunner is for performance and load testing. Performance Center (which includes LoadRunner) is supposed to be a complete performance management solution.
Performance Center is HORRIFICALLY expensive, and from what I can see a number of the so-called features simply address some of the licensing restrictions in LoadRunner. It also appears that not a lot of organisations use Performance Center, so HP doesn't have much expertise with it.
If you just want performance and load testing without all the hassles then Borland SilkPerformer is a better option.
It should also be noted that significant license differences exist between Performance Center and the LoadRunner controller related to remote access. By default Performance Center allows it; the LoadRunner controller does not. Please do not take my word for it; examine the license text for your copy of LoadRunner. These constraints have been in place for the better part of a decade.
Performance Center is an advanced web-based performance test management tool; however, it is costlier than LoadRunner in terms of licensing and deployment. But it makes managing performance testing tasks easier: you can schedule multiple tests through the scheduling window, and according to the scheduler timing you can assign different scenarios to run at that point in time.
HP LoadRunner provides the facility to run and schedule a single scenario at any point in time. However, we cannot ignore the good things in LoadRunner - i.e. it is easy to manage and schedule a run and monitor the results - whereas in Performance Center it may be difficult for new users to schedule and run a test and collate the results afterwards.
I would say that both tools are good when used appropriately.
I've recently become quite interested in identifying patterns for software scalability testing. Due to the variable nature of different software solutions, it seems to me that there are as many good approaches to scalability testing as there are to designing and implementing software. To me, that means we can probably distill some patterns for this type of testing that are widely used.
For the purposes of eliminating ambiguity, I'll say in advance that I'm using the Wikipedia definition of scalability testing.
I'm most interested in answers proposing specific pattern names with thorough descriptions.
All the testing scenarios I am aware of use the same basic structure for the test, which involves generating a number of requests on one or more requesters targeted at the processing agent to be tested. Kurt's answer is an excellent example of this process. Generally you will run the tests to find some thresholds and also run some alternative configurations (fewer nodes, different hardware, etc.) to build up accurate averaged data.
A requester can be a machine, network card, specific software or thread in software that generates the requests. All it does is generate a request that can be processed in some way.
A processing agent is the software, network card, or machine that actually processes the request and returns a result.
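As a concrete (and deliberately simplified) illustration of that requester/processing-agent structure, here is a sketch of a threaded requester that fires HTTP requests at the agent under test and reports throughput and latency percentiles (Python standard library only; the target URL, request count, and concurrency are placeholders):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/health"  # the processing agent under test (placeholder)
REQUESTS = 1000
CONCURRENCY = 20

def one_request(_):
    """Issue a single request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(one_request, range(REQUESTS)))
wall = time.perf_counter() - wall_start

print(f"throughput: {REQUESTS / wall:.1f} req/s")
print(f"p50: {latencies[len(latencies) // 2] * 1000:.1f} ms")
print(f"p95: {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")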
However, what you do with the results determines the type of test you are doing. The main types are:
Load/Performance Testing: This is the most common one in use. The results are processed to see how much is handled at various levels or in various configurations. Again, what Kurt is looking for above is an example of this.
Balance Testing: A common practice in scaling is to use a load-balancing agent which directs requests to a processing agent. The setup is the same as for load testing, but the goal is to check the distribution of requests. In some scenarios you need to make sure that an even (or as close to even as is acceptable) balance of requests across processing agents is achieved, and in other scenarios you need to make sure that the processing agent that handled the first request for a specific requester handles all subsequent requests (web farms commonly need this); see the sketch after this list.
Data Safety: With this test the results are collected and the data is compared. What you are looking for here are locking issues (such as a SQL deadlock) which prevent writes, and confirmation that data changes are replicated to the various nodes or repositories you have in use within an acceptable time.
Boundary Testing: This is similar to load testing, except the goal is not processing performance but how the amount of data stored affects performance. For example, if you have a database, how many rows/tables/columns can you have before I/O performance drops below acceptable levels?
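For the balance-testing case above, the check itself can be as simple as tallying which processing agent answered each request. This sketch assumes the agents identify themselves via a hypothetical X-Served-By response header (the header name and URL are placeholders):

import urllib.request
from collections import Counter

TARGET = "http://localhost:8080/health"  # placeholder
counts = Counter()

for _ in range(500):
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        counts[resp.headers.get("X-Served-By", "unknown")] += 1

for node, hits in counts.most_common():
    print(f"{node}: {hits} requests")
# A roughly even spread suggests round-robin behaviour; one dominant node may
# indicate sticky sessions, which is either the goal or a problem depending
# on the scenario being tested.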
I would also recommend The Art of Capacity Planning as an excellent book on the subject.
I can add one more type of testing to Robert's list: soak testing. You pick a suitably heavy test load, and then run it for an extended period of time - if your performance tests usually last for an hour, run it overnight, all day, or all week. You monitor both correctness and performance. The idea is to detect any kind of problem which builds up slowly over time: things like memory leaks, packratting, occasional deadlocks, indices needing rebuilding, etc.
This is a different kind of scalability, but it's important. When your system leaves the development shop and goes live, it doesn't just get bigger 'horizontally', by adding more load and more resources, but in the time dimension too: it's going to be running non-stop on the production machines for weeks, months or years, which it hasn't done in development.
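For what it's worth, the monitoring half of a soak run can be very simple. Here is a rough sketch that samples the memory of the process under test once a minute so a slow leak shows up as a steadily climbing series (it assumes psutil is installed and that you know the PID; a real setup would also track latency, error rates, disk usage, and so on):

import csv
import time
import psutil

PID = 12345                 # placeholder: PID of the process under test
SAMPLE_INTERVAL_S = 60
proc = psutil.Process(PID)

with open("soak_memory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "rss_bytes"])
    while proc.is_running():
        writer.writerow([time.time(), proc.memory_info().rss])
        f.flush()
        time.sleep(SAMPLE_INTERVAL_S)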