Usage of STAF/STAX in Automation

I have been exploring STAF/STAX over the past week.
It is used for test automation execution and seems somewhat similar to Hudson.
I would like to know which types of tests it can be used for, i.e. functional tests, load tests, etc. Functional automation tests are typically dependent on their framework, i.e. how they run and how they report pass or fail status go through the framework. How can we integrate such tests with a test automation framework like STAF?

I've been using STAF/STAX for over 4 years.
PROs:
Open Source
Cross-platform
Concurrent execution
Extensible (i.e. you can write your own services)
Decent support from IBM through the STAF website
CONs:
Sometimes buggy
Difficult to diagnose problems
Programming STAX scripts is awkward and ugly (i.e. scripting via XML tags and embedded jython)
I've found that STAF/STAX is useful for systems test. It enables you, for example, to launch a server on one system and a client on another, then test their interaction. It's also helpful if you need to test cross-platform, or for multiple language bindings. I also like the fact that it can be used both in large, networked systems, as well as on an individual's desktop.
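To give a flavour of the plumbing, here is a minimal sketch using STAF's Python binding (PySTAF) to drive that client/server scenario; the host names and scripts are invented:

    import sys
    from PySTAF import STAFHandle, STAFException

    # Register this test script with the local STAF daemon
    try:
        handle = STAFHandle("client-server-test")
    except STAFException as e:
        sys.exit("Could not register with STAF, RC: %d" % e.rc)

    # Start the server under test on one machine (host and script invented)...
    result = handle.submit("server-box", "PROCESS",
                           'START SHELL COMMAND "./start_server.sh"')
    if result.rc != 0:
        sys.exit("PROCESS START failed: %s" % result.result)

    # ...then run the client against it from another machine and wait
    result = handle.submit("client-box", "PROCESS",
                           'START SHELL COMMAND "./run_client_tests.sh" WAIT RETURNSTDOUT')
    print(result.result)  # STAF returns marshalled data; PySTAF can unmarshall it

    handle.unregister()

STAX then layers job control (parallel execution, error handling, monitoring) on top of calls like these through its XML/jython job definitions.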
On the other hand, I would probably avoid using it for unit testing, or for tests that are relatively simple and can be run on a single system; I'd use a language-specific unit testing framework for that.

STAF is not comparable to Hudson.
When I look at something like Hudson/Jenkins or Buildbot, I see a GUI with an emphasis on scheduling, on viewing what's going on, what was done, and how it went.
STAF, on the other hand, is more like the plumbing for a QA framework over a distributed environment. It helps with launching processes, collecting logs, locking resources, etc.

Related

Pros/cons of using FE API library as integration test tool suite

Wanting to open a discussion about testing approaches.
Context
I'm creating a new project and my main focus has been on efficiency and clean structure (not necessarily the most STANDARDISED code, but code that is easy to read, consistent, and quick to understand). I'm building my server-side code, which I've mapped from a loose outline of the website UX designs. I want to create some integration tests in preparation for building the FE.
I experimented with using newman/postman to automate some of these integration tests. Pros for this:
Not reinventing the wheel: the Postman requests would exist anyway, so it's not a whole new suite I have to build and maintain
Consistency with project state: the Postman requests get updated during the manual testing phase, so by the time integration testing happens, they automatically reflect the current project state
But I had some issues running it. Then I had the idea...
Why not build out the FE API library which connects to my server, and then use that as my testing suite? (A rough sketch follows the list below.)
Easier to enforce contract testing as it can be coupled with unit testing on a real consumer
Not a whole new suite I have to build and maintain, and it makes de-coupling of the tests from the real consumer near impossible, so the reliability of the testing increases
Efficient use of time, as the library will be built anyway; the only additional coding would be the scripts that schedule the test calls
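As a rough sketch of that idea (the endpoints and client class are invented, and it's shown in Python for brevity even though my stack is TypeScript; the shape is the same):

    import requests

    # Hypothetical thin client that the FE would also use in production
    class ApiClient:
        def __init__(self, base_url):
            self.base_url = base_url
            self.session = requests.Session()

        def create_user(self, name):
            resp = self.session.post(self.base_url + "/users", json={"name": name})
            resp.raise_for_status()
            return resp.json()

        def get_user(self, user_id):
            resp = self.session.get("%s/users/%s" % (self.base_url, user_id))
            resp.raise_for_status()
            return resp.json()

    # The integration test drives the same client the FE will use, so any
    # contract drift between client and server fails here, not in production
    def test_create_then_fetch_user():
        client = ApiClient("http://localhost:8000")
        created = client.create_user("alice")
        assert client.get_user(created["id"])["name"] == "alice"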
I'm considering integrating the two repos (FE + BE) into a monorepo, using a management tool like NX (which can check for differences and only do builds / deploys when affected areas are touched). That way types can be consistent across touch points, as both are running on TypeScript.
Ideal conversation anchors:
Experimented with something similar? How did it go?
Considerations I haven't made which are likely to cause problems down the line?
Anything easier to achieve the goal that I haven't considered?

RPA Vs Traditional Automation Tools [closed]

I am a test automation engineer and recently got the opportunity to explore the RPA tool Blue Prism. After exploring it, I found it similar to UI automation tools that support various technologies. Can anyone tell me what value RPA adds compared to traditional tools? I was interested to see how it can use 'intelligence', but couldn't find any such feature.
Can the experts in this forum help me understand what RPA can do that a traditional tool cannot?
I have seen similar questions, but they do not give the answers I am looking for.
Thanks,
Nilesh
The technological challenges of RPA and automation tools are quite similar. RPA and testing products differ in their user experience and reporting. While testing tools often offer features to assess risk or create test data, RPA tools have a bigger focus on bot creation and user data storage.
The main difference between the two very similar techniques, Test (Process) Automation and Robotic Process Automation, is the goal. Almost all the points in the previous posts are, in my modest opinion, consequences of the goal of each technique:
With a Test (Process) Automation tool you want to test an application or system under test, i.e. you want to find bugs or prove that the quality of the application has reached a certain level. Test process automation will generally run in a test environment. If something goes wrong and your test automation code or tool breaks the test environment completely, it is not that bad: you can reset the environment and you have not hurt anyone.
With an RPA tool you want to implement a real-life business process. The robot works in a productive environment. If something goes wrong you may really hurt someone, i.e. damage productive data or the environment. The robot does the work of a user, it does not just simulate it. Therefore, the robot must be "safe". It must also be possible to understand exactly what the robot did with the job it was given.
I hope this helps to clarify.
PS: I include the word "Process" in the context of testing, because initializing or resetting a test environment, providing secondary data, booting a system under test, running a test, collecting results, comparing actual with expected results, creating reports for test management or DevOps is usually a process you automate using some kind of "Test Process Automation" not just Test Automation.
On a less official and serious note, RPA is a marketing term for a test automation robot pumped up with some kind of workflow editor and some remoting technologies.
We were using standard test automation robots (UFT, Selenium, etc.) to do some RPA, with the drawback that the automated workflow was coded rather than visualized, and we had to invest some effort into the infrastructure to support scaling (launching the robots en masse and automatically).
What does it solve?
- As mentioned above, visualising workflows and scaling - although it has limitations here
What are the weak points?
The Test Automation Robot wrapped inside the RPA can be very limited - in many cases they are less mature than state of the art TA Robots.
The promise of record & replay and drag & drop your workflow. As always, we are not there yet.
It solves a problem in a way it shouldn't be solved; the GUI is for the user, the APIs are for the software (or call them robots in this case). These problems should be solved by writing integrations between systems or extending existing APIs (safer, cheaper, much more reliable, etc.).
RPA platforms provide you a singular place where various different type of applications can be automated.
These platforms fundamentally try to consolidate and formalize the automation effort in an enterprise, and here the word "enterprise" is key.
For small businesses that want to automate some task or tasks, the intern can be asked to quickly build something. No one cares what technology or tools were used. Maybe he likes Python, and someone else likes VBA, so a single task may be automated using several different technologies. No one cares as long as it works. The intern leaves and the next intern figures out something new...
RPA platforms, on the other hand, are a larger "formal" effort that tries to automate tasks that would otherwise require a lot of FTE (full-time employee) headcount to accomplish. Typical RPA use cases are repetitive tasks that humans do all day without using much brain. Think of extracting each line item from a PO (purchase order), putting it in an Excel spreadsheet, and then posting it on some internal application. Now imagine a single person doing this for hundreds of POs a day.
You cannot imagine how uneven the IT landscape in most enterprises is: old applications that were either built in-house a long time ago, or versions that aren't being updated by the vendor any more. The bigger problem is when these applications do not have any integration points; RPA platforms provide the least invasive option (avoiding changes to the old applications, or even upgrades).
I can go on all day about RPA; do let me know if you have any follow-up questions. I work for one of these RPA platforms, so maybe I will be able to help.
There are many flavours of RPA.
Blueprism is not an ideal example of what modern RPA should look like, consider checking out Automation Anywhere or UiPath (both offer Community Edition you could download and try for free).
While the technological differences may not be that vast (and indeed RPA vendors are now looking at test automation as a market for their products), the biggest differences are in the way the platforms are engineered. To name a few:
A security-oriented approach: the RPA platform is designed to make sure it can handle important data responsibly.
Design for ease of use by non-technical people: Selenium is great, but you need to know how to program to use it, whereas UiPath offers easy drag-and-drop for the same things.
Working with unstructured data inputs, like OCR'ing documents and acting on them.
ML integration, for decision-making or extra capabilities, e.g. NLP, sentiment analysis, helping OCR recognize new document formats, etc.
Integration with third parties like chat bots or BPM.
Analytical and monitoring capabilities, to make sure you know how long your bots take to do their work and to help them if they fail.
Ease of use should not be discounted:
With RPA it's a half-an-hour job to receive a request by mail, take data from SAP, build a pivot in Excel, and upload it to a website in JSON format. Could you do that with other tools? Sure! Is it as easy? Usually not.
So you could do poor man's RPA with Selenium or AutoIt or bash or PowerShell; it will just not be as easy, and it will provide fewer capabilities while requiring more effort every step of the way. And if you do it properly, you'll end up replicating one of the RPA platforms anyway.
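As a concrete sketch of that poor man's RPA in Python with Selenium (the site, selectors, and CSV layout are all invented), everything around this loop, i.e. scheduling, queuing, retries, and monitoring, is what the platforms add:

    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Re-key each purchase-order row from a spreadsheet export into a web form
    driver = webdriver.Chrome()
    try:
        with open("purchase_orders.csv", newline="") as f:
            for row in csv.DictReader(f):
                driver.get("https://internal-app.example.com/po/new")
                driver.find_element(By.NAME, "po_number").send_keys(row["po_number"])
                driver.find_element(By.NAME, "amount").send_keys(row["amount"])
                driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    finally:
        driver.quit()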
Also, in RPA there is usually (but not always) a central coordination mechanism (à la Selenium Grid) to orchestrate several robots (up to 10k in UiPath's case): to make sure they act in sync, to maintain some sort of work queue, to shift their workload, to deploy processes to them, etc. This makes all the difference for enterprise usage scenarios.
RPA and UI automation tools have some technical features that intersect. For example:
UI component utilization: these tools may utilize a UI screen-image-based approach, OS platform frameworks (e.g. Microsoft accessibility frameworks), or technology-centric platform extensions (e.g. a Chrome or Firefox extension).
End-to-end application driving: these tools can drive applications to complete their duties, for example logging in to an application, getting some data, shifting to another legacy application, and entering the data there.
Screen scraping: these tools have screen-scraping features to retrieve data from screens where other techniques are not applicable.
3rd-party application integration: these tools can also integrate with web services or databases to get data and use it in their application usage scenarios.
...
As you can see, RPA and UI automation tools share lots of features. But the main concept here is not technology but application methodology. From this point of view, RPA tools:
Are designed to drive real-life business flows in the production environment.
May have some cognitive power to complete tasks that humans normally perform (e.g. document analysis, strong OCR capabilities, pattern recognition).
Can work attended and unattended.
Don't require any programming-language knowledge, so non-technical staff can easily use and learn them.
In contrast to the above, for implementing complex flows, gaining scalability, achieving seamless integration with third-party applications, and natively integrating external technology into your business flows (e.g. a third-party microblog sentence-classification AI library that you developed yourself), some RPA tools (e.g. Voodoo RPA) have their own Embedded Development Environment (EDE) for programmers.
Were invented for doing high-value repeatable tasks 24/7 in a reliable and secure manner.
Have enhanced workflow management, impersonation, and logging capabilities.
In sum, RPA tools were developed to easily implement high-volume repetitive tasks in a business environment, whereas UI automation was developed to test an application's UI and verify the business rules suited to the baseline paradigm.
The main difference is the type of task we can automate using traditional automation and RPA:
Traditional automation is mainly used to automate test cases of applications/products.
RPA is mainly used to automate business processes.
In terms of coding knowledge, traditional automation requires more coding knowledge than RPA.
Traditional automation can support either desktop app automation or web app automation at a time, without integration of 3rd-party tools, whereas RPA can support both web and desktop app automation.

Coded UI Tests Cross Browser Testing Design Patterns

Does anyone know a good book/blog/resource explaining design patterns for cross browser testing projects?
The MSDN link below explains how to set everything up.
http://blogs.msdn.com/b/visualstudioalm/archive/2012/10/30/introducing-cross-browser-testing-with-coded-ui-tests.aspx
This is all set up; however, we currently have a CUIT project with over 100 tests, all set up and running on IE. There were a lot of problems, and it took a lot of refactoring of the UI Map, playback settings, test steps (as controls aren't recognized), etc. to have even a few tests run smoothly. Plus, since our clients always use the latest versions of Chrome and Firefox, the framework isn't kept up to date to support the newest versions. Hence, continuing the way we're going right now, it looks like we'll end up with bulky test code that will soon be a nightmare to maintain as we add more tests to the project.
It would be good to know the best practices in terms of managing/isolating tests, so that less refactoring is involved and tests integrate smoothly across the various browsers.
The Selenium solution sounds like a perfect fit for your case. You can find
best practices in terms of managing/isolating tests so it involves less re-factoring and smooth integration between tests for various browser
in these test design considerations. Take a closer look at the parts about Page object model and Page factory for better understanding of
design patterns for cross browser testing projects
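For illustration, here is a minimal page object in Python (the locators and URL are invented); the test depends on the page classes rather than on locators, so browser- and markup-specific churn is absorbed in one place:

    from selenium.webdriver.common.by import By

    class LoginPage:
        URL = "https://app.example.com/login"  # hypothetical

        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get(self.URL)
            return self

        def log_in(self, user, password):
            self.driver.find_element(By.ID, "username").send_keys(user)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()
            return HomePage(self.driver)

    class HomePage:
        def __init__(self, driver):
            self.driver = driver

        def greeting(self):
            return self.driver.find_element(By.CSS_SELECTOR, ".greeting").text

    # Call with any WebDriver instance (IE, Chrome, Firefox); the steps
    # never mention a locator, so they run unchanged on every browser
    def check_login_shows_greeting(driver):
        home = LoginPage(driver).open().log_in("alice", "secret")
        assert "alice" in home.greeting()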
If you want to make a step forward with it, take a look at Selenium Grid. It allows you to run your tests on different machines against different browsers in parallel; that is, running multiple tests at the same time against different machines running different browsers and operating systems. Essentially, Selenium Grid supports distributed test execution.
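A sketch of the Grid side in Python (the hub URL is invented): the test body stays the same, and only the driver construction varies per browser:

    from selenium import webdriver

    # Obtain a browser session from a (hypothetical) Grid hub instead of a
    # local driver; the hub farms the session out to a matching node
    def make_driver(browser):
        options = {"chrome": webdriver.ChromeOptions,
                   "firefox": webdriver.FirefoxOptions}[browser]()
        return webdriver.Remote(command_executor="http://grid-hub:4444/wd/hub",
                                options=options)

    for browser in ("chrome", "firefox"):
        driver = make_driver(browser)
        try:
            driver.get("https://app.example.com/login")  # hypothetical app
            # ... same page-object test steps as in the sketch above ...
        finally:
            driver.quit()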

Testing reusable components / services across multiple systems

I'm currently starting a new project where we are hoping to develop a new system using reusable components and services.
We currently have 30+ systems that all share common elements, but at the moment we develop each system in isolation, so it feels like we are often duplicating code; and then, of course, we have 30+ separate code bases to maintain and support.
What we would like to do is create a generic platform using shared components, to enable quick development of new collections, reuse code and automated tests, and reduce the code base that needs to be maintained.
Our thoughts so far are that we would have a common code base for specific modules, for example User Management and Secure System Access; these modules could consist of their own generic web module, API, and context. This would create a generic package of code.
We could then deploy these different components/packages to build up a new system, saving us from coding the same modules over and over again; so if a new system needed to manage users, you could take the User Management package and boom, it does what you need. However, because we have 30+ systems, we will deploy the components multiple times, once for each collection. We also appreciate that some of the systems will need unique functionality, so there would be the potential to add extensions to the generic modules for system-specific needs, OR to choose not to use one of the generic modules and create a new one while still using the rest of the generic components.
For example, suppose we have 4 generic components, A, B, C and D, that make up a system. These could be deployed to create the following system set-ups (a rough sketch of an extended component follows the list):
System 1 - A, B, C and D (Happy with all generic components)
System 2 - Aa, B, C and D (extended component A to include specific functionality)
System 3 - A, E, C and F (Can't reuse components B and D so create specific ones, but still reuse components A and C)
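A rough sketch of the generic-versus-extended idea (Python for illustration; the class names are invented):

    # Generic component "A", deployed as-is by System 1
    class UserManagement:
        def __init__(self):
            self.users = {}

        def create_user(self, name):
            user = {"id": len(self.users) + 1, "name": name}
            self.users[user["id"]] = user
            return user

    # Extended component "Aa": System 2 layers its specific behaviour on top
    class System2UserManagement(UserManagement):
        def create_user(self, name):
            user = super().create_user(name)
            user["audit_flag"] = True  # hypothetical system-specific extra step
            return user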
This is throwing up a few issues for me, as I need to be able to test this platform and each system to ensure it works, and this is the first time I've come across having to test a set-up like this.
I've done some reading around microservices and how to test them, but these often approach the problem for just one system using microservices, whereas we are looking at multiple systems with different configurations.
My thoughts so far lead me to believe that, for the generic components that will be utilised by the different collections, I can create automated tests at the base-code level; those tests will confirm the generic functionality, so it will not be necessary to retest these functions for each component, other than perhaps a manual sense check after deployment. Then, at each system level, additional automated tests can be added to check the specific functionality that may be created.
Ideally, I'd like to have some sort of testing platform set up so that if a change is made to a core component such as User Management, it would be possible to trigger all the automated tests at the core level, and then all of the specific system tests for every system that shares the component, to ensure that changes don't affect core functionality or create a knock-on effect in the specific systems. Then only a quick manual check would be required. I'm keen to remove the massive manual test overhead of checking 30+ systems each time a shared component is changed.
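Continuing the invented UserManagement example above, that trigger idea might look like one generic suite parametrised over every system's component variant (shown with pytest; our real stack is .NET, so this is just the shape):

    import pytest

    # Assumes the UserManagement / System2UserManagement sketch above;
    # hypothetical mapping from each system to the variant it deploys
    SYSTEM_VARIANTS = {
        "system1": UserManagement,
        "system2": System2UserManagement,
    }

    @pytest.fixture(params=list(SYSTEM_VARIANTS.values()),
                    ids=list(SYSTEM_VARIANTS))
    def user_management(request):
        return request.param()

    # One core-level test, automatically re-run once per system variant, so a
    # change to the shared component is checked against every consumer
    def test_created_user_is_retrievable(user_management):
        user = user_management.create_user("alice")
        assert user_management.users[user["id"]]["name"] == "alice"

A change to the shared component would then re-run this whole matrix as the gate in the CI build.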
We work in an agile way, and for our current projects we have a strong continuous integration process set up: when a developer checks in some code (Visual Studio), this triggers a CI build (TeamCity / Octopus) that runs all of the unit tests; provided these all pass, it then triggers an integration build that runs my QA automated tests, which are a mixture of tests run at an API level and web tests using SpecFlow and PhantomJS or Selenium WebDriver. We would like to keep this sort of framework in place to keep the feedback loops quick.
It all sounds great in theory, but where I'm struggling is trying to put something into practice and create a sound testing strategy to cover this kind of system set up.
So really what I'm hoping is that there is someone out there who has encountered something similar in the past, has thoughts on the best way to tackle this, and has approaches that have proven to work.
I'm keen to get a better understanding of how I could set up a testing platform / rig to aid continuous integration for all systems, considering that each system could potentially look different yet share code.
Any thoughts or links to blogs / whitepapers etc. that you think might help would be much appreciated!!
Your approach is quite good, and since I'll soon have to face the same issues as you, I can give you my ideas so far. I'm pretty sure that how to
create a sound testing strategy to cover this kind of system set up
can't be squeezed into one post. The big picture looks like this (to me): you're in the middle of an enterprise application integration process, and the fundamental basis to be covered by tests will be the data migration. Maybe you need to consider the concept of service-oriented architecture for your
generic platform using shared components
since it will enable you to provide application functionality as services to other applications. An indirect benefit is that SOA dramatically simplifies testing: services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation. There are a lot of resources on this, like E2E testing or efficiently testing SOA.

Is including system test cases in the final packaged product of your application contributing to bloat or increasing risk?

I am packaging up an RPM file which has a %postinstall section that detects certain conditions and runs a suite of unit, function, and system tests. I am getting some push-back that it exposes some of the internal structure, as I use some of the same environment variables the code itself uses for diagnostics. Thoughts?
UPDATE: I am not planning on running the tests automatically, nor exposing their existence to the end users. I am proposing that the testing package simply be available on any machine where the suite lands. It adds roughly 3% to the final size of the package and requires an obscene amount of internal knowledge to execute properly.
The program itself is a library which others may use, and it is exposed through an API. The internal knowledge of how things function is not at issue. My main motivation is the lack of suitable test resources and the large variability in the target environment. Some of the tests are really simple (similar to what configure might do to determine that all the right features are available from the compiler). Other tests are more involved, and they prove the basic functions the library should provide.
If you want to avoid the complaint that it runs on every install, at least use the %check rule of RPM.
Sounds like people are concerned about "reverse engineering". So the software is proprietary? This would seem to be the crux of your problem. Regardless, it's common for the test suite to be separate from the packaged software.
However, you're not being unrealistic: allowing users to run the tests themselves on their systems and give you the results is a great aspect of a collaborative relationship with users. Unfortunately, you're running up against the proprietary business model.
Perhaps you can compromise by trimming down or rewriting the tests and diagnostics to prove an adequate amount of fitness without revealing too much. I wouldn't back down and throw out the tests and diagnostics you've written so far.
You really should make the argument that users will be pleased with, and have more confidence in, a software package shipped with a thorough testing system, and that this outweighs any fears of revealing the software's internals.