How can a change be brought about in the testing process that follows waterfall? [closed] - testing

We are a small company and I am the test coordinator appointed to put a testing process in place for the company.
We don't have a testing process today. Development, deployment, and testing happen almost daily, and communication takes place over Skype or email.
How do I start to put a testing process in place?
We have operations running in 8 different countries and we don't have a dedicated testing team; the business users are the testers we have.
It is crucial for me to get them all testing when required.
So how do I bring about that change in the way they work?
Any suggestions or help would be appreciated.

I think the best approach to bringing about this change is to show the value of testing to your managers.
I suppose that without a well-organized test process, bug finding happens only by accident. One crucial issue found by your customer rather than by you can have a huge impact on the company's business. You can wait for that to happen, or you can start building the test group now.
It is also commonly accepted that finding bugs as early as possible saves the organization a lot of money, mostly because fixing an issue close to the time it was developed requires much less effort.
I would recommend Jira, a tool which allows you to organize bug tracking and also supports an agile development process.
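If it helps to make that concrete: here is a minimal, hedged sketch (in Python; the Jira URL, project key, and credentials are hypothetical placeholders) of filing a bug through Jira's REST API, so that even a lightweight process captures issues in one place:

    # File a bug via Jira's REST API (v2). All identifiers are placeholders.
    import requests  # third-party: pip install requests

    JIRA_URL = "https://your-company.atlassian.net"  # hypothetical instance

    def file_bug(summary: str, description: str) -> str:
        """Create a Bug issue and return its key (e.g. 'TEST-123')."""
        payload = {
            "fields": {
                "project": {"key": "TEST"},  # hypothetical project key
                "summary": summary,
                "description": description,
                "issuetype": {"name": "Bug"},
            }
        }
        resp = requests.post(
            f"{JIRA_URL}/rest/api/2/issue",
            json=payload,
            auth=("user@example.com", "api-token"),  # placeholder credentials
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["key"]

    if __name__ == "__main__":
        print(file_bug("Login fails in IE11", "Steps to reproduce: ..."))

A script like this can sit behind a mail handler or a simple form, so business users in all 8 countries report issues in a consistent way.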

I would suggest considering Comindware Tracker, a workflow automation tool. It executes the processes you create automatically, assigning tasks to the right team member only after the previous step in the workflow is completed. Furthermore, you can create forms visually, set your own workflow rules, and have your data processed automatically. You can configure Comindware Tracker to send e-mail notifications to users when a particular event occurs on a task or document, or to send scheduled e-mail reports. Discussion threads are available within every task. You can share a document with the team and it will be stored within the task; document versioning is supported.
Perhaps the key reason a small company just starting to optimize its workflows should consider Comindware Tracker is its ability to change workflows in real time during process execution, without interrupting it. As you are likely to make plenty of changes during your starting phase, this solution is worth considering. This product review might be useful: http://www.brighthubpm.com/software-reviews-tips/127913-comindware-tracker-review/
Disclaimer: I work at Comindware. We use Comindware Tracker to manage workflows within our company. I will be glad to answer any questions about the solution, should any arise.

If you are looking to release frequently, you should consider automated regression testing.
This involves having an automated test for every significant piece of functionality in your applications. In addition, when new functionality is developed, an automated regression test is written at the same time.
The benefit of the automated approach is that the regression tests can run in continuous integration. This allows you to regression test continuously and uncover any regression bugs soon after the code is written. A minimal sketch of such a test follows below.
Manual regression testing is very difficult to sustain. As you add more and more functionality to the applications, manual regression testing takes longer and makes it very difficult to release frequently. It also means the time spent testing will continually increase.
If your organisation decides not to go with test automation, then I would suggest you create a delivery pipeline that includes a manual regression testing phase. You might want to consider an agile framework such as Kanban for this (which typically works well with frequent releases).
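To illustrate the kind of test a CI server would run on every check-in, here is a hedged sketch in Python with pytest; the discount function and its rules are hypothetical stand-ins for real application logic:

    # A regression test a CI job can run on every check-in.
    # apply_discount and its rules are hypothetical stand-ins for real logic.
    import pytest

    def apply_discount(total: float, code: str) -> float:
        """Toy pricing rule under regression test."""
        if code == "SAVE10":
            return round(total * 0.9, 2)
        if code == "FREE":
            return 0.0
        return total

    @pytest.mark.parametrize(
        "total, code, expected",
        [
            (100.0, "SAVE10", 90.0),    # a rule that must never silently change
            (100.0, "FREE", 0.0),
            (100.0, "UNKNOWN", 100.0),  # unknown codes must not alter the price
        ],
    )
    def test_apply_discount(total, code, expected):
        assert apply_discount(total, code) == expected

Run it with pytest locally or from the CI job; a failure on check-in points straight at the commit that introduced the regression.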

Related

How to fit automation (System or E2E) tests in agile development lifecycle? [closed]

I am an automation test engineer and have never found a good answer on how to fit system integration (E2E) tests into the agile development life cycle.
We are a team of 10 developers and 2 QAs. The team is currently trying to baseline a process for the verification and validation of user stories once they have been implemented.
The current process we follow is a mixture of static reviews and manual/automated tests.
This is how our process goes:
1. Whenever a story is ready, the lead conducts a story preparation meeting where we discuss the requirements, make sure everybody is on the same page, estimate, etc.
2. The story comes onto the board and is picked up by a developer.
3. The story is implemented by the developer, including the necessary unit and integration tests.
4. The story then goes for code review.
5. Once the code review passes, it is deployed and released into production.
6. If something goes wrong in production, the code is reverted.
The real problem with validation and verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved). The automation test framework is still not mature enough for us to write automation tests quickly enough before the developers implement their code.
In such situations, we compromise on quality and release the code without properly testing it.
What would be the best approach in this situation? Currently, we are adding all these automation tests to our QA backlog and slowly building up our regression test pack.
Any good suggestions around this process are highly appreciated.
Here are some suggestions.
The real problem with validation and verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved).
This is where you need to invest time and effort. Possible approaches include:
Creating mock micro-services
Creating a test environment running real versions of the micro-services
Both of these approaches will be challenging, but once solved they typically pay off in the medium to long term; a minimal sketch of the mocking approach follows below.
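As a hedged illustration (the 'inventory' service, its endpoint, and the payload are hypothetical; only the Python standard library is used), one way to mock a downstream micro-service is to stand up a local HTTP stub inside the test itself:

    # Stub a downstream micro-service so a change can be tested without
    # the full environment. Service names and payloads are hypothetical.
    import json
    import threading
    import unittest
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubInventoryService(BaseHTTPRequestHandler):
        """Stands in for a real 'inventory' micro-service during tests."""

        def do_GET(self):
            body = json.dumps({"sku": "ABC-123", "in_stock": 5}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep test output quiet

    class CheckoutTests(unittest.TestCase):
        def setUp(self):
            self.server = HTTPServer(("127.0.0.1", 0), StubInventoryService)
            threading.Thread(target=self.server.serve_forever, daemon=True).start()
            self.base_url = f"http://127.0.0.1:{self.server.server_port}"

        def tearDown(self):
            self.server.shutdown()

        def test_stock_lookup(self):
            with urllib.request.urlopen(f"{self.base_url}/stock/ABC-123") as resp:
                payload = json.load(resp)
            self.assertEqual(payload["in_stock"], 5)

    if __name__ == "__main__":
        unittest.main()

The same stub pattern scales to several services, and because it needs no shared environment it runs happily in continuous integration.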
Currently, we are adding all these automation tests to our QA backlog and slowly building up our regression test pack.
The value from automated regression tests comes when they reach reasonable levels of coverage (say 50-70% of important features). You may want to consider spending some time getting the coverage up before working on new requirements. This short-term hit on the team's output will be more than offset by:
Savings in time spent manually testing
More frequent running of tests (possibly using continuous integration), which improves quality
A greater confidence amongst the developers to make changes to the code and to refactor
The automation test framework is still not mature enough for us to write automation tests quickly enough before the developers implement their code.
Why not get the developers involved in writing the automation tests? This would allow you to balance the creation of tests with the coding of new requirements. It may give the appearance of reducing the team's output, but the team will become increasingly efficient as coverage improves.
We are a team of 10 developers and 2 QAs
I like to think you are a team of 12 with development and QA skills. Share knowledge and spread the workload until you have a team that can deliver requirements and quality.
For our team, we lose time, but after a development story is done the corresponding test automation story is put into the next sprint.
Finished stories are unit tested and run through the current test automation scripts to make sure we haven't regressed with our past tests/code.
Once the new tests are constructed, we run our completed code through HP UFT and, if successful, set it up for deployment to production.
This probably isn't the best way to get things done, but it has been a way for us to make sure everything gets automated and tested before heading to production.

Is requirement engineering obsolete in the Scrum way of working? [closed]

The question may seem strange!
In the project I am working on now, the Scrum methodology was adopted three months ago. We used to follow a V-model, as is standard in the embedded industry.
Our project ran into some trouble and this decision was made. What currently happens is that the customer (Product Owner) gives top-level requirements directly to the development team; the requirements team is just a part of it.
The development team works on them and shows the final outcome to the Product Owner, and if changes are needed they are made. Once the Product Owner is happy with the result, the changes are reported to the requirements team, who document them and pass them to the test team.
My problem with such an approach is that we are effectively making the requirements team and the test team obsolete. They come too late into the process.
Is this the way Scrum works? In this process everything is driven by the development team and everyone else is more or less a spectator.
Somewhere I saw that we could still have the V-model within the Scrum methodology?
Edit:
I understand the tiny V-model releases every sprint. But my question is: do they all work in parallel? For example, in the traditional V-model, which is a modified waterfall, there was always a flow: the requirements team released the requirements to development and test, who worked in parallel on design, and once development was completed the test team started testing. How is that flow handled in the Scrum way of working?
You have mentioned that "The sprint is not complete until the requirements and test parts are done for each story." In our project at least the requirements part is being done (the test team is completely kept out, and the testing is more or less done by the development team on the product). But the requirements job is more or less a documentation job.
The entire Scrum process is being driven from the development team's perspective. We are seeing scenarios where the development team decides how certain functions work (because the initial concept is too difficult, or too complex, for them to implement).
There is no setting of boundaries at any level! Is this the way Scrum is supposed to work?
The test team on the project is more or less demoralized at this point. They know very well that any issue they find at the system test level is not going to get much attention. The usual excuse from the development team is that they don't see the issue on their machine.
Having a separate requirements engineering team is obsolete in the Scrum way of working. You should all be working together.
Scrum suggests that you work in multidisciplinary teams and in small increments. You can think of this as doing tiny V-model releases each sprint. The sprint is not complete until the requirements and test parts are done for each story; you should consider them part of your definition of done.
I'd suggest a good starting point for you is to actually read the Scrum Guide. It has the following to say about the make-up of development teams:
Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment;
Scrum recognizes no titles for Development Team members other than Developer, regardless of the work being performed by the person; there are no exceptions to this rule;
Scrum recognizes no sub-teams in the Development Team, regardless of particular domains that need to be addressed like testing or business analysis; there are no exceptions to this rule; and,
Individual Development Team members may have specialized skills and areas of focus, but accountability belongs to the Development Team as a whole.
As an aside, I have some experience working on an embedded system with agile methods, and we had great success using automated testing to replace manual testers. Our testers became responsible pretty much just for running the test suite on various hardware, physically executing the tests. We even built the tests fully into the production process; every new piece of hardware went through (a subset of) our test suite straight off the assembly line!

How to integrate testers in agile develop environment? [closed]

We work with Scrum and I think we are on the right track, but one thing bothers me: the testers aren't part of the development cycle yet. I am now thinking about how to involve the testers in the development team. Currently it is separated and the testers have their 'own' sprint.
We have a CI environment. Every time a developer finishes a user story, he checks in his code and the build server builds the code on every check-in.
What I want is for the testers to test the user stories in the same sprint in which they are implemented. But I am struggling with how to set this up.
My main question is: where can the testers test a user story? They can't test on the build server, because every check-in creates a new build, and there are a lot of check-ins. So that's not an option. Should I create a separate server that the testers can deploy to themselves? Or...
My question is: how have you set this up? How have you integrated the testers into the development process?
You need a staging server and to deploy a build to it every once in a while. That's how we do it: CI -> Dev -> Staging -> Live.
Edit: I always feel like an asshole posting wiki links, but this article about multi-stage CI is good: http://en.m.wikipedia.org/wiki/Multi-stage_continuous_integration
In my current project we have 4 small teams, each with 1 tester assigned. The testers are part of the daily standup, sprint planning meetings, etc. The testers also have their own daily standup so they can coordinate.
During Sprint Planning Meeting 2 we create acceptance criteria / examples / test cases (whatever you want to call them) together (testers, developers, and PO). The intent is to create a common understanding of the user story, to get the right direction, and to split it into smaller pieces of functionality (scenario/test case), e.g. just a specific happy path. Thereby we are able to deliver smaller working features, which can then be tested by the testers while the next part of the user story is implemented.
Furthermore, it is decided which stories need an automated acceptance test and at which level (unit, integration, GUI test) it makes the most sense.
As already mentioned by OakNinja :) you will need at least one additional environment for the testers.
In our case those environments are not quality gates but dev stages. Whenever a developer finishes some functionality, he tells the tester, who can redeploy whenever he wants.
When the user story is finished, it is deployed to the staging server, where the acceptance of the user story takes place.
Deployment process:
Dev + Test => Staging (used for acceptance) => Demo (used for demoing user stories every second week) => SIT and End2End testing environments (deployed every second week) => Production (deployed roughly every 6 months)
We have QA resources involved throughout the sprint: estimation, planning, etc. When the devs first start coding, the QA members of the team start creating the test cases. As code gets checked in, it is deployed to a separate environment on a scheduled basis (or as needed) so that QA can execute their tests during the sprint. QA is also involved in regression once the stories are mostly complete.
Our setup uses automated deployments driven by build configurations in TFS or TeamCity, depending on the project. Our environments are split like this:
Local development server. Developers have their own source code, IIS, and databases (if necessary) to isolate them from each other and from QA while working.
Build server. Used for CI, automated deployments. No websites or DBs here.
Daily Build environment (a.k.a. 'Dev' or 'Dev Test'). Fully functioning site where QA can review work as it is being done during the sprint and provide feedback.
QA lab (a.k.a. 'Regression' or 'UAT'). Isolated lab for regression testing, demos, and UAT.
We use build configurations to keep these up to date:
CI build on check-ins to handle check-ins from local devs.
Daily scheduled build and automated deploy to the Daily Build environment. Devs or QA can also trigger this manually, obviously, to make a push when needed.
Manual trigger for deploys to the QA environment (a hedged sketch of such a trigger follows after this list).
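For concreteness, here is a hedged sketch (in Python; the server URL, build configuration ID, and credentials are hypothetical placeholders) of scripting that manual QA deploy trigger against TeamCity's REST API, which queues builds via a POST to app/rest/buildQueue:

    # Trigger a TeamCity build/deploy configuration via its REST API.
    # Server URL, build configuration ID, and credentials are placeholders.
    import requests  # third-party: pip install requests

    TEAMCITY_URL = "https://teamcity.example.com"
    BUILD_TYPE_ID = "Project_DeployToQa"  # hypothetical configuration ID

    def trigger_qa_deploy() -> None:
        """Queue the 'deploy to QA' build configuration."""
        resp = requests.post(
            f"{TEAMCITY_URL}/app/rest/buildQueue",
            data=f'<build><buildType id="{BUILD_TYPE_ID}"/></build>',
            headers={"Content-Type": "application/xml"},
            auth=("qa-user", "password"),  # placeholder credentials
            timeout=30,
        )
        resp.raise_for_status()
        print("Deploy queued, HTTP", resp.status_code)

    if __name__ == "__main__":
        trigger_qa_deploy()

Newer versions of TFS expose a similar queue-build REST call, so the same approach carries over if that is your build server.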
One point is missing from the explanations above: the best way to add your testers to the Scrum process is to make sure they are part of the Scrum team and work together with the rest of the team (devs, PO, etc.) in the sprint. Most of the time this is not really done, and all you end up with is (in the best case) a mini-waterfall process.
Now let me explain. There is little to add to the extensive hardware and environment explanations above; you can work with staged servers, or even better, make it an internal feature to have scripts in place that allow testers to create their own environments whenever they want (if you are using any CI framework, chances are you already have all the parts needed).
What bothers me is that you said your testers "have their 'own' sprint".
The main problem I've seen when getting testers involved in the Scrum process is that they are not really part of the process itself. Sometimes the feeling is that they are not technical enough to work really closely with developers; other times developers simply don't want to be bothered with explaining to testers what they are doing (until they are finished, which is not the same as done!); other times it is simply a case of management not explaining what is expected from the team.
In a nutshell, each user story should have a technical owner and a testing owner. They should work together all the time, and testing should start as soon as possible, even as short "informal clean-up tests" in the developer's environment. After all, the idea is to cut the red tape by eliminating all the unnecessary bureaucracy in the process.
Testers should also explain to developers the kind of testing the developers should do before telling QA they can have a go at the feature. Manual testing is as much the responsibility of the developer as it is of the tester.
In short, if you want testers to be part of your development, then even more important than having the right infrastructure in place is having the right mindset in place, and this means changing the rules of the game and, in many cases, the way each person in the team sees his task and responsibility.
I wrote a couple of posts on the subject in my blog; in case I haven't bored you too much already, you may find these interesting.
Switching to Agile, not as simple as changing your T-Shirt
Agile Thinking instead of Agile Testing
I recommend reading the article "5 Tips for Getting Software Testing Done in the Scrum Sprint" by Clemens Reijnen. He explains how to integrate software testing teams and practices into a Scrum sprint.

How should we automate system testing? [closed]

We are building a large CRM system based on the SalesForce.com cloud. I am trying to put together a test plan for the system, but I am unsure how to create system-wide tests. I want to use behaviour-driven testing techniques for this, but I am not sure how to apply them to the platform.
For the custom parts we will build, I plan to use either Cucumber or SpecFlow driving Selenium actions on the UI. But for the SalesForce UI customisations, I am not sure how deep to go in testing. Customisations such as workflows and validation rules can encapsulate a lot of complex logic that I feel should be tested.
Writing Selenium tests for this out-of-the-box functionality in SalesForce seems overly burdensome for the value. Can you share your experiences with system testing on the SalesForce.com platform, and how should we approach this?
That is the problem with a detailed test plan up front: you are trying to guess what kinds of errors you will get, how many, and in what areas. That can be tricky.
Maybe you should have an overall master test plan specifying only the test strategy, the main tool set, the risks, and the relative amount of testing you want to put into given areas (based on risk).
Then, when you start to work on a given piece of functionality or iteration (I hope you are doing this in iterations, not waterfall), you prepare a detailed test plan for that set of work. You adjust your tools/estimates/test coverage based on experience from the previous parts.
This way you can say at the beginning what your general approach and priorities are, but you let yourself adapt as the project progresses.
The question of how much testing you need to put into COTS is the same as with any software: you need to evaluate the risk.
If your software needs to be validated because of external regulations (FDA, DoD, ...), you will need to go deep with your tests and test almost the entire app. One problem here may be assuring the external regulator that the tools you used for validation are themselves validated (and that is troublesome).
If your application is mission-critical for your company, then you still need to do a lot of testing, based on extensive risk analysis.
If your application is not covered by either of the above, you can go with lighter testing. You can probably skip functionality that was tested by the platform manufacturer and focus on your customisations. On the other hand, I would still write tests (at least happy paths) for the workflows you will be using in your business processes.
When we started learning Selenium testing in 2008, we created the Recruiting application from the SalesForce handbook, built a suite of tests, and described our path step by step in our blog. It may help you get started if you decide to write Selenium code to test your app.
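For a first step, here is a hedged sketch (Python bindings for Selenium; the locators on the SalesForce login page, the credentials, and the assertion are assumptions you would verify against your own org):

    # A first Selenium UI test. Locators, URL, and credentials are assumptions.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_reaches_home():
        driver = webdriver.Chrome()  # assumes a chromedriver is available
        try:
            driver.get("https://login.salesforce.com")
            driver.find_element(By.ID, "username").send_keys("user@example.com")
            driver.find_element(By.ID, "password").send_keys("secret")
            driver.find_element(By.ID, "Login").click()
            # Assert on something stable after login; adjust to your org.
            assert "Home" in driver.title
        finally:
            driver.quit()

    if __name__ == "__main__":
        test_login_reaches_home()

Keeping locators in one place (page objects) is what stops suites like this from becoming brittle as the UI customisations evolve.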
I believe the problem with SalesForce is that you have unit and UI testing, but no service-level testing. The SpecFlow I've seen driving a Selenium UI is brittle and doesn't encapsulate what I'm after in engineering a service-level test solution:
When I navigate to "/Selenium-Testing-Cookbook-Gundecha-Unmesh/dp/1849515743"
And I click the 'buy now' button
And then I click the 'proceed to checkout' button
That is not the spirit or intent of SpecFlow. It should read more like:
Given I have not selected a product
When I select Proceed to Checkout
Then ensure I am presented with a message
To test that with Selenium, you essentially have to translate it into clicks and typing, whereas in the .NET realm you can instantiate objects in the middle tier and run hundreds of instances and derivations against the same Background (mock setup).
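To make the contrast concrete, here is a hedged sketch (in Python rather than .NET; the Cart class and its message are hypothetical) of that same scenario exercised at the service level, with no browser involved:

    # The Gherkin scenario above, exercised directly against the middle tier.
    # Cart and its checkout message are hypothetical stand-ins.
    class Cart:
        def __init__(self):
            self.items = []

        def proceed_to_checkout(self) -> str:
            if not self.items:
                return "Please select a product before checking out."
            return "OK"

    def test_checkout_with_no_product_shows_message():
        cart = Cart()                          # Given I have not selected a product
        message = cart.proceed_to_checkout()   # When I select Proceed to Checkout
        assert "select a product" in message   # Then I am presented with a message

    if __name__ == "__main__":
        test_checkout_with_no_product_shows_message()
        print("service-level scenario passed")

Hundreds of variations of this scenario run in milliseconds, which is exactly what the click-and-type translation cannot give you.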
I'm told that you can expose SF through an API, at some security risk. I'd love to find out more about THAT.

Software "Robots" - Processes or work automation [closed]

I have been toying with the idea of creating software "robots" to help with different areas of the development process: repetitive tasks, automatable tasks, etc.
I have quite a few ideas about where to begin.
My problem is that I work mostly alone, as a freelancer, and work tends to pile up, and I don't like to extend or "blow" deadline dates.
I have investigated and use quite a few productivity tools. I have looked into code generation and I am planning a tool to generate portions of code. I use code-reuse techniques, etc.
Has anyone given this any thought? Are there any good articles?
I wouldn't like to use code generation, but I have developed many tools to help me with many of the repetitive tasks.
Some of these could do nice things:
Email robots
These receive emails and do a lot of stuff with them; they need some kind of authentication to protect you from the bad stuff (a minimal sketch of one follows after these lists):
Automatically log whatever was entered in a database or Excel spreadsheet.
Update something in a database.
Save all the attachments in a specific shared folder.
Reboot a server.
Productivity
These will do repetitious tasks:
Print out all the invoices for the month.
Automatically merge data from several sources.
Send reminders of GTD items.
Send reminders of late TODO items.
Automated builds
Automated testing
Administration
These automate some repetitive server administration tasks:
Summarize server logs, remove regular items and send the rest by email
Rebuild indexes in a database
Take automatic backups
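As promised above, here is a hedged sketch (in Python; the IMAP host, credentials, and shared-folder path are hypothetical placeholders) of the attachment-saving email robot:

    # An 'email robot' that saves attachments from unread mail to a shared
    # folder. Host, credentials, and destination path are placeholders.
    import email
    import imaplib
    import os

    SHARED_FOLDER = "/srv/shared/attachments"  # hypothetical destination

    def save_new_attachments(host: str, user: str, password: str) -> None:
        mail = imaplib.IMAP4_SSL(host)  # authenticate: keeps the bad stuff out
        mail.login(user, password)
        mail.select("INBOX")
        _, data = mail.search(None, "UNSEEN")  # only process unread messages
        for num in data[0].split():
            _, msg_data = mail.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            for part in msg.walk():
                filename = part.get_filename()
                if filename:  # this MIME part is an attachment
                    path = os.path.join(SHARED_FOLDER, filename)
                    with open(path, "wb") as f:
                        f.write(part.get_payload(decode=True))
        mail.logout()

    if __name__ == "__main__":
        save_new_attachments("imap.example.com", "robot@example.com", "secret")

Scheduled with cron or the Windows Task Scheduler, the same skeleton covers the logging, updating, and reboot robots as well.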
Meta-programming is a great thing. If you can easily get access to the data about the class structure, then you can automate a few things. In the high-level language I use, I define a class like 'Property', for example: add an integer for street number, a string for street name, and a reference to the owning debtor. I then auto-generate a form that has text boxes for street number and street name, a lookup box for the debtor reference, and the code to save and load is all auto-generated. It knows that street number is an integer, so its text box only accepts integers. If I declare a read-only property, it will also make the text box read-only. A minimal sketch of the idea follows below.
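Since the answer doesn't name its language, here is a hedged sketch of the same idea in Python: introspect a class definition and derive a form description from the field types (the widget names are illustrative):

    # Derive a form description from a class definition via introspection.
    # Widget names are illustrative; a real tool would emit actual UI code.
    from dataclasses import dataclass, fields

    @dataclass
    class Property:
        street_number: int
        street_name: str
        debtor_id: int  # stand-in for a reference to the owning debtor

    # Integers get numeric-only text boxes, strings get plain text boxes.
    WIDGETS = {int: "integer textbox", str: "textbox"}

    def generate_form(cls):
        """Return one widget description per field of the class."""
        return [f"{f.name}: {WIDGETS.get(f.type, 'textbox')}" for f in fields(cls)]

    if __name__ == "__main__":
        for line in generate_form(Property):
            print(line)

The save and load code can be generated the same way, by walking the same field list and emitting SQL or serialization calls.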
There are software robots, but often you really don't see them. For example, consider a robot that is used to package goods. There is a person who monitors the robot in case of failure; when the robot fails, the person shuts it down and fixes things. That person is like a programmer who operates an IDE to compile, refactor, etc. When errors occur, the programmer fixes the code and runs the compiler again.
Well, compiling is not very robot-like, but then there is software that compiles your project automatically. That is more like a kind of robot. Such a software robot can also check things in the code, like whether there are enough comments, and so on.
Then we have software that generates code according to our input. For example, we can create forms in MS Access easily with wizards. The wizards do not automatically produce new forms, form after form after form, because we need every form to be different. But the form generator is a kind of robot-like tool that is operated.
Of course, you could input the details of every form first and then run the generator, but people like to see every form soon. Also, the input mechanism pretty much is the form already, so you get what you create on the fly. With data transformation tools, though, you can create descriptions of forms from a list of field names, generate the forms, and call that using robots.
There are even whole books about automated software production, but the biggest problem is that automating the process takes longer than the process itself.
Mostly, programmers give up on this, since they try to achieve everything in one step, going from manual programming straight to full automation.
Common automation in software production is done through IDEs, code generators, and the like; so far, almost no logic is automated.
I would appreciate any advance in this topic. Try to automate little tasks from the process, and connect those tasks afterwards, going step by step.
I'm guessing that, just like about every software developer on planet Earth, you want to write software that writes software by itself. Unfortunately, it's an idea that only works on paper. I mean, we have things like code generators, DSLs, transformation pipelines, Visual Studio add-ins that statically analyse code and generate derivative code, and so on. But it's nowhere near anything one would call a 'robot'.
Personally, I think more needs to be done in this area. For example, the IDE should be able to infer things and make suggestions based on what I'm actually doing. If I'm adding a property, say, the IDE could infer what attributes the other properties in the file have and how the property itself is structured, and adjust the new property accordingly.
Any sort of AI is hard work and, regrettably, does not have a great ROI. But it sure is fun.
Scripting away the repetitive tasks - is that what you mean? I guess you're a Windows developer, where scripting is not nearly as common as in the *nix world. Hence your question.
You might want to have a look at the *nix side of the software development arena, where the workflow is more or less what you describe (at least more so than on Windows). Plowing your way through bash, perl, python, etc. will get you what you want.
P.S. Also look at nsr81's post in the comments for similar scripting tools on Windows.
Code generation is certainly a viable tool for some tasks. Done poorly it can create maintenance problems, but it doesn't have to be done poorly. See the Code Generation Network for a fairly active community, with conferences, papers, etc.
Code Generation in Action is one book that comes to mind.
You can try Robot Framework:
http://robotframework.org/
Robot Framework is a generic automation framework. It has an easy-to-use tabular test data syntax and utilizes the keyword-driven approach.
You can even use this tool as a software bot (RPA).
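A hedged sketch of driving it programmatically (in Python; 'acceptance.robot' is a hypothetical suite file you would write separately in Robot Framework's own syntax):

    # Run a Robot Framework suite from Python and inspect the result.
    # 'acceptance.robot' is a hypothetical suite file written separately.
    import robot  # third-party: pip install robotframework

    # Execute the suite; log, report, and output XML land in ./results.
    rc = robot.run("acceptance.robot", outputdir="results")
    print("Robot Framework return code:", rc)  # 0 means every test passed

Driving it this way makes it easy to embed the suite in a larger bot or scheduler.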
Robotic Process Automation
First, a little back-story... In 2011, I was the Operations Manager for the Contracting Center of Excellence at Bristol-Myers Squibb. We were in the early stages of rolling out a brand-new global contracting system. This new system was replacing a great deal of manual effort across the globe, with the intention of one system to create, store, and retrieve contracting information for the whole organization. No small task, to be sure, and one whose scope and eventual impact we certainly underestimated. Like most organizations getting a handle on this contract management process, we found it to be from 4 to 10 times larger than originally expected.
We did a lot of things very right, including building a support organization from the ground up that specialized in this specific application and became true subject matter experts for the organization, in (7) languages and most time zones.
The application, on the other hand, brought its own challenges, which included missing features, less-than-stellar performance, and a lot of back-end work needing to be done by the Operations team. This is where Robotic Process Automation comes into the picture.
Many of the 'features' of this software were simply too complicated for end users, but were required to create contracts. The first example was adding a "Contact" with whom the contract would be made - the "third party", if you will. This is a seemingly simple thing which took (7) screens of data entry, a cryptic point of access, twenty-two minutes, and a master's degree to figure out on your own, for each one. We quickly made the business decision to have the Operations team create these 'Contacts' on behalf of our end users. We anticipated the need to be a few thousand a year. We very quickly passed 800 requests per week. With three FTEs working on it, we had an ever-growing backlog and a turn-around time of more than two weeks per request. Obviously, this would NOT do in any business environment.
The manual process was so complicated that even my staff, subject matter experts though they were, made a large number of errors in creating them. The resulting re-work further complicated the issue and added costs. I had some previous automation experience and products I had worked with, but this need was even more intense and complicated than anything I had encountered before. I needed something great, fast, easy to implement, and that would NOT require IT assistance (as that had its own pitfalls). I investigated a number of products, all professing to do similar things. One, of course, stood out to me. It seemed to be the most capable and affordable, and had good support options. The product I selected was Automation Anywhere, at the bargain price of about $4,000.00 USD.
Now, don't get me wrong, I am not here to pitch for Automation Anywhere, or any specific product for that matter. But my experiences with this tool forever changed my expectations and understanding of what Robotic Process Automation really means. (See the definitions below, if you are unsure.)
After my first week, having bought the tool and learned some of its features, I was able to replace the manual process of creating a "Contact" in the contracting system, taking the turn-around from two weeks to (1) hour and the FTE effort from 22 minutes per entry to zero. I was able to run this automated process from a desktop PC and handle every request, fully automated, including the validation and confirmation steps against other external systems, ensuring better data quality than was ever possible previously. In the first week, my costs for the software were recovered by over 200% in saved labor, allowing those resources to focus on other, higher-value tasks. I don't care where you are from, that is an amazing ROI!
That was just the beginning. Now that we had this tool, which in fact could do much more than the initial task I needed it for, it became one of the most valued resources for developing functional proof-of-concept prototypes of the more complex processes we needed to bridge the gaps in the contracting system. I was able to add an enterprise license to the original purchase and secure a more robust infrastructure, partnering with our IT department, at an insanely low cost for the total implementation. I now had (5) dedicated corporate servers operating 24/7 and (2) development licenses for building and supporting automation tasks, and we were able to continue supporting the contracting initiative, even with the volume so much greater than anticipated, with the same number of FTEs we started with. It became the platform for reporting, end-user notification, system alerts, data updates, workflow, job scheduling, monitoring, ETL, and even data entry and migration from other systems. The cost avoidance from implementing this Robotic Process Automation tool cannot be overstated. The soft-dollar savings from delivering timely solutions to the business community, and the continued professional integrity we were able to demonstrate and promote, are evident in the successful implementation in more than 48 countries in under (1) year and the entry of over 120,000 contracts each year since.
While the term Robotic Process Automation is currently all the buzz, the concepts have been around for some time. Please, please, however, don't assume that this means it is a build-and-forget situation. As it grows, and it will grow, you need a strong plan to manage tasks, resources, and infrastructure to keep things running. These tools basically mimic anything a human can do, and much more besides. However, a human can fairly quickly change their steps in a process if one of the 'source' systems they are using changes its user interface; your automation tasks will need to be 'tweaked' to handle that change in most cases. Some business processes are easier to automate than others, and some may be too complex for a casual "automation task creator" to build or maintain. Be very sure you have solid resources to build and maintain the tasks. If you plan to do more than one thing with your RPA tool, make sure you have solid oversight, governance, resources, and a corporate 'champion', or I assure you, your efforts will not be successful.
Robotic Process Automation Defined:
(IRPA) Institute for Robotic Process Automation: “Robotic process automation (RPA) is the application of technology that allows employees in a company to configure computer software or a “robot” to capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems.”
Wikipedia: “Examples of robotic automation include the use of industrial robots in manufacturing and the use of software robots in automating clerical processes in services industries. In the latter case, the use of the term robot is metaphorical, conveying the similarity of those software products – which are produced to provide a generic automation capability and then configured within the end user environment to execute manual and repetitive tasks – to their industrial robot counterparts. The metaphor is apt in the sense that the software “robot” is now mimicking or replacing a function classically associated with a person.”