Bugtracker - aggregation and automated workflow - process

Intro:
I work for a contracting company. We build software for different corporate clients, each with their own rules, software standards, etc.
Problem:
The result is that we use several bug-tracking systems. The flow of tickets is relatively large and the SLAs can be brutal. The main problem is that we track these tickets in our own BT (currently Mantis) while also communicating with clients in theirs. As it is, too many channels of communication create too much information noise.
Solution, progress:
The current solution is an employee who is responsible for synchronizing the streams, keeping track of the SLAs and many other things. It consumes quite a large part of his time (approx. 70%) that could be spent on something more valuable. The other problem is that he is not fast enough and sometimes the sync is not really in sync: some comments end up in only one system, some are lost completely. (And don't get me started on holidays or sick days, that's where the fun begins.)
Question:
How can this process be automated, partially or completely: aggregating tasks, watching SLAs, notifying the right people, etc.?
Thank you for your answers.

You need something like Zapier. It can map different applications and synchronize data between them. It works simply:
You create a zap (for example between Redmine and Teamwork).
You configure the mapping (how items/attributes in Redmine map to items/attributes in Teamwork).
You generate access tokens in both systems and add them to the zap.
Zapier then synchronizes Redmine and Teamwork on a regular schedule.
But Mantis is not yet supported by Zapier. If all or most of your clients' BTs are on Zapier's app list, you could move your own BT to another platform or request Mantis support from Zapier.
Another way is to develop your own synchronization service that connects to each client's BT (as an employee would, using a login/password/token) and downloads updates into your own BT. This is the hard way, and it requires continuous development to keep up with the current versions of the clients' BTs.
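To illustrate the shape of such a service, here is a minimal sketch, assuming a hypothetical client BT with a REST API; every endpoint, token and field name below is a placeholder you would replace per client:

    import requests

    # Hypothetical client BT endpoint and token - each client's API will differ.
    CLIENT_BT = "https://client-bt.example.com/api/issues"
    CLIENT_TOKEN = "..."

    def fetch_client_updates(since):
        """Pull issues changed since a given timestamp from the client's BT."""
        resp = requests.get(
            CLIENT_BT,
            params={"updated_after": since},
            headers={"Authorization": f"Bearer {CLIENT_TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()  # assumed to be a list of issue dicts

    def push_to_mantis(issue):
        """Mirror one issue into your own BT (the Mantis call is left abstract)."""
        print(f"sync {issue['id']}: {issue['summary']} (SLA due {issue['sla_due']})")

    for issue in fetch_client_updates(since="2015-01-01T00:00:00Z"):
        push_to_mantis(issue)

Run something like this per client on a schedule; the hard part is not the polling but maintaining a field mapping for every client BT and handling conflicts when both sides change.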

You can have a look at Slack: https://slack.com/
It's a great tool for group conversations:
Talk, share, and make decisions in open channels across your team, in
private groups for sensitive matters, or use direct messages
one-to-one.
It has a lot of integration tools, and you can use Zapier (https://zapier.com/) with it to program triggers.
With different channels you can notify the right people, individually or all together, in a group conversation :)
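For example, an SLA watcher could push alerts into the right channel through a Slack incoming webhook; a minimal sketch (the webhook URL below is the placeholder Slack gives you when you add the integration):

    import json
    import urllib.request

    # Placeholder webhook URL - created when you add the "Incoming WebHooks"
    # integration to your Slack team.
    WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

    def notify(text):
        """Post a plain-text message into the channel the webhook is bound to."""
        req = urllib.request.Request(
            WEBHOOK,
            data=json.dumps({"text": text}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    notify("Ticket ACME-123 breaches its SLA in 2 hours - please respond")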

The obvious answer is to create integrations between all of the various BTs. Without knowing what those are, it's hard to say if that's entirely possible. Most modern BTs have an API and support integrations. Some, especially more desktop-based ones, don't. For those you probably have to monitor a database directly.
Zapier, as someone already suggested, is a great tool for creating integrations and may already have some of the ones you need available. I love Slack and it has an API, but messages are basically just text, and unless you want to do some kind of delimiting when you post messages to its API, it probably isn't going to work.
I'm not sure what your budget is, but it will cost resources to create the integrations. I'd suggest that you hire someone to simply manage these: someone whose sole responsibility is to cross-populate the internal and external bug tracking systems and track the progress in each. All you really need is someone with good attention to detail for this; they don't have to be a developer. This should be more cost effective than using developer resources on it.
The other alternative is simply to stop. If your requirements dictate that you use your clients' bug tracking software for projects you do for them, just use their software and stop duplicating the effort. If you need some kind of central repository for managing work, maybe a simple table or spreadsheet somewhere with the client, the project, the issue number, the status and, if possible, a link to the issue in the client's BT. I understand the need and desire for centralizing this, but if it's stifling productivity, then the opportunity costs are too high IMO.
If you create an integration tool for this, you will indeed have a very viable product. This is actually a pretty common problem.


What's the best way to monitor your REST API? [closed]

I've created an API based on the RESTful pattern and I was wondering what the best way to monitor it is. Can I somehow gather statistics on each request, and how deeply can I monitor the requests?
Also, could it be done by using open source software (maybe building my own monitoring service) or do I need to buy third party software?
If it could be achieved by using open source software where do I start?
Start with identifying the core needs that you think monitoring will solve. Try to answer the two questions "What do I want to know?" and "How do I want to act on that information?".
Examples of "What do I want to know?"
Performance over time
Largest API users
Most commonly used API features
Error occurrence in the API
Examples of "How do I want to act on that information?"
Review a dashboard of known measurements
Be alerted when something changes beyond expected bounds
Trace execution that led to that state
Review measurements for the entire lifetime of the system
If you can answer those questions, you can either find the right third-party solution that captures the metrics you're interested in, or inject monitoring probes into the right sections of your API that will tell you what you need to know. I noticed that you're primarily a Laravel user, so many of the metrics you want can likely be captured by adding before ( Registering Before Filters On a Controller ) and after ( Registering an After Application Filter ) filters to your application, to measure time to response and successful completion of the response. This is where the answers to the first set of questions are most important ( "What do I want to know?" ), as they will guide where and what you measure in your app.
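The Laravel filters above are the idiomatic place for those probes; as a framework-neutral illustration of the same "probe around the request" idea, here is a minimal sketch written as a Python WSGI middleware (the recording sink is a hypothetical callable you would wire to your metrics store):

    import time

    class TimingMiddleware:
        """Wraps a WSGI app and records path, status and duration per request."""

        def __init__(self, app, record):
            self.app = app
            self.record = record  # hypothetical sink: callable(path, status, seconds)

        def __call__(self, environ, start_response):
            start = time.time()
            captured = {}

            def capturing_start_response(status, headers, exc_info=None):
                captured["status"] = status
                return start_response(status, headers, exc_info)

            response = self.app(environ, capturing_start_response)
            # Note: this measures until the app returns its iterable,
            # not until the last byte is streamed to the client.
            self.record(environ.get("PATH_INFO", ""),
                        captured.get("status", "unknown"),
                        time.time() - start)
            return response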
Once you know where you can capture the data, selecting the right tool becomes a matter of choosing between (roughly) two classes of monitoring applications: highly specialized monitoring apps that are tightly bound to the operation of your application, and generalized monitoring software that is more akin to a time series database.
There are no popular (to my knowledge) examples of the highly specialized case that are open source. Many commercial solutions do exist however: NewRelic, Ruxit, DynaTrace, etc. etc. etc. Their function could easily be described to be similar to a remote profiler, with many other functions besides. (Also, don't forget that a more traditional profiler may be useful for collecting some of the information you need - while it definitely will not supplant monitoring your application, there's a lot of valuable information that can be gleaned from profiling even before you go to production.)
On the general side of things, there are many more open source options that I'm personally aware of. The longest lived is Graphite (a great intro to which may be read here: Measure Anything, Measure Everything), which is in pretty common use. Graphite is far from the only option, however, and you can find many other options like Kibana and InfluxDB should you wish to host it yourself.
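Part of Graphite's appeal is how little it takes to feed it a data point: its Carbon listener accepts a plaintext "metric value timestamp" line on port 2003. A minimal sketch (the host and metric name are placeholders):

    import socket
    import time

    def send_metric(name, value, host="graphite.example.com", port=2003):
        """Push one data point to Graphite's plaintext Carbon listener."""
        line = f"{name} {value} {int(time.time())}\n"
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(line.encode("ascii"))

    # e.g. record a response time measured elsewhere in your API code
    send_metric("api.orders.get.response_ms", 42)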
Many of these open source options also have hosted options available from several providers. Additionally, you'll find that there are many entirely commercial options available in this camp (I'm founder of one, in fact :) - Instrumental ).
Most of these commercial options exist because application owners have found it pretty onerous to run their own monitoring infrastructure on top of running their actual application; maintaining availability of yet another distributed system is not high on many ops personnel's wishlists. :)
(I'm clearly biased for answering this since I co-founded Runscope which I believe is the leader in API Monitoring, so you can take this all with a grain of salt or trust my years of experience working with 1000s of customers specifically on this problem :)
I don't know of any OSS tools specific to REST(ful) API monitoring. General purpose OSS metrics monitoring tools (like Graphite) can definitely help keep tabs on pieces of your API stack, but don't have any API-specific features.
Commercial metrics monitoring tools (like Datadog) or Application Performance Monitoring (APM) tools like (New Relic or AppDynamics) have a few more features specific to API use cases, but none are centered on it. These are a useful part of what we call a "layered monitoring approach": start with high-level API monitoring, and use these other tools (exception trackers, APM, raw logs) to dive into issues when they arise.
So, what API-specific features should you be looking for in an API monitoring tool? We categorize them based on the three factors that you're generally monitoring for: uptime/availability, performance/speed and correctness/data validation.
Uptime Monitoring
At a base level you'll want to know if your APIs are even available to the clients that need to reach them. For "public" (meaning available on the public internet, not necessarily publicized; a mobile backend API is public but not necessarily publicized) APIs, you'll want to simulate the clients that are calling them as much as possible. If you have a mobile app, it's likely the API needs to be available around the world. So at a bare minimum, your API monitoring tool should allow you to run tests from multiple locations. If your API can't be reached from a location, you'll want notifications via email, Slack, etc.
If your API is on a private network (corporate firewall, staging environment, local machine, etc.) you'll want to be able to "see" it as well. There are a variety of approaches for this (agents, VPNs, etc.) just make sure you use one your IT department signs off on.
Global distribution of testing agents is an expensive setup if you're self-hosting, building in-house or using an OSS tool. You need to make sure each remote location you set up (preferably outside your main cluster) is highly-available and fully-monitored as well. This can get expensive and time-consuming very quickly.
Performance Monitoring
Once you've verified your APIs are accessible, you'll want to start measuring how fast they perform to make sure they're not slowing down the apps that consume them. Raw response time is the bare minimum metric you should be tracking, but not always the most useful. Consider cases where multiple API calls are aggregated into a view for the user, or actions by the user generate dynamic or rarely called data that may not be present in a caching layer yet. These multi-step tasks or workflows can be difficult to monitor with APM or metrics-based tools, as those don't have the capabilities to understand the content of the API calls, only their existence.
Externally monitoring for speed is also important to get the most accurate representation of performance. If the monitoring agent sits inside your code or on the same server, it's unlikely to take into account all the factors an actual client experiences when making a call: DNS resolution, SSL negotiation, load balancing, caching, etc.
Correctness and Data Validation
What good is an API that's up and fast if it's returning the wrong data? This scenario is very common and is ultimately a far worse user experience. People understand "down"; they don't understand why an app is showing them the wrong data. A good API monitoring tool will allow you to do deep inspection of the message payloads going back and forth. JSON and XML parsing, complex assertions, schema validation, data extraction, dynamic variables, multi-step monitors and more are required to fully validate that the data being sent back and forth is correct.
It's also important to validate how clients authenticate with your API. Good API-specific monitoring tools will understand OAuth, mutual authentication with client certificates, token authentication, etc.
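To tie the three factors together, here is a deliberately tiny sketch of what a single monitoring check might assert; the endpoint, threshold and payload expectation are hypothetical, and a real tool would run this on a schedule from many locations:

    import time
    import requests

    # Hypothetical endpoint and expectations - adjust to your own API.
    URL = "https://api.example.com/v1/orders/123"
    MAX_SECONDS = 1.0

    start = time.time()
    resp = requests.get(URL, timeout=10)
    elapsed = time.time() - start

    checks = {
        "uptime":      resp.status_code == 200,                 # availability
        "performance": elapsed < MAX_SECONDS,                   # speed
        "correctness": resp.json().get("status") == "shipped",  # payload assertion
    }

    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        # hook your alerting (email, Slack, PagerDuty, ...) in here
        print(f"ALERT: {URL} failed {', '.join(failed)} ({elapsed:.2f}s)")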
Hopefully this gives you a sense of why API monitoring is different from "traditional" metrics, APM and logging tools, and how they can all play together to give a complete picture of how your application is performing.
I am using runscope.com for my company. If you want something free, apicombo.com can also do it.
Basically you can create a test for your API endpoint to validate the payload, response time, status code, etc. Then you can schedule the test to run. They also provide some basic statistics.
I've tried several applications and methods to do that, and the best (for my company and our related projects) is to log key=value pairs (atomic entries with all the information associated with the operation, like source IP, operation result, elapsed time, etc., in specific log files for each node/server) and then monitor them with Splunk. With your REST and JSON data your approach may be different, but it's also well supported.
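The logging side of that approach is trivially small; a minimal sketch (the field names are only illustrative, pick whatever your dashboards need):

    import logging
    import time
    import uuid

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    log = logging.getLogger("api.audit")

    def log_request(source_ip, operation, result, elapsed_ms):
        """Emit one atomic key=value line per operation, ready for Splunk to index."""
        log.info(
            "request_id=%s ts=%d src=%s op=%s result=%s elapsed_ms=%d",
            uuid.uuid4(), int(time.time()), source_ip, operation, result, elapsed_ms,
        )

    log_request("10.0.0.12", "GET /v1/orders", "ok", 87)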
It's pretty easy to install and setup. You can monitor (almost) real time data (responses times, operation results), send notifications on events and do some DWH (and many other things, there are lots of plugins).
It's not open source, but you can try it for free if you use less than 50MB of logs per day (that's how it worked some time ago; now that I'm on an enterprise license I'm not 100% sure).
Here is a little tutorial explaining how to achieve what you are looking for: http://blogs.splunk.com/2013/06/18/getting-data-from-your-rest-apis-into-splunk/

Etherpad style synchronisation in Meteor?

I'm looking into Meteor to create a collaborative document editing app, because it's great that Meteor synchronizes data between multiple clients by default.
But when using a text-area, like in Sameer Kalburgi’s example
http://www.skalb.com/2012/04/16/creating-a-document-sharing-site-with-meteor-js/
http://docshare-tutorial.meteor.com/
the experience is sub-optimal.
I tried to type at the same time as a colleague and my changes would be overwritten when she typed, and vice versa. So there is no merge algorithm in the conflict resolution yet, I think?
Is this planned for the future? Are there ways to implement this currently? Etherpad seems to handle this problem rather well. Having this in Meteor would make creating collaborative document editing apps much more accessible.
So I looked into it some more, the algorithm used in Etherpad is known as Operational Transformation:
The solution is Operational Transformation (OT). If you haven’t heard of it, OT is a class of algorithms that do multi-site realtime concurrency. OT is like realtime git. It works with any amount of lag (from zero to an extended holiday). It lets users make live, concurrent edits with low bandwidth. OT gives you eventual consistency between multiple users without retries, without errors and without any data being overwritten.
Unfortunately, implementing OT sucks. There's a million algorithms with different tradeoffs, mostly trapped in academic papers. The algorithms are really hard and time consuming to implement correctly. We need some good libraries, so any project can just plug in OT if they need it.
That's from the ShareJS site: a Node.js-based OT server/client that you can hook into your existing client.
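To make "transformation" slightly more concrete before diving into libraries: for the simplest case, two concurrent insertions into the same text, the core primitive is a tiny position-shifting function. A toy sketch (real OT libraries such as ShareJS handle deletes, composition, tie-breaking and many edge cases on top of this):

    def apply_insert(doc, pos, text):
        return doc[:pos] + text + doc[pos:]

    def transform_pos(pos, other_pos, other_text):
        """Shift an insert's position to account for a concurrent insert
        that has already been applied to the document."""
        return pos + len(other_text) if other_pos <= pos else pos

    doc = "helo"
    alice = (3, "l")   # fix the typo
    bob = (4, "!")     # add excitement, concurrently

    # Server applies Alice first, then Bob's op transformed against Alice's:
    doc = apply_insert(doc, *alice)                          # "hello"
    doc = apply_insert(doc, transform_pos(4, 3, "l"), "!")   # "hello!"
    print(doc)  # both edits survive, nothing is overwritten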
OT is also implemented in the Racer model synchronization engine for Node.js, which forms the underpinnings of Derby. At the moment, derby.js doesn't transparently provide it yet, but they plan to; from the Derby docs:
Currently, Racer defaults to applying all transactions in the order received, i.e. last-writer-wins. (…) Racer [also] supports conflict resolution via a combination of Software Transactional Memory (STM), Operational Transformation (OT), and Diff-match-patch techniques.
These features are not fully implemented yet, but the Racer demos show preliminary examples of STM and OT.
Coincidentally, both the ShareJS and DerbyJS teams have an ex-Google Wave engineer on board. Meteor has an ex-Etherpad/Google Wave engineer on its core team. Since Etherpad is one of the best known implementations of OT, I was imagining Meteor would surely want to support it at some point as well…
I've created a Meteor smart package that integrates ShareJS:
https://github.com/mizzao/meteor-sharejs
It's quite preliminary right now, but you can import it into your app, drop in textareas, and it "just works". Please try it out and submit some new features via pull requests :)
Check out a demo here:
http://documents.meteor.com
What you describe seems outside of Meteor's scope to me. It's not a tool for setting up collaboration features!
What it provides is a way to transparently work against a subset of a server's database. But the implementation of use-case-specific merging functionality is the job of the application, not the framework.

Integrating my RESTful web app with clients' SAP installations

My company runs a couple of B2B apps (written in Rails) dealing with parts and inventory, and we've been trying to figure out the best way to integrate with some of our bigger users. We already offer the REST-style API that comes with Rails, but that, of course, requires an IT department on their end to decide to integrate with it, so we'd like to lower that barrier if possible.
From what we've found, most of them are on SAP systems. Now, pretty much all I know about SAP is that it's 1) expensive, 2) huge, and 3) does everything and anything you could ever need for your gigantic business to run. Naturally, this is all a bit imposing, and the resources on the site are a cross between impenetrable buzzword-laden sales material and impenetrable jargon-laden advanced technical material, with little for the new but technically competent user to sink his teeth into.
So what I'm wondering is: as a 3rd party, that's not running a SAP installation, is there a way for us to offer access to our site's data through a web service or other API? Is it just a matter of providing or implementing a certain WSDL (and what would that be)? Is this feasible for someone without in-depth experience with SAP? Or is this a complete non-starter?
I'd say it's not possible without someone who knows the SAP system. You probably won't need to hire someone with in-depth SAP knowledge, but at least for the initial implementation you'll need both the knowledge and a working system you can develop against. Technically speaking, it's not really that hard, but considering the fact that SAP systems are designed to handle multiple organizations, countries, legal systems, localizations and several thousand users simultaneously, things are bound to be a bit more complex than almost any other software around - and most of the time it's not even bloat, it's just easy to get lost in that kind of flexibility.
My recommendation would be to find a customer (or prospective customer) who has someone in their IT department with the necessary technical and process knowledge and who is interested in conducting a development project. This way, you'd get access to a real system (a test system, of course) and someone who can explain the basics of the system to you. But, as I said, be prepared for complexity.
vwegert makes some excellent points.
As to this part of your question:
So what I'm wondering is: as a 3rd party, that's not running a SAP installation, is there a way for us to offer access to our site's data through a web service or other API? Is it just a matter of providing or implementing a certain WSDL (and what would that be)?
Technically it is possible to expose any of your system's services as web-services to a client's SAP system. In order to do this you do not need any prior knowledge of SAP. (SAP should be able to import a WSDL, although there may be some limitations in the earlier pre-ECC5 systems).
For example, a service that provides meter reads, airport departure schedules, industry trends, etc. is not dependent on what is in the user's system or how they set it up. However, as soon as there is a need to initiate updates to the client system's data, you need access to more specialised SAP knowledge.
Also note that many SAP functions can also be exposed as web services, but generally you do need someone with SAP (ABAP) knowledge to do this.
The ABAP language is actually fairly simple, but there is a huge learning curve to understand the data model and the myriad of configurable options in SAP.

Why use AMQP/ZeroMQ/RabbitMQ

as opposed to writing your own library.
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager will divide it and put it on another machine as a separate process. It will also alert all affected connected clients to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about message queues. As I've googled and read, it seems everyone is using message queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your needs are well defined and you will develop a messaging system that will fit your needs: small feature list, small source code etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
your app will have to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD). Your messaging system made some endianness assumptions: go and fix it
you designed your app so it is not a binary protocol/messaging system, and now it is very slow because you spend most of your time parsing messages (the number of messages increased and parsing became a bottleneck): adapt it so it can transport binary/fixed encoding
at the beginning you had 3 machines inside a LAN, no noticeable delays, and everything gets to every machine. Your client/boss/pointy-haired-devil-boss shows up and tells you that you will install the app on a WAN you do not manage - and then you start having connection failures, bad latency, etc. You need to store messages and retry sending them later on: go back to the code and plug this stuff in (and enjoy)
messages sent need to have replies, but not all of them: you send some parameters in and expect a spreadsheet as a result, instead of just a send and acknowledge: go back to the code and plug this stuff in (and enjoy)
some messages are critical and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes
And many other use cases that I forgot ...
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is integrating it into your build, read some doco and then start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of a Proudly Found Elsewhere attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
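As a small illustration of what that persistence buys you in practice, here is a minimal sketch using RabbitMQ's Python client (pika); the queue name and message are placeholders:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = conn.channel()

    # Durable queue: the queue definition survives a broker restart.
    channel.queue_declare(queue="billing_events", durable=True)

    # Persistent message (delivery_mode=2): written to disk and redelivered
    # to consumers after a crash, so the sender never has to retransmit.
    channel.basic_publish(
        exchange="",
        routing_key="billing_events",
        body=b"invoice.created id=42",
        properties=pika.BasicProperties(delivery_mode=2),
    )
    conn.close()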
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
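To give a flavour of how little plumbing a basic request/reply pair takes on top of ZeroMQ, here is a minimal sketch using the Python bindings (pyzmq); the port and payloads are arbitrary:

    import zmq

    # --- server side ---
    ctx = zmq.Context()
    rep = ctx.socket(zmq.REP)
    rep.bind("tcp://*:5555")
    while True:
        request = rep.recv()          # blocks until a client sends something
        rep.send(b"ack: " + request)  # framing and reconnection handled by 0MQ

    # --- client side (separate process) ---
    # ctx = zmq.Context()
    # req = ctx.socket(zmq.REQ)
    # req.connect("tcp://localhost:5555")
    # req.send(b"divide server pool")
    # print(req.recv())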
If you have a little time, give it a try and roll out your own implementation! The lessons from this exercise will convince you of the wisdom of using an already tested library.

Software "Robots" - Processes or work automation [closed]

I have been toying with the idea of creating software “robots” to help with different areas of the development process: repetitive tasks, automatable tasks, etc.
I have quite a few ideas of where to begin.
My problem is that I work mostly alone, as a freelancer, and work tends to pile up, and I don't like to extend or “blow” deadline dates.
I have investigated and use quite a few productivity tools. I have looked into code generation and am planning a tool to generate portions of code. I use code reuse techniques, etc.
Has anyone else had thoughts about this? Are there any good articles?
I wouldn't like to use code generation, but I have developed many tools to help me do many of the repetitive tasks.
Some of these could do nice things:
Email Robots
These receive emails and do a lot of stuff with them; they need some kind of authentication to protect you from the bad stuff (see the sketch after these lists):
Automatically log whatever was entered into a database or Excel spreadsheet.
Update something in a database.
Save all the attachments to a specific shared folder.
Reboot a server.
Productivity
These will do repetitious tasks:
Print out all the invoices for the month.
Automatically merge data from several sources.
Send reminders of GTD items.
Send reminders of late TODO items.
Automated builds
Automated testing
Administration
These automate some repetitive server administration tasks:
Summarize server logs, remove regular items and send the rest by email
Rebuild indexes in a database
Take automatic backups
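As a concrete flavour of the first category above, here is a minimal sketch of an email robot that saves attachments to a shared folder; the host, credentials and folder are placeholders, and you would add real authentication and error handling before trusting it with anything important:

    import email
    import imaplib
    import os

    SHARE = "/srv/shared/attachments"  # placeholder shared folder

    mail = imaplib.IMAP4_SSL("imap.example.com")
    mail.login("robot@example.com", "app-password")
    mail.select("INBOX")

    _, data = mail.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = mail.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        for part in msg.walk():
            filename = part.get_filename()
            if filename:  # only parts that are actual attachments
                with open(os.path.join(SHARE, filename), "wb") as f:
                    f.write(part.get_payload(decode=True))
    mail.logout()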
Meta-programming is a great thing. If you can easily get access to the data about the class structure, then you can automate a few things. In the high-level language I use, I define a class like 'Property', for example, and add an integer for street number, a string for street name and a reference to the owning debtor. I then auto-generate a form that has a text box for street number and street name and a lookup box for the debtor reference, and the code to save and load is all auto-generated. It knows that street number is an integer, so its text box can only accept integers. If I declare a read-only property, it will also make sure the text box is read-only.
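A minimal sketch of the same idea in Python, deriving form widgets from dataclass field types (the widget names and the Property fields are just illustrative):

    from dataclasses import dataclass, fields

    @dataclass
    class Property:          # mirrors the example above
        street_number: int
        street_name: str
        debtor_id: int       # reference to the owning debtor

    def generate_form(cls):
        """Derive a simple form description from the class structure itself."""
        widgets = []
        for f in fields(cls):
            widget = "integer box" if f.type is int else "text box"
            widgets.append(f"{f.name}: {widget}")
        return widgets

    print(generate_form(Property))
    # ['street_number: integer box', 'street_name: text box', 'debtor_id: integer box']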
There are software robots, but often you don't really see them. For example, consider a robot that is used to package stuff. There is a person who monitors the robot in case of a failure. When the robot fails, the person shuts it down and fixes things. That person is like a programmer who operates an IDE to compile, refactor, etc. When errors occur, the programmer fixes the code and runs the compiler again.
Well, compiling is not very robot-like, but then there is software that compiles your project automatically. Now that is more like a robot. Such a software robot can also check things in the code, like whether there are enough comments, and so on.
Then we have software that generates code according to our input. For example we can create forms in MS Access easily with Wizards. The wizards are not automatically producing new forms form after form after form, because we need every form to be different. But the form generator is a kind of robot-like tool that is operated.
Of course you could input the details of every form first and then run the generator, but people like to see every form right away. Also, the input mechanism is pretty much the form already, so you get what you create on the fly. Though with data transformation tools you can create descriptions of forms from a list of field names, generate the forms, and call that using robots.
There are even whole books about automated software production, but the biggest problem is that automating the process takes longer than the process itself.
Mostly, programmers give up on this, since they try to achieve everything in one step, going straight from manual programming to automation.
Common automation in software production is done through IDEs, code generators and the like; so far almost no logic is automated.
I would appreciate any advances in this area. Try to automate little tasks from the process and connect those tasks afterwards, going step by step.
I'm guessing that, just like just about every software developer on planet Earth, you want to write software that writes software by itself. Unfortunately, it's an idea that only works on paper. I mean, we have things like code generators, DSLs, transformation pipelines, Visual Studio add-ins that statically analyse code and generate derivative code, and so on. But it's nowhere near anything one would call a 'robot'.
Personally, I think more needs to be done in this area. For example, the IDE should be able to infer things and make suggestions based on what I'm actually doing. For example, if I'm adding a property, the IDE infers what attributes other properties in the file have, and how the property itself is structured, and adjusts the property accordingly.
Any sort of AI is hard work and, regrettably, does not have a great ROI. But it sure is fun.
Scripting away the repetitive tasks - is that what you're referring to? I'm guessing you're a Windows developer, where scripting is not nearly as common as in the *nix world. Hence your question.
You might want to have a look at the *nix side of the software development arena, where the workflow is more or less similar to what you describe (at least more so than on Windows). Plowing your way through bash, perl, python, etc. will get you what you want.
ps. Also look at nsr81's post in the comments for similar scripting tools on Windows.
Code generation is certainly a viable tool for some tasks. If done poorly it can create maintenance problems, but it doesn't have to be done poorly. See Code Generation Network for a fairly active community, with conference, papers, etc.
Code Generation in Action is one book that comes to mind.
You can try Robot Framework:
http://robotframework.org/
Robot Framework is a generic automation framework. It has an easy-to-use tabular test data syntax and it utilizes the keyword-driven approach.
You can even use this tool as a software bot (RPA).
Robotic Process Automation
First, a little back-story... In 2011, I was the Operations Manager for the Contracting Center of Excellence at Bristol-Myers Squibb. We were in the early stages of rolling out a brand new Global Contracting System. This new system was replacing a great deal of manual effort across the globe, with the intention of having one system to create, store and retrieve contracting information for the whole organization. No small task, to be sure, and one whose scope and eventual impact we certainly underestimated. Like most organizations getting a handle on this contract management process, we found it to be 4 to 10 times larger than originally expected.
We did a lot of things very right, including building a support organization from the ground up, which specialized in this specific application and became true subject matter experts to the organization in (7) languages and most time zones.
The application, on the other hand, brought its own challenges, which included missing features, less than stellar performance and a lot of back-end work needing to be done by the Operations team. This is where Robotic Process Automation comes into the picture.
Many of the 'features' of this software were simply too complicated for end users to use, but were required to create contracts. The first example was adding a "Contact" with whom the Contract would be made: the "Third Party", if you will. This is a seemingly simple thing, which took (7) screens of data entry, a cryptic point of access, twenty-two minutes and a master's degree to figure out on your own, for each one. We quickly made the business decision to have the Operations team create these 'Contacts' on behalf of our end users. We anticipated the need to be a few thousand a year. We very quickly passed 800 requests per week. With three FTEs working on it, we had an ever-growing backlog and a turn-around time of more than two weeks per request. Obviously, this would NOT do in any business environment.
The manual process was so complicated that even my staff made a large number of errors in creating them, even as subject matter experts. The resulting rework further complicated the issue and added costs. I had some previous automation experience and products that I had worked with, but this need was even more intense and complicated than anything I had encountered before. I needed something great, fast, easy to implement and that would NOT require IT assistance (as that had its own pitfalls). I investigated a number of products, all professing to do similar things. One, of course, stood out to me. It seemed to be the most capable, affordable and had good support options. The product I selected was Automation Anywhere, at the bargain price of about $4,000.00 USD.
Now, don't get me wrong, I am not here to pitch for Automation Anywhere, or any specific product, for that matter. But my experiences with this tool forever changed my expectations and understanding of what Robotic Process Automation really means (see below, if you are unsure).
After my first week of buying the tool and learning some of its features, I was able to replace the manual process of creating a "Contact" in the contracting system, going from a two-week turn-around to a (1) hour turn-around. It took the FTE effort for each entry from 22 minutes to zero. I was able to run this automated process from a desktop PC and handle every request, fully automated, including the validation and confirmation steps into other external systems to ensure better data quality than was ever possible previously. In the first week, my costs for the software were recovered by over 200% in saved labor, allowing those resources to focus on other, higher-value tasks. I don't care where you are from, that is an amazing ROI!
That was just the beginning. Now that we had this tool, and in fact it could do much more than the initial task I needed, it became one of the most valued resources for developing functional proof-of-concept prototypes of more complex processes we needed to bridge the gaps in the contracting system. I was able to add an Enterprise License to the original purchase and secure a more robust infrastructure, partnering with our IT department at an insanely low cost for total implementation. I now had (5) dedicated corporate servers operating 24/7 and (2) development licenses for building and supporting automation tasks, and we were able to continue to support the Contracting initiative, even with the volume so much greater than anticipated, with the same number of FTEs as we started with. It became the platform for reporting, end user notification, system alerts, updating data, work-flow, job scheduling, monitoring, ETL and even data entry and migration from other systems. The cost avoidance from implementing this Robotic Process Automation tool cannot be overstated. The soft-dollar savings from delivering timely solutions to the business community, and the continued professional integrity we were able to demonstrate and promote, are evident in the successful implementation in more than 48 countries in under (1) year and the entry of over 120,000 Contracts each year since.
While the term Robotic Process Automation is currently all the buzz, the concepts have been around for some time. Please, please, however, don't assume that this means it is a build-and-forget situation. As it grows, and it will grow, you need a strong plan to manage tasks, resources and infrastructure to keep things running. These tools basically mimic anything a human can do, and much more than a human as well. However, a human can rather quickly change their steps in a process if one of the 'source' systems she/he is using has a change in its user interface. Your automation tasks will need to be 'tweaked' to accommodate that change in most cases. Some business processes can be easier than others to automate and might be too complex for a casual "automation task creator" to build and/or maintain. Be very sure you have solid resources to build and maintain the tasks. If you plan to do more than one thing with your RPA tool, make sure you have solid oversight, governance, resources and a corporate 'champion', or I assure you your efforts will not be successful.
Robotic Process Automation Defined:
(IRPA) Institute for Robotic Process Automation: “Robotic process automation (RPA) is the application of technology that allows employees in a company to configure computer software or a “robot” to capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems.”
Wikipedia: “Examples of robotic automation include the use of industrial robots in manufacturing and the use of software robots in automating clerical processes in services industries. In the latter case, the use of the term robot is metaphorical, conveying the similarity of those software products – which are produced to provide a generic automation capability and then configured within the end user environment to execute manual and repetitive tasks – to their industrial robot counterparts. The metaphor is apt in the sense that the software “robot” is now mimicking or replacing a function classically associated with a person.”