Should developers be limited to certain applications for development use?
For most, the answer would be that as long as the development team agrees, it shouldn't matter.
For a company that is audited for security certifications, is there a method that balances the company's risk against the developers' flexibility and performance?
Scope
coding/development software
build system software
3rd party software included with distribution (libraries, utilities)
(Additional) Remaining software on workstation
Possible solutions
Create a white-list of approved software, where a developer must ask for approval for desired software before he/she can use it. Approval would be based on business purpose/security risk.
Create a black-list of software. Developers list all software used, and a review board periodically goes over the list (a rough audit sketch along these lines appears below).
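For illustration only, here is a minimal sketch of how either review could be supported by a small audit script. It assumes you can export an installed-software inventory to a text file; the file names and format below are made up:

    # audit_software.py - hypothetical sketch: flag installed software that is
    # not on the approved list, so the review board only looks at the diff.
    APPROVED_FILE = "approved_software.txt"    # one product name per line (assumed)
    INSTALLED_FILE = "installed_software.txt"  # e.g. exported from an inventory tool (assumed)

    def load(path):
        with open(path) as fh:
            return {line.strip().lower() for line in fh if line.strip()}

    def audit():
        approved = load(APPROVED_FILE)
        installed = load(INSTALLED_FILE)
        unapproved = sorted(installed - approved)
        if unapproved:
            print("Needs review/approval:")
            for name in unapproved:
                print("  -", name)
        else:
            print("All installed software is on the approved list.")

    if __name__ == "__main__":
        audit()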
Has anyone had to work at a company that restricted developer tools beyond the team setting? How did they handle the situation?
Edit
Cleaned up question. Attempted to make less argumentative.
Limiting the software that developers can use on their work machines is a fantastic idea. This way, all the developers will quit, and then the company won't have to spend as much money on salaries and equipment, resulting in higher profits.
Real answer: NO!!!
No, developers should not be limited in the software they use, because it prevents them from successfully doing their jobs. Think about how much you are paying your team of developers, - do you really want all that money to go spiraling down the drain because you've artificially prevented them from solving problems?
1) The company locks down the PC and treats the developer as no more competent than a secretary
What happens when the developer needs to do something with administrative permissions? EG: Register a COM object, restart IIS, or install the product they're building? You've just shut them down.
2) Create a white-list of approved software...
This is also impractical due to the sheer amount of software. As a .NET developer I regularly (at least once per week) use upwards of 50 distinct applications, and am constantly evaluating newer upgrades/alternatives for many of these applications. If everything must go through a whitelist, your "approval" staff are going to be utterly swamped by just one or 2 developers, let alone a team of them.
If you take either of these actions, you'll achieve the following:
You'll burn giant piles of time and money as the developers sit on their thumbs waiting for your approval team, or doing things the long slow tedious way because they weren't allowed to install a helpful tool
You'll make yourself the enemy of the development department (not good if you want your devs to actually do what you ask them to do)
You'll depress team morale substantially. Nobody enjoys feeling like they're locked in a cage, and every time they think "This would be finished 5 hours ago if only I could install grep", they'll be unhappy.
A more acceptable answer is to create a blacklist for "problem" software (and websites) such as Pidgin, MSN messenger, etc if you have problems with developers slacking off. Some developers will also rail against this, but many will be OK with it, provided you are sensible in what you blacklist and don't go overboard.
I think developers should have total control over the applications they use, as long as they can do their job with them. Developers' productivity is directly related to their working environment; no one likes being restricted, and everyone prefers to use the software they like.
Of course there should be some standards in terms of version control, document format, etc., but generally developers should have the right to use any programs they want.
And security should be the developer's own concern - company admins should take care of setting up a proper firewall to protect against any kind of attack.
A better solution would be to create a secure, independent environment for the developers - an environment that, if compromised, won't put the rest of the company at risk.
The very nature of development is to create crafty, ingenious, pithy solutions. To achieve this, failures must happen.
Whatever they do, don't take away the Internet in general. Google = Coding Help 101 :)
Or maybe just leave www.stackoverflow.com allowed haha.
I'd say this depends on quite a list of factors.
One is team size. If you have a team of half a dozen developers, this can be negotiated whenever a need for some application pops up. If you have a team of 100 developers, some policy is probably in order.
Another factor is what those developers do. If they compile C code using a proprietary compiler for an embedded platform, things are very different from a team producing distributed web or PC software in a constantly shifting environment.
The software you produce and the target customers are important, too. If you're porting the Linux kernel to some new platform, whether code leaks probably doesn't matter all that much. OTOH, there are a lot of cases where this is very different.
There are more factors, but in the end it all boils down to two conflicting goals:
You want to give your developers as much freedom as possible, because that stimulates their creativity.
You want to restrict them as much as possible, as this reduces risks. (I'm talking of security risks as well as the risk to ship non-functioning software etc.)
You'll have to find a middle ground that doesn't hurt creativity while allowing enough guarantees not to hurt the company.
Of course! If you want a repeatable build process, you don't want it contaminated by whatever random bit of junk a programmer happens to use as a tool to generate part of the code. Since whatever application you are building lasts much longer than anyone expects, you also want to ensure that the tools you use to build it are available for roughly the same duration; random tools from the internet don't provide any such guarantee.
Your team should say "The following tools are allowed for build steps and nothing else" and attempt to make that list short.
Obviously, it shouldn't matter what a programmer looks at to decide what to do, so the entire Internet is just fine as long as it's look-only. Nor does it matter if he produces code by magic (or a random tool), as long as your team doesn't mind accepting that tool's output as though it were written by hand.
Our company has recently decided that a good section of our IT department is actually doing product development rather than internal IT development, and has now created a new department for it.
What are the types of changes that developers should be looking to make during this type of transition?
Is there really any difference between internal development and product development?
I don't know how quickly the differences will assert themselves in an existing group which is transitioning from one role to the other, but having worked as both an internal developer and a product developer, two huge differences leap to mind: requirements and testing.
As a developer of internal tools, I was pretty much given free rein regarding interface, organization, and even scope. Specs were in the form of an email saying "can you write something that does X?" Similarly, testing was almost non-existent. After whatever testing I was able to do, the tools would be deployed directly to their target audience, and bug reports, when they came at all, were directly from those end-users, again usually via email or even hall-tackle.
Now that I'm doing product development, the difference is dramatic. Specs are 30-100 page Word documents and we have a dedicated testing department which makes darn sure that what we produce matches those specs. I'm much better supported by the project managers and have clear channels for any feedback I have on requirements or design. It could be argued that product development offers less freedom to the individual developer, but in exchange for being part of a (hopefully) better-organized and better-supported team.
[I work with Jeff.]
As others have mentioned, a key difference stems from the nature of the user:
When developing internal applications, you're typically dealing with users from a single department or group who use a limited number of apps, and they're mostly a captive userbase.
When developing external products, you're dealing with users who see the whole enchilada, and they're paying customers who can take their business elsewhere.
That difference matters because a coherent user experience across multiple applications becomes much more important for paying customers who see the entire set of apps.
In the IT case, neither the business nor the users themselves are typically concerned about the fact that their app doesn't look and work like some other app that some other department uses. And if for whatever reason they did care, their ability to do anything about it is more limited. While it's possible to make a business case for consistency across apps (branding to internal employees, usability for employees who use multiple apps, reduced development costs resulting from the reuse of common libraries, patterns, services, etc.), this kind of concern typically takes a backseat to the development of new business functionality.
In the product case, the users do care about coherence across the entire set of apps, and they can do something about it if the experience is weak.
So one major difference is the relative importance of a coherent, quality user experience. But that goal itself has significant organizational ramifications that we're already starting to see.
We'll see an increased emphasis on "horizontal" activities that seek to establish standards and improve communications across teams, since such activities directly support the goal of producing coherent products. Cross-cutting teams (like user experience) will become more influential than they formerly were, we may see new cross-cutting teams (e.g. teams that look at architecture across many systems rather than just a single system), we'll probably see more presentations about what different apps do, more cross-training, etc.
App development teams will have less autonomy than they formerly had in charting their own course. The user experience team (working with end users, business stakeholders and engineering teams) will specify standards around visual design, interaction design, etc. and the app teams will be expected to adopt those. We'll see test practices become more standardized. Builds and deployments across apps will be more uniform and coordinated. Monitoring, alerting, response time SLAs and other operational metrics will be more uniform across apps. The app teams won't be able to define these for themselves anymore (though they can certainly contribute to the larger discussion).
Management will increasingly allocate resources in a way that seeks to optimize globally across the entire product instead of optimizing locally for individual apps. So where in the past the composition of individual app teams has been relatively fixed (developers, SQA, etc.), going forward we should expect to see more fluidity.
A huge difference is in your customer. IT developers have the rest of the company (and sometimes partner/subsidiary companies) as their primary customers. The Product Development developers have customers (i.e. the people who buy the product that is the company's reason for existing) as their primary customer.
Yes, very much so. There's a huge difference between performing an activity that drives revenue and invoices and one that's perceived as overhead.
My experience is that people working on the product development side of the house have bigger budgets, better training, better travel and more skilled employees. Having worked in a product development company, it always felt like the lower-skilled employees were thrown over to the IT department.
In-house development is very process-orientated. They just want to deliver xyz functionality and have it work based on a company-wide strategy. It's not the iterative make-the-product better cycle you get when your primary product is your code. As a result, in-house stuff is often just 'good enough' whereas software companies probably tend to try to improve their product long after it 'just works'.
Note: in-house dev teams can get away with being 'good enough'; software development companies can get away with it for a while, but will ultimately lose out to the ones who strive to improve.
I think moving between both environments could be a shock to the system in either direction but that's not to say they both can't be run in the same way - just that in my experience they probably won't. For example, UI is usually given a lower priority when it is in-house software as the customers are generally getting paid to use it rather than paying to use it.
I have spent equal amounts of time in both types of organizations. The particular organization where I worked where software development was part of an IT department treated the development of software as a cost center, whereas software development as part of a product was seen as a profit center.
The two are very different. The skill level of developers in my case was vastly different - those working on a public product were overall better skilled and cared more about the quality of their work. As a developer of a real public product you are actually making the company money.
In my internal software job I typically had a set of known fixed requirements mostly worked out. I designed a solution, but was always given a deadline that was unreasonable if quality was a concern (including code quality), rushed the coding, and delivered the result. Any bugs found that passed any short QA process typically only got fixed if they made formal requests for fixes.
Product development in my experience is almost the reverse. All the requirements are not fixed (only what I was working on for that week was fixed), and design is usually dictated by someone who has worked on the product the longest. I get to decide how long something takes (but have to really explain why and justify the time), and coding is usually not rushed. Experimenting with ideas is generally more difficult in product development, because a product meant for public use should use tried and tested approaches.
Therefore, I would say that if creativity is really important then product development may not be for you because a particular idea that you personally have is unlikely to ever make it into the product unless you can make a business case for it and somehow make it more important than what the business has already been planning.
Choosing a particular library is also more difficult depending on how the software is deployed. For example software used by the government typically has to pass Common Criteria certification, which can eliminate certain library choices.
I'm looking for process suggestions, and I've seen a few around the site. What I'd love to hear is what you specifically use at your company, or just you and your hobby projects. Any links to other websites talking about these topics is certainly welcome!
Some questions to base an answer off of:
How do users report bugs/feature requests to you? What software do you use to keep track of them?
How do bugs/feature requests get turned into "work"? Do you plan the work? Do you have a schedule?
Do you have specs and follow them? How detailed are they?
Do you have a technical lead? What is their role? Do they do any programming themselves, or just architecture/mentoring?
Do you unit test? How has it helped you? What would you say your coverage is?
Do you code review? When working on a tight deadline, does code readability suffer? Do you plan to go back later and clean it up?
Do you document? How much commenting do you or your company feel comfortable with? (Description of class, each method and inside methods? Or just tricky parts of the code?)
What does your SCM flow look like? Do you use feature branches, tags? What does your "trunk" or "master" look like? Is it where new development happens, or the most stable part of your code base?
For my (small) company:
We design the UI first. This is absolutely critical for our designs, as a complex UI will almost immediately alienate potential buyers. We prototype our designs on paper, then as we decide on specifics for the design, prepare the View and any appropriate Controller code for continuous interactive prototyping of our designs.
As we move towards an acceptable UI, we then write a paper spec for the workflow logic of the application. Paper is cheap, and churning through designs guarantees that you've at least spent a small amount of time thinking about the implementation rather than coding blind.
Our specs are kept in revision control along with our source. If we decide on a change, or want to experiment, we branch the code, and IMMEDIATELY update the spec to detail what we're trying to accomplish with this particular branch. Unit tests for branches are not required; however, they are required for anything we want to incorporate back into trunk. We've found this encourages experiments.
Specs are not holy, nor are they owned by any particular individual. By committing the spec to the democratic environment of source control, we encourage constant experimentation and revision - as long as it is documented so we aren't saying "WTF?" later.
On a recent iPhone game (not yet published), we ended up with almost 500 branches, which later translated into nearly 20 different features, a huge number of concept simplifications ("Tap to Cancel" on the progress bar instead of a separate button), a number of rejected ideas, and 3 new projects. The great thing is each and every idea was documented, so it was easy to visualize how the idea could change the product.
After each major build (anything in trunk gets updated, with unit tests passing), we try to have at least 2 people test out the project. Mostly, we try to find people who have little knowledge of computers, as we've found it's far too easy to design complexity rather than simplicity.
We use Doxygen to generate our documentation. We don't really have auto-generation incorporated into our build process yet, but we are working on it.
We do not code review. If the unit test works, and the source doesn't cause problems, it's probably ok - but this is because we are able to rely on the quality of our programmers. This probably would not work in all environments.
Unit testing has been a god-send for our programming practices. Since any new code can not be passed into trunk without appropriate unit tests, we have fairly good coverage with our trunk, and moderate coverage in our branches. However, it is no substitute for user testing - only a tool to aid in getting to that point.
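As a rough sketch (not a description of our actual tooling), the "no unit tests, no trunk" rule can be enforced with a small gate script that runs the suite before the merge step; the tests/ directory layout here is an assumption:

    # gate.py - hypothetical pre-merge check: run the unit tests and block the
    # merge into trunk if anything fails.
    import subprocess
    import sys

    def gate():
        # Assumes unit tests live under ./tests and are discoverable by unittest.
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
        )
        if result.returncode != 0:
            print("Unit tests failed - refusing to merge this branch into trunk.")
            sys.exit(result.returncode)
        print("Unit tests passed - branch may be merged into trunk.")

    if __name__ == "__main__":
        gate()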
For bug tracking, we use bugzilla. We don't like it, but it works for now. We will probably soon either roll our own solution or migrate to FogBugz. Our goal is to not release software until we reach a 0 known bugs status. Because of this stance, our updates to our existing code packages are usually fairly minimal.
So, basically, our flow usually looks something like this:
Paper UI Spec + Planning » Mental Testing » Step 1
View Code + Unit Tests » User Testing » Step 1 or 2
Paper Controller & Model Spec + Planning » Mental Testing » Step 2 or 3
Model & Controller Code + Unit Tests » User Testing » Step 3 or 4
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Rejection
Branched Idea » Spec » Coding (no unit tests) » Mental Testing » Acceptance » Unit Tests » Trunk » Step 2 or 4
Known Bugs » Bug Tracker » Bug Repair » Step 2 or 4
Finished Product » Bug Reports » Step 2 or 4
Our process is not perfect by any means, but a perfect process would also imply perfect humans and technology - and THAT's not going to happen anytime soon. The amount of paper we go through in planning is staggering - maybe it's time for us to get a contract with Dunder Mifflin?
I am not sure why this question was down voted. I think it's a great question. It's one thing to do a Google search and read some random websites, which a lot of the time are trying to sell you something rather than be objective, and it's another thing to ask the SO crowd, who are developers/IT Managers, to share their experiences and what works or doesn't work for their teams.
Now that this point is out of the way: I am sure a lot of developers will point you towards "Agile" and/or Scrum; keep in mind that these terms are often used very loosely, especially Agile. I am probably going to sound very controversial by saying this, which is not my intention, but these methodologies are over-hyped - especially Scrum, which is more of a product being marketed by Scrum consultants than a "real" methodology. Having said that, at the end of the day you've got to use what works best for you and your team; if it's Agile/Scrum/XP or whatever, go for it. At the same time you need to be flexible about it; don't become religious about any methodology, tool, or technology. If something is not working for you, or you can get more efficient by changing something, go for it.
To be more specific regarding your questions. Here's the basic summary of techniques that have been working for me (a lot of these are common sense):
Organize all the documents and emails pertaining to a specific project, and make them accessible to others through a central location (I use MS OneNote 2007 and love it for all my documentation, progress, features, bug tracking, etc.)
All meetings (which you should try to minimize) should be followed by action items where each item is assigned to a specific person. Any verbal agreement should be put into a written document. All documents added to the project site/repository. (MS OneNote in my case)
Before starting any new development, have a written document of what the system will be capable of doing (and what it won't do). Commit to it, but be flexible to business needs. How detailed should the document be? Detailed enough so that everyone understands what the final system will be capable of.
Schedules are good, but be realistic and honest with yourself and the business users. The basic guideline that I use: release quality, usable software that lacks some features, rather than buggy software with all the features.
Have open lines of communication among your dev. team and between your developers and business groups, but at the end of a day, one person (or a few key people) should be responsible for making key decisions.
Unit test where it makes sense, but DO NOT become obsessive about it. 100% code coverage != no bugs and software that works correctly according to the specs (see the coverage sketch after this list).
Do have code standards, and code reviews. Commit to standards, but if it does not work for some situations allow for flexibility.
Comment your code especially hard to read/understand parts, but don't make it into a novel.
Go back and clean up your code if you're already working on that class/method - implementing a new feature, working on a bug fix, etc. But don't refactor it just for the sake of refactoring, unless you have nothing else to do and you're bored.
And the last and more important item:
Do not become religious about any specific methodology or technology. Borrow the best aspects from each, and find the balance that works for you and your team.
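To illustrate the earlier point that 100% code coverage != no bugs, here is a hypothetical example (not from any real project): both tests execute every line, so line coverage is 100%, yet the missing business rule is never caught.

    import unittest

    def discount(price, is_member):
        # Intended rule: members get 10% off, but only on orders over $100.
        if is_member:
            return price * 0.9  # Bug: the "over $100" condition was forgotten.
        return price

    class TestDiscount(unittest.TestCase):
        def test_member(self):
            self.assertEqual(discount(200, True), 180)  # passes

        def test_non_member(self):
            self.assertEqual(discount(50, False), 50)   # passes

    if __name__ == "__main__":
        unittest.main()

Coverage tells you which lines ran, not whether the right behaviour was specified in the first place.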
We use Trac as our bug/feature request tracking system
Trac Tickets are reviewed, changed to be workable units and then assigned to a milestone
The trac tickets are our specs, containing mostly very sparse information which has to be talked over during the milestone
No, but our development team consists only of two members
Yes, we test, and yes, TDD has helped us very much. Coverage is at about 70 Percent (Cobertura)
No, we refactor when appropriate (during code changes)
We document only public methods and classes; our maximum line count is 40, so methods are usually small enough to be self-describing (if there is such a thing ;-)
svn with trunk, rc and stable branches
trunk - Development of new features, bugfixing of older features
rc - For in house testing, bugfixes are merged down from trunk
stable - only bugfixing merged down from trunk or rc
To give a better answer, my company's policy is to use XP as much as possible and to follow the principles and practices as outlined in the Agile manifesto.
http://agilemanifesto.org/
http://www.extremeprogramming.org/
So this includes things like story cards, test-driven development, pair programming, automated testing, continuous integration, one-click installs and so on. We are not big on documentation, but we realize that we need to produce just enough documentation in order to create working software.
In a nutshell:
create just enough user stories to start development (user stories here are meant to be the beginning of the conversation with the business, not completed specs or fully fleshed-out use cases, but short bits of business value that can be implemented in less than one iteration)
iteratively implement story cards based on what the business prioritizes as the most important
get feedback from the business on what was just implemented (e.g., good, bad, almost, etc)
repeat until business decides that the software is good enough
There are a couple of questions on Stackoverflow asking whether x (Ruby / Drupal) technology is 'enterprise ready'.
I would like to ask how is 'enterprise ready' defined.
Has anyone created their own checklist?
Does anyone have a benchmark that they test against?
"Enterprise Ready" for the most part means can we run it reliably and effectively within a large organisation.
There are several factors involved:
Is it reliable?
Can our current staff support it, or do we need specialists?
Can it fit in with our established security model?
Can deployments be done with our automated tools?
How easy is it to administer? Can the business users do it or do we need a specialist?
If it uses a database, is it our standard DB, or do we need to train up more specialists?
Depending on how important the system is to the business the following question might also apply:
Can it be made highly available?
Can it be load balanced?
Is it secure enough?
Open Source projects often do not pay enough attention to the difficulties of deploying and running software within a large organisation. For example, most open-source projects default to MySQL as the database, which is a good and sensible choice for most small projects; however, if your enterprise has an Oracle site license and a team of highly skilled Oracle DBAs in place, the MySQL option looks distinctly unattractive.
To be short:
"Enterprise ready" means: If it crashes, the enterprises using it will possibly sue you.
Most of the time the "test", if it may really be called that, is that some enterprise (i.e. a large business) has deployed a successful and stable product using it. So it's more like saying it has proven its worth on the battlefield, or something like that. In other words, the framework has been used successfully (or not) in the real world; you can't just follow some checklist and load tests and say it's enterprise ready.
Like Robert Gould says in his answer, it's "Enterprise-ready" when it's been proven by some other huge project. I'd put it this way: if somebody out there has made millions of dollars with it and gotten written up by venture capitalist magazines as the year's (some year, not necessarily this one) hottest new thing, then it's Enterprise-ready. :)
Another way to look at the question is that a tech is Enterprise-ready when a non-tech boss or business owner won't worry about whether or not they've chosen a good platform to run their business on. In this sense Enterprise-ready is a measure of brand recognition rather than technological maturity.
Having built a couple "Enterprise" applications...
Enterprise outside of development means that if it breaks, someone can fix it. I've worked with employers/contractors that stick with quite possibly the worst managed hosting providers, data vendors, and such, because those vendors will fix problems when they crop up (even if they crop up a lot) and there is someone to call when they break.
So to restate it another way, Enterprise software is Enterprisey because it has support options available. A simple example: jQuery isn't enterprisey while ExtJS is, because ExtJS has a corporate support structure behind it. (Yes, I know comparing these two frameworks is like comparing a toolset to a factory-manufactured home kit.)
As my day job is all about enterprise architecture, I believe that the word enterprise nowadays isn't about size or scale but refers more to how a software product is sold.
For example, Ruby on Rails isn't enterprise because there is no vendor that will come into your shop and do Powerpoint presentations repeatedly for the developer community. Ruby on Rails doesn't have a sales executive that takes me out to the golf course or my favorite restaurant for lunch. Ruby on Rails also isn't deeply covered by industry analyst firms such as Gartner.
Ruby on Rails will never be considered "enterprise" until these things occur...
From my experience, the "Enterprise ready" label is an indicator of managers' fear of adopting an open-source technology, possibly balanced with a desire not to remain a mere follower in that technology.
This may be objectively argued with considerations such as support from a third-party company or integration with existing development tools.
I suppose an application could be considered "enterprise ready" when it is stable enough that a large company would use it. It would also imply some level of support for when it does inevitably break.
Whether or not something is "enterprise ready" is entirely subjective, undefined, and rather buzzword-y. Basically, you can't have a test_isEnterpriseReady() - just make your application as reliable and efficient as it can be.
I have been toying with the idea of creating software “robots” to help with different areas of the development process: repetitive tasks, automatable tasks, etc.
I have quite a few ideas where to begin.
My problem is that I work mostly alone, as a freelancer, and work tends to pile up, and I don’t like to extend or “blow” deadline dates.
I have investigated and use quite a few productivity tools. I have investigated code generation and am planning a tool to generate portions of code. I use code-reuse techniques. Etc.
Has anyone got thoughts on this? Are there any good articles?
I wouldn't like to use code generation, but I have developed many tools to help me do many of the repetitive tasks.
Some of these could do nice things:
Email Robots
These receive emails and do a lot of stuff with them; they need to have some kind of authentication to protect you from the bad stuff (a minimal sketch follows this list):
Automatically logs whatever was entered in a database or excel spreadsheet.
Updates something in a database.
Saves all the attachments in a specific shared folder.
Reboot a server.
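As a minimal sketch of the "save all the attachments" robot, assuming an IMAP mailbox; the host, credentials and shared-folder path are placeholders:

    # email_robot.py - hypothetical sketch: save attachments from unread mail
    # into a shared folder.
    import email
    import imaplib
    import os

    IMAP_HOST = "imap.example.com"            # placeholder host
    SHARED_DIR = "/srv/shared/attachments"    # placeholder shared folder

    def save_attachments():
        conn = imaplib.IMAP4_SSL(IMAP_HOST)
        conn.login("robot@example.com", "secret")   # replace with real credentials
        conn.select("INBOX")
        _, data = conn.search(None, "UNSEEN")       # only process new mail
        for num in data[0].split():
            _, msg_data = conn.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            for part in msg.walk():
                filename = part.get_filename()
                if filename:                        # this MIME part is an attachment
                    with open(os.path.join(SHARED_DIR, filename), "wb") as fh:
                        fh.write(part.get_payload(decode=True))
        conn.logout()

    if __name__ == "__main__":
        save_attachments()

Run it on a schedule (cron, Task Scheduler) and you have a basic robot; the database/spreadsheet logging and reboot items would follow the same pattern with different actions.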
Productivity
These will do repetitious tasks:
Print out all the invoices for the month.
Automatically merge data from several sources.
Send reminders of GTD items.
Send reminders of late TODO items.
Automated builds
Automated testing
Administration
These automate some repetitive server administration tasks:
Summarize server logs, remove regular items and send the rest by email (a sketch of this one follows the list)
Rebuild indexes in a database
Take automatic backups
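A sketch of the first administration item (summarize server logs and mail the rest), assuming plain-text logs and an internal SMTP relay; the paths, patterns and addresses are made up:

    # log_summary.py - hypothetical sketch: drop routine lines, email the rest.
    import re
    import smtplib
    from email.message import EmailMessage

    ROUTINE = re.compile(r"health-check|heartbeat|INFO")   # assumed "regular" noise

    def summarize(log_path="/var/log/app/server.log"):
        with open(log_path) as fh:
            interesting = [line for line in fh if not ROUTINE.search(line)]

        msg = EmailMessage()
        msg["Subject"] = "Daily log summary"
        msg["From"] = "robot@example.com"
        msg["To"] = "ops@example.com"
        msg.set_content("".join(interesting) or "Nothing unusual today.")

        with smtplib.SMTP("smtp.example.com") as smtp:     # assumed mail relay
            smtp.send_message(msg)

    if __name__ == "__main__":
        summarize()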
Meta-programming is a great thing. If you easily get access to the data about the class structure then you can automate a few things. In the high level language I use, I define a class like 'Property' for example. Add an integer for street number, a string for street name and a reference to the owning debtor. I then auto generate a form that has a text box for street number and street name, a lookup box for the debtor reference and the code to save and load is all auto-generated. It knows that street number is an integer so its text box can only accept integers. If I declare a read only property it will also make sure the text box is read only.
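Here is a small Python illustration of that meta-programming idea (not the author's actual language or framework); it inspects the class's field types and derives form widgets from them, and the "*_id means reference" convention is just an assumption for the example:

    from dataclasses import dataclass, fields

    @dataclass
    class Property:
        street_number: int      # integer -> numeric-only text box
        street_name: str        # string  -> plain text box
        debtor_id: int          # reference (assumed *_id convention) -> lookup box

    WIDGETS = {int: "numeric text box", str: "text box"}

    def generate_form(cls):
        # Walk the class metadata and emit one widget description per field.
        for f in fields(cls):
            if f.name.endswith("_id"):
                widget = "lookup box"
            else:
                widget = WIDGETS.get(f.type, "text box")
            print(f"{f.name}: {widget}")

    generate_form(Property)
    # street_number: numeric text box
    # street_name: text box
    # debtor_id: lookup box

A real implementation would emit actual UI controls plus the load/save code, but the principle is the same: the class metadata drives the generation.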
There are software robots, but often you really don't see them. For example consider a robot that is used to package stuff. There is a person who monitors the robot in case of a failure. When the robot fails, the person shuts the robot down and fixes things. That person is like a programmer who operates IDE to compile, refactor etc. When errors occur, the programmer fixes the code and runs the compiler again.
Well compiling is not very robot like, but then there are software that compile your project automatically. Now that is more like a kind of a robot. That software robot also checks things in the code like is there enough comments and so on.
Then we have software that generates code according to our input. For example we can create forms in MS Access easily with Wizards. The wizards are not automatically producing new forms form after form after form, because we need every form to be different. But the form generator is a kind of robot-like tool that is operated.
Of course you could input the details of every form first and then run the generator, but people like to see every form as soon as possible. Also, the input mechanism is pretty much the form already, so you get what you create on the fly. Though with data transformation tools you can create descriptions of forms from a list of field names, generate the forms, and call that using robots.
There are even whole books about automated software production, but the biggest problem is that automating the process takes longer than the process itself.
Mostly programmers give up on this, since they try to achieve everything in one step, from manual programming straight to automation.
Common automation in software production is done through IDEs, code generators and such; until now, nearly no logic has been automated.
I would appreciate any advance in this topic. Try to automate little tasks from the process and connect those tasks afterwards, going step by step.
I'm guessing that, just like just about every software developer on planet Earth, you want to write software that writes software by itself. Unfortunately, it's an idea that only works on paper. I mean, we have things like code generators, DSLs, transformation pipelines, Visual Studio add-ins that statically analyse code and generate derivative code, and so on. But it's nowhere near anything one would call a 'robot'.
Personally, I think more needs to be done in this area. For example, the IDE should be able to infer things and make suggestions based on what I'm actually doing: if I'm adding a property, the IDE infers what attributes the other properties in the file have, and how the property itself is structured, and adjusts the new property accordingly.
Any sort of AI is hard work and, regrettably, does not have such a great ROI. But it sure is fun.
Scripting away the repetitive tasks - is that what you're referring to? I'm guessing you're a Windows developer, where scripting is not nearly as common as in the *nix world. Hence your question.
You might want to have a look at the *nix side of the software development arena, where the workflow is more or less similar to what you describe (at least more so than on Windows). Plowing your way through bash, perl, python, etc. will get you what you want.
ps. Also look at nsr81's post in comments for similar scripting tools on Windows.
Code generation is certainly a viable tool for some tasks. If done poorly it can create maintenance problems, but it doesn't have to be done poorly. See Code Generation Network for a fairly active community, with conference, papers, etc.
Code Generation in Action is one book that comes to mind.
You can try Robot framework
http://robotframework.org/
Robot Framework is a generic automation framework. It has an easy-to-use tabular test data syntax and utilizes a keyword-driven approach.
You can even use this tool as a software bot (RPA).
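For what it's worth, here is a hedged sketch of driving Robot Framework from Python; it assumes the robotframework package is installed and that tasks.robot is a suite you have already written:

    # pip install robotframework
    from robot import run

    # Run the suite and write logs/reports to ./results;
    # the return code is 0 when every task passes.
    rc = run("tasks.robot", outputdir="results")
    print("Robot Framework finished with return code", rc)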
Robotic Process Automation
First, a little back-story... In 2011, I was the Operations Manager for the Contracting Center of Excellence at Bristol-Myers Squibb. We were in the early stages of rolling out a brand new Global Contracting System. This new system was replacing a great deal of manual effort across the globe, with the intention of having one system to create, store and retrieve contracting information for the whole organization. No small task to be sure, and one whose scope and eventual impact we certainly underestimated. Like most organizations getting a handle on this contract management process, we found it to be from 4 to 10 times larger than originally expected.
We did a lot of things very right, including the building of a support organization from the ground up, who specialized on this specific application and becoming true subject matter experts to the organization in (7) languages and most time zones.
The application, on the other hand, brought its own challenges, which included missing features, less than stellar performance and a lot of back-end work needing to be done by the Operations team. This is where Robotic Process Automation comes into the picture.
Many of the 'features' of this software were simply too complicated for end users to use, but were required to create contracts. The first example was adding a "Contact" with whom the Contract would be made - the "Third Party", if you will. This is a seemingly simple thing which took (7) screens of data entry, a cryptic point of access, twenty-two minutes and a master's degree to figure out on your own, for each one. We quickly made the business decision to have the Operations team create these 'Contacts' on behalf of our end users. We anticipated the need to be a few thousand a year. We very quickly passed 800 requests per week. With three FTEs working on it, we had an ever-growing backlog and a turn-around time of more than two weeks per request. Obviously, this would NOT do in any business environment.
The manual process was so complicated that even my staff, subject matter experts though they were, made a large number of errors in creating them. The resulting re-work further complicated the issue and added costs. I had some previous automation experience and products that I had worked with, but this need was even more intense and complicated than anything I had encountered before. I needed something great, fast, easy to implement and that would NOT require IT assistance (as that had its own pitfalls). I investigated a number of products, all professing to do similar things. One, of course, stood out to me. It seemed to be the most capable, affordable and had good support options. The product I selected was Automation Anywhere, at the bargain price of about $4000.00 USD.
Now, don't get me wrong: I am not here to pitch for Automation Anywhere, or any specific product for that matter. But my experiences with this tool forever changed my expectations and understanding of what Robotic Process Automation really means. (See below, if you are unsure.)
After my first week, buying the tool and learning some of the features, I was able to replace the manual process of creating a "Contact" in the contracting system, taking a two-week turn-around down to a (1) hour turn-around. It took the FTE effort for each entry from 22 minutes to zero. I was able to run this automated process from a desktop PC and handle every request, fully automated, including the validation and confirmation steps into other external systems, ensuring better data quality than was ever possible previously. In the first week, my costs for the software were recovered by over 200% in saved labor, allowing those resources to focus on other higher-value tasks. I don't care where you are from, that is an amazing ROI!
That was just the beginning. Now that we had this tool - and in fact it could do much more than the initial task I needed - it became one of the most valued resources for developing functional proof-of-concept prototypes of more complex processes we needed to bridge the gaps in the contracting system. I was able to add on to the original purchase with an Enterprise License and secure a more robust infrastructure, partnering with our IT department, at an insanely low cost for total implementation. I now had (5) dedicated corporate servers operating 24/7 and (2) development licenses for building and supporting automation tasks, and we were able to continue to support the contracting initiative, even with the volume so much greater than anticipated, with the same number of FTEs as we started with. It became the platform for reporting, end user notification, system alerts, updating data, work-flow, job scheduling, monitoring, ETL and even data entry and migration from other systems. The cost avoidance from implementing this Robotic Process Automation tool cannot be overstated. The soft-dollar savings from delivering timely solutions to the business community, and the continued professional integrity we were able to demonstrate and promote, are evident in the successful implementation to more than 48 countries in under (1) year and the entry of over 120,000 contracts each year since.
While the term Robotic Process Automation is currently all the buzz, the concepts have been around for some time. Please, please, however, don't assume that this means it is a build-and-forget situation. As it grows, and it will grow, you need a strong plan to manage tasks, resources and infrastructure to keep things running. These tools basically mimic anything a human can do, and much more than a human as well. However, a human can rather quickly change their steps in a process if one of the 'source' systems she/he is using has a change in the user interface; your automation tasks will need to be 'tweaked' to make that change in most cases. Some business processes are easier than others to automate, and some might be too complex for a casual "automation task creator" to build and/or maintain. Be very sure you have solid resources to build and maintain the tasks. If you plan to do more than one thing with your RPA tool, make sure to have solid oversight, governance, resources and a corporate 'champion', or I assure you, your efforts will not be successful.
Robotic Process Automation Defined:
(IRPA) Institute for Robotic Process Automation: “Robotic process automation (RPA) is the application of technology that allows employees in a company to configure computer software or a “robot” to capture and interpret existing applications for processing a transaction, manipulating data, triggering responses and communicating with other digital systems.”
Wikipedia: “Examples of robotic automation include the use of industrial robots in manufacturing and the use of software robots in automating clerical processes in services industries. In the latter case, the use of the term robot is metaphorical, conveying the similarity of those software products – which are produced to provide a generic automation capability and then configured within the end user environment to execute manual and repetitive tasks – to their industrial robot counterparts. The metaphor is apt in the sense that the software “robot” is now mimicking or replacing a function classically associated with a person.”