What exactly defines production?

Like almost anyone who's been programming for a while, I'm familiar with the term "production code" and have a vague sense of what it means. However, can someone offer a semi-rigorous definition, since it seems Wikipedia and Google can't? There seem to be a lot of gray areas in what counts as production, such as internal tools that are used by a small group of people and therefore not "formalized" in terms of UI, documentation, etc., and open source apps that are feature complete, reasonably bug-free, and working, but lack polish, UI refinement, and extensive testing.

When your code runs on a production system, that means it is being used by the intended audience in a real-world situation.
Production code, however, does not necessarily mean robust, reliable, or stable code. The Daily WTF provides plenty of evidence in this regard.

Production means anything that you need to work reliably and consistently, whether it's a build script or a public-facing web server.
When others rely on your code, particularly folks who may not understand it (e.g., smart developers outside your group who are using a library you wrote), that code is production code.
It's production because "work stops" and "money is lost" when the production code fails.

The definition as I understand it is that production code is any code that is installed or in use on a live, non-test-bed system. A server used internally by a company is a production system if it is the live system used by the company's employees. The point here is that code running on a server internal to the company that wrote the code can still be production code.
Usually, a good distinction when looking at internal code is whether or not the group maintaining the code is separate from the group using the code. If the groups are separate, odds are that the code is production code. If running the business depends on the code, then it is certainly production code, even if it is developed and maintained in-house.

EDIT: The short answer: If you are "betting the farm on it", it is "production".
This is a great question--an absolutely critical distinction that routinely gets everyone in trouble due to misunderstandings. The question of what is "production" is a subset of the related question of what is an "environment".
So part of the answer is that "production" is THE "environment" that is most important and is most trusted as THE "real" thing.
So now we must define "environment" (and then revisit "production"). We are still far from a satisfactory answer.
We programmers use the term "environment" constantly to refer to computer systems consisting of hardware that is executing software. That software is the code that we wrote plus software that it depends upon, which was written by others. We write our code and integrate it with the other software, then we typically run the integrated software through an escalating series of tests (unit tests, integration tests, functional tests, acceptance tests, regression tests, etc.), until we finally run the integrated software in the full manner in which it was intended.
Of course, not everything is fully automated. There are usually numerous people involved, and they have manual processes to perform. We programmers look for ways to automate as many of these processes as possible, but there is always a "man/machine boundary" in the systems we work on. Often, there are many such boundaries in any particular case.
On the other hand, there may not be any significant automation at all. For example, we spoke of "production" way back when we had a room full of people performing manual labor which produced a product. So, there doesn't have to be any automation present in our "production" "environment". There is also a middle ground, where the automation involved does not include software, such as in the case of a person running a loom to weave cloth.
Also, there may not be a product, since we have adapted our language of "production" "environment" to include product-less service providers.
Likewise, the testing may not involve software, since we may be testing a non-software-driven machine (e.g., the loom) or even the people (training and evaluation).
Now we have touched on all the crucial elements of an "environment":
there is a purpose, an intent, being pursued
an intent requires an intender, so there must be a sponsor (a person or group, but not a machine) that specifies the intent
that intent is pursued through various processes that are performed by various actors
those actors may be people, they may be software executing on hardware, or they may be non-software-driven machines, so there may or may not be automation present
Now we can properly and fully define our original terms.
An environment consists of all the processes and their actors that collaborate to pursue a particular intent on behalf of its sponsor. That means software executing on hardware, that means non-software-driven machines, and that means people performing their various duties. It is the intent that primarily defines an environment, not its processes or its actors.
Furthermore...
If the intent being pursued in a particular environment is the sponsor's ultimate goal, which usually involves producing a product or providing a service in exchange for money, then we refer to that environment as production.
Now we can go a bit further.
If the intent being pursued in an environment is the verification of processes and their actors in preparation for production, we call that a test environment.
We further call it an integration environment if that testing involves the initial joining together of significant individuals or groups of processes and their actors.
If that preparation involves the "programming" of human actors to perform new processes, or the subsequent verification (evaluation), then we call that a training environment.
Armed with these distinctions and definitions, we can now understand several common scenarios.
An environment can be mislabeled with a name that does not match its intent, such as when a training environment is used as test.
An environment can be grossly misused, such as when integration or training is done in production.
An environment can be misrepresented, such as when key processes or actors are left unidentified (e.g., manual reconciliations, or even by ignoring the people altogether).
An environment can be retasked, by repurposing its processes and actors to a new intent. A very successful technique for some organizations is to routinely "flip" several sets of actors (servers hosting software) between production, test, training, and integration upon each release.
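To make that "flip" concrete, here is a minimal sketch, assuming environments are simply labels bound to pools of servers (the host names are invented for illustration):

```python
# Minimal sketch of the "flip" technique: environments are labels bound
# to pools of actors (servers), and a release swaps the bindings.
environments = {
    "production": ["app-a1", "app-a2"],  # hypothetical host names
    "test":       ["app-b1", "app-b2"],
}

def flip(envs: dict) -> None:
    """Swap which pool serves the production intent and which serves test."""
    envs["production"], envs["test"] = envs["test"], envs["production"]

flip(environments)
print(environments)  # yesterday's test pool now carries production's intent
```

Only the intent (the label) moves; the actors themselves stay put.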
In most cases, a single actor (person or hardware) can execute multiple processes which can participate in multiple environments. For example, a single computer server can host software that performs production transactions while also hosting other software that performs test or training functions.
Normally, a single instance of an actor should participate in only one environment at a time. On very rare occasion, a single actor can be shared across environments if the intents are mutually compatible. Most of the time, it is very unwise to attempt such sharing because the intents are not really compatible. A perfect example is running a test process on a server that also supports production processes, resulting in downtime because the test caused the entire server to fail.
Therefore, the intent of an environment must be construed with very wide latitude, to include concepts such as availability, reliability, performance, disaster recovery, accuracy, precision, repeatability, longevity, etc. This means that the actors and processes must often be construed to include things like providing power, cooling, backups, and redundancy.
Finally, note that the situation can get quite complex. For example, a desktop computer (actor) may be tasked by the development team (sponsor) to host their source control (process), which the team relies upon for their primary jobs (production). Nevertheless, the IT staff sees that same desktop computer as simply a developer workstation (development, not production) and treats it with contempt and nonchalance when it develops a hardware problem. But the developers are producing production code, so aren't they also part of production? Perspective matters.
EDIT: Production quality
A solid verification (testing) methodology should take packaged code from development and run it through a series of tests (integration, TQA, functional, regression, acceptance, etc.) until it comes out the other side "stamped" for production use. However, that makes the package production quality, but not actually production. The package only becomes production when a sponsor actually deploys it into an environment with that ultimate level of intent.
However, if your organization merely produces that package (its product) for the consumption of others, then such a release comes as close to production as that organization will experience with respect to that product, so it is common to stretch the term production to apply rather than clarify that it is production quality. In reality, that organization's production environment consists of the actors and processes involved in its development/release efforts that result in that product.
I said it could get quite complex...

Any code that will be used by its intended userbase would fit into my definition of 'production code'.
Of course, the grey area in that definition would be clearly defining who your userbase is.

Production software can handle the necessary workload without disruption or degradation of service
The software has been successfully tested in different production scenarios
Transforming a working prototype into production software that runs on a fail-safe, redundant architecture suited to real business use, i.e. a production environment, takes time, code refactoring, and attention to detail
Production code has an acceptable level of maintainability and is reasonably well commented
The documentation explains the functionality and all features, and facilitates maintenance
If the production software is an international service or application, it must be localized
Production code is used by end-users, often customers, under the conditions described in a Terms-of-Service agreement
Production software does not necessarily mean reliable, mission-critical software
The software does well what it was intended to do
Log files provide an accurate record of run-time performance and software reliability metrics, which facilitates debugging and maintenance (see the logging sketch after this list)
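As one illustration of that last point, here is a minimal run-time logging sketch using only Python's standard library; the log file name, message fields, and handle_request function are assumptions for the example, not anything prescribed above:

```python
# Minimal sketch: emit timing and failure records that later support
# the performance and reliability reporting described in the list.
import logging
import time

logging.basicConfig(
    filename="service.log",  # hypothetical log destination
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def handle_request(payload: str) -> str:
    start = time.monotonic()
    try:
        return payload.upper()  # stand-in for real work
    except Exception:
        logging.exception("request failed")  # feeds reliability metrics
        raise
    finally:
        elapsed_ms = (time.monotonic() - start) * 1000
        logging.info("handled request in %.1f ms", elapsed_ms)

handle_request("hello")
```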

I think the best way to describe it is as any code that "leads-to" deployment and "follows-up" deployment. Deployment itself is defined as all of the activities that make a software system available for use. If your code is ready to be used by people, in-house or otherwise, then it is production code.

In simple words: production code is code which is live and in use by its intended audience.

The term "production code" mixes two different concepts. One is deployment management and the other is release life cycle.
In the strict sense of the word, a system is in production when it is being used as part of a business or service operation. What's not in production are development, testing, QA, demo, and staging systems. Being a production system does not immediately imply quality.
From the release life cycle's point of view, a "production" build is the build that is released to the general public or to clients. It is the stage after pre-alpha, alpha, beta, (feature complete, code complete, etc.) and release candidate. For shrink-wrap products that cannot easily deploy updates, reaching the production stage likely implies a series of testing and bug-fixing rounds.

Related

I want a sandboxed test environment that is *always* an exact copy of Production

I'm having an issue with a web application I am responsible for maintaining.
The system experiences regular bugs, and our support vendors are always asking us to see if we can "replicate the error in UAT". This is obviously a reasonable request. A lot of the time, for various reasons (some of which are clear, some of which are not), these errors are not present in UAT. This lack of bug reproducibility in a testing environment is adding huge amounts of friction to the bug resolution process.
There are 3 key pieces of our system architecture where these bugs are flaring up (the CMS, the API layer, and the database). I am proposing we set up a system job that perpetually clones these 3 parts of the system into a sandboxed test environment. This cloning would happen periodically (e.g., once every 24 hours), and automatically.
Is there a technical term for this sort of environment? Is this an established method of helping diagnose system issues? Is there somewhere I can read up on the industry best practices for establishing something like this? Thanks.
The technical term for this kind of process is replication. It is often done for some systems, such as databases, but normally not for testing purposes; rather, it is done to increase availability, with the replica used as a failover spare.
An exact copy of a production system, with all its data, is not something you'll find often, due to the high demand on resources. Also, at some point the two systems have to differ: most systems (that I know of) have tons of interfaces, so you just can't copy a complete system.
Also: you only need the copy of the production system when you are actually debugging an issue. And if you are in the middle of that, you probably don't want everything to go away and get replaced by a new copy.
So instead I would recommend setting up scripts that allow you to obtain a copy of the relevant parts on demand.
You might also want to consider how you could modify your system to make it easier to set up a copy.
For example, when you have all the setup automated (with chef/docker or similar), you should be able to set up the same system again anywhere you want, so now you just have to get the production data over.
Which raises an interesting point: production data often contains secret information (because it is vital to the business, or because it is personal data). You don't want this kind of stuff hanging around in a test system that everybody can access.
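As a minimal sketch of that on-demand approach (the connection strings and the users table are assumptions for illustration, not details from the question):

```python
# Hypothetical on-demand refresh of one relevant part (the database),
# followed by a scrubbing pass so secrets never linger in the sandbox.
import subprocess

PROD_DSN = "postgresql://readonly@prod-db/app"  # made-up connection strings
TEST_DSN = "postgresql://admin@test-db/app"

def refresh_test_db(dump_file: str = "/tmp/prod.dump") -> None:
    # 1. Snapshot production (read-only credentials).
    subprocess.run(
        ["pg_dump", "--format=custom", "--file", dump_file, PROD_DSN],
        check=True,
    )
    # 2. Load the snapshot into the sandbox, replacing what was there.
    subprocess.run(
        ["pg_restore", "--clean", "--dbname", TEST_DSN, dump_file],
        check=True,
    )
    # 3. Scrub sensitive fields (here: a hypothetical users table).
    subprocess.run(
        ["psql", TEST_DSN, "-c",
         "UPDATE users SET email = 'user' || id || '@example.invalid';"],
        check=True,
    )

if __name__ == "__main__":
    refresh_test_db()
```

Run it only when a debugging session actually needs fresh data, rather than on a fixed 24-hour clock.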

Should developers be limited to certain software for development?

Should developers be limited to certain applications for development use?
For most, the answer would be that as long as the development team agrees, it shouldn't matter.
For a company that is audited for security certifications, is there a method that balances the company's risk against the flexibility and productivity of the developers?
Scope
coding/development software
build system software
3rd party software included with distribution (libraries, utilities)
(Additional) Remaining software on workstation
Possible solutions
Create a white-list of approved software, where a developer must ask for approval before using a desired piece of software. Approval would be based on business purpose/security risk.
Create a black-list for software. Developers list all software used, and a review board periodically goes over the list.
Has anyone had to work at a company that restricted developer tools beyond the team setting? How did they handle the situation?
Edit
Cleaned up question. Attempted to make less argumentative.
Limiting the software that developers can use on their work machines is a fantastic idea. This way, all the developers will quit, and then the company won't have to spend as much money on salaries and equipment, resulting in higher profits.
Real answer: NO!!!
No, developers should not be limited in the software they use, because it prevents them from successfully doing their jobs. Think about how much you are paying your team of developers. Do you really want all that money to go spiraling down the drain because you've artificially prevented them from solving problems?
1) The company locks down the PC and treats the developer as no more competent than a secretary
What happens when the developer needs to do something with administrative permissions? E.g., register a COM object, restart IIS, or install the product they're building? You've just shut them down.
2) Create a white-list of approved software...
This is also impractical due to the sheer amount of software. As a .NET developer I regularly (at least once per week) use upwards of 50 distinct applications, and am constantly evaluating newer upgrades/alternatives for many of these applications. If everything must go through a whitelist, your "approval" staff are going to be utterly swamped by just one or two developers, let alone a team of them.
If you take either of these actions, you'll achieve the following:
You'll burn giant piles of time and money as the developers sit on their thumbs waiting for your approval team, or doing things the long slow tedious way because they weren't allowed to install a helpful tool
You'll make yourself the enemy of the development department (not good if you want your devs to actually do what you ask them to do)
You'll depress team morale substantially. Nobody enjoys feeling like they're locked in a cage, and every time they think "This would have been finished 5 hours ago if only I could install grep", they'll be unhappy.
A more acceptable answer is to create a blacklist for "problem" software (and websites) such as Pidgin, MSN Messenger, etc., if you have problems with developers slacking off. Some developers will also rail against this, but many will be OK with it, provided you are sensible in what you blacklist and don't go overboard.
I think developers should have total control over the applications they use, as long as they can do their job with them. Developers' productivity is directly related to their working environment; no one likes being restricted, and everyone prefers to use the software they themselves like.
Of course there should be some standards in terms of version control, document formats, etc., but generally developers should have the right to use any programs they want.
And security shouldn't be the developer's concern - company admins should take care of setting up a proper firewall to protect against all kinds of attacks.
A better solution would be to create a secure, independent environment for the developers - an environment that, if compromised, won't put the rest of the company at risk.
The very nature of development is to create crafty, ingenious, pithy solutions. To achieve this, failures must happen.
Whatever they do, don't take away the Internet in general. Google = Coding Help 101 :)
Or maybe just leave www.stackoverflow.com allowed haha.
I'd say this depends on quite a list of factors.
One is team size. If you have a team of half a dozen developers, this can be negotiated whenever a need for some application pops up. If you have a team of 100 developers, some policy is probably in order.
Another factor is what those developers do. If they compile C code using a proprietary compiler for an embedded platform, things are very different from a team producing distributed web or PC software in a constantly shifting environment.
The software you produce and the target customers are important, too. If you're porting the Linux kernel to some new platform, whether code leaks probably doesn't matter all that much. OTOH, there are a lot of cases where this is very different.
There are more factors, but in the end it all boils down to two conflicting goals:
You want to give your developers as much freedom as possible, because that stimulates their creativity.
You want to restrict them as much as possible, as this reduces risks. (I'm talking of security risks as well as the risk to ship non-functioning software etc.)
You'll have to find a middle ground that doesn't hurt creativity while providing enough guarantees to not hurt the company.
Of course! If you want a repeatable build process, you don't want it contaminated by whatever random bit of junk a programmer happens to use as a tool to generate part of the code. Since whatever application you are building lasts much longer than anyone expects, you also want to ensure that the tools you use to build it are available for roughly the same duration; random tools from the internet don't provide any such guarantee.
Your team should say "The following tools are allowed for build steps and nothing else" and attempt to make that list short.
Obviously, it shouldn't matter what a programmer looks at to decide what to do, so the entire Internet is just fine as long as it's look-only. Nor does it matter if he produces code by magic (or a random tool), as long as your team doesn't mind accepting that tool's output as though it were written by hand.

What are the differences between programming in an IT department and a Product Development department?

Our company has recently decided that a good section of our IT department is actually doing product development and not internal IT development and now has created a new department.
What are the types of changes that developers should be looking to make during this type of transition?
Is there really any difference between internal development and product development?
I don't know how quickly the differences will assert themselves in an existing group which is transitioning from one role to the other, but having worked as both an internal developer and a product developer, two huge differences leap to mind: requirements and testing.
As a developer of internal tools, I was pretty much given free rein regarding interface, organization, and even scope. Specs were in the form of an email saying "can you write something that does X?" Similarly, testing was almost non-existent. After whatever testing I was able to do, the tools would be deployed directly to their target audience, and bug reports, when they came at all, came directly from those end-users, again usually via email or even hall-tackle.
Now that I'm doing product development, the difference is dramatic. Specs are 30-100 page Word documents and we have a dedicated testing department which makes darn sure that what we produce matches those specs. I'm much better supported by the project managers and have clear channels for any feedback I have on requirements or design. It could be argued that product development offers less freedom to the individual developer, but in exchange for being part of a (hopefully) better-organized and better-supported team.
[I work with Jeff.]
As others have mentioned, a key difference stems from the nature of the user:
When developing internal applications, you're typically dealing with users from a single department or group who use a limited number of apps, and they're mostly a captive userbase.
When developing external products, you're dealing with users who see the whole enchilada, and they're paying customers who can take their business elsewhere.
That difference matters because a coherent user experience across multiple applications becomes much more important for paying customers who see the entire set of apps.
In the IT case, neither the business nor the users themselves are typically concerned about the fact that their app doesn't look and work like some other app that some other department uses. And if for whatever reason they did care, their ability to do anything about it is more limited. While it's possible to make a business case for consistency across apps (branding to internal employees, usability for employees who use multiple apps, reduced development costs resulting from the reuse of common libraries, patterns, services, etc.), this kind of concern typically takes a backseat to the development of new business functionality.
In the product case, the users do care about coherence across the entire set of apps, and they can do something about it if the experience is weak.
So one major difference is the relative importance of a coherent, quality user experience. But that goal itself has significant organizational ramifications that we're already starting to see.
We'll see an increased emphasis on "horizontal" activities that seek to establish standards and improve communications across teams, since such activities directly support the goal of producing coherent products. Cross-cutting teams (like user experience) will become more influential than they formerly were, we may see new cross-cutting teams (e.g. teams that look at architecture across many systems rather than just a single system), we'll probably see more presentations about what different apps do, more cross-training, etc.
App development teams will have less autonomy than they formerly had in charting their own course. The user experience team (working with end users, business stakeholders and engineering teams) will specify standards around visual design, interaction design, etc. and the app teams will be expected to adopt those. We'll see test practices become more standardized. Builds and deployments across apps will be more uniform and coordinated. Monitoring, alerting, response time SLAs and other operational metrics will be more uniform across apps. The app teams won't be able to define these for themselves anymore (though they can certainly contribute to the larger discussion).
Management will increasingly allocate resources in a way that seeks to optimize globally across the entire product instead of optimizing locally for individual apps. So where in the past the composition of individual app teams has been relatively fixed (developers, SQA, etc.), going forward we should expect to see more fluidity.
A huge difference is in your customer. IT developers have the rest of the company (and sometimes partner / subsidiary companies) as their primary customers. Product Development developers have external customers (i.e. the people who buy the product that is the company's reason for existing) as the primary customer.
Yes, very much so. There's a huge difference between performing an activity that drives revenue and invoices and one that's perceived as overhead.
My experience is that people working on the product development side of the house have bigger budgets, better training, better travel, and more skilled employees. Having worked in a product development company, it always felt like the lower-skilled employees were thrown over to the IT department.
In-house development is very process-oriented. They just want to deliver xyz functionality and have it work according to a company-wide strategy. It's not the iterative make-the-product-better cycle you get when your primary product is your code. As a result, in-house stuff is often just 'good enough', whereas software companies tend to keep improving their product long after it 'just works'.
Note: in-house dev teams can get away with being 'good enough'; software development companies can get away with it for a while, but will ultimately lose out to the ones who strive to improve.
I think moving between both environments could be a shock to the system in either direction but that's not to say they both can't be run in the same way - just that in my experience they probably won't. For example, UI is usually given a lower priority when it is in-house software as the customers are generally getting paid to use it rather than paying to use it.
I have spent equal amounts of time in both types of organizations. The particular organization I worked for where software development was part of an IT department treated the development of software as a cost center, whereas software development as part of a product was seen as a profit center.
The two are very different. The skill level of developers in my case was vastly different--those working on a public product were overall better skilled and cared more about the quality of their work. As a developer of a real public product, you are actually making the company money.
In my internal software job I typically had a set of known, fixed requirements mostly worked out. I designed a solution, but was always given a deadline that was unreasonable if quality was a concern (including code quality), rushed the coding, and delivered the result. Any bugs that slipped past the short QA process typically only got fixed if users made formal requests for fixes.
Product development in my experience is almost the reverse. Not all the requirements are fixed (only what I was working on for that week was fixed), design is usually dictated by whoever has worked on the product the longest, and I get to decide how long something takes (but have to really explain why and justify the time); coding is usually not rushed. Experimenting with ideas is generally more difficult in product development, because a product that is meant for public use should use tried and tested approaches.
Therefore, I would say that if creativity is really important then product development may not be for you because a particular idea that you personally have is unlikely to ever make it into the product unless you can make a business case for it and somehow make it more important than what the business has already been planning.
Choosing a particular library is also more difficult depending on how the software is deployed. For example software used by the government typically has to pass Common Criteria certification, which can eliminate certain library choices.

Is including system test cases into the final packaged product of your application contributing to bloat or increasing risk?

I am packaging up an RPM file which has a %postinstall section that detects certain conditions and runs a suite of unit, function, and system tests. I am getting some push-back that it exposes some of the internal structure, as I use some of the same environment variables the code itself uses for diagnostics. Thoughts?
UPDATE: I am not planning on running the tests automatically nor exposing their existence to the end users. I am proposing that the testing package simply be available on any machine where the suite lands. It adds roughly 3% to the final size of the package and requires an obscene amount of internal knowledge to execute properly.
The program itself is a library which others may use and which is exposed through an API. The internal knowledge of how things function is not at issue. My main motivation is the lack of suitable test resources and the large variability in the target environment. Some of the tests are really simple (similar to what configure might do to determine that all the right features are available from the compiler). Other tests are more involved, and they prove the basic functions the library should provide.
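To illustrate the "available but never run automatically" proposal, here is a minimal sketch of an opt-in entry point; the MYLIB_RUN_SELFTEST variable and the install path are hypothetical, not part of RPM or the package in question:

```python
# Hypothetical self-test runner shipped alongside the library.
# It is installed by default but inert: nothing happens on install,
# and an operator must opt in explicitly.
import os
import sys
import unittest

def main() -> None:
    if os.environ.get("MYLIB_RUN_SELFTEST") != "1":
        sys.exit("Self-test disabled; set MYLIB_RUN_SELFTEST=1 to run it.")
    # Discover the shipped suite (made-up install location).
    suite = unittest.defaultTestLoader.discover("/usr/share/mylib/selftests")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)

if __name__ == "__main__":
    main()
```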
If you want to avoid the complaint that it runs on every install, at least use RPM's %check section, which runs at build time rather than at install time.
Sounds like people are concerned about "reverse engineering". So the software is proprietary? This would seem to be the crux of your problem. Regardless, it's common for the test suite to be separate from the packaged software.
However, you're not being unrealistic: Allowing users to run tests themselves on their systems and give you the results is a great aspect of a collaborative relationship with users. Unfortunately, you're running up against the proprietary business model.
Perhaps you can compromise by trimming down or rewriting the tests and the diagnostics so they prove an adequate amount of fitness without revealing too much. I wouldn't shy away from throwing out some of what you've written so far.
You really should make the argument that users will be pleased by, and have more confidence in, a software package shipped with a thorough testing system, and that this outweighs any fear of revealing the software's internals.

Test accounts and products in a production system

Is it worth designing a system to expect test accounts and products to be present and active in production, or should there be no contamination of production databases with test entities, even if your shipping crew knows not to ship any box addressed to "Test Customer"?
I've implemented messaging protocols that have a test="True" attribute in the spec, and wondered if a modern schema should include metadata for tagging orders, accounts, transactions, etc. as test entities that get processed just like any other entity--but stopping just short of the point where money gets spent. I.e., it fakes charging an imaginary credit card and fakes the shipment of a package.
This isn't expected to be a substitute for fully separated testing, development, and QA databases, but even with those, we've always had the well-known Test SKU and Test Customer in the production system. Harmless?
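For concreteness, here is a minimal sketch of the kind of tagging being asked about; the is_test flag, the Order type, and the charge gate are hypothetical:

```python
# Hypothetical sketch: test entities flow through the same code path as
# real ones, stopping just short of the point where money gets spent.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: int
    sku: str
    amount_cents: int
    is_test: bool = False  # the test="True" idea from the messaging spec

def payment_gateway_charge(order: Order) -> str:
    raise NotImplementedError("real gateway integration goes here")

def charge(order: Order) -> str:
    if order.is_test:
        # Exercise the full path, but fake the charge.
        return f"FAKE-CHARGE-{order.order_id}"
    return payment_gateway_charge(order)

print(charge(Order(order_id=1, sku="TEST-SKU", amount_cents=999, is_test=True)))
```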
Having testing accounts in production is something I usually frown upon, because it opens up a potential security hole. One should strive to duplicate as much of the production environment in testing as possible, but there are obviously cases where that isn't possible; expensive production-only hardware is a prime example. I would say that as a general practice it should be discouraged, but as with all things, if you can provide a reason that makes sense to you, then you might overlook a hard-and-fast rule.
I imagine the Best Practice Police would state the mantra "never ever test in prod" and maybe even throw in "developers should not have access to prod".
However, I work on a mainframe-based system where there are huge differences between production and test/qa/qc; the larger the system, the more likely such a situation is. Additionally, the more groups that have a stake in the application, the more likely this is.
I need more than two hands to count how many times we could only duplicate a problem in the production environment. The option then becomes creating test tables/users/data or using live customer data.
At times we do also create test records in production tables, as some users/clients like having something they can search/retrieve that is always there.
So my advice is that it is OK to put test accounts/products into production if it will help to troubleshoot after go-live.
If your database is created from scripts in an automated fashion, then this becomes a non-question.
In my environment we use CruiseControl for continuous builds. The SQL scripts for generating the database are checked into CVS with everything else, and the database is rebuilt from those scripts on a daily basis.
Our test data is a second set of SQL scripts, which are run for the test database and are not run for the production database.
Given our environment test data never touches the production database.
This solution really works great for us.
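A minimal sketch of that two-script-set idea; sqlite3 stands in for the real database, and the directory layout is made up:

```python
# Rebuild a database from versioned schema scripts; apply the second
# set of scripts (test fixtures) only for the test database.
import glob
import sqlite3

def rebuild(db_path: str, include_test_data: bool) -> None:
    conn = sqlite3.connect(db_path)
    # Schema scripts are always applied...
    for script in sorted(glob.glob("sql/schema/*.sql")):
        with open(script) as f:
            conn.executescript(f.read())
    # ...test data only when building the test database.
    if include_test_data:
        for script in sorted(glob.glob("sql/testdata/*.sql")):
            with open(script) as f:
                conn.executescript(f.read())
    conn.commit()
    conn.close()

rebuild("test.db", include_test_data=True)   # e.g. the nightly CI build
rebuild("prod.db", include_test_data=False)  # production never sees fixtures
```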
I wouldn't put any test data in a production system, nor would I want to have access to that system as a developer.
I'm working in an industry with very sensitive medical and financial information, and having test entities in production would make it impossible to distinguish production data from data that came out of the testing system.
IMHO the best practice is to completely separate these two worlds and invest in setting up a procedure to prepare a comprehensive testing environment.
In our ERP systems (internally accessible only) we have test data so that when we move changes from the test environment to the production environment we can test the whole process. I view that data as a necessary evil, since subtle configuration differences between systems can cause catastrophic results, so once a change is in production we test it fully before "releasing" it to the users.
As I said though, these are internal apps only, so the security risks are lessened somewhat - that's a very valid concern.
Never ever test in prod, even though that is where all the revenue is generated/stats are collected/magic happens...?
Always have a production test plan. There are going to be problems that happen on prod, or, if you are unlucky, that only happen on prod. If you don't have anything in place, then the first time you need to test on prod (usually a high-stress situation) you'll be up the creek without a paddle.
It's not harmless to have test data on prod; you do need to be careful.