What makes an application or a software development process "Enterprise"? [closed]

After reading Wolfbyte's answer on Enterprise FizzBuzz, I have been thinking about what makes a program "Enterprise".
What makes an application or a software development process Enterprise?
EDIT: It seems like there is a lot of negativity around the word Enterprise.
Does anyone actually enjoy writing Enterprise-level applications?

What "enterprise-level" really means is:
Compatibility with architectural schemes and long-term technical plans that overarch anything you or your team will ever do, and thus cannot be changed.
Conforms to governance requirements
Expensive to build and maintain ;)
Has the following qualities:
Maintainability
Scalability
Functionality
Reusability
Reliability
Understandability
Usability
Modifiability
Testability
Portability
Efficiency
Flexibility
Modularity
Interoperability
As far as "enjoying" writing Enterprise-level apps, it can be difficult to do so because one of the characteristics of an enterprise system is that it's larger than any one person. People usually enjoy their work because they can take ownership of it, but enterprise development isn't really "owned" in that sense, rather it's "produced" through a rigid, complex project path guided by acceptance gates and steering committees and business project owners.

Think about all the things that you, as a programmer, care about in a software product.
Now think about all the things that you, as a user, care about in a software product.
Now forget about all of those things. Enterprise software isn't purchased by users or programmers. Requirements like "intuitive", "fast", or "interoperable" just don't apply.
Instead, they must meet requirements such as, "vendor published big fat whitepaper full of words like 'fast', 'intuitive', and 'interoperable', so when the peons complain that it makes their jobs more difficult we have something to point at while writing 'difficult' into their employee records".

Slow. Hard to use. Expensive. Based on obsolete technology. See the Rails plugin "acts_as_enterprisey".
I kid.
Seriously though, it generally refers to things written for use by Fortune-1000 types, where there are large numbers of users and complex business rules.

If you're an ordinary developer, it's anything bigger than what you're working on now.
If you're an architect, it's the stuff you did at the last client.
If you're the CIO, it's all the stuff that "really matters" -- the stuff above baseline, keep-the-lights-on operations.
If you're in sales, it's what you're bidding on.
If it's your product, of course it's enterprise-ready. You just spent a year making it "scalable" so it would grow to support "the enterprise".
If it's open source, of course it can't be enterprise-scale. Nor, for that matter, is your competitor's product.
And, of course, it varies by client. For a $1B-per-year company, a few Oracle financial reports were an Enterprise Initiative. For a Fortune 100 company, almost nothing is really "enterprise" because the entire enterprise is so big and globe-spanning that it's hard to comprehend any one thing that actually fits all the nooks and crannies of that conglomerate business.
Usually Enterprise is used in the negative. "Your software/service/product/offering isn't enterprise ready" or "Open source isn't suitable for enterprise computing".

An Enterprise Application is usually something that has multiple tiers, runs on many machines, and is designed to fulfill the needs of a large organization. In practice it usually has a database backend, a business-logic middle tier, and some kind of frontend like a web interface. It likely has performance and high-availability requirements, as well as backup, logging, auditing, and authentication.
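A minimal sketch of that tier separation, with hypothetical names (the business tier knows nothing about SQL or HTTP):

    import sqlite3

    # Data tier: owns all SQL.
    class OrderRepository:
        def __init__(self, conn):
            self.conn = conn

        def total_for(self, customer_id):
            row = self.conn.execute(
                "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = ?",
                (customer_id,),
            ).fetchone()
            return row[0]

    # Business tier: rules only, no SQL, no HTTP.
    class BillingService:
        def __init__(self, repo):
            self.repo = repo

        def invoice(self, customer_id):
            total = self.repo.total_for(customer_id)
            # Hypothetical business rule: 5% volume discount over 1000.
            return total * 0.95 if total > 1000 else total

    # The presentation tier (e.g., a web interface) would call BillingService,
    # never OrderRepository directly.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 600), (1, 700)])
    print(BillingService(OrderRepository(conn)).invoice(1))  # 1235.0

The point of the layering is that each tier can be scaled, audited, and replaced independently, which is what the performance, logging, and authentication requirements above end up demanding.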

Related

Tool to migrate from Embedded SQL to ODBC [closed]

I have a bunch of C code accessing databases (Oracle, DB2, and Sybase) through Embedded SQL: the base code is the same, but with three different precompilers, three sorts of executables are built, one for each database/platform.
It works perfectly fine, but we now need to migrate to a solution using ODBC access.
The problem is: what tools/APIs can be used? A direct way seems to be writing a custom precompiler (or modifying an existing one) to turn all SQL and host-variable calls into calls on an ODBC connection.
Can somebody recommend tools or APIs for that task, to keep it simple?
Or is there a simpler way, another approach?
Thank you
As is usual in such situations, there are likely no off-the-shelf answers; people's codebases always have a number of surprises in them, and the combination prevents a COTS tool from ever being economical for individual situations.
What you want is a program transformation system (PTS) with a C front end that can be customized to parse embedded SQL. Such tools can apply source-to-source rewrite rules ("if you see this pattern, then replace it by that pattern") to solve the problem.
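As a toy illustration of what one such rewrite rule must produce (and of why ad hoc scripting only partially does the job), here is a hedged sketch in Python that handles exactly one statement shape and gives up on everything else. The hardcoded C types are precisely the kind of detail a real tool must derive from the host-variable declarations:

    import re

    # Toy pattern: handles only the single-column "SELECT ... INTO :v" form.
    EXEC_SQL = re.compile(
        r"EXEC\s+SQL\s+SELECT\s+(?P<col>\w+)\s+INTO\s+:(?P<host>\w+)\s+"
        r"FROM\s+(?P<tbl>\w+)\s+WHERE\s+(?P<key>\w+)\s*=\s*:(?P<parm>\w+)\s*;",
        re.IGNORECASE,
    )

    def rewrite(stmt):
        """Rewrite one embedded-SQL statement into an ODBC call sequence."""
        m = EXEC_SQL.match(stmt.strip())
        if m is None:
            raise ValueError("pattern not handled -- port this one by hand")
        sql = "SELECT {col} FROM {tbl} WHERE {key} = ?".format(**m.groupdict())
        return "\n".join([
            'SQLPrepare(stmt, (SQLCHAR *)"%s", SQL_NTS);' % sql,
            "SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_LONG,",
            "                 SQL_INTEGER, 0, 0, &%s, 0, NULL);" % m["parm"],
            "SQLBindCol(stmt, 1, SQL_C_CHAR, %s, sizeof(%s), &ind);"
            % (m["host"], m["host"]),
            "SQLExecute(stmt);",
            "SQLFetch(stmt);",
        ])

    print(rewrite("EXEC SQL SELECT ename INTO :name FROM emp WHERE empno = :eid;"))

Any real codebase will immediately break this (multi-column selects, indicator variables, cursors, WHENEVER handlers), which is exactly the "surprises" point above.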
These tools require some pretty technical effort to configure. In your case, you'd have to adjust a C front end to handle embedded SQL; that's typically not in C parsers. (How is it that you can process this stuff in its current form?) You'll have trouble with the C preprocessor, because people do abusive things with it that really violate a parser's nested-structures view of the universe. Then you'll have to write and test the rules.
This effort is a sunk cost to be traded against the effort of doing the work by hand, or of some more ad hoc scripting (e.g., Perl) that partially does the job and leaves you to clean it up. Our experience is that it is not worth the trouble below 100K SLOC, that you have no chance of manual/ad hoc remediation above 1M SLOC, and that in between your mileage will vary.
At these intermediate sizes, you can agonize over the tradeoffs; that costs energy and time, too. Sometimes it's just better to bite the bullet, do it any way you can, and clean it up.
Our DMS Software Reengineering Toolkit is one of these PTSs. It has a customizable C parser and preprocessor, precisely to help deal with these configuration troubles. The other PTSs mentioned in the Wikipedia article do not, I believe, have any serious C parser associated with them. (I'm the guy behind DMS.)

What are the arguments for creating your own ORM layer? [closed]

The advantages of ORM are pretty clear. But I noticed that some companies prefer to build their own homemade ORM. Why?
There are only two arguments that I can possibly see for ever hand-rolling your ORM (and these have happened to me in the past, which forced me to write my own):
The company refuses to use Open Source software because of liabilities they assume might creep into their application.
The company refuses to spend money on a commercial ORM.
Any other argument (like "the quality of Entity Framework is too poor for us to use it") is completely moot. No matter how bad Entity Framework (or whatever other ORM you may be referring to) is, you're not going to come close to its robustness and reliability by hand-rolling your own.
As O/R mappers are very complex pieces of software, writing your own which goes beyond the typical datareader wrapper and pre-fab SQL query executor will take a lot of time (think 6+ months full time at least). That's not the biggest problem. The biggest problem is that once you go with your own O/R mapper, you have to maintain it for the rest of the time the application using it is in production. Which can be a long time. Make no mistake, maintaining an O/R mapper yourself is not a simple task: you have to re-invent every trick O/R mapper developers already know about and have solved themselves.
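For scale, the "typical datareader wrapper" mentioned above is roughly this much code; everything a real O/R mapper adds beyond it (identity maps, change tracking, lazy loading, per-vendor SQL dialects) is where those months go. A minimal sketch with hypothetical names:

    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class Customer:
        id: int
        name: str

    def fetch_all(conn, cls, table):
        """Map rows to objects by matching column names to field names."""
        cur = conn.execute("SELECT * FROM %s" % table)
        cols = [d[0] for d in cur.description]
        return [cls(**dict(zip(cols, row))) for row in cur.fetchall()]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO customer VALUES (1, 'ACME')")
    print(fetch_all(conn, Customer, "customer"))  # [Customer(id=1, name='ACME')]

This is the easy 5%; it breaks down the moment you need relationships, updates, caching, or anything beyond SELECT *.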
Last but not least: doing this yourself should not be done on a billable contract. After all, you're writing infrastructure code which is already available elsewhere.
I know I'm biased (I wrote LLBLGen Pro), but I also am one of the few people in this industry who has written a full O/R mapper framework and knows what it takes to get a decent one up and running with good performance and a great feature set.
Simply do the math: if an O/R mapper framework license costs $1,000 (or less) and lets you start right away on your customer's application, how many hours does that $1,000 buy you to build, and then maintain, your own O/R mapper without costing the company any money? At any realistic hourly rate it's a few days at most, against the six-plus months estimated above. There is no way you can do it for that money.
If you have an in-house database that has evolved to have a bad schema, it can be simpler to write your own ORM layer than to get an out-of-the-box solution to play nice with it.
In my opinion, ORMs are specialized and purposed to solve typical problems. If you want a more generic solution (e.g., for much more complex queries) or just different functionality, you can either modify an existing solution (which, for various reasons, often isn't the best choice) or create your own.
ORMs also limit you by forcing you to use their conventions and accept their limitations.

Agile practices on embedded software development [closed]

I have had great success e.g. with fast development cycles and continuous integration.
However, I think pair programming or continuous customer communication are less useful due to issues specific to embedded software programming.
What do you think? What are the most useful agile practices on embedded software development?
I would have to disagree. I've done it, and about 10 years ago I co-founded an agile coaching company specializing in embedded (we're no longer a company but the website is still up with several useful resources). I recently helped another company adopt agile for their embedded project, and it worked very well for them.
Agile practices like short iterations, pair programming, and frequent communication with the customer are even more important with embedded software because there's more at stake, both because embedded systems are usually harder/more expensive to update in the field, and because they are often used in mission-critical applications.
As for pair programming, if your company only has one person that knows the first thing about a component of the software, that's a huge risk, and pair programming is a great way of doing cheap knowledge transfer. Both developers don't have to be experts in that part of the code. You can have a primary that is and a secondary who isn't. The secondary partner is able to offer help on program structure, compare design decisions, ensure proper testing and documentation, etc. Of course each developer has to be a primary sometimes and secondary other times to make the crosstraining effective. This is also a very effective way of bringing new developers up to speed on your products.
Lastly, customers care about features and plans, not code. Embedded doesn't change this. Showing off what you have so far and what you plan to do next ensures you're working on what you're supposed to.
Embedded software development is no different from normal software development, so you can use every agile practice you find useful.
Concerning pair programming, I look at it as a code review on steroids. If your company can afford enough SW engineers, I don't see a reason why it could not be used for embedded software development.
By the way, what exactly do you consider "issues specific to embedded software programming"? I do not have experience in non-embedded software development, and I do not see how it could be different.
The value of Agile in many applications is not obvious to me.
Many applications, including embedded applications, often include standards based protocols or technologies. You download or buy the specification, you implement the specification, testing as you go, and then you are done. What would I do at my daily standup, "Um, today I read pages 1 through 9 of the standard, tomorrow I plan to read pages 10 through 17". How does standards based development benefit from Agile? Quick response to changing customer input, um, no. The standard doesn't change from day to day.
If Agile really means "training", then pair programming fits. As pointed out above, unless you can afford exactly double the number of engineers, it is likely you will have different specific skill sets among your engineers. In a large organization with many engineers with overlapping, duplicate skills, maybe you can pair engineers efficiently. In a smaller organization, how does that work? Unless it is actually paired training, then OK. It sounds expensive, though.
Often a huge amount of infrastructure is required just to host or deploy the smallest amount of first pass functionality. How might I do test driven development for an embedded flight controller, or automotive engine controller? Years of effort are required just to get the infrastructure in place to host a test. I certainly don't want the rest of my designers and engineers sitting around idle waiting for the test infrastructure so they can do TDD. I need standards driven development while waiting for many of the pieces to come together.

What is “mature” software? [closed]

Jeffery Palermo asks "Is Classic WebForms More Mature Than ASP.NET MVC?"
It seems to be subjective, but what I want to know is: what exactly is "mature" software?
The answer is very subjective. But basically, if the software meets most of these criteria (in no order of importance):
secure
reliable
actively maintained
has active community
field-proven
Then it can be considered "mature".
It is important to note that different clients expect different levels of "maturity". A large corporation would demand that the software it uses be secure enough to protect its sensitive data, and that it be supported by a support rep available 24/7. For a small private project of your own, by contrast, you might care much less about security, and you do not need (nor can you afford) a service package that includes 24/7 customer support.
So maturity varies according to the client, but the basic criteria remain the same.
Mature is when people have figured out how to deal with it.
(And we're talking about development platforms not about end-user apps, aren't we?)
For example, JavaScript only became mature with the introduction of Prototype, jQuery, and the like.
Before that, people tended to code strange things they'd regret.
So you're asking for subjective opinions on a subjective topic. :)
I would say, mature would add the following characteristic to a technology:
People know how to use it, know its possibilities and limitations
People know what the typical usage scenarios are, patterns, what are good usage scenarios for this technology so that it shows its best
People have found out how to deal with limitations/bugs; there is community knowledge and help out there
The technology is trusted enough to be used not only by individuals but in productive commercial environment as well
Reduce Subjectivity by Developing a Measuring Tool for yourself.
My Criteria are for Business Software:
Feature Rich - handle lots of Business Rules
Flexible - Selectable Features via Parameters & Configuration
Stable - Few, if any bugs causing malfunction such as crashes
Well Documented - User and technical Documentation
User Friendly - as attested and recommended by users
Robust - Not very much fazed by events such as power failures and erroneous user input.
Installs & Runs "out of the box".
Take all the criteria and place them in a spreadsheet with rating columns from 0-5, and rate each criterion by ticking the corresponding column.
If overall score is 25 or better then the software is mature.
If the score is 15 to 24 then the software is average.
If below 15 then the software is immature.
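The same scoring sheet, sketched in code with hypothetical ratings:

    # Hypothetical ratings (0-5) against the seven criteria above.
    ratings = {
        "feature rich": 4,
        "flexible": 3,
        "stable": 5,
        "well documented": 2,
        "user friendly": 4,
        "robust": 4,
        "installs and runs out of the box": 3,
    }

    score = sum(ratings.values())  # maximum is 7 criteria x 5 = 35
    if score >= 25:
        verdict = "mature"
    elif score >= 15:
        verdict = "average"
    else:
        verdict = "immature"
    print("%d/35: %s" % (score, verdict))  # 25/35: mature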
Mature software has to be whatever you mean it to be. I don't think you will find an easy mechanism for measuring maturity, and everyone's definition is going to differ anyway.
It's always going to be a subjective view I'm afraid and therefore subject to a lot of argument.
I would say that mature software is stable, well documented, widely used and well tested.

Moving from Enterprise to World Wide Web [closed]

I am going to change my working sphere from enterprise web applications written for concrete business processes to public web sites accessible to all users.
What is the difference between these two spheres at the top level? What specifics do I need to know about modern web site development?
I suspect one could write books about this.
I suppose the first difference is the user base. With an enterprise, you can, at least partly, ensure the users are doing what they are supposed to - and if not you know who they are and where they live. Further, they can be fired for abuse. On a public web site, you almost have to assume that some part of your user base is not there for a positive reason. So be paranoid - if they're not attacking you yet, just wait.
A second related point is that users will find ways to use (abuse?) your site you never thought of. Plan for the worst, hope for better.
Third, language, culture, and usage vary across the world. A form with a "zip code" field that accepts just five digits may make sense in the US but is useless in the UK. And asking for a state and restricting it to two characters likewise makes no sense, say, in Italy, where Italy IS the "state" (see the sketch below). This also applies to actual content: that joke you think is so very funny may be offensive in other countries. And never underestimate the ability of some folks to be offended at anything.
Fourth, get a good bunch of beta testers and test your site, and updates, carefully and thoroughly.
Fifth, have a plan for scalability: if you suddenly get "discovered", can your site take the traffic?
That's 5 things at least.
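On the third point, a minimal sketch of locale-aware form validation; the patterns here are deliberately simplified stand-ins, not authoritative postal formats:

    import re

    # Simplified illustrative patterns; real postal formats are messier,
    # and some countries have none at all.
    POSTAL = {
        "US": r"\d{5}(-\d{4})?",                    # ZIP or ZIP+4
        "GB": r"[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}",  # rough UK postcode
        "IT": r"\d{5}",                             # Italian CAP
    }

    def postal_ok(country, code):
        pattern = POSTAL.get(country)
        if pattern is None:
            return True  # unknown country: do not block the user
        return re.fullmatch(pattern, code.strip().upper()) is not None

    print(postal_ok("US", "12345-6789"))  # True
    print(postal_ok("GB", "SW1A 1AA"))    # True
    print(postal_ok("US", "SW1A 1AA"))    # False

Note the fallback: when you do not know the local rules, it is usually better to accept the input than to lock a legitimate user out.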
In an enterprise application, functionality and efficiency trump aesthetics every time. This is because you have a captive audience. The people who use your application are being paid to use it.
However, when opening an application up to the public, aesthetics becomes more important. There are always alternatives, and a given person will be more attracted to the application which looks better. Granted, functionality is still very important for repeat users, but you won't get people in the door if your application looks amateurish.
Browser agnosticism - In enterprise apps, it used to be that the developer would target the app at a specific browser, just for simplicity's sake.
In internet-accessible apps, the developer must target the vast majority of browsers. While this has gotten easier in the last few years, it is still an issue that needs attention.
Scalability - it's easier to scale an enterprise app: it's easier to predict the growth of usage, or simply to design for access by all users in the org at once. This is not generally the case for internet sites. The day you get slashdotted or dugg is the day you learn this. Better to design scalability in from the start than to have to learn it when your site starts to suffer.
In addition to Zack's answer, I would say that a web site/application that is open to the public needs to be constantly evolving/refreshed in order to grow your user base and keep them. Whereas on a more closed system, consistency and reliability are key priorities.
Depending on the nature of the application, if it has significant amounts of content, internationalization and presentation of that content are hugely important.
As Zack mentions, public users have a lot less tolerance for poor UI than enterprise customers do. That said, public users are more tolerant of incremental change; you can upgrade a live site as you feel like it (as long as it works, of course!!) without having to go through endless feature-request prioritization committees and user-training requirements.
Public web sites needs to be easy to use. While it's important that they look somewhat polished, don't ever let polish get in the way of ease of use. For example many designers like fixed width layouts because they are more predictable, many users like fluid width layouts because they use the space more efficiently. Side with your users.
Enterprise users can be forced to deal with needlessly-complex systems (lord knows I am more than I'd like), the general public cannot.