Functional Server Naming Conventions

I've seen "The Coolest Server Names," and I've seen another smaller-ish question related to mine, which was unfortunately closed.
It's a serious question though, as I'm on an internal applications dev team that manages the apps on a couple dozen servers. The networking folks typically don't care what we call the servers as long as they know about 'em, so we can come up with whatever conventions we like.
The apps the servers deal with can be home-grown custom apps, or they can be larger vendor ones like SharePoint. They can be:
In multiple networking environments that can't speak to each other (think firewalled-off external servers versus intranet-esque servers)
In different physical locations (California office versus New York, etc.)
In multiple deployment tiers (production, staging, testing, dev)
Have one or many functions (web server, DB server, mail server, app server)
Load-balanced or not
Standby (for disaster recovery purposes) or primary
Whew! Do you think it's even possible to come up with a convention that can address all of these aspects, or at least the significant ones? It'd be nice to hear a server name (or its DNS entry) and immediately know what it does, and that helps get new guys up to speed as well. "sharepoint-IPC-1 is down" could be parsed into "the internal production SharePoint web server in the California datacenter that's the first node in the load balancing is down!"... but that seems overly complicated at first glance.
Another thing in the back of my mind is that an old mail relay server is getting decommissioned, which means we have to scour through a lot of old apps to repoint hardcoded server values (I know... :).

Here are some general guidelines I try to abide by, based on mistakes I've made in the past.
Never base your machine names on...
Hardware: Machines get swapped out all the time, and you don't want to have to do too much work if you change from an IBM server to a Sun server to a Dell server.
Location: Equipment and even entire server rooms can be moved based on business requirements or technical issues.
Intended Use: As your product evolves, so too may the intended use of each server. Having a machine named "dbsrv" that eventually acts as a file server too is confusing.
Owner: The person who "owns" the equipment (an employee) can change, due to firings, layoffs, and moves within the company.
Subnet: As I said before, labs can move, and so can subnets. One of the main goals of DNS is to free you from being tied to a specific IP address, so why tie yourself down needlessly?
Now, some suggestions for the situation you described...
Machines spread across a region: This is what subdomains are for in DNS. You could have "west.company.com" and "east.company.com".
Have one or many functions: Don't name them based on intended use. If you name them based on some large collection of names (Greek gods, for example), you will eventually intuitively know that zeus.east means your master database server and apollo.west is your backup database server. Worst case, look it up in a spreadsheet.
Load-balanced or not: You can take two approaches. You could have a unique name per node behind the load balancer, or you could do something like athena-1.east, athena-2.east, etc. Either way, a load balancer will (hopefully) free you from worrying too much about what each node is named.
Standby or not: This doesn't sound like a criterion that should have an impact on the machine name.
What I'm essentially saying is:
Separate your equipment into different regional subdomains
Choose a naming scheme with plenty of names (Greek gods in this example)
Don't base the names on any of the criteria I mentioned above (intended use, location, etc.)
Trying to do anything more than that will be more trouble than it's worth.

I know it's tempting to assign names to servers that describe their functions and other similar attributes, and in a perfect world that would work. In practice, though, I have found that after a while these things get messed up: functions and other parameters of the servers change as the requirements of the business change, so the names no longer reflect reality.
I think you should assign unique names to the servers that say nothing about their function or other parameters, and keep some sort of up-to-date list detailing those things so that your people can look them up. That's what we do here.
The other extreme is using IP addresses only, or having names based on IP addresses, which can lead to disaster too if you ever have to change your IP addresses.

Is it considered a poor practice to use a single database for different uses across different applications?

What if you had one large database to serve all your apps? So your website that needs to store customer orders could use the same database that your game uses to store registered users. Different applications could have tables just for their own use. Some may say that this could be a security issue, because if someone cracks your database, they could attack all your applications. But in a lot of databases you could use a line like the following to restrict access:
deny select on aTable to aUser;
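A slightly fuller sketch of that idea (SQL Server-style syntax; the account and table names here are just made-up placeholders):

-- one account per application, each limited to its own tables
CREATE USER shop_app WITHOUT LOGIN;
CREATE USER game_app WITHOUT LOGIN;
GRANT SELECT, INSERT, UPDATE ON shop_orders TO shop_app;
GRANT SELECT, INSERT, UPDATE ON game_users TO game_app;
-- each application's account is explicitly denied the other's tables
DENY SELECT ON shop_orders TO game_app;
DENY SELECT ON game_users TO shop_app;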
I am wondering if this central database would be considered a poor practice, and if so why?
The way I look at it, a web application is nothing more than a collection of web pages. Because of this, it really doesn't matter if one page is about, say, cooking, while the other page is about computer programming.
If you think about it, this is very similar to OpenID, which I use to log into my SO account!
If you have your fundamental security implemented correctly, it doesn't matter how the user is interacting with your website. Where I would make this distinction is in two cases:
Don't mix http with https. On a shared host, this isn't going to be an issue anyway; if you buy the certificate for https, make everything that way (excluding the rare case where this might affect performance).
E-commerce or financial data should be handled in a fundamentally different way. If you look at your typical bank, they have multiple log-in protocols, picture verification, and short log-in times. This builds users' confidence in their security. It would be a pain in the butt for a game site, or most other non-mission-critical applications.
Regarding structure, if you do mix applications into one large database, you should consider the other maintenance issues, such as:
Keep tables separate; consider a prefix for every table, unique to each application. Following my example above, you would then start the cooking DB table names with 'ck', and the computer programming DB table names with 'pg'. This would allow you to easily separate the applications if you need to in the future (see the sketch after this list).
Use a matching table to identify which ID goes to which web application.
Consider what you would do, and how you would handle it, if a user decided to register for both applications. Do you want to make it transparent to them that they can share the same username?
Keep an eye on both your data storage limit AND your bandwidth limit.
If you are counting on these applications to drive revenue, you are putting "all your eggs in one basket". Make sure if it goes down, you have options to restore or move to another host.
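A minimal sketch of the prefix-plus-matching-table idea (the table and column names are only placeholders):

-- prefixed tables, one prefix per application ('ck' = cooking, 'pg' = programming)
CREATE TABLE ck_recipes (
    recipe_id INT PRIMARY KEY,
    title     VARCHAR(200) NOT NULL
);
CREATE TABLE pg_tutorials (
    tutorial_id INT PRIMARY KEY,
    title       VARCHAR(200) NOT NULL
);
-- matching table: records which user ID belongs to which web application
CREATE TABLE app_users (
    user_id  INT NOT NULL,
    app_code CHAR(2) NOT NULL,  -- 'ck' or 'pg'
    PRIMARY KEY (user_id, app_code)
);

If you ever do split the applications apart, the prefixes and the matching table tell you exactly which tables and which users go where.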
These are just a few of the things to consider. But fundamentally, outside of huge (big data) applications there is nothing wrong with sharing resources/databases/hardware between applications.
Conceptually, it could be done.
Implementation-wise, to make the various parts distinct from one another, you could use naming conventions (as per @Sable Foste) and/or separate database schemas (tables Finance.Users, GameApp.Users, etc.).
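A rough sketch of the schema approach (SQL Server-style syntax; the schema, table, and account names are just placeholders):

-- one schema per application (in SQL Server, each CREATE SCHEMA needs its own batch)
CREATE SCHEMA Finance;
CREATE SCHEMA GameApp;
-- each application gets its own Users table inside its own schema
CREATE TABLE Finance.Users (
    UserId INT IDENTITY PRIMARY KEY,
    Email  NVARCHAR(256) NOT NULL
);
CREATE TABLE GameApp.Users (
    UserId   INT IDENTITY PRIMARY KEY,
    Nickname NVARCHAR(64) NOT NULL
);
-- each application's account only sees its own schema
GRANT SELECT, INSERT, UPDATE ON SCHEMA::Finance TO finance_app;
GRANT SELECT, INSERT, UPDATE ON SCHEMA::GameApp TO game_app;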
Management-wise, things could get tricky. Repeating some points, adding others:
One application could use a disproportionately large share of resources (disk space, I/O, CPU)
Tracking versions could be tricky (the game app is at v4, the finance app is at v7); it depends on how many application instances you have to support.
Disaster recovery-wise, everything is lumped together. It all gets backed up as one set, it all gets restored as one set. Finance corrupt? Restore from backup... and lose your more recent game data.
Single point of failure. One database goes down, all your applications are down.
These (and other similar issues) are trade-offs you'll want to consider. Plan ahead, to lessen the chance that what's reasonable and economic today becomes a major headache tomorrow.

Merge multiple databases into one

I have a desktop app that clients are using at the moment and each client has access to their own local network database.
My manager has decided that it's best to merge these databases and only have one. All clients would then access that one database through a web service that sits in the cloud. I would like to weigh the pros and cons before we go ahead with this decision.
One option we have is to add a ClientID to each of the tables, which will result in each table having a composite key.
I have heard that another option would be to use schemas. Please advise how the schema approach would work, and whether it is better than having a composite key in each table.
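Roughly, I mean something like this (just a sketch; the table and column names are placeholders):

-- option 1: shared tables, with ClientID as part of a composite key
CREATE TABLE Orders (
    ClientID  INT NOT NULL,
    OrderID   INT NOT NULL,
    OrderDate DATE NOT NULL,
    PRIMARY KEY (ClientID, OrderID)
);

-- option 2: one schema per client, each with its own copy of the tables
CREATE SCHEMA ClientA;
CREATE TABLE ClientA.Orders (
    OrderID   INT NOT NULL PRIMARY KEY,
    OrderDate DATE NOT NULL
);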
Thank you.
This is a seriously difficult and time-consuming task. You will need to have extensive regression tests already built, because the risk of things breaking is huge.
Let me tell you a story of a client that had a separate database on a separate server that got merged with another database that contained many clients. It took several months to make all the changes to convert the data. Everything looked good and it was pushed to prod. Unfortunately, the developer missed one place where the client ID needed to be referenced (it usually wasn't referenced in the old code, since they were the only client on the server). The first day in production, a process that sent out emails sent client proprietary data not only to that client's sales reps but to the sales reps of many of their competitors. Of all the places the change could have been missed, this was the worst possible one. It not only harmed our relationship with the first client but with all the clients that got some other client's info by mistake.
There is also the problem of migrating the data. The project for that alone (without the code changes the application will need) will take months, and then you have to consider that the clients will be adding data as you go, so the final push may run into unexpected hiccups due to new data. You may also have to turn off the old system for at least a weekend to do the production change.
Using schemas won't make it any easier, as you will then have to adjust the code to hit the correct schema per client. And when you change something, you will have to change it for each individual schema, so it tends to make the database much more difficult to maintain.
While I am a great fan of having multiple clients in one database, when you didn't start out that way, it is extremely risky and expensive to change. I would not do it at all unless I had these things:
Code in source control
Extensive Unit and regression tests
Separate dev, QA and prod environments
A process for client UAT testing
Extensive knowledge of how cloud computing and web services work (everyone I know who has moved stuff to the cloud has had some real gotchas)
A QA department
Six months to one year time frame for the project
At least one senior data analyst on the team.

Working with patient/customer data outside of the office

Background
I am a developer who works for a health care organization. We build a variety of business apps, the majority of which contain PHI (Protected Health Information). We work on laptops in-house and occasionally have the option to work from home. Something we are discussing, though, is how to handle the data stored on our laptops when we are working out of the office.
Although we have passwords and our laptops are encrypted, that still doesn't seem like enough to us to protect the data. What I mean by that is this: we are a small five-person team. When we are working on a task, we all work locally on our own databases, on our laptops. When the change is done, we commit to svn and publish to a test server. Our concern is that my local database is sometimes a copy of production so I can test against real data. That local database could contain thousands of records of PHI. This is obviously a major concern to us when we take our laptops out of the building, because if my laptop is stolen, I would be putting thousands of patients' health information at risk. Not something we want to do.
My Question
What are best practices for developers working with patient data, or even financial data? Either way, how do people work with patient/customer data locally?
Is it fair to say that sometimes you just don't have the ability to connect to a database behind a firewall, or is that just negligence? Even if I keep the database internal, I still have project code on my laptop. Is that bad too?
• Should I have fake data? (Roughly what I mean by that is sketched after this list.)
• Should all data be on an internal machine that you connect to?
• Should I only connect in to a machine that is internal?
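By fake data I mean something like scrubbing a refreshed copy before it ever reaches a laptop; a rough sketch (SQL Server-style syntax; the table and column names are made up):

-- overwrite identifying fields in the dev copy with obviously fake values
UPDATE Patients
SET FirstName = 'Test',
    LastName  = 'Patient' + CAST(PatientID AS VARCHAR(10)),
    SSN       = '000-00-0000',
    BirthDate = '1970-01-01';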
I can’t imagine that is what people do all the time.
We are discussing this as a team and would love to hear your feedback on how you or anyone else works as a remote developer.
Thanks

Running the same web app on 2 or more physically separate servers?

I am not sure if I should be posting this question here or over at ServerFault so apologies if it is in the wrong place.
I have a small web app that is starting to get some more business.
Currently I have a single dedicated LAMP server for this, and this has worked well - the single server is able to handle all of our traffic.
However... Recently I have been approached by some potential customers who are interested in using the app, but only if their data can be stored on a server in the same province as they are (legal reasons).
I could migrate the server, but I am reluctant to do this. I like where it is now.
So, I am wondering what is involved in having multiple servers in physically separate datacentres far apart, running the same web app? Data between the servers would not need to stay synced, necessarily.
I have never done anything like this before, and am not sure how complicated a job it is. Any suggestions on how and where to start looking into this would be much appreciated.
Thanks (in advance) for your advice.
As long as each customer has their own set of data, you can just install another copy of the application in the other datacenter. It will require you to put some structure around your source control and deployment process, but it works. This option will give you two separate databases.
If you have to have one common database for all the customers (e.g. some kind of booking/reservation system of common resources), then you're at a completely different level of complexity, with database replication and so on. It's doable, but it's hard.

How to achieve high availability?

My boss wants to have a system that takes into account continent-wide catastrophic events. He wants to have two servers in the US and two servers in Asia (1 login server and 1 worker server on each continent).
In the event that an earthquake breaks the connection between the two continents, both should work alone. When the connection is restored, they should sync back up with each other.
An external cloud system is not allowed, as he has no confidence in it.
The system should take scalability into account, which means adding new servers should be easy to configure.
The servers should be load balanced.
The connection between the servers should be very secure (encrypted and sent over SSL, although SSL takes care of the encryption).
The system should let one and only one user be logged in with a given account. (Beware of latency between continents: two users sharing an account may reach both login servers at the same time.)
Please help. I'm already at my wit's end. Thank you in advance.
I imagine that these requirements (if properly analysed) are essentially incompatible, in that they cannot all be satisfied according to the CAP theorem.
If you have several datacentres, even if they are close by, partitions WILL happen. If a partition happens, either availability OR consistency MUST be lost, because either:
you have a pre-determined "master", which keeps working, and other "slave" DCs, which fail (or go read-only). This keeps consistency at the expense of availability.
OR you lose consistency for the duration of the partition (this means that operations which depend on immediate consistency are also unavailable).
This is incompatible with your requirements, as far as I can see. What your boss wants is clearly impossible. He needs to understand CAP theorem.
Now, in YOUR application's case, you may decide that you can bend the rules and redefine what consistency or availability mean, for convenience, and have a system which degrades into an inconsistent but temporarily acceptable state.
You probably want to get product management to have a look at the business case for these requirements. Dropping some of them is probably OK. Consistency is a good requirement to keep, as it makes things behave as people expect; this means dropping availability or partition tolerance instead. Keeping consistency is definitely easier from an engineering perspective.
This is another one of those things where employers tend not to understand the benefits of using an off-the-shelf solution. If you as a programmer don't really even know where to start with this, then rolling your own is probably going to be a huge money and time sink. There's nothing wrong with not knowing this stuff either; high-availability, failsafe networking that takes into consideration catastrophic failure of critical components is a large problem domain that many people pour a lot of effort and money into. Why not take advantage of what providers have to offer?
Give talking to your boss about using existing cloud providers one more try.
You could contact one of the solid and experienced hosting providers (we use Rackspace) that have data centers in different regions worldwide and get their recommendations based on your requirements.
This will require expert assistance, a large budget, and serious planning.
A better option would be to contact a reputable provider with a global footprint, select a premium solution with a solid SLA backing their service, and let them tailor a solution that comes close to your needs.
Just realize that even the likes of Google, Yahoo, Microsoft, and Amazon (to name a few) have, at one time or another, had some issue or other that rendered segments of their systems offline to certain users.