Performance cost of automated config management - configuration-management

I am learning about tools like Chef/Puppet/etc for the first time and was wondering how well (or poorly) they integrate with applications deployed on the cloud:
Why use Chef when there are vendor-specific APIs out there, as well as frameworks like JCloud which abstract even those APIs?
Is there a performance cost to using these configuration tools, or (once configured) do the nodes/machines just operate like any other (non-managed) machine on the cloud?
Can Chef be used to configure any technology that's out there, or does it provide a list of "supported vendors/systems"? Meaning, let's just say I have a bunch of Chef-configured PostgreSQL servers. Then the next day, some crazy new RDBMS comes out and I want to switch over to it. Do I need to wait for Chef to "support" this new system, or is Chef vendor-agnostic?
Thanks in advance!

Disclosure: I am one of the developers employed by Puppet Labs.
Why use Chef when there are vendor-specific APIs out there, as well as frameworks like JCloud which abstract even those APIs?
There are two reasons. One is that Chef, Puppet, and similar tools are like JCloud - they offer an abstraction over the specific cloud APIs. So, you use them for the same reason.
The other is that most cloud APIs are really about creating machines, while Chef, Puppet, and similar tools are really about configuration after the machine is created. The abstraction over the cloud API is more a convenience than the core focus.
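To make that split concrete, here is a minimal sketch of the "create the machine" half using Python and the boto3 EC2 client (a library chosen purely for illustration; the AMI, key pair, and security group IDs are placeholders). Everything Chef or Puppet does happens on the machine after a call like this returns.

```python
# Minimal sketch: the cloud API only brings the machine into existence.
# The AMI ID, key pair, and security group below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI
    InstanceType="t2.micro",
    KeyName="my-keypair",                       # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Created {instance_id}; installing and configuring software on it "
      "is where Chef or Puppet take over.")
```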
Is there a performance cost to using these configuration tools, or do the nodes/machines just operate like any other (non-managed) machine on the cloud?
Is there a performance cost to using knife to create the machine? No, the result is just like any other unmanaged node: a machine created through knife without the Chef client installed behaves exactly like any other machine without Chef installed. The same is true on the Puppet side, etc.
(Keep in mind, Chef, Puppet and similar tools don't have any API that isn't present in the public cloud API. No sweetheart deals for us there. ;)
Can Chef be used to configure any technology that's out there, or does it provide a list of "supported vendors/systems"? Meaning, let's just say I have a bunch of Chef-configured PostgreSQL servers. Then the next day, some crazy new RDBMS comes out and I want to switch over to it. Do I need to wait for Chef to "support" this new system, or is Chef vendor-agnostic?
Chef and Puppet both have extensibility at their heart. They both have a set of things they can manage out of the box, and a community that contributes support for a whole pile of other stuff.
So, if a fancy new service comes along, you might have some, but not all, of the features you could manage with either. (Both manage, for example, packages, files, services, and running arbitrary commands out of the box. That will do a lot of what a random new service needs, even without a more detailed model for managing it.)
If you want more - for example, if managing access control inside the database should be a first-class part of the model - you might have to wait for someone to write support for it for your product. (That someone can, obviously, be you. :)
So, you get basic support out of the box, and it is deliberately easy to build more on top.
Neither product is "vendor specific" in any meaningful sense, although both are more effective on Unix than on other platforms, and have more limited but still valuable support for Windows.
Almost everything here applies to other products in the space, also.

Related

How to migrate thick client to the cloud

Current situation:
Thick client written in .NET
We have a very old computation software that we can't maintain anymore.
We don't really know how the kernel is working (people left, 15 years old code).
We have the code and some technical experts.
We want to migrate it to the cloud behind a public API in order to serve some SPA application or even thick client applications.
What is your recommendation for this problem?
We have thought about:
Lift-n-Shift
Lift-Adjust-n-Shift
Rearchitecting or redeveloping from the ground
Repurchasing a new cloud solution (but there doesn't seem to be any)
All the options you mentioned are possible, but which one to choose really depends on your business needs, time, and budget.
Lift and shift (VMs)
This is usually the quickest approach: you can simply use VMs to migrate to the cloud. But managing those VMs remains your responsibility and is an ongoing commitment.
Lift, adjust and shift (containers)
In my opinion, you get the benefits of the cloud when you start using PaaS services. You may consider containerizing your application (Docker), migrating it to the cloud, and starting to use PaaS services. Your DevOps cycle will be quicker and scaling is easy. Since you are not managing VMs anymore, it's less hassle.
Rearchitect and redevelop
This could be costly and time consuming, and really depends on whether your business requirements allow you to do that. If you plan to expand the existing code base then you may consider this; otherwise it could be a big undertaking when you could simply migrate your services using the approaches mentioned above.

IBM S/390 mainframe COBOL source code

We have an S/390 mainframe at my new job that’s been running COBOL applications since the late 90’s. The mainframe is getting old enough that we need to migrate to a newer system. We’re a small enough business that we can’t warrant spending the money to upgrade to new mainframe hardware and the program logic has been a constant work in progress for 30+ years, so it has a lot of functional value. I’ve been considering moving the functionality to a Linux machine and using something like OpenCOBOL to recompile as an executable binary instead of trying to rewrite it in a newer language. I haven’t messed with a mainframe enough to have any clue how or where to access this information and the gentleman that wrote all of the programs is unfortunately no longer with us. I’ve read that SSH is an option, but I’m not even sure how to get the ball rolling on that with a mainframe. I use Linux on a fairly regular basis, so I’m familiar with SSH, but from my understanding those mainframes aren’t a simple OS that you can merely connect to and navigate the file system to retrieve data like we can in modern operating systems. Can anyone give me some pointers to get a sense of direction for accessing the source code for the COBOL programs? Are there default locations that they are stored, etc.? They’re somewhat simple programs that don’t use any DB2 functionality and will hopefully compile on a different system with relatively minimal debugging and fixes. I’m certain that I’ve left out necessary information that would help getting an answer to this question, and I can provide any additional information that is needed to help you all help me. I suspect that SSH isn’t enabled by default, but maybe I’m wrong there too. Any assistance is greatly appreciated. Thanks everyone!
Although not a programming question I'll provide some guidance I think might help you.
First, this is a business decision about where to invest.
Do we upgrade the system to a newer model, upgrade some software, and acquire the skills to keep the system running? (System programming, OS upgrade and cost of migration, a newer platform (a used z13 could be an economical option), storage systems to support the mainframe.)
Migration of existing workloads to other platforms. (Cost to migrate code, sizing of performance needs, new technologies to replace existing access methods like VSAM or dare I say ISAM if the applications are old enough)
Status Quo ... leave things where they are and keep the lights on
In evaluating any option you have to assess the risk to the business and what a disruption would cost. IMHO, it's less about a technology like SSH or COBOL on Linux; it requires some serious assessment of the current state, the acceptable to-be scenarios, and the cost of pursuing one of those options.
My comments are not intended to instill fear but to provide a framework for how to approach analyzing a challenge of this magnitude.
There is no default location where source code is stored on z/OS (it is z/OS you're talking about, right?). Source code is usually stored in PDS data sets. The naming of those depends on the installation, i.e. the company, and whether or not any software like Endevor, ChangeMan, etc. is being used to maintain the sources.
Since this is old z/OS (OS/390) COBOL code, chances are the code is making use of OS specifics such as record level I/O, VSAM data sets, etc. These are the parts that will not work on a non-z/OS platform without major rewrite. So, you will need to look into the sources.
SSH is available on z/OS, but it needs to be configured and enabled. You need to check with your z/OS sysprog. FTP, and NFS are other options, but again, they need to be configured and enabled.
Transferring the sources is the least of your problems, I'd say.
I have to agree with the prior two answers, but have some additional suggestions. This is a business decision what to do on the system.
Finding the program to understand what it does is the first requirement. Since you know what program is running, that may be the name of the source file, which you will need to find. The source file will probably be in some library manager; the first place to look is in the ISPF menu system. There will be an option for the library manager you are using, if you are using one. Based on your description you may be using something called SCLM, which would show up there, or you might see Librarian or Panvalet. You will need to get into ISPF by connecting with a 3270 terminal emulator. Once you find the file, using FTP or SFTP may be the best approach, or your emulator may provide a transfer mechanism. You will need to find the related files as well, which should also be defined in the library manager.
Once you have the file, you will need to figure out what it uses as mentioned above, it will be working with some kind of data file, and that will be the biggest part to deal with.
If it is a batch program it is probably part of a schedule, and there are other programs also running that you will need to find and figure out how they fit together.
Once you have an understanding of all the parts then you can work to make the right business decision as to how this should be run. You may want to upgrade, you may want to look to getting z/OS as a cloud service if you don't want to upgrade but you want the function. Or it may be a simple program you could move. That will be much easier to figure out once you have the details.
You say the program logic has been changing for 30+ years. Was it only one person making all the changes? Would anyone on the team have some idea about the PDSs that the user had access to? That might be one of the places to look. As the previous answers suggested, most shops would have stored the source code in some kind of config management tool like SCLM or Panvalet. If you have access to the load modules, there are utilities that can be used to inspect a load member to get a CSECT listing, which would have the names of the object members that make up that load; you can check with your mainframe admins. That can get you the source code file names. We use SSH from USS in our shop to move code from an HFS folder to GitLab. I have also used plain FTP to just transfer source code files to my workstation. But yes, first you have to find where it is stored.
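If your sysprogs do enable plain FTP, pulling a member down to your workstation can be scripted. The following is only a sketch using Python's standard ftplib module; the host, credentials, and data set/member names are made-up placeholders you would replace with whatever your mainframe admins give you.

```python
# Sketch: retrieve a COBOL source member from a z/OS PDS over FTP.
# Host, credentials, and the data set / member names are placeholders.
from ftplib import FTP

HOST = "mainframe.example.com"
USER = "TSOUSER"
PASSWORD = "secret"
DATASET = "'PROD.SOURCE.COBOL(PAYROLL)'"   # fully qualified PDS member, in quotes

ftp = FTP(HOST)
ftp.login(USER, PASSWORD)

lines = []
# z/OS FTP serves data sets as text; RETR on a quoted DSN pulls one member.
ftp.retrlines(f"RETR {DATASET}", lines.append)
ftp.quit()

with open("PAYROLL.cbl", "w") as f:
    f.write("\n".join(lines))

print(f"Retrieved {len(lines)} lines of COBOL source.")
```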

SQL installation on Amazon Web Services

Folks, I have a question this morning that hopefully one of you techies can answer. During the past few months, I have been heavily involved in preparing for several SQL certifications using study guides, as it's my desire to secure the Microsoft Certified Solutions Associate (MCSA) or associate level. While I have previous experience within this skill set and want to sharpen it by gaining further experience and hopefully securing this certification, it has been quite challenging to set up a home lab that allows me to create an environment similar to what the big dogs use nowadays - Windows Server, several SQL instances, virtualization and all that - due to a lack of proper hardware and the cost. In any case, my question today is to seek your advice and guidance on other possible options, particularly whether this task can be accomplished using Amazon's AWS. I understand they offer some level of capacity that can be used as a playground, and a subscription is an option if one wants to extend that capacity. So, if I were to subscribe to the paid version of it, would it be possible to install all the software needed to practice and experiment with all the technologies required to complete and/or master the contents of the training kit? Again, I'm already using my small home network and have all the proper software, but I just feel that it's not enough, as some areas require higher computing power to properly test or run specific scenarios.
Short: Yes
You can create a micro instance for free and install whatever you want on it. If you're not familiar with using the CLI, it can be a bit daunting, but there are plenty of guides online.
They also offer an RDS service, where they will let you set up a database instance and maintain it for you, but it's not free.
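If you do try the RDS route for the SQL Server part of the lab, the instance can also be created from code. Here is a rough sketch with Python's boto3 library; the identifier, credentials, and sizing are placeholder values, and anything beyond the free tier is billed.

```python
# Sketch: create a SQL Server Express instance on RDS with boto3.
# The identifier, credentials, and sizing below are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="mcsa-lab-sql",       # placeholder name
    Engine="sqlserver-ex",                     # SQL Server Express edition
    DBInstanceClass="db.t3.small",             # placeholder sizing
    MasterUsername="labadmin",
    MasterUserPassword="ChangeMe123!",         # placeholder password
    AllocatedStorage=20,                       # GiB
    LicenseModel="license-included",
)

# Wait until the instance is reachable, then connect with SSMS as usual.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="mcsa-lab-sql")
```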
Edit
Link to their MS Server page
http://aws.amazon.com/windows/
Azure is the Windows cloud service; I think the comment was asking whether you have considered Azure instead of AWS.

What research-operating-system features would you advocate including in Google Chrome Operating System

Imagine that a large player is undertaking the construction of a new operating system, where backward compatibility requirements are limited to:
Run existing applications written in (or compiled to) JavaScript which are presented in HTML5 and styled with CSS3
Plug and play support for printers, external storage, and optical drives
Degrade gracefully when disconnected from the internet
Sufficient process quotas to support safely permitting tasks to run in the background, including timers
What specific features from existing research operating systems (such as Plan 9) would you like to see enter the mainstream through this channel? Please limit your suggestions to things that have been implemented, and provide a link to the implementation (or at least search terms).
From the Plan 9 docs:
Plan 9 began in the late 1980's as an attempt to have it both ways: to build a system that was centrally administered and cost-effective using cheap modern microcomputers as its computing elements.
Netbooks qualify as cheap modern microcomputers, and The Cloud qualifies as centrally administered. There is an opportunity to implement the features (in DDaviesBrackett's words) that we want netbooks to have other than by extending a 1970's time-sharing OS; the research operating systems may have proved the value of alternatives by example.
From the Plan 9 FAQ:
Subject: What are its key ideas?
Plan 9 exploits, as far as possible, three basic technical ideas: first, all the system objects present themselves as named files that are manipulated by read/write operations; second, all these files may exist either locally or remotely, and respond to a standard protocol; third, the file system name space - the set of objects visible to a program - is dynamically and individually adjustable for each of the programs running on a particular machine. The first two of these ideas were foreshadowed in Unix and to a lesser extent in other systems, while the third is new: it allows a new engineering solution to the problems of distributed computing and graphics.
Plan 9's approach means that application programs don't need to know where they are running; where, and on what kind of machine, to run a Plan 9 program is an economic decision that doesn't affect the construction of the application itself.
Does that not appear to be an excellent fit for the netbook/Cloud domain?
Which operating system features would I advocate for Chrome OS?
Here is my wish list, as a Plan 9/Inferno fan:
Resources (ip stack, graphics, etc) as file systems.
Network transparent file system (ie., 9P).
Private per-process namespaces.
Factotum-like auth system (ie., no root user).
Pure UTF-8 everywhere.
Extremely lightweight processes.
Automatic snapshot and de-duplicating storage (ala venti+fossil).
And I guess many others, but this would be enough to make me quite happy.
This is not an 'OS feature' per se, but I would love to have a GUI with mouse-chording.
None.
I'd prefer for a new consumer OS, especially one targeted at Netbooks, to be very very good at doing the things that we already want OSes to be able to do rather than having time spent on features that are, by their nature, experimental.
(Of course, I'd be totally un-bothered by features I wasn't forced to use to develop on the platform; other people's toys are welcome as long as they don't make my job harder.)
I really think that Google might look into Plan9 for inspiration actually. Hearsay (the Internet) claims that several of those that initially developed UNIX and then later scrapped it for a better design (Plan9) are employed by Google. Google is also hosting its own version of Inferno, but I am not sure whether this is any central part of their plan. Further "evidence" could be that the plan9 authorization system (p9auth) for Linux was published by a Google researcher. The third "evidence" would be that Google claim that Chrome OS will have a novel security architecture.
The authorization system seems to me to be one of the GREATEST parts of Plan9 that could be included right now (/net would also be nice, but there is no working code for that yet). The idea that a program that needs root access only gets limited access to the parts determined by the authorization server is definitely a great step forward compared to the now-prevalent user/superuser/root division in Linux, where "man in the middle" attacks can (theoretically) be carried out by gaining full root access (as opposed to access limited by the authorization server) via a bug in a program granted root.

Experiences and tips for programming with and for Amazon's cloud servers/apps/tools?

We're looking into developing a product that would use Amazon's cloud tools (EC2, SQS, etc), and I'm curious what tips/gotchas/pointers people that have used these technologies have.
One tip/whatever per post, please.
The Elasticfox plug-in for Mozilla makes doing a lot of the EC2 stuff easier. It can be found at: Elasticfox Firefox Extension for Amazon EC2. This page has links specifically to download the Elasticfox plug-in and also the associated Sourceforge project. Well worth using...
Get a developer account at Right Scale. It's free and a god-send for a guy who hates remembering those dumb commands and arguments. If you only resort to Amazon-supplied tools, you're throwing away your human rights.
We're interested in EC2 where I work. We don't care about web serving or enterprisey stuff, just massive number crunching for physics, using Python. This EC2 stuff had me befuddled, with most documentation oriented toward businessy applications using C# or Java, but this slide show clarified a lot for me, especially about using Python: http://www.datawrangling.com/pycon-2008-elasticwulf-slides
As for SimpleDB, it has a very limited query language and it is very restrictive. If you plan on having a lot of complex queries, you must first sit down and think about how to organize your data to make those queries possible. One thing missing, but that will probably be added, is the ability to count the results of a given query, much like SQL's COUNT.
Performance is okay, but I consider the latency maybe a little high.
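For a feel of what querying it looks like in practice, here is a hedged sketch using the old boto 2.x SimpleDB module; the domain and attribute names are made up, and the client-side count reflects the missing COUNT mentioned above.

```python
# Sketch: a SimpleDB select with the old boto 2.x library.
# Domain and attribute names are made up for illustration.
import boto

sdb = boto.connect_sdb()                 # uses AWS credentials from the environment
domain = sdb.get_domain("products")

# Every value in SimpleDB is a string, so '100' < '20' lexicographically;
# you have to zero-pad numbers yourself to make range queries behave.
rows = domain.select("select * from `products` where price > '00010'")

# No server-side aggregation at the time this was written: count client-side.
count = sum(1 for _ in rows)
print(f"{count} items matched")
```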
An important concept to grasp: the file system your EC2 instance lives on while it's running is not persistent. There are tools/services available that let you mount file systems backed by S3 storage, or you can upload to S3 or other storage service from the instance, but when an instance closes the associated file system is no more.
As for tools, I've found Amazon's tools to be great, but you should probably be comfortable with the command line if you're taking this route.
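To make the non-persistence point above concrete, here is a small sketch using Python's boto3 library (bucket and file names are placeholders): push results to S3 before the instance goes away, and pull them back from anywhere later.

```python
# Sketch: copy data off the instance's ephemeral disk to S3 before it goes away.
# Bucket and file names are placeholders.
import boto3

s3 = boto3.client("s3")

# Anything left on the instance store vanishes when the instance terminates,
# so persist results to S3 (or another durable store) explicitly.
s3.upload_file("/mnt/results/output.csv", "my-results-bucket", "runs/output.csv")

# Later, from any machine, pull it back down:
s3.download_file("my-results-bucket", "runs/output.csv", "output.csv")
```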
For managing your EC2 instances, etc., Amazon also offers - in beta as of a couple of days ago - the management console, which has similar functionality to the Elasticfox Firefox plugin but is a pure web console.
https://console.aws.amazon.com