I first heard about the notion of AutoSys virtual machines, which seem capable of offloading heavily loaded AutoSys jobs.
From some JIL file examples I was able to make some observations:
In the JIL file for a job, if there is a type attribute, does "type: v" mean virtual machine? I also noticed that in some other virtual-machine JIL examples there is no "type" attribute, and the machine name looks like an alias with a "_V" suffix.
Do we need to specify two physical servers in the JIL, with one of them serving as the primary and the other as the backup (virtual) one?
What do the attributes factor and max_load mean, and how are they properly set up?
How can we verify that both servers are hit if the JIL file is configured that way? I suppose that would show up in the log files.
You don't need to specify two machines for a virtual machine; you can have just one and use the virtual machine as a new, more recognizable name for it. If you specify two or more machines behind your virtual machine, a job will only run on one of them, not all. How you define the virtual machine determines how the scheduler picks the machine on which the job should execute. With the new AE SP7 there are new possibilities such as round robin and other options. I would suggest reading up on those and deciding what best fits your need.
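For reference, here is a minimal JIL sketch of that setup (host names and values are hypothetical, and attribute placement can vary between AutoSys/AE versions, so treat this as a sketch rather than a definitive template):

/* real machines: max_load caps the total job_load they will accept, factor weights relative power */
insert_machine: realhost1
max_load: 100
factor: 1.0

insert_machine: realhost2
max_load: 100
factor: 0.5

/* virtual machine grouping the two; "type: v" marks it as virtual */
insert_machine: appserver_V
type: v
machine: realhost1
machine: realhost2

/* jobs reference the virtual machine name; the scheduler picks the real machine */
insert_job: sample_job
machine: appserver_V
command: echo hello
job_load: 20

To verify which real machine a given run landed on, the scheduler log and the job's run details (for example via autorep) are the usual places to look.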
We're thinking about moving to the Elastic Load Balancer on Amazon. However, since we use more than one domain name, it seems we would have to rename some of our applications to keep to a single ELB. Another issue is that we currently use free level-one certificates, whereas moving to ELB would require moving up to level two, although that's not a huge deal. We also don't have a lot of volume at this point, and don't really have a need for load balancing in terms of traffic alleviation. Finally, in the case of a failure of an Amazon instance, which seems to be quite rare (we have not experienced one in several years), we can quickly be up and running by creating another instance and restoring.
On the other hand, according to everything I have read about it, people are generally happy with it and recommend it, due to the ease of setup and the value it brings.
Given the above, is it worth it?
since we use more than one domain name, it seems we would have to rename some of our applications to keep to a single ELB
What makes you say this? There's nothing preventing you from launching multiple ELBs if you really want to. And if your application already handles multiple domains properly, then there's no reason a single ELB can't handle that either. We currently have one ELB fronting an application on a bunch of EC2 instances that 11 different domains all point to.
Another issue is that we currently use free level-one certificates, whereas moving to ELB would require moving up to level two, although that's not a huge deal.
Not sure what you mean by "level one" and "level two". If you're using a self-signed SSL certificate then you'll need to switch to a certificate signed by a third-party Certificate Authority, which will indeed cost you some money. Amazon supports all manner of certificates, including simple certs, EV certs, SAN certs, etc. You'll find more information on ELB and SSL certs in the AWS documentation.
Finally, in the case of a failure of an Amazon instance, which seems to be quite rare (we have not experienced one in several years), we can quickly be up and running by creating another instance and restoring.
Consider yourself lucky. We've had Amazon instances fail from time to time, and we also regularly get notifications from Amazon that instances need to be rebooted in order to migrate them off of faulty/old hardware.
If you really don't care about being down for a while and feel like you don't need the capacity that a load balancer and multiple appservers provides then there's no reason for you to move to using an ELB. However if you want the reliability of multiple appservers then moving to an ELB is indeed a good idea.
And if you anticipate your traffic level growing then you might want to consider using Amazon's Auto Scaling tools. Using Auto Scaling you basically tell Amazon the minimum number of application servers you want running behind an ELB, and some parameters to indicate when they should automatically launch additional instances if/when load increases.
Our Amazon account rep actually recommended to us that if we had even a single instance that we wanted to minimize downtime of (like a monitoring server, etc) that we should create an Auto Scaling group with a limit of exactly 1 instance in it. That way if the instance ever does die for any reason whatsoever, Amazon will automatically spin up a new replacement instance.
I agree with Bruce; I just wanted to add my five cents about Auto Scaling (ASG) and "Amazon will automatically spin up a new replacement instance."
This is a really cool way to get a robust hosting solution, but it does take some work to create a CloudFormation template and a bash auto-install script, called from the template, that installs all the server software and deploys your app code.
So if you have 2 instances and an ASG with Min/Max = 2, then if an instance crashes, the ASG will recreate it automatically, with all software installed, code deployed, and ready to go.
Also, if you need to handle periodic traffic spikes automatically, you can change the ASG to (Min=2, Max=5) and create 2 CloudWatch alarms:
1. CPU usage is above 90% for 5 or 10 minutes
2. CPU usage is below 30% for 5 or 10 minutes
Then attach alarm 1 to a policy that scales up by one additional instance, and alarm 2 to a policy that destroys any additional instance created by alarm 1, as sketched below.
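As a rough sketch with the AWS CLI (the group name, thresholds, and policy ARN are hypothetical; the standalone Auto Scaling command-line tools of that era used different command names, but the idea is the same):

# scale-up policy: add one instance when triggered
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg --policy-name scale-up \
  --scaling-adjustment 1 --adjustment-type ChangeInCapacity

# alarm 1: average CPU above 90% for 10 minutes -> fire the scale-up policy
aws cloudwatch put-metric-alarm \
  --alarm-name cpu-high --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 90 --comparison-operator GreaterThanThreshold \
  --alarm-actions <scale-up-policy-arn>

A mirror-image scale-down policy (scaling adjustment of -1) and a cpu-low alarm with a threshold of 30 and LessThanThreshold handle the second case.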
My name is Josue
I need your help with this:
Is there any way to audit or monitor the server processes that connect to the
Advantage Database Server?
Is there a log of running processes?
Thanks
There is no existing log of processes that use Advantage Database Server. Because it is a client/server architecture, there is no mechanism that I am aware of that can easily associate a connection on the server to a specific process.
However, it would be possible to use the system procedure sp_mgGetConnectedUsers() to obtain some of this information. It might be possible to use it to obtain the information you are looking for at a given point in time (a snapshot).
The output of that procedure includes three fields that you might be interested in. The Address column gives the address of the machine that connected to Advantage. It is typically the IP address of the client application. But it can also be of the form "IPC Connection N", which indicates that it is using shared memory for communications; this means that the client process is running on the same machine as the server.
The TSAddress column might also be of interest. If the connection is made by a client that is running through terminal services (e.g., a remote desktop), then that column contains the IP address of the client machine. If you are interested in knowing processes that originate from the server machine itself, then you would need this field to differentiate between those and clients that connected through terminal services.
The other column of potential interest would be ApplicationID. By default, that field contains the process name (e.g., the executable) of the client application. This could help identify the actual process. It is not guaranteed, though. The application itself can change that value through mechanisms such as sp_SetApplicationID.
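For example, a snapshot might be captured with something like this (Advantage SQL; run it periodically from any client and inspect or store the result set):

-- snapshot of current connections
EXECUTE PROCEDURE sp_mgGetConnectedUsers();

-- Columns of interest in the result set:
--   Address       : client IP, or "IPC Connection N" for shared-memory (local) clients
--   TSAddress     : client IP when the connection comes through terminal services
--   ApplicationID : client process name by default (can be overridden via sp_SetApplicationID)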
How do you manage multiple projects on your development and/or testing machine, when some of those projects use Redis databases?
There are 2 major problems:
Redis doesn't have named databases (only numbered ones, 0-15 by default)
Tests are likely to execute FLUSHDB on each run
Right now, I think we have three options:
Assign different databases for each project, each dev and test environment
Prefix keys with a project name using something like redis-namespace
Nuke and seed the databases anytime you switch between projects
The first one is problematic if multiple projects assign "0" for the main use, "1" for tests, and so on. Even if Project B decided to change to "2" and "3", another member of the project might then have a conflict with yet another of their projects. In other words, that approach is not SCM-friendly.
The second one is a bad idea simply because it adds needless runtime and memory overhead. And no matter what you do, another project might coincidentally already be using the same keys by the time you join it.
The third option is more of a compromise, but sometimes I want to keep my local data untouched while I deploy small patches for other projects.
I know this could be a feature request for Redis, but I need a solution now.
Any ideas, practices?
If the projects are independent and so do not need to share data, it is much better to use multiple redis instances - each project configuration has a port number rather than a database name/id. Create an appropriately named config file and startup script for each one so that you can get whichever instance you need running with a single click.
Make sure you update the save settings in each config file as well as setting the ports; multiple instances using the same dump.rdb file will work, but will lead to some rather confusing bugs.
I also use separate instances for development and testing so that the test instance never writes anything to disk and can be flushed at the start of each test.
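As a minimal sketch (ports, paths, and file names are hypothetical), each project gets its own config file and a one-line startup:

# redis-projectA.conf
port 6380
dbfilename projectA.rdb
dir /var/lib/redis/projectA

# redis-projectA-test.conf
port 6381
save ""          # test instance never persists to disk

# start whichever instance you need
redis-server /path/to/redis-projectA.conf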
Redis is moving away from multiple databases, so I would recommend you start migrating out of that mechanism sooner rather than later. This means one instance per DB. Given the very low overhead of running Redis, this isn't a problem from a resources standpoint.
That said, you can specify the number of databases, and a naming standard would work. For example, configure Redis to have, say, 60 databases and add 10 to get the test DB: for example, DB 3 uses DB 13 for testing.
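A small sketch of that convention (the numbers are illustrative):

# redis.conf: raise the database count from the default of 16
databases 60

# a project assigned DB 3 uses DB 13 for its tests
redis-cli -n 13 flushdb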
It sounds like your dev, test, and prod environments are pretty tied together. If so, I'd suggest moving away from that. Using separate instances is the easiest route and provides protection against cross-purpose contamination. Between this and the future of Redis being single-DB per instance, separate instances are the best route.
I have a service-based architecture where a web farm full of ASP clients hits an application-server farm of WCF services. Obviously, all the database access is done by the WCF services. Now I would like to cache my frequently used database-retrieved objects using Velocity at the service tier. I am considering making each physical application server also part of the cache cluster.
According to the Velocity documentation, if I use regions, objects are stored only on a single host. I actually wouldn't have any problem if each host kept its own cache, provided that I could somehow synchronize them.
So my questions are:
If I create one region on one host is it also created on another one?
When I clear a cache region, is it cleared on one host only?
If I subscribe to a region level notification on all the hosts, can I catch events of one host on another one?
In this scenario should I use regions at all or stay away from them?
I hope my questions are clear. Actually, I am more interested in a solution to my problem than in answers to my individual questions.
Yes, you are right in your reading of the doc: the region will exist only on one host.
" I actually wouldn't have any problem if each host kept it's own cache provided that I could somehow synchronize them."
When you say synchronize, do you mean when HA is enabled? Velocity would actually take care of that, if that's what you meant.
For the questions:
1. No.
2. Yes.
3. Notifications will be sent to the client, so I am not sure whether there is any way to send notifications from one host to another.
4. Regions give you search capabilities but take HA away from you. In your case, you could use the advantages of HA.
Having regions does not necessarily mean that you don't have HA. If you create your own cache (and don't use the 'default' one), you can create it with Secondaries = 1 (HA on).
Now let's say you have 4 cache hosts; when you define a region, it will have both a primary and a secondary host, so each action on the region will be applied to both.
Shany
Named caches distribute across participating nodes. Named regions live on a single node. Regions can be HA, but they cannot take full advantage of distributed cache scaling, as their object load does not distribute across participating nodes in the cluster. Also, using named caches with HA requires three nodes minimum, rather than two nodes if you used the "default" cache only.
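For reference, a hedged sketch of creating an HA-enabled named cache with the Velocity/AppFabric caching administration cmdlets (the cache name is hypothetical):

# in the caching administration PowerShell console
Use-CacheCluster
New-Cache -CacheName "OrdersCache" -Secondaries 1   # Secondaries 1 = HA on
Get-CacheConfig -CacheName "OrdersCache"            # verify the setting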
This question is for anyone who has actually used Amazon EC2. I'm looking into what it would take to deploy a server there.
It looks like I can start in VirtualBox, setup my server and then export the image using the provided ec2-tools.
What gets tricky is that if I actually want to make configuration changes to my running server, they will not be persistent.
I have some PHP code that I need to be able to deploy (and redeploy) to the system, so I was thinking that EBS would be a good choice there.
I have a massive amount of data that I need stored, but it just so happens that latency is not an issue, so I was thinking something like s3fs might work.
So my question is... What would you do? What does your configuration look like? What have been particular challenges that perhaps you didn't see coming?
We have deployed a large-scale commercial app in the AWS environment.
There are three basic approaches to keeping your changes under control once the server is running, all of which we use in different situations:
Keep the changes in source control. Have a script that is part of your original image that can pull down the latest and greatest. You can pull down PHP code, Apache settings, whatever you need. If you need to restart your instance from your AMI (Amazon Machine Image), just run your script to get the latest code and configuration, and you're good to go.
Use EBS (Elastic Block Storage). EBS is like a big external hard drive that you can attach to your instance (a quick CLI sketch follows this list). Even if your instance goes away, EBS survives. If you later need two (or more) identical instances, you can snapshot the volume and give each of them a copy of what you saved in EBS. See https://stackoverflow.com/a/3630707/141172
Burn a new AMI after each change. There's a tool to create a new AMI from a running instance. If EBS is like having an external hard drive, creating a new AMI is like having a DVD-R. You can save the current state of your machine to it. Next time you have to start a new instance, base it on that new AMI. Good to go.
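Expanding on option 2, a rough sketch with the AWS CLI (the volume size, zone, IDs, and device names are hypothetical; the older ec2-api-tools used different command names):

# create a volume in the same availability zone as the instance and attach it
aws ec2 create-volume --size 100 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0abc123 --instance-id i-0def456 --device /dev/sdf

# on the instance: format once, then mount on every boot
# (the device may appear as /dev/xvdf inside the instance)
mkfs -t ext4 /dev/xvdf
mkdir -p /data && mount /dev/xvdf /data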
I recommend storing your PHP code in a repository such as SVN, and writing a script that checks the latest code out of the repository and redeploys it when you want to upgrade. You could also have this script run on instance startup so that you get the latest code whenever you spin up a new instance; saves on having to create a new AMI every time.
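A minimal sketch of such a script (the repository URL, deploy path, and service name are hypothetical):

#!/bin/bash
# pull the latest code and redeploy it; run at instance startup or on demand
set -e
REPO=https://svn.example.com/myapp/trunk
DEPLOY_DIR=/var/www/myapp

if [ -d "$DEPLOY_DIR/.svn" ]; then
    svn update "$DEPLOY_DIR"
else
    svn checkout "$REPO" "$DEPLOY_DIR"
fi

# pick up any new web server configuration shipped with the code
service apache2 restart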
The main challenge that I didn't see coming with EC2 is instance startup time - especially with Windows. Linux instances take 5 to 10 minutes to launch, but I've seen Windows instances take up to 40 minutes; this can be an issue if you want to do dynamic load balancing and start up new instances when your load increases.
I'd suggest the best bet is to simply 'try it'. The charges to run a small instance are not high and data transfer rates are very low - I have moved quite a few GB and my data fees are still less than a dollar(!) in my first month. I suspect you will likely end up paying mostly for instance time rather than data.
I haven't deployed yet but have run up an instance, migrated it from Ubuntu 8.04 to 8.10, tried different port security settings, seen what sort of access attempts unknown people have tried (mostly looking for phpadmin), run some testing against it and generally experimented with the config and restart of the components I'm deploying. It has been a good prelude to my end deployment. I won't be starting with a big DB so will be initially sticking with the standard EC2 instance space.
The only negative I have heard is that some spammers have made some of the IP ranges subject to spam-blocking - but I have not yet confirmed that.
I would suggest taking your VirtualBox approach only after you are more familiar with the EC2 infrastructure. I suggest that you go to EC2, open an account, and follow Amazon's EC2 getting-started guide. That guide will give you enough of an overview of everything (EBS, IPs, connections, and so on) to get you started. We are currently using EC2 for production, and the way we started was exactly as I am explaining here.
I hope you become a cloud expert soon.
Per timbo's concern, I was able to nab an IP that, so far, hasn't legitimately shown up on any spam lists. You will have a few hiccups, since many blacklists are technically whitelists and will keep every IP on their list until otherwise notified that a mail server is running on that IP. It's really easy to get removed: most of them have automated removal-request forms, and every one that doesn't has been very cooperative in removing me from their lists. Just be professional; ask if they can give a time and reason for the block and what steps you should take to remove your IP. None of the services I have emailed ever asked me to jump through any hoops; within two or three business days they all informed me my IP had been removed.
Still, if you plan on running a mail server, I would recommend reserving IPs now. They're 1 cent for every hour they are not bound to an instance, so it works out to about $7 a month. I went ahead and reserved an extra one, as I plan on starting up another instance soon.
I have deployed some simple stuff to EC2 Win2k3 instances. Here's my advice:
Find a tutorial. Sign up for the service. Just spend an afternoon setting up your first server. It's pretty darned easy, though there will be obstacles to overcome. It's not too tough.
When I was fooling with EC2 I think I spent like $2.00 setting up a server and playing with it for a while.
Some of your data will be persistent, but you can connect S3 to EC2 as well.
Just go for it!
With regards to the concerns about blacklisting of mail servers, you can also use Amazon's Simple Email Service (SES), which obviates the need to run the mail server on the EC2 instances.
I had trouble with this as well, but posted a note here in their forums - https://forums.aws.amazon.com/thread.jspa?threadID=80158&tstart=0