I have an application that needs to store a "last_updated_at" value from a dataset it obtains from an API. On the next run, the job takes that "last_updated_at" and looks only at data after it as it retrieves data from other APIs. At the end of its execution, it refreshes "last_updated_at" and saves it, so that tomorrow's job starts over from that stored value.
The question is: what is the best way to save that variable, and what is the best practice on where to save it (and retrieve it next time)?
The application comes from a GitHub repo; I built a container image from it and run the container on AWS, and on every push to the repo a new image is built. We often update that repo -> build the new image -> pull the image onto the machines.
With that context, where is the best place to save that "last_updated_at", which needs to be read and updated on every execution? There will only be one machine with the container running it; no other machines will have it. What is best, considering that we constantly update the repo and this is a production environment?
- In a CSV or TXT file on the machine running the job?
- In some cloud storage like S3?
- As an OS environment variable on the machine?
- As an environment variable on the container running this?
- In a file in the GitHub repo, in a parameters folder?
- In a CSV or TXT file inside the container running the job?
- Any other way?
Lastly, should the answer depend on whether only one machine installs the container, or more than one machine has it but only one is running at a given time?
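For illustration, the S3 option I have in mind would look something like this minimal sketch; the bucket name, object key, run_job.sh entry point, and timestamp format are just placeholders, not part of the real application.

#!/usr/bin/env bash
# Hypothetical wrapper: read the previous watermark from S3, run the job, write the new one back.
set -euo pipefail

STATE_URI="s3://my-etl-state-bucket/last_updated_at.txt"   # assumed bucket and key

# Fetch the stored watermark; fall back to a default on the very first run.
if aws s3 cp "$STATE_URI" ./last_updated_at.txt 2>/dev/null; then
    LAST_UPDATED_AT=$(cat ./last_updated_at.txt)
else
    LAST_UPDATED_AT="1970-01-01T00:00:00Z"
fi

# Run the actual job, passing the watermark in (placeholder command and flag).
./run_job.sh --since "$LAST_UPDATED_AT"

# Persist the new watermark for tomorrow's run.
date -u +"%Y-%m-%dT%H:%M:%SZ" > ./last_updated_at.txt
aws s3 cp ./last_updated_at.txt "$STATE_URI"

Because the value lives outside the image, rebuilding and re-pulling the container would not lose it, and a second machine could take over without changes.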
We want to use pan.sh to execute multiple Kettle transformations. After exploring the script I found that it internally calls the spoon.sh script that is part of PDI. The problem is that every time a new transformation starts, it creates a separate JVM for its execution (invoked via a .bat file). I want to group them into a single JVM to overcome the memory constraints that the multiple JVMs are putting on the batch server.
Could somebody guide me on how I can achieve this, or share documentation/resources with me?
Thanks for the good work.
Use Carte. This is exactly what it is for. You can start up a server (on the local box if you like) and then submit your jobs to it. One JVM, one heap, shared resources.
The benefit of that is scalability: when your box becomes too busy, just add another one, also running Carte, and start sending some of the jobs to that other server.
There's an old but still relevant blog post here:
http://diethardsteiner.blogspot.co.uk/2011/01/pentaho-data-integration-remote.html
as well as documentation on the Pentaho website.
Starting the server is as simple as:
carte.sh <hostname> <port>
There is also a status page, which you can use to query your carte servers, so if you have a cluster of servers, you can pick a quiet one to send your job to.
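For example, assuming a Carte server started on port 8081 with the default cluster/cluster credentials (the host, port, and credentials here are assumptions you should change), you could bring it up and check it like this:

carte.sh localhost 8081 &

# Query the Carte status page to see running/finished jobs and pick a quiet server.
curl -u cluster:cluster http://localhost:8081/kettle/status/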
I'm curious how AX 2009 handles code propagation when operating in a load balanced environment.
We have recently converted our AX server infrastructure from a single AOS instance to 3 AOS instances, one of which is a dedicated load balancer (effectively 2 user-facing servers). All share the same application files and database. Since then, we have had one user who has been having trouble receiving code updates made to the system. The changes generally take a few days before this user can see them, and they don't seem to show up all at once.
For example, a value was added to an ENUM field, and they were not able to see it on a form where it was used (though others connected to the same instance were). Now, this user can see the field in the dropdown as expected, but when connected to one of the instances it will not flow onto a report as it should. When connected to the other instance it works fine, and for any other user connected to either instance it works properly.
I'm not certain if this is related to the infrastructure changes, but it does seem odd that only one user is experiencing it. My understanding was that with this setup, code changes would propagate across the servers either immediately (due to sharing the Application Files), or at least in a reasonable amount of time (<1 day). Is this correct or have I been misinformed?
As your cache problem seems to be per user, go learn about AUC files.
The files are stored on the client computer and can be tricky to keep in sync. There are other problems as well.
Start AX via a script, and delete the AUC file before starting AX.
There is no cache coherency between AOS instances: import an XPO on one AOS server, and it is not visible on the other. You will either have to flush the cache manually or restart the other AOS. The simplest thing is to import on each server; this is especially true for labels, as to my knowledge that is the only way to bring labels into sync.
I am sort of curious about this as well, but what I do know is that if a user has access to the AOT (is a member of the admin group or of a group with developer access), the client will cache AOT elements more aggressively than it would without developer access.
Elements (like an enum) might be cached at the client level, but also at the AOS level. Restarting the AOS service flushes the memory for that service, forcing it to reload elements upon restart.
I guess what I am suggesting is that you make sure the element is not cached client side. Either restart the client, or run "Refresh AOD" from the developer tools menu. If that doesn't help, try restarting the AOS the client connects to, and see if that helps.
I think it is safe to say that if you want to be absolutely sure every user has the most recent "copy" of any element, you should not develop on the application files shared by all of these services, but rather develop in an environment with one AOS. And when you need to move things to production, you need to take down all AOSes in production and move the changes over while the system is down.
In cases like this it is often difficult to find the exact cause.
I try to follow some best practices to avoid such situations:
- Use separate environment for developing
- Deploy code changes using layer files, not XPOs
- When deploying, stop all AOSs, deploy the files, delete the index files in the application directory, start one AOS, compile, sync the DB, then start the other AOSs (or even shut down all of them and start again)
- Try to have the latest kernel versions on the AOSs and clients
I am new to AWS, so I need some advice on how to correctly create background jobs. I've got some data (about 30 GB) that I need to:
a) download from some other server; it is a set of zip archives with links within an RSS feed
b) decompress into S3
c) process each file, or sometimes a group of decompressed files, perform transformations on the data, and store the results in SimpleDB/S3
d) repeat forever depending on RSS updates
Can someone suggest a basic architecture for a proper solution on AWS?
Thanks.
Denis
I think you should run an EC2 instance to perform all the tasks you need and shut it down when done. This way you will pay only for the time the instance runs. Depending on your architecture, however, you might need to keep it running all the time; small instances are very cheap in any case.
download from some other server; it is a set of zip archives with links within an RSS feed
You can use wget
decompress into S3
Try to use s3-tools (github.com/timkay/aws/raw/master/aws)
process each file or sometime group of decompressed files, perform transformations of data, and store it into SimpleDB/S3
Write your own bash script
repeat forever depending on RSS updates
One more bash script to check for updates, plus run the whole thing via cron
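A rough sketch of what those scripts could look like, using the modern AWS CLI rather than the old s3-tools; the feed URL, bucket, and transform.sh step are placeholders, and a real RSS feed may deserve a proper XML parser instead of grep:

#!/usr/bin/env bash
# Hypothetical pipeline: fetch archives listed in an RSS feed, unpack, transform, upload to S3.
set -euo pipefail

FEED_URL="http://example.com/data.rss"     # placeholder feed
BUCKET="s3://my-processed-data"            # placeholder bucket
WORKDIR=$(mktemp -d)

# Pull the .zip links out of the feed (crude text match).
curl -s "$FEED_URL" | grep -o 'https\?://[^<" ]*\.zip' | sort -u > "$WORKDIR/urls.txt"

# Download and decompress each archive.
while read -r url; do
    wget -q -P "$WORKDIR" "$url"
done < "$WORKDIR/urls.txt"
mkdir -p "$WORKDIR/extracted"
for archive in "$WORKDIR"/*.zip; do
    unzip -q "$archive" -d "$WORKDIR/extracted"
done

# Transform the files (placeholder for your own processing) and push the results to S3.
./transform.sh "$WORKDIR/extracted"
aws s3 cp "$WORKDIR/extracted" "$BUCKET/" --recursive

rm -rf "$WORKDIR"

A crontab entry along the lines of 0 * * * * /opt/jobs/fetch_and_process.sh would then cover step d).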
First off, write some code that does a) through c). Test it, etc.
If you want to run the code periodically, it's a good candidate for using a background process workflow. Add the job to a queue; when it's deemed complete, remove it from the queue. Every hour or so add a new job to the queue meaning "go fetch the RSS updates and decompress them".
You can do it by hand using AWS Simple Queue Service or any other background job processing service / library. You'd set up a worker instance on EC2 or any other hosting solution that will poll the queue, execute the task, and poll again, forever.
It may be easier to use Amazon Simple Workflow Service, which seems to be intended for what you're trying to do (automated workflows). Note: I've never actually used it.
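To make the do-it-yourself queue option concrete (SQS rather than SWF), a worker loop with the AWS CLI could look something like the sketch below; the queue URL and the fetch_and_process.sh job script are assumptions:

#!/usr/bin/env bash
# Hypothetical worker loop: poll SQS, run the job for each message, then delete the message.
set -euo pipefail

QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/rss-jobs"   # placeholder queue

while true; do
    # Long-poll for a single message (waits up to 20 seconds before returning empty).
    MSG=$(aws sqs receive-message --queue-url "$QUEUE_URL" \
          --max-number-of-messages 1 --wait-time-seconds 20 \
          --query 'Messages[0].[Body,ReceiptHandle]' --output text)

    if [ "$MSG" = "None" ] || [ -z "$MSG" ]; then
        continue    # nothing to do yet; poll again
    fi

    BODY=$(echo "$MSG" | cut -f1)
    RECEIPT=$(echo "$MSG" | cut -f2)

    # Run the actual work (placeholder script from the earlier sketch).
    ./fetch_and_process.sh "$BODY"

    # Only delete the message once the job has finished successfully.
    aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$RECEIPT"
done

Whatever schedules the hourly "go fetch the RSS updates" job can be as simple as a cron entry calling aws sqs send-message.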
I think deploying your code on an Elastic Beanstalk instance will do the job for you at scale, because you are processing a huge chunk of data here and a normal EC2 instance might max out its resources, mostly memory. The AWS SQS idea of batching the processing will also help optimize the process and effectively manage timeouts on your server side.
I currently have an Amazon instance (Medium, High-CPU) running off the instance store, with most of my data and code sitting in /mnt mounted to sda2. The instance is just the way I need it to work. How can I clone this instance and make an exact copy (data and all) on another (preferably cheaper, micro) instance for testing my new code changes? Also, what backup approach do you recommend for this setup?
Thanks
Be careful with the instance store: if your instance is terminated, it will not retain your data. I suggest you put the important data on EBS volumes.
Please see my post http://www.capsunlock.net/2009/12/create-ebs-boot-ami.html
It's possible to clone the current instance and make an EBS-backed AMI.
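Once you have an EBS-backed instance (for example by following the procedure in the linked post), cloning it for a cheaper test box can be done from the command line. Note that create-image only works for EBS-backed instances, and the IDs and instance type below are placeholders:

# Create an AMI from the running EBS-backed instance (placeholder instance ID).
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "prod-clone-$(date +%Y%m%d)" --description "Clone for testing code changes"

# Launch a cheaper test instance from the resulting AMI (placeholder AMI ID).
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --count 1

Regularly snapshotting the EBS volume (aws ec2 create-snapshot) also covers the backup side of the question.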
This question is for anyone who has actually used Amazon EC2. I'm looking into what it would take to deploy a server there.
It looks like I can start in VirtualBox, setup my server and then export the image using the provided ec2-tools.
What gets tricky is that if I actually want to make configuration changes to my running server, they will not be persistent.
I have some PHP code that I need to be able to deploy (and redeploy) to the system, so I was thinking that EBS would be a good choice there.
I have a massive amount of data that I need stored, but it just so happens that latency is not an issue, so I was thinking something like s3fs might work.
So my question is... What would you do? What does your configuration look like? What have been particular challenges that perhaps you didn't see coming?
We have deployed a large-scale commercial app in the AWS environment.
There are three basic approaches to keeping your changes under control once the server is running, all of which we use in different situations:
Keep the changes in source control. Have a script that is part of your original image that can pull down the latest and greatest. You can pull down PHP code, Apache settings, whatever you need. If you need to restart your instance from your AMI (Amazon Machine Image), just run your script to get the latest code and configuration, and you're good to go.
Use EBS (Elastic Block Storage). EBS is like a big external hard drive that you can attach to your instance. Even if your instance goes away, EBS survives. If you later need two (or more) identical instances, you can give each one of them access to what you save in EBS. See https://stackoverflow.com/a/3630707/141172
Burn a new AMI after each change. There's a tool to create a new AMI from a running instance. If EBS is like having an external hard drive, creating a new AMI is like having a DVD-R. You can save the current state of your machine to it. Next time you have to start a new instance, base it on that new AMI. Good to go.
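To make the second approach (EBS) a little more concrete, attaching a volume to a running instance looks roughly like this; the size, availability zone, IDs, and device names are assumptions, and the device name the kernel exposes can differ (e.g. /dev/xvdf vs /dev/sdf):

# Create a volume in the same availability zone as the instance, then attach it.
aws ec2 create-volume --size 100 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf

# On the instance: format it once, then mount it wherever the app expects its data.
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data && sudo mount /dev/xvdf /data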
I recommend storing your PHP code in a repository such as SVN, and writing a script that checks the latest code out of the repository and redeploys it when you want to upgrade. You could also have this script run on instance startup so that you get the latest code whenever you spin up a new instance; saves on having to create a new AMI every time.
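A rough sketch of such a redeploy script, assuming an SVN repository and a standard Apache document root (both of which are assumptions):

#!/usr/bin/env bash
# Hypothetical redeploy script: export the latest PHP code from SVN into the web root.
set -euo pipefail

REPO_URL="https://svn.example.com/myapp/trunk"   # placeholder repository
DOCROOT="/var/www/html"                          # placeholder document root

# Export (rather than checkout) so no .svn metadata ends up in the web root.
svn export --force "$REPO_URL" "$DOCROOT"

# Reload Apache so any config shipped with the code takes effect.
sudo service apache2 reload

Hooking the same script into the instance's startup (rc.local, user data, etc.) gives you the latest code on boot without burning a new AMI each time.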
The main challenge that I didn't see coming with EC2 is instance startup time - especially with Windows. Linux instances take 5 to 10 minutes to launch, but I've seen Windows instances take up to 40 minutes; this can be an issue if you want to do dynamic load balancing and start up new instances when your load increases.
I'd suggest the best bet is to simply try it. The charges to run a small instance are not high and data transfer rates are very low - I have moved quite a few GB and my data fees are still less than a dollar(!) in my first month. I suspect you will end up paying mostly for system time rather than data.
I haven't deployed yet but have run up an instance, migrated it from Ubuntu 8.04 to 8.10, tried different port security settings, seen what sort of access attempts unknown people have tried (mostly looking for phpadmin), run some testing against it and generally experimented with the config and restart of the components I'm deploying. It has been a good prelude to my end deployment. I won't be starting with a big DB so will be initially sticking with the standard EC2 instance space.
The only negative I have heard is that some spammers have made some of the IP ranges subject to spam-blocking, but I have not yet confirmed that.
I would suggest taking your VirtualBox approach only after you are more familiar with the EC2 infrastructure. I suggest that you go to EC2, open an account, and follow Amazon's EC2 getting-started guide. This guide will give you enough of an overview of everything (EBS, IPs, connections, and so on) to get you started. We are currently using EC2 for production, and the way we started was as I am explaining here.
I hope you become a cloud expert soon.
Per timbo's concern, I was able to nab an IP that, so far, hasn't legitimately shown up on any spam lists. You will have a few hiccups, since many blacklists are technically whitelists and will include every IP on their list until otherwise notified that a mail server is running on that IP. It's really easy to get removed: most of them have automated removal request forms, and every one that doesn't has been very cooperative in removing me from their lists. Just be professional, and ask if they can give a time and reason for the block and what steps you should take to remove your IP. None of the services I emailed asked me to jump through any hoops; within two or three business days they all informed me my IP had been removed.
Still, if you plan on running a mail server, I would recommend reserving IPs now. They're 1 cent for every hour they are not bound to an instance, so it works out to about $7 a month. I went ahead and reserved an extra one, as I plan on starting up another instance soon.
I have deployed some simple stuff to EC2 Win2k3 instances. Here's my advice:
Find a tutorial. Sign up for the service. Just spend an afternoon setting up your first server. It's pretty darned easy, though there will be obstacles to overcome. It's not too tough.
When I was fooling with EC2 I think I spent like $2.00 setting up a server and playing with it for a while.
Some of your data will be persistent, but you can connect S3 to EC2 as well.
Just go for it!
With regard to the concerns about blacklisting of mail servers, you can also use Amazon's Simple Email Service (SES), which obviates the need to run a mail server on the EC2 instances at all.
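For instance, once a sender address is verified, sending through SES from the command line is a one-liner; the addresses below are placeholders:

# Verify the sender address first (SES requires verified identities, especially in sandbox mode).
aws ses verify-email-identity --email-address sender@example.com

# Send a message without running any mail server on the instance.
aws ses send-email --from sender@example.com \
    --destination "ToAddresses=recipient@example.com" \
    --message "Subject={Data=Test},Body={Text={Data=Hello}}"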
I had trouble with this as well, but posted a note here in their forums - https://forums.aws.amazon.com/thread.jspa?threadID=80158&tstart=0