I'm not sure how to do this...
I have a database that contains a messages table and a categories table.
The categories table has a field that stores the count of messages related to each category.
Sometimes, however, I need to deactivate a message (active = 0), and at the moment this doesn't update the categories table. I will implement that properly eventually, but for the time being I would just like to run a script, perhaps daily, that goes through all the categories, counts up the messages, and updates the field.
What's the best way of doing this?
Thanks in advance
Chris
If you are happy to do this manually, you can create a Rake task, push to Heroku, and then execute it using "heroku rake".
You could also use delayed_job, which Heroku supports. This does, however, cost money: $0.05 an hour. Given that it won't run very often or for very long, it might be quite cheap.
The only other way I have done this kind of thing is to make a controller action that wraps the logic and then call that. You can use one of the many and various ping services to trigger the call (these services are generally set up to request a page on your site), or set up a cron job that uses wget or curl to make the request.
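Whichever way you trigger it, the recount itself can be a single SQL statement. A minimal sketch, assuming tables named messages and categories, a category_id foreign key, and a message_count field (adjust the names to your actual schema):

    -- Recount active messages per category in one pass (names are assumptions)
    UPDATE categories
    SET message_count = (
        SELECT COUNT(*)
        FROM messages
        WHERE messages.category_id = categories.id
          AND messages.active = 1
    );

A Rake task or cron job can simply execute this statement once per run.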
I recently ran into the following problem:
For each user, I need to do the following on the server side:
First:
(SQL) Insert the user's record, with a unique constraint on ID
Then, in parallel:
(HTTP) Subscribe the user to Service A, getting subscription_id_A
(HTTP) Subscribe the user to Service B, getting subscription_id_B
Finally:
(SQL) Update the user's record with both subscription IDs
Ideally I want this entire operation to be transactional, i.e. if any of the HTTP requests or SQL statements fails, it would be as if nothing had happened. Added: if request A fails but B succeeds, I would be stuck: do I roll back the transaction and end up with an untracked subscription, or do I commit it and end up with a user missing a subscription?
Given that this is likely impossible to achieve, what would be the next best thing I can do?
Services A and B do provide APIs to check for the existence of subscriptions and to modify or delete them, but I want to avoid the check-then-act style. The SQL server is running at the highest isolation level.
This is indeed a standard problem. (Often, developers are not aware of it and only find out in production.) There is no standard solution, and it is impossible to solve in general: see the Two Generals' Problem (http://en.wikipedia.org/wiki/Two_Generals%27_Problem) - two systems can never agree with 100% certainty on whether they should commit or abort.
Maybe you can perform all the SQL work first: insert the user, but without subscription IDs. Then try to add the subscriptions one by one, recording each ID in a separate transaction once you get it.
Install a background job that periodically checks for users that were created a while ago but still have no subscriptions. If you find any discrepancies, fix them and log that fact.
This periodic cleanup ensures that temporary failures (which will occur, due to network glitches, timeouts, redeployments, bugs, ...) stay temporary. It also ensures that they are detected and, if you like, reported to developers.
This would be an eventually consistent system. The idea is to first transactionally record the target state (the user and the goal to create two subscriptions) and then have a background job try to converge the data to the target state.
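As a sketch of that background check, in PostgreSQL-flavored SQL and with illustrative column names (subscription_id_a, subscription_id_b, created_at), the job could look for records stuck short of the target state:

    -- Users created long enough ago that both subscriptions should exist by now
    SELECT id
    FROM users
    WHERE created_at < NOW() - INTERVAL '15 minutes'
      AND (subscription_id_a IS NULL OR subscription_id_b IS NULL);

Each hit is a discrepancy: re-try the subscription calls (or use the services' existence-check APIs) and fill in the missing IDs.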
I have a Django application hosted on Heroku using the PostgreSQL database addon. Upon a GET request for the front page, my application performs a SQL query to extract some necessary display information. I also create a subprocess with Popen on each GET request.
However, when the number of GET requests increases to around one per second, I begin getting errors at the statement model.objects.get(id="----"). I get an OperationalError; I'm assuming that either my free plan on Heroku isn't keeping up or my database isn't.
I don't want to leave Heroku's free plan, but I was wondering: if I did, would I need to create more workers? Upgrade my database? What are ways to diagnose the issue? And why would a simple SQL query cause problems once requests arrive about once every second? Does this seem reasonable?
My hack solution was just to sleep in the view whenever I catch an OperationalError. Are any other approaches recommended?
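One cheap diagnostic, assuming the OperationalError is connection-related (Heroku's free Postgres tier caps concurrent connections at 20): count open connections while the errors are happening. A sketch, not a fix:

    -- How many connections are open, and what are they doing?
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state;

If the total sits at the plan's limit, the remedy is fewer concurrent connections (pooling, fewer workers) or a bigger plan; it is also worth checking whether the per-request Popen subprocess is eating dyno resources.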
I am using C#.NET; the DB is MS SQL 2008 R2.
I have a question that seems to have been asked a lot in the forums here. I want to use a database table as a queue... but the processing of these messages cannot be done from the database.
I have a table that stores the requests I get from a .NET component. I now have to read the data from this table and make HTTP calls to two web services. Based on the response received from the web services, the data gets archived or deleted.
I had a few specific questions:
1. How do I make sure that if I pick a record for processing and the HTTP call fails, I can move on to the next record and come back to the failed one at the end of the run? (See the sketch after this question.)
2. Is there an alternative to using the database as a queue (like MSMQ, etc.)? Which option is better?
3. I want to maintain an audit trail of record status. Is a trigger that logs the changes before each edit the best way to do it?
Regards
Leo
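For question 1, a common table-as-queue pattern in SQL Server is to claim one pending row with the READPAST and UPDLOCK hints, so concurrent workers skip each other's locked rows, and to flag failures for a retry pass at the end of the run. A minimal sketch with assumed table and column names:

    -- Claim the next pending request; rows locked by other workers are skipped
    WITH next_request AS (
        SELECT TOP (1) *
        FROM dbo.Requests WITH (ROWLOCK, READPAST, UPDLOCK)
        WHERE Status = 'Pending'
        ORDER BY Id
    )
    UPDATE next_request
    SET Status = 'Processing'
    OUTPUT inserted.Id, inserted.Payload;

    -- If the HTTP call fails, flag the row for the end-of-run retry:
    -- UPDATE dbo.Requests SET Status = 'Retry' WHERE Id = @Id;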
Use Service Broker!
I have been using it for a while and think it's a great thing, although it takes time to understand how it works. I used a book to learn it.
Service Broker solves:
concurrency
application state
... many, many other things
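For reference, a minimal Service Broker setup looks roughly like the sketch below; all object names are illustrative, and a real deployment also needs the database's broker enabled plus an activation procedure or a polling loop on the consumer side:

    -- One-way queue: message type, contract, queue, and service
    CREATE MESSAGE TYPE RequestMessage VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT RequestContract (RequestMessage SENT BY INITIATOR);
    CREATE QUEUE dbo.RequestQueue;
    CREATE SERVICE RequestService ON QUEUE dbo.RequestQueue (RequestContract);

    -- Consumers dequeue transactionally:
    RECEIVE TOP (1) conversation_handle, message_type_name, message_body
    FROM dbo.RequestQueue;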
I will be sending out e-mails from an application on a scheduled basis.
I have an EmailController in my ASP.NET MVC application with action methods, one for each kind of notification/e-mail, that will need to be called at different times during the week.
Question: Is Windows Scheduler (running on a Server 2008 box) any better or worse than scheduling this via a SQL Server job? And why?
Thanks
IMHO, having the scheduler call into the controller and execute the action methods to fire off notifications worked out best. My process (for better or for worse) is as follows:
Put the code that calls the controller/action in a .vbs file. The action method requires a "security code" that must match a value in web.config, or else it will not execute (my thinking is that this lessens the chance of some folk hitting the action method with their browser and running the send-notification code when it shouldn't be run).
Create a scheduled task in Scheduler to call that file on a regular basis.
In my database, log all notification executions and include an attribute that defines the frequency with which different notification types should go out. This, again, is to lessen the chance of someone sending out notifications when they shouldn't.
Anyhow, this works. The only problem I had was hitting it via HTTPS. That didn't work; I believe the task was being challenged to provide credentials (which it couldn't, as it was being run programmatically). Changing it to HTTP worked and, in my opinion, doesn't create any kind of security risk.
Thoughts? Better way to implement this? I'd love to hear anything anyone has to offer.
Thanks
I prefer sending emails with a SQL Server job. As we already had several jobs running on our SQL Server, it made sense to stick with that one approach. Had we gone down the scheduled-task route, we would then have had two different task-scheduling systems, which adds needless complexity. With all scheduled tasks run through one system, it's easy to track and maintain them.
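If you take the SQL job route, the job step would typically build the message and hand it to Database Mail. A hedged sketch; the profile name and addresses are placeholders:

    -- Job step: send one notification via Database Mail
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'NotificationsProfile',  -- placeholder mail profile
        @recipients   = 'someone@example.com',
        @subject      = 'Scheduled notification',
        @body         = 'Body text assembled by the job.';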
I have a procedure written in PL/Java that sends out updates over JMS in my Postgres database.
What I would like to do is have that function called on an interval (every 15 seconds) internally in the database (preferably not from an outside process). Is this possible? Any ideas?
If you need no external access, you are presumably able to modify the database design so that you don't need the update at all. Can you explain more about what the update is doing?
As depesz said, you could use either cron or pgAgent, but they only go down to one-minute granularity, not 15 seconds. Sleeping inside the stored procedure until the next iteration is not a good idea either, because you would hold a transaction open the whole time, which is really bad.
Strict answer: it is not possible. Since you don't want an outside process, and PostgreSQL doesn't support scheduled jobs, you are out of luck.
If you will reconsider using outside processes, then you most likely want something like cron, or better yet pgAgent.
On the other hand: what do you need to do that has to happen every 15 seconds? This seems like a design problem.
First, you'll spend the least amount of effort if you just go with a cron job.
However, if you were starting from scratch: you are trying to periodically replicate rows from your database, so I think you are looking at a replication queue.
The PGQ project (used for Londiste replication; both are part of Skype's SkyTools) has a queue that you can use independently. When configuring it, you set a maximum event count and a loop delay before batched events are generated; you can get batches spaced by no more than 15 seconds that way. You then have to produce the events that will be batched, using a trigger that calls pgq.insert_event, and consume the queues. The consumer can call your PL/Java stored procedure; you'll have to rewrite the procedure to send everything in the batch instead of scanning the base table for new events.
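A sketch of the producing side, assuming a queue named update_queue has already been created with pgq.create_queue and the source table is called my_table (both names illustrative):

    -- Trigger function: push each changed row's id onto the PGQ queue
    CREATE OR REPLACE FUNCTION enqueue_update() RETURNS trigger AS $$
    BEGIN
        PERFORM pgq.insert_event('update_queue', 'row_changed', NEW.id::text);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER my_table_enqueue
    AFTER INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE enqueue_update();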
As far as I know, PostgreSQL doesn't support scheduled tasks. You'll need to use a script with cron or at (depending on your operating system).
Sounds like you're doing some sort of replication? Every 15 seconds sounds like a lot of updates. Could you set up a trigger (or a number of triggers) instead of polling?
If you are using JMS, why not just have the task wait for input on the queue?
Per your comment on depesz's answer, you have a PL/Java stored procedure that "flushes out database tables (updates) as java objects". Since you want it to run at 15-second intervals, it must be processing a batch of updates each time. Rather than processing a batch every 15 seconds, why not process updates one at a time, as they happen, via an AFTER UPDATE trigger, and eliminate the need for a timed interval? If you are aggregating data from multiple tables to build your objects, then add the triggers to your uppermost tables only.
In my case the problem was that the agent couldn't authenticate to the database, so after I made all connections from localhost trusted, the service started successfully and the job works fine.
For more information about the error, look in the Windows Event Viewer (or its equivalent on Unix-based systems). See my config file: C:\Program Files\PostgreSQL\10\data\pg_hba.conf
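For reference, the relevant pg_hba.conf entry looks like the line below. Be aware that trust authentication lets anyone who can reach that address connect as any database user, so it is only sensible on a locked-down development machine:

    # pg_hba.conf: allow local TCP connections without a password (insecure)
    host    all    all    127.0.0.1/32    trust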