I have an SQL query (shown below) that I need to run on a regular basis:
db.execute("UPDATE property_info SET IsActive=false WHERE ExpiryDate > #0", CurrentDate);
This query is basically intended to check ALL properties and see whether or not they are past their expiration date. If they are, it will automatically set the property to Inactive. Because "CurrentDate" is a rolling window, I want to re-run this query automatically, probably every day.
Is this something I should be using a stored procedure for?
Any suggestions on the best way to achieve this without any user interaction?
One simple way to achieve this would be to add that line of code to _PageStart.cshtml in the root of your project. This will make it execute every time any page on the site is requested. That is probably massive overkill for something that, by the looks of it, only needs to be checked once a day or so. To alleviate this you could employ a simple DateTime stamp in the Application collection to make sure it runs at most once a day (or tune the interval as appropriate for your needs). This is in no way a solution for fully scheduled code execution, but it may well serve your purposes (and your budget).
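For illustration, a rough sketch of that approach; the connection name "MyConnection", the "ExpiryCheckLastRun" key and the 24-hour interval are placeholders, and the Execute call just mirrors your existing query (adjust the parameter syntax to whatever your data helper expects):

@{
    // _PageStart.cshtml in the site root: runs at the start of every page request.
    var app = System.Web.HttpContext.Current.Application;
    var lastRun = app["ExpiryCheckLastRun"] as DateTime?;

    // Only hit the database if the check hasn't run in the last 24 hours.
    if (lastRun == null || (DateTime.Now - lastRun.Value).TotalHours >= 24)
    {
        var db = Database.Open("MyConnection");
        db.Execute("UPDATE property_info SET IsActive=false WHERE ExpiryDate < @0", DateTime.Now);
        app["ExpiryCheckLastRun"] = DateTime.Now;
    }
}

This isn't strictly thread-safe (two simultaneous requests could both run the update), but since the UPDATE is idempotent that is harmless for this purpose.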
Let's say I write this little LINQPad snippet and run it; I get what I expect.
But when I hit F5 again, the list will have two items in it.
I was not expecting it to do this and can't figure out why it would.
The list keeps growing every time I run it unless I do something to the code, even just adding a comment; then it resets to one entry.
Is this by design? If so why?
I'm on 5.08.01
It is by design. LINQPad does not reset the Application Domain unless you do one of the following:
1) Use Ctrl+Shift+F5 to reset it on demand
or
2) Go into Edit/Preferences/Advanced and set "Always use Fresh Process per execution" to True. This will reset it every time you run a script.
or
3) Put the following code into your query (this tells LINQPad to use a fresh domain next time you run):
Util.NewProcess = true;
As for why, there are probably multiple benefits, but I'd say performance is the main one. You could put the results of an expensive query in a static variable and only run it the first time.
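To make that concrete, here is a small sketch (assuming a LINQPad "C# Program" query; the field name is arbitrary) that shows both the growing list and the reset option mentioned above:

// The static field lives in the cached application domain/process, so it
// keeps its contents across plain F5 runs of the same (unchanged) query.
static List<int> runs = new List<int>();

void Main()
{
    runs.Add(runs.Count + 1);   // one more entry on every run until the domain is reset
    runs.Dump();

    // Uncomment to force a fresh process (and therefore an empty list) next run:
    // Util.NewProcess = true;
}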
Is there a way to update a Liferay site page's friendly name through an SQL script?
We generally do this in the Control Panel as an admin user.
While steven35's answer might do the job, you're hitting a pet peeve of mine. On a different level, you're doing it right if you do it in the Control Panel or through the API, and you should not think about ever writing to Liferay's database directly. It might work for the moment, but it might also fail in unforeseen ways, sometimes long after your update.
There are enough examples of this happening. If you change data while Liferay is running, the cache will not be updated. If these values are also indexed in the search index, they won't be updated there either, and later lookups might not find the correct page without you reindexing everything. The same value might be stored somewhere else, or translated. Numerous conditions can fail, and there's always one condition more than you expect and cater for. That one condition might break your neck.
Granted, the friendly name of a page might not fall into the most complex of these cases, but just don't get into the habit of writing to Liferay's database. Or, if you do, don't complain about future upgrades failing or requiring extra work because the database contains values the API didn't expect. The problem is that during the next upgrade (if you do it in, say, one year) you'll long since have forgotten that you manually changed data in the database and will blame Liferay for problems during your upgrade.
Changing data is exactly what the UI and the API are for.
Friendly URLs are stored in LayoutFriendlyURL.friendlyURL in your Liferay database, so the following query should work:
UPDATE yourdatabase.LayoutFriendlyURL SET friendlyURL = '/newurl' WHERE layoutFriendlyURLId = 12345;
You will also need to update the Layout table accordingly to match the new friendly URL.
I am trying to run an SQL package from our SQL Server via a scheduled job at different times of the day with different parameters. The package imports property specific information (our company has multiple properties) that is available after a certain time of the day. The package accepts a property code parameter to identify the property information to be imported.
If possible, I would like to re-use one package/job and set up multiple steps that execute at certain times of the day.
Is there a better way to set this up besides using multiple jobs that run the same package with its own parameters?
I would really appreciate some advice, thank you.
Multiple steps allow you to stop execution if a step fails, so the thing you do in step 10 will NOT run if step 9 fails.
If you care about that requirement, use a single job with multiple steps. If you don’t, use multiple jobs.
If you need to control the time of each step, then you must use multiple jobs.
Really, the answer is based on what you need to accomplish.
I have a SQL Server 2005 database named mydb, which is being accessed by 7 applications from different locations.
I have created a copy of it named mydbNew and tuned it by adding primary keys and indexes and by changing queries in stored procedures.
Now I want to replace the old db "mydb" with the new db "mydbNew".
Please tell me the best approach to do this. I thought about making changes in web.config, but the applications accessing the database are not accessible to me, so I can't go that route.
Please give me an expert opinion so that I can replace the database in minimum time without affecting the other databases and all the applications.
To be clear, by "replace the old db with the new db" I mean that I want to rename the old db "mydb" to "mydbold" and then rename my new db "mydbNew" to "mydb".
Thanks.
Your plan will work, but it does carry a high risk, especially since I'm assuming this is a system where users are actively changing data, which means your copy won't have the same level of updated content in it unless you do a cut right before go-live. Your best bet is to migrate your changes carefully into the live system during a low-traffic / maintenance period and extensively test it once you're done. Prior to doing this, or the method you mentioned previously, back up everything.
All of the changes you described above can be made to an online database without the need to actually bring it down. However, some of those activities will change the way in which the data is affected by certain actions (changes to stored procs), which means that during the transition the behaviour of the system or systems may be unpredictable, and therefore you should either complete this update at a low point in day-to-day operations or take it down for a maintenance window.
SQL Server comes with a feature to generate a script file from your database; you can also do this manually by clicking on the object you want to script and selecting the Script -> CREATE option. Depending on the amount of changes you have to make, it may be worthwhile to script your whole new database (by clicking on the new database and selecting Tasks -> Generate Scripts... and selecting the items needed).
If you just want to script out the new things you need to add individually, you simply click on the object you want to script, select Script <object> As ->, then select DROP And CREATE To if you want to kill the original version (like replacing a stored proc), or select CREATE To if you're adding new stuff.
Once you have all the things you want to add/update as a script, you are then ready to execute that against the live database. This would be the part where you back up everything. Once you're happy everything is backed up and the system is in maintenance or a low-traffic period, you execute the script. There may be a few problems when you do this; you will need to fix these as quickly as possible (usually mostly just 'already exists' errors, which is why drop-and-create scripts are good) and if anything goes really wrong, restore your backups and try again (after figuring out what happened and how to fix it).
Make no mistake: if you have a lot of changes to make, this could be a long process, or it could take mere minutes. You just need to adapt if things go wrong and be sure to cover yourself with backups/extensive prayer. Good luck!
OK, first let me state that I have never used this control and this is also my first attempt at using a web service.
My dilemma is as follows. I need to query a database to get back a certain column and use that for my autocomplete. Obviously I don't want the query to run every time a user types another word in the textbox, so my best guess is to run the query once then use that dataset, array, list or whatever to then filter for the autocomplete extender...
I am kinda lost. Any suggestions?
Why not keep track of the query executed by the user in a session variable, then use that to filter any further results?
The trick to preventing the database from being overloaded, I think, is really just to limit how frequently the auto-updater is allowed to run; something like once per 2 seconds seems reasonable to me.
What I would do is this: store the current list returned by the query for word A server-side and tie that to a session variable. This should be basically the entire list, I would think. Then, for each new word typed, so long as the original word A is still present, you can filter the session info and spit the filtered results out without having to query again. So basically, only query again when word A changes.
I'm using "session" in a PHP sense; you may be using a different language with different terminology, but the concept should be the same.
This question depends upon how transactional your data store is. Obviously, if you are looking for US states (a data collection that realistically would not change through the life of the application), then I would cache either a System.Collections.Generic.List<> or, if you wanted, a DataTable.
You could easily set up a cache of the data you wish to query that is dependent upon an XML file or database, so that your extender always queries the data object cast from the cache and the cache object is only updated when the data source changes.
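For example, a minimal sketch using the ASP.NET cache with a file dependency; the cache key, file path and loader method are assumptions, and for a database source a SqlCacheDependency could play the same role:

using System.Collections.Generic;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

public static class AutoCompleteCache
{
    public static List<string> GetNames()
    {
        var names = HttpRuntime.Cache["AutoCompleteNames"] as List<string>;
        if (names == null)
        {
            names = LoadNamesFromXml();   // hypothetical loader; swap in your real data access
            string path = HostingEnvironment.MapPath("~/App_Data/names.xml");

            // The cached list is evicted automatically whenever the XML file changes,
            // so the next request reloads it from the data source.
            HttpRuntime.Cache.Insert("AutoCompleteNames", names, new CacheDependency(path));
        }
        return names;
    }

    private static List<string> LoadNamesFromXml()
    {
        return new List<string>();   // placeholder: read the XML / query the database here
    }
}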
RAM is cheap and SQL is harder to scale than IIS, so cache everything in memory:
- your entire data source, if it is not too large to load in a reasonable time,
- precalculated data,
- autocomplete webservice responses.
Depending on your desired autocomplete behavior and performance, you may want to precalculate data and create redundant structures optimized for reading. Make use of structures like SortedList (when you need something like 'select top x ... where z like #query+'%''), Hashtable, and so on.
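As a concrete illustration of the "redundant, read-optimized structure" idea, here is a sketch that binary-searches a pre-sorted list per keystroke; the names are illustrative, and the list is assumed to have been sorted once, up front, with StringComparer.Ordinal:

using System;
using System.Collections.Generic;

public static class PrefixIndex
{
    // In-memory analogue of: select top x ... where z like #query + '%'
    public static IEnumerable<string> Matches(List<string> sortedValues, string prefix, int max)
    {
        // BinarySearch returns the item's index, or the bitwise complement of the
        // index of the first larger item; either way, matches start there.
        int index = sortedValues.BinarySearch(prefix, StringComparer.Ordinal);
        if (index < 0)
            index = ~index;

        for (int i = index; i < sortedValues.Count && max > 0; i++, max--)
        {
            if (!sortedValues[i].StartsWith(prefix, StringComparison.Ordinal))
                yield break;   // past the contiguous block of matching entries
            yield return sortedValues[i];
        }
    }
}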
While caching everything is certainly a good idea, your question about which data structure to use is an issue that wasn't fully answered here.
The best data structure for an autocomplete extender is a Trie.
You can find a good .NET article and code here.
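For reference, a minimal sketch of what such a trie might look like in C# (illustrative only, not taken from the linked article):

using System.Collections.Generic;

public class Trie
{
    private class Node
    {
        public readonly Dictionary<char, Node> Children = new Dictionary<char, Node>();
        public bool IsWord;
    }

    private readonly Node _root = new Node();

    public void Insert(string word)
    {
        var node = _root;
        foreach (char c in word)
        {
            Node child;
            if (!node.Children.TryGetValue(c, out child))
                node.Children[c] = child = new Node();
            node = child;
        }
        node.IsWord = true;
    }

    // Returns up to 'max' stored words that start with 'prefix'.
    public List<string> Complete(string prefix, int max)
    {
        var results = new List<string>();
        var node = _root;
        foreach (char c in prefix)
            if (!node.Children.TryGetValue(c, out node))
                return results;   // nothing stored under this prefix

        Collect(node, prefix, results, max);
        return results;
    }

    private static void Collect(Node node, string current, List<string> results, int max)
    {
        if (results.Count >= max) return;
        if (node.IsWord) results.Add(current);
        foreach (var pair in node.Children)
            Collect(pair.Value, current + pair.Key, results, max);
    }
}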