I created some /etc/init scripts, but one script (script B) depends on another (script A). How do I order the execution of these scripts? I want the system to execute script A first and then script B. The process that starts script B will be killed and executed again a couple of times during the day, and I want script A to execute only on reboot, not every time before script B. How can I do that?
You have a couple of options:
Put script A outside /etc/init and call it at the very beginning of script B.
Put script B outside /etc/init and call it at the very end of script A.
EDIT
If the calling sequence for B is always A; B, but A runs on its own only at reboot, you have another option:
Put script B outside /etc/init, since it does not belong there, and call /etc/init/A from the beginning of it.
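If B may restart several times a day while A must still run only once per boot, one way to get that with the "call A from B" approach is a stamp file in a directory that is cleared at reboot, such as /var/run. A minimal sketch in plain shell, with script_a as a hypothetical stand-in for the real script A:

```shell
#!/bin/sh
# Stand-in for the real script A; in practice this would invoke it.
A_RUNS=0
script_a() { A_RUNS=$((A_RUNS + 1)); }

# In real use the stamp would live in /var/run, which is emptied at reboot,
# so the stamp disappearing is what marks "a fresh boot".
STAMP="${TMPDIR:-/tmp}/script_a.done.$$"

run_a_once_per_boot() {
    if [ ! -f "$STAMP" ]; then
        script_a            # first start since boot: run A
        touch "$STAMP"      # remember that A has already run
    fi
}

run_a_once_per_boot   # first start of B after boot: A runs
run_a_once_per_boot   # B restarted later the same day: A is skipped
rm -f "$STAMP"        # cleanup for this demo only
```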
Related
I need to run a batch file on one of my database tables. The best solution for me is to run it in a trigger. I'd like to know whether that's a good idea, and in case it isn't, what other solutions could I implement?
- I need to run it every time there's an insert or update.
- My .bat file takes 20-30 seconds to finish.
- The table has around 10 inserts a day.
I want to implement the design below.
Data arrives in a source table, source_tab, and I have 3 procedures to invoke to process the available data. I am invoking each procedure manually at the moment.
Is there any way to create a job that is invoked as soon as new data is available in source_tab and that processes the data by invoking those 3 procedures sequentially? Also, the next job cycle should not start until the current execution has finished. This should behave the same way Java listeners do.
I don't want to use TRIGGERS.
I agree with the comment suggesting MSMQ. A dirty method, if you don't want to use triggers, is to set up a job on a short-interval schedule.
It could check the table: if new data exists, go to the first step and flow on from there; if not, do nothing.
How you determine what "new" data is depends on the data itself. It's easy enough if you have a DateAdded column or something similar. If not, you may need an additional job step that writes the table as it is now into a staging table, then compares that version against the next one on the next run-through.
Like I said, it's not nice, but it is an option.
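Note that a SQL Server Agent job will not start a new instance while the previous one is still running, which gives you the "no overlap" behavior for free. A minimal sketch of one polling cycle, using SQLite and plain Python functions as stand-ins for SQL Server and the three stored procedures (source_tab is from the question; processed_marker, poll_once, and the proc_* names are hypothetical):

```python
import sqlite3

# The three stored procedures are modeled as functions that record
# their invocation order.
log = []

def proc_1(rows): log.append(("proc_1", len(rows)))
def proc_2(rows): log.append(("proc_2", len(rows)))
def proc_3(rows): log.append(("proc_3", len(rows)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_tab (id INTEGER PRIMARY KEY, payload TEXT)")
# High-water mark: the last id that has already been processed.
conn.execute("CREATE TABLE processed_marker (last_id INTEGER)")
conn.execute("INSERT INTO processed_marker VALUES (0)")

def poll_once(conn):
    """One scheduled-job cycle: process new rows, or do nothing."""
    last_id = conn.execute("SELECT last_id FROM processed_marker").fetchone()[0]
    new_rows = conn.execute(
        "SELECT id, payload FROM source_tab WHERE id > ? ORDER BY id",
        (last_id,),
    ).fetchall()
    if not new_rows:
        return False  # nothing new: this cycle simply exits
    for proc in (proc_1, proc_2, proc_3):  # strictly sequential
        proc(new_rows)
    conn.execute("UPDATE processed_marker SET last_id = ?", (new_rows[-1][0],))
    conn.commit()
    return True

conn.executemany("INSERT INTO source_tab (payload) VALUES (?)", [("a",), ("b",)])
did_work = poll_once(conn)     # processes the two new rows
did_nothing = poll_once(conn)  # next cycle finds nothing new
```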
I am running a SELECT query and getting lots of rows from it (sometimes more than 500), then inserting those values one by one into another table. Now I want to know which insert step failed so that I can start inserting again from that failed step, and the table does not get updated with duplicate values.
Syntax may vary from system to system; however, the structure and process remain the same.
When you deploy database changes to a live system, you prepare two SQL scripts.
1. A change script with a structure like this:
PRINT 'Doing X'
SCRIPT X
....
GO
PRINT 'DOING Y'
SCRIPT Y
....
GO
2. A rollback script that reverts all the changes each section of the change script makes.
Rollback sections are executed in reverse order:
REVERT SCRIPT Y
REVERT SCRIPT X
When the change script fails, you will know which section failed from the last message printed out. Then take all the sections of the rollback script that come after the failed section's revert and run them to undo the changes.
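The control flow above can be sketched as follows, with each SQL section modeled as a hypothetical Python function (script_x, script_y, and the revert_* names are stand-ins; a real runner would execute the SQL sections instead):

```python
# script_y deliberately fails so the rollback path is exercised.
applied = []

def script_x(): applied.append("X")
def script_y(): raise RuntimeError("Y failed")
def revert_x(): applied.remove("X")
def revert_y(): applied.remove("Y")

sections = [("X", script_x, revert_x), ("Y", script_y, revert_y)]

def deploy(sections):
    done = []  # (name, rollback) for sections that completed
    for name, change, rollback in sections:
        print(f"Doing {name}")  # mirrors PRINT 'Doing X'
        try:
            change()
        except Exception as exc:
            print(f"Section {name} failed: {exc}")
            # Run the rollbacks of the completed sections in reverse order.
            for done_name, done_rollback in reversed(done):
                print(f"Reverting {done_name}")
                done_rollback()
            return False
        done.append((name, rollback))
    return True

ok = deploy(sections)  # fails at Y, then reverts X
```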
We have many SQL Server scripts. But there are a few critical scripts that should only be run at certain times under certain conditions. Is there a way to protect us from ourselves with some kind of popup warning?
i.e. When these critical scripts are run, is there a command to ask the user if they want to continue?
(We've already made some rollback scripts to handle these, but it would be better if they were never run accidentally in the first place.)
No, there is no such thing.
You can write an application (a Windows service?) that runs the scripts only as and when they should be run.
The fact that you are even asking the question shows that this is something that should be automated, the sooner the better.
You can mitigate the problem in the meantime by using IF to test for these conditions and executing only when they are met. If this is a series of scripts, you should wrap them in transactions to boot.
One work-around you can use is the following, which would require you to update a value in another table:
CREATE PROC dbo.MyProc
AS
WHILE (SELECT GoBit FROM dbo.OKToRun) = 0
BEGIN
RAISERROR('Waiting for GoBit to be set!', 0,1)
WAITFOR DELAY '00:00:10'
END
UPDATE dbo.OKToRun
SET GoBit = 0
... DO STUFF ...
This requires you, in another SPID or session, to update that table manually before the procedure will proceed.
This gets a lot more complicated with multiple procedures, so it will only work as a very short-term workaround.
SQL is a query language; it has no way to accept user input.
The only thing I can think of would be to make it @variable driven. The first part should set @shouldRunSecond = 1, and the second part should be wrapped in
IF @shouldRunSecond = 1
BEGIN
...
END
The second portion will be skipped if not desired.
The question is: where are these scripts located?
If you have them as .sql files that you open every time before running them, then you can simply add a "magic number" check at the beginning of the script that you have to satisfy each time before you run it. In the example below, each time before you run the script you have to put the correct day and minute into the IF condition; otherwise the script will not run:
IF DATEPART(dd,GETDATE())!=5 or DATEPART(mi,(GETDATE()))!=43
BEGIN
RAISERROR ('You have accidentally tried to run your dangerous script !!!',16,1);
RETURN
END
--Some dangerous actions
drop database MostImportantCustomer
update Personal set Bonus=0 where UserName=SUSER_SNAME()
If your scripts reside in stored procedures, you can add some kind of "I am sure, I know what I am doing" parameter that you must always pass, for example the minute multiplied by the day.
Hope it helps.
I have seen batch scripts containing SQLCMD ..., so instead of running the .sql script from code or Management Studio, you could add a prompt in the batch script.
I have (on limited occasion) created an #AreYouSure parameter that must be passed into a stored procedure, then put comments next to the declaration in the stored procedure explaining the danger of running said procedure.
At least that way, no randos will wander into your environment and kick off stored procedures without understanding the consequences. The parameter can be worked into an IF statement that checks its value, or it doesn't really have to be used at all; but if it must be passed, then they at least have to figure out what to pass.
If you use this too much, though, others may just start passing a 'Y' or a 1 into every stored procedure without reading the comments. You could switch up the data types, but at some point it becomes more work to maintain this scheme than it is worth. That is why I use it only on limited occasions.
I have a script in T-SQL that goes like this:
create table TableName (...)
SET IDENTITY_INSERT TableName ON
And on second line I get error:
Cannot find the object "TableName" because it does not exist or you do not have permissions.
I execute it from Management Studio 2005. When I put "GO" between these two lines, it works. But what I would like to accomplish is not to use "GO", because I would like to place this code in my application when it is finished.
So my question is how to make this work without using "GO" so that I can run it programmatically from my C# application.
Without using GO, programmatically, you would need to make 2 separate database calls.
Run the two scripts one after the other - using two calls from your application.
You should only run the second statement once the first has run successfully anyway, so you can run the first script and, on success, run the second. The table has to have been created before you can use it, which is why you need the GO in Management Studio.
From the BOL: "SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to SQL Server". Therefore, as Jose Basilio already pointed out, you have to make separate database calls.
If this can help: I was faced with the same problem, and I had to write a little (very basic) parser to split each script into a bunch of mini-scripts, which are sent to the database one at a time.
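Such a parser can be quite small if you only need to handle GO on a line by itself. A minimal sketch in Python (split_batches is a hypothetical name; this does not handle GO inside strings or comments, nor the optional repeat count "GO n"):

```python
import re

def split_batches(script):
    """Split a T-SQL script into batches on lines containing only GO."""
    batches, current = [], []

    def flush():
        if any(line.strip() for line in current):
            batches.append("\n".join(current).strip())
        current.clear()

    for line in script.splitlines():
        # GO is understood by client tools, not by the server, and must
        # stand alone on its line (case-insensitive).
        if re.fullmatch(r"\s*GO\s*", line, flags=re.IGNORECASE):
            flush()
        else:
            current.append(line)
    flush()
    return batches

script = """create table TableName (id int identity)
GO
SET IDENTITY_INSERT TableName ON
go
"""
batches = split_batches(script)
# Each batch would then be sent as its own command (SqlCommand, cursor, ...).
```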
Something even better than tpdi's temp table is a table variable. They run lightning fast and are dropped automatically once out of scope.
This is how you make one:
declare #TableName table (ColumnName int, ColumnName2 nvarchar(50))
Then, to insert, you just do this:
insert into #TableName (ColumnName, ColumnName2)
select 1, 'A'
Consider writing a stored procedure that creates a temporary table and does whatever it needs to with it. If you create a real table, your app won't be able to run the script more than once unless it also drops the table, in which case you have exactly the functionality of a temp table.
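The scoping behavior that makes this safe to re-run can be illustrated with SQLite's TEMP tables, which, like a temp table created in a SQL Server session, disappear when their session ends (a sketch; staging and the file path are hypothetical):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Session 1: a TEMP table is visible only to this connection.
conn = sqlite3.connect(path)
conn.execute("CREATE TEMP TABLE staging (id INTEGER, name TEXT)")
conn.execute("INSERT INTO staging VALUES (1, 'A')")
row_count = conn.execute("SELECT COUNT(*) FROM staging").fetchone()[0]
conn.close()  # the temp table is dropped automatically here

# Session 2: staging no longer exists, so the same script could run again
# without a DROP TABLE step -- exactly the point of using a temp table.
conn2 = sqlite3.connect(path)
try:
    conn2.execute("SELECT * FROM staging")
    still_exists = True
except sqlite3.OperationalError:
    still_exists = False
conn2.close()
```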