How can I script the replacement of multiple lines of unique text?

I run a server, and every time I update it I have to reconfigure all the prompts in their respective source files again. There are several hundred, if not over 1,000, lines that I end up having to reconfigure to get the server working as desired after every update. An update takes nearly three days, and two of those days are spent doing just that.
I'm looking for a way to replace (for example) "godmodemsg: You have been godded." on line 30 with "godmodemsg: Your forcefield is now active", as well as several other lines that are different. I also need the replacement to work if that line moves, say, to line 53. Most of this is in a normal text file. How would I go about doing this?

sed -i -e 's/You have been godded./Your forcefield is now active/g' /dir/yourfile.txt
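Because sed matches on the text itself rather than on a line number, it does not matter whether the line sits at line 30 or has moved to line 53. To handle several different replacements in one pass you can chain -e expressions (the second expression below is only a placeholder for one of your other lines):

sed -i -e 's/You have been godded./Your forcefield is now active/g' -e 's/some other old text/its new text/g' /dir/yourfile.txt

If the list grows long, put one s/old/new/g command per line in a file such as replacements.sed and run sed -i -f replacements.sed /dir/yourfile.txt instead; that file then becomes the one place where you maintain all your replacements between updates.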

If the version of sed you have does not support the -i (in-place) option, you'll have to redirect the output to a temporary file and then mv the temporary file over the original one. Be careful: cat the temporary file first to check that the result of the replacement is what you expected.
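A minimal sketch of that fallback, using the same file as above:

sed -e 's/You have been godded./Your forcefield is now active/g' /dir/yourfile.txt > /dir/yourfile.txt.new
cat /dir/yourfile.txt.new   # check the replacement looks right before overwriting the original
mv /dir/yourfile.txt.new /dir/yourfile.txt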

Related

PostgreSQL and queue commands

I would like to know if there is a way to queue my queries. I am doing some basic text matching in psql and each query (which is saved in a different script) takes about 6 hours to run. I was wondering if there is a way to queue my scripts?
For example:
my database is called "data"
my scripts are called cancer, heart, death
and I am doing the following:
data; \i cancer;
data; \i heart;
data; \i death;
But I have to come back every so often to check whether it is running or not, which doesn't seem very efficient.
I am new to PostgreSQL, so I appreciate any help.
This is the easiest/fastest solution I can think of, but it should work for your case ;)
When using psql from the command line, you can start it with
-f filename
where filename is a SQL script. It will run the script and send the output to stdout, which you can also redirect to a file. Just put your queries into that SQL file and you have your own queue.
Assuming you run Linux, you could use screen as a simple way to leave your session open when logging off for the night.
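A minimal shell sketch of that idea, assuming the script files really are named cancer, heart and death as in the question and the database is called data (adjust names and connection options to your setup):

#!/bin/sh
# Each script starts only after the previous one has finished,
# and each script's output goes to its own log file.
psql -d data -f cancer > cancer.log 2>&1
psql -d data -f heart > heart.log 2>&1
psql -d data -f death > death.log 2>&1

Save it as, say, queue.sh and run it inside screen (or with nohup sh queue.sh &), and you can log off and check the log files whenever you like.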
The easiest solution was to create a separate SQL file which runs through the commands sequentially.

SQL batch variable

I need to automate a query so I can run it on all servers on a network. The thing is, more servers are added constantly, so I need to keep it dynamic. In a table I have a list of the currently active servers, but each one is repeated several times because each one has different data. Also, it stores only a number, and the server name has a specific format. I did this to solve it:
select distinct ('swp0'+ cast(rtl_loc_id as nvarchar(4000)) +'r01')
from basename..inv_valid_destinations
I wrote this output to a file, but now I want to use it as input to sqlcmd. Each server name (each line of the previous output) should be used as the -S argument. I have tried different ways of doing this to no avail. It should be something like this:
SQLCMD -Sswp0241r01 -Uswpos -isalto_folio.sql -osalto_folio.txt
As I said, more servers appear constantly, and we need to run the query on all servers active at the time and produce an output file. Could you help me out?
If you really want to do this in a batch file you can use a loop, where yourfile.txt is the file containing the list of servers, one name per line. (Use %A instead of %%A if you type the loop directly at the command prompt rather than in a .bat file.)
for /F "tokens=*" %%A in (readme.txt) do (
SQLCMD -S%%A -Uswpos -isalto_folio.sql -osalto_folio.txt
)
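To produce yourfile.txt from the query in the question, you could run something along these lines first (only a sketch: the central server name is made up, and you may need a password switch as well):

SQLCMD -Syourcentralserver -Uswpos -h -1 -W -Q "SET NOCOUNT ON; select distinct ('swp0'+ cast(rtl_loc_id as nvarchar(4000)) +'r01') from basename..inv_valid_destinations" -o yourfile.txt

Here -h -1 suppresses the column header, -W trims trailing spaces, and SET NOCOUNT ON keeps the "(n rows affected)" line out of the output, so yourfile.txt ends up with one server name per line, ready for the loop above.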

Oracle SQLPlus: How to display the output of a sqlplus command without having to first issue the spool off command?

Is there a way to display the output of a sqlplus command without having to first issue the spool off command?
I am spooling the results of a sqlplus session to a file while at the same time tailing the file. The reason for this is that for tables with very long rows the output is easier to look at in a file. The problem is that to see the output I have to issue the spool off command every time I run a command in sqlplus.
Can I configure sqlplus so that after I have issued the spool command all the output is viewable straight away in the file?
(Formatting the way the rows are displayed on the screen is not an option.)
Thanks
SPOOL is really intended for creating a file of SQL*Plus output, for whatever purpose: logging, input to another process, etc. There is no facility for inflight viewing of its output.
There are a number of ways of solving this particular problem, but the easiest is surely to use an IDE which includes a data browser, thus obviating the need to tail a file. There are a number on the market, including Quest's TOAD and Allround Automation's PL/SQL Developer, but if you don't want to spring for a license fee then you should have a look at Oracle's own (free) SQL Developer.
If you are spooling the results of multiple statements you could turn spooling off and then turn it back on between each statement. When you turn spooling back on add the append keyword so that it will continue in the same file rather than overwriting it.
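For example (results.lst and the table names are only placeholders, and the append option needs SQL*Plus 10g or later):

spool results.lst
select * from some_wide_table;
spool off
spool results.lst append
select * from another_table;
spool off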
If you want to see the results of one query in your spool file you could break the query up into multiple queries that returned specific ranges of the data. This would be slower, but you could cycle spooling to get faster feedback.
If your problem is that you can't open the output file (as the spool process has a lock on it) then try copying the file output to another file and opening that file instead.
Since it sounds like your real problem is formatting of output in SQLPlus -- can you make your SQLPlus window wider and SET LINESIZE so the output looks better in SQLPlus to start with? Then you might not need to spool at all.
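For example (the values are only illustrative):

set linesize 250
set pagesize 100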
I tried to add a comment but for some reason it doesn't save, so I'll try the "Answer your question" option :)
I do use SQLDeveloper, but there are situations where it is not available and I am stuck with plain old sqlplus.
There are other situations where I would use sqlplus over SQLDeveloper purely because it would take me half a minute to find what I am looking for in sqlplus, rather than several minutes with SQLDeveloper, which takes ages to load.
I have checked how long it takes before the output is flushed, and it looks like it is flushed after a certain number of rows. Isn't there a way to reduce the buffer so that the rows are flushed out quicker?
There is no problem opening the file; the problem is that even with the file open I can't see the output unless I issue the "spool off" command or the output has several hundred rows. I am using a free program called BareTail (http://www.baretail.com) to tail the spool file on Windows.
Thanks

Are there any programs that will shrink the size of a sql script file?

I have a SQL script which is extremely large (about 700 megabytes). I am wondering if there is a good way to reduce the size of the script?
I know there are code minimizers for JavaScript and am looking for one to use with SQL scripts.
I am not looking to improve the performance of the SQL script. I am trying to make the file size smaller: removing excess whitespace and keeping name qualification down so that the script file can be smaller.
If I attempt to load the file in SQL Server Management Studio I get this error.
Not enough storage is available to
process this command. (Exception from
HRESULT: 0x80070008) (mscorlib)
What's in this script of 700MB?! I would hope that there are some similarities/repetitions that would allow you to shorten the file.
Just some guesses:
Instead of inserting a million records using Insert statements, use a bulk loading tool (see the sketch after this list)
Instead of updating a number of individual records, try to batch updates to the same value into one (e.g. Update tab set col=1 where id in (..) instead of individual updates)
Long manipulations can be defined as a stored procedure (before running the script), and the script would only have to call the stored proc
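For the bulk-loading guess, the point is that one short command plus a separate data file replaces a huge block of Insert statements; a sketch in SQL Server syntax, with made-up table and file names:

BULK INSERT dbo.MyBigTable
FROM 'C:\data\mybigtable.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);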
Of course, splitting the script up into smaller portions and calling each one from a simple batch file would work too. But I'd be a little worried about performance (how long does the execution take?!) and would look for some faster ways.
What about breaking your script into several small files, and calling those files from a single master script?
This link describes how to do it from a stored procedure.
Or you can do it from a batch file like this:
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
REM Define widely-used variables up here so they can be changed in one place
REM Search for "sqlcmd.exe" and make sure this path is valid for you
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
set sqlcmd="C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqlcmd.exe"
set uname="your_uname_here"
set pwd="your_pwd_here"
set database="your_db_name_here"
set server="your_server_name_here"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script1.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script2.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script3.sql"
pause
I like the batch file approach myself, because it is easier to tinker with it, and you can schedule it as a windows job.
Make sure the .BAT file is in a folder with the appropriate security restrictions, since it contains your credentials in plain text.
gzip should do.
SQL is much harder to shrink; the field names, table names, and commands need to be what they are. Plus, you wouldn't want to just rewrite the commands as something shorter, because it could have implications for performance.
Depending on the DBMS that you use, it may allow short names for commands, and then there might be a converter.
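For the gzip suggestion, assuming the file is called script.sql:

gzip -c script.sql > script.sql.gz

The -c switch writes the compressed copy to stdout, so the original file is kept.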
(Answering this because it is the top item returned when I searched for "SQL script size")
I got the same error when trying to load a large script into Management Studio. In my case I was trying to downgrade a database from SQL 2008 R2 to SQL 2008 by using the SQL Server script generator, which created a 700MB structure-and-data .sql file.
To get around it I used the command line to run the script instead:
C:>sqlcmd -S [SQLSERVER\INSTANCE] -i [FILELOCATION\FILENAME].sql
Hopefully this helps someone else.
Compressing the SQL file will give the best compression ratio.
Minifying the text SQL file will only save a few bytes/kilobytes per megabyte, so it is not worth it.
The better approach is to create a "function" to unzip and read the file. That gives the biggest benefit, I guess.
Today, filesize shouldn't be a problem. Dial-up connection? Floppy disks?
pg-minify can do it, and not just for PostgreSQL but for most notations, including MS SQL, MySQL, etc.

Unable to update the table of SQL Server with BCP utility

We have a database table that we pre-populate with data as part of our deployment procedure. Since one of the columns is binary (it's a binary serialized object) we use BCP to copy the data into the table.
So far this has worked very well; however, today we tried this technique on a Windows Server 2008 machine for the first time and noticed that not all of the rows were being populated correctly. Out of the 31 rows that are normally inserted as part of this operation, only 2 actually had their binary column populated; the other 29 simply had null values in that column. This is the first time we've seen an issue like this, and it is the same .dat file that we use for all of our deployments.
Has anyone else ever encountered this issue before or have any insight as to what the issue could be?
Thanks in advance,
Jeremy
My guess is that you're using -c or -w to dump as text, and it's choking on a particular combination of characters it doesn't like and subbing in a NULL. This can also happen in Native mode if there's no format file. Try the following and see if it helps. (Obviously, you'll need to add the server and login switches yourself.)
bcp MyDatabase.dbo.MyTable format nul -f MyTable.fmt -n
bcp MyDatabase.dbo.MyTable out MyTable.dat -f MyTable.fmt -k -E -b 1000 -h "TABLOCK"
This'll dump the table data as straight binary with a format file, NULLs, and identity values to make absolutely sure everything lines up. In addition, it'll use batches of 1000 to optimize the data dump. Then, to insert it back:
bcp MySecondData.dbo.MyTable in MyTable.dat -f MyTable.fmt -n -b 1000
...which will use the format file, data file, and set batching to increase the speed a little. If you need more speed than that, you'll want to look at BULK INSERT, FirstRow/LastRow, and loading in parallel, but that's a bit beyond the scope of this question. :)