Unload statement - sql

I am new to programming and I am running into an issue. I am querying a table and need to put my results into a CSV file at a certain path.
This is what I am doing and the error I get:
dbuser@cbos1:/var/lib/dbspace/bosarc/testing/Abe_Lincoln> cd dbaccess labor32<<?
> UNLOAD TO '/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7'
> select * from informix.position;
> ?
-bash: cd: dbaccess: No such file or directory
dbuser@cbos1:/var/lib/dbspace/bosarc/testing/Abe_Lincoln>
The file path exists, but I keep getting this message.

Using just $ as the command line prompt, you should be using just:
$ dbaccess labor32 <<?
> UNLOAD TO '/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7'
> select * from informix.position;
> ?
…message(s) from dbaccess
$
This will run the dbaccess program (usually from $INFORMIXDIR/bin) against the database labor32, and generate an UNLOAD format file in the given file name.
The cd command is for changing directory; you don't have a directory called dbaccess (and probably shouldn't), and even if you did have such a directory, you shouldn't provide more options to the cd command, or a here document as standard input — it will ignore them.
Note that the file generated (Position7 will be the base name of the file) will be in Informix's UNLOAD format (pipe delimited fields by default), not CSV. It's certainly possible to convert between the two; I have Perl scripts that can do the conversions — last modified about a decade ago, but not much has changed in the interim. You could also consider using SQLCMD (available as open source from the IIUG Software Repository) which does have support for CSV load and unload formats. (This is the original SQLCMD — or at least an original SQLCMD — and is not Microsoft's Johnny-come-lately program of the same name.)
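For a rough idea of the difference, a naive conversion from UNLOAD format to comma-separated output can be sketched like this; it assumes the data contains no embedded pipe characters, commas or backslash escape sequences, so treat it as an illustration rather than a robust converter:
sed -e 's/|$//' -e 's/|/,/g' Position7 > Position7.csv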
Create a file unload-table.sh containing:
#!/bin/sh
dbaccess labor32 <<EOF
UNLOAD TO '/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7'
SELECT * FROM informix.position;
EOF
You can then run this as bash unload-table.sh, or make it executable and install it in your $HOME/bin directory (which is on your PATH, isn't it?) so that you can simply run unload-table.sh. Or you can arrange to 'compile' (copy) the file to unload-table (no .sh suffix) so that you don't have to type the suffix to execute it: unload-table.
You can enhance the script to allow the program (dbaccess), database (labor32), table (informix.position) and file (/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7) to be set as command line arguments or via environment variables. That requires a bit of fiddling in the script, but nothing outrageous. I'd probably allow the file name to be specified separately from the directory where the file is to be stored so that it is easier to configure on the command line.
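By way of illustration, a minimal sketch of such a parameterised wrapper might look like this, with positional arguments defaulting to the values from this question (the argument handling is an assumption of mine, not anything built into dbaccess):
#!/bin/sh
# unload-table.sh: unload one table from a database into an UNLOAD-format file.
# Usage: unload-table.sh [database] [table] [output-file]
DB="${1:-labor32}"
TABLE="${2:-informix.position}"
OUTFILE="${3:-/var/lib/dbspace/bosarc/Active_Sites/Cronos_test/Position7}"

dbaccess "$DB" <<EOF
UNLOAD TO '$OUTFILE'
SELECT * FROM $TABLE;
EOF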


How is this SQL loader script working?

I'm currently working with an Exasol database for the first time and came across a script that is responsible for running the SQL written in a .sql file.
Here is the script:
C:\Program Files\EXASOL\EXASolution\EXAplus\exaplusx64.exe -configDir EXASolutionConfig -profile profile_PROD_talend -q -f D:/Data/Customer/PROD/EXASolution_SQL/EXASOL_data_script.sql -- databaseName tableName /exasolution/StageArea/fileName.csv
I want to know how this script works and what it is actually doing. What I have understood so far is below.
First, "C:\Program Files\EXASOL\EXASolution\EXAplus\exaplusx64.exe" starts Exasol's command-line client (EXAplus) and then points it to the location of the .sql file.
What I am not getting:
1) What is this part doing: "-configDir EXASolutionConfig -profile profile_PROD_talend -q -f"?
2) What are these flags doing: "-q -f"?
3) After launching exaplusx64.exe, is Exasol going to connect to the database and table mentioned in the script? If so, how does the csv file play its role in this script? I mean, in the .sql file there is just an SQL statement; if it is taking data from the file, then how? I'm not getting this!
Please share your comments.
1) This is where you tell Exasol to read the profile profile_PROD_talend in the folder EXASolutionConfig and to execute the file D:/Data/Customer/PROD/EXASolution_SQL/EXASOL_data_script.sql in quiet mode (-q).
From the manual:
-configDir *This is not actually in the EXASOL manual, I assume it's the folder with the profiles, or maybe it does nothing*
-profile Name of connection profile defined in <configDir>/profiles.xml (profiles can be edited in the GUI version). You can use a profile instead of specifying all connection parameters.
-q Quiet mode which suppresses additional output from EXAplus.
-f Name of a text file containing a set of instructions that run and then stop EXAplus.
2) Quiet mode and flag for the name of the file.
3) When you run this command EXAPlus connects to the db using the information provided in the profile and it will execute the .sql file passed.
Now things become interesting: the -- allows you to pass arguments to the .sql file. So you are passing three parameters (databaseName, tableName, and /exasolution/StageArea/fileName.csv). If you open the SQL script you will find &1, &2, and &3; these are the placeholders for the parameters passed by your command.
From the manual again:
-- <args> SQL files can use arguments given over via the parameter "-- " by evaluating the variables &1, &2, etc.
For example, the file test.sql including the content
--test.sql
SELECT * FROM &1;
can be called in the following way:
exaplus -f test.sql -- dual
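Applied to your command, the file EXASOL_data_script.sql presumably uses &1, &2 and &3 in the same way. Purely as a hypothetical sketch (the real statement in your script may well differ, for example in the IMPORT options used):
-- EXASOL_data_script.sql (hypothetical)
-- &1 = databaseName (schema), &2 = tableName, &3 = CSV file in the stage area
IMPORT INTO &1.&2 FROM LOCAL CSV FILE '&3';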

IntelliJ: Dynamically updated file header

By default, IntelliJ Idea will insert (something like) the following as the header of a new source file:
/**
* Created by JohnDoe on 2016-04-27.
*/
The corresponding template is:
/**
* Created by ${USER} on ${DATE}.
*/
Is it possible to update this template so that it inserts the last date of modification when the file is changed? For example:
/**
* Created by JohnDoe on 2016-03-27.
* Last modified by JaneDoe on 2016-04-27
*/
It is not supported out of the box. I suggest you do not include information about the author and the last edit/creation time in the file at all.
The reason is that your version control system (Git, SVN) records the same information automatically. The manual labelling is just a duplicate of already existing info, only more error prone and in need of manual updates.
Here's a working solution similar to what I'm using. Tested on macOS.
Create a bash script which will replace the first occurrence of Last modified by JaneDoe on $DATE, but only if the exact value is not already contained in the file:
#!/bin/bash
FILE=src/java/test/Test.java
DATE=`date '+%Y-%m-%d'`
PREFIX="Last modified by JaneDoe on "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"
if ! grep -q "$SUBSTITUTE" "$FILE"; then
  sed -i '' "1,/$(echo "$STRING")/ s/$(echo "$STRING")/$(echo "$SUBSTITUTE")/" "$FILE"
fi
Install File Watchers plugin.
Create a file watcher with an appropriate scope (it may be this single file or any other scope, so that any change in the project's source code will update the modified date, version, etc.) and put the path to your bash script into the Program field.
Now every time the file changes, the date will be updated. If you want to update the date for each file separately, the argument $FilePath$ should be passed to the script.
This might have been just a comment to @oleg-mikhailov's excellent idea, but the code snippet won't fit, so basically I just tweaked his solution.
I needed a slightly different syntax, but that's not the issue. The issue was that when the script ran automatically upon file save using the File Watchers plugin, if it ran on a file which didn't include PREFIX, it would run over and over forever.
I presume the issue is with the plugin itself, as it didn't happen when run from the shell, but I'm not sure why it happened.
Anyway, I ended up running the following script (as I said, only a slight change with respect to the original). The new script also raises an error if the prefix doesn't exist. For me this is a feature, as PyCharm prompts me with the error and I can fix the file.
Tested with PyCharm 2021.2.3 on macOS 11.6.
#!/bin/bash
FILE=$1
DATE=`date '+%Y-%m-%d'`
PREFIX="last_modified_date: "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"
if ! grep -q "$SUBSTITUTE" "$FILE"; then
  if grep -q "$PREFIX" "$FILE"; then
    sed -i '' "s/$(echo "$STRING")/$(echo "$SUBSTITUTE")/" "$FILE"
  else
    echo "Error!"
    echo "'$PREFIX' doesn't appear in $FILE"
    exit 1
  fi
fi
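For a quick manual test from the shell, the script takes the file to update as its single argument, which is exactly what the File Watcher supplies via $FilePath$ (the script and file names here are just examples):
./update_modified_date.sh src/mypackage/module.py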
PhpStorm does not have a hook for launching a task after it detects a change in a file (only for uploading to a server). Code templates are applied when a file is created, not when it changes.
The behaviour you want (automatically changing a file after a manual change) can be useful for a lot of things, but it is a circular headache for an editor: if changing a file triggers a change to that file, does that change trigger yet another change?
However, you can perhaps "enable Live Templates" when you launch a "Reformat Code" action, which could rewrite your header template and thereby update the modification date.
Another solution is to use a tool such as Grunt, but I don't know whether it handles PHP files.

Want to run multiple SQL script files in one go within SQL*Plus

I have to run multiple SQL script files in one go.
Every time, I have to write commands in SQL*Plus like:
SQL>@d:\a.txt
SQL>@d:\a2.txt
SQL>@d:\a3.txt
SQL>@d:\a4.txt
Is there any way to put all the files in one folder and run all the script files in one go without missing any single file, like @d:\final.txt or @d:\final.bat?
There is no single SQL*Plus command to do that, but you can create a single script that calls all the others:
Put the following into a batch file
@echo off
echo.>"%~dp0all.sql"
for %%i in ("%~dp0"*.sql) do echo @"%%~fi" >> "%~dp0all.sql"
When you run that batch file it will create a new script named all.sql in the same directory where the batch file is located. It will look for all files with the extension .sql in the same directory where the batch file is located.
You can then run all scripts by using sqlplus user/pwd @all.sql (or extend the batch file to call sqlplus after creating the all.sql script).
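For instance, the batch file could be extended with one more line along these lines (user/pwd being placeholders for your actual credentials):
sqlplus user/pwd @"%~dp0all.sql"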
If you're using gnu linux, you could use process substitution:
sqlplus USERNAME/PASSWORD@DOMAIN < <(cat a.txt a2.txt a3.txt a4.txt)
# ... or a for loop on input files, inside the process substitution
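The for-loop variant mentioned in the comment could look something like this (file names taken from the question):
sqlplus USERNAME/PASSWORD@DOMAIN < <(for f in a.txt a2.txt a3.txt a4.txt; do cat "$f"; done)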
Alternatively, you can create a .pdc file and list your sql scripts:
-- pdc file
@a.txt;
@a2.txt;
@a3.txt;
@a4.txt;
and call SQL*Plus:
sqlplus USERNAME/PASSWORD@DOMAIN < my_scripts.pdc
Some tricks and commands can help you generate a master.sql file that you can then run from that location.
c:\directory_location> dir *.sql /-t /b > master.sql
Go to the parent directory and open master.sql in Notepad++.
Remove the master.sql line, then use a regular expression to replace
\n with \n@
Then from cmd:
C:\root_directory> sqlplus user/password @master.sql
I find this process very convenient if I have 30 to 40 scripts placed in a single directory.
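For illustration, after the dir listing and the regex replacement, master.sql would end up looking something like this (using the file names from the question):
@a.txt
@a2.txt
@a3.txt
@a4.txt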
Use a *.PDC extension file like this.
install.pdc file content:
whenever sqlerror exit sql.sqlcode
prompt started!
prompt 1.executing script 1
@@install/01.script_1.sql
prompt 2.executing script 2
@@install/02.script_2.sql
prompt 3.executing script 3
@@install/03.script_3.sql
prompt finished!
where @@install/ indicates the directory in which the SQL scripts are located
It might be worth the time to write a shell script that runs multiple files.
#!/bin/ksh
sqlplus user/password@instance <<EOF
@a.txt
@a1.txt
exit
EOF
For more on the syntax, look into Here Document
Here is a similar solution, but you do not have to iterate or have specially formatted SQL file names. You compose one SQL file and run it once.
cat table_animal.sql > /tmp/temp.sql
cat table_horse.sql >> /tmp/temp.sql
cat table_fish.sql >> /tmp/temp.sql
sqlplus USERNAME/PASSWORD@DOMAIN @/tmp/temp.sql
For Windows try
copy /b *.sql +x final.sql
sqlplus user/password @final.sql
Special Thanks to Joseph Torre
sqlplus login/password@server @filename
reference link

How to convert a few lines of batch script to a Perl script

I am new to Perl scripting. Can you please suggest how to convert a few lines of batch script into a Perl script?
My batch script is:
:: delete *.bak file from store_id Patel General
del "D:\Database\Patel General Store\*.bak"
:: change Directory
cd "C:\Program Files\7-Zip"
:: extract new data from drop box to database folder
"C:\Program Files\7-Zip"\7z.exe -o"D:\Database\Patel General Store\" e "D:\Dropbox\Database Backup\Patel General Store\*"
d
:: Rename new .bak data with storelocation_storeid_store_na
cd\
rename "D:\Database\Patel General Store\*.bak" MUM_099_Patel_General_Stores_30.bak
cd\
sqlcmd -S"ADMIN-PC\SQLEXPRESS" -E -Q "restore database MUM_099_Patel_General_Stores_30 from disk='D:\Database\Patel General Store\MUM_099_Patel_General_Stores_30.bak' with move 'Account180001' to 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\MUM_099_Patel_General_Stores_30.mdf',Move 'Account180001_LOG' to 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\MUM_099_Patel_General_Stores_30_Log.ldf',replace"
If a new store is added, then I need to copy all the lines and paste them one more time. It is all hard coded. Will you please suggest something so that I am free from this hard coding?
Use a system() call from Perl.
I.e. put the above commands in a batch file, say xyz.bat,
and from Perl use
system("xyz.bat");
presuming xyz.bat is in your Perl directory. Control will return to your Perl script after executing the commands.
If you do not want to return to the calling Perl script, use exec instead, as follows:
exec("xyz.bat");
Happy computing............
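If the goal is also to get rid of the hard coding mentioned in the question, the same steps can be driven from Perl directly, with the store details taken from the command line. This is only a rough sketch: the script name, argument handling, database naming convention and the simplified restore command below are assumptions, not a drop-in replacement for the original batch file.
#!/usr/bin/perl
# restore_store.pl -- hypothetical sketch: parameterise the store instead of hard coding it.
use strict;
use warnings;

my ($store_name, $store_id) = @ARGV;      # e.g. "Patel General Store" 099
die "usage: restore_store.pl <store name> <store id>\n" unless defined $store_id;

my $db_dir   = "D:\\Database\\$store_name";
my $drop_dir = "D:\\Dropbox\\Database Backup\\$store_name";
(my $short   = $store_name) =~ s/ /_/g;                # Patel_General_Store
my $db_name  = "MUM_${store_id}_${short}s_30";         # assumed naming convention

unlink glob qq{"$db_dir\\*.bak"};                      # del *.bak

# extract the new backup from the Dropbox folder
system('C:\Program Files\7-Zip\7z.exe', 'e', "-o$db_dir\\", "$drop_dir\\*") == 0
    or die "7z failed: $?";

# rename the extracted .bak file to the conventional name
my ($bak) = glob qq{"$db_dir\\*.bak"};
rename $bak, "$db_dir\\$db_name.bak" or die "rename failed: $!";

# restore it; the WITH MOVE clauses from the original would also need parameterising per store
system('sqlcmd', '-S', 'ADMIN-PC\SQLEXPRESS', '-E', '-Q',
       "restore database $db_name from disk='$db_dir\\$db_name.bak' with replace") == 0
    or die "sqlcmd failed: $?";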

OpenVMS - Add STRING to Filename via DCL

I have a number of files created by a program on our selling system that are produced in a format like the following:
CRY_SKI_14_EDI.LIS
CRY_SUM_14_EDI.LIS
THO_SKI_14_EDI.LIS
THO_LAK_14_EDI.LIS
CRY_SKI_IE_14_EDI.LIS
These files differ in numbers depending on the split of our product to different brandings. Is it possible to rename them all so that they read like the following:
CRY_SKI_14_EDI_DEMO.LIS
CRY_SUM_14_EDI_DEMO.LIS
THO_SKI_14_EDI_DEMO.LIS
THO_LAK_14_EDI_DEMO.LIS
CRY_SKI_IE_14_EDI_DEMO.LIS
I need the files to be correctly named prior to their FTP transfer, because a hardcoded file name could refer to a file that does not exist (when that brand is not on sale), which would terminate the FTP session and prevent the files following it from being transmitted to our FTP server.
The OpenVMS rename command is handier (imho) than the Windows or Unix variants, because it can bulk change chunks of the full file name, such as the 'name', 'type' or (sub)directory.
For example:
$ rename *.dat *.old
That's great, but it will not change text within the chunks (components), such as the name part requested here.
For that, the classic DCL approach is a quick loop, either parsing directory output (Yuck!) or using F$SEARCH. For example:
$loop:
$ file = f$search("*EDI.LIS")
$ if file .eqs. "" then exit
$ name = f$parse(file,,,"name","syntax_only") ! grab name component from full name
$ rename/log 'file' 'name'_demo ! rename 'fills in the blanks'
$ goto loop
Personally I use PERL one-liners for this kind of work.
(and I test with -le using 'print' instead of 'rename' first. :-)
$ perl -e "for (<*edi.lis>) { $old = $_; s/_edi/_edi_demo/; rename $old,$_}"
Enjoy!
Hein