I provide some hivevars when starting beeline, as follows:
beeline -u "jdbc:hive2://....;ssl=true" --hivevar SCHEMA_NAME=my_schema --hivevar SRC_SCHEMA=src_schema
So far, so good. But I want to provide the hivevars from a file instead of via --hivevar, e.g.:
$ cat hivevars.properties
SCHEMA_NAME=my_schema
SRC_SCHEMA=src_schema
...
Would this even be possible? So far, I haven't seen much literature on this, so any help would be great.
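One shell-level workaround (a minimal sketch, not a built-in beeline feature; it assumes the values in hivevars.properties contain no spaces) is to expand the file into --hivevar flags before invoking beeline:
# Turn each NAME=value line into a --hivevar argument
HIVEVARS=$(sed 's/^/--hivevar /' hivevars.properties | tr '\n' ' ')
beeline -u "jdbc:hive2://....;ssl=true" $HIVEVARS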
I was given this file:
hashes.txt
experthead:e10adc3949ba59abbe56e057f20f883e
interestec:25f9e794323b453885f5181f1b624d0b
ortspoon:d8578edf8458ce06fbc5bb76a58c5ca4
reallychel:5f4dcc3b5aa765d61d8327deb882cf99
simmson56:96e79218965eb72c92a549dd5a330112
bookma:25d55ad283aa400af464c76d713c07ad
popularkiya7:e99a18c428cb38d5f260853678922e03
eatingcake1994:fcea920f7412b5da7be0cf42b8c93759
heroanhart:7c6a180b36896a0a8c02787eeafb0e4c
edi_tesla89:6c569aabbf7775ef8fc570e228c16b98
liveltekah:3f230640b78d7e71ac5514e57935eb69
blikimore:917eb5e9d6d6bca820922a0c6f7cc28b
johnwick007:f6a0cb102c62879d397b12b62c092c06
flamesbria2001:9b3b269ad0a208090309f091b3aba9db
oranolio:16ced47d3fc931483e24933665cded6d
spuffyffet:1f5c5683982d7c3814d4d9e6d749b21e
moodie:8d763385e0476ae208f21bc63956f748
nabox:defebde7b6ab6f24d5824682a16c3ae4
bandalls:bdda5f03128bcbdfa78d8934529048cf
I thought I had to separate them, so I put the usernames (experthead, interestec, etc.) in one file named wordtext.txt and the hashes (e10adc3949ba59abbe56e057f20f883e, etc.) in another file called hash.txt.
I then ran this:
hashcat -m 0 -a 0 /Users/myname/Desktop/hash.txt /Users/myname/Desktop/wordtext.txt -O
but I couldn't get anything. Then I googled e10adc3949ba59abbe56e057f20f883e and found it corresponds to 123456, so now I don't know how to approach this problem.
Just leave the hashes in the txt file (erase the usernames); hashcat will sort them out by itself. What I do is: hashcat.exe -m 0 -a 0 hashFile.txt dict.txt --show
The file appears to be in username:hash format. By default, hashcat assumes that only hashes are in the target file.
You can change this behavior with hashcat's --username option.
You don't need to place the -O at the end. It should work perfectly without it, but you do need hashcat.exe in the beginning.
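Putting this together, a minimal sketch (hashes.txt from the question; wordlist.txt stands in for whatever dictionary you use) would be:
# Crack MD5 hashes in username:hash format
hashcat -m 0 -a 0 --username hashes.txt wordlist.txt
# Display cracked results, mapped back to usernames
hashcat -m 0 --username --show hashes.txt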
I want to figure out how to run multiple SQL files in one go. Suppose I have this test.sql file, which references file1.sql, file2.sql, file3.sql, and so on, along with some DML/DDL:
use database &{db};
use schema &{sc};
file1.sql;
file2.sql;
file3.sql;
create table snow_test1
(
name varchar
,add1 varchar
,id number
)
comment = 'this is snowsql testing table' ;
desc table snow_test1;
insert into snow_test1
values('prachi', 'testing', 1);
select * from snow_test1;
Here is what I run in PowerShell:
snowsql -c pp_conn -f ...\test.sql -D db=tbc -D sc=testing;
Is there any way to do this? I know it is possible in Oracle, but I want to do this using snowsql. Please guide me. Thanks in advance!
You can run multiple files in a single call:
snowsql -c pp_conn -f file1.sql -f file2.sql -f file3.sql -D db=tbc -D sc=testing;
You might need to put the additional DML statements in a file of their own.
I have tried referencing each .sql file with !source inside my test.sql file, and it works:
!source file1.sql;
!source file2.sql;
!source file3.sql;
....
I then ran the same command in PowerShell using the single test.sql file, and it works.
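For reference, the consolidated test.sql would then look something like this (a sketch reusing the question's own statements):
use database &{db};
use schema &{sc};
!source file1.sql;
!source file2.sql;
!source file3.sql;
-- ...followed by the create/desc/insert/select statements from above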
I'm looking for a way to store a relation into split folders in CSV format.
I'm launching the pig from a shell.
I looked on Stack Overflow but didn't find anything covering this case.
I'm using piggybank 0.14 and the latest MultiStorage Java code, in order to use multi-field selection.
If I use CSVExcelStorage to store the relation, I can split the output file in the shell, but I think doing so would lose the CSV format.
If I use MultiStorage to store the relation, I'm not able to format the output file as CSV.
So, is it possible to get CSVExcelStorage formatting together with a MultiStorage-style split, going from relation to relation?
Do you have any other suggestion?
Thanks,
In the end, I used a shell script to simulate MultiStorage with some filters and CSVExcelStorage.
sklt="file.pig.skeleton"
pig="file.pig"
# Start from the skeleton script
cp ${sklt} ${pig}
# Append one FILTER + STORE pair per value to simulate MultiStorage
for waza in $anOtherVar
do
    echo "R2 = FILTER R1 BY JEANNO IN ('${waza}');" >> ${pig}
    # \$myPath is escaped so Pig's parameter substitution (-p myPath=...) resolves it
    echo -e "STORE R2 INTO '\$myPath/${waza}' USING org.apache.pig.piggybank.storage.CSVExcelStorage(';');\n" >> ${pig}
done
pig -f ${pig} -p table=$anOtherVar -p myPath=/past/a/box/
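For anyone following along, each loop iteration appends a pair of lines like these to file.pig (shown for a hypothetical value 'foo'):
R2 = FILTER R1 BY JEANNO IN ('foo');
STORE R2 INTO '$myPath/foo' USING org.apache.pig.piggybank.storage.CSVExcelStorage(';');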
Hopefully this solution can help some other Pig addict...
I am using BCP to export data from a SQL Server 2008 R2 database named Health with a table named Patient. The output of the query should be saved in a text file, ApplicantsName.txt, located at C:\Users\meuser\Desktop.
After running the following query on the command prompt:
bcp "Select FirstName,LastName,PatientNumber from Health.dbo.Patient order by FirstName" queryout "C:\Users\meuser\Desktop ApplicantsName.txt" -C -T
It prompted me this:
Enter the file storage type of field FirstName [char]: varchar
and then this:
Enter prefix-length of field FirstName[2]:FirstName
I have been entering some values, but I think the best approach is to understand how it works. After some research on the internet, I know that the bcp utility is one of the fastest ways to export or import data between an instance and a file. I followed exactly the samples provided by MS here, but I think I need some practical explanation. Can someone guide me on how to go about this? A little explanation or relevant references will be appreciated too.
@one angry researcher's solution of adding '-C RAW' did not work in my particular case, but adding lower-case '-c' did. It performs the operation using a character data type.
For instance:
bcp mydb.mytable out c:/temp/data.txt -T -c
You need to add a value for the -C parameter (capital C!). If you do not know what you're using it for, you probably won't be needing it and can omit it.
Refer to the official documentation: http://msdn.microsoft.com/en-us/library/ms162802.aspx
Edit: you could, for example, use
bcp "Select FirstName,LastName,PatientNumber from Health.dbo.Patient order by FirstName" queryout "C:\Users\meuser\Desktop\ApplicantsName.txt" -C RAW -T
You will need to fix your output directory too (seems you forgot a backslash there).
Here's a sample bcp command with query and credentials (parameters):
bcp "SELECT * from yourtable" queryout c:\StockItemTransactionID_c.txt -c -Uusername -Pdbpassword -Sinstance -dYourDBName
Note: -U -P -S are case sensitive.
I have a long text file of redis commands that I need to execute using the redis command line interface:
e.g.
DEL 9012012
DEL 1212
DEL 12214314
etc.
I can't seem to figure out a way to enter the commands faster than one at a time. There are several hundred thousand lines, so I don't want to just pile them all into one DEL command; they also don't need to all run at once.
The following works for me with Redis 2.4.7 on Mac:
./redis-cli < temp.redisCmds
Does that satisfy your requirements? Or are you looking to see if there's a way to programmatically do it faster?
If you don't want to make a file, use echo -e so that the \n escapes are interpreted:
echo -e "DEL 9012012\nDEL 1212" | redis-cli
redis-cli --pipe can be used for mass insertion. It has been available since 2.6-RC4 and in Redis 2.4.14.
For example:
cat data.txt | redis-cli --pipe
More info: http://redis.io/topics/mass-insert
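For the DEL workload in the question, a minimal sketch (assuming a hypothetical keys.txt with one key per line; plain one-command-per-line input generally works here, though the mass-insert guide recommends raw protocol for maximum speed):
# Build one DEL command per key and feed them to redis-cli in bulk
awk '{print "DEL " $1}' keys.txt | redis-cli --pipe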
I know this is an old thread, but I'm adding this since it seems to have been missed among the other answers, and it works well for me.
Using a heredoc works well here if you don't want to use echo, explicitly add \n, or create a new file:
redis-cli <<EOF
select 15
get a
EOF
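For comparison, a one-off equivalent using redis-cli's -n flag to select database 15:
redis-cli -n 15 get a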