ODI | Decrypt a file using DES - Jython

I have a file encrypted using the DES algorithm. I want to decrypt the file in ODI (Oracle Data Integrator) by writing a procedure in Jython. I have explored the built-in functions, but I am unable to find one for decryption.
How can I do this?

You can use Python-based libraries:
pyDes: http://twhiteman.netfirms.com/des.html
pycrypto: https://www.dlitz.net/software/pycrypto/
See also: using DES/3DES with python
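pyDes is pure Python, so it can be called directly from a Jython procedure in ODI. A minimal sketch, assuming CBC mode with PKCS5 padding; the key, IV, and file names are placeholders and must match whatever encrypted the file:

    from pyDes import des, CBC, PAD_PKCS5

    key = "8bytekey"    # DES keys are exactly 8 bytes (placeholder)
    iv = "\0" * 8       # placeholder IV - must match the encryption IV
    cipher = des(key, CBC, iv, pad=None, padmode=PAD_PKCS5)

    f = open("encrypted.bin", "rb")
    ciphertext = f.read()
    f.close()

    f = open("decrypted.out", "wb")
    f.write(cipher.decrypt(ciphertext))
    f.close()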

Related

OpenVMS: Extracting an RMS indexed file to Windows as a sequential flat file

I haven't used OpenVMS for 20+ years. It was my first OS. I've been asked if it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No-one has experience or knowledge of the record structures etc.
The files are xyz.DAT and are relative files. I'm hoping the .DAT files are fixed length.
My first attempt was to try Datatrieve (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT/FDL = nnnn.FDL and changing the Relative organization to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS indexed file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but had lots of libraries to help.
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla.
Another plan is writing a C application to read a file and output it as strings, or DCL too; it doesn't have to be quick.
Any ideas?
As mentioned before, the simple solution MIGHT be to just use: $ TYPE/OUT=test.TXT test.DAT
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT /FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to windows.
You can also use an 'inline' FDL file to generate a 'unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in binary or ASCII mode with FTP.
MOSTLY - because this really only works well for a TEXT-ONLY source .DAT file.
There should be no CR, LF, FF or NUL characters in the source or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP then you will need to find a record definition somewhere which maps bytes to integers or floating points or dates as needed.
These definitions can be COBOL LIB files, or BASIC MAPs, and are often stored IN the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC DICTIONARIES.
To use such a definition you likely need a program that just reads following the 'map' and writes/prints as text. Normally that's not too hard - notably not when you can find an example program on the server to tweak.
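Once the file is on Windows, that byte-to-field mapping can also be done in a few lines of Python. A minimal sketch, assuming fixed-length records; the record length and struct format here are hypothetical placeholders that must come from the real record definition:

    import struct

    RECORD_LEN = 64     # hypothetical fixed record length
    LAYOUT = "<i60s"    # hypothetical map: 4-byte integer + 60 bytes of text

    fin = open("test.DAT", "rb")
    fout = open("test.TXT", "w")
    while True:
        rec = fin.read(RECORD_LEN)
        if len(rec) < RECORD_LEN:
            break
        code, text = struct.unpack(LAYOUT, rec)
        fout.write("%d %s\n" % (code, text.decode("ascii", "replace").rstrip()))
    fin.close()
    fout.close()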
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write and use F$EXTRACT to select the chunks you like.
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.

Replacing a word in a db2 sql file causes a "DSNC105I : End of file reached while reading the command" error

I have a dynamic sql file in which the value of TBCREATOR changes according to a given parameter.
I use a simple python script to replace TBCREATOR=<variable here> and write the result to an output sql file.
Calling this file using db2 -td# -vf <generated sql file> gives
DSNC105I : End of file reached while reading the command
Here is the file i need the TBCREATOR variable replaced:
CONNECT to 204.90.115.200:5040/DALLASC user *** using ****#
select REMARKS from sysibm.SYSCOLUMNS WHERE TBCREATOR='table' AND NAME='LCODE'
#
Here is the python script:
#!/usr/bin/python3
# ------ replace the placeholder table value with the schema name
fin = open("decrypt.sql", "rt")
# output file to write the result to
fout = open("decryptout.sql", "wt")
for line in fin:
    fout.write(line.replace('table', 'ZXP214'))
fin.close()
fout.close()
After decryptout.sql is generated I call it using db2 -td# -vf decryptout.sql
and get the error given above.
What's irritating is that I have another sql file containing exactly the same data as decryptout.sql, and it runs smoothly with the db2 -td# -vf ... command. I used the unix command cmp to compare the generated file with the one I wrote by hand (with the variable ZXP214 already replaced), and there are no differences. What is causing this error?
here is the file (that executes without error) I compare generated output with:
CONNECT to 204.90.115.200:5040/DALLASC user *** using ****#
select REMARKS from sysibm.SYSCOLUMNS WHERE TBCREATOR='ZXP214' AND NAME='LCODE'
#
I found that specifically on the https://ibmzxplore.influitive.com/ challenge, if you are using the java db2 command and working in the Zowe USS system (Unix System Services of z/OS), there is a conflict of character sets. I believe the system will generally create files in EBCDIC format, whereas if you do
echo "CONNECT ..." > syscat.clp
the resulting file will be tagged as ISO8859-1 and will not be processed properly by db2. Instead, go to the USS interface and choose "create file", give it a folder and a name, and it will create the file untagged. You can use
ls -T
to see the tags. Then edit the file to give it the commands you need, and db2 will interoperate with it properly. Because you are creating the file with python, you may be running into similar issues. When you open the new file, use something like
open(input_file_name, mode="w", encoding="cp1047")
This makes sure the file is opened as an EBCDIC file.
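Applied to the script above, that would look something like this (a sketch only; whether decrypt.sql itself also needs an explicit encoding depends on how that file is tagged):

    #!/usr/bin/python3
    # write the generated sql as EBCDIC (cp1047) so db2 reads it
    # in the expected character set
    fin = open("decrypt.sql", "rt")
    fout = open("decryptout.sql", "wt", encoding="cp1047")
    for line in fin:
        fout.write(line.replace('table', 'ZXP214'))
    fin.close()
    fout.close()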
If you are using the Db2-LUW CLP (command line processor) that is written in c/c++ and runs on windows/linux/unix, then your syntax for CONNECT is not valid.
Unfortunately your question is ambiguously tagged, so we cannot tell which Db2-server platform you actually use.
For Db2-LUW with the c/c++ written classic db2 command, the syntax for a type-1 CONNECT statement does not allow a connection-string (or partial connection string) as shown in your question. For the Db2-LUW db2 clp, the target database must be externally defined (i.e. not inside the script), either via the legacy actions of catalog tcpip node... combined with catalog database..., or it must be defined in the db2dsdriver.cfg configuration file as plain XML.
If you want to use connection-strings then you can use the clpplus tool which is available for some Db2-LUW client packages, and is present on currently supported Db2-LUW servers. This lets you use Oracle style scripting with Db2. Refer to the online documentation for details.
If you are not using the c/c++ classic db2 command, and are instead using the emulated clp written in java, available only with z/OS USS, then you must open a ticket with IBM support for that component, as that is not a matter for stackoverflow.

Change output file format to *.csv using dymosim.exe instead of *.mat

I am trying to understand if it's possible to change the model output format to .csv instead of the default .mat file when simulating a model using dymosim.exe.
I can do this in Dymola itself by using the function "convertMATtoCSV" in the DataFiles library, something like below:
DataFiles.convertMATtoCSV("output.mat", {"t"}, "output.csv");
Is there a way to do this conversion using dymosim.exe?
Kindly advise.
Thanks.
Note: "dymosim.exe -h" lists some options for .csv, but I am not sure how to use them.
No, it is currently not possible to have the dymosim.exe generated by Dymola write the result as a csv-file. The CSV options shown by dymosim.exe are only for running multiple simulations.
You can:
Generate a txt result instead, if that is easier to handle for you. (Set Simulation Setup > Output > Textual data format; this is stored as the last element of settings in dsin.txt.)
Perform the conversion using dymola\bin\alist.exe
Have the model write a csv-file as well
Set this up as a post-processing command in Dymola 2017 FD01.
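If the dymosim.exe run is already driven from a script, another option is to convert the .mat result yourself afterwards. A minimal sketch, assuming the third-party DyMat Python package (pip install DyMat); the result file and variable name are placeholders:

    import csv
    import DyMat  # third-party reader for Dymola/dymosim result files

    mat = DyMat.DyMatFile("dsres.mat")         # default dymosim result name
    var = "x"                                  # hypothetical variable name
    time = mat.abscissa(var, valuesOnly=True)  # time grid of that variable

    f = open("output.csv", "w")
    writer = csv.writer(f)
    writer.writerow(["time", var])
    writer.writerows(zip(time, mat.data(var)))
    f.close()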

Issue creating a language model for Sinhala using SRILM

I'm trying to create a Sinhala voice recognition system using pocketsphinx. I use the SRILM tool to create the language model. My source files for creating the language model are here. I'm using Cygwin on Windows 8.1 to run SRILM 1.7.1. But once I run the command
ngram-count -vocab sinhalalexicon.txt -text sinhalacorpus.Train -order 3 -write sinhala.count -unk
I'm getting
iconv: Invalid or incomplete multibyte or wide character
iconv: Invalid or incomplete multibyte or wide character
What did I do wrong here? The sinhalacorpus.Train file was created manually using Notepad++.
I found the solution to my issue. Once I converted the corpus and lexicon files to Unix format and changed the encoding to UTF-8 without BOM, it worked. I used Notepad++ to make the changes.
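The same conversion can also be scripted; a minimal sketch in Python 3, assuming the files were saved as UTF-8 with a BOM and Windows (CRLF) line endings:

    # rewrite as UTF-8 without BOM, with Unix (LF) line endings
    for name in ("sinhalacorpus.Train", "sinhalalexicon.txt"):
        f = open(name, "r", encoding="utf-8-sig", newline="")
        text = f.read()
        f.close()
        f = open(name, "w", encoding="utf-8", newline="\n")
        f.write(text.replace("\r\n", "\n"))
        f.close()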

BZip decompress

Is there a way to decompress a BZip2 compressed string in MS SQL? Other than using xp_cmdshell and running it through bzip2.exe?
I have a string like BZh41AY&SY3‹Ï¬€ !˜„]ÉáB#Î/>° (binary data rendered as text); it is simply 'test' compressed.
You could use SQL CLR to do this - utilizing a third-party library such as SharpZipLib (its BZip2InputStream class) to decompress the value. Note that the built-in GZipStream class handles gzip/DEFLATE streams, not BZip2.
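As a quick sanity check outside SQL Server, Python's built-in bz2 module can confirm the bytes decompress to 'test'; the file name is a placeholder for however you export the raw column value:

    import bz2

    f = open("value.bin", "rb")   # raw bytes of the column value (placeholder)
    data = f.read()
    f.close()
    print(bz2.decompress(data))   # should print b'test'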