Find the date/time a process was created in OpenVMS

I can't seem to find the date/time a process was created in OpenVMS V8.3-1H1.
The sh proc/all PROCESS_NAME command does show a Connect time, which at first I thought was the time since the process was created, but after a few tests I discovered that it is not.
I have searched the PDF documents I have from HP, the Ask the Wizard archive, and openvms.org, but to no avail.

The HP OpenVMS DCL Dictionary, under f$getjpi, describes the item code LOGINTIM as "String, Process creation time". So try
$ write sys$output f$getjpi(0,"LOGINTIM")
and see if that's what you want/need.
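If you need the creation time of a process other than your own, a minimal sketch along these lines may work; MYPROC is a hypothetical process name, and you may need GROUP or WORLD privilege to examine processes you don't own:
$ ctx = ""
$ junk = f$context("PROCESS", ctx, "PRCNAM", "MYPROC", "EQL")    ! hypothetical process name
$ pid = f$pid(ctx)                                               ! first matching PID
$ if pid .nes. "" then write sys$output f$getjpi(pid, "LOGINTIM")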

Related

OpenVMS: Extracting an RMS indexed file to Windows as a sequential flat file

I haven't used OpenVMS for 20+ years. It was my first OS. I've been asked if it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No one has experience or knowledge of the record structures etc.
The files are xyz.DAT and are relative files. I'm hoping the dat files are fixed length.
My first attempt was to try to use Datatrieve (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT/FDL=nnnn.FDL and changing the organization from Relative to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS index file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but had lots of libraries to help...
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla...
Another plan is writing a C application to read a file and output it as strings... or DCL too... it doesn't have to be quick...
Any ideas?
As mentioned before:
The simple solution MIGHT be to just use: $ TYPE/OUT=test.TXT test.DAT.
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT /FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to windows.
You can also use an 'inline' FDL file to generate a 'unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in binary or ASCII mode with FTP.
MOSTLY, because this really only works well for a TEXT-ONLY source .DAT file.
There should be no CR, LF, FF or NUL characters in the source or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP, then you will need to find a record definition somewhere which maps bytes to integers, floating points, or dates as needed.
These definitions can be COBOL LIB files or BASIC MAPs, and are often stored IN the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC dictionaries.
To use such a definition you likely need a program that just reads the records following the 'map' and writes/prints them as text. Normally that's not too hard - notably not when you can find an example program on the server to tweak.
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write, using F$EXTRACT to select the chunks you like, as in the sketch below.
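For example, a rough sketch of such a loop; the file names and byte offsets are hypothetical, so adjust them to your record layout:
$ open /read  src  test.DAT
$ open /write dst  test.TXT
$ loop:
$   read /end_of_file=done src rec
$   name = f$extract(0, 20, rec)    ! hypothetical text field, bytes 0-19
$   code = f$extract(32, 8, rec)    ! hypothetical text field, bytes 32-39
$   write dst name, " ", code
$   goto loop
$ done:
$ close src
$ close dst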
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.

Finite state machine visualizer

I need an application that prints/visualizes input/output pairs during FST runs. I mean, for each state of the FST, it needs to print out a tuple that contains the input for that state and the output of the state. Right now I can generate FST files that are compatible with the foma, HFST and xfst FST tools. So I guess the visualization tool I need should just be compatible with any of them. Does anyone know of such a tool?
foma can produce dot format files that can be visualized by graphviz. On Debian/Ubuntu, install graphviz with
$ sudo apt-get install graphviz
foma can read att format files (produced with hfst-fst2txt for anything HFST can read, or lt-print for anything from lttoolbox); assuming you've got such a file named myfst.att, you can do
$ foma
foma[0]: read att myfst.att
foma[1]: view
to display the full FST. That will show each input/output pair on each edge between states of the FST.
But you say "during runs" – are you talking about also showing the queue of "live states"? If so, I don't know of a tool that does this, that would be nice! One thing you could do is to modify the HFST source to output the list of live states and string vectors as it's processing, and then combine that with the dot file to e.g. colour in the live states. (If so, you may want to take this to the #hfst channel on irc.freenode.net.)
There is also a script att2dot.py on https://ftyers.github.io/2017-%D0%9A%D0%9B_%D0%9C%D0%9A%D0%9B/hfst.html that can be used on the command line like
hfst-fst2txt chv.lexc.hfst | python3 att2dot.py | dot -Tpng -ochv.lexc.png
if you prefer something more scriptable. If you use HFST from its Python library, you might be able to get the "live states" for every part of an analysis more easily.

DATASET_CANT_CLOSE error number 32 "Broken Pipe"

I experienced an error in SAP ABAP which says DATASET_CANT_CLOSE with error number 32 (Broken Pipe). The question is: what triggers this kind of error?
As far as I know, this error was triggered by:
CLOSE DATASET dset
But I can't reproduce the error since I don't know what procedure does trigger this kind of error.
This is the code I use:
method GENERATE_TXT_FILE.
  DATA:
    lwa_data TYPE t_line,
    lv_param TYPE sxpgcolist-parameters.

  "Upload File to Server
*Open Dataset
  OPEN DATASET im_file_name FILTER 'dos2ux'
    FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

  CLEAR lwa_data.
  LOOP AT it_data INTO lwa_data.
    CATCH SYSTEM-EXCEPTIONS file_access_errors = 4
                            OTHERS = 8.
      TRANSFER lwa_data-lines TO im_file_name.
    ENDCATCH.
    IF sy-subrc <> 0.
      CLEAR lwa_data.
      EXIT.
    ENDIF.
    CLEAR lwa_data.
  ENDLOOP.

*Close Dataset
  CLOSE DATASET im_file_name.
As I have investigated through the background job log, it seems that the server which runs the background job had not yet been mapped to the text-file folder. The solution is to re-map the server to the text-file folder.
You are using the FILTER extension to OPEN DATASET - which can be a HUGE security issue as well as raise loads of portability issues unless you know what you're doing, but that's not what the question is about. From the documentation:
When the statement OPEN DATASET is executed, a process is started in the operating system for the specified statement. When the file is opened for reading, a channel (pipe) is linked with STDOUT of the process, from which the data is read during file reading. The file itself is linked with STDIN of the process. When the file is opened for writing, a channel (pipe) is linked to STDIN of the process, to which data is passed when writing. The output of the process is diverted to this file.
In your case, the filter command probably decided to bail out - see this answer among many. Why that happened is hard to investigate; you may have to go through various system logs to find out. If the problem really is some unmapped network folder, you could try switching to UNC paths.
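If the dos2ux conversion is not strictly required, a minimal sketch (reusing the question's variable names) avoids the pipe altogether and checks sy-subrc explicitly:
OPEN DATASET im_file_name FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc <> 0.
  " could not open the file - log or raise an error instead of transferring
  RETURN.
ENDIF.
LOOP AT it_data INTO lwa_data.
  TRANSFER lwa_data-lines TO im_file_name.
ENDLOOP.
CLOSE DATASET im_file_name.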

How to split SQL files on Mac OS X?

Is there any app for Mac to split SQL files, or even a script?
I have a large file which I have to upload to hosting that doesn't support files over 8 MB.
*I don't have SSH access
You can use this: http://www.ozerov.de/bigdump/
Or use this command to split the SQL file:
split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart-
The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the next part is the path you want to save the output to. Files will be saved as whatever filename you specify (e.g. “dbpart-”) with an alphabetical letter combination appended.
Now you should be able to import your files one at a time through phpMyAdmin without issue.
More info http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/
This tool should do the trick: MySQLDumpSplitter
It's free and open source.
Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax.
Full disclosure: I am a shareholder of the company that hosts this program.
The UploadDir feature in phpMyAdmin could help you, if you have FTP access and can modify your phpMyAdmin's configuration (or are allowed to install your own instance of phpMyAdmin).
http://docs.phpmyadmin.net/en/latest/config.html?highlight=uploaddir#cfg_UploadDir
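For reference, enabling it is usually a one-line setting in phpMyAdmin's config.inc.php (the directory name below is just an example):
$cfg['UploadDir'] = 'upload';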
You can split into working SQL statements with:
csplit -s -f db-part db.sql "/^# Dump of table/" "{99}"
This makes up to 99 files named 'db-part[nn]' from db.sql.
You can use "CREATE TABLE" or "INSERT INTO" instead of "# Dump of ..." as the split pattern.
Also: Avoid installing any programs or uploading your data into any online service. You don't know what will be done with your information!

How to get a list of files modified since date/revision in Accurev

I have created a workspace backed by a collaboration stream. The stream is updated regularly by team members. My goal is to take the modified files in a given path and put them into another repository (and to do this regularly).
The question is how to create a list of files which were modified since a given revision or date (I don't know which approach is best). A command-line solution is preferable.
Once I have the file list, I will create a script to automate taking the files from one place and putting them in another.
accurev hist -s Your_Stream -t "2013/05/16 01:00:00"-now -a -fl
You can run accurev stat -m -fx and then parse the resulting XML. The element elements will have a modTime attribute, which is the UNIX timestamp when the file was modified.
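For example, a small Python sketch along these lines could turn that into a file list; the location attribute name and the cutoff date are assumptions, while modTime is the attribute mentioned above:
# Minimal sketch: run "accurev stat -m -fx" and print files modified
# since a given cutoff. Assumes <element> nodes carry "location" and
# "modTime" attributes.
import subprocess
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

output = subprocess.run(["accurev", "stat", "-m", "-fx"],
                        capture_output=True, text=True, check=True).stdout
root = ET.fromstring(output)

cutoff = datetime(2013, 5, 16, 1, 0, tzinfo=timezone.utc).timestamp()
for element in root.iter("element"):
    mod_time = int(element.get("modTime", "0"))
    if mod_time >= cutoff:
        print(element.get("location"))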