InterSystems Cache routine to write process information to a file on the local system?

I am interested in creating a routine that would query the currently running cache processes and then write this information to a file. How could this be done in Cache 2008.2?

PERFMON might be what you're looking for. It's an app with its own UI, but you can also call its functions directly, as an API.
Check the Cache docs for "Cache Monitoring Guide". That will give you links to PERFMON docs, as well as docs for other system monitoring tools.
You might find something useful in the Class Reference, under packages %SYSTEM, %SYS, and %Monitor.
For some process info you might need to shell out to the OS. In that case, look into the $ZF function, which lets you invoke OS-level commands from within Cache.
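For example, a minimal ObjectScript sketch, assuming the $ZF(-1) callout is enabled on your instance (the command and output path are just placeholders):

 ; Run an OS command and redirect its output to a file (Unix-style example)
 SET cmd = "ps -ef > /tmp/cache_processes.txt"
 SET rc = $ZF(-1, cmd)    ; rc holds the OS-level return code
 WRITE "OS return code: ", rc, !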
Oh, and you might want to consider saving the process data within the Cache DB, rather than dumping it out to a file. That is, create a Persistent Class with Properties corresponding to each process attribute that you want to capture, then write code to create, populate, and save instances of that class, taking the data from PERFMON or whatever other source you choose.
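A rough sketch of what such a class might look like (class and property names are hypothetical, not anything PERFMON defines):

Class MyApp.ProcessSnapshot Extends %Persistent
{
Property Pid As %Integer;
Property ProcessName As %String;
Property CpuTime As %Integer;
Property CapturedAt As %TimeStamp;
}

and, to populate it from your collection routine:

 SET snap = ##class(MyApp.ProcessSnapshot).%New()
 SET snap.Pid = 12345                          ; values would come from PERFMON or another source
 SET snap.ProcessName = "CACHE"
 SET snap.CapturedAt = $ZDATETIME($HOROLOG, 3) ; ODBC timestamp format
 SET sc = snap.%Save()                         ; returns a %Status you should check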
If you do that you can use Cache SQL to generate whatever kind of report you need. (Cache will automatically generate a SQL Table corresponding to your Persistent Class.) Cache supports ODBC, so you can use an external tool like Crystal Reports or Access for that part.
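Once rows are saved, a report query could be as simple as the following (the table name assumes the hypothetical class sketched above):

SELECT Pid, ProcessName, CpuTime, CapturedAt
FROM MyApp.ProcessSnapshot
ORDER BY CapturedAt DESC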
Obviously that will be more work than just echoing data to a file, but some kind of structure will be needed if you're going to do anything interesting with the information.

Related

Using osquery to modify or kill processes, etc.

From what I read, osquery is used for querying/reading system information.
Does it by any chance have a facility to modify the system state, like killing a process or deleting a registry key?
I am using osqueryi commands like select * from users before diving in programmatically.
Generally not.
osquery itself aims not to change anything in the filesystem. The main distribution has no mechanisms that would do that. (Except, of course, its local state files.)
osquery extensions, however, can be written to do whatever the extension author desires. Further, osquery supports the idea of "writeable tables" which extensions may use to present a simpler interface.
Check out https://blog.trailofbits.com/2018/05/30/manage-your-fleets-firewalls-with-osquery/ for a writable table example.
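For the read-only side, a stock osqueryi session only inspects state, e.g. (the processes table is a standard one; the filter value is just an example):

osquery> SELECT pid, name, path FROM processes WHERE name = 'chrome';

There is no corresponding UPDATE or DELETE against these tables in the main distribution; changes come only via extensions/writable tables as described above.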

Best way to export data from other company's SAP

I need to extract some data from my client's SAP ECC (the SUIM -> Users by Complex Selection Criteria report, program RSUSR002).
Normally I give them a table of values, and they have to fill in some fields to extract what I need.
They have to make 63 different extractions from their SAP (with different object values, for example, but inside the same transaction, as you can see in the screenshot), and then send me all the extracted files.
Do you know if there is an automated way to extract that, so they don't have to make 63 extractions?
My biggest problem is that they make mistakes every time. There are a lot of fields to fill in.
Can I create a variant and send it to them? Is it possible to export my variant so they can import it without having to fill in 63 different sets of data?
Thank you.
When a task takes considerable effort from multiple people each year, it might be worth automating.
First you need to find out where that transaction gets its data from. If you spend some time analyzing and debugging the program behind the transaction, you will surely find which SELECTs on which database table(s) provide that data. If you are lucky, there might even be a function module for it.
Then you just need to write your own ABAP program which performs the same selections.
Now about the interesting part: How to get that data to you. There are several approaches here. The best one depends on your requirements and your technical infrastructure. Some possibilities are:
Let users run the program in the foreground, use the method cl_gui_frontend_services=>gui_download to save the data to a file on the user's PC, and ask them to send it to you via email (see the sketch after this list)
Run the program in background and save the file on the application server. Then ask your sysadmins how to get that file from their application server to you. The simplest way would be to just map a network fileserver so they all write to the same place, but there might be some organizational hurdles in the way which prevent that. (Our security people would call me crazy if I proposed to allow access to SMB shares from outside of our network, but your mileage may vary)
Have the program send the data to you directly via email. You can send emails from an SAP system using the function module SO_NEW_DOCUMENT_ATT_SEND_API1. This of course requires that the system was configured to be able to send emails (which you can do with transaction code SCOT). Again, security considerations apply. When it's PII or other confidential data, then you should not send it in an unencrypted email.
Use an RFC call to send the data to your own SAP system which aggregates the data
Use a webservice call to send the data to your own non-SAP system which aggregates the data
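For the first (foreground) option, a minimal ABAP sketch might look like this (the file path is a placeholder and error handling is omitted):

DATA lv_file TYPE string.
DATA lt_data TYPE TABLE OF string.

lv_file = 'C:\temp\extract.txt'.
" ... fill lt_data with the results of your own SELECTs ...

cl_gui_frontend_services=>gui_download(
  EXPORTING
    filename = lv_file
  CHANGING
    data_tab = lt_data ).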
You can create a recording in transaction SM35.
There you enter a tcode (SUIM), start recording, make some input in transaction SUIM and then press 'Execute'. Then you can go back to the recording (F3 multiple times) and the system will generate a table of commands (the structure is BDCDATA). You can delete the unnecessary parts (e.g. the BACK button click) and save it to use as a 'macro'. Then you can replay this recording and it will do exactly what you did.
It's also possible to export/import the recording to a text file, so you can explore its structure, write a VBA script to create such a recording from your parameters, and send it to the users. But keep in mind that blanks are meaningful.
It's a standard tool, so there's no coding in the system.
You can save the selection as a variant.
Fill in the selection criteria and press Save.
It can be reused.
You can also transport variants if they have a special name.

Does DirX provide a changelog to query modifications

I'm in a situation where I need to query modifications out of a DirX Directory Server (LDAP).
In more common products like OpenDS, Oracle DSEE, etc., there is usually some kind of changelog that can be queried, which gives you the sequence of modifications performed on that server.
Unfortunately, there is basically no information available online that helps me with this question.
Can anybody with some insight to DirX give some hints if DirX provides anything like this?
DirX doesn't provide the cn=changelog node/subtree that you're looking for.
DirX changelogs are written as LDIF change files. These files can simply be dumped to the filesystem for later use/processing, or as they are written you can invoke any application/script you like to do something with the LDIF data. For example, you can pipe the LDIF data to ldapmodify and send every change made in DirX out to another LDAP server in real time. You could pipe the data to a custom application or script that filters it for certain types of operations and writes the wanted info to a SQL DB, or to whatever output you want. There really aren't any limits here. You just need to read LDIF.
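For example, a replication-style sketch using standard OpenLDAP tooling (host, bind DN, password, and file path are placeholders):

ldapmodify -H ldap://other-ldap-server:389 -D "cn=admin,o=example" -w secret -f /var/dirx/changelog/changes.ldif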
The LDIF data can be written (and piped to your application/script) on change to handle real-time requirements, or on a scheduled basis for batch based processes.
BTW, I've seen implementations where the cn=changelog node (like you'd find on Oracle DSEE) is created in DirX using the LDIF changelog data. i.e. as the LDIF data is written on change, the data is piped to a script that creates the entries you expect under cn=changelog. Obviously this was done to provide more familiar changelog functionality for Oracle DSEE users.
Check whether DirX supports the persistent search control. If it does, this provides change notification, but not history like the UnboundID change log or the retro-changelog of DSEE.

File handling in ABAP

Can file operations, like creation of a file, be done in ABAP?
Yes it can be done.
You can code in ABAP by using 'open dataset' / 'transfer' / 'close dataset' statements to create files on the Application Server.
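A minimal sketch (the server-side path is just an example):

DATA lv_file TYPE string.
lv_file = '/tmp/example.txt'.

OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
TRANSFER 'Hello from ABAP' TO lv_file.
CLOSE DATASET lv_file.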
You can also create your file directly for a certain application, e.g. MS Excel.
Also, there are several function modules and classes that can simplify certain tasks, like gathering your report output or putting your file on the AS (such as 'GUI_UPLOAD' / 'GUI_DOWNLOAD' / 'WS_DOWNLOAD' / 'SAP_CONVERT_TO_CSV_FORMAT' / etc.) ...
Bear in mind that certain function modules were built for foreground tasks, so they won't work in background job scheduling ...
Yes, it's possible, as nict said before. You should start reading here - that's the official documentation, it covers pretty much everything, including working with files on both the application and the presentation server. It also explains how to use platform-independent filenames - always remember, someday you might encounter an application server running on OS/400 that will not let you write stuff to C:\Temp\MyExport.csv. One more hint: Be careful about the function modules nict mentioned, some of them are not safe to use when unicode content is involved. Always use the methods of class CL_GUI_FRONTEND_SERVICES to be on the safe side.
You can use CL_GUI_FRONTEND_SERVICES class or GUI_DOWNLOAD function. Here is a link
You may use the CL_GUI_FRONTEND_SERVICES class, but these services only work on the front end. Or you can use some function modules like GUI_DOWNLOAD, GUI_UPLOAD, etc.
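As a small front-end sketch, reading a file from the user's PC into an internal table (the path is a placeholder; error handling is omitted):

DATA lv_file  TYPE string.
DATA lt_lines TYPE TABLE OF string.

lv_file = 'C:\temp\input.txt'.

cl_gui_frontend_services=>gui_upload(
  EXPORTING
    filename = lv_file
  CHANGING
    data_tab = lt_lines ).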
We can create a flat file with tab-separated data entered into it.
That data corresponds to SAP tables and fields, where the tables are related to an application, say, material master.
We can then use the standard FMs to upload the data into the program's internal tables, followed by updating the database.
So, uploading flat-file data can be done.

How can I speed up batch processing job in Coldfusion?

Every once in a while I am fed a large data file that my client uploads and that needs to be processed through CFML. The problem is that if I put the processing on a CF page, then it runs into a timeout issue after 120 seconds. I was able to move the processing code to a CFC, where it seems not to have the timeout issue. However, sometime during the processing, it causes ColdFusion to crash and it has to be restarted. There are a number of database queries (5 or more, a mixture of updates and selects) required for each line (8,000+) of the file I go through, as well as other logic provided by me in the form of CFML.
My question is what would be the best way to go through this file. One caveat, I am not able to move the file to the database server and process it entirely with the DB. However, would it be more efficient to pass each line to a stored procedure that took care of everything? It would still be a lot of calls to the database, but nothing compared to what I have now. Also, what would be the best way to provide feedback to the user about how much of the file has been processed?
Edit:
I'm running CF 6.1
I just did a similar thing and use CF often for data parsing.
1) Maintain a file upload table (Parent table). For every file you upload you should be able to keep a list of each file and what status it is in (uploaded, processed, unprocessed)
2) Temp table to store all the rows of the data file. (child table) Import the entire data file into a temporary table. Attempting to do it all in memory will inevitably lead to some errors. Each row in this table will link to a file upload table entry above.
3) Maintain a processing status - For each row of the datafile you bring in, set a "process/unprocessed" tag. This way if it breaks, you can start from where you left off. As you run through each line, set it to be "processed".
4) Transaction - use cftransaction if possible to commit all of it at once, or at least one line at a time (with your 5 queries); see the sketch after this list. That way if something goes boom, you don't have one row of data that is half computed/processed/updated/tested.
5) Once you're done processing, set the file name entry in the table in step 1 to be "processed"
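A rough CFML sketch of steps 3 and 4 (table, column, and datasource names are placeholders):

<cfquery name="qRows" datasource="myDSN">
    SELECT row_id, row_data FROM file_rows WHERE processed = 0
</cfquery>
<cfloop query="qRows">
    <cftransaction>
        <!--- your 5 or so selects/updates for this row go here --->
        <cfquery datasource="myDSN">
            UPDATE file_rows SET processed = 1
            WHERE row_id = <cfqueryparam value="#qRows.row_id#" cfsqltype="cf_sql_integer">
        </cfquery>
    </cftransaction>
</cfloop>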
By using the approach above, if something fails, you can set it to start where it left off, or at least have a clearer path of where to start investigating, or worst case clean up in your data. You will have a clear way of displaying to the user the status of the current upload processing, where it's at, and where it left off if there was an error.
If you have any questions, let me know.
Other thoughts:
You can increase timeouts, give the VM more memory, put it in 64 bit but all of those will only increase the capacity of your system so much. It's a good idea to do these per call and do it in conjunction with the above.
Java has some neat file processing libraries that are available as CFCs. If you run into a lot of issues with speed, you can use one of those to read the file into a variable and then into the database.
If you are working with XML, do not use ColdFusion's XML parsing. It works well for smaller files but struggles when things get bigger. There are several CFCs out there (check RIAForge, etc.) that wrap some excellent Java libraries for parsing XML data. You can then create a cfquery manually if need be with this data.
It's hard to tell without more info, but from what you have said I'll throw out three ideas.
The first thing is that, with so many database operations, it's possible that you are generating too much debugging output. Make sure that the following settings are turned off under Debug Output Settings in the Administrator:
Enable Robust Exception Information
Enable AJAX Debug Log Window
Request Debugging Output
The second thing I would do is look at those DB queries and make sure they are optimized. Make sure selects are happening with indices, etc.
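For example, if your per-line lookups filter on a particular column, an index along these lines (hypothetical table and column names) can make a big difference:

CREATE INDEX ix_import_rows_status ON import_rows (processed, row_id);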
The third thing I would suspect is that the file hanging out in memory is probably suboptimal.
I would try looping through the file using file looping:
<cfloop file="#VARIABLES.filePath#" index="VARIABLES.line">
<!--- Code to go here --->
</cfloop>
Have you tried an event gateway? I believe those threads are not subject to the same timeout settings as page request threads.
SQL Server Integration Services (SSIS) is the recommended tool for complex ETL (Extract, Transform, and Load) work, which is what this sounds like. (It can be configured to access files on other servers.) The question might be, can you work up an interface between Cold Fusion and SSIS?
If you can, upgrade to CF8 and take advantage of cfloop file="", which will give you greater speed, and the file will not be put in memory (which is probably the cause of the crashing).
Depending on the situation you are encountering you could also use cfthread to speed up processing.
Currently, an event gateway is the only way to get around the timeout limits of an HTTP request cycle. CF does not have a way to process CF pages offline; that is, there is no command-line invocation (one of my biggest gripes about CF - very little offline processing).
Your best bet is to use an Event Gateway or rewrite your parsing logic in straight Java.
I had to do the same thing. Ben Nadel has written a bunch of great articles on using Java file I/O to read and write files more quickly, etc.
It really helped improve the performance of our CSV importing application.