How would you retrieve Cacti data remotely?

I have a Cacti instance that polls many servers. I have a separate analytics platform, and I need to get the data from Cacti into that platform. Has anybody done something like this? Is it possible to retrieve Cacti data remotely via web service calls or anything similar?

You could use rrdtool dump. Find where Cacti stores the RRD files, usually something like /var/lib/cacti/rra or /usr/share/cacti/rra.
For each graph there should be a graph_name.rrd file. Use the rrdtool dump command to convert these into XML, which can then be parsed and sent to your other platform:
rrdtool dump graph_name.rrd
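
As a minimal sketch (assuming rrdtool is on the PATH, and using a hypothetical file traffic.rrd), you could shell out to the dump command from Python and pull the stored values out of the XML:

import subprocess
import xml.etree.ElementTree as ET

# Dump the RRD to XML; "rrdtool dump" writes the XML to stdout.
xml_text = subprocess.check_output(["rrdtool", "dump", "traffic.rrd"], text=True)
root = ET.fromstring(xml_text)

# Each <rra> element holds a consolidation function (<cf>) and a
# <database> of <row> elements; every <v> inside a row is one
# datasource value, stored as a float or "NaN".
for rra in root.findall("rra"):
    cf = rra.findtext("cf").strip()
    for row in rra.find("database").iter("row"):
        values = [v.text.strip() for v in row.findall("v")]
        print(cf, values)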

Also verify that the correct data source has been created and that there are no mistakes in the graph template. You can also use the debug function at the top of the graph, which tells you whether it found the RRD database or not.


Does Informatica PowerCenter provide an API to access session logs?

Question: Does Informatica PowerCenter provide an API to access session logs? I believe the answer is no, but I wanted to throw it out to the forum to be sure.
Objective: I want to extract session logs, process them through Logstash, and perform reactive analytics periodically.
Alternative: The same problem could be solved with a Logstash input plugin for Informatica, but I did not find one of those either.
Usage: This will be used to determine common failure causes and to analyze session-level cache usage, throughput, and any performance bottlenecks.
You can call the Informatica web service's getSessionLog operation. Here's a sample blog post with details: http://www.kpipartners.com/blog/bid/157919/Accessing-Informatica-Web-Services-from-3rd-Party-Apps
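For illustration, here is a rough Python sketch using the zeep SOAP library. The WSDL address, operation signatures, and credentials below are assumptions to verify against your own Web Services Hub, not documented values:

from zeep import Client  # third-party SOAP client: pip install zeep

# Hypothetical WSDL address of the PowerCenter Web Services Hub.
wsdl = "http://infa-host:7333/wsh/services/BatchServices/DataIntegration?wsdl"
client = Client(wsdl)

# Log in first; the batch web services typically want the returned session
# id passed on later calls (often via a SOAP header; check the WSDL, e.g.
# by dumping it with "python -m zeep <wsdl-url>").
session_id = client.service.login(UserName="admin", Password="secret",
                                  RepositoryName="Repo")

# Fetch the session log; the operation and argument names here are guesses.
log = client.service.getSessionLog(FolderName="MyFolder",
                                   WorkflowName="wf_load",
                                   SessionName="s_my_session")
print(log)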
I suppose the correct answer is 'yes', since there is a command-line tool to convert log files to txt or even XML format.
The tool for session/workflow logs is called infacmd, with the 'getsessionlog' argument. You can look it up in the help section of your PowerCenter client, or here:
https://kb.informatica.com/proddocs/Product%20Documentation/5/IN_101_CommandReference_en.pdf
That has always been enough for my needs.
But there is more to look into: when you run this command-line tool (which is really a BAT file), a java.exe does the bulk of the processing in a subprocess. The JAR files used by that process could potentially be used directly by someone else, but I don't know whether that has been documented anywhere publicly.
Perhaps someone else knows the answer to that.

Merge two Endeca Servers (Endeca 3.1) into one, including their current data

Let me explain in more detail:
First: I'm running Endeca 3.1, so "Endeca Server" here refers to 3.0's Data Domain.
I'm required to use an Endeca Server that is already populated (it came with a demo VM I downloaded). All the information on it, including groups, attributes, and data, must be merged into our Endeca Server. (It could also be the other way around; I could merge my Endeca Server into this one.)
So far, I've tried the following:
1) Clone the Endeca Server
2) Use the putCollection sconfig operation to create a collection on it with the same name I have on mine.
3) Load configurations using the LoadCollection & LoadAttributes graphs from the OEID POC Template 3.1. I point to the new collection in the Configuration.xls file.
This is where I encounter an issue: the LoadAttributes graph gets a timeout message from the server's web service, and then the configuration WSDL becomes inaccessible for a while. I can't get beyond this point.
I've been able to load data into the collection, but I need to load the attributes first.
Thanks in advance for your replies.
Regards
There are a few techniques.
Have you tried exporting the data domain and then importing it?
You can use the endeca-cmd tools to export to a file and then import from that file. This would enable you to put two data stores onto one server, along these lines:
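As a rough sketch; the subcommand names, flags, and paths below are assumptions, so check endeca-cmd --help on your release for the real syntax:

# Export the data domain from the source server (names are placeholders)
endeca-cmd export-dd SourceDomain --offline-file /tmp/source_domain.export

# Import it under a new name on the target server
endeca-cmd import-dd MergedDomain --offline-file /tmp/source_domain.export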
If you want to combine two data stores into one, that is a different question.
The simplest approach in 3.1, if the data collections are small: extract them as CSV (via a data-table), convert to XLS, and add them via self-provisioning into separate collections within a single data store. If you are running in the VM, this is potentially the easiest approach.
This can also be done using Integrator.
You don't need to load the attributes unless you are using multi-value types. You can call against the conversation web service to extract data and then load it using 'bulk-load'. I would not worry too much about creating the attributes unless this becomes essential due to their type or complexity. If you cannot call against the conversation web service, then again extract as CSV and load using Integrator.

File handling in ABAP

Can file operations, like creation of a file, be done in ABAP?
Yes, it can be done.
You can use the OPEN DATASET / TRANSFER / CLOSE DATASET statements in ABAP to create files on the application server, for example:
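A minimal sketch (the file path is just an example):

DATA lv_file TYPE string VALUE '/tmp/demo_output.txt'.

" Create (or overwrite) a text file on the application server
OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
TRANSFER 'Hello from ABAP' TO lv_file.
CLOSE DATASET lv_file.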
You can also send your file directly to a certain application, e.g. MS Excel.
There are also several function modules and classes that can simplify certain tasks, like gathering your report output or putting your file on the AS (such as GUI_UPLOAD / GUI_DOWNLOAD / WS_DOWNLOAD / SAP_CONVERT_TO_CSV_FORMAT, etc.).
Bear in mind that certain function modules were built for foreground tasks, so they won't work in background job scheduling.
Yes, it's possible, as nict said before. You should start reading here - that's the official documentation; it covers pretty much everything, including working with files on both the application server and the presentation server. It also explains how to use platform-independent file names. Always remember: someday you might encounter an application server running on OS/400 that will not let you write stuff to C:\Temp\MyExport.csv. One more hint: be careful with the function modules nict mentioned; some of them are not safe to use when Unicode content is involved. Always use the methods of the class CL_GUI_FRONTEND_SERVICES to be on the safe side.
You can use the CL_GUI_FRONTEND_SERVICES class or the GUI_DOWNLOAD function module.
You may use the CL_GUI_FRONTEND_SERVICES class, but these services only work on the front end. Or you can use function modules like GUI_DOWNLOAD, GUI_UPLOAD, etc.
We can create a flat file with tab-separated data entered into it.
That data corresponds to SAP table fields, where the tables belong to an application, say, material master.
We can then use the standard function modules to upload the data into the program's internal tables, followed by updating the database.
So uploading flat-file data can be done, along the lines sketched below.
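A hedged sketch of that upload step (the file name and the row structure are made up for illustration):

TYPES: BEGIN OF ty_row,
         matnr TYPE matnr,
         maktx TYPE maktx,
       END OF ty_row.
DATA lt_rows TYPE TABLE OF ty_row.

" Read a tab-separated file from the front end into an internal table;
" HAS_FIELD_SEPARATOR = 'X' splits each line at tab characters.
CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename            = 'C:\temp\material_master.txt'
    filetype            = 'ASC'
    has_field_separator = 'X'
  TABLES
    data_tab            = lt_rows.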

InterSystems Cache routine to write process information to a file on the local system?

I am interested in creating a routine that would query the currently running Cache processes and then write this information to a file. How could this be done in Cache 2008.2?
PERFMON might be what you're looking for. It's an app with its own UI, but you can call its functions directly too, as an API.
Check the Cache docs for "Cache Monitoring Guide". That will give you links to PERFMON docs, as well as docs for other system monitoring tools.
You might find something useful in the Class Reference, under packages %SYSTEM, %SYS, and %Monitor.
For some process info you might need to shell out to the OS. In that case, look into the $ZF function, which lets you invoke OS-level commands from within Cache.
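For example (a sketch; the OS command and output path are placeholders):

 ; $ZF(-1) runs an OS command from ObjectScript and returns its exit status
 SET sc = $ZF(-1, "ps -ef > /tmp/cache_os_procs.txt")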
Oh, and you might want to consider saving the process data within the Cache DB, rather than dumping it out to a file. That is, create a Persistent Class with Properties corresponding to each process attribute that you want to capture, then write code to create, populate, and save instances of that class, taking the data from PERFMON or whatever other source you choose.
If you do that you can use Cache SQL to generate whatever kind of report you need. (Cache will automatically generate a SQL Table corresponding to your Persistent Class.) Cache supports ODBC, so you can use an external tool like Crystal Reports or Access for that part.
Obviously that will be more work than just echoing data to a file, but some kind of structure will be needed if you're going to do anything interesting with the information.
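A bare-bones sketch of such a persistent class (the class name and properties are invented for illustration; map them to whatever attributes you actually capture):

Class Monitor.ProcessSnapshot Extends %Persistent
{

/// Process ID for this snapshot row
Property Pid As %String;

/// Example counter taken from PERFMON or another source
Property GlobalRefs As %Integer;

/// When the snapshot was taken
Property CollectedAt As %TimeStamp;

}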

Shell Scripting and Intersystems Cache: Extracting Information?

I would like to be able to execute a script that pulls out the current Cache process information. Has anybody done much scripting with Cache? Is there an easier way to log the process information? The end result is that I would like to present this information in a way that lets me log it into Splunk.
You are trying to solve an easy problem the hard way. Just use the built-in SNMP provider.
The documentation for Cache 2008.2.6 contains a document called "Monitoring Cache Using SNMP".
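Once the Cache SNMP subagent is running, a shell one-liner like the following can be scheduled and its output fed to Splunk. The community string, host name, and the InterSystems enterprise OID subtree used here are assumptions to check against the ISC-CACHE MIB shipped with your instance:

# Walk the Cache SNMP subagent; pipe or redirect the output for Splunk
snmpwalk -v 2c -c public cache-host 1.3.6.1.4.1.16563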