VisualVM Reporting

I am performing a load test with JMeter on a web-based application, and I have to capture CPU utilization, memory utilization, and the number of threads. For this I am using VisualVM. Is there any way to get a report in XLS, CSV, or some other format that we can give to the customer?
Could you please help me with this? Otherwise, is there any other performance tool from which we can get CPU and memory utilization?
--
Thanks,
Raghu.ch,

You can use the Tracer plugin with various probes. Tracer can export data to CSV, HTML, or XML.

Using Java VisualVM 1.8, you can generate any of several formats, including CSV, from a snapshot.
Your CSV file will look something like this:
"Class Name - Live Objects";"Live Bytes [%]";"Live Bytes";"Live Objects"
"char[]";"24.76%";"237499352";"1472791"
"byte[]";"12.27%";"117657848";"80945"
...
For the specific data you mentioned, it looks like you will need to download one or more of the Tracer plugins that Tomas Hurka mentioned. You can do this from the Java VisualVM GUI via Tools -> Plugins.
After restarting the tool, you can save to various formats.
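If the customer needs XLS rather than CSV, the export is easy to convert with a short script. Below is a minimal sketch in Python, assuming the semicolon-delimited layout shown above and the openpyxl library; the file names are just examples:
# Convert a semicolon-delimited VisualVM export to an .xlsx workbook.
# "snapshot.csv" and "report.xlsx" are example names, not anything
# VisualVM mandates.
import csv
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
with open("snapshot.csv", newline="") as f:
    for row in csv.reader(f, delimiter=";"):
        ws.append(row)
wb.save("report.xlsx")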

Related

Bulk edit UrbanCode configuration?

I want to do some bulk search/edit operations on the scripts embedded in our UrbanCode components and applications, and possibly on the flowcharts and blueprints. Unfortunately a lot of this is stored in UrbanCode's own repository, where it can only be accessed through the browser GUI, and I can't do things like grep for common patterns across the whole set.
Is there any documented way to check out/check in, or at least download, a copy of an entire UCD environment as text files that I could analyze?
Thanks.
I think the closest documented way to get some of the things you are looking for is to export the application and search through the JSON file. Component processes, with all their steps, are included in the application export.
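If you want to grep that export programmatically, a small script can walk every string value in the JSON. A rough sketch in Python; the export file name and the search pattern are hypothetical:
# Walk an exported UrbanCode application JSON and report every
# string value matching a pattern (e.g., a script fragment).
import json
import re

PATTERN = re.compile(r"rm -rf")  # example pattern; substitute your own

def scan(node, path="$"):
    if isinstance(node, dict):
        for key, value in node.items():
            scan(value, f"{path}.{key}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            scan(value, f"{path}[{i}]")
    elif isinstance(node, str) and PATTERN.search(node):
        print(f"{path}: {node[:80]}")

with open("application-export.json") as f:
    scan(json.load(f))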

Is it possible to run an OpenRefine script in the background?

Can I trigger an OpenRefine script to run in the background without user interaction? Possibly use a Windows service to load an OpenRefine config file, or start the OpenRefine web server with parameters and save the output?
We parse various data sources from files and place the output into specific tables and fields in SQL Server. We have a very old application that creates these "match patterns" and would like to replace it with something more modern. Speed is important but not critical. We typically parse files with 5 to 1,000,000 lines.
I could be going in the wrong direction with OpenRefine; if so, please let me know. Our support team that creates these "match patterns" would be better served by a UI like OpenRefine than by writing Perl or Python scripts.
Thanks for your help.
OpenRefine has a set of libraries that let you automate an existing job. The following are available:
* two in Python here and here
* one in Ruby
* one in Node.js
These libraries need two inputs:
* a source file to be processed in OpenRefine
* the OpenRefine operations in JSON format
At RefinePro (disclaimer: I am the founder and CEO of RefinePro), we have written an extra wrapper that selects an OpenRefine project, extracts the JSON operations, starts the library, and saves the result. The newly created job can then be scheduled.
Please keep in mind that OpenRefine has very poor error handling, which limits its usage as an ETL platform.
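For illustration, here is a minimal sketch of that two-input flow in Python, talking to OpenRefine's HTTP command API with the requests library rather than one of the client libraries above. The host, file names, and project name are assumptions, and endpoint details vary between releases (newer versions also require a CSRF token):
# Drive a headless OpenRefine run over HTTP: create a project,
# apply a saved JSON operation history, export the result.
# Assumes OpenRefine is listening on localhost:3333.
import requests

BASE = "http://localhost:3333/command/core"

# 1. Create a project from the source file; OpenRefine answers with a
#    redirect whose URL carries the new project id.
with open("source.csv", "rb") as src:
    resp = requests.post(
        f"{BASE}/create-project-from-upload",
        files={"project-file": src},
        data={"project-name": "nightly-run"},
    )
project_id = resp.url.split("project=")[-1]

# 2. Apply the operation history recorded earlier in the UI
#    (Undo/Redo -> Extract) and saved as JSON.
with open("operations.json") as ops:
    requests.post(
        f"{BASE}/apply-operations",
        data={"project": project_id, "operations": ops.read()},
    )

# 3. Export the transformed rows and save the result.
result = requests.post(
    f"{BASE}/export-rows",
    data={"project": project_id, "format": "csv"},
)
with open("output.csv", "wb") as out:
    out.write(result.content)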

How would you retrieve Cacti data remotely

I have a Cacti instance that polls many servers. I have a different analytics platform, and I need to get the data from Cacti into this platform. Has anybody done something like this? Is it possible to retrieve Cacti data remotely via web service calls or anything similar?
You could use rrdtool dump. Find where Cacti stores the RRD files, usually somewhere like /var/lib/cacti/rra or /usr/share/cacti/rra.
For each graph there should be a graph_name.rrd. Use the rrdtool dump command to convert these into XML files, which can be parsed and sent to your other program:
rrdtool dump graph_name.rrd
Also verify that the correct data source is created and that there are no mistakes in the graph template. You can also use the debug function at the top of the graph, which tells you whether or not it found the RRD database.
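If your analytics platform is scriptable, something like the following sketch (Python, standard library only; the .rrd file name is an example) runs the dump and pulls the stored values out of the XML:
# Run `rrdtool dump` and parse its XML output; element names
# (rra, cf, row, v) follow rrdtool's dump format.
import subprocess
import xml.etree.ElementTree as ET

xml_text = subprocess.run(
    ["rrdtool", "dump", "graph_name.rrd"],
    capture_output=True, text=True, check=True,
).stdout

root = ET.fromstring(xml_text)
for rra in root.iter("rra"):
    cf = rra.findtext("cf")  # consolidation function, e.g. AVERAGE
    for row in rra.iter("row"):
        values = [v.text for v in row.findall("v")]
        print(cf, values)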

WSO2 Gadget Gen Tool

I have an external Hadoop cluster (CDH4) with Hive. I used the Gadget Gen tool (BAM 2.3.0) to create a simple table gadget, but no data is populated when I add the gadget to a dashboard using the URL supplied by the Gadget Gen tool.
Here are my data source settings from the Gadget Generator wizard:
jdbc:hive://x.x.x.x:10000/default
org.apache.hadoop.hive.jdbc.HiveDriver
I added the following jar files to make sure I had everything required for the JDBC connection and restarted wso2server:
hive-exec-0.10.0-cdh4.2.0.jar
hive-jdbc-0.10.0-cdh4.2.0.jar
hive-metastore-0.10.0-cdh4.2.0.jar
hive-service-0.10.0-cdh4.2.0.jar
libfb303-0.9.0.jar
commons-logging-1.0.4.jar
slf4j-api-1.6.4.jar
slf4j-log4j12-1.6.1.jar
hadoop-core-2.0.0-mr1-cdh4.2.0.jar
I see MapReduce jobs running on my cluster during steps 2 and 3 of the wizard (and the wizard shows me previews of the actual data), but I don't see any jobs submitted after the gadget is generated.
Any help appreciated.
The Gadget Gen tool is for RDBMS databases such as MySQL, H2, etc. You can't provide a Hive URL to the Gadget Gen tool and run it.
Generally in WSO2 BAM, Hive is used to summarize the collected data, which is stored in Cassandra, and to write the summarized final result to an RDBMS database. Then, from the Gadget Gen tool, the gadget XMLs are created by pointing to the RDBMS database where the final result is stored.
You can find more information in the WSO2 BAM 2.3.0 documentation: http://docs.wso2.org/wiki/display/BAM230/Gadget+Generation+Tool
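For example, if the summarized results were written to a MySQL database, the data source settings in the wizard would look more like this (host, port, and database name are placeholders) instead of the Hive URL above:
jdbc:mysql://x.x.x.x:3306/bam_summary_db
com.mysql.jdbc.Driver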
Make sure the URL generated for the location of the gadget XML has the correct IP/host name. Check whether the gadget XML is actually located at the registry location in the generated URL. You do not have to worry about the Hive/Hadoop/Cassandra side, as it is not relevant to the gadget; only the RDBMS (H2 by default) data matters. Hopefully your problem will be resolved once the gadget location is corrected.

Opening Excel files from SSIS package

How many Excel files can an SSIS package open and insert from, if each of my Excel files is less than 500 KB?
It was tested here with an XML connection, and there is no known limitation. In other words, the limitation depends on the computer's resources.
As far as I know, 500 KB files can be easily handled by SSIS packages if the machine running the packages meets the minimum requirements listed by Microsoft.
Please take a look at my answer at this link, where tab-delimited files are loaded into SQL Server using SSIS. The .txt files used in the example were 41 MB in size and contained a million rows. The configuration of the machine used for testing is provided in the answer. That should give an idea of SSIS's capability for handling large files.
Hope that helps.