How do I get my CSV file? - testing

I have made the following changes in the jmeter.properties file:
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.assertion_results_failure_message=true
jmeter.save.saveservice.default_delimiter=|
But I still cannot find where my .csv file is.
Can anyone please help me?

Please see the first answers to these posts:
How to save JMeter Aggregate Report results to a CSV file using command prompt?
How do I save my Apache JMeter results to a CSV file?
In addition to the configuration you made in jmeter.properties:
1) GUI:
2) CLI:
jmeter -n -t test.jmx -l test.csv
In test.csv you'll get the results in CSV format.
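If you prefer not to edit jmeter.properties globally, the same save-service properties can also be overridden per run with -J flags (just a sketch; test.jmx again stands for your test plan, and the quoting of the delimiter is shell-specific):
jmeter -Jjmeter.save.saveservice.output_format=csv -Jjmeter.save.saveservice.default_delimiter='|' -n -t test.jmx -l test.csv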

Related

Run Apache JMeter through the command line and generate a "View Results Tree" file

I'm running Apache JMeter 3.3 on CentOS from the command line and generating a ".jtl" Summary Report file using the following command:
./jmeter -n -t requests.jmx -l log.jtl
Can I generate a file and view the result tree by importing that file into the Apache JMeter GUI? If yes, then how?
To do that, just add a View Results Tree listener to your test plan and fill in the Filename field:
Ensure you check the fields you want by clicking on "Configure":
Note that the more you save, the more you impact the performance of JMeter.
You can run your test as:
./jmeter -Jjmeter.save.saveservice.output_format=xml -Jjmeter.save.saveservice.response_data=true -Jjmeter.save.saveservice.samplerData=true -Jjmeter.save.saveservice.requestHeaders=true -Jjmeter.save.saveservice.url=true -Jjmeter.save.saveservice.responseHeaders=true -n -t requests.jmx -l log.jtl
Alternatively, you can add the following lines to the user.properties file (which lives in the "bin" folder of your JMeter installation):
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
jmeter.save.saveservice.samplerData=true
jmeter.save.saveservice.requestHeaders=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.responseHeaders=true
This way JMeter will store the results so that they can be examined in the View Results Tree listener.
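To quickly check that the results file was actually written as XML before opening it in the GUI, peek at its first lines (a minimal check; the exact header may vary slightly between JMeter versions):
head -n 2 log.jtl
It should start with an <?xml ...?> declaration followed by a <testResults> root element.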
References:
Configuring JMeter
Results file configuration
Apache JMeter Properties Customization Guide

Valgrind: log to both xml.log and text.log at the same time

I wanted to log both the XML and text output of a Valgrind Memcheck run.
I tried this command:
valgrind --tool=memcheck --xml=yes --log-file=TextLog.log --xml-file=XMLFile.log test
but only the XML file was written; the text file had no data.
There is no need to use the --tool option to set it to 'memcheck', since Valgrind uses 'memcheck' as the default tool (just for your information). That said, there is no harm in passing the --tool option either.
Try the command below to get the logs in both the XML file and the text log file:
valgrind --xml=yes --xml-file=XMLFile.log test > TextLog.log 2>&1
Is this what you were expecting?
More information can be found at the link below:
http://valgrind.org/docs/manual/manual-core.html#manual-core.basicopts
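If the redirection approach still does not capture everything you need in the text log, an alternative (just a sketch; test stands for your program, as in the question) is to run Memcheck twice, once per output format:
valgrind --xml=yes --xml-file=XMLFile.log test
valgrind --log-file=TextLog.log test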

How to upload a file to the Pentaho User Console server?

I need to:
1) Let the user select a file from his local PC
2) Upload that file to the Pentaho server
3) Process the file using a Kettle transformation
I tried a CSV data source in Pentaho User Console (PUC) 5.0 but found no way to access it from a .ktr file uploaded to the PUC repository. I also tried to upload the CSV file to a folder and was still not able to access it from a .ktr file.
I think this requirement is valid:
Upload a CSV data file and a .ktr file to a PUC folder; the .ktr should be able to read the uploaded CSV file when it is executed from PUC.
Imagine a simple user with a CSV. Will he be able to upload the CSV file to a Linux host using WinSCP, FileZilla or another FTP tool?
We need to give our users an easy upload feature, so after several hours of research (in the Pentaho source code) without finding a single line of Pentaho documentation, I found this test:
https://github.com/pentaho/pentaho-platform/blob/master/extensions/src/test/java/org/pentaho/platform/plugin/services/importer/PlatformImporterTest.java, which showed me that a MIME type list should exist somewhere.
So after grepping for a few keywords across the Pentaho folders, I found this file:
/my_apps/pentaho-server-ce-7.1.0.0-12/pentaho-server/pentaho-solutions/system/ImportHandlerMimeTypeDefinitions.xml
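For reference, a recursive grep along these lines will locate it (the keyword "mimetype" is just a guess; adjust it as needed):
grep -ril "mimetype" /my_apps/pentaho-server-ce-7.1.0.0-12/pentaho-server/pentaho-solutions/system/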
With some intuition, I added this XML:
<ImportHandler class="org.pentaho.platform.plugin.services.importer.RepositoryFileImportFileHandler">
  <MimeTypeDefinitions>
    <MimeTypeDefinition mimeType="text/plain">
      <extension>csv</extension>
    </MimeTypeDefinition>
  </MimeTypeDefinitions>
</ImportHandler>
at the bottom of the file:
<tns:ImportHandlerMimeTypeDefinitions xmlns:tns="http://www.pentaho.com/schema/" .....
<ImportHandler ../>
<ImportHandler ../>
<!-- PUT CSV CONFIG HERE -->
</tns:ImportHandlerMimeTypeDefinitions>
Finally, I restarted my pentaho-server-ce-7.1.0.0-12 server and was able to upload my CSV file with these steps:
Go to http://localhost:8080/pentaho
Click on Browse Files
Select a folder
Click Upload (right side)
Select the CSV file and click OK
Reading this CSV file from the .ktr is still pending...
I hope this helps.

Scrapy not exporting to csv

I have just created a new Scrapy project after ages, and seem to be forgetting something. In any case, my spider runs great, but does not store the output to a CSV. Is there something that needs to go into the pipeline or settings files? I am using this command:
scrapy crawl ninfo -- set FEED_URI=myinfo.csv --set FEED_FORMAT=csv
Any help is appreciated, Thanks.
TM
Try with this command:
$ scrapy crawl ninfo -o myinfo.csv -t csv
See http://doc.scrapy.org/en/latest/intro/tutorial.html#storing-the-scraped-data (the only difference is that they use it to generate JSON data, but Scrapy also ships with a CSV exporter: http://doc.scrapy.org/en/latest/topics/feed-exports.html#topics-feed-format-csv).
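Alternatively, you can put the feed settings in your project's settings.py so you don't have to pass them on every run (a sketch, assuming the classic FEED_URI/FEED_FORMAT settings of the Scrapy version implied by the -t flag):
FEED_URI = 'myinfo.csv'
FEED_FORMAT = 'csv'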

Hadoop put command doing nothing!

I am running Cloudera's distribution of Hadoop and everything is working perfectly. The HDFS contains a large number of .seq files. I need to merge the contents of all the .seq files into one large .seq file. However, the getmerge command did nothing for me. I then used cat and piped the data of some .seq files into a local file. When I want to "put" this file into HDFS, it does nothing. No error message shows up, and no file is created.
I am able to "touchz" files in HDFS, and user permissions are not a problem here. The put command simply does not work. What am I doing wrong?
Write a job that merges all the sequence files into a single one. It's just the standard mapper and reducer with only one reduce task.
if the "hadoop" commands fails silently you should have a look at it.
Just type: 'which hadoop', this will give you the location of the "hadoop" executable. It is a shell script, just edit it and add logging to see what's going on.
If the hadoop bash script fails at the beginning it is no surprise that the hadoop dfs -put command does not work.
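As a concrete sketch of that debugging step (the path shown is only an example; yours may differ):
which hadoop
This prints the location of the wrapper script, for example /usr/bin/hadoop. Open that script in an editor and add the following line near the top; bash will then echo every command the wrapper runs, so you can see where it stops:
set -x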