JProfiler jpexport CSV output missing headings

I'm evaluating JProfiler to see if it can be used in an automated fashion. I'd like to run the tool in offline mode, save a snapshot, export the data, and parse it to see if there has been performance degradation since the last run.
I was able to use the jpexport command to export the TelemetryHeap view from a snapshot file to a CSV file (sample below). When I look at the CSV file, I see 10 to 13 columns of data but only 4 headings. Is there documentation that explains the output more fully?
Sample output:
"Time [s]","Committed size","Free size","Used size"
0.0,450,880,000,371,600,000,79,280,000
1.0,450,880,000,371,600,000,79,280,000
2.0,450,880,000,371,600,000,79,280,000
3.0,450,880,000,371,600,000,79,280,000
4.0,450,880,000,371,600,000,79,280,000
5.0,450,880,000,371,600,000,79,280,000
6.0,450,880,000,371,600,000,79,280,000
7.0,450,880,000,355,932,992,94,947,000
8.0,450,880,000,355,932,992,94,947,000
9.58,969,216,000,634,564,992,334,651,008
11.05,1,419,456,000,743,606,016,675,849,984
12.05,1,609,792,000,377,251,008,1,232,541,056
17.33,2,524,032,000,1,115,268,992,1,408,763,008
19.43,2,588,224,000,953,451,008,1,634,772,992
26.08,3,711,936,000,1,547,981,056,2,163,954,944
39.75,3,711,936,000,1,145,185,024,2,566,750,976
40.75,3,711,936,000,1,137,052,032,2,574,884,096
41.75,3,711,936,000,1,137,052,032,2,574,884,096
42.75,3,711,936,000,1,137,052,032,2,574,884,096
43.75,3,711,936,000,1,137,051,008,2,574,885,120
44.75,3,711,936,000,1,137,051,008,2,574,885,120
45.75,3,711,936,000,1,137,051,008,2,574,885,120
46.75,3,711,936,000,1,137,051,008,2,574,885,120
47.75,3,711,936,000,1,137,051,008,2,574,885,120
48.75,3,711,936,000,1,137,051,008,2,574,885,120
49.75,3,711,936,000,1,137,051,008,2,574,885,120
50.75,3,711,936,000,1,137,051,008,2,574,885,120
51.75,3,711,936,000,1,137,051,008,2,574,885,120
52.75,3,711,936,000,1,137,051,008,2,574,885,120
53.75,3,711,936,000,1,137,051,008,2,574,885,120
54.75,3,711,936,000,1,137,051,008,2,574,885,120
55.75,3,711,936,000,1,137,051,008,2,574,885,120
56.75,3,711,936,000,1,137,051,008,2,574,885,120
57.75,3,711,936,000,1,137,051,008,2,574,885,120
58.75,3,711,936,000,1,137,051,008,2,574,885,120
60.96,3,711,936,000,1,137,051,008,2,574,885,120
68.73,3,711,936,000,1,137,051,008,2,574,885,120
74.39,3,711,936,000,1,137,051,008,2,574,885,120
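Judging from the sample, the extra columns appear to be thousands separators inside the three memory values rather than additional metrics: on each row, "Free size" plus "Used size" comes out (approximately) equal to "Committed size". Below is a minimal Python sketch of how the fields could be regrouped under that assumption; the parse_heap_line helper is purely illustrative and not part of jpexport.

def parse_heap_line(line, tolerance=1000):
    """Reassemble a TelemetryHeap row whose numbers contain grouping commas."""
    fields = line.strip().split(",")
    time = float(fields[0])
    chunks = fields[1:]

    def join(parts):
        # continuation chunks of a grouped number are always exactly three digits
        if any(len(p) != 3 for p in parts[1:]):
            return None
        return int("".join(parts))

    n = len(chunks)
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            committed = join(chunks[:i])
            free = join(chunks[i:j])
            used = join(chunks[j:])
            if None in (committed, free, used):
                continue
            # keep the regrouping that is internally consistent
            if abs((free + used) - committed) <= tolerance:
                return time, committed, free, used
    raise ValueError("could not reassemble line: " + line)

print(parse_heap_line("12.05,1,609,792,000,377,251,008,1,232,541,056"))
# (12.05, 1609792000, 377251008, 1232541056)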

Related

Extract an attribute in GPKG

I am trying to extract rivers from OSM. I downloaded the waterway GPKG, which I believe has over 21 million entries (see link) and a file size of 19.9 GB.
I have tried using the Split Vector Layer tool in QGIS, but it would crash.
I was thinking of using GDAL's ogr2ogr, but I am having trouble putting the command line together.
I first isolated the MultiLineString features with the following command:
ogr2ogr -f gpkg water.gpkg waterway_EPSG4326.gpkg waterway_EPSG4326_line -nlt linestring
ogrinfo water.gpkg
INFO: Open of `water.gpkg' using driver `GPKG' successful.
1: waterway_EPSG4326_line (Line String)
I tried the following command, but it is not working.
ogr2ogr -f GPKG SELECT * FROM waterway_EPSG4326_line - where waterway="river" river.gpkg water.gpkg
Please let me know what is missing, or if there is an easier way to perform the task. I tried opening the file with the R sf package, but it would not load even after a long time.
Thanks
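For reference, a sketch of the attribute filter using ogr2ogr's -where option (the destination dataset comes before the source, and the layer name and field value are taken from the question):
ogr2ogr -f GPKG river.gpkg water.gpkg waterway_EPSG4326_line -where "waterway = 'river'"
or, equivalently, with an SQL query against the source layer:
ogr2ogr -f GPKG river.gpkg water.gpkg -sql "SELECT * FROM waterway_EPSG4326_line WHERE waterway = 'river'"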

Snowflake - Azure file upload - How can I partition the file if the size is more than 40 MB?

I have to upload data from a Snowflake table to Azure Blob storage using the COPY INTO command. The copy command I have works with the SINGLE = TRUE property, but I want to break the output into multiple files if the size exceeds 40 MB.
For example, there is a 100 MB table 'TEST' in Snowflake, and I want to upload its data to Azure Blob storage.
The COPY INTO command should create files in the format below:
TEST_1.csv (40MB)
TEST_2.csv (40MB)
TEST_3.csv (20MB)
-- COPY INTO command I am using
copy into @stage/test.csv from snowflake.test
  file_format = (format_name = PRW_CSV_FORMAT)
  header = true
  overwrite = true
  single = true
  max_file_size = 40000000
We cannot control the exact output size of file unloads, only the maximum file size. The number and size of the files are chosen for maximum performance, since Snowflake parallelizes the operation. If you want to control the number/size of the files, that would be a feature request. Otherwise, work out a process outside of Snowflake to combine the files afterward. For more details about unloading, please refer to the blog.
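For reference, a sketch of the unload with SINGLE set to FALSE so that Snowflake splits the output itself (stage and file format names are carried over from the question; MAX_FILE_SIZE is only an upper bound per file, not a guaranteed size, and Snowflake typically appends suffixes such as _0_0_0 to each output file):
copy into @stage/test from snowflake.test
  file_format = (format_name = PRW_CSV_FORMAT)
  header = true
  overwrite = true
  single = false
  max_file_size = 40000000;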

Using the schema update option in beam.io.WriteToBigQuery

I am loading a bunch of log files into BigQuery using Apache Beam on Dataflow. The file format can change over time as new columns are added to the files. I see there is a schema update option ALLOW_FIELD_ADDITION.
Anyone know how to use it? This is how my WriteToBQ step looks:
| 'write to bigquery' >> beam.io.WriteToBigQuery('project:datasetId.tableId', write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
I haven't actually tried this yet but digging into the documentation, it seems you are able to pass whatever configuration you like to the BigQuery Load Job using additional_bq_parameters. In this case it might look something like:
| 'write to bigquery' >> beam.io.WriteToBigQuery(
    'project:datasetId.tableId',
    write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
    additional_bq_parameters={
        'schemaUpdateOptions': [
            'ALLOW_FIELD_ADDITION',
            'ALLOW_FIELD_RELAXATION',
        ]
    })
Weirdly, this is actually in the Java SDK but doesn't seem to have made its way to the Python SDK.

Combine multiple CSV files based on time in Visual Basic

I have multiple CSV files with the same headers, all in the same folder. My goal is to combine the CSV files using Visual Basic (VB.NET) based on a certain time window.
I am using a data logger that logs every 15 minutes, so there is 1 CSV file for 15 minutes of logging, 2 CSV files for 30 minutes, and so on.
I want to automatically combine all of the CSV files for one day (from 14:00 on day 1 to 13:59 on day 2), selecting the files based on the time in their names.
For example :
Flowdata20170427220000.csv (27-04-2017 22:00:00)
Name,Attribute1,Attribute2,Attribute3
name1,111,abc,zzz
name2,222,def,yyy
Flowdata20170427221500.csv (27-04-2017 22:15:00)
Name,Attribute1,Attribute2,Attribute3
name3,333,ghi,xxx
name4,444,jkl,www
and so on until
Flowdata20170428214500.csv (28-04-2017 21:45:00)
Name,Attribute1,Attribute2,Attribute3
name5,555,mno,vvv
name6,666,pqr,uuu
and the final file is :
Flowdata20170427-20170428.csv
Name,Attribute1,Attribute2,Attribute3
name1,111,abc,zzz
name2,222,def,yyy
name3,333,ghi,xxx
name4,444,jkl,www
...,...,...,...
...,...,...,...
name5,555,mno,vvv
name6,666,pqr,uuu
Can you help me out, please? I have been searching on Google, but nothing has helped.
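For what it's worth, here is a minimal VB.NET sketch of one way to do this, assuming the filenames follow the FlowdataYYYYMMddHHmmss.csv pattern shown above; the folder path and the 14:00 window start are placeholders you would adjust.

Imports System.Globalization
Imports System.IO

Module CombineFlowdata
    Sub Main()
        Dim folder As String = "C:\logs"                      ' placeholder folder
        Dim windowStart As New DateTime(2017, 4, 27, 14, 0, 0)
        Dim windowEnd As DateTime = windowStart.AddDays(1)    ' 14:00 day 1 up to 13:59:59 day 2
        Dim outPath As String = Path.Combine(folder,
            String.Format("Flowdata{0:yyyyMMdd}-{1:yyyyMMdd}.csv", windowStart, windowEnd))

        Dim files As String() = Directory.GetFiles(folder, "Flowdata*.csv")
        Array.Sort(files)                                     ' timestamped names sort chronologically

        Dim headerWritten As Boolean = False
        Using writer As New StreamWriter(outPath)
            For Each csvPath As String In files
                ' pull the timestamp out of the file name
                Dim stamp As String = Path.GetFileNameWithoutExtension(csvPath).Substring("Flowdata".Length)
                Dim logged As DateTime
                If Not DateTime.TryParseExact(stamp, "yyyyMMddHHmmss",
                        CultureInfo.InvariantCulture, DateTimeStyles.None, logged) Then
                    Continue For                              ' skip files that don't match the pattern
                End If
                If logged < windowStart OrElse logged >= windowEnd Then Continue For

                Dim lines As String() = File.ReadAllLines(csvPath)
                If lines.Length = 0 Then Continue For
                If Not headerWritten Then
                    writer.WriteLine(lines(0))                ' keep the header from the first file only
                    headerWritten = True
                End If
                For i As Integer = 1 To lines.Length - 1      ' skip the header in every other file
                    writer.WriteLine(lines(i))
                Next
            Next
        End Using
    End Sub
End Module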

AMPL: How to print variable output using the NEOS Server when you can't include data and model commands in the command file?

I'm doing some optimization using a model whose number of constraints and variables exceeds the cap of the student version of AMPL, so I've found a web page [http://www.neos-server.org/neos/solvers/milp:Gurobi/AMPL.html] which can solve my type of model.
I've found, however, that when using a solver where you can provide a command file (which I assume is the same as a .run file), the NEOS Server documentation says you should see the documentation for the input file. I'm using AMPL input, which according to [http://www.neos-guide.org/content/FAQ#ampl_variables] should be able to print the decision variables using a command file that looks like this:
solve;
display _varname, _var;
The problem is that NEOS claims that you cannot add the
data datafile;
model modelfile;
commands to the .run file, with the result that the variables cannot be found.
Does anyone know of a way to work around this?
Thanks in advance!
EDIT: If anyone else has this problem (which, based on my Internet searches, I believe many people have): try removing any reset; command from the .run file!
You don't need to specify model or data commands in the script file submitted to NEOS. NEOS loads the model and data files automatically, solves the problem, and then executes the script (command file) you provide. For example, submitting the diet1.mod model, the diet1.dat data, and this trivial command file
display _varname, _var;
produces output which includes:
: _varname _var :=
1 "Buy['Quarter Pounder w/ Cheese']" 0
2 "Buy['McLean Deluxe w/ Cheese']" 0
3 "Buy['Big Mac']" 0
4 "Buy['Filet-O-Fish']" 0
5 "Buy['McGrilled Chicken']" 0
6 "Buy['Fries, small']" 0
7 "Buy['Sausage McMuffin']" 0
8 "Buy['1% Lowfat Milk']" 0
9 "Buy['Orange Juice']" 0
;
As you can see, this is the output from the display command.