In X3D we can describe things rather declaratively, for example stating that there is a box at a given position and size:
<Shape>
  <Box size='1 2 3'/>
  <Appearance>
    <Material/>
  </Appearance>
</Shape>
Is there any tool that can convert an X3D object to its triangular mesh representation?
Thanks
A) Install the InstantReality package from instantreality.org
B) Look for the 'aopt' command line tool (depends on your platform)
C) aopt -i foo.x3d -e foo.wrl (exports the rendergraph to a VRML/WRL file)
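For example, a minimal sketch of the whole conversion, assuming the Box shape above is wrapped in a minimal X3D scene and saved as box.x3d (file names are arbitrary):
cat > box.x3d <<'EOF'
<X3D>
  <Scene>
    <Shape>
      <Box size='1 2 3'/>
      <Appearance>
        <Material/>
      </Appearance>
    </Shape>
  </Scene>
</X3D>
EOF
aopt -i box.x3d -e box.wrl
The resulting box.wrl should contain the tessellated geometry of the box as a triangle mesh.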
I need an application that prints/visualizes input/output pairs during FST runs. I mean, for each state of the FST, it needs to print out a tuple that contains the input for that state and the output of that state. Right now I can generate FST files that are compatible with the foma, HFST, and xfst tools, so I guess the visualization tool I need should be compatible with any of them. Does anyone know of such a tool?
foma can produce dot format files that can be visualized by graphviz. On Debian/Ubuntu, install graphviz with
$ sudo apt-get install graphviz
foma can read att format files (produced with hfst-fst2txt for anything HFST can read, or lt-print for anything from lttoolbox); assuming you've got such a file named myfst.att, you can do
$ foma
foma[0]: read att myfst.att
foma[1]: view
to display the full FST. That will show each input/output pair on each edge between states of the FST.
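If you'd rather script this than use the interactive prompt, here is a sketch, assuming your foma build supports the -e flag for executing commands, -s for quitting, and redirection of print dot output:
foma -e "read att myfst.att" -e "print dot > myfst.dot" -s
dot -Tpng myfst.dot -o myfst.png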
But you say "during runs" – are you talking about also showing the queue of "live states"? If so, I don't know of a tool that does this, that would be nice! One thing you could do is to modify the HFST source to output the list of live states and string vectors as it's processing, and then combine that with the dot file to e.g. colour in the live states. (If so, you may want to take this to the #hfst channel on irc.freenode.net.)
There is also a script att2dot.py on https://ftyers.github.io/2017-%D0%9A%D0%9B_%D0%9C%D0%9A%D0%9B/hfst.html that can be used on the command line like
hfst-fst2txt chv.lexc.hfst | python3 att2dot.py | dot -Tpng -ochv.lexc.png
if you prefer something more scriptable. If you use that from the Python library of HFST, you might be able to get the "live states" for every part of an analysis more easily.
I have been using QGIS to display a map of the long-term precipitation average of the Netherlands. However, when QGIS opens the data, the map is shown upside down.
I noticed that the coordinates are displayed from 0 to 266 (lon) and -315 to 0 (lat). I figured that the latitude is projected upside down.
Instead of -315 to 0 it should be 0 to 315, and then the map should look fine. But I can't figure out how to invert this value.
The file is a NetCDF file. I opened the XML metadata QGIS made for me with EmEditor, and it did show the right coordinates (in lat/lon), so I think it has something to do with the way QGIS sets up the map or the way it converts the lat/lon to meters.
Has anybody encountered the same problem? Thank you in advance!
I'm pretty sure you can use the GDAL configuration option GDAL_NETCDF_BOTTOMUP=[YES/NO] to convert from NetCDF to a geotiff, and get the resulting raster correctly oriented north-up. Try using gdal_translate with the above option. See here for some more details.
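For example, a sketch of that conversion (file names are placeholders; with a multi-variable NetCDF you may have to select a subdataset first):
gdal_translate --config GDAL_NETCDF_BOTTOMUP NO input.nc output.tif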
Thanks to Micha (see the comments):
I was told to solve the problem using GDAL (Geospatial Data Abstraction Library), a toolset for inspecting and translating/processing geodata and its metadata. This was quite hard to understand, as I am relatively new to programming and to using powerful tools like GDAL.
To enter GDAL codes I used the OSGeo4W Shell, which comes with QGIS. The command that I used to flip my map was:
gdal_translate -of netCDF -co WRITE_BOTTOMUP=NO "my netcdf.nc" output.nc
(see also this short GDAL/netCDF introduction).
In R you can use the rotate function from the raster package:
library(raster)

workdir <- "your/working/dir"
setwd(workdir)
ncfname <- "adaptor.mars.internal-1563580591.3629887-31353-13-1b665d79-17ad-44a4-90ec-12c7e371994d.nc"
# the variables you want
dname <- c("v10", "u10")
# open one of them as a raster layer
datasetName <- dname[1]
r <- raster(ncfname, varname = datasetName)
# rotate from 0..360 to -180..180 degrees longitude
r2 <- rotate(r)
# write the result as a GeoTIFF
writeRaster(r2, "wind.tif", format = "GTiff")
I was using the Valgrind tool callgrind, together with KCachegrind, to profile a large project and was wondering if there is a way to make callgrind report the stats for all functions (not just the most expensive ones).
To be specific: when I visualized the call graph in KCachegrind, it included only those functions that are quite expensive, but I was wondering if there is a way to include all the functions from the project in the call graph. The command used for generating the profiling info is given below:
valgrind --dsymutil=yes --tool=callgrind $EXE
I am not sure if I have to give any options to valgrind, or maybe compile the application at a different optimization level. This might be something trivial, but I couldn't find a solution. Any pointers regarding this are highly appreciated.
Thanks!
It occurred to me yesterday: in KCachegrind's call graph there is a right-click menu in which you can set the threshold above which a node will be visualized.
There is also an option "no minimum"; however, it cannot be chosen. I think that's because if every function, no matter how trivial, took up a node, the graph might be too large to handle.
I just found that the script gprof2dot can handle this.
The script can convert the output of callgrind to dot, which can be visualized as a graph. The script has two relevant parameters:
-n PERCENTAGE, --node-thres=PERCENTAGE to eliminate nodes below this threshold [default: 0.5]. In order to visualize all nodes in the graph, you can set the parameter like -n0
-e PERCENTAGE, --edge-thres=PERCENTAGE to eliminate edges below this threshold [default: 0.1]. In order to visualize all edges in the graph, you can set the parameter like -e0
In order to generate the complete call graph you would use both of the options (-n0 and -e0).
I've tried this; however, the generated graph was so large that dot warned me: "graph is too large for cairo-renderer bitmaps. Scaling by 0.328976 to fit." You can set the output format to EPS, which can handle this. You can also adjust the parameters to suit your objective.
The command I am using is
valgrind --tool=callgrind --dump-instr=yes --collect-jumps=yes $EXE
and as far as I have seen it includes all the functions in the call graph.
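You can then open the result in KCachegrind; the output file name contains the process id (<pid> is a placeholder):
kcachegrind callgrind.out.<pid>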
Hope it helps.
I'm going to complete rengar's answer with information that will allow you to generate the complete call graph, as well as give an example of the full process.
You can use gprof2dot to show all functions in the call graph. The script can convert the output of callgrind to dot, which can be visualized as a graph. The script has two relevant parameters:
-n PERCENTAGE, --node-thres=PERCENTAGE to eliminate nodes below this threshold [default: 0.5]. In order to visualize all nodes in the graph you should set this parameter to -n0
-e PERCENTAGE, --edge-thres=PERCENTAGE to eliminate edges below this threshold [default: 0.1]. In order to visualize all edges in the graph you should set this parameter to -e0
In order to generate the complete call graph you would use both of the options: -n0 and -e0.
Example
Let's say that you have a callgrind output file called callgrind.out.1992. To generate a complete call graph you would use:
gprof2dot -n0 -e0 ./callgrind.out.1992 -f callgrind
To generate a PNG output image of the graph, you could run the following commands:
gprof2dot -n0 -e0 ./callgrind.out.1992 -f callgrind > out.dot
dot -Tpng out.dot -o out.png
Now you have an out.png image with the full graph.
Note the usage of the -f parameter to specify the profile format (callgrind in our case).
I use gprof2dot to profile my application:
./gprof2dot.py -f callgrind callgrind.out.x | dot -Tsvg -o output.svg
source
Even though it gives me a beautiful graphical profile, the name of each function in each box is very long and extends far beyond the screen, since the Boost library makes heavy use of templates. Just look at one of the function names:
std::reference_wrapper<boost::numeric::odeint::controlled_runge_kutta<boost::numeric::odeint::runge_kutta_dopri5, boost::numeric::odeint::default_error_checker<double, boost::numeric::odeint::range_algebra, boost::numeric::odeint::default_operations>, boost::numeric::odeint::initially_resizer, boost::numeric::odeint::explicit_error_stepper_fsal_tag> > std::ref<boost::numeric::odeint::controlled_runge_kutta<boost::numeric::odeint::runge_kutta_dopri5, boost::numeric::odeint::default_error_checker<double, boost::numeric::odeint::range_algebra, boost::numeric::odeint::default_operations>, boost::numeric::odeint::initially_resizer, boost::numeric::odeint::explicit_error_stepper_fsal_tag> >(boost::numeric::odeint::controlled_runge_kutta<boost::numeric::odeint::runge_kutta_dopri5, boost::numeric::odeint::default_error_checker<double, boost::numeric::odeint::range_algebra, boost::numeric::odeint::default_operations>, boost::numeric::odeint::initially_resizer, boost::numeric::odeint::explicit_error_stepper_fsal_tag>&)
Is there any way to strip out the namespaces, templates, and even the arguments of the functions to make them look smaller in the graph?
PS. The image is very big and I could not convert it into png. I do not know if you can download and open this 10MB image (link).
./gprof2dot.py has two related options:
-s, --strip strip function parameters, template parameters, and
const modifiers from demangled C++ function names
-w, --wrap wrap function names
I personally prefer -w as I can still tell templates apart.
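For instance, adding -s to the pipeline from the question (same file names as there) should give much shorter labels:
./gprof2dot.py -f callgrind -s callgrind.out.x | dot -Tsvg -o output.svg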
I would like to extract text from a portion (using coordinates) of PDF using Ghostscript.
Can anyone help me out?
Yes, with Ghostscript, you can extract text from PDFs. But no, it is not the best tool for the job. And no, you cannot do it in "portions" (parts of single pages). What you can do: extract the text of a certain range of pages only.
First: Ghostscript's txtwrite output device (not so good)
gs \
-dBATCH \
-dNOPAUSE \
-sDEVICE=txtwrite \
-dFirstPage=3 \
-dLastPage=5 \
-sOutputFile=- \
/path/to/your/pdf
This will output all text contained on pages 3-5 to stdout. If you want output to a text file, use
-sOutputFile=textfilename.txt
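For example, the same invocation as a one-liner, writing pages 3-5 of a placeholder input.pdf to a file:
gs -dBATCH -dNOPAUSE -sDEVICE=txtwrite -dFirstPage=3 -dLastPage=5 -sOutputFile=pages3-5.txt input.pdf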
gs Update:
Recent versions of Ghostscript have seen major improvements in the txtwrite device and bug fixes. See recent Ghostscript changelogs (search for txtwrite on that page) for details.
Second: Ghostscript's ps2ascii.ps PostScript utility (better)
This one requires you to download the latest version of the file ps2ascii.ps from the Ghostscript Git source code repository. You'd have to convert your PDF to PostScript, then run this command on the PS file:
gs \
-q \
-dNODISPLAY \
-P- \
-dSAFER \
-dDELAYBIND \
-dWRITESYSTEMDICT \
-dSIMPLE \
/path/to/ps2ascii.ps \
input.ps \
-c quit
If the -dSIMPLE parameter is not defined, each output line contains, besides the pure text content, some additional info about the fonts and font sizes used.
If you replace that parameter by -dCOMPLEX, you'll get additional info about colors and images used.
Read the comments inside ps2ascii.ps to learn more about this utility. It's not comfortable to use, but for me it worked in most cases where I needed it...
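A minimal end-to-end sketch, assuming ps2ascii.ps sits in the current directory (file names are placeholders; pdf2ps ships with Ghostscript):
pdf2ps input.pdf input.ps
gs -q -dNODISPLAY -P- -dSAFER -dDELAYBIND -dWRITESYSTEMDICT -dSIMPLE ps2ascii.ps input.ps -c quit > output.txt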
Third: XPDF's pdftotext CLI utility (more comfortable than Ghostscript)
A more comfortable way to do text extraction: use pdftotext (available for Windows as well as Linux/Unix or Mac OS X). This utility is based either on Poppler or on XPDF. This is a command you could try:
pdftotext \
-f 13 \
-l 17 \
-layout \
-opw supersecret \
-upw secret \
-eol unix \
-nopgbrk \
/path/to/your/pdf \
- | less
This will display the page range from 13 (first page) to 17 (last page) of a password-protected PDF (user password secret, owner password supersecret), preserving the layout, using the Unix EOL convention, without inserting page breaks between PDF pages, piped through less...
pdftotext -h displays all available commandline options.
Of course, both tools only work on the text parts of PDFs (if they have any). Oh, and mathematical formulas also won't work too well... ;-)
pdftotext Update:
Recent versions of Poppler's pdftotext now have options to extract "a portion (using coordinates) of PDF" pages, as the OP asked for. The parameters are:
-x <int> : top left corner's x-coordinate of crop area
-y <int> : top left corner's y-coordinate of crop area
-W <int> : crop area's width in pixels (defaults to 0)
-H <int> : crop area's height in pixels (defaults to 0)
Best used together with the -layout parameter.
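For example, a sketch that extracts only a rectangular region of page 3 (coordinates and file names are made-up placeholders):
pdftotext -f 3 -l 3 -layout -x 72 -y 72 -W 400 -H 200 input.pdf crop.txt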
Fourth: MuPDF's mutool draw command can also extract text
The cross-platform, open source MuPDF application (made by the same company that also develops Ghostscript) comes bundled with a command line tool, mutool. To extract text from a PDF with this tool, use:
mutool draw -F txt the.pdf
This will emit the extracted text to <stdout>. Use -o filename.txt to write it into a file.
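For example, combining both (the output file name is a placeholder):
mutool draw -F txt -o extracted.txt the.pdf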
Fifth: PDFLib's Text Extraction Toolkit (TET) (best of all... but it is PayWare)
TET, the Text Extraction Toolkit from the PDFlib family of products, can find the x-y-coordinates of text content in a PDF file (and much more). TET has a command line interface, and it's the most powerful of all the text extraction tools I'm aware of. (It can even handle ligatures...) Quote from their website:
Geometry
TET provides precise metrics for the text, such as the position on the page, glyph widths, and text direction. Specific areas on the page can be excluded or included in the text extraction, e.g. to ignore headers and footers or margins.
In my experience, while it does not sport the most straightforward CLI interface you can imagine, once you get used to it, it will do what it promises to do, for most PDFs you throw at it...
And there are even more options:
podofotxtextract (CLI tool) from the PoDoFo project (Open Source)
calibre (normally a GUI program to handle eBooks, Open Source) has a commandline option that can extract text from PDFs
AbiWord (a GUI word processor, Open Source) can import PDFs and save its files as .txt: abiword --to=txt --to-name=output.txt input.pdf
I'm not sure Ghostscript can accept coordinates, but you can convert the PDF to an image and send it to an OCR engine, either as a subimage cropped to the given coordinates or as the whole image along with the coordinates. Some OCR APIs accept a rectangle parameter to narrow the region for OCR.
Look at VietOCR for a working example; it uses Tesseract as its OCR engine and Ghostscript as the PDF-to-image converter.
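A rough command line sketch of that pipeline (page number, resolution, and file names are assumptions):
# render page 1 of the PDF to a 300 dpi PNG with Ghostscript
gs -dBATCH -dNOPAUSE -sDEVICE=png16m -r300 -dFirstPage=1 -dLastPage=1 -sOutputFile=page1.png input.pdf
# OCR the image with Tesseract; this writes page1.txt
tesseract page1.png page1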
Debenu Quick PDF Library can extract text from a defined area on a page. The SetTextExtractionArea function lets you specify the x and y coordinates and then you can also specify the width and height of the area.
Left = The horizontal coordinate of the left edge of the area
Top = The vertical coordinate of the top edge of the area
Width = The width of the area
Height = The height of the area
Then the GetPageText function can be called immediately after this to extract the text from that defined area.
Here's an example using C# (though the library is multi-platform and can be used with many different programming languages):
DPL.LoadFromFile(@"Sample.pdf", "");
DPL.SetOrigin(1); // Sets 0,0 coordinate position to top left of page, default is bottom left
DPL.SetTextExtractionArea(35, 35, 229, 30); // Left, Top, Width, Height
string ExtractedContent = DPL.GetPageText(8);
Console.WriteLine(ExtractedContent);
Using GetPageText it is also possible to return just the text located in that area or the text located in that area as well as information about the text's font such as name, color and size.