I have an .oni file in which the depth and RGB images are not aligned. I read in other questions that GetAlternativeViewPointCap() is useful in this case. However, it does not work for me.
I tried the following code:
if (depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT)) {
    depth.GetAlternativeViewPointCap().SetViewPoint(image);
}
In the xml file I tried (one of the several attempts)
....
<Recording file="file.oni" />
<Node type="Depth" >
    <Query>
        <Capabilities>
            <Capability>Alternative View</Capability>
        </Capabilities>
    </Query>
    <Configuration>
    </Configuration>
</Node>
....
Is it possible to use GetAlternativeViewPointCap() on already recorded files as well?
How should the XML file be configured?
Depending on the capabilities I add, I get the error: Open failed: The node is locked for changes!
Any idea?
Thanks!
If you load a recording file, you cannot change its capabilities, resolution, etc.
Configuration is only relevant when connecting to real hardware.
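For a live sensor, the usual approach looks something like the rough sketch below (assuming OpenNI 1.x; node creation is simplified and depends on your setup):
// Minimal sketch, assuming OpenNI 1.x with a live sensor attached.
// A recorded .oni cannot be re-registered this way.
#include <XnCppWrapper.h>

int main()
{
    xn::Context context;
    context.Init();

    xn::DepthGenerator depth;
    xn::ImageGenerator image;
    depth.Create(context);
    image.Create(context);

    // Align the depth map to the RGB camera's viewpoint, if the driver supports it.
    if (depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT)) {
        depth.GetAlternativeViewPointCap().SetViewPoint(image);
    }

    context.StartGeneratingAll();
    // ... read frames with context.WaitAndUpdateAll() ...
    context.Release();
    return 0;
}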
I've been reading that there is a way to create/reference an .FDF file (used for merging data with a PDF form) that has an embedded URL to the original PDF to be filled in, and that a specially crafted URL can be sent to the browser so that Acrobat (or a related program) is loaded and merges the FDF data with the PDF.
Presumably something like:
http://host/path/original.pdf#FDF=http://host/path/data.fdf
But I can't get that to work. Perhaps the path to the original PDF can be embedded within the .FDF file? I've heard there is something called an /F key that can be used. Can anyone give me an example of its use, and if a special MIME format is needed for the browser to recognize the file and perform the merge, what would it be?
Here's an example of an FDF file - where do I put the original PDF url?
%FDF-1.2
%▒▒▒▒
1 0 obj
<</FDF<</F(Stand Alone EE.pdf)/Fields[
<</T(date)/V(07/11/2018)>>
<</T(uname)/V(Jennifer Smith)>>
<</T(pctbefore)/V(35)>>
<</T(aftfee)/V(40)>>
<</T(newclient)/V(Yes)>>
<</T(current_dateplus90days)/V(10/11/2018)>>
<</T(im_url)/V(http://internal_usl_for_something_else)>>
]/ID[<A3715E58793D9B5B9A48E8B2E0E057FF><BCE39B7672548444B7E6606F1B14E048>]/UF(Stand Alone EE.pdf)>>/Type/Catalog>>
endobj
trailer
<</Root 1 0 R>>
%%EOF
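Presumably the /F key could hold a full URL to the original PDF instead of just a file name, something like the following (myhost and the field values are just placeholders):
%FDF-1.2
1 0 obj
<</FDF<</F(http://myhost/sample.pdf)/Fields[
<</T(date)/V(07/11/2018)>>
]>>/Type/Catalog>>
endobj
trailer
<</Root 1 0 R>>
%%EOF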
I have kind of given up on the .fdf format, since a newer version, .xfdf, is apparently available, but if either one can work, that would be great.
I have also tried this with the XFDF format as documented here:
https://forums.adobe.com/thread/425699
The problem I've run into with the XFDF version is that when I click on the .xfdf file, the browser wants to save it locally. I checked to make sure the MIME type is set on the web server, and it is. But if I save the .fdf and click on it, it opens Acrobat, which then tries to open a document in Firefox with the same URL, which then prompts me to open the document in Acrobat, and it keeps spawning a circle of windows in Mozilla. Any idea what's wrong?
Here's a sample .fdf
<?xml version="1.0" encoding="UTF-8"?>
<xfdf xmlns="http://ns.adobe.com/xfdf/" xml:space="preserve">
    <f href="http://myhost/sample.pdf" />
    <fields>
        <field name="date"><value>07/11/2018</value></field>
        <field name="atty"><value>Jennifer Smith</value></field>
        <field name="typeofclaim"><value>Automobile Collision</value></field>
    </fields>
</xfdf>
Anybody see anything wrong with the composition? What MIME type issues could I be having? My server is set to recognize .xfdf as "application/vnd.adobe.xfdf".
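(On Apache, for instance, that mapping is usually just a one-line directive, something like the following; adjust for your own server setup:)
AddType application/vnd.adobe.xfdf .xfdf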
I set up a shell script to output the MIME type "application/vnd.adobe.xfdf" and then dump the content of the .xfdf file above. The browser recognizes it as an Adobe app and I can open it with Acrobat, but then I get this error:
Xml parsing error: xml processing instruction not at start of external entity (error code 17) line 2 of file xxxxx.xfdf
Any idea what the error could be in the XML file? The parser says it's correct. Is it OK to terminate lines with \n?
One problem I found with the field parsing is that it's very picky about the content of the form data when it isn't encapsulated in some kind of structure (i.e. CDATA) that says "leave as is"; so if a form field's data contains an unbalanced parenthesis, for example, this can cause bizarre errors.
One of my fixes was to fully validate the form field data and make sure that, for example, every open parenthesis has a matching closing parenthesis. This at least fixed one of the errors.
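For example, since FDF values are parenthesis-delimited strings, an unbalanced parenthesis in a value would presumably need to be escaped with a backslash, along these lines (the field name and value are made up):
<</T(note)/V(see section 2\) of the agreement)>>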
The .XFDF format shown here works:
<?xml version="1.0" encoding="UTF-8"?>
<xfdf xmlns="http://ns.adobe.com/xfdf/" xml:space="preserve">
    <f href="http://myhost/sample.pdf" />
    <fields>
        <field name="date"><value>07/11/2018</value></field>
        <field name="atty"><value>Jennifer Smith</value></field>
        <field name="typeofclaim"><value>Automobile Collision</value></field>
    </fields>
</xfdf>
If I try to open a text file larger than 20MB, I get the message: File <path> is too large (21.97MB). Where could I relax this restriction?
Found by inspecting the IntelliJ source code:
you have to edit the property idea.max.intellisense.filesize in idea.properties, located in idea_home/bin.
The maximum file size to load = max(20 MB, <value of idea.max.intellisense.filesize>)
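For example, to raise the limit to roughly 50 MB (the value appears to be interpreted in kilobytes):
# idea_home/bin/idea.properties
idea.max.intellisense.filesize=50000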
Below is my xml input:
<projects>
    <project>
        <name>project1</name>
        <language>java</language>
    </project>
    <project>
        <name>project2</name>
        <language>mainframe</language>
    </project>
</projects>
I want to convert this .xml file to a .csv file using DataMapper, but unfortunately it doesn't work.
Can anyone send me a sample flow XML for that? It is very important for my project right now.
This is a very simple requirement.
You should select your sample XML file in the input section of the DataMapper and define a user-defined output of type CSV. Once you create the mapping, map the fields from the XML (input) to the CSV (output). The generated script will look like the one below.
//MEL
//START -> DO NOT REMOVE
output.__id = str2long(input.__id);
//END -> DO NOT REMOVE
output.name = input.name;
output.language = input.language;
You can now click the Preview button and run the preview with your sample XML file. Wasn't that easy? Try this and let me know if you run into any issues.
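With the sample XML above, the resulting CSV should look roughly like this (the exact header row depends on how you define the user-defined CSV output):
name,language
project1,java
project2,mainframe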
I'm getting the above error when I try uploading an auto-suggest XML file to Google's custom site search. I tried trimming the file down to the bare minimum to see if I could isolate the problem, but even the following won't upload:
<?xml version="1.0" encoding="utf-8"?>
<Autocompletions>
<Autocompletion term="My term" type="1" />
</Autocompletions>
Am I missing something blindingly obvious?
Kind regards,
Karl
It turns out, despite Google's documentation to the contrary, that the 'language' attribute is required even if its value is blank. I added that and the file was imported successfully.
<?xml version="1.0" encoding="utf-8"?>
<Autocompletions>
<Autocompletion term="My term" type="1" language="" />
</Autocompletions>
Make sure that your encoding is UTF-8. Also, term="", i.e. an empty term, should not appear in the XML file.
I'd like to take text from a standard text file and insert it into an XML file that Apache Ant copies with replacetokens. Is this possible?
Example (this is what I use so far):
<macrodef name="generateUpdateFile">
    <sequential>
        <echo message="Generating update file ..." level="info"/>
        <copy file="update.xml" tofile="${path.pub}/update.xml" overwrite="true">
            <filterchain>
                <replacetokens>
                    <token key="app_version" value="${app.version}"/>
                    <token key="app_updatenotes" value="${app.updatenotes}"/>
                </replacetokens>
            </filterchain>
        </copy>
    </sequential>
</macrodef>
${app.updatenotes} is currently a string defined in a build.properties file. Instead, I'd like to write the update notes in a simple text file and take them from there.
The Apache Ant loadfile (or loadresource) task will let you read your text file and put its contents into the app.updatenotes property.
You can simply use:
<loadresource property="app.updatenotes">
    <file file="notes.txt"/>
</loadresource>
Then, use your filterchain just as before.
loadresource has some options, for instance to control the encoding of your file, or how to react if the file is missing or unreadable.
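For completeness, the update.xml template would contain the tokens wrapped in replacetokens' default @ delimiters, along these lines (the element names here are made up):
<update>
    <version>@app_version@</version>
    <notes>@app_updatenotes@</notes>
</update>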