According to the Create Library Document section of their API documentation:
createLibraryDocument is used to create a document in a user's
document library. The library can be used to send the same document
for signature multiple times, either through the web application or
through the API.
It isn't clear whether you can put a placeholder like %ProductName% in the document and have it found and replaced when the document is sent out, or whether you have to upload a brand-new document each time. I'm planning on using the API to send out identical agreements, but with different product and company names on them.
Any idea if this is possible?
The question is rather old, so I'm adding this for future reference.
I was dealing with the same problem and I found a solution, or rather a hack. Instead of createLibraryDocument I use sendDocument directly. It has a mergeFieldInfo property which, according to the documentation, cannot be used with library documents, but it does work if you pass the URL of a file instead. I tried the URL option and it works: the fields come prefilled in my test document.
The example request body that worked for me:
<?xml version="1.0"?>
<sendDocument>
  <apiKey>XXXXX</apiKey>
  <senderInfo nil="true"/>
  <documentCreationInfo>
    <fileInfos>
      <FileInfo>
        <fileName>Merchant Agreement.pdf</fileName>
        <url>https://my.public.host.com/GetFinancing%20Merchant.pdf</url>
      </FileInfo>
    </fileInfos>
    <mergeFieldInfo>
      <mergeFields>
        <MergeField>
          <defaultValue>test</defaultValue>
          <fieldName>companyName</fieldName>
        </MergeField>
        <MergeField>
          <defaultValue>test</defaultValue>
          <fieldName>companyAddress</fieldName>
        </MergeField>
        <MergeField>
          <defaultValue>0123456789</defaultValue>
          <fieldName>companyPhone</fieldName>
        </MergeField>
      </mergeFields>
    </mergeFieldInfo>
    <name>Merchant Agreement</name>
    <recipients>
      <RecipientInfo>
        <email>kowalski0123#gmail.com</email>
        <role>SIGNER</role>
      </RecipientInfo>
    </recipients>
    <reminderFrequency>NEVER</reminderFrequency>
    <signatureFlow>SENDER_SIGNATURE_NOT_REQUIRED</signatureFlow>
    <signatureType>ESIGN</signatureType>
  </documentCreationInfo>
</sendDocument>
I want to integrate Adobe Captivate content (export: index.html along with a src folder) into the Odoo Community Edition v13 e-Learning module (website_slides).
The slide.slide model already offers the slide_type 'webpage' alongside the field 'html_content'.
The field 'html_content' is of type odoo.fields.Html. To get the requirement stated above to work, I need to embed JavaScript in the given html_content, but the JS scripts do not seem to execute. I also tried a simple Hello World script.
Can someone help?
Best regards,
Lars
I found the solution already.
Looking at odoo/fields.py -> class Html, you can see that by default the given value is sanitized using odoo/tools/mail.py -> html_sanitize(), which removes the HTML elements listed in 'tags_to_kill'. 'tags_to_kill' also contains "script".
After overriding html_content in slide.slide as follows, the JavaScript code is executed:
html_content = fields.Html(
    sanitize=False,
    sanitize_tags=False,
    sanitize_attributes=False)
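For completeness, a minimal sketch of where such an override could live, i.e. a custom module inheriting slide.slide (module and class names here are illustrative):

from odoo import fields, models

class SlideSlide(models.Model):
    _inherit = "slide.slide"

    # Redefine the field with sanitizing disabled so embedded <script> tags survive
    html_content = fields.Html(
        sanitize=False,
        sanitize_tags=False,
        sanitize_attributes=False)

Keep in mind that disabling sanitizing means anyone who can edit html_content can inject arbitrary scripts, so restrict write access accordingly.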
I have images with people-tagging information in XML format. I wish to edit this information and also add it to pictures that do not yet have it. Looking at the XML, I assume it is based on the people tagging used by the Microsoft Windows Imaging Component.
I haven't quite understood the format, but I understand it well enough that I can alter or generate the XML; I just do not know where to write it inside the image. I am probably making some simple mistake, because I am not experienced with image metadata. So if you think I'm on the wrong track and this can be done much more simply, please tell me.
In those images that already contain this XML, I can use search and replace to update it. However, I have a lot of pictures that do not yet contain that information, and I do not know where inside the image I should write it.
Images that already contain this information can be read with exiftool as follows:
exiftool -xmp -b existingTags.JPG
The result is the following xml:
<?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/" x:xmptk="XMP Core 4.4.0-Exiv2">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:xmp="http://ns.adobe.com/xap/1.0/"
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:MP="http://ns.microsoft.com/photo/1.2/"
        xmlns:MPRI="http://ns.microsoft.com/photo/1.2/t/RegionInfo#"
        xmlns:MPReg="http://ns.microsoft.com/photo/1.2/t/Region#"
        xmp:Rating="0">
      <dc:subject>
        <rdf:Bag>
          <rdf:li>Valeriya</rdf:li>
        </rdf:Bag>
      </dc:subject>
      <MP:RegionInfo rdf:parseType="Resource">
        <MPRI:Regions>
          <rdf:Bag>
            <rdf:li MPReg:Rectangle="0.48, 0.418, 0.059333, 0.089" MPReg:PersonDisplayName="findus_l"/>
          </rdf:Bag>
        </MPRI:Regions>
      </MP:RegionInfo>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>
However, I cannot write the information using exiftool. When I run the following command, it simply reads the information again instead of writing the contents of the file into the image:
exiftool -xmp<=alteredXMP.txt existingTags.JPG
A bit of research has shown me that exiftool can only write specific XMP tags, and the people-tagging tags from the Windows Imaging Component do not seem to be among them.
Where in the image file should I write the information? Can I somehow find this spot programmatically and then just insert the XML there?
I am using Kotlin as my programming language, but I don't mind having to call command-line tools or other programs.
Background: I have a Synology DiskStation and use the included software called Photo Station. Photo Station supports tagging people on images and uses this format. I like Photo Station in many ways, but its face recognition is poor, so I want to run my own recognition while keeping the results readable by Photo Station.
The data you are trying to write is part of the Microsoft Region Structure. XMP structured data is a complex subject, but you should be able to add the data with exiftool by writing region names to the RegionPersonDisplayName tag and the region dimensions to the RegionRectangle tag. Using the data in your example, the command would be:
exiftool -RegionPersonDisplayName=findus_l -RegionRectangle="0.48, 0.418, 0.059333, 0.089" /path/to/files
If you have to write multiple regions, you can just add them on, but you must keep the names and the matching dimensions in the same order. For example:
exiftool -RegionPersonDisplayName=findus_l -RegionRectangle="0.48, 0.418, 0.059333, 0.089" -RegionPersonDisplayName="John Smith" -RegionRectangle="0.37645533, 0.04499886, 0.35111009, 0.26633097" /path/to/files
These commands would overwrite any existing region data. If you are adding new names without overwriting, change the equals signs to plus-equals (+=).
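Since you mention Kotlin: a minimal sketch of calling exiftool from Kotlin via ProcessBuilder, assuming exiftool is on the PATH (the file name and region values are the placeholders from above):

import java.io.File

// Write one Microsoft region (display name + rectangle) to an image by shelling out to exiftool.
fun writeRegion(image: File, displayName: String, rectangle: String) {
    val process = ProcessBuilder(
        "exiftool",
        "-RegionPersonDisplayName=$displayName",
        "-RegionRectangle=$rectangle",
        image.absolutePath
    )
        .redirectErrorStream(true) // merge stderr into stdout for simple logging
        .start()

    val output = process.inputStream.bufferedReader().readText()
    val exitCode = process.waitFor()
    println("exiftool exited with $exitCode: $output")
}

fun main() {
    writeRegion(File("existingTags.JPG"), "findus_l", "0.48, 0.418, 0.059333, 0.089")
}

Use += instead of = in the argument strings if you want to append to existing regions, as described above.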
I'm using the file upload and file download controls. I understand how to use the provided display columns, but how would I go about collecting other information about each uploaded file and then displaying it (e.g. a display name and notes that the user would enter)?
<xp:fileUpload id="fileUpload1"
value="#{document1.files}" style="width:80%"
useUploadname="false">
<xp:eventHandler event="onchange"
submit="true" refreshMode="complete"
disableValidators="true">
</xp:eventHandler>
</xp:fileUpload>
<xp:br></xp:br>
<xp:fileDownload rows="30" id="FD1"
displayLastModified="false" value="#{document1.files}"
style="width:98%" hideWhen="true" displayType="false"
displayCreated="true" rules="all"
lastModifiedTitle="Last Modified">
<xp:this.allowDelete><![CDATA[${javascript:database.queryAccessRoles(session.getEffectiveUserName()).contains('[Admin]')}]]></xp:this.allowDelete>
</xp:fileDownload>
If I understand your question correctly: you want to add additional information columns to the file download control that are derived from information stored or computed elsewhere, e.g. from a NotesItem (a field in the Notes document)?
In this case you need to construct your own output using a repeat control. You can render a table or a list - whatever you deem fit for display.
The “trick” is how to construct the URL for download - which is simply:
/yourdatabase.nsf/0/unid/AttachmentName?OpenAttachment
(typed off memory. You might need to double check syntax).
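A rough sketch of such a repeat, untested and based on the data source from your snippet ('fileNotes' is a hypothetical item holding the extra per-file information; double-check the URL syntax as noted above):

<xp:repeat id="repeatFiles" var="fileName"
    value="#{javascript:session.evaluate('@AttachmentNames', document1.getDocument())}">
    <xp:panel tagName="div">
        <!-- download link built from the URL pattern above -->
        <xp:link text="#{javascript:fileName}">
            <xp:this.value><![CDATA[#{javascript:
                '/' + database.getFilePath() + '/0/'
                + document1.getDocument().getUniversalID()
                + '/' + fileName + '?OpenAttachment'}]]></xp:this.value>
        </xp:link>
        <!-- extra column, e.g. notes the user entered for this upload -->
        <xp:text value="#{javascript:document1.getItemValueString('fileNotes')}" />
    </xp:panel>
</xp:repeat>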
Word of caution: if you have lots of attachments, you might consider having separate documents for them and using a view - the URL above works in views too. That saves you a versioning headache (in case multiple users can upload to the same document).
Let us know how it goes
So I have this bit as part of the code that came with the HTML template I purchased. I was told that in order for this to work, I need to use the absolute URL of 'api/tweet.php'.
This is all in one line:
(function($){$.fn.twittie=function(options){var settings=$.extend({'count':10,'hideReplies':false,'dateFormat':'%b/%d/%Y','template':'{{date}} - {{tweet}}'},options);var linking=function(tweet){var parts=tweet.split(' ');var twit='';for(var i=0,len=parts.length;i<len;i++){var text=parts[i];var link="https://twitter.com/#!/";if(text.indexOf('#')!==-1){text=''+text+''}if(text.indexOf('#')!==-1){text=''+text+''}if(text.indexOf('http://')!==-1){text=''+text+''}twit+=text+' '}return twit};var dating=function(twt_date){var time=twt_date.split(' ');twt_date=new Date(Date.parse(time[1]+' '+time[2]+', '+time[5]+' '+time[3]+' UTC'));var months=['January','February','March','April','May','June','July','August','September','October','November','December'];var _date={'%d':twt_date.getDate(),'%m':twt_date.getMonth()+1,'%b':months[twt_date.getMonth()].substr(0,3),'%B':months[twt_date.getMonth()],'%y':String(twt_date.getFullYear()).slice(-2),'%Y':twt_date.getFullYear()};var date=settings.dateFormat;var format=settings.dateFormat.match(/%[dmbByY]/g);for(var i=0,len=format.length;i<len;i++){date=date.replace(format[i],_date[format[i]])}return date};var templating=function(data){var temp=settings.template;var temp_variables=['date','tweet','avatar'];for(var i=0,len=temp_variables.length;i<len;i++){temp=temp.replace(new RegExp('{{'+temp_variables[i]+'}}','gi'),data[temp_variables[i]])}return temp};this.html('<span>Loading...</span>');var that=this;$.getJSON('api/tweet.php',{count:settings.count,exclude_replies:settings.hideReplies},function(twt){that.find('span').fadeOut('fast',function(){that.html('<ul></ul>');for(var i=0;i<settings.count;i++){if(twt[i]){var temp_data={date:dating(twt[i].created_at),tweet:linking(twt[i].text),avatar:'<img src="'+twt[i].user.profile_image_url+'" />'};that.find('ul').append('<li>'+templating(temp_data)+'</li>')}else{break}}})})}})(jQuery);
Does anyone know how to use or get the absolute URL of tweet.php? I've tried researching it, but nothing I found seems to work.
The location of tweet.php is:
http://exampledomain.com/api/tweet.php
EDIT:
This is the thread discussion that I posted on their support website. I didn't share the link since it requires visitors to create an account with them just to view responses.
Support Thread Picture
According to the Tweetie jQuery plugin documentation, you have to use the apiPath option:
$('.foo').twittie({
'apiPath': 'http://exampledomain.com/api/tweet.php',
});
But specifying the domain is discouraged and unnecessary, so:
$('.foo').twittie({
'apiPath': '/api/tweet.php',
});
My use case is to index two files under a unique Solr id: a metadata file and a binary PDF file. The metadata file is an XML file, and some schema fields are mapped to elements in that XML.
What I do: extract content from the PDF files (using pdftotext), process that content and retrieve specific information (for example, the PDF's first page/line has information about the medicine and the research stage). The retrieved information (medicine/research stage) needs to be indexed, and one should be able to search/sort/facet on it.
I can create an XML file with the retrieved information (let's call this the metadata file). Now assume my schema is:
<field name="medicine" type="text" stored="true" indexed="true"/>
<field name="researchStage" .../>
Is there a way to put this metadata file and the PDF file into Solr?
What I have tried:
Based on a suggestion in the archives, I zipped these files and gave the zip to ExtractingRequestHandler. I was able to put all the content into Solr and make it searchable, but it appears as the content of a zip file. (I had to apply some patches to the Solr code base to make this work.) This is still not sufficient, as the content in the metadata file is not mapped to field names.
curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" -F "myfile=@file.zip"
I tried to work with the DataImportHandler (binary URL data source), but I don't think I understand how it works, so I could not get far.
I thought of adding metadata tags to the PDF itself. For this to work, ExtractingRequestHandler would have to process that metadata; I am not sure it does.
So I tried pdftk to add metadata, but I was not able to add custom tags with it; it only updates/adds title/author/keywords etc. Does anyone know of a similar Unix tool?
If someone has tips, please share.
I want to avoid creating one file (by merging the PDF text and the metadata file).
Given a file record1234.pdf and metadata like:
<metadata>
<field1>value1</field1>
<field2>value2</field2>
<field3>value3</field3>
</metadata>
Do the programmatic equivalent of
curl "http://localhost:8983/solr/update/extract?
literal.id=record1234.pdf
&literal.field1=value1
&literal.field2=value2
&literal.field3=value3
&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_txt&boost.foo_txt=3" -F "tutorial=@tutorial.pdf"
Adapted from http://wiki.apache.org/solr/ExtractingRequestHandler#Literals.
This will create a new entry in the index containing the text output from Tika/Solr CEL as well as the fields you specify.
You should be able to perform these operations in your favorite language.
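For instance, a minimal Kotlin sketch using the JDK 11 HTTP client; the id and field values are the placeholders from above, and it assumes your Solr accepts the raw PDF body on /update/extract (otherwise post it as a multipart form, as in the curl example):

import java.net.URI
import java.net.URLEncoder
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.nio.file.Path

fun main() {
    // literal.* parameters carry the metadata you extracted yourself
    val params = mapOf(
        "literal.id" to "record1234.pdf",
        "literal.field1" to "value1",
        "literal.field2" to "value2",
        "literal.field3" to "value3",
        "commit" to "true"
    )
    val query = params.entries.joinToString("&") {
        "${it.key}=${URLEncoder.encode(it.value, Charsets.UTF_8)}"
    }

    // POST the PDF; Solr CEL (Tika) extracts the text server-side
    val request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8983/solr/update/extract?$query"))
        .header("Content-Type", "application/pdf")
        .POST(HttpRequest.BodyPublishers.ofFile(Path.of("record1234.pdf")))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println("${response.statusCode()}: ${response.body()}")
}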
the content in the metadata file is not mapped to field names
If they don't map to a predefined field, then use dynamic fields. For example, you can define *_i to be an integer field.
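As a sketch, in schema.xml that could look like the following (the suffixes and types mirror the stock example schema; the concrete field names are just examples):

<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
<dynamicField name="*_i" type="int"    indexed="true" stored="true"/>

Your extract request could then pass literal.medicine_s=Aspirin&literal.stage_i=2 without declaring each field in the schema beforehand.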
I want to avoid creating one file (by merging the PDF text and the metadata file).
That looks like programmer fatigue :-) But do you have a good reason?