I'm trying the following:
https://jsfiddle.net/zgaxy70t/1/
However, it looks like the settings aren't being honored. How can I configure the max file size, max number of files, and allowed file types within my call to unsigned_cloudinary_upload?
$('.upload_field').unsigned_cloudinary_upload(preset_name, {
  cloud_name: 'cloudname',
  disableImageResize: false,
  imageMaxWidth: 2000,   // example value
  imageMaxHeight: 2000,  // example value
  maxFileSize: 8000000,  // 8 MB, example value
  //loadImageMaxFileSize: 200000, // default is 10MB
  acceptFileTypes: /(\.|\/)(gif|jpe?g|png|bmp|ico)$/i
}, {
  multiple: true
})
I also tried this:
//{
// cloud_name: 'test',
// max_files: settings.maxFiles,
// tags: tags,
// client_allowed_formats: ["png", "gif", "jpeg", "jpg", "jpe", "jpc", "jp2", "j2k", "wdp", "jxr", "hdp", "webp", "bmp", "tif", "tiff", "ico", "ps", "ept", "eps", "eps3", "psd", "svg", "ai", "djvu", "flif", "tga"],
// max_file_size: 8000000
//},
Currently, it's not possible to use the unsigned_cloudinary_upload method and limit the client allowed formats, max files, or max file size.
A close workaround is to use the cloudinary_fileupload method and have the upload unsigned. This allows you to set the max file size and use a regular expression to limit the file types.
An example using maxFileSize and acceptFileTypes would look like this:
$('.cloudinary-fileupload').cloudinary_fileupload({ maxFileSize: 100000000, acceptFileTypes: /(\.|\/)(gif|jpe?g|png|bmp|ico)$/i });
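If you also need to cap the number of files, a minimal sketch, assuming the Cloudinary jQuery plugin passes these options through to the underlying blueimp jQuery File Upload widget (its jquery.fileupload-validate extension must be loaded for maxFileSize, acceptFileTypes, and maxNumberOfFiles to be enforced):
$('.cloudinary-fileupload').cloudinary_fileupload({
  maxFileSize: 8000000,                                // bytes (8 MB here)
  acceptFileTypes: /(\.|\/)(gif|jpe?g|png|bmp|ico)$/i, // whitelist by extension or MIME type
  maxNumberOfFiles: 10                                 // assumption: honored only when the validate plugin is loaded
});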
Related
I finally got my html2pdf to show my web page just how I want it in the PDF (any other size was not showing right, so I kept adjusting the format size until it all fit properly), and the end result is exactly what I want it to look like... EXCEPT that even though my aspect ratio is correct for landscape, it is still using a very large image, and the PDF is not standard letter size (or A4 for that matter); it is the size I set. This makes for a larger PDF than necessary, and it does not print well unless we adjust it for the printer. I basically want this exact image, just converted to A4 or letter size to make a smaller PDF. If I don't use the size I set, though, things get cut off.
Is there any way to take the PDF that is generated and resize it to A4 (still fitting the image on it)? Everything I try is not working, and I feel like I am missing something simple.
const el = document.getElementById("test");
var opt = {
  margin: [10, 10, 10, 10],
  filename: label,
  image: { type: "jpeg", quality: 0.98 },
  //pagebreak: { mode: ["avoid-all", "css"], after: ".newPage" },
  pagebreak: {
    mode: ["css"],
    avoid: ["tr"],
    // mode: ["legacy"],
    after: ".newPage",
    before: ".newPrior"
  },
  /*pagebreak: {
    before: ".newPage",
    avoid: ["h2", "tr", "h3", "h4", ".field"]
  },*/
  html2canvas: {
    scale: 2,
    logging: true,
    dpi: 192,
    letterRendering: true
  },
  jsPDF: {
    unit: "mm",
    format: [463, 600],
    orientation: "landscape"
  }
};
var doc = html2pdf()
  .from(el)
  .set(opt)
  .toContainer()
  .toCanvas()
  .toImg()
  .toPdf()
  .save();
I have been struggling with this a lot as well, and in the end I was able to resolve the issue. What did the trick for me was setting the width property in html2canvas. My application has a fixed width, and setting the width of html2canvas to the width of my application scaled the PDF to fit on an A4 page.
html2canvas: { width: element_width},
Try adding the above option to see if it works. Try to find out the width of your print area in pixels and replace element_width with that width.
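For example, a minimal sketch of measuring the print area at runtime with plain DOM calls (the element id here is just the one from the question; adjust it to your markup):
var el = document.getElementById("test");
var element_width = el.getBoundingClientRect().width; // width in pixels (el.offsetWidth also works)
var opt = {
  margin: [10, 10, 10, 10],
  image: { type: "jpeg", quality: 0.98 },
  html2canvas: { width: element_width, scale: 2 },
  jsPDF: { unit: "mm", format: "a4", orientation: "landscape" }
};
html2pdf().from(el).set(opt).save();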
For completeness: I am using Plotly Dash to create web user interfaces. On my interface I include a button that, when clicked, generates a PDF report of my dashboard. Below I added the code that I used for this, in case anybody is looking for a Dash solution. To get this working in Dash, download html2pdf.bundle.min.js and copy it to the assets/ folder. The PDF file will be downloaded to the browser's default downloads folder (it might give a download prompt; however, that wasn't how it worked for me).
from dash import html, clientside_callback, Output, Input
import dash_bootstrap_components as dbc

# Define your Dash app in the regular way.
# In the layout, define a component that will trigger the download of the
# PDF report. In this example a button is responsible.
app.layout = html.Div(
    id='main_container',
    children=[
        dbc.Button(
            id='button_download_report',
            children='Download PDF report',
            className='me-1')
    ])
# Clientside callbacks allow you to directly insert Javascript code in your
# dashboards. There are also other ways, like including your own js files
# in the assets/ directory.
clientside_callback(
    '''
    function (button_clicked) {
        if (button_clicked > 0) {
            // Get the element that you want to print. In this example the
            // whole dashboard is printed.
            var element = document.getElementById("main_container");
            // Create a date-time string to use for the filename.
            const d = new Date();
            var month = (d.getMonth() + 1).toString();
            if (month.length == 1) {
                month = "0" + month;
            }
            let text = d.getFullYear().toString() + month + d.getDate() + '-' + d.getHours() + d.getMinutes();
            // Set the options to be used when printing the PDF.
            var main_container_width = element.style.width;
            var opt = {
                margin: 10,
                filename: text + '_my-dashboard.pdf',
                image: { type: 'jpeg', quality: 0.98 },
                html2canvas: { scale: 3, width: main_container_width, dpi: 300 },
                jsPDF: { unit: 'mm', format: 'A4', orientation: 'p' },
                // Set pagebreaks if you like. It didn't work out well for me.
                // pagebreak: { mode: ['avoid-all'] }
            };
            // Execute the save command.
            html2pdf().from(element).set(opt).save();
        }
    }
    ''',
    Output(component_id='button_download_report', component_property='n_clicks'),
    Input(component_id='button_download_report', component_property='n_clicks')
)
For years, I have been using Google Cloud Print to print labels in our laboratories on campus (to standardize) using a Google Apps Script custom HtmlService form.
Now that GCP is being deprecated, I am searching for a replacement. I have found a few options, but I am struggling to get the file to convert to a PDF, as would be needed with these other vendors.
Currently, when you submit a text/html blob to the GCP servers in GAS, the backend converts the blob to application/pdf (as evidenced by looking at the job details in the GCP panel on Chrome under 'content type').
That said, because these other cloud print services require PDF printing, I have tried for some time now to have GAS change the file to PDF format before sending it to GCP, and I always get a strange result. Below, I'll show some of the strategies that I have used and include pictures of one of our simple labels generated with the different functions.
The following is the base code for the ticket and payload that has worked for years with GCP
//BUILD PRINT JOB FOR NARROW TAPES
var ticket = {
  version: "1.0",
  print: {
    color: {
      type: "STANDARD_COLOR",
      vendor_id: "Color"
    },
    duplex: {
      type: "NO_DUPLEX"
    },
    copies: { copies: parseFloat(quantity) },
    media_size: {
      width_microns: 27940,
      height_microns: 40960
    },
    page_orientation: {
      type: "LANDSCAPE"
    },
    margins: {
      top_microns: 0,
      bottom_microns: 0,
      left_microns: 0,
      right_microns: 0
    },
    page_range: {
      interval: [{ start: 1, end: 1 }]
    }
  }
};
var payload = {
  "printerid"  : QL710,
  "title"      : "Blank Template Label",
  "content"    : HtmlService.createHtmlOutput(html).getBlob(),
  "contentType": 'text/html',
  "ticket"     : JSON.stringify(ticket)
};
This generates the following expected printout:
When trying to convert to PDF, the following is the code used for the transformation:
var blob = HtmlService.createTemplate(html).evaluate().getContent();
var newBlob = Utilities.newBlob(html, "text/html", "text.html");
var pdf = newBlob.getAs("application/pdf").setName('tempfile');
var file = DriveApp.getFolderById("FOLDER ID").createFile(pdf);
var payload = {
  "printerid"  : QL710,
  "title"      : "Blank Template Label",
  "content"    : pdf, //HtmlService.createHtmlOutput(html).getBlob(),
  "contentType": 'text/html',
  "ticket"     : JSON.stringify(ticket)
};
An unexpected result occurs:
The same thing happens when coding the 'content' field directly, with and without .getBlob():
"content" : HtmlService.createHtmlOutput(html).getAs('application/pdf'),
Note the createFile line in the code above, used to test the PDF. That file is created as expected, though of course with the wrong dimensions for label printing (I'm not sure how to convert to PDF with the appropriate margins and page size). See below:
I have now tried to adopt Yuri's ideas; however, the conversion from html to document loses formatting.
var blob = HtmlService.createHtmlOutput(html).getBlob();
var docID = Drive.Files.insert({title: 'temp-label'}, blob, {convert: true}).id
var file = DocumentApp.openById(docID);
file.getBody().setMarginBottom(0).setMarginLeft(0).setMarginRight(0).setMarginTop(0).setPageHeight(79.2).setPageWidth(172.8);
This produces a document that looks like this (the picture also shows the expected output in my hand).
Does anyone have insights into:
1. How to format the converted PDF to contain the appropriate height, width, and margins.
2. How to convert to PDF in a way that would print correctly.
Here is a minimal script to give a better sense of the context: https://script.google.com/d/1yP3Jyr_r_FIlt6_aGj_zIf7HnVGEOPBKI0MpjEGHRFAWztGzcWKCJrD0/edit?usp=sharing
I've made the template (80 x 40 mm -- sorry, I don't know your size):
https://docs.google.com/document/d/1vA93FxGXcWLIEZBuQwec0n23cWGddyLoey-h0WR9weY/edit?usp=sharing
And here is the script:
function myFunction() {
  // input data
  var matName = '<b>testing this to <u>see</u></b> if it <i>actually</i> works <i>e.coli</i>';
  var disposeWeek = 'end of semester';
  var prepper = 'John Ruppert';
  var className = 'Cell and <b>Molecular</b> Biology <u>Fall 2020</u> a few exercises a few exercises a few exercises a few exercises';
  var hazards = 'Lots of hazards';

  // make a temporary Doc from the template
  var copyFile = DriveApp.getFileById('1vA93FxGXcWLIEZBuQwec0n23cWGddyLoey-h0WR9weY').makeCopy();
  var doc = DocumentApp.openById(copyFile.getId());
  var body = doc.getBody();

  // replace placeholders with data
  body.replaceText('{matName}', matName);
  body.replaceText('{disposeWeek}', disposeWeek);
  body.replaceText('{prepper}', prepper);
  body.replaceText('{className}', className);
  body.replaceText('{hazards}', hazards);

  // make Italics, Bold and Underline
  handle_tags(['<i>', '</i>'], body);
  handle_tags(['<b>', '</b>'], body);
  handle_tags(['<u>', '</u>'], body);

  // save the temporary Doc
  doc.saveAndClose();

  // make a PDF
  var docblob = doc.getBlob().setName('Label.pdf');
  DriveApp.createFile(docblob);

  // delete the temporary Doc
  copyFile.setTrashed(true);
}

// this function applies formatting to text inside the tags
function handle_tags(tags, body) {
  var start_tag = tags[0].toLowerCase();
  var end_tag = tags[1].toLowerCase();
  var found = body.findText(start_tag);
  while (found) {
    var elem = found.getElement();
    var start = found.getEndOffsetInclusive();
    var end = body.findText(end_tag, found).getStartOffset() - 1;
    switch (start_tag) {
      case '<b>': elem.setBold(start, end, true); break;
      case '<i>': elem.setItalic(start, end, true); break;
      case '<u>': elem.setUnderline(start, end, true); break;
    }
    found = body.findText(start_tag, found);
  }
  body.replaceText(start_tag, ''); // remove tags
  body.replaceText(end_tag, '');
}
The script just replaces the {placeholders} with the data and saves the result as a PDF file (Label.pdf). The PDF looks like this:
There is one thing I'm not sure is possible -- changing the size of the texts dynamically to fit them into the cells, like it's done in your 'autosize.html'. Roughly, you can take the length of the text in a cell and, if it is bigger than some number, make the font size a bit smaller. You could probably use the jQuery textfill function from 'autosize.html' to get an optimal size and apply that size in the document.
I'm not sure if I got you right. Do you need to make a PDF and save it on Google Drive? You can do that with Google Docs.
As example:
Make a new document with your table and text. Something like this
Add this script into your doc:
function myFunction() {
  var copyFile = DriveApp.getFileById(ID).makeCopy();
  var newFile = DriveApp.createFile(copyFile.getAs('application/pdf'));
  newFile.setName('label');
  copyFile.setTrashed(true);
}
Every time you run this script, it makes the file 'label.pdf' on your Google Drive.
The size of this PDF will be the same as the page size of your Doc. You can make the page any size with the Page Sizer add-on: https://webapps.stackexchange.com/questions/129617/how-to-change-the-size-of-paper-in-google-docs-to-custom-size
If you need to change the text in your label before generating the PDF, and/or you need to change the name of the generated file, you can do that via the script as well, as shown in the sketch below.
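For example, a minimal sketch, assuming your template Doc contains a {name} placeholder; TEMPLATE_ID stands in for your Doc's ID:
function makeLabelPdf() {
  var copyFile = DriveApp.getFileById(TEMPLATE_ID).makeCopy();      // copy the template Doc
  var doc = DocumentApp.openById(copyFile.getId());
  doc.getBody().replaceText('{name}', 'E. coli culture');           // change the label text
  doc.saveAndClose();
  var pdf = DriveApp.createFile(copyFile.getAs('application/pdf')); // export the copy as PDF
  pdf.setName('label-' + new Date().toISOString() + '.pdf');        // change the file name
  copyFile.setTrashed(true);                                        // clean up the temporary copy
}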
Here is a variant of the script that changes the font size in one of the cells if the label doesn't fit on one page.
function main() {
  // input texts
  var text = {};
  text.matName = '<b>testing this to <u>see</u></b> if it <i>actually</i> works <i>e.coli</i>';
  text.disposeWeek = 'end of semester';
  text.prepper = 'John Ruppert';
  text.className = 'Cell and <b>Molecular</b> Biology <u>Fall 2020</u> a few exercises a few exercises a few exercises a few exercises';
  text.hazards = 'Lots of hazards';

  // initial max font size for the 'matName'
  var size = 10;
  var doc_blob = set_text(text, size);

  // if we got more than 1 page, reduce the font size and repeat
  while ((size > 4) && (getNumPages(doc_blob) > 1)) {
    size = size - 0.5;
    doc_blob = set_text(text, size);
  }

  // save pdf
  DriveApp.createFile(doc_blob);
}

// this function takes the texts and a size and puts the texts into the fields
function set_text(text, size) {
  // make a copy
  var copyFile = DriveApp.getFileById('1vA93FxGXcWLIEZBuQwec0n23cWGddyLoey-h0WR9weY').makeCopy();
  var doc = DocumentApp.openById(copyFile.getId());
  var body = doc.getBody();

  // replace placeholders with data
  body.replaceText('{matName}', text.matName);
  body.replaceText('{disposeWeek}', text.disposeWeek);
  body.replaceText('{prepper}', text.prepper);
  body.replaceText('{className}', text.className);
  body.replaceText('{hazards}', text.hazards);

  // set font size for 'matName'
  body.findText(text.matName).getElement().asText().setFontSize(size);

  // make Italics, Bold and Underline
  handle_tags(['<i>', '</i>'], body);
  handle_tags(['<b>', '</b>'], body);
  handle_tags(['<u>', '</u>'], body);

  // save the doc
  doc.saveAndClose();

  // delete the copy
  copyFile.setTrashed(true);

  // return blob
  return doc.getBlob().setName('Label.pdf');
}

// this function formats the text between html tags
function handle_tags(tags, body) {
  var start_tag = tags[0].toLowerCase();
  var end_tag = tags[1].toLowerCase();
  var found = body.findText(start_tag);
  while (found) {
    var elem = found.getElement();
    var start = found.getEndOffsetInclusive();
    var end = body.findText(end_tag, found).getStartOffset() - 1;
    switch (start_tag) {
      case '<b>': elem.setBold(start, end, true); break;
      case '<i>': elem.setItalic(start, end, true); break;
      case '<u>': elem.setUnderline(start, end, true); break;
    }
    found = body.findText(start_tag, found);
  }
  body.replaceText(start_tag, '');
  body.replaceText(end_tag, '');
}

// this function takes the saved doc and returns the number of its pages
function getNumPages(doc) {
  var blob = doc.getAs('application/pdf');
  var data = blob.getDataAsString();
  var pages = parseInt(data.match(/ \/N (\d+) /)[1], 10);
  Logger.log("pages = " + pages);
  return pages;
}
It looks rather awful and hopeless. It turns out that Google Docs has no page counter. You need to convert your document into a PDF and count the pages of the PDF file. Gross!
Next problem: even if you manage somehow to count the pages, you have no clue which of the cells overflowed. This script takes just one cell, changes its font size, counts the pages, changes the font size again, etc. But it doesn't guarantee success, because there can be another cell with long text inside. You could reduce the font size of all the texts, but that doesn't look like a great idea either.
We're utilizing the vCloud API to interact with virtual machines (create machines, perform actions, switch media, etc.). One requested function is to be able to upload media (specifically ISOs) to a particular catalog. The API guide (pg. 67) is fairly straightforward, and our multipart requests to the URL that is provided when the upload starts go off without a hitch.
Note: We have to declare the file size before starting the upload
The only thing that seems amiss during the upload itself is that the "transferred size" ends up being larger than the "file size" at the end of the process. This is somewhat odd because our Content-Range never exceeds the expected file size (we assume that metadata is being included without us having a say). Once the transferred size exceeds the file size, the status of the file upload changes to "Error", but the request still returns a 200 OK:
{
  "name": "J Small 4",
  "description": "",
  "files": [{
    "name": "file",
    "totalSize": 50696192,
    "status": "Error",
    "link": "https://cloud01.cs2cloud.com/transfer/27b8f93c-8319-419e-9e8c-15622097670b/file",
    "transferredSize": 54293177
  }],
  "id": "urn:vcloud:media:1cec68ef-f22e-4ec7-ae5d-dfbc4f7137d9",
  "catalogId": "urn:vcloud:catalogitem:19dbfdd8-ea70-4355-abc7-96e34dccb869"
}
Not sure where to even start debugging this, since all the API calls come back with 200 OK, the .ISO file seems to be fine, our Content-Range headers never go outside the established file size, and the metadata seems to be out of our control in terms of editing or measuring it.
Hoping some soul has experienced this issue before and can provide some insight into working towards a solution.
It turns out the issue wasn't with VMware at all, but with how we were chunking up the media file. We initially used FileReader() to chunk up the file and send it over to the VMware API.
Theoretically, we were choosing the chunk size and could then generate and set the Content-Range, but in reality we were setting the Content-Range while the Content-Length was different from the chunk size. We're still not entirely sure why it happened (maybe extra metadata being added on), but we found a solution.
The fix: we eliminated FileReader() altogether and just put the file slices directly into a blob (as you can see below):
$scope.parseMediaFile = function(url, file, catalogId) {
  $scope.uploadingMediaFile = true;
  var fileSize = file.size;
  var chunkSize = 1024 * 1024 * 5; // bytes
  var offset = 0;
  var self = this; // we need a reference to the current object
  var chunkReaderBlock = null;
  var chunkNum = 0;

  if (fileSize < chunkSize) {
    chunkSize = fileSize;
  }

  chunkReaderBlock = function(_offset, length, _file) {
    var blob = _file.slice(_offset, length + _offset);
    var beginRange = _offset;
    var endRange = _offset + length;
    if (endRange > _file.size) {
      endRange = _file.size;
    }
    var contentRange = beginRange + "-" + endRange;
    vdcServices.uploadMediaFile(url, blob, fileSize, contentRange).then(
      function(resp) {
        vdcServices.getUploadStatus($scope.company, catalogId).then(function(resp) {
          var uploaded = resp.data.files[0].transferredSize;
          $scope.mediaPercentLoaded = $scope.trunc((uploaded / fileSize) * 100);
          if (endRange == _file.size) {
            $scope.closeModal();
            return;
          }
          chunkReaderBlock(_offset + length, chunkSize, file);
        }, function(err) {
          $scope.errorMsg = err;
          chunkReaderBlock(_offset - length, chunkSize, file);
        });
      },
      function(err) {
        $scope.errorMsg = err;
      }
    );
  };

  // Starts the read with the first block
  if (offset < fileSize) {
    chunkReaderBlock(offset, chunkSize, file);
  }
};
Doing so allowed us to actually control the Content-Length, and since we can identify when the number of bytes transferred equals the file size, we can then complete the process.
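For reference, here is a hypothetical sketch of what the vdcServices.uploadMediaFile call above might look like (an assumption, not our actual service; the module name 'app' is a placeholder). It PUTs one slice with an explicit Content-Range header built from the values the controller passes in:
angular.module('app').factory('vdcServices', ['$http', function ($http) {
  return {
    // url: the vCloud transfer URL, blob: one slice of the ISO,
    // fileSize: total size in bytes, contentRange: "begin-end" as built above
    uploadMediaFile: function (url, blob, fileSize, contentRange) {
      return $http.put(url, blob, {
        headers: {
          'Content-Type': 'application/octet-stream',
          'Content-Range': 'bytes ' + contentRange + '/' + fileSize // e.g. "bytes 0-5242880/50696192"
        },
        transformRequest: angular.identity // send the Blob as-is, no JSON serialization
      });
    }
  };
}]);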
How can I serialize RDF in Turtle using rdflib.js? There's not much documentation. I can use:
Serializer.statementsToN3(destination);
to serialize into the N3 format, but not much besides that. I've tried altering the aforementioned command to stuff like statementsToTtl/Turtle/TURTLE/TTL, but nothing seems to work.
Figured it out, courtesy of this (secret) GitHub gist.
$rdf.serialize(undefined, source, undefined, 'text/turtle', function(err, str) {
  // do whatever you want, the data is in the str variable.
});
This is the code from the aforementioned Github gist.
/**
 * rdflib.js with node.js -- basic RDF API example.
 * @author ckristo
 */

var fs = require('fs');
var $rdf = require('rdflib');

var FOAF = $rdf.Namespace('http://xmlns.com/foaf/0.1/');
var XSD = $rdf.Namespace('http://www.w3.org/2001/XMLSchema#');

// - create an empty store
var kb = new $rdf.IndexedFormula();

// - load RDF file
fs.readFile('foaf.rdf', function (err, data) {
  if (err) { /* error handling */ }

  // NOTE: to get rdflib.js' RDF/XML parser to work with node.js,
  // see https://github.com/linkeddata/rdflib.js/issues/47

  // - parse RDF/XML file
  $rdf.parse(data.toString(), kb, 'foaf.rdf', 'application/rdf+xml', function(err, kb) {
    if (err) { /* error handling */ }

    var me = kb.sym('http://kindl.io/christoph/foaf.rdf#me');

    // - add new properties
    kb.add(me, FOAF('mbox'), kb.sym('mailto:e0828633@student.tuwien.ac.at'));
    kb.add(me, FOAF('nick'), 'ckristo');

    // - alter existing statement
    kb.removeMany(me, FOAF('age'));
    kb.add(me, FOAF('age'), kb.literal(25, null, XSD('integer')));

    // - find some existing statements and iterate over them
    var statements = kb.statementsMatching(me, FOAF('mbox'));
    statements.forEach(function(statement) {
      console.log(statement.object.uri);
    });

    // - delete some statements
    kb.removeMany(me, FOAF('mbox'));

    // - print modified RDF document
    $rdf.serialize(undefined, kb, undefined, 'application/rdf+xml', function(err, str) {
      console.log(str);
    });
  });
});
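To get Turtle (the original goal) instead of RDF/XML, the same serialize call with 'text/turtle' should work in place of the last step above; a minimal sketch that writes the result to a hypothetical foaf.ttl file:
// - serialize the store as Turtle and write it to a file
$rdf.serialize(undefined, kb, undefined, 'text/turtle', function(err, str) {
  if (err) { return console.error(err); }
  fs.writeFile('foaf.ttl', str, function(e) {
    if (e) { console.error(e); }
  });
});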
I'm programming a mod activity for Moodle which loads files and shows them to any student who can access the course.
The problem is that handling files in Moodle is damn hard.
This is what I have done so far:
The options page with the file pickers:
$mform->addElement('filepicker', 'slidesyncmedia', get_string('slidesyncmedia', 'slidesync'), null, array('maxbytes' => $maxbytes, 'accepted_types' => '*'));
$mform->addElement('filemanager', 'slidesyncslides', get_string('slidesyncslides', 'slidesync'), null, array('subdirs' => 0, 'maxbytes' => $maxbytes, 'maxfiles' => 50, 'accepted_types' => array('*') ));
After submit, the files are stored in the draft area, and everything is loaded on another page that saves it all to the DB:
if ($draftitemid = file_get_submitted_draft_itemid('slidesyncmedia')) {
    file_save_draft_area_files($draftitemid, $context->id, 'mod_slidesync', 'slidesyncmedia', 0, array('subdirs' => 0, 'maxfiles' => 1));
}
if ($draftitemid = file_get_submitted_draft_itemid('slidesyncslides')) {
    file_save_draft_area_files($draftitemid, $context->id, 'mod_slidesync', 'slidesyncslides', 0, array('subdirs' => 0, 'maxfiles' => 50));
}
In the end I use the first page again in another place (if the files are there, it shows them):
$fs = get_file_storage();
if ($files = $fs->get_area_files($context->id, 'mod_slidesync', 'slidesyncslides', '0', 'sortorder', false)) {
    // Look through each file being managed
    foreach ($files as $file) {
        // Build the file URL. Long process! But extremely accurate.
        $fileurl = moodle_url::make_pluginfile_url($file->get_contextid(), $file->get_component(), $file->get_filearea(), $file->get_itemid(), $file->get_filepath(), $file->get_filename());
        echo $fileurl;
    }
} else {
    echo '<p>Please upload an image first</p>';
}
This makes a URL, but if it is clicked, Moodle says that the file does not exist:
mysite.com/pluginfile.php/53/mod_slidesync/slidesyncslides/0/Koala.jpg
In the DB the file is correctly saved!!!
53 mod_slidesync slidesyncslides 0 / Koala.jpg
What am I missing?
Thanks
A long time has passed, but I was working on a plugin and had the same problem.
I managed to solve it.
To provide the file, you need to create the function:
function MYPLUGIN_pluginfile($course, $cm, $context, $filearea, $args, $forcedownload, array $options=array())
Here is the example function: https://docs.moodle.org/dev/File_API#Serving_files_to_users
Remember to change the last call from send_file to send_stored_file in Moodle 2.3+.