I am going to make an app, but I am stuck on one issue: I am unable to find which file in Nextcloud contains the code that handles file uploads.
I want to find the file where the file-upload code lives.
The app will make a duplicate of each uploaded file and save it in the same directory under a slightly changed name.
The public API for handling files lives in the \OCP\Files namespace; the implementation is in the \OC\Files namespace (https://github.com/nextcloud/server/tree/master/lib/private/Files).
Rather than modifying this code, you should use the hooks functionality (never use classes or functions in the \OC\* namespace!): https://docs.nextcloud.com/server/12/developer_manual/app/hooks.html. This way you can execute your own code when a file is created, updated, etc.
I guess you need the postWrite hook. Some sample code (untested):
\OC::$server->getRootFolder()->listen('\OC\Files', 'postWrite', function (\OCP\Files\Node $node) {
    // Duplicate the written file in the same directory under a slightly changed name,
    // e.g. /alice/files/report.txt -> /alice/files/report.txt.copy.
    // Note: the copy itself may fire postWrite again, so guard against re-copying copies in real code.
    $node->copy($node->getPath() . '.copy');
});
Related
I am working on a webapp where the user provides an image file-text sequence. I am compressing the sequence into a single ZIP file using JSZip.
On the server I simply use PHP's move_uploaded_file to move the upload to the desired location after having checked the file upload error status.
A test ZIP file created in this way can be found here. I have downloaded the file, expanded it in Windows Explorer and verified that its contents (two images and some HTML markup in this instance) are all present and correct.
So far so good. The trouble begins when I try to fetch that same ZIP file and expand it using JSZip.loadAsync, which consistently reports Corrupted zip: missing 210 bytes. My PHP code for squirting back the ZIP file is actually pretty simple. Shorn of the various security checks I have in place, the essential bits of that code are listed below:
if (file_exists($file))
{
    ob_clean();
    readfile($file);
    http_response_code(200);
    die();
} else http_response_code(399);
where the 399 code is interpreted in my webapp as a need to create a new resource locally instead of trying to read existing resource data. The trouble happens when I use the result text (on an HTTP response of 200) and feed it to JSZip.loadAsync.
What am I doing wrong here? I assume there is something too naive about the way I am using readfile at the PHP end but I am unable to figure out what that might be.
What we set out to do
Attempt to grab a server-side ZIP file from JavaScript
If it does not exist send back a reply (I simply set a custom HTTP response code of 399 and interpret it) telling the client to go prepare its own new local copy of that resource
If it does exist send back that ZIP file
Good so far. However, reading the existing ZIP file into PHP and sending it back does not make sense and is fraught with problems. My approach now is to send back an HTTP response code of 302, which the client interprets as an instruction to "go get that ZIP for yourself directly".
At this point, to get the ZIP "directly", simply follow the instructions in this tutorial on MDN.
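For what it's worth, here is a rough, untested sketch of that client-side step, assuming the ZIP is reachable at a direct URL (the URL and entry name below are placeholders of mine). The key point is to read the response as binary data (an ArrayBuffer) rather than as text, since treating the ZIP as text is what corrupts it:
// Fetch the ZIP as binary data, never as text
fetch('/uploads/archive.zip')
  .then(response => {
    if (!response.ok) throw new Error('ZIP not available: ' + response.status);
    return response.arrayBuffer();            // binary-safe, unlike response.text()
  })
  .then(buffer => JSZip.loadAsync(buffer))     // JSZip accepts an ArrayBuffer directly
  .then(zip => zip.file('page.html').async('string'))  // e.g. read one entry back out
  .then(html => console.log(html))
  .catch(err => console.error(err));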
I have a web application that hosts several tools, e.g. docx-to-pdf, pdf-to-docx, etc. Each is a Vue module file within the application.
When the user goes to the docx-to-pdf tool and uploads a file using a dropzone, the server's file manager generates a uuid (I call it a module session id), uses it as the directory name for the uploaded file, and returns the uuid to the browser. When the user clicks 'convert', the uuid is sent along with the 'convert' command, and the server performs the conversion and lets the user download the converted file.
This works fine until I get to a tool called combine-pdf, which has 2 dropzones on the page. When I upload file1 in dropzone1 and file2 in dropzone2 at the same time, each goes into its own directory because the server's file manager thinks each is the first file to be uploaded. Unless I complete file1's upload before starting file2, when I click 'combine' the server only has one of the two uuids, tries to combine, and finds only one file there.
The most logical solution I can think of is to generate the uuid in Vue; when I upload files to the server, it validates that it is a proper uuid and uses it throughout the session in this module. I can put this in Vue's created hook. This works, but I find that as my teammates and I add modules, we keep repeating this same code in every module, which seems repetitive.
Is there a place where I can generate this uuid once and pass it to each module's data, so it is written once but every module still gets a fresh uuid?
I thought of having a parent module for all these tool modules and doing the uuid generation in its created hook, but that hook only runs once, not every time I visit a module.
You could move the fileUpload function to the dropzone parent so that each dropzone component emits its file to the parent. The parent can then upload both files as an array to the server and return an array of uuids to the client.
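A rough, untested sketch of that idea (component, event, and endpoint names are assumptions of mine): each dropzone child emits its file instead of uploading it, and the parent performs a single upload so the server creates one session for all of the files:
// Parent component for a tool such as combine-pdf (hypothetical names)
export default {
  data() {
    return {
      files: [],   // files collected from the child dropzones
      uuids: []    // uuids returned by the server
    };
  },
  methods: {
    // Bound to e.g. <dropzone @file-added="onFileAdded"> on each child,
    // where the child calls this.$emit('file-added', file) instead of uploading.
    onFileAdded(file) {
      this.files.push(file);
    },
    // Called when the user clicks 'combine': one request for all files.
    async uploadAll() {
      const form = new FormData();
      this.files.forEach((file, i) => form.append('files[' + i + ']', file));
      const response = await fetch('/upload', { method: 'POST', body: form });
      this.uuids = await response.json();   // assumed response shape: an array of uuids
    }
  }
};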
I am currently trying to extract property values from my properties file, but I am running into some problems. I can't test this in the ML Query Console because the properties file doesn't exist there. I am currently trying to grab the values from the file like this:
let $port := #{#properties["ml.properties-name"]}
I've also looked at
xdmp:document-get-properties(
  $uri as xs:string,
  $property as xs:QName
)
However, I believe that is limited to .xml files. Does anyone have a way or workaround for accessing these values? I can't seem to find one. I've looked at some documentation on MarkLogic's website but can't seem to get anything to work. The way I was accessing them before was in Ruby, through monkey-patching that allowed me to access those private fields. The problem with that is that the Ruby script I call is only called once, while my .xqy file runs every minute and sends args to another function. I need to access those args from the properties file; right now I just have them hard-coded in. Any thoughts?
Thanks
You cannot access deployment properties like that, but you can pass them along with the deployment. If you create a new REST app with the latest Roxy, you should get a copy of this config.xqy added to src/config/:
https://github.com/marklogic-community/roxy/blob/master/deploy/sample/custom-config.xqy
That file is treated specially when deployed to the modules database: property references are replaced inside it. In your case, add another variable and give it a string value following the #ml.xyz pattern:
declare variable $c:port := "#ml.property-name";
You can then import the config lib, and use it in your code.
These so-called Deployer Substitutions are described in more detail on the Roxy wiki:
https://github.com/marklogic-community/roxy/wiki/Deployer-Substitutions
I have an existing FineUploader implementation for small files using the Traditional (upload-to-server) version which is working great. However, I'd like to also allow Direct S3 uploads from a different part of the application which deals with large attachments, without rewriting the existing code for small files.
Is there some way to allow both Direct S3 and Traditional uploads to work alongside each other? This is a single-page application, so I can't just load one or the other fine-uploader versions depending on which page I'm on.
I tried just including both fine-uploader JS files, but it seemed to break my existing code.
Client-side code:
$uploadContainer = this.$('.uploader')
$uploadButton = this.$('.upload-button')
$uploadContainer.fineUploader(
  request:
    endpoint: @uploadUrl
    inputName: @inputName
    params:
      authenticity_token: $('meta[name="csrf-token"]').attr('content')
  button: $uploadButton
).on 'complete', (event, id, fileName, response) =>
  @get('controller').receiveUpload(response)
Good find, @Melinda.
Fine Uploader lives within a custom-named namespace so that it does not conflict with other potential global variables; this is the qq namespace (historically named). What is happening is that each custom build redeclares this namespace, along with all member objects, when you include it in the <script> tags on your page.
I've opened an issue on our bug tracker that explains the problem in more technical detail, and we're looking to prioritize a fix to the customize page so that in the future no one will have this issue.
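In the meantime, one possible (untested) workaround is to alias the global qq object right after each build's script tag, so that the second build's redeclaration does not clobber the first; the build file names below are just placeholders:
// In the page, load each custom build and immediately save a reference to its namespace:
//   <script src="traditional.fine-uploader.js"></script>
//   <script>window.qqTraditional = qq;</script>
//   <script src="s3.fine-uploader.js"></script>
//   <script>window.qqS3 = qq;</script>
// Then instantiate each uploader from its own saved reference:
var smallFileUploader = new qqTraditional.FineUploader({
  element: document.getElementById('small-file-uploader'),
  request: { endpoint: '/uploads' }
});
var largeFileUploader = new qqS3.s3.FineUploader({
  element: document.getElementById('large-file-uploader'),
  request: { endpoint: 'https://my-bucket.s3.amazonaws.com' }  // plus the usual accessKey/signature options
});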
I have a PPC2003 project in VS2005. I have added a resource file (SomeResources.resx) to the project. I can access the test string I have in the file by using My.Resources.SomeResources.MyTestString (I am using the default Custom Tool Name that VS provides).
When the Build Action property of the resource file is set to Embedded Resource, the application references MyTestString successfully.
But I do not want to embed the file, so that its string values can be modified after it has been deployed/installed.
I therefore changed the Build Action to Content so that the file gets copied out to the device for potential future manipulation. When I call MyTestString, I get the following error:
MissingManifestResourceException
Stack Trace:
   at System.Resources.ResourceManager.InternalGetResourceSet()
   at System.Resources.ResourceManager.InternalGetResourceSet()
   at System.Resources.ResourceManager.InternalGetResourceSet()
   at System.Resources.ResourceManager.GetString()
   at MyApp.My.Resources.SomeResources.get_MyTestString()
   at MyApp.fMain.fMain_Load()
   at System.Windows.Forms.Form.OnLoad()
   at System.Windows.Forms.Form._SetVisibleNotify()
   at System.Windows.Forms.Control.set_Visible()
   at System.Windows.Forms.Application.Run()
   at MyApp.fMain.Main()
As the file is not embedded, do I perhaps need to load it manually first? If so, how? Any other ideas? Is it just not possible to do what I'm trying to achieve, and should I create my own XML file/reader instead?
Resources (resx files) are specifically designed to be compiled into the application. If you want it to be an editable content file on the target, then you have to approach it differently and use something like an XML file and wrap that with accessors (akin to the Configuration namespace stuff in the full framework).