PHPSpreadsheet to Read Password-Protected XLSX (PHP 7)

I've spent all day searching both the documentation and the web, but there doesn't seem to be a way to programmatically open a password-protected spreadsheet with PhpSpreadsheet, or with the Spout library for that matter.
In PhpSpreadsheet I'm interested in the XLSX reader, and I have followed the inheritance chain up to the root class, but none of the classes have a "setPassword"-type method (I notice the writer class does, though).
Is this a genuine oversight in both libraries, or have I missed something? Is there any other way to do this in PHP? Or, next best, is there a way to strip the password off a set of XLSX files programmatically and then read them with either library?

Seeing no other solution, I solved this by writing a VBA macro that opens each file and saves it without the password. The files could then be processed with PhpSpreadsheet.
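If stepping outside PHP for the pre-processing is acceptable, the same password-stripping step can also be scripted without Excel. As an illustration, the third-party Python library msoffcrypto-tool decrypts password-protected Office files; this is a sketch only, and the file names and password here are placeholders, not values from the question:

```python
# pip install msoffcrypto-tool
import msoffcrypto

# Decrypt a password-protected XLSX into a plain copy that
# PhpSpreadsheet (or Spout) can then read normally.
with open("protected.xlsx", "rb") as src, open("decrypted.xlsx", "wb") as dst:
    office = msoffcrypto.OfficeFile(src)
    office.load_key(password="the-password")  # raises if the password is wrong
    office.decrypt(dst)
```

Run in a loop over the directory, this plays the same role as the VBA macro: a one-time pass that produces unprotected copies for the PHP code to consume.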


Sub VI connector for different cluster types

I want to program an API that generates JSON files.
The standard LabVIEW VI "Flatten to Json" has an "anything" connector, which I also want to use.
How is that possible?
https://www.ni.com/docs/de-DE/bundle/labview-2020/page/glang/flatten_to_json.html
Use the existing JSONText library found in LV2019 and later. The subVIs therein are malleable VIs that do exactly what you're requesting. If for some reason you don't like how they work, they're open source and editable.

Nextcloud: in which source file is file uploading handled?

I am going to make an app, but I am stuck on one issue: I am unable to find which Nextcloud source file contains the code that handles file uploads.
The app will make a duplicate of each uploaded file and save it in the same directory under a slightly changed name.
The public API for handling files lives in the \OCP\Files namespace, the implementation is in the \OC\Files namespace (https://github.com/nextcloud/server/tree/master/lib/private/Files).
Rather than modifying this code, you should use the hooks functionality (never use classes or functions from the \OC\* namespace!): https://docs.nextcloud.com/server/12/developer_manual/app/hooks.html. This way you can execute your own code when a file is created, updated, and so on.
I guess you need the postWrite hook. Some sample code (untested):
\OC::$server->getRootFolder()->listen('\OC\Files', 'postWrite', function (\OCP\Files\Node $node) {
    $node->copy('my/path');
});

.NET ZipPackage vs DotNetZip when getting streams to entries

I have been using the ZipPackage-class in .NET for some time and I really like the simple and intuitive API it has. When reading from an entry I do entry.GetStream() and I read from this stream. When writing/updating an entry I do entry.GetStream(FileAccess.ReadWrite) and write to this stream. Very simple and useful because I can hand over the reading/writing to some other code not knowing where the Stream comes from originally.
Now, since the ZipPackage API doesn't support entry properties such as LastModified, I have been looking into other zip APIs such as DotNetZip. But I'm a bit confused over how to use it. For instance, to read from an entry I first have to extract the entire entry into a MemoryStream, seek to the beginning, and hand this stream over to my other code. And to write to an entry I have to supply a stream that the ZipEntry itself can read from. This seems very backwards to me. Am I using this API in the wrong way?
Isn't it possible for the ZipEntry to deliver the file straight from the disk where it is stored, decompressing it as the reader reads it? Does it really need to be fully extracted into memory first? I'm no expert, but it seems wrong to me.
Using the DotNetZip library does not require you to read the entire zip file into a memory stream. When you instantiate an instance of ZipFile as shown below, the library only reads the zip file's headers, which contain properties such as the last-modified time. Here is an example of opening a zip file; DotNetZip reads the headers and constructs a list of all entries in the zip:
using (Ionic.Zip.ZipFile zipFile = Ionic.Zip.ZipFile.Read(this.FileAbsolutePath))
{
...
}
It's up to you to then extract entries, whether to a stream, to the file system, etc. In the example below, I'm using the string indexer on zipFile to get an entry named SomeFile.txt; the matching ZipEntry object is then extracted to a memory stream.
MemoryStream memStr = new MemoryStream();
zipFile["SomeFile.txt"].Extract(memStr); // Response.OutputStream);
Zip entries must be read into the .NET process space in order to be inflated (decompressed); there's no way to bypass that and go straight to the filesystem. This is similar to how a shell zip extractor works: the Windows shell extension for 7-Zip, or Windows' built-in Compressed Folders, has to read entries into memory and then write them to the file system before you can open one.
Okay, I'm answering this myself because I found the answers. There are methods in DotNetZip for both of the things I wanted: to open a read stream, myZipEntry.OpenReader(), and to open a write stream, myZipFile.UpdateEntry(e, (fn, obj) => Serialize(obj)). This works fine.

Stopping invalid file type or file name submissions in coldfusion

So, I'm having this lovely issue where people like to submit invalid file types or funky-named files (like, hey_i_like_"quotes".docx). Sometimes they will even try to upload a .html link.
How should I check for something like this? It seems to create an error every time someone submits a poorly named file.
Should I create a cfscript block that checks it before submission? Or is there an easier way?
If it ran before submission, it would be JavaScript, not cfscript. JavaScript can always be circumvented, so I'd say you're better off doing it server-side with ColdFusion. Personally, I'd just wrap the whole thing in a try/catch (you should do this anyway as a matter of course with any file-upload code) and throw an error back at them if their filename is no good.
When you say "submit", are you using cffile to let your users upload files?
If so, use the "accept" attribute with a try/catch around it. For example:
<cftry>
    <cffile action="upload"
            fileField="FileContents"
            destination="c:\files\upload\"
            accept="image/jpeg, application/msword">
    <cfcatch type="Any">
        <p>Sorry, we could not upload your file!</p>
    </cfcatch>
</cftry>
I personally would not rely on JavaScript alone, as it can be disabled, and then you are back in the same boat.
Hope this helps.
On the server, as part of validation, use reFindNoCase() along with an appropriate regex to check for a properly formatted file path. You can find lots of example regexes for file paths on the internet, such as this one. Hope that helps.
As Duncan pointed out, client-side validation would most likely be in JavaScript. Personally, if I had the time and resources, I would add it as a convenience for the end user: if they upload an enormous PDF when a DOCX is required, it would be annoying not to receive a message until the upload completes.
As far as filenames go, it seems to me that the simplest solution (and one I've used in the past) is to assume all filenames are bad and rename them. There are several ways to do this. If you need to preserve the original filename, I would just use urlEncodedFormat() to clean the filename into something web-friendly. If you need to preserve all versions, you can append a date/time stamp, so bob.docx becomes bob_201104051129.docx or some such. If you must keep the original filename without any changes, I would recommend setting up a DB table as a pointer system, keeping the original name, timestamp, and other metadata there, and referring to the file by renaming it to the ID.
But urlEncodedFormat() is probably enough for what you've outlined.
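The assume-everything-is-bad renaming idea above is language-agnostic. As an illustration (in Python rather than CFML; the function name and character whitelist are my own choices, not anything from the answer), a sanitizer that keeps only safe characters and appends a timestamp might look like this:

```python
import re
from datetime import datetime

def safe_upload_name(original, now=None):
    """Assume every incoming filename is bad: keep only safe characters
    and append a timestamp so repeated uploads never collide."""
    now = now or datetime.now()
    stem, dot, ext = original.rpartition(".")
    if not dot:  # the filename has no extension at all
        stem, ext = original, ""
    stem = re.sub(r"[^A-Za-z0-9_-]", "_", stem) or "file"
    ext = re.sub(r"[^A-Za-z0-9]", "", ext)
    stamp = now.strftime("%Y%m%d%H%M")
    return f"{stem}_{stamp}.{ext}" if ext else f"{stem}_{stamp}"
```

With this, bob.docx becomes bob_201104051129.docx, and the quoted example hey_i_like_"quotes".docx is reduced to underscores, so the stored name is always filesystem- and URL-safe.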
For user experience it's best to validate client-side, but it's not bad at all to double-check server-side too.
For the client-side part, I recommend the jQuery Validation plugin; it's easy to use.

powershell - check if pdf is encrypted

Using PowerShell, I need to loop over a series of PDF files and perform some operations on them using pdftk. I'd like to know whether there is a method to detect if a PDF is encrypted; that way, if it is, I can skip it and move on to the next file. Thanks for your attention.
Edit: while waiting for an answer, I've found that iTextSharp has an IsEncrypted method.
After I load the assembly
[System.Reflection.Assembly]::LoadFrom("c:\my_path\itextsharp.dll")
what do I have to do to use that method?
[System.Reflection.Assembly]::LoadFrom("c:\itext\itextsharp.dll")
$itext = new-object itextsharp.text.pdf.PdfReader("c:\itext\1.pdf")
$itext.isEncrypted()
You should get either true or false as a result.
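If pulling in iTextSharp feels heavy just for this check, a rough alternative is to look for an /Encrypt entry in the file's trailer dictionary, which PDFs protected with the standard security handler carry. A heuristic sketch (shown in Python for illustration; the function name is my own, and a raw byte scan is not a full PDF parse, so unusual files can fool it):

```python
def pdf_looks_encrypted(data: bytes) -> bool:
    """Heuristic: password-protected PDFs reference an /Encrypt dictionary
    from the trailer. Scanning the raw bytes is crude but works for
    typical files produced by common tools."""
    return b"/Encrypt" in data
```

In the PowerShell loop the same idea applies directly: read the file's bytes, test for the /Encrypt marker, and only hand unencrypted files to pdftk.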
For people who reach this page searching for a way to check whether files are NTFS-encrypted, this is the way to go:
[System.IO.File]::GetAttributes($RootFolder).ToString().Contains("Encrypted")