Delete all Storage Files in Backand

I need to delete all the storage files in Backand...
In Panel > Hosting > Storage Files it's possible to remove only one file at a time.
In Actions it's also possible to remove only one file per call:
files.delete(filename);

When you delete an app from Backand, it will also delete any stored files for your application. There should be no additional actions needed on your part. If you still have concerns, you can use the command-line interface to point your hosting files at an empty directory, which will have a side effect of deleting the existing files. More information is available at http://docs.backand.com/en/latest/getting_started/hosting/index.html

Related

How to delete one of the Persistent Volume Claims attached to one Storage Class

I created a 'file share' in the storage account and connected it to my Kubernetes cluster (AKS). I connected it as a dynamic file share, following the example from this link:
https://github.com/HoussemDellai/aks-file-share/tree/main/dynamic-file-share.
So I created a Storage Class, and then I made Persistent Volume Claims in each required namespace.
I added a Persistent Volume Claim to the deployment for each application, and everything works fine. All my applications in all the necessary namespaces see the shared folder and can read and write to it.
Now I want to remove or detach this storage for one of the namespaces. If I delete one of the Persistent Volume Claims, then the entire file share in the storage account is deleted. But I need to delete all links and references to this resource in only one namespace. What should I do?
And the second question: after I have deleted all references to the file share in a namespace, what steps should I take if I want to create them again later?
Once again, to emphasize the essence of the question: the main thing is to remove the link to the file share (the Persistent Volume Claim or whatever else references it) in only one namespace, so that in all the other namespaces the referring resources and Persistent Volume Claims continue to work correctly with the same 'file share'!
For your first question, I think you just need to change the reclaimPolicy of your StorageClass from the default (which is Delete) to Retain.
See https://faun.pub/kubernetes-how-to-set-reclaimpolicy-for-persistentvolumeclaim-7eb7d002bb2e for more explanation.
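Note that changing the StorageClass only affects volumes provisioned after the change; for a volume that is already bound, a common approach is to patch the reclaim policy on the bound PersistentVolume itself before deleting the claim. Here is a minimal sketch using the official Kubernetes Python client; the namespace and claim name are placeholders, and this assumes your kubeconfig points at the AKS cluster:

```python
# Sketch: set the bound PV's reclaim policy to Retain, then delete the PVC
# in one namespace without the underlying Azure file share being removed.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config()
v1 = client.CoreV1Api()

namespace = "team-a"               # namespace to detach (placeholder)
claim_name = "shared-files-pvc"    # PVC name in that namespace (placeholder)

# Find the PersistentVolume that this claim is bound to.
pvc = v1.read_namespaced_persistent_volume_claim(claim_name, namespace)
pv_name = pvc.spec.volume_name

# Make sure deleting the claim will not delete the underlying storage.
v1.patch_persistent_volume(
    pv_name,
    {"spec": {"persistentVolumeReclaimPolicy": "Retain"}},
)

# Now the PVC (and the volume mounts in this namespace's deployments)
# can be removed; the PV becomes "Released" and the data stays put.
v1.delete_namespaced_persistent_volume_claim(claim_name, namespace)
```

For the second question, a retained PersistentVolume can typically be re-used later by clearing its claimRef so it becomes Available again, or by creating a new static PV/PVC pair that points at the existing file share.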

Programmatically set default column values based on folder in SharePoint Online

I'm working on enhancing metadata in our SharePoint online (O365) environment. Since a portion of my user base is used to foldering (explorer style), I've started using default column values to automatically set values on any files added to that specific folder (we have content organized categorically by folder currently). An example is our HR documents library - we have separate folders for recruiting, payroll, personnel files, etc. that automatically categorize files added to that folder with the same categories (recruiting, payroll, personnel, etc.). This supports both "search" and "click" users and makes adoption WAY easier while getting important metadata.
I want to implement this in a larger, more dynamic fashion, so manually setting default column values on each folder is not going to be scalable.
How can I reference the top level folder within the library (or even the current folder) for each newly added file and populate the "category" field for that new file with that folder name? I can do some very basic C# or Java code copy/paste, but bonus points for non-coding solutions =)
This problem can be solved without coding. You can implement it with a workflow created in SharePoint Designer.
Create a different view for each functional team, and then use the view filter to show the documents.
When a file is uploaded, use the workflow to set the file's metadata. There are some known limitations: if you upload multiple files at the same time, the metadata may not be set reliably; and if you upload a folder, the workflow will not run for the folder itself, so the files inside it may not get the right metadata.
I was actually able to use MS Flow to accomplish this in a pretty simple and straightforward fashion without managing custom views per team. The concept at a high level was:
(Trigger) When a new document is created in a folder in the library
Get the link of the parent folder of the newly added document
Create a variable (or just code it out in the Flow step) to parse the name of the parent folder out of the parent folder link; it should be all the text to the right of the last "/" (see the sketch after this answer)
Set the category field as the variable
I'm sure you could do the same in a SharePoint Designer workflow, but I prefer Flow because of its visual nature and because it is far easier to troubleshoot.
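The parsing in step 3 is just "take everything after the last slash". The equivalent logic is shown here in Python purely for illustration (in Flow itself you would use an expression over the parent-folder path output of the trigger); the URL is a placeholder:

```python
# Illustration of step 3: extract the parent folder name from the folder link.
parent_link = "https://contoso.sharepoint.com/sites/HR/Shared Documents/Payroll"

# Everything to the right of the last "/" is the folder name / category value.
category = parent_link.rstrip("/").rsplit("/", 1)[-1]
print(category)  # -> "Payroll"
```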

How do services like Dropbox implement delta encoding if their files are stored in the cloud?

Dropbox claims that during syncing only the portions of files that change are transmitted back to the main server, which is obviously great functionality, but how do they perform changes to files stored in the Amazon S3 cloud? For example, let's say a 30-page document on a user's desktop contains changes only on page 4. Dropbox syncs the blocks representing the changes, but what happens on the backend if the files they store are in the cloud? Does that mean they have to download the 30-page document stored in S3 to their server, replace the blocks representing page 4, and then upload it back to the cloud? I doubt this would be the case, because that would be somewhat inefficient. The other option I could think of is that Amazon S3 supports updating a stored file by byte range, so, for example, a PUT request to file X for bytes 100-200 would replace those bytes with the body of the PUT request. So I was curious how companies that use cloud services such as Amazon implement this type of syncing.
Thanks
As S3 and similar storages don't offer filesystem capabilities, anything that pretends to store files and directories needs to emulate a file system. When doing this, files are often split into pages of a certain size, where each page is stored as a separate object in the storage. This way a changed block requires uploading only one page, not the whole file. I should note that with files like office documents this approach can be faulty if the file size changes: for example, if you insert a page at the beginning or delete a page, the content shifts, the whole file changes, and the complete file would need to be re-uploaded. We didn't analyze how Dropbox in particular does its job; I just described the common scenario. There also exist various "patch algorithms", where a patch can be created locally (if Dropbox has an older copy in its local cache) and then applied to one or more blocks on the server.
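To make the split-into-pages idea concrete, here is a small sketch (not Dropbox's actual algorithm) that cuts a file into fixed-size blocks and hashes each one, so only blocks whose hash changed would need to be re-uploaded; the block size and function names are my own choices:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; the size here is illustrative

def block_hashes(path, block_size=BLOCK_SIZE):
    """Return one SHA-256 digest per fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(old_hashes, new_hashes):
    """Indices of blocks that differ (or were appended) and must be re-uploaded."""
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h]
```

This also illustrates the caveat above: inserting data near the start of a file shifts every following block, so all their hashes change and fixed-size blocking degrades toward re-uploading most of the file.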
There are several synchronization tools that transfer deltas over the wire, like rsync, rdiff, rdiff-backup, etc. For bi-directional synchronization with S3 there are paid services such as s3rsync. For pure client-side synchronization, tools like zsync can be considered (which is what many people use to roll out app updates).
An alternative approach would be to tarball a directory, generate a delta file (using rdiff or xdelta3), and upload the delta file using a timestamp as part of the key. In order to sync, all you need to do is perform these two checks client-side:
You have all the delta files from S3. If not, pull them and apply them to generate the latest backup state.
Your last backup state corresponds to your current directory. If not, generate a new delta file and push it to S3.
The concerning factor here is the at-least-100% additional space utilization on the client side, since you keep a full copy of the last backup state. But this approach also lets you revert changes if needed.
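A rough sketch of that tarball-plus-delta flow, assuming boto3 and the xdelta3 command-line tool are available; the bucket name, key prefix, local file names, and the lack of bootstrap/error handling are all simplifications of mine:

```python
import os
import subprocess
import tarfile
import time

import boto3

BUCKET = "my-backup-bucket"      # placeholder
PREFIX = "deltas/"               # placeholder
STATE = "backup_state.tar"       # last synced state, kept locally
CURRENT = "current.tar"          # tarball of the directory as it is now

s3 = boto3.client("s3")

def list_delta_keys():
    """All delta objects in S3, oldest first (timestamped keys sort lexically)."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    return sorted(obj["Key"] for obj in resp.get("Contents", []))

def apply_remote_deltas(applied_keys):
    """Check 1: pull any deltas we haven't applied yet and patch the local state."""
    for key in list_delta_keys():
        if key in applied_keys:
            continue
        s3.download_file(BUCKET, key, "incoming.delta")
        # xdelta3 -d -s <source> <delta> <output>: apply the delta to the old state
        subprocess.run(["xdelta3", "-f", "-d", "-s", STATE,
                        "incoming.delta", "patched.tar"], check=True)
        os.replace("patched.tar", STATE)
        applied_keys.add(key)

def push_local_changes(directory):
    """Check 2: if the directory no longer matches the state, upload a new delta."""
    with tarfile.open(CURRENT, "w") as tar:
        tar.add(directory, arcname=".")
    # xdelta3 -e -s <source> <target> <delta>: encode a delta from state -> current
    subprocess.run(["xdelta3", "-f", "-e", "-s", STATE, CURRENT, "new.delta"],
                   check=True)
    key = f"{PREFIX}{int(time.time())}.delta"
    s3.upload_file("new.delta", BUCKET, key)
    os.replace(CURRENT, STATE)
```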

How do I force a file to be deleted? Windows Server 2008

On my site a user may upload a file (pic, zip, audio, video, whatever). He may then decide to replace it with a newer revision. This user may upload a file, make a post, and then decide to put up a new revision replacing the old one (let's say it's a large zip or tar.gz file). There's a good chance people may be downloading it if he sent out an email, or even an IM for the home user.
The problem: I need to replace the file while people may be downloading it, and it may be some minutes before it can be deleted. I don't want my code to stall until it can delete the file, or to check every second to see if it's unused (especially bad if another user can start a download and take a long time, creating a cycle).
How do I delete the file while users are downloading it? I don't care if their downloads stop; I just care that the file can be replaced and that new downloads get the new revision.
What about referencing the files indirectly?
A mapping script maps a virtual file entry on your site to a real file. If the user wants to upload a new revision of his file, you just update the mapping, not the real file.
You can install a daily task that scans all files and deletes any file that has no mapping and no open connections.
lajuette's answer is right; the easiest solution is to work around the file locking altogether (sketched below):
When a user uploads file foo.zip, internally store it as foo-v1.zip.
Create a mapping file somewhere (database, code, whatever) that maps foo.zip to foo-v1.zip.
Rather than exposing a direct link to the file, expose a link to a service that gets the file: mysite.com/Download?foo.zip or something. This service uses the mapping to determine which version of the file to send to the client.
When a new version is uploaded, create foo-v2.zip and update the mapping file.
It wouldn't be that hard to write a scheduled task that cleans up old, un-mapped files.
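A minimal sketch of that mapping idea, written in Python purely to illustrate the flow (the original question is about Windows Server/IIS, so treat it as a sketch; the folder, mapping-file name, and helper names are my assumptions):

```python
import itertools
import json
import shutil
from pathlib import Path

STORAGE = Path("files")            # where versioned files live (placeholder)
MAPPING = Path("mapping.json")     # maps public name -> current versioned file

def load_mapping():
    return json.loads(MAPPING.read_text()) if MAPPING.exists() else {}

def save_mapping(mapping):
    MAPPING.write_text(json.dumps(mapping))

def store_new_revision(public_name, uploaded_path):
    """Copy the upload to foo-vN.zip and repoint the mapping; old revisions stay on disk."""
    mapping = load_mapping()
    for n in itertools.count(1):
        target = STORAGE / f"{Path(public_name).stem}-v{n}{Path(public_name).suffix}"
        if not target.exists():
            break
    shutil.copyfile(uploaded_path, target)
    mapping[public_name] = target.name
    save_mapping(mapping)

def resolve_download(public_name):
    """What a request like /Download?foo.zip would serve: the currently mapped revision."""
    return STORAGE / load_mapping()[public_name]
```

In-progress downloads keep reading the old revision, and the scheduled cleanup task can later delete any file in the folder that the mapping no longer references and that has no open handles.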
If you're opposed to a database, and if the filenames are in a fixed format (such as user/id.ext), you could append a revision number to the id, enumerate the folder using a pattern (user/id-*), and use the latest revision.
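For that pattern-based variant, picking the latest revision can be as simple as the following sketch (the paths and the user/id-<n>.ext naming scheme are assumptions):

```python
from glob import glob

def latest_revision(user, file_id):
    """Return the highest-numbered revision matching user/id-<n>.ext."""
    candidates = glob(f"{user}/{file_id}-*")
    # e.g. "alice/42-3.zip" -> revision 3; raises if there are no revisions yet
    return max(candidates, key=lambda p: int(p.rsplit("-", 1)[1].split(".")[0]))
```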

VB.NET Document Storage

I am attempting to add a document storage module to our AR software.
I will be prompting the user to attach a doc/image to their account. I will then put a copy of this file into our folder so that we can reference it without having to rely on them keeping the file in its original place. This system is not using a database; instead it's using multiple flat files.
I am looking for guidance on how to handle these files once they have attached them to our system.
How should I store these attached files?
I was thinking I could copy the file over to a subdirectory and then rename it to an auto-generated number so that we do not have duplicates. The bad thing about this is that the contents of the folder can get rather large.
Anyone have a better way? Should I create directories and store them...?
This system is not using a database; instead it's using multiple flat files.
This sounds like a multi-user system. How are you handling concurrent access issues? Your answer to that will greatly influence anything we tell you here.
Since you aren't doing anything special with your other files to handle concurrent access, what I would do is add a new folder under your main data folder specifically for document storage, and write your user files there. Additionally, you need to worry about name collisions. To handle that, I'd name each file there by appending the date and username to the original file name and taking the MD5 or SHA-1 hash of that string. Then add a file to your other data files to map the hash values back to the original file names for users.
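A sketch of that naming scheme, written in Python purely for illustration (the same steps translate directly to VB.NET); the folder names and the flat mapping-file format are assumptions of mine:

```python
import hashlib
import shutil
from datetime import date
from pathlib import Path

DOC_ROOT = Path("data/documents")      # new folder under the main data folder (assumption)
MAP_FILE = Path("data/documents.map")  # flat file: hash|user|original name (assumption)

def attach_document(source_path, username):
    """Copy a user's file into DOC_ROOT under a hashed name and record the mapping."""
    original = Path(source_path).name
    key = f"{original}|{username}|{date.today().isoformat()}"
    hashed = hashlib.sha1(key.encode("utf-8")).hexdigest()
    stored = DOC_ROOT / (hashed + Path(source_path).suffix)

    DOC_ROOT.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(source_path, stored)

    # Append one line to the flat mapping file so the original name can be recovered.
    with MAP_FILE.open("a", encoding="utf-8") as f:
        f.write(f"{hashed}|{username}|{original}\n")
    return stored
```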
Given your constraints (and assuming a limited number of total users) I'd also be inclined to go with a "documents" folder -- plus a subfolder for each user. Each file name should include the date to prevent collisions. Over time, you'll have to deal with getting rid of old or outdated files either administratively or with a UI for users. Consider setting a maximum number of files or maximum byte count for each user. You'll also want to handle the files of departed users.