Folders in an Azure storage account are not deleted even after all blobs in the folder are deleted

I have a blob container in Azure called UploadedFiles.
It has folders, and each folder contains some blobs.
Even after all the blobs are deleted, the folder is still visible.
How can I make that folder disappear?
CloudBlobDirectory directory = GetContainerHandle(eContainer).GetDirectoryReference(strVirtualDirectory);

// List the blobs under the virtual directory and delete each one.
foreach (CloudBlockBlob blockBlob in directory.ListBlobs()) {
    blockBlob.Delete();
}

Related

Uploading and saving a file to IIS virtual directory outside the root folder

Based on Microsoft's website, as well as this forum, the general consensus seems to be to keep uploaded user files outside of the wwwroot folder (C:\inetpub\wwwroot), and even better outside of the system drive. I have set up a virtual directory (C:\inetpub\files) in IIS for my file uploads, which is outside of wwwroot but still on the C drive (we only have one drive and I cannot partition it to make another drive). So hopefully this is still considered secure in that aspect! My issue, however, is that I use the following code to get the directory in my hosting environment:
var filePath = Path.Combine(env.WebRootPath, document.DOCUMENT_LOCATION);
var fileName = document.FILENAME_AFTER_UPLOAD;
var fullPath = Path.Combine(filePath, fileName);
I am not sure exactly what file path I am supposed to use for saving to the virtual directory. The virtual directory has an alias of "files", so its virtual path is /files. Do I use env.WebRootPath + "/files", or is there some other way to access the virtual directory/path? For background, document is a model object from a SQL query that returns the file path to save to and the filename we create in SQL Server.
So you want to upload a file outside of env.WebRootPath, i.e. outside the wwwroot folder; for that, try the code below:
var filePath = Path.Combine(Directory.GetCurrentDirectory(), "wwwroot/img", document.Document.FileName);
There is no need to use env.WebRootPath above, because you want to build the path dynamically from the application's current directory.
Or, if you want to upload to the C drive instead of wwwroot:
// A rooted path such as @"C:\" makes Path.Combine ignore any earlier segments, so Directory.GetCurrentDirectory() is not needed here.
string savePath = Path.Combine(@"C:\", model.FormFile.FileName);

Move files from S3 subfolder to the S3 bucket root

I need to move all the files from a subfolder to its S3 bucket root.
Right now I'm using the AWS CLI from cmd:
aws s3 mv s3://testbucket/testsubfolder/testsubfolder2/folder s3://testbucket/
My main issue is that the subfolder "folder" changes every day after a TeamCity run. Is there any way to know whether there is a new folder inside "testsubfolder2" and copy its content to the S3 bucket root?
I want to automate this: every day we run reports that are stored in S3, but TeamCity creates a project folder tree, and we need all the files in the S3 root.
Thanks.
Here is some code that will move any objects in a given Prefix (and sub-folders under that Prefix) into the root of the bucket. (Actually, it copies the object and then deletes it.)
import boto3
BUCKET = 'stack-move'
PREFIX = 'foo1/foo2/'
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket(BUCKET)
for object in bucket.objects.filter(Prefix=PREFIX):
    print(object.key)
    copy_source = {
        'Bucket': BUCKET,
        'Key': object.key
    }
    target_key = object.key[object.key.rfind('/')+1:]  # Get just the name after the last slash
    # Copy the object
    target = bucket.Object(target_key)
    target.copy(copy_source)
    # Delete the object
    object.delete()
You can trigger a Lambda function when a file is uploaded to this testsubfolder2 directory.
Check this tutorial from AWS: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
Be careful with your S3 event rules, because you can create a loop and increase your bill: aws s3 mv uses COPY and DELETE behind the command line.
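A minimal sketch of that Lambda approach (the handler name, the prefix scoping, and the loop guard are my assumptions, not part of the answer above): a function triggered by s3:ObjectCreated:* events under testsubfolder/testsubfolder2/ copies each new object to the bucket root and then deletes the original.
import boto3
import urllib.parse

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Assumed trigger: s3:ObjectCreated:* scoped to the prefix "testsubfolder/testsubfolder2/".
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])

        # Guard against re-processing: only move objects that still live under a prefix.
        if '/' not in key:
            continue

        target_key = key.rsplit('/', 1)[-1]  # keep just the file name
        if not target_key:
            continue  # skip "folder marker" objects whose key ends with '/'

        # mv = copy to the bucket root, then delete the original.
        s3.copy_object(Bucket=bucket, Key=target_key,
                       CopySource={'Bucket': bucket, 'Key': key})
        s3.delete_object(Bucket=bucket, Key=key)
Because the copies land at the bucket root (no slash in the key), the guard above keeps the function from triggering itself even if the event notification is configured without a prefix filter.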

AWS s3 event ObjectRemoved - get file

I am trying to access a file that has been deleted from an S3 bucket using an AWS Lambda function.
I have set up a trigger for s3:ObjectRemoved:*; however, after extracting the bucket and file name of the deleted file, the file is already gone from S3, so I do not have access to its contents.
What approach should be taken with AWS Lambda to get the contents of a file after it has been deleted from an S3 bucket?
The comment proposed by @keithRozario was useful; however, with versioning, a plain GET request returns a not-found error, as per the S3 documentation.
I went with @Ersoy's suggestion of creating a 'bin' bucket or directory/prefix, keeping a copy there under the same file name, and working with that copy as required.
In my case, I copy the initial object to a bin directory when it is created, and then access that folder when the file is deleted from the main upload directory.
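A minimal sketch of that pattern, assuming a single bucket with two hypothetical prefixes, uploads/ and bin/ (none of these names come from the original post): one handler copies every newly created object into bin/, and the ObjectRemoved handler reads the preserved copy from there.
import boto3
import urllib.parse

s3 = boto3.client('s3')

UPLOAD_PREFIX = 'uploads/'  # assumed main upload prefix
BIN_PREFIX = 'bin/'         # assumed "recycle bin" prefix

def on_object_created(event, context):
    # Triggered by s3:ObjectCreated:* on the uploads/ prefix:
    # keep a shadow copy of the object under bin/ with the same file name.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        bin_key = BIN_PREFIX + key[len(UPLOAD_PREFIX):]
        s3.copy_object(Bucket=bucket, Key=bin_key,
                       CopySource={'Bucket': bucket, 'Key': key})

def on_object_removed(event, context):
    # Triggered by s3:ObjectRemoved:* on the uploads/ prefix:
    # the original is already gone, so read the contents from the bin/ copy.
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        bin_key = BIN_PREFIX + key[len(UPLOAD_PREFIX):]
        body = s3.get_object(Bucket=bucket, Key=bin_key)['Body'].read()
        # ... process `body` as required ...
Scoping each trigger to the uploads/ prefix keeps the copies written to bin/ from generating further events.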

How to create directories in AWS S3 using Apache NiFi putS3Object

I have a working config to push files from a directory on my server to an S3 bucket. NiFi is running on a different server, so I have a GetSFTP. The source files have subfolders, which my current putS3Object config does not support: it jams all of the files at the root level of the S3 bucket. I know there is a way to get putS3Object to create directories using defined folders. The Object Key is set to ${filename} by default. If I set it to, say, my/directory/${filename}, it creates two folders, my and the subfolder directory, and puts the files inside. However, I do NOT know what to set the Object Key to in order to replicate the files' source directories.
Try ${path}/${filename} based on this in the documentation:
Keeping with the example of a file that is picked up from a local file system, the FlowFile would have an attribute called filename that reflected the name of the file on the file system. Additionally, the FlowFile will have a path attribute that reflects the directory on the file system that this file lived in.

Copy files from Azure BLOB storage to SharePoint Document Library

I cannot find a way to copy files/folders from Blob storage to a SharePoint document library. So far, I've tried AzCopy and PowerShell:
* AzCopy cannot connect to SharePoint as the destination
* PowerShell works for local files, but the script cannot connect to Blob storage (Blob storage cannot be mapped as a network drive)
For anyone else who needs to do this, AzCopy worked; I just had to use a different destination. When you map a SharePoint document library as a network drive, it assigns a drive letter, but it also shows the UNC path. That's what you have to use:
/Dest:"\\Tenant.sharepoint.com@SSL\DavWWWRoot\Sites\sitename\library"
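For illustration only (the storage account, container, and key below are placeholders, and this assumes the classic AzCopy v8 /Source, /SourceKey and /S switches that go with the /Dest flag shown above), the full command would look something like:
AzCopy /Source:https://<storageaccount>.blob.core.windows.net/<container> /SourceKey:<storage-account-key> /Dest:"\\Tenant.sharepoint.com@SSL\DavWWWRoot\Sites\sitename\library" /S
The /S switch copies recursively, so blobs in virtual sub-folders should be carried across into matching folders in the document library.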