How to check if multiple files exist in an Azure container - azure-storage

To check whether a single blob exists in an Azure container, we have the solution below:
public bool DoesFileExistsInContainer(string fileName, string containerName)
{
    try
    {
        if (fileName == null)
        {
            throw new ArgumentException("File name to be moved is empty");
        }
        CloudBlobContainer containerReference = blobClient.GetContainerReference(containerName);
        CloudBlockBlob blob = containerReference.GetBlockBlobReference(fileName);
        bool isFileExist = blob.Exists();
        return isFileExist;
    }
    catch (StorageException ex)
    {
        // log the full exception before rethrowing
        Logger.LogError("Error while checking if blob exists: " + ex);
        throw;
    }
}
But I want to check whether multiple files exist in the Azure container:
string[] filesToSearchInBlob = {"file1.xml", "file2.xml", "file3.xml"};
Is there a more efficient way to check than a foreach loop, e.g. using LINQ? Can we do it in a better way?
Thanks in advance
Vinu

I don't think there is a more efficient way than a foreach loop. If performance is a concern, you may consider invoking the Exists method for all blobs concurrently rather than sequentially.
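For illustration, a minimal sketch of that idea using ExistsAsync with Task.WhenAll so the checks run concurrently (DoAllFilesExistInContainerAsync is a hypothetical helper name; it assumes the same blobClient field as your snippet, plus using directives for System.Linq and System.Threading.Tasks):
public async Task<bool> DoAllFilesExistInContainerAsync(string[] fileNames, string containerName)
{
    CloudBlobContainer container = blobClient.GetContainerReference(containerName);
    // start one existence check per blob, then await them all together
    Task<bool>[] checks = fileNames
        .Select(name => container.GetBlockBlobReference(name).ExistsAsync())
        .ToArray();
    bool[] results = await Task.WhenAll(checks);
    return results.All(exists => exists);
}
Another option is to list the container's blobs once and compare the listing against your file names, trading one request per file for a single (possibly paged) listing call.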

Related

How to upload and read an Excel file (.xlsx) in a Blazor Server project

I have a Blazor Server project and I need to upload an Excel file whose data is used to create objects of an entity. I have searched a lot and have not found anything that works for me. I believe my problem is accessing the file so that I can then do what I want with it.
In my Blazor component I have:
<InputFile OnChange="@ImportExcelFile" accept=".xlsx" multiple="false"></InputFile>

@code {
    async Task ImportExcelFile(InputFileChangeEventArgs e)
    {
        await EnrollmentService.CreateEnrollmentByExcel(e);
    }
}
In my EnrollmentService.cs I need to read the file.
If anyone can help me I would be very grateful.
I can now access the uploaded file. I researched and found several approaches, but they didn't satisfy my requirements because they stored the file in a folder; I just wanted to read the data and keep it in memory. This is what helped me. Thanks.
async Task ImportExcelFile(InputFileChangeEventArgs e)
{
    foreach (var file in e.GetMultipleFiles(1))
    {
        try
        {
            using (MemoryStream ms = new MemoryStream())
            {
                // copy data from the browser file to a memory stream
                await file.OpenReadStream().CopyToAsync(ms);
                // position the cursor at the beginning of the memory stream
                ms.Position = 0;
                // create an ExcelPackage (EPPlus) from the memory stream
                ExcelPackage.LicenseContext = LicenseContext.NonCommercial;
                using (ExcelPackage package = new ExcelPackage(ms))
                {
                    ExcelWorksheet ws = package.Workbook.Worksheets.FirstOrDefault();
                    int colCount = ws.Dimension.End.Column;
                    int rowCount = ws.Dimension.End.Row;
                    var s = ws.Cells[2, 2].Value;
                    // rest of the code here...
                }
            }
        }
        catch (Exception)
        {
            // let the caller handle the failure
            throw;
        }
    }
}
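One caveat worth adding (based on Blazor's documented defaults, not on the original post): OpenReadStream rejects files larger than roughly 500 KB unless you raise the limit explicitly, so for real spreadsheets you may need something like:
// allow files up to ~10 MB (an arbitrary example limit; adjust to your needs)
await file.OpenReadStream(maxAllowedSize: 10 * 1024 * 1024).CopyToAsync(ms);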

Unable to delete documents in a Documentum application using DFC

I have written the following code using the approach given in the EMC DFC 7.2 Development Guide. With this code, I'm able to delete only 50 documents even though there are more records. Before deletion, I take a dump of the object IDs. I'm not sure if there is any limit with IDfDeleteOperation. Since this deletes only 50 documents, I tried the DQL DELETE command; even there it is limited to 50 documents. I also tried the destroy() and destroyAllVersions() methods on the document, and even that didn't work for me. I have written everything in the main method.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.*;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfLoginInfo;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
import com.documentum.operations.IDfDeleteNode;
import com.documentum.operations.IDfDeleteOperation;

import java.io.BufferedWriter;
import java.io.FileWriter;

public class DeleteDoCAll {
    public static void main(String[] args) throws DfException {
        System.out.println("Started...");
        IDfClientX clientX = new DfClientX();
        IDfClient dfClient = clientX.getLocalClient();
        IDfSessionManager sessionManager = dfClient.newSessionManager();
        IDfLoginInfo loginInfo = clientX.getLoginInfo();
        loginInfo.setUser("username");
        loginInfo.setPassword("password");
        sessionManager.setIdentity("repo", loginInfo);
        IDfSession dfSession = sessionManager.getSession("repo");
        System.out.println(dfSession);
        IDfDeleteOperation delo = clientX.getDeleteOperation();
        IDfCancelCheckoutOperation cco = clientX.getCancelCheckoutOperation();
        try {
            String dql = "select r_object_id from my_report where folder('/Home', descend)";
            IDfQuery idfquery = new DfQuery();
            IDfCollection collection1 = null;
            try {
                idfquery.setDQL(dql);
                collection1 = idfquery.execute(dfSession, IDfQuery.DF_READ_QUERY);
                int i = 1;
                while (collection1 != null && collection1.next()) {
                    String r_object_id = collection1.getString("r_object_id");
                    StringBuilder attributes = new StringBuilder();
                    IDfDocument iDfDocument = (IDfDocument) dfSession.getObject(new DfId(r_object_id));
                    attributes.append(iDfDocument.dump());
                    BufferedWriter writer = new BufferedWriter(new FileWriter("path to file", true));
                    writer.write(attributes.toString());
                    writer.close();
                    cco.setKeepLocalFile(true);
                    IDfCancelCheckoutNode cnode;
                    if (iDfDocument.isCheckedOut()) {
                        if (iDfDocument.isVirtualDocument()) {
                            IDfVirtualDocument vdoc = iDfDocument.asVirtualDocument("CURRENT", false);
                            cnode = (IDfCancelCheckoutNode) cco.add(iDfDocument);
                        } else {
                            cnode = (IDfCancelCheckoutNode) cco.add(iDfDocument);
                        }
                        if (cnode == null) {
                            System.out.println("Node is null");
                        }
                        if (!cco.execute()) {
                            System.out.println("Cancel check out operation failed");
                        } else {
                            System.out.println("Cancelled check out for " + r_object_id);
                        }
                    }
                    delo.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
                    IDfDeleteNode node = (IDfDeleteNode) delo.add(iDfDocument);
                    if (node == null) {
                        System.out.println("Node is null");
                        System.out.println(i);
                        i += 1;
                    }
                    if (delo.execute()) {
                        System.out.println("Delete operation done");
                        System.out.println(i);
                        i += 1;
                    } else {
                        System.out.println("Delete operation failed");
                        System.out.println(i);
                        i += 1;
                    }
                }
            } finally {
                if (collection1 != null) {
                    collection1.close();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            sessionManager.release(dfSession);
        }
    }
}
I don't know where I'm making a mistake; every time I try, the program stops at the 50th iteration. Can you please help me delete all the documents the proper way? Thanks a lot!
First, select all the document IDs into a List<IDfId>, for example, and close the collection. Don't do other expensive operations inside the open collection, because you are unnecessarily blocking it.
This is why it only processed 50 documents: you had one main collection open, and each execution of the delete operation opened another collection, which probably hit some limit. So, as I said, it is better to consume the collection first and then work with that data:
List<IDfId> ids = new ArrayList<>();
try {
    query.setDQL("SELECT r_object_id FROM my_report WHERE FOLDER('/Home', DESCEND)");
    collection = query.execute(session, IDfQuery.DF_READ_QUERY);
    while (collection.next()) {
        ids.add(collection.getId("r_object_id"));
    }
} finally {
    if (collection != null) {
        collection.close();
    }
}
After that you can iterate through the list and perform all the actions you need on each document. But don't execute the delete operation in each iteration; that is inefficient. Instead, add all the documents into one operation and execute it once at the end:
IDfDeleteOperation deleteOperation = clientX.getDeleteOperation();
deleteOperation.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    ...
    deleteOperation.add(document);
}
deleteOperation.execute();
The same applies to the IDfCancelCheckoutOperation; see the sketch below.
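A minimal sketch of the same batching pattern applied to cancel-checkout, reusing the clientX, session, and ids variables from the snippets above:
IDfCancelCheckoutOperation cancelOperation = clientX.getCancelCheckoutOperation();
cancelOperation.setKeepLocalFile(true);
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    // only checked-out documents belong in the cancel-checkout batch
    if (document.isCheckedOut()) {
        cancelOperation.add(document);
    }
}
// one execute for the whole batch
if (!cancelOperation.execute()) {
    System.out.println("Cancel checkout operation failed");
}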
And another thing: when you use a FileWriter, call close() in a finally block, or use try-with-resources like this:
try (BufferedWriter writer = new BufferedWriter(new FileWriter("file.path", true))) {
    writer.write(document.dump());
} catch (IOException e) {
    throw new UncheckedIOException(e);
}
Using a StringBuilder is a good idea, but create it only once at the beginning, append all the attributes during the iterations, and write the content of the StringBuilder to the file once at the end, not during each iteration; that is slow.
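A sketch of that pattern, reusing the session and ids variables and the try-with-resources form from above:
StringBuilder attributes = new StringBuilder();
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    // collect every dump in memory first
    attributes.append(document.dump());
}
// write the whole buffer once at the end
try (BufferedWriter writer = new BufferedWriter(new FileWriter("file.path", true))) {
    writer.write(attributes.toString());
} catch (IOException e) {
    throw new UncheckedIOException(e);
}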
You could just run this DQL from inside your code:
delete my_report objects where folder('/Home', descend)
No need to fetch information you are throwing away again ;-)
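A minimal sketch of running that statement through DFC (it reuses dfSession from the question; treating the result as a collection with an objects_deleted attribute is my assumption about what an EXEC-style DQL DELETE returns):
IDfQuery query = new DfQuery();
query.setDQL("delete my_report objects where folder('/Home', descend)");
IDfCollection result = null;
try {
    // DF_EXEC_QUERY is meant for DQL statements that modify the repository
    result = query.execute(dfSession, IDfQuery.DF_EXEC_QUERY);
    while (result.next()) {
        System.out.println("Objects deleted: " + result.getString("objects_deleted"));
    }
} finally {
    if (result != null) {
        result.close();
    }
}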
You're probably hitting the result-set limit of the DFC client.
Try adding these lines to dfc.properties and rerun your code to see if it can delete more than 50 rows; then adjust the values to your needs.
dfc.search.max_results = 100
dfc.search.max_results_per_source = 100

deleteObject not working as expected - Amazon S3 Java

I'm trying to delete a file from a bucket using the following code, but I can still view the file via the browser.
if (isValidFile(s3Client, BucketName, keyName)) {
    try {
        s3Client.deleteObject(new DeleteObjectRequest(BucketName, keyName));
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Why is the delete not working?
Here is an option that works for me:
public boolean deleteFileFromS3Bucket(String fileUrl) {
    String fileName = fileUrl.substring(fileUrl.lastIndexOf("/") + 1);
    try {
        DeleteObjectsRequest delObjReq = new DeleteObjectsRequest(bucketName).withKeys(fileName);
        s3client.deleteObjects(delObjReq);
        return true;
    } catch (SdkClientException s) {
        return false;
    }
}
If the object is public, it may be cached by the browser. Besides, the S3 DELETE Object operation is eventually consistent, so a just-deleted object can remain visible for a short while.
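A minimal sketch of checking from code instead of through the browser (assuming the AWS SDK for Java v1 and the s3Client, BucketName, and keyName names from the question):
s3Client.deleteObject(new DeleteObjectRequest(BucketName, keyName));
// right after deletion this can still return true while the delete
// propagates, so retry after a short delay before concluding it failed
if (s3Client.doesObjectExist(BucketName, keyName)) {
    System.out.println("Object still visible - likely caching or propagation delay");
} else {
    System.out.println("Object is gone");
}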

Back up a database (.sdf) file to SkyDrive by changing it to .txt

I'm a beginner programmer. I have a database file (MyDatabase.sdf) in my Windows Phone Mango app. What I am trying to accomplish is to copy the MyDatabase.sdf file as MyDatabaseBackup.txt in isolated storage and then upload it to SkyDrive as a backup. Since SkyDrive doesn't allow .sdf files to be uploaded, some people have suggested this conversion method and have got it to work.
So I am trying to do the same, but I'm unable to copy the .sdf file to a .txt file in isolated storage. Here's my code...
//START BACKUP
private void Backup_Click(object sender, RoutedEventArgs e)
{
    if (client == null || client.Session == null)
    {
        MessageBox.Show("You must sign in first.");
    }
    else
    {
        if (MessageBox.Show("Are you sure you want to backup? This will overwrite your old backup file!", "Backup?", MessageBoxButton.OKCancel) == MessageBoxResult.OK)
            UploadFile();
    }
}

public void UploadFile()
{
    if (skyDriveFolderID != string.Empty) // the folder must exist, it should have already been created
    {
        this.client.UploadCompleted
            += new EventHandler<LiveOperationCompletedEventArgs>(ISFile_UploadCompleted);
        infoTextBlock.Text = "Uploading backup...";
        dateTextBlock.Text = "";
        using (AppDataContext appDB = new AppDataContext(AppDataContext.DBConnectionString))
        {
            appDB.Dispose();
        }
        try
        {
            using (IsolatedStorageFile myIsolatedStorage = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (myIsolatedStorage.FileExists("MyDatabase.sdf"))
                {
                    myIsolatedStorage.CopyFile("MyDatabase.sdf", "MyDatabaseBackup.txt"); // This is where it goes to the catch statement.
                }
                this.client.UploadAsync(skyDriveFolderID, fileName, true, readStream, null);
            }
        }
        catch
        {
            MessageBox.Show("Error accessing IsolatedStorage. Please close the app and re-open it, and then try backing up again!", "Backup Failed", MessageBoxButton.OK);
            infoTextBlock.Text = "Error. Close the app and start again.";
            dateTextBlock.Text = "";
        }
    }
}

private void ISFile_UploadCompleted(object sender, LiveOperationCompletedEventArgs args)
{
    if (args.Error == null)
    {
        infoTextBlock.Text = "Backup complete.";
        dateTextBlock.Text = "Checking for new backup...";
        // get the newly created fileIDs (this will update the time too, and enable restoring)
        client = new LiveConnectClient(session);
        client.GetCompleted += new EventHandler<LiveOperationCompletedEventArgs>(getFiles_GetCompleted);
        client.GetAsync(skyDriveFolderID + "/files");
    }
    else
    {
        this.infoTextBlock.Text =
            "Error uploading file: " + args.Error.ToString();
    }
}
Here's how I am creating the database in my app.xaml.cs file.
// Specify the local database connection string.
string DBConnectionString = "Data Source=isostore:/MyDatabase.sdf";

// Create the database if it does not exist.
using (AppDataContext appDB = new AppDataContext(AppDataContext.DBConnectionString))
{
    if (appDB.DatabaseExists() == false)
    {
        // Create the database
        appDB.CreateDatabase();
        appDB.SubmitChanges();
    }
}
Some have suggested making sure that no processes/functions/threads have the .sdf file open.
I tried to do that in the UploadFile() method, but I am not entirely sure I did it correctly.
Can someone please give some code help on these two issues? Thanks for the help!
First, create the local copy using the File.Copy method as shown below, then upload the .txt file:
File.Copy(Path.Combine([DbFileDir], "MyDatabase.sdf"), Path.Combine([SomeLocalDir], "MyDatabaseBackup.txt"), true);
Note: you have to have proper access rights to the original and new local folders.
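If File.Copy turns out not to be usable in your environment (a Windows Phone app normally only reaches its own isolated storage), a stream-to-stream copy inside IsolatedStorageFile is a possible alternative; a minimal sketch, assuming the database connection has already been disposed so MyDatabase.sdf is not locked:
using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
using (IsolatedStorageFileStream source = store.OpenFile("MyDatabase.sdf", FileMode.Open, FileAccess.Read))
using (IsolatedStorageFileStream target = store.CreateFile("MyDatabaseBackup.txt"))
{
    // byte-for-byte copy; the .txt extension changes only the name, not the content
    byte[] buffer = new byte[4096];
    int read;
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        target.Write(buffer, 0, read);
    }
}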
Hope this will help. Rgds,

Using ADO.NET and SqlCeEngine together in SQL Server Compact Edition

I have an application for Windows CE 5 that uses ADO.NET; to back up/restore the database I do a simple file copy.
Before restoring a database from a backup, I use SqlCeEngine to verify that the database is OK and to fix it if not. This works fine, but when I restore a large database, after a few times the Verify method returns false and the Repair function throws an exception:
Could not load sqlcecompact30.dll. Operation has been aborted.
This then happens for every database file I want to restore until I exit the application.
I could not find the reason. If I remove the verify-and-repair step, everything works and the database is OK, but I want to check whether the database is corrupted before restoring it.
I use the following CAB files to install SQL Server Compact on the PDA (iPAQ 310):
sqlce30.ppc.wce5.armv4i.CAB
sqlce30.repl.ppc.wce5.armv4i.CAB
Visual Studio 2005
Microsoft SQL Server 2005 Compact
Microsoft SQL Client 2.0
This is the code for verify and repair:
private static SqlCeEngine CreateEngine(string DBFileName)
{
    return new SqlCeEngine("Data Source = '" + DBFileName + "'");
}

static public bool CheckDB(string DBFileName)
{
    SqlCeEngine engine = null;
    try
    {
        FileInfo file = new FileInfo(DBFileName);
        if (file.Exists)
        {
            engine = CreateEngine(DBFileName);
            return engine.Verify();
        }
    }
    catch
    {
        // a failed verification is reported as false below
    }
    finally
    {
        if (engine != null)
        {
            engine.Dispose();
        }
    }
    return false;
}

static public bool RepairDB(string DBFileName)
{
    SqlCeEngine engine = null;
    try
    {
        FileInfo file = new FileInfo(DBFileName);
        if (file.Exists)
        {
            engine = CreateEngine(DBFileName);
            engine.Repair(null, RepairOption.RecoverCorruptedRows);
            return engine.Verify();
        }
    }
    catch (Exception ex)
    {
        Ness300Logger.Logger.Log("Repair failed: " + ex.Message);
    }
    finally
    {
        if (engine != null)
        {
            engine.Dispose();
        }
    }
    return false;
}