I have an Azure Storage account with several containers, and I am using a BlobTrigger function to detect new blobs.
This seems to work perfectly, but a BlobTrigger only works on one container, and creating one trigger per container seems like a bad idea (I would duplicate a lot of code).
Is there any way to simplify this? Ideas are appreciated.
PS: I know that BlobTrigger is not reliable if there are 10K blobs or more in the container (but this is not my case).
An Azure Storage blob trigger only lets you monitor one storage container for new and updated blobs. If you want to monitor multiple containers in an Azure Storage account, you need to create multiple functions.
I suggest you write the blob-processing logic in one method and invoke that method from each triggered function:
public static void ProcessBlob(string containerName, string blobName, CloudBlockBlob blob)
{
    // Write your shared processing logic here
}

// One thin trigger per container, each delegating to the shared method
public static void ProcessBlobContainer1([BlobTrigger("container1/{blobName}")] CloudBlockBlob blob, string blobName)
{
    ProcessBlob("container1", blobName, blob);
}

public static void ProcessBlobContainer2([BlobTrigger("container2/{blobName}")] CloudBlockBlob blob, string blobName)
{
    ProcessBlob("container2", blobName, blob);
}
There is an open issue on GitHub related to your question; hopefully it will be resolved soon:
Add ability to create blob triggers on container names that match a pattern
These are great guides for migrating between the different versions of the NuGet package:
https://github.com/Azure/azure-sdk-for-net/blob/Azure.Storage.Blobs_12.6.0/sdk/storage/Azure.Storage.Blobs/README.md
https://elcamino.cloud/articles/2020-03-30-azure-storage-blobs-net-sdk-v12-upgrade-guide-and-tips.html
However, I am struggling to migrate the following concepts in my code:
// Return if a directory exists:
container.GetDirectoryReference(path).ListBlobs().Any();
where GetDirectoryReference is not understood and there appears to be no direct translation.
Also, the concept of a CloudBlobDirectory does not appear to have made it into Azure.Storage.Blobs e.g.
private static long GetDirectorySize(CloudBlobDirectory directoryBlob)
{
    long size = 0;
    foreach (var blobItem in directoryBlob.ListBlobs())
    {
        if (blobItem is BlobClient)
            size += ((BlobClient)blobItem).GetProperties().Value.ContentLength;
        if (blobItem is CloudBlobDirectory)
            size += GetDirectorySize((CloudBlobDirectory)blobItem);
    }
    return size;
}
where CloudBlobDirectory does not appear anywhere in the API.
There's no such thing as physical directories or folders in Azure Blob Storage. The directories you sometimes see are part of the blob name (e.g. folder1/folder2/file1.txt). The List Blobs request allows you to pass a prefix and a delimiter, which the Azure Portal and Azure Storage Explorer use to create a visualization of folders. For example, prefix folder1/ and delimiter / would show the content as if folder1 were opened.
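For illustration, here is how that prefix + delimiter visualization looks in the v12 SDK (a sketch assuming an existing BlobContainerClient named container; the folder name is illustrative):

// Lists what is "inside" folder1 the way the portal renders it:
// blobs at this level come back as items, deeper levels as prefixes.
foreach (var item in container.GetBlobsByHierarchy(delimiter: "/", prefix: "folder1/"))
{
    if (item.IsPrefix)
        Console.WriteLine($"virtual sub-folder: {item.Prefix}");
    else
        Console.WriteLine($"blob: {item.Blob.Name}");
}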
That's exactly what happens in the v11 code from your question: GetDirectoryReference() adds a prefix, ListBlobs() fires a request, and Any() checks whether any items are returned.
For v12, the equivalent is GetBlobsByHierarchy (or its async version). In your particular case, where you only want to know whether any blobs exist under the directory, GetBlobs with a prefix also suffices.
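A sketch of both patterns in v12, under the same assumption of an existing BlobContainerClient (the helper names are mine, not SDK members):

using System.Linq;
using Azure.Storage.Blobs;

// "Does the directory exist?" becomes "does any blob carry this prefix?"
static bool DirectoryExists(BlobContainerClient container, string path) =>
    container.GetBlobs(prefix: path).Any();

// The recursive v11 size calculation flattens to a single prefixed listing,
// because the prefix matches every blob under the virtual directory.
static long GetDirectorySize(BlobContainerClient container, string path) =>
    container.GetBlobs(prefix: path)
             .Sum(item => item.Properties.ContentLength ?? 0);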
I noticed this in the debug environment, where I have to do many re-installs in order to test persistent data storage, initial settings, etc. It may not be relevant in production, but I mention it anyway to inform other developers.
Any files created by an app in its App Folder are not 'visible' to queries after a manual un-install / re-install (from the IDE, for instance). The same applies to the 'encoded DriveId': it is no longer valid.
It is probably 'by design', but it effectively creates 'orphans' in the App Folder until they are manually cleaned up via 'drive.google.com > Manage Apps > [yourapp] > Options > Delete hidden app data'. It also creates a problem if an app relies on finding files by metadata, title, etc., since these seem to be gone. As I said, not a production problem, but it can create some frustration during development.
Can any friendly Googlers confirm this? Is there any other way to get to these files after a re-install?
Try this approach:
Use requestSync() in onConnected() as:
@Override
public void onConnected(Bundle connectionHint) {
    super.onConnected(connectionHint);
    Drive.DriveApi.requestSync(getGoogleApiClient()).setResultCallback(syncCallback);
}
Then, in its callback, query the contents of the drive using:
final private ResultCallback<Status> syncCallback = new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        if (!status.isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        query = new Query.Builder()
                .addFilter(Filters.and(Filters.eq(SearchableField.TITLE, "title"),
                        Filters.eq(SearchableField.TRASHED, false)))
                .build();
        Drive.DriveApi.query(getGoogleApiClient(), query)
                .setResultCallback(metadataCallback);
    }
};
Then, in its callback, if found, retrieve the file using:
final private ResultCallback<DriveApi.MetadataBufferResult> metadataCallback =
        new ResultCallback<DriveApi.MetadataBufferResult>() {
            @SuppressLint("SetTextI18n")
            @Override
            public void onResult(@NonNull DriveApi.MetadataBufferResult result) {
                if (!result.getStatus().isSuccess()) {
                    showMessage("Problem while retrieving results");
                    return;
                }
                MetadataBuffer mdb = result.getMetadataBuffer();
                for (Metadata md : mdb) {
                    Date createdDate = md.getCreatedDate();
                    DriveId driveId = md.getDriveId();
                    readFromDrive(driveId); // read here, while driveId is in scope
                }
            }
        };
Job done!
Hope that helps!
It looks like Google Play services has a problem. (https://stackoverflow.com/a/26541831/2228408)
For testing, you can do it by clearing Google Play services data (Settings > Apps > Google Play services > Manage Space > Clear all data).
Or, for the time being, you need to implement it using the Drive SDK v2 instead.
I think you are correct that it is by design.
By inspection, I have concluded that until an app places data in the App Folder, Drive does not sync down to the device, however much you try to hassle it. It is therefore impossible to check for the existence of App Folder content placed by another device or by a prior installation. I assume this is meant to guarantee a consistent clean install.
I can see that there are a couple of strategies to work around this:
1) Place dummy data in the App Folder, then sync and re-check.
2) Accept that in the first instance there is the possibility of duplicates: since you cannot access the existing file, by definition you will create a new copy, and then use custom metadata to differentiate like-named files and choose which one to keep (essentially implementing your conflict-merge strategy across the two files).
I've done the second. I keep an update number to compare data from different devices and decide which version I want, and hence whether to upload, download, or leave alone. As my data is an SQLite DB, I also have some code to sync only once updates have settled down, and I deliberately treat people updating two devices at once as foolish: the results are consistent but undefined as to which will win.
I am trying to extract data from the local Bitcoin database. As far as I know, bitcoin-qt uses BerkeleyDB. I have installed BerkeleyDB from the Oracle web site and found there a DLL for .NET: libdb_dotnet60.dll. I am trying to open a file, but I get a DatabaseException. Here is my code:
using BerkeleyDB;

class Program
{
    static void Main(string[] args)
    {
        var btreeConfig = new BTreeDatabaseConfig();
        var btreeDb = BTreeDatabase.Open(@"c:\Users\<user>\AppData\Roaming\Bitcoin\blocks\blk00000.dat", btreeConfig);
    }
}
Does anyone have examples of how to work with a Bitcoin database (in any other language)?
What are you trying to extract? Only the wallet.dat file is a Berkeley database.
Blocks are stored one after the other in the blkxxxxx.dat files, with four bytes holding a network identifier and four bytes giving the block size before each block.
An index of unspent outputs is stored as a LevelDB database.
Knowing what type of information you are looking for would help.
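A minimal sketch of reading that layout (assumptions: little-endian fields, mainnet magic 0xD9B4BEF9 when read as a uint, and an illustrative file path):

using System;
using System.IO;

class BlkReader
{
    static void Main()
    {
        using var fs = File.OpenRead(@"C:\Bitcoin\blocks\blk00000.dat");
        using var reader = new BinaryReader(fs);

        while (fs.Position < fs.Length)
        {
            uint magic = reader.ReadUInt32();     // network identifier (0xD9B4BEF9 on mainnet)
            if (magic == 0) break;                // blk files are preallocated; stop at zero padding
            uint blockSize = reader.ReadUInt32(); // length of the serialized block
            byte[] rawBlock = reader.ReadBytes((int)blockSize); // the block itself

            Console.WriteLine($"magic={magic:X8} size={blockSize}");
        }
    }
}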
There is the NBitcoin library: https://github.com/MetacoSA/NBitcoin
Here is how to enumerate blocks:
var store = new BlockStore(@"C:\Bitcoin\blocks\", Network.Main);

// This loop will enumerate all blocks ordered by height, starting with the genesis block
foreach (var block in store.EnumerateFolder())
{
    var item = block.Item;
    string blockID = item.Header.ToString();
    foreach (var tx in item.Transactions)
    {
        string txID = tx.GetHash().ToString();
        string raw = tx.ToHex();
    }
}
In .NET you could use something like BitcoinBlockchain, which is available as a NuGet package at https://www.nuget.org/packages/BitcoinBlockchain/. Its usage is trivial, and if you want to see how it is implemented, the sources are available on GitHub.
If you want to store the blockchain in a SQL database that you can query faster and in more ways than the raw blockchain, you could use something like the BitcoinDatabaseGenerator tool available at https://github.com/ladimolnar/BitcoinDatabaseGenerator.
I am using the Microsoft Azure .NET client libraries to interact with Azure cloud storage. I need to access additional information about each blob in its metadata collection. I am currently using the CloudBlobDirectory.ListBlobs() method to get a list of blobs in a particular directory of a directory structure I've devised in the blob names. The ListBlobs() method returns a list of IListBlobItem objects, which only have a couple of properties: Url and references to the parent directory and parent container. I need to get to the metadata of the actual blob objects.
I envisioned there would be a way to either cast the IListBlobItem to a BlockBlob object or use the IListBlobItem to get a reference to the BlockBlob, but I can't seem to find a way to do that.
My question is: is there a way to get a BlockBlob object from this method, or do I have to use a different way of getting the actual BlockBlob objects? If different, can you suggest a way to achieve this while still being able to filter by the "directory" scheme?
OK... I found a way to do this, and while it seems a little clunky and indirect, it does achieve the main thing I thought should be doable: casting the IListBlobItem directly to a CloudBlockBlob object.
I get the list from the directory object's ListBlobs() method, loop over each item, cast it to a CloudBlockBlob, and call FetchAttributes() to retrieve the properties (including the metadata). Then I add a new "info" object to a new list of info objects. Here's the code I'm using:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(true);

foreach (IListBlobItem item in blobs)
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    blob.FetchAttributes(); // populates Properties and Metadata
    files.Add(new ImageInfo
    {
        FileUrl = item.Uri.ToString(),
        FileName = item.Uri.PathAndQuery.Replace(restaurantId.ToString().PadLeft(3, '0') + "/", ""),
        ImageName = blob.Metadata["Name"]
    });
}
The whole "Blob" concept seems needlessly complex and doesn't deliver what I'd have thought would be one of the main features of the Blob wrapper: a way to expand search capabilities by allowing a query over name, directory, container, and metadata. I'd have thought you could construct a LINQ query that would read somewhat like: "return a list of all blobs in the 'images' container that are under the 'natural/landscapes/' directory path and have a metadata key of 'category' with the value 'sunset'". There doesn't seem to be a way to do that, which seems like a missed opportunity to me. Oh, well.
If I'm wrong and way off base here, please let me know.
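For what it's worth, you can approximate that wished-for query client-side with LINQ once the listing returns metadata (a sketch against the same classic SDK; the container, path, and metadata key are the ones from the question):

using System.Linq;
using Microsoft.WindowsAzure.Storage.Blob;

// Server-side filtering on metadata isn't supported, so list with metadata
// included and filter in memory.
var sunsets = container.GetDirectoryReference("natural/landscapes/")
    .ListBlobs(useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Metadata)
    .OfType<CloudBlockBlob>()
    .Where(b => b.Metadata.TryGetValue("category", out var v) && v == "sunset")
    .ToList();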
This approach was developed for Java, but I hope it can be adapted to any other supported language. Although the functionality you ask for has not been explicitly developed yet, I think I found a different (hopefully less clunky) way to access CloudBlockBlob data from a ListBlobItem element.
The following code can be used to delete, for example, every blob inside a specific directory.
CloudBlobClient blobClient = /* Obtain your blob client */
try {
    CloudBlobContainer container = /* Obtain your blob container */
    // blobPrefix is the virtual directory path, e.g. "folder1/"
    for (ListBlobItem blobItem : container.listBlobs(blobPrefix)) {
        if (blobItem instanceof CloudBlob) {
            CloudBlob blob = (CloudBlob) blobItem; // the cast exposes name, metadata, etc.
            if (blob.exists()) {
                System.out.println("Deleting blob " + blob.getName());
                blob.delete();
            }
        }
    }
} catch (URISyntaxException | StorageException ex) {
    Logger.getLogger(BlobOperations.class.getName()).log(Level.SEVERE, null, ex);
}
The previous answers are good. I just want to point out two things:
1) Nowadays async programming is recommended and well supported by the Azure SDK, so try to use it:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(true);

foreach (IListBlobItem item in blobs)
{
    CloudBlockBlob blob = (CloudBlockBlob)item;
    await blob.FetchAttributesAsync(); // use async calls
}
2) Fetching metadata in a separate call is not efficient: that code makes two HTTP requests per blob object. ListBlobs() can return metadata in the listing call itself when you set the BlobListingDetails parameter:
CloudBlobDirectory dir = container.GetDirectoryReference(dirPath);
var blobs = dir.ListBlobs(useFlatBlobListing: true, blobListingDetails: BlobListingDetails.Metadata);
I recommend the second approach if possible, since it is the most efficient way to fetch metadata.
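The two points combine naturally: ListBlobs() itself has no async counterpart, but the segmented listing does, and it accepts the same BlobListingDetails flag (a sketch, reusing dir from the snippets above):

// One round-trip per page, metadata included, no per-blob FetchAttributes needed.
BlobContinuationToken token = null;
do
{
    var segment = await dir.ListBlobsSegmentedAsync(
        useFlatBlobListing: true,
        blobListingDetails: BlobListingDetails.Metadata,
        maxResults: null,
        currentToken: token,
        options: null,
        operationContext: null);

    foreach (var blob in segment.Results.OfType<CloudBlockBlob>())
    {
        // Metadata is already populated here
        blob.Metadata.TryGetValue("Name", out var imageName);
    }

    token = segment.ContinuationToken;
} while (token != null);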
We have a situation where we have multiple databases with an identical schema but different data in each, and we are creating a single session factory to handle them.
The problem is that we don't know which database we'll connect to until runtime, when we can provide it. But on startup, to get the factory built, we need to connect to a database with that schema. We currently do this by creating the schema in a known location and using that, but we'd like to remove that requirement.
I haven't been able to find a way to create the session factory without specifying a connection. We don't expect to be able to use the OpenSession method with no parameters, and that's ok.
Any ideas?
Thanks
Andy
Either implement your own IConnectionProvider or pass your own connection to ISessionFactory.OpenSession(IDbConnection) (but read the method's comments about connection tracking)
The solution we came up with was to create a class that manages this for us. The class can use information from the method call to do some routing logic and figure out where the database is, and then call OpenSession, passing the connection string.
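A minimal sketch of such a wrapper, assuming the connection string is resolved per call (the class and helper names are illustrative, not the poster's actual code):

using System;
using System.Data.SqlClient;
using NHibernate;

public class RoutingSessionManager
{
    private readonly ISessionFactory _factory;                 // one factory, shared schema
    private readonly Func<string, string> _resolveConnString;  // your routing logic

    public RoutingSessionManager(ISessionFactory factory,
                                 Func<string, string> resolveConnString)
    {
        _factory = factory;
        _resolveConnString = resolveConnString;
    }

    public ISession OpenSessionFor(string tenantKey)
    {
        var connection = new SqlConnection(_resolveConnString(tenantKey));
        connection.Open();
        // NHibernate does not manage user-supplied connections; close the
        // connection yourself when you dispose the session.
        return _factory.OpenSession(connection);
    }
}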
You could also use the great NuGet package from Brady Gaster for this. I made my own implementation based on his NHQS package and it works very well.
You can find it here:
http://www.bradygaster.com/Tags/nhqs
good luck!
Came across this and thought I'd add my solution for future readers. It is basically what Mauricio Scheffer suggested: it encapsulates the 'switching' of the connection string and provides a single point of management (I like this better than having to pass a connection into each session call; there is less to miss and go wrong).
I obtain the connection string during authentication of the client and set it on the context. Then, using the following IConnectionProvider implementation, that value is used as the connection string whenever a session is opened:
/// <summary>
/// Provides the ability to switch connection strings of an NHibernate session factory
/// (use the same factory for multiple, dynamically specified, database connections).
/// </summary>
public class DynamicDriverConnectionProvider : DriverConnectionProvider, IConnectionProvider
{
    protected override string ConnectionString
    {
        get
        {
            var cxnObj = IsWebContext ?
                HttpContext.Current.Items["RequestConnectionString"] :
                System.Runtime.Remoting.Messaging.CallContext.GetData("RequestConnectionString");

            if (cxnObj != null)
                return cxnObj.ToString();

            // Fall back on app startup, when no request connection string has been set yet
            return base.ConnectionString;
        }
    }

    private static bool IsWebContext
    {
        get { return (HttpContext.Current != null); }
    }
}
Then wire it in during NHConfig:
var configuration = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2005
        .Provider<DynamicDriverConnectionProvider>() //Like so
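A hedged usage sketch for the provider above: stash the resolved connection string where the provider looks for it before opening a session (GetConnectionStringForClient is a hypothetical helper standing in for your own routing logic):

// The key must match the one DynamicDriverConnectionProvider reads.
HttpContext.Current.Items["RequestConnectionString"] =
    GetConnectionStringForClient(clientId); // hypothetical resolver

using (var session = sessionFactory.OpenSession())
{
    // Queries here run against the per-client database.
}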