I am moving my web application to Azure.
My application has many small files (image formats), roughly 100 KB each and up to 1 MB. Each file has a unique name, so it can be looked up by name. Right now they sit in a plain folder on the current web host, thousands of files in total. Which is more effective to use - Azure Tables or Blob storage?
Blob storage would be the best choice for this scenario. A couple of things to consider:
Since the partition key goes down to the blob name, access to different blobs can be load-balanced across as many servers as needed to scale out access to them.
If you need to, you could make your container(s) public, allowing unauthenticated access to the blobs.
If #2 is not desired, you could still use Shared Access Signatures to make your blobs downloadable with a browser or another HTTP client that is not aware of Azure Storage.
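For the Shared Access Signature option, here is a minimal sketch using the Python azure-storage-blob package; the account name, key, container and blob names are placeholders, not values from your setup:

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholder values - substitute your own account, key, container and blob name.
account_name = "mystorageaccount"
account_key = "<account-key>"
container_name = "images"
blob_name = "product-123.jpg"

# Generate a read-only SAS token valid for one hour.
sas_token = generate_blob_sas(
    account_name=account_name,
    container_name=container_name,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

# Any plain HTTP client can now download the blob with this URL.
url = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
print(url)
```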
I've been trying to find the most effective (elegant) solution to achieve what I'm trying to do. I'd like to hear from the community, thank you.
Situation:
I need to geo-enrich IP address records in Sentinel. Example: successful SigninLogs, since the Microsoft enrichment sometimes generates "Unknown" results in the IP enrichment maps.
External reference files (subnet, country_code, country_name) are available publicly; however, the size and number of records are rather large (~12 MB, 200K+ records).
Issue:
I tried using a storage account blob to host the "reference table", but apparently hit the limit on maximum blob size in the storage account.
It also looks like Workbooks can read at most 30,000 records from external sources using the 'externaldata' operator. Hence, only partial reference data can be read and referred to.
Options considered:
Ingest the reference table into the Log Analytics workspace and do a join/lookup against this custom reference table for enrichment.
Export the IP addresses from the SigninLogs table to blob storage, enrich the IP addresses using Logic Apps, and put the results back into a 'reference' blob. Then read the 'reference' blob using the 'externaldata' syntax.
Limitation Observed:
I came to the realization that Sentinel cannot perform API calls for enrichment from external data (correct me if I'm wrong). I've done similar things with Splunk, where we could enrich the data on the fly by making multiple API calls to an outside database.
Ingest the Data - As you've mentioned, ingest the data and join the tables. You would need to ingest this regularly, though, to ensure you can look up the data within the desired time range (e.g. if you have an Analytics Rule, it only looks up data over a 14-day period).
Use a Playbook - If you want the Geo-IP lookup post incident, you can perform this with a Logic App.
Use Jupyter Notebooks - These have the flexibility to perform API calls against external locations and join the results with the data hosted in Sentinel; a sketch of this approach follows the list. An example is the IP Explorer notebook; see "Use Jupyter notebooks to hunt for security threats".
Threat Intelligence - Microsoft enriches all imported threat intelligence indicators with GeoLocation and WhoIs data, which is displayed together with other indicator details.
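As a rough illustration of the notebook approach, the sketch below calls an external GeoIP HTTP API for each distinct IP address and joins the results back onto sign-in records. The endpoint choice, column names, and the sample DataFrame are assumptions for illustration, not part of the Sentinel data model:

```python
import pandas as pd
import requests

# Placeholder: in a real notebook this DataFrame would come from a Sentinel
# query (e.g. via msticpy or the Log Analytics API).
signins = pd.DataFrame({"IPAddress": ["8.8.8.8", "1.1.1.1", "8.8.8.8"]})

def geo_lookup(ip: str) -> dict:
    """Call a public GeoIP endpoint (hypothetical choice) and return country info."""
    resp = requests.get(f"http://ip-api.com/json/{ip}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {"IPAddress": ip,
            "country_code": data.get("countryCode"),
            "country_name": data.get("country")}

# Enrich each distinct IP once, then join back onto the sign-in records.
geo = pd.DataFrame([geo_lookup(ip) for ip in signins["IPAddress"].unique()])
enriched = signins.merge(geo, on="IPAddress", how="left")
print(enriched)
```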
Since March 2022, you can upload large CSV files into a Sentinel Watchlist. This way, you can upload a complete GeoIP database and perform ipv4_lookups. This blog post explains how to do this: https://cryptsus.com/blog/enrich-geolocation-sentinel-siem.html
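If you go the watchlist route, a query along these lines can be run from Python with the azure-monitor-query package. The watchlist alias ('GeoIP'), its column names, and the workspace ID are assumptions for illustration:

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Assumed values: a watchlist with alias 'GeoIP' containing 'network',
# 'country_code' and 'country_name' columns, plus your workspace ID.
workspace_id = "<log-analytics-workspace-id>"

# ipv4_lookup matches each IPAddress against the CIDR ranges in the watchlist.
query = """
SigninLogs
| where ResultType == "0"
| evaluate ipv4_lookup(_GetWatchlist('GeoIP'), IPAddress, network)
| project TimeGenerated, UserPrincipalName, IPAddress, country_code, country_name
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```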
Is there a way of splitting a Content repository into multiple databases? There is a great chance I'll have TBs of data, maybe even tens of TBs. Maintaining a database bigger than 1 TB becomes an issue, so I can't imagine dealing with an even bigger one. I've considered using Filestream, but having multiple databases would be a much more viable solution.
If not, is there at least a way of having several repositories contained in a single web site?
Currently (as of version 7.2) sensenet requires a central database to connect to; you cannot split it into multiple parts.
There is, however, the blob storage feature, which lets you store binaries outside of the main metadata database. You choose a blob storage implementation (e.g. the MongoDB blob provider), install it, and you can start uploading files to sensenet. Binaries above a certain (configured) size will go to the external provider.
You'll have to take care of the backup of the blob storage though, because that is different for every db provider. At least the size of the metadata db will be significantly lower.
When using Azure Web Sites (WAWS), the general opinion seems to be that uploaded content such as photos or files should be stored in Azure Storage blobs and not in the WAWS file system.
Clearly, using Azure Storage is a great idea if you have a lot of data and need scale and redundancy; however, for small or simple sites it seems to add another layer of complexity, and it also means you can't easily use things like ImageResizer without purchasing the Azure-compatible licence, etc.
So, given that products like WordPress from the Azure Gallery use "/site/wwwroot/wp-content/uploads/" to store all uploaded files on WAWS, is there anything wrong with using the WAWS file system for storage, or are there other considerations to take into account?
The major drawback to using the WAWS storage is that your data is now intermingled with the application. By saving all of your plugins/images/blobs externally in a database or blob storage, you retain the flexibility to redeploy your application to a new region/datacenter by just pushing your code to the new website and changing connection strings.
If your plugins/images are stored on disk in WAWS, then you need to make sure that you are backing them up appropriately. If anything happens, you need to restore the site along with all of the data that had been uploaded.
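To show what the blob-storage alternative looks like in practice, here is a minimal sketch using the Python azure-storage-blob package to push an uploaded file into a container instead of writing it under /site/wwwroot; the connection string, container, and file names are placeholders:

```python
from azure.storage.blob import BlobServiceClient, ContentSettings

# Placeholder connection string and container name.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("uploads")

def save_upload(filename: str, data: bytes, content_type: str = "image/jpeg") -> str:
    """Store an uploaded file as a blob and return its URL."""
    blob = container.get_blob_client(filename)
    blob.upload_blob(data, overwrite=True,
                     content_settings=ContentSettings(content_type=content_type))
    return blob.url

# The application keeps only the blob URL, so redeploying to another
# region just means pointing at the same storage account again.
url = save_upload("photo-001.jpg", b"...binary image data...")
print(url)
```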
Azure Web Sites uses Azure Storage as its file storage, so essentially the level of complexity you're talking about is abstracted away.
Another great benefit of this approach is that if you scale your web site to multiple instances, all of them will work with the exact same file content.
Of course, if you want to use pure Azure Storage features like snapshots or sharing specific content with specific users, that is not available as is. But for web site purposes it is quite good.
Hope that helps
Suppose I have files in blob storage, and these files are constantly used by my web application hosted in Windows Azure.
Should I perform some sort of caching of these blobs, like downloading them to my app's local hard-drive?
Update: I was asked to provide a case to make it clear why I want to cache content, so here it goes: imagine I have an e-commerce web site and my product images are all high-resolution. Sometimes, though, I would like to serve them as thumbnails (e.g. for product listings), and one possible solution for that is to use an HTTP handler to resize the images on demand. I know I could use output caching so that each image only needs to be resized once, but for the sake of this example, let us assume I process the image every time it is requested. I imagine it would be faster to have the contents cached locally. In this case, would it be better to cache them on the hard drive or to use local storage?
Thanks in advance!
Just to start answering your question: yes, accessing static content from role-specific local storage will be faster than accessing it from Azure blob storage, due to network latency, even when the compute instance and the blob are in the same data center.
One solution would be to download a set of blobs from Azure storage during a startup task (or a background task) into role-specific local storage and reference this static content via local storage; a sketch of this approach follows below. However, the real question is why you want to cache the content from Azure blob storage. Is it for faster access or for reliability? If the reason is to have static content accessible almost immediately, then caching it in local storage makes sense.
There are pros and cons to each approach; if you can be specific about why you want to do this, you may get a much more to-the-point response.
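Here is a minimal sketch of the startup-task caching idea above, combined with the thumbnail scenario from the question: blobs are downloaded once into a local cache folder and resized from there. The container URL, cache path, and use of Pillow are illustrative assumptions (the container is assumed to allow anonymous read; otherwise pass a credential):

```python
import os
from azure.storage.blob import ContainerClient
from PIL import Image

# Assumed values: a container of product images and a local cache folder
# (in a web/worker role this would be the path of a LocalStorage resource).
CONTAINER_URL = "https://<account>.blob.core.windows.net/product-images"
CACHE_DIR = "local_cache"
os.makedirs(CACHE_DIR, exist_ok=True)

container = ContainerClient.from_container_url(CONTAINER_URL)

def get_cached_blob(blob_name: str) -> str:
    """Download the blob into the local cache on first access; reuse it afterwards."""
    local_path = os.path.join(CACHE_DIR, blob_name)
    if not os.path.exists(local_path):
        with open(local_path, "wb") as f:
            container.download_blob(blob_name).readinto(f)
    return local_path

def get_thumbnail(blob_name: str, size=(200, 200)) -> str:
    """Resize from the locally cached copy instead of hitting blob storage each time."""
    thumb_path = os.path.join(CACHE_DIR, f"thumb_{blob_name}")
    if not os.path.exists(thumb_path):
        with Image.open(get_cached_blob(blob_name)) as img:
            img.thumbnail(size)
            img.save(thumb_path)
    return thumb_path
```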
Why not use a local resource? It gives you a path to a folder on the HD, and you can get a lot of space. You can even keep it around between restarts.
Another option is Azure Cloud Drive. It's fast, and would allow you to share the cache among instances (but only one instance can write to it at a time).
Erick
I am new to CouchDB/Cloudant and CDN (CloudFront).
I am about to build an application using CouchDB as database.
This web application will handle a lot of files.
I know that CouchDB can store files in the database as attachments. But then I have heard about leveraging CDN to store and distribute the files all over the world.
My questions:
How is storing files in CouchDB compared to CDN (CloudFront)?
How is Cloudant's service compared to CDN (CloudFront)?
Is Google storage also a CDN?
What is the difference between Amazon CloudFront and S3?
Do I have to choose to store files either in CouchDB/Cloudant or CDN, or could/should I actually combine them?
What are best practices for storing files when using CouchDB?
Some of these questions are based on your specific implementation, but here's a generalization (not in any particular order):
Unless they have Cloudant mirrored on numerous servers around the world (effectively a CDN in its own right, just sans static files), a true CDN would probably have better response times, depending mostly on how you use Cloudant (e.g., you might get good response times, but if you load the entire file into memory before outputting it, you're losing the CDN battle).
CouchDB has to process more data server-side before it can output an attachment.
CloudFront (and CDNs in general) is optimized for the fastest possible response time from the closest server.
S3 is only storage; CloudFront uses that storage and distributes the content across many servers, serving it from whichever one is closest to the user requesting it (a short sketch follows this answer).
Yes, you have to choose between Cloudant and the CDN; one stores the files in the filesystem verbatim, the other stores them in the filesystem inside the database.
I don't know the answer to some of these (e.g., how CouchDB actually handles attachment storage at a low level, or its best practices); however, this should give you enough of an idea to at least start thinking about which suits your needs best.
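To make the S3/CloudFront relationship concrete, here is a small boto3 sketch: the file lives in S3 (the storage layer), and the URL handed to users points at a CloudFront distribution that fronts that bucket. The bucket name and distribution domain are placeholders:

```python
import boto3

# Placeholder names: an existing bucket and a CloudFront distribution
# already configured with that bucket as its origin.
BUCKET = "my-media-bucket"
CLOUDFRONT_DOMAIN = "d1234abcd.cloudfront.net"

s3 = boto3.client("s3")

def publish_file(local_path: str, key: str) -> str:
    """Upload the file to S3 and return the URL served through the CDN."""
    s3.upload_file(local_path, BUCKET, key,
                   ExtraArgs={"ContentType": "image/jpeg"})
    # CloudFront edge servers fetch the object from the S3 origin and cache it
    # close to whoever requests this URL.
    return f"https://{CLOUDFRONT_DOMAIN}/{key}"

print(publish_file("photo.jpg", "images/photo.jpg"))
```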