I just read a release that says all Azure Storage is encrypted at rest, and I'm looking for a way to verify this for something other than Blobs or Files, specifically Table Storage.
https://azure.microsoft.com/en-us/blog/announcing-default-encryption-for-azure-blobs-files-table-and-queue-storage/
Interesting question. There is no way to confirm this with a code check, as the data is encrypted/decrypted transparently during write/read. You can check whether a particular blob is encrypted using Get Blob - see https://learn.microsoft.com/en-us/rest/api/storageservices/get-blob
x-ms-server-encrypted: true/false (version 2015-12-11 or newer). The value of this header is set to true if the blob data and application metadata are completely encrypted using the specified algorithm. Otherwise, the value is set to false (when the blob is unencrypted, or if only parts of the blob/application metadata are encrypted).
Also see: https://learn.microsoft.com/en-us/azure/security/azure-security-encryption-atrest
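For example, here is a minimal sketch of that check using plain HTTP. The blob URL and SAS token are placeholders; a HEAD request (Get Blob Properties) returns the same x-ms-server-encrypted header as Get Blob when using version 2015-12-11 or newer:

```python
# Minimal sketch: inspect the x-ms-server-encrypted header for one blob.
# The blob URL and SAS token are placeholders for your own values.
import requests

blob_url = "https://myaccount.blob.core.windows.net/mycontainer/myblob?<sas-token>"
resp = requests.head(blob_url, headers={"x-ms-version": "2015-12-11"})
resp.raise_for_status()
print("Encrypted at rest:", resp.headers.get("x-ms-server-encrypted"))
```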
More detail:
If I am parsing your question correctly, Azure has rolled out encryption by default in all regions in public Azure. This means that any new storage account (Classic or ARM) will have Encryption at Rest enabled by default. Encrypting data in existing storage accounts is ongoing (expected to be completed by the end of the year). Encryption at rest for Tables and Queues is also in the works.
You can check the blob and file headers per the above to verify that data is encrypted. Unfortunately, there isn't currently a way to do such verification for Tables and Queues.
I would like to know how to create a process in Dell Boomi that will meet the following criteria:
Read data directly from a production database table, then send the data to a SaaS application (over the public internet) using a REST API.
Another process will read data from SaaS (REST API) and then write it to another Database table.
Please see the attached link for what I have done so far; I really don't know how to proceed. Hope you can help me out. Thank you. Boomi DB connector
You are actually making a good start. For the first process (DB > SaaS) you need to:
Ensure you have access to the DB - if your Atom is local then this shouldn't be much of an issue, but if it is on the Boomi Cloud, then you need to enable access to this DB from the internet (not something I would recommend).
Check what you need to read and define a Boomi Operation - from the image you have linked I can see that you are doing that, but without knowing what data you need and how it is structured, it is impossible to say whether you have defined everything correctly.
Transform data to the output system's format - once you get the data from the DB, use the Map shape to map it to the Profile of the SaaS you are sending your data to.
Send data to the SaaS - you can use the HTTP Client connector to send data in JSON or XML (or any other format you like) to the SaaS REST API (a rough sketch of the whole flow follows below).
For the other process (SaaS > DB) the steps are practically the same, just in reverse order.
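Outside of Boomi, the equivalent DB-to-SaaS flow looks roughly like the sketch below. This is illustrative only: the table, column names, and SaaS endpoint are made up, and in Boomi the same steps correspond to the DB connector/Operation, the Map shape, and the HTTP Client connector.

```python
# Illustrative only: the DB -> transform -> SaaS REST flow described in the
# steps above, written as plain Python. Table, columns, and endpoint are hypothetical.
import pyodbc
import requests

conn = pyodbc.connect("DSN=production")                       # the DB connector
cursor = conn.cursor()
cursor.execute("SELECT id, name, amount FROM sales_orders")   # the read Operation

for order_id, name, amount in cursor.fetchall():
    # The Map shape: reshape DB columns into the SaaS profile (JSON here).
    payload = {"externalId": order_id, "customer": name, "total": float(amount)}

    # The HTTP Client connector: send the mapped record to the SaaS REST API.
    resp = requests.post(
        "https://example-saas.invalid/api/v1/orders",
        json=payload,
        headers={"Authorization": "Bearer <token>"},
    )
    resp.raise_for_status()
```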
I know this is not the most secure way to access blob storage, but I am not too worried about security at the moment. The files I will be storing are not confidential, and I will be keeping a close eye on the usage of the blob.
I found code that lets me upload a file to a blob from a VBA solution, but I am currently unable to do an upload/download because I need to generate the hashed signature that goes inside the token I will be sending along with the URL in order to upload/download a file from the blob. Even though I am not too concerned about security, I would rather not generate a token from Azure that doesn't expire until far into the future; I'd like the token to expire after a short period of time.
Would it be possible to generate this hashed signature in VBA? From what I could find online, I am unsure whether it is even possible at this time.
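For context, the signature in a shared access signature (SAS) token is an HMAC-SHA256 of a string-to-sign, keyed with the Base64-decoded storage account key, with the result Base64-encoded into the sig parameter. The sketch below shows that computation in Python; the string-to-sign fields are placeholders and their exact layout depends on the SAS version you target (see the Service SAS documentation), and the same HMAC/Base64 steps would need to be reproduced in VBA.

```python
# Sketch of the SAS signature computation: HMAC-SHA256 of a string-to-sign,
# keyed with the Base64-decoded account key, then Base64-encoded.
# The string-to-sign below uses placeholder values; the exact field list
# depends on the SAS version (see the Service SAS docs).
import base64
import hashlib
import hmac

def sign_sas_string(string_to_sign: str, account_key_b64: str) -> str:
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")   # goes into the 'sig' parameter

string_to_sign = "\n".join([
    "rw",                                  # signed permissions
    "",                                    # signed start (optional)
    "2017-06-30T00:00:00Z",                # signed expiry (keep it short-lived)
    "/blob/myaccount/mycontainer/myblob",  # canonicalized resource
    "", "", "https", "2015-04-05",         # identifier, IP, protocol, version
    "", "", "", "", "",                    # response header overrides
])
print(sign_sas_string(string_to_sign, "bXktZGVtby1hY2NvdW50LWtleQ=="))  # dummy key
```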
As per our current architecture, we have DataPower acting as a gatekeeper that validates each incoming request (in JSON) against JSON schemas.
We have a lot of RESTful services with corresponding JSON schemas residing on DataPower itself. However, every time a service definition changes, the corresponding schema has to be changed as well, and that results in a DataPower deployment of the affected schema.
Now we are planning to have a RESTful service that DataPower will call for every incoming request; it will return the JSON schema for the service being invoked, and that schema will live with the service code itself rather than on DataPower. That way, whenever the service definition changes, we can change the schema in the same place and deploy the service, saving us an unnecessary DataPower deployment.
Is there a better approach to validating against the schemas? All I want is to avoid a DataPower deployment for every schema change.
Just FYI, we get schema changes on a frequent basis.
Keep your current solution as is, since pulling in new JSON schemas for every request will affect performance. Instead, when you deploy the schema in the backend system, make a REST management interface (RMI) or SOMA call that uploads the new schema, or simply use an XML Firewall where you add a GatewayScript (GWS) that writes the JSON data to a file in the directory (requires firmware 7.5 or higher).
Note that you have to clear the cache as well through that call!
A better approach is to have a push system based on subscribing to changes. You can store schemas in etcd, Redis, Postgres, or any other system that has notification channels for data changes, so you can update schemas in the validating service without fetching them on every request. If your validating service uses a validator that compiles schemas to code (ajv - I am the author - is-my-json-valid, jsen), compiling only on change is an even bigger performance gain.
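The answer above names JavaScript validators (ajv, is-my-json-valid, jsen); as a rough illustration of the same subscribe-and-recompile pattern, here is a sketch using Redis pub/sub and Python's jsonschema library, with a hypothetical schema:{service} key layout and schema-updates channel.

```python
# Sketch of the subscribe-and-recompile pattern: validators are rebuilt only
# when a schema change is published, never per request. Key/channel names
# and the use of Redis + jsonschema are illustrative assumptions.
import json
import threading

import redis
from jsonschema import Draft7Validator

r = redis.Redis(host="localhost", port=6379)
validators = {}  # service name -> compiled validator

def load_schema(service: str) -> None:
    raw = r.get(f"schema:{service}")
    if raw is not None:
        validators[service] = Draft7Validator(json.loads(raw))

def watch_for_changes() -> None:
    pubsub = r.pubsub()
    pubsub.subscribe("schema-updates")              # publisher sends the service name
    for message in pubsub.listen():
        if message["type"] == "message":
            load_schema(message["data"].decode("utf-8"))

threading.Thread(target=watch_for_changes, daemon=True).start()

def validate_request(service: str, payload: dict) -> bool:
    # Per-request validation reuses the already-compiled validator;
    # no schema fetch or recompilation happens here.
    validator = validators.get(service)
    return validator is not None and validator.is_valid(payload)
```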
With a TDE-encrypted database, I understand that data is encrypted on a per-page basis as it's written to the database. When I run a query that joins a few tables together and applies some filtering to those tables, when does the decryption occur?
Does it have to decrypt the tables first and then perform the joins and filtering, or is it able to do the joining and filtering on the encrypted data and then just decrypt the results?
From MSDN:
Encryption of the database file is performed at the page level. The pages in an encrypted database are encrypted before they are written to disk and decrypted when read into memory.
You need to understand how the buffer pool works. The buffer pool (BP) is a cache of the data on disk. Queries always read data from the BP and write changes to the BP (a simplified explanation). Encryption occurs when data is transferred from the BP to disk, and decryption occurs when data is transferred from disk to the BP. Read Understanding how SQL Server executes a query for details on how all this works.
It appears that the decryption is performed as the pages are read from disk. Note that only data at rest (saved to disk) is considered protected by TDE. Once in memory, the data is no longer protected by TDE.
TDE and Decryption
TDE is designed to protect data at rest by encrypting the physical data files rather than the data itself. This level of protection prevents the data and backup files from being opened in a text editor to expose the file’s contents.
TDE encryption occurs prior to writing data to disk, and the data is decrypted when it is queried and recalled into memory. This encryption and decryption occurs without any additional coding or data type modifications; hence its transparency. Once the data is recalled from disk into memory, it is no longer considered to be at rest; it has become data in transit, which is beyond the scope of this feature. As such, alongside TDE, you should consider applying additional supporting layers of protection to your sensitive data to ensure complete protection from unauthorized disclosure. For example, you may wish to implement, in addition to TDE, encrypted database connections, cell-level encryption, or one-way encryption. For any additional data-in-transit protection required outside the database, you may need to consult with, or defer to, your Network Administration team.
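Not required to answer the timing question, but if you want to confirm which databases on an instance are TDE-encrypted (and watch an encryption scan progress), you can query sys.dm_database_encryption_keys. The rough sketch below uses pyodbc; the connection string is a placeholder.

```python
# Rough sketch: list TDE encryption state per database via the
# sys.dm_database_encryption_keys DMV. The connection string is a placeholder.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("""
    SELECT DB_NAME(database_id) AS database_name,
           encryption_state,      -- 3 = encrypted, 2 = encryption in progress
           percent_complete
    FROM sys.dm_database_encryption_keys;
""")
for name, state, pct in cursor.fetchall():
    print(name, state, pct)
```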
Hello I have two blobs in my account:
Blob1
Blob2
Blob2 is empty; how can I take all the containers from Blob1 and move them to Blob2?
I am doing this because I would like to use a different subscription to help save some money. It doesn't seem like it's possible any other way.
This is all under the same Windows Live account.
Thank you!
I am glad to hear that Azure Support was able to reassign your subscription. In the future, if you would like to copy Azure Storage blobs from one account to another, you can use the Copy Blob REST API. If you are using the Azure Storage Client Library, the corresponding method is ICloudBlob.StartCopyFromBlob. The Blob service copies blobs on a best-effort basis, and you can use the value of the x-ms-copy-id header to check the status of a specific copy operation.
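The answer references the older .NET client library method; as an illustration of the same Copy Blob operation with the current Python SDK (azure-storage-blob), here is a rough sketch. Connection strings, the container name, and the source SAS token are placeholders.

```python
# Illustrative sketch of a cross-account, server-side blob copy using the
# current Python SDK; the underlying REST operation is Copy Blob.
# Connection strings, container name, and SAS token are placeholders.
from azure.storage.blob import BlobServiceClient

source = BlobServiceClient.from_connection_string("<source-connection-string>")
dest = BlobServiceClient.from_connection_string("<destination-connection-string>")

source_container = source.get_container_client("mycontainer")
dest_container = dest.get_container_client("mycontainer")
dest_container.create_container()

for blob in source_container.list_blobs():
    # The source must be readable by the destination service; append a SAS
    # token if the source container is not public.
    source_url = f"{source_container.url}/{blob.name}?<source-sas-token>"
    copy = dest_container.get_blob_client(blob.name).start_copy_from_url(source_url)
    print(blob.name, copy["copy_id"], copy["copy_status"])  # asynchronous, best-effort copy
```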