Azure Data Factory with Integration Runtime - Delete (or move) file after copy - azure-data-factory-2

I have an on-premises server with the Microsoft Integration Runtime installed.
In Azure Data Factory V2 I created a pipeline that copies files from the on-premises server to blob storage.
After a successful transfer I need to delete the files on the on-premises server. I cannot find a solution for this in the documentation. How can this be achieved?

Azure Data Factory recently introduced a Delete Activity that can delete files or folders from both on-premises and cloud storage stores. You can chain it after the Copy activity (on success) so the source files are only removed once the copy has completed.

You also have the option to call Azure Automation using a webhook from the Web activity. In Azure Automation you can write a PowerShell or Python script that runs on a Hybrid Runbook Worker to delete the file from the on-premises server; a sketch of such a runbook is shown below. You can read more on this here: https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker
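Purely for illustration, here is a minimal PowerShell runbook sketch, assuming the Web activity POSTs a JSON body containing the file path and that the runbook is configured to run on the Hybrid Runbook Worker group for that server (the runbook name, parameter and filePath property are assumptions, not something prescribed by Data Factory):

# Hypothetical runbook: deletes a file whose path is supplied by the Data Factory Web activity.
param (
    [object] $WebhookData
)

if ($null -eq $WebhookData) {
    throw "Start this runbook via its webhook (e.g. from a Data Factory Web activity)."
}

# Assumes a request body such as {"filePath": "D:\\exports\\file1.csv"}
$body     = $WebhookData.RequestBody | ConvertFrom-Json
$filePath = $body.filePath

# Runs on the Hybrid Runbook Worker, so the path is resolved on the on-premises server.
if (Test-Path -Path $filePath) {
    Remove-Item -Path $filePath -Force
    Write-Output "Deleted $filePath"
} else {
    Write-Output "Nothing to delete: $filePath was not found."
}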
Another, simpler option is to schedule a script on the server with Windows Task Scheduler that deletes the files. Make sure the script is scheduled to run after Data Factory has copied the files to the blob, and that's it!
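As a rough sketch of that idea (the folder, schedule and task name below are assumptions), the cleanup could be registered from PowerShell with the ScheduledTasks module on the server:

# Delete the already-transferred export files with a scheduled PowerShell command.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument '-NoProfile -Command "Remove-Item -Path D:\exports\*.csv -Force"'

# Run nightly at 02:00, i.e. after the Data Factory copy has finished.
$trigger = New-ScheduledTaskTrigger -Daily -At "02:00"

Register-ScheduledTask -TaskName "CleanUpTransferredFiles" -Action $action -Trigger $trigger `
    -User "SYSTEM" -RunLevel Highest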
Hope this helped!

If you are simply moving the file, you can use a Binary dataset in a copy activity. This combination makes a checkbox setting visible that, when enabled, automatically deletes the source file once the copy operation completes. This is a little nicer because you do not need an extra Delete activity, and the file is "moved" only if the copy operation succeeds.

Related

OperationalError: Attempt to Write A ReadOnly Database on Google Cloud Application

Recently, I have been trying to deploy an interactive Google App Engine app that writes to a SQLite database. It works fine when running the app locally, but when running it through the server, I receive the error:
OperationalError: attempt to write a readonly database
I tried changing the permissions on my .db and .sql files, but no luck.
Any advice would be greatly appreciated.
You can try changing the permissions on the directory and checking that the .sqlite file exists and is writable.
But generally speaking it is not a good idea to rely on disk data when working on App Engine, as disk storage is ephemeral (unless you are using persistent disks on the flexible environment); even then, it is better to use a cloud database solution.
App Engine has a read-only file system, i.e. no files can be modified. It does, however, provide a /tmp/ folder to store temporary files, as the name suggests. That folder is actually backed by RAM, so it is not a good idea if the database is huge.
On app startup you can copy your original database file to the /tmp/ folder and use it from there afterwards.
This works. However, all changes to the database are lost when the app's nodes scale to 0. Each node of the app has its own copy of the database, and the data is not shared between nodes. If you need the data to be shared between app nodes, it is better to use Cloud SQL.

Restore from an Azure Storage blob downloaded through PowerShell failing

We receive weekly FULL and daily DIFF backups from our hosted ERP quoting system provider. We are using PowerShell code and a task to download the most recent file from a blob container to a local server location we use for our backups.
When I download the files from Azure Storage Explorer manually and run the restore job, it works fine. When I run the restore job against the file downloaded by PowerShell, I get an error:
.bak' is incorrectly formed and can not be read.
I cannot figure out why this is happening. Has anyone run into this and fixed it?
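The PowerShell in question isn't shown, but purely for illustration, a minimal sketch of downloading the newest .bak with the Az.Storage module might look like this (the account, container and destination names are assumptions):

# Connect to the storage account (key-based context; a SAS token would also work).
$ctx = New-AzStorageContext -StorageAccountName "<storageAccountName>" -StorageAccountKey "<storageAccountKey>"

# Pick the most recently modified blob in the backup container.
$latest = Get-AzStorageBlob -Container "erp-backups" -Context $ctx |
    Sort-Object -Property LastModified -Descending |
    Select-Object -First 1

# Download it to the local backup folder.
Get-AzStorageBlobContent -Blob $latest.Name -Container "erp-backups" `
    -Destination "D:\Backups\" -Context $ctx -Force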

SSIS package to run script file on Azure

I want to create an SSIS package which will first get a .sql script file from Azure Blob storage and then execute it against Azure SQL Server. I am able to do it when using local storage, that is, when executing a local file rather than one from Azure storage.
Please help me!!!
First, you're going to want to set up your Azure storage with Azure File Storage. That way the machine running your SSIS package will be able to use the file storage like a mapped network drive.
Once you've got that set up, the first step in your package should be an Execute Process Task. This task should be configured so that it runs a simple batch file (.bat) that reconnects to the file share you set up in Azure File Storage. Here is a basic example of what would be in this batch file:
rem Re-map the drive only if it is not already connected
if not exist e:\* (
    net use e: \\<storageAccountName>.file.core.windows.net\<fileContainerName> /u:<storageAccountName> <primary access key for storage account>
)
After this is run, your SSIS package would then be able to access your .sql files stored on this share.
Simply pick up the file, read its contents, and execute the statements contained in the .sql file (for instance with an Execute SQL Task that reads from a file connection).
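If you want to prototype that last step outside of SSIS first, a rough PowerShell sketch (assuming the SqlServer module; the share path, server, database and credentials below are placeholders) could be:

# Read the whole .sql file from the mapped Azure File Storage share.
$sql = Get-Content -Path "e:\scripts\deploy.sql" -Raw

# Run it against the Azure SQL database.
# For scripts that contain GO batch separators, -InputFile is the safer option.
Invoke-Sqlcmd -ServerInstance "<yourServer>.database.windows.net" -Database "<yourDatabase>" `
    -Username "<sqlUser>" -Password "<sqlPassword>" -Query $sql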

How to back up an Azure cloud service?

Is there a way I can back up my Azure cloud service (a backup of the .pkg and .cscfg files)?
We have an existing Azure cloud service deployed a year ago. We now have neither the source code of the old version nor the setup files (the .pkg and .cscfg files). We want to create a backup of the current cloud service. We created a new version of our cloud service and tried to do a VIP swap, which didn't work ("Windows Azure cannot perform a VIP swap between deployments that have a different number of endpoints.") because our new version has many changes that are not compatible with the old version.
I need to find a way to take a backup of the .pkg and .cscfg files from the existing deployment in the cloud.
Any suggestions/workarounds for this situation?
It looks like there is a Get Package operation in the Service Management API that can retrieve the .cscfg and .cspkg files from a cloud service deployment. See http://msdn.microsoft.com/en-us/library/windowsazure/jj154121.aspx
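As a rough PowerShell sketch of calling that operation (the subscription ID, service name, container URI and certificate thumbprint are placeholders; the operation copies the .cspkg and .cscfg into the blob container you point it at):

# Management certificate used to authenticate against the Service Management API.
$cert = Get-Item "Cert:\CurrentUser\My\<managementCertThumbprint>"

# Blob container (in the same subscription) that will receive the package files.
$containerUri = [uri]::EscapeDataString("https://<storageAccount>.blob.core.windows.net/backups")

$uri = "https://management.core.windows.net/<subscriptionId>/services/hostedservices/<cloudServiceName>" +
       "/deploymentslots/production/package?containerUri=$containerUri&overwriteExisting=true"

# Get Package is a POST and requires the x-ms-version header.
Invoke-WebRequest -Uri $uri -Method Post -Certificate $cert -Headers @{ "x-ms-version" = "2012-03-01" }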
You could also try Cerebrata's Azure Management Studio (AMS). AMS contains a "Save Package" button in the Cloud (Hosted) Service deployment. I'm assuming AMS is using the same API to download the .cscfg and .cspkg files. Done in a few button clicks. :)

How to use a Post deployment script to bulk insert a CSV on remote server

I am using a post-deployment script in my SQL Server database project to bulk insert data. This works great if I publish to a local database, but if I publish to a remote database, the CSV I am bulk inserting from obviously will not exist on the remote server. I have considered using a command-line copy of the CSV to a shared folder, but this raises security concerns, as anyone with access to the folder could tamper with a deployment.
How should I be using post-deployment scripts to copy data to a remote server? A CSV is easier to maintain than a bunch of inserts, so I would prefer using a CSV. Has anyone run into and solved this issue?
The best way is the one you mentioned: command-line copy it to a secured shared folder and BCP it in from there.
From that point on the security of the folder depends on network services.
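As a minimal sketch of that approach in PowerShell (the share, server, table and credentials are placeholders, and it assumes the bcp utility is available on the machine running the deployment):

# Copy the CSV to the secured share as part of the deployment.
Copy-Item -Path '.\SeedData\Customers.csv' -Destination '\\deployserver\secure-deploy$\Customers.csv'

# Bulk load it into the remote database with bcp (character mode, comma-delimited).
bcp.exe "YourDb.dbo.Customers" in '\\deployserver\secure-deploy$\Customers.csv' `
    -S "<remoteServer>" -U "<deployUser>" -P "<password>" -c -t","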