Azure Data Factory Failing with Bulk Load

I am trying to extract data from an Azure SQL Database; however, I'm getting the following error:
Operation on target Copy Table to EnrDB failed: Failure happened on 'Source' side. ErrorCode=SqlOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed with the following error: 'Cannot bulk load because the file "https://xxxxxxx.dfs.core.windows.net/dataverse-xxxxx-org5a2bcccf/appointment/2022-03.csv" could not be opened. Operating system error code 12(The access code is invalid.).
You might think this is a permission issue, but if you look at error code 12 you will see the issue is related to bulk load. A related answer can be found here:
https://learn.microsoft.com/en-us/answers/questions/988935/cannot-bulk-load-file-error-code-12-azure-synapse.html
I thought I might be able to fix the issue by selecting the Bulk lock option (see image), but I still get the error.
Any thoughts on how to resolve this issue?

The error refers to the source side (2022-03.csv), so I am not sure why you are making changes on the sink side. As explained in the thread you referred to, it appears the CSV file is being updated by some other process once your pipeline starts executing. Referring back to the same thread: https://learn.microsoft.com/en-us/answers/questions/988935/cannot-bulk-load-file-error-code-12-azure-synapse.html
The changes suggested below should be made on the pipeline/process that is writing to 2022-03.csv.
[![enter image description here][1]][1]
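If you cannot change the writing process, one way to sidestep the race is to stage a copy of the file before the load, so the pipeline reads a stable snapshot instead of a file Synapse Link may still be rewriting. Below is a minimal Python sketch using the azure-storage-file-datalake package; the account and container names are taken from the error message, and the staging path is invented for illustration:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder account/container from the error message above.
service = DataLakeServiceClient(
    account_url="https://xxxxxxx.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("dataverse-xxxxx-org5a2bcccf")

# Read the live file once...
src = fs.get_file_client("appointment/2022-03.csv")
data = src.download_file().readall()

# ...and write it to a staging location the copy activity can load safely.
dst = fs.get_file_client("staging/appointment/2022-03.csv")
dst.upload_data(data, overwrite=True)
```

You could run a step like this (or an equivalent ADF Copy activity) ahead of the bulk load, then point the source dataset at the staging path.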
HTH
[1]: https://i.stack.imgur.com/SSzwt.png

Related

SSIS ERROR: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020

I have problems with an SSIS process (actually, the same problem occurs for two different processes). We are doing some ETL work using SSIS. We have a Business Intelligence project that executes without errors from Visual Studio. However, when it's deployed on the server and scheduled as a job, it fails with errors like:
INTRASTAT_Towar:Error: SSIS Error Code
DTS_E_PROCESSINPUTFAILED. The ProcessInput
method on component "Union All 3" (876) failed with error
code 0xC0047020 while processing input "Union All Input
2" (916). The identified component returned an error from
the ProcessInput method. The error is specific to the
component, but the error is fatal and will cause the Data
Flow task to stop running. There may be error messages
posted before this with more information about the failure.
INTRASTAT_Towar:Error: SSIS Error Code
DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput
method on istat_dekl_wynik_poz SELECT returned error
code 0xC02020C4. The component returned a failure
code when the pipeline engine called PrimeOutput(). The
meaning of the failure code is defined by the component,
but the error is fatal and the pipeline stopped executing.
There may be error messages posted before this with
more information about the failure.
INTRASTAT_Towar:Error: The attempt to add a row to the
Data Flow task buffer failed with error code 0xC0047020.
The other returned errors are similar; sometimes instead of 0xC0047020 there is an unspecified error. Errors occur only on this one table, which has a lot of different stuff inside the data flow task (unions, multicasts, a conditional split). Most other data flows have just a source, a destination and a transformation, and they are not causing any problems. I've been advised to try manipulating the DefaultBufferMaxRows and DefaultBufferSize properties of the data flow task, but after doing some research I don't believe that will solve the issue, as they are currently at their default values. Any suggestions?
Well, I managed to work around the issue with my packages. I was using the 2012 SSIS version, but I executed the packages in a 32-bit environment in BIDS. The server actually executed in 64-bit, and for some projects that was the problem. One checkbox in the step properties to make it execute in a 32-bit environment, and I solved the problem we had been fighting for weeks.
I was also facing the same issue; I just did the step below.
Open the Data Flow tab >> click anywhere except a task, then right-click >> Properties >> change **ForceExecutionValueType** to **Int64**.
Watch out for indexes on the destination tables - especially unique ones, because these will throw an error that doesn't pinpoint the problem.
For people who stumbled here with the same error: if you are trying to copy data from one Azure SQL database to another using the SQL Server Import and Export Wizard, use the 64-bit version.
From your Windows search, open SQL Server 2019 Import and Export Data (64-bit).

Cannot process data in separate locations

I am trying to load a CSV file into BigQuery from Google Cloud Storage via the web UI.
But sometimes an error occurs.
The error message is "Cannot process data in separate locations".
What does it mean?
And how can I fix it?
This was an unintended consequence of an update to the BigQuery service. We'll provide additional follow-up on this bug:
https://code.google.com/p/google-bigquery/issues/detail?id=270
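For what it's worth, when it isn't a service bug this error usually means the GCS bucket and the BigQuery dataset really are in different locations, since a load job requires them to be co-located. A quick check with the Python clients (project, dataset, and bucket names here are made up):

```python
from google.cloud import bigquery, storage

bq = bigquery.Client()
gcs = storage.Client()

dataset = bq.get_dataset("my_project.my_dataset")  # hypothetical IDs
bucket = gcs.get_bucket("my-bucket")

# A load job can only read from a bucket co-located with the dataset.
print("dataset location:", dataset.location)
print("bucket location: ", bucket.location)
```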

SQL Server - insufficient memory (mscorlib) / 'the operation could not be completed'

I have been working on building a new database. I began by building the structure within the database it is replacing, populating it as I created each set of tables. Once I had made additions, I would drop what had been created and execute the code to build the structure again, plus a separate file to insert the data. I repeated this until the structure and content were complete, to ensure each stage was as I intended.
The insert file is approximately 30 MB with 500,000 lines of code (I appreciate this is not the best way to do this, but for various reasons I cannot use alternative options). The final insert completed and took approximately 30 minutes.
A new database was created for me; the structure executed successfully, but the data would not insert. I received the first error message shown below. I have looked into this and it appears I need to use the sqlcmd utility to get around it, although I find this odd as it worked in the other database, which is on the same server and has the same autogrow settings.
However, when I attempted to save the file after this error, I received the second error message seen below. When I selected OK it took me to my file directory, as it would if I had selected Save As; I tried saving in a variety of places but received the same error.
I attempted to copy the code into Notepad to save my changes, but the code would not copy to the clipboard. I accepted I would lose my changes and rebooted my system. If I reopen this file and attempt to save it, I receive the second error message again.
Does anyone have an explanation for this behaviour?
Hm. This looks more like an issue with SSMS and not the SQL Server DB/engine.
If you've been doing this a few times, possibly Management Studio ran out of RAM?
Have you tried breaking the INSERT into batches/smaller files?
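To make that concrete, here is a rough Python sketch that splits the big script into chunks and feeds each one to the sqlcmd utility mentioned above, so neither SSMS nor the clipboard has to hold the whole 30 MB file at once. Server, database, and file names are placeholders, and it assumes roughly one statement per line:

```python
import subprocess

BATCH = 5000  # lines per chunk; tune to taste

with open("insert.sql", encoding="utf-8") as f:
    lines = f.readlines()

for i in range(0, len(lines), BATCH):
    # Write one chunk to a scratch file...
    with open("chunk.sql", "w", encoding="utf-8") as out:
        out.writelines(lines[i:i + BATCH])
    # ...then run it: -S server, -d database, -i input file, -b stop on error.
    subprocess.run(
        ["sqlcmd", "-S", "myserver", "-d", "mydb", "-i", "chunk.sql", "-b"],
        check=True,
    )
```

If the script contains multi-line statements, split on GO separators instead of raw line counts.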

“Error: Connection error. Please try again.” when uploading a table

I am trying to upload a JSON file through the web UI, but I am receiving this generic error message: Error: Connection error. Please try again. Can you please let me know what's wrong?
Job Id = job_VmEiQY0xYPWjjLa-Knaz-C3INNA
Thanks.
It looks like your job encountered a transient error in one of our data centers that prevented us from loading your data into BigQuery. This problem appears to be resolved as of 2014-02-12.
As always, we recommend that you write client code that retries on error. We also recommend that you generate your own job IDs when loading data. That way, if you encounter an error, you can retry with the same job ID and be assured that at most one of your attempts to load the data will succeed.
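As a hedged sketch of that advice with the google-cloud-bigquery Python client (table, bucket, and job ID are invented): derive the job ID from the batch rather than a random value, so a retry reuses the same ID and the service rejects a duplicate load.

```python
from google.cloud import bigquery
from google.api_core.exceptions import Conflict

client = bigquery.Client()

# Deterministic per batch: a retry of the same batch reuses the same ID,
# so at most one load of this data can ever succeed.
job_id = "load-my_table-2014-02-12"

try:
    job = client.load_table_from_uri(
        "gs://my-bucket/data.json",
        "my_project.my_dataset.my_table",
        job_id=job_id,
        job_config=bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        ),
    )
    job.result()  # raises on failure, so a wrapper can retry
except Conflict:
    # The job ID already exists: a previous attempt got through, so
    # inspect that job's state instead of loading the data again.
    print("load already submitted; check the existing job")
```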

SQL Server merge replication error "The schema script 'xxx.sch' could not be propagated to the subscriber"

I recently made some changes to a working publication under Merge replication which seem to have broken synchronization for the subscriber.
The error message I'm getting is:
The schema script 'ftdb_arcmessagefac64b65_76.sch' could not be propagated
to the subscriber. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
Get help: http://help/MSSQL_REPL-2147201001
The process could not read file 'D:\Program Files\Microsoft SQL
Server\MSSQL.1\MSSQL\ReplData\unc\xxx\20120701000581\xxxx.sch' due to OS error 3. (Source: MSSQL_REPL, Error number: MSSQL_REPL0)
Get help: http://help/MSSQL_REPL0
I've looked in the unc directory, and there's no directory 20120701000581, but there's a directory 20120706110881 from when the snapshot of the publication was updated.
I've tried reinitializing the subscription and recreating the snapshot, but the process still fails, expecting the 20120701000581 directory.
I haven't tried deleting and recreating the subscription yet, as I would rather get to the bottom of the issue before trying this. Can someone explain what may be happening and how to fix this?
This happens because the subscriber can't locate the snapshot, so you can share the snapshot folder on your network using a UNC path:
http://msdn.microsoft.com/en-us/library/ms151151.aspx
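If you prefer to script it, the publication's snapshot location can be pointed at the UNC share with sp_changemergepublication; a hedged sketch via pyodbc, with the server, database, publication, and share names all made up:

```python
import pyodbc

# Connect to the publisher database (connection details are placeholders).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=publisher;DATABASE=ftdb;Trusted_Connection=yes"
)
cur = conn.cursor()

# Point the publication's alternate snapshot folder at the network share.
# Changing it invalidates the current snapshot, hence the force flag.
cur.execute(
    "EXEC sp_changemergepublication "
    "@publication = N'MyPublication', "
    "@property = N'alt_snapshot_folder', "
    "@value = N'\\\\fileserver\\ReplSnapshots', "
    "@force_invalidate_snapshot = 1"
)
conn.commit()
```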
After some further investigation, it turned out that I was able to make some changes to the article properties in the subscription, and once the snapshot was rebuilt, the subscription resynchronization ran successfully.
View Snapshot Agent Status -> Monitor -> right-click the errored subscription and choose Reinitialize.
Good luck