Data Factory v2 copy from FTP fails intermittently - azure-data-factory-2

I am trying to binary-copy a few .ZIP files sequentially from FTP to ADLS. Sometimes it fails, sometimes it doesn't; it's really strange to me. I get this type of error only when working with this external FTP server.
Error type:
{
  "errorCode": "2200",
  "message": "Failure happened on 'Sink' side. ErrorCode=UserErrorFailedToReadFtpData,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to read data from ftp: The remote server returned an error: (530) Not logged in.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The remote server returned an error: (530) Not logged in.,Source=System,'",
  "failureType": "UserError",
  "target": "Copy from FTP"
}
The connection is good; as I said, sometimes it copies the files without any errors. This is a simple activity, so I don't know what can cause this type of error.
Sometimes it throws an error after copying 50 MB to ADLS.
Can it be related to the FTP server?

A possible root cause:
Your FTP server does not support SSL, but you enabled SSL in the FTP linked service. If so, you can disable SSL in the FTP linked service. Check out the FTP properties here: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-ftp-connector
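If you want to confirm what the server actually supports before changing the linked service, you can probe it from any machine. A minimal sketch using Python's ftplib (host and credentials are placeholders; the fallback mirrors disabling SSL in the linked service):

from ftplib import FTP, FTP_TLS

HOST = "ftp.example.com"  # placeholder: the external FTP server
USER, PASSWORD = "user", "secret"  # placeholder credentials

# Try an explicit-TLS login first; if the server rejects it, fall back
# to a plain FTP login, which corresponds to SSL being disabled in the
# ADF linked service.
try:
    ftps = FTP_TLS(HOST, timeout=30)
    ftps.login(USER, PASSWORD)
    ftps.prot_p()  # also encrypt the data channel
    print("Server accepts TLS; keeping SSL enabled should be fine.")
    ftps.quit()
except Exception as exc:
    print(f"TLS login failed ({exc}); trying plain FTP...")
    ftp = FTP(HOST, timeout=30)
    ftp.login(USER, PASSWORD)
    print("Plain FTP login succeeded; consider disabling SSL in the linked service.")
    ftp.quit()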

Telemetry shows the Copy activity can pass or fail with the same payload, so it looks like a transient failure. But it is hard to determine the root cause from the error message ("530 Not logged in"). What I suspect is that Copy hit throttling or a similar transient issue on the FTP server, which blocks the read request mid-transfer.
To troubleshoot further, could you check on the FTP server side whether there is a detailed failure log? It would also be a great help if I could get a test account to examine the FTP server's behavior and try to reproduce the issue. Please let me know if that is possible.
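In the meantime, a retry policy on the copy activity usually masks this kind of transient failure, and the behavior is easy to reproduce outside ADF. A small sketch of the same download with a retry on a mid-transfer 530 (host, file name, and credentials are placeholders):

import time
from ftplib import FTP, error_perm

HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"  # placeholders
REMOTE_FILE = "archive.zip"  # placeholder: one of the .ZIP files

def download_with_retry(retries=3, backoff=10):
    for attempt in range(1, retries + 1):
        try:
            with FTP(HOST, timeout=60) as ftp:
                ftp.login(USER, PASSWORD)
                with open(REMOTE_FILE, "wb") as out:
                    ftp.retrbinary(f"RETR {REMOTE_FILE}", out.write)
            return  # success
        except error_perm as exc:
            # A "530 Not logged in" in the middle of a transfer suggests the
            # server dropped or throttled the session; back off and retry.
            if attempt == retries or not str(exc).startswith("530"):
                raise
            time.sleep(backoff * attempt)

download_with_retry()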
Regards,
Gary

Related

User login failed for <user> in production but same connection string works from local client

I have an Azure database set up, and I have included the connection string below as I believe it should be. The problem is that when I try to run my client app in production, the server returns a 500 internal error. After investigating through remote debugging, I find that it's saying
"Login failed for user '<my user_id>'"
My appsettings.json
My connection string provided at runtime when deploying my API
Don't worry about the blacked-out portions... I've verified those to be the same in both.
Now, when running everything locally and calling the exact same database with that very connection string, everything works as it should; I can add records to that production Azure database just fine. But as soon as I try doing the same from my client app in production, I get the dreaded error mentioned above.
Can anyone tell me what might be happening? I've been over and over this and it's driving me mad. I've even gone as far as changing the connection string to be Server=... I've made sure to append the # to the user_id. I believe I've tried just about everything I could find that wasn't 8 years old, including searching similar issues here; nothing seems to be quite like my issue exactly.
If you need more information let me know and I'll update my question.
Thanks!
EDIT: Adding this to show that I've already added all of the outbound IPs from my API App Service to my SQL Server firewall. Can someone tell me if all my settings look good?
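For reference, the connection string for Azure SQL usually takes roughly the shape below, and it can be tested from the production side itself. A minimal sketch assuming pyodbc and the Microsoft ODBC driver are available (server, database, and credentials are placeholders):

import pyodbc

# Placeholders throughout: substitute your server, database, and credentials.
# Azure SQL logins are traditionally given as <user_id>@<servername>.
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"
    "Database=mydb;"
    "Uid=my_user@myserver;"
    "Pwd=my_password;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)

# Run this from the production environment (e.g. the App Service's
# console) so the attempt goes through the same outbound IPs and
# firewall rules your deployed API uses.
conn = pyodbc.connect(conn_str)
print("Connected as:", conn.execute("SELECT SUSER_SNAME();").fetchone()[0])
conn.close()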

Encrypted connection throwing error "cannot find the file specified"

Any ideas welcome:
HANA 2.0.44 with encrypted connections from Windows clients using ODBC driver 2.5.105 (with trust to the server): we observe sporadic errors using a DSN-based connection:
[SAP AG][LIBODBCHDB DLL][HDBODBC] Communication link failure;-10709 Connection failed (RTE:[1000013] The system cannot find the file specified. (server:port))
In some situations the errors correlate with the privileges of the user. In some situations the error can be removed by testing the connection within the ODBC Manager. Sometimes there appears to be a correlation with the reuse of the same connection, and sometimes this works without problems. The error can also be reproduced with DSN-less (driver-based) connections.
Any ideas how to find a solution?
As the error message does not make clear whether the issue occurs on the client or the server side of the communication, the investigation should look at both ends.
For the server-side the nameserver and indexserver trace files are the ones to check.
For the client-side I think the best option here is to use the ODBC trace option of the HANA ODBC driver.
The tool to use here is called hdbodbc_cons (located in the folder of the HANA client), and the linked documentation explains the different options in detail.
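To reproduce the DSN-less case under trace from a script, something like the following can help. A sketch assuming pyodbc and the HANA client's HDBODBC driver (host, port, and credentials are placeholders; the encryption properties are the usual HANA ODBC ones, but verify them against your client version):

import pyodbc

# DSN-less (driver-based) connection mirroring the failing setup.
# Placeholders: host, port, user, password.
conn_str = (
    "DRIVER={HDBODBC};"
    "SERVERNODE=hanahost:30015;"  # placeholder server:port
    "UID=myuser;PWD=mypassword;"
    "ENCRYPT=TRUE;sslValidateCertificate=false;"
)

# With tracing enabled beforehand via hdbodbc_cons, a failing attempt
# here leaves the client-side details of the -10709 error in the trace file.
try:
    conn = pyodbc.connect(conn_str, timeout=10)
    print(conn.execute("SELECT * FROM DUMMY").fetchone())
    conn.close()
except pyodbc.Error as exc:
    print("Connection failed:", exc)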

Azure DevOps pipeline SSH connection error

Is there a problem with service connections?
After the upgrade I cannot create a private key in the SSH service connection.
Formerly the text field had an upload option; now the field is of type password rather than file upload.
I tried to put the key in this field, but it does not work; I get the following pipeline error:
Error: Failed to connect to remote machine. Verify the SSH service connection details. Error: Cannot parse privateKey: Unsupported key format.
Can you help me?
This is the new service connection experience we rolled out recently; it is still in a Preview state. Apologies for the unstable experience.
As a workaround, go to Preview features --> turn the New service connections experience off to use the old, stable service connection UI temporarily:
The product group has picked up this issue and is working on fixing the unstable preview experience.
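Independent of the preview toggle, the "Cannot parse privateKey: Unsupported key format" message usually points at the key body itself, for example a newer OpenSSH-format key being pasted where a classic PEM key is expected. A sketch to check what you are pasting, assuming paramiko is installed (the key path is a placeholder):

import paramiko

KEY_PATH = "id_rsa"  # placeholder: the key you paste into the service connection

# A classic PEM RSA key begins with "-----BEGIN RSA PRIVATE KEY-----";
# a header of "-----BEGIN OPENSSH PRIVATE KEY-----" is often rejected
# by older parsers with "Unsupported key format".
with open(KEY_PATH) as f:
    print("Header:", f.readline().strip())

try:
    paramiko.RSAKey.from_private_key_file(KEY_PATH)
    print("Key parses as an RSA private key.")
except paramiko.SSHException as exc:
    print("Key failed to parse:", exc)

If the header turns out to be the OpenSSH one, converting the key to PEM (ssh-keygen's -p -m PEM options) is a common fix.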

SVN Repository - can checkout but commit gives a timeout

I have configured an SVN server and I can check out with no issue. But when I try to commit, it takes a long time and gives a "connection timed out" error in the TortoiseSVN client. Does anyone have an idea how to resolve this or where to check? It seems like a permission issue, but I believe the user that I used has the required write permission on the server.
Error
It seems like a permission issue, but I believe the user that I used has the required write permission on the server.
This is not a permission issue. You have to check firewalls and your server configuration.
Does anyone have an idea how to resolve this or where to check?
No. This is just a timeout error. You should revise your server configuration and check firewalls. There is a chance that your firewall rejects certain HTTP requests.
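To narrow it down, first check that the server's port is reachable from the client at all; if it is, the block is more likely method-specific. A minimal sketch (host and port are placeholders):

import socket

HOST, PORT = "svn.example.com", 443  # placeholders: your server and port (80/443/3690)

# A checkout that works while commits time out often points at a firewall
# or proxy blocking specific WebDAV methods used by SVN-over-HTTP commits
# (e.g. MKACTIVITY, MERGE), rather than the port being closed outright.
try:
    with socket.create_connection((HOST, PORT), timeout=10):
        print(f"TCP connection to {HOST}:{PORT} succeeded.")
except OSError as exc:
    print(f"TCP connection failed: {exc}")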

Weblogic Bridge Message - Failure of server APACHE bridge:

In this case, when a client tries to access a particular URI, they get a SUCCESS response for GET, but for the POST request they receive a 500 error, which is nothing but INTERNAL SERVER ERROR.
Please look at the error below:
Weblogic Bridge Message
Failure of server APACHE bridge:
Cannot open TEMP post file '/tmp/_wl_proxy/_post_1818_8' for POST of 3978 bytes
Weblogic Bridge Message
Failure of server APACHE bridge:
Internal Server failure, APACHE plugin. Cannot continue.
Eventually this was resolved by giving 777 permissions to /tmp/_wl_proxy, and the client was able to access the page successfully.
If this is a permission issue, then Apache should throw a 403 error, but I don't know why it was throwing a 500 error.
If there is an internal server error, then both the GET and POST responses should be 500. So if anyone can answer this, it would be a great learning experience. Thanks!
You may not want to give write-to-all (777) permission on the /tmp/_wl_proxy directory. Check which user ID Apache is running under and give write permission only to that user ID. (The 500 rather than 403 is plausibly because the failure happens when the WebLogic plugin writes its own temp file for the POST body, not when Apache authorizes the client's request, so the plugin can only report an internal failure; a GET has no body to spool, which is why it succeeds.)
This could also happen when the tmp directory is full.
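To act on that advice, compare the directory's owner with the user the Apache workers run as. A sketch for a Linux host (the directory path is from the error above; the process name "httpd" is an assumption and may be "apache2" on your distribution):

import os
import pwd
import stat
import subprocess

WL_PROXY_DIR = "/tmp/_wl_proxy"

# List the users the Apache worker processes run as.
workers = subprocess.run(
    ["ps", "-o", "user=", "-C", "httpd"],  # assumption: process name "httpd"
    capture_output=True, text=True,
).stdout.split()

st = os.stat(WL_PROXY_DIR)
print(f"{WL_PROXY_DIR}: owner={pwd.getpwuid(st.st_uid).pw_name}, "
      f"mode={stat.filemode(st.st_mode)}")
print("Apache worker users:", set(workers))
# If the worker user differs from the owner, chown the directory to that
# user instead of opening it up with chmod 777.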
Here is what I did to solve this problem:
Go to the WebLogic management console (http://<host>:7001/console).
Go to the server section; you will probably find that one of your servers' status is not running. This is what I found in my case.
Some features cannot run unless you have administrative privileges, so I advise you to stop all servers and re-run them as an administrator while following your servers' status in the administration console.
If the above doesn't work, something in either your Reports or Forms server configuration is hindering the servers from starting up. In all cases you need to monitor the administration console.