I have a whole bunch of SSIS packages that are failing during the GetDetailListing call within an FTP script component of SSIS jobs. These were all working fine for years until about a day and a half ago; currently they are all failing. We have traced it to the fact that the FTP connection itself is not set to passive mode. We can set this on the FtpClientConnection like so:
FtpClientConnection.UsePassiveMode = True  ' request passive (PASV) mode before calling Connect()
Setting the above before the FTP connection is opened lets it connect and work without error. What I am trying to determine, before going through and fixing all these packages, is what could have changed that would cause all FTPs that were previously not set to passive to now require passive. At first I thought it was some sort of network setting, but I have been unable to determine what that would be. I can't believe that all the FTP locations we were connecting to would have performed a security update at the same time.
Any ideas? I am stumped and have been looking at this for > 8 hours now.
This ended up being our Barracuda server. Once it was rebooted, these failures just disappeared. An automatic update to that server must have been the cause of all these issues.
I have an Azure database set up, and I have included the connection string below as I believe it should be. The problem is that when I try to run my client app in production, the server returns a 500 internal error. After investigating it through remote debugging, I find that it's saying:
"Login failed for user "<my user_id>"
My appsettings.json: [screenshot]
My connection string provided at runtime when deploying my API: [screenshot]
Don't worry about the blacked-out portions; I've verified those to be the same in both.
Now, when running everything locally and calling the exact same database with that very connection string, everything works as it should; I can add records to that production Azure database just fine. But as soon as I try doing the same from my client app in production, I get that dreaded error mentioned above.
Can anyone tell me what might be happening? I've been over and over this and it's driving me mad. I've even gone as far as changing the connection string to be Server=... and I've made sure to append the @servername suffix to the user_id. I believe I've tried just about everything I could find that wasn't 8 years old, including searching similar issues here; nothing seems to be quite like my issue exactly.
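For reference, the canonical Azure SQL connection string shape I've been working from looks like this (a sketch only; the server, database, and user names here are placeholders, not my real values):
Server=tcp:myserver.database.windows.net,1433;Initial Catalog=mydb;User ID=myuser@myserver;Password={your_password};Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;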
If you need more information let me know and I'll update my question.
Thanks!
EDIT: Adding this to show I've already added all of the outbound IPs from my API App Service to my SQL Server firewall. Can someone tell me if all my settings look good? [screenshot]
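(For anyone double-checking the same thing: the server-level firewall rules can also be inspected and added in T-SQL against the master database. A sketch, with a hypothetical rule name and example IPs:)
-- run against the master database of the Azure SQL logical server
EXEC sp_set_firewall_rule @name = N'ApiOutboundIp1',
    @start_ip_address = '203.0.113.10', @end_ip_address = '203.0.113.10';
-- list every server-level rule currently in effect
SELECT name, start_ip_address, end_ip_address FROM sys.firewall_rules;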
On Monday I messed up a database.
We have an application running on a VPS, using cPanel and phpMyAdmin, and I had informed the developers that I would be running some queries on the DB to extract information.
So I ran a few large queries using the "Visual Builder" query tool, and the web application got stuck. The queries wouldn't finish, and even refreshing the page did not work. The website wasn't loading and users couldn't log in. I used WHM to log in as root and kill the queries manually, but after I did this the system was still not running.
Then the database completely freaked out and I got these error messages: [screenshot of the error messages]
After that, the DB somehow fixed itself and the web application was working again. However, we saw that we could not update some jobs or add new jobs in the system; if you pressed the "SAVE" button on a job, the system just gave an "undefined" message.
The developers had a look and discovered this was causing the issue:
[screenshot: MySQL error 1449, the user specified as a definer does not exist]
The devs went ahead and re-added the definer and the issue was resolved. The blacked-out 'user'@'1.0.0.0' is the actual cPanel account username.
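(For anyone in the same spot, re-adding a definer amounts to recreating the user with the exact same user/host pair and re-granting its privileges; a rough sketch with placeholder names:)
-- recreate the missing definer user; name, host, and password are placeholders
CREATE USER 'appuser'@'1.0.0.0' IDENTIFIED BY 'strong-password';
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'1.0.0.0';
-- list views still owned by that definer (format is user@host)
SELECT table_name, definer FROM information_schema.views WHERE definer = 'appuser@1.0.0.0';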
However, this did not last: yesterday evening the exact same situation occurred. The web application was running fine on Tuesday and most of Wednesday, then all of a sudden users couldn't update their jobs again, which means the definer user had been removed once more even though nobody did anything in the database.
Has anyone encountered this issue before? I read this thread on the topic and even though what they say makes sense, I believe the developers did this but the error still occurred.
When I log into phpMyAdmin via cPanel, I get a weird user called "cpses_234ikjih@localhost.com". Does this perhaps have something to do with this error? I believe before the server went crazy, this user was just the name of the cPanel account (for example: "cPanelAccountName@localhost.com").
To summarize your post, what I'm seeing is that you have a MySQL user, the user disappeared, you recreated the user, and it went away again.
There must be some external factor here. Someone could have access to your database and is deleting the user maliciously or out of misunderstanding, there could be a scheduled job, or it could be something to do with your web host.
I'd start by auditing the database accounts, and restricting access as much as possible. Check any interface that's exposed to the web, such as WordPress, Joomla, or other applications.
You should enable logging; there are several degrees of logging that MySQL allows. I think the most useful for you would be the audit log, although honestly I've never used that specifically; you'd enable it to log future events. The binary log may contain a record of what has already occurred.
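A minimal way to turn the general query log on at runtime (assuming MySQL 5.1+ and a user with SUPER; the audit log proper is a separate plugin):
-- log every statement to a file; this can be heavy, so turn it off when done
SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- check whether binary logging is enabled; if it is, mysqlbinlog can show past changes
SHOW VARIABLES LIKE 'log_bin';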
SOLVED
I managed to solve this by changing the MySQL database password and the cPanel account password.
I read a post by someone saying that there was a session file which perhaps stored an old session, and that changing passwords could resolve this. Luckily it did; the error 1449 has not appeared for 5 days now.
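(For reference, the MySQL half of that is a one-liner; user, host, and password below are placeholders, and on versions older than 5.7 SET PASSWORD is the equivalent:)
ALTER USER 'appuser'@'localhost' IDENTIFIED BY 'new-strong-password';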
I'm working on SQL Server 2012 Enterprise and I have a set of SSIS package exports which push data out to text files on a shared network folder. The packages aren't complex and under most circumstances they work perfectly. The problem I'm facing is that they do not work when scheduled, despite reporting that they have succeeded.
Let me explain the scenarios:
1) When run manually from within BIDS, they work correctly; txt files are created and populated with data.
2) When deployed to the SSISDB and run from the Agent job, they also work as expected; files are created and populated with data.
3) When the Agent job is scheduled to run in the evening, the job runs and reports success. The files are created but the data is not populated.
I've checked the reports on the Integration Services Catalogs and compared the messages line by line from the OnInformation events. Both runs report that the Flat File Destination wrote xxxx rows.
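(For anyone repeating the comparison, the same messages can be pulled straight from the catalog views; the execution IDs below are examples:)
-- compare logged messages between the manual and the scheduled execution
SELECT operation_id, package_name, message_source_name, message
FROM SSISDB.catalog.event_messages
WHERE operation_id IN (1234, 1235)  -- example IDs from catalog.executions
ORDER BY operation_id, event_message_id;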
The data is there, and the Agent account has the correct access. I cannot fathom why the job works when started manually but behaves differently when scheduled.
Has anyone seen anything similar? It feels like a very strange bug....
Kind Regards,
James
Make sure that the account you have set up as the proxy for the SSIS task has read/write access to the file.
In my experience, when you run a SQL Agent job manually, it appears to use the context of the user who initiates it in some way; I always assumed it was a side effect of impersonation. It's only when it actually runs on the schedule that everything uses the assigned security rights.
Additionally, I think when the user starts the job, the user is impersonating the proxy, but when the job is run via the schedule, the agent's account is impersonating the proxy. Make sure the service account has the right to impersonate the proxy. Take a look at sp_grant_login_to_proxy and sp_enum_login_for_proxy.
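If the proxy isn't wired up yet, the whole chain looks roughly like this (all names and the password are placeholders; check msdb.dbo.syssubsystems for the exact subsystem name on your build):
USE master;
-- credential for the Windows account the package should actually run as
CREATE CREDENTIAL SsisFileCred WITH IDENTITY = 'DOMAIN\ssis_runner', SECRET = 'P@ssw0rd!';
USE msdb;
EXEC dbo.sp_add_proxy @proxy_name = N'SsisFileProxy', @credential_name = N'SsisFileCred', @enabled = 1;
EXEC dbo.sp_grant_proxy_to_subsystem @proxy_name = N'SsisFileProxy', @subsystem_name = N'SSIS';
-- let the job owner's login run steps under the proxy
EXEC dbo.sp_grant_login_to_proxy @login_name = N'DOMAIN\job_owner', @proxy_name = N'SsisFileProxy';
-- verify which logins may use which proxies
EXEC dbo.sp_enum_login_for_proxy;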
Here's a link that roughly goes through the process:
http://www.mssqltips.com/sqlservertip/2163/running-a-ssis-package-from-sql-server-agent-using-a-proxy-account/
I also recall this video being useful:
http://msdn.microsoft.com/en-us/library/dd440761(v=SQL.100).aspx
I had the same problem with Excel files. It was a permissions issue.
What worked for me was adding the SERVICE account to the folder's security tab. Then the SQL Agent can access the files.
I have an API developed in ColdFusion 9 that continuously searches for items and inserts a record with the outcome of each search into a SQL Server 2008 table, but I'm noticing a lot of entries in my application log for the following error:
Error Executing Database Query.[Macromedia][SQLServer JDBC Driver]Error establishing socket to host and port: X.X.X.X:X. Reason: Too many open files. The specific sequence of files included or processed is: foo.cfm, line: 203
I realise there's not much to go on here but that's all the info I have from the logs.
Anyone have the faintest idea what might be going on?!
I got a similar error from using an old version of Lucene, because Lucene used an old version of Apache Commons IO that would sometimes fail to close the files read by the Lucene index. So every time someone ran a search, a file would be opened and never closed. Eventually we hit the open-file limit, which caused various problems on the server, one of which is that you can't connect to a datasource.
We had to bounce the server a couple of times to release the open files, and then we updated our Lucene software to the latest version.
I believe Lucene is what Solr runs on (the CF index).
This happened on a Linux machine and we were running Java, not ColdFusion (but CF runs on Java).
My problem is this: I am running Oracle 10g on Windows 98 in a virtual machine using VMware on my desktop computer. I can connect as several users (SYS, HR, OE, ...) with SQL Developer (which is on my desktop, not on the virtual machine), but if I don't run any SQL statement for a short while, say about 2 minutes, I lose my connection and get an error like "connection closed" or "IO-fault: connection reset by peer".
Could this have anything to do with the sp_reset_connection?
When I open SQL*Plus on the virtual machine itself, I don't lose the connection at all, even if it has been idle for 30 minutes or longer. So now I'm thinking there could be a problem between the virtual machine and my desktop computer. Before this, it all worked fine.
I tried closing recently installed anti-malware apps without any result.
Does anybody have an idea what I could do to fix this problem?
Kind regards,
Veek
I stumbled upon the Keep-Alive extension and tried it as well, but without success. By default it is set to a 2-minute interval; I've changed this value to 1 and to 60 minutes, but as soon as I stop running statements for a short while I lose my connection. There must be something else. I've already installed the newest SQL Developer version, but it's still the same. (I did import the settings from my earlier release; maybe I should try without importing them.)
Any other suggestions perhaps?
Kind regards,
Veek
This extension works for SQL Developer 4:
https://github.com/scristalli/SQL-Developer-4-keepalive
DISCLAIMER: I'm the developer of the extension. I hope the answer is not considered advertising, because this extension is open source (and the previous non-open-source extensions were accepted as an answer).
MinChen Chai has created a Keep-Alive extension exactly for your situation:
https://sites.google.com/site/keepaliveext
It will continually send TCP keep-alive packets and prevent the server from disconnecting you due to an inactivity timeout.
With the latest SQL Developer version 4.0.0.13:
- MinChen's extension (http://sites.google.com/site/keepaliveext) doesn't work.
- The keepconnext extension (http://sites.google.com/site/keepconnext) doesn't work anymore either.
In SQL Developer, go to Tools > Preferences > Databases > Instances Viewer and change the "traffic duration" option to the maximum; this worked for me.
Go to Tools > Monitor Sessions... and select your Connection. Set the refresh value to 60 (seconds).
While monitoring, your connection will not be lost.
Oracle SQL Developer Version 4.1.3.20
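The monitor presumably keeps the session busy because each refresh issues a small query over that connection. Anything that runs a trivial statement on a schedule shorter than the idle timeout has the same effect, for example:
-- any harmless statement will do as a periodic keep-alive
SELECT 1 FROM dual;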
As the extensions suggested in this thread have had issues with recent versions of SQL Developer, I tried my own way and got scristalli's code to work in a new project based on his code and the Oracle example repo.
Needs a lot of work but oh well, at least I can install the new build on SQLDev v19.2 and it works as expected.
Disclaimer: I'm the owner of the linked repo, although it's MIT-licensed like the previous versions. Feel free to fork it, send pull requests, or do as you like.