Receive and process email in SQL Server

I am looking to see if there is a viable solution to allow us to receive emails into SQL Server, process the content or linked attachments, and run a script on a database.
Currently we have a process where certain developers write basic scripts to modify one of a handful of our production databases. They upload the scripts to our SVN repository and then email the DBAs a link to the script. We then open the script in SSMS, run it, and copy/paste the output into a reply to the email. The process exists primarily for separation of duties and to ensure that the DBAs are the only ones who can run the scripts.
But there are times when we may be unavailable and not at a computer to run these in SSMS, while the business is waiting on these tables being updated. I am trying to find a solution where, if we see these emails on our phones, we could forward them to a mailbox monitored by the SQL Server, and have a proc grab the script from SVN, perhaps inserting the content of the script and the mail into a database, and then run that script on the appropriate database. My idea was to host this on our MSX server, which could then run it on one of the target servers via a linked server or something similar. I would then want the output emailed back as confirmation.
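To make that concrete, the execution half might look roughly like the sketch below: a job step that polls a queue table (filled by whatever reads the mailbox) and mails the outcome back with Database Mail. The dbo.ScriptQueue table, its columns, the mail profile, and the address are all hypothetical; sp_executesql and msdb.dbo.sp_send_dbmail are the real system procedures.

    -- Hypothetical sketch of the execution side: a job step that polls a
    -- queue table and mails the outcome back. dbo.ScriptQueue and its
    -- columns, the mail profile, and the address are all made up.
    DECLARE @Id int, @TargetDb sysname, @Script nvarchar(max), @Sql nvarchar(max);

    SELECT TOP (1) @Id = QueueId, @TargetDb = TargetDatabase, @Script = ScriptText
    FROM dbo.ScriptQueue
    WHERE Status = N'Pending'
    ORDER BY QueueId;

    IF @Id IS NOT NULL
    BEGIN
        -- Run the script in the context of the target database.
        SET @Sql = N'USE ' + QUOTENAME(@TargetDb) + N'; ' + @Script;
        BEGIN TRY
            EXEC sys.sp_executesql @Sql;
            UPDATE dbo.ScriptQueue SET Status = N'Done' WHERE QueueId = @Id;
            EXEC msdb.dbo.sp_send_dbmail
                 @profile_name = N'DBA Mail',              -- assumed Database Mail profile
                 @recipients   = N'dba-team@example.com',  -- assumed address
                 @subject      = N'Queued script completed',
                 @body         = N'The queued script ran successfully.';
        END TRY
        BEGIN CATCH
            UPDATE dbo.ScriptQueue SET Status = N'Failed' WHERE QueueId = @Id;
        END CATCH
    END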
I have looked around, and most email-related questions are about sending email. The only things I have found that are even remotely related concern SQL Mail, which I understand is outdated and inefficient, and potentially the CLR. Most of our environments are 2012+.
Am I on the right track with my thinking or is there something else I should be considering?
Thanks

Related

SQL Server Agent Jobs Run Successfully but Output No Data

Recently I was tasked to move two of our SQL Server Agent Jobs from one server to another as the old server is getting retired. These jobs run perfectly on the old server. Keep in mind I was not the one who created the SSIS packages that these jobs use. I also consider myself to have basic knowledge of SSIS.
I don't have permission to manipulate our servers, so I had to work with our IT department to get this done. I sent the IT department the two .dtsx files for the packages and they set up copies of the two jobs on the new server.
When I run these two jobs on the new server, they complete successfully, but they run very quickly compared to the jobs on the old server, and while looking in the message logs I notice that they're writing 0 rows to my Excel output files.
There are no errors, or even warnings, that differ from the message logs on the old server, where the jobs work perfectly, so I'm at a loss as to what's going on. I'm assuming I missed something obvious, like having to modify the packages in Visual Studio to account for the server they actually live on, since I sent the exact same .dtsx files that are used on the old server (I assumed the server a job lives on doesn't matter from the SSIS/Visual Studio perspective, because the packages don't pull any data from either server).
Anyway I'm just spitballing what the problem might be. Any help would be appreciated.

Fixing Connection Pool maxing out / Finding open connections to SQL Server

I'm having a bit of trouble with an old website that I have inherited from someone.
It throws errors about a connection pool being maxed out. When that happens the website loads only the HTML and nothing from the database. If it is left alone for a while it works fine again, as it also does when I recycle the IIS application pool in Plesk.
I have done a lot of reading and research but I can't quite work it out still.
The first thing I read was to look for any code where the database connection was not closed after it had retrieved the information. I haven't found anything like that so far.
Next, I found the stored procedure sp_who2, which I was led to believe would show me the open connections, but I'm a little confused as to whether that is what I'm actually looking at.
When running sp_who2 I get the below.
Is this an open connection? Or is it simply my connection that is currently looking at the database through SQL Server Management Studio?
This database is currently on a shared hosting platform so I don't have the access needed to run some of the other commands that I found.
Ideally I will move the website off a shared host, but I'd like to find out the reason for this before I do. What I'm hoping to find is where the code needs to be adjusted to make it work.
Is this an open connection? Or is it simply my connection that is currently looking at the database through SQL Server Management Studio?
It's both. Your connection is coming from SSMS, and it is an open connection.
There are ways to get more detailed information than this. You can, for example, use sys.dm_exec_connections, along with other DMVs like sys.dm_exec_sessions, and construct a query which tells you a great deal.
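For example, a minimal sketch of that kind of DMV query might look like this (seeing sessions other than your own requires the VIEW SERVER STATE permission):

    -- One row per user session, with the client it came from and
    -- when it last finished running something.
    SELECT s.session_id,
           s.login_name,
           s.host_name,
           s.program_name,
           s.status,
           c.client_net_address,
           s.last_request_end_time
    FROM sys.dm_exec_sessions AS s
    JOIN sys.dm_exec_connections AS c
        ON c.session_id = s.session_id
    WHERE s.is_user_process = 1
    ORDER BY s.last_request_end_time;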
But rather than write a query like that yourself, I suggest you download Adam Machanic's excellent sp_whoisactive. It's just a stored procedure you create, containing a query that pulls lots of useful information from the system metadata, with options to customise the output. This procedure is, I would be confident to say, the "default" procedure that everyone eventually uses for this kind of query. I might write my own more limited query against the DMVs from time to time, but most of the time sp_whoisactive can give you everything you want.
One of the parameters, for example, is @show_sleeping_spids. This will show you connections even if they're not actually running any query, which is sort of funny given that the procedure is named "who is active", but the usefulness is clear. You would execute the procedure like so: exec sp_whoisactive @show_sleeping_spids = 1, perhaps adding other parameters besides. It has powerful filtering options too, but if you don't typically have a lot of connections, you can probably just grab the entire output and eyeball it for relevant info.
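A slightly fuller invocation might look like the following; @get_outer_command is another of its parameters, which returns the full batch text rather than just the current statement.

    -- sp_whoisactive must be installed first (typically in master).
    EXEC sp_whoisactive
         @show_sleeping_spids = 1,  -- include idle connections
         @get_outer_command   = 1;  -- return the full batch, not just the statement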
If you can't run this because of the permissions granted to you on shared hosting, then your options are severely limited. If you only get your own session back from sp_who2, but you know other things are running, then you don't have the VIEW SERVER STATE permission. In that case, sp_who2 outputs only your own session (because you are always allowed to see information about your own session).

Distributing .mdf files to field sites

I am trying to find the best procedure for getting data from our SQL Server at headquarters out to apps running on local machines in various locations not connected to our network. Our current data and application are in FoxPro, where you simply copied the data file, so I am not very familiar with working with SQL Server databases.
The field app uses LocalDB, and users don't save anything to the database. When the app opens it checks a web site for updates. I tried detaching our HQ .mdf and .ldf, downloading them and overwriting the files on the local machine, but LocalDB would not attach to the new file (same name). I thought LocalDB closes and detaches when the application closes, but maybe I am wrong. I also wonder whether I need the log file, since no changes are made and I don't need to roll anything back. I have searched for a good article on this topic but haven't found anything. This must be a fairly common scenario in many companies.
You want to look into using replication, probably snapshot replication. It lets you distribute one or more tables, or other objects, to off-site SQL Server instances on whatever schedule is appropriate. You can use HTTP to send the data.
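For a rough idea of what that involves, here is a minimal sketch of creating a snapshot publication on the HQ server, assuming a Distributor is already configured. The database, publication, and table names are made up; the system procedures are the real replication ones.

    -- Enable the database for publishing (illustrative names throughout).
    EXEC sp_replicationdboption
         @dbname  = N'FieldData',
         @optname = N'publish',
         @value   = N'true';

    -- Create a snapshot publication.
    EXEC sp_addpublication
         @publication = N'FieldDataSnapshot',
         @repl_freq   = N'snapshot',
         @status      = N'active';

    -- Give it a Snapshot Agent job.
    EXEC sp_addpublication_snapshot
         @publication = N'FieldDataSnapshot';

    -- Publish one table as an article.
    EXEC sp_addarticle
         @publication   = N'FieldDataSnapshot',
         @article       = N'Customers',
         @source_object = N'Customers';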

How to display the result of a procedure running from the server side in Oracle?

I am trying to automate a daily monitoring activity where a set of scripts (all SELECT statements) has to be executed. I am in the process of creating a procedure which runs these scripts, and via the scheduler it will run once daily. My problem is that since all of this happens on the server side (the server backbone), how do I save the results? Previously we ran all the scripts manually and saved the output in a notepad file. Is there any way to do the same with the automation, such as saving to our PCs or in SQL Developer, instead of logging in to the server and hunting for the path where the file is saved? I thought of saving the results in a table, but I am looking for a better option. Please suggest...
Generally it is a good idea to save the results in a table as this gives you flexibility when querying the results or exporting them in multiple formats.
There are multiple options to get the data to the client:
Query the results table directly from the client.
Generate an HTML file from the results table and make it accessible from an HTTP server.
You can also create a web PL/SQL package and generate the HTML within (http://docs.oracle.com/cd/B28359_01/appdev.111/b28424/adfns_web.htm#i1006207)
Export the data from the results table to a file and put it in a shared directory that is accessible by the client.
Email the results from the PL/SQL package.
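Picking up the results-table idea, a minimal PL/SQL sketch might look like the following; the table, procedure, and check names are all made up, and you would schedule the procedure with DBMS_SCHEDULER (or DBMS_JOB) to run once a day.

    -- Illustrative results table; adapt the columns to your real checks.
    CREATE TABLE monitoring_results (
      run_date   DATE,
      check_name VARCHAR2(100),
      result     VARCHAR2(4000)
    );

    CREATE OR REPLACE PROCEDURE run_daily_checks IS
      v_count NUMBER;
    BEGIN
      -- Stand-in for one of your SELECT statements.
      SELECT COUNT(*) INTO v_count FROM user_tables;

      INSERT INTO monitoring_results (run_date, check_name, result)
      VALUES (SYSDATE, 'table_count', TO_CHAR(v_count));
      COMMIT;
    END;
    /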
I thought of saving the results in a table but i am looking for a better option.
What is exactly the issue with the "table" option?
Regarding "saving in our PC or SQL developer": one problem with that is that a PC/app screen is:
a PC is usually less resilient to reboots, crashes, etc.
it's intended for private use. Unless you're working alone - these logs may be of interest to other people;
..
Other options: it can be made to send e-mail; copy the file to a well-known place (including one which is directly mounted on your PC); write to a database table (as already suggested); and more.
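If the e-mail route appeals, a minimal sketch using the UTL_MAIL package could look like this; the addresses are made up, and UTL_MAIL must be installed with the SMTP_OUT_SERVER initialization parameter set.

    -- Requires UTL_MAIL to be installed and SMTP_OUT_SERVER configured.
    BEGIN
      UTL_MAIL.SEND(
        sender     => 'monitoring@example.com',   -- hypothetical addresses
        recipients => 'dba-team@example.com',
        subject    => 'Daily monitoring results',
        message    => 'See the monitoring_results table for today''s run.'
      );
    END;
    /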

Lotus Notes ODBC Connection

I need to connect to and send/receive information from an MS SQL server in my Lotus Notes app using @Formula in real time (I can connect using an agent, but I need to use inline code for this).
The commands themselves seem pretty straightforward, but setting up the configuration seems to be a topic with scarce documentation. Apparently I need to install an ODBC driver. Where would I find that, and do I install it onto the server or onto the workstations that will run this app?
If any Lotus gurus could step me through setting this up, it would be greatly appreciated.
Thanks
You'll need to install the ODBC driver on the workstations that run this app if the users will be triggering the ODBC connections. If at all possible, I highly suggest setting this up on the server side and having it run via an agent. That will save you a few headaches, including having to maintain the ODBC connections on each workstation and worrying whether each workstation has access to the data and the server.
First, just make sure your ODBC setup is correct. You'll need the appropriate driver, of course, and the connection information. This site has a walkthrough to give you an idea of how to set up an ODBC database connection.
If you have MS Access you can use it to test querying the ODBC data source. Once you've confirmed the connection works, you'll just refer to the data source name (DSN) in your @DbColumn, @DbLookup, or @DbCommand formulas.
Back to my suggestion of setting this up on the server side: that would mean keeping a copy of the data you're querying within the Notes database itself, so users would interact with read-only data in Notes. You could schedule regular server-side updates of that read-only data, effectively creating a cache of the data in your Notes environment. That data would then replicate around to the other replicas of the database, removing the need for an ODBC connection everywhere.
If you need real-time data, though, that solution is out the window and you'll have to go with a local solution. In that case, you might want to look at the LCConnection class or at using an ADODB.Connection from script, as both will let you create DSN-less connections to data sources. You'd then be spared from requiring ODBC data sources on each workstation, and would only have to worry about whether the workstations can reach the server.
I would add another option to Ken's list. It involves having the server run the queries against the external database, so you only set up ODBC on the server and don't have to deal with it on the workstations. You create an agent that is launched on the server using the 'run on server' technique. When the workstation needs to query the external data, the code creates a throw-away document in the database, puts the query criteria into the temporary document, saves the document, then calls the 'run on server' agent, passing a reference to the temporary document. The server launches the agent, reads the criteria from the temporary document, runs the query, and writes the results back to the temporary document. The workstation can then read the query results from the temporary document. A scheduled agent can delete the temp docs on a regular basis.
It sounds complicated, and it all has to be done in script, but I've done this in many applications and it is fast, flexible, easy to administer, and gives your applications a lot of power. Note that end users must have the ACL rights to create a document in the db (the temp doc) in order for this to work.
Good luck!