Automating a Monthly SQL Job/Query - sql

I frequent this site a lot but have never posted, so here goes! I'm fairly new, only about a month into the job, but I have some prior experience with SQL.
I have a simple query that runs monthly and counts the number of active members and notifications sent per organization for the month.
I have created a one-step SQL Server Agent job that runs the query on the 5th of each month and records the information for the previous month.
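For context, the query is essentially a grouped count along these lines (the table and column names below are made-up placeholders, not the real schema):

SELECT
    o.OrganizationName,
    (SELECT COUNT(*) FROM dbo.Members m
      WHERE m.OrganizationID = o.OrganizationID
        AND m.IsActive = 1) AS ActiveMembers,
    (SELECT COUNT(*) FROM dbo.Notifications n
      WHERE n.OrganizationID = o.OrganizationID
        AND n.SentDate >= DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()) - 1, 0)  -- first day of previous month
        AND n.SentDate <  DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0)      -- first day of current month
    ) AS NotificationsSent
FROM dbo.Organizations o
ORDER BY o.OrganizationName;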
I have the output of the job going to a file named MonthlyReport.txt. This .txt file is then mailed to the client.
The client opens the file with Excel by default, which destroys all the formatting. I recommended opening Excel first and importing the file, and this has temporarily resolved the issue.
However, there are two very big issues:
1) Asking the client to import the file just to get the formatting right is very inconvenient, and importing the file myself would create a lot of overhead, as there are several of these reports across multiple databases.
2) The .txt file includes lines such as "MonthlyReport' : Step 1, 'Collect Data' : Began Executing 2015-03-17 12:39:58", which breaks the alignment of the column headers.
I am looking for other ways to accomplish this, either by exporting directly to Excel or to a properly formatted .txt file.
I have tried saving the output as MonthlyReport.csv, but the same problems remain and it still requires importing into Excel.
FYI: my company runs Windows Server 2012, which has SSIS available, but we also run a few legacy Windows Server 2008 R2 servers. I need the solution to work on both, and SSIS packages built for the newer environment are not compatible with the 2008 R2 servers.
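One direction I've been wondering about is skipping the output file entirely and having the job step call Database Mail, so the query result goes to the client as a real CSV attachment. This is only a sketch: it assumes Database Mail is configured on each server, and the profile name, recipient, database, and wrapped-up query procedure are all placeholders.

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'ReportsProfile',             -- placeholder Database Mail profile
    @recipients = N'client@example.com',           -- placeholder recipient
    @subject = N'Monthly Report',
    @query = N'EXEC dbo.usp_MonthlyOrgCounts',     -- placeholder for the monthly count query
    @execute_query_database = N'MyReportingDb',    -- placeholder database
    @attach_query_result_as_file = 1,
    @query_attachment_filename = N'MonthlyReport.csv',
    @query_result_separator = N',',
    @query_result_no_padding = 1;                  -- stops the fixed-width padding so Excel parses the columns cleanly

Would something along these lines be a reasonable replacement for the .txt output, given that it has to work on both servers?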
I am sorry for the long-winded post and appreciate all the time and help the community is able to provide.

Related

SQL Server Agent Jobs Run Successfully but Output No Data

Recently I was tasked with moving two of our SQL Server Agent jobs from one server to another, as the old server is being retired. These jobs run perfectly on the old server. Keep in mind that I was not the one who created the SSIS packages these jobs use, and I consider myself to have only basic knowledge of SSIS.
I don't have permission to manage our servers, so I had to work with our IT department to get this done. I sent them the two .dtsx files for the packages, and they set up copies of the two jobs on the new server.
When I run these two jobs on the new server, they complete successfully, but they run very quickly compared to the old server's jobs, and the message logs show they are writing 0 rows to my Excel output files.
There are no errors, or even warnings, that differ from the message logs I see on the old server, where the jobs work perfectly, so I'm at a loss for what's going on. I'm assuming I missed something obvious, like having to modify the packages in Visual Studio to account for the server they actually live on, since I sent the exact same .dtsx files that are used on the old server. (I assumed the server a job lives on doesn't matter from the SSIS/Visual Studio perspective, because the packages don't pull any data from either server.)
Anyway I'm just spitballing what the problem might be. Any help would be appreciated.

Visual Studio Writes Partial Data to Excel File

I have a Visual Studio (SSIS) package that does not write all of the required data to a blank Excel file.
More specifically, the package goes through these steps:
1) Copies a template Excel file to overwrite a shell file.
2) Connects to a SQL database.
3) Runs a SELECT statement.
4) Converts one column to Unicode.
5) Pastes the results into the shell file.
There are a few more steps afterward (like emailing the Excel file), but those work fine.
The issue comes up at step 4: when Visual Studio or SSIS runs the package, I get about 1,400 rows. When I run the SELECT statement directly in SQL Server Management Studio, or through a data connection in Excel, I get about 2,800 rows, and 2,800 is the correct number.
I've tried rebuilding the process from scratch (Excel files, connection files, etc.), but the rebuild gives the same result. It's like Visual Studio just doesn't like the SELECT statement. I double-checked the mappings, and they're all good. The data is being pasted and delivered fine, just not all of it. There are no errors in Visual Studio either; it gives me that lovely (albeit confusing) check mark.
This was running as an automated package for about a year before this happened and I have no explanation. Seriously a headscratcher.
The only other clue I have is this: when I pull the data manually with the SELECT statement, there are no NULL values in a particular column, but when I run the package with that exact same SELECT statement, the output contains a NULL in that column. It's almost as if the SELECT statement in Visual Studio is pulling slightly different data than the manual pull, but the statements are identical, so I don't know why that would be.
Any ideas?
I've seen this issue before. The timeout on the connection was set to a low value, which caused it to pull only part of the data before the timeout hit and killed the connection. Make sure you are not swallowing any exceptions, and double-check your timeouts.
Thanks for replying, folks!
In the end, I solved the issue by completely remaking the package. While trying your solutions above, I was reusing the same file but rebuilding the connections and queries from scratch. Once I started from a brand-new file, it ran without error.
I guess the lesson for anyone new to Visual Studio is: always consider rebuilding the package from nothing!

Automate export from MS Excel to MS SQL Server

Is there a way to export data from an MS Excel file into a SQL Server table automatically, perhaps using a script of some kind?
If it can't be completely automated, perhaps there's a way to do it with minimal user effort (for example, clicking a button or link).
There is an MS Excel spreadsheet whose data keeps having to be manually exported to SQL Server.
I've done this with Excel to Access before, but I'm not certain how to do it with MS SQL Server.
*MS Office 2013 and MS SQL Server 2012.
The other answers are ok. I just want to suggest an additional alternative.
If it is just 1 specific Excel file that is frequently updated, I would consider using VBA. For example, write some VBA code in Excel that uploads changes to the database when the spreadsheet is saved (or the user presses a button).
The problem with using a scheduled job is that Excel is basically a single-user application. If someone has the spreadsheet open, is working in it when the scheduled job runs, or has moved the spreadsheet to a different folder, the job may fail.
This way you also get the updated data in your database in something close to real time instead of waiting on a job to run. This might take more time and effort to set up though than some of the other answers.
You can use SQL Server Agent to run a scheduled job that imports data from an Excel worksheet into a SQL Server table.
The import is relatively straightforward to do using Integration Services, but if you've not used either of these before you might need to do some reading up on it.
You can do the following:
You need to create an SSIS package and then create a job to run the package.
The easiest way to create the SSIS package is with the "Import and Export Data" tool of SQL Server. It has a nice step-by-step wizard.
You fill in everything it asks about the source and the destination. When you reach the final step of the wizard, select "Save SSIS Package".
Then you only have to create the job to run it :)
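If you'd rather script the job than click through the Agent UI, something roughly like this should do it. It's only a sketch: the job name, schedule, and package path are placeholders, and it assumes the package was saved to the file system.

USE msdb;
GO
-- Create the job and a single step that runs the saved SSIS package.
EXEC dbo.sp_add_job       @job_name = N'Import Excel Data';
EXEC dbo.sp_add_jobstep   @job_name = N'Import Excel Data',
                          @step_name = N'Run SSIS package',
                          @subsystem = N'SSIS',
                          @command = N'/FILE "C:\Packages\ImportExcel.dtsx"';   -- placeholder package path
-- Schedule it to run every day at 1 AM.
EXEC dbo.sp_add_jobschedule @job_name = N'Import Excel Data',
                          @name = N'Daily 1 AM',
                          @freq_type = 4,              -- daily
                          @freq_interval = 1,
                          @active_start_time = 010000;
EXEC dbo.sp_add_jobserver @job_name = N'Import Excel Data', @server_name = N'(local)';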

Create a SQL table that imports automatically from Excel whenever that Excel file is updated

So I have an Excel spreadsheet with Product and Notes columns. I'd like to import this information into SQL Server so that, every time people enter more products and notes into the sheet, the table is automatically updated whenever I run the script.
I've already created the Product/Notes table and imported the current data into it. I was planning to use an INSERT INTO statement and insert the new values every day, but that seems too manual.
Is there a way I can do this? The Excel spreadsheet is updated daily.
I'm using SQL Server 2008
I'm sure this is possible. You could have Excel connect to your database and then write some macros that save the data to the table when rows change or new rows are added.
It would not be easy. There is a lot of complicated logic involved, and Excel was not written to be a front end for a database.
I believe the time spent changing your spreadsheet to work this way would be better spent writing a client-server application to modify the database, either as a web application or a local application. Client-server front ends are easy to write these days, with plenty of examples, tools, and templates available. For someone with experience, a simple data entry/modification form is just a couple of days' work for a robust application.
Changing the Excel file would be much harder.
You could use SSIS to import the Excel data into your database on a scheduled basis.
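Alternatively, if you'd rather avoid a full SSIS package, a scheduled T-SQL job step can read the workbook directly with OPENROWSET. This is a rough sketch, assuming the Excel file is reachable from the SQL Server machine, the Microsoft ACE OLE DB provider is installed, 'Ad Hoc Distributed Queries' is enabled, and the file, sheet, and table names below are placeholders:

-- Read the workbook and insert only rows not already present in the table.
INSERT INTO dbo.ProductNotes (Product, Notes)           -- placeholder target table
SELECT src.Product, src.Notes
FROM OPENROWSET(
        'Microsoft.ACE.OLEDB.12.0',
        'Excel 12.0;Database=C:\Data\Products.xlsx;HDR=YES',  -- placeholder workbook path
        'SELECT Product, Notes FROM [Sheet1$]'                -- placeholder sheet name
     ) AS src
WHERE NOT EXISTS (
        SELECT 1 FROM dbo.ProductNotes t
        WHERE t.Product = src.Product AND t.Notes = src.Notes);

The WHERE NOT EXISTS check is just one simple way to avoid re-inserting rows that were already imported.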

unixODBC driver issues for .MDF databases, or: is there a way to easily extract a bunch of tables without a SQL server?

Disclaimer: I am somewhat of a n00b when it comes to database programming, so bear with me.
I've been attempting to batch process a rather large amount (~20 GB) of data, all contained in .MDF SQL database files. The files contain meteorological data obtained from weather balloons, with each table consisting of ~1-second observations of winds, pressure, height, temperatures, etc., and they are created by our radiosonde tracking software on an unnetworked Windows machine. It is possible (and quite easy) to load the files with the associated software and export the tables as ASCII text files; however, this involves manually loading each one. As I'm performing a study that requires as many soundings as possible (we have over 2000), repeating this process for several years of twice-daily observations is extremely time-prohibitive.
I've been taking the files off of that computer and putting them on my laptop running Linux Mint, and I consider myself fluent in Perl; I do most of my data analysis with Perl scripts. That said, I've had the darndest time trying to get into the database files!
I've tried to connect to one of the files using the DBI module with variants of
$dbh = DBI->connect("DBI:ODBC:$filename") or die "blahblahblah";
I have unixODBC installed and configured, have downloaded "libmyodbc.so" and "libodbcmyS.so", and keep getting the error
DBI connect('','',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified (SQL-IM002) at dumpsql.pl line 6.
I've tried remedying this a number of ways over the past couple days, and I won't post them here for the sake of brevity. My odbcinst.ini file is as follows:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/sib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1
I'm seriously confused. I THINK I'm doing everything that various online tutorials are suggesting, but everyone else is connecting to servers and these files are all local and in the same directory! Could anyone attempt to point me in the right direction? All I want is to calculate meteorological values using vertical sounding data! Am I missing something totally obvious?
Any help would be greatly appreciated!
It seems the original database server was Microsoft SQL Server (MDF files). I'm afraid these files alone are useless on a Linux machine; you need a Microsoft SQL Server instance on a Windows machine to get at the contained data.
You described being able to attach an MDF file to a SQL Server manually and then export the needed data as text files. Try to automate that. I'm not an MS SQL Server expert, but it should be possible.
For example, attaching and detaching an MDF file can be done via T-SQL. So my approach would be to write a script that iterates over the 2000 MDF files, attaches each one to the SQL Server, executes a query to export your data, and then detaches the MDF.
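A rough T-SQL sketch of what one iteration of that loop might look like (the file paths, database name, and table name are placeholders, xp_cmdshell must be enabled, and depending on how the files were produced you may also need the matching .ldf file):

-- Attach one radiosonde MDF under a temporary database name (placeholder path).
CREATE DATABASE SoundingTemp
    ON (FILENAME = N'C:\Soundings\flight_001.mdf')
    FOR ATTACH_REBUILD_LOG;   -- rebuilds the log if only the .mdf is available
GO
-- Export the table of interest to a text file with bcp (placeholder table and output path).
EXEC master.dbo.xp_cmdshell
    'bcp "SELECT * FROM SoundingTemp.dbo.Observations" queryout "C:\Export\flight_001.txt" -c -T -S localhost';
GO
-- Detach so the next file in the batch can be processed.
EXEC master.dbo.sp_detach_db @dbname = N'SoundingTemp';
GO

A driving script (in any language, even Perl on the Windows side) would then just substitute the next file name and repeat.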