BULK INSERT works in SSMS but not in other applications

Context: SQL Server 2005
I have a simple proc, which does a bulk load from an external file.
ALTER PROC [dbo].[usp_test]
AS
-- Drop the temp table if it is left over from a previous run
IF OBJECT_ID('tempdb..#promo') IS NOT NULL
BEGIN
    DROP TABLE #promo
END

CREATE TABLE #promo (promo VARCHAR(1000))

-- Load the file, one line per row
BULK INSERT #promo
FROM '\\server\c$\file.txt'
WITH
(
    --FIELDTERMINATOR = '',
    ROWTERMINATOR = '\n'
)

SELECT * FROM #promo
I can run it in SSMS, but when I call it from another application (Reporting Services 2005), it throws this error:
Cannot bulk load because the file "\\server\c$\file.txt" could not be opened. Operating system error code 5 (Access is denied.).
This part is complicated because it may be related to the account used by Reporting Services, or to some Windows security issue.
But I think I can impersonate the login I used to create the proc, because that login can run it in SSMS. So I tried changing the proc to WITH EXECUTE AS SELF. It compiles fine, but when I then ran it in SSMS, I got:
Msg 4834, Level 16, State 4, Procedure usp_test, Line 12
You do not have permission to use the bulk load statement.
I am still in the same session, so when I run this it actually executes as 'self', which is the login I am using now. So why did I get this error? What should I do?
I know it's a bit unclear, so I'll just list the facts.
========update
I just tried using SSIS to load the file into a table the report can use. The package runs fine in BIDS, but when it runs in a SQL Agent job it gets the same "access to the file is denied" error. I then set up a proxy and let the package run under that account, and the job runs with no problem.
So I am wondering: is it that the account SSRS uses can't access the file? What account is used by SSRS? Can SSRS be set up to run under a proxy the way SQL Agent does?
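For reference, a SQL Agent proxy like the one above is built from a credential. A rough sketch, with the account, password, and proxy names all as placeholders:

-- Create a credential holding a Windows account that can read the file.
CREATE CREDENTIAL FileReaderCred
    WITH IDENTITY = 'DOMAIN\FileReader', SECRET = 'account_password_here';

-- Wrap it in a SQL Agent proxy and allow SSIS job steps to use it.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = 'FileReaderProxy',
    @credential_name = 'FileReaderCred';

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = 'FileReaderProxy',
    @subsystem_name = 'SSIS';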
==============update again
Finally got it sorted.
I created an SSIS package, put the package in a job (running under a proxy account so it can access the file), and had the proc execute the job. This does work, although it is tricky (the proc has to judge whether the job has finished). It is too tricky to maintain, so I built it only as a proof of concept; it will not go into production.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/761b3c62-c636-407d-99b5-5928e42f3cd8/execute-as-problem?forum=transactsql

1) You get "You do not have permission to use the bulk load statement" because (naturally) the executing context doesn't have permission to use the bulk load statement.
You must be either a sysadmin or a bulkadmin at the server level to run BULK commands. Note that by default EXECUTE AS impersonation is trusted only inside the database, so server-level permissions such as bulkadmin are not honored for the impersonated context unless the database is marked TRUSTWORTHY or the module is signed; that is why the proc can fail even when your own login can run BULK INSERT directly.
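For example, a sysadmin could grant that server-level right with the 2005-era syntax. A sketch, with the login name as a placeholder:

-- Add the login that runs the proc to the bulkadmin server role.
EXEC sp_addsrvrolemember
    @loginame = N'DOMAIN\ReportUser',
    @rolename = N'bulkadmin';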
2) Yes, "Access is denied" usually means whatever credentials you are using to run the sproc from SSRS do not have permission to read that file. So either:
Make the file available to everyone, or
Assign a known credential with full access to the file to the data source running the sproc.
3) What the heck, dude.
Why not just use the text file directly as a data source in SSRS?
If that's not possible, why not perform all your ETL in one sproc run outside SSRS, and then just use a simple "select * from table" statement for SSRS?
Please do not run a BULK INSERT every time someone wants the report. If they need up-to-date reads of the file, use the file as a data source. If they can accept, say, a 10-minute lag in the data, create a batch job or ETL process that picks the file up and puts it into a database table every 10 minutes, and just read from that, as sketched below. Write once, read many.
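A minimal sketch of that batch approach, reusing the file path from the question; the staging table name is made up:

-- Scheduled job step: refresh the staging table on a timer.
TRUNCATE TABLE dbo.PromoStaging;

BULK INSERT dbo.PromoStaging
FROM '\\server\c$\file.txt'
WITH (ROWTERMINATOR = '\n');

The report's dataset then becomes a plain SELECT * FROM dbo.PromoStaging, so SSRS itself never needs bulk load rights or access to the file.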

Related

Access to a session in SQL Server

I have a session created in my vb.net code that runs some SQL queries; there are some local temp tables like #T1, #T2, ...
Execution process has some steps and I need to know which data changes in my local tables in each step.
Currently I use this to view the data in my code:
select * into ##T1 from #T1
I can't use sp_getbindtoken because there is no active transaction, and I can't use DBCC because I don't have permission.
I can query the sys.dm_exec_sessions view, so I have the active session_id, and I also have the connection index of the active SQL connection.
Is there any way to connect to an active session and access its local temp tables? Or is there any other way to get the data in #T1, #T2, ...?
EDIT1:
In response to the comment by @SeanLange:
I have some temp tables, as I said, and in the steps mentioned before I do some calculations on them. To trace those calculations I need to know what happens in each step, so I want to execute a simple SELECT statement against the temp tables. What I wanted to do was connect, from an external project called Tracer, to the active session created by my source code, and run SELECT statements while the source is on the fly, tracing the data created in that session.
You can't do it, sorry (at least not without sa privileges).
Run your queries from within a stored procedure and add code to log whatever you need to a table, then query the log table as needed.
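A minimal sketch of that logging pattern; the table, column, and step names are made up:

-- Permanent log table, created once.
CREATE TABLE dbo.TraceLog
(
    StepName VARCHAR(50),
    LoggedAt DATETIME DEFAULT GETDATE(),
    Payload  VARCHAR(1000)
);

-- Inside the procedure, snapshot the temp table after each step.
INSERT INTO dbo.TraceLog (StepName, Payload)
SELECT 'after step 1', SomeColumn
FROM #T1;

The Tracer project then reads dbo.TraceLog from its own connection instead of trying to reach into another session's temp tables.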
Execution process has some steps and I need to know which data changes in my local tables in each step.
If you have permission, you can create a trigger to do the logging for you

Check database / server before executing query

I am frequently testing certain areas on a development server, so I run a pre-defined SQL statement to truncate the tables in question before testing again. It would take only a slip of a key to run it against the live server instead.
I'm looking for an IF statement or similar to prevent that:
either a check of the server name or database name, or even that a certain record exists in a different table, before running the query.
Any help appreciated
For such cases I use stored procedures; I'd call them TestTruncateTables, etc.
Then instead of running TRUNCATE TABLE directly you execute TestTruncateTables.
Just make sure that the procedures are not created on the live server. If by any chance you happen to run EXEC TestTruncateTables on the live server, you only get an error about a non-existent proc.
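If you'd rather keep a single script, a guard clause at the top can serve the same purpose. A sketch, with the server, database, and table names as placeholders:

-- Abort unless we are on the expected development server and database.
IF @@SERVERNAME <> 'DEV-SERVER' OR DB_NAME() <> 'DevDatabase'
BEGIN
    RAISERROR('Not on the development server - aborting.', 16, 1)
    RETURN
END

TRUNCATE TABLE dbo.TableUnderTest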

Debugging SP on SSIS

I have a stored procedure that I execute through SSIS using an Execute SQL task. It appears to work in SSIS, but when I look at the database the record is not created. The connection is to the correct database.
I have put a breakpoint on, checked all the variables being fed in, and then ran it manually in SQL Server Management Studio.
The SP works perfectly in SSMS with the same input parameters, but when executed through SSIS it does not create the required records, and it does not give any error.
In the SP I have a TRY...CATCH that writes any errors in the stored procedure to a table, but there is no entry for the SSIS run. According to the error table, the SP and the SSIS run both look like they executed successfully, yet when I go to look, the record has not been created. I cannot see the problem. Is there something I can put into the stored procedure to debug this, or anything further I can do in SSIS to work it out?
It has been 3 hours on this problem, so I'm looking for a fresh perspective on what is happening.
The SSIS package definitely points to the correct database and stored procedure.
From the watch window it appears to be giving all the parameters the correct values and does not error in SSIS.
Worked it out with SQL Profiler. In the target database there is a sequence that is incremented each time a new record needs to be created. When I deleted the record to rerun it, the record was created with a different ID number; I was expecting it to be created with the same ID number.
Thanks Billinkc !

Problem with SQL Server client DB upgrade script

SQL Server 2005, Win7, VS2008. I have to upgrade a database from the old version of the product to the newer one. I'd like to have one script that both creates a new database and upgrades an old database to the new state. I am trying the following (SQL script below) and get this error when running on a machine with no database:
Database 'MyDatabase' does not exist. Make sure that the name is
entered correctly.
The questions are:
How can I specify the database name in the upgrade part?
Is there a better way to write the create/upgrade script?
SQL code:
USE [master]

-- DB upgrade part
IF EXISTS (SELECT name FROM sysdatabases WHERE name = 'MyDatabase')
BEGIN
    IF (<Some checks that DB is new>)
    BEGIN
        RAISERROR('MyDatabase database already exists and no upgrade required', 20, -1) WITH LOG
    END
    ELSE
    BEGIN
        USE [MyDatabase]
        -- create some new tables
        -- alter existing tables
        RAISERROR('MyDatabase database upgraded successfully', 20, -1) WITH LOG
    END
END

-- DB creating part
CREATE DATABASE [MyDatabase];
-- create new tables
You don't usually want to explicitly specify a database name in a script. Rather, supply it externally, or pre-process the SQL to replace a $$DATABASENAME$$ token with the name of an actual database.
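For example, if the script is run in SQLCMD mode (or through sqlcmd.exe), the database name can be passed in as a scripting variable; the variable name here is illustrative:

:setvar DatabaseName MyDatabase

USE [$(DatabaseName)];
-- create / alter tables here

From the command line you can drop the :setvar default and supply the value instead: sqlcmd -i upgrade.sql -v DatabaseName=SomeOtherDb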
You're not going to be able to include the USE [MyDatabase] in your script since, if the database doesn't exist, the query won't parse.
Instead, what you can do is keep 2 separate scripts, one for an upgrade and one for a new database. Then you can call the scripts within the IF branches through xp_cmdshell and dynamic SQL. The following link has some examples that you can follow:
http://abhijitmore.wordpress.com/2011/06/21/how-to-execute-sql-using-t-sql/
PowerShell may make this task easier as well, but I don't have any direct experience using it.
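A minimal sketch of the dynamic SQL route: EXEC() compiles its string only at run time, so the USE no longer has to parse when the database is missing. The statements inside the strings are placeholders:

IF EXISTS (SELECT name FROM sysdatabases WHERE name = 'MyDatabase')
BEGIN
    -- This branch only compiles its string when the database exists.
    EXEC('USE [MyDatabase]; /* create new tables, alter existing tables */');
END
ELSE
BEGIN
    EXEC('CREATE DATABASE [MyDatabase]');
    EXEC('USE [MyDatabase]; /* create new tables */');
END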

Using table just after creating it: object does not exist

I have a script in T-SQL that goes like this:
create table TableName (...)
SET IDENTITY_INSERT TableName ON
And on the second line I get this error:
Cannot find the object "TableName" because it does not exist or you do not have permissions.
I execute it from Management Studio 2005. When I put "GO" between these two lines it works, but what I would like to accomplish is not to use "GO", because I would like to place this code in my application when it is finished.
So my question is how to make this work without using "GO" so that I can run it programmatically from my C# application.
Without using GO, programmatically, you would need to make 2 separate database calls.
Run the two scripts one after the other - using two calls from your application.
You should only run the second once the first has successfully run anyway, so you could run the first script and, on success, run the second. The table has to have been created before you can use it, which is why you need the GO in Management Studio.
From the BOL: "SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to SQL Server". Therefore, as Jose Basilio already pointed out, you have to make separate database calls.
If this helps: I was faced with the same problem, and I had to write a little (very basic) parser to split each script into a bunch of mini-scripts, which are sent to the database one at a time.
Something even better than tpdi's temp table is a table variable: it is dropped automatically as soon as it goes out of scope, and for small data sets it is typically fast.
This is how you make one:
declare @TableName table (ColumnName int, ColumnName2 nvarchar(50))
Then to insert you just do this:
insert into @TableName (ColumnName, ColumnName2)
select 1, 'A'
Consider writing a stored proc that creates a temporary table and does whatever it needs to with it. If you create a real table, your app won't be able to run the script more than once unless it also drops the table, in which case you have exactly the functionality of a temp table.
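A minimal sketch of that approach, with all object and column names made up:

CREATE PROCEDURE dbo.usp_LoadScratch
AS
BEGIN
    -- The temp table lives only for the duration of the procedure.
    CREATE TABLE #TableName (Id INT IDENTITY(1,1), Name NVARCHAR(50));

    -- IDENTITY_INSERT on a temp table created in the same proc is
    -- resolved at run time, so no GO separator is needed.
    SET IDENTITY_INSERT #TableName ON;
    INSERT INTO #TableName (Id, Name) VALUES (1, 'A');
    SET IDENTITY_INSERT #TableName OFF;

    SELECT * FROM #TableName;  -- dropped automatically when the proc ends
END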