I run this query to get LANDesk info. It returns the 4 columns I want, but when I set it up as a scheduled job the output file isn't formatted the same way; I'd like a column for each field. Under Steps I just run the same query, and under Advanced I set the output file. Another twist: why can I not send the output to a share? I can only select drives on the server. SQL Server 2008 R2.
SELECT DISTINCT A0.DISPLAYNAME AS "Device Name"
,A1.OSTYPE AS "OS Name"
,A0.DOMAINNAME AS "Domain Name"
,A0.HWLASTSCANDATE AS "Last Hardware Scan Date"
FROM Computer A0 WITH (NOLOCK)
LEFT JOIN Operating_System A1 WITH (NOLOCK) ON A0.Computer_Idn = A1.Computer_Idn
WHERE (A0.DEVICENAME IS NOT NULL)
ORDER BY A0.DISPLAYNAME
I believe that to get the consistency you are looking for, as well as the ability to output to network locations, you will want to set up an SSIS package and run that under the Agent. (On the share question: the job-step dialog only browses drives on the server, but the Agent can usually write to a UNC path typed in directly, provided the Agent service account has write permission on the share.)
If you are stuck with this method, check whether the files are tab-delimited; you are probably getting 4 columns, but in a text editor that isn't obvious.
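If you do stay with a plain job step, another option is a CmdExec step that shells out to sqlcmd, which gives you explicit control over the delimiter and the output path. A sketch (the server, database, share, and file names are placeholders, and the Agent service account still needs write access to the share):
sqlcmd -S YOURSERVER -d LANDesk -E -W -s "," -o "\\fileserver\share\landesk_inventory.csv" -Q "SELECT DISTINCT A0.DISPLAYNAME AS [Device Name], A1.OSTYPE AS [OS Name], A0.DOMAINNAME AS [Domain Name], A0.HWLASTSCANDATE AS [Last Hardware Scan Date] FROM Computer A0 WITH (NOLOCK) LEFT JOIN Operating_System A1 WITH (NOLOCK) ON A0.Computer_Idn = A1.Computer_Idn WHERE A0.DEVICENAME IS NOT NULL ORDER BY A0.DISPLAYNAME"
The -W flag strips trailing padding, so each row comes out as four cleanly separated columns.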
I have a report which is sent daily and has some number of rows, but I want to send a separate report whose subject says it is "critical" when it contains n rows.
How do I schedule this in SSRS?
Thank you!
Create a data driven subscription that only returns results if your table contains n rows.
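For example, the subscription's query can be written so it only returns a delivery row once the threshold is crossed; with a data-driven subscription, no rows returned means no email is sent. A sketch, using the dbo.ErrorLog table and threshold from the answer below, with a placeholder recipient:
SELECT 'ops-team@example.com' AS EmailTo,
       COUNT(*) AS ErrorCount
FROM dbo.ErrorLog
HAVING COUNT(*) > 100;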
A Data Driven Subscription would be best if you have the Enterprise edition of SQL Server, but if you don't, you'll need to get creative. One method that should work is to create a copy of the existing report (if it's TheNinjaReport, call the copy TheNinjaReport_Critical or something) and alter the query so that it throws an error if there aren't the requisite number of rows. When the query throws an error, the subscription fails and nothing goes to the end user. Something like:
IF (SELECT COUNT(*) FROM dbo.ErrorLog) > 100
    SELECT *
    FROM dbo.ErrorLog
ELSE
    RAISERROR('Not a critical number of errors', 16, 1)
This is not ideal because now you have two reports to maintain, but it will get you where you need to be.
I am having an issue with a SQL Command and CASE. I am pretty new to Crystal Reports/SQL, and I have some basic code that I am playing around with to learn. I want to clean up a field that was imported from SQL Server. I just want to do something simple like this:
SELECT "I"."I_TYPE", "Alleg” =
CASE
WHEN "ALLEGs"."ALLEG” LIKE ‘*im*’ THEN ‘Improper’
ELSE ‘UNKNOWN’
END
I get an error that says Database Connector Error:
'42000:[MS][SQL...] Incorrect syntax near '.'.' Database vendor code 102.
Can you even use CASE as an IF/THEN statement in a SQL Command? I am aware of SQL Expressions, but I am trying to pull the data in the SQL Command to prevent a performance decrease.
I am not sure about Crystal Reports, but your query formation doesn't look correct. It should be:
SELECT I.I_TYPE,
       CASE WHEN ALLEGs.ALLEG LIKE '%im%' THEN 'Improper' ELSE 'UNKNOWN' END AS Alleg
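Note that this still needs a FROM clause before it will run; assuming the two tables join on a shared key (the join column here is hypothetical), the full command would look like:
SELECT I.I_TYPE,
       CASE WHEN ALLEGs.ALLEG LIKE '%im%' THEN 'Improper'
            ELSE 'UNKNOWN'
       END AS Alleg
FROM I
JOIN ALLEGs ON ALLEGs.I_ID = I.I_ID -- hypothetical join column
Also note that SQL Server's LIKE wildcard is %, not *; a pattern like '*im*' only matches literal asterisks, which is a second problem with the original command.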
Rahul is correct for direct SQL commands to the server.
However, sometimes when running Crystal Reports we use VBA within the report to do some data tuning rather than modifying the raw SQL on the fly.
This keeps the raw SQL a known result (verifiable on the SQL Server) and lets us modify the output in Crystal to fit the end user's filtering requirements.
This is not efficient with large result sets, but when the results are smaller (under 50k records) we usually have our team go with simple post-filtering to reduce design and testing time.
This technique works very well with dynamic filters on the option section.
example: [Record Selection]
if {?Select Sales Person} <> "ENT" then
    {R0033___P2A.ProjectionSP} = {?Select Sales Person} and {R0033___P2A.FM} >= 0
else
    {R0033___P2A.ProjectionSP} > "" and {R0033___P2A.FM} >= 0
Where {?Select Sales Person} is a user selection filter and {R0033___P2A} is a predefined report view or stored procedure.
I'm kicking tires on BI tools, including, of course, Tableau. Part of my evaluation includes correlating the SQL generated by the BI tool with my actions in the tool.
Tableau has me mystified. My database has 2 billion things; however, no matter what I do in Tableau, the query Redshift reports as having been run is "Fetch 10000 in SQL_CURxyz", i.e. a cursor operation. Watching the Redshift console, you can see the cursor ids change, indicating new queries are being run, but you never see the original queries.
Is this a Redshift or Tableau quirk? Any idea how to see what's actually running under the hood? And why is Tableau always operating on 10000 records at a time?
I just ran into the same problem and wrote this simple query to get all queries for currently active cursors:
SELECT
usr.usename AS username
, min(cur.starttime) AS start_time
, DATEDIFF(second, min(cur.starttime), getdate()) AS run_time
, min(cur.row_count) AS row_count
, min(cur.fetched_rows) AS fetched_rows
, listagg(util_text.text)
WITHIN GROUP (ORDER BY sequence) AS query
FROM STV_ACTIVE_CURSORS cur
JOIN stl_utilitytext util_text
ON cur.pid = util_text.pid AND cur.xid = util_text.xid
JOIN pg_user usr
ON usr.usesysid = cur.userid
GROUP BY usr.usename, util_text.xid;
Ah, this has already been asked on the AWS forums.
https://forums.aws.amazon.com/thread.jspa?threadID=152473
Redshift's console apparently doesn't display the query behind cursors. To get that, you can query STV_ACTIVE_CURSORS: http://docs.aws.amazon.com/redshift/latest/dg/r_STV_ACTIVE_CURSORS.html
Also, you can alter your .TWB file (which is really just an xml file) and add the following parameters to the odbc-connect-string-extras property.
UseDeclareFetch=0;
FETCH=0;
You would end up with something like:
<connection class='redshift' dbname='yourdb' odbc-connect-string-extras='UseDeclareFetch=0;FETCH=0' port='0000' schema='schm' server='any.redshift.amazonaws.com' [...] >
Unfortunately there's no way of changing this behavior through the application; you must edit the file directly.
Be aware of the performance implications of doing so. While this greatly helps debugging, there is presumably a reason why Tableau chose not to expose these parameters through the application.
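With cursors disabled, the statements Tableau issues should then show up in Redshift's regular system tables like any other query; a quick way to check (a sketch):
SELECT starttime, TRIM(querytxt) AS query
FROM stl_query
ORDER BY starttime DESC
LIMIT 20;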
I am using Pentaho for data migration testing. I have set up a "table input" step where many parts of the query inside the table input are variables. I have been looking for a way to capture that query after it gets executed at runtime.
I was wondering if there is a specific system log variable for the SQL, or whether it is to do with metadata. Need help! Thanks.
Maybe the following approach will help:
We assume a transformation that reads a CSV file to get the dynamic portion of the SELECT statement (e.g. the columns) and sets the variable columns from it.
The second transformation uses this variable to generate the SELECT statement and stores it in the variable sql_statement.
In the main transformation we use ${sql_statement} as the SELECT statement of the table input and write the data to an output file (that's the business process so to say). From the same input we copy the output to another path. There we add the current time as a field (use element "Get system data") and we add the generated SQL statement, join them as a cartesian product and group the result by the sql_statement. That way we can compute the first time and the last time that the statement was used. These results are written to a text file.
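For illustration, the Table Input step of the main transformation then contains nothing but the variable reference (with "Replace variables in script?" checked), which Kettle expands at runtime:
${sql_statement}
-- expands at runtime to, e.g.:
SELECT my_column FROM test_table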
The last thing we need is a job calling the three transformations sequentially.
This is a sample output:
sql_statement;min_time;max_time
SELECT my_column FROM test_table;2014/05/08 00:41:21.143;2014/05/08 00:41:21.144
Thank you, Marcus! I did something similar and it works. Awesome.
I gathered the parts of the queries from the table field where they were kept and formed a full query in JavaScript. That full query is then sent as a parameter to a transformation that runs and logs it.
I'm fairly new to MSSQL and SSRS.
I'm trying to create a data driven subscription in MSSQL 2008 Standard SSRS that does the following.
Email the results of the report to a email address found within the report.
Run Daily
For Example:
Select full_name, email_address from users where (full_name = 'Mark Price')
This would use the email_address column to figure out who to email. This must also work for multiple results with multiple email addresses.
The way I'm thinking of doing this is making a subscription to run the query; if no result is found, nothing happens.
But if a result is found, the report changes the row in the Subscriptions table to run the report again in the next minute or so with the correct email information found in the results.
Is this a silly idea or not?
I've found a couple of blog posts claiming this works, but I couldn't understand their code well enough to know what it does.
So, any suggestions on how to go about this? Or can you suggest something already out there on the internet, with a brief description?
This takes me back to my old job where I wrote a solution to a problem using data-driven subscriptions on our SQL Server 2005 Enterprise development box and then discovered to my dismay that our customer only had Standard.
I bookmarked this post at the time and it looked very promising, but I ended up moving jobs before I had a chance to implement it.
Of course, it is targeted at 2005, but one of the comments seems to suggest it works in 2008 as well.
I've implemented something like this on SQL Server Standard to avoid having to pay for Enterprise. First, I built a report called “Schedule a DDR” (Data Driven Report). That report has these parameters:
Report to schedule: the name of the SSRS report (including folder) that you want to trigger if the data test is met. E.g. "/Accounting/Report1".
Parameter set: a string that will be used to look up the parameters to use in the report. E.g. "ABC".
Query to check if report should be run: a SQL query that will return a single value, either zero or non-zero. Zero will be interpreted as "do not run this report".
Email recipients: a list of semicolon-separated email recipients that will receive the report, if it is run.
Note that the “Schedule a DDR” report is the report we’re actually running here, and it will send its output to me; what it does is run another report – in this case it’s “/Accounting/Report1” and it’s that report that needs these email addresses. So “Schedule a DDR” isn’t really a report, although it’s scheduled and runs like one – it’s a gadget to build and run a report.
I also have a table in SQL defined as follows:
CREATE TABLE [dbo].[ParameterSet](
[ID] [varchar](50) NULL,
[ParameterName] [varchar](50) NULL,
[Value] [varchar](2000) NULL
) ON [PRIMARY]
Each parameter set – "ABC" in this case – has a set of records in the table. In this case the records might be ABC/placecode/AA and ABC/year/2013, meaning that there are two parameters in ABC: placecode and year, and they have values "AA" and "2013".
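Populating a parameter set is then just a matter of inserting rows; using the example values above:
INSERT INTO dbo.ParameterSet (ID, ParameterName, Value)
VALUES ('ABC', 'placecode', 'AA'),
       ('ABC', 'year', '2013');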
The dataset for the "Schedule a DDR" report in SSRS is:
EXEC DDR.dbo.DDR3 @reportName, @parameterSet, @nonZeroQuery, @toEmail;
DDR3 is a stored procedure:
CREATE PROCEDURE [dbo].[DDR3]
    @reportName nvarchar(200),
    @parameterSet nvarchar(200),
    @nonZeroQuery nvarchar(2000),
    @toEmail nvarchar(2000)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    SELECT ddr.dbo.RunADDR(@reportName, @parameterSet, @nonZeroQuery, @toEmail) AS DDRresult;
END
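Called directly for testing, with the example values from above (the email address and the count query's table are placeholders):
EXEC DDR.dbo.DDR3
    @reportName   = '/Accounting/Report1',
    @parameterSet = 'ABC',
    @nonZeroQuery = 'SELECT COUNT(*) FROM dbo.SomeTable',
    @toEmail      = 'someone@example.com';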
RunADDR is a CLR function. Here's an outline of how it works; I can post some code if anyone wants it.
Set up credentials
Select all the parameters in the ParameterSet table where the parameterSet field matches the parameter set name passed in from the Schedule A DDR report
For each of those parameters
Set up the parameters array to hold the parameters defined in the retrieved rows. (This is how you use the table to fill in parameters dynamically.)
End for
If there’s a “nonZeroQuery” value passed in from Schedule A DDR
Then run the nonZeroQuery and exit if it returns zero. (This is how you prevent report execution if some condition is not met; any query that returns something other than zero will allow the report to run.)
End if
Now ask SSRS to run the report, using the parameters we just extracted from the table, and the report name passed in from Schedule A DDR
Get the output and write it to a local file
Email the file to whatever email addresses were passed in from Schedule A DDR
Instead of creating a subscription to modify the subscriptions table, I would put that piece somewhere else, such as in a SQL agent. But the idea is the same. A regularly running piece of SQL can add or change lines in the subscription table.
A Google of "SSRS Subscription table" returned a few helpful results: Here's an article based on 2005, but the principles should be the same for 2008: This article is for 2008, and is really close to what you are describing as well.
I would just look at the fields one by one in the subscriptions table and determine what you need for each. Try creating a row by hand (a manual insert statement) to send yourself a subscription.
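As a concrete illustration of the technique those articles describe: every standard SSRS subscription is backed by a SQL Server Agent job whose name is the subscription's GUID, so a regularly scheduled piece of T-SQL can fire an existing subscription conditionally. A sketch (the report name and the trigger condition are placeholders for your own):
DECLARE @SubscriptionID nvarchar(128);

-- Find the subscription attached to the report
SELECT @SubscriptionID = CONVERT(nvarchar(128), s.SubscriptionID)
FROM ReportServer.dbo.Subscriptions s
JOIN ReportServer.dbo.[Catalog] c ON c.ItemID = s.Report_OID
WHERE c.Name = 'MyDailyUserReport';

-- Fire the Agent job that delivers it, but only when there is something to send
IF EXISTS (SELECT 1 FROM dbo.users WHERE full_name = 'Mark Price')
    EXEC msdb.dbo.sp_start_job @job_name = @SubscriptionID;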
R-Tag supports SSRS data-driven reports with SQL Server Standard edition.
You can use SQL-RD, a third-party solution, to create and run data-driven schedules without having to upgrade to SQL enterprise. It also comes with event-based scheduling (triggers the report on events including database changes, file changes, emails received and so on).