Cross-checking a SQL Server report [closed]

I have a report that runs daily, and I want to send its output to a CSV file. Due to the nature of the report, some data can occasionally be lost: new data is generated while the job is executing, so some of it is missed during this lengthy process.
Is there a way to cross-check on a daily basis that no data from the previous day has been lost? Perhaps with a tick or cross at the end of each row to show whether that row has been exported to the CSV?
I am working with sensitive information so I can't share any of the report details.

This is a fairly common question. Without specifics, it's very hard to give you a concrete answer - but here are a few solutions I've used in the past.
Typically, such reports have "grand total" lines - your widget report might be broken down by month, region, sales person, product type, etc. - but you usually have a "total widgets sold" line. If that's a quick query (you may need to remove joins and other refinements), then running it after you've generated the report data lets you compare the freshly computed grand total with the grand total at the end of the report. If the results differ, you know the data changed while the report was running.
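As a minimal sketch of that check - assuming a hypothetical widget_sales source table and a report_runs audit table that stores the grand total each run captured (both names are illustrative):
DECLARE @reportTotal INT, @currentTotal INT;

-- Grand total the report recorded when it ran (report_runs is a hypothetical audit table)
SELECT @reportTotal = grand_total
FROM report_runs
WHERE report_date = CAST(GETDATE() - 1 AS DATE);

-- Quick recount, stripped of joins and other refinements
SELECT @currentTotal = COUNT(*)
FROM widget_sales
WHERE sale_date = CAST(GETDATE() - 1 AS DATE);

-- A mismatch means rows appeared or disappeared after the report ran
IF @reportTotal <> @currentTotal
    PRINT 'Data changed since the report ran - re-export the CSV';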
Another option - SQL Server specific - is to use a checksum over the data you're reporting on. If the checksum changes between the start and end of the reporting run, you know you've had data changes.
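In T-SQL a sketch of that could use CHECKSUM_AGG over BINARY_CHECKSUM (widget_sales is again a placeholder; note that BINARY_CHECKSUM can occasionally miss a change due to collisions):
DECLARE @before INT, @after INT;

-- Checksum of the reporting window before the run
SELECT @before = CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM widget_sales
WHERE sale_date = CAST(GETDATE() - 1 AS DATE);

-- ... generate the report here ...

-- Same checksum afterwards; a difference means the data moved underneath you
SELECT @after = CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM widget_sales
WHERE sale_date = CAST(GETDATE() - 1 AS DATE);

IF @before <> @after
    PRINT 'Underlying data changed during the reporting run';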
Finally - and most dramatically - if the report's accuracy is critical, you can store the fact that a particular row was included in a reporting run. This makes your report much more complex, but it allows you to be clear that you've included all the data you need. For instance:
insert into reporting_history
select @reportID, widget_sales_id   -- record each row included in this run
from widget_sales
--- reporting logic here
select widgets.cost,
       widget_sales.date,
       widget_sales.price,
       widget_sales......
from widgets
inner join widget_sales on ...
inner join reporting_history on reporting_history.widget_sales_id = widget_sales.widget_sales_id
---- all your other logic
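Once membership is recorded, you can also reconcile after the fact - which maps directly onto the "tick or cross per row" idea in the question. A sketch, assuming reporting_history has (report_id, widget_sales_id) columns as inserted above:
-- Rows in the source that never made it into this reporting run
SELECT ws.widget_sales_id
FROM widget_sales AS ws
LEFT JOIN reporting_history AS rh
    ON rh.widget_sales_id = ws.widget_sales_id
   AND rh.report_id = @reportID
WHERE rh.widget_sales_id IS NULL;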

Related

How to create SSRS report that use different aggregation algorithms on the same column? [closed]

I'd like to create a report that has columns like Year, Month, Earnings.
From January to September, the report shows the sum of earnings.
From October to December, the report shows the average of earnings.
I am not sure how to approach this. I am new to SSRS, so please explain it as simply as possible. Thank you very much.
Assume you have a table like the one below that holds your data in SQL Server.
CREATE TABLE TestTable (
    ActYear      INT,
    ActMonthName VARCHAR(20),
    ActMonthNum  INT,
    Earnings     INT
)
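For illustration, you could seed it with a few hypothetical rows to test the query below:
-- Hypothetical sample data; values are made up
INSERT INTO TestTable (ActYear, ActMonthName, ActMonthNum, Earnings)
VALUES (2019, 'January',  1,  100),
       (2019, 'February', 2,  150),
       (2019, 'October',  10, 200),
       (2019, 'November', 11, 300);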
To develop an SSRS report you can use SSRS Report Builder or SSDT in Visual Studio; I suggest SSRS Report Builder for beginners.
In the first step of SSRS report development, you define a data source; the Report Builder wizard will help you complete this step.
In the second step, you can use the following query to populate the report with data from SQL Server.
SELECT ActYear,
       SUM(Earnings) AS TotalEarning,
       (SELECT AVG(EarnAvg.Earnings)
        FROM TestTable EarnAvg
        WHERE EarnAvg.ActYear = EarnTot.ActYear   -- correlate so the average matches each row's year
          AND EarnAvg.ActMonthNum BETWEEN @AvgParam1 AND @AvgParam2) AS AverageEarn
FROM TestTable EarnTot
WHERE EarnTot.ActMonthNum BETWEEN @SumParam1 AND @SumParam2
GROUP BY ActYear
The query above is written with parameters so that users can choose the specific month ranges; this way you don't need to modify the report every time the ranges change.
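For example, to reproduce the ranges in the question (January-September summed, October-December averaged), the parameter values might be set like this when testing outside SSRS:
-- In the report these become report parameters; hard-coded here only for testing
DECLARE @SumParam1 INT = 1,  @SumParam2 INT = 9;   -- Jan..Sep -> SUM
DECLARE @AvgParam1 INT = 10, @AvgParam2 INT = 12;  -- Oct..Dec -> AVG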
For more detail on developing SSRS reports, you can refer to the article SSRS Report Builder introduction and tutorial.

IBM MDM Component level getParty but as per requesterTimeZone [closed]

We have existing code that retrieves user details via a component-level getPerson call. Perhaps the last developer chose the component-level getPerson for its performance benefits.
But now I have a requirement that all the date fields in the getPerson response must contain dates in the timezone defined in the requesterTimeZone field.
I have 2 options:
1. Convert all component-level getParty calls into controller-level calls and set the timezone.
2. Manually write code to convert the 20 - 25 date field values into the timezone defined in requesterTimeZone.
Which one is more beneficial for performance? And is there a way, in a component-level getPerson call, to set requesterTimeZone to, say, IST or PST, when the value stored in the DB is GMT by default?
I would choose the 1st option, as per IBM standards. Manually converting the timestamp fields available in most of the BObjs is a tedious job and is not recommended. I hope you have OTS enabled; in that case, adding the controller flow doesn't have much impact. If you're invoking getParty more than once, save the response instead of calling it many times.
Are you calling it from a business proxy, like Maintain? If possible, let us know the exact behaviour.
Finally, I used ObjectHierarchyMetadata.addHandler(BusinessObjectTimeZoneConverterHandler) and ObjectHierarchyMetadata.execute(anyBObj) to convert a BObj obtained via a component-level get call.

Sabre- BARGAIN FINDER MAX (OTA_AirLowFareSearchRQ) ResponseType="" [closed]

I'm currently working with the Sabre SOAP APIs for Air (flights) and using BargainFinderMax (OTA_AirLowFareSearchRQ) to search for flight availability. In the request there is a parameter, ResponseType, that defines the type of response to the requested data.
My question is: is there any ResponseType that will bring back results grouped by price? For example, a group with one price of $1000 would contain multiple flight options (with different timings). For now I can only get the OTA and GIR response types, which show separate itineraries having the same price.
The response has two itineraries with the same data (same price) but different legs. What I'm actually looking for is for itineraries with the same price to be grouped together in a single element.
This is the same as the response returned by Travelport if you make a LowFareSearch request with SolutionResult="false": it returns PricePoint results, i.e. itineraries grouped under a single price point. Can this be done in Sabre?
ResponseType can only have those 2 values, as stated in the request documentation: "ResponseType: specify type of the response; valid values: 'OTA' - regular OTA response, 'GIR' - Grouped Itinerary Response."
If not used, it defaults to OTA.
Anyway, even though it is harder for a person to read, GIR groups almost everything in order to avoid duplicating data. But since the price of the whole itinerary sits inside the itinerary element, the only way to do what you want is to loop through the itineraries and group them yourself, which can be done with either OTA or GIR. There's nothing built in for that.

SQL Server/Table Design, table for data snapshots where hundreds of columns possible [closed]

We have a business process that requires taking a "snapshot" of portions of a client's data at a point in time, and being able to regurgitate it later. The data set has some oddities though that make the problem interesting:
The data is pulled from several databases, some of which are not ours.
The list of fields that could possibly be pulled is somewhere between 150 and 200.
The list of fields that are typically pulled is somewhere between 10 and 20.
Each client can pull a custom set of fields for storage; this set is pre-determined ahead of time.
For example (and I have vastly oversimplified these):
Client A decides on Fridays to take a snapshot of customer addresses (1 record per customer address).
Client B decides on alternate Tuesdays to take a snapshot of summary invoice information (1 record per type of invoice).
Client C monthly summarizes hours worked by each department (1 record per department).
When each of these periods happen, a process goes out and fetches the appropriate information for each of these clients... and does something with them.
Sounds like an historical reporting system, right? It kind of is. The data is later parsed and regurgitated in a variety of formats (XML, CSV, Excel, text files, etc.) depending on the client's needs.
I get to rewrite this.
Since we don't own all of the databases, I can't just keep references to the data around. Some of that data is overwritten periodically anyway. I actually need to find the appropriate data and set it aside.
I'm hoping someone has a clever way of approaching the table design for such a beast. The methods that come to mind, all with their own drawbacks:
1. A dataset table (data set id, date captured, etc.); a data table (data set id, row number, "data as a blob of crap").
2. A dataset table (data set id, date captured, etc.); a data table (data set id, row number, possible field 1, possible field 2, possible field 3, ..., possible field x (where x > 150)).
3. A dataset table (data set id, date captured, etc.); a field table (1 row per possible field type); a selected-field table (1 row for each field the client has selected); one table for each possible primitive data type (varchar, decimal, integer), keyed on selected field, data set id, row, and position, where the data is the single field value.
The first is the easiest to implement, but the "blob of crap" would have to be engineered to be parseable so it can be broken down into reportable fields. It's not very database friendly either, not reportable, etc. It doesn't feel right.
The second is a horror show of columns. shudder
The third sounds right, but kind of doesn't. It's 3NF (yes, I'm old), so it feels right that way. However, reporting on it screams of "rows that should have been columns" problems - it's fairly useless to try to select from outside of a program.
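For what it's worth, a minimal sketch of what the third option's skeleton might look like (every name here is hypothetical):
CREATE TABLE dataset        (dataset_id INT PRIMARY KEY, captured_at DATETIME2 NOT NULL);
CREATE TABLE field          (field_id INT PRIMARY KEY, field_name VARCHAR(100), field_type VARCHAR(20));
CREATE TABLE selected_field (selected_field_id INT PRIMARY KEY, client_id INT,
                             field_id INT REFERENCES field(field_id));

-- One value table per primitive type; varchar shown, decimal/integer follow the same shape
CREATE TABLE value_varchar (
    selected_field_id INT REFERENCES selected_field(selected_field_id),
    dataset_id        INT REFERENCES dataset(dataset_id),
    row_number        INT,
    position          INT,
    field_value       VARCHAR(500),
    PRIMARY KEY (selected_field_id, dataset_id, row_number, position)
);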
What are your thoughts?
RE: "where hundreds of columns possible"
The limitations are 1000 columns per table
http://msdn.microsoft.com/en-us/library/ms143432.aspx

Add a Record To Last Of The Table [closed]

When I use the query below to add a new record to the table, the new record is added at the "first" of the table.
I want new records to be added at the "last" of the table.
This is my code:
begin
    insert into TBLCrowler (Url, Title, ParentId, HasData)
    values (@Url, @Title, @ParentId, @HasData)
end

select top 1 CatId, Title, ParentId, Url, CrawlerCheck
from TBLCrowler
where CatId = (select min(CatId) from TBLCrowler where CrawlerCheck = 1)

update TBLCrowler
set CrawlerCheck = 2, HasData = 2
where CatId = (select min(CatId) from TBLCrowler where CrawlerCheck = 1)
Okay, once more into the breach.
You are using a relational database. It's not a worksheet, and it's not a rectangular array of cells in a Word document. Its power relies on being able to store and retrieve records in the most efficient way possible.
Ordering is either implied through an index, which the DBMS is free to ignore and which could change anyway, or explicitly requested through an ORDER BY clause.
If you want things ordered by the time they were added to the table, you add a created_at column and populate it at the time you perform the insert.
Then when you select from it you add Order By Created_At to your select statement.
If you want that ordering to be "fast", you add an index on the Created_At column, and the DBMS will then make a brave attempt at using the index to avoid the cost of a full sort.
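A minimal sketch of that approach against the question's table (the Created_At column and index name are illustrative):
-- Add an insert timestamp, backfilled with the default for existing rows
ALTER TABLE TBLCrowler
    ADD Created_At DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME();

CREATE INDEX IX_TBLCrowler_Created_At ON TBLCrowler (Created_At);

-- "Last" is now a well-defined question
SELECT TOP 1 CatId, Title, ParentId, Url
FROM TBLCrowler
ORDER BY Created_At DESC;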
Step back and think for a minute about how you would write a DBMS. What would the cost of your implicit orderedness be? Any change to any but the "last" record in the "last" table would mean rewriting to disk every record in every table "after" it. Insert is worse, delete is just as bad, and that's without considering that different records take up different amounts of space.
So throw first and last in the bin, if you can find them...