Storing a result set in a SQL Server database? [closed]

I'm looking for a way to store a result set in my SQL Server database so it's faster to retrieve, if possible. The reason I want to do this is that I need the information quite frequently, but the data rarely changes, so I believe it would improve my database performance a lot.
The only thing I was able to find was indexed views, which don't work for me since my query doesn't qualify for that kind of view.
My result set is derived from several SQL queries, and their number will grow over time.
My backup solution is to have the program that uses the database store its own copy, so I can skip calling the database. But this would make my system more complex. I would rather have all my data calls in the database so it's easier to keep track of things.
Do any of you know a way to store result sets in a SQL Server database?

I need the information quite frequently, but the data rarely changes
If the data rarely changes, then why not just use an SSI (server-side include) file built from the data in the database? You can always recreate this text file whenever the data changes.
When I did web work, we served up all the data for all the web pages directly from database queries. We decided to change our model and use SSI files for all the database items that rarely changed. We built a "file recreation" routine into the backend admin that would automatically rebuild and overwrite the SSI file whenever the customer changed one of those "rarely changed" database items.
This boosted performance on our servers, cut down on server round trips, and sped up display time. Truly a win-win.
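If you'd rather keep the cache inside SQL Server itself rather than in a file, the same rebuild-on-change idea can be expressed as an ordinary summary table. A minimal sketch, with all object names hypothetical:

-- Cache table holding the precomputed result set (hypothetical schema).
CREATE TABLE dbo.ReportCache (
    CustomerId  int           NOT NULL,
    TotalOrders int           NOT NULL,
    TotalAmount decimal(18,2) NOT NULL
);

-- Rebuild the cache whenever the underlying data changes, e.g. from a
-- scheduled job or at the end of the routine that updates the source data.
BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.ReportCache;
    INSERT INTO dbo.ReportCache (CustomerId, TotalOrders, TotalAmount)
    SELECT o.CustomerId, COUNT(*), SUM(o.Amount)  -- stand-in for the real expensive queries
    FROM dbo.Orders AS o
    GROUP BY o.CustomerId;
COMMIT;

-- Readers now hit a plain, indexable table instead of re-running the queries.
SELECT CustomerId, TotalOrders, TotalAmount FROM dbo.ReportCache;

Like the SSI file, the cache is only as fresh as its last rebuild, so the rebuild has to be wired into whatever changes the source data.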

Related

Are there significant cons to deeply nested types in DB columns? [closed]

I'm trying to evaluate the pros and cons of structuring my data model in various ways, and I'd like to know if I'd be shooting myself in the foot by stuffing rather complex nested data types into individual columns of a SQL table.
For example, let's say I want to serialize an array of structs (or even worse, an array of hashes of hashes) and save it in a single column. It'd most likely be a somewhat nested JSON dictionary. Something like:
user_id | user_related_data_blob
--------+-----------------------
1       | { .... }
The obvious cons I can immediately see are data coupling, in case some of the data isn't all that tightly related to the user. There's also the size of each retrieved row, which might make fetching over the web rather slow, especially if most of the data isn't even immediately needed by the client. Querying by those columns also becomes rather complex (and probably not indexable) unless there's special tech in place to support it.
Pretty much the only upside, and it might be significant depending on the context, is that you don't have to create a complex schema or spend a lot of time making sure all the constraints are sane, and you have a lot fewer moving parts. For something like a quick prototype, it might even make sense.
Am I missing anything here? Is there a rule of thumb out there in the SQL world that states that you should never nest data in an individual column? Any good guidelines?
Is there a rule of thumb out there in the SQL world that states that you should never nest data in an individual column?
First normal form for a start. Perhaps you're after a NoSQL solution?
A major problem is that you will likely need to retrieve your data into another environment to be able to update it effectively, and you're unlikely to be able to access it in the most effective ways. Postgres, for example, has a json data type and some functions for retrieving and storing data, including type verification, but you still need to retrieve the object from the database, deserialize it in your code, and then access it however you want.
With the many varieties of data stores available, if you must store data in a specific format, I'd seek out options designed around that. MongoDB and Couchbase, for example, are great engines for storing JSON-encoded data, allowing you to access a JSON object as its natural type.
Going back to Postgres for a second, it does provide the row_to_json() function which will return table rows formatted as JSON. If you can easily map your JSON data to a flat table structure, this may offer some possibilities. Of course it does not help with unstructured or semi-structured data.
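For illustration, a minimal Postgres sketch (the users table and its columns are hypothetical):

-- row_to_json() turns each row of a flat (sub)query into a JSON object.
SELECT row_to_json(u)
FROM (SELECT user_id, name, country FROM users) AS u;
-- => {"user_id":1,"name":"Ada","country":"NO"}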
Storing blob data in a SQL database is a bad idea, because SQL doesn't understand it and doesn't provide useful tools to work with it.
there is no way to search or filter BLOB data
there is no way to retrieve just parts of the BLOB, it's all or nothing
most SQL databases aren't built internally for handling BLOBs
You could find a dirty workaround using string operations in SQL to get some limited access to the content of the blobs, but these will be very, very slow and cumbersome to implement and maintain.
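To see why, here is a hedged sketch of such a workaround in T-SQL, reusing the hypothetical column names from the question; it scans every row, cannot use an index on the extracted value, and breaks as soon as the JSON formatting varies:

-- Pull the value of a "country" key out of a JSON blob with plain string
-- functions (all names hypothetical; the literal '"country":"' is 11 characters).
SELECT user_id,
       SUBSTRING(
           user_related_data_blob,
           CHARINDEX('"country":"', user_related_data_blob) + 11,
           CHARINDEX('"', user_related_data_blob,
                     CHARINDEX('"country":"', user_related_data_blob) + 11)
             - (CHARINDEX('"country":"', user_related_data_blob) + 11)
       ) AS country
FROM users
WHERE user_related_data_blob LIKE '%"country":"%';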
But when nested documents are the best way to express your data, you should consider using a document-oriented database like MongoDB or CouchDB, which provide the tools to search, filter and modify individual fields of documents.

SQL Server 2005 nested views - strategy for untangling? [closed]

I have a reporting web application that graphs information based on aggregations done in a series of nested SQL views (three levels deep). The last nested view is called from a stored procedure within the Entity Framework code. The performance penalty of this has begun to take its toll, as the application struggles to get data and times out:
In SQL Profiler there are 360,000+ reads and CPU at 28,313. In addition, I can't even open the third-level view in SQL without timing out.
The first view simply gathers data from several tables and aggregates it. The second performs calculations on this data, such as date differences, time zone adjustments, and averages. The third finalizes these calculations and presents a summary of the required data. The third view is the one I query from.
What is a good strategy for untangling nested views, in general? Specifically if you have calculations that need to be done in SQL server, but can only be done once data has been combined into a certain level, what's a better strategy than nested views?
Thanks for any help you can provide!
You can include any view as a subquery; however, I doubt that in itself will help performance in this case. It will enable you to look at parts of the views and possibly put any shared parts into table variables or temporary tables.
If you could share more information, that might help.
I advise you to:
inline those views, so you've got it all in front of you (see the sketch after this list);
cut it down, bit by bit, to see where the (uncompiled!) performance issues are;
fix the performance issues;
and hope the performance issues are in the same place once it is compiled.
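As a hedged sketch of what that inlining can look like, each view level becomes a temp table you can inspect, time, and index independently (all names hypothetical):

-- Level 1: raw aggregation (formerly the innermost view).
SELECT key_col, SUM(raw_value) AS total_value, MIN(event_time) AS first_event
INTO #level1
FROM dbo.SourceTable
GROUP BY key_col;

CREATE INDEX IX_level1 ON #level1 (key_col);  -- something a plain view can't give you

-- Level 2: calculations on the aggregated data (formerly the middle view).
SELECT key_col, total_value,
       DATEDIFF(day, first_event, GETDATE()) AS days_since_first
INTO #level2
FROM #level1;

-- Level 3: the final summary the report actually reads.
SELECT key_col, total_value, days_since_first
FROM #level2
WHERE total_value > 0;

Each intermediate result can now be profiled on its own, which is exactly what nested views make hard.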

Centralized storage for text files [closed]

I'm thinking about centralized storage of text files with different metadata and content settings (unique lines, key:value lines), but I still don't know which technology to use: a SQL DB like PostgreSQL, or a NoSQL solution.
Large files: 100-600 MB each, with small queries that read/write 100-500 lines.
Any hints?
Really, the choice between SQL and NoSQL systems depends on what kind of system you're running. SQL is relatively expensive compared to most NoSQL systems because it provides the full ACID guarantees: Atomicity, Consistency, Isolation, and Durability. These guarantees are important for maintaining the consistency of your data, if you actually require consistent data. If you don't (e.g. you are a caching solution, or you are Twitter), then the efficiency of NoSQL systems becomes much more attractive.
For your specific use case, it doesn't sound like there are many solutions that are going to help you. Modifying the middle of a text file inherently requires (at least) rewriting the entire portion of the file after the edit point out to disk, assuming you actually want the files to be plain text on disk.
You might be able to build a system on top of SQL or NoSQL that represents text files as lines, or chunks of lines, and operates on them in a row-oriented manner. But even that type of system is likely to be inefficient for files as large as 100-600 MB. Consider storing the files themselves as some kind of structured data in SQL, and then regenerating the files on demand when a user requests the full text file.
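As a hedged sketch of that last idea, the files could be stored line-by-line so that a 100-500 line read or write touches only those rows (Postgres-flavoured, schema entirely hypothetical):

-- One row of metadata per file, one row per line of content.
CREATE TABLE files (
    file_id  serial PRIMARY KEY,
    name     text NOT NULL,
    metadata text              -- key:value settings, etc.
);

CREATE TABLE file_lines (
    file_id int  NOT NULL REFERENCES files (file_id),
    line_no int  NOT NULL,
    content text NOT NULL,
    PRIMARY KEY (file_id, line_no)
);

-- Read lines 1000-1500 of one file without touching the rest of its ~600 MB.
SELECT line_no, content
FROM file_lines
WHERE file_id = 42 AND line_no BETWEEN 1000 AND 1500
ORDER BY line_no;

-- Regenerating the full text file on demand is a single ordered scan.
SELECT string_agg(content, E'\n' ORDER BY line_no)
FROM file_lines
WHERE file_id = 42;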

server-side sql vs client-side sql [closed]

I have been looking for answers to this question for the past two hours. I haven't found a single relevant post/book/answer. Could somebody explain the difference between server-side scripting and client-side scripting to me? I know that triggers are part of server-side scripting, but really, what's the difference between the two? Could you please provide a couple of examples?
Thanks!
This could actually mean a couple of different things, but the explanation that is probably most relevant to you (based on your mention of triggers) is that server-side scripting is SQL that is precompiled and stored in the database in the form of triggers, functions, stored procedures, views, etc., while client-side SQL (also known as dynamic SQL) is SQL that is contained within the application.
Some of the reasons for implementing server-side SQL include performance (the database can precompile and optimize the SQL), security, and maintenance (it is much easier to modify a stored procedure than to recompile and re-release your application).
The primary reason we have found for implementing dynamic SQL is to handle situations that are not easily handled through server-side SQL, usually involving variable-length WHERE clauses.
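For example, a hedged sketch of dynamic SQL assembled for a variable-length WHERE clause (T-SQL, with made-up table and parameter names):

-- Client-style dynamic SQL: the statement is built at run time from
-- whichever filters were actually supplied (hypothetical names throughout).
DECLARE @sql nvarchar(max) = N'SELECT * FROM dbo.Orders WHERE 1 = 1';
DECLARE @city nvarchar(50) = N'Oslo';          -- pretend these came from the UI
DECLARE @minAmount decimal(18,2) = NULL;

IF @city IS NOT NULL
    SET @sql += N' AND City = @city';
IF @minAmount IS NOT NULL
    SET @sql += N' AND Amount >= @minAmount';

-- Values are still passed as parameters, which avoids SQL injection.
EXEC sp_executesql @sql,
     N'@city nvarchar(50), @minAmount decimal(18,2)',
     @city = @city, @minAmount = @minAmount;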
I don't think you can say there is really such a thing as "client-side SQL". There might be SQL commands/statements generated by a client application, but they are executed directly by the database engine, where they are persisted and logged.
In other words a client application might issue this:
select *
from SomeTable
If it is successful, that SELECT will be executed on the database server, not in the client application, even though that's where it was generated.
Now you might be trying to distinguish where SQL code is generated. A client application might generate the bulk of the Data Manipulation Language (DML) code (i.e. INSERT/UPDATE/DELETE) and issue the queries (SELECT). The server generates the SQL and events for things like triggers: the database engine sees that an action was taken on a database, an object, or the server itself, and that event "triggers" the engine to execute another piece of SQL code. That would be server-generated SQL.
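To make the trigger part concrete, here is a minimal sketch of server-side SQL that the engine runs on its own (all table names hypothetical):

-- Stored in the database; fires automatically on every insert into
-- dbo.Orders, with no involvement from the client application.
CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    INSERT INTO dbo.OrdersAudit (OrderId, InsertedAt)
    SELECT i.OrderId, GETDATE()
    FROM inserted AS i;
END;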
I think I understand your question, but please let me know if there is something else or I didn't answer it correctly.

FOR XML AUTO in stored procedure [closed]

Will FOR XML AUTO queries in stored procedures, used with ExecuteXmlReader to retrieve data and populate business objects, cause any performance hit?
I don't know about performance, but I have started doing things this way in order to take advantage of serialization, so I can pass BLL types as generics directly to the DAL for filling, and I like it a lot. It bypasses LINQ and typed DataSets, and uses a lot less code, whether machine-generated or not. As for performance, the best thing to do is run your own tests.
Update: If you're going to use FOR XML to serialize to BLL objects, don't use AUTO; use PATH and specify the name of the root, otherwise you get <row/> elements and no named root.
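For instance, a hedged sketch of the PATH form (hypothetical table), where both the row element and the root are named explicitly:

-- Without PATH('Customer') the elements default to <row>, and without
-- ROOT(...) there is no single root element to deserialize against.
SELECT CustomerId, Name
FROM dbo.Customers
FOR XML PATH('Customer'), ROOT('Customers');
-- => <Customers><Customer><CustomerId>1</CustomerId><Name>...</Name></Customer></Customers>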
It surely affects performance if the amount of data you are going to retrieve is relatively large (the XML formatting is bigger than the TDS rowset). I don't have the exact statistics with me, but you can easily profile your queries with and without FOR XML AUTO and find out. FOR XML AUTO surely takes more time than normal SQL queries.
I would say it's preferable to convert your recordsets to XML format in your application code rather than doing it in SQL Server.
EDIT:
T-SQL commands vs. XML AUTO in SQL Server - this article explains and gives the comparison between XML auto and normal T-SQL queries, have a look at this.
The author concluded: "It seem to be consistent that the T-SQL query is performing better than the rest. The XML query FOR XML AUTO is using more than eight times more CPU but the same amount of I/O. The complex XML commands are issuing more than 80 times (!) more reads than the T-SQL command, also many writes and above six times the CPU."