The question was already asked here (Pervasive SQL query), but never answered.
Can somebody help to create a query that will search the entire database for a specific value?
Sorry, I can't comment on the previous question as I am a new user and don't have enough reputation to do so.
There is not a built-in way to search every single column for a specific value.
I'm not exactly sure why you want to search every single column for a specific value. Seems a little excessive in terms of the performance hit on the database.
If you really need to do this, the best suggestion I can give would be to write a stored procedure that iterates all of the tables, then iterates all of the fields in each table to use them in the WHERE clause. A better way would be to build the query using only the fields where the value is likely to be. For example, if you're trying to search all the tables for a specific ID, you probably don't need to search date, currency, or quantity fields. How you do this will also depend on the version of PSQL you're using.
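As a starting point for such a stored procedure, Pervasive PSQL exposes its data dictionary through the X$File and X$Field system tables (the exact layout can vary by PSQL version, so treat this as a sketch), which lets you enumerate tables and their columns:

```sql
-- Sketch: list every user table and its columns from the PSQL
-- dictionary tables, so WHERE clauses can be built per column.
-- Xe$DataType could additionally be used to skip date/currency fields.
SELECT f.Xf$Name  AS table_name,
       e.Xe$Name  AS column_name,
       e.Xe$DataType AS data_type
FROM X$File  f
JOIN X$Field e ON e.Xe$File = f.Xf$Id
WHERE f.Xf$Name NOT LIKE 'X$%'   -- skip the dictionary tables themselves
ORDER BY f.Xf$Name, e.Xe$Name;
```

The stored procedure would loop over this result set and generate one SELECT per table with the candidate columns OR-ed together in the WHERE clause.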
If you explain what you hope to accomplish and why you need it, we might be able to offer better suggestions.
I want to find which tables/columns in Redshift remain unused in the database in order to do a clean-up.
I have been trying to parse the queries from the stl_query table, but it turns out this is quite a complex task, for which I haven't found any library I can use.
Anyone knows if this is somehow possible?
Thank you!
The column question is a tricky one. For table use information I'd look at stl_scan, which records info about every table scan step performed by the system. Each of these is date-stamped, so you will know when the table was "used". Just remember that the system logging tables are pruned periodically and the data only goes back a few days, so you may need a process that records table use daily to build an extended history.
I pondered the column question some more. One thought is that query ids are also recorded in stl_scan, and these could help in identifying the columns used in the query text. For every query id that scans table_A, search the query text for each column name of the table. It wouldn't be perfect, but it's a start.
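A sketch of the table-usage part, assuming the standard stl_scan columns:

```sql
-- Sketch: last scan time and query count per permanent table,
-- from the (periodically pruned) stl_scan system log.
SELECT TRIM(s.perm_table_name)  AS table_name,
       MAX(s.starttime)         AS last_scanned,
       COUNT(DISTINCT s.query)  AS query_count
FROM stl_scan s
WHERE s.perm_table_name NOT IN ('Internal Worktable', 'S3')
GROUP BY 1
ORDER BY last_scanned;
```

Tables that never appear here over your retention window are clean-up candidates; joining s.query to stl_query would give you the query text for the column-name search mentioned above.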
I have created an SQL script that runs really fast and does everything I want. It uses a cursor to loop through the parent records, does some calculations on each, and then outputs the results into a temporary table. Inside that cursor I have another cursor that extracts all the child records of that parent, again does some work, and puts it into a temporary table.
My senior Dev is saying cursors are awful and I should do it another way, but also doesn't tell me what a better way is.
So how do I loop through records and do steps of calculations and create an output for each record without using a cursor?
I'm sorry, due to work product and how large the script is I can't post its code. The format of it is:
cursor loops through table that holds parent records
For each parent record it takes field values and does conversions from strings to time.
Those conversions are then used in BETWEEN statements to figure out if a time falls between the two field times
An insert statement with the output is put into a temp table and summed at the end.
Another cursor is created in the parent cursor to pull child records of the parent record from another table. The same process as the parent happens.
I'm not actually upset with my script; it's working as intended and running very quickly so far, but I am open to better practices if possible.
First of all, I hope you're aware that SQL Server 2008 has been out of support, even with SP4, for a few months now.
Second, as others have already said, your senior DBA is right about cursors. And if your code is too big to post here, it is probably too time-consuming for him to go through it, understand your code, and then change it for you. I would expect a senior to give some hints on what to search for, though.
About your question, I find it very hard to think of an answer because your description only gives me a vague idea what you're trying to accomplish. E.g., what are the "field values" that you convert to time?
In my experience, SQL Server does a pretty good job interpreting datetime strings. You may also find CAST/CONVERT and DATEPART useful.
As far as I understand your parent/child tables, you'd probably want to use a table join here. They are well explained here: https://www.w3schools.com/sql/sql_join.asp
You may aggregate the result set with SUM(). But again, my understanding of your endeavour is too vague.
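To illustrate the join-plus-aggregate idea with an invented schema (since the original script wasn't posted, every table and column name here is an assumption), a set-based version of the parent/child time-window counting might look like:

```sql
-- Hypothetical schema: Parent(ParentId, StartText, EndText),
-- Child(ChildId, ParentId, EventText), where the *Text columns hold
-- time values as strings. The string-to-time conversion, the BETWEEN
-- test, and the final SUM all happen in one set-based pass instead
-- of per-row cursor iterations.
SELECT p.ParentId,
       SUM(CASE WHEN CAST(c.EventText AS time)
                     BETWEEN CAST(p.StartText AS time)
                         AND CAST(p.EndText   AS time)
                THEN 1 ELSE 0 END) AS events_in_window
FROM Parent p
JOIN Child  c ON c.ParentId = p.ParentId
GROUP BY p.ParentId;
```

The optimizer can parallelize and batch this in ways it never can with a row-by-row cursor, which is usually what seniors mean when they call cursors awful.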
I have an SQL table (SQLite database), Listing, that has a datetime field. For my program, I need to know the most recent time field my program has seen. However, I don't store all listings the program has seen into the database.
So my question is, what is the usual way to store data like "most recently seen object", which is a single record, into a database? Is there something more elegant than making another table that has one record with a datetime field?
First of all, you could save all records into your database and then take the MAX() suggested by @GordonLinoff. What's the problem with saving the data? If you want to keep the database size small, you can add a trigger that deletes the oldest rows during new INSERTs.
But I don't properly understand your example. Can you share some of your code?
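If you'd rather not store every listing, the usual pattern for a single-record value is a one-row state table; a minimal SQLite sketch (table and column names are invented):

```sql
-- Sketch: a one-row "state" table. The CHECK constraint makes row 1
-- the only legal row, and INSERT OR REPLACE keeps it updated in place.
CREATE TABLE AppState (
    id        INTEGER PRIMARY KEY CHECK (id = 1),
    last_seen TEXT NOT NULL          -- SQLite stores datetimes as text
);

INSERT OR REPLACE INTO AppState (id, last_seen)
VALUES (1, '2020-01-15 10:30:00');

SELECT last_seen FROM AppState WHERE id = 1;
```

It's still "another table with one record", but the constraint makes the single-row intent explicit rather than accidental.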
I've got a table with close to 5 million rows. Each of them has one text column where I store my XML logs.
I am trying to find out if there's some log having
<node>value</node>
I've tried with
SELECT TOP 1 id_log FROM Table_Log WHERE log_text LIKE '%<node>value</node>%'
but it never finishes.
Is there any way to improve this search?
PS: I can't drop any log
A query with a leading wildcard such as '%<node>value</node>%' will result in a full table scan (indexes can't be used), since the engine can't determine where within the field the match will occur. The only real way I know of to improve this query as it stands (short of things like partitioning the table, which should be considered if the table is constantly being logged to) would be to add a full-text catalog and index to the table in order to provide a more efficient search over that field.
Here is a good reference that should walk you through it. Once this has been completed you can use things like the CONTAINS and FREETEXT operators that are optimised for this type of retrieval.
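A sketch of the setup (the catalog name and the key-index name are invented, and note one caveat: the full-text word breaker may not treat XML markup as searchable tokens, so you may only be able to match on the node's value rather than the tags themselves):

```sql
-- Sketch: full-text catalog and index over the log column.
-- Requires an existing unique index on id_log (here assumed
-- to be named PK_Table_Log).
CREATE FULLTEXT CATALOG LogCatalog;

CREATE FULLTEXT INDEX ON Table_Log (log_text)
    KEY INDEX PK_Table_Log
    ON LogCatalog;

-- Phrase search via the full-text index instead of a table scan.
SELECT TOP 1 id_log
FROM Table_Log
WHERE CONTAINS(log_text, '"value"');
```

If you need true element-level matching (a specific value inside a specific node), converting the column to the xml data type and querying it with XQuery is the more precise, though more invasive, route.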
Apart from implementing full-text search on that column and indexing the table, maybe you can narrow the results by another parameters (date, etc).
Also, you could add a table field (varchar type) called "Tags", which you populate when inserting a row. This field would hold keywords for the log, and you could then use it as a condition in your query.
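A small sketch of the Tags idea (the column size and keyword scheme are assumptions):

```sql
-- Assumed: Tags is populated with keywords at insert time.
-- Filtering on the small, indexable Tags column first narrows
-- the rows the expensive LIKE has to scan.
ALTER TABLE Table_Log ADD Tags varchar(255) NULL;
CREATE INDEX IX_Table_Log_Tags ON Table_Log (Tags);

SELECT TOP 1 id_log
FROM Table_Log
WHERE Tags LIKE 'node%'                        -- index-friendly prefix match
  AND log_text LIKE '%<node>value</node>%';    -- verified on few rows only
```

This only helps for new rows, of course; existing logs would need a one-off backfill pass.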
Unfortunately, about the only way I can see to optimize that is to implement full-text search on that column, but even that will be hard to construct to where it only returns a particular value within a particular element.
I'm currently doing some work where I'm also storing XML within one of the columns. But I'm assuming any queries needed on that data will take a long time, which is okay for our needs.
Another option has to do with storing the data in a binary column, and then SQL Server has options for specifying what type of document is stored in that field. This allows you to, for example, implement more meaningful full-text searching on that field. But it's hard for me to imagine this will efficiently do what you are asking for.
You are using a LIKE query with a leading wildcard.
No index involved = no good.
There is, unfortunately, nothing you can do to speed this up with the query as it currently stands.
I don't think it will help but try using the FAST x query hint like so:
SELECT id_log
FROM Table_Log
WHERE log_text LIKE '%<node>value</node>%'
OPTION(FAST 1)
This should optimise the query to return the first row.
I'm given a task from a prospective employer which involves SQL tables. One requirement they mentioned is that they want the name retrieved from a table called "Employees" to come in the format of either "<LastName>, <FirstName>" OR "<FirstName> <MiddleName> <LastName> <Suffix>".
This appears confusing to me because it kind of sounds like they're asking me to make a function or something. I could probably do this in a programming language and have the information retrieved that way, but doing it exclusively in SQL is weird to me. I'm rather new to SQL; my familiarity doesn't exceed simple tasks such as creating databases, tables, and fields, inserting data into fields, updating fields in records, deleting records that meet a specific condition, and selecting fields from tables.
I hope that this isn't considered cheating since I mentioned that this was for a prospective employer, but if I was still in school then I could just outright ask a professor where I can find a clue for this or he would've outright told me in class. But, for a prospective job, I'm not sure who I would ask about any confusion. Thanks in advance for anyone's help.
A SQL query has a fixed column output: you can't change it. To achieve this, you could concatenate the name parts inside a CASE statement to produce one varchar column, but then you need something (a parameter) to switch the CASE.
So, this is presentation, not querying SQL.
I'd return all 4 columns mentioned and decide how I want them in the client.
Unless you have just been asked for 2 different queries on the same SQL table
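As a minimal sketch of the CASE-plus-parameter approach (the @Format parameter name is an assumption, and the concatenation syntax is SQL Server's):

```sql
-- Sketch: one varchar output column whose format is switched by
-- a parameter. ISNULL(... + ' ', '') drops missing middle names
-- and suffixes cleanly, since NULL + ' ' stays NULL.
DECLARE @Format varchar(10) = 'short';

SELECT CASE @Format
         WHEN 'short' THEN LastName + ', ' + FirstName
         ELSE FirstName + ' ' + ISNULL(MiddleName + ' ', '')
              + LastName + ISNULL(' ' + Suffix, '')
       END AS EmployeeName
FROM Employees;
```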
You haven't specified the RDBMS, but in SQL Server you could accomplish this using Computed Columns.
Typically, you would use a View over the table.
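A hedged sketch of both suggestions, assuming SQL Server and the column names from the question (the computed-column and view names are invented):

```sql
-- Sketch 1: a computed column for the short format.
ALTER TABLE Employees
    ADD ShortName AS (LastName + ', ' + FirstName);

-- Sketch 2: a view exposing both formats; ISNULL drops missing
-- middle names/suffixes without leaving double spaces.
CREATE VIEW EmployeeNames AS
SELECT ShortName,
       FirstName + ' ' + ISNULL(MiddleName + ' ', '')
           + LastName + ISNULL(' ' + Suffix, '') AS FullName
FROM Employees;
```

Either way, callers just SELECT the pre-formatted column instead of reassembling the name client-side.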