My query returns a column that holds types of real estate. Values can be condo, duplex, house, and so on. Instead of displaying condo, I just want a C in that column. My plan was to use a huge CASE/WHEN structure to cover all the cases; is there an easier way? By the way, just displaying the first letter in upper case won't work, because sometimes that rule can't be applied to produce the short code. Duplex, for example, is DE...
Thanks :-)
If you don't want to use a CASE statement, how about creating a lookup table to map the column value to the short code you want? Join onto this in your query.
NB - Only worth considering if your query runs over a fairly small result set, or you'll hit performance issues. Indexing the column would help.
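For example (a minimal sketch; the table and column names, and the codes beyond C and DE, are made up):

CREATE TABLE property_type_codes (
    type_name VARCHAR(20) PRIMARY KEY,
    type_code VARCHAR(4) NOT NULL
);

INSERT INTO property_type_codes (type_name, type_code) VALUES ('condo', 'C');
INSERT INTO property_type_codes (type_name, type_code) VALUES ('duplex', 'DE');
INSERT INTO property_type_codes (type_name, type_code) VALUES ('house', 'H');

SELECT p.*, c.type_code
FROM properties p
JOIN property_type_codes c ON c.type_name = p.property_type;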
Some other options, depending on your DB server's features:
Use a UDF to do the conversion.
A computed column on the source table (see the sketch after this list).
A helper table mapping each long string to its shorthand?
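The computed-column option, for instance, might look like this (a minimal sketch in SQL Server syntax; the table, column names, and codes are hypothetical):

CREATE TABLE properties (
    property_type VARCHAR(20),
    -- computed column: the CASE logic lives once on the table
    -- instead of being repeated in every query
    type_code AS CASE property_type
                     WHEN 'condo'  THEN 'C'
                     WHEN 'duplex' THEN 'DE'
                     WHEN 'house'  THEN 'H'
                 END
);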
The obvious thing to do would be to have another table which maps your value to a code, to which you can then join your results. But it smells a bit wrong: I'd want to join this other table on key values, not strings (which I assume aren't key values).
Why don't you use a DECODE function in SQL?
SELECT DECODE(your_column_name, 'condo', 'C', your_column_name) FROM your_table
I am creating a table by selecting from another. My understanding is that by doing this, the created columns should be of the same data type as the original source. This is not the case for a couple of columns, and it is driving me nuts. Two columns in particular: one is a VARCHAR2(4) and the other a VARCHAR2(1), but in the created table they both become VARCHAR2(100). Is there any case where this should happen?
My select query is fairly complicated in that there are 20 Unions, but they all pull from the same single table. There are also 28 columns, so I would rather have this work than create the table and populate it in two steps.
Are unions known to mess up this sort of script?
I understand the preference to create and fill the table in one step. Besides being easier now, it will also be easier if there are any modifications. However, at some point you might consider that you are spending more time trying to fix this than it would take to write the CREATE TABLE statement (which you can likely have the database generate for you from the existing version).
As far as why it is doing this: I would suggest that one of your unions has a CONCAT or some other function applied to the field. If you find the offending column, you can cast it to ensure the table is created as you intend.
If you can't find which of the 20 SELECT statements is the offending one, remove half and see if the columns are created the way you intend. Then keep dividing the unions in half until you have identified the SELECT statement that causes this. CAUTION: it might be more than one.
I figured out the problem; you're both pretty close to correct. One of the later unions was casting the variables as
CAST(NULL AS VARCHAR2(100))
Didn't figure a NULL value would increase the size of the datatype, but I guess it makes sense.
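For anyone hitting the same thing: a UNION resolves each column to a type wide enough for every branch, so a single CAST(NULL AS VARCHAR2(100)) widens the whole column. The fix is to cast the NULL to the size you actually want (a sketch with hypothetical names):

CREATE TABLE new_table AS
SELECT code, flag
FROM source_table
WHERE kind = 1
UNION ALL
SELECT CAST(NULL AS VARCHAR2(4)), CAST(NULL AS VARCHAR2(1))
FROM source_table
WHERE kind = 2;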
Which of the two table structures below is better?
[Two table layouts were shown as images: the first stores the whole text in a single MEASUREMENT_NAME column; the second splits it into separate Type and Currency columns.]
With the first design I have to use the LIKE operator in the query.
With the second I use the AND operator.
Does the first table design have any advantages over the second when selecting data?
In what situations should I choose the first table structure over the second one?
The first one would be better if you never needed to work with the Type or the Currency attributes in any way and you always used only the whole text stored as MEASUREMENT_NAME.
If you plan to work with the values of the Type or the Currency attributes separately, e.g. using their values in WHERE conditions, the second option will always be the better choice.
You could also create a combined structure containing both the whole text MEASUREMENT_NAME and the separated Type & Currency values for filtering purposes. This would take more space on disk and would not be optimized, but the whole text in MEASUREMENT_NAME may in the future contain attributes that are currently unknown to you. That could be a reason for storing MEASUREMENT_NAME in its raw format.
If the attribute MEASUREMENT_NAME is not something you get from external sources, but a data structure of your own making, and you are looking for a way to store records with a flexible (changing) structure, you are better off storing it as JSON or XML data; Oracle has built-in functions for JSON data.
I also recommend using lookup tables for the Type and Currency values, so that the main table contains only an ID as a foreign key, as in the sketch below.
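A minimal sketch of that layout (all names are hypothetical):

CREATE TABLE measurement_types (
    type_id   NUMBER PRIMARY KEY,
    type_name VARCHAR2(50) NOT NULL
);

CREATE TABLE currencies (
    currency_id   NUMBER PRIMARY KEY,
    currency_code VARCHAR2(3) NOT NULL
);

CREATE TABLE measurements (
    measurement_id NUMBER PRIMARY KEY,
    type_id        NUMBER REFERENCES measurement_types (type_id),
    currency_id    NUMBER REFERENCES currencies (currency_id)
);

Filtering then happens on indexed ID columns rather than on substrings of MEASUREMENT_NAME.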
The second table obviously has advantages over the first. If you have to query Type or Currency from the first table, you have to use RIGHT, LEFT, or other string functions.
Also, if you set up the keys/constraints for the second table properly, it follows second normal form (2NF).
I've got a table with close to 5 million rows. Each of them has a text column where I store my XML logs.
I am trying to find out if there's some log having
<node>value</node>
I've tried with
SELECT TOP 1 id_log FROM Table_Log WHERE log_text LIKE '%<node>value</node>%'
but it never finishes.
Is there any way to improve this search?
PS: I can't drop any log
A wildcarded query such as '%<node>value</node>%' will result in a full table scan (ignoring indexes), because the engine can't determine where within the field the match will occur. The only real way I know of to improve this query as it stands would be to add a full-text catalog and index to the table, to provide a more efficient search over that field. (Things like partitioning should also be considered if the table is being logged to constantly.)
Here is a good reference that should walk you through it. Once this has been completed, you can use operators like CONTAINS and FREETEXT that are optimised for this type of retrieval.
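Once the full-text index is in place, the query might look like this (a sketch; it assumes a full-text index has already been created on log_text):

SELECT TOP 1 id_log
FROM Table_Log
WHERE CONTAINS(log_text, '"value"');

Note that full-text search tokenises words, so this matches "value" anywhere in the log, not specifically inside <node>; narrowing to a particular element needs additional filtering.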
Apart from implementing full-text search on that column and indexing the table, maybe you can narrow the results by other parameters (date, etc.).
Also, you could add a table field (varchar type) called "Tags", which you populate when inserting a row. This field would hold keywords/tags for the log, and you could then use it as the query condition, as sketched below.
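A sketch of that idea (the column name and tag values are made up):

ALTER TABLE Table_Log ADD tags VARCHAR(200);

-- tags populated at insert time, e.g. 'node-value;error'
SELECT TOP 1 id_log
FROM Table_Log
WHERE tags LIKE '%node-value%';

A leading-wildcard LIKE over a short tags column is still a scan, but over far less data than the full XML text; an indexed exact-match tag table would do even better.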
Unfortunately, about the only way I can see to optimize that is to implement full-text search on that column, but even that will be hard to set up so that it only returns a particular value within a particular element.
I'm currently doing some work where I'm also storing XML within one of the columns. But I'm assuming any queries needed on that data will take a long time, which is okay for our needs.
Another option has to do with storing the data in a binary column, and then SQL Server has options for specifying what type of document is stored in that field. This allows you to, for example, implement more meaningful full-text searching on that field. But it's hard for me to imagine this will efficiently do what you are asking for.
You are using a like query.
No index involved = no good
There is nothing you can do with what you have currently to speed this up unfortunately.
I don't think it will help, but try using the FAST n query hint, like so:
SELECT id_log
FROM Table_Log
WHERE log_text LIKE '%<node>value</node>%'
OPTION(FAST 1)
This should optimise the query to return the first row.
I've got an SQL Express database I need to extract some data from. I have three fields: ID, NAME, DATA. In the DATA column there are values like "654;654;526". Yes, semicolons included. Those numbers relate to another table (two fields: ID and NAME); the numbers in the DATA column relate to the ID field in the second table. How can I, via SQL, do a replace or lookup so that instead of getting the numbers 654;653;526 I get the NAME values instead?
See the photo. Might explain this better
http://i.stack.imgur.com/g1OCj.jpg
Redesign the database unless this is a third-party database you are supporting. This will never be a good design and should never have been built this way. This is one of those times you bite the bullet and fix it before things get worse, which they will. You need a related table to store the values in. One of the very first rules of database design is never to store more than one piece of information in a field.
And hopefully those aren't your real field names; they are atrocious too. You need more descriptive field names.
If it is a third-party database, you need to look up a split function or create your own. You will want to transform the data into relational form in a temp table or table variable to use in the join later, as sketched below.
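As a sketch of that transform-then-join approach (assumes SQL Server 2016+ for STRING_SPLIT; on older versions you would need a hand-rolled split function; main_table and lookup_table are hypothetical names):

SELECT m.ID, m.NAME, l.NAME AS referenced_name
FROM main_table m
CROSS APPLY STRING_SPLIT(m.DATA, ';') s
JOIN lookup_table l ON l.ID = CAST(s.value AS INT);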
The following may help: How to use GROUP BY to concatenate strings in SQL Server?
This can be done, but it won't be nice. You should create a scalar-valued function that takes in the string of IDs and returns a string of names; a sketch follows.
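A minimal sketch of such a function (assumes SQL Server 2017+ for STRING_SPLIT and STRING_AGG; lookup_table and main_table are hypothetical names):

CREATE FUNCTION dbo.IdListToNames (@ids VARCHAR(MAX))
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @names VARCHAR(MAX);
    -- split '654;653;526' into rows, look each ID up, glue the names back together
    SELECT @names = STRING_AGG(l.NAME, ';')
    FROM STRING_SPLIT(@ids, ';') s
    JOIN lookup_table l ON l.ID = CAST(s.value AS INT);
    RETURN @names;
END

-- usage: SELECT ID, dbo.IdListToNames(DATA) AS names FROM main_table;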
This denormalized structure is similar to the way values were stored in the quasi-object-relational database known as PICK. Cool database, in many respects ahead of its time, though in other respects, a dinosaur.
If you want to return the multiple names as a delimited string, it's easy to do with a scalar function. If you want to return the multiple rows as a table, your engine has to support table-valued functions.
I'm reading CJ Date's SQL and Relational Theory: How to Write Accurate SQL Code, and he makes the case that positional queries are bad — for example, this INSERT:
INSERT INTO t VALUES (1, 2, 3)
Instead, you should use attribute-based queries like this:
INSERT INTO t (one, two, three) VALUES (1, 2, 3)
Now, I understand that the first query is out of line with the relational model since tuples (rows) are unordered sets of attributes (columns). I'm having trouble understanding where the harm is in the first query. Can someone explain this to me?
The first query breaks pretty much any time the table schema changes. The second query accommodates any schema change that leaves its columns intact and doesn't add columns without defaults.
People who do SELECT * queries and then rely on positional notation for extracting the values they're concerned about are software maintenance supervillains for the same reason.
While the order of columns is defined in the schema, it should generally not be regarded as important because it's not conceptually important.
Also, it means that anyone reading the first version has to consult the schema to find out what the values are meant to mean. Admittedly this is just like using positional arguments in most programming languages, but somehow SQL feels slightly different in this respect - I'd certainly understand the second version much more easily (assuming the column names are sensible).
I don't really care about theoretical concepts in this regard (as in practice, a table does have a defined column order). The primary reason I would prefer the second one to the first is an added layer of abstraction. You can modify columns in a table without screwing up your queries.
You should try to make your SQL queries depend on the exact layout of the table as little as possible.
The first query relies on the table only having three fields, and in that exact order. Any change at all to the table will break the query.
The second query only relies on those three fields existing in the table, and the order of the fields is irrelevant. You can change the order of fields in the table without breaking the query, and you can even add fields, as long as they allow null values or have a default value.
Although you don't rearrange the table layout very often, adding more fields to a table is quite common.
Also, the second query is more readable. You can tell from the query itself what the values put in the record means.
Something that hasn't been mentioned yet is that you will often have a surrogate key as your PK, with auto_increment (or something similar) assigning the value. With the first form, you'd have to specify something there, but what value can you specify if it isn't to be used? NULL might be an option, but that doesn't really fit, considering the PK would be set to NOT NULL.
But apart from that, the whole "locked to a specific schema" is a much more important reason, IMO.
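To make the surrogate-key point concrete (a sketch; AUTO_INCREMENT is the MySQL spelling, SQL Server uses IDENTITY and Oracle uses GENERATED AS IDENTITY):

CREATE TABLE t (
    id    INT AUTO_INCREMENT PRIMARY KEY,
    one   INT,
    two   INT,
    three INT
);

INSERT INTO t (one, two, three) VALUES (1, 2, 3);  -- id is assigned automatically
-- INSERT INTO t VALUES (1, 2, 3);                 -- fails: three values for four columns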
SQL gives you syntax for specifying the name of the column for both INSERT and SELECT statements. You should use this because:
Your queries are stable to changes in the column ordering, so that maintenance takes less work.
Naming the columns maps better to how people think, so it's more readable. It's clearer to think of a column as the "Name" column rather than as the 2nd column.
I prefer to use the UPDATE-like syntax (MySQL-specific):
INSERT t SET one = 1, two = 2, three = 3
Which is far easier to read and maintain than either of the examples above.
Long term, if you add one more column to your table, your INSERT will not work unless you explicitly specify the list of columns. And if someone changes the order of the columns, your INSERT may silently succeed, inserting values into the wrong columns.
I'm going to add one more thing: the second query is less error-prone even before tables are changed. Why do I say that? Because with the second form you can (and should, when you write the query) visually check that the columns in the insert table and the data in the VALUES clause or SELECT clause are in fact in the right order to begin with. Otherwise you may end up putting the Social Security Number in the Honoraria field by accident and paying speakers their SSN instead of the amount they should make for a speech (example not chosen at random, except we did catch it before it actually happened, thanks to that visual check!).
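For instance (hypothetical table and columns), reading each value off against its column name makes a swap easy to catch during review:

INSERT INTO speakers (speaker_name, ssn, honorarium)
VALUES ('Jane Doe', '123-45-6789', 500.00);
-- with the positional form, swapping ssn and honorarium would sail through unnoticed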