I have quite a bulky database table and I want to sort the data based on its ID (primary key). The data in the key column could be:
001/2011,
002/2011,
001/2012
When I use 'order by id' it sorts the rows like
001/2011,
001/2012,
002/2011
However, what I am looking for is
001/2011,
002/2011,
001/2012
The data type of the id column is varchar(50). Is there a special SQL function that I should use to sort this kind of data?
ORDER BY RIGHT(ID,4)+LEFT(ID,3)
This rearranges the varchar data so that the year comes first and the sequence/month/day-of-year comes after.
If your data has some other format, then think along the same lines: shift the string around using SUBSTRING, of which LEFT and RIGHT are just two specific cases.
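For example, a minimal sketch of the full query, assuming a hypothetical SQL Server table dbo.Documents with the varchar ID column described above:

SELECT ID
FROM dbo.Documents                        -- hypothetical table name
ORDER BY RIGHT(ID, 4) + LEFT(ID, 3);      -- '001/2011' becomes '2011001': year first, then sequence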
First of all, thank you for taking the time to read the below.
I have the following table:
As-Is table; the data type of Level is int
I need to transform it into a table like the one below:
To-Be table, with the levels grouped into text
The idea is to group a numeric column into text per unique ID.
Is this achievable at all?
Note: the number of levels is subject to growth, so is it possible to come up with SQL that accommodates the increasing levels without any hardcoding?
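One way to do this kind of grouping without hardcoding the number of levels is STRING_AGG (available in SQL Server 2017+). A minimal sketch, assuming a hypothetical table dbo.YourTable with columns UniqueID and Level:

SELECT UniqueID,
       STRING_AGG(CAST(Level AS varchar(10)), ',')
           WITHIN GROUP (ORDER BY Level) AS GroupedLevels   -- e.g. '1,2,3'
FROM dbo.YourTable
GROUP BY UniqueID;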
I have a fact table that gets updated daily with customer time on app info from a third-party platform that we use, and the identifying number has a bit of text appended to it. So if the customer ID number is 123, this table is getting populated with something like ABC_123. I need to pull this info for a particular cohort of customers based on their ID numbers, so was planning to create a temp table with the customer ID number and the time on app, and drop the appended bit of text. I so far have not had luck finding a way to split the text in that column using the "_" as a delimiter, and I'm hesitant to use a wildcard. Any advice?
Seems like it would be better to add a PERSISTED computed column to the table. Then you have both the original data and the column you want, and you can index the PERSISTED column too.
ALTER TABLE dbo.YourTable ADD GoodID AS CONVERT(int,STUFF(BadID, 1, CHARINDEX('_',BadID),'')) PERSISTED;
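For reference, a minimal sketch of what the computed expression does, using the example value from the question (the table and column names above are placeholders for your own):

SELECT STUFF('ABC_123', 1, CHARINDEX('_', 'ABC_123'), '') AS Stripped;
-- removes everything up to and including the '_', returning '123'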
I have two tables in Oracle as per below and need to map data between these two tables, either in a stored procedure or in C# code for .NET Core. Either will work for me.
The first table contains data in key-value form, where the "Key" is the ID of the second table and the value is the actual data required.
First table:
ID Data
1 {"f100000":["02/02/2012"],"f100001":["01/04/2013"]}
So, "f100000", "f100001"... etc are the keys and ID for Second table
Second table has simple data with ID and Name
ID Name
f100000 Name of the field
f100001 Name of the field2
I would expect the result to be as per below:
Key Value
Name of the field 02/02/2012
Name of the field2 01/04/2013
This is not going to happen in the current form without extra code that is redundant and inefficient, when the database design could be improved instead.
Can the design of table 1 not be improved, i.e. have separate columns for the f###### values and for the corresponding dates?
That way the f###### values can be indexed so that joins between the two tables run efficiently.
This amendment would need to be made in the code which inserts records into the table.
If not, you would have to:
Select rows from table1.
Split the string into an array based on the ',' character.
Split each of these array values into another two-dimensional array based on the ':' character while looping through the first array.
Strip out the '[', ']' and '"' characters from the date field so that it can be parsed.
While looping through, SELECT from table 2 where table_2.id equals the second array's [0] value.
Print out the results line by line.
As this would need to be done each time the code is run, it is very inefficient. Much better to redesign table 1 and add the logic to insert as required.
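A minimal sketch of such a redesign, using hypothetical Oracle table and column names, could look like this:

-- one row per key/value pair instead of a JSON blob
CREATE TABLE table1_redesigned (
    id        NUMBER,
    field_key VARCHAR2(20),   -- e.g. 'f100000', now joinable and indexable
    field_val DATE
);

CREATE INDEX ix_table1_field_key ON table1_redesigned (field_key);

-- the mapping then becomes a plain join against the second table
SELECT t2.name      AS field_name,
       t1.field_val AS field_value
FROM   table1_redesigned t1
JOIN   table2 t2 ON t2.id = t1.field_key;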
In SQL, in any given table, there is a column named "name" with data type text.
If there are ten entries, suppose one entry in the column is "rohit". I want to show all the entries in the name column after "rohit", and I do not know the row id. Can it be done?
select * from your_table where name > 'rohit'
But in general you should not treat text columns like that.
A database is more than a collection of tables.
Think about how to organize your data and what defines a data row.
Maybe, besides their name, there is another way you would classify such a row: something like "shall be displayed?", "is modified", "is active"?
So if you had a second column, say DISPLAY of type int, and your table looked like
CREATE TABLE MYDATA (
    NAME TEXT,
    DISPLAY INT NOT NULL DEFAULT(1)
);
you could flag every row with 1 or 0 depending on whether it should be displayed or not, and then your query could look like
SELECT * FROM MYDATA WHERE DISPLAY=1 ORDER BY NAME
to get your list of values.
It doesn't make much of a difference with ten rows, and you don't even need indexes here, but if you build something bigger, say 10,000+ rows, you'd be surprised how slow that would become!
In general, TEXT columns are fine to select and display, but should be avoided in a WHERE condition as much as you can. Use describing columns, preferably int fields, which can be indexed with extremely high efficiency, so the application doesn't get slower even when the table grows past 100k records.
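A minimal sketch of the kind of index this alludes to, assuming the MYDATA table above:

-- an index on the int flag keeps the DISPLAY = 1 filter fast as the table grows
CREATE INDEX IX_MYDATA_DISPLAY ON MYDATA (DISPLAY);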
You can use the "default" keyword for it.
CREATE TABLE Persons (
ID int NOT NULL,
name varchar(255) DEFAULT 'rohit'
);
I have an indexed (nonclustered) string column (let's just call it 'Identifier') on a table with the following row values:
`0000001`
`0000245`
`001`
`AB0001`
I want to be able to efficiently return all the rows that have an Identifier ending with a certain number entered by the user. For example, when the user enters 1 then the following rows should be returned:
0000001
001
AB0001
The problem is that using WHERE Identifier LIKE CONCAT(N'%', @UserInput) uses an index scan, which doesn't scale well, since the table has tons of rows in it (many millions).
How should I efficiently query this data? My first thought is to add a new column that holds the REVERSE() of the Identifier column, and then use WHERE ReversedIdentifier LIKE CONCAT(REVERSE(@UserInput), N'%') to find the matches using a "starts with" search.
This doesn't seem like the cleanest solution, but it's all I can think of at the moment. Is there a better way?
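For reference, a minimal sketch of that reversed-column idea, using hypothetical table and column names, would be a persisted computed column plus an index:

ALTER TABLE dbo.YourTable
    ADD ReversedIdentifier AS REVERSE(Identifier) PERSISTED;

CREATE INDEX IX_YourTable_ReversedIdentifier ON dbo.YourTable (ReversedIdentifier);

-- the trailing-wildcard search can now seek on the index;
-- keep the parameter the same type as the column so no implicit conversion gets in the way
DECLARE @UserInput varchar(50) = '1';
SELECT *
FROM dbo.YourTable
WHERE ReversedIdentifier LIKE CONCAT(REVERSE(@UserInput), '%');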
If you have a column that holds just the number component, make that column a numeric type, and use it in an index, that would be a lot faster.
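A minimal sketch of that suggestion, assuming the identifiers always end in a block of digits and that "ends with a number" means the numeric component matches (table and column names are hypothetical):

-- extract the trailing digits and store them as an indexed int
ALTER TABLE dbo.YourTable
    ADD NumberPart AS CONVERT(int,
        RIGHT(Identifier, PATINDEX('%[^0-9]%', REVERSE(Identifier) + 'X') - 1)) PERSISTED;

CREATE INDEX IX_YourTable_NumberPart ON dbo.YourTable (NumberPart);

-- '0000001', '001' and 'AB0001' all get NumberPart = 1, so an equality seek
-- replaces the leading-wildcard LIKE (rows with no trailing digits end up as 0)
DECLARE @UserInput int = 1;
SELECT *
FROM dbo.YourTable
WHERE NumberPart = @UserInput;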