SQL Server - Truncate Using DATALENGTH

Is there a way to truncate an nvarchar using DATALENGTH? I am trying to create an index on a column, but an index only accepts a maximum of 900 bytes. I have rows that consist of 1000+ bytes. I would like to truncate these rows and only accept the first n characters <= 900 bytes.

This SQL may be useful; just update the table for that column. Since the column is nvarchar (2 bytes per character), truncate to 450 characters to stay within 900 bytes:
UPDATE MyTable
SET MyColumn = LEFT(LTRIM(MyColumn), 450)
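To touch only the oversized rows, DATALENGTH (which the question asks about) can drive the filter; a minimal sketch using the same placeholder names:
UPDATE MyTable
SET MyColumn = LEFT(LTRIM(MyColumn), 450)
WHERE DATALENGTH(MyColumn) > 900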

Create a COMPUTED COLUMN that represents the data you want to index, then create an index on it. Again, because the column is nvarchar, cap it at 450 characters so the key fits within the 900-byte limit.
ALTER TABLE MyTable ADD ComputedColumn AS LEFT(LargeNVarcharColumn, 450);
CREATE NONCLUSTERED INDEX MyIndex ON MyTable
(
    ComputedColumn ASC
);
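Queries must reference the computed column (or repeat the exact same expression) for the index to be considered; a minimal sketch with the names above:
SELECT ComputedColumn
FROM MyTable
WHERE ComputedColumn = N'search value';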
Reference:
computed_column_definition
Indexes on Computed Columns

Trim the column (from the left or the right) to 450 characters, then create an index on that column:
ALTER TABLE usertable ADD used_column AS LEFT(nvarcharcolumn, 450);
Create an index on this computed column; it will work.

Related

How to replace all the NULL values in PostgreSQL?

I found a similar question and solution for SQL Server. I want to replace all my NULL values with zero or empty strings. I cannot use a plain UPDATE statement because my table has 255 columns, and writing an UPDATE for every column would take a lot of time.
Can anyone suggest how to update all the NULL values in all columns at once in PostgreSQL?
If you want to replace the data on the fly while selecting the rows you need:
SELECT COALESCE(maybe_null_column, 0)
If you want the change to be saved in the table, you need to use an UPDATE. If you have a lot of rows, you can use a tool like pg-batch.
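For a single numeric column, the UPDATE is straightforward (table and column names here are placeholders):
UPDATE my_table
SET maybe_null_column = 0
WHERE maybe_null_column IS NULL;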
You can also create a new table and then swap the old one and the new one:
-- Create a new table with the updated values
-- (aliases preserve the original column names)
CREATE TABLE new_table AS
SELECT COALESCE(maybe_null_column, 0) AS maybe_null_column,
       COALESCE(maybe_null_column2, '') AS maybe_null_column2
FROM my_table;

-- Swap the tables
ALTER TABLE my_table RENAME TO obsolete_table;
ALTER TABLE new_table RENAME TO my_table;
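To cover all 255 columns without writing each UPDATE by hand, the statements can be generated from the catalog. A sketch, assuming the table is named my_table and only numeric columns should be set to 0:
DO $$
DECLARE
    col record;
BEGIN
    FOR col IN
        SELECT column_name
        FROM information_schema.columns
        WHERE table_name = 'my_table'
          AND data_type IN ('integer', 'bigint', 'numeric')
    LOOP
        -- format's %I safely quotes the column identifier
        EXECUTE format('UPDATE my_table SET %I = 0 WHERE %I IS NULL',
                       col.column_name, col.column_name);
    END LOOP;
END $$;
The same loop with a text-type filter and '' instead of 0 handles the string columns.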

SQL Server : query with IS NOT NULL in the WHERE clause on a column containing a JSON object is taking too long

I have a table in a SQL Server database in which a column contains a JSON object. I filter on that column in the WHERE clause with IS NOT NULL, using the OPENJSON, JSON_VALUE, and JSON_QUERY functions. The query runs successfully, but it takes too long to return results: 7 seconds for only 4 rows.
What will happen when the table has thousands of rows?
Here is the query I'm using:
SELECT TOP (1000)
    [Id],
    JSON_VALUE(jsonData, '$.Details.Name.Value') AS objectValue
FROM [geodb].[dbo].[userDetails]
WHERE JSON_QUERY(jsonData, '$."1bf1548c-3703-88de-108e-bf7c4578c912"') IS NOT NULL
So, how to optimize above query so that it takes less time?
I would suggest altering the table:
ALTER TABLE dbo.Table
ADD Value AS JSON_VALUE(JsonData, '$.Details.Name.Value');
then creating a nonclustered index on the computed column:
CREATE NONCLUSTERED INDEX IX_ParsedValue ON dbo.Table (Value)
This will speed up the query.
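Applied to the question's table, the pattern looks like this (a sketch; column and index names are illustrative, and the same approach, with a second computed column, works for the JSON_QUERY path used in the WHERE clause):
ALTER TABLE [geodb].[dbo].[userDetails]
ADD objectValue AS JSON_VALUE(jsonData, '$.Details.Name.Value');

CREATE NONCLUSTERED INDEX IX_objectValue
ON [geodb].[dbo].[userDetails] (objectValue);

SELECT TOP (1000) [Id], objectValue
FROM [geodb].[dbo].[userDetails]
WHERE objectValue IS NOT NULL;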

SQL Index on changed column

Is it possible to create an index on a column based on a transformed version of that column?
For example, I have a column A (nvarchar), but in the query I have to transform the values of this column to compare them with the values in a list. A classical index will only work if I use the original values from column A.
The query looks like this:
SELECT *
FROM MyTable
WHERE REPLACE(A, ' ', '') IN ('aasa', 'asa', 'wew', 'wewe')
You can create a computed column and then create an index on it.
Note: SQL Server index key columns have a 900-byte size limit. Since your column is NVARCHAR, it consumes 2 bytes per character, so let's cap the index at 400 characters (800 bytes).
To conserve space, we can further restrict this column to contain a value only if it meets your required conditions.
ALTER TABLE MyTable
ADD A_NoSpace AS (CASE WHEN REPLACE(A, ' ', '') IN ('aasa', 'asa', 'wew','wewe') THEN LEFT(REPLACE(A, ' ', ''), 400) END) PERSISTED
CREATE NONCLUSTERED INDEX IX_A_NoSpace ON MyTable (A_NoSpace)
Because the column is PERSISTED, the calculation takes place only during INSERT and UPDATE operations.
You can now use this column in your query:
SELECT *
FROM MyTable
-- No need to check whether the value is in the list:
-- the computed column definition already does that
WHERE A_NoSpace IS NOT NULL

Create an INT index on a VARCHAR column

I have a unique design where I will need to store all data as VARCHAR. I can't go into details why. I would like to index some fields as a different data type. Is this possible? If so, are there any gotchas in doing this? What is the syntax, if it's possible?
I will be using both SQL Server and PostgresQL for this project.
In PostgreSQL, you can create functional indexes ("indexes on expressions"), which occupy less storage than creating redundant columns.
CREATE INDEX tbl_intasvarchar_idx ON tbl (cast(intasvarchar AS int));
Keep in mind that queries have to match the expression to allow the use of such an index. Like:
SELECT *
FROM tbl
WHERE intasvarchar::int = 123;
(The shorthand :: cast syntax works just as well as cast().)
Of course, all varchar values must be valid to cast to int; if that's the case, the superior approach, in any RDBMS, is to change the column type to integer to begin with.
PostgreSQL:
Create a function-based index like so:
create index int_index on tbl (cast(cast(num_as_string as decimal) as integer));
Fiddle: http://sqlfiddle.com/#!15/d0f46/1/0
Later, when you run a query such as:
select *
from tbl
where cast(cast(num_as_string as decimal) as integer) = 12
The index will be used, because the index is on the result of that function applied to the column, rather than the column itself.
SQL Server:
In SQL Server you can add a computed column and index that computed column like so:
create table tbl (num_as_string varchar(10));
insert into tbl (num_as_string) values ('12.3');
alter table tbl add num_as_string_int as cast(cast(num_as_string as decimal) as integer);
create index int_index on tbl (num_as_string_int);
Then query against num_as_string_int to use the index.
Fiddle: http://sqlfiddle.com/#!6/1f378/2/0
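For example, mirroring the PostgreSQL query above (same hypothetical table):
select *
from tbl
where num_as_string_int = 12;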

Conditional Index in DB2 database

Is it possible to create a conditional index in DB2?
The same as in Oracle:
CREATE UNIQUE INDEX my_index ON my_table (
CASE WHEN my_column = 1
THEN indexed_column
ELSE NULL
END);
or in MSSQL:
CREATE UNIQUE INDEX my_index
ON my_table (indexed_column) WHERE my_column = 1
Thanks :)
This looks like a contrived example, but I don't see much benefit in an index that contains indexed_column values only where my_column = 1 and NULL everywhere else.
Expression-based indexes are supported beginning with DB2 LUW 10.5. If you are unable to upgrade, you can simulate the behaviour using a computed column (which is what Oracle does behind the scenes anyway).
The uniqueness is also checked during the execution of the CREATE
INDEX statement. If the table already contains rows with duplicate key
values, the index is not created.
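A sketch of the generated-column workaround (names are hypothetical; EXCLUDE NULL KEYS, which keeps NULL keys out of the uniqueness check, is assumed available as in DB2 LUW 10.5+ — without it, a unique index may reject multiple NULL rows):
ALTER TABLE my_table
  ADD COLUMN indexed_if_one GENERATED ALWAYS AS
    (CASE WHEN my_column = 1 THEN indexed_column END);

-- populate the new column for existing rows (may be required)
SET INTEGRITY FOR my_table IMMEDIATE CHECKED FORCE GENERATED;

CREATE UNIQUE INDEX my_index
  ON my_table (indexed_if_one) EXCLUDE NULL KEYS;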