Using UPDATE statement with a SELECT statement converting hexadecimal to text - sql

I have a table called Script_Data with three columns: ScriptID (primary key), RowOrder and ScriptData. Each row value for ScriptData is hexadecimal. To make sense of it, I convert it to text by CASTing the ScriptData column to varchar with the following query:
SELECT ScriptID, RowOrder, CAST(CAST(ScriptData AS varbinary(MAX)) AS varchar(MAX)) AS Converted_SD
FROM Script_Data
Is it possible to UPDATE values in the ScriptData column based on the converted value? I know that I would typically do something like this if not for the conversion:
UPDATE Script_Data
SET ScriptData='Sales'
WHERE ScriptData='Marketing';
Is it even possible to do something like this when I have it converted from hex to text? I've tried so many different queries, most of which include subqueries, but all failed.
Converting it changes this:
| ScriptID | RowOrder | ScriptData  |
-------------------------------------
| 5008     | 1        | 0x435669787 |
to this (I'm oversimplifying the results):
| ScriptID | RowOrder | ScriptData |
------------------------------------
| 5008     | 1        | Sales      |
EDIT:
My best attempt seems to have been this query
UPDATE Script_Data
SET ScriptData='Engineering'
(SELECT ScriptID, RowOrder, CONVERT(varchar(max), ScriptData)
FROM Script_Data
WHERE ScriptData = 'Accounting')
But SQL Server tells me "Implicit conversion from data type varchar to varbinary(max) is not allowed. Use the CONVERT function to run this query." I've tried using CONVERT in creative ways to satisfy the error, but have not been successful. The ScriptData column is varbinary with a length of -1 (i.e. varbinary(MAX)).

It seems you need to cast the new value to varbinary as part of the update.
UPDATE Script_Data SET
ScriptData = CAST('Engineering' AS VARBINARY(MAX))
WHERE CAST(ScriptData AS VARCHAR(MAX)) = 'Accounting'
I won't ask why you are storing strings as varbinary because I'm sure you realise life would be much easier if you just stored it as a varchar.
Here is the test script I used:
declare @ScriptData table (ScriptData varbinary(max));
insert into @ScriptData (ScriptData)
values (0x435669787), (convert(varbinary(max),'Sales'));
select *, convert(varchar(max),ScriptData,3), CAST(ScriptData AS varchar(MAX)) from @ScriptData;
update @ScriptData set
ScriptData = CAST('Marketing' AS VARBINARY(MAX))
where CAST(ScriptData AS varchar(MAX)) = 'Sales';
select *, convert(varchar(max),ScriptData,3), CAST(ScriptData AS varchar(MAX)) from @ScriptData;

For your SELECT query, the analogous UPDATE places the conversion in the SET clause, assigning the value to a new or different column, not the same column.
UPDATE is a DML (data manipulation language) command, not DDL (data definition language). Hence, it only adjusts data and does not change a column's defined data type. Consider an ALTER TABLE command to create a new VARCHAR(MAX) column, then run an UPDATE to assign the value:
ALTER TABLE Script_Data ADD ScriptData_Text VARCHAR(MAX);
UPDATE Script_Data
SET ScriptData_Text = CAST(CAST(ScriptData AS varbinary(MAX)) AS varchar(MAX));
Also, since ALTER is a DDL command, use it very sparingly and never in application code or stored procedures, since it changes the table schema and column definitions.


SQL - Replace a particular part of column string value (between second and third slash)

In my SQL Server DB, I have a table called Documents with the following columns:
ID - INT
DocLocation - NTEXT
DocLocation has values in following format:
'\\fileShare1234\storage\ab\xyz.ext'
Now it seems these documents are stored in multiple file share paths.
We're planning to migrate all documents to one single file share, say 'newFileShare', while maintaining the internal folder structure.
So basically '\\fileShare1234\storage\ab\xyz.ext' should be updated to '\\newFileShare\storage\ab\xyz.ext'
Two questions:
How do I query my DocLocation to extract DocLocations with unique file share values? Like 'fileShare1234' and 'fileShare6789' and so on..
In a single Update query how do I update my DocLocation values to newFileShare ('\\fileShare1234\storage\ab\xyz.ext' to '\\newFileShare\storage\ab\xyz.ext')
I think the trick would be to extract and replace the text between the second and third slashes.
I've still not figured out how to achieve my first objective. I require those unique file shares for some other tasks.
As for the second objective, I've tried using REPLACE, but it will require multiple update statements, like I've done below:
update Documents set DocLocation = REPLACE(Cast(DocLocation as NVarchar(Max)), '\\fileShare1234\', '\\newFileShare\')
The first step is fairly easy. If all your paths begin with \\, then you can find all the DISTINCT servers using SUBSTRING. I will make a simple script with a table variable to replicate some data. The value 3 in the query is the length of \\ plus 1, since SQL Server counts string positions from 1.
DECLARE @Documents AS TABLE(
ID INT NOT NULL,
DocLocation NTEXT NOT NULL
);
INSERT INTO @Documents(ID, DocLocation)
VALUES (1,'\\fileShare56789\storage\ab\xyz.ext'),
(2,'\\fileShare1234\storage\ab\cd\xyz.ext'),
(3,'\\share4567890\w\x\y\z\file.ext');
SELECT DISTINCT SUBSTRING(DocLocation, 3, CHARINDEX('\', DocLocation, 3) - 3) AS [Server]
FROM @Documents;
The results from this are:
Server
fileShare1234
fileShare56789
share4567890
For the second part, we can just concatenate the new server name with the path that appears after the leading \\.
UPDATE @Documents
SET DocLocation = CONCAT('\\newfileshare\',
SUBSTRING(DocLocation, 3, LEN(CAST(DocLocation AS nvarchar(max))) - 2));
SELECT * FROM @Documents;
For some reason I cannot format a results table here, but the values I see are these:
\\newfileshare\fileShare56789\storage\ab\xyz.ext
\\newfileshare\fileShare1234\storage\ab\cd\xyz.ext
\\newfileshare\share4567890\w\x\y\z\file.ext
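Note that this output keeps the original share name in the path. If the old share name should be dropped entirely (as the question's desired result suggests), a variant along these lines might work - a sketch against the same sample table, not tested on real data:
UPDATE @Documents
SET DocLocation = CONCAT('\\newfileshare',
SUBSTRING(CAST(DocLocation AS nvarchar(max)),
CHARINDEX('\', CAST(DocLocation AS nvarchar(max)), 3),
LEN(CAST(DocLocation AS nvarchar(max)))));
-- CHARINDEX(..., 3) finds the backslash that ends the old server name, so the SUBSTRING
-- keeps '\storage\ab\xyz.ext' and only the share prefix changes.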
Please try the following solution based on XML and XQuery.
The XML data model is based on ordered sequences, which is exactly what we need while processing a fully qualified file path: [position() ge 4].
When you are comfortable with it, run an UPDATE statement that sets the DocLocation column to the calculated result (a sketch follows the output below).
It is better to use NVARCHAR(MAX) instead of the NTEXT data type.
SQL
-- DDL and sample data population, start
DECLARE @tbl AS TABLE(ID INT IDENTITY PRIMARY KEY, DocLocation NVARCHAR(MAX));
INSERT INTO @tbl(DocLocation) VALUES
('\\fileShare56789\storage\ab\xyz.ext'),
('\\fileShare1234\storage\ab\cd\xyz.ext'),
('\\share4567890\w\x\y\z\file.ext');
-- DDL and sample data population, end
DECLARE @separator CHAR(1) = '\'
, @newFileShare NVARCHAR(100) = 'newFileShare';
SELECT ID, DocLocation
, result = '\\' + @newFileShare + @separator +
REPLACE(c.query('data(/root/r[position() ge 4]/text())').value('text()[1]', 'NVARCHAR(MAX)'), SPACE(1), @separator)
FROM @tbl
CROSS APPLY (SELECT TRY_CAST('<root><r><![CDATA[' +
REPLACE(DocLocation, @separator, ']]></r><r><![CDATA[') +
']]></r></root>' AS XML)) AS t(c);
Output
+----+---------------------------------------+--------------------------------------+
| ID | DocLocation | result |
+----+---------------------------------------+--------------------------------------+
| 1 | \\fileShare56789\storage\ab\xyz.ext | \\newFileShare\storage\ab\xyz.ext |
| 2 | \\fileShare1234\storage\ab\cd\xyz.ext | \\newFileShare\storage\ab\cd\xyz.ext |
| 3 | \\share4567890\w\x\y\z\file.ext | \\newFileShare\w\x\y\z\file.ext |
+----+---------------------------------------+--------------------------------------+
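The corresponding UPDATE is not shown above; a sketch of it, reusing the same CROSS APPLY expression (verify on a copy of the data first):
UPDATE t
SET DocLocation = '\\' + @newFileShare + @separator +
REPLACE(c.query('data(/root/r[position() ge 4]/text())').value('text()[1]', 'NVARCHAR(MAX)'), SPACE(1), @separator)
FROM @tbl AS t
CROSS APPLY (SELECT TRY_CAST('<root><r><![CDATA[' +
REPLACE(DocLocation, @separator, ']]></r><r><![CDATA[') +
']]></r></root>' AS XML)) AS x(c);
-- For the real Documents table, swap @tbl for Documents; the DocLocation column would
-- need to be NVARCHAR(MAX) (not NTEXT) for the string handling above.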
To get the unique list of shared folder paths, you can use this query:
SELECT distinct SUBSTRING(DocLocation,0,CHARINDEX('\',DocLocation,3))
from Documents
And your update command should work. Yes, you can merge a couple of REPLACE updates, but it's better to run them separately:
update Documents
set DocLocation = REPLACE(Cast(DocLocation as NVarchar(Max)),'\\fileShare1234','\\newFileShare')
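If you did want to fold several shares into one statement, nested REPLACE calls are one option (a sketch; 'fileShare6789' is just an example name taken from the question):
update Documents
set DocLocation = REPLACE(REPLACE(Cast(DocLocation as NVarchar(Max)),
'\\fileShare1234', '\\newFileShare'),
'\\fileShare6789', '\\newFileShare')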
But I recommend you always record a relative address instead of the full path, like: \storage\ab\xyz.ext

Ukrainian character change to question mark when insert to table

I have a file saved as Unicode text containing Ukrainian characters, and it got loaded successfully into a staging table using SSIS.
Like this:
"Колодки тормозные дисковые, комплект"
Колодки тормозные
"Колодки тормозные дисковые, комплект"
This is Test
But when I move it to the other table it changes to:
"??????? ????????? ????????, ????????"
??????? ?????????
"??????? ????????? ????????, ????????"
This is Test
The query I used:
insert into finaltable
(
column1
)
select column1 from staging table.
Collation: Latin1_General_CI_AS
How can I rectify this error?
Here you can see the difference between the VARCHAR and NVARCHAR data types:
DECLARE @Non_Unicode_Var VARCHAR (MAX) = 'Колодки тормозные дисковые, комплект';
DECLARE @Unicode_Var NVARCHAR (MAX) = N'Колодки тормозные дисковые, комплект';
SELECT @Non_Unicode_Var AS NonUnicodeColumn, @Unicode_Var AS UnicodeColumn;
Result:
+--------------------------------------+--------------------------------------+
| NonUnicodeColumn                     | UnicodeColumn                        |
+--------------------------------------+--------------------------------------+
| ??????? ????????? ????????, ???????? | Колодки тормозные дисковые, комплект |
+--------------------------------------+--------------------------------------+
So, you need to change the column's data type to NVARCHAR, then insert your data into the table.
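For example, something along these lines - finaltable and column1 come from the question, while stagingtable is a placeholder for your actual staging table name:
ALTER TABLE finaltable ALTER COLUMN column1 NVARCHAR(MAX);
-- rows already stored as '?' cannot be recovered; reload them from the staging table
INSERT INTO finaltable (column1)
SELECT column1 FROM stagingtable;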
Use NVARCHAR in your table, and when you type your strings in the INSERT statement put N in front, like N'your string'. Also consider changing your collation due to sorting issues; refer to this question.

SQL UPDATE specific characters in string

I have a column with the following values (there are a lot more):
20150223-001
20150224-002
20150225-003
I need to write an UPDATE statement which will change the first 2 characters after the dash to 'AB'. Result has to be the following:
20150223-AB1
20150224-AB2
20150225-AB3
Could anyone assist me with this?
Thanks in advance.
Use this,
DECLARE @MyString VARCHAR(30) = '20150223-0000000001'
SELECT STUFF(@MyString,CHARINDEX('-',@MyString)+1,2,'AB')
If there is a lot of data, you could consider using the .WRITE clause, but it is limited to the VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) data types.
If you have one of those column types, the .WRITE clause is easiest for this purpose; example below:
UPDATE Codes
SET val.WRITE('AB',9,2)
GO
Other possible choice could be simple REPLACE:
UPDATE Codes
SET val=REPLACE(val,SUBSTRING(val,10,2),'AB')
GO
or STUFF:
UPDATE Codes
SET val=STUFF(val,10,2,'AB')
GO
I based this on the information that there are always 8 characters of date followed by one dash in the column. I prepared a table and checked some of the solutions mentioned here.
CREATE TABLE Codes(val NVARCHAR(MAX))
INSERT INTO Codes
SELECT TOP 500000 CONVERT(NVARCHAR(128),GETDATE()-CHECKSUM(NEWID())%1000,112)+'-00'+CAST(ABS(CAST(CHECKSUM(NEWID())%10000 AS INT)) AS NVARCHAR(128))
FROM sys.columns s1 CROSS JOIN sys.columns s2
I ran some tests, and based on 10kk rows with an NVARCHAR(MAX) column, I got the following results:
+---------+------------+
| Method | Time |
+---------+------------+
| .WRITE | 28 seconds |
| REPLACE | 30 seconds |
| STUFF | 15 seconds |
+---------+------------+
As we can see, STUFF looks like the best option for updating part of a string. .WRITE should be considered when you insert or append new data into a string; then you can take advantage of minimal logging if the database recovery model is set to bulk-logged or simple. See the MSDN article about the UPDATE statement: Updating Large Value Data Types.
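As an aside, the append case looks like this - a small sketch against the same Codes table, not part of the benchmark above: a NULL offset makes .WRITE append to the end of the existing value.
UPDATE Codes
SET val.WRITE(N'-X', NULL, 0)   -- NULL offset = append; the length argument is then ignored
GO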
According to the OP's comment:
"It's always 8 characters before the dash but the characters after the dash can vary. It has to update the first two after the dash."
Use the next simple code:
DECLARE @MyString VARCHAR(30) = '20150223-0000000001'
SELECT REPLACE(@MyString,SUBSTRING(@MyString,9,3),'-AB')
Result:-
20150223-AB00000001
try,
update table set column=stuff(column,charindex('-',column)+1,2,'AB')
Declare @Table1 TABLE (DateValue Varchar(50))
INSERT INTO @Table1
SELECT '20150223-000000001' Union all
SELECT '20150224-000000002' Union all
SELECT '20150225-000000003'
SELECT DateValue,
CONCAT(SUBSTRING(DateValue,0,CHARINDEX('-',DateValue)),
REPLACE(LEFT(SUBSTRING(DateValue,CHARINDEX('-',DateValue)+1,Len(DateValue)),2),'00','-AB'),
SUBSTRING(DateValue,CHARINDEX('-',DateValue)+1,Len(DateValue))) AS ExpectedDateValue
FROM @Table1
Output
DateValue ExpectedDateValue
---------------------------------------------
20150223-000000001 20150223-AB000000001
20150224-000000002 20150224-AB000000002
20150225-000000003 20150225-AB000000003
To Update
Update @Table1
SEt DateValue= CONCAT(SUBSTRING(DateValue,0,CHARINDEX('-',DateValue)),
REPLACE(LEFT(SUBSTRING(DateValue,CHARINDEX('-',DateValue)+1,Len(DateValue)),2),'00','-AB'),
SUBSTRING(DateValue,CHARINDEX('-',DateValue)+1,Len(DateValue)))
From @Table1
SELECT * from @Table1
Output
DateValue
-------------
20150223-AB000000001
20150224-AB000000002
20150225-AB000000003

Execute table valued function from row values

Given a table as below, where fn contains the name of an existing table-valued function and param contains the parameter to be passed to that function:
fn | param
----------------
'fn_one' | 1001
'fn_two' | 1001
'fn_one' | 1002
'fn_two' | 1002
Is there a way to get a resulting table like this by using set-based operations?
The resulting table would contain 0-* lines for each line from the first table.
param | resultval
---------------------------
1001 | 'fn_one_result_a'
1001 | 'fn_one_result_b'
1001 | 'fn_two_result_one'
1002 | 'fn_two_result_one'
I thought I could do something like (pseudo)
select t1.param, t2.resultval
from table1 t1
cross join exec sp_executesql('select * from '+t1.fn+'('+t1.param+')') t2
but that gives a syntax error at exec sp_executesql.
Currently we're using cursors to loop through the first table and insert into a second table with exec sp_executesql. While this does the job correctly, it is also the heaviest part of a frequently used stored procedure and I'm trying to optimize it. Changes to the data model would probably imply changes to most of the core of the application, and that would cost more than just throwing hardware at SQL Server.
I believe that this should do what you need, using dynamic SQL to generate a single statement that can give you your results and then using that with EXEC to put them into your table. The FOR XML trick is a common one for concatenating VARCHAR values together from multiple rows. It has to be written with the AS [text()] for it to work.
--=========================================================
-- Set up
--=========================================================
CREATE TABLE dbo.TestTableFunctions (function_name VARCHAR(50) NOT NULL, parameter VARCHAR(20) NOT NULL)
INSERT INTO dbo.TestTableFunctions (function_name, parameter)
VALUES ('fn_one', '1001'), ('fn_two', '1001'), ('fn_one', '1002'), ('fn_two', '1002')
CREATE TABLE dbo.TestTableFunctionsResults (function_name VARCHAR(50) NOT NULL, parameter VARCHAR(20) NOT NULL, result VARCHAR(200) NOT NULL)
GO
CREATE FUNCTION dbo.fn_one
(
@parameter VARCHAR(20)
)
RETURNS TABLE
AS
RETURN
SELECT 'fn_one_' + @parameter AS result
GO
CREATE FUNCTION dbo.fn_two
(
@parameter VARCHAR(20)
)
RETURNS TABLE
AS
RETURN
SELECT 'fn_two_' + @parameter AS result
GO
--=========================================================
-- The important stuff
--=========================================================
DECLARE @sql VARCHAR(MAX)
SELECT @sql =
(
SELECT 'SELECT ''' + T1.function_name + ''', ''' + T1.parameter + ''', F.result FROM ' + T1.function_name + '(' + T1.parameter + ') F UNION ALL ' AS [text()]
FROM
TestTableFunctions T1
FOR XML PATH ('')
)
SELECT @sql = SUBSTRING(@sql, 1, LEN(@sql) - 10)
INSERT INTO dbo.TestTableFunctionsResults
EXEC(@sql)
SELECT * FROM dbo.TestTableFunctionsResults
--=========================================================
-- Clean up
--=========================================================
DROP TABLE dbo.TestTableFunctions
DROP TABLE dbo.TestTableFunctionsResults
DROP FUNCTION dbo.fn_one
DROP FUNCTION dbo.fn_two
GO
The first SELECT statement (ignoring the setup) builds a string which has the syntax to run all of the functions in your table, returning the results all UNIONed together. That makes it possible to run the string with EXEC, which means that you can then INSERT those results into your table.
A couple of quick notes though... First, the functions must all return identical result set structures - the same number of columns with the same data types (technically, they might be able to be different data types if SQL Server can always do implicit conversions on them, but it's really not worth the risk). Second, if someone were able to update your functions table they could use SQL injection to wreak havoc on your system. You'll need that to be tightly controlled, and I wouldn't let users just enter function names, etc.
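One way to reduce that injection surface (my addition, not part of the original approach) is to QUOTENAME the function name and only build SQL for names that actually resolve to table-valued functions, for example:
SELECT 'SELECT ''' + T1.function_name + ''', ''' + T1.parameter + ''', F.result FROM dbo.'
+ QUOTENAME(T1.function_name) + '(''' + REPLACE(T1.parameter, '''', '''''') + ''') F UNION ALL ' AS [text()]
FROM TestTableFunctions T1
WHERE OBJECT_ID('dbo.' + T1.function_name, 'IF') IS NOT NULL   -- 'IF' = inline table-valued function; use 'TF' for multi-statement ones
FOR XML PATH ('')
This is still only a sketch, not a complete defense; tightly controlling write access to the functions table matters more.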
You cannot access objects by referencing names stored as data in a SQL statement. One method would be to use a CASE expression:
select t1.*,
(case when fn = 'fn_one' then dbo.fn_one(t1.param)
when fn = 'fn_two' then dbo.fn_two(t1.param)
end) as resultval
from table1 t1 ;
Interestingly, you could encapsulate the case as another function, and then do:
select t1.*, dbo.fn_generic(t1.fn, t1.param) as resultval
from table1 t1 ;
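A sketch of that wrapper, assuming fn_one and fn_two are scalar functions as they are used in the CASE expression above (the other answer's setup defines them as table-valued, so adapt accordingly):
CREATE FUNCTION dbo.fn_generic (@fn VARCHAR(50), @param VARCHAR(20))
RETURNS VARCHAR(200)
AS
BEGIN
    RETURN (CASE WHEN @fn = 'fn_one' THEN dbo.fn_one(@param)
                 WHEN @fn = 'fn_two' THEN dbo.fn_two(@param)
            END);
END;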
However, in SQL Server, you cannot use dynamic SQL in a user-defined function (defined in T-SQL), so you would still need to use case or similar logic.
Either of these methods is likely to be much faster than a cursor, because they do not require issuing multiple queries.

creating a SQL table with multiple columns automatically

I must create a SQL table with 90+ fields, the majority of them bit fields like N01, N02, N03 ... N89, N90. Is there a fast way of creating multiple fields, or is it possible to have one single field contain an array of true/false values? I need a solution that can also easily be queried.
There is no easy way to do this, and it will be very challenging to query such a table. Create a table with three columns - item number, bit field number and a value field. Then you will be able to write 'good' succinct T-SQL queries against the table.
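A sketch of that layout (the names here are placeholders, not from the question):
CREATE TABLE dbo.ItemFlags (
    ItemNumber  INT     NOT NULL,
    FieldNumber TINYINT NOT NULL,   -- 1..90 instead of columns N01..N90
    FlagValue   BIT     NOT NULL,
    CONSTRAINT PK_ItemFlags PRIMARY KEY (ItemNumber, FieldNumber)
);
-- e.g. "which items have flag 17 set?"
-- SELECT ItemNumber FROM dbo.ItemFlags WHERE FieldNumber = 17 AND FlagValue = 1;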
At least you can generate ALTER TABLE scripts for bit fields, and then run those scripts.
DECLARE @COUNTER INT = 1
WHILE @COUNTER < 10
BEGIN
PRINT 'ALTER TABLE table_name ADD N' + RIGHT('00' + CONVERT(NVARCHAR(4), @COUNTER), 2) + ' bit'
SET @COUNTER += 1
END
TLDR: Use binary arithmetic.
For a structure like this
==============
Table_Original
==============
Id | N01| N02 |...
I would recommend an alternate table structure like this
==============
Table_Alternate
==============
Id | One_Col
This One_Col is of varchar type and will have its value set as:
cast(n01 as nvarchar(1)) + cast(n02 as nvarchar(1)) + cast(n03 as nvarchar(1)) as One_Col
However, I feel that you'd use C# or some other programming language to set the value into the column. You can also use bit and bit-shift operations.
Whenever you need to get a value, you can use SQL or C# syntax (treating it as a string).
In SQL query terms you can use a query like:
SELECT SUBSTRING(one_col,@pos,1)
and @pos can be set like
DECLARE @ColName nvarchar(4)
SET @ColName=N'N32'
-- ....
SET @pos= CAST(REPLACE(@ColName,'N','') as INT)
Also you can use binary arithmetic too with ease in any programming language.
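A small sketch of the bit-mask idea (the names are mine): pack up to 63 flags into a BIGINT and set or test them with the | and & operators.
DECLARE @Flags BIGINT = 0;
DECLARE @MaskN05 BIGINT = 16;   -- 2^4, the mask for flag N05 (bit 4, zero-based)
SET @Flags = @Flags | @MaskN05;                                        -- set N05
SELECT CASE WHEN @Flags & @MaskN05 <> 0 THEN 1 ELSE 0 END AS N05;      -- test N05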
Use three columns.
Table
ID NUMBER,
FIELD_NAME VARCHAR2(10),
VALUE NUMBER(1)
Example
ID FIELD VALUE
1 N01 1
1 N02 0
.
1 N90 1
.
2 N01 0
2 N02 1
.
2 N90 1
.
You can also OR an entire column for a field name (or several field names):
select DECODE(SUM(VALUE), 0, 0, 1) from table where field_name = 'N01';
And even perform an AND
select EXP(SUM(LN(VALUE))) from table where field_name = 'N01';
(see http://viralpatel.net/blogs/row-data-multiplication-in-oracle/)
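Note that DECODE is an Oracle function (and the EXP/SUM/LN trick comes from that Oracle-oriented article); in T-SQL the same column-wide OR and AND can be written with MAX and MIN over the 0/1 values (a sketch; the table name is assumed):
SELECT MAX([VALUE]) AS AnyBitSet,   -- OR across rows
MIN([VALUE]) AS AllBitsSet          -- AND across rows
FROM FlagTable
WHERE FIELD_NAME = 'N01';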