SQL command to update (replace) a fixed text value by another one

I have to update a value in one field of a table (t1).
Current table t1 records:
| POLNAME | VALUE  |
|---------|--------|
| TEST_01 | Normal |
| TEST_02 | High   |
| TEST_03 | Normal |
| TEST_04 | Low    |
| TEST_05 | Low**  |
New table t1 records expected after the update:
| POLNAME | VALUE  |
|---------|--------|
| REST_01 | Normal |
| REST_02 | High   |
| REST_03 | Normal |
| REST_04 | Low    |
| REST_05 | Low**  |
I need to replace, in the POLNAME field of t1, the fixed value 'TEST_' with 'REST_' for all records of the table. I can do it one by one with separate UPDATE statements, but my goal is to update all records with a single SQL command.

You can use the REPLACE SQL function:
UPDATE t1 SET POLNAME = REPLACE(POLNAME, 'TEST', 'REST');
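Note that REPLACE rewrites every occurrence of the search string anywhere in the value. A slightly more defensive sketch (standard SQL) limits the update to rows that actually start with the prefix; since _ is a single-character wildcard in LIKE, it has to be escaped:
UPDATE t1
SET POLNAME = REPLACE(POLNAME, 'TEST_', 'REST_')
WHERE POLNAME LIKE 'TEST!_%' ESCAPE '!';  -- only rows beginning with TEST_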

Related

SQL - Given sequence of data, how do I query the origin?

Let's assume we have the following data.
| UUID  | SEENTIME            | LAST_SEENTIME       |
|-------|---------------------|---------------------|
| UUID1 | 2020-11-10T05:00:00 |                     |
| UUID2 | 2020-11-10T05:01:00 | 2020-11-10T05:00:00 |
| UUID3 | 2020-11-10T05:03:00 | 2020-11-10T05:01:00 |
| UUID4 | 2020-11-10T05:04:00 | 2020-11-10T05:03:00 |
| UUID5 | 2020-11-10T05:07:00 | 2020-11-10T05:04:00 |
| UUID6 | 2020-11-10T05:08:00 | 2020-11-10T05:07:00 |
Each record is linked to the previous one via LAST_SEENTIME. In such a case, is there a way to use SQL to identify these connected events as one? I want to be able to find the start and end so I can calculate the duration of the whole event.
You can use a recursive CTE. The exact syntax varies by database, but something like this:
with recursive cte as (
      select uuid as orig_uuid, uuid, seentime
      from t
      where last_seentime is null
      union all
      select cte.orig_uuid, t.uuid, t.seentime
      from cte join
           t
           on cte.seentime = t.last_seentime
     )
select orig_uuid,
       max(seentime) - min(seentime)  -- or whatever your database uses
from cte
group by orig_uuid;
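For instance, in MySQL 8+ (a sketch; this assumes SEENTIME is a DATETIME column), the final SELECT above could compute the duration in seconds with TIMESTAMPDIFF:
select orig_uuid,
       timestampdiff(second, min(seentime), max(seentime)) as duration_seconds
from cte
group by orig_uuid;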

SQL - Query to determine if any NULLs in a column

I need to create a query to determine if any records in a table contain NULL in the EXPORTED column. If so, it will execute a stored procedure. I tried searching and could not find anything that checks just one specific column. Below is an example of my table.
+--------+-----------+----------+
| SAMPLE | DATE      | EXPORTED |
+--------+-----------+----------+
| S1234  | 9/17/2019 | NULL     |
| S1435  | 9/17/2019 | NULL     |
| S1536  | 9/17/2019 | YES      |
+--------+-----------+----------+
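A minimal sketch of one approach, assuming SQL Server (the table name Samples and the procedure name dbo.ExportProc are hypothetical stand-ins): an EXISTS test stops at the first matching row, so it stays cheap even on large tables.
-- run the procedure only if at least one row has a NULL EXPORTED value
IF EXISTS (SELECT 1 FROM Samples WHERE EXPORTED IS NULL)
    EXEC dbo.ExportProc;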

Oracle: comparing the column value with previous records

I have an Oracle table which is being loaded by a function: whenever it finds "LOW_MEMORY" in best_status, it adds systimestamp to the low_mem_timestamp column.
+-----------+-------------------+--------+---------------------------------+
| device_id | best_status       | job_id | low_mem_timestamp               |
+-----------+-------------------+--------+---------------------------------+
| 715016    | OPERATION_FAILURE | 511008 | (null)                          |
| 715009    | LOW_MEMORY        | 511008 | 10-MAY-17 11.13.22.143122000 AM |
| 715014    | DOWNLOAD_COMPLETE | 740004 | (null)                          |
| 941015    | LOW_MEMORY        | 740004 | 10-MAY-17 11.13.22.143122000 AM |
+-----------+-------------------+--------+---------------------------------+
After this I have another table where I want to record the changes from the table above, whenever low_mem_timestamp changes for any device_id:
if it had a timestamp and got updated to null, it should record "1"
if it had a null value and got updated to a timestamp, it should record "0"
Conditions:
device_id='715009': best_status moved from "LOW_MEMORY" to "UPDATE_DEFERRED" and low_mem_timestamp got updated to null, so low_mem_toggle should be "1"
device_id='715014': best_status moved from "DOWNLOAD_COMPLETE" to "LOW_MEMORY" and low_mem_timestamp got updated to some timestamp, so low_mem_toggle should be "0"
device_id='941015': best_status remains the same, it was not updated, so low_mem_toggle should be "NA"
Then in my final table the output should look like:
+-----------+-----------------+--------+----------------+
| device_id | best_status     | job_id | low_mem_toggle |
+-----------+-----------------+--------+----------------+
| 715009    | UPDATE_DEFERRED | 511008 | 1              |
| 715014    | LOW_MEMORY      | 740004 | 0              |
| 941015    | LOW_MEMORY      | 740004 | NA             |
+-----------+-----------------+--------+----------------+
Please suggest an SQL query to implement this functionality.
Thanks in advance.
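One possible sketch, assuming you keep a history table that stores one row per change (the table name device_status_hist and its change_time column are hypothetical): the LAG analytic function can then compare each row's low_mem_timestamp with the previous one for the same device_id.
SELECT device_id, best_status, job_id,
       CASE
         WHEN prev_ts IS NOT NULL AND low_mem_timestamp IS NULL THEN '1'   -- timestamp cleared
         WHEN prev_ts IS NULL AND low_mem_timestamp IS NOT NULL THEN '0'   -- timestamp set
         ELSE 'NA'                                                         -- no change
       END AS low_mem_toggle
FROM (
  SELECT h.device_id, h.best_status, h.job_id, h.low_mem_timestamp,
         LAG(h.low_mem_timestamp) OVER (PARTITION BY h.device_id
                                        ORDER BY h.change_time) AS prev_ts
  FROM device_status_hist h
);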

Unique string table in SQL and replacing index values with string values during query

I'm working on an old SQL Server database that has several tables that look like the following:
|-------------|-----------|-------|------------|------------|-----|
| MachineName | AlarmName | Event | AlarmValue | SampleTime | ... |
|-------------|-----------|-------|------------|------------|-----|
| 3           | 180       | 8     | 6.780      | 2014-02-24 |     |
| 9           | 67        | 8     | 1.45       | 2014-02-25 |     |
| ...         |           |       |            |            |     |
|-------------|-----------|-------|------------|------------|-----|
There is a separate table in the database that only contains unique strings, as well as the index for each unique string. The unique string table looks like this:
|-----|----------------|
| Id  | String         |
|-----|----------------|
| 3   | MyMachine      |
| ... |                |
| 8   | High CPU Usage |
| ... |                |
| 67  | 404 Error      |
| ... |                |
|-----|----------------|
Thus, when we want to get something out of the database, we get the respective rows out, then look up each missing string based on the index value.
What I'm hoping to do is to replace all of the string indexes with the actual values in a single query without having to do post-processing on the query result.
However, I can't figure out how to do this in a single query. Do I need to use multiple JOINs? I've only been able to figure out how to replace a single value by doing something like -
SELECT UniqueString.String AS "MachineName" FROM UniqueString
JOIN Alarm ON Alarm.MachineName = UniqueString.Id
Any help would be much appreciated!
Yes, you can do multiple joins to the UniqueString table, but change the order to start with the table you are reporting on, and use a unique alias for each join. Something like:
SELECT MN.String AS 'MachineName', AN.String AS 'AlarmName'
FROM Alarm A
JOIN UniqueString MN ON A.MachineName = MN.Id
JOIN UniqueString AN ON A.AlarmName = AN.Id
etc. for any other columns.
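A sketch extending the same pattern to the Event column as well, using LEFT JOINs so rows are not dropped if an Id is ever missing from UniqueString:
SELECT MN.String AS 'MachineName',
       AN.String AS 'AlarmName',
       EV.String AS 'Event',
       A.AlarmValue,
       A.SampleTime
FROM Alarm A
LEFT JOIN UniqueString MN ON A.MachineName = MN.Id  -- one alias per string column
LEFT JOIN UniqueString AN ON A.AlarmName   = AN.Id
LEFT JOIN UniqueString EV ON A.Event       = EV.Id;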

SQL LIKE question

I was wondering if there's a drawback (other than bad practice) to using something like this
SELECT * FROM my_table WHERE id LIKE '1';
where id is an integer. I know you're supposed to use id = 1, but I am writing a Java program, and if everything can use LIKE it will be a lot easier for me. Also, so far everything works fine; I get the correct query results, so if there is no drawback I will continue doing it this way.
Edit: I am using MySQL.
MySQL will allow it, but will ignore the index:
mysql> describe METADATA_44;
+---------+--------------+------+-----+---------+-------+
| Field   | Type         | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| AtextId | int(11)      | NO   | PRI | NULL    |       |
| num     | varchar(128) | YES  |     | NULL    |       |
| title   | varchar(128) | YES  |     | NULL    |       |
| file    | varchar(128) | YES  |     | NULL    |       |
| context | varchar(128) | YES  |     | NULL    |       |
| source  | varchar(128) | YES  |     | NULL    |       |
+---------+--------------+------+-----+---------+-------+
6 rows in set (0.00 sec)
mysql> explain select * from METADATA_44 where Atextid like '7';
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table       | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | METADATA_44 | ALL  | PRIMARY       | NULL | NULL    | NULL |  591 | Using where |
+----+-------------+-------------+------+---------------+------+---------+------+------+-------------+
mysql> explain select * from METADATA_44 where Atextid=7;
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table       | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | METADATA_44 | const | PRIMARY       | PRIMARY | 4       | const |    1 |       |
+----+-------------+-------------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)
You'd need to look at the query execution plan on your RDBMS to verify that LIKE with no wildcards is treated as efficiently as = would be. A quick test in SQL Server shows that it gives you an index scan rather than a seek, so I guess it doesn't consider this when generating the plan, and for SQL Server using = would be much more efficient. I don't have a MySQL install to test against.
Edit: just to update this, SQL Server seems to handle it fine and does a seek when the data type is varchar. When it is run against an int column, though, you get the scan, because it does an implicit conversion of the int column to varchar and so can't use the index.
You are better off writing your query as
SELECT * FROM my_table WHERE id = 1;
otherwise MySQL has to typecast '1' to int, which is the type of the column id, so there is a small performance penalty. When you know the type of the column, supply the value in that type.
Speed.
Without any wildcards, LIKE should be fine for your needs if speed/efficiency isn't a concern.