I have a logger table with timestamp, tagname, and tagvalue fields.
Every time a tag value changes, the control system writes a record to the table with those 3 fields.
Timestamps for the records are not synchronized.
I want to run a pivot query that returns the data for 3 specific tags, showing the values of only those 3 tags.
When I run the query below, I get back a dataset with every timestamp record in the table and lots of null values in the value fields (SQL Server returns all timestamp values).
I use the query:
SELECT *
FROM (
    SELECT [timestamp],
           [_VAL],
           [point_id]
    FROM DATA_LOG
) p
PIVOT (
    SUM([_VAL]) FOR point_id IN ([GG02.PV_CURNT],
                                 [GG02.PV_JACKT],
                                 [GG02.PV_SPEED],
                                 [GG02.PV_TEMP])
) AS tagvalue
ORDER BY [timestamp] ASC
Can anybody help me limit the timestamps SQL Server returns to only those relevant to these tags, rather than every timestamp value in the table? (The returned list should include a record only when at least one of the tag values is not null.)
If anybody has other ideas that don't use a PIVOT query to get the data in the format shown above, any idea is welcome.
I think you simply want:
WHERE [GG02.PV_CURNT] IS NOT NULL OR
      [GG02.PV_JACKT] IS NOT NULL OR
      [GG02.PV_SPEED] IS NOT NULL OR
      [GG02.PV_TEMP] IS NOT NULL
in the outer query, after the PIVOT alias and before the ORDER BY; the pivoted columns only exist there, not inside the subquery.
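For clarity, here is the whole query with that filter applied (a sketch based on the query in the question):
SELECT *
FROM (
    SELECT [timestamp], [_VAL], [point_id]
    FROM DATA_LOG
) p
PIVOT (
    SUM([_VAL]) FOR point_id IN ([GG02.PV_CURNT],
                                 [GG02.PV_JACKT],
                                 [GG02.PV_SPEED],
                                 [GG02.PV_TEMP])
) AS tagvalue
WHERE [GG02.PV_CURNT] IS NOT NULL
   OR [GG02.PV_JACKT] IS NOT NULL
   OR [GG02.PV_SPEED] IS NOT NULL
   OR [GG02.PV_TEMP] IS NOT NULL
ORDER BY [timestamp] ASC
And since you asked for non-PIVOT ideas: conditional aggregation produces the same shape, and filtering on point_id up front means only timestamps where at least one of the tags logged a value appear at all (a sketch, same table and tag names as above):
SELECT [timestamp],
       SUM(CASE WHEN point_id = 'GG02.PV_CURNT' THEN [_VAL] END) AS [GG02.PV_CURNT],
       SUM(CASE WHEN point_id = 'GG02.PV_JACKT' THEN [_VAL] END) AS [GG02.PV_JACKT],
       SUM(CASE WHEN point_id = 'GG02.PV_SPEED' THEN [_VAL] END) AS [GG02.PV_SPEED],
       SUM(CASE WHEN point_id = 'GG02.PV_TEMP'  THEN [_VAL] END) AS [GG02.PV_TEMP]
FROM DATA_LOG
WHERE point_id IN ('GG02.PV_CURNT', 'GG02.PV_JACKT', 'GG02.PV_SPEED', 'GG02.PV_TEMP')
GROUP BY [timestamp]
ORDER BY [timestamp] ASC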
I have a data warehouse query that builds a fact table by joining 14 source tables. Each source table has a source_timestamp field indicating the time the record was inserted or updated. For each row of the query result, I need to pull the max source_timestamp across the 14 source tables. This will let me know the latest update date for each row of the fact table.
I wanted to do something like this for the last field in the query...
(
SELECT MAX(Source_Timestamp)
FROM (
VALUES a.source_timestamp, b.source_timestamp, c.source_timestamp, ...
) AS UpdateDate(Source_Timestamp)
) AS LastUpdateDate
However, I get an incorrect syntax error because the subquery doesn't know a., b., or c. in the query context. I was hoping the VALUES clause would help me out but apparently not.
Any ideas on how to accomplish this?
It was my fault for not being more careful with the coding. I should've guessed from the fact that it was a syntax error. I needed to enclose each item in the VALUES clause in parentheses, like:
(
SELECT MAX(Source_Timestamp)
FROM (
VALUES (a.source_timestamp), (b.source_timestamp), (c.source_timestamp), (...)
) AS UpdateDate(Source_Timestamp)
) AS LastUpdateDate
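In context, the corrected row constructor looks something like this (a minimal sketch; TableA, TableB, and some_key are hypothetical stand-ins for the real 14-table join):
-- TableA, TableB, and some_key are hypothetical names for illustration
SELECT a.some_key,
       (SELECT MAX(Source_Timestamp)
        FROM (VALUES (a.source_timestamp),
                     (b.source_timestamp)) AS UpdateDate(Source_Timestamp)
       ) AS LastUpdateDate
FROM dbo.TableA AS a
JOIN dbo.TableB AS b ON b.some_key = a.some_key;
Because each parenthesized item is now a one-column row, the VALUES table constructor builds a derived table that MAX can aggregate over.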
I have a table in a SQL Server database with a column that contains a JSON object. I filter on that column in the WHERE clause with an IS NOT NULL condition, using the OPENJSON, JSON_VALUE, and JSON_QUERY functions. The query runs successfully, but it takes too long to return: 7 seconds for only 4 rows. What happens, then, when the table has thousands of rows?
Here is the query I'm using on the table:
SELECT TOP (1000)
[Id], JSON_Value(objectJson,'$.Details.Name.Value') AS objectValue
FROM
[geodb].[dbo].[userDetails]
WHERE
JSON_QUERY(jsonData,'$."1bf1548c-3703-88de-108e-bf7c4578c912"') IS NOT NULL
So, how can I optimize the above query so that it takes less time?
I would suggest altering the table:
ALTER TABLE dbo.Table
ADD Value AS JSON_VALUE(JsonData, '$.Details.Name.Value');
then creating a nonclustered index on the computed column:
CREATE NONCLUSTERED INDEX IX_ParsedValue ON dbo.Table (Value);
This will speed up the query.
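Once the computed column exists, reference it instead of re-parsing the JSON per row, so the index can be used. A sketch, assuming the table from the question and the Value column added above (note that the question's WHERE clause used JSON_QUERY against a different column and path; JSON_QUERY returns nvarchar(max), which can't be an index key, so index the JSON_VALUE expression you actually filter on):
SELECT TOP (1000) [Id], [Value] AS objectValue
FROM [geodb].[dbo].[userDetails]
WHERE [Value] IS NOT NULL;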
I am using a SELECT query to insert data into a temp table. In the SELECT query I order by two columns, something like this:
insert into #temp
Select accnt_no, acct_name, start_date, end_date From table
Order by start_date DESC, end_date DESC
Select * from #temp
Here, when there is an entry in the start_date field and a NULL in the end_date field, Sybase fills the end_date with a default date (Jan 1 1900) during the ORDER BY operation. I don't want that to happen: if the end_date field is NULL, the data should be written as NULL. Any suggestion on how to keep it NULL while fetching the data from the table?
The 1/1/1900 usually comes from trying to cast an empty string into a datetime.
Is your 'date' source column an actual datetime datatype or a string-ish varchar or char?
Sounds like the table definition requires that end_date not be null, and has default values inserted automatically to prevent nulls. Are you sure there are even nulls when you do a straight select on the table, without the confusion of ordering and inserting?
I created a temp table with a nullable datetime column that has a default value.
Defaults are not there to handle nulls per se, they are there to handle missing values on inserts that were not supplied. If I run an insert without a column list (just as you have done) the default value does not apply and a null is still inserted.
I suggest adding the column list to your insert statement. This might prevent the problem (or expose a different problem in having them in the wrong order.)
insert into #temp (accnt_no, acct_name, start_date, end_date)
select accnt_no, acct_name, start_date, end_date from ...
Here's a query that should help you find the actual defaults on any of the columns if you don't have access to the create script:
select c.name, object_name(cdefault), text
from tempdb..syscolumns c, tempdb..syscomments cm
where c.id = object_id('tempdb..#temp') and cm.id = c.cdefault
I have a table that I fill dynamically with data I want to compute statistics for. One column contains values following a certain pattern, so I created an additional column where I map those values to other values so I can group them.
Now, before I run my statistics, I need to check whether I have to remap these values, which means checking whether there are null values in that column.
I can do a select like this:
select distinct 1
from my_table t
where t.status_rd is not null
;
The disadvantage is that this returns at most one row, but it has to perform a full scan. Is there some way to stop the select at the first match? I'm not interested in the exact row, because when there is at least one row I have to run an update on all of them anyway, but I would like to avoid running the update unnecessarily every time.
In Oracle I would do it with rownum, but this doesn't exist in SQLite
select 1
from my_table t
where t.status_rd is not null
and rownum <= 1
;
Use LIMIT 1 to select the first row returned:
SELECT 1
FROM my_table t
WHERE t.status_rd IS NULL
LIMIT 1
Note: I changed the where clause from IS NOT NULL to IS NULL based on your problem description. This may or may not be correct.
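Equivalently, you can wrap the probe in EXISTS so the statement yields a single 0/1 flag and still stops at the first match (a sketch against the same table):
SELECT EXISTS (
    SELECT 1
    FROM my_table t
    WHERE t.status_rd IS NULL
) AS needs_remap;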
I would like to find the columns in a table that have null values in them.
Is there a system table with that information?
To find columns where NULL values are allowed, try...
select *
from dbc.columns
where databasename = 'your_db_name'
and tablename = 'your_table_name'
and Nullable = 'Y'
Then, to identify the specific rows with NULL values, take the ColumnName from the previous result set and run queries against those columns; perhaps throw the rows into a volatile table if you want to take further action on them (update, delete). A sketch for generating these probes follows the example below.
-- for example you found out that column "foo" is nullable...
create volatile table isnull_foo_col
as
(
sel *
from your_table_name
where foo is null
) with data
on commit preserve rows;
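If many columns turn out to be nullable, you can generate one probe per column from the dictionary instead of writing them by hand. A sketch (your_db_name and your_table_name are placeholders, as above); run the generated statements to get a NULL count per column:
select 'select ''' || trim(ColumnName) || ''' as col_name, count(*) as null_cnt from your_db_name.your_table_name where ' || trim(ColumnName) || ' is null;' as probe_sql
from dbc.columns
where databasename = 'your_db_name'
  and tablename = 'your_table_name'
  and Nullable = 'Y';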
If you have statistics collected on the column you can use the views found here for Teradata 12.0.03+ and Teradata 13.0.02+ to determine the number of records in the table that have NULL values.
In Teradata 14, if you use the SHOW STATISTICS with the VALUES clause you will get similar information generated by the views listed at the link above.
You can use the DBC.Columns data dictionary view to determine what columns in a particular table are nullable.