I'm learning Oracle SQL and tried to create a view, but I keep getting this error. I know it's because of the NULLs, but I still don't know how to fix it. Any advice is helpful. Thanks.
CREATE VIEW Produse_HP (model, categorie, viteza, ram, hd, ecran, culoare, tip, pret)
AS
SELECT *
FROM
(SELECT model, categorie, viteza, ram, hd, NULL, NULL, NULL, pret
FROM Produs
NATURAL JOIN PC
WHERE fabricant = 'HP'
UNION
SELECT model, categorie, viteza, ram, hd, ecran, NULL, NULL, pret
FROM Produs
NATURAL JOIN Laptop
WHERE fabricant = 'HP'
UNION
SELECT model, categorie, NULL, NULL, NULL, NULL, culoare, tip, pret
FROM Produs
NATURAL JOIN Imprimanta
WHERE fabricant = 'HP');
It's supposed to show those columns with SELECT [model, categorie, viteza, ram, hd, ecran, culoare, tip, pret]. I need it this way because I need it in an INSTEAD OF trigger, to insert values through this view.
You need aliases for the NULLs, at least in your first query, so that Oracle knows what to name the columns in the result:
SELECT
model, categorie, viteza, ram, hd,
NULL AS ecran, NULL AS culoare, NULL AS tip, pret
FROM Produs NATURAL JOIN PC
WHERE fabricant = 'HP'
UNION
...
In addition to Thorsten's answer, you may need to declare the datatype of your NULLs using a CAST() function, since Oracle requires an explicit length for character types in a CAST. Something like:
CAST(NULL AS VARCHAR2(30)) AS ecran, CAST(NULL AS DATE) AS culoare...
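Putting the two answers together, the full view might look like the sketch below. The CAST types are guesses (NUMBER for the numeric columns, VARCHAR2 for culoare and tip); adjust them to your actual column definitions. Note that the wrapping SELECT * FROM (...) subquery is no longer needed once each branch's columns are named:

```sql
CREATE VIEW Produse_HP (model, categorie, viteza, ram, hd, ecran, culoare, tip, pret)
AS
SELECT model, categorie, viteza, ram, hd,
       CAST(NULL AS NUMBER)       AS ecran,   -- guessed type
       CAST(NULL AS VARCHAR2(30)) AS culoare, -- guessed type
       CAST(NULL AS VARCHAR2(30)) AS tip,     -- guessed type
       pret
FROM Produs NATURAL JOIN PC
WHERE fabricant = 'HP'
UNION
SELECT model, categorie, viteza, ram, hd, ecran,
       CAST(NULL AS VARCHAR2(30)), CAST(NULL AS VARCHAR2(30)), pret
FROM Produs NATURAL JOIN Laptop
WHERE fabricant = 'HP'
UNION
SELECT model, categorie,
       CAST(NULL AS NUMBER), CAST(NULL AS NUMBER), CAST(NULL AS NUMBER),
       CAST(NULL AS NUMBER),
       culoare, tip, pret
FROM Produs NATURAL JOIN Imprimanta
WHERE fabricant = 'HP';
```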
I have a performance issue with the following script.
When it was first deployed, the script ran in a few seconds.
Nowadays it takes around 3 minutes.
I think the main reason is the TransactionSendQueue table, which has over 3 million rows at this moment. In the "ctetran" CTE, I need to find the latest record and compare it with the temp table.
I tried adding different indexes, but they didn't improve it and some even made it slower. Any suggestions on how to improve the performance?
WITH ctetran AS --the lastest transaction
(
SELECT
Tran_ID,
Field2,
Field3,
Field4,
Field5,
Field6,
Field7,
Field8,
Field9,
ROW_NUMBER() OVER (PARTITION BY Tran_ID
ORDER BY LastUpdate DESC) AS rn
FROM
TransactionSendQueue
WHERE
STATUS = '1' -- 1 means complete
)
UPDATE temp
SET STATUS = CASE
WHEN temp.f2 = cte.Field2
AND temp.f3 = cte.Field3
AND temp.f4 = cte.Field4
AND temp.f5 = cte.Field5
AND temp.f6 = cte.Field6
AND temp.f7 = cte.Field7
AND temp.f8 = cte.Field8
AND temp.f9 = cte.Field9
THEN '2' -- 2 means skip
ELSE '3' -- 3 means ready to execute
END
FROM #TempTran temp
INNER JOIN ctetran cte ON temp.Tran_ID = cte.Tran_ID
AND cte.rn = 1;
The table design:
CREATE TABLE [dbo].[TransactionSendQueue]
(
[Batch_ID] [CHAR](20) NOT NULL,
[Tran_ID] [VARCHAR](20) NOT NULL,
[Field2] [VARBINARY](100) NULL,
[Field3] [VARBINARY](100) NULL,
[Field4] [VARBINARY](100) NULL,
[Field5] [VARBINARY](100) NULL,
[Field6] [VARBINARY](100) NULL,
[Field7] [VARBINARY](100) NULL,
[Field8] [VARBINARY](100) NULL,
[Field9] [VARBINARY](100) NULL,
[LastUpdate] [DATETIME] NOT NULL,
[STATUS] [INTEGER] NOT NULL,
CONSTRAINT [PK_TransactionSendQueue]
PRIMARY KEY CLUSTERED([Batch_ID], [Tran_ID])
);
One core principle when writing SQL queries with multiple steps is: eliminate as much data as possible as early as possible. Your CTE loads all rows from TransactionSendQueue, when you only want the latest transaction per Tran_ID. The more data that's being handled, the higher the risk of intermediate data being spilled to disk, which is extremely detrimental to performance; the more data that's written to disk, the worse the impact. You can check your execution plan to see whether this is happening, but I'd say it's likely, considering the execution time.
The CTE should return only one row per row that could possibly be updated in your #TempTran table. You can use an additional CTE to retrieve the latest update time first, and then use that information in ctetran to reduce the number of rows being searched through in the UPDATE statement.
WITH LatestTran AS --the lastest transaction
(
SELECT
Tran_ID,
MAX(LastUpdate) AS LastUpdate
FROM
TransactionSendQueue
WHERE
STATUS = '1' -- 1 means complete
GROUP BY
Tran_ID
), ctetran AS
(
SELECT
Tran_ID,
Field2,
Field3,
Field4,
Field5,
Field6,
Field7,
Field8,
Field9
FROM
TransactionSendQueue TSQ
INNER JOIN LatestTran LT ON
TSQ.Tran_ID = LT.Tran_ID AND
TSQ.LastUpdate = LT.LastUpdate
)
UPDATE temp
SET STATUS = CASE
WHEN temp.f2 = cte.Field2
AND temp.f3 = cte.Field3
AND temp.f4 = cte.Field4
AND temp.f5 = cte.Field5
AND temp.f6 = cte.Field6
AND temp.f7 = cte.Field7
AND temp.f8 = cte.Field8
AND temp.f9 = cte.Field9
THEN '2' -- 2 means skip
ELSE '3' -- 3 means ready to execute
END
FROM #TempTran temp
INNER JOIN ctetran cte ON temp.Tran_ID = cte.Tran_ID
How big a performance increase this gives depends on how many Batch_ID values you have per Tran_ID; the more there are, the bigger the boost.
If the query still runs slowly, you could also look into adding an index on the LastUpdate column of TransactionSendQueue, since the query now uses that column in a join.
Please let me know how much the query time is reduced; it would be interesting to know.
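As a sketch of such an index, assuming SQL Server: the index name and INCLUDE list below are illustrative, not from the original post, and should be adapted to the columns your query actually reads. Keying on STATUS first also supports the STATUS filter (as an aside, STATUS is declared INTEGER, so comparing it to the number 1 rather than the string '1' is cleaner):

```sql
-- Illustrative index: supports the STATUS filter and the
-- Tran_ID/LastUpdate join, and carries the compared columns
-- so the update doesn't need extra lookups.
CREATE NONCLUSTERED INDEX IX_TSQ_Status_TranID_LastUpdate
ON dbo.TransactionSendQueue (STATUS, Tran_ID, LastUpdate)
INCLUDE (Field2, Field3, Field4, Field5, Field6, Field7, Field8, Field9);
```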
I need to find out why my GROUP BY count query is not working. I am using Microsoft SQL Server and there are two tables I am trying to join.
My query needs to return the number of transactions made for each type of vehicle. The output needs a separate row for each type of vehicle, such as ute, hatch, sedan, etc.
CREATE TABLE vehicle
(
vid INT PRIMARY KEY,
type VARCHAR(30) NOT NULL,
year SMALLINT NOT NULL,
price DECIMAL(10, 2) NOT NULL
);
INSERT INTO vehicle
VALUES (1, 'Sedan', 2020, 240)
CREATE TABLE purchase
(
pid INT PRIMARY KEY,
vid INT REFERENCES vehicle(vid),
pdate DATE NOT NULL,
datepickup DATE NOT NULL,
datereturn DATE NOT NULL
);
INSERT INTO purchase
VALUES (1, 1, '2020-07-12', '2020-08-21', '2020-08-23')
I have about 10 rows of information in each table; I just haven't written them out.
This is what I wrote, but it doesn't return the correct number of transactions for each type of car.
SELECT
vehicle.vid,
COUNT(purchase.pid) AS NumberOfTransactions
FROM
purchase
JOIN
vehicle ON vehicle.vid = purchase.pid
GROUP BY
vehicle.type;
Any help would be appreciated. Thanks.
There are two problems: your GROUP BY and SELECT columns are inconsistent (you select vid but group by type), and your join condition compares vehicle.vid to purchase.pid instead of purchase.vid. You should write the query like this:
SELECT v.type, COUNT(*) AS NumPurchases
FROM purchase p JOIN
     vehicle v
     ON v.vid = p.vid
GROUP BY v.type;
Note the use of table aliases so the query is easier to write and read.
If this doesn't produce the expected values, you will need to provide sample data and desired results to make it clear what the data really looks like and what you expect.
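For illustration, here are a few extra hypothetical rows (invented for this example, on top of the single inserts above) and what the query, once it joins on vid and groups by type, would return:

```sql
-- Hypothetical sample data for illustration only.
INSERT INTO vehicle VALUES (2, 'Ute', 2019, 300);
INSERT INTO vehicle VALUES (3, 'Sedan', 2021, 260);

INSERT INTO purchase VALUES (2, 2, '2020-07-15', '2020-08-01', '2020-08-03');
INSERT INTO purchase VALUES (3, 3, '2020-07-20', '2020-09-01', '2020-09-05');
INSERT INTO purchase VALUES (4, 1, '2020-08-01', '2020-08-10', '2020-08-12');

-- With vehicle 1 (Sedan) from the original insert, grouping by type gives:
--   Sedan  3   (purchases 1, 3 and 4)
--   Ute    1   (purchase 2)
```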
I'm working on an Oracle to PostgreSQL migration. The service uses MyBatis for the DB connection and has some complex queries that I couldn't convert, even with references.
Below is the table definition of which the query is to run.
CREATE TABLE TABLE1(
NOTIFY_ID bigint primary key not null,
TRANSACTION_ID numeric not null,
EVENT_TYPE character varying(64) not null,
EVENT_TIME timestamp(6) with time zone not null,
SOURCE_TRANSACTION_ID numeric,
QUANTITY numeric,
PROCESSED character varying(8) not null,
CURRENCY_CODE character varying(8),
COST numeric(12,2),
DESTINATION_LOCATION character varying(32),
SOURCE_LOCATION character varying(32),
CUSTOMER_COUNTRY_CODE character varying(2)
);
Here is the Oracle query which needs to be converted to PostgreSQL.
SELECT
TRANSACTION_ID,
EVENT_TYPE,
EVENT_TIME,
CUSTOMER_COUNTRY_CODE,
SOURCE_TRANSACTION_ID,
PROCESSED,
QUANTITY,
CURRENCY_CODE,
COST,
TABLE1.NOTIFY_ID,
DESTINATION_LOCATION,
SOURCE_LOCATION
FROM
(SELECT
NOTIFY_ID, RANK() OVER (ORDER BY TRANSACTION_ID, NOTIFY_ID) AS RANK
FROM
table1
WHERE
PROCESSED = 'FALSE' ) RANKED
INNER JOIN
TABLE1
ON RANKED.NOTIFY_ID = table1.NOTIFY_ID
WHERE
RANK <= 5
ORDER BY TRANSACTION_ID
FOR UPDATE OF TABLE1.NOTIFY_ID
SKIP LOCKED
When I run this query after removing the column name after FOR UPDATE, it says "FOR UPDATE is not allowed with window functions." I'm not sure how to convert this query to be PostgreSQL compatible.
Please help me convert this query.
Thanks,
Gokul.
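For what it's worth: in PostgreSQL, FOR UPDATE OF takes a table or alias name rather than a column, and a bare FOR UPDATE tries to lock every item in the FROM list, including the subquery containing the window function, which is what triggers the error. A sketch of a PostgreSQL-compatible version (untested; the rank column is renamed to rnk to avoid the keyword, and only the base table is locked):

```sql
SELECT t.TRANSACTION_ID, t.EVENT_TYPE, t.EVENT_TIME, t.CUSTOMER_COUNTRY_CODE,
       t.SOURCE_TRANSACTION_ID, t.PROCESSED, t.QUANTITY, t.CURRENCY_CODE,
       t.COST, t.NOTIFY_ID, t.DESTINATION_LOCATION, t.SOURCE_LOCATION
FROM (SELECT NOTIFY_ID,
             RANK() OVER (ORDER BY TRANSACTION_ID, NOTIFY_ID) AS rnk
      FROM table1
      WHERE PROCESSED = 'FALSE') ranked
INNER JOIN table1 t ON ranked.NOTIFY_ID = t.NOTIFY_ID
WHERE ranked.rnk <= 5
ORDER BY t.TRANSACTION_ID
FOR UPDATE OF t SKIP LOCKED;   -- lock only rows of t, not the subquery
```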
I've got a table that looks like this (I wasn't sure what all might be relevant, so I had Toad dump the whole structure).
CREATE TABLE [dbo].[TScore] (
[CustomerID] int NOT NULL,
[ApplNo] numeric(18, 0) NOT NULL,
[BScore] int NULL,
[OrigAmt] money NULL,
[MaxAmt] money NULL,
[DateCreated] datetime NULL,
[UserCreated] char(8) NULL,
[DateModified] datetime NULL,
[UserModified] char(8) NULL,
CONSTRAINT [PK_TScore]
PRIMARY KEY CLUSTERED ([CustomerID] ASC, [ApplNo] ASC)
);
When I run the following query (on a database with 3 million records in the TScore table) it takes about a second to run, even though if I just do: Select BScore from CustomerDB..TScore WHERE CustomerID = 12345, it is instant (and only returns 10 records). It seems like there should be some efficient way to get the Max(ApplNo) effect in a single query, but I'm a relative noob to SQL Server. I'm thinking I may need a separate key for ApplNo, but I'm not sure how clustered keys work.
SELECT BScore
FROM CustomerDB..TScore (NOLOCK)
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2 (NOLOCK)
WHERE sc2.CustomerID = 12345)
Thanks much for any tips (pointers on where to look for SQL Server optimization are appreciated as well).
When you filter by ApplNo, you are using only part of the key, and not the leftmost part. This means the index has to be scanned (look at all rows), not seeked (drill down to a row), to find the values.
If you are looking for ApplNo values for the same CustomerID:
Quick way: use the full clustered index:
SELECT BScore
FROM CustomerDB..TScore
WHERE ApplNo = (SELECT Max(ApplNo)
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345)
AND CustomerID = 12345
This can be changed into a JOIN
SELECT BScore
FROM
CustomerDB..TScore T1
JOIN
(SELECT Max(ApplNo) AS MaxApplNo, CustomerID
FROM CustomerDB..TScore sc2
WHERE sc2.CustomerID = 12345
) T2 ON T1.CustomerID = T2.CustomerID AND T1.ApplNo= T2.MaxApplNo
If you are looking for ApplNo values independent of CustomerID, then I'd look at a separate index. This matches the intent of your current code:
CREATE INDEX IX_ApplNo ON TScore (ApplNo) INCLUDE (BScore);
Reversing the key order won't help, because then your WHERE sc2.CustomerID = 12345 will scan, not seek.
Note: using NOLOCK everywhere is a bad practice; it allows dirty reads and can return inconsistent results.
I have two tables with attributes like date (datetime), headline (varchar), text (text).
Now I want to UNION ALL these two tables and sort by the datetime. When doing this I'm getting the error:
Only text pointers are allowed in work tables, never text, ntext, or image columns. The query processor produced a query plan that required a text, ntext, or image column in a work table.
After trying back and forth, I found out that it is the text attribute which is causing the error. But what to do? I tried casting to VARCHAR with no success. Both tables use the text format for the text attribute.
Also, when removing the ORDER BY, it all works fine. What should I do?
The original SQL query is below, but you can just reply to the simplified version above.
SELECT id, datetime, author, headline, intro, text, type, toppriority,
secondpriority, comments, companyid, '1' source
FROM Table1
UNION ALL
SELECT AutoID AS id, Dato AS datetime,
ID COLLATE SQL_Latin1_General_CP1_CI_AS AS author, NULL AS headline,
NULL AS intro, Notat COLLATE SQL_Latin1_General_CP1_CI_AS AS text,
CAST(NotatTypeID AS VARCHAR) AS type,
NULL AS toppriority, NULL AS secondpriority, NULL AS comments,
Selskabsnummer AS companyid, '2' source
FROM Table2
WHERE (NotatTypeID = '5') OR (NotatTypeID = '6')
ORDER BY datetime DESC
Thanks in advance
One way around this is to run the union as a subquery and order the results afterwards:
SELECT * FROM
(
SELECT id, datetime, author, headline, intro, text, TYPE, toppriority,
secondpriority, comments, companyid, '1' source
FROM Table1
UNION ALL
SELECT AutoID AS id, Dato AS datetime,
ID COLLATE SQL_Latin1_General_CP1_CI_AS AS author, NULL AS headline,
NULL AS intro, Notat COLLATE SQL_Latin1_General_CP1_CI_AS AS text,
CAST(NotatTypeID AS VARCHAR) AS TYPE,
NULL AS toppriority, NULL AS secondpriority, NULL AS comments,
Selskabsnummer AS companyid, '2' source
FROM Table2
WHERE (NotatTypeID = '5') OR (NotatTypeID = '6')
) a
ORDER BY datetime DESC
What about casting the datetime field to some text field in the index? Please note that using 'datetime' and 'text' as field/alias names can be quite confusing, and a source of potential problems.
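One more note on the earlier CAST attempt: in SQL Server, CAST(col AS VARCHAR) without a length defaults to 30 characters and truncates the data. Assuming SQL Server 2005 or later, converting the deprecated text columns to VARCHAR(MAX) in each branch is a sketch of a workaround that keeps the full contents while avoiding the work-table restriction on text columns:

```sql
-- Sketch: convert the text columns to VARCHAR(MAX) in both branches
-- so the sort no longer needs a text column in a work table.
SELECT id, datetime, author, headline, intro,
       CAST(text AS VARCHAR(MAX)) AS text, type, toppriority,
       secondpriority, comments, companyid, '1' AS source
FROM Table1
UNION ALL
SELECT AutoID, Dato, ID COLLATE SQL_Latin1_General_CP1_CI_AS,
       NULL, NULL,
       CAST(Notat AS VARCHAR(MAX)) COLLATE SQL_Latin1_General_CP1_CI_AS,
       CAST(NotatTypeID AS VARCHAR(10)), NULL, NULL, NULL,
       Selskabsnummer, '2'
FROM Table2
WHERE NotatTypeID IN ('5', '6')
ORDER BY datetime DESC;
```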