I am translating a script to PostgreSQL and I have a problem translating PIVOT, because that function doesn't exist in Postgres.
The script is:
UPDATE XYZ A
SET (A.YX1, A.YX2, A.YX3, A.YX4, A.YX5, A.YX6, A.YX7, A.YX8, A.YX9, A.YX10) =
    (SELECT X1, X2, X3, X4, X5, X6, X7, X8, X9, X10
     FROM
        ((SELECT X1, X2, X3, X4, X5, X6, X7, X8, X9, X10
          FROM XYPZ)
         PIVOT
         (MAX(CONTROLLO) FOR LIVELLO IN
          ('X1' AS "X1", 'X2' AS "X2", 'X3' AS "X3", 'X4' AS "X4", 'X5' AS "X5",
           'X6' AS "X6", 'X7' AS "X7", 'X8' AS "X8", 'X9' AS "X9", 'X10' AS "X10"))) B
    );
Is there a way to translate this update?
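For reference, Postgres can usually express an Oracle PIVOT with conditional aggregation, i.e. MAX(...) FILTER (WHERE ...). A minimal sketch, assuming XYPZ holds the LIVELLO/CONTROLLO pairs and that a hypothetical id column links each XYZ row to its XYPZ rows (adjust to the real key):
UPDATE XYZ A
SET (YX1, YX2, YX3, YX4, YX5, YX6, YX7, YX8, YX9, YX10) =
    (SELECT MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X1'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X2'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X3'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X4'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X5'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X6'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X7'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X8'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X9'),
            MAX(CONTROLLO) FILTER (WHERE LIVELLO = 'X10')
     FROM XYPZ B
     WHERE B.id = A.id   -- "id" is a placeholder for the real join key
    );
The crosstab function from the tablefunc extension is another option, but for a fixed column list plain FILTER aggregates are usually simpler.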
I am working on a SQL database which will provide data to a grid. The grid will support filtering, sorting and paging, but there is also a strict requirement that users can enter free text into a text input above the grid, for example
'Engine 1001 Requi', and the result should contain only rows where some columns contain all the pieces of the text. So one column may contain Engine, another column may contain 1001 and some other will contain Requi.
I created a technical column (let's call it myTechnicalColumn) in the table (let's call it myTable) which is updated each time someone inserts or updates a row; it contains all the values of all the columns combined and separated with spaces.
Now, to use it with Entity Framework, I decided to use a table-valued function which accepts one parameter @searchText and handles it like this:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS @Result TABLE
( ... here come columns )
AS
BEGIN
    -- split the search text into individual tokens
    DECLARE @searchToken TokenType
    INSERT INTO @searchToken(token) SELECT value FROM STRING_SPLIT(@searchText, ' ')

    DECLARE @searchTextLength INT
    SET @searchTextLength = (SELECT COUNT(*) FROM @searchToken)

    -- keep only rows whose technical column contains every token
    INSERT INTO @Result
    SELECT
    ... here come columns
    FROM myTable
    WHERE (SELECT COUNT(*) FROM @searchToken WHERE CHARINDEX(token, myTechnicalColumn) > 0) = @searchTextLength

    RETURN;
END
Of course the solution works fine, but it's kinda slow. Any hints on how to improve its efficiency?
You can use an inline Table Valued Function, which should be quite a lot faster.
This would be a direct translation of your current code:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT value AS token
        FROM STRING_SPLIT(@searchText, ' ')
    )
    SELECT
    ... here come columns
    FROM myTable t
    WHERE (
        SELECT COUNT(*)
        FROM searchText s
        WHERE CHARINDEX(s.token, t.myTechnicalColumn) > 0
    ) = (SELECT COUNT(*) FROM searchText)
);
GO
You are using a form of query called Relational Division Without Remainder, and there are other ways to cut this cake:
CREATE FUNCTION myFunctionName(@searchText NVARCHAR(MAX))
RETURNS TABLE
AS RETURN
(
    WITH searchText AS (
        SELECT value AS token
        FROM STRING_SPLIT(@searchText, ' ')
    )
    SELECT
    ... here come columns
    FROM myTable t
    WHERE NOT EXISTS (
        SELECT 1
        FROM searchText s
        WHERE CHARINDEX(s.token, t.myTechnicalColumn) = 0
    )
);
GO
This may be faster or slower depending on a number of factors; you need to test.
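For what it's worth, either version of the inline function is then queried like any other table source, e.g. (assuming the default dbo schema, with the real column list elided as above):
SELECT *
FROM dbo.myFunctionName(N'Engine 1001 Requi') AS r;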
Since there is no data to test with, I am not sure if the following will solve your issue:
-- Replace the last INSERT portion
INSERT INTO @Result
SELECT
... here come columns
FROM myTable T
JOIN @searchToken S ON CHARINDEX(S.token, T.myTechnicalColumn) > 0
I'm very puzzled, because I'm getting a syntax error when I run this query on SQLite (on macOS Mojave). The error points at the "FROM" and there is no more detail.
This does work on Postgres.
Am I reading the SQLite documentation the wrong way? https://sqlite.org/lang_update.html
Here's the query:
BEGIN;
-- Statement 1
CREATE TEMP TABLE tempEdits (identifier text, serverEditTime double precision);
-- Statement 2
INSERT INTO tempEdits (identifier, serverEditTime)
VALUES
('uuid1', 1.5),
('uuid2', 2.2),
('uuid3', 3.3);
-- Statement 3
UPDATE
"pEdits"
SET
"serverEditTime" = t.serverEditTime
FROM
"pEdits" AS e JOIN tempEdits AS t ON e.identifier = t.identifier
WHERE
e.identifier = t.identifier;
END;
Setup query:
CREATE TABLE "pEdits" (identifier text, serverEditTime double precision);
INSERT INTO "pEdits" (identifier)
VALUES
('uuid1'),
('uuid2'),
('uuid3');
SQLite does not support joins in the UPDATE statement.
Instead you should use a correlated subquery:
UPDATE pEdits
SET serverEditTime = (
SELECT t.serverEditTime
FROM tempEdits AS t
WHERE t.identifier = pEdits.identifier
);
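Note that this form sets serverEditTime to NULL for any pEdits row that has no match in tempEdits. If only matched rows should change, a guarded variant (sketch) would be:
UPDATE pEdits
SET serverEditTime = (
    SELECT t.serverEditTime
    FROM tempEdits AS t
    WHERE t.identifier = pEdits.identifier
)
WHERE EXISTS (
    SELECT 1
    FROM tempEdits AS t
    WHERE t.identifier = pEdits.identifier
);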
Edit: Starting from version 3.33.0 (2020-08-14), SQLite supports a PostgreSQL-like FROM clause in UPDATE. See https://www.sqlite.org/lang_update.html#upfrom
The logic you want is:
UPDATE "pEdits"
SET "serverEditTime" = t.serverEditTime
FROM tempEdits t
WHERE "pEdits".identifier = t.identifier;
In other words, the table being updated should not be repeated in the FROM clause -- well, unless your intention is a self-join.
I have created the following SQL table in Databricks (using the %sql magic) as follows:
%sql
CREATE TABLE mytable (
id INT
,name STRING
,met_area_name STRING
,state STRING
,type STRING
) USING CSV
I am now trying to insert data into the table using the following command:
%sql
INSERT INTO TABLE mytable VALUES (id,name,type)
SELECT DISTINCT criteria1, criteria2, 'b'
FROM tablex
WHERE somecriteria1 = 0
ORDER BY somecriteria2;
However, I'm getting the following error:
Error in SQL statement: ParseException:
mismatched input 'FROM' expecting <EOF>(line 2, pos 2)
== SQL ==
INSERT INTO TABLE mytable VALUES (id,name,type)
FROM tablex
--^^^
WHERE somecriteria1 = 0
ORDER BY somecriteria2
I'm sure there is something very obvious that I'm missing, but I can't see it.
Any assistance much appreciated.
Cheers
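For reference, a likely fix (a sketch only, not tested against this schema): INSERT INTO takes either a VALUES list or a SELECT, not both, and Spark SQL expects a value for every column of mytable in order. Assuming id and name come from criteria1 and criteria2, type is the literal 'b', and the remaining columns can stay NULL:
INSERT INTO mytable
SELECT DISTINCT
    criteria1,               -- id
    criteria2,               -- name
    CAST(NULL AS STRING),    -- met_area_name (not populated here)
    CAST(NULL AS STRING),    -- state (not populated here)
    'b'                      -- type
FROM tablex
WHERE somecriteria1 = 0;
The ORDER BY is dropped here because it has no effect on the rows that end up in the table.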
I have a set of code that takes a string value, splits it, and passes it to a table. The code works, but it runs slowly. Any suggestion to modify the code and make it run faster would be greatly appreciated.
DECLARE @StrPropertyIDs VARCHAR(1000)
SET @StrPropertyIDs = '419,429,459'

DECLARE @TblPropertyID TABLE
(
    property_id varchar(100)
)

INSERT INTO @TblPropertyID(property_id)
SELECT x.Item
FROM dbo.SplitString(@StrPropertyIDs, ',') x

SELECT *
FROM vw_nfpa_firstArv_RPT
WHERE property_use IN
(
    SELECT property_id
    FROM @TblPropertyID
)
The best long-term strategy here would be to move away from CSV data in your SQL tables if at all possible. As a quick fix, we could try creating the table variable with an index on property_id:
DECLARE @TblPropertyID TABLE (
    property_id varchar(100) INDEX idx CLUSTERED
);
This would make the WHERE IN clause of your query faster, though we could try rewriting it using EXISTS:
SELECT *
FROM vw_nfpa_firstArv_RPT t1
WHERE EXISTS (SELECT 1 FROM @TblPropertyID t2
              WHERE t2.property_id = t1.property_use);
Note that the inline index on a table variable only works on SQL Server 2014 or later.
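On SQL Server 2016 or later (database compatibility level 130+), the user-defined dbo.SplitString could probably also be replaced with the built-in STRING_SPLIT, sketched here under the assumption that the list has no stray spaces around the IDs:
SELECT r.*
FROM vw_nfpa_firstArv_RPT r
WHERE r.property_use IN (
    SELECT value
    FROM STRING_SPLIT(@StrPropertyIDs, ',')
);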
I have a simple query, but when I try to execute it I get this error:
Query input must contain at least one table or query. (Error 3067)
Query:
INSERT INTO FV_Ko ( FvId, OldPriceNetto )
SELECT [PFvId], (SELECT FV.PriceNetto1 FROM FV WHERE FV.FVnr = '123');
This should work - it should ask you for [PFvId]:
INSERT INTO FV_Ko ( FvId, OldPriceNetto )
SELECT [PFvId], FV.PriceNetto1 FROM FV WHERE FV.FVnr = '123';
If PFvId is a parameter and the query (MyQuery) is saved (part of the MS Access database), then you should be able to do:
Dim db As DAO.Database
Dim qdf As DAO.QueryDef
Set db = CurrentDb
Set qdf = db.QueryDefs("MyQuery")
qdf.Parameters("[PFvId]") = myValue
qdf.Execute dbFailOnError
If you compose the query on the fly and execute it, then you might just specify a value when composing the SQL code instead of a parameter (not that it is a better solution though).
The subquery is OK, but you aren't saying where PFvID is coming from. Your INSERT should be something like this:
INSERT INTO FV_Ko ( FvId, OldPriceNetto )
SELECT AnotherTable.PFvId,
(SELECT FV.PriceNetto1 FROM FV WHERE FV.FVnr = '123') FROM AnotherTable
...
SELECT [PFvId], --> you should define a FROM table at the end of the query
(SELECT FV.PriceNetto1 FROM FV WHERE FV.FVnr = '123');