I'm using Postgres 14 and creating this function in DataGrip:
CREATE EXTENSION plpython3u;
CREATE OR REPLACE FUNCTION pymax (x integer, y integer) RETURNS integer
AS $$
if x>y:
return x
$$ LANGUAGE plpython3u;
The user I use is a superuser.
I get the following error when I execute this. How do I resolve it? I've tried increasing the connection timeout in the settings, which did not help. Any suggestions?
In DataGrip, the error looks like this:
I tried to replicate this in pgAdmin 4, and the error looks like this:
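Unrelated to the timeout error, note that the function body only returns a value when x > y; for any other input PL/Python returns None, which Postgres maps to NULL. Running the same body as plain Python shows this:

```python
def pymax(x, y):
    # Same body as the plpython3u function: there is no explicit
    # return statement on the x <= y path
    if x > y:
        return x

print(pymax(2, 1))  # 2
print(pymax(1, 2))  # None -> NULL in Postgres
```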
I am trying to create a function in BigQuery which calls a Cloud Function:
CREATE OR REPLACE FUNCTION `DATASET.XXXXX`(user_id int64, corp_id STRING) RETURNS STRING
REMOTE WITH CONNECTION `myPROJECTID.REGION.MY_CONNECTION`
OPTIONS (
endpoint = 'https://XXXX.cloudfunctions.net/XXXXX'
)
I previously created a connection in the BigQuery shell, but I get one of the following errors. Does anyone know what's wrong?
Keyword REMOTE is not supported at [2:1]
or
Not found: Connection my-connection
Your project must be allowlisted. It's a private preview (I asked 2 months ago, still nothing....)
I am reading a Databricks blog (link)
and I've found a problem with the built-in function to_json.
In the code below from this tutorial, it returns an error:
org.apache.spark.sql.AnalysisException: Undefined function: 'to_json'. This function is neither a registered temporary function nor a permanent function registered in the database 'default'.
Does this mean that the usage in the tutorial is wrong, and that no such function can be used in selectExpr? Could I do something like registering this to_json function into the default database?
val deviceAlertQuery = notifydevicesDS
.selectExpr("CAST(dcId AS STRING) AS key", "to_json(struct(*)) AS value")
.writeStream
.format("kafka")
.option("kafka.bootstrap.servers", "host1:port1,host2:port2")
.option("topic", "device_alerts")
.start()
You need to import the to_json function:
import org.apache.spark.sql.functions.to_json
This should work, rather than using selectExpr:
data.withColumn("key", $"dcId".cast("string"))
.select(to_json(struct(data.columns.head, data.columns.tail:_*)).as("value")).show()
You must also be on Spark 2.x.
I hope this helps solve your problem.
Based on information I got from the mailing list, this function was not added to the SQL API until Spark 2.2.0. Here is the commit link: commit.
Hope this will help. Thanks to Hyukjin Kwon and Burak Yavuz.
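For intuition, to_json(struct(*)) serializes every column of a row into one JSON string. In plain Python terms (a conceptual sketch, not the Spark API; the column names are illustrative, borrowed from the question's dcId example):

```python
import json

# One row of the dataset, as a column -> value mapping
row = {"dcId": "device-42", "alert": "overheat", "temp": 97}

# to_json(struct(*)) produces the equivalent of serializing the whole row
value = json.dumps(row)
print(value)
```

Spark does this per row across the whole DataFrame, which is why the result is suitable as a Kafka message value.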
While using Liquibase, I extended the liquibase.sqlgenerator.core.CreateIndexGenerator class to convert this command
create index indexI on tableT(columnC)
into something like this:
declare
index_already_exists exception;
pragma exception_init(index_already_exists, -955);
--
begin
execute immediate 'create index indexI on tableT(columnC)';
exception
when index_already_exists then
dbms_output.put_line('Warning: Index indexI already exists');
end;
to make it idempotent and create some new validations.
It is working perfectly when using mvn liquibase:update. But, when generating the SQL using mvn liquibase:updateSQL a final / (slash) is missing.
Looking at Liquibase sourcecode I found out that the class LoggingExecutor used to have what I need on method outputStatement
} else if (database instanceof OracleDatabase) {
output.write(StreamUtil.getLineSeparator());
output.write("/");
I tried to add a final / (slash) after the end;, but it comes out like this:
end;
/;
which is invalid PL/SQL code.
Is there another way to add a final / to the generated SQL, or to set / as an end delimiter?
Instead of extending CreateIndexGenerator you could instead override CreateIndexChange.generateStatements() to return a RawSqlStatement with your SQL. That allows you to better set the end delimiter and may work better with the LoggingExecutor.
I'm using Liquibase 3.4.2 and the end delimiter / is recognized automatically. By "automatically" I mean you don't have to use the endDelimiter property in the declaration of the changeset. It turns out that some older versions of Liquibase introduced a bug in the parsing.
Please check http://www.liquibase.org/2015/06/liquibase-3-4-0-released.html
and you will see that they fixed the issue https://liquibase.jira.com/browse/CORE-2227, which affected version 3.3.2.
So I suggest you use a newer version, or correctly specify the endDelimiter property in the declaration of the changeset.
http://www.liquibase.org/documentation/changes/sql.html
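If you go the endDelimiter route, it is set per change in the changelog. A sketch (changeset id and author are invented) of a raw sql change carrying the PL/SQL block from the question:

```xml
<changeSet id="create-indexI" author="example">
    <!-- endDelimiter="/" makes Liquibase split on "/" instead of ";",
         so the PL/SQL block is sent to Oracle as one statement -->
    <sql endDelimiter="/" splitStatements="true">
        declare
            index_already_exists exception;
            pragma exception_init(index_already_exists, -955);
        begin
            execute immediate 'create index indexI on tableT(columnC)';
        exception
            when index_already_exists then
                dbms_output.put_line('Warning: Index indexI already exists');
        end;
        /
    </sql>
</changeSet>
```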
I am trying to migrate an R/Shiny/SQL application to use data from SQL Server instead of Oracle. The original code contains many conditions of the following type: if the table exists, use it as a data set; otherwise, upload new data. I was looking for a counterpart of the dbExistsTable command from the DBI/ROracle packages, but odbcTableExists is unfortunately just an internal RODBC function, not usable from the R environment. A wrapper for the RODBC package that allows DBI-style commands, RODBCDBI, also seems not to work. Any ideas?
Here is some code example:
library(RODBC)
library(RODBCDBI)
con <- odbcDriverConnect('driver={SQL Server};server=xx.xx.xx.xxx;database=test;uid=user;pwd=pass123')
odbcTableExists(con, "table")
Error: could not find function "odbcTableExists"
dbExistsTable(con,"table")
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘dbExistsTable’ for signature ‘"RODBC", "character"’
You could use
[Table] %in% sqlTables(conn)$TABLE_NAME
Where [Table] is a character string of the table you are looking for.
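The overall pattern from the question (use the table if it exists, otherwise load fresh data) is independent of the driver. A minimal sketch of the same logic using Python's stdlib sqlite3, purely to illustrate the flow (table name is made up; other databases expose the same metadata via information_schema or driver helpers):

```python
import sqlite3

def table_exists(conn, name):
    # sqlite3 keeps table metadata in sqlite_master
    cur = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'table' AND name = ?", (name,)
    )
    return cur.fetchone() is not None

conn = sqlite3.connect(":memory:")
print(table_exists(conn, "events"))  # False

conn.execute("CREATE TABLE events (id INTEGER)")
print(table_exists(conn, "events"))  # True
```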
I have a really long list of SQL commands in a memo, and when I try to execute it I get the following error:
Parameter object is improperly defined. Inconsistent or incomplete information was provided.
The code to execute it:
Query.SQL.Text := Memo1.Lines.Text;
Query.ExecSQL;
I have a vague idea that the error is caused by the way the query content was added, so here's how I'm doing it now:
1) Memo1.Lines.LoadFromFile('Patch.sql');
2) Proceed to the query commands
As you can see, the contents of the memo are loaded from a file. Is there another way to successfully do this?
P.S.: I'm using Microsoft SQL 2008.
Thank you!
It looks like you're not using parameters, so set ParamCheck off
Query.ParamCheck := false;
If there is a colon (":") in a string in the SQL, TADOQuery thinks it's a parameter.