Initialize a sequence in HSQLDB DatabaseManager

I am using hsqldb-1.8.0.10 and I want to use the sequence feature. I launch the DatabaseManager from the jar file as follows:
java -cp lib/hsqldb.jar org.hsqldb.util.DatabaseManager
I run these statements:
CREATE TABLE test (ID bigint)
CREATE SEQUENCE seq START WITH 1 INCREMENT BY 1
SELECT NEXT VALUE FOR seq FROM test
There is no output. If I insert some rows in the table, it works fine and displays 1.
Is this normal behavior? Is there a way to get a value with an empty table?

Yes, this is normal. The statement below returns one row for each row of the table. It doesn't matter what you put between SELECT and FROM (except aggregate functions).
SELECT <anything you put here> FROM test
NEXT VALUE FOR seq acts like a function that returns the next value each time it is called.
If you do not have to use version 1.8.0, use the latest version of HSQLDB as it supports a more extensive syntax.
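If you need a value while the table is empty, one workaround (a sketch; the helper table name is arbitrary) is to select from a table that always contains exactly one row. On HSQLDB 2.x you should also be able to read the sequence directly with a VALUES statement:
CREATE TABLE one_row (dummy INT)
INSERT INTO one_row VALUES (0)
SELECT NEXT VALUE FOR seq FROM one_row
-- HSQLDB 2.x syntax, no table needed:
VALUES NEXT VALUE FOR seq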

Related

How to create table from result of client side function \crosstabview

In psql I can get the pivot of a table using the client-side command \crosstabview, but I can't figure out how to store the result in a table. I tried CREATE TABLE foo AS ... \crosstabview, but it stores just the input to the command and not the output.
\crosstabview is a psql slash command which takes the results of the actual query (everything up until \crosstabview) and reformats it. As such the output you see is generated within the psql client for display only, and cannot be used in other SQL operations.
To create a table based off the results of a crosstab query you'll need the tablefunc extension.
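With tablefunc installed, crosstab() is an ordinary set-returning function, so its result can feed CREATE TABLE ... AS. A sketch; the source query, output column names, and types are assumptions you would adapt to your data:
CREATE EXTENSION IF NOT EXISTS tablefunc;
-- the column definition list must match the categories produced by the pivot
CREATE TABLE foo AS
SELECT *
FROM crosstab('SELECT row_name, category, value FROM source_table ORDER BY 1, 2')
     AS ct (row_name text, category_a int, category_b int);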

Get a BigQuery script to output a table

I am using the new scripting feature of BigQuery to declare a variable and then use that variable in a standard SQL query.
The structure of the query is:
DECLARE {name of variable} {data type};
SET {name of variable} = {value};
(A SQL QUERY THEN FOLLOWS USING THE ABOVE VARIABLE)
I understand that this is now a script and no longer a typical query, and thus when I run it, it runs as a sequence of executable tasks. But is there any way in the script to explicitly state that I only want to output the resulting table of the SQL query, as opposed to both the result of declaring the variable and the SQL query?
What BQ Outputs
Depending on how you "capture" the output: if you are sending the query from Python/Java/the CLI, the last SELECT statement in the script is the only output that you receive via the API.
Please also note that each "output" you see comes with a cost (bytes billed), which is another reason for them to be visible at all times.
Update:
If you need to capture the output of the SELECT statement in a table, depending on your intention, you may use:
CREATE OR REPLACE TABLE <your_destination_table> AS SELECT ...
or
INSERT INTO <your_destination_table> SELECT ...
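Putting the pieces together, a script that only materializes the query result into a table might look like this (the dataset, table, and column names are made up for illustration):
DECLARE min_amount INT64;
SET min_amount = 100;

-- only this statement produces a persistent result table
CREATE OR REPLACE TABLE mydataset.filtered_results AS
SELECT *
FROM mydataset.source_table
WHERE amount >= min_amount;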

Call a SQL function in Nifi using ExecuteSQL or another processor

I am currently using a function in SQL Server to get the max value of a certain column. I need this value to generate a specific number of dummy rows for flowfiles that are created later on.
Is there a way of calling this function via a NiFi processor?
When using ExecuteSQL I always get errors like "unable to execute SQL select query" or "the column 'ab' was not found" when using select ab.functionname() (ab is the login name of the DB).
In SQL Server I can just use select ab.functionname() and get the desired results.
If there is no way of calling this function, is there another way to create #flowfiles dummy rows to reserve a place for them in the DB, so that nobody else can insert or use these IDs (not autoincrement, because that is not possible here) while the flowfiles are being processed?
I tried using $flowfile.count and the Counter processor, but this did not solve the problem.
It should look like INSERT INTO table (id, nr) VALUES (max(id)+1, anynumber) for every flowfile; unfortunately, ExecuteSQL is not able to do this.
I think this conversation can help you:
https://community.hortonworks.com/questions/26170/does-executesql-processor-allow-to-execute-stored.html
Gist:
You can use ExecuteScript or ExecuteProcess to call an appropriate script. For example, with ExecuteProcess just call the sqlplus command: set the command to "sqlplus" and the command arguments to something like user_id/password@dbname @"script_path/someScript.sql". In someScript.sql you put something like:
execute spname(param)
You can also write your own processor :) Of course, that's more difficult and often unnecessary.
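On the INSERT sketched in the question: max(id)+1 cannot be used inside VALUES, but the same idea can be expressed as a single INSERT ... SELECT statement, so whatever ends up issuing it does not need a separate lookup of the current maximum (a sketch with placeholder table/column names; not fully safe under heavy concurrency without additional locking):
-- placeholder names taken from the question; 42 stands for "anynumber"
INSERT INTO your_table (id, nr)
SELECT COALESCE(MAX(id), 0) + 1, 42
FROM your_table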

Sequence SQL SERVER Select Fails

I'm using SQL Server and I need to pass unique identifiers to my client. To avoid the usage of UUIDs I want to use a sequence. In order to do so I took a look at the MS docs and found this example.
USE MyDB;
GO
CREATE SCHEMA Test;
GO
CREATE SEQUENCE Test.CountBy1
START WITH 1
INCREMENT BY 1 ;
GO
SELECT NEXT VALUE FOR Test.CountBy1 AS FirstUse;
SELECT NEXT VALUE FOR Test.CountBy1 AS SecondUse;
This fails for me. I'm currently using sequences in another service in my application, where they work, but there they are used as a default value. The syntax here differs since I want to select the sequence value as an integer and pass it to my Java server, and from there via REST to a web client.
The example was copied as-is from Microsoft's docs; only the database name was adjusted. Example B from that page works.
mssql-server and mssql-tools are running on an Ubuntu 16.04 host.
Error message:
[2017-07-11 16:38:24] [S00016][217] Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 2).
[2017-07-11 16:38:24] The connection is closed.

Select from a SQL table starting with a certain index?

I'm new to SQL (using postgreSQL) and I've written a java program that selects from a large table and performs a few functions. The problem is that when I run the program I get a java OutOfMemoryError because the table is simply too big. I know that I can select from the beginning of the table using the LIMIT operator, but is there a way I can start the selection from a certain index where I left off with the LIMIT command? Thanks!
There is an OFFSET option in Postgres, as in:
select * from table
offset 50
limit 50
For MySQL you can use the following approaches:
SELECT * FROM table LIMIT {offset}, row_count
SELECT * FROM table WHERE id > {max_id_from_the_previous_selection} LIMIT row_count. Initially max_id_from_the_previous_selection = 0.
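The second (keyset) approach also works in PostgreSQL and stays fast on large tables because it seeks on an index instead of skipping rows. A sketch, assuming an indexed id column and a placeholder table name:
-- start with {last_id} = 0, then feed the largest id of each batch back in
SELECT * FROM big_table
WHERE id > {last_id}
ORDER BY id
LIMIT 1000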
This is actually something that the JDBC driver will handle for you transparently. You can stream the result set instead of loading it all into memory at once. To do this in MySQL, you need to follow the instructions here: http://javaquirks.blogspot.com/2007/12/mysql-streaming-result-set.html
Basically, when you call connection.prepareStatement you need to pass ResultSet.TYPE_FORWARD_ONLY and ResultSet.CONCUR_READ_ONLY as the second and third parameters, then call setFetchSize(Integer.MIN_VALUE) on your PreparedStatement object.
There are similar instructions for doing this with other databases which I could iterate if needed.
EDIT: now we know you need instructions for PostgreSQL. Follow the instructions here: How to read all rows from huge table?