How to set "max_table_count_in_statement" to 8192 in SAP HANA - hana

I need to change a limitation in SAP HANA.
First I ran the SQL below to check whether "MAXIMUM_NUMBER_OF_TABLES_IN_STATEMENT" is set:
SELECT * FROM M_SYSTEM_LIMITS;
The result contained no MAXIMUM_NUMBER_OF_TABLES_IN_STATEMENT row. According to the documentation, this means that MAXIMUM_NUMBER_OF_TABLES_IN_STATEMENT = 0.
How can I now set the value to 8192?
I have tried the following SQL:
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('sql', 'max_table_count_in_statement') = '8192' WITH RECONFIGURE;
The error I get is the following:
Configuration parameters for nameserver.ini can only be altered from SYSTEMDB SQLSTATE: HY000

I solved the issue.
Log in to SYSTEMDB:
hdbsql -n localhost -i 01 -u system -p XXXXXXX -d SYSTEMDB
Run the SQL to update the parameter:
ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('sql', 'max_table_count_in_statement') = '8192' WITH RECONFIGURE;
Validate the change:
SELECT * FROM M_SYSTEM_LIMITS;
Expected result:
"DATABASE","MAXIMUM_NUMBER_OF_TABLES_IN_STATEMENT","8192","INTEGER","","This restriction shows the maximum number of tables including repeated occurrences in a statement. For example, given a table T1 and a view V1, the number is three for a query \"SELECT (SELECT COUNT(*) FROM T1) FROM T1, T1\" because T1 appears three times. If the definition of V1 includes T1 four times, the number for another query \"SELECT * FROM T1, V1\" is five (one for T1 and four for V1)."
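As an extra check (my own addition, not part of the original fix), HANA's M_INIFILE_CONTENTS monitoring view should show the value as stored in the inifile:

```sql
-- Check the parameter as stored in nameserver.ini (run from SYSTEMDB)
SELECT FILE_NAME, SECTION, KEY, VALUE
  FROM M_INIFILE_CONTENTS
 WHERE FILE_NAME = 'nameserver.ini'
   AND SECTION   = 'sql'
   AND KEY       = 'max_table_count_in_statement';
```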

Related

Greenplum's FDW tool duplicates data many times

I have a Greenplum database, version 6.14.1, running on a CentOS 7.2 host.
I am trying to copy data from Postgres 11 to Greenplum 6.14 via a Foreign Data Wrapper.
With the default options I receive N rows, and all data comes through the master node.
So I decided to change the option to (mpp_execute 'all segments'),
but in this case I receive 24*N rows, because my cluster has 24 segment nodes.
I think this is a well-known issue, but unfortunately I can't find a solution at all.
Steps to reproduce the behavior:
On Postgres server
create table x(id int, value float8);
insert into x select r, r * random() from generate_series(1,1000) r;
select count(*) from x;
1000
(1 row)
On Greenplum server
CREATE EXTENSION postgres_fdw;
create server foreign_server_x FOREIGN DATA WRAPPER postgres_fdw
OPTIONS(host '172.16.128.135', port '5432', dbname 'postgres');
-- user mapping
CREATE USER MAPPING FOR current_user
SERVER foreign_server_x OPTIONS (user 'admin', password 'admin');
-- foreign table foreign_x
CREATE FOREIGN TABLE foreign_x
(id int, value float8) SERVER foreign_server_x OPTIONS (schema_name 'public', table_name 'x');
select count(*) from foreign_x;
1000
(1 row)
-- mpp_execute = all segments
alter foreign table foreign_x options (add mpp_execute 'all segments');
-- foreign_x (24 segments)
select count(*) from foreign_x;
24000
(1 row)
This would be the expected behavior: since you have 24 segments, you are asking all of them to go query the database. I would suggest executing only from the master, selecting a distinct count, or leveraging an external table instead of the FDW.
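To illustrate the first suggestion (a sketch based on the question's own foreign table, not tested here), switching mpp_execute back so only the master fetches the data avoids the duplication:

```sql
-- Route the foreign scan through the master only
ALTER FOREIGN TABLE foreign_x OPTIONS (SET mpp_execute 'master');

-- With the question's data, this should again return 1000 rows in total
SELECT count(*) FROM foreign_x;
```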

Backup & remove specific records - SQL Server

Is there any way to:
Remove specific records from the table using a query?
Make a backup from specific records and restore them into another SQL Server instance somewhere else?
1) If ID is the table's PK (or is unique) you can just use DELETE FROM TABLE_NAME WHERE ID IN (3, 4). You'd better check that this will not delete other items (or open a transaction, which is always a good idea).
2) If it is just those 4 records and both databases are on the same server (and both tables have the same schema), you can just do the following (with the same caveats expressed above):
insert into DESTINATION
select * from SOURCE where id between 73 and 76;
Edit: If you really need to do something more like a row backup you can use the bcp utility:
bcp "select * from SOURCE where id between 73 and 76" queryout "file.dat" -T -c
bcp DESTINATION in file.dat -T -c
DELETE FROM ListsItems
WHERE ID IN (3, 4);
This will remove your records.
Modify it as needed.

How to return record count in PostgreSQL

I have a query with a limit and an offset. For example:
select * from tbl
limit 10 offset 100;
How can I keep track of the total count of the records, without running a second query like:
select count(*) from tbl;
I think this answers my question, but I need it for PostgreSQL. Any ideas?
I have found a solution and I want to share it. What I do is: I create a temp table from my real table with the filters applied, then I select from the temp table with a limit and offset (no filters, so the performance is good), then I select count(*) from the temp table (again no filters), then the other data I need, and finally I drop the temp table.
select * into tmp_tbl from tbl where [limitations];
select * from tmp_tbl offset 10 limit 10;
select count(*) from tmp_tbl;
select other_stuff from tmp_tbl;
drop table tmp_tbl;
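For comparison, a single-query alternative (standard PostgreSQL window functions; this is not part of the solution above, and I have not benchmarked it against the temp-table approach) returns the total filtered count alongside each page:

```sql
-- count(*) OVER () is evaluated after WHERE but before LIMIT/OFFSET,
-- so every returned row carries the full filtered count
SELECT *, count(*) OVER () AS total_count
  FROM tbl
 WHERE [limitations]
OFFSET 10 LIMIT 10;
```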
I haven't tried this, but from the section titled Obtaining the Result Status in the documentation you can use the GET DIAGNOSTICS command to determine the effect of a command.
GET DIAGNOSTICS number_of_rows = ROW_COUNT;
From the documentation:
This command allows retrieval of system status indicators. Each item
is a key word identifying a state value to be assigned to the
specified variable (which should be of the right data type to receive
it). The currently available status items are ROW_COUNT, the number of
rows processed by the last SQL command sent down to the SQL engine,
and RESULT_OID, the OID of the last row inserted by the most recent
SQL command. Note that RESULT_OID is only useful after an INSERT
command into a table containing OIDs.
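A minimal PL/pgSQL sketch of the idea (untested, with illustrative names; note that GET DIAGNOSTICS only works inside PL/pgSQL, not in plain SQL):

```sql
DO $$
DECLARE
    number_of_rows integer;
BEGIN
    -- Run the paged query; PERFORM discards the result set
    PERFORM * FROM tbl LIMIT 10 OFFSET 100;
    -- ROW_COUNT reflects the rows processed by the last command,
    -- so here it reports the page size, not the table total
    GET DIAGNOSTICS number_of_rows = ROW_COUNT;
    RAISE NOTICE 'rows processed: %', number_of_rows;
END
$$;
```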
It depends on whether you need it from the psql CLI or whether you're accessing the database from something like an HTTP server. I am using Postgres from my Node server with node-postgres. The result set is returned as an array called 'rows' on the result object, so I can just do
console.log(results.rows.length)
to get the row count.

How to compare data in table (before and after an operation)?

Is there any free tool or a way to get to know what has changed in database's table?
You could take a copy before the update
CREATE TABLE t2 AS SELECT * FROM t1
Run your update
Then to show the differences
use this to show updates:
SELECT * FROM t1
MINUS
SELECT * FROM t2
use this to show the deletes:
SELECT * FROM t2
WHERE NOT EXISTS(SELECT 1 FROM t1 WHERE t1.primary_key = t2.primary_key)
and finally this to check that the total numbers of records are identical:
SELECT count(*) FROM t1
SELECT count(*) FROM t2
Note: If there are other sessions updating t1 it could be tricky spotting your updates.
Triggers really should be avoided but ...
If you are in a non-production environment you can set up a trigger to perform logging to a new table. You need 5 fields something like this:
LogTime DateTime;
Table Varchar2(50); -- Table Name
Action Char; -- Insert, Update or Delete
OldRec Blob; -- Concatenate all your field Values
NewRec Blob; -- Ditto
The beauty of this is that you can select all the OldRec and NewRec values for a given timespan into text files. A comparison tool will assist by highlighting the changes for you.
Does that help?
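A rough sketch of such a trigger (Oracle-style syntax, guessed from the Varchar2 field types; it assumes a change_log table with the five fields above, and the table t1 and its columns are illustrative):

```sql
CREATE OR REPLACE TRIGGER t1_log
AFTER INSERT OR UPDATE OR DELETE ON t1
FOR EACH ROW
BEGIN
    INSERT INTO change_log (LogTime, TableName, Action, OldRec, NewRec)
    VALUES (
        SYSDATE,
        'T1',
        CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END,
        :OLD.col1 || '|' || :OLD.col2,   -- concatenate the old field values
        :NEW.col1 || '|' || :NEW.col2    -- ditto for the new values
    );
END;
```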
I have used Toad for MySQL very successfully in times past (for both the Schema and Data). I see it is also compatible with Oracle.
Try liquibase, it provides the version control mechanism for database.

SQL Command for copying table

What is the SQL command to copy a table from one database to another database?
I am using MySQL and I have two databases x and y. Suppose I have a table in x called a and I need to copy that table to y database.
Sorry if the question is too novice.
Thanks.
If the target table doesn't exist....
CREATE TABLE dest_table AS (SELECT * FROM source_table);
If the target table does exist
INSERT INTO dest_table (SELECT * FROM source_table);
Caveat: Only tested in Oracle
If your two databases are separate, the simplest thing to do would be to create a dump of your table and load it into the second database. Refer to your database manual to see how a dump is performed.
Otherwise you can use the following syntax (for MySQL)
INSERT INTO database_b.table (SELECT * FROM database_a.table)
Since your scenario involves two different databases, the correct query should be...
INSERT INTO y.dest_table (SELECT * FROM source_table);
The query assumes you are running it from database x.
If you just want to copy the contents, you might be looking for SELECT INTO:
http://www.w3schools.com/Sql/sql_select_into.asp. This will not create an identical copy, though; it will just copy every row from one table to another.
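For example (note, as my own aside, that SELECT INTO in this form is SQL Server syntax; MySQL, which the question uses, does not support it for creating tables and uses CREATE TABLE ... SELECT instead):

```sql
-- SQL Server:
SELECT * INTO dest_table FROM source_table;

-- MySQL equivalent:
CREATE TABLE dest_table SELECT * FROM source_table;
```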
At the command line
mysqldump somedb sometable -u user -p | mysql otherdb -u user -p
then type both passwords.
This works even if they are on different hosts (just add the -h parameter as usual), which you can't do with INSERT ... SELECT.
Be careful not to accidentally pipe into the wrong db or you will end up dropping the sometable table in that db! (The dump will start with 'drop table sometable').
The INSERT ... SELECT approach suggested by others is good for copying the data under MySQL.
If you want to copy the table structure, you might want to use the SHOW CREATE TABLE Tablename; statement.