ClickHouse limitations in column manipulation - sql

I found in the ClickHouse documentation that column manipulation has some limitations:
For tables that don’t store data themselves (such as Merge and Distributed), ALTER just changes the table structure, and does not change the structure of subordinate tables. For example, when running ALTER for a Distributed table, you will also need to run ALTER for the tables on all remote servers.
And here I have a question: is there a way to run it automatically? I have 4 servers running in containers and I don't want to log in to each one and manually execute commands like ALTER ..., etc.

Run ALTER TABLE db.table ON CLUSTER 'cluster-name' ADD COLUMN ... twice: first for the underlying Engine=ReplicatedMergeTree(...) table, and then for the Engine=Distributed(...) table.
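For example, assuming the cluster is named 'my_cluster', the underlying replicated table is db.table and the Distributed table on top of it is db.table_dist (the names are illustrative):
ALTER TABLE db.table ON CLUSTER 'my_cluster' ADD COLUMN new_col String;
ALTER TABLE db.table_dist ON CLUSTER 'my_cluster' ADD COLUMN new_col String;
ON CLUSTER executes the DDL on every node of the cluster, so there is no need to log in to each container.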

Hmm, just expose a port for each container and write a script that goes through them and runs the command? ClickHouse has a Python driver (clickhouse-driver), so you can just iterate over the ports:
from clickhouse_driver import Client

for port in (8090, 8091, 8092, 8093):  # one exposed port per container; the extra ports are illustrative
    client = Client('localhost', port=port, user='admin', password='admin')
    client.execute('ALTER TABLE db.table ADD COLUMN ...')  # your ALTER statement here

Related

Getting data from different database on different server with one SQL Server query

Server1: Prod, hosting DB1
Server2: Dev hosting DB2
Is there a way to query databases living on two different servers with the same SELECT query? I need to bring all the new rows from Prod to Dev, using a query like the one below. I will be using SQL Server DTS (the import/export data utility) to do this.
Insert into Dev.db1.table1
Select *
from Prod.db1.table1
where table1.PK not in (Select table1.PK from Dev.db1.table1)
Creating a linked server is the only approach that I am aware of for this to occur. If you are simply trying to add all new rows from prod to dev, then why not just create a backup of that one particular table, pull it into the dev environment, and then write the query from the same server and database?
Granted, this is a one-time use and a pain for recurring instances, but if it is a one-time thing then I would recommend doing that. Otherwise, make a linked server between the two.
To back up a single table in SQL, use the SQL Server Import and Export Wizard. Select the prod database as your data source, then select only the prod table as your source table and create a new table in the dev environment as your destination table.
This should get you what you are looking for.
You say you're using DTS; the modern equivalent would be SSIS.
Typically you'd use a data flow task in an SSIS package to pull all the information from the live system into a staging table on the target, then load it from there. This is a pretty standard operation when data warehousing.
There are plenty of different approaches to save you copying all the data across (e.g. use a timestamp, use rowversion, use Change Data Capture, make use of the fact your primary key only ever gets bigger, etc. etc.) Or you could just do what you want with a lookup flow directly in SSIS...
The best approach will depend on many things: how much data you've got, what data transfer speed you have between the servers, your key types, etc.
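For example, a minimal sketch of the "primary key only ever gets bigger" approach, run on the Dev server and assuming Prod is set up as a linked server (table and column names follow the question and are purely illustrative):
INSERT INTO db1.dbo.table1
SELECT p.*
FROM Prod.db1.dbo.table1 AS p
WHERE p.PK > (SELECT ISNULL(MAX(d.PK), 0) FROM db1.dbo.table1 AS d);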
When your servers are all in one Active Directory and you use Windows Authentication, all you need is an account which has the proper rights on all the databases!
You can then simply reference all tables like server.database.schema.table
For example:
insert into server1.db1.dbo.tblData1 (...)
select ... from server2.db2.dbo.tblData2;

How can I copy and overwrite data of tables from database1 to database2 in SQL Server

I have database1 with more than 500 tables, and database2 with the same number of tables; in both databases the table names are the same. Some of the tables have different definitions, for example the table reports in database1 has 9 columns and the table reports in database2 has 10.
I want to copy all the data from database1 to database2 so that it overwrites the existing data and adds the columns where the structure does not match. I have tried the Import and Export Wizard in SQL Server 2008, but it gives an error at the last step, when copying rows. I don't have a screenshot of that error right now, it is on my office PC. It says there was an error inserting into the read-only column xyz, and sometimes it says VS_ISBROKEN. For the read-only column error I enabled identity insert, as I mentioned, but it did not help.
Please help me; this is an opportunity for me at my office.
SSIS and SQL Server 2008 Wizards can be finicky tools.
If you get a "can't insert into column ABC", then it could be one of the following:
Inserting into a PK column -> when setting up the mappings, you need to indicate to overwrite the value
Inserting into a column with a smaller range -> for example from nvarchar(256) into nvarchar(50)
Inserting into a calculated column (pointed out by @Nick.McDermaid)
You could also get issues with referential integrity if your database uses this (most do).
If you're going to do this more often, then I suggest you build an SSIS package instead of using the wizard tooling. This way you will see warnings on all sorts of issues like the ones I've described above. You can then run your package on demand.
Another suggestion I would make is that you insert DB1 into "stage" tables in DB2. These tables should have no referential integrity and will allow you to break the process into several steps, as follows.
Stage the data from DB1 into DB2
Produce reports/queries on issues pertinent to your database/rules
Merge the data from stage tables into target tables using SQL
That last step is where you can use MERGE statements, or simple inserts/updates depending on a key match. Using SQL in the local database lets you work with sets, managing the overlap of the two tables and figuring out what is new and what needs to be updated.
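A rough sketch of that merge step (the stage table, target table, and key/column names are assumptions, not from the question):
MERGE INTO dbo.reports AS target
USING dbo.stage_reports AS source
    ON target.ReportId = source.ReportId
WHEN MATCHED THEN
    UPDATE SET target.Col1 = source.Col1, target.Col2 = source.Col2
WHEN NOT MATCHED THEN
    INSERT (ReportId, Col1, Col2)
    VALUES (source.ReportId, source.Col1, source.Col2);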
SSIS "can" do this, but you will not be able to do a bulk update using SSIS, whereas with SQL you can. SSIS would do what is known as RBAR (row by agonizing row), something slow and to be avoided.
I suggest you inform your seniors that this will take a little longer to ensure it is reliable and the results reportable. Then work step by step, reporting on each stage's completion.
Another two small suggestions:
Create an _Archive table for each of the stage tables and add a Tstamp column to each. Merge into these after the stage step; this will allow you to quickly see when each row was introduced into DB2.
After the stage step and before the SQL merge step, create indexes on your stage tables. This will improve the merge performance.
Drop those indexes after each merge; this will improve the bulk insert performance on the next load.
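A quick sketch of that index step (the stage table and key column names are illustrative):
CREATE INDEX IX_stage_reports_ReportId ON dbo.stage_reports (ReportId);
-- ... run the merge here ...
DROP INDEX IX_stage_reports_ReportId ON dbo.stage_reports;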
Basics of staging (in response to the question clarification):
Links:
http://www.codeproject.com/Articles/173918/How-to-Create-your-First-SQL-Server-Integration-Se
http://www.jasonstrate.com/tag/31daysssis/
http://blogs.msdn.com/b/andreasderuiter/archive/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data.aspx
Staging is the act of moving data from one place to another without any checks.
First you need to create the target tables; their schema should match the source tables.
Open up BIDS, create a new project, and in it a new SSIS package.
In the package, create a connection for the source server and another for the destination.
Then create a data flow step, in the step create a data source for each table you want to copy from.
Connect each source to a new data destination and set the appropriate connection and table.
When done, save and do a test run.
Before the data flow step, you might like to add a SQL step that will truncate all the target tables.
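That truncate step can be a plain Execute SQL Task running something like the following (the stage table names are illustrative):
TRUNCATE TABLE dbo.stage_reports;
TRUNCATE TABLE dbo.stage_customers;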
If you're open to using tools, then what about something like Red Gate SQL Compare and Red Gate SQL Data Compare?
First I would use SQL Compare to manage the schema differences and add the new columns you want to your destination database (database2) from the source (database1). Then with SQL Data Compare you match the contents of the tables; for any columns it can't match by name, you specify how to handle them. Then you can pick and choose what data you want to copy to your destination, so you'll see what data is new and what's different (you can delete data in the destination that's not in the source, or ignore it). You can either have the tool do the work or have it generate a script for you to run when you want.
There's a 15 day trial if you want to experiment.
Seems like maybe you are looking for the replication technology offered by SQL Server Replication.
Well, if I understood your requirement correctly, you need to make database2 a replica of database1. Why not take a full backup of database1 and restore it as database2? database2 will then be exactly what database1 was at the time of the backup.
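A rough sketch of that backup/restore approach (paths and logical file names are assumptions; check the actual logical names with RESTORE FILELISTONLY first):
BACKUP DATABASE database1 TO DISK = 'C:\Backups\database1.bak';
RESTORE DATABASE database2
FROM DISK = 'C:\Backups\database1.bak'
WITH MOVE 'database1' TO 'C:\Data\database2.mdf',
     MOVE 'database1_log' TO 'C:\Data\database2_log.ldf',
     REPLACE;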

Merge SQL statement from 2 databases

I'm trying to merge two tables from two databases on two different servers.
For now, I have created a linked server on one of the servers and I use a query like this:
MERGE INTO tablename1 as T1
using linkedservername.dbname.tablename2 as T2 ON ...
WHEN MATCHED THEN
UPDATE SET ...
WHEN NOT MATCHED THEN
INSERT ...
I would like to know if there is a solution to do that without creating a linked server.
There are three general ways to do this in SSIS. But there is a lot more information if you check online.
Either way, you first need to create a connection manager in SSIS pointing directly at the remote server, so the linked server isn't needed at all. Start with that.
Then create a data flow task where you select from dbname.tablename2 in a data flow source.
Then you can do it a few ways:
A. Staging Table
Dump that result into a staging table then run your merge statement locally in a subsequent SQL Task. This is usually the quickest (and simplest) way unless you aren't allowed to create tables/data in the target.
B. Lookup
Use a lookup in your data flow to identify whether the record exists or not, followed by an OLE DB Destination (for inserts) or an OLE DB Command (for updates).
This is generally slow because both the lookup and update are inefficient.
C. Row-level merge
Feed the result into an OLE DB Command and put your MERGE directly in there.
This is probably the slowest.
If you want more info, get your connection manager sorted and post back.

Synchronizing the table structure between two sql server databases

I'm using two databases, test and prod, in the same SQL Server instance. The databases share the same structure but contain different data. Is there an easy way to synchronize the structure between the two, so that if I modify a table in test the same table in prod is updated automatically as well?
You could write a trigger on the table in test which uses a MERGE to update the table in prod...
I would be careful of persisting changes made in a test environment to a prod environment automatically though.

How do I recreate a VIEW as a local table in SQL Server?

I am using Microsoft SQL Server Management Studio and have access to a bunch of views without the original tables that the views depend on. I have copied some data from one of these views into a file and would like to import it into a database that I created locally to do some analysis.
The brute-force way of doing this is to manually write the CREATE TABLE statement by looking at the columns in the view, but is there a better way to get a CREATE TABLE or CREATE VIEW statement that I can use directly to recreate a similar table on my localhost?
Create a linked server in your localhost to this server. Then use (while connected to localhost)
SELECT * INTO NewTableName FROM LinkedServer.DBName.SchemaName.View
and a new table in your current DB in localhost would be created.
What I typically prefer to do is use SSIS for data transforms. The first step in the package would be to grab the definition using a SELECT INTO ... WHERE 1=0 so that it doesn't bring over any data and minimizes the locking time (SELECT INTO statements result in database-wide locks). Then, once you have the resulting table with the source view's definition, copy the data over.
If you're afraid the view's definition can change, stick with an INSERT INTO ... SELECT * FROM SQL task. Otherwise, save the definition that you retrieved from the SQL above and create the table (if it does not already exist). Then use a data flow task to transfer the data over.
With either of these approaches, you avoid the potential double hop scenario (if you're using Windows authentication). It's also reusable in a SQL agent job if you need to do this periodically. Otherwise, it may be a little overkill.
Or you can just run the first part in SSMS, but I definitely recommend using the WHERE 1=0 and then an INSERT INTO rather than a straight SELECT INTO, again to minimize database locking.
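For example, a rough sketch of that pattern, assuming the source is reachable as a linked server named RemoteServer and the view is SourceDB.dbo.SourceView (all names are illustrative):
-- create an empty local table with the view's column definitions, no rows
SELECT *
INTO dbo.LocalCopy
FROM RemoteServer.SourceDB.dbo.SourceView
WHERE 1 = 0;
-- then load the data in a separate step
INSERT INTO dbo.LocalCopy
SELECT *
FROM RemoteServer.SourceDB.dbo.SourceView;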