How can I transfer data between different DB servers? This is a daily job, e.g.:
Insert into ServerA..table1
select data from ServerB..table2
(This is just an example; the real situation is that we select data from many servers, do some joins, and then insert into the destination.)
We cannot use SSIS, and we cannot use a linked server.
How can we do this?
BTW, this is a daily job, and the data is huge.
A simple command-line BCP script should work for you. For instance:
bcp AdventureWorks2012.Sales.Currency out Currency.dat -T -c -SServer1
bcp AdventureWorks2012.Sales.Currency in Currency.dat -T -c -SServer2
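Since you need to join data from several servers, one hedged approach (the names here are illustrative) is to bcp each source table into staging tables on the destination server and run the joins there. bcp can also export the result of a query via queryout:
bcp "SELECT c.CurrencyCode, c.Name FROM AdventureWorks2012.Sales.Currency c" queryout CurrencyQuery.dat -T -c -SServer1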
More details are in the bcp utility documentation.
The Sync Framework might be worth a look: http://msdn.microsoft.com/en-us/sync/bb736753.aspx
Look at this question:
SQL backup version is incompatible with this server
The first few options from my answer should work in your case.
You can use the .NET SqlBulkCopy class: read the rows from the source server with a SqlDataReader and stream them into the destination table.
My answer was converted into a comment, but I'm adding some more info.
I guess you are looking for this answer on SO:
What is the best way to auto-generate INSERT statements for a SQL Server table?
Once you have the code, just add USE your_databasename_where_to_copy_data at the beginning, execute, and voilà.
Edit:
As you want to do it on the fly, using code, try some of the solutions provided on this question on SO. Basically it is similar to your code proposal, with a few differences, for example:
INSERT INTO bar..tblFoobar( *fieldlist* )
SELECT *fieldlist* FROM foo..tblFoobar
I want to set up a lot of dictionaries in my ClickHouse server, and some of them aren't just plain MySQL queries to fetch the existing values; for a few of them I need to do JOINs and WHEREs, but the dictionary configuration in ClickHouse only allows me to specify which MySQL table it will read the data from.
Is it possible to set a custom MySQL query for it?
Another thing that would be helpful is to use aliases in the attribute names; that way I wouldn't be forced to use the MySQL column names later.
Thank you.
You can try using an external shell script which runs
mysql -u<user> -p<password> -h <host> -N -B -e "SELECT field AS field_alias... FROM table1 JOIN table2"
and also read this article:
https://www.altinity.com/blog/2017/4/12/dictionaries-explained
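For context, such a script would be wired in through ClickHouse's executable dictionary source; a minimal sketch of that part of the config (the path and format are illustrative, and mysql -N -B already emits header-less, tab-separated rows matching the TabSeparated format):
<source>
    <executable>
        <command>/etc/clickhouse-server/dict_query.sh</command>
        <format>TabSeparated</format>
    </executable>
</source>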
I've tried to execute the delete below through a SQL script in a Pentaho job, and I get the error
Unknown table 'a' in MULTI DELETE. Can somebody throw light on this? Is there any other way
to get around this?
DELETE a.* FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST a
WHERE EXISTS
(SELECT 1 FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST_3 b WHERE b.TM_EVENT_ID=a.TM_EVENT_ID
GROUP BY b.TM_EVENT_ID)
This is MySQL, right?
See similar solutions here - they recommend removing the table alias.
Worth noting this has nothing to do with Pentaho: if you ran it in a SQL client you'd get the same error. If you don't, then the difference is probably in the JDBC driver version; that may be worth checking.
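A hedged rewrite without the alias (the GROUP BY inside the EXISTS adds nothing and can be dropped):
DELETE FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST
WHERE EXISTS
(SELECT 1 FROM pm_report.PM_CONCERTS_GQV_REPORT_TEST_3 b
 WHERE b.TM_EVENT_ID = PM_CONCERTS_GQV_REPORT_TEST.TM_EVENT_ID);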
I can suggest these options:
- Don't use aliases. Try the statement directly in MySQL first and check that it works for you.
- Don't use Pentaho like this: make a transformation and break the query apart into steps, with a table input and a lookup, then delete the rows by row_id. It's a little bit longer, but a lot more understandable and easier to maintain.
"Don't over-optimize."
I have the following problem: I need to put, in a script that will run before the new version is rolled out, the SQL code that enables pgAgent in PostgreSQL. However, this code should be run on the maintenance database (postgres), and the script file is run against another database.
I remember that in SQL Server there is a USE command, so you could do something like:
use foo
-- some code
use bar
-- more code
Is there something similar in PostgreSQL?
You can put in your file something like:
\c first_db_name
select * from t; --- your sql
\c second_db_name
select * from t; --- your sql
...
Are you piping these commands through the psql command? If so, \c databasename is what you want.
psql documentation
You can't switch databases in Postgres in this way. You actually have to reconnect to the other database.
PostgreSQL doesn't have the USE command. You would most likely use psql with the --dbname option to accomplish this; --dbname takes the database name as a parameter. You will also want to check out the --file option, and see this link for details on the other options you can pass in: http://www.postgresql.org/docs/9.0/interactive/app-psql.html
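For example (the script name is illustrative):
psql --dbname postgres --file enable_pgagent.sql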
Well, after looking around the web for some time, I found this, which was what I needed:
http://www.postgresonline.com/journal/archives/44-Using-DbLink-to-access-other-PostgreSQL-Databases-and-Servers.html
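The gist, as a minimal sketch (it assumes the dblink module is installed; the pgAgent setup statement depends on your version, so the inner SQL is illustrative):
SELECT dblink_exec(
    'dbname=postgres',                        -- connect to the maintenance database
    'CREATE EXTENSION IF NOT EXISTS pgagent'  -- run the setup statement there
);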
I'm working in Ubuntu with MySQL, and I also have Query Browser and Administrator installed; I'm not afraid of the command line either, if it helps.
I simply want to be able to run a query and see a result set, but then convert that result set into a series of commands that could be used to create the same rows in a table with an identical schema.
I hope the question makes sense; it's quite a simple problem and one that must have been solved, but I can't for the life of me work out where this kind of conversion is made available.
Thanks in advance,
Gav
I think you need to use the command-line utility mysqldump (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html)
if you want to dump one or more tables.
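For example (the database and table names are illustrative; --no-create-info emits only the INSERT statements):
mysqldump --no-create-info -u myuser -p mydatabase mytable > mytable_inserts.sql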
If you need to dump the result of an arbitrary query and restore it later, take a look at SELECT ... INTO OUTFILE and LOAD DATA INFILE (http://dev.mysql.com/doc/refman/5.0/en/load-data.html).
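A minimal sketch of that pair (the paths and names are illustrative; the server needs the FILE privilege and write access to the path):
SELECT * FROM source_table WHERE created < '2010-01-01'
INTO OUTFILE '/tmp/query_result.txt';

LOAD DATA INFILE '/tmp/query_result.txt' INTO TABLE archive_table;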
I do not know if I understood you at all, but you can create the new table straight from a query. (MySQL does not support SQL Server's SELECT ... INTO new_table; the equivalent is CREATE TABLE ... AS SELECT.)
CREATE TABLE new_table_name AS
SELECT *
FROM old_tablename
WHERE ...
I am trying to archive some of my tables into another database on the same server. However, the INSERT INTO ... SELECT ... FROM gives me an error (SQLSTATE=42704) on build. The table exists in the second database.
Can anyone help with this?
It's not clear from your question what version of DB2 is being used. I'll presume that it's the Linux, Unix & Windows version. You look to be using federation to link the two databases.
Does the SELECT part of your query work from LS2DB001? It's worth trying to pin down which database you have the issue with.
Presuming that the problem is on LS2DB001: if the user the federated link is defined with has permissions on the base tables in the query, check also that they have permissions on the system catalog tables. Without those, DB2 cannot parse and validate the query on their behalf.
We've cracked it! If the following script is used, then it works. The LOAD works without having to COMMIT between batches of rows copied (this also solves the 'Transaction Log full...' error).
CONNECT TO LS2DB001;
-- Export the source table to an IXF file
EXPORT TO "C:\temp\TIN_TRIGGER_OUT.IXF" OF IXF
MESSAGES "C:\temp\TIN_TRIGGER_OUT.EXM"
SELECT * FROM LS2USER.TIN_TRIGGER_OUT;
CONNECT RESET;
CONNECT TO LQIFCOLD;
-- Load the file into the target table; LOAD is minimally logged,
-- so it sidesteps the transaction log
LOAD FROM "C:\temp\TIN_TRIGGER_OUT.IXF" OF IXF
MESSAGES "C:\temp\TIN_TRIGGER_OUT.IMM"
INSERT INTO LS2USER.TIN_TRIGGER_OUT COPY NO INDEXING MODE AUTOSELECT;
COMMIT;
CONNECT RESET;
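If it helps, a script like this can be run with the DB2 command line processor (the file name is illustrative):
db2 -tvf archive_tin_trigger_out.sql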
I found this on http://www.connx.com/products/connx/Connx%208.6%20UserGuide/CONNXCDD32D/DB2_SQL_States.htm:
42704 Undefined object or constraint name. Revise SQL syntax and retry.
For more help, try to be more specific, e.g. paste the full SQL statement, the table schema, etc.
You can do
Select 'insert into tblxxxx (blabla,blabal) values(' || fld1 || ',' || fld2 || ',' ...... || ')'
From tblxxxxxx
(DB2 concatenates strings with ||, not +, and non-character columns need to be cast to character first.) Copy the result as a text script and execute it in the other DB.
The best way to do this would be to create a custom script. Depending on the size of the tables (how many records), you could either select all of the data into memory and then loop over it, inserting the rows into a copy of the table that you create first, or you could export the data as a CSV file or some other text-based file and then loop over that to insert the data into the other table.
If you do not already have some sort of formal backup procedure that could do this, this would be your best bet.
Note: some DB2 systems, such as those on an iSeries, do not actually have "databases"; they have libraries. With the right user profile you can access two libraries at the same time, joining tables from them together or doing a
create table library/newFilename as
(select * from originallibrary/originalfilename) with data
but this only applies to the iSeries, I believe.
I'm writing this response as another answer so I have more space.
I can only suggest breaking the steps down to their components and working through them to see where the error is occurring. Again, I'm assuming you're using federation:
a) In your FROM db, connecting as the user you're using for the federated link, does your select work?
b) In your TO db, using the link, does the select work?
c) In your TO db, using the link via a stored proc, does the select work?
d) In your TO db, using an INSERT...values(x,y,z), can you insert into the table?
e) In your TO db, via a stored proc, using INSERT...values(x,y,z), can you insert?
Without more information, this is the best line of attack I can suggest.