I have a MyDNS server v1.2.8.31 running on PostgreSQL, and I want to write a TXT record for DKIM into the database without using admin.php or other tools.
How do I use rr.edata and rr.edatakey in MyDNS for DKIM?
Which psql queries are needed to insert the data correctly?
How do I enable rr.edata in MyDNS, and what should I do with the rr.data field?
You should first activate this option in your mydns.conf:
extended-data-support = yes
After that you can recreate your database structure with:
mydns --create-tables | mysql -u root -p mydns
If you already have data, then adjust the MySQL schema with:
alter table rr add column edata blob;
alter table rr add column edatakey char(32) DEFAULT NULL;
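Since the question mentions PostgreSQL, the equivalent statements there would presumably be (an untested sketch; bytea stands in for MySQL's blob type):
alter table rr add column edata bytea;
alter table rr add column edatakey char(32) DEFAULT NULL;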
To use it directly in your own code, you should check whether the value is longer than the data field. If it is, split the value: the first chunk goes into the classic data field, and the rest goes into the edata field (which is a BLOB, so it can be very long). You should also store the MD5 sum of the edata part in edatakey.
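For example, an insert for a long DKIM key might look like this (a minimal sketch in MySQL syntax to match the commands above; the zone id, selector name, TTL and the 128-character split point are assumptions, so adjust them to your rr table; psql offers substr() and md5() for the same job):
-- full TXT value; the key below is a placeholder
SET @txt = 'v=DKIM1; k=rsa; p=<your-long-public-key>';
INSERT INTO rr (zone, name, type, data, edata, edatakey, aux, ttl)
VALUES (
    1,                          -- soa.id of the zone (assumption)
    'default._domainkey',       -- DKIM selector record
    'TXT',
    SUBSTRING(@txt, 1, 128),    -- first chunk into the classic data field
    SUBSTRING(@txt, 129),       -- remainder into edata
    MD5(SUBSTRING(@txt, 129)),  -- edatakey = md5 of the edata part
    0,
    86400
);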
If you need help, you can consult the code of admin.php provided in the contrib directory of the source package.
I found in the ClickHouse documentation that column manipulations have some limitations.
For tables that don’t store data themselves (such as Merge and Distributed), ALTER just changes the table structure, and does not change the structure of subordinate tables. For example, when running ALTER for a Distributed table, you will also need to run ALTER for the tables on all remote servers.
And here I have a question: do you have a solution to run it automatically? I have 4 servers running in containers, and I don't want to log in to each one and manually execute commands like ALTER, etc.
Run ALTER TABLE db.table ADD COLUMN ... ON CLUSTER 'cluster-name' twice:
first for the underlying Engine=ReplicatedMergeTree(...) table, and then for the Engine=Distributed(...) table.
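A sketch, assuming the Distributed table db.table sits on top of a local table db.table_local and the cluster is named 'my_cluster' (all of these names and the column are placeholders):
ALTER TABLE db.table_local ON CLUSTER 'my_cluster' ADD COLUMN new_col String;
ALTER TABLE db.table ON CLUSTER 'my_cluster' ADD COLUMN new_col String;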
Hmm, just expose a port on each container and write a script that goes through them and runs the command?
In Python, ClickHouse has a driver:
from clickhouse_driver import Client
for port in (8090, 8091, 8092, 8093):  # one exposed port per container (example values)
    client = Client('localhost', port=port, user='admin', password='admin')
    client.execute('ALTER TABLE db.table ADD COLUMN new_col String')  # the command you need to run
And just iterate over whatever ports your containers actually expose.
I want to load a mysqldump file into a server.
While loading the dump, I want to change a few column values and update the schema.
For example, the guid column was defined as varchar(100), and I now want to change it to binary(16), which means I need to change both the table schema and the table values.
Can I make these changes while loading the dump file into the new server?
Thanks
No, basically you can't do anything while loading the dump. As mentioned in the comments, you have two options:
Edit the SQL in the dump file.
Load the dump and afterwards execute a script with the needed fixes.
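For the second option, the post-load fix could look something like this (a sketch; the table and column names are placeholders, and it assumes the guid values are standard 36-character UUID strings):
ALTER TABLE my_table ADD COLUMN guid_bin BINARY(16);
UPDATE my_table SET guid_bin = UNHEX(REPLACE(guid, '-', ''));
ALTER TABLE my_table DROP COLUMN guid,
                     CHANGE guid_bin guid BINARY(16);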
If you have access to the initial database, you can also produce another dump with the needed changes.
I have a table called units, which exists in two separate schemas within the same database (we'll call them old_schema, and new_schema). The structure of the table in both schemas are identical. The only difference is that the units table in new_schema is presently empty.
I am attempting to export the data from this table in old_schema and import it into new_schema. I used pg_dump to handle the export, like so:
pg_dump -U username -p 5432 my_database -t old_schema.units -a > units.sql
I then attempted to import it using the following:
psql -U username -p 5432 my_database -f units.sql
Unfortunately, this appeared to try to reinsert the data back into the old_schema. Looking at the generated sql file, it seems there is a line which I think is causing this:
SET search_path = mysql_migration, pg_catalog;
I can, in fact, alter this line to read
SET search_path = public;
And this does prove successful, but I don't believe this is the "correct" way to accomplish this.
Question: When importing data via a script generated through pg_dump, how can I specify in to which schema the data should go without altering the generated file?
There are two main issues here based on the scenario you described.
The difference in the schemas, to which you alluded.
The fact that by dumping the whole table via pg_dump, you're dumping the table definition also, which will cause issues if the table is already present in the destination schema.
To dump only the data, if the table already exists in the destination database (which appears to be the case based on your scenario above), you can dump the table using pg_dump with the --data-only flag.
Then, to address the schema issue, I would recommend doing a search/replace (sed would be a quick way to do it) on the output sql file, replacing old_schema with new_schema.
That way, it will apply the data (which is all that would be in the file, not the table definition itself) to the table in new_schema.
If you need a solution on a broader level, say, to support dynamically named schemas, you can use the same search/replace trick with sed, but instead of replacing the schema name with new_schema, replace it with some placeholder text, say, $$placeholder_schema$$ (something highly unlikely to appear as a token elsewhere in the file). Then, when you need to apply that file to a particular schema, use the original file as a template: copy it and modify the copy with sed or similar, replacing the placeholder token with the desired on-the-fly schema name.
You can set some options for psql on the command line, such as --set AUTOCOMMIT=off; however, a similar approach with SEARCH_PATH does not appear to have any effect, because search_path is a server setting rather than a psql variable.
Instead, it needs the SQL form SET search_path TO <path>, which can be specified with the -c option, but not in combination with -f (it's either/or).
Given that, I think modifying the file with sed is probably the best all-around option in this case for use with -f.
Is there a way to change a table's database in Hive or HCatalog?
For instance, I have the table foo in the database default, and I want to put this table in the database bar. I tried this, but it doesn't work:
ALTER TABLE foo RENAME TO bar.foo
Thanks in advance
AFAIK there is no way in HiveQL to do this. A ticket was raised long back, but the issue is still open.
An alternative could be to use the EXPORT/IMPORT feature provided by Hive. With this feature we can export the data of a table to an HDFS location along with its metadata using the EXPORT command (the metadata is stored in JSON format). Data exported this way can then be imported into another database (even another Hive instance) using the IMPORT command.
More on this can be found in the Hive IMPORT/EXPORT manual.
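For example, to move foo from default to bar (the HDFS path is just an example):
EXPORT TABLE default.foo TO '/tmp/foo_export';
USE bar;
IMPORT FROM '/tmp/foo_export';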
HTH
Thanks for your response. I found another way to change the database:
USE db1; CREATE TABLE db2.foo LIKE foo
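Note that CREATE TABLE ... LIKE only copies the table definition, not the data, so if the data has to move too you would presumably still need something like:
INSERT INTO TABLE db2.foo SELECT * FROM db1.foo;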
I have two SQL Servers (both 2005 version).
I want to migrate several tables from one to another.
I have tried:
On the source server I right-clicked the database and selected Tasks / Generate Scripts.
The problem is that under Table/View options there is no Script data option.
Then I used Script Table As/Create script to generate SQL files in order to create the tables on my destination server. But I still need all the data.
Then I tried using:
SELECT *
INTO [destination server].[destination database].[dbo].[destination table]
FROM [source server].[source database].[dbo].[source table]
But I get the error:
Object contains more than the maximum number of prefixes. Maximum is 2.
Can someone please point me to the right solution to my problem?
Try this:
create your table on the target server using your scripts from the Script Table As / Create Script step
on the target server, you can then issue a T-SQL statement:
INSERT INTO dbo.YourTableNameHere
SELECT *
FROM [SourceServer].[SourceDatabase].dbo.YourTableNameHere
This should work just fine.
Just to show yet another option (for SQL Server 2008 and above):
right-click on Database -> select 'Tasks' -> select 'Generate Scripts'
Select specific database objects you want to copy. Let's say one or more tables. Click Next
Click Advanced and scroll down to 'Types of Data to script' and choose 'Schema and Data'. Click OK
Choose where to save generated script and proceed by clicking Next
If you don't have permission to link servers, here are the steps to import a table from one server to another using Sql Server Import/Export Wizard:
Right click on the source database you want to copy from.
Select Tasks - Export Data.
Select Sql Server Native Client in the data source.
Select your authentication type (Sql Server or Windows authentication).
Select the source database.
Next, choose the Destination: Sql Server Native Client
Type in your Server Name (the server you want to copy the table to).
Select your authentication type (Sql Server or Windows authentication).
Select the destination database.
Select Copy data.
Select your table from the list.
Hit Next and select Run immediately; optionally, you can save the package to a file or to Sql Server if you want to run it later.
Finish
There is a script-data option in Tasks / Generate Scripts! I also missed it at the beginning! You can generate INSERT scripts there (a very nice feature, in a very unintuitive place).
When you get to the "Set Scripting Options" step, go to the "Advanced" tab.
The steps are described here (the pictures should be understandable, even though I wrote the post in Latvian).
Try using the SQL Server Import and Export Wizard (under Tasks -> Export Data).
It offers to create the tables in the destination database. Whereas, as you've seen, the scripting wizard can only create the table structure.
If the tables are already created using the scripts, another way to copy the data is to use the BCP command to copy all the data from your source server to your destination server.
To export the table data into a text file on the source server:
bcp <database name>.<schema name>.<table name> OUT C:\FILE.TXT -c -S <server_name[\instance_name]> -U <username> -P <password>
To import the table data from the text file on the target server:
bcp <database name>.<schema name>.<table name> IN C:\FILE.TXT -c -S <server_name[\instance_name]> -U <username> -P <password>
(Use -T for a trusted Windows connection instead of -U/-P, and add -t <terminator> if you need a field terminator other than the default tab.)
For copying data from source to destination:
USE <DestinationDatabase>;
SELECT * INTO <DestinationTable> FROM <SourceDatabase>.dbo.<SourceTable>;
Just for kicks:
Since I wasn't able to create a linked server, and since just connecting to the production server was not enough to use INSERT INTO, I did the following:
created a backup of the production server database
restored the database on my test server
executed the INSERT INTO statements
It's a backdoor solution, but since I had problems with the other approaches, it worked for me.
Since I created the empty tables using SCRIPT TABLE AS / CREATE in order to transfer all the keys and indexes, I couldn't use SELECT INTO. SELECT INTO only works if the tables do not exist at the destination, but it does not copy keys and indexes, so you have to do that manually. The downside of using an INSERT INTO statement is that you have to provide all the column names manually, plus it might give you some problems if some foreign key constraints fail.
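For example, the column-list form described above might look like this (just a sketch; the table, column, and database names are placeholders, and SET IDENTITY_INSERT is only needed if the destination table has an identity column):
SET IDENTITY_INSERT dbo.DestinationTable ON;
INSERT INTO dbo.DestinationTable (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM RestoredProductionDb.dbo.SourceTable;  -- the database restored from the backup
SET IDENTITY_INSERT dbo.DestinationTable OFF;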
Thanks to all the answers; there are some great solutions, but I have decided to accept marc_s's answer.
You can't choose a source/destination server.
If the databases are on the same server and the columns of the table are equal (including order!), then you can do this:
INSERT INTO [destination database].[dbo].[destination table]
SELECT *
FROM [source database].[dbo].[source table]
If you only need to do this once, you can back up and restore the source database.
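A T-SQL sketch of that approach (the file paths and logical file names are assumptions; check them with RESTORE FILELISTONLY first):
BACKUP DATABASE SourceDb TO DISK = 'C:\Backups\SourceDb.bak';
-- copy the .bak file to the destination server, then run there:
RESTORE DATABASE SourceDb FROM DISK = 'C:\Backups\SourceDb.bak'
    WITH MOVE 'SourceDb' TO 'C:\Data\SourceDb.mdf',
         MOVE 'SourceDb_log' TO 'C:\Data\SourceDb_log.ldf';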
If you need to do this more often, I recommend starting an SSIS project, where you define the source database (you can choose any connection on any server) and build a package that moves your data.
See more information here: http://msdn.microsoft.com/en-us/library/ms169917%28v=sql.105%29.aspx
It can be done through "Import/Export Data..." in SQL Server Management Studio
This is somewhat of a workaround solution, but it worked for me and I hope it works for others with this problem as well:
Run a SELECT query on the table you want to export and save the result as an .xls file on your drive.
Now create the table you want to add the data to, with all the columns and indexes. This can easily be done by right-clicking the actual table and selecting the Script Table As > CREATE To option.
Now right-click the database where you want to add your table and select Tasks > Import Data.
The Import/Export wizard opens; click Next, select Microsoft Excel as the input data source, then browse to and select the .xls file you saved earlier.
Now select the destination server and the destination table we created already.
Note: if there is an identity-based field in the destination table, you might want to remove its identity property, as this data will also be inserted; if the column kept the identity property, the import process would error out.
Now hit Next and Finish, and it will show you how many records are being imported and report success if no errors occur.
Yet another option, if you have it available: C# .NET. In particular, the Microsoft.SqlServer.Management.Smo namespace.
I use code similar to the following in a Script Component of one of my SSIS packages.
var tableToTransfer = "someTable";
var transferringTableSchema = "dbo";
var srvSource = new Server("sourceServer");
var dbSource = srvSource.Databases["sourceDB"];
var srvDestination = new Server("destinationServer");
var dbDestination = srvDestination.Databases["destinationDB"];
var xfr =
new Transfer(dbSource) {
DestinationServer = srvDestination.Name,
DestinationDatabase = dbDestination.Name,
CopyAllObjects = false,
DestinationLoginSecure = true,
DropDestinationObjectsFirst = true,
CopyData = true
};
xfr.Options.ContinueScriptingOnError = false;
xfr.Options.WithDependencies = false;
xfr.ObjectList.Add(dbSource.Tables[tableToTransfer,transferringTableSchema]);
xfr.TransferData();
I think I had to explicitly search for and add the Microsoft.SqlServer.Smo library to the references. But outside of that, this has been working out for me.
Update: The namespace and libraries were more complicated than I remembered.
For libraries, add references to:
Microsoft.SqlServer.Smo.dll
Microsoft.SqlServer.SmoExtended.dll
Microsoft.SqlServer.ConnectionInfo.dll
Microsoft.SqlServer.Management.Sdk.Sfc.dll
For the Namespaces, add:
Microsoft.SqlServer.Management.Common
Microsoft.SqlServer.Management.Smo