How to create your own database in Redis?

Redis has 16 databases, numbered 0 to 15.
I want to create my own database using redis-cli.
Is there any command for it?

A Redis database is not the equivalent of a database name in a DBMS like MySQL.
It is a way to create isolation and namespacing for keys, and it only provides index-based naming, not custom names like my_database.
By default, Redis has database indexes 0 to 15; you can change that number with the
databases NUMBER directive in redis.conf.
You then use the SELECT command to switch to the database you want to work on.
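A quick redis-cli session makes the namespacing visible (the key name here is just an example):
127.0.0.1:6379> SET greeting "hello"
OK
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> GET greeting
(nil)
127.0.0.1:6379[1]> SELECT 0
OK
127.0.0.1:6379> GET greeting
"hello"
Note how the prompt shows the selected index once you leave database 0.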

You don't create a database in Redis with a command - the number of databases is defined in the configuration file with the databases directive (the default value is 16). To switch between the databases, call SELECT.

Use select, for example:
select 1
select 2
...

I found this relevant when I encountered the same question:
Redis different selectable databases are a form of namespacing: all the databases are anyway persisted together in the same RDB / AOF file. However different databases can have keys having the same name, and there are commands available like FLUSHDB, SWAPDB or RANDOMKEY that work on specific databases.
In practical terms, Redis databases should mainly be used in order to,
if needed, separate different keys belonging to the same application, and
not in order to use a single Redis instance for multiple unrelated
applications.
Read more here: https://redis.io/commands/select
For the question of how to select a "database", it is the same answer given here:
select 1
And since RDB / AOF persistence was mentioned, here is some useful material on it: https://redis.io/topics/persistence

Related

Why does Redis have numeric keys for database names and why are these capped to 16?

I'm curious as to why Redis was designed this way. Is this a performance consideration? Or to simply limit database names?
This is configurable in redis.conf
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
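For example, to allow 32 databases you would set the directive and restart, then verify the value at runtime (32 is just an example value):
databases 32
127.0.0.1:6379> CONFIG GET databases
1) "databases"
2) "32"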
However, the creator of Redis no longer recommends using multiple numbered databases (and therefore the SELECT command).

PostgreSQL dump with data restriction

I'm working on a fast way to make a clone of a database to test an application. My database has some specific tables that are quite big (50+ GB), but the large majority of the tables only have a few MB. On my current server, the dump + restore takes some hours. These big tables have date fields.
With that context in mind, my question is: is it possible to apply some kind of restriction on table rows to select the data that is being dumped? E.g., for table X, only dump the rows whose date is Y.
If this is possible, how can I do it? If it's not possible, what would be a good alternative?
You can use COPY (SELECT whatever FROM yourtable WHERE ...) TO '/some/file' to limit what you export.
COPY command
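For example, a sketch assuming one of the big tables is called measurements with a created_at date column (both names are invented for illustration):
-- COPY ... TO a server-side file requires superuser rights;
-- from psql, \copy writes a client-side file instead.
COPY (
    SELECT * FROM measurements
    WHERE created_at >= DATE '2024-01-01'
) TO '/some/file.csv' WITH (FORMAT csv, HEADER);
-- reload on the test server:
COPY measurements FROM '/some/file.csv' WITH (FORMAT csv, HEADER);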
You could use row level security and create a policy that lets the dumping database user see only those rows that you want to dump (make sure that that user is neither a superuser nor owns the tables, because these users are exempt from row level security).
Then dump the database with that user, using the --enable-row-security option of pg_dump.
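A minimal sketch of the row level security approach, reusing the hypothetical measurements table and an invented dump role named dumper:
-- the policy exposes only the rows you want in the dump
CREATE ROLE dumper LOGIN;
GRANT SELECT ON measurements TO dumper;
ALTER TABLE measurements ENABLE ROW LEVEL SECURITY;
CREATE POLICY dump_recent ON measurements
    FOR SELECT TO dumper
    USING (created_at >= DATE '2024-01-01');
Then, from the shell (the database name mydb is also an assumption):
pg_dump --enable-row-security -U dumper -d mydb > partial.dump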

Getting data from a different database on a different server with one SQL Server query

Server1: Prod, hosting DB1
Server2: Dev, hosting DB2
Is there a way to query databases living on two different servers with the same SELECT query? I need to bring all the new rows from Prod to Dev, using a query like the one below. I will be using SQL Server DTS (the import/export data utility) to do this.
Insert into Dev.db1.table1
Select *
from Prod.db1.table1
where table1.PK not in (Select table1.PK from Dev.db1.table1)
Creating a linked server is the only approach I am aware of for this. If you are simply trying to add all new rows from prod to dev, why not just create a backup of that one particular table, pull it into the dev environment, and then write the query against the same server and database?
Granted, this is a pain for recurring loads, but if it is a one-time thing then I would recommend doing that. Otherwise, make a linked server between the two.
To back up a single table in SQL Server, use the SQL Server Import and Export Wizard. Select the prod database as your data source, select only the prod table as your source table, and make a new table in the dev environment for your destination table.
This should get you what you are looking for.
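If you make the linked server instead, a minimal setup sketch run on the dev server could look like this (the name PROD and the host name are assumptions):
-- register the production server under the name PROD
EXEC sp_addlinkedserver
    @server = N'PROD',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'prod-host-name';
-- map the current login to itself on the remote side
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'PROD',
    @useself = N'TRUE';
The query from the question then works with four-part names, e.g. PROD.db1.dbo.table1.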
You say you're using DTS; the modern equivalent would be SSIS.
Typically you'd use a data flow task in an SSIS package to pull all the information from the live system into a staging table on the target, then load it from there. This is a pretty standard operation when data warehousing.
There are plenty of different approaches to save you copying all the data across (e.g. use a timestamp, use rowversion, use Change Data Capture, make use of the fact your primary key only ever gets bigger, etc. etc.) Or you could just do what you want with a lookup flow directly in SSIS...
The best approach will depend on many things: how much data you've got, what data transfer speed you have between the servers, your key types, etc.
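For instance, the "primary key only ever gets bigger" idea reduces to a single statement run on the dev server (the linked server name PROD and identical table schemas are assumptions):
-- only rows added since the last run cross the wire,
-- unlike the NOT IN version, which rescans everything
INSERT INTO db1.dbo.table1
SELECT s.*
FROM PROD.db1.dbo.table1 AS s
WHERE s.PK > (SELECT COALESCE(MAX(PK), 0) FROM db1.dbo.table1);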
When your servers are all in one Active Directory and you use Windows Authentication, all you need is an account that has the proper rights on all the databases!
You can then simply reference all tables like server.database.schema.table
For example:
insert into server1.db1.dbo.tblData1 (...)
select ... from server2.db2.dbo.tblData2;

BTEQ scripts to copy data between two Teradata servers

How do I copy data from multiple tables within one database to another database residing on a different server?
Is this possible through a BTEQ Script in Teradata?
If so, provide a sample.
If not, are there other options to do this other than using a flat-file?
This is not possible with BTEQ directly, since, as you mention, the two databases reside on different servers.
There are two solutions for this.
ARCMAIN - you use an ARCMAIN backup first, which creates files containing the data from your tables, and then an ARCMAIN restore, which restores the data from those files.
TPT - Teradata Parallel Transporter. This is a very advanced tool. It does not create any files like ARCMAIN; it moves the data directly between the two Teradata servers. (Wikipedia)
If I am understanding your question, you want to move a set of tables from one DB to another.
You can use the following syntax in a BTEQ Script to copy the tables and data:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH DATA AND STATS;
Or just the table structures:
CREATE TABLE <NewDB>.<NewTable> AS <OldDB>.<OldTable> WITH NO DATA AND NO STATS;
If you get real savvy you can create a BTEQ script that dynamically builds the above statement in a SELECT statement, exports the results, then in turn runs the newly exported file all within a single BTEQ script.
There are a bunch of other options that you can do with CREATE TABLE <...> AS <...>;. You would be best served reviewing the Teradata Manuals for more details.
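A rough sketch of that dynamic trick, with the logon placeholder and database names left as assumptions (DBC.TablesV is the dictionary view that lists tables):
.LOGON tdpid/user,password
.EXPORT REPORT FILE = build_copies.btq
SELECT 'CREATE TABLE NewDB.' || TRIM(TableName) ||
       ' AS OldDB.' || TRIM(TableName) ||
       ' WITH DATA AND STATS;' (TITLE '')
FROM DBC.TablesV
WHERE DatabaseName = 'OldDB' AND TableKind = 'T';
.EXPORT RESET
.RUN FILE = build_copies.btq
.LOGOFF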
There are a few more options which will allow you to copy from one table to another.
Possibly the simplest way would be to write a smallish program that uses one of the communication layers (ODBC, .NET Data Provider, JDBC, CLI, etc.) to run a SELECT statement against one server and feed the rows to an INSERT statement on the other. This would require some work, but it would have less overhead than trying to learn how to write TPT scripts, and you would not need any 'DBA' permissions to write your own.
Teradata also sells other applications that hide the complexity of some of these tools. Teradata Data Mover provides an abstraction layer over tools like ARCMAIN and TPT. Access to this tool is most likely restricted to DBA types.
If you want to move data from one server to another, you can do it with a flat file.
First, export the data from the source table to a flat file through a utility such as BTEQ or FastExport.
Then load that file into the target table with the help of MultiLoad, FastLoad, or BTEQ scripts.
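A hedged sketch of that round trip in plain BTEQ, with invented table and column names (FastExport/FastLoad would be the faster choice for the truly large tables):
On the source server:
.LOGON prod_tdpid/user,password
.EXPORT DATA FILE = table1.dat
SELECT id, name FROM db1.table1;
.EXPORT RESET
.LOGOFF
On the target server:
.LOGON dev_tdpid/user,password
.IMPORT DATA FILE = table1.dat
.REPEAT *
USING (id INTEGER, name VARCHAR(50))
INSERT INTO db1.table1 (id, name) VALUES (:id, :name);
.LOGOFF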

Partitioning a database table in MySQL

I am writing a data warehouse, using MySQL as the back-end. I need to partition a table based on two integer IDs and a name string.
A more concrete example: assume that I am storing data about a school. I want to partition the school_data table on a composite 'key' made up of the following:
school_id (integer)
course_id (integer)
student_surname (string)
For the student surname, it is just the first character of the surname that determines which 'partitioned table' the data should go in to.
How may I implement this requirement using MySQL (5.1) with InnoDB tables?
Also, I am doing my development on a Windows box, but I will deploy onto a *nix box for production. I have two further questions:
I am assuming that I will have to dump and restore the data when moving from Windows to Linux. I don't know whether this is OK if the database contains partitioned tables (a pointer to where the documentation states this would put my mind at rest; I have not been able to find any specific mention of dump/restore regarding partitioned tables).
I may also need to change databases (if Oracle pulls a surprise move on MySQL users) in which case I will need to SOMEHOW export the data into another database. In this (hopefully unlikely scenario) - what will be the best way to dump data out of MySQL (maybe to text or something) bearing in mind the partitioned table?
RANGE Partitioning
A table that is partitioned by range is partitioned in such a way that each partition contains rows for which the partitioning expression value lies within a given range.
Note that plain RANGE partitioning only accepts an integer partitioning expression, so partitioning on a string column such as the surname needs RANGE COLUMNS, which was introduced in MySQL 5.5 (on 5.1 you would have to map the first letter to an integer column yourself):
CREATE TABLE school_data (
    school_id INT NOT NULL,
    course_id INT NOT NULL,
    student_surname VARCHAR(64) NOT NULL
)
PARTITION BY RANGE COLUMNS (student_surname) (
    PARTITION p0 VALUES LESS THAN ('f'),
    PARTITION p1 VALUES LESS THAN ('p'),
    PARTITION p2 VALUES LESS THAN ('u'),
    PARTITION p3 VALUES LESS THAN (MAXVALUE)
);
Range partitioning
Data Migration to Another DB
mysqldump will output the table and data to a file. However, Oracle supports connecting to other databases via ODBC, just as SQL Server has its linked server capability.
Addendum
It looks like you are partitioning by only one of the 3 fields I mentioned (i.e. name). I saw partitioning by a single field in the MySQL docs, but not 3 fields (int, int, string) like I want to do.
Partitioning by three columns is possible, but my example is per your requirements in the OP:
For the student surname, it is just the first character of the surname that determines which 'partitioned table' the data should go in to.
How may I implement this requirement using MySQL (5.1) with InnoDB tables?
Have a look at the Chapter 18. Partitioning of MySQL documentation and especially the Partition Types (I'd look at the HASH partitioning). But keep in mind that the partitioning implementation in MySQL 5.1 is still undergoing development and there are some limitations and restrictions.
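Of the partition types available in 5.1, KEY partitioning is the one that accepts a multi-column list including strings (MySQL hashes the columns internally, so no integer expression is required). A sketch of the composite requirement, with column definitions taken from the question:
CREATE TABLE school_data (
    school_id INT NOT NULL,
    course_id INT NOT NULL,
    student_surname VARCHAR(64) NOT NULL,
    PRIMARY KEY (school_id, course_id, student_surname)
) ENGINE=InnoDB
PARTITION BY KEY (school_id, course_id, student_surname)
PARTITIONS 8;
Note that every unique key on a partitioned table, including the primary key, must contain all of the partitioning columns, hence the composite primary key here.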
I am assuming that I will have to dump and restore the data when moving from Windows to Linux. I don't know whether this is OK if the database contains partitioned tables (a pointer to where the docs state this would put my mind at rest; I have not been able to find any specific mention of dump/restore regarding partitioned tables).
I didn't find anything in 18.3 Partition Management but, according to this post, backing up and restoring a partitioned table is nothing special. To back up:
mysqldump --opt db_name table_name > file.dump
And to restore:
mysql db_name < file.dump
I would do some testing though.
I may also need to change databases (if Oracle pulls a surprise move on MySQL users) in which case I will need to SOMEHOW export the data into another database. In this (hopefully unlikely) scenario, what will be the best way to dump data out of MySQL (maybe to text or something) bearing in mind the partitioned table?
Oracle SQL Developer incorporates migration support by including redeveloped features and greatly extending the functionality and usability offered by the original Oracle Migration Workbench to migrate Microsoft Access, Microsoft SQL Server, MySQL and Sybase databases to Oracle.