Can we create one datafile for two tablespaces in Oracle? - sql

There is no way to store one data file in two tablespaces. But when creating an IOT (index-organized table) in Oracle, we can give the overflow property a different tablespace!
Usually a data file contains tables, including IOTs. So how can one table (an IOT) be pointed at two tablespaces? Consider the following code:
CREATE TABLE admin_docindex(
    token char(20),
    doc_id NUMBER,
    token_frequency NUMBER,
    token_offsets VARCHAR2(2000),
    CONSTRAINT pk_admin_docindex PRIMARY KEY (token, doc_id))
ORGANIZATION INDEX
TABLESPACE admin_tbs
PCTTHRESHOLD 20
OVERFLOW TABLESPACE admin_tbs2;

One segment in Oracle will be stored in exactly one tablespace. But one object may comprise many different segments. For example, if you have a partitioned table, each partition is a separate segment, each of which may be stored in a different tablespace. Each LOB in a table is a separate segment that can potentially be stored in a different tablespace. And, in your case, the row overflow area is a separate segment from the one storing the main table data.
The various scenarios where a table can comprise multiple segments were discussed over on the DBA stack yesterday.
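To see this in practice, one could query the data dictionary after creating the IOT above. This sketch assumes the default naming Oracle uses for IOT segments: the table's data lives in the index segment named after the primary key constraint, and the overflow lands in a segment named SYS_IOT_OVER_<object_id>.
SELECT segment_name, segment_type, tablespace_name
  FROM user_segments
 WHERE segment_name = 'PK_ADMIN_DOCINDEX'   -- the IOT's index segment
    OR segment_name LIKE 'SYS_IOT_OVER%';   -- its overflow segment
-- Expect two rows: one in ADMIN_TBS, one in ADMIN_TBS2.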

Related

What's the best way to create a unique repeatable identification column with rows that are nearly identical?

I'm writing a stored procedure that links together data from several different relational tables based on the primary key for the main table. This information is being sent to a flat database. The stored procedure will produce several nearly identical rows where only a single column may differ, due to multiple entries in some of the tables that are linked to a single entry in the main table. I need to uniquely identify each row in the stored procedure output, but I am unable to use the primary key from the main table since there will be multiple entries for each "key".
I considered taking the approach of using the primary key from the main table followed by each of the columns that may differ in duplicate rows.
However, this approach results in a very long and messy key. I am unable to use a GUID because if any data changes in the relational database the stored procedure is rerun and must update old entries rather than create new ones.
If your purpose is only to have a unique key that is as short as possible and does not relate to anything else, consider just adding ROW_NUMBER() to your select.
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS RowId, othercolumns
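One caveat: if the numbering must also be repeatable across reruns, as the question requires, ORDER BY (SELECT NULL) is not deterministic. A sketch with hypothetical table and column names, ordering by the main key plus the columns that can differ, keeps the numbering stable for the same input:
SELECT ROW_NUMBER() OVER (ORDER BY m.MainKey, d.DetailValue) AS RowId,
       m.MainKey,
       d.DetailValue
FROM   MainTable AS m
JOIN   DetailTable AS d
       ON d.MainKey = m.MainKey;  -- same input order => same RowId values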

What happens to indexes when using ALTER SCHEMA to transfer a table?

I have a massive job that runs nightly, and to have a smaller impact on the DB it runs on a table in a different schema (EmptySchema) that isn't in general use, and is then swapped out to the usual location (UsualSchema) using
ALTER SCHEMA TempSchema TRANSFER UsualSchema.BigTable
ALTER SCHEMA UsualSchema TRANSFER EmptySchema.BigTable
ALTER SCHEMA EmptySchema TRANSFER TempSchema.BigTable
Which effectively swaps the two tables.
However, I then need to set up indexes on the UsualSchema table. Can I do this by disabling them on the UsualSchema table and then re-enabling them once the swap has taken place? Or do I have to create them each time on the swapped out table? Or have duplicate indexes in both places and disable/enable them as necessary (leading to duplicates in source control, so not ideal)? Is there a better way of doing it?
There's one clustered index and five non-clustered indexes.
Thanks.
Indexes, including those that support constraints, are transferred by ALTER SCHEMA, so you can have them in place on both the source and target tables before the swap.
Constraint names are schema-scoped (based on the table's schema), while other index names are scoped by the table or view itself. It is therefore possible to have identical index names in the same schema on different tables; constraint names, however, must be unique within the schema.
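A hedged sketch of how this plays out (index and column names here are illustrative, not from the question): build the indexes on the staging table once; they travel with the table through each transfer, so nothing needs rebuilding after the swap.
CREATE UNIQUE CLUSTERED INDEX CIX_BigTable ON EmptySchema.BigTable (Id);
CREATE NONCLUSTERED INDEX IX_BigTable_Col1 ON EmptySchema.BigTable (Col1);

ALTER SCHEMA TempSchema  TRANSFER UsualSchema.BigTable;
ALTER SCHEMA UsualSchema TRANSFER EmptySchema.BigTable;
ALTER SCHEMA EmptySchema TRANSFER TempSchema.BigTable;
-- Both indexes now belong to UsualSchema.BigTable.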

Creating tables with one to many relationships or just a table with a single column

If I have multiple key value pairs in Azure Blob Storage such as:
-/files/key1
-/files/key2
-/files/key3
And each key is uploaded by a user, but a user can upload multiple keys, what is the best table design in my SQL database to reference what keys are associated with what user?
A) Table with single column - Every time I add a file to blob storage, I add a row to a single-column table with the username and the associated key value, e.g.:
AssociationColumn
-User1+key1
-User2+key2
-User1+key3
Will this be slow when looking up all the keys for User1, for example, if I query using some sort of "starts with" pattern match? Will making this two columns, with the user in one column and the key in another, affect performance at all? How can I achieve this one-to-many relationship?
Also, is it bad to store keys using an identifier such as 1-2-n? Any suggestions on how to create unique identifiers that can fit in the space of varchar(MAX)?
The correct approach in a relational database is to have a junction table. This would have at least two columns:
User
Key
You wouldn't put these in a single column -- I don't think even in Azure.
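A minimal sketch of that junction table (names and sizes are illustrative; note the key columns need bounded types rather than varchar(MAX) so they can participate in the primary key):
CREATE TABLE UserFiles (
    UserName varchar(128) NOT NULL,
    FileKey  varchar(400) NOT NULL,
    CONSTRAINT PK_UserFiles PRIMARY KEY (UserName, FileKey)
);

-- Finding all keys for one user is then an indexed seek,
-- with no prefix or regex matching on a combined column:
SELECT FileKey FROM UserFiles WHERE UserName = 'User1';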

A single table that represents multiple tables

I have a problem finding a way to represent multiple hash tables in a single table.
Say I have 3 tables with the format:
Table1(Table1_PK1,Table1_PK2,Table1_PK3,Table1_Hash)
Table2(Table2_PK1,Table2_PK2,Table2_Hash)
Table3(Table3_Pk1,Table3_PK2,Table3_PK3,Table3_PK4,Table3_PK5,Table3_Hash)
Table1_PK1,Table1_PK2,Table1_PK3... are columns and they might have different datatypes (VARCHAR, INT or DATETIME ...).
My question is whether there is a way to create a single table (with a fixed number of columns) that can represent all of these 3 tables (maybe more in practice).
I am trying to do this for my database tool. Each table is actually a table that contains primary keys and the hash data associated with them.
Since you're apparently building a database tool, not a database, it might make more sense to do this in application code rather than in a database table.
In a different answer, you commented
I am still looking for a dynamic way to do it without knowing how many primary keys a table can have.
A table can have only one primary key. That primary key can consist of more than one column, though. (You already knew this; you were just using the wrong words, which might confuse others.)
A table can also have an arbitrary number of other keys, which will be either declared (as NOT NULL UNIQUE) or "undeclared" (by creating an index that guarantees uniqueness over a set of columns).
You can look all that up at run time in one or both of two ways:
System tables, sometimes called system catalogs
information_schema views
As far as I know, all modern SQL platforms implement at least one of these interfaces. The information_schema views are covered in the SQL standards, but there seems to be some room for interpretation. They don't look quite the same on all platforms.
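For example, a hedged sketch using the standard information_schema views (this should work on PostgreSQL and SQL Server; column sets vary slightly by platform) to list every table's primary key columns at run time:
SELECT kcu.table_name,
       kcu.column_name,
       kcu.ordinal_position
FROM   information_schema.table_constraints tc
JOIN   information_schema.key_column_usage kcu
       ON  kcu.constraint_name = tc.constraint_name
       AND kcu.table_schema    = tc.table_schema
WHERE  tc.constraint_type = 'PRIMARY KEY'
ORDER BY kcu.table_name, kcu.ordinal_position;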
Why combine the 3 tables into one? Would be really bad db design. But here's a way to do it:
The one table will have a column for each of the 3 tables' columns you want in the final table. I am making the assumption that TableX_Hash is the same type, so that remains as one unique column:
Table_All_in_One (
Table1_PK1,
Table1_PK2,
Table1_PK3,
# space just for clarity of grouping
Table2_PK1,
Table2_PK2,
Table3_PK1,
Table3_PK2,
Table3_PK3,
Table3_PK4,
Table3_PK5,
TableX_Hash # Assuming all the _Hash'es are the same type+length,
# otherwise, add Table1_Hash, Table2_Hash, Table3_Hash
# This can be your new primary key
)
The primary keys (PKx) are required to be non-NULL only in their own tables; in this table they have to allow NULLs. The idea is that each row of this new table holds the data for only one of the original tables, and the other columns stay empty for that row. If you want to associate the row of one table with another, you can add that to the same row, or add FK_Table1_Hash, FK_Table2_Hash and FK_Table3_Hash columns which refer to the TableX_Hash value of a record.
PS: I wonder if what you are really looking for is a View and not this really bad all-in-one table.
Edit: Combining them into one "without knowing how many primary keys a table can have." as per your comment:
Store all the _PKs concatenated into one column:
Table_All_in_One (
New_PK,
TableX_Hash,
Table1_PKx, # Concatenated PKs of Table1
Table2_PKx, # Concatenated PKs of Table2, etc.
...,
# OR just one
TableX_PKs, # concatenate all the PK's into one VARCHAR field
# Add a pipe `|` between them optionally.
Table_Num # If using just one, then you'll need to store the table number
)
You will not be able to conveniently pick records based on part of their composite primary key. It will always have to be TableX_PKs = CONCAT_WS('|', Table1_PK1, Table1_PK2, ...). So your only dependency is the number of PK columns in the original table.
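For instance, a lookup against the concatenated form always has to rebuild the whole key (MySQL's CONCAT_WS shown, as in the answer; the literal values are hypothetical):
SELECT TableX_Hash
FROM   Table_All_in_One
WHERE  Table_Num  = 1
  AND  TableX_PKs = CONCAT_WS('|', 'some-varchar-pk', '42', '2015-06-01');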
In order to model a bunch of tables you will need 3 tables. An entity table, sometimes called a factor table, contains the names of the tables you wish to set up this way. A factor_detail table contains all the columns and their associated properties for those tables. A third table, factor_detail_value, stores things like lookup values for lookup tables. I'm trying to learn more about this myself, because we use this technique at work: generate SQL on the fly for any table encoded this way, and store the data in a repository pertinent to the data itself. That way, if a table changes and you need to add a column or change a datatype, you can add a row to the factor_detail table without a database shutdown in production. In most businesses a four-hour shutdown to make a SQL table change can cost thousands of dollars. If you are dealing with insurance, for example, each additional state you sell insurance in has different requirements, and that results in table changes. We reduced our table count from over 700 in this manner, and we can make changes without a database shutdown, avoiding that loss of revenue.
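A hedged sketch of those three metadata tables (this is the classic entity-attribute-value layout; all column choices here are illustrative, not the poster's actual schema):
CREATE TABLE factor (
    factor_id  INT          PRIMARY KEY,
    table_name VARCHAR(128) NOT NULL       -- one row per modeled table
);

CREATE TABLE factor_detail (
    factor_detail_id INT          PRIMARY KEY,
    factor_id        INT          NOT NULL REFERENCES factor (factor_id),
    column_name      VARCHAR(128) NOT NULL,
    data_type        VARCHAR(64)  NOT NULL -- e.g. 'VARCHAR', 'INT', 'DATETIME'
);

CREATE TABLE factor_detail_value (
    factor_detail_id INT          NOT NULL REFERENCES factor_detail (factor_detail_id),
    lookup_value     VARCHAR(256) NOT NULL -- lookup-table contents
);

-- Adding a column to a modeled table becomes an INSERT here,
-- not an ALTER TABLE and a production shutdown.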

How to Merge Multiple Database files in SQLite?

I have multiple database files in multiple locations with exactly the same structure. I understand the ATTACH function can be used to connect multiple files to one database connection; however, this treats them as separate databases. I want to do something like:
SELECT uid, name FROM ALL_DATABASES.Users;
Also,
SELECT uid, name FROM DB1.Users UNION SELECT uid, name FROM DB2.Users ;
is NOT a valid answer, because I have an arbitrary number of database files that I need to merge. Lastly, the database files must stay separate. Does anyone know how to accomplish this?
EDIT: An answer gave me an idea: would it be possible to create a view which is a combination of all the different tables? Is it possible to query for all attached database files and the databases they "mount", and then use that inside the view query to create the "master table"?
Because SQLite imposes a limit on the number of databases that can be attached at one time, there is no way to do what you want in a single query.
If the number can be guaranteed to be within SQLite's limit (which violates the definition of "arbitrary"), there's nothing that prevents you from generating a query with the right set of UNIONs at the time you need to execute it.
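A sketch of that generate-at-runtime approach: the view body below is hypothetical output; an application would enumerate the attached databases with PRAGMA database_list and emit one SELECT per entry. (In SQLite, a view that references other attached databases must be a TEMP view.)
PRAGMA database_list;  -- one row per attached DB: seq, name, file

CREATE TEMP VIEW all_users AS
    SELECT uid, name FROM DB1.Users
    UNION ALL
    SELECT uid, name FROM DB2.Users;
    -- ...regenerated whenever the set of attached databases changes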
To support truly arbitrary numbers of tables, your only real option is to create a table in an unrelated database and repeatedly INSERT rows from each candidate:
ATTACH DATABASE '/path/to/candidate/database' AS candidate;
INSERT INTO some_table (uid, name) SELECT uid, name FROM candidate.Users;
DETACH DATABASE candidate;
Some cleverness in the schema would take care of this.
You will generally have 2 types of tables: reference tables, and dynamic tables.
Reference tables have the same content across all databases, for example country codes, department codes, etc.
Dynamic data is data that will be unique to each DB, for example time series, sales statistics, etc.
The reference data should be maintained in a master DB, and replicated to the dynamic databases after changes.
The dynamic tables should all have a DB_ID column, which would be part of a compound primary key; for example, your time series might use (db_id, measurement_id, time_stamp). You could also use a hash that includes DB_ID to generate primary keys, using the same PK generator for all tables in the DB. When merging rows from different DBs, the data will then be unique.
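For instance, a dynamic table keyed this way might look like the following sketch (table and column names are illustrative):
CREATE TABLE time_series (
    db_id          INTEGER NOT NULL,  -- identifies the source database
    measurement_id INTEGER NOT NULL,
    time_stamp     TEXT    NOT NULL,
    value          REAL,
    PRIMARY KEY (db_id, measurement_id, time_stamp)
);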
So you will have 3 types of databases:
Reference master -> replicated to all others
individual dynamic -> replicated to full dynamic
full dynamic -> replicated from reference master and all individual dynamic.
Then it is up to you how you do the replication: pseudo-realtime, or brute force (truncate and rebuild the full dynamic DB every day, or as needed).