I have a large query (a web dashboard query) with many temporary tables, and I have created indexes on the temp tables. The application that uses the query has a user management module with different levels of permissions. My question is: are indexes created per session, like the temp tables in tempdb?
I don't want the indexes to be shared across sessions.
I have been doing something like:
EXEC('CREATE INDEX idx_test' + @sessionId + ' ON #TempTable (id1, id2)');
Is this necessary? I have seen it done by some developers.
Indexes on temporary tables (#t, not ##t) are not shared across sessions, and there is no need to invent a unique name for an index on a temporary table.
What is different (and what you may have seen in code from other developers) is the CONSTRAINT NAME. An index name can be repeated many times for different tables, but a constraint name must be unique within the database.
So you may have seen stored procedures that build a constraint name with a reference to the session; this is an attempt to give the constraint a unique name. If you launch a stored procedure that creates a temp table #t in two sessions, each session creates its own table with its own name (not just #t; the system appends additional characters to the table name to make it unique).
But if the same proc tries to create a constraint named PK_t, the first session will succeed but the second will get an error that the constraint PK_t already exists in the database (tempdb).
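A minimal sketch of the difference (table and column names are illustrative):
-- Safe in any number of sessions: index names are scoped to the table
CREATE TABLE #t (id1 INT NOT NULL, id2 INT);
CREATE INDEX idx_test ON #t (id1, id2);
-- Risky when two sessions run it at once: PK_t must be unique within tempdb
ALTER TABLE #t ADD CONSTRAINT PK_t PRIMARY KEY (id1);
-- Collision-free alternative: leave the constraint unnamed and let the
-- system generate a unique name for it
CREATE TABLE #t2 (id1 INT PRIMARY KEY, id2 INT);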
I have a table called Table 1. I'm trying to create an after-insert trigger for Table 1 whereby, whenever a user enters a record, the trigger will create a new table named after the record that triggered its creation.
Please help, I'm using SQL Server 2008
This sounds super non-relational-database-design-ish. I would heavily advise against it in almost every case, and I say "almost" only to allow for artistic freedom of development; I can't think of a single case where this would be appropriate.
That said, if you do in fact want this, you can use dynamic SQL to create a table.
You can build the SQL in your trigger, but basically you want something like:
EXEC('CREATE TABLE ' + @tableName + ' (ID INT IDENTITY(1,1))');
Of course, the columns are up to you, but that should get you started.
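For instance, a minimal trigger sketch, assuming the table is actually named Table1 and has a RecordName column that supplies the new table's name (both names are assumptions; adjust to your schema). QUOTENAME guards against invalid or malicious identifiers:
CREATE TRIGGER trg_Table1_AfterInsert ON Table1
AFTER INSERT
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    -- Note: this handles a single-row insert; a multi-row insert would
    -- need to loop over the inserted pseudo-table.
    SELECT TOP (1) @sql = 'CREATE TABLE ' + QUOTENAME(RecordName)
                        + ' (ID INT IDENTITY(1,1));'
    FROM inserted;
    EXEC (@sql);
END;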
But while we're at it, what you should (probably) be doing is using a single table with a one-to-many relationship to the table on which your trigger is currently assigned.
For instance, if you have a Users table with a column for email and you're looking to create a table for each user's favorites on your website, you should instead consider adding an identity column for user IDs, then referencing that from a single UserFavorites table with UserId and PostId columns and the appropriate foreign keys, as sketched below.
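A minimal sketch of that design (it assumes a Posts table with primary key PostId already exists; all names are illustrative):
CREATE TABLE Users (
    UserId INT IDENTITY(1,1) PRIMARY KEY,
    Email  VARCHAR(255) NOT NULL
);
CREATE TABLE UserFavorites (
    UserFavoriteId INT IDENTITY(1,1) PRIMARY KEY,
    UserId INT NOT NULL REFERENCES Users (UserId),
    PostId INT NOT NULL REFERENCES Posts (PostId)  -- assumes an existing Posts table
);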
I have a distributed database with two nodes. I have a table like this one in node2 (only in this node):
CREATE TABLE table2
(
cod_proveedor CHAR(15) REFERENCES proveedor(cod_proveedor),
cod_articulo CHAR(15) REFERENCES articulo(cod_articulo)
);
Now, I have the table "articulo" in both node1 and node2.
As you can see, I am making the REFERENCES point to nodo2.proveedor and nodo2.articulo because my table "table2" is in node2.
I need to reference nodo1.proveedor when I create the table, but I don't know how...
Can you help me?
If "distributed database" means that you have two separate databases, you cannot create foreign key constraints in one database that references a table in another database.
You could create a materialized view in database 2 that pulls all the proveedor data from database 1, and then create a foreign key constraint in database 2 that references the materialized view. Of course, since there would be a lag between when new data is written to the table in database 1 and when the materialized view is refreshed in database 2, there could be windows where a child row cannot be written even though the parent row exists in database 1. And if you deleted a row in database 1, you wouldn't find out whether child rows would be orphaned until you tried to replicate that change to database 2. You'll need to write a lot of code to detect and resolve these sorts of errors.
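A rough sketch of the moving parts, assuming a database link named node1_link from database 2 to database 1 (the link name, the refresh mode, and the fact that cod_proveedor is the primary key of proveedor are all assumptions):
-- On database 1: a materialized view log enables fast refresh
CREATE MATERIALIZED VIEW LOG ON proveedor;
-- On database 2: replicate the parent table over the link; a primary-key
-- materialized view inherits the master's primary key
CREATE MATERIALIZED VIEW proveedor_mv
    REFRESH FAST ON DEMAND
    AS SELECT * FROM proveedor@node1_link;
-- The foreign key then targets the local copy
ALTER TABLE table2 ADD CONSTRAINT fk_proveedor
    FOREIGN KEY (cod_proveedor) REFERENCES proveedor_mv (cod_proveedor);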
In Oracle, it would generally make far more sense to create a single database using RAC (Real Application Clusters) that is mounted on multiple physical servers. That would allow you to distribute the load across the database servers where each server has access to the full contents of the database rather than distributing subsets of data to different nodes.
I have a pair of databases, one is a live database and one is for testing a configuration for that live database. Both reside on the same server.
I have three tables: Users (PK UserId, FK MainGroupId), Groups (PK GroupId), and GroupMembers (PK GroupMemberId, FKs GroupId and UserId).
The tables are the same schema on both databases however the test database has a set of special test users. Groups is mostly stable, but sometimes we add groups, and sometimes we change column data in the groups. GroupMembers is the same but in the test database refers to the test users.
I need to be able to update the Groups table from the live database to the test database programmatically. I want to use a bulk copy operation, but to do so I would have to delete the Groups table first, which would cause a constraint violation.
I could bulk copy the table to a dummy table and then post-process by inserting the new rows and updating the existing rows. However, my problem is that there are about 30 tables like Groups, and I don't want to encode all the column names into the stored procedure's UPDATE ... SET statements. I'd also like to be able to do it in bulk.
The DBAs are dubious about granting ALTER TABLE permission to temporarily drop the constraints.
Any other suggestions?
Since both databases are on the same server, why not use a MERGE statement?
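A minimal sketch, assuming the databases are named LiveDB and TestDB and that Groups has a GroupName column (all three names are assumptions; extend the column lists to match your schema):
MERGE TestDB.dbo.Groups AS target
USING LiveDB.dbo.Groups AS source
    ON target.GroupId = source.GroupId
WHEN MATCHED THEN
    UPDATE SET target.GroupName = source.GroupName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (GroupId, GroupName)
    VALUES (source.GroupId, source.GroupName);
-- If GroupId is an IDENTITY column, wrap the statement in
-- SET IDENTITY_INSERT TestDB.dbo.Groups ON/OFF.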
Select for export and import. If you do it in the right order, it should work correctly.
I can think of two main benefits:
Avoiding concurrency problems: if you have many processes creating and dropping tables, you can get into trouble when one process tries to create a table that already exists.
Performance: I imagine that creating temporary tables (with #) is more performant than creating regular tables.
Is there any other reason, and are either of my reasons false?
You can't compare temporary and persistent tables:
Persistent tables keep your data and can be used by any process.
Temporary ones are throwaway, and # ones are visible only to the connection that created them.
You'd use a temp table to spool results for further processing and the like (see the sketch below).
There is little difference in performance (either way) between the two types of table.
You shouldn't be dropping and creating tables all the time... any app that relies on this is doing something wrong, not least because it makes way too many SQL calls.
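A minimal sketch of the spool-then-process pattern (table and column names are illustrative):
SELECT CustomerId, SUM(Total) AS OrderTotal
INTO #CustomerTotals
FROM Orders
GROUP BY CustomerId;
SELECT * FROM #CustomerTotals WHERE OrderTotal > 1000;
DROP TABLE #CustomerTotals;  -- optional; dropped automatically when the session ends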
(1) Temp tables are created in the SQL Server tempdb database and therefore require more I/O resources and locking. Table variables and derived tables behave more like in-memory structures (although, as noted under the similarities below, table variables are also instantiated in tempdb).
(2) Temp tables will generally perform better for large amounts of data that can be worked on using parallelism, whereas table variables are best used for small amounts of data (I use a rule of thumb of 100 rows or fewer) where parallelism would not provide a significant performance improvement.
(3) You cannot use a stored procedure to insert data into a table variable or derived table. For example, the following works: INSERT INTO #MyTempTable EXEC dbo.GetPolicies_sp, whereas the following generates an error: INSERT INTO @MyTableVariable EXEC dbo.GetPolicies_sp. (This restriction applies to SQL Server 2000; later versions do allow INSERT ... EXEC into a table variable.)
(4) Derived tables can only be created from a SELECT statement but can be used within an INSERT, UPDATE, or DELETE statement.
(5) In order of scope endurance, temp tables extend the furthest in scope, followed by table variables, and finally derived tables.
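A minimal sketch of the three forms side by side (names are illustrative):
CREATE TABLE #MyTempTable (PolicyId INT);       -- temp table, lives in tempdb
DECLARE @MyTableVariable TABLE (PolicyId INT);  -- table variable
SELECT d.PolicyId
FROM (SELECT PolicyId FROM Policies) AS d;      -- derived table, exists only within the query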
1)
A table variable's lifespan is only the duration of the batch it runs in. If we execute the DECLARE statement in one batch and then attempt to insert records into the @temp table variable in the next, we receive an error because the table variable has passed out of existence. The results are the same if we declare and insert records into @temp in one batch and then attempt to query the table in another. By contrast, we need to execute a DROP TABLE statement against #temp, because a temp table persists until the session ends or until the table is dropped.
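A minimal demonstration of the difference (run in SQL Server Management Studio, where GO separates batches):
DECLARE @temp TABLE (id INT);
INSERT INTO @temp (id) VALUES (1);  -- works: same batch as the DECLARE
GO
INSERT INTO @temp (id) VALUES (2);  -- fails: "Must declare the table variable @temp"
GO
CREATE TABLE #temp (id INT);
INSERT INTO #temp (id) VALUES (1);
GO
SELECT * FROM #temp;  -- works: #temp survives across batches
DROP TABLE #temp;     -- otherwise it persists until the session ends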
2)
Table variables have certain clear limitations:
-Table variables cannot have non-clustered indexes
-You cannot add constraints to a table variable after it is declared (PRIMARY KEY and UNIQUE can only be declared inline)
-You cannot add default values to table variable columns after declaration (inline DEFAULT is allowed)
-Statistics cannot be created against table variables
Similarities with temporary tables include:
-Instantiated in tempdb
-Clustered indexes can be created on table variables and temporary tables
-Both are logged in the transaction log
-Just as with temp and regular tables, users can perform all Data Manipulation Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE.
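A minimal sketch of both points: the inline PRIMARY KEY gives the table variable a clustered index, and all four DML statements work against it:
DECLARE @t TABLE (id INT PRIMARY KEY, name VARCHAR(50));  -- inline PK = clustered index
INSERT INTO @t (id, name) VALUES (1, 'a');
UPDATE @t SET name = 'b' WHERE id = 1;
SELECT id, name FROM @t;
DELETE FROM @t WHERE id = 1;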
I am migrating an MS Access application (which has linked tables to a MSSQL Server) to MySQL.
To overcome some MS Access table naming problems, I am looking for a way to add a MySQL table alias that points to an existing table in the MySQL database. Ideally, I would like to create the alias dbo_customers in MySQL that points to the customers table, also in MySQL.
To be clear, I am not trying to alias a table name inside a query, like this:
SELECT * FROM customers AS dbo_customers
Rather, I would like to be able to issue the following query:
SELECT * FROM dbo_customers
and have it return data from the customers table.
Off the top of my head:
CREATE VIEW dbo_customers AS
SELECT * FROM customers;
Maybe not the best solution, but it should work, since the view is updatable. It will definitely work for read-only access.
You can create a View.
CREATE VIEW dbo_customers AS SELECT * FROM customers;
If that doesn't work for you, you could try creating a shadow-copy of the table, and use Triggers to keep the tables synced.
For example:
CREATE TABLE t1( id serial primary key, field varchar(255) not null );
CREATE TABLE dbo_t1( id serial primary key, field varchar(255) not null );

DELIMITER $$
-- INSERT trigger
CREATE TRIGGER t1_dbo_insert AFTER INSERT ON t1
FOR EACH ROW BEGIN
  -- Copy the ID explicitly so the two tables cannot drift out of sync
  INSERT INTO dbo_t1 SET id = NEW.id, field = NEW.field;
END$$
-- UPDATE trigger
CREATE TRIGGER t1_dbo_update AFTER UPDATE ON t1
FOR EACH ROW BEGIN
  UPDATE dbo_t1 SET field = NEW.field WHERE id = NEW.id;
END$$
-- DELETE trigger
CREATE TRIGGER t1_dbo_delete AFTER DELETE ON t1
FOR EACH ROW BEGIN
  DELETE FROM dbo_t1 WHERE id = OLD.id;
END$$
DELIMITER ;
Not exactly an 'alias', and far from perfect. But it is an option if all else fails.
There is a simpler solution for MySQL via the MERGE table engine.
Imagine we have a table named rus_vacancies and need its English equivalent:
create table eng_vacancies select * from rus_vacancies;
delete from eng_vacancies;
alter table eng_vacancies ENGINE=MERGE;
alter table eng_vacancies UNION=(rus_vacancies);
Now the table eng_vacancies is equivalent to the table rus_vacancies for any read-write operations.
One limitation: the original table must use ENGINE=MyISAM (this can easily be done with "alter table rus_vacancies ENGINE=MyISAM").
You could create a view named dbo_customers which is backed by the customers table.
@OMG Ponies said in a comment:
Why not rename the table?
...and it seems the obvious answer to me.
If you create an ODBC linked table for the MySQL table customers, it will be called customers, and then all you have to do is rename the table to dbo_customers. There is absolutely no need that I can see to create a view in MySQL for this purpose.
That said, I'd hate to have an Access app that used SQL Server table names when the MySQL tables were not named the same thing -- that's just confusing and will lead to maintenance problems (i.e., it's simpler for the linked tables in the Access front end to have the same names as the MySQL tables, wherever possible). If I were in your position, I'd get a search-and-replace utility and replace all the SQL Server table names with the MySQL table names throughout the entire Access front end. You'd likely have to do it one table at a time, but in my opinion the time it takes to do this now will be more than made up for by clarity going forward with development of the Access front end.
I always rename my "linked to SQL" tables in Access from {dbo_NAME} to {NAME}.
The link creates the table name as {dbo_NAME}, but Access occasionally has problems with the dbo_ prefix.
Aliases would be nice, yet MySQL does NOT have such a feature.
One option that may serve your needs, besides creating a view, is to use the FEDERATED storage engine locally.
CREATE TABLE dbo_customers (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
PRIMARY KEY (id)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://fed_user@localhost:9306/federated/customers';
There are currently some limitations with the FEDERATED storage engine. Here are a couple especially important ones:
FEDERATED tables do not support transactions
FEDERATED tables do not work with the query cache
I'd like to mention a bad solution I explored (and abandoned): using hardlinks on the .frm, .MYD and .MYI files corresponding to my table in /var/lib/mysql/{name_of_my_database}/.
It does NOT work, however. For InnoDB tables, it simply cannot work (even if you hardlink the .ibd file) because tables are also referenced in ibdata1.
For MyISAM tables, it kind of works, except it doesn't, because in memory the tables are still distinct and thus do not share cache. So if you write a row to original_table, it won't immediately appear in aliased_table. You would have to flush tables first… which defeats the purpose, and even causes data loss (if you insert a row in both the original and the alias before flushing, only one row is kept).
I thought my experiment was worth mentioning as a cautionary tale.