What is a good way of executing the DDL for an in-process HSQL database on application startup?
I thought of selecting from INFORMATION_SCHEMA.TABLES and, if the count is zero, only then executing my DDL script.
Is there a better way of doing this?
Checking the database metadata is always a good way to find out whether your tables have already been created. INFORMATION_SCHEMA.TABLES reports some system tables and views as well as user tables, so you should filter on the name and schema of one of your application's tables when checking the count.
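For example, a minimal sketch of that check (assuming a HSQLDB version where INFORMATION_SCHEMA.TABLES is available; older 1.8 releases expose INFORMATION_SCHEMA.SYSTEM_TABLES instead, and MY_TABLE is a placeholder name):

-- Returns 1 if the application table already exists, 0 otherwise;
-- run the DDL script only when the count is 0.
SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'PUBLIC' AND TABLE_NAME = 'MY_TABLE'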
I've been told that an RDBMS (SQL Server in this case) makes use of the temporary database to perform its internal work, for instance when a SELECT count(column) FROM foo query is performed.
What kind of queries / statements trigger the use of the temporary database?
Background:
We are currently about to change the collation on our application database, but we have been told there might be problems if that database makes use of the temporary database, because the two will then have different collations. The temporary database is already being used by other applications, so its collation cannot be changed to match.
So we want to identify what kinds of queries may trigger tempdb usage and see if they'll have any problems.
I've found this about when the db is used:
http://msdn.microsoft.com/en-us/library/ms190768.aspx
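For instance, I'd expect statements like the following to involve tempdb (an assumption on my part; whether a given sort or hash actually spills to tempdb depends on the execution plan and available memory):

-- Explicit temporary objects always live in tempdb:
CREATE TABLE #staging (id INT, name VARCHAR(50))
DECLARE @ids TABLE (id INT)
-- Large sorts, aggregations and hash joins may spill to tempdb:
SELECT col, COUNT(*) FROM foo GROUP BY col ORDER BY col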
I did a successful migration from MySQL to SQL Server using the migration tool.
Unfortunately, for some reason it labels the tables database.DBO.tablename instead of just database.tablename.
I have never used SQL Server, so perhaps this is just the way it names its tables.
When I do:
SELECT TOP 1000 [rid]
,[filename]
,[qcname]
,[compound]
,[response]
,[isid]
,[isidresp]
,[finalconc]
,[rowid]
FROM [test].[calibration]
it does not work
But, when I do:
SELECT TOP 1000 [rid]
,[filename]
,[qcname]
,[compound]
,[response]
,[isid]
,[isidresp]
,[finalconc]
,[rowid]
FROM [test].[dbo].[calibration]
it works.
Does anyone know why it prefixes with DBO?
dbo is the standard database owner for anything you create (tables, stored procedures, etc.), hence the migration tool automatically prefixing everything with it.
When you access something in SQL Server, such as a table called calibration, the following are functionally equivalent:
calibration
dbo.calibration
database_name.dbo.calibration
server_name.database_name.dbo.calibration
MySQL doesn't, as far as I remember (we migrated a solution from MySQL to SQL Server about 12 months ago using custom scripts executed by NAnt), support database owners when referencing objects, hence you're probably not familiar with four-part (server_name.database_name.owner_name.object_name) references.
Basically, if you want to specify the database you're accessing, you also need to specify the "owner" of the object. I.e., the following are functionally identical:
USE [master]
GO
SELECT * FROM [mydatabase].[dbo].[calibration]
USE [mydatabase]
GO
SELECT * FROM [calibration]
SQL Server uses an owner name when it references tables. In this case, dbo is the owner.
MySQL doesn't use owners for table names, which is why you didn't see those names before.
SQL Server has something called schemas; in this case the default schema is dbo, but it could be anything you want. Schemas are used to logically group objects. For example, you can create an Employee schema and keep all the Employee tables, views, procs and functions in it, which also lets you grant certain users access to certain schemas only.
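A short sketch of that idea (Employee, Timesheet and SomeUser are illustrative names; user-independent schemas like this require SQL Server 2005 or later):

CREATE SCHEMA Employee
GO
-- Objects created in the schema are referenced as Employee.<object>:
CREATE TABLE Employee.Timesheet (id INT PRIMARY KEY, hours INT)
GO
-- Grant a user access to everything in the schema at once:
GRANT SELECT ON SCHEMA::Employee TO SomeUser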
Tell me which migration tool you used, and the versions of the source and target databases.
Regards,
Eugene
You do have an issue here with the default schema: if it's set to dbo for the user you logged in as, you don't need to specify it. See http://msdn.microsoft.com/en-us/library/ms176060.aspx
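As an illustration, a user's default schema can be changed like this (SomeUser is a placeholder; the syntax is SQL Server 2005 and later):

-- Afterwards, SomeUser can write SELECT * FROM calibration
-- and it will resolve to dbo.calibration:
ALTER USER SomeUser WITH DEFAULT_SCHEMA = dbo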
For debugging purposes I need to send one table of an existing Firebird 1.5 database to someone.
Instead of sending the whole db, I want to send just the db with only this table: no triggers, no constraints. I can't copy the data to another db, because the db itself is what we want to check: why this one table is giving us trouble.
I am just wondering if there is a way to drop all triggers, all constraints and all but one table (using some clever trick with the system tables or so)?
Using a GUI tool (I personally prefer IBExpert), execute the following command:
select 'DROP TRIGGER ' || rdb$trigger_name || ';' from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null)
Copy the result to the clipboard, then paste and execute it within the Script Executive window.
If you can switch your database to Firebird 2.1, there are switches in gbak and isql for this:
Some Firebird command-line tools have been supplied with new switches to suppress the automatic firing of database triggers:
gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
These switches can only be used by the database owner and SYSDBA.
You can drop all triggers by directly deleting them from the system table, like so:
delete from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null);
Note that the normal way of using drop trigger is certainly preferable, but it can be done.
You can also drop constraints by executing DDL statements, but to enumerate constraints and drop them in a SQL script you would need the execute block functionality that Firebird 1.5 doesn't have.
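You can, however, generate the DROP statements with a query and run the output by hand, mirroring the trigger trick above. A rough sketch (based on the Firebird 1.5 system tables; review the generated script before executing it):

select 'ALTER TABLE ' || rdb$relation_name
       || ' DROP CONSTRAINT ' || rdb$constraint_name || ';'
from rdb$relation_constraints
where rdb$constraint_type in ('FOREIGN KEY', 'PRIMARY KEY', 'UNIQUE', 'CHECK')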
There are similar statements to delete other database objects, but actually running these successfully may be much more difficult because of dependencies between objects. You can't drop any object as long as another object depends on it. This can become really tricky due to circular references, where two (or even more) objects depend on one another, forming a cycle, so there isn't a single one that may be dropped first.
The way around this is to break one of the dependencies. A procedure that has dependencies on other objects, for example, can be altered to have an empty body, after which it no longer depends on those other objects, so they may be dropped. Dropping foreign keys is another way of eliminating dependencies between tables.
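For illustration, emptying a procedure body in an isql script could look like this (MY_PROC is a placeholder; note that ALTER PROCEDURE replaces the whole definition, so any parameters would have to be redeclared):

SET TERM ^ ;
ALTER PROCEDURE MY_PROC
AS
BEGIN
  EXIT;
END^
SET TERM ; ^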
I don't know of any tool implementing such a partial delete of database objects, your use case is IMO far from common. You could however have a look at the FlameRobin source code which has a certain amount of dependency detection in the code that is used to create DDL scripts or modification statements for database objects. Armed with that information you could write your own tool to do it.
If it's a one time thing it may be enough to do this manually, though. Use any Firebird management tool of your choice for that.
A third-party app is storing data in a huge database (SQL Server 2000/2005). This database has more than 80 tables. How would I find out how many tables are affected when the application stores a new record in the database? Is there something available to retrieve the list of affected tables?
You might be able to tell by running a trace in SQL Profiler on the database - the SQL:StmtCompleted event is probably the one to monitor - i.e. if the application does a series of inserts into multiple tables, you should see them go through in Profiler.
You can use SQL Profiler to trace SQL queries, so you will see the sequence of calls caused by one button click in your application.
You can also use metadata or SQL tools to get the list of triggers, which could cause a lot of actions to happen on a simple insert.
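As a sketch, here is one way to list triggers and their parent tables using the SQL Server 2000-era system tables (an assumption; newer versions also expose sys.triggers):

-- Each row pairs a trigger with the table it fires on:
SELECT t.name AS table_name, tr.name AS trigger_name
FROM sysobjects tr
JOIN sysobjects t ON tr.parent_obj = t.id
WHERE tr.type = 'TR'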
If you have the SQL script used to store the new record (usually an INSERT statement, or another DML statement such as UPDATE or MERGE), then you can find out which tables were affected by parsing that script.
Take this SQL for example:
Insert into emp(fname, lname)
Values('john', 'reyes')
You can get a result like this:
sstinsert
emp(tetInsert)
Tables:
emp
Fields:
emp.fname
emp.lname
You can add triggers on tables that fire on update; you could use these to write to a log table that reports what was being updated.
See more here: http://www.devarticles.com/c/a/SQL-Server/Using-Triggers-In-MS-SQL-Server/
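A minimal sketch of such a logging trigger (emp and audit_log are illustrative names):

CREATE TABLE audit_log (
    logged_at  DATETIME DEFAULT GETDATE(),
    table_name VARCHAR(128)
)
GO
-- Record every insert or update against emp in the log table:
CREATE TRIGGER trg_emp_audit ON emp AFTER INSERT, UPDATE
AS
    INSERT INTO audit_log (table_name) VALUES ('emp')
GO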
Profiler is the way to go, as others have said, especially with an unfamiliar third-party database.
I would also spend some time creating diagrams so you can see the foreign key relationships and understand how the database is put together. I usually know my database structure so well that I can tell from the fields being inserted which tables they affect, and I know what triggers are on my tables and what they affect. There is no substitute for taking the time to understand the database you support.
If you create an Oracle dblink you cannot directly access LOB columns in the target tables.
For instance, you create a dblink with:
create database link TEST_LINK
connect to TARGETUSER IDENTIFIED BY password using 'DATABASESID';
After this you can do stuff like:
select column_a, column_b
from data_user.sample_table@TEST_LINK
Except if the column is a LOB, then you get the error:
ORA-22992: cannot use LOB locators selected from remote tables
This is a documented restriction.
The same page suggests you fetch the values into a local table, but that is... kind of messy:
CREATE TABLE tmp_hello
AS SELECT column_a
from data_user.sample_table@TEST_LINK
Any other ideas?
The best solution is to use a query like the one below, where column_b is a BLOB:
SELECT (select column_b from sample_table@TEST_LINK) AS column_b FROM DUAL
Yeah, it is messy; I can't think of a way to avoid it, though.
You could hide some of the messiness from the client by putting the temporary table creation in a stored procedure (and using EXECUTE IMMEDIATE to create the table).
One thing you will need to watch out for is left-over temporary tables (should something fail halfway through a session, before you have had time to clean up); you could schedule an Oracle job to run periodically and remove any left-over tables.
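A rough sketch of that procedure (tmp_hello, column_a and TEST_LINK come from the question; the drop-and-ignore-errors pattern is my assumption):

CREATE OR REPLACE PROCEDURE refresh_lob_copy AS
BEGIN
    -- Drop the previous copy if it exists, ignoring "table does not exist":
    BEGIN
        EXECUTE IMMEDIATE 'DROP TABLE tmp_hello';
    EXCEPTION
        WHEN OTHERS THEN NULL;
    END;
    -- Re-create the local copy so the data can be read without the dblink:
    EXECUTE IMMEDIATE
        'CREATE TABLE tmp_hello AS
         SELECT column_a FROM data_user.sample_table@TEST_LINK';
END;
/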
For querying data, the solution of user2015502 is the smartest. If you want to insert or update LOBs at the remote database (insert into xxx@yyy ...), you can easily use dynamic SQL for that. See my solution here:
You could use materialized views to handle all the "cache" management. It's not perfect, but it works in most cases :)
Do you have a specific scenario in mind?
For example, if the LOB holds files, and you are on a company intranet, perhaps you can write a stored procedure to extract the files to a known directory on the network and access them from there.
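A hedged sketch of such an extraction procedure, to run on the database that holds the LOBs (it assumes a directory object named LOB_DIR already exists and that column_b in sample_table is the BLOB; all names are illustrative):

CREATE OR REPLACE PROCEDURE dump_blob(p_rid IN NUMBER, p_fname IN VARCHAR2) AS
    l_blob BLOB;
    l_file UTL_FILE.FILE_TYPE;
    l_buf  RAW(32767);
    l_amt  BINARY_INTEGER;
    l_pos  INTEGER := 1;
BEGIN
    SELECT column_b INTO l_blob FROM sample_table WHERE rid = p_rid;
    l_file := UTL_FILE.FOPEN('LOB_DIR', p_fname, 'wb', 32767);
    LOOP
        l_amt := 32767;  -- DBMS_LOB.READ updates this to the bytes actually read
        BEGIN
            DBMS_LOB.READ(l_blob, l_amt, l_pos, l_buf);
        EXCEPTION
            WHEN NO_DATA_FOUND THEN EXIT;  -- reached the end of the LOB
        END;
        UTL_FILE.PUT_RAW(l_file, l_buf, TRUE);
        l_pos := l_pos + l_amt;
    END LOOP;
    UTL_FILE.FCLOSE(l_file);
END;
/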
In this specific case, the dblink is the only way the two systems can communicate.
Also, the table solution is not that terrible; it's just messy to have to "cache" the data on my side of the dblink.