I have two schemas in a PostgreSQL database, and I need to compare and validate table ABC in schema 1 against table XYZ in schema 2 using MuleSoft. Once validation is done, write the results to a log file or an XML file. Can you provide some high-level steps?
Based on my knowledge, I'd suggest building a query that compares the two schemas and supplying that query to the Database connector in your Mule flow, which will return the expected results.
For a reference query definition, please see the link below:
DB2 SQL query to compare 2 schemas
Please let me know if this works for you.
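As a hedged sketch of what such a comparison query could look like (the schema and table names below follow the question, and I'm assuming both tables have compatible column lists, which EXCEPT requires):

    -- Rows present in schema1.abc but missing from schema2.xyz
    SELECT * FROM schema1.abc
    EXCEPT
    SELECT * FROM schema2.xyz;

    -- Rows present in schema2.xyz but missing from schema1.abc
    SELECT * FROM schema2.xyz
    EXCEPT
    SELECT * FROM schema1.abc;

Any rows returned could then be routed to a Logger component, or transformed to XML and written out with a File connector, in the Mule flow.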
A colleague of mine has given me a task: ensure all table names and pages in a stored procedure contain the relevant schemas.
For example if I have a table in a database that is dbo.table1, then in a query if I have:
Select * from table1
He wants me to change it to:
Select * from dbo.table1
It is the same for pages that start with Support.
What is the significance of adding a schema like dbo. at the start when manually writing SQL? Is it supposed to be better for performance? It seems to find the tables even if I don't include dbo. at the start.
I'm using SQL Server 2012 and its Management Studio.
Thank you
You may not notice a performance difference if you write the schema in all of your queries. If you don't specify it, the engine has to search through the schemas in your database (I'm not entirely sure whether it checks the sys schemas too) until it finds the table you're reading from. It's recommended as a best practice to write it, because you are telling the engine exactly which schema to search for the table.
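As a small side note (a sketch of my own, not part of the original question), you can check which schema unqualified names resolve against for your login:

    -- Returns the current user's default schema (often dbo)
    SELECT SCHEMA_NAME() AS default_schema;

Unqualified names are resolved against this default schema first and then against dbo, so qualifying them simply skips that lookup.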
I hope this answer was helpful.
dbo is the schema that is used as part of the naming convention for tables in an MS SQL Server database.
Look at this post: SQL Server edits the table name with dbo as prefix
I need to write database table data to a text file with some transformations.
There are two steps available to retrieve the data from the table, namely Table input and Database join. I don't see much difference between them except the "outer join?" option (correct me if I've misunderstood). So which would be better to use?
Environment:
Database: Oracle
Pentaho Spoon: 5.3.* (Community Edition)
Thanks in advance.
The Table Input step in PDI is used to read data from your database tables. The query is executed once and returns the result set. Check the wiki.
Database Join works slightly differently. It allows you to execute your query based on the data received from the previous step: for every row coming in from the previous step, the parameters in this step's query are substituted and the query is executed. Check the wiki.
The choice between the two steps clearly depends on your requirement.
If you need to fetch a data set from a database table, use the Table Input step - the best choice.
If you need to run the query against the database for every incoming row, use Database Join - the best choice (see the sketch below).
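As a hedged illustration of the Database Join case (the table and column names here are invented for the example), the step's query uses ? placeholders that PDI fills in from fields on each incoming row:

    -- Executed once per incoming row; the ? is replaced by a field
    -- from the previous step (e.g. an employee's department id)
    SELECT dept_name, dept_location
    FROM   departments
    WHERE  dept_id = ?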
Hope it helps :)
Hi, I am using SQL Server 2012. I would like to view the code/definition of system objects. Can we view the definition/code of system objects in any version of SQL Server?
I wanted to know why, when I execute the query SELECT MONTH(18/200), MONTH(200/18), I get the output 1 for both. I just want to know what is going on internally and how it gives the output 1 for MONTH(200/18). To understand this, I am looking for the MONTH() function's code.
Use INFORMATION_SCHEMA
An information schema view is one of several methods SQL Server provides for obtaining metadata. Information schema views provide an internal, system table-independent view of the SQL Server metadata. Information schema views enable applications to work correctly although significant changes have been made to the underlying system tables. The information schema views included in SQL Server comply with the ISO standard definition for the INFORMATION_SCHEMA.
As for SELECT MONTH(18/200), MONTH(200/18): both arguments are integer divisions, so 18/200 evaluates to 0 and 200/18 evaluates to 11. An integer passed to MONTH() is implicitly converted to a datetime as that number of days after January 1, 1900, so 0 is January 1, 1900 and 11 is January 12, 1900. Both dates fall in January, which is why it returns 1 for both.
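A small sketch to verify this behavior (this demonstrates the implicit conversion described above; the internal source of built-in functions like MONTH() itself is not exposed):

    -- Integer division happens first, then implicit int -> datetime conversion
    SELECT 18/200        AS a,   -- 0
           200/18        AS b,   -- 11
           MONTH(18/200) AS m1,  -- 1 (0 days after 1900-01-01)
           MONTH(200/18) AS m2;  -- 1 (11 days after 1900-01-01)

    -- System views (though not built-in functions) do expose their definitions:
    SELECT OBJECT_DEFINITION(OBJECT_ID('INFORMATION_SCHEMA.TABLES'));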
I have 13 SQL databases, some 2005 and others 2008, on a VPN. I'd like to take all of the data from the "Employees" table in each database and make it a view at each location. I would then like to publish these views to one database on another server, all in one table, marking where each row came from within the original databases. For example, the database where all the information goes would look like this:
User   Name     Location
bik    Bob K    1
JS     John S   2
Etc.
Any help is appreciated.
I assume you want the data on the final server to be viewable, but not modifiable, and to reflect changes made to the source databases?
This would probably not perform all that well, but one do-it-yourself way to do it would be the following (disclaimer: I haven't tried doing this myself):
Set up all the source servers as linked servers on the final server.
Create a view in this form:
SELECT *, 1 as Location
FROM [Linked Server 1].Database1.dbo.Table1
UNION ALL
SELECT *, 2 as Location
FROM [Linked Server 2].Database2.dbo.Table2
... etc ....
You might want to read this documentation on distributed queries, if you haven't already.
I believe it's also possible to use SSIS as the source of a distributed query, but a quick scan through the documentation didn't find anything about it. I mention it because SSIS would make pulling and transforming data from disparate data sources very easy, and if you could use the final recordset as a data source, you could use an SSIS package as the backend to your view. Again, however, performance would probably require considerable tuning.
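For the first step, a hedged sketch of registering one source server as a linked server (the server name, host, and login below are placeholders, not values from the question):

    -- Register the remote server (placeholder name and host)
    EXEC sp_addlinkedserver
        @server     = N'Linked Server 1',
        @srvproduct = N'',
        @provider   = N'SQLNCLI',
        @datasrc    = N'source-host-1';

    -- Map a login for the linked server (placeholder credentials)
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname  = N'Linked Server 1',
        @useself     = N'FALSE',
        @rmtuser     = N'readonly_user',
        @rmtpassword = N'secret';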
If the queries don't have to be real-time, you could look into using SQL Server Integration Services (SSIS) to pull the data into a local DB. You could schedule the job to run hourly/daily/weekly.
Some days ago I was converting a large MySQL database to Oracle 10g R2 using the Oracle SQL Developer database migration tools. Unfortunately, it was migrated into the SYSTEM schema, but I need it in the SCOTT schema.
After Googling I found these two links: OraFAQ Forum and ASK TOM Q&A, but I could not find an appropriate answer there. Can anyone help me with how this can be done?
Thanks in advance.
IIRC the MySQL backup tool spits out plain SQL. As it would be fairly vanilla SQL -- just CREATE and INSERT statements, I guess -- it ought to be runnable against your Oracle schema with a minimum of alteration.
Having said that, in the SQL Developer migration wizard, the second step allows you to select the target schema. If you have a connection setup to scott, why doesn't that work for you?
If the table isn't too large (depending on your system resources, server horsepower, etc.) then you could simply rebuild the table in the desired schema with the following.
N.B. You need to be logged in either as the target schema's user (with SELECT permission on the table in the SYSTEM schema) or as SYSTEM:
CREATE TABLE <newschema>.<tablename>
AS
SELECT *
FROM system.<tablename>;
Then remove the original table once the new table has been created.
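For example (a sketch using the same placeholder name as above; verify the copied data first):

    DROP TABLE system.<tablename>;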
If the table is large then you could use DATAPUMP to export and import it into the desired schema.
Here is an article on using Data Pump for this purpose:
http://oraclehack.blogspot.com/2010/06/data-pump-moving-tables-to-new-schema.html
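A hedged sketch of the Data Pump route (the credentials, directory object, dump file, and table name are placeholders; REMAP_SCHEMA performs the schema move on import):

    expdp system/password tables=SYSTEM.mytable directory=DATA_PUMP_DIR dumpfile=mytable.dmp

    impdp system/password directory=DATA_PUMP_DIR dumpfile=mytable.dmp remap_schema=SYSTEM:SCOTT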
Hope this helps