Resolve Symbols in DataGrip - sql

I have a DWH with a few schemas. I always have to combine tables and views from different schemas into new views. This is done in a REPORT schema that has synonyms for almost all the tables and views in the other schemas. All those tables and views also have privileges granted to REPORT.
Whenever I make a reference to a schema other than REPORT, DataGrip is not able to resolve that reference, stating that it is "Unable to resolve symbol.."
I am not sure whether this is needed for the referencing or not, but I have database connections in the project to all schemas.
Let's say I need col1 and col2 from a table srctable located in a schema DATA. I do all the code in the schema REPORT.
I have tried code like
SELECT
col1,
col2
FROM
srctable
And also
SELECT
col1,
col2
FROM
DATA.srctable
Since I get the data from the query, everything seems to be set up right. But I want to utilise the power of DataGrip, and it is annoying that I cannot get the references to work.

You need to select the destination schemas in the database tree (the Database tool window) for resolution and completion to work.

Hive view has no path?

I've tried to make a Hive view so we can start sqooping on the columns we need.
The statement used to create the view:
CREATE VIEW new_view (col1, col2, col3)
AS SELECT col1, col2, col3 FROM source_table;
However, the view has no perceived location within our Hadoop cluster. What's odd is that if we run a SELECT * FROM new_view; we get the data.
But when we try to run an Oozie job to hook into the view, we get a table not found error. The table isn't in the file browser either.
The reason you are getting this error is that your view doesn't have an HDFS location. Sqoop export looks for an export-dir, but since your view doesn't have a location, you are getting this error.
A view is just a layer of abstraction on top of your Hive table (which redirects to the HDFS location associated with it). Please find the view definition below:
A VIEW statement lets you create a shorthand abbreviation for a more complicated query. The base query can involve joins, expressions, reordered columns, column aliases, and other SQL features that can make a query hard to understand or maintain. It is purely a logical construct (an alias for a query) with no physical data behind it.
You may have to use the source_table for sqoop-export.
Note that a view is a purely logical object with no associated storage.
When a query references a view, the view's definition is evaluated in order to produce a set of rows for further processing by the query.
You can consider reading: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/Drop/AlterView
So you might think of storing the view's data in a table in order to access it from Oozie, as sketched below.
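A minimal sketch of materializing the view's data with CREATE TABLE AS SELECT (the table name new_table is hypothetical):
-- Materialize the view's rows into a real table with an HDFS location
CREATE TABLE new_table
AS SELECT col1, col2, col3 FROM source_table;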

Auto-Match Fields to Columns SQL LOADER

I'm trying to load some data from CSV into Oracle 11g database tables through sqlldr.
So I was thinking: is there a way to load that data by matching the columns described in the ctl file with the table columns by name? Just like an auto-match, with no sequential order or FILLER command.
Does anyone know anything about that? I've been searching the documentation and forums but haven't found a thing.
Thank you, guys
Alas, you're on 11g. What you're looking for is a new feature in 12c: SQL*Loader Express Mode. This allows us to load a comma-delimited file into a table without defining a Loader control file; instead Oracle uses the data dictionary view ALL_TAB_COLUMNS to figure out the mapping.
Obviously there are certain limitations. Perhaps the biggest one is that external tables are the underlying mechanism, so it requires the same privileges, including privileges on Directory objects. I think this reduces the helpfulness of the feature, because many people need to use SQL*Loader precisely because their DBAs or sysadmins won't grant them the privileges necessary for external tables.
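For reference, an express-mode invocation looks roughly like this (12c only; the user and table names are hypothetical, and by default the loader looks for a matching emp.dat in the current directory):
sqlldr userid=scott TABLE=emp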

How can I copy and overwrite data of tables from database1 to database2 in SQL Server

I have a database1 which has more than 500 tables, and I have a database2 which has the same number of tables; in both databases the table names are the same. Some of the tables have different table definitions; for example, a table reports in database1 has 9 columns while the table reports in database2 has 10.
I want to copy all the data from database1 to database2 so that it overwrites the existing data and appends columns where the structures do not match. I have tried the Import/Export wizard in SQL Server 2008, but it gives an error at the last step, copying rows. I don't have a screenshot of that error right now; it's on my office PC. It says something like "error inserting into the read-only column xyz"; sometimes it says VS_ISBROKEN. For the read-only column error, as I mentioned, I enabled identity insert, but it did not help.
Please help me. It is an opportunity in my office for me.
SSIS and SQL Server 2008 Wizards can be finicky tools.
If you get a "can't insert into column ABC", then it could be one of the following:
Inserting into a PK column -> when setting up the mappings, you need to indicate to overwrite the value
Inserting into a column with a smaller range -> for example from nvarchar(256) into nvarchar(50)
Inserting into a calculated column (pointed out by @Nick.McDermaid)
You could also get issues with referential integrity if your database uses this (most do).
If you're going to do this more often, then I suggest you build an SSIS package instead of using the wizard tooling. This way you will see warnings on all sorts of issues like the ones I've described above. You can then run your package on demand.
Another suggestion I would make is that you insert the DB1 data into "stage" tables in DB2. These tables should have no relational integrity and will allow you to break the process into several steps as follows.
Stage the data from DB1 into DB2
Produce reports/queries on issues pertinent to your database/rules
Merge the data from stage tables into target tables using SQL
That last step is where you can use merge statements, or simple insert/updates depending on a key match. Using SQL here in the local database is then able to use set theory to manage the overlap of the two sets and figure out what is new or to be updated.
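A minimal sketch of such a merge, assuming a key column Id and hypothetical stage/target tables Stage_Reports and Reports:
-- Update rows that already exist in the target, insert the ones that don't
MERGE INTO Reports AS tgt
USING Stage_Reports AS src
    ON tgt.Id = src.Id
WHEN MATCHED THEN
    UPDATE SET tgt.Col1 = src.Col1, tgt.Col2 = src.Col2
WHEN NOT MATCHED THEN
    INSERT (Id, Col1, Col2) VALUES (src.Id, src.Col1, src.Col2);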
SSIS "can" do this, but you will not be able to do a bulk update using SSIS, whereas with SQL you can. SSIS would do what is known as RBAR (row by agonizing row), something slow and to be avoided.
I suggest you inform your seniors that this will take a little longer to ensure it is reliable and the results reportable. Then work step by step, reporting on each stage's completion.
Another two small suggestions:
Create _Archive tables for each of the stage tables and add a Tstamp column to each. Merge into these after the stage step; this will allow you to quickly see which rows were introduced into DB2 and when.
After the stage step and before the SQL merge step, create indexes on your stage tables. This will improve the merge performance.
Drop those indexes after each merge; this will improve the bulk insert performance.
Basics on staging (response to question clarification):
Links:
http://www.codeproject.com/Articles/173918/How-to-Create-your-First-SQL-Server-Integration-Se
http://www.jasonstrate.com/tag/31daysssis/
http://blogs.msdn.com/b/andreasderuiter/archive/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data.aspx
Staging is the act of moving data from one place to another without any checks.
First you need to create the target tables; the schema should match the source tables.
Open up BIDS and create a new Project and in it a new SSIS package.
In the package, create a connection for the source server and another for the destination.
Then create a data flow step, in the step create a data source for each table you want to copy from.
Connect each source to a new data destination and set the appropriate connection and table.
When done, save and do a test run.
Before the data flow step, you might like to add a SQL step that will truncate all the target tables.
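That truncate step could be as simple as this (the stage table names are hypothetical):
TRUNCATE TABLE dbo.Stage_Reports;
TRUNCATE TABLE dbo.Stage_Customers;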
If you're open to using tools, then what about using something like Red Gate SQL Compare and Red Gate SQL Data Compare?
First I would use SQL Compare to manage the schema differences: add the new columns you want to your destination database (database2) from the source (database1). Then with Data Compare you match the contents of the tables; for any columns it can't match based on names, you specify how to handle them. Then you can pick and choose what data you want to copy to your destination. So you'll see what data is new and what's different (you can delete data in the destination that's not in the source, or ignore it). You can either have the tool do the work or have it create a script for you to run when you want.
There's a 15 day trial if you want to experiment.
Seems like maybe you are looking for Replication technology as is offered by SQL Server Replication.
Well, if I understood your requirement correctly, you need to make database2 a replica of database1. Why not take a full backup of database1 and restore it as database2? Your database2 will be exactly what database1 was at the time of the backup.
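A rough sketch of that approach (the paths and logical file names are hypothetical; check yours with RESTORE FILELISTONLY):
BACKUP DATABASE database1 TO DISK = 'C:\backups\database1.bak';
RESTORE DATABASE database2 FROM DISK = 'C:\backups\database1.bak'
WITH MOVE 'database1' TO 'C:\data\database2.mdf',
     MOVE 'database1_log' TO 'C:\data\database2_log.ldf';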

copy tables with data to another database in SQL Server 2008

I have to copy tables with their data from one database to another using a query. I know how to copy tables with data within a database, but I was not sure how to do the same when copying between two databases.
I have to copy a huge number of tables, so I need a fast method using a query...
Anybody please help out...Thanks in advance...
You can use the same approach as for copying tables within one database, SELECT INTO, but use fully qualified table names (database.schema.object_name) instead, like so:
USE TheOtherDB;
SELECT *
INTO NewTable
FROM TheFirstDB.Schemaname.OldTable
This will create a new table NewTable in the database TheOtherDB from the table OldTable, which belongs to the database TheFirstDB.
Right click on the database, select tasks and click on Generate Scripts.
In the resulting pop-up, choose options as required (click Advanced): drop and create table, drop if exists, etc.
Scroll down and choose "Schema and Data" or "Data Only" as required (in 2008 R2 the option is called "Types of data to script").
Save to file and execute on the destination DB.
Advantages -
Can be executed against the destination DB, even if it is on another server / instance
Quickly script multiple tables, with data as needed
Warning - Might take quite a while to script, if the tables contain a large amount of data.
INSERT INTO DB2.dbo.MyOtherTable (Col0, Col1)
SELECT Col0, Col1 FROM DB1.dbo.MyTable
Both tables' columns must have the same data types.
The SQL query below will copy a SQL Server table's schema and data from one database to another. You can always change the table name (SampleTable) in your destination database.
SELECT * INTO DestinationDB.dbo.SampleTable FROM SourceDB.dbo.SampleTable

Bulk Insert from table to table

I am implementing an A/B/View scenario, meaning that the view points to table A while table B is updated; then a switch occurs and the view points to table B while table A is loaded.
The switch occurs daily. There are millions of rows to update and thousands of users looking at the view. I am on SQL Server 2012.
My questions are:
how do I insert data into a table from another table in the fastest possible way? (within a stored proc)
Is there any way to use BULK INSERT? Or, is using regular insert/select the fastest way to go?
You could do a SELECT ColA, ColB INTO DestTable_New FROM SrcTable. Once DestTable_New is loaded, recreate indexes and constraints.
Then rename DestTable to DestTable_Old and rename DestTable_New to DestTable. Renaming is extremely quick. If something turns out to have gone wrong, you also have a backup of the previous table close by (DestTable_Old).
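A minimal sketch of that swap, using the table names above:
EXEC sp_rename 'DestTable', 'DestTable_Old';
EXEC sp_rename 'DestTable_New', 'DestTable';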
I did this scenario once where we had to have the system running 24/7 and needed to load tens of millions of rows each day.
I'd be inclined to use SSIS.
Make table A an OLEDB source and table B an OLEDB destination. You will bypass the transaction log, reducing the load on the DB. The only way (I can think of) to do this using T-SQL is to change the recovery model for your entire database, which is far from ideal because it affects all transactions, not just the ones for your transfer.
Setting up SSIS Transfer
Create a new project and drag a dataflow task to your design surface
Double click on your dataflow task which will take you through to the Data Flow tab. Then drag and drop an OLE DB source from the "Data flow Sources" menu, and an OLE DB destination from the "Data flow Destinations" menu
Double click on the OLE DB source, set up the connection to your server, choose the table you want to load from and click OK. Drag the green arrow from the OLE DB source to the destination then double click on the destination. Set up your connection manager, destination table name and column mappings and you should be good to go.
OLE DB Source docs on MSDN
OLE DB Destination docs on MSDN
You could do the
SELECT fieldnames
INTO DestinationTable
FROM SourceTable
as a couple of answers suggest; that should be as fast as it can get (depending on how many indexes you'd need to recreate, etc.).
But I would suggest using synonyms in order to change the pointer from one table to another. They're very transparent and, in my opinion, cleaner than updating the view or renaming tables.
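For instance, a sketch of repointing a synonym from one table to the other (the synonym name dbo.CurrentData is hypothetical; SQL Server has no ALTER SYNONYM, so you drop and recreate):
DROP SYNONYM dbo.CurrentData;
CREATE SYNONYM dbo.CurrentData FOR dbo.TableB;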
I know the question is old, but I was hunting for an answer to the same question and didn't find anything really helpful. Yes the SSIS approach is a possibility, but the question wanted a stored proc.
To my delight I have discovered (almost) the solution that the original question wanted; you can do it with a CLR SP.
Select the data from TableA into a DataTable and then use the WriteToServer(DataTable dt) method of the SqlBulkCopy class with TableB as the DestinationTableName.
The only slight drawback is that the CLR procedure must use external access in order to use SqlBulkCopy, and it does not work with the context connection, so you need to fiddle a little with permissions and connection strings. But hey! Nothing is ever perfect.
INSERT... SELECT... functions fairly similarly to BULK INSERT. You could use SSIS, like @GarethD says, but that might be overly complex if you're just copying rows from table1 to table2.
If you are copying serious quantities of data, keep an eye on the transaction log -- it can bloat up pretty fast when doing huge inserts. One work-around is to "chunkify" the data you are inserting by looping over an insert statement that processes, say, only 100,000 or 10,000 rows at a time (depending on how wide your rows are, i.e. how many MB per pass).
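A rough sketch of that chunked approach, assuming an ascending key Id and hypothetical tables dbo.TableA and dbo.TableB:
-- Copy up to 100,000 not-yet-copied rows per pass until none remain
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    INSERT INTO dbo.TableB (Id, Col1)
    SELECT TOP (100000) s.Id, s.Col1
    FROM dbo.TableA AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TableB AS t WHERE t.Id = s.Id)
    ORDER BY s.Id;
    SET @rows = @@ROWCOUNT;
END;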
(Just curious, are you doing ALTER VIEW to reset the view? I did something similar once, though we had to have four tables and four views to support past/present/next/swap sets.)
You can simply do it like this:
select * into A from B Where [criteria]
This will select the data from B based on the criteria and insert it into A, provided the columns are the same; alternatively, you can specify column names instead of *.