Import Data from ODBC to SQL Server and make it live - sql-server-2016

Is there a way to import data from an ODBC connection AND make it live data, meaning a constant flow? I know of the Import wizard, but I wasn't sure whether that is a one-time deal or not.

Related

Transfer Data from Oracle database 11G to MongoDB

I want to have an automatic, timed transfer from an Oracle database to MongoDB. In a typical RDBMS scenario, I would have established a connection between the two databases by creating a dblink and transferred the data with PL/SQL procedures.
But I don't know what to do in the MongoDB case: what should I implement, and how, so that I can have an automatic transfer from the Oracle database to MongoDB?
I would look at using Oracle GoldenGate. It has a MongoDB handler.
https://docs.oracle.com/goldengate/bd123110/gg-bd/GADBD/using-mongodb-handler.htm#GADBD-GUID-084CCCD6-8D13-43C0-A6C4-4D2AC8B8FA86
https://oracledb101.wordpress.com/2016/07/29/using-goldengate-to-replicate-to-mongodb/
What type of data do you want to transfer from the Oracle database to MongoDB? If you just want to export/import a small number of tables on a set schedule, you could use something like UTL_FILE on the Oracle side to create a .csv export of the table(s) and use DBMS_SCHEDULER to schedule the export to happen automatically based on your desired time frame (a sketch follows at the end of this answer).
You could also use an application like SQL Developer to export tables as .csv files by browsing to the table in the schema list, then Right Click -> Export and choosing the .csv format. You may also find it a little easier to work with UTL_FILE and DBMS_SCHEDULER through SQL Developer instead of relying on SQL*Plus.
Once you have your .csv file(s), you can use mongoimport to import the data, though I'm not sure if MongoDB supports scheduled jobs like Oracle (I work primarily with the latter.) If you are using Linux, you could use cron to schedule a script that will import the .csv file on a scheduled interval.
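To make the UTL_FILE + DBMS_SCHEDULER idea concrete, here is a minimal PL/SQL sketch; the directory object, table, column, and job names are all hypothetical and would need to match your environment:

```sql
-- Illustrative only: EXPORT_DIR, ORDERS, and the job name are hypothetical.
-- EXPORT_DIR must be an Oracle directory object the database can write to.
CREATE OR REPLACE PROCEDURE export_orders_to_csv IS
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'orders.csv', 'w');
  FOR r IN (SELECT order_id, customer_id, order_total FROM orders) LOOP
    UTL_FILE.PUT_LINE(l_file,
      r.order_id || ',' || r.customer_id || ',' || r.order_total);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/

-- Schedule the export to run every night at 01:00.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'EXPORT_ORDERS_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'EXPORT_ORDERS_TO_CSV',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=1',
    enabled         => TRUE);
END;
/
```

The generated file would then be picked up on the MongoDB side by mongoimport, scheduled with cron as described above.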

Import an SQL file to Azure

I am new to the field and I need some guidance. I want to import an SQL file containing data (~8 GB, if that matters) into Azure. I have created an SQL database and I want to import the file there. Can someone give me some guidance?
I'd suggest BCP to connect to your Azure SQL DB and upload the data directly.
The other way of doing it would be to upload your CSV data to a storage account and create an external table on top of it (see the sketch after this answer). See here.
This link lists some additional ways to do it.
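As a rough T-SQL sketch of the storage-account route, here is the closely related BULK INSERT-from-blob path that Azure SQL Database supports; the storage account, container, credential, and table names are all placeholders:

```sql
-- Illustrative only: all names and the SAS token are placeholders.
-- A database master key must already exist (CREATE MASTER KEY ...).
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<SAS token without the leading ?>';

CREATE EXTERNAL DATA SOURCE BlobImport
WITH ( TYPE = BLOB_STORAGE,
       LOCATION = 'https://mystorageaccount.blob.core.windows.net/import',
       CREDENTIAL = BlobCredential );

-- Load the uploaded CSV into an existing table.
BULK INSERT dbo.MyTable
FROM 'data.csv'
WITH ( DATA_SOURCE = 'BlobImport', FORMAT = 'CSV', FIRSTROW = 2 );
```

This path assumes the data has first been exported to CSV and uploaded, as the answer describes; loading directly with BCP, as suggested above, avoids the intermediate files.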

How to validate data in Hive HQL while importing from a source

Please explain how to add validation while importing data from a source into a Hive table. For example, if some records in a bulk load are corrupt and should not be imported, how can that data be discarded?
You need to develop an ETL process and have a strategy for discarding the corrupt data. You can either use third-party tools like Informatica Big Data Edition or Talend, or develop your own custom code (one common pattern is sketched below). Either way, it is a major effort.
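One common custom-code pattern is to land the raw feed in a staging table and promote only rows that pass basic checks. A minimal HiveQL sketch, with entirely hypothetical table, column, and path names:

```sql
-- Illustrative only: table, column, and path names are hypothetical.
-- 1. Land the raw feed in an external staging table over the raw files.
CREATE EXTERNAL TABLE staging_orders (
  order_id    STRING,
  order_total STRING,
  order_date  STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/landing/orders';

-- 2. Promote only rows that pass validation; in Hive, a failed CAST
--    returns NULL, so rows with bad numeric values are filtered out here.
INSERT INTO TABLE orders
SELECT order_id,
       CAST(order_total AS DECIMAL(10,2)),
       order_date
FROM staging_orders
WHERE order_id IS NOT NULL
  AND CAST(order_total AS DECIMAL(10,2)) IS NOT NULL;
```

Rows that fail the checks stay behind in the staging table, where they can be inspected or routed to a reject table.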

How to import a transactionally-inconsistent bacpac

It is well-known that creating a bacpac on SQL Azure does not guarantee transactional consistency when doing an export of a live, changing database.
The accepted workaround is to create a snapshot of the database first, by copying it, and then doing an export.
This approach is pretty ridiculous, because it forces users to spend extra money for relational DB storage. In fact, in the older days of SQL Azure, databases were billed by the day, so creating daily bacpacs from production databases essentially used to double the costs (it's now billed by the hour, if I'm not mistaken).
However, my question is not about this. My question is as follows - if it is acceptable for me to have a transactionally inconsistent bacpac, is there any way of actually restoring (i.e. importing it)? The problem is simple - because some constraints are no longer satisfied, the import fails (say, with a FK exception). While the bacpac restore is nothing more than re-creating the DB from the schema, followed by bulk imports, the entire process is completely opaque and not much control is given to the user. However, since Azure SQL tools are always in flux, I would not be surprised if this became possible.
So, to recap, the question: given a potentially inconsistent bacpac (i.e. some constraints won't hold), is there a way (without writing tons of code) to import it into an on-premises database?
Try using BCP.exe to import the data.
A bacpac is a zip file, so you can work with its contents directly:
1. Open the bacpac by changing its file extension to .zip. All data is captured in .bcp files inside the Data folder.
2. Move the Data folder out of the zip file and save it for step 4 below.
3. Change the .zip extension back to .bacpac and import it. This creates a database with the schema only.
4. Using bcp.exe, import the .bcp files into the corresponding tables in the database: https://msdn.microsoft.com/en-us/library/ms162802.aspx
5. Troubleshoot and fix the data inconsistency (a T-SQL sketch of one way to do this follows below).
If you already know which tables contain the inconsistent data, you can move out the .bcp files for those tables only and import just them with bcp.
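For step 5, a hedged T-SQL sketch of one way to handle a failing foreign key during the load; the table, column, and constraint names are hypothetical:

```sql
-- Illustrative only: table, column, and constraint names are hypothetical.
-- Disable the failing foreign key so the bcp load can complete.
ALTER TABLE dbo.OrderLines NOCHECK CONSTRAINT FK_OrderLines_Orders;

-- After the load, find the rows that violate the constraint.
SELECT ol.*
FROM dbo.OrderLines AS ol
LEFT JOIN dbo.Orders AS o
       ON o.OrderId = ol.OrderId
WHERE o.OrderId IS NULL;

-- Fix or delete the orphaned rows, then re-enable the constraint
-- WITH CHECK so it is validated and marked trusted again.
ALTER TABLE dbo.OrderLines WITH CHECK CHECK CONSTRAINT FK_OrderLines_Orders;
```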

GAE Datastore Large Amounts of Data

Background:
I'm working on a project that's starting out with a large SQL dump that I have to import into a new database. This dump is about 1.5 GB of plain text, so quite a lot of information. My client wants me to use Google App Engine and its Datastore, which (a) I'm not so fond of and (b) doesn't really play well with SQL dumps. Before I go through the trouble of making that happen...
Question:
What is a cloud-hosted database solution that can efficiently handle large quantities of data (and ideally is lower-cost)? In particular, which would be a database solution to which I could just import my SQL dump as-is?
Does your client have any reason to use the Datastore? If you already have the SQL dump, I think it would be easier to use Google Cloud SQL from GAE.