Converting Oracle dump (.dmp) files to a SQL file (for PostgreSQL) [closed]

I would like to know the options for converting an Oracle dump to a SQL file (for a PostgreSQL database). Is it possible to get the DDL, the DML, and the data as well in the SQL file?
Thanks in advance.

Years ago I wrote a little program, explode, to convert the classical .dmp file to CSV files for easy loading. However, this is not the way to go, since those file formats are undocumented.
A much smarter route to get the table definitions and data from Oracle to Postgres is to use the Oracle foreign data wrapper in Postgres.
This gives you native Postgres access to the Oracle tables, but in a smarter way than Oracle database links. Check the docs and see if it fits your use case. If so, it will take a lot of the conversion work off your hands.
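As a rough sketch of that route (this assumes the oracle_fdw extension is installed and a PostgreSQL version that supports IMPORT FOREIGN SCHEMA; the server name, connection string, credentials, schema and table names below are all placeholders):

    CREATE EXTENSION oracle_fdw;

    -- Point Postgres at the Oracle instance (placeholder connection details).
    CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
        OPTIONS (dbserver '//oracle-host:1521/ORCLPDB1');

    CREATE USER MAPPING FOR CURRENT_USER SERVER oradb
        OPTIONS (user 'SCOTT', password 'tiger');

    -- Pull the table definitions across as foreign tables ...
    CREATE SCHEMA ora;
    IMPORT FOREIGN SCHEMA "SCOTT" FROM SERVER oradb INTO ora;

    -- ... and materialize the data locally where needed.
    CREATE TABLE emp AS SELECT * FROM ora.emp;

Once the data is in Postgres, you can take a plain pg_dump of it if you still need a SQL file, which covers both the DDL and the data side of the question.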

Related

BigQuery: dbt seed with ARRAY fields [closed]

I'd like to load some small configuration tables I have on BigQuery using seeds.
That has worked fine until now, but I now have a table with an array field.
I put the arrays in the usual BigQuery format, ["blablabla"], but no luck.
I tried forcing the datatype in dbt_project.yml, but I get an "ARRAY is not a valid value" error.
Has anyone ever used seeding with structured fields?
Daniele
I don't think this is possible, unfortunately. From a little online research, this appears to be a joint limitation of:
- the BigQuery LoadJobConfig API that dbt calls here
- the CSV file format, which doesn't really have a way to specify a nested schema (related issue)
A long-term resolution to this may be support for JSON-formatted seeds (dbt#2365).
In the meantime, I recommend that you set the seed column type to string and convert it to an array (using json_extract_array) in a staging model.
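A minimal sketch of that workaround (the seed, model, and column names here are made up): keep the value as a plain JSON-formatted string in the seed CSV, e.g. ["blablabla"], and unpack it in a staging model:

    -- models/staging/stg_my_seed.sql (hypothetical model name)
    -- my_array_col is stored in the seed as a string column holding JSON
    -- such as ["blablabla"]; json_extract_array turns it into an ARRAY.
    select
        id,
        json_extract_array(my_array_col) as my_array_col
    from {{ ref('my_seed') }}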

Which file type is better for importing data into SQL Server: CSV or JSON? [closed]

I am taking part in a project where a third-party company will provide us with an export of our customer data so that we can import them into our in-house system.
Each customer record has about twenty fields. The data types are strings, booleans, integers and dates (with time components and UTC offset components). None of the strings are longer than 250 characters. The integers can range from 0 to 100,000 inclusive.
I have to import about 2 million users into a SQL Server database. I am in the planning phase and trying to determine if I should ask for the export file in csv or json. I am planning on asking for both (just in case), but I don't yet know if I can.
If I can only pick one file-type (csv or json), which is better for this kind of work? Can anyone with experience importing data into SQL Server provide any advice on which is better?
Both are fast if you use a bulk method.
From SQL Server 2016 onward, JSON is natively supported and you can manipulate it easily with the built-in JSON functions.
You can also import the file directly via T-SQL with OPENJSON
and OPENROWSET (BULK ...). Alternatively, you can put that T-SQL into an SSIS package.
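For instance, a minimal sketch of the OPENROWSET + OPENJSON route (the file path, target table, and column names are placeholders):

    -- Assumes C:\import\customers.json holds a JSON array of customer objects.
    INSERT INTO dbo.Customers (CustomerId, FullName, IsActive, CreatedAt)
    SELECT CustomerId, FullName, IsActive, CreatedAt
    FROM OPENROWSET (BULK 'C:\import\customers.json', SINGLE_CLOB) AS j
    CROSS APPLY OPENJSON (j.BulkColumn)
    WITH (
        CustomerId  int                '$.customerId',
        FullName    nvarchar(250)      '$.fullName',
        IsActive    bit                '$.isActive',
        CreatedAt   datetimeoffset(7)  '$.createdAt'
    );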
See this article for more details:
https://www.sqlshack.com/import-json-data-into-sql-server/

Apply database changes after editing tables in SQL Server [closed]

I have a SQL Server database, and I made some changes to its tables and data types,
but the database already contains data.
I used an INSERT ... SELECT statement to copy the data from the old database to the new one after the edit.
Is this the best way to bring the data into the new database, or is there a better way?
It depends why you are doing it and when.
If you are doing this as a one-off process on your development machine and there are only a small number of simple tables, then this approach will be fine.
If you are doing it for a larger number of tables, need it to be re-runnable, or want a GUI, then use SSIS or the SQL Server Import and Export Wizard. It's easy to use.
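For the simple one-off case, a minimal sketch of the INSERT ... SELECT approach (the database, table, and column names are placeholders):

    -- Copy between two databases on the same instance, casting where a
    -- datatype changed in the new schema.
    INSERT INTO NewDb.dbo.Customers (Id, Name, CreatedAt)
    SELECT Id, Name, CAST(CreatedDate AS datetime2)
    FROM   OldDb.dbo.Customers;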

Need a query to get a list of used and unused tables [closed]

I gave my boss a text file with a list of tables from our schema, telling him that we do NOT need to back up those tables. Now he is asking me to write a query that returns the rest of the tables, i.e. the ones that REQUIRE BACKUP.
He gave me the hint that I will have to use USER_TABS and a difference (DIFF) in my query.
Can anybody please help?
I am using an Oracle database.
What criteria defines a table that is "unused" and what criteria defines a table that "needs backup"? And what do you mean by "backup"?
If you're talking about backups, you'd normally be talking about physical backups in which case you back up the entire database (either a full backup or an incremental backup depending on how you've structured your backup strategy). You would not and could not include or exclude tables from a physical backup. You could potentially move the tables to a read-only tablespace which lets Oracle know that they don't need to be backed up again. But it doesn't sound like that's what you're after.
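If all that is wanted is the mechanical query from the hint, here is a minimal sketch (this assumes the "do not back up" list from the text file has been loaded into a helper table, here called TABLES_TO_SKIP, with one table name per row):

    -- Tables owned by the current user that are NOT in the exclusion list.
    SELECT table_name
    FROM   user_tables
    MINUS
    SELECT table_name
    FROM   tables_to_skip;

MINUS is Oracle's set-difference operator, which is presumably what the "DIFF" hint refers to.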

Is it possible to dump a MySQL schema into LaTeX tables? [closed]

I am documenting a database, and it would be great to have the functionality to update the tables automatically based on the current state of the database.
I know phpMyAdmin does this, but it is a buggy GUI that doesn't provide many options, so I end up having to write a script with sed to find and replace things that I don't want and add things that I do.
You can accomplish this via the SHOW TABLES and SHOW CREATE TABLE commands.
It may not be an easy task to reconcile what has and has not changed, but the basic approach is to get a list of the tables in the database and then run SHOW CREATE TABLE table_name for a full spec of each table's schema. You could also use the EXPLAIN command; though it's similar, it also contains higher-level information. I'd argue that SHOW CREATE is best, since that's all you need to see to replicate the schema.
If I had more specifics about how you want to use this information, I'd amend this answer accordingly, especially which programming language you're using to connect to MySQL. You can even use the command line to get this info, but you'll want some intelligent processing in order to reconcile it against the existing fields in your replicated data containers.
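As one rough sketch of the idea, you can skip parsing SHOW CREATE TABLE output and instead generate LaTeX tabular rows directly from information_schema (the database and table names are placeholders, and the chosen columns are just an example):

    -- One LaTeX row per column: name & type & comment \\
    SELECT CONCAT(column_name, ' & ', column_type, ' & ',
                  column_comment, ' \\\\') AS latex_row
    FROM   information_schema.columns
    WHERE  table_schema = 'mydb'
      AND  table_name   = 'mytable'
    ORDER  BY ordinal_position;

Wrap the output in a \begin{tabular}{lll} ... \end{tabular} environment in your script and re-run it whenever the schema changes.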