Exporting Oracle Database Using TOAD - sql

I have this strange error when exporting a database. This is the error report:
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user WELTESADMIN
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user WELTESADMIN
About to export WELTESADMIN's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export WELTESADMIN's tables via Conventional Path ...
EXP-00008: ORACLE error 904 encountered
ORA-00904: "POLTYP": invalid identifier
EXP-00000: Export terminated unsuccessfully
I have granted DBA access to the user executing the export, and this is what happened. Please help me.

Related

Problems Loading data from CSV into a PostgreSQL table with PGADMIN

I am at a loss. I am on a Mac using pgAdmin 4 with PostgreSQL. I tried to use the import/export data wizard (right-click on the table I created) and get this error: ERROR: extra data after last expected column. All of the columns match up and there is no additional data. I don't know what needs to change.
So then I tried to create it with the query tool, using the following code (renaming things to post here):
create schema schema_name;
create table schema_name.tablename( column1 text, column2 text...); -- all the columns are a text data type
copy schema_name.tablename
from '/Users/me/downloads/filename.csv'
delimiter ',' header csv;
and get this error message:
ERROR: could not open file "/Users/me/downloads/filename.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Going to the properties for that database, then to Security and Privileges, I granted all privileges to PUBLIC, but that did nothing. And so I am here to ask y'all for help.
The goal is to successfully import the data from the CSV.
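The hint in the error points at psql's client-side \copy, which reads the file on the machine where psql runs instead of on the database server. A minimal sketch using the same table and file path as in the question (run from psql on the Mac; the options are the standard COPY options):
-- \copy is a psql meta-command and must be written on a single line
\copy schema_name.tablename from '/Users/me/downloads/filename.csv' with (format csv, header, delimiter ',')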

Data migration from CockroachDB to PostgreSQL

I am trying to migrate my CockroachDB data into PostgreSQL:
I have dumps of CockroachDB data in .sql format, like booking.sql etc.
I tried many ways to solve this problem:
I tried a direct import of the dump file using psql, but since the dump file came from CockroachDB it shows some syntax errors.
My second plan was to restore the dump file back into a CockroachDB system and try running pg_dump from there, but I am not able to restore the database in CockroachDB.
ERROR: failed to open backup storage location: unsupported storage scheme: "" - refer to docs to find supported storage schemes
I tried again with the IMPORT statement from CockroachDB, but to no avail.
With my little knowledge I also searched Google and YouTube, but with the little documentation available I didn't find anything useful.
Any help will be appreciated. Thank you.
For exporting data from CockroachDB there are some limitations: you can't export your data directly to SQL in newer versions.
The first way of exporting is the cockroach dump command, but it has been deprecated since version 20.2, so if you are using a newer version this won't work.
cockroach dump <database> <table> <table...> <flags>
sample:
cockroach dump startrek --insecure --user=maxroach > backup.sql
In newer versions, you can export your data into CSV files with the EXPORT statement, which works per table (or per query) rather than per database:
EXPORT INTO CSV 's3://{BUCKET NAME}/{PATH}?AWS_ACCESS_KEY_ID={KEYID}&AWS_SECRET_ACCESS_KEY={SECRET ACCESS KEY}' FROM TABLE bank.{TABLE NAME};
To export to the local node's store:
EXPORT INTO CSV 'nodelocal://1/{PATH}' FROM TABLE bank.{TABLE NAME};
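Once the CSV files exist, they can be loaded into PostgreSQL with COPY. This is only a sketch: it assumes the target table (booking here, taken from the dump file names mentioned in the question) has already been created in PostgreSQL, and the file path is just an example:
-- the file must be readable by the PostgreSQL server process; from a client machine use psql's \copy instead
COPY booking FROM '/path/on/the/postgres/server/booking.csv' WITH (FORMAT csv);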
The other alternative way of exporting is to use a database client such as DBeaver.
You can download and install DBeaver from https://dbeaver.io/download/.
After adding the connection you can export the database via: right-click on the database > Tools > Backup.
The fastest and easiest way of exporting is using a database tool like DBeaver.
I hope this answer is helpful.

Hello all, I am getting SAP DBTech JDBC: [2]: general error: Remote export failed: while exporting a table into csv file

I am getting the below error while exporting a table to CSV in HANA Studio.
SAP DBTech JDBC: [2]: general error: Remote export failed: export size exceeds 20% of available memory, please use server-local export.
My table has 186 million records.
Please let me know how to resolve this issue and how to run a server-local export.
Pallavi, there is an export limitation applied to HANA users. You can check it by going into the settings of your HANA Studio. That's something you need to sort out, but with that many rows as you mentioned above, I would suggest slicing the extract into multiple extracts.
Here is the admin settings link from SAP which will guide you to the particular setting I am referring to:
https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/c06b0a63bb5710148bb5e18dfd71c237.html
Look for the query limit and the exported data to file settings.
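For the server-local export that the error message asks for, the SQL EXPORT statement can be run from the SQL console; it writes the files to a directory on the HANA server itself. A minimal sketch, where the schema, table name, and path are placeholders to adapt:
-- writes CSV files to a directory on the HANA server, not to the client PC
EXPORT "MYSCHEMA"."MYTABLE" AS CSV INTO '/path/on/the/hana/server' WITH REPLACE THREADS 4;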

Import a CSV file with Access frontend into SQL Server

Background:
In my company we have many CSV files which have to be imported into SQL Server. The CSV files contain multidimensional market simulations, stored in EAV form (2 columns and 10^6 to 10^10 rows). Their size is variable, but it is not unusual for them to be more than 500 MB.
Until now, these files were imported by a database administrator via SSMS into SQL Server.
Every importation should get an ImportationID and a Timestamp. This is time-consuming and error-prone for the database administrator who does this manually.
Thus, an Access frontend was created to allow every user to easily import a CSV file into the server after making a selection in a listbox.
Now I am faced with the problem of importing the CSV file through the Access interface.
Problem:
Here are the options I have considered, which turn out not to be possible:
Pass some T-SQL commands to the SQL Server, as listed here (not allowed by Access)
Import the CSV line by line with a VBA loop (takes too long for 10^6 to 10^10 rows)
Import the CSV file into the Access database and then export the table to SQL Server (the 2 GB size limit of Access makes this impossible)
Is there any other option to perform this task using Access?
One possible solution is as follows. Your Access frontend has a form that accepts three values: file name/location; ImportationID; Timestamp. After the user enters this data, the 'Go' or 'Submit' button fires a stored procedure on the SQL Server database that accepts these 3 variables.
The stored procedure will issue a BULK INSERT (or another of the commands you linked to) to get the CSV into the database, and then manipulate and transform the data according to your business rules (setting ImportationID and Timestamp correctly, for example).
This is something that a database developer (or maybe a database admin) should be able to set up, and any validation or security constraints can be enforced on the database.
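A minimal sketch of such a procedure; the procedure name, target table, and staging column layout below are assumptions that have to be adapted to the real schema:
-- Hypothetical procedure: stages the raw EAV rows, then stamps them with ImportationID and a timestamp.
CREATE PROCEDURE dbo.usp_ImportMarketCsv
    @FilePath      NVARCHAR(260),   -- UNC path visible to the SQL Server service account
    @ImportationID INT,
    @ImportedAt    DATETIME2
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #staging (Attribute NVARCHAR(100), Value FLOAT);

    -- BULK INSERT needs a literal file path, hence the dynamic SQL.
    DECLARE @sql NVARCHAR(MAX) =
        N'BULK INSERT #staging FROM ''' + REPLACE(@FilePath, '''', '''''') + N'''
          WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2);';
    EXEC sp_executesql @sql;

    -- Copy into the target table, stamping every row.
    INSERT INTO dbo.MarketSimulation (ImportationID, ImportedAt, Attribute, Value)
    SELECT @ImportationID, @ImportedAt, Attribute, Value
    FROM #staging;
END;
The Access form then only has to pass the three values when it calls the procedure, e.g. EXEC dbo.usp_ImportMarketCsv @FilePath = N'\\server\share\simulation.csv', @ImportationID = 42, @ImportedAt = SYSDATETIME();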

Is it possible to use sql server bulk insert without a file?

Curious if this is possible: the app server and DB server live in different places (obviously). The app server currently generates a file for use with SQL Server bulk insert.
This requires both the DB and the app server to be able to see the location, and it makes configuration more difficult in different environments.
What I'd like to know is: is it possible to bypass the file system in this case? Perhaps I can pass the data to SQL Server and have it generate the file?
I'm on SQL Server 2008, if that makes a difference.
Thanks!
I don't think you can do that with SQL Server's bcp tool, but if your app is written using .NET, you can use the System.Data.SqlClient.SqlBulkCopy class to bulk insert rows from a DataTable (or any data source you can access with a SqlDataReader).
From the documentation on bulk insert:
BULK INSERT
[ database_name. [ schema_name ] . | schema_name. ] [ table_name | view_name ]
FROM 'data_file'
The FROM 'data_file' clause is not optional and is documented as follows:
'data_file'
Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can import data from a disk (including network, floppy disk, hard disk, and so on).
data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify the Universal Naming Convention (UNC) name. A UNC name has the form \\Systemname\ShareName\Path\FileName. For example, \\SystemX\DiskZ\Sales\update.txt.
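To illustrate that constraint, here is a minimal BULK INSERT against a UNC share that the database server can reach; the share, file, and table names are placeholders:
-- the UNC path is resolved by the SQL Server service account, not by the application server
BULK INSERT dbo.TargetTable
FROM '\\AppServer\Exports\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);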
Your application could do the insert directly using whatever method meets your performance needs.