I am trying to generate a report from Oracle Database 19c with ora2pg v23.1.
Command used: ora2pg -t show_report --dump_as_html -l db_report_filename.html -c E:\ora2pg\ora2pg.conf
Error generated in the HTML report:
FATAL: ORA-00604: error occurred at recursive SQL level 1 ORA-08177: can't serialize access for this transaction (DBD ERROR: OCIStmtExecute)
Looking for ideas to resolve this issue.
This issue was fixed by changing a setting in the ora2pg configuration file.
Data are exported in a serialized transaction mode to get a consistent snapshot of the data; see the Oracle documentation for the parameter to increase to avoid this issue. Or, if you are sure that no modifications are being made in the Oracle database, you can force Ora2Pg to use a read-only transaction instead; see the TRANSACTION directive in ora2pg.conf.
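For example, a minimal sketch of that change in ora2pg.conf, assuming no writes happen on the Oracle side while the report runs:

# Use a read-only transaction instead of the serializable mode described above,
# which avoids ORA-08177 when the source data is not being modified.
TRANSACTION readonly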
Related
I have a SQL dump that I need to import into PostgreSQL via pgAdmin 4, but when I run the command, the schema gets created and none of the data comes with it. I already have the database set up in pgAdmin 4. This is my first time using PostgreSQL and pgAdmin, so I know I have to be missing something.
The SQL dump file was sent to me directly; I did not use pg_dump to migrate anything. The file is in my Downloads folder and I need to load it into pgAdmin.
I need this SQL dump because I need to log into several portals locally for a large project.
On Windows, using PostgreSQL 14, I've tried several approaches from other Stack Overflow answers, first using the command line in both Bash and PowerShell.
This is the command a coworker told me to use, which should add the tables and data for the app; it worked fine for him.
C:\Program Files\PostgreSQL\14\bin>psql -h localhost -U postgres -d the_database -f PATH_TO_YOUR_DOWNLOADS\data_dump.sql
This command will create the schema in the pgAdmin database, but no data comes with it. (I know the data is missing because I can't use my dummy logins to get into the project.)
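A sketch of one way to surface any error from that import, using standard psql options (the host, database, and file path are just the ones from the command above):

psql -h localhost -U postgres -d the_database -v ON_ERROR_STOP=1 -L import.log -f PATH_TO_YOUR_DOWNLOADS\data_dump.sql

With -v ON_ERROR_STOP=1, psql stops at the first failing statement instead of silently continuing, and -L import.log keeps a session log, so whatever is rejecting the data rows should show up explicitly.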
Second, I tried using the built-in restore and backup methods in pgAdmin, and both of those end in an error:
Process failed: Restoring backup on the server 'PostgreSQL 14 (localhost:5432)'
Third, I tried using the Query Tool and loading the SQL file that way, but when I hit Execute I get an error there as well.
Using the Query Tool, when I open the downloaded file, I can see the data in the query editor, but it is not in the database.
ERROR: syntax error at or near "2"
LINE 3285: 2 Some Test 2020-11-13 07:42:29.356827 2020-11-13 04:32:...
^
SQL state: 42601
Character: 87447
Any advice?
Do I need the SQL file formatted in any certain way?
I just need the data to be imported into the pgAdmin 4 database WITH my schema.
Problem:
Getting the below-mentioned error while importing a schema from AWS PostgreSQL to Google Cloud SQL for PostgreSQL.
Error:
Import failed:
SET
SET
SET
SET
SET set_config
------------
(1 row)
SET
SET
SET
CREATE SCHEMA
SET
SET
CREATE TABLE
ERROR: syntax error at or near "AS" LINE 2: AS integer ^
Import error: exit status 3
I used --no-acl --no-owner --format=plain while exporting data from AWS postgres
pg_dump -Fc -n <schema_name> -h hostname -U user -d database --no-acl --no-owner --format=plain -f data.dump
I am able to import certain schemas into Cloud SQL that were exported using the same method, but I get this error for some other, similar schemas. The table has geospatial info, and PostGIS is already installed in the destination database.
Looking for some quick help here.
My solution:
Basically, I had a data dump file from PostgreSQL 10.0 with tables using a 'sequence' for the PK. Apparently, the way sequences (along with the other table data) got dumped into the file was not read properly by Cloud SQL's PostgreSQL 9.6. That is where it was giving the "AS integer" error. I also finally found the expression in the dump file that I couldn't find earlier; I had to filter out this bit:
CREATE SEQUENCE sample.geofences_id_seq
AS integer <=====had to filter out this bit to get it working
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
Not sure if anyone else has faced this issue, but I did, and this solution worked for me without losing any functionality.
Happy to get other better solutions here.
The original answer is correct, and similar answers are given for the general case. Options include:
Upgrading the target database to 10: this depends on what you are using in GCP. For a managed service like Cloud SQL, upgrading is not an option (though support for 10 is in the works, so waiting may be an option in some cases). It is an option if you are running the database inside a Compute Engine instance, or as a container in, e.g., App Engine (a ready-made instance is available from the Marketplace).
Downgrading the source before exporting. Only possible if you control the source installation.
Removing all instances of this one line from the file before uploading it. Adapting other responses to modify an already-created dump file, the following worked for me:
sed -e '/AS integer/d' dump10.sql > dump96.sql
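If the dump could contain other lines with that text (for example in comments or column expressions), a slightly tighter pattern that only drops standalone "AS integer" lines is a safer variant of the same idea:

sed -E '/^[[:space:]]*AS integer[[:space:]]*$/d' dump10.sql > dump96.sql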
On trying to load a JSON file into BigQuery, I get the following error: "An internal error occurred and the request could not be completed. Error: 8822097". Is this error related to hitting the BigQuery daily load limit? It would be amazing if someone could point me to a glossary of errors.
{Location: ""; Message: "An internal error occurred and the request could not be completed. Error: 8822097"; Reason: "internalError"
Thanks!
Are you trying to load different types of files in a single command?
It may happen when you try to load from a Google Cloud Storage path containing both compressed and uncompressed files:
$ gsutil ls gs://bucket/path/
gs://bucket/path/a.txt
gs://bucket/path/b.txt.gz
$ bq load --autodetect --noreplace --source_format=NEWLINE_DELIMITED_JSON "project-id:dataset_name.table_name" gs://bucket/path/*
Waiting on bqjob_id_1 ... (0s) Current status: DONE
BigQuery error in load operation: Error processing job 'project-id:bqjob_id_1': An internal error occurred and the request could not be completed. Error: 8822097
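If that is the cause, a possible workaround (a sketch that assumes the same bucket layout as above) is to load the compressed and uncompressed objects in separate jobs, so that each job sees only one kind of file:

$ bq load --autodetect --noreplace --source_format=NEWLINE_DELIMITED_JSON "project-id:dataset_name.table_name" "gs://bucket/path/*.txt"
$ bq load --autodetect --noreplace --source_format=NEWLINE_DELIMITED_JSON "project-id:dataset_name.table_name" "gs://bucket/path/*.gz"

BigQuery accepts a single * wildcard inside the object name, so splitting by extension keeps each load homogeneous.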
This error can occur when you hit BigQuery's limit of 10,000 columns per table.
To verify this, you can check the number of distinct columns in the used table:
bq --format=json show project:dataset.table | jq . | grep "type" | grep -v "RECORD" | wc -l
Reducing the number of columns would probably be the best and quickest way to work around this issue.
We got the same error, "An internal error occurred and the request could not be completed. Error: 8822097", when running a standard SQL query. Running the corresponding legacy SQL query gave us an error message that was actually actionable:
Error while reading table: ABC, error message: The reference schema
differs from the existing data: The required field 'XYZ' is
missing.
Fixing the underlying error, exposed by the legacy SQL query, also fixed the error for the standard SQL query.
In our case we have Avro files, and the table was created from them. Newer Avro files didn't contain a certain field, but the table still contained that field. Rebuilding the table from the new Avro files solved the issue. We also have views on top of the table, which may or may not change the resulting error message.
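A sketch of running the corresponding query through legacy SQL from the command line, in case it helps someone get the more descriptive error (the table and field names here are placeholders, not the ones from our case):

$ bq query --use_legacy_sql=true 'SELECT some_field FROM [project-id:dataset_name.table_name] LIMIT 10'

Legacy SQL references the table in square brackets; for us, the error it returned named the missing required field, which the standard SQL error did not.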
Hi, I am using SQL Server 2005 Service Pack 4 on both the publisher and the distributor. While trying to set up merge replication, I am getting the below error continuously. Here are the replication details.
I am using a push subscription, and the snapshot path is a network path.
The distributor and publisher are on the same server.
I have restored a recent backup on the subscriber and a one-week-old backup on the publisher.
I am setting up replication for only a few tables, procedures, and user-defined functions.
I have verified that both the publisher and the subscriber have the same schema.
Initially the replication was failing saying it was unable to drop user-defined functions; to resolve that, I set the publisher property for user-defined functions to "Keep existing object unchanged".
Every time, the error comes after the synchronization has been running for around 50 to 55 minutes.
My Snapshot Agent is working fine without any issue; the problem is only with the Merge Agent.
I have changed the verbose history value to 3 in the Merge Agent profile, but it is not giving any additional information.
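For reference, a sketch of restarting the synchronization with verbose output written to a file, which is also what the error text below asks for, by running the Merge Agent executable directly (everything in angle brackets is a placeholder; the parameters are standard replmerg.exe ones):

replmerg.exe -Publisher <PublisherServer> -PublisherDB <PublisherDB> -Publication <PublicationName> -Subscriber <SubscriberServer> -SubscriberDB <SubscriberDB> -Distributor <PublisherServer> -DistributorSecurityMode 1 -PublisherSecurityMode 1 -SubscriberSecurityMode 1 -OutputVerboseLevel 2 -Output C:\temp\mergeagent.log

The log file then records each step the agent runs, including the failing bulk copy.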
Error messages: The merge process was unable to deliver the snapshot
to the Subscriber. If using Web synchronization, the merge process may
have been unable to create or write to the message file. When
troubleshooting, restart the synchronization with verbose history
logging and specify an output file to which to write. (Source:
MSSQL_REPL, Error number: MSSQL_REPL-2147201001)
Get help: http://help/MSSQL_REPL-2147201001
The process could not bulk copy into table
'"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number:
MSSQL_REPL20037)
Get help: http://help/MSSQL_REPL20037
The system cannot find the file specified. (Source: MSSQLServer, Error
number: 0)
Get help: http://help/0
To obtain an error file with details on the errors encountered when
initializing the subscribing table, execute the bcp command that
appears below. Consult the BOL for more information on the bcp
utility and its supported options. (Source: MSSQLServer, Error number:
20253)
Get help: http://help/20253
bcp "greyhound"."dbo"."refund_import_log" in
"\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp"
-e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w (Source: MSSQLServer, Error number: 20253)
Here I am getting the problem with a different table every time.
Is there any bug related to this? If so, where can I get the fix? If it is not a bug, then please let me know how to resolve this problem.
The error message tells you the problem:
The process could not bulk copy into table '"dbo"."refund_import_log"'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20037)
It then gives you a perfectly good repro, to see why bulk copy is failing:
bcp "greyhound"."dbo"."refund_import_log" in "\usaz-ism-db-02\ghstgrpltest\unc\USAZ-ISM-DB-02_GREYHOUND_GREYHOUND-STAGE\20150529112681\refund_import_log_7.bcp" -e "errorfile" -t"\n\n" -r"\n<,#g>\n" -m10000 -SUSGA-QTS-GT-01 -T -w
Looking at the bcp repro above, can you please double-check the UNC path that you set for the snapshot folder; it looks incorrect to me. UNC paths should have two backslashes at the beginning, and yours only has one. The UNC path should look like this:
\\usaz-ism-db-02\ghstgrpltest\unc\
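If the bad path came from the publication's alternate snapshot folder rather than the distributor's default folder, a sketch of checking and correcting it on the publisher (the publication name is a placeholder; if the default folder is the culprit, change it in the Distributor Properties dialog instead):

-- Show the current snapshot settings for the merge publication
EXEC sp_helpmergepublication @publication = N'<PublicationName>';

-- Set the alternate snapshot folder to a proper UNC path (two leading backslashes)
-- and invalidate the existing snapshot so a new one is generated from the right place
EXEC sp_changemergepublication
    @publication = N'<PublicationName>',
    @property = N'alt_snapshot_folder',
    @value = N'\\usaz-ism-db-02\ghstgrpltest\unc\',
    @force_invalidate_snapshot = 1;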
I need to parse a CSV file and write the data to a Vertica database. The issue is that I get an error when I create a Vertica database connection in Spoon; the full error is at the end of this post.
I tried copying the following two JAR files and adding them to libext/jdbc:
vertica-jdbc-4.1.14.jar and vertica-jdk5-6.1.2-0.jar
But the above didn't help. I am looking for pointers!
Error:
Error connecting to database [Vertica Dev] : org.pentaho.di.core.exception.KettleDatabaseException:
Error occured while trying to connect to the database
Exception while loading class
com.vertica.jdbc.Driver
org.pentaho.di.core.exception.KettleDatabaseException:
Error occured while trying to connect to the database
Exception while loading class
com.vertica.jdbc.Driver
at org.pentaho.di.core.database.Database.normalConnect(Database.java:366)
The two JAR files you copied are from two different versions of the Vertica JDBC driver and do not expose the same driver class.
vertica-jdk5-6.1.2-0.jar will expose com.vertica.jdbc.Driver whereas version 4 will expose com.vertica.Driver.
The error message thus makes it obvious that Pentaho is looking for com.vertica.jdbc.Driver (hence, version 5). If it fails, it is probably because the version 4 JAR is loaded first.
Try deleting only the version 4 JAR from libext/jdbc, keep version 5, and restart Pentaho.
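If you want to confirm which driver class each JAR actually contains before deleting anything, listing the JAR contents is a quick check (this assumes a JDK's jar tool is on the PATH; use findstr instead of grep on Windows):

jar tf vertica-jdk5-6.1.2-0.jar | grep Driver.class
jar tf vertica-jdbc-4.1.14.jar | grep Driver.class

The file that ships com/vertica/jdbc/Driver.class is the one Pentaho's Vertica connection expects to load.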
On a side note, this class is hardcoded in Pentaho, so if you do need to use the JAR version 4 and feel adventurous, you just need to get the Pentaho source, update VerticaDatabaseMeta.java, and recompile.