Unable to connect to Google BigQuery using Mule - google-bigquery

I am trying to connect to BigQuery to migrate data and am currently facing some connection issues.
These are the two approaches I am currently focusing on.
Database Connector, passing a JDBC connection string
<db:config name="Database_Config" doc:name="Database Config" doc:id="07671c43-86bc-4768-b914-c71058120615" >
<db:generic-connection url="jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=<project-id>;OAuthType=0;OAuthPvtKeyPath=<path-to-key-file>;OAuthServiceAcctEmail=<email>" driverClassName="com.simba.googlebigquery.jdbc42.Driver"/>
</db:config>
The above results in the following error when testing the connection:
ToolingException{message='Got status code: 500 when trying to
resolve a Mule Runtime operation. Reason: 'Server Error.
{"errorType":null,"errorMessage":null,"errorDetail":null,"additionalProperties":{servlet=org.glassfish.jersey.servlet.ServletContainer-7bf5036,
message=Request failed.,
url=/mule/tooling/applications/4cfe61fc-a304-4ee5-8b52-36ed6e4cfd66/components/Database_Config/connection,
status=500}}'' , rootCauseMessage='null' , rootCauseType='null'
, rootCauseStackTrace='[]'} at
org.mule.tooling.client.api.exception.ToolingException$Builder.build(ToolingException.java:141)
at
org.mule.tooling.agent.rest.client.RestAgentToolingService.handleToolingAgentHandlerException(RestAgentToolingService.java:911)
at
org.mule.tooling.agent.rest.client.RestAgentToolingService.lambda$serviceExceptionOrToolingException$43(RestAgentToolingService.java:859)
at
org.mule.tooling.agent.rest.client.RestAgentToolingService.serviceExceptionOr(RestAgentToolingService.java:873)
BigQuery Connector provided by MuleSoft
<bigquery:config name="BigQuery__Configuration" doc:name="BigQuery Configuration" doc:id="3b8287ed-f3e5-4721-bd9e-96950347cf3a" >
<bigquery:jwt-connection privateKeyId="<private-key-id>" privateKey="<private-key>" issuer="<email>" projectId="<project-id>" />
</bigquery:config>
The above results in the following error:
org.mule.runtime.api.connection.ConnectionException: Could not
create connection Caused by:
org.mule.runtime.api.exception.MuleRuntimeException: Invalid PKCS8
data.
I made sure the credentials passed are correct, and we also confirmed with the customer that the account has the appropriate privileges, but we still have not found a way to solve this.
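One way to double-check the credentials independently of Mule is to exercise the same service-account key from a small Python script. This is only a sketch under assumptions not in the original post (a key file at key.json, and the cryptography and google-cloud-bigquery packages installed); it checks that the private key parses as PEM/PKCS#8, which is what the connector's "Invalid PKCS8 data." error complains about, and that the account can run a trivial query.
import json

from cryptography.hazmat.primitives import serialization
from google.cloud import bigquery

KEY_FILE = "key.json"  # hypothetical path to the service-account JSON key file

# 1) Check that the private key inside the JSON key parses as PEM/PKCS#8.
#    A failure here (for example literal "\n" escapes instead of real newlines,
#    or stripped BEGIN/END lines) is the kind of problem an "Invalid PKCS8 data."
#    message usually points at.
with open(KEY_FILE) as f:
    key_json = json.load(f)

serialization.load_pem_private_key(key_json["private_key"].encode(), password=None)
print("private_key parses as PEM/PKCS#8")

# 2) Check that the service account can actually reach BigQuery.
client = bigquery.Client.from_service_account_json(KEY_FILE)
for row in client.query("SELECT 1 AS ok").result():
    print("query succeeded, ok =", row.ok)
If both checks pass, the key material and privileges are fine and the problem is more likely in the Mule/driver configuration itself.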
References
mulesoft-bigquery-jdbc

Related

Load from GCS to GBQ causes an internal BigQuery error

My application creates thousands of "load jobs" daily to load data from Google Cloud Storage URIs to BigQuery and only a few cases causing the error:
"Finished with errors. Detail: An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 7916072"
The application is written in Python and uses these libraries:
google-cloud-storage==1.42.0
google-cloud-bigquery==2.24.1
google-api-python-client==2.37.0
The load job is created by calling:
load_job = self._client.load_table_from_uri(
    source_uris=source_uri,
    destination=destination,
    job_config=job_config,
)
This method has a default parameter:
retry: retries.Retry = DEFAULT_RETRY,
so the job should automatically retry on such errors.
ID of a specific job that finished with the error:
"load_job_id": "6005ab89-9edf-4767-aaf1-6383af5e04b6"
"load_job_location": "US"
After getting the error, the application recreates the job, but it doesn't help.
Subsequent failed job ids:
5f43a466-14aa-48cc-a103-0cfb4e0188a2
43dc3943-4caa-4352-aa40-190a2f97d48d
43084fcd-9642-4516-8718-29b844e226b1
f25ba358-7b9d-455b-b5e5-9a498ab204f7
...
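For reference, a job that has already failed can be re-fetched by its ID and inspected. The sketch below is only illustrative (it assumes default application credentials and uses one of the job IDs above) and prints the error details BigQuery recorded for the job.
from google.cloud import bigquery

client = bigquery.Client()  # assumes default application credentials

job = client.get_job(
    "6005ab89-9edf-4767-aaf1-6383af5e04b6",  # one of the failed job IDs above
    location="US",
)
print("state:", job.state)
print("error_result:", job.error_result)  # the error that caused the job to fail
for err in job.errors or []:              # all errors reported for the job
    print(err)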
As mentioned in the error message, wait according to the back-off requirements described in the BigQuery Service Level Agreement, then try the operation again.
If the error continues to occur and you have a support plan, please create a new GCP support case. Otherwise, you can open a new issue on the issue tracker describing the problem. You can also try to reduce the frequency of this error by using Reservations.
For more information about the error messages you can refer to this document.
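If the default retry does not cover this particular internal error, one option is to pass an explicit retry policy with exponential back-off. This is only a sketch under assumptions from the question (the same source_uri, destination and job_config variables; a client standing in for self._client), using google.api_core.retry:
from google.cloud import bigquery
from google.api_core.retry import Retry, if_exception_type
from google.api_core.exceptions import InternalServerError, ServiceUnavailable

client = bigquery.Client()  # stands in for self._client from the question

# Hypothetical back-off policy: start at 1s, double each attempt, cap waits at
# 60s, and give up after 10 minutes overall.
backoff = Retry(
    predicate=if_exception_type(InternalServerError, ServiceUnavailable),
    initial=1.0,
    maximum=60.0,
    multiplier=2.0,
    deadline=600.0,
)

load_job = client.load_table_from_uri(
    source_uris=source_uri,        # same variables as in the question
    destination=destination,
    job_config=job_config,
    retry=backoff,                 # retries the job-insert API call itself
)
load_job.result(retry=backoff)     # retries transient errors while polling;
                                   # it does not resubmit a job that has failed
Because result() does not resubmit a failed job, jobs that fail with this internal error still need to be recreated at the application level after the back-off wait.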

create sequence via ucanaccess

I am having some trouble creating a sequence in an MS Access database via the DBeaver / UCanAccess connector.
Does anybody know if this should be possible, or whether it is simply not implemented?
The error I get is the following:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL-Fehler: UCAExc:::5.0.1 net.ucanaccess.jdbc.FeatureNotSupportedException: Feature not supported yet.
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:133)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:577)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$1(SQLQueryJob.java:486)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:172)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:493)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:894)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3645)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:123)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:172)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:121)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4949)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)

Datastream Troubleshoot: "An unknown error occurred. Please try again. If the error persists, contact Google support"

We are trying to replicate data from AlloyDB to BigQuery using Datastream.
We Get "An unknown error occurred. Please try again. If the error persists, contact Google support."
In the Datastream console --> objects list, we see all source tables with Object Status "Failed" and Backfill status "Completed".
In BigQuery we see only a subset of the tables (not all of the "Completed" objects were synced).
In the Logs Explorer I can see this error on BQ:
I also see this error:
error: {
  code: 11
  message: "Unsupported primary key column either does not exist or is a pseudocolumn at [1:401]"
}
The column referred to in the error is of type enum.
The desired outcome is to have all the AlloyDB tables replicated into BigQuery.
The error message is not very informative...
What does it mean?
What would be the best way to go about troubleshooting this?
We're actively working on making these error messages more informative, and improvements are continuously being rolled out as we identify more edge cases. Assuming you followed all the steps in the documentation, you may need to open a ticket with support for further investigation. If a support ticket isn't an option, you can still report the issue using the public issue tracker.
I just had this same issue, but connecting to a PostgreSQL instance in AWS RDS:
Beginning with PostgreSQL 10, passwords are encrypted using SCRAM-SHA-256. Google Datastream still expects MD5 password encryption; otherwise it will generate an "unknown error" in the logs and fail the backfills.
You'll need to update your postgresql.conf (or RDS Cluster Parameter Group if you're using AWS like me):
password_encryption = 'MD5'
Restart the database and make sure the parameter has changed with:
SHOW password_encryption;
Reset the password of your users:
ALTER USER "{username}" with password '{password}';
More info from the PostgreSQL docs: https://www.postgresql.org/docs/current/auth-password.html
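To confirm the change took effect for the user Datastream connects with, a quick connection test can help. This is only a sketch, assuming psycopg2 is installed; the host, database, user and password below are placeholders to replace with your own values.
import psycopg2

# Hypothetical connection details -- replace with your RDS endpoint and the
# user configured in the Datastream connection profile.
conn = psycopg2.connect(
    host="your-rds-endpoint",
    dbname="your-database",
    user="datastream_user",
    password="datastream_password",
)
with conn.cursor() as cur:
    cur.execute("SHOW password_encryption;")
    print("password_encryption =", cur.fetchone()[0])  # should print 'md5'
conn.close()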

failed to connect to a database in B1if

Sometimes I get this error when I try to connect to my database in B1if:
vBIU.errhdlg='' exceptionmsg='com.sap.b1i.xcellerator.XcelleratorException: XCE001 Nested exception:
com.sap.b1i.bizprocessor.BizProcException: BPE001 Nested exception:
com.sap.b1i.xcellerator.XcelleratorException: XCE001 Nested exception:
com.sap.b1i.xcellerator.XcelleratorException: XCE001 Nested exception: java.lang.RuntimeException: Connect to
Business One failed. (-119) Database server type not supported -b1Server=SRV-B1-HYPPROD,
company=SBO_HYPFR_PROD, licenseServer=HYP-B1-LIC:30000, dbType=7, dbUser=sa, userName=B1i-' msglogexcl='false'
handover2CentralSrv='' MessageLog='true' msglogdbop='insert'>
The question is: how do I fix that?
The error happens when B1i doesn't get a connection to the DB. That should not be a problem, because B1i repeats the process after a minute and then the connection should be there.
We fixed this by installing the 64-bit version of B1DIAPI.x64. Not sure why that worked.
Note that you need to update the SLD links for your database and SBO-COMMON; the jcoPath will likely be different: replace "Program Files (x86)" with just "Program Files".

Deploying worklight project on WAS 8.5

I got the following exception when I deployed the WAR on WAS 8.5:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'identityAssociationCleanupTask' defined in URL [wsjar:file:/C:/Program%20Files/IBM/Worklight/WorklightServer/worklight-jee-library.jar!/conf/core.xml]: Invocation of init method failed; nested exception is org.springframework.orm.jpa.JpaSystemException: "WRKSCHM.CLUSTER_SYNC" is an undefined name.. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.57.82 {prepstmnt -234009374 SELECT t0.HOSTVMID, t0.ID, t0.UPDATETIMESTAMP, t0.VERSION FROM WRKSCHM.CLUSTER_SYNC t0 WHERE t0.ID = ? optimize for 1 row [params=(String) identityAssociationCleanupTask]} [code=-204, state=42704]SQLCA OUTPUT[Errp=SQLNQ1FC, Errd=-2145779603, 0, 0, 0, -10, 0]
"WRKSCHM.CLUSTER_SYNC" is an undefined name.. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.57.82
An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-204", SQLSTATE "42704" and message tokens "WRKSCHM.CLUSTER_SYNC".. SQLCODE=-727, SQLSTATE=56098, DRIVER=3.57.82
An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-204", SQLSTATE "42704" and message tokens "WRKSCHM.CLUSTER_SYNC".. SQLCODE=-727, SQLSTATE=56098, DRIVER=3.57.82; nested exception is <openjpa-1.2.2-r422266:898935 nonfatal general error> org.apache.openjpa.persistence.PersistenceException: "WRKSCHM.CLUSTER_SYNC" is an undefined name.. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.57.82 {prepstmnt -234009374 SELECT t0.HOSTVMID, t0.ID, t0.UPDATETIMESTAMP, t0.VERSION FROM WRKSCHM.CLUSTER_SYNC t0 WHERE t0.ID = ? optimize for 1 row [params=(String) identityAssociationCleanupTask]} [code=-204, state=42704]SQLCA OUTPUT[Errp=SQLNQ1FC, Errd=-2145779603, 0, 0, 0, -10, 0]
"WRKSCHM.CLUSTER_SYNC" is an undefined name.. SQLCODE=-204, SQLSTATE=42704, DRIVER=3.57.82
An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-204", SQLSTATE "42704" and message tokens "WRKSCHM.CLUSTER_SYNC".. SQLCODE=-727, SQLSTATE=56098, DRIVER=3.57.82
An error occurred during implicit system action type "2". Information returned for the error includes SQLCODE "-204", SQLSTATE "42704" and message tokens "WRKSCHM.CLUSTER_SYNC".. SQLCODE=-727, SQLSTATE=56098, DRIVER=3.57.82
But I am able to deploy the same WAR on the embedded server in Eclipse.
And I am not able to see any internal tables under the APPCNTR database.
I am using Worklight 6.0, and I installed it using Installation Manager 1.6.3.
I am using DB2 10.1. I have manually created the APPCNTR, WRKLGHT, and WLREPORT databases, and set the schema for WRKLGHT to WRKSCHM and for WLREPORT to WLRESCHM.
I followed http://pic.dhe.ibm.com/infocenter/wrklight/v6r0m0/index.jsp?topic=%2Fcom.ibm.worklight.help.doc%2Fdeploy%2Fc_deploy_custom_war_file_to_app_server.html for configuring WebSphere Application Server for DB2 manually, and for configuring WebSphere Application Server manually and deploying.
Please help me if I have missed any configuration.
You have not provided full details about your environment (Worklight version, WAS profile, database), so it is difficult to pinpoint exactly what the problem could be; my answer is based on certain assumptions, such as that you are using DB2.
Looking closely at the error:
WRKSCHM is the schema name that Worklight Server is trying to access through JPA
CLUSTER_SYNC is a table in the 'WRKLGHT' database
This error can be caused either by the table not being properly created (the DB2 setup process is incomplete) or by a schema name mismatch (the WAS datasource setup is incorrect).
Please verify the following on your environment:
DB2 Setup
Make sure that your DB2 is properly configured for Worklight; verify that the necessary databases are created ('WRKLGHT' for Worklight Server, 'WLREPORT' for the Worklight Reports feature and 'APPCNTR' for Worklight Application Center). The following documentation might be useful to you:
Creating the DB2 databases
Setting up the 'WRKLGHT' and 'WLREPORT' databases
Setting up the 'APPCNTR' database
DB2 Schema Configuration
Make sure that your WAS configuration defines the same schema name that you used in the steps above; the schema name is passed to the JDBC driver using the currentSchema property. The following documentation might be helpful as well (a quick catalog check is sketched after this list):
Configuring DB2 on WAS Liberty profile
Configuring DB2 on WAS Standard profile
DB2 Cheat Sheet (how to list the current DB2 schema names)
DB2 schema qualifiers
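If it helps, a quick way to confirm which schema actually contains the CLUSTER_SYNC table (and therefore whether it matches the currentSchema your datasource uses) is to query the DB2 catalog directly. The following is only a sketch, assuming the ibm_db Python driver and placeholder connection details of my own choosing.
import ibm_db

# Hypothetical connection string -- substitute your own host, port and user.
conn = ibm_db.connect(
    "DATABASE=WRKLGHT;HOSTNAME=your-db2-host;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2user;PWD=db2password;",
    "",
    "",
)
# List the schemas that contain a CLUSTER_SYNC table; if WRKSCHM is not among
# them, the datasource's currentSchema and the actual table schema disagree.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT TABSCHEMA FROM SYSCAT.TABLES WHERE TABNAME = 'CLUSTER_SYNC'",
)
row = ibm_db.fetch_assoc(stmt)
while row:
    print("CLUSTER_SYNC found in schema:", row["TABSCHEMA"].strip())
    row = ibm_db.fetch_assoc(stmt)
ibm_db.close(conn)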
I hope this will help you get past this problem.