execution of AfterUpdate caused by: System.DmlException: Insert failed. first error: INSUFFICIENT_ACCESS_ON_CROSS_REFERENCE_ENTITY - sfdc

npsp.TDTM_Opportunity: execution of AfterUpdate caused by: System.DmlException: Insert failed. First exception on row 0; first error: INSUFFICIENT_ACCESS_ON_CROSS_REFERENCE_ENTITY, insufficient access rights on cross-reference id: a0e4T0000044hDw: [] (System Code)
The error is thrown while inserting the Amount on an Opportunity record (screenshot: https://i.stack.imgur.com/OV136.png).

Related

BigQuery API call by BigQuerySinkConnector send error

I have a problem when executing a query on BigQuery, and I end up with the following error:
org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:568)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: com.wepay.kafka.connect.bigquery.exception.BigQueryConnectException: A write thread has failed with an unrecoverable error
Caused by: The job encountered an internal error during execution and was unable to complete successfully.
at com.wepay.kafka.connect.bigquery.write.batch.KCBQThreadPoolExecutor.lambda$maybeThrowEncounteredError$0(KCBQThreadPoolExecutor.java:101)
at java.base/java.util.Optional.ifPresent(Optional.java:183)
at com.wepay.kafka.connect.bigquery.write.batch.KCBQThreadPoolExecutor.maybeThrowEncounteredError(KCBQThreadPoolExecutor.java:100)
at com.wepay.kafka.connect.bigquery.BigQuerySinkTask.put(BigQuerySinkTask.java:236)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)
... 10 more
Caused by: com.google.cloud.bigquery.BigQueryException: The job encountered an internal error during execution and was unable to complete successfully.
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:113)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.getQueryResults(HttpBigQueryRpc.java:623)
at com.google.cloud.bigquery.BigQueryImpl$34.call(BigQueryImpl.java:1222)
at com.google.cloud.bigquery.BigQueryImpl$34.call(BigQueryImpl.java:1217)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.bigquery.BigQueryImpl.getQueryResults(BigQueryImpl.java:1216)
at com.google.cloud.bigquery.BigQueryImpl.getQueryResults(BigQueryImpl.java:1200)
at com.google.cloud.bigquery.Job$1.call(Job.java:332)
at com.google.cloud.bigquery.Job$1.call(Job.java:329)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.poll(RetryHelper.java:64)
at com.google.cloud.bigquery.Job.waitForQueryResults(Job.java:328)
at com.google.cloud.bigquery.Job.getQueryResults(Job.java:291)
at com.google.cloud.bigquery.BigQueryImpl.query(BigQueryImpl.java:1187)
at com.wepay.kafka.connect.bigquery.MergeQueries.mergeFlush(MergeQueries.java:158)
at com.wepay.kafka.connect.bigquery.MergeQueries.lambda$mergeFlush$1(MergeQueries.java:119)
... 3 more
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
GET https://www.googleapis.com/bigquery/v2/projects/car-project-prd/queries/3df89651-3567-4b28-88a8-0655a174574c?location=EU&maxResults=0&prettyPrint=false
{
"code" : 400,
"errors" : [ {
"domain" : "global",
"message" : "The job encountered an internal error during execution and was unable to complete successfully.",
"reason" : "jobInternalError"
} ],
"message" : "The job encountered an internal error during execution and was unable to complete successfully.",
"status" : "INVALID_ARGUMENT"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:149)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:112)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:39)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:443)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1108)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:541)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:474)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:591)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.getQueryResults(HttpBigQueryRpc.java:621)
... 20 more
The context is as follows:
I use the Kafka BigQuerySinkConnector to update partitioned tables in a dataset. The connector works perfectly for 1, 2 or 3 days and then fails with the trace above.
Can you tell me if you have an idea of the reason for this error, or do you have any leads I could follow to diagnose and solve the problem?
I tried to get more information from Google support, but it only shows the same generic message (Support Google for Error).
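If you want to dig further before contacting support, BigQuery keeps per-job error details that you can query from INFORMATION_SCHEMA. A minimal sketch, assuming the ID in the GET URL above is the failed job's ID and that the job ran in the EU region (reading JOBS_BY_PROJECT requires the bigquery.jobs.listAll permission):

SELECT creation_time, job_id, error_result.reason, error_result.message
FROM `car-project-prd`.`region-eu`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_id = '3df89651-3567-4b28-88a8-0655a174574c'
  -- limit the scan; the JOBS views are partitioned by creation_time
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY);

Internal errors like jobInternalError are often transient and retryable, so an intermittent failure every few days may just need retry logic (or a connector restart); if the same MERGE fails consistently, the job details above are what Google support will ask for.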

How do I give a community user permission to PersonAccount Fields?

I have a User with a Customer Community Plus Login User license. I am getting the following error when running, as this user, some code that inserts a new Account record:
Insert failed. First exception on row 0; first error: CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY, BCH_AccountTrigger: execution of AfterInsert
caused by: System.DmlException: Insert failed. First exception on row 0; first error: CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY, DDH.HC_ProgramEligibility_Trigger: execution of AfterInsert
caused by: DDH.HC_SQuery.SQueryException: Permission Denied: cannot read [Account] object field(s) {PersonBirthdate}
When I do the same as a System Admin, everything works as expected. How do I give access to PersonBirthdate to this Customer Community Plus Login User via a permission set?
I tried searching the Object Settings for this field and could not find it.
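One way to see which permission sets (or profiles) already grant the field is to query the FieldPermissions setup object. A minimal SOQL sketch, assuming the field's API name is Account.PersonBirthdate as the error message suggests (run it in the Developer Console query editor):

SELECT Parent.Name, Parent.IsOwnedByProfile, PermissionsRead, PermissionsEdit
FROM FieldPermissions
WHERE SobjectType = 'Account'
  AND Field = 'Account.PersonBirthdate'

If the permission set assigned to the community user doesn't show up here, its field-level security hasn't been granted. Note that in the permission set's Object Settings, person account fields are usually listed under Account by their label (Birthdate) rather than the PersonBirthdate API name, which may be why the search came up empty.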

How to find batch element in WebSphere Commerce error

When I run buildindex in my WebSphere Commerce application, I get this error in the buildindex log:
[2021/05/10 15:41:57:590 GMT] I Data import pre-processing completed in 0.389 seconds for table TI_CAT_EXTENDED_41060.
[2021/05/10 15:41:57:591 GMT] I /opt/IBM/WebSphere/CommerceServer80/instances/auth/search/pre-processConfig/MC_41060/DB2/wc-dataimport-preprocess-catentry-metainf.xml
[2021/05/10 15:41:57:591 GMT] I
Table name: TI_X_CATENT_META_INF_410600
Fetch size: 500
Batch size: 500
[2021/05/10 15:41:58:048 GMT] I Error for batch element #415: DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001, SQLERRMC=null, DRIVER=4.19.77
[2021/05/10 15:41:58:048 GMT] I SQL: SELECT CATENTRY_ID, TITLE, TITLE_KEYWORDS, SHORT_DESC, SHORT_DESC_KEYWORDS, LONG_DESC, LONG_DESC_KEYWORDS, LOCALE FROM X_CATENT_META_INF WHERE STORE_ID = 41006
[2021/05/10 15:41:58:087 GMT] I
The program exiting with exit code: 1.
Data import pre-processing was unsuccessful. An unrecoverable error has occurred.
[2021/05/10 15:41:58:091 GMT] E com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain:handleExecutionException Exception message: CWFDIH0002: An SQL exception was caught. The following error occurred: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null., stack trace: com.ibm.commerce.foundation.dataimport.exception.DataImportSystemException: CWFDIH0002: An SQL exception was caught. The following error occurred: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null.
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.processDataConfig(DataImportPreProcessorMain.java:1515)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.execute(DataImportPreProcessorMain.java:1331)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.main(DataImportPreProcessorMain.java:534)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:95)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:56)
at java.lang.reflect.Method.invoke(Method.java:620)
at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:280)
Caused by: com.ibm.db2.jcc.am.BatchUpdateException: [jcc][t4][102][10040][4.19.77] Batch failure. The batch was submitted, but at least one exception occurred on an individual member of the batch.
Use getNextException() to retrieve the exceptions for specific batched elements. ERRORCODE=-4229, SQLSTATE=null
at com.ibm.db2.jcc.am.b4.a(b4.java:475)
at com.ibm.db2.jcc.am.Agent.endBatchedReadChain(Agent.java:414)
at com.ibm.db2.jcc.am.ki.a(ki.java:5342)
at com.ibm.db2.jcc.am.ki.c(ki.java:4929)
at com.ibm.db2.jcc.am.ki.executeBatch(ki.java:3045)
at com.ibm.commerce.foundation.dataimport.preprocess.AbstractDataPreProcessor.populateTable(AbstractDataPreProcessor.java:373)
at com.ibm.commerce.foundation.dataimport.preprocess.StaticAttributeDataPreProcessor.process(StaticAttributeDataPreProcessor.java:461)
at com.ibm.commerce.foundation.dataimport.preprocess.DataImportPreProcessorMain.processDataConfig(DataImportPreProcessorMain.java:1482)
... 7 more
The exception seems clear, but I can't identify which record is element #415 in the batch. Even the log doesn't help, because it doesn't point to another, more detailed log. Do you have any suggestions for finding it?
Thanks to the comment from user mao, I followed this link:
The failing table first must be identified. Enable more detailed tracing for di-preprocess:
Navigate to:
WC_installdir/instances/instance_name/xml/config/dataimport
and open the logging.properties file. Find all instances of INFO and
change it to FINEST. Optionally increase the size of the log file and
the number of historical log files while editing this file.
Thanks to this suggestion, I re-ran the buildindex process and found that Solr was wrongly grouping fields from the original table, generating a field too long for the destination column and causing the error.
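For anyone hitting the same thing: SQLCODE=-302 with SQLSTATE=22001 means a value was too long (or too large) for its target, so once tracing identifies the failing table you can hunt for the oversized source rows directly. A rough sketch against the table from the logged SELECT; the 2000-byte threshold is a placeholder, so substitute the actual length of the destination column (check SYSCAT.COLUMNS for the TI_* table):

-- find source rows whose text fields exceed the destination column length
SELECT CATENTRY_ID,
       LENGTH(SHORT_DESC) AS SHORT_DESC_LEN,
       LENGTH(LONG_DESC) AS LONG_DESC_LEN
FROM X_CATENT_META_INF
WHERE STORE_ID = 41006
  AND (LENGTH(SHORT_DESC) > 2000 OR LENGTH(LONG_DESC) > 2000)
ORDER BY 3 DESC;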

ORA-04088: error during execution of trigger - additional errors

I have a blank table for which I've set up a trigger:
CREATE OR REPLACE TRIGGER authors_bir
BEFORE INSERT ON authors
FOR EACH ROW
begin
if upper(:new.name) = 'TEST' then
raise_application_error(-20001, 'Sorry, that value is not allowed.');
end if;
end;
After executing:
insert into AUTHORS
VALUES (1, 'test', '1-Jan-1989', 'M');
Why am I getting ORA-06512 and ORA-04088 error messages in addition to the expected ORA-20001 error prompt?
ErrorMessage
Error starting at line : 5 in command -
insert into AUTHORS
VALUES (1, 'test', '1-Jan-1989', 'M')
Error report -
ORA-20001: Sorry, that value is not allowed.
ORA-06512: at "RPS.AUTHORS_BIR", line 3
ORA-04088: error during execution of trigger 'RPS.AUTHORS_BIR'
Your trigger works perfectly. ORA-06512 is part of the error stack and tells you which line of code raised the ORA-20001 you coded, while ORA-04088 says that an error occurred during execution of a trigger. Both error codes are generic parts of Oracle's error reporting.
According to documentation:
ORA-06512: at string line string
Cause: Backtrace message as the stack is unwound by unhandled exceptions.
Basically, this error is part of the error stack, telling you at which line the actual error occurred.
And documentation:
ORA-04088: error during execution of trigger 'string.string'
Cause: A runtime error occurred during execution of a trigger.
And this error is the part of the error stack telling you that the error actually occurred in a trigger.
When an unhandled error occurs, the whole error stack is displayed. If you want to display just the error message, you could add an exception handler, so the trigger body would look something like this:
begin
if upper(:new.name) = 'TEST' then
raise_application_error(-20001, 'Sorry, that value is not allowed.');
end if;
exception
when others then
dbms_output.put_line(sqlcode || ' ' || sqlerrm);
end;
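One caveat: a WHEN OTHERS handler that only prints swallows the exception, so the trigger completes normally and the INSERT goes through. If the intent is a cleaner message while still rejecting the row, re-raise after printing (and run SET SERVEROUTPUT ON to see the dbms_output line):

exception
  when others then
    dbms_output.put_line(sqlcode || ' ' || sqlerrm);
    raise;  -- re-raise, otherwise the trigger ends cleanly and the row is inserted
end;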

[Amazon](500310) Invalid operation: Assert

I am using spark-redshift and querying Redshift data with PySpark for processing.
The query works fine if I run it on Redshift using Workbench etc., but spark-redshift unloads the data to S3 and then retrieves it, and it throws the following error when I run it:
py4j.protocol.Py4JJavaError: An error occurred while calling o124.save.
: java.sql.SQLException: [Amazon](500310) Invalid operation: Assert
Details:
-----------------------------------------------
error: Assert
code: 1000
context: !AmLeaderProcess -
query: 583860
location: scheduler.cpp:642
process: padbmaster [pid=31521]
-----------------------------------------------;
at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(ErrorResponse.java:1830)
at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(PGMessagingContext.java:822)
at com.amazon.redshift.client.PGMessagingContext.handleMessage(PGMessagingContext.java:647)
at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(InboundMessagesPipeline.java:312)
at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(PGMessagingContext.java:1080)
at com.amazon.redshift.client.PGMessagingContext.getErrorResponse(PGMessagingContext.java:1048)
at com.amazon.redshift.client.PGClient.handleErrorsScenario2ForPrepareExecution(PGClient.java:2524)
at com.amazon.redshift.client.PGClient.handleErrorsPrepareExecute(PGClient.java:2465)
at com.amazon.redshift.client.PGClient.executePreparedStatement(PGClient.java:1420)
at com.amazon.redshift.dataengine.PGQueryExecutor.executePreparedStatement(PGQueryExecutor.java:370)
at com.amazon.redshift.dataengine.PGQueryExecutor.execute(PGQueryExecutor.java:245)
at com.amazon.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
at com.amazon.jdbc.common.SPreparedStatement.execute(Unknown Source)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:108)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$2.apply(RedshiftJDBCWrapper.scala:126)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused by: com.amazon.support.exceptions.ErrorException: [Amazon](500310) Invalid operation: Assert
The query which gets generated:
UNLOAD ('SELECT "x","y" FROM (select x,y from table_name where
((load_date=20171226 and hour>=16) or (load_date between 20171227 and
20171226) or (load_date=20171227 and hour<=16))) ') TO 's3:s3path' WITH
CREDENTIALS 'aws_access_key_id=xxx;aws_secret_access_key=yyy' ESCAPE
MANIFEST
What is the issue here, and how can I resolve it?
An Assert error usually happens when something goes wrong interpreting data types, for example in the two parts of a UNION query where column N is a varchar in one part and an integer (or null) in the other. Maybe your assertion error happens for data that comes from different nodes (just as in a UNION query). Try adding an explicit cast for each column, like x::integer, as sketched below.
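For illustration, a minimal sketch of that fix with hypothetical table names; the same explicit casts can go inside the UNLOAD's inner SELECT so every branch (or node) emits consistent types:

-- without the casts, the two branches may resolve to different types
SELECT x::integer AS x, y::varchar(256) AS y FROM table_a
UNION ALL
SELECT x::integer, y::varchar(256) FROM table_b;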