Error in Synapse dataflow while connecting to SAP SLT via SAP CDC Connector - azure-synapse

I have the following setup:
SAP ECC -> SAP LT (SLT) -> SAP CDC Connector (ODP) -> Azure Synapse/Azure Data Lake Storage Gen2 (parquet)
The connection via the SAP CDC connector is working: I can see all available tables, and when I choose a table, the metadata can be loaded into Azure (data preview is not supported with SLT).
On the SLT side the request also looks good - no errors. It starts the initial load and also shows the number of records.
But when I debug the data flow in Azure Synapse to load the data, I get the error message below after a couple of minutes:
Operation on target TARGETNAME failed: {"StatusCode":"DF-SAPODP-ExecuteFuncModuleWithPointerFailed","Message":"Job failed due to reason: at Source 'KNA1': Error Message: DF-SAPODP-012 - SapOdp copy activity failure with run id: c194054d-876f-4684-8105-9e038ca3b7e1, error code: 2200 and error message: Failure happened on 'Source' side. ErrorCode=SapOdpOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Sap Odp operation 'Execute function module RODPS_REPL_ODP_FETCH with pointer 20221107095340.000094000, package id 20221107095436.000183000' failed. Error Number: '404', error message: 'DataSource QUEUENAME~KNA1 does not exist in version A',Source=Microsoft.DataTransfer.Runtime.SapRfcHelper,'","Details":""}
Does anyone know what this error message means? The table KNA1 is available, the user has all necessary permissions, and the connection works.
Thanks,
Frank
What I have done so far on the Azure side:
used another table
checked prerequisites
checked user permissions
changed the target from Parquet to CSV in the data flow sink
What I have done so far on the SAP side:
checked RODPS_REPL_ODP_FETCH
checked SLT monitoring
checked ODQMON

Hi all,
SAP provided a fix for this issue and I was able to solve it. The root cause is an outdated SLT version, so with a newer version this issue should not appear. If an upgrade is not possible due to dependencies on other systems, the following Notes need to be implemented in SAP (SLT):
2232584 - Release of SAP extractors for ODP replication (ODP SAPI)
2459760 - DataSource does not exist in Version A - Data Services (implement SAP Notes 2324659 and 2427380)
SAP Notes 2324659 and 2427380 solved the problem.
Regards,
Frank

Related

BigQuery Error: 33652656 | I can't directly contact Google

I've been trying for a week to connect a CSV I have in Google Drive to a BigQuery table, but I keep getting the following error:
"An internal error occurred and the request could not be completed. This is usually caused by a transient issue. Retrying the job with back-off as described in the BigQuery SLA should solve the problem: https://cloud.google.com/bigquery/sla. If the error continues to occur please contact support at https://cloud.google.com/support. Error: 33652656"
Since I have Basic Support, I don't think I can contact Google directly to report it. What can I do?
If you can generate a version of your sheet/CSV file that demonstrates the issue and is suitable for inclusion in a public issue tracker (e.g. any sensitive info is redacted), posting to the BigQuery public issue tracker may be another path forward.
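The quoted error itself suggests retrying the job with back-off. If the load or query is scripted, a minimal retry sketch along those lines (assuming the google-cloud-bigquery Python client; the table name is a placeholder) could look like this:

```python
# Minimal retry-with-back-off sketch, following the advice in the error text.
# Assumes the google-cloud-bigquery client; the table name is a placeholder.
import time

from google.cloud import bigquery


def run_with_backoff(sql, max_attempts=5):
    client = bigquery.Client()
    delay = 2  # seconds, doubled after each failed attempt
    for attempt in range(1, max_attempts + 1):
        try:
            job = client.query(sql)
            return job.result()  # waits for the job to finish
        except Exception as exc:  # transient internal error, per the message
            if attempt == max_attempts:
                raise
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)
            delay *= 2


rows = run_with_backoff("SELECT * FROM `my_dataset.drive_csv_table` LIMIT 10")
```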

Supplemental logging error when mapping tables from CDC on SQL Server

The source is SQL Server 2016 and the target is Netezza 7.2.
When a source table is mapped to the target, the message below appears:
ERROR:
An error has occurred while setting the replication method for dbo.CCM [An error occurred while turning on supplemental logging for dbo.CCM.
Failed to get publication ID.]. Check the event log for related events and a possible cause.
SQL Server replication is enabled with a local distributor database. We have checked the CDC event logs and the same error is logged there, with nothing much in the way of detail.
Any help on this would be appreciated.
You need to check the trace files. These are located in whichever folder you selected for instance data during the install. If you do not know where that is, you can look at /conf/userfolder.vmargs; the traces are under /instance/log.
If you cannot find any useful information, then turn on detailed traces:
1.) Management Console, configuration perspective, select the MS SQL Server datastore, properties, system parameters
2.) Add a new parameter global_trace_hours and specify a numeric value, say 4
3.) Save
4.) Tracing is enabled dynamically - it will stay on for the number of hours you specify. The value is automatically decremented every minute, and when it reaches 0, tracing is automatically disabled
5.) Attempt to change the replication method to mirror again
6.) In the folder /instance/log/on you should find some files with data in them
7.) Copy the trace file to a location with a short path (e.g. C:\TEMP) - or if it has already been zipped, unzip to C:\TEMP
8.) Open a command prompt as administrator
9.) Change directory to /bin
10.) Execute dmdecodetrace C:\TEMP\ | more
Note that the additional trace files are not plain text (this minimizes the impact of writing them), so they need to be decoded.
If you still do not get any pointers open a support ticket.
One potential cause could be that the table does not have a primary key. SQL Server replication requires a primary key, and since CDC uses SQL replication to ensure that full row images are logged in the transaction log, a primary key is a prerequisite for CDC as well.
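If you want to verify that quickly, a minimal check (assuming Python with pyodbc and the dbo.CCM table from the error message; connection details are placeholders) could be:

```python
# Minimal sketch: check whether dbo.CCM has a primary key, since SQL Server
# replication (and therefore CDC full-row logging) requires one.
# Connection string values are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)

sql = """
SELECT COUNT(*)
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'PRIMARY KEY'
  AND TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = 'CCM';
"""

has_pk = conn.cursor().execute(sql).fetchone()[0] > 0
print("dbo.CCM has a primary key" if has_pk else "dbo.CCM has NO primary key")
```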

ArcGIS Server createReplica REST API of feature service not working

I created a feature class in an enterprise geodatabase (SQL Server 2014 Express). The feature class is sync enabled and was published successfully.
Now I cannot generate an offline geodatabase from the ArcGIS Android SDK.
I can see 'Create Replica' under 'Supported Operations' at 'http://xyz:6080/arcgis/rest/services/MyFeature/FeatureServer'.
I tried the 'http://xyz:6080/arcgis/rest/services/MyFeature/FeatureServer/createReplica' REST API on the feature service. It creates a job, but no results are shown.
Server logs show the following error:
Error executing tool.: ErrorMsg#SyncGPService:{"code":400,"description":""} Failed to execute (Create Feature Service Replica).
Log source is 'System/SyncTools.GPServer'
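For reference, a minimal sketch of the kind of createReplica call involved (Python with requests; the replica name, layer id, and extent are placeholder values), run asynchronously so the status URL can be polled for more error detail:

```python
# Minimal sketch of an asynchronous createReplica call against a sync-enabled
# feature service. The replica name, layer id, and extent are placeholders.
import time

import requests

service_url = "http://xyz:6080/arcgis/rest/services/MyFeature/FeatureServer"

params = {
    "f": "json",
    "replicaName": "offline_test",
    "layers": "0",
    "geometry": "-118.6,33.7,-117.6,34.3",  # xmin,ymin,xmax,ymax
    "geometryType": "esriGeometryEnvelope",
    "inSR": "4326",
    "returnAttachments": "false",
    "async": "true",
    "syncModel": "perLayer",
    "dataFormat": "sqlite",
}

job = requests.post(f"{service_url}/createReplica", data=params).json()

# Poll the job status; a failed job often carries more detail than the server log.
while True:
    status = requests.get(job["statusUrl"], params={"f": "json"}).json()
    if status.get("status") in ("Completed", "CompletedWithErrors", "Failed"):
        print(status)
        break
    time.sleep(2)
```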
First, make sure that there's nothing needed at the DB level where your data is stored. Taking the server out of the equation, can you run the Create Replica tool in ArcMap/ArcGIS Pro against the data source, and does it succeed? If that works (and other operations like Adds, Updates, Deletes etc.), then put ArcGIS Server back in the equation.
What are your ArcGIS Server log levels set at? It may be beneficial to up the logging level to Verbose or Debug, try to create the replica again, and consult the logs to see if more helpful information is returned.
You may also want to check and see if your version of ArcGIS Server needs to be patched. For example, at 10.5.1 there was a patch released specifically for Sync issues.
If all else fails, Esri Support may be a good place to find some help as well.
Have you looked at the requirements for making your data available for offline use? See this link in the ArcGIS Server documentation.
Specifically you need to enable archiving and include Global IDs on the dataset, but there are more details at the above link.
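As a rough illustration of those two prerequisites, a short arcpy sketch run against the source feature class (the path below is a placeholder):

```python
# Sketch: apply the two offline/sync prerequisites to the source feature class.
# The feature class path is a placeholder for your SDE connection.
import arcpy

fc = r"C:\connections\mydb.sde\mydb.dbo.MyFeature"

arcpy.management.AddGlobalIDs(fc)     # add the required GlobalID column
arcpy.management.EnableArchiving(fc)  # enable geodatabase archiving
```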
For future reference, and in case that suggestion doesn't work, the Esri GeoNet ArcGIS Enterprise place is a good spot to ask these questions.

Transferring Storage Accounts Table into Data Lake using Data Factory

I am trying to use Data Factory to transfer a table from a Storage Account into Data Lake. Microsoft claims that one can "store files of arbitrary sizes and formats into Data Lake". I use the online wizard and try to create a pipeline. The pipeline gets created, but I then always get an error saying:
Copy activity encountered a user error: ErrorCode=UserErrorTabularCopyBehaviorNotSupported,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=CopyBehavior property is not supported if the source is tabular data source.,Source=Microsoft.DataTransfer.ClientLibrary,'.
Any suggestions on what I can do to be able to use Data Factory to transfer data from a Storage Account table into Data Lake?
Thanks.
Your case is supported by ADF. As for the error you hit, there is a known defect where in some cases the copy wizard mis-generates a "CopyBehavior" property that is not applicable. We are fixing that now.
As a workaround, go to the Azure portal -> Author and deploy -> select that pipeline -> find the "CopyBehavior": "MergeFiles" line under AzureDataLakeStoreSink and remove it -> then deploy and rerun the activity.
If you happened to author a run-once pipeline, please re-author it as a scheduled one, since the former is hard to update via JSON.
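If you prefer to make the edit outside the portal, a rough sketch of stripping the property from an exported pipeline definition (assuming Python; the file name and property casing are assumptions, and the JSON layout follows ADF v1 copy pipelines):

```python
# Sketch: remove the mis-generated CopyBehavior property from the
# AzureDataLakeStoreSink of an exported ADF pipeline definition, then redeploy.
# The file name and property casing are assumptions.
import json

with open("pipeline.json") as f:
    pipeline = json.load(f)

for activity in pipeline["properties"]["activities"]:
    sink = activity.get("typeProperties", {}).get("sink", {})
    if sink.get("type") == "AzureDataLakeStoreSink":
        sink.pop("copyBehavior", None)
        sink.pop("CopyBehavior", None)

with open("pipeline.json", "w") as f:
    json.dump(pipeline, f, indent=2)
```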
Thanks,
Linda

WSO2 Gadget Gen Tool

I have an external Hadoop cluster (CDH4) with Hive. I used the Gadget Gen tool (BAM 2.3.0) to create a simple table gadget, but no data is populated when I add the gadget to a dashboard using the URL supplied by the Gadget Gen tool.
Here are my data source settings from the Gadget Generator Wizard:
jdbc:hive://x.x.x.x:10000/default
org.apache.hadoop.hive.jdbc.HiveDriver
I added the following jar files to make sure I had everything required for the JDBC connection and restarted wso2server:
hive-exec-0.10.0-cdh4.2.0.jar hive-jdbc-0.10.0-cdh4.2.0.jar
hive-metastore-0.10.0-cdh4.2.0.jar hive-service-0.10.0-cdh4.2.0.jar
libfb303-0.9.0.jar commons-logging-1.0.4.jar slf4j-api-1.6.4.jar
slf4j-log4j12-1.6.1.jar hadoop-core-2.0.0-mr1-cdh4.2.0.jar
I see MapReduce jobs running on my cluster during steps 2 and 3 of the wizard (and the wizard shows me previews of the actual data), but I don't see any jobs submitted after the gadget is generated.
Any help appreciated.
The Gadget Gen tool is for RDBMS databases such as MySQL, H2, etc. You can't provide a Hive URL in the Gadget Gen tool and run it.
Generally in WSO2 BAM, Hive is used to summarize the collected data stored in Cassandra and write the summarized final result to an RDBMS database. Then, with the Gadget Gen tool, the gadget XMLs are created by pointing to the RDBMS database where the final result is stored.
You can find more information in the WSO2 BAM 2.3.0 documentation: http://docs.wso2.org/wiki/display/BAM230/Gadget+Generation+Tool
Make sure the URL generated for the location of the gadget XML has the correct IP/host name. Check whether the gadget XML is actually located at the registry location of the generated URL. You do not have to worry about the Hive/Hadoop/Cassandra side, as it is not relevant to the gadget; only the RDBMS (H2 by default) data matters. Hopefully your problem will be resolved once the gadget location is corrected.