Apache Ignite: SQL query returns empty result on non-baseline node

I have set up a 3-node Apache Ignite cluster and noticed the following unexpected behavior:
(Tested with Ignite 2.10 and 2.13, Azul Java 11.0.13 on RHEL 8)
We have a relational table "RELATIONAL_META". It is created by our software vendor's product, which uses Ignite to exchange configuration data. The table is backed by the following cache, which is replicated to all nodes:
[cacheName=SQL_PUBLIC_RELATIONAL_META, cacheId=-252123144, grpName=null, grpId=-252123144, prim=512, mapped=512, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, affCls=RendezvousAffinityFunction]
Seen behavior:
I did a failure test, simulating a disk failure of one of the Ignite nodes. The "failed" node restarts with an empty disk and joins the topology as expected. While the node is not yet part of the baseline, either because auto-adjust is disabled or because auto-adjust has not yet completed, the restarted node returns empty results via the JDBC connection:
0: jdbc:ignite:thin://b2bivmign2/> select * from RELATIONAL_META;
+------------+--------------+------+-------+---------+
| CLUSTER_ID | CLUSTER_TYPE | NAME | VALUE | DETAILS |
+------------+--------------+------+-------+---------+
+------------+--------------+------+-------+---------+
No rows selected (0.018 seconds)
It's interesting that it knows the structure of the table, but not the contained data.
The table actually contains data, as I can see when I query against one of the other cluster nodes:
0: jdbc:ignite:thin://b2bivmign1/> select * from RELATIONAL_META;
+-------------------------+--------------+----------------------+-------+------------------------------------------------------------------------------------------------------------------+
| CLUSTER_ID | CLUSTER_TYPE | NAME | VALUE | DETAILS |
+-------------------------+--------------+----------------------+-------+------------------------------------------------------------------------------------------------------------------+
| cluster_configuration_1 | writer | change index | 1653 | 2023-01-24 10:25:27 |
| cluster_configuration_1 | writer | last run changes | 0 | Updated at 2023-01-29 11:08:48. |
| cluster_configuration_1 | writer | require full sync | false | Flag set to false on 2022-06-11 09:46:45 |
| cluster_configuration_1 | writer | schema version | 1.4 | Updated at 2022-06-11 09:46:25. Previous version was 1.3 |
| cluster_processing_1 | reader | STOP synchronization | false | Resume synchronization - the processing has the same version as the config - 2.6-UP2022-05 [2023-01-29 11:00:50] |
| cluster_processing_1 | reader | change index | 1653 | 2023-01-29 10:20:39 |
| cluster_processing_1 | reader | conflicts | 0 | Reset due to full sync at 2022-06-11 09:50:12 |
| cluster_processing_1 | reader | require full sync | false | Cleared the flag after full reader sync at 2022-06-11 09:50:12 |
| cluster_processing_2 | reader | STOP synchronization | false | Resume synchronization - the processing has the same version as the config - 2.6-UP2022-05 [2023-01-29 11:00:43] |
| cluster_processing_2 | reader | change index | 1653 | 2023-01-29 10:24:06 |
| cluster_processing_2 | reader | conflicts | 0 | Reset due to full sync at 2022-06-11 09:52:19 |
| cluster_processing_2 | reader | require full sync | false | Cleared the flag after full reader sync at 2022-06-11 09:52:19 |
+-------------------------+--------------+----------------------+-------+------------------------------------------------------------------------------------------------------------------+
12 rows selected (0.043 seconds)
Expected behavior:
While a node is not part of the baseline, it is by definition not persisting data. So when I run a query against it, I would expect it to fetch the partitions it does not hold itself from the other nodes of the cluster. Instead it just returns an empty result, even showing the correct structure of the table, just without any rows. This has caused inconsistent behavior in the product we are actually running, which uses Ignite as a configuration store, because the nodes suddenly see different results depending on which node they have opened their JDBC connection to. We use a JDBC connection string that contains all the Ignite server nodes, so it fails over when one goes down, but of course that does not prevent the issue described here.
Is this "works as designed"? Is there any way to prevent such issues? It seems problematic to use Apache Ignite as a configuration store for an application with many nodes when it behaves like this.
Regards,
Sven
Update:
After restarting one of the nodes with an empty disk, it joins as a node with a new ID. I think that is expected behavior. We have enabled baseline auto-adjust, so the new node ID should join the baseline and the old one should leave it. This works, but until it completes, the node returns empty results to SQL queries.
Cluster state: active
Current topology version: 95
Baseline auto adjustment enabled: softTimeout=60000
Baseline auto-adjust is in progress
Current topology version: 95 (Coordinator: ConsistentId=cdf43fef-deb8-4732-907f-6264bd55de6f, Address=b2bivmign3.fritz.box/192.168.0.151, Order=11)
Baseline nodes:
ConsistentId=3ffe3798-9a63-4dc7-b7df-502ad9efc76c, Address=b2bivmign1.fritz.box/192.168.0.149, State=ONLINE, Order=64
ConsistentId=40a8ae8c-5f21-4f47-8f67-2b68f396dbb9, State=OFFLINE
ConsistentId=cdf43fef-deb8-4732-907f-6264bd55de6f, Address=b2bivmign3.fritz.box/192.168.0.151, State=ONLINE, Order=11
--------------------------------------------------------------------------------
Number of baseline nodes: 3
Other nodes:
ConsistentId=080fc170-1f74-44e5-8ac2-62b94e3258d9, Order=95
Number of other nodes: 1
Update 2:
This is the JDBC URL the application uses:
#distributed.jdbc.url - run configure to modify this property
distributed.jdbc.url=jdbc:ignite:thin://b2bivmign1.fritz.box:10800..10820,b2bivmign2.fritz.box:10800..10820,b2bivmign3.fritz.box:10800..10820
#distributed.jdbc.driver - run configure to modify this property
distributed.jdbc.driver=org.apache.ignite.IgniteJdbcThinDriver
We have seen the application connect via JDBC to a node that was not part of the baseline and therefore receive empty results. I wonder why a node that is not part of the baseline returns any results at all, instead of fetching the data from the baseline nodes?
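Until the root cause is clear, the divergence can at least be detected from the application side. The sketch below is a minimal, hypothetical sanity check in plain Python (not part of Ignite); it assumes the application has already fetched the same query's rows from each server, e.g. over the thin JDBC driver, and flags any node whose answer differs from the majority:

```python
from collections import Counter

def find_divergent_nodes(results_by_node):
    """Given {node: set-of-rows} fetched from each server for the same query,
    return the nodes whose result set differs from the majority answer.
    Hypothetical helper for a cross-node consistency check."""
    if not results_by_node:
        return []
    # Count how often each distinct result set occurs across the nodes.
    tally = Counter(frozenset(rows) for rows in results_by_node.values())
    majority, _ = tally.most_common(1)[0]
    return sorted(node for node, rows in results_by_node.items()
                  if frozenset(rows) != majority)

# Example mirroring what we saw: the rebuilt node returns nothing.
results = {
    "b2bivmign1": {("cluster_configuration_1", "writer")},
    "b2bivmign2": set(),   # freshly rejoined, not yet in the baseline
    "b2bivmign3": {("cluster_configuration_1", "writer")},
}
print(find_divergent_nodes(results))  # -> ['b2bivmign2']
```

Such a check could run periodically or before the application trusts configuration data read from a single node.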
Update 3:
Whether this happens seems to depend on the attributes of the table/cache. I cannot yet reproduce it with a table I create on my own, only with the table created by the product we use.
This is the cache of the table that I can reproduce the issue with:
[cacheName=SQL_PUBLIC_RELATIONAL_META, cacheId=-252123144, grpName=null, grpId=-252123144, prim=512, mapped=512, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, affCls=RendezvousAffinityFunction]
I have created two tables of my own for testing:
CREATE TABLE Test (
Key CHAR(10),
Value CHAR(10),
PRIMARY KEY (Key)
) WITH "BACKUPS=2";
CREATE TABLE Test2 (
Key CHAR(10),
Value CHAR(10),
PRIMARY KEY (Key)
) WITH "BACKUPS=2,atomicity=ATOMIC";
I then shut down one of the Ignite nodes, in this case b2bivmign3, remove the Ignite data folders, and start it again. It starts as a new node that is not part of the baseline, and I disabled auto-adjust to keep it in that state. I then connect to b2bivmign3 with the SQL CLI and query the tables:
0: jdbc:ignite:thin://b2bivmign3/> select * from Test;
+------+-------+
| KEY | VALUE |
+------+-------+
| Sven | Demo |
+------+-------+
1 row selected (0.202 seconds)
0: jdbc:ignite:thin://b2bivmign3/> select * from Test2;
+------+-------+
| KEY | VALUE |
+------+-------+
| Sven | Demo |
+------+-------+
1 row selected (0.029 seconds)
0: jdbc:ignite:thin://b2bivmign3/> select * from RELATIONAL_META;
+------------+--------------+------+-------+---------+
| CLUSTER_ID | CLUSTER_TYPE | NAME | VALUE | DETAILS |
+------------+--------------+------+-------+---------+
+------------+--------------+------+-------+---------+
No rows selected (0.043 seconds)
Here are the same queries when I connect to one of the other Ignite nodes:
0: jdbc:ignite:thin://b2bivmign2/> select * from Test;
+------+-------+
| KEY | VALUE |
+------+-------+
| Sven | Demo |
+------+-------+
1 row selected (0.074 seconds)
0: jdbc:ignite:thin://b2bivmign2/> select * from Test2;
+------+-------+
| KEY | VALUE |
+------+-------+
| Sven | Demo |
+------+-------+
1 row selected (0.023 seconds)
0: jdbc:ignite:thin://b2bivmign2/> select * from RELATIONAL_META;
+-------------------------+--------------+----------------------+-------+------------------------------------------------------------------------------------------------------------------+
| CLUSTER_ID | CLUSTER_TYPE | NAME | VALUE | DETAILS |
+-------------------------+--------------+----------------------+-------+------------------------------------------------------------------------------------------------------------------+
| cluster_configuration_1 | writer | change index | 1653 | 2023-01-24 10:25:27 |
| cluster_configuration_1 | writer | last run changes | 0 | Updated at 2023-01-29 11:08:48. |
| cluster_configuration_1 | writer | require full sync | false | Flag set to false on 2022-06-11 09:46:45 |
| cluster_configuration_1 | writer | schema version | 1.4 | Updated at 2022-06-11 09:46:25. Previous version was 1.3 |
| cluster_processing_1 | reader | STOP synchronization | false | Resume synchronization - the processing has the same version as the config - 2.6-UP2022-05 [2023-01-29 11:00:50] |
| cluster_processing_1 | reader | change index | 1653 | 2023-01-29 10:20:39 |
| cluster_processing_1 | reader | conflicts | 0 | Reset due to full sync at 2022-06-11 09:50:12 |
| cluster_processing_1 | reader | require full sync | false | Cleared the flag after full reader sync at 2022-06-11 09:50:12 |
| cluster_processing_2 | reader | STOP synchronization | false | Resume synchronization - the processing has the same version as the config - 2.6-UP2022-05 [2023-01-29 11:00:43] |
| cluster_processing_2 | reader | change index | 1653 | 2023-01-29 10:24:06 |
| cluster_processing_2 | reader | conflicts | 0 | Reset due to full sync at 2022-06-11 09:52:19 |
| cluster_processing_2 | reader | require full sync | false | Cleared the flag after full reader sync at 2022-06-11 09:52:19 |
+-------------------------+--------------+----------------------+-------+------------------------------------------------------------------------------------------------------------------+
12 rows selected (0.032 seconds)
I will test more tomorrow to find out which attribute of the table/cache triggers this issue.
Update 4:
I can reproduce this with a table that is set to mode=REPLICATED instead of PARTITIONED.
CREATE TABLE Test (
Key CHAR(10),
Value CHAR(10),
PRIMARY KEY (Key)
) WITH "BACKUPS=2";
[cacheName=SQL_PUBLIC_TEST, cacheId=-2066189417, grpName=null, grpId=-2066189417, prim=1024, mapped=1024, mode=PARTITIONED, atomicity=ATOMIC, backups=2, affCls=RendezvousAffinityFunction]
CREATE TABLE Test2 (
Key CHAR(10),
Value CHAR(10),
PRIMARY KEY (Key)
) WITH "BACKUPS=2,TEMPLATE=REPLICATED";
[cacheName=SQL_PUBLIC_TEST2, cacheId=372637563, grpName=null, grpId=372637563, prim=512, mapped=512, mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, affCls=RendezvousAffinityFunction]
0: jdbc:ignite:thin://b2bivmign2/> select * from TEST;
+------+-------+
| KEY | VALUE |
+------+-------+
| Sven | Demo |
+------+-------+
1 row selected (0.06 seconds)
0: jdbc:ignite:thin://b2bivmign2/> select * from TEST2;
+-----+-------+
| KEY | VALUE |
+-----+-------+
+-----+-------+
No rows selected (0.014 seconds)
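To keep track of which tables are at risk, the bracketed cache descriptors quoted above are easy to parse mechanically. A small sketch in plain Python (the descriptor layout is taken from the lines above and may differ between Ignite versions):

```python
def parse_cache_descriptor(line):
    """Parse a logged cache descriptor such as
    '[cacheName=SQL_PUBLIC_TEST2, ..., mode=REPLICATED, ...]'
    into a dict of its key=value fields."""
    fields = line.strip().strip("[]").split(", ")
    return dict(f.split("=", 1) for f in fields)

desc = parse_cache_descriptor(
    "[cacheName=SQL_PUBLIC_TEST2, cacheId=372637563, grpName=null, "
    "grpId=372637563, prim=512, mapped=512, mode=REPLICATED, "
    "atomicity=ATOMIC, backups=2147483647, affCls=RendezvousAffinityFunction]"
)
# In our tests, only mode=REPLICATED caches showed the empty-result behavior.
print(desc["cacheName"], desc["mode"])  # -> SQL_PUBLIC_TEST2 REPLICATED
```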
Testing with Visor:
It makes no difference where I run Visor, same results.
We see both caches for the tables have 1 entry:
+-----------------------------------------+-------------+-------+---------------------------------+-----------------------------------+-----------+-----------+-----------+-----------+
| SQL_PUBLIC_TEST(#c9) | PARTITIONED | 3 | 1 (0 / 1) | min: 0 (0 / 0) | min: 0 | min: 0 | min: 0 | min: 0 |
| | | | | avg: 0.33 (0.00 / 0.33) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
| | | | | max: 1 (0 / 1) | max: 0 | max: 0 | max: 0 | max: 0 |
+-----------------------------------------+-------------+-------+---------------------------------+-----------------------------------+-----------+-----------+-----------+-----------+
| SQL_PUBLIC_TEST2(#c10) | REPLICATED | 3 | 1 (0 / 1) | min: 0 (0 / 0) | min: 0 | min: 0 | min: 0 | min: 0 |
| | | | | avg: 0.33 (0.00 / 0.33) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |
| | | | | max: 1 (0 / 1) | max: 0 | max: 0 | max: 0 | max: 0 |
+-----------------------------------------+-------------+-------+---------------------------------+-----------------------------------+-----------+-----------+-----------+-----------+
One is empty when I scan it, the other has one row as expected:
visor> cache -scan -c=#c9
Entries in cache: SQL_PUBLIC_TEST
+================================================================================================================================================+
| Key Class | Key | Value Class | Value |
+================================================================================================================================================+
| java.lang.String | Sven | o.a.i.i.binary.BinaryObjectImpl | SQL_PUBLIC_TEST_466f2363_47ed_4fba_be80_e33740804b97 [hash=-900301401, VALUE=Demo] |
+------------------------------------------------------------------------------------------------------------------------------------------------+
visor> cache -scan -c=#c10
Cache: SQL_PUBLIC_TEST2 is empty
visor>
Update 5:
I have reduced the configuration file to this:
https://pastebin.com/dL9Jja8Z
I did not manage to reproduce this with persistence turned off, because I could not keep a node out of the baseline then; it always joins immediately. So this problem may only be reproducible with persistence enabled.
I go to each of the 3 nodes, remove the Ignite data to start from scratch, and start the service:
[root@b2bivmign1,2,3 apache-ignite]# rm -rf db/ diagnostic/ snapshots/
[root@b2bivmign1,2,3 apache-ignite]# systemctl start apache-ignite@b2bi-config.xml.service
I open visor, check the topology that all nodes have joined, then activate the cluster.
https://pastebin.com/v0ghckBZ
visor> top -activate
visor> quit
I connect with sqlline and create my tables:
https://pastebin.com/Q7KbjN2a
I go to one of the servers, stop the service and delete the data, then start the service again:
[root@b2bivmign2 apache-ignite]# systemctl stop apache-ignite@b2bi-config.xml.service
[root@b2bivmign2 apache-ignite]# rm -rf db/ diagnostic/ snapshots/
[root@b2bivmign2 apache-ignite]# systemctl start apache-ignite@b2bi-config.xml.service
Baseline looks like this:
https://pastebin.com/CeUGYLE7
Connect with sqlline to that node, issue reproduces:
https://pastebin.com/z4TMKYQq
This was reproduced on:
openjdk version "11.0.18" 2023-01-17 LTS
OpenJDK Runtime Environment Zulu11.62+17-CA (build 11.0.18+10-LTS)
OpenJDK 64-Bit Server VM Zulu11.62+17-CA (build 11.0.18+10-LTS, mixed mode)
RPM: apache-ignite-2.14.0-1.noarch
Rocky Linux release 8.7 (Green Obsidian)

Related

TDengine: an offline node cannot be deleted

taos> show dnodes;
id | endpoint | vnodes | support_vnodes | status | create_time | note |
=================================================================================================================================================
1 | td-1:6030 | 6 | 80 | ready | 2022-12-05 11:20:16.972 | |
2 | td-2:6030 | 2 | 16 | offline | 2022-12-05 11:20:17.342 | status msg timeout |
Query OK, 2 row(s) in set (0.002706s)
taos> drop dnode 2;
DB error: Node is offline (0.138705s)
If you want to delete a TDengine data node (dnode), you have to migrate its data back first; hence, before dropping the dnode, it must be online, not offline.

How to query the metadata (such as TTL) of a record from AQL?

Assume you have a set as follows:
+-------+-------+
| PK | value |
+-------+-------+
| "pk1" | 24 |
+-------+-------+
1 row in set (0.105 secs)
How to get the metadata for this?
To get the metadata, all you need to do is run this command before running the query:
set RECORD_PRINT_METADATA true
Now, when you query the set
select * from test.segments
you can see additional metadata of the set, as follows:
+-------+-------+--------------------------------+------------+-------+-------+
| PK | value | {edigest} | {set} | {ttl} | {gen} |
+-------+-------+--------------------------------+------------+-------+-------+
| "pk1" | 24 | "Rn/5rHEQGWvPOSBK+vHRMyLkFyo=" | "segments" | 57 | 1 |
+-------+-------+--------------------------------+------------+-------+-------+
1 row in set (0.175 secs)
NOTE:
The command has to be run just once. It holds true for all the queries that come after it.
To go back to the default behaviour, set the argument to false

Result-set inconsistency between hive and hive-llap

We are using Hive 3.1.x clusters on HDI 4.0, one with LLAP and the other with just Hive.
We created managed tables on both clusters with a row count of 272409.
Before merge on both clusters
+---------------------+------------+---------------------+------------------------+------------------------+
| order_created_date | col_count | col_distinct_count | min_lmd | max_lmd |
+---------------------+------------+---------------------+------------------------+------------------------+
| 20200615 | 272409 | 272409 | 2020-06-15 00:00:12.0 | 2020-07-26 23:42:17.0 |
+---------------------+------------+---------------------+------------------------+------------------------+
Based on the delta, we'd perform a merge operation (which updates 17 rows).
After merging on the hive-llap cluster (before compaction)
+---------------------+------------+---------------------+------------------------+------------------------+
| order_created_date | col_count | col_distinct_count | min_lmd | max_lmd |
+---------------------+------------+---------------------+------------------------+------------------------+
| 20200615 | 272409 | 272392 | 2020-06-15 00:00:12.0 | 2020-07-27 22:52:34.0 |
+---------------------+------------+---------------------+------------------------+------------------------+
After merging on the hive-llap cluster (after compaction)
+---------------------+------------+---------------------+------------------------+------------------------+
| order_created_date | col_count | col_distinct_count | min_lmd | max_lmd |
+---------------------+------------+---------------------+------------------------+------------------------+
| 20200615 | 272409 | 272409 | 2020-06-15 00:00:12.0 | 2020-07-27 22:52:34.0 |
+---------------------+------------+---------------------+------------------------+------------------------+
After merging on just hive cluster (without compacting deltas)
+---------------------+------------+---------------------+------------------------+------------------------+
| order_created_date | col_count | col_distinct_count | min_lmd | max_lmd |
+---------------------+------------+---------------------+------------------------+------------------------+
| 20200615 | 272409 | 272409 | 2020-06-15 00:00:12.0 | 2020-07-27 22:52:34.0 |
+---------------------+------------+---------------------+------------------------+------------------------+
This is the inconsistency observed.
However, after compacting the table on the Hive-LLAP cluster, the result-set inconsistency is no longer seen; both clusters return the same result.
We thought it might be due to either caching or an LLAP issue, so we restarted the hive-server2 process, which clears the cache. The issue still persisted.
We also created a dummy table with the same schema on the Hive-only cluster and pointed its location to that of the LLAP table, which in turn produced the expected result.
We even queried with Spark using the **Qubole spark-acid reader** (a direct Hive managed-table reader), which also produced the expected result.
This is very strange and peculiar; can someone help out here?
We also faced a similar issue in the HDInsight Hive LLAP cluster. Setting hive.llap.io.enabled to false resolved the issue.
Qubole does not support Hive LLAP yet. (However, we at Qubole are evaluating whether to support it in the future.)

SQL to set a value based on a value from a different table automatically

The title may not be that helpful, but what I am trying to do is this.
For simplicity's sake, I have two tables, one called LOGS and another called LOG CONTROLS.
In LOGS I have a log event column; it is automatically populated by imported information. In LOG CONTROLS I have a manually entered list of log events (to match the ones coming in), and this table assigns them ID numbers and other details about the event.
What I need is a column in the LOGS table that looks at the log event, matches it to the ID from the LOG CONTROLS table, and assigns that ID in the LOGS table.
I have seen a few methods of changing information in columns based on information in other tables, but all of these seem to be one-way checks, i.e. if ID = X change to VALUE FROM OTHER TABLE, whereas what I need is IF VALUE = X FROM OTHER TABLE, CHANGE ID FIELD TO = Y FROM OTHER TABLE.
Below is a mock up of the tables.
+----+-----------+----------+------------+
| ID | Date_Time | Event | Control ID|
+----+-----------+----------+------------+
| 1 | 0/0/0 | Shutdown | |
| 2 | 0/0/0 | Start up | |
| 3 | 0/0/0 | Error | |
| 4 | 0/0/0 | Info | |
| 5 | 0/0/0 | Shutdown | |
| 6 | 0/0/0 | Error | |
+----+-----------+----------+------------+
+-------------------+----------+--------+-------+
| Control ID | Event | Export | Flag |
+-------------------+----------+--------+-------+
| 1 | Shutdown | TRUE | TRUE |
| 2 | Start up | TRUE | FALSE |
| 3 | Error | TRUE | TRUE |
| 4 | Info | TRUE | FALSE |
+-------------------+----------+--------+-------+
So I need the Control ID in the first table to match the control ID from the second table depending on what the event was.
I hope this makes sense.
Any help or advice would be greatly appreciated.
From your description, it seems that a simple UPDATE statement is all you need:
update logs
set control_id = c.control_id
from log_controls as c
where c.event = logs.event;
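The same join-style update can be tried out end to end in any database. Below is a minimal, self-contained sketch using Python's built-in sqlite3 module; it uses the portable correlated-subquery form, since the UPDATE ... FROM syntax shown above varies between engines (it works in SQL Server and PostgreSQL, for example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE logs (id INTEGER PRIMARY KEY, event TEXT, control_id INTEGER);
    CREATE TABLE log_controls (control_id INTEGER PRIMARY KEY, event TEXT);
    INSERT INTO logs (id, event) VALUES
        (1, 'Shutdown'), (2, 'Start up'), (3, 'Error');
    INSERT INTO log_controls VALUES
        (1, 'Shutdown'), (2, 'Start up'), (3, 'Error'), (4, 'Info');
""")
# Copy the matching control_id into each log row, keyed on the event name.
con.execute("""
    UPDATE logs
    SET control_id = (SELECT c.control_id
                      FROM log_controls AS c
                      WHERE c.event = logs.event)
""")
rows = con.execute("SELECT id, event, control_id FROM logs ORDER BY id").fetchall()
print(rows)  # -> [(1, 'Shutdown', 1), (2, 'Start up', 2), (3, 'Error', 3)]
```

If an event has no match in LOG CONTROLS, the subquery returns NULL, which is usually the behavior you want for unrecognized events.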

"Update Terminated" on /nva01 transaction

While running the SAP-SD benchmarking process on 3 tier SAP setup, a number of transactions are fired by automated users.
The following steps are executed,
6 /nva01 (Create Sales Order)
[ENTER]
7 Order Type or
Sales Organization 0001
Distribution Channel 01
Division 01
[ENTER]
8 Sold-to party sdd00000
PO Number perf500
Req.deliv.date 22.12.2009
Deliver.Plant 0001
Material Order quantity
sd000000 1
sd000001 1
sd000002 1
sd000003 1
sd000004 1
[F11] (Save)
9 [F3] (Back)
(This dialogstep is needed only to get 4 dialogsteps for VA01 as defined
for the SD benchmarks)
Whenever [F11] is pressed after entering the information, it saves successfully. However, when [F3] is pressed, it shows the error "unable to update".
Then I manually tried to execute the same steps
6 /nva01 (Create Sales Order)
[ENTER]
7 Order Type or
Sales Organization 0001
Distribution Channel 01
Division 01
[ENTER]
8 Sold-to party sdd00000
PO Number perf500
Req.deliv.date 22.12.2009
Deliver.Plant 0001
Material Order quantity
sd000000 1
sd000001 1
sd000002 1
sd000003 1
sd000004 1
On pressing [F11] it saves successfully. But when [F3] is pressed to go back to the previous screen, it gives an "update was terminated" error.
[F11] (Save)
9 [F3] (Back)
Then, to locate the root cause of the error, I ran the SM13 transaction, and it shows the following details for the error.
There is a large number of identical errors in the logs, and the update key for all the error entries is the same, "4A08B4400C022793E10000000FD5F53D". Is this normal?
On googling, I found that the possible reasons for this error could be:
Key already exists in table and duplicate entry is disallowed.
Which table is affected by this transaction? How can this be resolved?
Document number ranges issue.
Which document number range should be modified? How can this be resolved?
Kindly advise how to resolve this.
Edit, including the system log:
Runtime Errors: SAPSQL_ARRAY_INSERT_DUPREC
Exception: CX_SY_OPEN_SQL_DB
Date and Time: 12.05.2009 06:59:27

Short text:
The ABAP/4 Open SQL array insert results in duplicate database records.

What happened?
Error in the ABAP application program. The current ABAP program "SAPLV05I" had to be terminated because it came across a statement that unfortunately cannot be executed.

What can you do?
Note down which actions and inputs caused the error. To process the problem further, contact your SAP system administrator. Using transaction ST22 for ABAP Dump Analysis, you can look at and manage termination messages, and you can also keep them for a long time.

Error analysis:
An exception occurred that is explained in detail below. The exception, which is assigned to class 'CX_SY_OPEN_SQL_DB', was not caught in procedure "SD_PARTNER_UPDATE" "(FUNCTION)", nor was it propagated by a RAISING clause. Since the caller of the procedure could not have anticipated that the exception would occur, the current program is terminated. The reason for the exception is: if you use an ABAP/4 Open SQL array insert to insert a record in the database and that record already exists with the same key, this results in a termination. (With an ABAP/4 Open SQL single-record insert in the same error situation, processing does not terminate, but SY-SUBRC is set to 4.)

How to correct the error:
Use an ABAP/4 Open SQL array insert only if you are sure that none of the records passed already exists in the database. If the error occurs in a non-modified SAP program, you may be able to find an interim solution in an SAP Note. If you have access to SAP Notes, carry out a search with the following keywords: "SAPSQL_ARRAY_INSERT_DUPREC", "CX_SY_OPEN_SQL_DB", "SAPLV05I" or "LV05IU15", "SD_PARTNER_UPDATE".
If you cannot solve the problem yourself and want to send an error notification to SAP, include the following information:
1. The description of the current problem (short dump). To save the description, choose "System->List->Save->Local File (Unconverted)".
2. The corresponding system log. Display the system log by calling transaction SM21. Restrict the time interval to 10 minutes before and five minutes after the short dump, then choose "System->List->Save->Local File (Unconverted)".
3. If the problem occurs in a program of your own or a modified SAP program: the source code of the program. In the editor, choose "Utilities->More Utilities->Upload/Download->Download".
4. Details about the conditions under which the error occurred and which actions and input led to the error.
The exception must either be prevented, caught within procedure "SD_PARTNER_UPDATE" "(FUNCTION)", or its possible occurrence must be declared in the RAISING clause of the procedure.

System environment:
SAP release 701. Application server "hpvm-202", network address 15.213.245.61, operating system HP-UX B.11.31 on ia64 hardware (character length 16 bits, pointer length 64 bits), work process number 10, shortdump setting "full". Database server "ghoul3", database type ORACLE, database name "E64", database user "SAPSR3". SAP kernel 701, created Feb 24 2009 on "HP-UX B.11.23 U ia64", database version "OCI_102 (10.2.0.4.0)", patch level 32. Supported databases: ORACLE 9.2.0.., 10.1.0.., 10.2.0..; SAP database version 701; operating system "HP-UX B.11". Memory consumption: Roll 2013408, EM 0, Heap 0, Page 0, MM Used 1966160, MM Free 24336.

User and Transaction:
Client 900, user "SAP_PERF000", language key "E", transaction "VA01", transaction ID "4A08B9BC0C022793E10000000FD5F53D", program "SAPLV05I", screen "RSM13000 3000", screen line 2.

Information on where terminated:
Termination occurred in the ABAP program "SAPLV05I", in "SD_PARTNER_UPDATE". The main program was "RSM13000". In the source code, the termination point is in line 480 of the (include) program "LV05IU15". The program "SAPLV05I" was started in the update system. The termination was caused because exception "CX_SY_OPEN_SQL_DB" occurred in procedure "SD_PARTNER_UPDATE" "(FUNCTION)" but was neither handled locally nor declared in the RAISING clause of its signature. The procedure is in program "SAPLV05I"; its source code begins in line 1 of the (include) program "LV05IU15".

Source Code Extract (LV05IU15, termination at line 480):
  450       POSNR = I_XVBPA-POSNR
  451       PARVW = I_XVBPA-PARVW.
  452       IF I_YVBPA-STCD1 <> I_XVBPA-STCD1 OR
  453          I_YVBPA-STCD2 <> I_XVBPA-STCD2 OR
  454          I_YVBPA-STCD3 <> I_XVBPA-STCD3 OR
  455          I_YVBPA-STCD4 <> I_XVBPA-STCD4 OR
  456          I_YVBPA-STCDT <> I_XVBPA-STCDT OR
  457          I_YVBPA-STKZN <> I_XVBPA-STKZN OR
  458          I_YVBPA-J_1KFREPRE <> I_XVBPA-J_1KFREPRE OR
  459          I_YVBPA-J_1KFTBUS <> I_XVBPA-J_1KFTBUS OR
  460          I_YVBPA-J_1KFTIND <> I_XVBPA-J_1KFTIND.
  461         MOVE-CORRESPONDING I_XVBPA TO WA_XVBPA3I.
  462         APPEND WA_XVBPA3I TO DA_XVBPA3I.
  463       ENDIF.
  464     ENDIF.
  465     ENDIF.
  466     WHEN UPDKZ_OLD.
  467       IF DA_VBPA-ADRDA CA GCF_ADDR_IND_COMB_MAN_OLD OR
  468          DA_VBPA-ADRDA CA GCF_ADDR_IND_COMB_MAN_ADRC.
  469         YADR-ADRNR = DA_VBPA-ADRNR. COLLECT YADR.
  470       ENDIF.
  471       IF DA_VBPA-ADRDA CA GCF_ADDR_IND_COMB_MAN_OLD OR
  472          DA_VBPA-ADRDA CA GCF_ADDR_IND_COMB_MAN_ADRC.
  473         XADR-ADRNR = DA_VBPA-ADRNR. COLLECT XADR.
  474       ENDIF.
  475     ENDCASE.
  476   ENDLOOP.
  477   UPDATE (OBJECT) FROM TABLE DA_XVBPAU.
  478   UPDATE VBPA3 FROM TABLE DA_XVBPA3U.
  479
>>>>>   INSERT (OBJECT) FROM TABLE DA_XVBPAI.
  481   INSERT VBPA3 FROM TABLE DA_XVBPA3I.
  482
  483   IF SY-SUBRC > 0.
  484     MESSAGE A700 WITH OBJECT SY-SUBRC DA_XVBPAI(21).
  485   ENDIF.
  486
  487 * Sonderfall neue VBPA (VBPA2) für Rollen AA und AW
  488   LOOP AT I_XVBPA2.
  489     DA_VBPA2 = I_XVBPA2.
  490     CASE DA_VBPA2-UPDKZ.
  491       WHEN UPDKZ_NEW.
  492         IF DA_VBPA2-ADRDA CA GCF_ADDR_IND_COMB_MAN_OLD OR
  493            DA_VBPA2-ADRDA CA GCF_ADDR_IND_COMB_MAN_ADRC.
  494           XADR-ADRNR = DA_VBPA2-ADRNR. COLLECT XADR.
  495         ENDIF.
  496         I_XVBPA-MANDT = SY-MANDT.
  497         IF I_XVBPA2-VBELN IS INITIAL.
  498           I_XVBPA2-VBELN = F_VBELN.
  499         ENDIF.
It is very clear that the system is trying to insert a duplicate record, hence the update termination message. Take the help of the ABAP team and check the root cause of this issue. Also, if there is any customization involved in the sales order creation process, this can happen as well, so you have to check with the ABAP team. Alternatively, if you have login credentials for the Service Marketplace, have a look at OSS note 330904.
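The failure mode itself can be illustrated outside ABAP: a bulk (array) insert that hits an existing primary key aborts as a whole, while inserting per record and skipping existing keys does not. A small illustrative sketch in Python with sqlite3 (not SAP code; the table and keys are hypothetical, it just mirrors the array-insert semantics described in the dump):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE partners (doc_key TEXT PRIMARY KEY, partner TEXT)")
con.execute("INSERT INTO partners VALUES ('DOC1', 'sdd00000')")  # key already exists

batch = [("DOC1", "sdd00000"), ("DOC2", "sdd00001")]
array_insert_failed = False
try:
    # Array insert: a single duplicate key terminates the whole statement,
    # analogous to SAPSQL_ARRAY_INSERT_DUPREC.
    con.executemany("INSERT INTO partners VALUES (?, ?)", batch)
except sqlite3.IntegrityError:
    array_insert_failed = True

# Inserting per record and ignoring keys that already exist avoids the abort.
con.executemany("INSERT OR IGNORE INTO partners VALUES (?, ?)", batch)
count = con.execute("SELECT COUNT(*) FROM partners").fetchone()[0]
print(array_insert_failed, count)  # -> True 2
```

In the SAP case, the equivalent fix is to make sure none of the records passed to the array insert already exists, or to catch the exception as the dump's correction advice describes.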