For a big project we use DynamoDBLocal in our unit tests. Most of the time these tests pass, and the code also works as expected in our production environments, where we use the "real" DynamoDB inside our VPC.
However, the unit tests sometimes fail. In particular, when calling putItem() we occasionally get the following exception:
The request processing has failed because of an unknown error, exception or failure. (Service: DynamoDb, Status Code: 500, Request ID: db23be5e-ae96-417b-b268-5a1433c8c125, Extended Request ID: null)
software.amazon.awssdk.services.dynamodb.model.DynamoDbException: The request processing has failed because of an unknown error, exception or failure. (Service: DynamoDb, Status Code: 500, Request ID: db23be5e-ae96-417b-b268-5a1433c8c125, Extended Request ID: null)
at software.amazon.awssdk.services.dynamodb.model.DynamoDbException$BuilderImpl.build(DynamoDbException.java:95)
at software.amazon.awssdk.services.dynamodb.model.DynamoDbException$BuilderImpl.build(DynamoDbException.java:55)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:89)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:63)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:42)
at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.lambda$handle$0(MetricCollectingHttpResponseHandler.java:52)
at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:64)
at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.handle(MetricCollectingHttpResponseHandler.java:52)
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:89)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:132)
at java.base/java.util.Optional.ifPresent(Optional.java:183)
at software.amazon.awssdk.http.crt.internal.AwsCrtResponseBodyPublisher.completeSubscriptionExactlyOnce(AwsCrtResponseBodyPublisher.java:216)
at software.amazon.awssdk.http.crt.internal.AwsCrtResponseBodyPublisher.publishToSubscribers(AwsCrtResponseBodyPublisher.java:281)
at software.amazon.awssdk.http.crt.internal.AwsCrtAsyncHttpStreamAdapter.onResponseComplete(AwsCrtAsyncHttpStreamAdapter.java:114)
at software.amazon.awssdk.crt.http.HttpStreamResponseHandlerNativeAdapter.onResponseComplete(HttpStreamResponseHandlerNativeAdapter.java:33)
Relevant versions of our tools and artifacts:
Maven 3
Kotlin version 1.5.21
DynamoDbLocal version 1.16.0
Amazon SDK 2.16.67
Our local DynamoDB server is spun up inside our unit tests as follows:
val url: String by lazy {
    System.setProperty("sqlite4java.library.path", "target/dynamo-native-libs")
    System.setProperty("aws.accessKeyId", "test-access-key")
    System.setProperty("aws.secretAccessKey", "test-secret-key")
    System.setProperty("log4j2.configurationFile", "classpath:log4j2-config-for-dynamodb.xml")

    val port = randomFreePort()
    logger.info { "Creating local in-memory Dynamo server on port $port" }
    val instance = ServerRunner.createServerFromCommandLineArgs(arrayOf("-inMemory", "-port", port.toString()))

    try {
        instance.safeStart()
    } catch (e: Exception) {
        instance.stop()
        fail("Could not start Local Dynamo Server on port $port.", e)
    }

    Runtime.getRuntime().addShutdownHook(object : Thread() {
        override fun run() {
            logger.debug("Stopping Local Dynamo Server on port $port")
            instance.stop()
        }
    })

    "http://localhost:$port"
}
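For completeness: randomFreePort() and safeStart() are our own helpers, not part of DynamoDBLocal. A minimal sketch of what they could look like (the real helpers may do more, e.g. retries); the receiver type is the com.amazonaws.services.dynamodbv2.local.server.DynamoDBProxyServer returned by ServerRunner.createServerFromCommandLineArgs:

fun randomFreePort(): Int =
    java.net.ServerSocket(0).use { it.localPort } // bind to port 0 so the OS picks a free port

fun DynamoDBProxyServer.safeStart() =
    start() // DynamoDBProxyServer.start() declares a checked exception; we simply let it propagate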
Our dynamo log4j configuration is:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="com.amazonaws.services.dynamodbv2.local" level="DEBUG">
            <AppenderRef ref="Console"/>
        </Logger>
        <Logger name="com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccess" level="WARN">
            <AppenderRef ref="Console"/>
        </Logger>
        <Root level="DEBUG">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
Our client is created with:
val client: DynamoDbAsyncClientWrapper by lazy {
    DynamoDbAsyncClientWrapper(
        DynamoDbAsyncClient.builder()
            .region(Region.EU_WEST_1)
            .credentialsProvider(DefaultCredentialsProvider.builder().build())
            .endpointOverride(URI.create(url))
            .httpClientBuilder(AwsCrtAsyncHttpClient.builder())
            .build()
    )
}
The Kotlin DynamoDB wrapper DSL we use in the code above is open source and worth a look.
After enabling debug logging for com.amazonaws.services.dynamodbv2.local as described above, we noticed the following in the logs:
09:14:22.917 [SQLiteQueue[]] DEBUG com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccessJob - SELECT ObjectJSON FROM "events-table" WHERE hashKey = ? AND rangeKey = ?;
09:14:22.919 [qtp1058328657-20] ERROR com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler - Unexpected exception occured
com.amazonaws.services.dynamodbv2.local.shared.exceptions.LocalDBAccessException: [1] DB[1] prepare() INSERT OR REPLACE INTO "events-table" (rangeKey, hashKey, ObjectJSON, indexKey_1, indexKey_2, indexKey_6, rangeValue, hashRangeValue, hashValue,itemSize) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?,?); [table events-table has no column named indexKey_6]
at com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.AmazonDynamoDBOfflineSQLiteJob.get(AmazonDynamoDBOfflineSQLiteJob.java:84) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccess.putRecord(SQLiteDBAccess.java:1718) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.PutItemFunction.putItemNoCondition(PutItemFunction.java:183) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.PutItemFunction$1.criticalSection(PutItemFunction.java:83) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.LocalDBAccess$WriteLockWithTimeout.execute(LocalDBAccess.java:361) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.PutItemFunction.apply(PutItemFunction.java:85) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction.doWrite(TransactWriteItemsFunction.java:353) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction.access$000(TransactWriteItemsFunction.java:60) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction$1.run(TransactWriteItemsFunction.java:109) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.helpers.MultiTableLock$SingleTableLock$2.criticalSection(MultiTableLock.java:66) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.LocalDBAccess$WriteLockWithTimeout.execute(LocalDBAccess.java:361) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.helpers.MultiTableLock$SingleTableLock.run(MultiTableLock.java:68) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction.apply(TransactWriteItemsFunction.java:113) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.awssdkv1.client.LocalAmazonDynamoDB.transactWriteItems(LocalAmazonDynamoDB.java:401) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBRequestHandler.transactWriteItems(LocalDynamoDBRequestHandler.java:240) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.dispatchers.TransactWriteItemsDispatcher.enact(TransactWriteItemsDispatcher.java:16) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.dispatchers.TransactWriteItemsDispatcher.enact(TransactWriteItemsDispatcher.java:8) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler.packageDynamoDBResponse(LocalDynamoDBServerHandler.java:395) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler.handle(LocalDynamoDBServerHandler.java:482) ~[DynamoDBLocal-1.16.0.jar:?]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1369) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:190) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1284) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.Server.handle(Server.java:501) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556) [jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) [jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:272) [jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [jetty-io-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [jetty-io-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) [jetty-io-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at java.lang.Thread.run(Thread.java:829) [?:?]
This stack trace hints at a problem in dynamically creating the underlying SQLite table or queries, and the fact that it does not always occur feels like a bug in the form of a race condition, or a failure to clean up memory or old objects between statements.
In this case, the generated SQL was:
INSERT OR REPLACE INTO "events-table"
(rangeKey, hashKey, ObjectJSON, indexKey_1, indexKey_2, indexKey_6, rangeValue, hashRangeValue, hashValue,itemSize)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?,?);
[table events-table has no column named indexKey_6]
We have tried several changes to our code, but we are running out of options. We are looking for a possible cause of this intermittent problem, or for a way to reliably reproduce it.
On the Amazon forums we found this post, which seems to hint at a similar problem but has been unanswered/unsolved since August 2020. It would be great if we could solve Andrew's problem as well. I also posted this question on re:Post, but somehow AWS does not let me log back in with the user id/password I created there, and to create a ticket I have to log in. I guess I should have posted this on Stack Overflow to begin with.
Edit: Looking at the decompiled com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccess code, it seems there is an internal queue for firing queries at the database. Is it possible that meta information is fetched from the SQLite table to build an item for the queue, while in the meantime the data in the table is changed by a statement that is executed before the queued statement? I haven't been able to reproduce this situation yet, but it almost feels like that is what is happening.
It turns out that this problem does indeed have to do with the queue and the way DynamoDBLocal works, but not in the way we thought. We use DynamoDBLocal in a Kotlin project that uses coroutines. For I/O work, we dispatch routines like so:
withContext(Dispatchers.IO) {
    // Dynamo.PutItem() code here
}
By using a dispatcher, the code is executed on one of the threads in the I/O thread pool. If we happen to delete or change a table on a different I/O thread, or on the main thread, those SQLite statements can be executed before the statements from the other I/O threads are done, and that in turn causes the errors we saw.
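To illustrate the kind of interleaving we mean, here is a minimal sketch (not our actual test code: the function name, the dynamoDbAsyncClient parameter and the item's attribute names are made up; it assumes the plain SDK DynamoDbAsyncClient built as shown earlier and the kotlinx.coroutines imports):

fun raceSketch(dynamoDbAsyncClient: DynamoDbAsyncClient) = runBlocking {
    val item = mapOf(
        "hashKey" to AttributeValue.builder().s("some-hash").build(),
        "rangeKey" to AttributeValue.builder().s("some-range").build()
    )
    launch(Dispatchers.IO) {
        // Fire-and-forget: the CompletableFuture returned by putItem() is never awaited here.
        dynamoDbAsyncClient.putItem { it.tableName("events-table").item(item) }
    }
    // Meanwhile the table is dropped on another thread (here: the thread running the test),
    // so DynamoDBLocal's SQLite statements can run in an order the test never intended.
    dynamoDbAsyncClient.deleteTable { it.tableName("events-table") }
}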
We "solved" the issue by never throwing tables away in our unittests, but rather delete all items from a table like so:
private suspend fun clearTable(table: DynamoTable<Any>) {
    val scanRequest = ScanRequest.builder().tableName(table.name).build()
    lateinit var items: List<Map<String, AttributeValue>>
    // Keep scanning and deleting until the first page of the scan comes back empty.
    while (client.scan(scanRequest).items()
            .let {
                items = it
                !it.isEmpty()
            }
    ) {
        items.forEach {
            client.deleteItem(table.name) {
                key {
                    table.partitionKey from it.getValue(table.partitionKey).s()
                    table.sortKey from it.getValue(table.sortKey).s()
                }
            }
        }
    }
    logger.debug { "Removed all items from local dynamo table ${table.name}" }
}
In our code, the DynamoTable class is a simple data class holding the name, partition key, and sort key of a table (a minimal sketch follows below). Note that client.scan() returns paginated results. Because we are deleting while scanning, the pagination is expected to break; since we don't care about pagination here, we simply fire the request again until the first page comes back empty.
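For reference, a sketch of the shape of such a class, with just enough fields to make the clearTable() snippet above readable (our real class has a bit more to it):

data class DynamoTable<T>(
    val name: String,          // table name
    val partitionKey: String,  // attribute name of the partition key (pk)
    val sortKey: String        // attribute name of the sort key (sk)
)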
I hope this helps other people struggling with similar problems.
Cheers!
Related
I use the default Exposed framework configuration, which has built-in logging of the SQL statements that the framework creates for database calls.
As a result, I see SQL statements in the logs in the following format:
[...] DEBUG Exposed - INSERT INTO sensitive_table (column1, column2) VALUES ('PII1', 'PII2')
Is it possible to configure logging in Exposed to hide (e.g. replace with '?') the sensitive information that can be present in the SQL statement parameters?
[...] DEBUG Exposed - INSERT INTO sensitive_table (column1, column2) VALUES (?, ?)
I solved this problem using a custom SqlLogger that logs SQL without injecting parameter values.
object SafeSqlLogger : SqlLogger {
    private val log: Logger = LoggerFactory.getLogger(SafeSqlLogger::class.java)

    override fun log(context: StatementContext, transaction: Transaction) {
        log.debug(context.sql(TransactionManager.current()))
    }
}
I disabled the Exposed logger in the logback config.
<logger name="Exposed" level="OFF"/>
And added the logger to the transactions that I wanted to log.
transaction {
    addLogger(SafeSqlLogger)
    // query the database
}
As a result, I got the following log statements:
[...] DEBUG SafeSqlLogger - INSERT INTO sensitive_table (column1, column2) VALUES (?, ?)
And finally wrote a function that can be used instead of transaction for logged transactions.
fun <T> loggedTransaction(db: Database? = null, statement: Transaction.() -> T): T {
    return transaction(
        db.transactionManager.defaultIsolationLevel,
        db.transactionManager.defaultRepetitionAttempts,
        db
    ) {
        addLogger(SafeSqlLogger)
        statement.invoke(this)
    }
}
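Usage then looks like a normal Exposed transaction. A hypothetical example, assuming a table object matching the sensitive_table from the question (the table object and the wrapping function name are made up):

object SensitiveTable : Table("sensitive_table") {
    val column1 = varchar("column1", 255)
    val column2 = varchar("column2", 255)
}

fun insertSensitiveData() = loggedTransaction {
    // Logged by SafeSqlLogger as: INSERT INTO sensitive_table (column1, column2) VALUES (?, ?)
    SensitiveTable.insert {
        it[column1] = "PII1"
        it[column2] = "PII2"
    }
}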
Hope this will be helpful for anyone having the same problem as me.
I had the same problem with logging huge ByteArray values, and I came up with another solution: create your own custom column type:
object BasicBinaryColumnTypeCustomLogging : BasicBinaryColumnType() {
    override fun valueToString(value: Any?): String {
        return "bytes:${(value as ByteArray).size}"
    }
}
And then in our Table object we use it like:
object Images : Table("image") {
// val file = binary("file")
val file = binaryCustomLogging("file")
private fun binaryCustomLogging(name: String): Column<ByteArray> = registerColumn(name, BasicBinaryColumnTypeCustomLogging)
}
So in your case you can create your own column type with a custom valueToString() implementation.
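For example, a hypothetical sketch for a text column that masks its values in the logged SQL (the type, table and column names here are made up):

object MaskedVarCharColumnType : VarCharColumnType(255) {
    // Whatever the real value is, Exposed will render it as "***" when it builds the logged statement.
    override fun valueToString(value: Any?): String = "***"
}

object Users : Table("users") {
    val email: Column<String> = registerColumn("email", MaskedVarCharColumnType)
}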
We are using Infinispan to control a distributed (replicated) cache in a JEE application running on a Payara server (Enterprise v 5.22.0) with Java 8 (OpenJDK 64-Bit Server VM, Vendor: Azul Systems, Inc., Version: 25.262-b19).
In order to have a controlled start-up of the application when starting multiple instances, we have created a PESSIMISTIC-locking cache called Mutex, which is used to "lock" the cluster so that one instance can load the caches while the others wait. The first instance to get the lock reads a database and loads many other caches, all of which are configured with OPTIMISTIC locking. These cache puts all happen inside the outer Mutex transaction. The OPTIMISTIC caches are defined with state-transfer enabled="true", so that when the instance loading the caches from the database is done and releases the Mutex lock by committing the outer transaction, the caches are updated on all instances too.
When loading the OPTIMISTIC caches we sometimes use entries in CACHE1 to drive the loading of CACHE2 (we have more meaningful names, but that detail does not matter here). So having loaded CACHE1, we use CACHE1.values() to orchestrate entries into CACHE2.put().
Now to the problem...
At V9.4.20.Final (and below) the process above works. At V10.x (and also V11.0.5.Final) it does not. We have debugged our code and found that at V10.x the entries written to CACHE1 (all caches have isolation="READ_COMMITTED") are not visible via CACHE1.values() when trying to load CACHE2. To confirm: the same code works at V9, where CACHE1.values() does return the values as expected, since it runs in the same transaction and should be able to see the entries.
If at V10 we don't have the outer Mutex transaction, or if we commit the outer Mutex transaction before trying to read CACHE1, then everything works.
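To make the shape of the problem concrete, here is a minimal sketch of the pattern that behaves differently between V9 and V10 (not our production code: the cache names, keys and function name are illustrative, and the cache manager is assumed to be an embedded org.infinispan.manager.EmbeddedCacheManager configured with the XML below):

fun reproduceVisibilityIssue(cacheManager: EmbeddedCacheManager) {
    val mutexCache = cacheManager.getCache<String, String>("Mutex").advancedCache
    val cache1 = cacheManager.getCache<String, String>("CACHE1").advancedCache

    val tm = mutexCache.transactionManager
    tm.begin()                      // outer "Mutex" transaction
    mutexCache.lock("LOCK_KEY")     // pessimistic lock held while loading
    cache1.put("k1", "v1")          // load an OPTIMISTIC cache inside the outer transaction
    val visible = cache1.values     // V9: contains "v1"; V10/V11: appears empty in our tests
    println(visible)
    tm.commit()
}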
The question:
Has the transactional behaviour changed so that entries written in a nested transaction are no longer visible to the process that wrote them?
I have tried WebLogic 12.2.13, suspecting that the container's transaction manager might behave differently, but no: it fails at V10 and works with V9 on WebLogic too.
I can provide a full code reproducer in a zip (Eclipse/Gradle project), but here are some code snippets.
CacheServiceImpl has a method exclusivePutAndGetAll that locks with the name LOCK_KEY and can be called with a boolean to control whether the entries are read before or after the "Mutex" parent transaction is committed:
@Override
public <K, V> Collection<V> exclusivePutAndGetAll(String cacheName, Map<K, V> values, boolean insideMutex) throws Exception {
    Collection<V> returnValues = null;
    LOGGER.debug("mutex manager is " + mutexManager.getManagerHash());
    LOGGER.debug("cache manager is " + cacheManager.getManagerHash());
    LOGGER.info("Acquiring mutex lock before starting context");
    mutexManager.startTransactionAndAcquireMutex("LOCK_KEY");
    putAll(cacheName, values);
    if (insideMutex) {
        returnValues = getAll(cacheName); // this only works and returns values with V9 !!
    }
    mutexManager.commitTransaction();
    LOGGER.info("Mutex lock released after starting context.");
    if (!insideMutex) {
        returnValues = getAll(cacheName);
    }
    return returnValues;
}
And here is the mutexManager's startTransactionAndAcquireMutex, which begins the transaction and locks the cache called Mutex with the provided "LOCK_KEY":
@Override
public boolean startTransactionAndAcquireMutex(String mutexName) {
    final TransactionManager transactionManager = mutexCache.getTransactionManager();
    LOGGER.debug("Mutex cache TransactionManager is " + transactionManager.getClass());
    try {
        transactionManager.begin();
    } catch (NotSupportedException | SystemException ex) {
        throw new CacheException("Unable to start transaction for mutex " + mutexName, ex);
    }
    return acquireMutex(mutexName);
}
And here is the mutexManager acquiring the lock:
@Override
public boolean acquireMutex(String mutexName) {
    final TransactionManager transactionManager = mutexCache.getTransactionManager();
    boolean lockResult = false;
    try {
        if (transactionManager.getStatus() == Status.STATUS_ACTIVE) {
            lockResult = mutexCache.lock(mutexName);
        }
    } catch (final SystemException ex) {
        throw new CacheException("Unable to lock mutex " + mutexName, ex);
    }
    return lockResult;
}
And finally, the cache configuration in XML:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="urn:infinispan:config:10.1 http://www.infinispan.org/schemas/infinispan-config-10.1.xsd"
            xmlns="urn:infinispan:config:10.1">
    <jgroups>
        <stack-file name="tcp-cdl" path="cluster-jgroups.xml"/>
    </jgroups>
    <cache-container name="SampleCacheManager" statistics="true">
        <transport stack="tcp-cdl"/>
        <jmx duplicate-domains="true"/>
        <replicated-cache-configuration name="replicated-cache-template" statistics="true" mode="SYNC" remote-timeout="120000">
            <locking isolation="READ_COMMITTED" acquire-timeout="120000" write-skew="false" concurrency-level="150" striping="false"/>
            <state-transfer enabled="true" timeout="240000" chunk-size="10000"/>
            <transaction
                transaction-manager-lookup="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"
                mode="NON_XA"
                locking="OPTIMISTIC">
            </transaction>
        </replicated-cache-configuration>
        <replicated-cache name="Mutex" configuration="replicated-cache-template">
            <transaction locking="PESSIMISTIC" />
        </replicated-cache>
    </cache-container>
</infinispan>
I have managed to read data from my Firebase database but can't seem to re-use the String that has been read.
My successful read is shown below. When I check the logcat for the Log.d("Brand") entry, it actually shows the String as expected.
brandchosenRef = FirebaseDatabase.getInstance().reference
val brandsRef = brandchosenRef.child("CarList2").orderByChild("Car").equalTo(searchable_spinner_brand.selectedItem.toString())
val valueEventListener = object : ValueEventListener {
    override fun onDataChange(dataSnapshot: DataSnapshot) {
        for (ds in dataSnapshot.children) {
            Log.d("spinner brand", searchable_spinner_brand.selectedItem.toString())
            val Brand = ds.child("Brand").getValue(String::class.java)
            val brandselected = Brand.toString()
            Log.d("Brand", "$brandselected")
            selectedbrand == brandselected
            Log.d("selected brand", selectedbrand)
        }
    }

    override fun onCancelled(databaseError: DatabaseError) {
        Log.d("Branderror", "error on brand")
    }
}
brandsRef.addListenerForSingleValueEvent(valueEventListener)
What I am trying to do is write selectedbrand into a separate node using the following:
val carselected = searchable_spinner_brand.selectedItem.toString()
val dealref = FirebaseDatabase.getInstance().getReference("Deal_Summary2")
val dealsummayId = dealref.push().key
val summaryArray = DealSummaryArray(dealsummayId.toString(),"manual input for testing","brand","Deal_ID",carselected,extrastext.text.toString(),otherinfo.text.toString(),Gauteng,WC,KZN,"Open")
dealref.child(dealsummayId.toString()).setValue(summaryArray).addOnCompleteListener{
}
Note, in the above I was inputting "manual input for testing" to check that my write to Firebase was working, and it works as expected. If I replace that with selectedbrand, I get the following error:
kotlin.UninitializedPropertyAccessException: lateinit property selectedbrand has not been initialized
The summary array used above is defined in a separate class as follows; as you can see, the field that receives "manual input for testing" is declared as a String.
class DealSummaryArray(val id: String, val brand: String, val Buyer_ID: String, val Deal_ID: String, val Car: String, val extras: String, val other_info: String, val Gauteng: String, val Western_Cape: String, val KZN: String, val Status: String) {
    constructor() : this("", "", "", "", "", "", "", "", "", "", "") {
    }
}
My question, simply put, is: why can I not re-use the value I read from the database? Even if I was not trying to write it to a new node, I cannot seem to use the value outside of the Firebase query.
I seem to run into this problem everywhere in my activities and have to find strange workarounds, like writing to a TextView and then referencing the TextView. Please assist.
Data is loaded from Firebase asynchronously, as it may take some time before you get a response from the server. To prevent blocking the application (which would be a bad experience for your users), your main code continues to run while the data is being loaded. Then, when the data is available, Firebase calls your onDataChange method.
What this means in practice is that any code that needs the data from the database must be inside the onDataChange method or be called from there. So any code that requires selectedbrand needs to be inside onDataChange or called from there (typically through a callback interface).
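In Kotlin the callback can simply be a function parameter. A hypothetical sketch (the function name and the way the value is used are made up; the Firebase calls and types are the same ones already used in your code):

fun loadSelectedBrand(brandsRef: Query, onBrandLoaded: (String) -> Unit) {
    brandsRef.addListenerForSingleValueEvent(object : ValueEventListener {
        override fun onDataChange(dataSnapshot: DataSnapshot) {
            for (ds in dataSnapshot.children) {
                val brand = ds.child("Brand").getValue(String::class.java) ?: continue
                onBrandLoaded(brand) // use the value here, not after the listener returns
            }
        }

        override fun onCancelled(databaseError: DatabaseError) {
            Log.d("Branderror", "error on brand: ${databaseError.message}")
        }
    })
}

The caller then passes in whatever needs the value, for example loadSelectedBrand(brandsRef) { selectedbrand -> /* build and write the DealSummaryArray here */ }, instead of reading a property after the query returns.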
Also see:
How to check a certain data already exists in firestore or not, which contains example code including of the callback interface, in Java.
getContactsFromFirebase() method return an empty list, which contains a similar example for the Firebase Realtime Database.
Setting Singleton property value in Firebase Listener, which shows a way to make the code behave more synchronously, and explains why this may not work on various Android versions.
Can this code cause problems? I found it in one project and do not know whether it can be the cause of some crazy bugs (deadlocks, timeouts in the DB, ...). Code like this is executed concurrently many times in the program, even across threads.
Thanks a lot.
class first {
    void doSomething() {
        using (ITransaction transaction = session.BeginTransaction()) {
            var foo = new second();
            foo.doInNewTransaction(); // inner transaction in new session
            transaction.Commit();
        }
    }
}

class second {
    void doInNewTransaction() {
        using (ISession session = sessionFactory.OpenSession()) {
            using (ITransaction transaction = session.BeginTransaction()) {
                // do something in the database
                transaction.Commit();
            }
        }
    }
}
This should be fine; I'm sure I have done stuff like this in the past. The only thing you need to be aware of is that if you modify an object in the inner session, those changes will not automatically be reflected in the outer session if the same object has already been loaded there.
Having said that, if you do not need to do this, I would avoid it. Normally I would recommend AOP-based transaction management when using NHibernate. This would allow your inner component to easily join the transaction of the outer component. However, in order to do this you need to be using a DI container that supports it, for example Spring.NET or Castle.
What I am trying to achieve here is to get the number of relationships of a particular node while other threads are adding new relationships to it concurrently. I run my code in a unit test with the TestGraphDatabaseFactory().newImpermanentDatabase() graph service.
My code is executed by ~50 threads and looks something like this:
int numOfRels = 0;
try {
    Iterable<Relationship> rels = parentNode.getRelationships(RelTypes.RUNS, Direction.OUTGOING);
    while (rels.iterator().hasNext()) {
        numOfRels++;
        rels.iterator().next();
    }
} catch (Exception e) {
    throw e;
}

// Enforce relationship limit
if (numOfRels > 10) {
    // do something
}

Transaction tx = graph.beginTx();
try {
    Node node = createMyNodeAndConnectToParentNode(...);
    tx.success();
    return node;
} catch (Exception e) {
    tx.failure();
} finally {
    tx.finish();
}
The problem is that once in a while I get an "ArrayIndexOutOfBoundsException: 1" in the try-catch block above (the one surrounding getRelationships()). If I understand correctly, the Iterable is not thread-safe and that is causing this problem.
My question is: what is the best way to iterate over constantly changing relationships and nodes using Neo4j's Java API?
I am getting the following errors:
Exception in thread "Thread-14" org.neo4j.helpers.ThisShouldNotHappenError: Developer: Stefan/Jake claims that: A property key id disappeared under our feet
at org.neo4j.kernel.impl.core.NodeProxy.setProperty(NodeProxy.java:188)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.createMyNodeAndConnectToParentNode(AppEntity.java:546)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.create(AppEntity.java:305)
at com.inbiza.connio.neo4j.server.extensions.TestEmbeddedConnioGraph$appCreatorThread.run(TestEmbeddedConnioGraph.java:61)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "Thread-92" java.lang.ArrayIndexOutOfBoundsException: 1
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:72)
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:36)
at org.neo4j.helpers.collection.PrefetchingIterator.hasNext(PrefetchingIterator.java:55)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.create(AppEntity.java:243)
at com.inbiza.connio.neo4j.server.extensions.TestEmbeddedConnioGraph$appCreatorThread.run(TestEmbeddedConnioGraph.java:61)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "Thread-12" java.lang.ArrayIndexOutOfBoundsException: 1
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:72)
at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull(RelationshipIterator.java:36)
at org.neo4j.helpers.collection.PrefetchingIterator.hasNext(PrefetchingIterator.java:55)
at com.inbiza.connio.neo4j.server.extensions.graph.AppEntity.create(AppEntity.java:243)
at com.inbiza.connio.neo4j.server.extensions.TestEmbeddedConnioGraph$appCreatorThread.run(TestEmbeddedConnioGraph.java:61)
at java.lang.Thread.run(Thread.java:722)
Exception in thread "Thread-93" java.lang.ArrayIndexOutOfBoundsException
Exception in thread "Thread-90" java.lang.ArrayIndexOutOfBoundsException
Below is the method responsible for node creation:
static Node createMyNodeAndConnectToParentNode(GraphDatabaseService graph, final Node ownerAccountNode, final String suggestedName, Map properties) {
    final String accountId = checkNotNull((String) ownerAccountNode.getProperty("account_id"));

    Node appNode = graph.createNode();
    appNode.setProperty("urn_name", App.composeUrnName(accountId, suggestedName.toLowerCase().trim()));

    int nextId = nodeId.addAndGet(1); // I normally use the getOrCreate idiom, but to simplify I replaced it with an atomic int - that will do for testing
    String urn = App.composeUrnUid(accountId, nextId);
    appNode.setProperty("urn_uid", urn);
    appNode.setProperty("id", nextId);
    appNode.setProperty("name", suggestedName);

    Index<Node> indexUid = graph.index().forNodes("EntityUrnUid");
    indexUid.add(appNode, "urn_uid", urn);

    appNode.addLabel(LabelTypes.App);
    appNode.setProperty("version", properties.get("version"));
    appNode.setProperty("description", properties.get("description"));

    Relationship rel = ownerAccountNode.createRelationshipTo(appNode, RelTypes.RUNS);
    rel.setProperty("date_created", fmt.print(new DateTime()));

    return appNode;
}
I am looking at org.neo4j.kernel.impl.core.RelationshipIterator.fetchNextOrNull().
It looks like my test generates a condition where the else if ((status = fromNode.getMoreRelationships(nodeManager)).loaded() || lastTimeILookedThereWasMoreToLoad) branch is not executed, and where the currentTypeIterator state is changed in between.
RelIdIterator currentTypeIterator = rels[currentTypeIndex]; // <-- this is where it crashes
do
{
    if ( currentTypeIterator.hasNext() )
    ...
    ...
    while ( !currentTypeIterator.hasNext() )
    {
        if ( ++currentTypeIndex < rels.length )
        {
            currentTypeIterator = rels[currentTypeIndex];
        }
        else if ( (status = fromNode.getMoreRelationships( nodeManager )).loaded()
                  // This is here to guard for that someone else might have loaded
                  // stuff in this relationship chain (and exhausted it) while I
                  // iterated over my batch of relationships. It will only happen
                  // for nodes which have more than <grab size> relationships and
                  // isn't fully loaded when starting iterating.
                  || lastTimeILookedThereWasMoreToLoad )
        {
            ....
        }
    }
} while ( currentTypeIterator.hasNext() );
I also tested a couple of locking scenarios. The one below solves the issue. I'm not sure whether, based on this, I should acquire a lock every time I iterate over relationships.
Transaction txRead = graph.beginTx();
try {
    txRead.acquireReadLock(parentNode);
    long numOfRels = 0L;
    Iterable<Relationship> rels = parentNode.getRelationships(RelTypes.RUNS, Direction.OUTGOING);
    while (rels.iterator().hasNext()) {
        numOfRels++;
        rels.iterator().next();
    }
    txRead.success();
} finally {
    txRead.finish();
}
I am very new to Neo4j and its source base; I'm just testing it as a potential data store for our product. I would appreciate it if someone who knows Neo4j inside and out could explain what is going on here.
This is a bug. The fix is captured in this pull request: https://github.com/neo4j/neo4j/pull/1011
Well, I think this is a bug. The Iterable returned by getRelationships() is meant to be immutable: when this method is called, all relationships available up to that moment will be available in the iterator. (You can verify this in org.neo4j.kernel.IntArrayIterator.)
I tried replicating it by having 250 threads insert a relationship from a node to some other node, with a main thread looping over the iterator for the first node. On careful analysis, the iterator only contains the relationships that had been added when getRelationships() was last called. The issue never came up for me.
Can you please post your complete code? IMO there might be some silly error, because the reason this cannot happen is that write locks are in place when adding a relationship, and reads are hence synchronized.