Any idea why this error occurs in OpenLaszlo? - migration

I am currently migrating an application from OpenLaszlo 3.3 to 5.0 and encountered this error in one of the classes:
line unknown: Error: A conflict exists with inherited definition $lzc$class_xxx.$datapath in namespace public, in line: var $classrootdepth;var $datapath;function $lzc$class__mjb ($0:LzNode? = null, $1:Object? = null, $2:Array? = null, $3:Boolean = false) {
That particular class contains a <datapath> tag; if I remove it, the error goes away.
Can anyone tell me why this error is occurring?

I managed to reproduce the error message using this code:
<canvas debug="true">
    <class name="c1" extends="node">
        <datapath />
    </class>
    <class name="c2" extends="c1">
        <datapath />
    </class>
</canvas>
Looking into JIRA, I saw that it is filed as a bug already: LPP-9747 - SWF10: Explicit <datapath> declarations in class definitions lead to compiler error
There seems to be a relatively high number of bugs or cases where the compiler spits out error messages or exceptions that are hard to understand - especially when upgrading 3.x or 4.0/4.1 applications to versions of OpenLaszlo with SWF10+ runtime support. That's very unfortunate, since it easily gives the impression that the compiler is buggy.
When you use the datapath tag within instances of <c1> and <c2>, the compiler does not report any error messages, e.g.:
<canvas>
    <class name="c1" extends="node">
    </class>
    <class name="c2" extends="c1">
    </class>
    <c1>
        <datapath/>
        <c2>
            <datapath />
        </c2>
    </c1>
</canvas>

DynamoDbLocal intermittently generates HTTP 500 errors

For a big project we use DynamoDbLocal in our unit tests. Most of the time, these tests pass. The code also works as expected in our production environments, where we use the "real" DynamoDB that is part of the VPC.
However, sometimes the unit tests fail. Particularly when calling putItem() we sometimes get the following exception:
The request processing has failed because of an unknown error, exception or failure. (Service: DynamoDb, Status Code: 500, Request ID: db23be5e-ae96-417b-b268-5a1433c8c125, Extended Request ID: null)
software.amazon.awssdk.services.dynamodb.model.DynamoDbException: The request processing has failed because of an unknown error, exception or failure. (Service: DynamoDb, Status Code: 500, Request ID: db23be5e-ae96-417b-b268-5a1433c8c125, Extended Request ID: null)
at software.amazon.awssdk.services.dynamodb.model.DynamoDbException$BuilderImpl.build(DynamoDbException.java:95)
at software.amazon.awssdk.services.dynamodb.model.DynamoDbException$BuilderImpl.build(DynamoDbException.java:55)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.unmarshall(AwsJsonProtocolErrorUnmarshaller.java:89)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:63)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonProtocolErrorUnmarshaller.handle(AwsJsonProtocolErrorUnmarshaller.java:42)
at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.lambda$handle$0(MetricCollectingHttpResponseHandler.java:52)
at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:64)
at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.handle(MetricCollectingHttpResponseHandler.java:52)
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:89)
at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:132)
at java.base/java.util.Optional.ifPresent(Optional.java:183)
at software.amazon.awssdk.http.crt.internal.AwsCrtResponseBodyPublisher.completeSubscriptionExactlyOnce(AwsCrtResponseBodyPublisher.java:216)
at software.amazon.awssdk.http.crt.internal.AwsCrtResponseBodyPublisher.publishToSubscribers(AwsCrtResponseBodyPublisher.java:281)
at software.amazon.awssdk.http.crt.internal.AwsCrtAsyncHttpStreamAdapter.onResponseComplete(AwsCrtAsyncHttpStreamAdapter.java:114)
at software.amazon.awssdk.crt.http.HttpStreamResponseHandlerNativeAdapter.onResponseComplete(HttpStreamResponseHandlerNativeAdapter.java:33)
Relevant versions of our tools and artifacts:
Maven 3
Kotlin version 1.5.21
DynamoDbLocal version 1.16.0
Amazon SDK 2.16.67
Our DynamoLocalDb is spun up inside our unit tests as follows:
val url: String by lazy {
    System.setProperty("sqlite4java.library.path", "target/dynamo-native-libs")
    System.setProperty("aws.accessKeyId", "test-access-key")
    System.setProperty("aws.secretAccessKey", "test-secret-key")
    System.setProperty("log4j2.configurationFile", "classpath:log4j2-config-for-dynamodb.xml")
    val port = randomFreePort()
    logger.info { "Creating local in-memory Dynamo server on port $port" }
    val instance = ServerRunner.createServerFromCommandLineArgs(arrayOf("-inMemory", "-port", port.toString()))
    try {
        instance.safeStart()
    } catch (e: Exception) {
        instance.stop()
        fail("Could not start Local Dynamo Server on port $port.", e)
    }
    Runtime.getRuntime().addShutdownHook(object : Thread() {
        override fun run() {
            logger.debug("Stopping Local Dynamo Server on port $port")
            instance.stop()
        }
    })
    "http://localhost:$port"
}
Our dynamo log4j configuration is:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Logger name="com.amazonaws.services.dynamodbv2.local" level="DEBUG">
            <AppenderRef ref="Console"/>
        </Logger>
        <Logger name="com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccess" level="WARN">
            <AppenderRef ref="Console"/>
        </Logger>
        <Root level="DEBUG">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
Our client is created with:
val client: DynamoDbAsyncClientWrapper by lazy {
    DynamoDbAsyncClientWrapper(
        DynamoDbAsyncClient.builder()
            .region(Region.EU_WEST_1)
            .credentialsProvider(DefaultCredentialsProvider.builder().build())
            .endpointOverride(URI.create(url))
            .httpClientBuilder(AwsCrtAsyncHttpClient.builder())
            .build()
    )
}
The code for the Kotlin Dynamo Wrapper DSL we use in the code above is open sourced and worth a look.
After enabling the debug logging in com.amazonaws.services.dynamodbv2.local as described above, we noticed the following in the logs:
09:14:22.917 [SQLiteQueue[]] DEBUG com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccessJob - SELECT ObjectJSON FROM "events-table" WHERE hashKey = ? AND rangeKey = ?;
09:14:22.919 [qtp1058328657-20] ERROR com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler - Unexpected exception occured
com.amazonaws.services.dynamodbv2.local.shared.exceptions.LocalDBAccessException: [1] DB[1] prepare() INSERT OR REPLACE INTO "events-table" (rangeKey, hashKey, ObjectJSON, indexKey_1, indexKey_2, indexKey_6, rangeValue, hashRangeValue, hashValue,itemSize) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?,?); [table events-table has no column named indexKey_6]
at com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.AmazonDynamoDBOfflineSQLiteJob.get(AmazonDynamoDBOfflineSQLiteJob.java:84) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccess.putRecord(SQLiteDBAccess.java:1718) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.PutItemFunction.putItemNoCondition(PutItemFunction.java:183) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.PutItemFunction$1.criticalSection(PutItemFunction.java:83) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.LocalDBAccess$WriteLockWithTimeout.execute(LocalDBAccess.java:361) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.PutItemFunction.apply(PutItemFunction.java:85) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction.doWrite(TransactWriteItemsFunction.java:353) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction.access$000(TransactWriteItemsFunction.java:60) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction$1.run(TransactWriteItemsFunction.java:109) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.helpers.MultiTableLock$SingleTableLock$2.criticalSection(MultiTableLock.java:66) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.LocalDBAccess$WriteLockWithTimeout.execute(LocalDBAccess.java:361) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.helpers.MultiTableLock$SingleTableLock.run(MultiTableLock.java:68) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.api.dp.TransactWriteItemsFunction.apply(TransactWriteItemsFunction.java:113) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.shared.access.awssdkv1.client.LocalAmazonDynamoDB.transactWriteItems(LocalAmazonDynamoDB.java:401) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBRequestHandler.transactWriteItems(LocalDynamoDBRequestHandler.java:240) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.dispatchers.TransactWriteItemsDispatcher.enact(TransactWriteItemsDispatcher.java:16) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.dispatchers.TransactWriteItemsDispatcher.enact(TransactWriteItemsDispatcher.java:8) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler.packageDynamoDBResponse(LocalDynamoDBServerHandler.java:395) ~[DynamoDBLocal-1.16.0.jar:?]
at com.amazonaws.services.dynamodbv2.local.server.LocalDynamoDBServerHandler.handle(LocalDynamoDBServerHandler.java:482) ~[DynamoDBLocal-1.16.0.jar:?]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1369) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:190) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1284) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:234) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.Server.handle(Server.java:501) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) ~[jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556) [jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) [jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:272) [jetty-server-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) [jetty-io-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) [jetty-io-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) [jetty-io-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) [jetty-util-9.4.30.v20200611.jar:9.4.30.v20200611]
at java.lang.Thread.run(Thread.java:829) [?:?]
This stack trace hints at a problem in dynamically creating the underlying SQLite table or queries, and the fact that it does not always occur feels like a bug in the form of a race condition or a failure to clean up memory or old objects between statements.
In this case, the generated SQL was:
INSERT OR REPLACE INTO "events-table"
(rangeKey, hashKey, ObjectJSON, indexKey_1, indexKey_2, indexKey_6, rangeValue, hashRangeValue, hashValue,itemSize)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?,?);
[table events-table has no column named indexKey_6]
We have tried several changes to our code, but we are running out of options. We are looking for a possible cause of this intermittent problem, or a way to reliably reproduce it.
On the Amazon forums we found this post which seems to hint at a similar problem, but is also unanswered/unsolved since August 2020. It would be great if we could also solve Andrew's problem. I also posted this question on re:Post but somehow AWS does not let me log back in with the userid/password I created there, and to create a ticket I have to log in. I guess I should have posted this on StackOverflow to begin with.
Edit: Looking at the decompiled com.amazonaws.services.dynamodbv2.local.shared.access.sqlite.SQLiteDBAccess code, it seems that there is an internal queue for firing queries at the database. Is it possible that meta information is fetched from the SQLite table to build an item in the queue, but meanwhile the data in the table is changed by a statement that is fired earlier than the statement that was put on the queue? I haven't been able to create this situation yet, but it almost feels like this is what is happening.
It turns out that this problem indeed has to do with the queue and the way the DynamoLocalDb works, but not in the way we thought. We use DynamoLocalDb in a Kotlin project where we use coroutines. In case of I/O work, we dispatch routines, like so:
withContext(Dispatchers.IO) {
    // Dynamo.PutItem() code here
}
By using a dispatcher, the code is executed on one of the threads in the I/O thread pool. If we happen to delete or change a table on a different IO thread, or on the main thread, its SQLite statements are sometimes executed before the statements from the IO threads are done, and that in turn causes the errors we get.
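To make that interleaving more concrete, here is a rough, self-contained sketch of the pattern we believe is at play (illustrative only, not our actual test code: dynamo is the plain SDK v2 async client rather than our wrapper, and the table and key names are stand-ins):
```kotlin
import java.util.UUID
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.future.await
import kotlinx.coroutines.runBlocking
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient
import software.amazon.awssdk.services.dynamodb.model.AttributeValue
import software.amazon.awssdk.services.dynamodb.model.DeleteTableRequest
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest

// Illustrative sketch only: 'dynamo', the table name and the key names are stand-ins.
fun raceSketch(dynamo: DynamoDbAsyncClient) = runBlocking {
    val write = async(Dispatchers.IO) {
        // putItem dispatched onto an IO thread, like the withContext(Dispatchers.IO) block above
        dynamo.putItem(
            PutItemRequest.builder()
                .tableName("events-table")
                .item(
                    mapOf(
                        "hashKey" to AttributeValue.builder().s(UUID.randomUUID().toString()).build(),
                        "rangeKey" to AttributeValue.builder().s("1").build()
                    )
                )
                .build()
        ).await()
    }
    // Cleanup running on the calling thread drops the table while the write above may still
    // be queued inside DynamoDBLocal's SQLite worker, so the two can execute in either order.
    dynamo.deleteTable(DeleteTableRequest.builder().tableName("events-table").build()).await()
    write.await() // intermittently fails, e.g. with a "has no column named indexKey_N" style error
}
```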
We "solved" the issue by never dropping tables in our unit tests, but instead deleting all items from a table, like so:
private suspend fun clearTable(table: DynamoTable<Any>) {
    val scanRequest = ScanRequest.builder().tableName(table.name).build()
    lateinit var items: List<Map<String, AttributeValue>>
    while (client.scan(scanRequest).items()
        .let {
            items = it
            !it.isEmpty()
        }
    ) {
        items.forEach {
            client.deleteItem(table.name) {
                key {
                    table.partitionKey from it.getValue(table.partitionKey).s()
                    table.sortKey from it.getValue(table.sortKey).s()
                }
            }
        }
    }
    logger.debug { "Removed all items from local dynamo table ${table.name}" }
}
In our code, the DynamoTable class is a simple data class holding the name, partition key (pk) and sort key (sk) of a table. Please note that client.scan() returns paginated results. Because we are deleting, the pagination is expected to break, and since we don't really care about pagination here, we just fire the request again until we get an empty first page back.
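For reference, a minimal sketch of what such a table descriptor could look like (illustrative; our real class carries a bit more than this):
```kotlin
// Minimal, illustrative sketch of the table descriptor used by clearTable() above.
data class DynamoTable<T>(
    val name: String,          // DynamoDB table name
    val partitionKey: String,  // attribute name of the partition key (pk)
    val sortKey: String        // attribute name of the sort key (sk)
)
```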
I hope this helps other people struggling with similar problems.
Cheers!

How do I enable "clickable" logs in the "Messages" window in IntelliJ [duplicate]

I have an existing project that I want to build in IntelliJ Community Edition 11.1.4 running on Ubuntu 12.04.1 LTS.
In the Ant Build window I added the project's build.xml by clicking on the + button in the top left hand corner of the window and navigating to the file. The ant tasks associated with the build file are listed and I click on the green play button to run the ant build which commences as expected.
I was expecting to see compiler errors and have IntelliJ CE present those compiler errors and allow me to Jump to (the offending) Source having double clicked on the errors in the Messages window.
Instead, the Messages window displays the following error, which, when I double-click on it, takes me to the javac Ant task in the build.xml file.
build.xml:389: Compile failed; see the compiler error output for details.
This is great advice and I very much want to follow it but I cannot because the compiler error is not displayed anywhere in the Messages window. Next and Previous Message do not navigate to an actual compiler error.
I want to know how to be able to see the compiler error messages in IntelliJ having run an Ant build.
I tried adding the -v flag to the "Ant command line:" field in Execution Properties. This made no difference to the behaviour.
I then tried downgrading from Ant 1.8 to Ant 1.7. This time I did see a change in behaviour: the build does not run at all and I get the following error at the terminal: https://gist.github.com/4073149
The javac Ant task looks like this:
<target name="compile-only" depends="">
    <stopwatch name="Compilation"/>
    <javac destdir="${build.classes.dir}" debug="on" deprecation="off"
           classpathref="base.path" excludes="/filtering/**/**">
        <src path="${src.dir}"/>
        <src path="${build.autogen.dir}"/>
    </javac>
    <!-- Copy all resource files to the output dir -->
    <copy todir="${build.classes.dir}">
        <fileset dir="${src.dir}">
            <include name="**/*.properties"/>
            <include name="**/*.gif"/>
            <include name="**/*.png"/>
            <include name="**/*.jpg"/>
            <include name="**/*.svg"/>
            <include name="**/*.jpeg"/>
            <exclude name="**/.svn"/>
        </fileset>
    </copy>
    <stopwatch name="Compilation" action="total"/>
</target>
IDEA just prints Ant's output. Try switching the output from the tree view to the plain text view using the corresponding button on the left of the Messages panel.
If the output contains errors, you should be able to see them there.
Also, it's much faster and easier to use the IDEA-provided incremental compilation (Build | Make).
When using plain (not tree) output for Ant, IntelliJ uses the "file (line,col): error msg" format for javac errors, but the error view parser only understands the "file:line: error msg" format.
You could tweak it in the PlainTextView class from antIntegration.jar:
http://grepcode.com/file/repository.grepcode.com/java/ext/com.jetbrains/intellij-idea/10.0/com/intellij/lang/ant/config/execution/PlainTextView.java#98
I just changed the addJavacMessage method to the following and recompiled the class:
``` java
public void addJavacMessage(AntMessage message, String url) {
    final VirtualFile file = message.getFile();
    if (message.getLine() > 0) {
        final StringBuilder builder = StringBuilderSpinAllocator.alloc();
        try {
            if (file != null) {
                ApplicationManager.getApplication().runReadAction(new Runnable() {
                    public void run() {
                        String presentableUrl = file.getPresentableUrl();
                        builder.append(presentableUrl);
                        // builder.append(' ');
                    }
                });
            }
            else if (url != null) {
                builder.append(url);
                // builder.append(' ');
            }
            // builder.append('(');
            builder.append(':');
            builder.append(message.getLine());
            builder.append(':');
            builder.append(' ');
            // builder.append(message.getColumn());
            // builder.append(")");
            print(builder.toString(), ProcessOutputTypes.STDOUT);
        }
        finally {
            StringBuilderSpinAllocator.dispose(builder);
        }
    }
    print(message.getText(), ProcessOutputTypes.STDOUT);
}

public void addException(AntMessage exception, boolean showFullTrace) {
    String text = exception.getText();
    showFullTrace = false;
    if (!showFullTrace) {
        int index = text.indexOf("\r\n");
        if (index != -1) {
            text = text.substring(0, index) + "\n";
        }
    }
    print(text, ProcessOutputTypes.STDOUT);
}
```
And now I have clickable links in the 'plain text' output mode of the Ant tool.
Note that 'tree mode' shows lines & columns correctly - I used IDEA 13 CE.
It would be nice if somebody created a pull request for IntelliJ regarding this issue.
Correct Ant usage in IntelliJ:
<javac includeantruntime="false" srcdir="${src.dir}" fork="yes" executable="D:/jdk1.7/bin/javac"
       destdir="${classes.dir}" includes="**/*.java" source="7" classpathref="library.classpath" ... />
The most important part is:
executable="D:/jdk1.7/bin/javac"

Spring Batch: unit test late binding

I have a reader configured as below:
<bean name="reader" class="...Reader" scope="step">
    <property name="from" value="#{jobParameters[from]}" />
    <property name="to" value="#{jobParameters[to]}" />
    <property name="pageSize" value="5"/>
    <property name="saveState" value="false" /> <!-- we use a database flag to indicate processed records -->
</bean>
and a test for it like this:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration({"classpath:testApplicationContext.xml"})
@ActiveProfiles({"default","mock"})
@TestExecutionListeners( {StepScopeTestExecutionListener.class })
public class TestLeadsReader extends AbstractTransactionalJUnit4SpringContextTests {

    @Autowired
    private ItemStreamReader<Object[]> reader;

    public StepExecution getStepExecution() {
        StepExecution execution = MetaDataInstanceFactory.createStepExecution();
        execution.getExecutionContext().putLong("campaignId", 1);
        execution.getExecutionContext().putLong("partnerId", 1);
        Calendar.getInstance().set(2015, 01, 20, 17, 12, 00);
        execution.getExecutionContext().put("from", Calendar.getInstance().getTime());
        Calendar.getInstance().set(2015, 01, 21, 17, 12, 00);
        execution.getExecutionContext().put("to", Calendar.getInstance().getTime());
        return execution;
    }

    @Test
    public void testMapper() throws Exception {
        for (int i = 0; i < 10; i++) {
            assertNotNull(reader.read());
        }
        assertNull(reader.read());
    }
}
Now, although the pageSize and saveState are injected correctly into my reader, the job parameters are not. According to the documentation this is all that needs to be done, and the only issues I found were about using jobParameters['from'] instead of jobParameters[from]. Any idea what could be wrong?
Also, the open(executionContext) method is not called on my reader before it enters the test method, which is not OK, because I use those job parameters to retrieve some data that needs to be available when the read method is called. This might be related to the above problem though, because the documentation concerning testing with late binding says that "The reader is initialized and bound to the input data".
You are setting the from and to as step execution context variables in your test. But in your application context configuration you are retrieving them as job parameters. You should set them as job parameters in your unit test.
Also, if you want the open/update/close ItemStream lifecycle methods to be called, you should execute the step. See http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/test/JobLauncherTestUtils.html#launchStep-java.lang.String-org.springframework.batch.core.JobParameters-
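For illustration, a rough sketch of a getStepExecution() that supplies from and to as job parameters rather than execution context entries (written in Kotlin here, but the same Spring Batch test calls apply to the Java test above; the exact MetaDataInstanceFactory overload should be verified against your Spring Batch version):
```kotlin
import java.util.Calendar
import org.springframework.batch.core.JobParametersBuilder
import org.springframework.batch.core.StepExecution
import org.springframework.batch.test.MetaDataInstanceFactory

// Sketch: build "from"/"to" as JobParameters so #{jobParameters[from]} and #{jobParameters[to]}
// can be late-bound into the step-scoped reader during the test.
fun getStepExecution(): StepExecution {
    val from = Calendar.getInstance().apply { set(2015, Calendar.FEBRUARY, 20, 17, 12, 0) }.time
    val to = Calendar.getInstance().apply { set(2015, Calendar.FEBRUARY, 21, 17, 12, 0) }.time
    val jobParameters = JobParametersBuilder()
        .addDate("from", from)
        .addDate("to", to)
        .toJobParameters()
    return MetaDataInstanceFactory.createStepExecution(jobParameters)
}
```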

Spring Batch: Always looking for files with MultiResourceItemReader

I'm a newbie in Spring Batch. I have inherited a batch process implemented with Spring Batch.
This works well, except for one thing I'll try to describe.
I launch parseJob and, when it is reading the XML to process in the parsingStepReader bean, the read() method keeps being invoked forever.
The directory *path_to_xml* contains only one XML file; read() is invoked and returns the parsed XML, which is processed OK. Then read() is invoked again, returns a null object, is invoked again, returns null... and so on.
When debugging, the MultiResourceItemReader read method tries to read, does not read anything (all resources have already been read), increments currentResources and returns null.
I have read that the job stops when the reader returns a null object, but this read method returns null and reads again and again...
I changed restartable to false, but that does not work.
The job is launched on Linux, in batch mode, with org.springframework.batch.core.launch.support.CommandLineJobRunner.
Because of this problem, the .sh that launches the job does not finish, and resources stay busy.
How can I avoid this, or stop the job when the resources in the XML input directory have already been processed?
Any help would be much appreciated. Best regards.
Pieces of the beans file and the Java class are attached:
<batch:job id="parseJob" restartable="true" incrementer="jobParametersIncrementer">
    <batch:flow parent="parseFlow"/>
    <batch:flow .../>
    <batch:flow .../>
</batch:job>

<batch:flow id="parseFlow">
    <batch:step id="parsingStep">
        <batch:tasklet start-limit="100" allow-start-if-complete="true" transaction-manager="..." task-executor="taskExecutor" throttle-limit="$...">
            <batch:chunk reader="parsingStepReader" writer="..." processor="..." commit-interval="..." skip-limit="10000000">
                <batch:skippable-exception-classes>
                    <batch:include class="java.lang.Exception" />
                </batch:skippable-exception-classes>
            </batch:chunk>
            <batch:listeners>
                <batch:listener ref="iwListener" />
                <batch:listener ref="mySkipListener" />
                <batch:listener ref="myStep1Listener" />
            </batch:listeners>
            <batch:no-rollback-exception-classes>
                <batch:include class="java.lang.Exception" />
            </batch:no-rollback-exception-classes>
        </batch:tasklet>
    </batch:step>
</batch:flow>

<!-- -->

<bean id="bpfReader" class="org.springframework.batch.item.xml.StaxEventItemReader" scope="prototype">
    <property name="fragmentRootElementName" value="..." />
    <property name="unmarshaller" ref="..." />
    <property name="strict" value="false" />
</bean>

<bean id="multiresourceItemReader" class="...SyncMultiResourceItemReader" abstract="true">
    <property name="strict" value="false" />
    <property name="delegate" ref="bpfReader" />
</bean>

<bean id="parsingStepReader" parent="multiresourceItemReader" scope="step">
    <property name="resources" value="<path_to_xml>" />
</bean>
And the reader class is:
public class SyncMultiResourceItemReader<T> extends MultiResourceItemReader<T> {
    . . .
    @Override
    public T read() throws Exception, UnexpectedInputException, ParseException {
        synchronized (this) {
            return super.read();
        }
    }
    . . .
}
UPDATE: The solution suggested by @vsingh works perfectly. Once an input element is chosen, it must be removed from the input. I don't know why, but the class org.springframework.batch.item.file.MultiResourceItemReader does not work as I expected, especially after an input error.
I hope this helps. Best regards
The read method will read the data, store it at class level and pass it to the write method.
I will give you an example of how we did it:
@Override
public Long read() throws Exception, UnexpectedInputException,
        ParseException, NonTransientResourceException {
    synchronized (this.myIds) {
        if (!this.myIds.isEmpty()) {
            return this.myIds.remove(0);
        }
        return null;
    }
}
myIds is a List at class level. This list is populated in the beforeStep method:
@Override
public void beforeStep(final StepExecution stepExec) {
    this.stepExecution = stepExec;
    // read the ids from service and set at class level
}

Making an AppleScript-able application in Objective-C, getting bizarre errors

Ok, so I've got an application, and I want to make it scriptable. I set up the plist, I set up the sdef file.
So far I have only one Apple event command: gotoPage. It takes an integer and returns a boolean.
The relevant XML is:
<command name="gotoPage" code="dcvwgoto" description="Goto a specified page">
    <cocoa class="AEGoto"/>
    <direct-parameter description="Page Number" type="integer"/>
    <result description="True if the page exists, False otherwise" type="boolean"/>
</command>
I have an Objective-C class AEGoto.h:
@interface AEGoto : NSScriptCommand {
}
- (id)performDefaultImplementation;

- (id)performDefaultImplementation
{
    int page = [[self directParameter] intValue];
    Boolean retval = [gController setPage: page];
    return retval ? @"YES" : @"NO";
}
setPage: (int) is correct, and works fine.
When I call this, my program seems to work correctly. But then I get the error:
error "DocView got an error: 4 doesn’t understand the gotoPage message." number -1708 from 4
I also get, in my DocView output:
Error while returning the result of a script command: the result object... YES ...could not be converted to an Apple event descriptor of type 'boolean'. This instance of the class 'NSCFString' doesn't respond to -scriptingBooleanDescriptor messages.
However, if I return just the straight Boolean, I get:
Single stepping until exit from function -[NSScriptingAppleEventHandler handleCommandEvent:withReplyEvent:],
which has no line number information.
Program received signal: “EXC_BAD_ACCESS”.
So, I guess I've got two questions: 1) why does it think it wants to tell 3 to go to a page? and 2) what is the correct way to return a Boolean to AppleScript?
Thanks.
return [NSNumber numberWithBool:retval];
Unlike the NSString you are returning now, an NSNumber can be converted to a boolean Apple event descriptor, so Cocoa scripting can hand it back as the declared boolean result.