Getting "org.springframework.web.client.HttpServerErrorException: 500 null" on restTemplate.postForObject request to Graphdb - graphdb

**Code:**
String path = String.format("/%s/uploadRuleSet", REST_REPO_PATH);
String repoUrl = getRepositoryManager().getLocation().toString();
URI uri = new URIBuilder(repoUrl + path).build();
InputStream stream = getClass().getClassLoader().getResourceAsStream("repoconfig/ccs-ruleset.pie");
ByteArrayResource byteResource = new ByteArrayResource(IOUtils.toString(stream, UTF_8).getBytes(UTF_8)) {
    @Override
    public String getFilename() {
        return "ccs-ruleset.pie";
    }
};
MultiValueMap<String, Object> multipartMap = new LinkedMultiValueMap<>();
multipartMap.add("ruleset", byteResource);
logger.info("Uploading ruleset to: {}", uri);
response = rest.postForObject(uri, multipartMap, RulesetResponse.class);
The exception is thrown at rest.postForObject(uri, multipartMap, RulesetResponse.class). When it tries to upload the ruleset, I get the following stack trace:
Caused by: org.springframework.web.client.HttpServerErrorException: 500 null
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:97) ~[spring-web-5.0.9.RELEASE.jar:5.0.9.RELEASE]
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:79) ~[spring-web-5.0.9.RELEASE.jar:5.0.9.RELEASE]
at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63) ~[spring-web-5.0.9.RELEASE.jar:5.0.9.RELEASE]
at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:730) ~[spring-web-5.0.9.RELEASE.jar:5.0.9.RELEASE]
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:688) ~[spring-web-5.0.9.RELEASE.jar:5.0.9.RELEASE]
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:662) ~[spring-web-5.0.9.RELEASE.jar:5.0.9.RELEASE]
I checked the GraphDB logs:
[DEBUG] 2020-04-01 02:48:45,716 [http-nio-7200-exec-2 | o.e.r.h.s.ProtocolUtil] Acceptable formats: text/csv;q=0.8, application/sparql-results+json;q=0.8, application/json;q=0.8, application/sparql-results+xml;q=0.8, application/xml;q=0.8, text/tab-separated-values;q=0.8, application/x-binary-rdf-results-table
[INFO ] 2020-04-01 02:48:45,723 [http-nio-7200-exec-2 | o.e.r.h.s.r.TupleQueryResultView] Request for query 3392903 is finished
[INFO ] 2020-04-01 02:58:52,742 [http-nio-7200-exec-4 | c.o.t.r.RuleCompilerBase] Current file: /opt/graphdb/graphdb-free-8.9.0/work/tmp/graphdb6671553462073041570/work/Tomcat/localhost/ROOT/ccs-ruleset1585735132740.pie
[INFO ] 2020-04-01 02:58:52,762 [http-nio-7200-exec-4 | c.o.t.r.RuleCompilerBase] Compiled: '/opt/graphdb/graphdb-free-8.9.0/work/tmp/graphdb6671553462073041570/work/Tomcat/localhost/ROOT/ccs-ruleset1585735132740.pie'
[DEBUG] 2020-04-01 02:59:11,593 [http-nio-7200-exec-5 | o.e.r.h.s.ProtocolUtil] Acceptable formats: text/csv;q=0.8, application/sparql-results+json;q=0.8, application/json;q=0.8, application/sparql-results+xml;q=0.8, application/xml;q=0.8, text/tab-separated-values;q=0.8, application/x-binary-rdf-results-table
[INFO ] 2020-04-01 02:59:11,593 [http-nio-7200-exec-5 | o.e.r.h.s.r.TupleQueryResultView] Request for query 3392903 is finished
The GraphDB logs seem to indicate that there weren't any issues, but on the client side I get the HttpServerErrorException: 500. I'm not able to figure out what exactly the issue is.

The result returned from GraphDB is UploadRuleSetResult, not RulesetResponse. You're passing an invalid responseType to the postForObject method; changing the type will solve the issue.
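A minimal sketch of the change, assuming the UploadRuleSetResult class mentioned above is available to your client code:

// Sketch only: same multipart call as in the question, but with the response type the server actually returns.
UploadRuleSetResult result = rest.postForObject(uri, multipartMap, UploadRuleSetResult.class);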

Related

Redis Reactive Streams Subscriber Thread Hangs

I am trying to use the Spring Boot Redis Reactive Stream to subscribe to a stream as a listener. When data is inserted into the stream, the listener passes it to the client over a gRPC stream. I keep a pointer, which is just a Redis value, to track the last record delivered to the client. The thread gets blocked randomly when I set the pointer in Redis, and I get a timeout.
Mainly I use this pointer so that if the client reconnects after some time, I can deliver the data from the last record sent up to the current data. The thread gets blocked at template.opsForValue().set(pointerKey, msg.getId().toString()).block(Duration.ofSeconds(5)).
Please let me know if anything is wrong in the code below. If I post 10 records to the stream, I get the error after receiving 5 records.
Code
public void subscribe() {
    String channelId = this.streamRequest.getTopic();
    String identifier = this.streamRequest.getIdentifier();
    boolean isNew = this.streamRequest.getNew();
    String pointerKey = channelId + "_" + identifier + "_pointer";
    StreamOffset<String> stringStreamOffset = StreamOffset.fromStart(channelId);
    if (isNew) {
        // The client wants to read data from the start, so remove the pointer
        template.opsForValue().delete(pointerKey).block();
    } else {
        String id = template.opsForValue().get(pointerKey).block();
        stringStreamOffset = id != null ? StreamOffset.create(channelId, ReadOffset.from(id)) : StreamOffset.fromStart(channelId);
    }
    logger.info("[SC] subscribed {}", this.streamRequest);
    Flux<ObjectRecord<String, String>> receiver = this.streamReceive.receive(stringStreamOffset);
    disposable = receiver.subscribe(msg -> {
        logger.info("Processing message {}", msg.getValue());
        String value = msg.getValue();
        StreamResponse streamResponse = StreamResponse.newBuilder().setData(value).build();
        try {
            logger.info("[SC] posting data to the grpc client topic {}", this.streamRequest);
            this.responseObserver.onNext(streamResponse);
            logger.info("[SC] Successfully posted data to the grpc client {}", this.streamRequest);
            logger.info("[SC] Updating pointer {}", pointerKey);
            template.opsForValue().set(pointerKey, msg.getId().toString())
                    .block(Duration.ofSeconds(5));
            logger.info("[SC] pointer update completed {}", pointerKey);
        } catch (Exception ex) {
            logger.error("Error:{}", ex.getMessage());
            this.responseObserver.onError(ex.getCause());
            close();
        }
    });
}
Error:
Name: lettuce-nioEventLoop-4-1
State: TIMED_WAITING on java.util.concurrent.CountDownLatch$Sync@3c9ebf7a
Total blocked: 2 Total waited: 60
Stack trace:
java.base@17.0.2/jdk.internal.misc.Unsafe.park(Native Method)
java.base@17.0.2/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:252)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:717)
java.base@17.0.2/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1074)
java.base@17.0.2/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:276)
app//reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:121)
app//reactor.core.publisher.Mono.block(Mono.java:1731)
app//ai.jiffy.message.publisher.ws.StreamConnection.setPointer(StreamConnection.java:68)
app//ai.jiffy.message.publisher.ws.StreamConnection.lambda$new$0(StreamConnection.java:54)
app//ai.jiffy.message.publisher.ws.StreamConnection$$Lambda$1318/0x0000000801553610.accept(Unknown Source)
app//reactor.core.publisher.LambdaSubscriber.onNext(LambdaSubscriber.java:160)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
app//reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
app//reactor.core.publisher.FluxCreate$SerializedFluxSink.next(FluxCreate.java:154)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.onStreamMessage(DefaultStreamReceiver.java:398)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription.access$300(DefaultStreamReceiver.java:210)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:360)
app//org.springframework.data.redis.stream.DefaultStreamReceiver$StreamSubscription$1.onNext(DefaultStreamReceiver.java:351)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.onNext(FluxUsingWhen.java:345)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
app//reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onNext(MonoFlatMapMany.java:250)
app//reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
app//io.lettuce.core.RedisPublisher$ImmediateSubscriber.onNext(RedisPublisher.java:886)
app//io.lettuce.core.RedisPublisher$RedisSubscription.onNext(RedisPublisher.java:291)
app//io.lettuce.core.output.StreamingOutput$Subscriber.onNext(StreamingOutput.java:64)
app//io.lettuce.core.output.StreamReadOutput.complete(StreamReadOutput.java:110)
app//io.lettuce.core.protocol.RedisStateMachine.doDecode(RedisStateMachine.java:343)
app//io.lettuce.core.protocol.RedisStateMachine.decode(RedisStateMachine.java:295)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:841)
app//io.lettuce.core.protocol.CommandHandler.decode0(CommandHandler.java:792)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:766)
app//io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:658)
app//io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:598)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
app//io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
app//io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
app//io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
app//io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
app//io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
app//io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
app//io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
app//io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
app//io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
app//io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
Thanks in advance.
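For reference, here is a rough sketch of the non-blocking variant I am considering (just my assumption, not a verified fix): the pointer update is chained into the pipeline with concatMap instead of calling block() on the lettuce event-loop thread. Names are the same as in the snippet above.

// Sketch only: the pointer SET becomes part of the reactive pipeline, so the
// lettuce NIO thread is never parked waiting on its own Redis connection.
disposable = receiver
        .concatMap(msg -> {
            StreamResponse streamResponse = StreamResponse.newBuilder()
                    .setData(msg.getValue())
                    .build();
            this.responseObserver.onNext(streamResponse);
            // non-blocking pointer update; completion is awaited by the pipeline, not by block()
            return template.opsForValue()
                    .set(pointerKey, msg.getId().toString())
                    .thenReturn(msg);
        })
        .subscribe(
                msg -> logger.info("[SC] pointer updated up to {}", msg.getId()),
                ex -> {
                    logger.error("Error:{}", ex.getMessage());
                    this.responseObserver.onError(ex);
                    close();
                });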

spring-cloud-stream 3.0 with batch-mode: true - consumer fails with "Can't convert value of class"

config info
spring:
  kafka:
    consumer:
      max-poll-records: 5
  cloud:
    stream:
      instance-count: 5
      instance-index: 0
      kafka:
        binder:
          brokers: 127.0.0.1:9092
          auto-create-topics: true
          auto-add-partitions: true
          min-partition-count: 5
      bindings:
        log-data-in:
          destination: log-data1
          group: log-data-group
          contenttype: text/plain
          consumer:
            partitioned: true
            batch-mode: true
        log-data-out:
          destination: log-data1
          contentType: text/plain
          producer:
            use-native-encoding: true
            partitionCount: 5
#           configuration:
#             key.serializer: org.apache.kafka.common.serialization.StringSerializer
#             value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
Kafka send code:
LogData logData = new LogData();
logData.setId(1);
logData.setVer("22");
MessageChannel messageChannel = logDataStreams.outboundLogDataStreams();
boolean sent = messageChannel.send(MessageBuilder.withPayload(logData).setHeader("partitionKey", key).build());
The consumer listens on the Kafka topic. I think there is an error in how the data type is handled for Kafka; as far as I can tell no setting is missing, so why is there a type-conversion error when consuming the Kafka messages? The bytes apparently cannot be converted back into the entity class.
The code below is the consuming side, which is where the error is reported:
@StreamListener(LogDataStreams.INPUT_LOG_DATA)
public void handleLogData(List<LogData> messages) {
    System.out.println(messages);
    messages.parallelStream().forEach(item -> {
        System.out.println(item);
    });
}
The topic identifiers are defined here:
public interface LogDataStreams {
    String INPUT_LOG_DATA = "log-data-in";
    String OUTPUT_LOG_DATA = "log-data-out";
    String INPUT_LOG_SC = "log-sc-in";
    String OUTPUT_LOG_SC = "log-sc-out";
    String INPUT_LOG_BEHAVIOR = "log-behavior-in";
    String OUTPUT_LOG_BEHAVIOR = "log-behavior-out";

    @Input(INPUT_LOG_DATA)
    SubscribableChannel inboundLogDataStreams();

    @Output(OUTPUT_LOG_DATA)
    MessageChannel outboundLogDataStreams();
}
Error in last run:
Caused by: org.apache.kafka.common.errors.SerializationException: Can't convert value of class com.xx.core.data.model.LogData to class org.apache.kafka.common.serialization.ByteArraySerializer specified in value.serializer
Caused by: java.lang.ClassCastException: com.xx.core.data.model.LogData cannot be cast to [B
Please help me figure out how to solve this problem.
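For context, a rough sketch of one workaround I am considering (my assumption, not a confirmed fix): since use-native-encoding: true hands the record value straight to the configured ByteArraySerializer, the payload would have to already be a byte[], for example serialized with Jackson before sending.

// Sketch only: serialize LogData to bytes myself so ByteArraySerializer receives a byte[]
// instead of the POJO (Jackson's ObjectMapper is assumed to be on the classpath).
ObjectMapper mapper = new ObjectMapper();
byte[] payload = mapper.writeValueAsBytes(logData);
boolean sent = messageChannel.send(MessageBuilder.withPayload(payload)
        .setHeader("partitionKey", key)
        .build());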

Retry is executed only once in Spring Cloud Stream Reactive

When I use retry in Spring Cloud Stream Reactive, a situation arises that I don't understand, so I'm asking a question here.
I send String data once per second; after processing it in the s-c-stream Function, I intentionally throw a RuntimeException depending on a condition.
@Bean
fun test(): Function<Flux<String>, Flux<String>?> = Function { input ->
    input.map { sellerId ->
        if (sellerId == "I-ZICGO")
            throw RuntimeException("intentional")
        else
            log.info("do normal: {}", sellerId)
        sellerId
    }.retryWhen(Retry.from { companion ->
        companion.map { rs ->
            if (rs.totalRetries() < 3) { // retrying 3 times
                log.info("retry!!!: {}", rs.totalRetries())
                rs.totalRetries()
            } else
                throw Exceptions.propagate(rs.failure())
        }
    })
}
However, the result of running the above logic is:
2021-02-25 16:14:29.319 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 0 subscriber(s).
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] k.c.m.c.service.impl.ItemServiceImpl : retry!!!: 0
2021-02-25 16:14:29.322 INFO 36211 --- [container-0-C-1] o.s.c.s.m.DirectWithAttributesChannel : Channel 'consumer.processingSellerItem-in-0' has 1 subscriber(s).
Retry is processed only once.
Should I change from reactive to imperative to fix this?
In short, yes. The retry settings are meaningless for reactive functions. You can see a more detailed explanation in the similar SO question here.
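If staying reactive is a hard requirement, one commonly used pattern (not from the answer above, just a sketch, shown here in plain Java/Reactor) is to isolate each element in its own inner Mono so a failure retries and then drops only that element instead of cancelling the whole stream:

// Sketch only: per-element retry; a poison record no longer terminates the Flux binding.
@Bean
public Function<Flux<String>, Flux<String>> test() {
    return input -> input.flatMap(sellerId ->
            Mono.fromCallable(() -> {
                        if ("I-ZICGO".equals(sellerId)) {
                            throw new RuntimeException("intentional");
                        }
                        return sellerId;
                    })
                    .retry(3)                         // retry this single element up to 3 times
                    .onErrorResume(e -> Mono.empty()) // drop it once retries are exhausted
    );
}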

javax.cache.CacheException: Failed to find SQL table for type: XXXXXX

I am using Ignite 2.7 and getting the below exception while querying the Ignite cache:
19/10/30 05:33:14 ERROR yarn.ApplicationMaster: User class threw exception: javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
This is my code to start Ignite:
def getIgnite(): Ignite = {
  val clusterName = clusterProperties.getProperty("XXXXXXXX")
  Ignition.setClientMode(true)
  try {
    val ignite = Ignition.ignite(clusterName)
    if (clusterName == ignite.name()) {
      logInfo("#### Found and returning client for cluster: " + clusterName)
      return ignite
    }
  } catch {
    case e: Exception => e.printStackTrace()
  }
  val configFilePath = clusterProperties.getProperty("XXXXXXXX")
  logInfo("#### configFilePath: " + configFilePath)
  val configInputStream = FileSystem.get(new Configuration()).open(new Path(configFilePath))
  logInfo("#### Starting Ignite Client")
  return Ignition.start(configInputStream)
}
I load data into the cache; it loads successfully, and the cache size is printed as below:
INFO dataloader.IgniteDataLoader: #### OFFICIAL_NAME_CACHE SIZE => 51016471
The code below creates and accesses the cache:
def createXXXXXXCache: IgniteCache[String, CanonicalXXXXX] = {
  val orgCacheCfg: CacheConfiguration[String, CanonicalXXXXX] =
    new CacheConfiguration[String, CanonicalXXXXX](OFFICIAL_NAME_CACHE)
  orgCacheCfg.setIndexedTypes(classOf[String], classOf[CanonicalXXXXX])
  orgCacheCfg.setCacheMode(CacheMode.PARTITIONED)
  orgCacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC)
  orgCacheCfg.setBackups(3)
  getIgnite().getOrCreateCache(orgCacheCfg)
}
But when I try to query the cache I get the exception. The code below queries the cache:
val companyName = "SUFFOLK CONST CO"
val queryString = "orgName = '" + companyName + "'"
val companyNameQuery = new SqlQuery[String, CanonicalXXXXX](classOf[CanonicalXXXXX], queryString)
val queryCursor = igniteXXXXXX.createXXXXXXCache.query(companyNameQuery)
val queryResults = Future {
  queryCursor.getAll()
}
try {
  val companyResults = extractXXXXXVO(queryResults)
  logInfo(s"Ignite Results returned - $companyResults")
} catch {
  case exe: Exception =>
    logInfo(s"OfficialXXXX exact search timed out")
    queryCursor.close()
    Vector.empty
}
The code throws the exception at the query() method, on the line below:
val queryCursor = igniteXXXXXX.createXXXXXXCache.query(companyNameQuery)
19/10/30 05:33:14 INFO dataloader.IgniteServerXXXXXX: Examplelogger1 - 'SqlQuery [type= CanonicalXXXXX, alias=null, sql=orgName = 'SUFFOLK CONSTRUCTION CO INC', args=null, timeout=0, distributedJoins=false, replicatedOnly=false]'
19/10/30 05:33:14 INFO dataloader.igniteXXXXXX: #### Found and returning client for cluster: JalaDalaXXXXX
19/10/30 05:33:14 ERROR yarn.ApplicationMaster: User class threw exception: javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
javax.cache.CacheException: Failed to find SQL table for type: CanonicalXXXXX
at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697)
at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:376)
at xx.xx.dataloader.IgniteServerXXXXXX$.loadOAData(IgniteServerXXXXXX.scala:70)
at xx.xx.StartXXXXXXX$.startIgniteAndDataloading(StartXXXXXXX.scala:49)
at xx.xx.StartXXXXXXX$delayedInit$body.apply(StartXXXXXXX.scala:13)
at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.App$$anonfun$main$1.apply(App.scala:71)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:32)
at scala.App$class.main(App.scala:71)
at xx.xx.StartXXXXXXX$.main(StartXXXXXXX.scala:12)
at xx.xx.StartXXXXXXX.main(StartXXXXXXX.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:567)
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to find SQL table for type: CanonicalNameOAVO
at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSql(IgniteH2Indexing.java:1843)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$7.applyx(GridQueryProcessor.java:2289)
at org.apache.ignite.internal.processors.query.GridQueryProcessor$7.applyx(GridQueryProcessor.java:2287)
at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.queryDistributedSql(GridQueryProcessor.java:2286)
at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySql(GridQueryProcessor.java:2267)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:682)
Could someone please suggest how to resolve this issue?
Thanks,
Chandan
I'm not sure about the root cause of the issue, but first of all you should replace
val queryString = "orgName = '" + companyName + "'"
val companyNameQuery = new SqlQuery[String, CanonicalXXXXX](classOf[CanonicalXXXXX], queryString)
with
val queryString = "orgName = ?"
val companyNameQuery = new SqlQuery[String, CanonicalXXXXX](classOf[CanonicalXXXXX], queryString)
companyNameQuery.setArgs(companyName)
That's the correct way of using this API. By the way, SqlQuery is essentially deprecated; you should prefer SqlFieldsQuery instead.
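A minimal sketch of the SqlFieldsQuery variant, shown in plain Java since that is the native Ignite API; the table and column names are taken from the question and are assumptions, and cache refers to the IgniteCache created above:

// Sketch only: SqlFieldsQuery returns rows of the selected columns rather than key/value pairs.
// The table name is assumed to match the value type registered via setIndexedTypes(...).
SqlFieldsQuery query = new SqlFieldsQuery(
        "select _key, orgName from CanonicalXXXXX where orgName = ?")
        .setArgs(companyName);

try (QueryCursor<List<?>> cursor = cache.query(query)) {
    for (List<?> row : cursor) {
        System.out.println("key = " + row.get(0) + ", orgName = " + row.get(1));
    }
}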

Using karate-config parameters in a feature file

The Karate header examples do not show how to access config values other than baseUrl. When I switch environments (passing -Dkarate.env=qual as part of the run command), baseUrl is set correctly.
The problem is that I want to use other config values as shown here, but when I run the test it fails to access config.apiKey correctly. Instead I get this error:
html report:
file:/C:/bitbucket/karate-checkdigit-api/target/surefire-reports/TEST-features.checkdigitapi.VA.html
Tests run: 250, Failures: 0, Errors: 50, Skipped: 175, Time elapsed: 4.112 sec <<< FAILURE!
* def secretKey = config.apiKey(| XYZ | 2110974841 | 204 | Valid |) Time elapsed: 0.005 sec <<< ERROR!
java.lang.RuntimeException: no variable found with name: config
at com.intuit.karate.Script.getValuebyName(Script.java:323)
at com.intuit.karate.Script.evalJsonPathOnVarByName(Script.java:378)
at com.intuit.karate.Script.eval(Script.java:309)
at com.intuit.karate.Script.eval(Script.java:194)
at com.intuit.karate.Script.assign(Script.java:656)
at com.intuit.karate.Script.assign(Script.java:587)
at com.intuit.karate.StepDefs.def(StepDefs.java:265)
at ✽.* def secretKey = config.apiKey(features/checkdigitapi/XYZ.feature:6)
My .feature file and karate-config.js are below.
XYZ.feature
@regression
Feature: Checkdigit Algorithm API
Background:
* url baseUrl
* def secretKey = config.apiKey
* configure ssl = true
Scenario Outline: Testing XYZ algorithm
* configure headers = { KeyId: secretKey, Accept: 'application/json' }
Given path 'headers'
And param url = baseUrl
And params { customerId: '<custcode>', algoId: '<algo>' }
When method get
Then status <val>
Examples:
| algo | custcode | val | comment |
| XYZ | 2110974841 | 204 | Valid |
| XYZ | 7790011614 | 204 | Valid |
| XYZ | 5580015174 | 204 | Valid |
| XYZ | 2110974840 | 400 | expected check digit 1 |
| XYZ | 211097484 | 400 | not 10 digits |
| XYZ | 211097484x | 400 | not numeric |
karate-config.js
function() {
  // set up runtime variables based on environment
  // get system property 'karate.env'
  var env = karate.env;
  if (!env) { env = 'dev'; } // default when karate.env not set
  // base config
  var config = {
    env: env,
    baseUrl: 'https://localapi.abc123.example.com/api/v1/validate/customerid',
    apiKey: ''
  }
  // switch environment
  if (env == 'dev') {
    config.baseUrl = 'https://devapi.abc123.example.com/api/v1/validate/customerid';
    config.apiKey = 'fake-1ba403ca8938176f3a62de6d30cfb8e';
  } else if (env == 'qual') { // Pre-production environment settings
    config.baseUrl = 'https://qualapi.abc123.example.com/api/v1/validate/customerid';
    config.apiKey = 'fake-d5de2eb8c0920537f5488f6535c139f2';
  }
  karate.log('karate.env =', karate.env);
  karate.log('config.baseUrl =', config.baseUrl);
  karate.log('config.apiKey =', config.apiKey);
  return config;
}
(similar issue here, using a separate headers.js: https://github.com/intuit/karate/issues/94)
Keep in mind that all the keys within the JSON object returned by karate-config.js will be injected as variables, and nothing else. So you will not be able to refer to config, but you will certainly be able to refer to apiKey.
I think if you make this simple change, things will start working:
* def secretKey = apiKey
Also, I think you have a problem in the first line of the scenario, it should be:
* configure headers = { KeyId: '#(secretKey)', Accept: 'application/json' }
FYI, my final, correctly working XYZ.feature file looks like this now.
The line Given path 'headers' caused header info to creep into the URL, so it's removed.
XYZ.feature
@regression
Feature: Checkdigit Algorithm API
Background:
* url baseUrl
* def secretKey = apiKey
* configure ssl = true
Scenario Outline: Testing XYZ algorithm
* configure headers = { KeyId: '#(secretKey)', Accept: 'application/json' }
Given url baseUrl
And params { customerId: '<custcode>', algoId: '<algo>' }
When method get
Then status <val>
Examples:
[...]