Stream returns wrong type - spring-webflux

I'm trying to understand the reactive style, but I'm stuck on this example.
public class ScriptServiceImpl implements ScriptService {

    private static Logger log = LoggerFactory.getLogger(ScriptServiceImpl.class);
    private final ScriptEngineManager manager = new ScriptEngineManager();
    private final ScriptEngine engine = manager.getEngineByName("JavaScript");

    @Override
    public Flux<MyFunctionResult> evaluate(MyFunction myFunction, Integer iterations) {
        Flux<MyFunctionResult> flux = Flux.empty();
        flux.mergeWith(
            Flux.range(1, iterations)
                .map(counter -> {
                    engine.put("parametr", counter);
                    try {
                        long start = System.currentTimeMillis();
                        String functionResult = engine.eval(myFunction.getSource()).toString();
                        long timer = System.currentTimeMillis() - start;
                        return Mono.just(new MyFunctionResult(timer, functionResult, myFunction.getNumber(), counter));
                    } catch (ScriptException ex) {
                        return Mono.error(ex);
                    }
                })
        );
        return flux;
    }
}
I want to return a Flux of MyFunctionResult but get a Flux of Object in the Flux.mergeWith section. What am I doing wrong?

There are multiple issues here:
You don't need to wrap MyFunctionResult into a Mono; map expects a non-reactive return type. As a result, instead of Mono.error you should just wrap the checked exception into an unchecked RuntimeException.
You need to return the result of flux.mergeWith and not flux, because operators return a new Flux rather than modifying the one they are called on. But in general, for this example you don't need mergeWith at all.
Your code could be converted into
return Flux.range(1, iterations)
        .map(counter -> {
            engine.put("parametr", counter);
            try {
                long start = System.currentTimeMillis();
                String functionResult = engine.eval(myFunction.getSource()).toString();
                long timer = System.currentTimeMillis() - start;
                return new MyFunctionResult(timer, functionResult, myFunction.getNumber(), counter);
            } catch (ScriptException ex) {
                throw Exceptions.propagate(ex);
            }
        });
In addition, I'm not sure about engine.eval, but if it is blocking code, consider wrapping it and running it on a separate scheduler; see How Do I Wrap a Synchronous, Blocking Call?
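For illustration only, here is a minimal sketch of what the evaluate body could look like if engine.eval turns out to be blocking. It assumes Reactor's Schedulers class is available (use Schedulers.elastic() on older Reactor versions); everything else reuses the names from the question.
return Flux.range(1, iterations)
        .flatMap(counter -> Mono.fromCallable(() -> {
                    // note: a shared ScriptEngine may not be safe for concurrent use; shown as-is for brevity
                    engine.put("parametr", counter);
                    long start = System.currentTimeMillis();
                    String functionResult = engine.eval(myFunction.getSource()).toString();
                    long timer = System.currentTimeMillis() - start;
                    return new MyFunctionResult(timer, functionResult, myFunction.getNumber(), counter);
                })
                // run the blocking eval on a scheduler intended for blocking work
                .subscribeOn(Schedulers.boundedElastic()));
Note that Mono.fromCallable turns the checked ScriptException into an error signal for you, so no try/catch is needed inside the callable.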


webflux Mono<T> onErrorReturn not called

This is my HandlerFunction:
public Mono<ServerResponse> getTime(ServerRequest serverRequest) {
    return time(serverRequest).onErrorReturn("some errors has happened !").flatMap(s -> {
        // this didn't get called
        return ServerResponse.ok().contentType(MediaType.TEXT_PLAIN).syncBody(s);
    });
}
The time(ServerRequest serverRequest) method is:
private Mono<String> time(ServerRequest request) {
    String format = DateTimeFormatter.ofPattern("HH:mm:ss").format(LocalDateTime.now());
    return Mono.just("time is:" + format + "," + request.queryParam("name").get());
}
When I don't pass the "name" param, it throws a NoSuchElementException, but the Mono's onErrorReturn is not called.
Why, or what am I doing wrong?
The onError... operators are meant to deal with error signals happening in the pipeline.
In your case, the NoSuchElementException is thrown outside of the reactive pipeline, before anything can subscribe to the returned Mono.
I think you might get the behavior you're looking for by deferring the execution like this:
private Mono<String> time(ServerRequest request) {
    return Mono.defer(() -> {
        String format = DateTimeFormatter.ofPattern("HH:mm:ss").format(LocalDateTime.now());
        return Mono.just("time is:" + format + "," + request.queryParam("name").get());
    });
}
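As a side note (not part of the original answer), since ServerRequest.queryParam returns an Optional, you could also turn the missing parameter into an explicit error signal instead of relying on the exception thrown by get(). A rough sketch:
private Mono<String> time(ServerRequest request) {
    return Mono.defer(() -> {
        String format = DateTimeFormatter.ofPattern("HH:mm:ss").format(LocalDateTime.now());
        // map the Optional instead of calling get(), so the failure is an explicit error signal
        return request.queryParam("name")
                .map(name -> Mono.just("time is:" + format + "," + name))
                .orElseGet(() -> Mono.error(new NoSuchElementException("query param 'name' is required")));
    });
}
Either way, the error now travels through the pipeline, so onErrorReturn in the handler gets a chance to run.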

WebFlux filter: Flux access redis,if ok then run next filter,But unconformity

In my project, I verify that the app_key is valid by looking it up in Redis.
I use ReactiveRedisTemplate to access the Redis data, and in a filter I verify that the app_key is valid. If the app_key is valid, the request should jump to the next filter; otherwise the exception should be written to the client.
Actually: if the Redis connection times out, the error consumer (ex) runs as expected, but when Redis is running normally, the program does not execute the app_key verification; it jumps directly to the next filter.
Please tell me what to do. Thanks!
@Resource
private AppKeyProvider appKeyProvider;

public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    try {
        String app_key = exchange.getRequest().getQueryParams().getFirst("app_key");
        // app_key verify
        Flux.just(app_key).flatMap(key -> appKeyProvider.getAppKey(key)).subscribe(
            appKey -> {
                if (appKey == null) {
                    // app_key is not valid
                    throw new AppException(ErrorCode.ILLEGAL_APP_KEY);
                } else {
                    // do... jump to next filter
                }
            },
            ex -> {
                throw new AppException(ErrorCode.SERVICE_BASIC_ERROR, ex);
            }
        );
    } catch (AppException ex) {
        exchange.getResponse().setStatusCode(HttpStatus.BAD_REQUEST);
        exchange.getResponse().getHeaders().setContentType(MediaType.APPLICATION_JSON);
        String result = RestHelper.build(ex, exchange).toString();
        return exchange.getResponse().writeWith(Mono.just(exchange.getResponse().bufferFactory().wrap(result.getBytes(Charsets.UTF_8))));
    }
    return chain.filter(exchange);
}
AppKeyProvider.java
@Component
public class AppKeyProvider {

    @Resource
    private ReactiveRedisTemplate reactiveRedisTemplate;

    private final static Logger logger = LoggerFactory.getLogger(AppKeyProvider.class);
    private final static AppKeyProvider instance = new AppKeyProvider();
    private static ConcurrentHashMap<String, Api> apiMap = new ConcurrentHashMap<String, Api>();
    private final static Lock lock = new ReentrantLock(true);

    /**
     * Get AppKey
     *
     * @param app_key
     * @return
     */
    public Mono<AppKey> getAppKey(String app_key) {
        ReactiveValueOperations<String, AppKey> operations = reactiveRedisTemplate.opsForValue();
        Mono<AppKey> appKey = operations.get(RedisKeypPrefix.APP_KEY + app_key);
        return appKey;
    }
}
This happens because you've manually subscribed to the key lookup. Doing so decouples the main filter processing from that operation, meaning they can happen concurrently on different threads, so they can't track each other's result.
Also, in reactive programming errors happen within the pipeline and should be dealt with using operators; try/catch blocks won't work in this case.
Here's an attempt at fixing this code snippet:
public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
    String app_key = exchange.getRequest().getQueryParams().getFirst("app_key");
    return appKeyProvider.getAppKey(app_key)
            .switchIfEmpty(Mono.error(new AppException(ErrorCode.ILLEGAL_APP_KEY)))
            .flatMap(key -> chain.filter(exchange))
            .onErrorResume(AppException.class, exc -> {
                exchange.getResponse().setStatusCode(HttpStatus.BAD_REQUEST);
                exchange.getResponse().getHeaders().setContentType(MediaType.APPLICATION_JSON);
                String result = RestHelper.build(exc, exchange).toString();
                return exchange.getResponse().writeWith(Mono.just(exchange.getResponse().bufferFactory().wrap(result.getBytes(Charsets.UTF_8))));
            });
}

stop polling files when rabbitmq is down: spring integration

I'm working on a project where we poll files from an SFTP server and stream each one out as an object on a RabbitMQ queue. When RabbitMQ is down, the poller still polls, deletes the file from the server, and loses the file while trying to send it to the queue. I'm using ExpressionEvaluatingRequestHandlerAdvice to remove the file on successful transformation. My code looks like this:
@Bean
public SessionFactory<ChannelSftp.LsEntry> sftpSessionFactory() {
    DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    factory.setHost(sftpProperties.getSftpHost());
    factory.setPort(sftpProperties.getSftpPort());
    factory.setUser(sftpProperties.getSftpPathUser());
    factory.setPassword(sftpProperties.getSftpPathPassword());
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
public SftpRemoteFileTemplate sftpRemoteFileTemplate() {
    return new SftpRemoteFileTemplate(sftpSessionFactory());
}

@Bean
@InboundChannelAdapter(channel = TransformerChannel.TRANSFORMER_OUTPUT, autoStartup = "false",
        poller = @Poller(value = "customPoller"))
public MessageSource<InputStream> sftpMessageSource() {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(sftpRemoteFileTemplate,
            null);
    messageSource.setRemoteDirectory(sftpProperties.getSftpDirPath());
    messageSource.setFilter(new SftpPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(),
            "streaming"));
    messageSource.setFilter(new SftpSimplePatternFileListFilter("*.txt"));
    return messageSource;
}

@Bean
@Transformer(inputChannel = TransformerChannel.TRANSFORMER_OUTPUT,
        outputChannel = SFTPOutputChannel.SFTP_OUTPUT,
        adviceChain = "deleteAdvice")
public org.springframework.integration.transformer.Transformer transformer() {
    return new SFTPTransformerService("UTF-8");
}

@Bean
public ExpressionEvaluatingRequestHandlerAdvice deleteAdvice() {
    ExpressionEvaluatingRequestHandlerAdvice advice = new ExpressionEvaluatingRequestHandlerAdvice();
    advice.setOnSuccessExpressionString(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])");
    advice.setPropagateEvaluationFailures(false);
    return advice;
}
I don't want the files to be polled/removed from the remote SFTP server when the RabbitMQ server is down. How can I achieve this?
UPDATE
Apologies for not mentioning that I'm using the Spring Cloud Stream Rabbit binder. Here is the transformer service:
public class SFTPTransformerService extends StreamTransformer {

    public SFTPTransformerService(String charset) {
        super(charset);
    }

    @Override
    protected Object doTransform(Message<?> message) throws Exception {
        String fileName = message.getHeaders().get("file_remoteFile", String.class);
        Object fileContents = super.doTransform(message);
        return new customFileDTO(fileName, (String) fileContents);
    }
}
UPDATE-2
I added a TransactionSynchronizationFactory on the customPoller as suggested. Now it doesn't poll the file when the Rabbit server is down, but when the server is back up, it keeps polling the same file over and over again, and I cannot figure out why. I guess I cannot use PollerSpec because I'm on version 4.3.2.
@Bean(name = "customPoller")
public PollerMetadata pollerMetadataDTX(StartStopTrigger startStopTrigger,
        CustomTriggerAdvice customTriggerAdvice) {
    PollerMetadata pollerMetadata = new PollerMetadata();
    pollerMetadata.setAdviceChain(Collections.singletonList(customTriggerAdvice));
    pollerMetadata.setTrigger(startStopTrigger);
    pollerMetadata.setMaxMessagesPerPoll(Long.valueOf(sftpProperties.getMaxMessagePoll()));

    ExpressionEvaluatingTransactionSynchronizationProcessor syncProcessor =
            new ExpressionEvaluatingTransactionSynchronizationProcessor();
    syncProcessor.setBeanFactory(applicationContext.getAutowireCapableBeanFactory());
    syncProcessor.setBeforeCommitChannel(
            applicationContext.getBean(TransformerChannel.TRANSFORMER_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitChannel(
            applicationContext.getBean(SFTPOutputChannel.SFTP_OUTPUT, MessageChannel.class));
    syncProcessor.setAfterCommitExpression(new SpelExpressionParser().parseExpression(
            "@sftpRemoteFileTemplate.remove(headers['file_remoteDirectory'] + headers['file_remoteFile'])"));

    DefaultTransactionSynchronizationFactory defaultTransactionSynchronizationFactory =
            new DefaultTransactionSynchronizationFactory(syncProcessor);
    pollerMetadata.setTransactionSynchronizationFactory(defaultTransactionSynchronizationFactory);
    return pollerMetadata;
}
I don't know if you need this info, but my CustomTriggerAdvice and StartStopTrigger look like this:
@Component
public class CustomTriggerAdvice extends AbstractMessageSourceAdvice {

    @Autowired private StartStopTrigger startStopTrigger;

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        return true;
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        if (result == null) {
            if (startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        } else {
            if (!startStopTrigger.getStart()) {
                startStopTrigger.stop();
            }
        }
        return result;
    }
}
public class StartStopTrigger implements Trigger {

    private PeriodicTrigger startTrigger;
    private boolean start;

    public StartStopTrigger(PeriodicTrigger startTrigger, boolean start) {
        this.startTrigger = startTrigger;
        this.start = start;
    }

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (!start) {
            return null;
        }
        start = true;
        return startTrigger.nextExecutionTime(triggerContext);
    }

    public void stop() {
        start = false;
    }

    public void start() {
        start = true;
    }

    public boolean getStart() {
        return this.start;
    }
}
Well, it would be great to see your SFTPTransformerService to determine how it is possible for the onSuccessExpression to be evaluated when there should be an exception in the case of a down broker.
You should not only throw an exception and skip the delete, but also consider adding a RequestHandlerRetryAdvice to re-send the file to RabbitMQ: https://docs.spring.io/spring-integration/docs/5.0.6.RELEASE/reference/html/messaging-endpoints-chapter.html#retry-advice
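For illustration only, a sketch of what such a retry advice bean might look like; the class name, bean name, and retry settings are my own rather than from the original answer, and it assumes spring-retry is on the classpath. The bean would then be referenced from the transformer's adviceChain alongside (or instead of) deleteAdvice.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class RetryAdviceConfig {

    // Re-attempt delivery to the downstream handler a few times before failing; values are illustrative.
    @Bean
    public RequestHandlerRetryAdvice retryAdvice() {
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        RetryTemplate retryTemplate = new RetryTemplate();
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1000);
        retryTemplate.setBackOffPolicy(backOff);
        advice.setRetryTemplate(retryTemplate);
        return advice;
    }
}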
UPDATE
So, since Gary guessed that you use Spring Cloud Stream to send the message to the Rabbit Binder after your internal process (it's a pity you didn't share that information originally), you need to take a look at the Binder's error handling on the matter: https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/#_retry_with_the_rabbitmq_binder
And it is true that the ExpressionEvaluatingRequestHandlerAdvice is applied only to the SFTPTransformerService and nothing more. The downstream error (in the Binder) is not included in this process.
UPDATE 2
Yeah... I think Gary is right, and we have no choice but to configure a TransactionSynchronizationFactory at the customPoller level instead of that ExpressionEvaluatingRequestHandlerAdvice.
The DefaultTransactionSynchronizationFactory can be configured with the ExpressionEvaluatingTransactionSynchronizationProcessor, which has a similar goal to the mentioned ExpressionEvaluatingRequestHandlerAdvice, but at the transaction level, which will include your whole process, starting with the SFTP Channel Adapter and ending at the Rabbit Binder level with the send-to-AMQP attempts.
See the Reference Manual for more information: https://docs.spring.io/spring-integration/reference/html/transactions.html#transaction-synchronization
The point with the ExpressionEvaluatingRequestHandlerAdvice (and any AbstractRequestHandlerAdvice) is that its boundary is only around the handleRequestMessage() method, therefore only around the component on which it is declared.

SAP JCo RETURN Table empty when using TransactionID

I'm using the JCo Library to access SAP standard BAPI. Well everything is also working except that the RETURN Table is always empty when I use the TID (TransactionID).
When I just remove the TID, I get the RETURN table filled with warnings etc. But unfortunately I need to use the TID for the transactional BAPI, otherwise the changes are not committed.
Why is the RETURN TABLE empty when using TID?
Or how must I commit changes to a transactional BAPI?
Here is pseudo-code of a BAPI access:
import com.sap.conn.jco.*;
import org.apache.commons.logging.*;

public class BapiSample {

    private static final Log logger = LogFactory.getLog(BapiSample.class);

    private static final String CLIENT = "400";
    private static final String INSTITUTION = "1000";

    protected JCoDestination destination;

    public BapiSample() {
        this.destination = getDestination("mySAPConfig.properties");
    }

    public void execute() {
        String tid = null;
        try {
            tid = destination.createTID();
            JCoFunction function = destination.getRepository().getFunction("BAPI_PATCASE_CHANGEOUTPATVISIT");
            function.getImportParameterList().setValue("CLIENT", CLIENT);
            function.getImportParameterList().setValue("INSTITUTION", INSTITUTION);
            function.getImportParameterList().setValue("MOVEMNT_SEQNO", "0001");
            // Here we then set all parameters of the BAPI...
            // ...
            // Now the execute
            function.execute(destination, tid);
            // And getting the RETURN Table. !!! THIS IS ALWAYS EMPTY!
            JCoTable returnTable = function.getTableParameterList().getTable("RETURN");
            int numRows = returnTable.getNumRows();
            for (int i = 0; i < numRows; i++) {
                returnTable.setRow(i);
                logger.info("RETURN VALUE: " + returnTable.getString("MESSAGE"));
            }
            JCoFunction commit = destination.getRepository().getFunction("BAPI_TRANSACTION_COMMIT");
            commit.execute(destination, tid);
            destination.confirmTID(tid);
        } catch (Throwable ex) {
            try {
                if (destination != null) {
                    JCoFunction rollback = destination.getRepository().getFunction("BAPI_TRANSACTION_ROLLBACK");
                    rollback.execute(destination, tid);
                }
            } catch (Throwable t1) {
            }
        }
    }

    protected static JCoDestination getDestination(String fileName) {
        JCoDestination result = null;
        try {
            result = JCoDestinationManager.getDestination(fileName);
        } catch (Exception ex) {
            logger.error("Error during destination resolution", ex);
        }
        return result;
    }
}
UPDATE 10.01.2013: I was finally able to get both the RETURN table filled and the inputs committed. The solution is to do both: a commit without a TID, get the RETURN table, and then make another commit with the TID.
Very, very strange, but maybe that is the correct usage of the JCo commits. Can someone explain this to me?
I was able to get both the RETURN table filled and the inputs committed.
The solution is to do both: a commit without a TID, get the RETURN table, and then make another commit with the TID.
You should not call the execute method twice; it will increment the sequence number.
You should use the begin and end methods of the JCoContext class.
If you call the begin method at the beginning of the process, the data will be updated and the messages will be returned.
Here is the sample code.
JCoDestination destination = JCoDestinationManager.getDestination("");
try
{
    JCoContext.begin(destination);
    function.execute(destination);
    function.execute(destination);
}
catch (AbapException ex)
{
    ...
}
catch (JCoException ex)
{
    ...
}
catch (Exception ex)
{
    ...
}
finally
{
    JCoContext.end(destination);
}
You can refer to this URL for further information:
http://www.finereporthelp.com/download/SAP/sapjco3_linux_32bit/javadoc/com/sap/conn/jco/JCoContext.html

Load external properties files into EJB 3 app running on WebLogic 11

Am researching the best way to load external properties files from an EJB 3 app whose EAR file is deployed to WebLogic.
Was thinking about using an init servlet but I read somewhere that it would be too slow (e.g. my message handler might receive a message from my JMS queue before the init servlet runs).
Suppose I have multiple property files or one file here:
~/opt/conf/
So far, I feel that the best possible solution is to use a WebLogic application lifecycle event and read the properties files during pre-start:
import weblogic.application.ApplicationLifecycleListener;
import weblogic.application.ApplicationLifecycleEvent;

public class MyListener extends ApplicationLifecycleListener {

    public void preStart(ApplicationLifecycleEvent evt) {
        // Load properties files
    }
}
See: http://download.oracle.com/docs/cd/E13222_01/wls/docs90/programming/lifecycle.html
What would happen if the server is already running? Would postStart be a viable solution?
Can anyone think of any alternative ways that are better?
It really depends on how often you want the properties to be reloaded. One approach I have taken is to have a properties file wrapper (singleton) that has a configurable parameter that defines how often the files should be reloaded. I would then always read properties through that wrapper, and it would reload the properties every 15 minutes (similar to Log4J's ConfigureAndWatch). That way, if I wanted to, I could change properties without changing the state of a deployed application.
This also allows you to load properties from a database, instead of a file. That way you can have a level of confidence that properties are consistent across the nodes in a cluster and it reduces complexity associated with managing a config file for each node.
I prefer that over tying it to a lifecycle event. If you weren't ever going to change them, then make them static constants somewhere :)
Here is an example implementation to give you an idea:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.*;

/**
 * User: jeffrey.a.west
 * Date: Jul 1, 2011
 * Time: 8:43:55 AM
 */
public class ReloadingProperties
{
    private final String lockObject = "LockMe";
    private long lastLoadTime = 0;
    private long reloadInterval;
    private String filePath;
    private Properties properties;

    private static final Map<String, ReloadingProperties> instanceMap;
    private static final long DEFAULT_RELOAD_INTERVAL = 1000 * 60 * 5;

    public static void main(String[] args)
    {
        ReloadingProperties props = ReloadingProperties.getInstance("myProperties.properties");
        System.out.println(props.getProperty("example"));
        try
        {
            Thread.sleep(6000);
        }
        catch (InterruptedException e)
        {
            e.printStackTrace();
        }
        System.out.println(props.getProperty("example"));
    }

    static
    {
        instanceMap = new HashMap(31);
    }

    public static ReloadingProperties getInstance(String filePath)
    {
        ReloadingProperties instance = instanceMap.get(filePath);
        if (instance == null)
        {
            instance = new ReloadingProperties(filePath, DEFAULT_RELOAD_INTERVAL);
            synchronized (instanceMap)
            {
                instanceMap.put(filePath, instance);
            }
        }
        return instance;
    }

    private ReloadingProperties(String filePath, long reloadInterval)
    {
        this.reloadInterval = reloadInterval;
        this.filePath = filePath;
    }

    private void checkRefresh()
    {
        long currentTime = System.currentTimeMillis();
        long sinceLastLoad = currentTime - lastLoadTime;
        if (properties == null || sinceLastLoad > reloadInterval)
        {
            System.out.println("Reloading!");
            lastLoadTime = System.currentTimeMillis();
            Properties newProperties = new Properties();
            FileInputStream fileIn = null;
            synchronized (lockObject)
            {
                try
                {
                    fileIn = new FileInputStream(filePath);
                    newProperties.load(fileIn);
                }
                catch (FileNotFoundException e)
                {
                    e.printStackTrace();
                }
                catch (IOException e)
                {
                    e.printStackTrace();
                }
                finally
                {
                    if (fileIn != null)
                    {
                        try
                        {
                            fileIn.close();
                        }
                        catch (IOException e)
                        {
                            e.printStackTrace();
                        }
                    }
                }
                properties = newProperties;
            }
        }
    }

    public String getProperty(String key, String defaultValue)
    {
        checkRefresh();
        return properties.getProperty(key, defaultValue);
    }

    public String getProperty(String key)
    {
        checkRefresh();
        return properties.getProperty(key);
    }
}
Figured it out...
See the corresponding / related post on Stack Overflow.