WSO2 ESB: dynamically route to endpoints based on results from DSS

So pretty much I want to call
/sendAllUsersAnEmail
which will call the DSS and do something along the lines of SELECT user_id FROM users WHERE status = 'PENDING'.
Here is the issue: how can I get the ESB to loop through the results (or can I get the DSS to call an API directly?) and make a call to /sendEmail/{user_id} for each user? Or is this not possible, and do I need to return the results to an outside language and call the ESB again for each result?

If I understand correctly, you need something like this:
You have a table in your system's DB with the user_id of users pending some action, and you need to query this table, get the list of user_ids, and for every entry in the list make a call to a RESTful service, passing the user_id.
So my idea is:
Use a data service to obtain the user_id list.
Create a proxy service that calls this data service in seq1 and receives the result in seq2.
In seq2, use the Iterate mediator to split the message into parts and process them asynchronously, as in this sample: https://docs.wso2.com/display/ESB481/Sample+400%3A+Message+Splitting+and+Aggregating+the+Responses
An example:
<iterate expression="//m0:getQuote/m0:request" preservePayload="true"
         attachPath="//m0:getQuote"
         xmlns:m0="http://services.samples">
    <target>
        <sequence>
            <send>
                <endpoint>
                    <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
                </endpoint>
            </send>
        </sequence>
    </target>
</iterate>
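For your case, the same pattern applied to the DSS response would look roughly like the sketch below. Note that the sequence names, the data service endpoint, and the ds:user/ds:user_id element names are assumptions about your setup; adjust them to match your data service's actual response:

```xml
<proxy name="SendAllUsersAnEmail" transports="http https">
    <target>
        <inSequence>
            <!-- seq1: call the data service that runs the SELECT -->
            <send receive="seq2">
                <endpoint>
                    <address uri="http://localhost:9763/services/UsersDataService"/>
                </endpoint>
            </send>
        </inSequence>
    </target>
</proxy>

<sequence name="seq2" xmlns:ds="http://ws.wso2.org/dataservice">
    <!-- seq2: split the DSS response and call the REST API once per user -->
    <iterate expression="//ds:users/ds:user">
        <target>
            <sequence>
                <!-- build the per-user URL from the user_id in each split message -->
                <header name="To"
                        expression="fn:concat('http://localhost:8280/sendEmail/', //ds:user_id/text())"/>
                <send>
                    <endpoint>
                        <default/>
                    </endpoint>
                </send>
            </sequence>
        </target>
    </iterate>
</sequence>
```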
I hope this helps you.
Regards.

Related

SAP - Send values To RFC

I have the following RFC structure:
<xsd:complexType name="RFC_FUNCTION_NAME">
  <xsd:sequence>
    <xsd:element name="FIELD1" minOccurs="0">
      <xsd:simpleType>
        <xsd:restriction base="xsd:string">
          <xsd:maxLength value="5"/>
        </xsd:restriction>
      </xsd:simpleType>
    </xsd:element>
    <xsd:element name="FIELD2" minOccurs="0">
      <xsd:simpleType>
        <xsd:restriction base="xsd:string">
          <xsd:maxLength value="5"/>
        </xsd:restriction>
      </xsd:simpleType>
    </xsd:element>
  </xsd:sequence>
</xsd:complexType>
I would like to know how to send data to SAP with this specific RFC.
How am I supposed to call this RFC to send, as an example, "Hello" and "World"?
Thanks a lot.
I need to send data to a SAP system from a non-SAP system that exposes the ability to "Call RFC".
The specification of the RFC to call is like the one I've posted here.
All I am supposed to achieve is to update fields in a specific record.
So I imagine FIELD1 will identify my record, and FIELD2 will contain the value to update.
If my question still makes no sense, can you point me towards some relevant topics?
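For what it's worth, a connector that maps XML to RFC parameters (for example the WSO2 ESB SAP adapter, or a JCo-based bridge) would typically expect a request payload shaped by that schema. Assuming FIELD1 and FIELD2 are plain import parameters, the payload for sending "Hello" and "World" (both exactly 5 characters, so they satisfy the maxLength restriction) might look like:

```xml
<RFC_FUNCTION_NAME>
    <FIELD1>Hello</FIELD1>
    <FIELD2>World</FIELD2>
</RFC_FUNCTION_NAME>
```

How this payload is transported depends entirely on the connector in use, so treat it as a sketch of the shape, not a definitive call.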

WSO2 ESB access context property in the response sequence of an HTTP endpoint

I'm calling an HTTP endpoint and getting the response in a sequence. The response is logged in seq_sla_resp.
<send receive="seq_sla_resp">
<endpoint key="gov:EDI/SLA/endpoints/edi_sla_payment_ep.xml" />
</send>
Inside this response sequence I'm unable to get a property which I set previously during the call (in the main proxy).
<property expression="//m1:sla_row/m1:tran_id/text()"
name="tran_id" scope="default" type="STRING"
xmlns:m1="http://ws.wso2.org/dataservice" />
When I try to log the property in seq_sla_resp like this:
<log>
<property expression="$tran_id" name="tran_id" xmlns:m0="http://ws.wso2.org/dataservice"/>
</log>
Following is the error.
SynapseXPath Evaluation of the XPath expression $tran_id resulted in an error
org.jaxen.UnresolvableException: Variable tran_id
How can I get the context value in the response sequence?
In the documentation it says the default scope has the longest life span for a property.
Any help is very much appreciated.
I think you will find that your expression does not work in the inSequence either. You should use either expression="$ctx:tran_id" or expression="get-property('tran_id')".
Please note that WSO2 recommends using $ctx instead of get-property when the scope is default: the get-property method searches the registry if the value is not available in the message context, which affects performance.
In your case, you can use:
<property expression="$ctx:tran_id" name="tran_id" scope="default" xmlns:m0="http://ws.wso2.org/dataservice"/>
Thanks
Kranthi
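Putting the answer together, a minimal sketch (reusing the endpoint key and sequence name from the question) would be:

```xml
<!-- main proxy inSequence: set the property (default scope) before the send -->
<property name="tran_id" scope="default" type="STRING"
          expression="//m1:sla_row/m1:tran_id/text()"
          xmlns:m1="http://ws.wso2.org/dataservice"/>
<send receive="seq_sla_resp">
    <endpoint key="gov:EDI/SLA/endpoints/edi_sla_payment_ep.xml"/>
</send>

<!-- seq_sla_resp: read the same property back with the $ctx prefix -->
<sequence name="seq_sla_resp">
    <log>
        <property name="tran_id" expression="$ctx:tran_id"/>
    </log>
</sequence>
```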

WCF returning nil value and actual value in Data Contract

I have a WCF service that calls another WCF service to get some information from one of our systems, and it appears that the values being returned contained some nil values. However, looking at the returned XML, it appeared that it contained two entries for the same DataMember: one with a nil value and one with the actual value I was expecting.
I see something similar to the following in the returned XML, where the DataMembers have nil values:
<b:AccountNumber i:nil="true" />
<b:Created>0001-01-01T00:00:00</b:Created>
<b:CreatedBy i:nil="true" />
<b:EmailAddress i:nil="true" />
<b:GivenNames i:nil="true" />
and then in the same document but further down, I see the following where the same Data Members have the values I expect:
<b:Id>16996172</b:Id>
<b:Created>2007-07-16T16:32:48.789755</b:Created>
<b:CreatedBy>SYSTEM</b:CreatedBy>
<b:RowStatus>None</b:RowStatus>
<b:AccountNumber>1234567</b:AccountNumber>
<b:EmailAddress>email@test.com.au</b:EmailAddress>
<b:GivenNames>TEST NAME</b:GivenNames>
Not all of the DataMembers being returned are duplicated like this; it seems that a few values are returned as nil and then all of the correct values are returned.
Has anyone seen something like this before, or could hazard a guess as to what could be causing it?
It turned out the problem was caused by the WSDL and Data Contracts not matching the web services themselves.
Running svcutil.exe against the running web services (their live metadata endpoints) rather than the WSDL files provided fixed the problem.

Is Apache Camel's idempotent consumer pattern scalable?

I'm using Apache Camel 2.13.1 to poll a database table which will have upwards of 300k rows in it. I'm looking to use the Idempotent Consumer EIP to filter rows that have already been processed.
I'm wondering, though, whether the implementation is really scalable. My Camel context is:
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="main">
        <from uri="sql:select * from transactions?dataSource=myDataSource&amp;consumer.delay=10000&amp;consumer.useIterator=true" />
        <transacted ref="PROPAGATION_REQUIRED" />
        <enrich uri="direct:invokeIdempotentTransactions" />
        <!-- Any processors here will be executed on all messages -->
    </route>
    <route id="idempotentTransactions">
        <from uri="direct:invokeIdempotentTransactions" />
        <idempotentConsumer messageIdRepositoryRef="jdbcIdempotentRepository">
            <ognl>#{request.body.ID}</ognl>
            <!-- Anything here will only be executed for non-duplicates -->
            <log message="non-duplicate" />
            <to uri="stream:out" />
        </idempotentConsumer>
    </route>
</camelContext>
It would seem that all 300k rows are going to be processed every 10 seconds (via the consumer.delay parameter), which seems very inefficient. I would expect some sort of feedback loop as part of the pattern, so that the query feeding the filter could take advantage of the set of rows already processed.
However, the messageid column in the CAMEL_MESSAGEPROCESSED table has values of the form
{1908988=null}
where 1908988 is the request.body.ID I've set the EIP to key on, so this doesn't make it easy to incorporate into my query.
Is there a better way of using the CAMEL_MESSAGEPROCESSED table as a feedback loop into my select statement so that the SQL server is performing most of the load?
Update:
So, I've since found out that it was my OGNL expression that was causing the odd messageid column value. Changing it to
<el>${in.body.ID}</el>
has fixed it. Now that I have a usable messageid column, I can change my 'from' SQL query to
select * from transactions tr where tr.ID NOT IN (select cmp.messageid from CAMEL_MESSAGEPROCESSED cmp where cmp.processor = 'transactionProcessor')
but I still think I'm misusing the Idempotent Consumer EIP.
Does anyone else do this? Any reason not to?
Yes, it is. But you need to use scalable storage for the set of already-processed messages. You can use either Hazelcast - http://camel.apache.org/hazelcast-idempotent-repository-tutorial.html - or Infinispan - http://java.dzone.com/articles/clustered-idempotent-consumer - depending on which solution is already in your stack. Of course, a JDBC repository would also work, but only if it meets your performance criteria.
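As a sketch, wiring a Hazelcast-backed repository in the same Spring XML (bean names are assumptions; this requires the camel-hazelcast component on the classpath) might look like:

```xml
<!-- shared Hazelcast instance and idempotent repository (camel-hazelcast) -->
<bean id="hazelcastInstance" class="com.hazelcast.core.Hazelcast"
      factory-method="newHazelcastInstance"/>
<bean id="hazelcastIdempotentRepository"
      class="org.apache.camel.processor.idempotent.hazelcast.HazelcastIdempotentRepository">
    <constructor-arg ref="hazelcastInstance"/>
    <constructor-arg value="processedTransactions"/>
</bean>
```

You would then point the route at it with messageIdRepositoryRef="hazelcastIdempotentRepository" instead of the JDBC repository.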

SPARQL Query questions in WSO2 DSS

I've got a question about using WSO2 DSS with SPARQL queries against Fedora Commons. At the moment I'm running WSO2 DSS on my desktop machine and accessing it as a localhost service. My SPARQL endpoint is a remote server running an open-source application called Fedora Commons; it requires basic authentication to perform a query and accepts input via GET or POST requests, with the content of the query placed in the "query" variable. For the sake of this example, we'll say that the endpoint URL I'm attempting to query looks like this:
http://fedoraAdmin:fedoraPW@fedora-server.yoyodyne.com:8080/fedora/risearch?lang=sparql
The query I'm attempting to run works in the Fedora Resource Index Query Service test page and looks like this:
PREFIX fedora: <info:fedora/fedora-system:def/relations-external#>
SELECT ?pid
FROM <#ri>
WHERE {
?pid fedora:isMemberOfCollection <info:fedora/islandora:root>
}
At some point I'd like to replace the identifier "islandora:root" with a query param, but that's not important at the moment. The results of the above query look something like this:
<sparql>
    <head>
        <variable name="pid"/>
    </head>
    <results>
        <result>
            <pid uri="info:fedora/islandora:sp_basic_image_collection"/>
        </result>
        <result>
            <pid uri="info:fedora/islandora:sp_large_image_collection"/>
        </result>
        <result>
            <pid uri="info:fedora/islandora:70"/>
        </result>
        <result>
            <pid uri="info:fedora/rick:1"/>
        </result>
        <result>
            <pid uri="info:fedora/islandora:419"/>
        </result>
        <result>
            <pid uri="info:fedora/islandora:420"/>
        </result>
    </results>
</sparql>
Given the above situation and data output, I have created a data service in WSO2 DSS that resembles the following:
<data name="FedoraSPARQL">
    <config id="FedoraDEVServer">
        <property name="rdf_datasource">http://fedoraAdmin:fedoraPW@fedora-server.yoyodyne.com:8080/fedora/risearch?lang=sparql</property>
    </config>
    <query id="getMemberOfCollection" useConfig="FedoraDEVServer">
        <sparql><![CDATA[PREFIX fedora: <info:fedora/fedora-system:def/relations-external#> SELECT ?pid FROM <#ri> WHERE {?pid fedora:isMemberOfCollection <info:fedora/islandora:root>}]]></sparql>
        <result element="results" rowName="result">
            <element column="pid" name="pid" xsdType="string"/>
        </result>
        <param name="targetPID" sqlType="STRING"/>
    </query>
    <operation name="getMemberOfCollection">
        <description>Returns the collection objects under islandora:root</description>
        <call-query href="getMemberOfCollection">
            <with-param name="targetPID" query-param="targetPID"/>
        </call-query>
    </operation>
</data>
Currently I'm getting a 401 Unauthorized, likely due either to a typo or formatting error in the connection string, or to the fact that WSO2 DSS may not be able to connect to a SPARQL endpoint requiring authentication. My question is: how do I make the above data source work given my current setup? If I can't get authentication working with DSS, I do have the option of bypassing authentication completely if WSO2 DSS were running on another server. Assuming this can be made to work, what change do I need to make to use the "targetPID" query param in place of the "islandora:root" string currently used?
Ah, I think I see the problem with this approach. Wow, I totally misunderstood what an RDF data source operation is supposed to do. Funny how more coffee tends to make things a bit clearer.
Going through the Edit Data Source configuration, I see that the property named "rdf_datasource" in the configuration XML is listed as "RDF File Location" in the data source editor wizard. That makes me think that DSS isn't sending any query to the Fedora server as I had hoped, but is instead executing the query locally on a pre-existing RDF file it expects to download from the Fedora server.
I guess I need to restructure this as a Web Data Source to get the result I'm expecting.
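If it helps anyone, a Web Data Source in DSS is configured with a Web-Harvest script, so a very rough sketch of the restructured config might look something like the following. The exact Web-Harvest processors, the format parameter, and whether credentials can be embedded in the URL are all assumptions to verify against the DSS and Fedora risearch documentation:

```xml
<config id="FedoraWebConfig">
    <property name="web_harvest_config">
        <config>
            <!-- fetch the SPARQL results over HTTP; credentials, URL, and query are placeholders -->
            <var-def name="sparqlResult">
                <http url="http://fedoraAdmin:fedoraPW@fedora-server.yoyodyne.com:8080/fedora/risearch?lang=sparql&amp;format=Sparql&amp;query=..."/>
            </var-def>
        </config>
    </property>
</config>
```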