I have the following RFC structure:
<xds:complexType name="RFC_FUNCTION_NAME">
  <xds:sequence>
    <xds:element name="FIELD1" minOccurs="0">
      <xds:simpleType>
        <xds:restriction base="xds:string">
          <xds:maxLength value="5"/>
        </xds:restriction>
      </xds:simpleType>
    </xds:element>
    <xds:element name="FIELD2" minOccurs="0">
      <xds:simpleType>
        <xds:restriction base="xds:string">
          <xds:maxLength value="5"/>
        </xds:restriction>
      </xds:simpleType>
    </xds:element>
  </xds:sequence>
</xds:complexType>
I would like to know how to send data to SAP with this specific RFC.
How am I supposed to call this RFC to send, for example, "Hello" and "world"?
Thanks a lot
I need to send data to a SAP system from a non-SAP system that exposes the ability to "Call RFC".
The specifications of the RFC to call is like the one that I've posted here.
All I am supposed to achieve is updating fields in a specific record.
So I imagine FIELD1 will identify my record, and FIELD2 will contain the value to update.
If my question still makes no sense, can you point me towards some relevant topics?
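For reference, my guess from the schema is that the request payload itself would look something like this (just an assumption based on the XSD; note that both "Hello" and "world" fit the maxLength of 5):
<RFC_FUNCTION_NAME>
  <FIELD1>Hello</FIELD1>
  <FIELD2>world</FIELD2>
</RFC_FUNCTION_NAME>
How that payload gets wrapped and transmitted would depend on the "Call RFC" facility of the non-SAP system.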
So pretty much I want to call
/sendAllUsersAnEmail
which will call the DSS and do something along the lines of SELECT user_id FROM users WHERE status = 'PENDING'
Here is the issue: how can I get the ESB to loop through the results (or can I get the DSS to call an API directly?) and make a call to /sendEmail/{user_id} for each user? Or is this not possible, and do I need to return the results to an outside language and call the ESB again for each result?
If I understand correctly, what you need is something like this:
You have a table inside your system DB with the user_id of each user pending some action. You need to consult this table, get the list of user_ids, and for every entry in the list make a call to a RESTful service, passing the user_id.
So my idea is:
Use a data service to obtain the user_id list.
Create a proxy service that calls this data service in one sequence (seq1) and receives the result in a second sequence (seq2).
In seq2, use the Iterate mediator to split the message into parts and process them asynchronously, as in this sample: https://docs.wso2.com/display/ESB481/Sample+400%3A+Message+Splitting+and+Aggregating+the+Responses
An example:
<iterate expression="//m0:getQuote/m0:request" preservePayload="true"
attachPath="//m0:getQuote"
xmlns:m0="http://services.samples">
<target>
<sequence>
<send>
<endpoint>
<address
uri="http://localhost:9000/services/SimpleStockQuoteService"/>
</endpoint>
</send>
</sequence>
</target>
</iterate>
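Adapted to your case, the iterate expression would split on whatever element wraps each user_id in the data service response, and the endpoint would call your REST API. A rough sketch (the element names, namespace, and URL here are assumptions; adjust them to your actual payload):
<iterate expression="//ds:Users/ds:User" xmlns:ds="http://ws.wso2.org/dataservice">
  <target>
    <sequence>
      <!-- capture the current split part's user_id for the URI template -->
      <property name="uri.var.userId" expression="//ds:user_id"
                xmlns:ds="http://ws.wso2.org/dataservice"/>
      <send>
        <endpoint>
          <http method="post" uri-template="http://localhost:8280/sendEmail/{uri.var.userId}"/>
        </endpoint>
      </send>
    </sequence>
  </target>
</iterate>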
I hope this helps you.
Regards.
I'm using Apache Camel 2.13.1 to poll a database table which will have upwards of 300k rows in it. I'm looking to use the Idempotent Consumer EIP to filter rows that have already been processed.
I'm wondering though, whether the implementation is really scalable or not. My camel context is:-
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route id="main">
<from
uri="sql:select * from transactions?dataSource=myDataSource&consumer.delay=10000&consumer.useIterator=true" />
<transacted ref="PROPAGATION_REQUIRED" />
<enrich uri="direct:invokeIdempotentTransactions" />
<!-- Any processors here will be executed on all messages -->
</route>
<route id="idempotentTransactions">
<from uri="direct:invokeIdempotentTransactions" />
<idempotentConsumer
messageIdRepositoryRef="jdbcIdempotentRepository">
<ognl>#{request.body.ID}</ognl>
<!-- Anything here will only be executed for non-duplicates -->
<log message="non-duplicate" />
<to uri="stream:out" />
</idempotentConsumer>
</route>
</camelContext>
It would seem that the full 300k rows are going to be processed every 10 seconds (via the consumer.delay parameter), which seems very inefficient. I would expect some sort of feedback loop as part of the pattern, so that the query feeding the filter could take advantage of the set of rows already processed.
However, the messageid column in the CAMEL_MESSAGEPROCESSED table has the pattern of
{1908988=null}
where 1908988 is the request.body.ID I've set the EIP to key on, so this doesn't make it easy to incorporate into my query.
Is there a better way of using the CAMEL_MESSAGEPROCESSED table as a feedback loop into my select statement so that the SQL server is performing most of the load?
Update:
So, I've since found out that it was my ognl code that was causing the odd message id column value. Changing it to
<el>${in.body.ID}</el>
has fixed it. So, now that I have a usable messageId column, I can now change my 'from' SQL query to
select * from transactions tr where tr.ID NOT IN (select cmp.messageid from CAMEL_MESSAGEPROCESSED cmp where cmp.processor = 'transactionProcessor')
but I still think I'm corrupting the Idempotent Consumer EIP.
Does anyone else do this? Any reason not to?
Yes, it is. But you need to use scalable storage for holding the set of already-processed messages. You can use either Hazelcast - http://camel.apache.org/hazelcast-idempotent-repository-tutorial.html - or Infinispan - http://java.dzone.com/articles/clustered-idempotent-consumer - depending on which solution is already in your stack. Of course, a JDBC repository would work too, but only if it meets your performance criteria.
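As a sketch, with camel-hazelcast on the classpath, the JDBC repository in your route could be swapped for a Hazelcast-backed one like this (the repository name "processedTransactions" is arbitrary):
<bean id="hazelcastInstance" class="com.hazelcast.core.Hazelcast"
      factory-method="newHazelcastInstance"/>
<bean id="hazelcastIdempotentRepository"
      class="org.apache.camel.processor.idempotent.hazelcast.HazelcastIdempotentRepository">
  <!-- shared Hazelcast instance plus a name for the distributed map of seen IDs -->
  <constructor-arg ref="hazelcastInstance"/>
  <constructor-arg value="processedTransactions"/>
</bean>
The route would then reference it via messageIdRepositoryRef="hazelcastIdempotentRepository" instead of the JDBC repository.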
The data that I'm getting only contains the SKU numbers. I am trying to figure out how I can link these SKU numbers to the product variants in Shopify without the actual product id number.
Example data:
<Inventory ItemNumber="100B3001-B-01">
<ItemStatus Status="Avail" Quantity="0" />
</Inventory>
<Inventory ItemNumber="100B3001-B-02">
<ItemStatus Status="Avail" Quantity="0" />
</Inventory>
<Inventory ItemNumber="100B3001-B-03">
<ItemStatus Status="Avail" Quantity="-1" />
<ItemStatus Status="Alloc" Quantity="1" />
</Inventory>
<Inventory ItemNumber="100B3001-B-04">
<ItemStatus Status="Avail" Quantity="-1" />
<ItemStatus Status="Alloc" Quantity="1" />
</Inventory>
Here's a delightful, condescending discussion from Shopify employees in 2011 asking why you can't just store the Shopify ID everywhere. The "stock-keeping unit" is a universal systems integration point and, in every system I've seen, each SKU uniquely maps to a product because words have meaning, but apparently not at Shopify.
Three years later, you seem to have two options.
One is to create a Fulfillment Service and provide a URL where Shopify will call you asking for stock levels on a SKU; this is probably the simplest solution, provided you have a Web server sitting somewhere where you can expose such a callback (see the sketch after these options).
The second is to periodically page through all of the Products and store a mapping of the Shopify ID to a SKU somewhere, consulting your map when you need to do an update. Because most of our integrations are cron jobs and I'd like to keep them that way, I periodically ask for the products that have changed since the last run, and then update my mapping.
As David Lazar points out in his comment, the ability to find a product based on its SKU is not currently supported in the Shopify API.
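For the first option, if I remember the contract correctly, Shopify polls your callback URL with a request like this (the host is a placeholder, and the SKU is taken from the example data above):
GET https://yourapp.example.com/fetch_stock.json?sku=100B3001-B-01&shop=yourshop.myshopify.com
and expects a JSON body mapping the SKU to a quantity:
{"100B3001-B-01": 0}
Check the current Fulfillment Service documentation before relying on this.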
EDIT: This is an unreliable option that I once used as a last resort. I wouldn't recommend using it, but I will leave it here in case someone finds it helpful.
You can use this API endpoint:
/admin/products/search.json?query=sku:abc123
I have only used it in the browser, though, and I can't find any documentation for it, so it may stop working at any time.
You can use
*.myshopify.com/search?view=json&q=sku:SKUID
A GraphQL query on Shopify works for me:
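Something along these lines against the Admin GraphQL API (a sketch, using a SKU from the example data above; the exact fields available depend on your API version):
{
  productVariants(first: 1, query: "sku:100B3001-B-01") {
    edges {
      node {
        id
        sku
      }
    }
  }
}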
I am developing a DFS application (in C#) that imports a document into Documentum as dm_document. A document may be in any format – DOC, PDF, whatever. Thus, when I create a document, I have to specify the corresponding format (it will be put into a_content_type): "gif", "msw8", etc.
How can I solve this task? I have looked through DFS_66_reference.pdf and the DFS-SDK Help and do not see a simple solution yet. Can you give me some advice?
I usually do what David suggests for common formats that I am expecting to encounter. This has the added benefit of giving you a reference to look at while debugging your application. For other formats, you can make the following query.
DQL:
SELECT name from dm_format WHERE dos_extension = lower('<extension>')
Note that this is not always reliable, because it could return multiple results for an extension (XLS is a good example), so you should decide how to handle that case. You may have to ask the user.
I would recommend caching the responses in your application so you are not making this query needlessly. As David said above, these values do not change unless you change them.
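For example, for .xls files (format names below are typical defaults but vary per repository):
SELECT name FROM dm_format WHERE dos_extension = lower('XLS')
On a typical repository this can return more than one row (excel8book, as in the Webtop mapping below, plus any other formats registered for that extension), which is why you need a disambiguation strategy.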
Are you asking how to match the DOS extension to a Documentum format?
If yes, the simplest approach is to hardcode the mapping directly in your application.
In Webtop, the file wdk/app.xml contains the mapping it uses.
Here is what I have in mine:
<format extension="txt" name="crtext"/>
<format extension="xls" name="excel8book"/>
<format extension="doc" name="msw8"/>
<format extension="ppt" name="ppt8"/>
<format extension="vsd" name="vsd"/>
<format extension="zip" name="zip"/>
<format extension="wpd" name="wp8"/>
<format extension="psd" name="photoshop6"/>
<format extension="au" name="audio"/>
<format extension="jpeg" name="jpeg"/>
<format extension="jpg" name="jpeg"/>
<format extension="html" name="html"/>
<format extension="htm" name="html"/>
<format extension="ai" name="illustrator10"/>
I'm doing a research project for the summer and I've got to get some data from Wikipedia, store it and then do some analysis on it. I'm using the Wikipedia API to gather the data and I've got that down pretty well.
My question is in regard to the alllinks option in the API doc here
After reading the description, both there and in the API itself (it's a bit further down and I can't link directly to the section), I think I understand what it's supposed to return. However, when I ran a query it gave me back something I didn't expect.
Here's the query I ran:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=google&rvprop=ids|timestamp|user|comment|content&rvlimit=1&list=alllinks&alunique&allimit=40&format=xml
Which in essence says: Get the last revision of the Google page, include the id, timestamp, user, comment and content of each revision, and return it in XML format.
The alllinks option (I thought) should give me back a list of Wikipedia pages which point to the Google page (in this case, the first 40 unique ones).
I'm not sure what the policy is on swears, but this is the result I got back exactly:
<?xml version="1.0"?>
<api>
<query><normalized>
<n from="google" to="Google" />
</normalized>
<pages>
<page pageid="1092923" ns="0" title="Google">
<revisions>
<rev revid="366826294" parentid="366673948" user="Citation bot" timestamp="2010-06-08T17:18:31Z" comment="Citations: [161]Tweaked: url. [[User:Mono|Mono]]" xml:space="preserve">
<!-- The page content, I've replaced this cos its not of interest -->
</rev>
</revisions>
</page>
</pages>
<alllinks>
<!-- offensive content removed -->
</alllinks>
</query>
<query-continue>
<revisions rvstartid="366673948" />
<alllinks alfrom="!2009" />
</query-continue>
</api>
The <alllinks> part is just a load of random gobbledygook and offensive comments. Not nearly what I thought I'd get. I've done a fair bit of searching, but I can't seem to find a direct answer to my question.
What should the list=alllinks option return?
Why am I getting this crap in there?
You don't want a list; a list is something that iterates over all pages. With alllinks you are simply "enumerating all links that point to a given namespace" across the whole wiki, which is why your results have nothing to do with Google.
You want a property associated with the Google page, so you need prop=links instead of the alllinks crap.
So your query becomes:
http://en.wikipedia.org/w/api.php?action=query&prop=revisions|links&titles=google&rvprop=ids|timestamp|user|comment|content&rvlimit=1&format=xml