Walmart Developer API connection

I have Celigo and I am trying to connect to the Walmart API manually. The Walmart API wants an epoch timestamp and an authentication key, which requires me to run a jar file, and I can get those two values.
The timestamp and authentication key change every time I run the jar file, so the connection only runs for about 5 minutes before the connection is lost. How can I make it so that it doesn't lose the connection to Walmart?

As you are using the jar to generate those two values, it will return WM_SEC.AUTH_SIGNATURE and WM_SEC.TIMESTAMP, as per the documentation. These need to be regenerated using the jar every time you make an API call (even if you are repeating the same call).
It works for 5 minutes because the WM_SEC.TIMESTAMP is only valid for 5 minutes. So, as mentioned above, generate a fresh WM_SEC.AUTH_SIGNATURE and WM_SEC.TIMESTAMP with the jar for each request and it will work fine for you.
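A rough sketch of what this looks like in practice, e.g. from Python (the jar name, its arguments, and its output format are assumptions here; check the Walmart docs for the exact invocation your signing jar expects):

    import subprocess
    import requests

    def generate_signature(url, consumer_id, private_key_path, method):
        # Hypothetical jar invocation; adjust to the actual arguments
        # documented for your version of the Walmart signing jar.
        out = subprocess.check_output([
            "java", "-jar", "DigitalSignatureUtil.jar",
            url, consumer_id, private_key_path, method,
        ]).decode()
        # Assumes the jar prints the signature and the timestamp on two lines.
        signature, timestamp = out.strip().splitlines()[:2]
        return signature, timestamp

    def call_walmart(url, consumer_id, private_key_path, method="GET"):
        # Regenerate WM_SEC.AUTH_SIGNATURE and WM_SEC.TIMESTAMP for EVERY
        # call; a timestamp is only valid for about 5 minutes.
        signature, timestamp = generate_signature(url, consumer_id,
                                                  private_key_path, method)
        headers = {
            "WM_SEC.AUTH_SIGNATURE": signature,
            "WM_SEC.TIMESTAMP": timestamp,
            "WM_CONSUMER.ID": consumer_id,
            "WM_SVC.NAME": "Walmart Marketplace",
        }
        return requests.request(method, url, headers=headers)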

Well, the timestamps and signatures you generate with the jar file will always change.
You said the connection ends up running for about 5 minutes, but I think it is your script that was running for 5 minutes. The Walmart API does not allow you to keep a live connection open: as soon as you send a request, the Walmart API responds within a few seconds.
For bulk item feeds of up to 9.5 MB (a feed of about 5K items), it takes 2-3 seconds at most.
For inventory feeds of up to 5 MB (about 2K items), it takes 2 seconds at most.

No, you cannot. This is basically the entire point of the uniquely generated connection key: it prevents connections from being left in an open state that would cause unneeded server load.
Your question doesn't identify what you are trying to accomplish, nor why you would want to maintain an open connection to Walmart. After looking at Celigo's website, I'm still not sure what you are trying to do, but based on the limited information, it appears you are trying to use the Walmart API in a way it is not intended to be used. Connections to the Walmart API should be made on a request-by-request basis and do not consist of a live connection.
The Walmart API documentation indicates that you should use a uniquely generated authentication key for every request made to the API, so the fact that you are able to keep the connection alive for a full 5 minutes is already beyond what you are supposed to be doing.
What are you trying to accomplish?

Related

What is the most performant way to submit a POST to an API

A little background on what I need to accomplish. I am a developer of a cloud-based SaaS application. In one instance, my clients use my application to log the receipt of goods as they come across a conveyor line. Directly before the PC where they are logged into my app, there is another Windows PC that is collecting the moisture and weight of each item from instruments. I (personally, not my app) have full access to this PC and its database. I know how I am going to grab the latest record from the DB via a stored procedure/SQLCMD.
On the receiving end, I have an API endpoint that needs to receive the ID, Weight, Moisture, and ClientID. This all needs to happen in less than ~2 seconds, since they are waiting to add this record to my software's database.
What is the most performant way for me to stand up a process that triggers retrieving the record from the DB and then calls the API? I also want to update the record to flag success on a 200 response. My thought was to script all of this in a batch file and use cURL to make the API call, then call this batch file from a task in Windows. But I feel like there may be a better way with fewer moving parts.
P.S. I am not looking for code solutions per se, just direction or tools that will help. Also, I am using the AWS stack to host my application.
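Roughly what I have in mind, in case it clarifies the question (sketch only; the connection string, stored procedure, and endpoint are placeholders):

    import pyodbc    # assumes the instrument PC's DB speaks ODBC
    import requests

    API_URL = "https://api.example.com/receipts"  # placeholder endpoint

    def push_latest_record():
        conn = pyodbc.connect("DSN=InstrumentDB")  # placeholder DSN
        cur = conn.cursor()
        # Hypothetical stored procedure returning the newest unsent reading.
        row = cur.execute("EXEC dbo.GetLatestUnsentReading").fetchone()
        if row is None:
            return
        payload = {"ID": row.ID, "Weight": row.Weight,
                   "Moisture": row.Moisture, "ClientID": row.ClientID}
        resp = requests.post(API_URL, json=payload, timeout=2)
        if resp.status_code == 200:
            # Flag the record as successfully sent.
            cur.execute("UPDATE dbo.Readings SET Sent = 1 WHERE ID = ?", row.ID)
            conn.commit()

    if __name__ == "__main__":
        push_latest_record()  # triggered by a Windows scheduled task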
The most performant way is to use AWS Amplify; it's a ready-made AWS framework and development environment that can connect your existing DB to a REST API easily.
You can check their documentation on how to build it:
https://docs.amplify.aws/lib/restapi/getting-started/q/platform/js

How to split a long timeout API call into smaller ones

I have an API js file which gets called by a cron job via a curl GET.
This js file basically makes a query to an external API via await fetch and saves some data from the response into MongoDB via await ... updateOne. The problem is that this happens in a loop for about 500 different values and takes more than 10 seconds to finish, whereas my server's timeout limit for serverless functions is 10 seconds.
So how can I split it into multiple GET requests?
Isn't doing a for loop inside the API js file the same, since it would still count as a single operation?
Every time I google this with different keywords I only find unrelated results. Am I missing something? Maybe such a case is rare? I'm new to the whole cron job/serverless functions thing; if this is not the correct place to ask, please point me to where I should post it within Stack Exchange.
Two potential solutions:
The brute-force method would be to increase the timeout setting; you can do this via serverless.yml, either in the provider section or directly in the function definition. (The maximum timeout for AWS Lambda is 900 seconds, i.e. 15 minutes. Not relevant here, though, as you are on Vercel, where the timeout is 900 seconds on Enterprise and 60 seconds on Pro, but only 10 seconds on the free plan.)
Doing the for loop inside the Lambda function wouldn't change much. Instead, see if you can break the work down into multiple cron jobs which you can parameterise. E.g. imagine a cron job which goes through a staff list to do some processing on a daily basis. You could change your cron job to accept a range of letters which filters the staff list by last name. So instead of one cron job you would run four: A-F, G-M, N-S and T-Z. (In your case, try to find a parameter which splits the 500 values into equally sized buckets; a sketch follows below.)
As you get billed by duration and memory consumption with serverless (at least with AWS), it probably doesn't make a lot of sense to split it, so increasing the timeout setting might be the easier solution. But I don't know your full context, so this is just a guess.
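To illustrate the splitting idea with your 500 values (a framework-agnostic sketch; load_values and process are hypothetical stand-ins for your actual fetch and MongoDB update):

    def load_values():
        # Hypothetical: however you currently obtain the ~500 values.
        return list(range(500))

    def process(value):
        # Hypothetical: the external-API fetch + MongoDB updateOne per value.
        pass

    def handler(query):
        # Process one slice of the work so each invocation stays under the
        # 10-second timeout. `query` is assumed to carry `start` and `end`,
        # e.g. from GET /api/sync?start=0&end=125.
        values = load_values()
        start = int(query.get("start", 0))
        end = int(query.get("end", len(values)))
        for value in values[start:end]:
            process(value)
        return {"processed": end - start}

    # Conceptually, four cron entries instead of one:
    #   GET /api/sync?start=0&end=125
    #   GET /api/sync?start=125&end=250
    #   GET /api/sync?start=250&end=375
    #   GET /api/sync?start=375&end=500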

How to handle a temporarily unreachable online API

This is a more general question, so bear with my abstraction of the following problem.
I'm currently developing an application that interfaces with a remote server over a public API. The API in question provides mechanisms for fetching data based on a timestamp (e.g. "get me everything that changed since xxx"). Since the amount of data is quite high, I keep a local copy in a database and check for changes on the remote side every hour.
While this makes the application robust against network problems (remote server in maintenance, network outage, etc.) and enables employees to continue working with the application, there is one big gaping problem:
The API in question also offers write access, e.g. my application can instruct the remote server to create a new object. Currently I send the request via the API and, upon success, create the object in my local database too. It will eventually propagate via the hourly data fetch, where my application (ideally) sees that no changes need to be made to the local database.
Now, when the API is unreachable, I create the object in my database and cache the request until the API is reachable again (a simplified sketch of this queueing follows the list below). This has multiple problems:
If the request fails (due to errors that cannot be validated beforehand), I end up with an object in the database which shouldn't even exist. I could delete it, but it seems hard to explain to the user(s) ("something went wrong with the API, so we deleted that object again").
The problem especially cascades when dependent actions queue up, e.g. creating the object plus two more requests modifying it. When the initial create fails, so will the modifying requests (since the object does not exist on the remote side).
The worst case is deletion: when an object is deleted locally but will not be deleted on the remote side, I have no way of (easily) restoring it.
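For context, this is roughly how I queue the cached write requests (a simplified, outbox-style sketch; the storage and API client are placeholders):

    import json
    import sqlite3

    class ApiError(Exception):
        pass  # placeholder for whatever the real API client raises

    class PendingRequestQueue:
        # Outbox-style queue: write requests are persisted locally while
        # the remote API is down and replayed once it is reachable again.

        def __init__(self, db: sqlite3.Connection):
            self.db = db  # local database handle (placeholder)

        def enqueue(self, action, payload):
            # Persist the request so it survives restarts while the API is down.
            self.db.execute(
                "INSERT INTO pending_requests (action, payload) VALUES (?, ?)",
                (action, json.dumps(payload)),
            )
            self.db.commit()

        def replay(self, api):
            # Called periodically. Stops at the first failure so dependent
            # requests (create -> modify -> modify) keep their order; this is
            # exactly where the cascading-failure problem described above bites.
            rows = list(self.db.execute(
                "SELECT id, action, payload FROM pending_requests ORDER BY id"
            ))
            for req_id, action, payload in rows:
                try:
                    api.send(action, json.loads(payload))  # hypothetical client call
                except ApiError:
                    break
                self.db.execute("DELETE FROM pending_requests WHERE id = ?", (req_id,))
                self.db.commit()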
One might suggest never creating objects locally and letting them propagate only through the hourly data sync. This unfortunately is not an option: if the API is not accessible, it can stay that way for hours, and it is mandatory that employees can continue working with the application (which they cannot when said objects don't exist locally).
So bottom line:
How do you handle such a scenario, where the API might not be reachable, but certain requests must be cached locally and repeated when the API is reachable again? Especially, how do you handle cases where those requests unpredictably fail?

Web application hangs after multiple requests

The application is using Apache Server as a web server and Tomcat as an application server.
Operations/requests can be triggered from the UI, and these can take time to return from the server as they do some processing, like fetching data from the database and performing calculations on that data. The time depends on the amount of data in the database and the span of data being processed; it could be anywhere from 2 minutes to 30 minutes or an hour, based on the parameters.
Apart from this, there are some other calls which fetch a small amount of data from the database and return immediately.
Now, when I have multiple, say 4 or 5, of these long, heavy calls currently running on the server, and I make a call that is supposed to be small and return immediately, this call also hangs: it never reaches my controller.
I am unable to find a way to debug this issue or find a resolution. Please let me know if you happen to know how to proceed with this issue.
I am using Spring, with c3p0 connection pooling and Hibernate.
So I figured out what was wrong with the application, and thought I would share it in case someone somewhere faces the same issue. It turns out nothing was wrong with the application server or the web server; technically speaking, it was the browser's fault.
I found out that a browser can only have a limited number of open concurrent calls to a domain; in the case of the latest version of Chrome at the time of writing, that limit is 6. This is something all browsers do to prevent DDoS attacks.
As the HTTP calls in my application take a long time to return while the calculations complete, several HTTP calls accumulate concurrently; as a result, the browser stops sending any further calls after the 6th concurrent call, and it feels like the application is unresponsive. You can read about the maximum number of concurrent calls per browser on SO.
A possible solution I have thought of is polling, or even better, long polling. I would have used WebSockets, but that would require a lot of changes.
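To illustrate the polling idea (a minimal sketch using Python/Flask rather than our actual Spring stack; endpoint names are made up):

    import threading
    import time
    import uuid

    from flask import Flask, jsonify

    app = Flask(__name__)
    jobs = {}  # job_id -> {"status": ...}; use a real store in production

    def heavy_calculation(job_id):
        time.sleep(60)  # stand-in for the long DB fetch + calculations
        jobs[job_id] = {"status": "done", "result": 42}

    @app.route("/jobs", methods=["POST"])
    def start_job():
        # Return immediately so the browser connection is freed at once
        # instead of being held open for the whole calculation.
        job_id = str(uuid.uuid4())
        jobs[job_id] = {"status": "running"}
        threading.Thread(target=heavy_calculation, args=(job_id,)).start()
        return jsonify({"job_id": job_id}), 202

    @app.route("/jobs/<job_id>")
    def poll_job(job_id):
        # The UI polls this cheap endpoint; each poll completes quickly, so
        # the six-connections-per-domain limit is never exhausted.
        return jsonify(jobs.get(job_id, {"status": "unknown"}))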

High response time in WSO2 DSS

I have created a simple data service using WSO2 DSS for the following simple query.
"SELECT * FROM EMP_VIEW"
"EMP_VIEW" is having around 45 columns and 8500 entries(tuples). My DB instance is Oracle 11g Enterprise edition & i'm using ojdbc6.jar as the driver. Due to some reason Data Service takes around 14 mins to get the response once I try it in SoapUI.
But the same query takes around 14 or less seconds to retrieve all the records in Oracle SQL Developer/ Eclipse database explorer.
Any idea why it's taking high response time?
Not an answer, but a potential direction towards one.
There may be multiple factors at play here. You have proven that the Oracle side is working well (assuming the 14-second response time is acceptable).
You mention that SoapUI takes considerable time. This could be a SoapUI problem, where it waits for all results to be returned (time taken) and then builds the full display (more time taken) before showing the result.
The Oracle dev tool could be faster at showing results because it may not be waiting for the full result set and/or spending much time building the display.
Keep in mind that DSS takes the SQL result and wraps it in XML; that in itself may add some time, but I suspect SoapUI spends a significant amount of time decoding that XML and putting it on your screen.
To further narrow down the problem, I suggest you use another tool:
1. Possibly the TryIt tool from DSS; see what kind of timing it gets for the same calls.
2. Write a small client (C#, Java, etc.) and measure the actual time between your request and the response. This will definitively tell you how long DSS takes versus how long the client takes to build a display; a bare-bones example follows below.
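For instance (sketch only; the endpoint URL and SOAP body are placeholders for your actual DSS service):

    import time
    import requests

    DSS_URL = "http://localhost:9763/services/EmpDataService"  # placeholder

    SOAP_BODY = """<?xml version="1.0"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body><!-- your operation element here --></soapenv:Body>
    </soapenv:Envelope>"""

    start = time.monotonic()
    resp = requests.post(DSS_URL, data=SOAP_BODY,
                         headers={"Content-Type": "text/xml"})
    elapsed = time.monotonic() - start
    # Time to receive the full response body, with no GUI rendering cost.
    print(f"{len(resp.content)} bytes in {elapsed:.2f}s, HTTP {resp.status_code}")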
Please do post your results as this type of information is definitely helpful to others.
As per my understanding and observations, SoapUI waits until the whole message has been received, which is where that time is spent; when you try curl instead, you will see the response generated in far less time.
I tried curl against a streaming-enabled DSS service returning 2 MB messages, and the response was generated in less than one second.