What is the most performant way to submit a POST to an API

A little background on what I need to accomplish. I am a developer of a cloud-based SaaS application. In one instance, my clients use my application to log the receipt of goods as they come across a conveyor line. Directly before the PC where they are logged into my app, there is another Windows PC that is collecting the moisture and weight of each item from instruments. I (personally, not my app) have full access to this PC and its database. I know how I am going to grab the latest record from the db via a stored procedure/SQLCMD.
On the receiving end, I have an API endpoint that needs to receive the ID, Weight, Moisture, and ClientID. This all needs to happen in less than ~2 seconds, since they are waiting to add this record to my software's database.
What is the most performant way for me to stand up a process that triggers retrieving the record from the db and then calls the API? I also want to update the record, flagging success on a 200 response. My thought was to script all of this in a batch file and use cURL to make the API call, then call this batch file from a scheduled task in Windows. But I feel like there may be a better way with fewer moving parts.
P.S. I am not looking for code solutions per se, just direction or tools that will help. Also, I am using the AWS stack to host my application.

The most performant way is to use AWS Amplify. It is a ready-made AWS framework and development environment that can connect your existing DB to a REST API easily.
You can check their documentation on how to build it:
https://docs.amplify.aws/lib/restapi/getting-started/q/platform/js
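Whichever endpoint you put in front of it (Amplify or your existing API), the local side can stay a single script with few moving parts: query the latest record, POST it, and flag success on a 200. Below is a minimal sketch of that idea in Python (pyodbc + requests), run from Windows Task Scheduler; the connection string, table/column names, and endpoint URL are hypothetical, not taken from the question.

import pyodbc
import requests

# Hypothetical connection string, table/column names, and endpoint URL.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=Instruments;Trusted_Connection=yes;")
API_URL = "https://api.example.com/receipts"   # placeholder endpoint
CLIENT_ID = "client-123"                       # placeholder

conn = pyodbc.connect(CONN_STR)
cur = conn.cursor()

# Latest unsent reading (the stored procedure / SQLCMD step could go here instead).
row = cur.execute(
    "SELECT TOP 1 Id, Weight, Moisture FROM Readings WHERE Sent = 0 ORDER BY Id DESC"
).fetchone()

if row:
    resp = requests.post(
        API_URL,
        json={"id": row.Id, "weight": row.Weight,
              "moisture": row.Moisture, "clientId": CLIENT_ID},
        timeout=2,  # the operator is waiting, so fail fast
    )
    if resp.status_code == 200:
        # Flag success so the record is not sent twice.
        cur.execute("UPDATE Readings SET Sent = 1 WHERE Id = ?", row.Id)
        conn.commit()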

Related

How to work with the data store in an application with tasks (real time)?

I am studying Vue and Vuex. In the official documentation there is a simple example of using Vuex with data saved to localStorage.
To better understand the material, I decided to put the knowledge into practice and write a mini application - a clone of Trello (SPA).
Namely, create three routes:
A general dashboard (/dashboard) where the boards are
A board (/board) with one or several columns; each column has a button for creating a task in it
A task (/:task-id) inside a column; tasks can be moved between columns
Plus a sidebar in which all notifications for the board are displayed (CRUD on tasks and columns, changes in task status, and so on), and sockets so that other users can see changes on the board in real time.
Questions!
What data should I store exclusively in the Vuex store? Excluding authorization - that one is obvious.
For what data in this application can localStorage be useful?
What should I use so that data is not discarded when I refresh the page or navigate? I can use localStorage, but hypothetically there can be a lot of data. The fourth question follows from this.
Is it a better solution to use persistent remote storage on a server / in the cloud? If so, could you give me information on how to do this? In that case, interaction with the database is of interest: at what point is it better to save data to the database?
I'm interested in how to properly build such an application, as in a real commercial application.
I am using and learning the MEVN stack.
1 - You can store any type of data in your store. 2 - I don't think localStorage is useful here, because if users clear their browser cache all that data will be lost, so you need to configure a database for this. 3 - You need a database and some backend to provide your data. 4 - It depends. If you only need it for development, you can install anything on your own machine. If you need something more robust, you could use a cloud server, but configuring the server requires some system administration skills.
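To illustrate points 2-3 (the backend, not localStorage, should be the source of truth), here is a minimal sketch of a persistence endpoint. It is written with Flask and SQLite purely for brevity; in the asker's MEVN stack the equivalent would be an Express route backed by MongoDB, and the route and field names here are hypothetical.

import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "boards.db"

def db():
    conn = sqlite3.connect(DB)
    conn.row_factory = sqlite3.Row
    return conn

with db() as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS tasks "
                 "(id INTEGER PRIMARY KEY, column_id INTEGER, title TEXT)")

@app.route("/api/tasks", methods=["GET"])
def list_tasks():
    # The SPA fetches this on load instead of relying on localStorage.
    rows = db().execute("SELECT id, column_id, title FROM tasks").fetchall()
    return jsonify([dict(r) for r in rows])

@app.route("/api/tasks", methods=["POST"])
def create_task():
    # Persist immediately so a page refresh (or another user's browser) sees it.
    payload = request.get_json()
    conn = db()
    cur = conn.execute("INSERT INTO tasks (column_id, title) VALUES (?, ?)",
                       (payload["columnId"], payload["title"]))
    conn.commit()
    return jsonify({"id": cur.lastrowid}), 201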

How to handle a temporarily unreachable online API

This is a more general question, so bear with my abstraction of the following problem.
I'm currently developing an application that is interfacing with a remote server over a public api. The api in question does provide mechanisms for fetching data based on a timestamp (e.g. "get me everything that changed since xxx"). Since the amount of data is quite high, I keep a local copy in a database and check for changes on the remote side every hour.
While this makes the application robust against network problems (remote server in maintenance, network outage, etc.) and enables employees to continue working with the application, there is one big gaping problem:
The api in question also offers write access. E.g. my application can instruct the remote server to create a new object. Currently I'm sending the request via api, and upon success create the object in my local database, too. It will eventually propagate via the hourly data fetching, where my application (ideally) sees that no changes need to be made to the local database.
Now when the api is unreachable, I create the object in my database and cache the request until the api is reachable again. This has multiple problems:
If the request fails (due to errors that could not be validated beforehand), I end up with an object in the database which shouldn't even exist. I could delete it, but it seems hard to explain to the user(s) ("something went wrong with the api, we deleted that object again").
The problem especially cascades when dependent actions queue up, e.g. creating the object and then two more requests modifying it. When the initial create fails, so will the modifying requests (since the object does not exist on the remote side).
Worst case is deletion - when an object is deleted locally but will not be deleted on the remote side, I have no way of restoring it (easily).
One might suggest to never create objects locally, and let them propagate only through the hourly data sync. This unfortunately is not an option. If the api is not accessible, it can be for hours. And it is mandatory that employees can continue working with the application (which they cannot when said objects don't exist locally).
So bottom line:
How to handle such a scenario, where the api might not be reachable, but certain requests must be cached locally and repeated when the api is reachable again? Especially, how to handle cases where those requests unpredictably fail?
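The caching the question describes is essentially an outbox queue: failed requests are recorded locally and replayed in order once the api is reachable. Below is a minimal sketch of that mechanism, assuming a SQLite-backed queue; the endpoint, table, and field names are hypothetical, and the "failed" state still needs the user-facing handling the question asks about.

import json
import sqlite3
import requests

API_BASE = "https://remote.example.com/api"   # placeholder
conn = sqlite3.connect("outbox.db")
conn.execute("CREATE TABLE IF NOT EXISTS outbox "
             "(id INTEGER PRIMARY KEY, method TEXT, path TEXT, body TEXT, status TEXT)")

def enqueue(method, path, body):
    # Called when the api is unreachable: record the request instead of sending it.
    conn.execute("INSERT INTO outbox (method, path, body, status) VALUES (?, ?, ?, 'pending')",
                 (method, path, json.dumps(body)))
    conn.commit()

def replay_pending():
    # Run periodically (e.g. alongside the hourly sync). Requests are replayed in
    # insertion order, so a create is attempted before any dependent updates.
    rows = conn.execute("SELECT id, method, path, body FROM outbox "
                        "WHERE status = 'pending' ORDER BY id").fetchall()
    for row_id, method, path, body in rows:
        try:
            resp = requests.request(method, API_BASE + path, json=json.loads(body), timeout=10)
        except requests.ConnectionError:
            break  # still unreachable, try again on the next run
        # 'failed' rows are the unpredictable failures and need user-visible handling.
        status = "done" if resp.ok else "failed"
        conn.execute("UPDATE outbox SET status = ? WHERE id = ?", (status, row_id))
        conn.commit()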

Database for live mobile tracking

I'm developing an app that allows tracking a mobile device instantly (live)... I need a bit of advice. The application must send the location to a web service that in its turn records the received data in a database.
What would be, in your opinion, the best way to store the location values?
I'm new to big data and I'm afraid that simple SQL requests won't be able to do the work properly... I imagine that if there are a lot of users and each user sends a request every second, I'll have issues with the database...
Any advice? Thank you very much.
I think you could have a look into the geospatial queries in Mongo, if you choose to go ahead with MongoDB; the geospatial indexing and query documentation is worth having a look into.
The design of the database will depend on the nature of the queries (essentially the reads and writes).
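A minimal sketch of what such a geospatial lookup looks like with PyMongo, assuming a 2dsphere index on a GeoJSON point field; the database, collection, and field names are hypothetical.

from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")
positions = client["tracking"]["positions"]

# A 2dsphere index is required for $near queries on GeoJSON points.
positions.create_index([("location", GEOSPHERE)])

# Store one location update per document.
positions.insert_one({
    "deviceId": "device-123",
    "location": {"type": "Point", "coordinates": [-73.98, 40.75]},  # [lng, lat]
})

# Find devices last reported within 500 m of a given point.
nearby = positions.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [-73.99, 40.74]},
            "$maxDistance": 500,
        }
    }
})
for doc in nearby:
    print(doc["deviceId"])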
Working at Cintric, we landed on using Elasticsearch. We process billions of location points in real time and provide advanced analytics to our users.
We started with MongoDB and ran into a lot of trouble, eventually leading to a painful migration.
In our current stack, mobile devices dump location updates into AWS Kinesis; they are then processed by AWS Lambda handlers and written into Elasticsearch. We're able to serve, process and store 300 million requests/month for only a few hundred dollars/month. Analytics for our dashboard add additional cost, but for your needs I would highly recommend checking out your options on AWS.
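For the ingest side of that pipeline (device to Kinesis, before Lambda and Elasticsearch), the producer call is a single put_record. A minimal sketch with boto3; the stream name and record fields are hypothetical.

import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_location(device_id: str, lat: float, lng: float) -> None:
    record = {"deviceId": device_id, "lat": lat, "lng": lng, "ts": int(time.time())}
    kinesis.put_record(
        StreamName="location-updates",       # hypothetical stream name
        Data=json.dumps(record).encode(),    # Kinesis expects bytes
        PartitionKey=device_id,              # keeps one device's updates ordered
    )

send_location("device-123", 40.75, -73.98)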

How to proceed with query automation using Import.io

I've successfully created a query with the Extractor tool found in Import.io. It does exactly what I want it to do; however, I now need to run this once or twice a day. Is the purpose of Import.io as an API to allow me to build logic such as data storage and scheduled tasks (running queries multiple times a day) with my own application, or are there ways to schedule queries and make use of long-term storage of my results completely within the Import.io service?
I'm happy to create a Laravel or Rails app to make requests to the API and store the information elsewhere, but if I'm reinventing the wheel by doing so and they provide the means to address this, then that is a true time saver.
Thanks for using the new forum! Yes, we have moved this over to Stack Overflow to maximise the community atmosphere.
At the moment, Import does not have the ability to schedule crawls. However, this is something we are going to roll out in the near future.
For the moment, you can set up a cron job to run at the time you specify.
Another solution, if you are using the free version, is to use a CI tool like Travis or Jenkins to schedule your API scripts.
You can query the extractors live, so you don't need to run them manually every time. This will consume one of the requests from your limit.
The endpoint you can use is:
https://extraction.import.io/query/extractor/extractor_id?_apikey=apikey&url=url
Unfortunately the script will not be a very simple one, since most websites have very different response structures towards import.io, and as you may already know, the premium version of the tool now provides scheduling capabilities.
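A minimal sketch of calling that live-query endpoint from Python with requests; the extractor ID, API key, and target URL are placeholders, and as noted above the response structure varies per extractor, so the parsing here is only illustrative.

import requests

EXTRACTOR_ID = "your-extractor-id"                    # placeholder
API_KEY = "your-api-key"                              # placeholder
TARGET_URL = "https://example.com/page-to-extract"    # placeholder

resp = requests.get(
    f"https://extraction.import.io/query/extractor/{EXTRACTOR_ID}",
    params={"_apikey": API_KEY, "url": TARGET_URL},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the structure before wiring it into storage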

Pull Values from compactRio with Python

I have a CompactRIO system that I've inherited but don't know much about (I have no background with LabVIEW). All I really need to do is poll the values from some of the probes attached to the cRIO every few minutes over the network interface.
Currently, I have a Python script that grabs hourly summary files of the collected data via FTP. However, those files are only updated by the cRIO on an hourly basis, and I need data more frequently than that.
Do cRIOs commonly have SNMP/console/etc. interfaces available over TCP/UDP that I could poll to get this data on a remote machine? Any suggestions for the optimal way to do this kind of thing?
There isn't a way to poll the cRIO without modifying the LabVIEW program.
If you do decide to have a go at LabVIEW programming, I suggest setting up a RESTful API. Since you are already accessing the cRIO over FTP, I am assuming you can access it via HTTP calls with Python/cURL. Here is a quick tutorial on how to set up a RESTful API in LabVIEW 2013, or for LabVIEW 2012 and earlier.
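Once such a RESTful endpoint exists on the cRIO, polling it from Python is straightforward. A minimal sketch with requests; the host, path, and response fields are hypothetical.

import time
import requests

CRIO_URL = "http://192.168.1.50:8080/probes/latest"  # placeholder endpoint

def poll_once() -> dict:
    resp = requests.get(CRIO_URL, timeout=5)
    resp.raise_for_status()
    return resp.json()  # e.g. {"temperature": 21.4, "pressure": 101.2}

# Poll every few minutes instead of waiting for the hourly FTP summary files.
while True:
    try:
        print(poll_once())
    except requests.RequestException as exc:
        print(f"poll failed: {exc}")
    time.sleep(300)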