HTTP request processing in an ABAP system - abap

I have a very basic question about how an external HTTP request is processed in an ABAP (S/4) system.
Are the requests handled per process or per thread (terms borrowed from the Java HTTP world)?
By thread, I mean a worker that already has the objects initialised in memory from a previous request.
By process, I mean that the objects are initialised in memory on every request, which is obviously time-consuming and bad for performance.
In the case of a clustered system the request can be load-balanced to a different system, but that is a separate topic.
Best Regards,
Saurav

The Internet Communication Manager (ICM) handles the request and forwards it, based on the URL, to your handler class, which implements the IF_HTTP_EXTENSION interface (you configure the mapping in transaction SICF).
SAP requires authorization to accept an HTTP request. The web logon screen sets a cookie on the client to track the session. If you configure a static user for your service in transaction SICF, you can set your own cookies on the client (via HTTP headers in the response) to track and check it.
This interface does not cache objects for you, but you can build your own cache with static class attributes and ABAP's other general caching facilities. Please check the REST service API below for a sample project (a sketch of the static-attribute idea follows the link):
https://github.com/pacroy/abap-rest-api
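The static-attribute cache suggested above is a language-agnostic pattern; here is a minimal sketch in TypeScript, since the ABAP side is covered by the linked project. Class and method names are invented; the ABAP equivalent would be a CLASS-DATA attribute, which survives only within the same session/work process.

```typescript
// Minimal sketch: a class-level (static) map acts as a cache that outlives
// individual requests handled by the same process. All names are invented.
class ProductHandler {
  private static cache = new Map<string, string>(); // shared across requests

  handleRequest(productId: string): string {
    let payload = ProductHandler.cache.get(productId);
    if (payload === undefined) {
      payload = this.expensiveLookup(productId); // e.g. a database read
      ProductHandler.cache.set(productId, payload);
    }
    return payload; // later requests for the same id skip the lookup
  }

  private expensiveLookup(id: string): string {
    return JSON.stringify({ id, name: "..." });
  }
}
```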
Load balancers have cookie-based (session-based) routing capabilities for finding the correct system.

Related

What is the difference between a web service, HTTP, and an API?

I am taking a course in web data, so I understand that when we want to retrieve a webpage in a browser, we do a request-response cycle using a communication protocol like HTTP or HTTPS. A web service is a piece of software (I don't know where it is stored or how it is accessed) that lets two applications built on different architectures communicate using a serialization language like XML or JSON. I don't know what the difference is between a web service and HTTP; they are both a way to connect two different computers. What confuses me even more is the API, which according to the research I did is something used to access web services.
Let's begin by defining all the terms in your question, since it's a bit all over the place.
HTTP (Hypertext Transfer Protocol): Allows you to transfer data over the web. Your browser performs a request using HTTP to your web service.
Service: Any software that performs a specific task. We are interested in a web service, which is typically invoked via HTTP, though it could be triggered by something else entirely, such as a Linux signal.
For now, let's assume it listens on HTTP.
API (Application Programming Interface): An interface that all clients of your software have to abide by in order to use it. For example, in our web service we can dictate an API so that requests follow some convention.
Let's put it all together now.
You're making a website that wants to calculate the sum of two numbers. First, users will go to http://yoursite.com, and then the browser will always do an HTTP request to the domain yoursite.com on port 80. This will hit either your hosting site or some backend server.
Here you have a choice: if you're using something like GitHub Pages, it serves the static content for you; otherwise you have some server daemon that will load a file and serve it.
So now the web browser did an HTTP request and your webpage should load with an index.html. The user can now click on buttons, and everything looks good until they press Calculate -- what happens now?
We want to offload the computation to our backend, so we perform an HTTP request to our backend server. We can define an API, in our case an endpoint, so that the HTTP request can hit it and it will return the sum of the two numbers.
How do we return the result? We need to represent the data somehow, and this can be done through a body payload encoded as either JSON or XML. Either one is just a serialization format, and you could encode the data in various other ways too. JSON is nice because you can parse it easily with JavaScript on the client side.
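To make this concrete, here is a minimal sketch of the flow just described, assuming an Express backend; the route name /api/sum and the payload shape are invented for illustration.

```typescript
// server.ts -- a minimal sketch of the sum service described above.
import express from "express";

const app = express();
app.use(express.json()); // parse JSON request bodies

// The "API" is simply the contract: POST {a, b}, receive {sum} back as JSON.
app.post("/api/sum", (req, res) => {
  const { a, b } = req.body as { a: number; b: number };
  res.json({ sum: a + b }); // JSON-serialized response body
});

app.listen(80);

// What the browser would do when the user presses Calculate:
async function calculate(a: number, b: number): Promise<number> {
  const resp = await fetch("http://yoursite.com/api/sum", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ a, b }),
  });
  const body = await resp.json(); // deserialize the JSON payload
  return body.sum;
}
```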
Great -- so now we have an entire site and it works! We can do an HTTP request from our browser straight to the backend according to the endpoint we set up, and it should fulfill our request. Notice how we're now using the backend server's API from within our own site.
Other keywords you may run into: CORS, AJAX, Apache Server. Good luck!

Correct pattern of Ktor's HttpClient usage

What's the correct usage pattern for HttpClient in Ktor? Should I use it as a singleton per app lifecycle, or should I create one per request?
I would say that you may have more than one client per app if you need to connect to more than one logical service.
But if you are dealing with one single HTTP server, it's better to have one client, because the client establishes and holds a connection with the server. It also allocates resources: prepared threads, coroutines, and connections. If you have multiple clients you can potentially run out of these resources.
Should I use it as a singleton per app lifecycle, or should I create one per request?
Creating an HTTP client instance is usually somewhat resource-intensive, so you should not create a client instance for every request. You should create just one HTTP client instance per app lifecycle, injected wherever it is required, ensuring that
you have used the right HTTP client configuration, like the thread pool size, timeouts, etc.;
you release the resources upon the app's shutdown.
The client can be configured with HttpClientEngineConfig or any of its subclasses; see the Ktor documentation for more details.
It is better to reuse the HttpClient instance for performance reasons if the requests can be performed using the same configuration/settings.
But in some cases you have to create separate instances, because the features of an HttpClient are determined by the engine and the plugins specified when creating the instance.
For example, when using bearer authentication, HttpClient instances can be reused only when sending requests to the same resource server (with the same authorization configuration).
Similarly, if two requests should use different timeouts, they can only be performed by different HttpClients.
To summarize, an HttpClient instance should be created per "feature set", determined by the required engine and plugins; a sketch of that pattern follows.
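The advice above is language-agnostic, so here is the same "one long-lived, configured client per feature set" idea sketched in TypeScript, with axios standing in for Ktor's HttpClient (each axios instance likewise fixes its configuration at creation time). All names and URLs are invented.

```typescript
import axios from "axios";

// One long-lived client per "feature set", created once at app startup
// and reused for every request -- not one client per request.
const apiClient = axios.create({
  baseURL: "https://api.example.com", // invented URL
  timeout: 5_000,                      // this client's fixed timeout
});

// A second feature set: different auth and timeout, so a separate client.
const reportClient = axios.create({
  baseURL: "https://reports.example.com",
  timeout: 60_000, // long-running report downloads
  headers: { Authorization: "Bearer <token>" },
});

// Both clients are injected/imported wherever needed and reused.
export async function getUser(id: string) {
  const { data } = await apiClient.get(`/users/${id}`);
  return data;
}
```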

Server Sent Events and Ajax VS Websockets and Ajax

I am creating an application (Nuxt.js) and am having trouble determining a good approach for sending data to the API (Express.js) and retrieving real-time updates. It seems that I can create "bi-di" connections with both approaches [Server-Sent Events (SSE) plus Axios, or WebSockets (WS)].
Both technologies work in most browsers, so I do not see a need to add additional libraries such as socket.io; for those individuals who do not have a current browser, too bad.
The application is based on user input of form data/clicks. Other users are then notified/updated with the information, at which point a user can respond and the chain goes on (a basic chat-like flow: some information will be exchanged quickly, some slowly or never).
In my experience, the user flow would rely more heavily on listening for changes than on actually changing the data, which is why I'm considering SSE. Unfortunately, both approaches have their flaws.
Websockets:
Not all components will require a WS to get/post information, so it doesn't make sense to upgrade a basic HTTP connection at additional server expense. Therefore a method other than WS will still be required (Axios/SSR). Example: checking whether a user name exists.
Security firewalls may prevent WS from operating properly.
express-ws makes sockets easy on the API end.
I believe you can have more than 6 concurrent connections per user (which may be both a pro and a con).
Server Sent Events
Seems like the technology is fading in favor of WS
Listening to events seems to be as easy as it is with WS.
No need to upgrade the connection, but I will have to use node-spdy within the Express API; this may also be a good implementation for WS due to multiplexing.
A little more backend code to set up HTTP/2 and emit the SSEs (ugly code as well, so helper functions will be needed).
Limited by HTTP/1.1's restrictions (6 concurrent connections per domain), which is a problem as users could easily max this out (e.g. having multiple chat windows open).
TLDR
The application will be more "feed" orientated, with occasional posting (which can be handled by Axios). However, users will be listening to multiple "feeds", and the HTTP connection limits will be a problem. I do not know what the solution would be, because SSE seems like the better option as I do not need to continually handshake. If this handshake is truly inconsequential (which, from everything I have read, isn't the case) then WS is likely a better alternative. Unfortunately, there is so much conflicting information regarding the two.
Thoughts?
SSE, Web Sockets, and normal HTTP requests (via AJAX or Fetch API) are all different tools for different jobs.
SSE
Unidirectional, from server to client.
Text-based data only. (Anything else must be serialized, e.g. as JSON.)
Simple API, widely compatible, auto-reconnects, has built-in provision for catching up on possibly missed events.
Web Sockets
Bi-directional.
Text or binary data.
Requires you to implement your own meaning for the data sent.
Standard HTTP Requests
Client to Server or Server to Client, but only one direction at a time.
Text or binary data.
Requires extra effort to stream server-to-client response in realtime.
Streaming from client-to-server requires that the entire data be known at the time of the request. (You can't do an event stream, for example.)
How to decide:
Are you streaming event-like data from the server to the client? Use SSE. It's purpose-built for this and is a dead simple way to go.
Are you sending data in only one direction, and you don't need to spontaneously notify clients of something? Use a normal HTTP request.
Do you need to send bidirectional data with a long-term established connection? Use Web Sockets.
From your description, it sounds like either SSE or Web Sockets would be appropriate for your use case. I'd probably lean towards SSE, while sending the random API calls from the client with normal HTTP requests.
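For illustration, here is a minimal sketch of that split, assuming an Express backend; the endpoint names (/feed, /api/messages) are invented.

```typescript
// --- browser side (feed-client.ts) ---
// EventSource is built into browsers; it auto-reconnects on drops.
const feed = new EventSource("/feed");
feed.onmessage = (ev) => {
  const update = JSON.parse(ev.data); // SSE carries text, so JSON-decode it
  console.log("server pushed:", update);
};

// Occasional writes go over a normal HTTP request, as recommended above.
const send = (text: string) =>
  fetch("/api/messages", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });

// --- server side (feed-server.ts, Express) ---
import express from "express";
const app = express();

app.get("/feed", (req, res) => {
  // An SSE endpoint is just a long-lived chunked response.
  res.set({ "Content-Type": "text/event-stream", "Cache-Control": "no-cache" });
  res.flushHeaders(); // start the stream
  const timer = setInterval(
    () => res.write(`data: ${JSON.stringify({ now: Date.now() })}\n\n`),
    1000
  );
  req.on("close", () => clearInterval(timer)); // stop when the client leaves
});

app.listen(3000);
```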
I do not know what the solution would be because SSE seems like the better option, as I do not need to continually handshake. If this handshake is truly inconsequential (which from everything I have read isn't the case) then WS is likely a better alternative.
Keep in mind that you can simply configure your server with HTTP keep-alive, making this point moot.
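In Node, for instance, keep-alive is on by default and you mainly tune how long idle connections stay open; a minimal sketch:

```typescript
import http from "node:http";

const server = http.createServer((req, res) => {
  res.end("ok");
});

// Node keeps connections alive by default; these knobs tune how long an
// idle connection is held open before the server closes it.
server.keepAliveTimeout = 65_000; // ms of idle time before closing
server.headersTimeout = 66_000;   // should exceed keepAliveTimeout

server.listen(3000);
```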
I personally avoid using websockets as a two-way communication channel between client and server.
I try to use sockets to broadcast data from the server to users or to a single user (socket), so they can get real-time updates; but for the post requests from client to server I tend to use axios or something similar, because I don't want to pass sensitive data (like access keys etc.) from the client over the socket.
My data flow goes something like
The user posts data to the server using axios, SSE, or whatever.
The backend server does what it has to and notifies the socket server that an event has occurred.
The socket server then notifies whoever it has to.
My problem with using sockets to send data from the client to the server is authentication. Technically, you can't pass anything through a socket that is not available to client-side JavaScript, meaning that to authenticate the action you would have to send sensitive information over the websocket. This is an issue for multiple reasons: if your sensitive data can be accessed by client-side JS, there is a whole class of attacks that can be mounted, and someone could also listen in on the communication between the websocket and the client. This is why I use API calls (axios etc.) and store sensitive data in http-only cookies.
So once the server wants to notify the user that something has happened, you can easily do that by telling the websocket server to send the data to that user.
You also want to keep your API server stateless, meaning no sockets in your API. I use a separate server just for websocket connections, and my API server and websocket server communicate using Redis. Pub/sub is a really neat feature for internal server communication and state management; a sketch follows.
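Here is a minimal sketch of that hop (API server publishes to Redis, websocket server subscribes and fans out), assuming node-redis v4 and the "ws" package; the "events" channel name and message shape are invented.

```typescript
import { createClient } from "redis";
import { WebSocketServer } from "ws";

// --- API server: after handling the POST, publish an event ---
const pub = createClient();
await pub.connect();
await pub.publish("events", JSON.stringify({ type: "newMessage", room: "42" }));

// --- Websocket server: subscribe and fan out to connected clients ---
const wss = new WebSocketServer({ port: 8080 });
const sub = createClient(); // a client in subscriber mode can't also publish
await sub.connect();
await sub.subscribe("events", (raw) => {
  for (const client of wss.clients) client.send(raw); // broadcast (or route)
});
```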
And to answer your question regarding multiple connections: you can use a single connection between your websocket server and the client, and broadcast data using channels. So one channel could be for the notification feed, another channel for the story feed, and so on (see the sketch below).
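A minimal sketch of that multiplexing, with invented channel names: every message carries a "channel" tag, and the client dispatches on it over one connection.

```typescript
// One websocket connection, many logical feeds.
type Feed = "notifications" | "stories";

const handlers: Record<Feed, (payload: unknown) => void> = {
  notifications: (p) => console.log("notification:", p),
  stories: (p) => console.log("story update:", p),
};

const ws = new WebSocket("wss://example.com/live"); // invented URL
ws.onmessage = (ev) => {
  const msg = JSON.parse(ev.data) as { channel: Feed; payload: unknown };
  handlers[msg.channel]?.(msg.payload); // route to the right feed handler
};
```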
I hope this makes sense to you. This stack has worked really good for me.

API gateways: how are they written, and are there guidelines for how they interact with their clients?

I've been doing a ton of research on microservices, but I cannot find a single piece of code written for an API gateway. I understand that between the clients and the services you have an API gateway, which allows a client to make one request over the internet to the gateway; the gateway can then make many requests internally to services, which build up a response. Now, from an article on NGINX:
The API Gateway is responsible for request routing, composition, and protocol translation.
use case
Suppose we support two clients, an Android app and an Angular app (browser), and let's make a tangible user story: the client is an online shopping store.
The shopping store would have different services broken out onto servers, and each service could be built on a different platform/language with a different database. They are completely self-contained, so that they can scale in the cloud really quickly without having to scale the entire application. If there is some intense algorithm that needs to run for payment, the payment service can quickly spin up a few more servers to balance the load and decrease user wait time.
That service could be written in Java, exposing an HTTP/REST API to be consumed. However, what if it's written in, say, C++/Go/Node (it doesn't really matter which language) and, instead of exposing its API via HTTP, it uses a different protocol? What would that mean for the API gateway, and how would it handle the response?
The client makes a request to the home page, where we have three things loaded:
shopping cart
list view of shoppings items
current specials
The client would make only one request to the API gateway, say to apigateway/apiv1/home, and the gateway would then make three requests to the services:
serviceShopping/apiv1/shoppingList
serviceCart/apiv1/cart
serviceSpecial/apiv1/specials
At this point the three services could each be written in a different language and use a different protocol. How would those three services be requested, and how would the response back to the client (a single response) be composed? A JSON object with a specific schema? This is where I get confused...
Sorry for the long post; it's a simple question, I think, but I needed to set up something I could conceptualize with and explain.
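For the common case where the services do speak HTTP, the composition step usually looks something like this sketch: an Express gateway fanning out concurrently and returning one JSON object. The internal hostnames come from the question above; the response schema is invented.

```typescript
// Gateway sketch: one client request fans out to three internal services,
// and the results are composed into a single JSON document.
import express from "express";

const app = express();

app.get("/apiv1/home", async (_req, res) => {
  try {
    // Fan out concurrently; each service can use its own stack internally,
    // as long as the gateway knows how to speak its protocol (HTTP here).
    const [shoppingList, cart, specials] = await Promise.all([
      fetch("http://serviceShopping/apiv1/shoppingList").then((r) => r.json()),
      fetch("http://serviceCart/apiv1/cart").then((r) => r.json()),
      fetch("http://serviceSpecial/apiv1/specials").then((r) => r.json()),
    ]);
    // Composition: one JSON object with an agreed-upon schema.
    res.json({ shoppingList, cart, specials });
  } catch {
    res.status(502).json({ error: "upstream failure" });
  }
});

app.listen(80);
```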

Injecting data caching and other effects into the WCF pipeline

I have a service that always returns the same results for a given parameter. So naturally I would like to cache those results on the client.
Is there a way to introduce caching and other effects inside the WCF pipeline? Perhaps a custom binding class that could sit between the client and the actual HTTP binding.
EDIT:
Just to be clear, I'm not talking about HTTP caching. The endpoint may not necessarily be HTTP, and I am looking at far more effects than just caching. For example, one effect I need is to prevent multiple calls with the same parameters.
The WCF service can use Cache-Control directives in the HTTP header to tell the client how it should use the client-side cache. There are many options, which are part of the HTTP protocol. You can, for example, define how long the client may simply take the data from its local cache instead of making requests to the server. All clients that implement HTTP, such as web browsers, will follow these instructions. If your client makes AJAX requests to the WCF server, the corresponding AJAX call can just return the data from the local cache.
Moreover, one can implement many interesting caching scenarios. For example, if one sets "Cache-Control" to "max-age=0" (see here for an example), the client will always revalidate the cache with the server. Typically the server sends a so-called "ETag" in the header together with the data. The "ETag" represents an MD5 hash or any other opaque value that changes whenever the data changes. The client automatically sends the "ETag" it previously received from the server in the header of its GET request. The server can answer with the special response HTTP/1.1 304 Not Modified (instead of the typical HTTP/1.1 200 OK response) and an empty body. In that case the client can safely take the data from its local cache.
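The sketch below shows the server side of that revalidation round trip, written as an Express handler purely to keep the example runnable; the original service is WCF, but the HTTP mechanics are identical. The route and the hash choice are invented.

```typescript
import express from "express";
import { createHash } from "node:crypto";

const app = express();

app.get("/data", (req, res) => {
  const body = JSON.stringify({ value: "the expensive result" });
  const etag = createHash("md5").update(body).digest("hex");

  res.set("Cache-Control", "private, max-age=0"); // always revalidate
  res.set("ETag", etag);

  // The client sent the ETag it cached earlier; if unchanged, skip the body.
  if (req.headers["if-none-match"] === etag) {
    res.status(304).end(); // 304 Not Modified, empty body
  } else {
    res.type("application/json").send(body);
  }
});

app.listen(3000);
```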
I use "Cache-Control:max-age=0" additionally with Cache-Control: private which switch off caching the data on the proxy and declare that the data could be cached, but not shared with another users.
If you want read more about caching control with respect of HTTP headers I'll recommend you to read the following Caching Tutorial.
UPDATED: If you want implement some general purpouse caching you can use Microsoft Enterprise Library which contains Caching Application Block. The Microsoft Enterprise Library are published on the CodePlex with the source code. As an alternative in .NET 4.0 you can use System.Runtime.Caching. It can be used not only in ASP.NET (see here)
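Those libraries are .NET-specific, but the two client-side effects this thread asks for (caching results per parameter, and suppressing duplicate calls with the same parameters) reduce to one small pattern; here it is sketched in TypeScript as a language-agnostic illustration, with all names invented.

```typescript
// Caching the Promise (not just the result) both memoizes completed calls
// and collapses concurrent duplicate calls into one in-flight request.
const cache = new Map<string, Promise<unknown>>();

function cachedCall<T>(key: string, invoke: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit) return hit as Promise<T>; // cached result OR an in-flight call

  const pending = invoke().catch((err) => {
    cache.delete(key); // don't cache failures
    throw err;
  });
  cache.set(key, pending);
  return pending;
}

// Usage: both calls share one underlying request.
const a = cachedCall("sum:2:3", () => fetch("/api/sum?a=2&b=3").then((r) => r.json()));
const b = cachedCall("sum:2:3", () => fetch("/api/sum?a=2&b=3").then((r) => r.json()));
```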
I still recommend that you use the HTTP binding with HTTP caching if it is at all possible in your environment. That way you save a lot of development time and end up with a simpler, more scalable, and more effective application. Because HTTP is so important, many useful things have already been implemented that you can use out of the box; caching is only one of them.