I have a Sling servlet that invokes a third-party API and fetches a JSON response. I have mapped the JSON response to a POJO class using Jackson. I now have to display this dynamically fetched and mapped response in Sightly. How do I do that? I am stuck after the response mapping.
With the new version of Sling Models, you can directly expose a model as a Servlet by specifying a resource type and the selector to use in your model annotations. When the model is loaded into Apache Sling, it automatically registers a Servlet corresponding to the model, allowing you, with nearly zero additional code, to create a Servlet that serves a JSON representation of the model. That's super cool!
That makes your life easier!!
You can keep all your objects in the Sling Model. Since the Sling Model acts as a servlet, you can make the AJAX call and get a real-time response.
Please refer to this document.
https://blogs.perficient.com/2018/07/26/no-servlets-required-exporting-data-with-sling-models/
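For example, here is a minimal sketch of that pattern (the class name WeatherModel, the field, and the resource type myapp/components/weather are assumptions; adjust them to your component):
package com.example.core.models;

import javax.annotation.PostConstruct;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.annotations.DefaultInjectionStrategy;
import org.apache.sling.models.annotations.Exporter;
import org.apache.sling.models.annotations.Model;

// Hypothetical model: the resource type and field names are placeholders
@Model(
    adaptables = SlingHttpServletRequest.class,
    resourceType = "myapp/components/weather",
    defaultInjectionStrategy = DefaultInjectionStrategy.OPTIONAL)
@Exporter(name = "jackson", extensions = "json")
public class WeatherModel {

    private String temperature;

    @PostConstruct
    protected void init() {
        // Call your third-party API (ideally via an OSGi service) and populate the fields
        this.temperature = "21";
    }

    public String getTemperature() {
        return temperature;
    }
}
With a component of that resource type on a page, requesting the component's path with the .model.json selector and extension returns the Jackson-serialized model.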
The correct path is:
HTL/Sightly -> Sling Model -> OSGi Service -> External API
So you have to extract the code that fetches the data into an OSGi service.
But please protect the code that calls the external API. For example, if the external API is not responding or is extremely slow, it could consume all available threads of AEM, and AEM could become completely unusable. To guard against that, you could use a Semaphore, for example.
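A minimal sketch of that idea (the WeatherClient name, the permit count and the timeouts are all assumptions, not AEM APIs): the OSGi service wraps the external call with a Semaphore so a hung endpoint cannot tie up every request thread.
package com.example.core.services;

import java.util.Optional;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import org.osgi.service.component.annotations.Component;

@Component(service = WeatherClient.class)
public class WeatherClient {

    // Allow at most 5 concurrent calls to the external API
    private final Semaphore permits = new Semaphore(5);

    public Optional<String> fetchForecast() {
        try {
            // Give up quickly instead of queuing more threads behind a slow API
            if (!permits.tryAcquire(2, TimeUnit.SECONDS)) {
                return Optional.empty();
            }
            try {
                return Optional.of(callExternalApi());
            } finally {
                permits.release();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return Optional.empty();
        }
    }

    private String callExternalApi() {
        // e.g. an HTTP client with its own connect/read timeouts; omitted here
        return "{\"temperature\":\"21\"}";
    }
}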
Assuming the JSON returned is arbitrary, the best thing to do is simply display it as a string. To do that, instead of mapping the JSON response to a POJO I would recommend adapting a Sling model to the response.
Then you can set that Sling Model to be the model in your Sightly code using data-sly-use.model, and in the Sling Model's constructor (or @PostConstruct method) you can set the response value on an attribute of the model.
Then all you'd need to do is output that attribute with a ${} expression in the Sightly HTML.
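A minimal sketch of that idea, assuming a hypothetical ApiResponseModel and an equally hypothetical WeatherClient OSGi service (any service that returns the response string will do):
package com.example.core.models;

import javax.annotation.PostConstruct;
import org.apache.sling.api.SlingHttpServletRequest;
import org.apache.sling.models.annotations.Model;
import org.apache.sling.models.annotations.injectorspecific.OSGiService;

@Model(adaptables = SlingHttpServletRequest.class)
public class ApiResponseModel {

    @OSGiService
    private WeatherClient weatherClient; // hypothetical service that calls the third-party API

    private String json;

    @PostConstruct
    protected void init() {
        // Keep the raw JSON as a plain string attribute of the model
        this.json = weatherClient.fetchForecast().orElse("{}");
    }

    public String getJson() {
        return json;
    }
}
In the HTL you would then load it with data-sly-use.model="com.example.core.models.ApiResponseModel" and output ${model.json}; HTL escapes the string by default, which is fine if you only want to display it.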
If the format/structure of the JSON is at least partly known, you could use the POJO in the Sightly directly. Create some conditionals to test which attributes the POJO has, so you can put them into the Sightly code.
There are a lot of answers on how to convert an OData query into an Expression or into a lambda, but what I need is quite the opposite: how to get the OData query string from a LINQ Expression.
Basically, what I want is to pass the query on to another service. For example, there are two services, where the first service does not persist anything and the second service is the one that returns the data from a database. Service1 sends the same OData request on to Service2, and it can add more parameters to the original OData request before sending it to Service2.
What I would like:
public IActionResult GetWeatherForecast([FromServices] IWeatherForcastService weatherForcastService)
{
    // IQueryable here
    var summaries = weatherForcastService.GetSummariesIQ();
    var url = OdataMagicHelper.ConvertToUri(summaries);
    var data = RestClient2.Get(url);
    return data;
}
OP Clarified the request: generate OData query URLs from within the API itself.
Usually the queries are so specific or simple that it is not really necessary to try to generate OData URLs from within the service. The whole point of the service configuration is to publish how the client can call anything, so it is a little redundant or counter-intuitive to return complex resource query URLs from within the service itself.
We can use Simple.OData.Client to build OData urls:
If the URL that we want to generate is:
{service2}/api/v1/weather_forecast?$select=Description
Then you could use Simple.OData.Client:
string service2Url = "http://localhost:11111/api/v1/";
var client = new ODataClient(service2Url);
var url = await client.For("weather_forecast")
.Select("Description")
.GetCommandTextAsync();
Background, for client-side solutions
If your OData service is a client for another OData service, then this advice is still relevant.
For full LINQ support you should be using OData Connected Services or Simple.OData.Client. You could roll your own, or use other derivatives of these two, but why go to all that effort to reinvent the wheel?
One of the main drivers for an OData standard-compliant API is that the metadata is published in a standard format that clients can inspect and use to generate consistent code and/or dynamic queries to interact with the service.
How to choose:
Simple.OData.Client provides a lightweight framework for dynamically querying and submitting data to OData APIs. If you already have classes that model the structure of the API, you can use typed, LINQ-style query syntax; if you do not have a strongly typed model but you do know the structure of the API, you can use either the untyped or the dynamic expression syntax to query the API.
If you do not need full compile-time validation of your queries, or you already have the classes that represent the resources served by the API, then this is a simple enough interface to use.
This library is a good fit inside your API logic if you need to generate complex URLs in a strongly typed style of code without creating a context to manage the connectivity to the server.
NOTE: Simple.OData.Client is sometimes less practical when developing against a large API that is rapidly evolving or that does not have a strict versioned route policy. If the API changes, you will need to diligently refactor your code to match and will have to rely on extensive regression testing.
OData Connected Services follows a pattern where some or all of the API is modelled in the client with strongly typed client-side proxy interfaces. These are POCO classes that have the structure necessary to send data to and receive data from the server.
The major benefit of this method is that the POCO structures, requests and responses are validated against the schema of the API. This effectively gives you full IntelliSense support for the API and allows you to explore its structure; the generated code becomes your documentation. It also gives you compile-time checking and runtime safety.
The general development workflow after the API is deployed or updated is:
Download the $metadata document
Select the Operations and Types from the API that you want to model
Generate classes to represent the selected DTO types as defined in the document, i.e. all the inputs and outputs.
Now you can start using the code.
In Visual Studio 2022/2019/2017 the Connected Services interface provides a simple wizard for establishing the initial connection and for updating (or re-generating) the code when you need to.
The OData Connected Service, or another client-side proxy generation pattern, suits projects that meet any of these criteria:
The API definition is relatively stable
The API definition is in a state of flux
You consume many endpoints
You don't want to manually code the types used to serialize or deserialize payloads
Full disclosure: I prefer the connected service approach, but I have my own generation scripts. However, if you are trying to generate OData query URLs from inside your API, it's not really an option; it creates a messy recursive dependency... just don't go there.
Connected Services is the low-(manual)-code and lazy approach that is perfect for a stable API: generate once and never do it again. But the Connected Service architecture also works well for a rapidly changing API, because it will manage the minute changes to the classes for you; you just need to update your client-side proxy classes more frequently.
I want to limit the number of properties that are returned from my REST API built in .NET Core. When accessing resources, the client only needs specific subsets of the data returned by the API. What is a good way to tell the API which properties the client wants returned?
My initial thought would be to add query parameters to the endpoint like this:
http://www.restapi.com/v1/resource?fields=id,name,type
But I am not sure of the best way to implement this in the API so that it is reusable and clean.
You're right that you usually don't want to return your full domain model or data model over your web API. Usually you define a custom model type for this purpose, which you can also decorate with attributes for model binding and model validation, if desired.
If you want the client to be able to determine which properties it gets, you can return an anonymous type built for that purpose, or perhaps predefine several DTO types and choose among them based on the client's parameters.
In our project we expose a number of web services that were generated from a WSDL. After generating them, I can see that the requests and responses are mapped to POJOs, and when I build the response I just set a new POJO. This works really nicely. However, I have a problem with the request. When we receive the request, I expected the payload to be a POJO mapping the parameters from the request. The payload is actually an array of objects. I can access the values, but this is not very comfortable. You can take a look at the picture.
I can see that under "Variables" in the method it is correctly matched to the POJO we would like to have. Is there some setting that I am missing somewhere so that we can get the payload mapped to the correct POJO type?
Re-run the WSDL-to-Java code generation, but this time with wrapper style disabled; see https://cxf.apache.org/docs/wsdl-to-java.html#WSDLtoJava-wrapperstyle. In CXF this is done by passing a JAX-WS binding file (the -b option) that sets jaxws:enableWrapperStyle to false.
Currently spring-data-rest is returning JSON in HAL format in a Spring Boot project of mine. I am using an Ember.js frontend and want to use the JSON API specification (http://jsonapi.org/).
How might I register a new JSON formatting strategy, given that I will need to write the formatter myself because one does not exist yet?
This is how you can customize the HATEOAS that Spring Data REST produces:
https://docs.spring.io/spring-data/rest/docs/current/reference/html/#customizing-sdr.customizing-json-output
If you need to completely replace the JSON representation with your own, then you can write and register your own org.springframework.core.convert.converter.Converter.
Or you can write your own REST endpoints in a plain old @RepositoryRestController. (<= I suggest trying this)
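For the @RepositoryRestController route, a minimal sketch might look like this (Book, BookRepository and the /books path are made-up placeholders; the JSON:API document is assembled by hand here just to show where you gain control over the output format):
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import org.springframework.data.rest.webmvc.RepositoryRestController;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@RepositoryRestController
public class BookJsonApiController {

    private final BookRepository repository; // hypothetical Spring Data repository

    public BookJsonApiController(BookRepository repository) {
        this.repository = repository;
    }

    @GetMapping(path = "/books", produces = "application/vnd.api+json")
    @ResponseBody
    public Map<String, Object> books() {
        List<Map<String, Object>> data = new ArrayList<>();
        for (Book book : repository.findAll()) {
            Map<String, Object> resource = new LinkedHashMap<>();
            resource.put("type", "books");
            resource.put("id", String.valueOf(book.getId()));
            resource.put("attributes", Map.of("title", book.getTitle()));
            data.add(resource);
        }
        // JSON:API requires a top-level "data" member
        return Map.of("data", data);
    }
}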
I am building an application that exposes a REST API and, on the backend, communicates with and orchestrates multiple SOAP services to build the responses to the REST API. I have been reading about Canonical Data Models and how they can help me loosely couple these backend SOAP services.
Should I be using a canonical Data Model between my Rest API and the backend services?
At the moment the backend SOAP responses are unmarshalled to Java objects using JAXB. I then use scripts to map the JAXB objects to a Map that represents the structure I want to return as JSON, and simply convert the Map to JSON via my REST API.
So SOAP -> jaxb Java Object -> Java Map(representing JSON) -> Json
Should I add another step in here for a Canonical Model?
So SOAP -> jaxb Java Object -> CANONICAL MODEL not representing SOAP or JSON structure -> Java Map(representing JSON) -> Json
Is this a good fit for a CDM? Or is adding this extra level redundant?
I think you're talking about having a facade between you and the services rather than a CDM.
You would map the JAXB-generated objects into internal objects, perform application logic on these, and then map them to the objects representing your JSON interface.
The JAXB-to-internal mapping decouples your application from the interfaces you are consuming.
The internal-to-JSON mapping decouples the interface you expose to your consumers from your internal objects.
Whether this is worth it or not depends on the complexity of your environment and on the cost and likelihood of change. For instance, it might be acceptable to be tightly coupled to services that share and expose a mature, versioned canonical model. It is a very different risk profile if you are consuming a set of ad-hoc or third-party interfaces.
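As a rough sketch of those two mapping steps (all class and field names here are invented for illustration; CustomerResponse stands in for a JAXB-generated class and Customer for your internal type):
import java.util.LinkedHashMap;
import java.util.Map;

public class CustomerFacade {

    // JAXB-generated type -> internal object: decouples you from the SOAP contract
    public Customer toInternal(CustomerResponse soap) {
        Customer customer = new Customer();
        customer.setId(soap.getCustomerId());
        customer.setName(soap.getFirstName() + " " + soap.getLastName());
        return customer;
    }

    // Internal object -> Map representing the JSON you expose: decouples consumers from your internals
    public Map<String, Object> toJsonMap(Customer customer) {
        Map<String, Object> json = new LinkedHashMap<>();
        json.put("id", customer.getId());
        json.put("displayName", customer.getName());
        return json;
    }
}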
As far as I know, a canonical data model is a common data model that represents all possible message formats and/or protocols. For example, in Mule the MuleMessage is a canonical data model, because for every message we send, Mule creates a MuleMessage that represents it irrespective of the protocol used. Creating such a canonical data model is, in general, a bit difficult.
Coming to your case, I don't know how complex your SOAP objects are. If they are very complex, i.e. have multiple levels, then it would be a difficult job. My suggestion is: instead of having a canonical data model, why not write your own custom transformer (or see if you can use a built-in one) that parses and transforms your SOAP message to the corresponding JSON response? You can have a common transformer interface with multiple implementations that perform the parsing and transforming, depending on the SOAP message.
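For example, a tiny sketch of that suggestion (SoapToJsonTransformer and OrderResponse are made-up names; OrderResponse stands in for one of your JAXB classes):
import java.util.Map;

// Common transformer contract: unmarshalled JAXB payload in, JSON-shaped Map out
public interface SoapToJsonTransformer<T> {
    Map<String, Object> transform(T soapPayload);
}

// One implementation per SOAP message type
class OrderResponseTransformer implements SoapToJsonTransformer<OrderResponse> {
    @Override
    public Map<String, Object> transform(OrderResponse soapPayload) {
        return Map.of(
                "orderId", soapPayload.getId(),
                "status", soapPayload.getStatus());
    }
}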
Hope this helped.