As per our current architecture, we have DataPower acting as a gatekeeper: it validates each incoming JSON request against a JSON schema.
We have a lot of RESTful services, each with a corresponding JSON schema residing on DataPower itself. However, every time a service definition changes, the corresponding schema has to change as well, and that means a DataPower deployment of the affected schema.
We are now planning a RESTful service that DataPower will call for every incoming request. It will return the JSON schema for the service to be invoked, and that schema will live alongside the service code rather than on DataPower. That way, whenever a service definition changes, we can change the schema in the same place and deploy the service, saving us an unnecessary DataPower deployment.
Is there a better approach to schema validation? All I want is to avoid a DataPower deployment for every schema change.
Just FYI, we get schema changes on a frequent basis.
Keep your current solution as is, since pulling in new JSON schemas for every request will affect performance. Instead, when you deploy the schema in the backend system, make an RMI (REST management interface) or SOMA call that uploads the new schema, or simply use an XML Firewall with a GWS script that writes the JSON data to a file in the directory (requires firmware 7.5 or higher).
Note that you also have to clear the cache through the call!
A better approach is a push system based on subscribing to changes. You can store the schemas in etcd, Redis, Postgres, or any other system that has notification channels for data changes, so you can update the schemas in the validating service only when they change instead of on every request. If your validating service uses a validator that compiles schemas to code (ajv, of which I am the author, is-my-json-valid, jsen), recompiling only on change gives an even bigger performance gain.
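As a rough illustration of that pattern on the JVM (the validators named above are Node.js libraries), the sketch below keeps pre-compiled schemas in memory and recompiles one only when a change notification arrives, assuming Redis pub/sub via Jedis and the networknt json-schema-validator; the `schema:<service>` key layout and the `schema-changed` channel name are assumptions, not part of the answer.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.networknt.schema.JsonSchema;
import com.networknt.schema.JsonSchemaFactory;
import com.networknt.schema.SpecVersion;
import com.networknt.schema.ValidationMessage;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SchemaCache {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private static final JsonSchemaFactory FACTORY =
            JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V7);

    // serviceName -> compiled schema, refreshed only when a change is published
    private final ConcurrentMap<String, JsonSchema> schemas = new ConcurrentHashMap<>();

    /** Load (or reload) one schema from Redis and compile it once. */
    public void refresh(String serviceName) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String schemaText = jedis.get("schema:" + serviceName);  // hypothetical key layout
            if (schemaText != null) {
                schemas.put(serviceName, FACTORY.getSchema(schemaText));
            }
        }
    }

    /** Validate a request body against the cached, pre-compiled schema. */
    public Set<ValidationMessage> validate(String serviceName, String requestBody) throws Exception {
        // Assumes refresh() has already been called for this service (e.g. at start-up).
        JsonNode body = MAPPER.readTree(requestBody);
        return schemas.get(serviceName).validate(body);
    }

    /** Blocking subscriber; run it on its own thread at service start-up. */
    public void listenForChanges() {
        new Jedis("localhost", 6379).subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String serviceName) {
                refresh(serviceName);  // recompile only the schema that changed
            }
        }, "schema-changed");          // hypothetical channel name
    }
}
```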
I am using MFP 8.0, and there is a requirement to implement caching at the adapter level.
Whenever the MFP server starts, we want to load the entire database into a cache that lives until the server restarts.
Then, whenever a user triggers a transaction or an adapter procedure that would normally call the database, it should read from the cache instead.
Adapters support read-only and transactional access modes to back-end systems.
Adapters are Maven projects that contain server-side code implemented in either Java or JavaScript. Adapters are used to perform any necessary server-side logic, and to transfer and retrieve information from back-end systems to client applications and cloud services.
JSONStore is an optional client-side API providing a lightweight, document-oriented storage system. JSONStore enables persistent storage of JSON documents. Documents in an application are available in JSONStore even when the device that is running the application is offline. This persistent, always-available storage can be useful to give users access to documents when, for example, there is no network connection available on the device.
From your description, assuming you are talking about a custom DB where your data is stored, you need to implement the caching logic yourself.
Adapters have two classes, <AdapterName>Application.java and <AdapterName>Resource.java. <>Application.java contains the lifecycle methods - init() and destroy().
Put your custom code for loading data from your DB into the cache in the init() method, and also take care of clearing it in destroy().
Now during transactional access (which hits <>Resource.java), you refer to the cache you have already created.
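A minimal sketch of that idea, assuming the standard MFP 8.0 Java adapter skeleton (an application class extending MFPJAXRSApplication plus a JAX-RS resource class); the cache key/value types and the loadAllRowsFromDatabase() helper are placeholders for your own JDBC or datasource code:

```java
// <AdapterName>Application.java - lifecycle class, runs once per adapter deployment
import com.ibm.mfp.adapter.api.MFPJAXRSApplication;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserCacheApplication extends MFPJAXRSApplication {

    // Shared in-memory cache, filled once in init() and read by the resource class.
    static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    @Override
    protected void init() throws Exception {
        // Hypothetical load: replace with your real JDBC/datasource query,
        // e.g. SELECT id, payload FROM some_table, putting each row into CACHE.
        CACHE.putAll(loadAllRowsFromDatabase());
    }

    @Override
    protected void destroy() throws Exception {
        CACHE.clear();  // release the heap when the adapter is undeployed
    }

    @Override
    protected String getPackageToScan() {
        // Scan this package for JAX-RS resources, as in the MFP adapter template.
        return getClass().getPackage().getName();
    }

    // Placeholder kept only so the sketch compiles; the real method would hit the DB.
    private Map<String, String> loadAllRowsFromDatabase() {
        return new ConcurrentHashMap<>();
    }
}
```

```java
// <AdapterName>Resource.java - request-time class; reads the cache instead of the DB
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/users")
public class UserCacheResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.TEXT_PLAIN)
    public String getUser(@PathParam("id") String id) {
        // Transactional access hits this class; serve the value cached in init().
        return UserCacheApplication.CACHE.get(id);
    }
}
```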
Your requirement, however, may not be ideal for heavily loaded systems. You need to consider the following:
a) Your adapter initialization is delayed, and any badly written code can also break it. An adapter isn't available to service requests until it has been initialized. In a clustered environment, the adapter load in all cluster members will be delayed, depending on the amount of data you are loading, and any client request intended for this adapter will get a runtime exception until the initialization is complete.
b) Holding the cache in memory means that much heap space is used up. If your DB keeps growing, this adversely affects adapter initialization as well as heap usage.
c) You are in charge of keeping the data up to date and also of cleaning it up after use.
To summarize: while it is possible, it is not recommended. It may work for a very small data set, but it will not scale well. Adapters are designed to give you transactional access to data and back-end systems, and you should use them the way they were designed.
I have a WCF service which is called in 4 different places in my system. It returns approx 500 records from the database each time it's called.
I would like to use a cache instead of making the call to WCF every time, because the data in the DB will remain unchanged.
Is there anything built into WCF for this or do I have to create my own solution?
There are several classes in .NET that allow you to cache objects in memory, and there are also open-source solutions which will do this for you.
If you wish to code this yourself, one class you can use is MemoryCache.
An open source solution can be found here: Redis.IO
In addition to Jon's answer, you can also use SQL Cache Dependency.
If your WCF Web HTTP service depends on data stored in a SQL database, you may want to cache the service's response and invalidate the cached response when data in the SQL database table changes.
I'm writing a CMIS interface (server) for my application. The server needs to load data from a database to process each request. At the moment I'm loading the same data for every request.
Is there a common way to cache this data? Are cookies supported by every CMIS client? Is there another way to cache this data?
Thank you
You should not rely on cookies. Several clients and client libraries support them, but not all. Cookies can help and you should make use of them, but be prepared for simple clients without cookie support.
Since your data is usually bound to a user, you can build a cache keyed by username. What you can and should cache, though, depends on your repository and your use case. Repository info and type definitions are good candidates, but you should be careful with everything else.
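A minimal sketch of such a username-keyed cache, assuming Guava's CacheBuilder; the size limit, expiry time, and the choice of value type are assumptions to adapt to your repository:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

/** Per-user cache for data that is expensive to reload on every CMIS call. */
public class PerUserCache<V> {

    private final Cache<String, V> cache = CacheBuilder.newBuilder()
            .maximumSize(1_000)                       // bound memory use
            .expireAfterWrite(10, TimeUnit.MINUTES)   // tolerate mild staleness
            .build();

    /** Returns the cached value for this user, loading it once if absent. */
    public V get(String username, Callable<V> loader) throws ExecutionException {
        return cache.get(username, loader);
    }

    /** Drop one user's entry, e.g. after a change that invalidates it. */
    public void invalidate(String username) {
        cache.invalidate(username);
    }
}
```

In an OpenCMIS-style service you would typically key on the authenticated user from the call context, e.g. something like `typeCache.get(callContext.getUsername(), () -> loadTypeDefinitions())`, where loadTypeDefinitions is your own loader; repository info can be cached the same way.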
I've got an issue with WCF, streaming, and security that isn't the biggest deal, but I wanted to get people's thoughts on how I could get around it.
I need to allow clients to upload files to a server, and I'm allowing this by using the transferMode="StreamedRequest" feature of the BasicHttpBinding. When they upload a file, I'd like to transactionally place this file in the file system and update the database with the metadata for the file (I'm actually using Sql Server 2008's FILESTREAM data type, that natively supports this). I'm using WCF Windows Authentication and delegating the Kerberos credentials to SQL Server for all my database authentication.
The problem is that, as the exception I get helpfully notes, "HTTP request streaming cannot be used in conjunction with HTTP authentication." So, for my upload file service, I can't pass the Windows authentication token along with my message call. Even if I weren't using SQL Server logins, I wouldn't even be able to identify my calling client by their Windows credentials.
I've worked around this temporarily by leaving the upload method unsecured, and having it dump the file to a temporary store and return a locator GUID. The client then makes a second call to a secure, non-streaming service, passing the GUID, which uploads the file from the temporary store to the database using Windows authentication.
Obviously, this isn't ideal. From a performance point of view, I'm doing an extra read/write to the disk. From a scalability point of view, there's (in principle, with a load balancer) no guarantee that I hit the same server with the two subsequent calls, meaning the temporary file store has to be in a shared location, which is not a scalable design.
Can anybody think of a better way to deal with this situation? Like I said, it's not the biggest deal, since a) I really don't need to scale this thing out much, there aren't too many users, and b) it's not like these uploads/downloads are getting called a lot. But still, I'd like to know if I'm missing an obvious solution here.
Thanks,
Daniel
We've got a smart client that talks to a SQL Server database via WCF, displaying the entities in the database, and allowing the user to edit those entities.
Some of the WCF calls return a large data set. Since this data set doesn't change very often, I'm considering some sort of write-through cache on the client, and only getting the deltas from the WCF service.
That is: the client both reads from the service and writes to the service.
I'm not looking for disconnected/offline operation, but since the majority of the data doesn't change very often, I'd probably implement this with a local data store.
I don't want the local store to get too stale, and I don't think I'm too concerned about conflict resolution, because updates will always go straight to the WCF service -- think of it as a write-through cache.
Would Microsoft's Sync Framework be good for this? Could I use a local SQL-CE cache and perform the updates over WCF? The service end has a SQL Server 2005/2008 backend, but I don't want to talk to it directly. Does Sync Framework integrate well with WCF?
Are there other solutions out there? Should I roll something myself?
I don't think you have to couple it to WCF at all. FeedSync allows you to publish directly to an RSS feed.
The only thing I'm not too sure about is whether it would be suitable for a "large dataset", though. Since you don't need two-way replication, if your dataset is extremely large you might want to write your own WCF implementation to optimize it, especially for the initial population.