I'm interested in understanding how adapter invocations work from the Worklight server's point of view in the following situation:
Basically, we are defining an architecture for several (n) adapters that must use a common function.
We are planning to create a dedicated adapter for this, so that each adapter can call the "common" procedure using the WL.Server.invokeProcedure API.
My doubt is whether a large number (y) of requests from these n adapters, all calling the "common" adapter, could cause performance issues on the Worklight Server, since it would receive many invocations of a single procedure.
What I would like to understand (or at least have officially confirmed) is: when the Worklight server receives many invocations of a single procedure of an adapter (in particular, an HTTP adapter), how does it manage them (e.g. does the WL Server handle each invocation on a separate thread in parallel, or does it queue them)? And is sharing a procedure between different adapters through an additional adapter a common practice (or is there an alternative that avoids the extra invocation to the WL server)?
I've read the Performance and Scalability documents, but I haven't noticed information on this specific point.
One aspect of adapter performance settings that may be of interest to you is maxConcurrentConnectionsPerNode.
maxConcurrentConnectionsPerNode – the maximum number of concurrent requests that can be performed on the back-end application from a Worklight server node. This parameter is set in the adapter.xml file, in the connectivity entry.
There are two considerations when setting this parameter:
If the back end has no limit on incoming connections, a rough rule of thumb is to set the number of connection threads per adapter to the number of HTTP threads in the application server. A more precise rule of thumb is to determine the percentage of requests going to each back end and set the number accordingly.
If the back end does have a limit on incoming connection threads, each node can have only BACKEND_MAX_CONNECTIONS / NUM_OF_CLUSTER_NODES connection threads, where BACKEND_MAX_CONNECTIONS is the maximum number of incoming connections defined on the back-end server and NUM_OF_CLUSTER_NODES is the number of Worklight server nodes in the cluster. For example, a back end that accepts at most 100 concurrent connections behind a 4-node Worklight cluster leaves 25 connection threads per node.
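For reference, here is a rough sketch of where that setting lives in an HTTP adapter's adapter.xml; the domain, port, and the value 10 are placeholders, and depending on your Worklight version the attribute sits either on a loadConstraints element (as shown below) or directly on the connection policy:

    <connectivity>
        <connectionPolicy xsi:type="http:HTTPConnectionPolicyType">
            <protocol>http</protocol>
            <domain>backend.example.com</domain>
            <port>8080</port>
        </connectionPolicy>
        <!-- Upper bound on concurrent back-end requests from this Worklight node -->
        <loadConstraints maxConcurrentConnectionsPerNode="10"/>
    </connectivity>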
You can also look into the Tuning Worklight Server documentation that covers some of the aspects you mentioned above:
https://www.ibm.com/developerworks/community/blogs/worklight/entry/tuning_worklight_server?lang=en
I want to develop a Web API using .NET Core that needs to be able to handle a large number of concurrent requests. Furthermore, the Web API needs to connect to a database. The Web API will be internal and won't be exposed on the Internet.
I'm considering two possibilities:
Hosting an ASP.NET Core application in Kestrel in one or more containers / EC2 instances, with or without IIS.
A serverless solution using AWS Lambda
I'm trying to understand what considerations I need to be aware of that will impact the performance and cost of each of these solutions.
I understand that a Kestrel server with an application that uses async / await patterns can be expected to handle large numbers of concurrent requests. Database connection pooling means database connections can be efficiently shared between requests.
In this forum post on AWS Lambda I read that:
I understand this question more in terms of whether AWS Lambda calls of the form response = Function(request) are thread-safe.
The answer is yes.
It is because of one very simple reason: AWS Lambda does not allow another thread to invoke the same lambda instance before the previous thread exits. And since multiple lambda instances of the same function do not share resources, this is not an issue either.
I understood and interpreted this as meaning:
Each lambda instance can only handle one request at a time.
This means a large number of instances are needed to handle a large number of concurrent requests.
Each lambda instance will have its own database connection pool (probably with only a single connection).
Therefore the total number of database connections is likely to be higher, bounded by the Lambda function level concurrency limit.
Also as traffic increases and new lambda instances are created to respond to the demand, there will be significant latency for some requests due to the cold start time of each lambda instance.
Is this understanding correct?
Yes, each lambda instance is completely isolated and runs as a single thread, so in your case there will be a lot of database connections.
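To make the per-instance behaviour concrete, here is a rough sketch of a Lambda handler (written in Java purely for illustration; the execution model is the same for a .NET handler, and the handler name, JDBC URL, credentials, and query are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // Sketch: each Lambda *instance* keeps its own connection. Concurrent requests are
    // served by separate instances, so total DB connections grow with concurrency.
    public class OrderStatusHandler implements RequestHandler<String, String> {

        // Created on cold start and reused across warm invocations of this one instance.
        private static Connection connection;

        private static Connection connection() throws SQLException {
            if (connection == null || connection.isClosed()) {
                // Placeholder JDBC URL and credentials.
                connection = DriverManager.getConnection(
                        "jdbc:postgresql://db.example.internal:5432/orders", "app", "secret");
            }
            return connection;
        }

        @Override
        public String handleRequest(String orderId, Context context) {
            // Only one invocation runs in this instance at a time, so the shared
            // connection is never used by two requests concurrently.
            try (var stmt = connection().prepareStatement("SELECT status FROM orders WHERE id = ?")) {
                stmt.setString(1, orderId);
                try (var rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("status") : "NOT_FOUND";
                }
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        }
    }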
One problem with your architecture is that you are trying to mix a scalable resource (in this case Lambda) with a non-scalable resource (in this case a relational database). I've seen such setups explode in very spectacular ways.
So in your case I'd either go with a number of static servers running Kestrel (or another high-performance web server), or replace the relational database with something that can scale smoothly, like DynamoDB or perhaps AWS Aurora.
I am using MFP 8.0, and there is a requirement to implement a cache at the adapter level.
Whenever the MFP server starts, we want to load the entire database into the cache and keep it there until the server restarts again.
Then, whenever a user hits a transaction or an adapter procedure that calls the database, it must read from the cache instead of calling the database.
Adapters support read-only and transactional access modes to back-end systems.
Adapters are Maven projects that contain server-side code implemented in either Java or JavaScript. Adapters are used to perform any necessary server-side logic, and to transfer and retrieve information from back-end systems to client applications and cloud services.
JSONStore is an optional client-side API providing a lightweight, document-oriented storage system. JSONStore enables persistent storage of JSON documents. Documents in an application are available in JSONStore even when the device that is running the application is offline. This persistent, always-available storage can be useful to give users access to documents when, for example, there is no network connection available in the device.
From your description, assuming you are talking about some custom DB where your data is stored, you need to implement the caching logic yourself.
Adapters have two classes: <AdapterName>Application.java and <AdapterName>Resource.java. <AdapterName>Application.java contains the lifecycle methods init() and destroy().
You should put the code that loads data from your DB into the cache in the init() method, and also take care of clearing the cache in destroy().
Now during transactional access (which hits <>Resource.java), you refer to the cache you have already created.
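A rough sketch of that pattern, assuming the standard MFP 8.0 Java adapter skeleton in which <AdapterName>Application.java extends MFPJAXRSApplication; the CUSTOMER_CACHE map and loadAllCustomers() are placeholders for your own cache structure and DB access code:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import com.ibm.mfp.adapter.api.MFPJAXRSApplication;

    public class CustomerAdapterApplication extends MFPJAXRSApplication {

        // Hypothetical in-memory cache shared with the adapter's resource class.
        public static final Map<String, String> CUSTOMER_CACHE = new ConcurrentHashMap<>();

        @Override
        protected void init() {
            // Load everything from the custom DB once, when the adapter is deployed/initialized.
            CUSTOMER_CACHE.putAll(loadAllCustomers());
        }

        @Override
        protected void destroy() {
            // Release the memory when the adapter is undeployed or the server stops.
            CUSTOMER_CACHE.clear();
        }

        private Map<String, String> loadAllCustomers() {
            // Placeholder for your own JDBC/DAO code that reads the whole table into a map.
            return Map.of();
        }
    }

The JAX-RS methods in <AdapterName>Resource.java can then read CustomerAdapterApplication.CUSTOMER_CACHE directly instead of opening a database connection.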
Your requirement, however, may not be ideal for heavily loaded systems. You need to consider the following:
a) Your adapter initialization is delayed, and any wrongly written code can also break it. An adapter isn't available to service requests until it has been initialized. In a clustered environment, the adapter load in all cluster members will be delayed in proportion to the amount of data you are loading, and any client request intended for this adapter will get a runtime exception until initialization is complete.
b) Holding the cache in memory means that much of the heap is used up. If your DB keeps growing, this adversely affects both adapter initialization and heap usage.
c) You are in charge of keeping the data up to date and of cleaning it up after use.
To summarize: while it is possible, it is not recommended. It may work for a very small data set, but it cannot scale well. Adapters are designed to provide transactional access to data and back-end systems, and you should use them the way they were designed to be used.
My app will work as follows:
I'll have a bunch of replica servers and a load balancer. The data updates will be managed outside CometD. EDIT: I still intend to notify each CometD server of those updates, if necessary, so they can respond back to clients.
The clients are only subscribing to those updates (i.e. read only), so the CometD server nodes don't need to know anything about each other's behavior.
Am I right in thinking I could have server side "client" instances on the load balancer, per client connection, where each instance listens on the same channel as its respective client and forwards any messages back to it? If so, are there any disadvantages to this approach, instead of using Oort?
Reading the docs about Oort, it seems that the nodes "know" about each other, which I don't need. Would it be better, then, for me to avoid using Oort altogether in my case? My concern is that if I ended up adding many, many nodes, the fact that they communicate with each other could mean unnecessary processing.
The description of the issue specifies that the data updates are managed outside CometD, but it does not detail how the CometD servers are notified of these data updates.
The two common solutions are A) to notify each CometD server or B) to use Oort.
In solution A) you have an event that triggers a data update, and some external application performs the data update on, say, a database. At this point the external application must notify the CometD servers that there was a data update. If the external application runs on a JVM, it can use the CometD Java client to send a message to each CometD server, notifying them of the data update; in turn, the CometD servers will notify the remote clients.
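A rough sketch of solution A), assuming the external application runs on a JVM and uses the CometD Java client; the node URLs and the /data/updates channel are placeholders:

    import java.util.List;
    import java.util.Map;

    import org.cometd.client.BayeuxClient;
    import org.cometd.client.transport.LongPollingTransport;
    import org.eclipse.jetty.client.HttpClient;

    // Sketch of solution A): the external application notifies every CometD node directly.
    public class DataUpdateNotifier {
        public static void main(String[] args) throws Exception {
            HttpClient httpClient = new HttpClient();
            httpClient.start();

            // In solution A) the external application must know every node explicitly.
            List<String> nodes = List.of(
                    "http://cometd1.example.com:8080/cometd",
                    "http://cometd2.example.com:8080/cometd");

            // Placeholder payload describing the data update.
            Map<String, Object> update = Map.of("entity", "price", "id", 42, "value", 99.9);

            for (String url : nodes) {
                BayeuxClient client = new BayeuxClient(url, new LongPollingTransport(Map.of(), httpClient));
                client.handshake();
                if (!client.waitFor(5_000, BayeuxClient.State.CONNECTED)) {
                    continue; // In a real system, log the failure and retry this node later.
                }
                // The server-side service listening on this channel notifies its remote clients.
                client.getChannel("/data/updates").publish(update);
                client.disconnect();
                client.waitFor(1_000, BayeuxClient.State.DISCONNECTED);
            }

            httpClient.stop();
        }
    }

Each CometD server would have a server-side listener on that channel that turns the notification into messages for its own subscribed remote clients, as described above.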
In solution B) the external application must notify just one CometD server that there was a data update; the Oort cluster will do the rest, broadcasting that message across the cluster, and then to remote clients.
Solution A) does not require the Oort cluster, but it requires the external application to know all the nodes and to send a message to each of them.
Solution B) uses Oort, so the external application needs only to know one Oort node.
Oort requires a bit of additional processing because the nodes are interconnected, but depending on the case this processing may be negligible, or the complications of notifying each CometD server "manually" (as in solution A) may be greater than running Oort.
I don't understand exactly what you mean by having "server side client instances on the load balancer". Load balancers typically don't run a JVM, so it is not possible to run CometD clients on them; that part does not sound right.
Besides the CometD documentation, you can also look at these slides.
I am using SQL Server 2008 R2 to connect to a number of other servers of the same type from within triggers and stored procedures. These servers are geographically distributed around the world and it is vital that any errors in communication between the servers are logged along with the data that was supposed to be sent so the communication may be re-attempted at a later time. The servers are participating in an Observer pattern with one of the servers acting as the observer and handling routing of messages between the other servers.
I am looking for specific advice on how best to handle errors in this situation, particularly connectivity errors and any potential pitfalls to look out for when performing queries on remote servers.
If you are using a linked server and sending the data to the other server over the linked-server connection, there is no inherent way to log these requests unless you add application logic to do so.
With a linked server, if one of the servers goes down, an error is thrown in the application logic; i.e. in your case the stored procedure or the trigger will fail, saying that the server does not exist or is down.
To avoid this, we tend to use Service Broker, which implements queue logic. In that case you can always keep the logging and also ensure that the messages will be delivered irrespective of server downtime (if the server is down, the message waits until it is read).
http://technet.microsoft.com/en-us/library/ms166104%28v=sql.105%29.aspx
Hope this helps
Linked servers may not be the best solution for the model you're trying to implement, since the resilience you require is very difficult to achieve in the case of a linked server communication failure.
The fundamental problem is that in the case of a linked server communication failure the database engine raises an error with a severity of 20, which is high enough to abort the currently executing batch - bypassing any error handling code in the batch (for example TRY...CATCH).
SQL Server 2005 and later include the procedure sp_testlinkedserver, which enables the availability of the linked server to be tested before attempting to execute commands. However, this doesn't get around problems caused by communication errors encountered during a command.
There are a couple of more robust options you could consider. One is the Service Broker, which provides an asynchronous message queuing model. This isn't a perfect fit for the observer pattern but the activation feature provides a means to implement push-notifications from a central point. Since you mention messaging, the conversation model employed by Service Broker might suit your aims.
The other option is transactional replication; this might be more suitable if the data flow is purely from the central server to the observers.
I am new to WebLogic Server and I am using a work manager. I want to know what a work manager is and why we need it. What is the difference between a normal request without a work manager and one with a work manager?
I think the documentation is rather good on this subject.
WebLogic Server prioritizes work and allocates threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. Administrators can configure a set of scheduling guidelines and associate them with one or more applications, or with particular application components. For example, you can associate one set of scheduling guidelines with one application, and another set of guidelines with another application. At run time, WebLogic Server uses these guidelines to assign pending work and enqueued requests to execution threads.
Essentially, with work managers you can attach a scheduling policy to an application to, for example, make sure that a specific application gets a fair share of the available computing resources under heavy load. Or you might want to restrict the maximum number of threads that will be allocated to an application, to prevent a buggy or untested application from bringing the whole application server to its knees. (But surely all apps have been tested not to do anything like that.... ;) )
Outside of modifying the default allocation algorithms, the Work Manager is also useful if you are using a Foreign JMS Provider (such as IBM MQ) and need to process more than 16 messages at a time.
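As an illustration of the thread-restriction case, a work manager with a max-threads-constraint can be declared in a web application's weblogic.xml along these lines; the names and the count of 32 are placeholders:

    <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
        <work-manager>
            <name>OrderAppWorkManager</name>
            <!-- Cap the number of execute threads this application may consume -->
            <max-threads-constraint>
                <name>OrderAppMaxThreads</name>
                <count>32</count>
            </max-threads-constraint>
        </work-manager>
        <!-- Associate this web application's requests with the work manager above -->
        <wl-dispatch-policy>OrderAppWorkManager</wl-dispatch-policy>
    </weblogic-web-app>

The same kind of constraint, referenced through a dispatch policy (in weblogic-ejb-jar.xml for an MDB), is typically how a consumer of a Foreign JMS provider is allowed to process more than the default 16 messages at a time.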