Redis best practices recommend using a long-lived ConnectionMultiplexer. However, I want to use Redis inside an Azure Consumption function that may only live a couple of seconds (but runs many times).
I was wondering if I had code like this:
private static Lazy<ConnectionMultiplexer> lazyRedisConnection = new Lazy<ConnectionMultiplexer>(() =>
{
    string cacheConnection = ConfigurationManager.AppSettings["RedisKey"];
    return ConnectionMultiplexer.Connect(cacheConnection);
});

public static ConnectionMultiplexer RedisConnection
{
    get
    {
        return lazyRedisConnection.Value;
    }
}
and ran it on an Azure Consumption function that executes e.g. 10,000 times: because of the way Consumption functions work, would this actually create 10,000 connections rather than reuse a single one?
Would it be safer to manually create/dispose a connection per function?
Even though a single function execution might only take a couple of seconds, the function instance (server) is reused for multiple requests. In practice, with a constant stream of incoming requests, each instance lives for a long time (minutes to hours).
Database connections should be reused between the calls that are executed on the same instance.
Static fields are initialized once and then reused across multiple executions, so your code will not create 10,000 connections, but maybe 1 or 2 or 3, depending on how many instances the scale controller creates.
When an instance goes down, your App Domain will be recycled, so the connections to Redis will be killed.
I would suggest you go ahead with the code you quoted.
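For illustration, here is a minimal sketch of how the shared multiplexer might be consumed from a function body, assuming using StackExchange.Redis; the helper name and key are hypothetical, not part of the question:

// Hypothetical helper called from the function body. Every execution on the
// same instance reuses the one multiplexer; only the first call pays the
// connection cost.
public static string GetCachedValue(string key)
{
    IDatabase cache = RedisConnection.GetDatabase(); // cheap, opens no new connection
    return cache.StringGet(key);
}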
I've been using MVC since version 2, and lately I have come across a project where all of the controller actions are 'async', returning Tasks, and I am trying to understand why somebody would do this.
The View Model for each view is built via an async call to an API. I understand that in order to use the await keyword one must use an async method (and return a Task), but surely without the View Model then the view will fail. There is no choice but to wait for the API to build my View Model.
public async Task<ActionResult> MyCar()
{
    return View(await MyAPI.BuildMyCarViewModel());
}
For what reason would controller actions need to be asynchronous?
Let's assume that MyAPI.BuildMyCarViewModel() needs 15 seconds to execute. Then let's assume that you have 10,000 users who all decide to load some model within a 2-second window, and that you don't use caching (for the sake of example).
IIS by default has a thread pool of 5,000 threads.
In the described case, the IIS application pool will be busy with 5,000 threads, which means 5,000 of your users wait for their responses while the other 5,000 users wait for that code to finish executing before their requests are even picked up. But with async/await, .NET generates a state machine: each thread runs until the moment of awaiting and is then released to do other useful work. As soon as MyAPI.BuildMyCarViewModel() returns its result, the same or another thread resumes the request and returns the result to you. As an outcome, the IIS application pool is not exhausted quickly by long-running tasks, and your users receive responses much faster than without async/await. To put it simply, async/await gives you a way to avoid exhausting the thread pool quickly on long-running fragments of code.
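For contrast, here is a hedged sketch of the synchronous equivalent of the action from the question; the blocking BuildMyCarViewModelSync variant is hypothetical:

// Synchronous version: the request thread is blocked for the full 15 seconds
// of the API call and cannot serve any other request in the meantime.
public ActionResult MyCar()
{
    var model = MyAPI.BuildMyCarViewModelSync(); // hypothetical blocking variant
    return View(model);
}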
I have an MSDN article on the topic of async ASP.NET. In summary, the benefit is that the request does not take up a thread for the duration of the request. This allows your web app to scale if your backend can scale.
I was wondering if it is possible to keep an RFC called via JCo open in SAP memory so I can cache stuff. This is the scenario I have in mind:
Suppose a simple function increments a number. The function starts with 0, so the first time I call it with import parameter 1 it should return 1.
The second time I call it, it should return 2 and so on.
Is this possible with JCo?
If I have the function object and make two successive calls, it always returns 1.
Can I do what I'm depicting?
Designing an application around the stability of a certain connection is almost never a good idea (unless you're building a stability monitoring software). Build your software so that it just works, no matter how often the connection is closed and re-opened and no matter how often the session is initialized and destroyed on the server side. You may want to persist some state using the database, or you may need to (or want to) use the shared memory mechanisms provided by the system. All of this is inconsequential for the RFC handling itself.
Note, however, that you may need to ensure that a sequence of calls happen in a single context or "business transaction". See this question and my answer for an example. These contexts are short-lived and allow for what you probably intended to get in the first place - just be aware that you should not design your application so that it has to keep these contexts alive for minutes or hours.
The answer is yes. In order to make it work, you need to implement two tasks:
The ABAP code needs to store its variable in the ABAP session memory. A variable in the function group's global section will do that. Or alternatively you could use the standard ABAP technique "EXPORT TO MEMORY/IMPORT FROM MEMORY".
JCo needs to keep the user session between calls. By default, JCo resets the backend-side user session after every call, which of course destroys all data stored in that user session memory. In order to prevent it, you need to use JCoContext.begin() and JCoContext.end() to get a stateful RFC connection that keeps the user session alive on backend side.
Sample code:
JCoDestination dest = ...
JCoFunction func = ...
try {
    JCoContext.begin(dest);
    func.execute(dest); // Will return "1"
    func.execute(dest); // Will return "2"
}
catch (JCoException e) {
    // Handle network problems, ABAP exceptions, SYSTEM_FAILUREs
}
finally {
    // Make sure to release the stateful connection, otherwise you have
    // a resource leak in your program and on backend side!
    JCoContext.end(dest);
}
I am trying to write a control panel to
Inform about certain KPIs
Enable the user to initiate certain requests/jobs by pressing a button that then runs a stored proc on the DB or sets a specific setting, etc.
So far, so good, except I would like to run some bigger jobs where the length of time the job runs is unknown and could exceed both the script timeout period AND the time the user is willing to wait for a response.
What I want is a "fire and forget" process so the user hits the button and even if they kill the page or turn off their phone they know the job has been initiated and WILL complete.
I was looking into C#'s BeginExecuteNonQuery, which is an async call to the query, so the proc is fired but the control doesn't have to wait for a response before carrying on. However, I don't know what happens when the page/app that fired it is shut down.
Also, I was thinking of some sort of Ajax command that fires the code in a page behind the scenes, so the user doesn't know it is running; but then again, I believe that if the user shuts the page down, the script will die and the command will die on the server as well.
The only way I know of for certain is a "queue" table: jobs are inserted into this table, then a SQL Server Agent job comes along every minute or two checking for new inserts and runs the code if there are any. That way it is all on the DB, and only a DB crash will destroy it. It won't help when multiple long-running jobs are waiting to run concurrently, but it's the only approach I can be sure of that will ensure the code is run at all.
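To sketch what I mean (the JobQueue table, its columns, and the EnqueueJob helper are hypothetical illustrations), the page would only do a quick INSERT and return, while the agent job does the heavy lifting:

using System.Data.SqlClient; // .NET Framework ADO.NET

// Hypothetical enqueue: the page inserts a row and returns immediately.
// A SQL Server Agent job polls JobQueue every minute or two and runs the work.
public static void EnqueueJob(string connectionString, string jobName)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO JobQueue (JobName, RequestedAt, Status) " +
        "VALUES (@name, GETUTCDATE(), 'Pending')", conn))
    {
        cmd.Parameters.AddWithValue("@name", jobName);
        conn.Open();
        cmd.ExecuteNonQuery(); // fast; the user can close the page, the job still runs
    }
}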
Any ideas?
Any language is okay.
Since web browsers are disconnected from the server once a request has been sent, requests always run for their full duration regardless of what the client does. The governing factor isn't what the browser does, but how long the web site itself will allow an action to continue.
IIS (and in general, web servers) have a timeout period for requests, where if the work being done takes simply too long, the request is terminated. This would involve abruptly stopping whatever is taking so long, such as a database call, running code, and so on.
Simply making your long-running actions asynchronous may seem like a good idea; however, I would recommend against that. The reason is that in ASP and ASP.NET, asynchronously-called code still consumes a thread in a way that blocks other legitimate requests from getting through (in some cases you can end up consuming two threads!). This could have performance implications in non-obvious ways. It's better to just increase the timeout and allow the synchronously blocking task to complete. There's nothing special you have to do to make such a request complete fully; it will occur even if the sender closes his browser or turns off his phone immediately after (presuming the entire request was received).
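For reference, increasing the timeout in ASP.NET is an httpRuntime setting in web.config; a minimal sketch (the 600-second value is just an example, and executionTimeout is only enforced when debug is off):

<system.web>
  <!-- Allow requests to run for up to 10 minutes before being terminated. -->
  <httpRuntime executionTimeout="600" />
</system.web>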
If you're still concerned about making certain work finish, no matter what is going on with the web request, then it's probably better to create an out-of-process server/service that does the work and to which such tasks can be handed off. Your web site then invokes a method that, inside the service, starts its own async thread to do the work and then immediately returns. Perhaps it also returns a request ID, so that the web page can check on the status of the requested work later through other methods.
You may use an asynchronous method and call the query from that method.
Your simple method can be changed into an async method in the following manner.
Consider that you have a TestMethod to be called asynchronously:
using System;
using System.Threading;

class AsyncDemo
{
    public string TestMethod(out int threadId)
    {
        // Call your query here (the long-running work).
        threadId = Thread.CurrentThread.ManagedThreadId;
        return "Call complete";
    }
}

// Create an async handler delegate:
public delegate string AsyncMethodCaller(out int threadId);
In your main program, or wherever you have to call the TestMethod:
public static void Main()
{
    // The asynchronous method puts the thread id here.
    int threadId;

    // Create an instance of the test class.
    AsyncDemo ad = new AsyncDemo();

    // Create the delegate.
    AsyncMethodCaller caller = new AsyncMethodCaller(ad.TestMethod);

    // Initiate the asynchronous call.
    IAsyncResult result = caller.BeginInvoke(out threadId, null, null);

    // Call EndInvoke to wait for the asynchronous call to complete,
    // and to retrieve the results.
    string returnValue = caller.EndInvoke(out threadId, result);

    Console.WriteLine("The call executed on thread {0}, with return value \"{1}\".",
        threadId, returnValue);
}
From my experience, a Classic ASP or ASP.NET page will run until complete, even if the client disconnects, unless you have something in place that checks whether the client is still connected (and reacts if they are not), or a timeout is reached.
However, it would probably be better practice to run these sorts of jobs as scheduled tasks.
On submitting your web page could record in a database that the task needs to be run and then when the scheduled task runs it checks for this and starts the job.
Many web hosts and/or web control panels allow you to create scheduled tasks that call a URL on schedule.
Alternately if you have direct access to the web server you could create a scheduled task on the server to call a URL on schedule.
Or, in ASP.NET, you can put some code in global.asax to run on a schedule. Be aware, though, that if your website is set to stop after a certain period of inactivity, this will not work unless there is frequent, continuous activity.
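A minimal sketch of that global.asax idea, assuming a System.Threading.Timer and a hypothetical RunPendingJobs method that checks the database for queued work; as noted above, the timer dies when the application unloads:

// In global.asax.cs. Fires once a minute while the application stays loaded.
private static System.Threading.Timer jobTimer;

protected void Application_Start(object sender, EventArgs e)
{
    jobTimer = new System.Threading.Timer(
        _ => RunPendingJobs(),        // hypothetical: check the job table, run work
        null,
        TimeSpan.Zero,                // first check immediately
        TimeSpan.FromMinutes(1));     // then every minute
}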
I need to run background logic that takes around 25-30 sec inside a WCF method that can't take more than 1 sec to complete. I've decided to wrap that logic in a WaitCallback and pass it to ThreadPool.QueueUserWorkItem right before I exit the web method. Initially it worked OK, but now I'm having second thoughts, because I suspect that sometimes the QueueUserWorkItem call doesn't return in a timely manner, and as a result the web method regularly fails to respond within 1 sec. Are there any issues with using QueueUserWorkItem inside WCF methods?
No, not as such, but your question touches upon a more general problem: what to do with long-running service calls? You can either:
Change the configs so that client and server tolerate long service calls, i.e. increase timeouts
Or, design your service calls with a start / get current progress / get final result API, all of which return quickly:
int jobID = serviceProxy.StartJob();
float progress = serviceProxy.GetJobProgress(jobID);
Result finalResult = serviceProxy.GetJobResult(jobID);
This is more work, but a better design. You now also have to maintain a list of running jobs (your async processing, which could use QueueUserWorkItem or whatever), but all the service calls return quickly.
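To illustrate, here is a hedged sketch of what the server side of such an API could look like; the JobManager and JobState types are hypothetical, and a real service would also need a GetJobResult counterpart and job cleanup:

using System.Collections.Concurrent;
using System.Threading;

// Hypothetical in-memory job tracking for the start / poll-progress pattern.
public class JobManager
{
    private static int nextId;
    private static readonly ConcurrentDictionary<int, JobState> jobs =
        new ConcurrentDictionary<int, JobState>();

    public int StartJob()
    {
        int id = Interlocked.Increment(ref nextId);
        JobState state = new JobState();
        jobs[id] = state;
        ThreadPool.QueueUserWorkItem(_ => RunJob(state)); // the 25-30 sec work
        return id; // returns to the caller immediately
    }

    public float GetJobProgress(int jobID)
    {
        return jobs[jobID].Progress; // 0.0 .. 1.0
    }

    private static void RunJob(JobState state)
    {
        for (int i = 1; i <= 10; i++)
        {
            Thread.Sleep(3000); // stand-in for a slice of the real work
            state.Progress = i / 10f;
        }
    }

    private class JobState
    {
        public volatile float Progress;
    }
}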
I'm after some advice on polling an external web service every 30 secs from a Domino server side action.
A quick bit of background...
We track the location of cars through the TomTom API. We now have a requirement to show this in our web app, overlaid onto a map (Google, Bing, etc.) and mashed up with other lat/long data from our application. Think of it as dispatching calls to taxis, where we want to assign those calls to the taxis (it's not taxis/calls, but it is a similar process). We refresh the dispatch controllers' screens quite aggressively, so they can see the status of all the objects and assign to the nearest car. If we trigger the pull of data from the refresh of the users' screens, we get into some tricky server-side control; otherwise we will hit the maximum allowable requests per minute to the TomTom API.
Originally I was going to schedule an agent to poll the web service, write to a cached object in our app, and have the refreshing dispatch controllers' screens pull the data from our cache... great, except the user requirement is that our cache must be updated every 30 secs. I can create a program document that runs every 1 min, but that is still not aggressive enough.
So we are currently left with: our .NET guy will create a service that polls TomTom every 30 secs and we retrieve from his service, or I figure out a way to do it in Domino. It would be nice to do it in the Domino database, and not some stand-alone Java app or .NET service, to keep as much of the logic as possible in one system (Domino).
We use backing beans heavily in our system. I hope to test this later today, but would this seem like a sensible route to go down?:
Spawning threads in a JSF managed bean for scheduled tasks using a timer
...or are there limitations I am not aware of? Has anyone tackled this before in Domino, or have any comments?
Thanks in advance,
Nick
Check out DOTS (Domino OSGi Tasklet Service): http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=OSGI%20Tasklet%20Service%20for%20IBM%20Lotus%20Domino
It allows you to define background Java tasks on a Domino server that have all the advantages of agents (can be scheduled or triggered) with none of the performance or maintenance issues.
If you cache the data in a bean (application or session scoped), keep a date object that contains the last refreshed date. When the data is requested, check the last cached date against the current time. If it's more than or equal to 30 seconds, refresh the data.
A way of doing it would be to write a managed bean which is created in the application scope (i.e. there can only be one). In this managed bean you take care of the 30 sec polling of the web service via good old Java web service implementation, plus a Java thread which you start when the managed bean is created, something like:
import java.util.HashMap;
import java.util.Map;

public class ServicePoller {
    private static ServicePollThread myThread = null;

    public ServicePoller() {
        if (myThread == null) {
            myThread = new ServicePollThread();
            new Thread(myThread).start();
        }
    }
}

class ServicePollThread implements Runnable {
    private Map<String, Object> cache = new HashMap<String, Object>();
    private volatile boolean running = true;

    public void run() {
        while (running) {
            doPoll();
            try {
                Thread.sleep(30000); // poll every 30 seconds
            } catch (InterruptedException e) {
                running = false;
            }
        }
    }
    ....
}
This managed bean will then poll the web service every 30 seconds and save its findings in a hashmap or some other managed-bean classes. This way you don't need to run an agent or anything like that, and the dispatch screen can retrieve its data from the cache.
Another option would be to write a servlet (that should be possible with the ExtLib, but I can't find the information right now) which does the threading and reads the service for you. Then from your database you should be able to read the servlet's cache and use it wherever you need.
As Tim said, DOTS; or, as jjtbsomhorst said, a thread or an Eclipse job.
I've created a video describing DOTS: http://www.youtube.com/watch?v=CRuGeKkddVI&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=4&feature=plcp
Next Monday I'll publish a sample how to do threads and Eclipse jobs. Here is a preview video: http://www.youtube.com/watch?v=uYgCfp1Bw8Q&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=1&feature=plcp