Play Framework 2.2: number of parallel threads configuration - playframework-2.2

Hi, we are using Play Framework 2.2. We are doing concurrent execution using Promise.promise, but as of now it spawns a batch of 8 threads. Once those 8 threads finish, it moves on to the next 8. We need more concurrency than this. How can we configure this parallelism in our configuration?
Thanks in advance

I found a solution for the above: put the piece of code below in the application.conf file. You can adjust the parallelism-max and parallelism-min values.
play {
  akka {
    loggers = ["akka.event.Logging$DefaultLogger", "akka.event.slf4j.Slf4jLogger"]
    loglevel = WARNING
    actor {
      default-dispatcher = {
        fork-join-executor {
          parallelism-factor = 1.0
          # with min == max, the default dispatcher is pinned to exactly 500 threads
          parallelism-min = 500
          parallelism-max = 500
        }
      }
    }
  }
}

Related

Concurrent processing of Channels

I'm following this tutorial to create a hosted service. The program runs as expected. However, I want to process the queued items concurrently.
In my app there are 4 clients, and each of these clients can process 4 items at a time. So at any given time, 16 items should be processed in parallel.
So based on these requirements, I've modified the code a bit:
In the MonitorLoop class:
private int count = 0;

private async ValueTask MonitorAsync()
{
    while (!_cancellationToken.IsCancellationRequested)
    {
        await _taskQueue.QueueAsync(BuildWorkItem);
        Interlocked.Increment(ref count);
        Console.WriteLine($"Count: {count}");
    }
}
and in the same class:
if (delayLoop == 3)
{
    _logger.LogInformation("Queued Background Task {Guid} is complete.", guid);
    Interlocked.Decrement(ref count);
}
This shows that if I set the "Capacity" to 4, the count never goes above 5.
Basically, if the queue is full, it will wait until there's room for one more.
The problem is that the items are processed one at a time.
Here's the code for the BackgroundProcessing method on the QueuedHostedService class:
private async Task BackgroundProcessing(CancellationToken stoppingToken)
{
    while (!stoppingToken.IsCancellationRequested)
    {
        var workItem = await TaskQueue.DequeueAsync(stoppingToken);
        try
        {
            // instead of getting a single item from the queue, somehow, here
            // we should be able to process them in parallel for 4 clients
            // with a limit on the maximum items each client can process
            await workItem(stoppingToken);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Error occurred executing {WorkItem}.", nameof(workItem));
        }
    }
}
I want to process them in parallel. I'm not sure if using a Channel as the queue in the system is the best solution; maybe I should use a ConcurrentQueue instead. But again, I'm not sure how to achieve a robust implementation that can have 4 clients with 4 threads each.
If you want four processors, then you can refactor the code to use four instances of your main loop, and use Task.WhenAll to (asynchronously) wait for all of them to complete:
private async Task BackgroundProcessing(CancellationToken stoppingToken)
{
    var task1 = ProcessAsync(stoppingToken);
    var task2 = ProcessAsync(stoppingToken);
    var task3 = ProcessAsync(stoppingToken);
    var task4 = ProcessAsync(stoppingToken);
    await Task.WhenAll(task1, task2, task3, task4);

    // the local function's parameter is named differently from the outer
    // stoppingToken to avoid a name-shadowing compiler error
    async Task ProcessAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var workItem = await TaskQueue.DequeueAsync(token);
            try
            {
                await workItem(token);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error occurred executing {WorkItem}.", nameof(workItem));
            }
        }
    }
}
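As a side note, the same pattern scales to the full 16 parallel items mentioned in the question (4 clients × 4 items each). Here is a sketch of the generalized version, assuming the same TaskQueue and _logger, plus a using System.Linq; directive:

private async Task BackgroundProcessing(CancellationToken stoppingToken)
{
    // spin up 16 identical consumers (4 clients x 4 items each);
    // each consumer pulls work items from the shared queue
    var tasks = Enumerable.Range(0, 16)
        .Select(_ => ProcessAsync(stoppingToken))
        .ToList();
    await Task.WhenAll(tasks);

    async Task ProcessAsync(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var workItem = await TaskQueue.DequeueAsync(token);
            try
            {
                await workItem(token);
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error occurred executing {WorkItem}.", nameof(workItem));
            }
        }
    }
}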
I'm not sure how to achieve a robust implementation
If you want a robust implementation, then you can't use that tutorial, sorry. The primary problem with that kind of background work is that it will be lost on any app restart. And app restarts are normal: the server can lose power or crash, OS or runtime patches can be installed, IIS will recycle your app periodically, and whenever you deploy your code, the app will restart. And whenever any of these things happen, all in-memory queues like channels will lose all their work.
A production-quality implementation requires a durable queue at the very least. I also recommend a separate background processor. I have a blog series on the subject that may help you get started.

Sending HTTP requests in a MacCatalyst app opened in command line [duplicate]

When writing a Command Line Tool (CLT) in Swift, I want to process a lot of data. I've determined that my code is CPU bound and performance could benefit from using multiple cores. Thus I want to parallelize parts of the code. Say I want to achieve the following pseudo-code:
Fetch items from database
Divide items in X chunks
Process chunks in parallel
Wait for chunks to finish
Do some other processing (single-thread)
Now I've been using GCD, and a naive approach would look like this:
let group = dispatch_group_create()
let queue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT)

for chunk in chunks {
    dispatch_group_async(group, queue) {
        worker(chunk)
    }
}

dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
However, GCD requires a run loop, so the code will hang as the group is never executed. The run loop can be started with dispatch_main(), but it never exits. It is also possible to run the NSRunLoop for just a few seconds, but that doesn't feel like a solid solution. Regardless of GCD, how can this be achieved in Swift?
I mistook the blocked thread for a hanging program. The work executes just fine without a run loop; the code in the question runs fine, blocking the main thread until the whole group has finished.
So, say chunks contains 4 items of workload; the following code spins up 4 concurrent workers and then waits for all of them to finish:
let group = DispatchGroup()
let queue = DispatchQueue(label: "", attributes: .concurrent)

for chunk in chunks {
    queue.async(group: group, execute: DispatchWorkItem() {
        do_work(chunk)
    })
}

_ = group.wait(timeout: .distantFuture)
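As an aside, for CPU-bound chunked work in a command-line tool, DispatchQueue.concurrentPerform is a simpler alternative that blocks the calling thread until every iteration has finished. A minimal sketch, assuming the chunks array and worker function from the question:

import Dispatch

// fan out one iteration per chunk across the available cores and
// return only when all of them have completed
DispatchQueue.concurrentPerform(iterations: chunks.count) { index in
    worker(chunks[index])
}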
Just like with an Objective-C CLI, you can make your own run loop using NSRunLoop.
Here's one possible implementation, modeled from this gist:
class MainProcess {
    var shouldExit = false
    func start() {
        // do your stuff here
        // set shouldExit to true when you're done
    }
}

println("Hello, World!")

var runLoop : NSRunLoop
var process : MainProcess

autoreleasepool {
    runLoop = NSRunLoop.currentRunLoop()
    process = MainProcess()
    process.start()
    while (!process.shouldExit && (runLoop.runMode(NSDefaultRunLoopMode, beforeDate: NSDate(timeIntervalSinceNow: 2)))) {
        // do nothing
    }
}
As Martin points out, you can use NSDate.distantFuture() as NSDate instead of NSDate(timeIntervalSinceNow: 2). (The cast is necessary because the distantFuture() method signature indicates it returns AnyObject.)
If you need to access CLI arguments see this answer. You can also return exit codes using exit().
A minimal Swift 3 implementation of Aaron Brager's solution, which simply combines autoreleasepool and RunLoop.current.run(...) until you break the loop:
var shouldExit = false

doSomethingAsync() { _ in
    defer {
        shouldExit = true
    }
}

autoreleasepool {
    let runLoop = RunLoop.current
    while (!shouldExit && (runLoop.run(mode: .defaultRunLoopMode, before: Date.distantFuture))) {}
}
I think CFRunLoop is much easier than NSRunLoop in this case:
func main() {
    /**** YOUR CODE START ****/
    let group = dispatch_group_create()
    let queue = dispatch_queue_create("", DISPATCH_QUEUE_CONCURRENT)

    for chunk in chunks {
        dispatch_group_async(group, queue) {
            worker(chunk)
        }
    }

    dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
    /**** END ****/
}

let runloop = CFRunLoopGetCurrent()

CFRunLoopPerformBlock(runloop, kCFRunLoopDefaultMode) { () -> Void in
    dispatch_async(dispatch_queue_create("main", nil)) {
        main()
        CFRunLoopStop(runloop)
    }
}

CFRunLoopRun()

Alamofire image uploads time out when using many threads

I'm using Moya to upload many images, using an OperationQueue to control maxConcurrentOperationCount. Suppose I have 100 images and I upload 5 at a time, with the Alamofire timeout set to 10 seconds.
Uploading one image is very fast and never triggers the timeout. But when I upload 100 images using the method below, even with multiple threads, it triggers the timeout. Why?
Thank you!
queue = OperationQueue()
queue.maxConcurrentOperationCount = 5

var i = 0
for image in photos {
    autoreleasepool {
        let operation: BlockOperation = BlockOperation(block: { [weak self] in
            guard let strongSelf = self else { return }
            print("hyl cur thread %#", Thread.current)
            strongSelf.uploadImage(image)
            return
        })
        i += 1
        queue.addOperation(operation)
    }
}
private func uploadImage(_ image: UIImage) {
    AladdinProvider.rx.request(.upload(access_token: UserInfo.instance.access_token!, file_name: "file_name", data: image))
        .asObservable()
        .mapJSON()
        .mapObject(type: AlbumDatas.self)
        .subscribe(onNext: { [weak self] result in
            guard let strongSelf = self else { return }
            // TODO success
        }, onError: { error in
            print(error)
        })
        .disposed(by: disposeBag)
}
Fundamentally, this is an issue with creating and resuming URLSessionTasks before they're allowed to run. This is exacerbated by Alamofire, which creates the URLSessionTasks immediately upon creation of a corresponding *Request value. Increasing the timeout of the URLRequests you're using for the image uploads could help. A more thorough solution would be to stop creating your Alamofire requests immediately, and instead create them only as the first few requests complete. This should prevent the associated tasks from timing out.
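For illustration, one way to defer request creation is to keep the images in a pending list and create the next request only when one finishes. This is a minimal sketch, not the answer's code; ThrottledUploader and its uploadImage(_:completion:) stub are hypothetical stand-ins for the Moya request in the question:

import Foundation
import UIKit

final class ThrottledUploader {
    private var pending: [UIImage]
    private var inFlight = 0
    private let maxInFlight: Int
    private let lock = NSLock()

    init(images: [UIImage], maxInFlight: Int = 5) {
        self.pending = images
        self.maxInFlight = maxInFlight
    }

    // create at most maxInFlight requests; called again as uploads finish
    func start() {
        lock.lock()
        defer { lock.unlock() }
        while inFlight < maxInFlight, let image = pending.popLast() {
            inFlight += 1
            upload(image)
        }
    }

    private func upload(_ image: UIImage) {
        uploadImage(image) { [weak self] in
            guard let self = self else { return }
            self.lock.lock()
            self.inFlight -= 1
            self.lock.unlock()
            self.start() // only now is the next URLSessionTask created
        }
    }

    // hypothetical wrapper: the real Moya/RxSwift request from the question
    // would go here, calling `completion` from its onNext/onError handlers
    private func uploadImage(_ image: UIImage, completion: @escaping () -> Void) {
        DispatchQueue.global().asyncAfter(deadline: .now() + 1) { completion() }
    }
}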

Play Framework 2.2: how concurrent execution works

Recently we started working with Play 2.2; previously we were working with Play 2.1.3.
In Play 2.2, Akka.future and the async methods are deprecated. Also, when we ran the piece of code below, calling fetchSample() in a loop, it took more time to complete in Play 2.2.
So how can we replace the deprecated code below with the current API?
private static Promise<SampleBean> fetchSample(final Document sampleDoc) throws Exception {
    Promise<SampleBean> promiseOfSampleJson = Akka.future(
            new Callable<SampleBean>() {
                public SampleBean call() throws Exception {
                    return doSomeCalc(sampleDoc);
                }
            });
    return promiseOfSampleJson;
}
private Result getAsyncResult(final SampleResponseBean sampleDbResponseBean) {
    List<F.Promise<? extends SampleDBResponseBean>> promiseList = sampleDbResponseBean.getSampleHelperList();
    Promise<List<SampleDBResponseBean>> promiseJsonObjLists = Promise.sequence(promiseList);
    return async(
        promiseJsonObjLists.map(
            new Function<List<SampleDBResponseBean>, Result>() {
                public Result apply(List<SampleDBResponseBean> sampleList) {
                    SampleResponseBean sampleResponseBean = new SampleResponseBean();
                    sampleResponseBean.setStatus("success");
                    sampleResponseBean.setSampleList(sampleList);
                    JsonNode jsNodeResponse = Json.toJson(sampleResponseBean);
                    return ok(jsNodeResponse);
                }
            }));
}
I have searched in a lot of places without finding a solution. The problem affects our code's performance compared to 2.1.3.
Any ideas on how we can replace the deprecated methods in the two methods above in Play 2.2?
As pointed out in the migration docs:
http://www.playframework.com/documentation/2.2.x/Migration22
You want to use Promise.promise. This is also described in the documentation:
http://www.playframework.com/documentation/2.2.x/JavaAsync
And of course in the API docs:
http://www.playframework.com/documentation/2.2.x/api/java/play/libs/F.Promise.html#promise(play.libs.F.Function0)
One of the really nice things about Play 2.2 Java promises is that you can now control exactly which execution context the code runs in, so you can create your own execution context, or get one from Akka, and thereby control exactly how many concurrent DB operations (in your case) run across the whole app at the same time.
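For illustration, here is a minimal sketch of that approach. The dispatcher name sample-context is illustrative, the configuration follows the pattern from the Play 2.2 thread-pools documentation, and Promise.promise's ExecutionContext overload pins the work to that pool:

# in application.conf: a dedicated thread pool for the sample calculations
play {
  akka {
    actor {
      sample-context {
        fork-join-executor {
          parallelism-min = 50
          parallelism-max = 50
        }
      }
    }
  }
}

// in Java: look up the dispatcher once and pass it to Promise.promise
import play.libs.Akka;
import play.libs.F.Function0;
import play.libs.F.Promise;
import scala.concurrent.ExecutionContext;

private static final ExecutionContext SAMPLE_CONTEXT =
        Akka.system().dispatchers().lookup("play.akka.actor.sample-context");

private static Promise<SampleBean> fetchSample(final Document sampleDoc) {
    // replaces the deprecated Akka.future; the second argument controls
    // which thread pool runs doSomeCalc
    return Promise.promise(new Function0<SampleBean>() {
        public SampleBean apply() throws Throwable {
            return doSomeCalc(sampleDoc);
        }
    }, SAMPLE_CONTEXT);
}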

Flex 4 & AIR 2 NativeProcess API: The NativeProcess could not be started

I'm trying to build an application using AIR 2's new NativeProcess APIs, going from Brent's little video:
http://tv.adobe.com/watch/adc-presents/preview-command-line-integration-in-adobe-air-2
but I'm having some issues, namely I get an error every time I try to start my process.
I am running OS X 10.5.8 and I want to run diskutil and get a list of all mounted volumes.
Here is the code I am trying:
private function unmountVolume():void
{
    if (!this.deviceMounted) { return; }

    // OS X
    if (Capabilities.os.indexOf("Mac") == 0) {
        diskutil = new NativeProcess();

        // TODO: should really add event listeners in case of error
        diskutil.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onDiskutilOut);

        var startupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
        startupInfo.executable = new File('/usr/sbin/diskutil');

        var args:Vector.<String> = new Vector.<String>();
        args.push("list");
        //args.push(this.currentVolumeNativePath);
        startupInfo.arguments = args;

        diskutil.start(startupInfo);
    }
}
which seems pretty straightforward and was based on his grep example.
Any ideas of what I'm doing wrong?
The issue was that the following line was not added to my descriptor:
<supportedProfiles>extendedDesktop</supportedProfiles>
That should really be better documented :) It wasn't mentioned in the video.
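For reference, the element goes at the top level of the application descriptor. A sketch of the relevant fragment (the namespace version should match the AIR SDK you build against):

<application xmlns="http://ns.adobe.com/air/application/2.0">
    <!-- ... id, filename, version, initialWindow, etc. ... -->
    <!-- NativeProcess is only available in the extendedDesktop profile -->
    <supportedProfiles>extendedDesktop</supportedProfiles>
</application>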