Documentum: find out which ADTS instance is processing a rendition

I have two instances of ADTS servers for one docbase and would like to find out which one processed my renditions.
I would like to know which renditions were done by RenderServer1 and which were done by RenderServer2.
The two instances are set as cts_instance_info, and I know each one is working, as I can check the logs.
Does anyone have an idea?
Thanks,
Stéphane


Publishing of multiple Angular apps to Cloudflare workers with wrangler

I'm new to CF Workers and the wrangler publish system, and I can find very little information about my requirements online; perhaps my search query is wrong, so I'm hoping I can find some help here.
I have an NX workspace containing two apps. One app is deployed to the top-level worker, and the second one should be deployed to a sub-directory in the same worker, effectively creating a parent-child structure like the following:
example.com/ -> top-level app
example.com/site2/ -> child-level app
My issue is that I do not understand where and how to define the /sub-directory/ in wrangler.toml. Should I have two separate worker-sites for these? I was under the impression that I could just update the worker (index.js) file in my single worker-site to handle /site2/ and otherwise treat the request as standard.
All I would really like to know is: how can I specify that my publish should go to the /site2/ sub-directory, if that is possible at all?
Thanks in advance.
There are a couple of ways to handle this. If your code / logic in the workers for the top-level vs the child-level is completely different, I'd recommend using two separate workers. Then you can configure which "routes" each worker will run on -
https://developers.cloudflare.com/workers/cli-wrangler/configuration
Worker 1 could be -
routes = ["example.com/"]
Worker 2 could be -
routes = ["example.com/site2/"]
Check this out for more details on how routing / matching behaves -
https://developers.cloudflare.com/workers/platform/routes#matching-behavior
The other way to do it would be to have a single worker, and inspect the incoming request to behave differently depending on whether the request is at the root, or at /site2/. I'd only recommend this if there are small differences between how the two sites should behave (e.g. swapping out a variable).
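A minimal sketch of that single-worker approach, assuming the Workers "fetch" event model; the serveTopLevel / serveSite2 handlers are hypothetical placeholders for your own asset-serving logic:

```javascript
// Decide which app should handle a given pathname.
function pickApp(pathname) {
  return pathname === '/site2' || pathname.startsWith('/site2/')
    ? 'site2'
    : 'top-level';
}

// Worker entry point: dispatch based on the request path. The listener is
// guarded so this file can also be loaded outside the Workers runtime.
if (typeof addEventListener === 'function') {
  addEventListener('fetch', (event) => {
    const { pathname } = new URL(event.request.url);
    event.respondWith(
      pickApp(pathname) === 'site2'
        ? serveSite2(event)      // hypothetical child-app handler
        : serveTopLevel(event)   // hypothetical top-level handler
    );
  });
}
```

Note that pickApp matches the whole /site2 path segment, so a path like /site22/x still goes to the top-level app.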

ActiveMQ MessageGroupHashBucket - what is cache property needed for?

I'm trying to find the best strategy for dealing with ActiveMQ's Message Groups support.
ActiveMQ has several strategies (MessageGroupMap implementations).
The one that is confusing me a little is MessageGroupHashBucket.
Specifically, after looking at the sources, I don't understand why the cache property is needed there. When assigning a consumer id to a message group, or retrieving a consumer id by message group, the array of buckets is used.
It would be great if someone could explain why.
Thanks in advance,
MessageGroupHashBucket implements the MessageGroupMap interface method getGroups() by returning the cache property as a map of all group names and their associated consumer ids.
Adding to #piola's answer, it looks like the cache property is used to configure the number of group names that can be held inside a bucket. This is a very efficient way to handle a large number of groups: going by this logic, a configuration of 1024 buckets with a cache size of 64 can handle 65,536 groups.
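The hash-bucket idea can be sketched in a few lines (illustrative only, not the actual ActiveMQ code; the constants mirror the sizing discussed above):

```java
// Simplified sketch of how a hash-bucket message group map assigns a
// group name to a bucket. Not the real MessageGroupHashBucket source.
public class HashBucketSketch {
    static final int BUCKET_COUNT = 1024; // number of buckets
    static final int CACHE_SIZE = 64;     // group names held per bucket

    // Map a group name to a bucket index; keep the result non-negative
    // even when hashCode() is negative.
    static int bucketFor(String groupId) {
        int bucket = groupId.hashCode() % BUCKET_COUNT;
        return bucket < 0 ? bucket + BUCKET_COUNT : bucket;
    }

    public static void main(String[] args) {
        // Total groups trackable under this sizing: 1024 * 64 = 65536.
        System.out.println(BUCKET_COUNT * CACHE_SIZE); // prints 65536
        System.out.println(bucketFor("orders-42"));
    }
}
```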

Symfony 4 and queues

What should I use for queues in Symfony 4? For example, I need to create some task that runs in the background for some logic, such as setting a full name from the first and last names. How can I use RabbitMQ for this task in Symfony 4, or is RabbitMQ perhaps not a good idea for this?
In such cases, when I have to use some new technology in my project, the first thing I do is go to Google and look for a ready-made bundle. In your case that is, for example, https://github.com/php-amqplib/RabbitMqBundle
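If you go the RabbitMqBundle route, a producer/consumer configuration might look roughly like this (a hypothetical sketch: the queue/exchange names, credentials, and consumer service are placeholder assumptions, not values from the question):

```yaml
# config/packages/old_sound_rabbit_mq.yaml (hypothetical example)
old_sound_rabbit_mq:
    connections:
        default:
            host: localhost
            port: 5672
            user: guest
            password: guest
            vhost: /
    producers:
        fullname_task:
            connection: default
            exchange_options: { name: fullname_task, type: direct }
    consumers:
        fullname_task:
            connection: default
            exchange_options: { name: fullname_task, type: direct }
            queue_options: { name: fullname_task }
            callback: App\Consumer\FullnameConsumer   # placeholder consumer service
```

The controller would then publish a small message (e.g. a user id) to the producer, and the consumer would build and store the full name in the background.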

Listing current Ignite jobs and cancelling them

I got a partial answer here, but not exactly what I wanted.
The first link below describes how to get a list of task futures, but what I'd really like to be able to do is list and cancel individual jobs (that might be hung, long-running, etc.). I've seen another post implying that this is not possible, but I'd like to confirm (see the second link).
Thanks
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-obtain-a-list-of-executing-jobs-on-an-ignite-node-td8841.html
http://apache-ignite-users.70518.x6.nabble.com/Cancel-tasks-on-Ignite-compute-grid-worker-nodes-td5027.html
Yes, this is not possible, and actually I'm not sure how it could be done in the general case. Imagine there are five jobs running and you want to cancel one of them. How are you going to identify it? It seems very use-case specific to me.
However, you can always implement your own mechanism to do this. One possible way is to use the ComputeTaskSession API and task attributes. E.g., set a special attribute that will act as a signal for job cancellation, and create an attribute listener that will stop job execution accordingly.
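The pattern can be sketched with plain Java stand-ins. This simulates what ComputeTaskSession attributes and an attribute listener enable (a shared attribute acts as a cancellation signal); it is not the Ignite API itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BiConsumer;

public class CancellationSketch {
    // Stand-in for a task session: shared attributes plus listeners.
    static class Session {
        private final Map<String, Object> attrs = new ConcurrentHashMap<>();
        private final List<BiConsumer<String, Object>> listeners = new ArrayList<>();

        void addAttributeListener(BiConsumer<String, Object> l) { listeners.add(l); }

        void setAttribute(String key, Object val) {
            attrs.put(key, val);
            for (BiConsumer<String, Object> l : listeners) l.accept(key, val);
        }
    }

    // A "job" that periodically checks whether it was asked to stop.
    static int runJob(Session ses, String jobId, int iterations) {
        AtomicBoolean cancelled = new AtomicBoolean(false);
        ses.addAttributeListener((key, val) -> {
            if (("cancel-" + jobId).equals(key)) cancelled.set(true);
        });
        int done = 0;
        for (int i = 0; i < iterations; i++) {
            if (cancelled.get()) break; // cooperative cancellation point
            done++;
            // Simulate an external caller cancelling this job mid-run.
            if (i == 2) ses.setAttribute("cancel-" + jobId, Boolean.TRUE);
        }
        return done;
    }

    public static void main(String[] args) {
        Session ses = new Session();
        System.out.println(runJob(ses, "job-1", 100)); // stops early: prints 3
    }
}
```

The key point is that cancellation is cooperative: the job itself must check the flag at safe points, which is also what an attribute-listener-based approach in Ignite would rely on.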

Upload and download directly - no waiting

I want to program something where you upload a file on one side and the other person can download it the moment the upload starts. I used to know such a service but can't remember the name. If you know the service, I'd like to know the name; if it's not there anymore, I'd like to program it as an open-source project.
And it is supposed to be a website.
What you're describing sounds a lot like BitTorrent.
You might be able to achieve this by uploading via a custom ISAPI filter (if you use IIS). CGI implementations won't start running your script until the request has completed, which makes sense, as you won't have been given all the values yet; I'd suspect ISAPI may fall foul of this as well.
So your next best bet is to write a custom HTTP server that can handle serving files that have yet to finish uploading.
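The core idea of serving a file that is still uploading can be sketched with piped streams (a conceptual illustration of the pass-through behaviour, not an actual HTTP server):

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;

public class PassThroughSketch {
    // The "downloader" reads bytes as soon as the "uploader" writes them,
    // before the upload has finished.
    static String relay() throws IOException, InterruptedException {
        PipedOutputStream upload = new PipedOutputStream();
        PipedInputStream download = new PipedInputStream(upload);

        Thread uploader = new Thread(() -> {
            try {
                upload.write("first chunk ".getBytes(StandardCharsets.UTF_8));
                Thread.sleep(50); // the upload is still in progress here
                upload.write("second chunk".getBytes(StandardCharsets.UTF_8));
                upload.close();
            } catch (IOException | InterruptedException e) {
                throw new RuntimeException(e);
            }
        });
        uploader.start();

        // Read everything as it arrives rather than waiting for completion.
        StringBuilder received = new StringBuilder();
        int b;
        while ((b = download.read()) != -1) {
            received.append((char) b);
        }
        uploader.join();
        return received.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(relay()); // prints: first chunk second chunk
    }
}
```

A real implementation would stream the upload body straight into the download response the same way, instead of buffering the whole file first.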
I found it: pipebytes.com :)