Is it possible to send information from an O3 core's load queue all the way to a Ruby state machine? - gem5

I am writing a custom cache coherence policy where the state machine invalidates local cache lines on a write, but will behave differently based on whether or not there's already a load for that same line in the load queue of at least one O3 core.
Basically, the state machine should send the invalidation, wait for a response from the load queues, then go into one state or the other based on the response.
Is there an easy way to send a snoop response from the LSQ if the L1 cache causes an invalidation on one of its stored loads?

Related

Can RabbitMQ (or similar message queuing system) be used to single-thread requests per user?

The issue is we have some modern web applications that are integrated with a legacy system that was never designed to support multiple concurrent requests from a single user. Basically there are certain types of requests that the legacy system can only handle one-at-a-time from a single user. It can handle multiple concurrent requests coming from different users, but for technical reasons cannot handle multiple from a single user. In these situations, the user's first request will complete successfully, but any subsequent requests from that same user that come in while the first request is still executing will fail.
Because our apps are ajax enabled, multi-tab/multi-browser friendly, and just the fact that there are multiple apps - there are certain scenarios where a user could wind up having more than one of these types of requests being sent to the legacy system at the same time.
I'm trying to determine if something like RabbitMQ could be positioned in front of the legacy system and leveraged to single-thread requests per user/IP. The thinking being that the web apps would send all requests to MQ, and they'd stack into per-user queues and pass on to the legacy system one at a time.
I don't know if there would be concerns about the potential number of queues this could create - we have a user-base of approx 4,000.
And I know we could somewhat address this in the web apps individually, but since there are multiple apps it'd be duplicating logic across them, and you'd still have the potential for two different apps to fire off concurrent requests.
Any feedback would be appreciated. Thanks.
I'm not sure a unique queue per user will work: each queue would need a backend worker process listening for its messages, and those workers would have to be created dynamically.
Below is one option, but it has a potential performance bottleneck, since a single backend process would be handling all requests sequentially. You could use multiple worker processes, but then you wouldn't know whether one request had completed before another, causing a race condition if your app requires a specific sequence of actions.
You could simply put all transactions (from all users) into a single queue and have a backend process pull off of that queue and service the requests. If there needs to be a response back to the user once the request is serviced, the worker process can respond on a separate queue with a correlationId that is used to route the response data back to the correct user.
I've done this before with ExpressJS apps where the following flow would happen:
The user/process/ajax makes a request
Express takes the payload from the request object and sends it to a RabbitMQ queue with a unique correlationId (e.g. UUID).
Express then takes the response object and stores it in a responseStore object with the key being the correlationId
Meanwhile, a backend worker process pulls the item from the queue, does some work and then sends a message to a different response queue with the same correlationId
The ExpressJS application has a connection to the response queue; when it receives a message, it takes the correlationId from the response and looks for a response object stored under the same correlationId in the responseStore. If it finds one, it takes the payload from the message and does something like response.send(payload) or response.json(payload)
To do this, you should also store the creation time of the response object in the responseStore along with the response object itself. Then have a separate process that checks the responseStore and cleans up old response objects after a certain timeout, in case the backend process fails to complete. A sketch of the whole flow follows.
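Here is a minimal TypeScript sketch of that flow using amqplib and Express. The queue names, the 30-second timeout, the callLegacySystem stub, and the exact shape of responseStore are illustrative assumptions, not details from the setup above:

```typescript
import amqp from 'amqplib';
import express from 'express';
import { randomUUID } from 'crypto';

const REQUEST_QUEUE = 'legacy.requests';   // assumed queue names
const RESPONSE_QUEUE = 'legacy.responses';
const TIMEOUT_MS = 30_000;                 // assumed cleanup timeout

// Hypothetical stand-in for the call into the legacy system.
async function callLegacySystem(payload: unknown): Promise<unknown> {
  return payload; // forward to the legacy system here
}

// correlationId -> the parked Express response plus its creation time
const responseStore = new Map<string, { res: express.Response; createdAt: number }>();

async function web() {
  const ch = await (await amqp.connect('amqp://localhost')).createChannel();
  await ch.assertQueue(REQUEST_QUEUE);
  await ch.assertQueue(RESPONSE_QUEUE);

  // Step 5: a response arrives; look up the parked response object by correlationId.
  await ch.consume(RESPONSE_QUEUE, (msg) => {
    if (!msg) return;
    const entry = responseStore.get(msg.properties.correlationId);
    if (entry) {
      responseStore.delete(msg.properties.correlationId);
      entry.res.json(JSON.parse(msg.content.toString()));
    }
    ch.ack(msg);
  });

  // Cleanup sweep: expire parked responses if the worker never replies.
  setInterval(() => {
    for (const [id, entry] of responseStore) {
      if (Date.now() - entry.createdAt > TIMEOUT_MS) {
        responseStore.delete(id);
        entry.res.status(504).json({ error: 'backend timed out' });
      }
    }
  }, 5_000);

  const app = express();
  app.use(express.json());
  // Steps 2-3: publish the payload with a fresh correlationId and park the response.
  app.post('/legacy', (req, res) => {
    const correlationId = randomUUID();
    responseStore.set(correlationId, { res, createdAt: Date.now() });
    ch.sendToQueue(REQUEST_QUEUE, Buffer.from(JSON.stringify(req.body)), {
      correlationId,
      replyTo: RESPONSE_QUEUE,
    });
  });
  app.listen(3000);
}

// Step 4: the backend worker (a separate process in practice) services requests
// one at a time (prefetch 1) and replies with the same correlationId.
async function worker() {
  const ch = await (await amqp.connect('amqp://localhost')).createChannel();
  await ch.assertQueue(REQUEST_QUEUE);
  await ch.prefetch(1);
  await ch.consume(REQUEST_QUEUE, async (msg) => {
    if (!msg) return;
    const result = await callLegacySystem(JSON.parse(msg.content.toString()));
    ch.sendToQueue(msg.properties.replyTo, Buffer.from(JSON.stringify(result)), {
      correlationId: msg.properties.correlationId,
    });
    ch.ack(msg);
  });
}

web().catch(console.error);
worker().catch(console.error); // run in its own process in a real deployment
```

Note that prefetch(1) is what gives you the sequential, one-at-a-time behavior on the worker side.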
Look here for more info on RPC with RabbitMQ:
https://www.rabbitmq.com/tutorials/tutorial-six-javascript.html
Hope this helps.

Controlling GemFire cache updates in the background

I will be implementing a Java program that acts as a GemFire client. The program will continuously process records that it receives on its port from a remote program. Each record will be processed using the static data cached with my program. The cache may get updated behind the scenes when it is changed on the GemFire server. Processing one record may take a few seconds, so I run the risk of processing half the record with the static data that was in effect before the change and the rest of the record with the static data that took effect after it. Is there a way I can tell GemFire not to apply cache updates to the local client until I am done processing the ongoing record?
Regards,
Yash
Consider this approach: use a continuous query ("SELECT *") instead of interest registration. A CQ does not update the client region the way a subscription does. Make your client region LOCAL. After receiving the CQ event on the client, run your long-running process and then put the value you received from the CQ into your client region. Decoupling the client and server in this way allows your client to run long-running processes.
Alternatively: if you must have the client cache proxied with the server as an absolute requirement, then keep the interest registration AND register a CQ. Ignore the event callback from the subscription but handle your long-running process using the event callback from the CQ.
The following is from page 683 at http://gemfire.docs.pivotal.io/pdf/pivotal-gemfire-ug.pdf
CQs do not update the client region. This is in contrast to other server-to-client messaging like the updates sent to satisfy interest registration and responses to get requests from the client's Pool.

NSURLSession background session can run data tasks?

The docs give one line:
In background sessions, only upload and download tasks are supported (no data tasks)
but this doc seems to indicate that background sessions can execute data tasks:
The behavior of a session is determined by the configuration object used to create it. Because there are three types of configuration objects, there are similarly three types of sessions: default sessions that behave much like NSURLConnection, ephemeral sessions that do not cache anything to disk, and download sessions that store the results in a file and continue transferring data even when your app is suspended, exits, or crashes.
Within those sessions, you can schedule three types of tasks: data tasks for retrieving data to memory, download tasks for downloading a file to disk, and upload tasks for uploading a file from disk and receiving the response as data in memory.
Which is correct? Will I be able to make a GET HTTP request on an NSURL and then JSON-serialize the NSData received in the background?
You can only run upload and download tasks in the background. Here's a quote taken directly from the URL Loading System guide.
Background Transfer Considerations
The NSURLSession class supports background transfers while your app is suspended. Background transfers are provided only by sessions created using a background session configuration object (as returned by a call to backgroundSessionConfiguration:).
With background sessions, because the actual transfer is performed by a separate process and because restarting your app’s process is relatively expensive, a few features are unavailable, resulting in the following limitations:
The session must provide a delegate for event delivery. (For uploads and downloads, the delegates behave the same as for in-process transfers.)
Only HTTP and HTTPS protocols are supported (no custom protocols).
Only upload and download tasks are supported (no data tasks).
Redirects are always followed.
If the background transfer is initiated while the app is in the background, the configuration object’s discretionary property is treated as being true.
What you want to do instead is run your GET request as a download task and save the JSON data to a file. Once the download completes, read the contents of the file into memory and parse the NSData just as you would if it came from a data task.

NServiceBus - Stopping a long-running process?

Here is my application I'm attempting to put together using NServiceBus:
I have 1000 files that need to be processed by a service. So far I'm thinking I'd have one endpoint, the client, find all of those files and send them out on the bus to be processed.
My other endpoint, the server that does the processing, would listen for these client messages; when one comes in, it processes the file and returns the results.
The client takes the results, marks the file as processed, and waits for the remaining 999 files to be processed. The client doesn't care about the order the messages come back in, just as long as they all get processed at some point. (In reality the client is going to do something more with the data after it is processed that can't be done by the server, so I can't just fire and forget the request for processing.)
Since processing a single message can take over an hour, I would scale out the application to multiple servers, all attempting to eat through the 1000 files that need to be processed.
Conceptually, it's like building a personal SETI@home service to run on all of my servers.
The issue I'm having is: how do I stop midway through processing the 1000 files?
I want to keep all of my servers working as much as they can on my data. So when the client starts, does it publish 1000 commands for the 1000 files to process and then sit back and wait? And if it does that and later decides to stop, how can it clear the bus of all of those commands to process files?
If my client only pushes one or two messages onto the bus at a time, I could easily stop sending messages when I decide to stop on the client, but then I have two other problems:
The servers could be underutilized and I'd end up with idle servers.
How do I stop the servers that are loaded up and processing data? Send them a second command of a different message format?
Thoughts, ideas? Am I approaching this problem using the right tool/right methodology?
One of the things you might want to think about is how you are going to correlate the message processing. I would use a saga for this and have the client generate some kind of batch id which is attached to all the files to be processed. This allows your client to send a CancelProcessing message to the saga, whose handler could then stop the processing / sending of messages to the file processing endpoints and perform any clean-up operations, such as completing the saga and removing data from the database.
So you would have a client endpoint, a saga endpoint, and one or more file processing endpoints (which would sit behind a distributor). Your client would be responsible for initiating the batch and sending the files to the saga. The saga manages the file correlation and processing activities, while your processing endpoints focus on doing the work.
Remember that the processing endpoints don't necessarily have to be physical endpoints. You can have many of these on one server if you wanted to and use monitoring tools to determine whether or not you need to add or remove nodes.
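The actual implementation would be an NServiceBus saga written in C#, but purely to illustrate the correlation and cancellation logic, here is a language-agnostic sketch in TypeScript. All message and class names here are hypothetical:

```typescript
// Messages (hypothetical shapes)
interface StartBatch { batchId: string; files: string[] }
interface FileProcessed { batchId: string; file: string }
interface CancelProcessing { batchId: string }
interface ProcessFile { type: 'ProcessFile'; batchId: string; file: string }

// The saga's persisted state, correlated on batchId.
interface SagaState {
  batchId: string;
  pending: Set<string>;
}

class FileBatchSaga {
  // One state entry per batchId -- the saga's correlation key.
  private instances = new Map<string, SagaState>();

  constructor(private send: (msg: ProcessFile) => void) {}

  handleStart(msg: StartBatch): void {
    this.instances.set(msg.batchId, {
      batchId: msg.batchId,
      pending: new Set(msg.files),
    });
    // Fan out one command per file; the processing endpoints behind the
    // distributor compete for these.
    for (const file of msg.files) {
      this.send({ type: 'ProcessFile', batchId: msg.batchId, file });
    }
  }

  handleProcessed(msg: FileProcessed): void {
    const state = this.instances.get(msg.batchId);
    if (!state) return; // late result for a cancelled/completed batch: drop it
    state.pending.delete(msg.file);
    if (state.pending.size === 0) this.instances.delete(msg.batchId); // saga complete
  }

  handleCancel(msg: CancelProcessing): void {
    // Completing the saga here means any results that straggle in afterwards
    // are ignored by handleProcessed above; clean-up (e.g. removing database
    // rows) would also happen at this point.
    this.instances.delete(msg.batchId);
  }
}
```

In NServiceBus terms, batchId would be the saga's correlation property, and deleting the instance corresponds to calling MarkAsComplete().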

Is it possible to have asynchronous processing

I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data that updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client.
I am looking for suggestions of this implementation at the server end. Basically what I need is this:
1. The client connects to the server. I maintain the socket and metadata about the socket; the metadata describes what updates need to be sent to this client.
2. The server process then waits for new client connections.
3. Another process has the list of all open sockets, goes through each of them, and sends the updates if required.
Can we do something like this in Apache module:
1. An Apache process gets the new connection. It maintains the state for the connection, keeps that state in some global memory, and returns to the root process to signal that it is done, so the root can accept the next connection.
2. Although the Apache process has returned its status to the root process, it also keeps executing in parallel, going through its global store and sending updates to the clients, if any.
So can an Apache process do these things:
1. Have more than one connection associated with it?
2. Asynchronously wait for new connections while at the same time processing the previous connections?
This is a complicated and inefficient model of updating. Your server will try to update clients that have already closed down, and the server has to maintain all that client data and metadata (last update time, etc.).
Usually, continuous updates are done with ajax in a polling model. The client has a JavaScript timer that, when it fires, hits a service that provides updated data. The client continues to get updates at regular intervals without you having to write an Apache module.
Would this model work for your scenario?
More reasons to opt for poll instead of push:
Periodic_Refresh
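For illustration, the basic polling model looks something like this in TypeScript; the /updates endpoint and the render function are hypothetical:

```typescript
// Hypothetical UI hook for displaying an update.
function render(update: unknown): void { console.log(update); }

const POLL_INTERVAL_MS = 1_000; // the data updates about once a second

async function pollOnce(): Promise<void> {
  const res = await fetch('/updates'); // service that returns the latest data
  if (res.ok) render(await res.json());
}

// The timer fires at a regular interval; each tick fetches the current state,
// so no sockets or per-client metadata need to be kept on the server.
setInterval(() => { pollOnce().catch(console.error); }, POLL_INTERVAL_MS);
```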
With a little patch to resume a SUSPENDED mpm_event connection, I've got an asynchronous Apache module working. With this you can do the improved polling:
javascript connects to Apache and asks for an update;
if there are no updates, then instead of answering immediately the module uses SUSPENDED;
some time later, after an update or a timeout happens, a callback fires somewhere;
the callback gives an update (or a "no updates" message) to the client and resumes the connection;
the client goes back to step 1, repeating the poll, which with Keep-Alive will reuse the same connection.
That way the number of round trips between the client and the server is decreased, and the client receives the update immediately. (This is known as Comet or reverse Ajax, AFAIK.)
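As a rough TypeScript sketch of the client side of that cycle, with a hypothetical /poll endpoint standing in for the suspended Apache module:

```typescript
// Hypothetical UI hook for displaying an update.
function render(update: unknown): void { console.log(update); }

async function longPoll(): Promise<void> {
  for (;;) {
    try {
      // The server parks this request (SUSPENDED) until an update or a
      // timeout, so the response arrives the moment data is available.
      const res = await fetch('/poll');
      if (res.status === 200) render(await res.json());
      // A "no updates" answer simply falls through and we poll again;
      // with Keep-Alive the next request reuses the same connection.
    } catch {
      // Network hiccup: back off briefly before re-polling.
      await new Promise((resolve) => setTimeout(resolve, 1_000));
    }
  }
}

longPoll();
```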