CQS and CRUD operations

I'm working on a high-scalability web site for learning purposes. I decided to use the CQS pattern and some ideas from CQRS. I have separate write and read layers used by command handlers and event handlers, which the system sends and receives through two separate message buses.
I have a problem dealing with commands. I read that a command shouldn't return anything. Here is the point: say I have a form on which the user can create an event, or change something in his profile (photo or name). After the user clicks save, I want to show him his updated profile or add the new event to his wall. But how can I know that his profile has already been updated when the command is only sent to the bus? How do I connect the idea of commands with CRUD operations? Or is this the wrong idea altogether?

First off, the split should not be between commands and events, but rather between domain and read models. You can't really map CQRS commands to CRUD operations as a general rule, although most of the commands in your system will change the state of your repositories. Here is a general overview of how this works. Say you want to add a user: you create an AddUserCommand and assign an id to that message. On the back end you have a handler for that command, and you're right that the command does not return anything. Once the command is handled, you should publish an event reflecting the change: UserWasAddedEvent. The id of this message will be unique, but it can and should carry the id of the command you created in the UI. Your read side should handle the event and update a read model with the command status (waiting, processing, completedOnError, completedSuccessfully) depending on the event you published. In the UI, after you submit the command, you start querying the read model with the id of the command you created to get its status, and then update your UI accordingly.
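To make that concrete, here is a minimal C# sketch of that flow. All of the types (AddUserCommand, UserWasAddedEvent, the repository, bus and status-store interfaces) are hypothetical and only illustrate the shape of the pieces, not a specific framework.

using System;

// Assumed abstractions, defined only so the sketch is self-contained.
public interface IUserRepository { Guid Add(string userName); }
public interface IEventBus { void Publish(UserWasAddedEvent @event); }
public interface ICommandStatusStore { void Set(Guid commandId, string status); }

// Command created in the UI; its id is used later to query the command's status.
public class AddUserCommand
{
    public Guid CommandId { get; set; }
    public string UserName { get; set; }
}

// Event published once the command has been handled; it carries the command id for correlation.
public class UserWasAddedEvent
{
    public Guid EventId { get; set; }
    public Guid CommandId { get; set; }
    public Guid UserId { get; set; }
}

// Write side: the handler returns void and publishes an event when done.
public class AddUserCommandHandler
{
    private readonly IUserRepository _users;
    private readonly IEventBus _events;

    public AddUserCommandHandler(IUserRepository users, IEventBus events)
    {
        _users = users;
        _events = events;
    }

    public void Handle(AddUserCommand command)
    {
        Guid userId = _users.Add(command.UserName);
        _events.Publish(new UserWasAddedEvent
        {
            EventId = Guid.NewGuid(),
            CommandId = command.CommandId,
            UserId = userId
        });
    }
}

// Read side: project the event into a command-status read model the UI can poll.
public class CommandStatusProjection
{
    private readonly ICommandStatusStore _statuses;

    public CommandStatusProjection(ICommandStatusStore statuses)
    {
        _statuses = statuses;
    }

    public void Handle(UserWasAddedEvent e)
    {
        _statuses.Set(e.CommandId, "completedSuccessfully");
    }
}

// In the UI: send the command, then poll the read model by CommandId
// until the status reaches completedSuccessfully (or an error status).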
You're right that CQRS handlers return void, but bear in mind that in an architecture like this the back end typically returns the validation results of the submitted commands: not from the handler itself, but from the infrastructure around your CQRS handlers.
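For example, a dispatcher sitting in front of the handlers could validate synchronously and only then queue the command. This is a rough sketch under assumed abstractions (ICommandValidator, ValidationResult and IMessageBus are illustrative, not a real framework):

using System;

public interface ICommandValidator<TCommand> { ValidationResult Validate(TCommand command); }
public interface IMessageBus { void Enqueue(object command); }

public class ValidationResult
{
    public bool IsValid { get; set; }
    public static ValidationResult Success() { return new ValidationResult { IsValid = true }; }
}

public class ValidatingCommandBus
{
    private readonly IServiceProvider _services;

    public ValidatingCommandBus(IServiceProvider services)
    {
        _services = services;
    }

    // The caller gets validation results back immediately; the handler itself stays void.
    public ValidationResult Send<TCommand>(TCommand command)
    {
        var validator = (ICommandValidator<TCommand>)_services.GetService(typeof(ICommandValidator<TCommand>));
        var result = validator != null ? validator.Validate(command) : ValidationResult.Success();
        if (!result.IsValid)
        {
            return result;                // rejected synchronously, nothing is queued
        }

        var bus = (IMessageBus)_services.GetService(typeof(IMessageBus));
        bus.Enqueue(command);             // fire-and-forget; the handler runs asynchronously
        return result;
    }
}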

Just update the UI on the assumption that the command succeeds, which most of the time it will.
If validation is required on the user input, you could run validation as the user types or tabs, to increase the likelihood that the command will succeed.

How to queue requests in React Native without Redux?

Let's say I have a notes app. I want to enable the user to make changes while offline, save the changes optimistically in a MobX store, and add a request to save the changes (on the server) to a queue.
Then when the internet connection is re-established I want to run the requests in the queue one by one so the data in the app syncs with data on the server.
Any suggestions would help.
I tried using react-native-job-queue but it doesn't seem to work.
I also considered react-native-queue but the library seems to be abandoned.
You could create a separate store (or an array in AsyncStorage) for pending operations, and add the operations to an array there when the network is disconnected. Tell your existing stores to look there for data, so you can render it optimistically. Then, when you detect a connection, run the updates in array order, and clear the array when done.
You could also use your existing stores, and add something like pending: true to values that haven't posted to your backend. However, you'll have less control over the order of operations, which sounds like it is important.
As it turns out, I was in the wrong. The react-native-job-queue library does work; I just made the mistake of trying to pass a function reference (the API call) to the Worker instead of passing an object that contains the request URL and method, and then implementing the Worker to make the API call based on those parameters.

Is it possible to write a plugin for Glimpse's existing SQL tab?

I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This helper takes the SqlCommand and the duration in order to show data on my custom tab:
// Simplified example (LogSqlActivity is the helper method in the DAL base class):
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab, but unfortunately it means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use a DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult, and once done it should just work going forward.
If you have a look at the various proxies we have here, you will be able to see which messages we publish in which circumstances. Here are some highlights:
DbCommand:
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction:
Log started - here
Log committed - here
Log rollback - here
Other:
Command row count - here. Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well.
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up - if you are interested here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here, you will see how to access the broker. This can be done statically if needed (as we do here). From there you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it will be if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. That way your ADO lib only references your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
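As a rough illustration only (FakeCommandMessage below is a placeholder, not one of Glimpse's real message classes; you would substitute the message types used by the proxies linked above), publishing through the broker could look something like this:

// sqlCommand and duration come from the DAL snippet in the question;
// FakeCommandMessage stands in for the real message types the SQL tab listens for.
var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
if (broker != null)   // null when Glimpse is turned off
{
    broker.Publish(new FakeCommandMessage
    {
        CommandText = sqlCommand.CommandText,
        Duration = TimeSpan.FromMilliseconds(duration)
    });
}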
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker will change. For what you currently need to do, this is fine.
Sometimes null
As you have found, this really depends on where you currently are in the page lifecycle. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message broker, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store that ASP.NET provides out of the box and that lasts for the lifetime of a given request. You could keep a list in there: still create the message objects as you are doing now, but add them to that list instead of pushing them through the broker.
Then create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, pull the list out of the context, iterate through it and push each message into the message broker (accessed the same way you have been).
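Here is a rough sketch of such a module. It assumes you store small publish delegates in the list so each message keeps its concrete type when it is finally published; the Items key is arbitrary, and the exact broker and namespace details may differ in your Glimpse version.

using System;
using System.Collections.Generic;
using System.Web;
// plus the Glimpse namespace that exposes GlimpseConfiguration and IMessageBroker

public class GlimpseMessageFlushModule : IHttpModule
{
    // Per-request list of deferred publish actions, e.g. b => b.Publish(new SomeCommandMessage(...)).
    private const string ItemsKey = "PendingGlimpsePublishes";

    public void Init(HttpApplication application)
    {
        application.PostLogRequest += (sender, args) =>
        {
            var context = ((HttpApplication)sender).Context;
            var pending = context.Items[ItemsKey] as List<Action<IMessageBroker>>;
            if (pending == null)
            {
                return;
            }

            // May be null if Glimpse is turned off.
            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (broker == null)
            {
                return;
            }

            foreach (var publish in pending)
            {
                publish(broker);
            }
        };
    }

    public void Dispose() { }
}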

Run code when user deletes module from a page

I'm currently developing a DNN module. It would be nice if we were able to run custom code whenever a module is deleted from a page by the user, and also when a module gets restored from the recycle bin.
I haven't found any examples on how this could be done, so I'm not sure if this is possible? Any ideas?
I am unaware of any event mechanism inside DNN where you could set your hooks. You could probably debug the DNN code and trace the call stack until you find a usable spot where you can inject some code (which would likely be wiped out by the next DNN update), or maybe discover a way that the core team intended to be used.
However, if a module is deleted from a page, the IsDeleted field in the Modules table is set to true. If it gets restored from the bin, it is again set to false.
You can use a TRIGGER in your SQL Server that fires when the Modules table is updated, checks whether the update touches the IsDeleted field, and writes a row into a Notification table; then use SQL Query Notification based on SqlDependency to run some code (see http://www.codeproject.com/Articles/144344/Query-Notification-using-SqlDependency-and-SqlCach for an introduction).
Some steps to go, but it should work (and be less exhausting than climbing the Matterhorn :-) ).
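A minimal sketch of the SqlDependency side, assuming the trigger writes into a notification table; the table and column names here (dbo.ModuleNotification, ModuleId, IsDeleted) are made up for the example:

using System.Data.SqlClient;

public class ModuleDeletionWatcher
{
    private readonly string _connectionString;

    public ModuleDeletionWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString);   // start the listener once per connection string
    }

    public void Watch()
    {
        using (var connection = new SqlConnection(_connectionString))
        // Query notifications require an explicit column list and two-part table names.
        using (var command = new SqlCommand(
            "SELECT ModuleId, IsDeleted FROM dbo.ModuleNotification", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once when the data behind the query changes (module deleted or restored).
                // Run your custom code here, then re-subscribe, because a SqlDependency is one-shot.
                Watch();
            };

            connection.Open();
            command.ExecuteReader().Dispose();   // executing the query registers the notification
        }
    }
}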
There absolutely are hooks in DNN Platform (DotNetNuke) that a developer can attach C# code to.
While there isn't a turn key example that I can provide at the moment, check out the following:
https://github.com/dnnsoftware/Dnn.Platform/blob/c35fdc7fb75db0438f3b872ce4e279e3ea73e7c2/DNN%20Platform/Library/Entities/EventManager.cs
You are looking to hook onto the ModuleRemoved event by the sounds of it.
Here is an example for the user logging in that you could adapt to your event:
Does DNN fire an event when a user logs in?
I hope this helps in the future.

WorkflowCreationEndpoint ResumeBookmark with a filled response

I've spent days trying to find a solution to the problem I'm going to describe; I've googled a lot and even looked at the .NET 4 reference source for the SendReply and InternalSendReply activities, but so far I'm stuck.
To make life simpler for our end customers, I want to replace the Receive and SendReply activities with custom activities and use bookmarks instead.
I'm implementing a central web service which can route to the correct workflow instance; that workflow modifies the bookmark value and finally creates a new bookmark while returning the modified bookmark value. It's already rather complex, with a WorkflowServiceHostFactory which adds behaviors and attaches a DataContractResolver to the endpoint.
The endpoint is derived from WorkflowHostingEndpoint, which resumes a bookmark created in a custom activity (instead of a Receive). I also want another activity instead of a SendReply. Those two should correlate, and the custom send-reply activity should send a response on the open channel through the endpoint while creating a new bookmark.
The problem is that I haven't yet found a way to access the endpoint's responseContext from within my custom send activity. On the other side, at the workflow-creating endpoint, it seems that I'm not able to be notified whenever the workflow becomes idle, and I also don't seem to be able to access the workflow extensions from the host. Am I missing something?
A possible solution I have in mind is not using a WorkflowServiceHost, but then I lose a lot of AppFabric functionality.
The WorkflowApplication in Platform Update 1 has some extension methods called RunEpisode, with an overload that takes a Func called idleEventCallback. There it's possible to hook into OnIdle and get a workflow extension to obtain the object to send back as the response.
To answer my own question, I ended up with a workaround using the Service Broker functionality of SQL Server: with the SqlDependency class, the host listens for an event that is fired whenever the workflow reaches the activity that creates a new bookmark in another state.

Notifications in wxWidgets?

I'm working on a small application using C++/wxWidgets, where several parts of the GUI need to be updated based on e.g. received UDP datagrams. More specifically, a secondary thread tries to keep a list of available "clients" in the network (which may come and go away) and e.g. corresponding comboboxes in the UI need to be updated to reflect the changes.
The documentation mentions that for this kind of thing EVT_UPDATE_UI would be a good choice. As far as I can understand from the sparse documentation, this event is sent automatically by the system and provides some support for assisted UI change.
However, I'd feel more comfortable with a more direct approach, i.e. one where a window object could register/subscribe to receive notifications (either events or callbacks) upon particular events, and another part of the code sends out these notifications when required. I could do this in C++ with my own code, but I guess that if wxWidgets already supports something like that, I should make use of it. However, I haven't found anything in that regard.
So, the question is: does wxWidgets support this kind of notification system (or similar alternatives) or would I be best served coding my own?
AFAIK there is nothing directly usable in wxWidgets, but doing it on your own seems easy.
What I would do:
Create a wxEvtHandler-derived class to hold the list of available "clients" in the network. Give this class a wxCriticalSection, and use a wxCriticalSectionLocker for it in all methods that add or delete "clients".
Create a worker thread class by inheriting wxThread to handle your UDP datagrams, using blocking calls. The thread should directly call methods of the client list object whenever a client has to be added or removed. In these methods, update the list of clients and ::wxPostEvent() an event to the object itself (this makes the notification calls run in the main GUI thread).
Handle the event in the client list class, and notify all listeners that the list of clients has changed. The observer pattern seems to me a good fit. You could either call a method of all registered listeners directly, or send a wxCommandEvent to them.
Have you tried calling Update() on the widget(s) that change? Once you update the contents of the combo box, call Update(), and the contents should update.