I am learning Elm and it seems that you make a new VirtualDOM every view "frame".
view : Model -> Html Msg
There is no reference to the previously generated Html value, so there is no option to update it (in a functional way). You must rebuild it from scratch. This is highly inefficient.
How can you reuse the old frame and only update changed nodes (and their ancestors)?
EDIT
As an answer points out, Elm has Html.Lazy. With it and the clever runtime you can avoid repeating most of the work and data allocation (useless garbage-collector pressure is a bad thing), but at the expense of adding a lot of cognitive load on the programmer.
Reasoning about strictness/laziness on the term level (instead of on the type level) is error prone (see Haskell and seq).
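For intuition, the trick behind Html.Lazy can be sketched in a few lines of TypeScript (a rough illustration of the idea, not Elm's actual implementation):

// Rough sketch of the Html.Lazy idea: remember the last argument
// and result of a view function; if the argument is unchanged,
// hand back the cached tree instead of rebuilding it.
function lazy<A, R>(view: (arg: A) => R): (arg: A) => R {
  let lastArg: A | undefined;
  let lastResult: R | undefined;
  return (arg: A): R => {
    if (lastResult !== undefined && arg === lastArg) {
      return lastResult; // same input: reuse the old subtree as-is
    }
    lastArg = arg;
    lastResult = view(arg);
    return lastResult;
  };
}

The price is exactly the cognitive load mentioned above: you must know which equality check applies and remember to wrap the right call sites yourself.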
The perfect solution would be a view function with this signature:
view : Model -> Html Msg -> Html Msg
This way you have access to the previous frame's VirtualDOM and you can share as much of it with the new frame's data structure as you want.
Is this option available? If not, why not?
As you know, Elm uses a "virtual DOM". Your program outputs lightweight objects that describe the DOM structure you want, and the virtual DOM implementation "diffs" the current and new structure, adding, modifying, and removing elements/attributes/properties as required.
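To make "diffs" concrete, here is a deliberately tiny diff/patch sketch in TypeScript; the VNode shape and its handling are invented for illustration and are far simpler than any real virtual DOM:

// Minimal virtual-node shape (leaf text only), for illustration.
type VNode = { tag: string; text?: string; children: VNode[] };

function render(node: VNode): Element {
  const el = document.createElement(node.tag);
  if (node.text !== undefined) el.textContent = node.text;
  node.children.forEach(child => el.appendChild(render(child)));
  return el;
}

// Walk old and new trees together; touch the real DOM only where
// they differ. (Removal of surplus children is omitted for brevity.)
function diff(oldNode: VNode | undefined, newNode: VNode, el: Element): void {
  if (!oldNode || oldNode.tag !== newNode.tag) {
    el.replaceWith(render(newNode)); // rebuild just this subtree
    return;
  }
  if (oldNode.text !== newNode.text) {
    el.textContent = newNode.text ?? '';
  }
  newNode.children.forEach((child, i) => {
    const childEl = el.children[i];
    if (childEl) {
      diff(oldNode.children[i], child, childEl); // unchanged subtrees cost no DOM writes
    } else {
      el.appendChild(render(child)); // child added in the new frame
    }
  });
}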
Of course, there is a small performance penalty to using a virtual DOM over directly manipulating the DOM, but the difference doesn't matter in most web applications. If you find that your application has poor performance in the view function, there are several ways to improve performance, such as Html.Lazy.
Many other popular libraries use a virtual DOM, such as React and Vue. Usually, it's easier to render your application with a virtual DOM (rather than "manually" manipulating the DOM), so many of the popular frameworks implement one.
Related
I am currently learning Angular 6 with @ngrx/store, and one of the tutorials uses @ngrx/store for state management. However, I don't understand the benefit of using @ngrx/store behind the scenes.
For example, for a simple login and signup action, previously we might use a service (let's call it AuthService) to call the backend API, store "userInfo" or "token" in the AuthService, and redirect the user to the "HOME" page; we can then inject AuthService via DI into any component where we need the userInfo. That single AuthService file handles everything.
Now if we are using @ngrx/store, we need to define the Action/State/Reducer/Effects/Selector, which probably needs to be written across 4 or 5 files to handle the action or event above; then sometimes we still need to call the backend API using a service, which seems much, much more complicated and redundant...
In some other scenarios, I have even seen pages use @ngrx/store to store an object or a list of objects, like grid data. Is that some kind of in-memory store usage?
So back to the question: why are we using @ngrx/store over a service-based store here in an Angular project? I know it's for "STATE MANAGEMENT" usage, but what exactly is "STATE MANAGEMENT"? Is it something like a transaction log, and when do we need it? Why would we manage it on the front end? Please feel free to share your suggestions or experience in the @ngrx/store area!
I think you should read these two posts about the Ngrx store:
Angular Service Layers: Redux, RxJs and Ngrx Store - When to Use a Store And Why?
Ngrx Store - An Architecture Guide
While the first one explains the main issues solved by Ngrx Store, it also quotes this statement from the React How-To, which "seems to apply equally to original Flux, Redux, Ngrx Store or any store solution in general":
You’ll know when you need Flux. If you aren’t sure if you need it, you don’t need it.
To me, the Ngrx store solves multiple issues, for example when you have to deal with observables and when responsibility for some observable data is shared between different components. In this case, store actions and reducers ensure that data modifications will always be performed "the right way".
It also provides a reliable solution for HTTP request caching. You can store the requests and their responses, and then check whether the request you're about to make already has a stored response.
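That caching idea can be shown with a plain Map standing in for the store (names here are illustrative; a real Ngrx version would keep the map in the store and fill it from an effect):

// Cache keyed by URL: check for a stored response before fetching.
const responseCache = new Map<string, unknown>();

async function cachedGet(url: string): Promise<unknown> {
  const cached = responseCache.get(url);
  if (cached !== undefined) {
    return cached; // this request already has a stored response
  }
  const response = await fetch(url).then(r => r.json());
  responseCache.set(url, response);
  return response;
}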
The second post is about what made such solutions appear in the React world with Facebook's unread message counter issue.
Concerning your solution of storing non-observable data in services: it works fine when you're dealing with constant data, but when several components have to update this data you will probably encounter change detection and improper update issues, which you could solve with:
the observer pattern, with a private Subject, a public Observable and a next function (a minimal sketch follows this list)
Ngrx Store
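A minimal sketch of the first option, assuming rxjs (CurrentUserService and UserInfo are made-up names):

import { Observable, Subject } from 'rxjs';

interface UserInfo { name: string; token: string; }

export class CurrentUserService {
  // Private Subject: only the service itself can push new values.
  private userSubject = new Subject<UserInfo>();
  // Public Observable: components can subscribe but never emit.
  readonly user$: Observable<UserInfo> = this.userSubject.asObservable();

  setUser(user: UserInfo): void {
    this.userSubject.next(user); // every subscriber sees the update
  }
}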
I'm almost only reading about the benefits of Ngrx and other Redux-like store libraries, while the (in my opinion) costly tradeoffs seem to be brushed off far too easily. This is often the only reason I see given: "The only reason not to use Ngrx is if your app is small and simple." That, I would say, is simply incomplete reasoning and not good enough.
Here are my complaints about Ngrx:
You have logic split out into several different files, which makes the code hard to read and understand. This goes against basic code cohesion and locality principles. Having to jump around to different places to read how an operation is performed is mentally taxing and can lead to cognitive overload and exhaustion.
With Ngrx you have to write a lot more code, which increases the chances of bugs. More code -> more places for bugs to appear.
An Ngrx store can become a dumping ground for all things, with no rhyme or reason. It can become a global hodge podge of stuff that no one can get a coherent overview of. It can grow and grow until no one understands it any more.
I've seen a lot of unnecessary deep object cloning in Ngrx apps, which has caused real performance issues. A particular app I was assigned to work on was taking 40 ms to persist data in the store because of deep cloning of a huge store object. This is over two lost render frames if you are trying to hit a smooth 60 fps. Every interaction felt janky because of it.
Most things that Ngrx does can be done much more simply using a basic service/facade pattern that exposes observables from rxjs subjects.
Just put methods on services/facades that return observables; such a method replaces the reducer, store, and selector from Ngrx. Then put other methods on the service/facade that trigger data to be pushed on these observables; these methods replace your actions and effects from Ngrx. So instead of reducers+stores+selectors you have methods that return observables, and instead of actions+effects you have methods that produce data to the observables. Where the data comes from is up to you: you can fetch something or compute something, and then just call subject.next() with the data you want to push.
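A sketch of what such a facade might look like (ProductsFacade and Product are invented names, and the hard-coded data stands in for whatever fetch or computation you would really do):

import { BehaviorSubject, Observable } from 'rxjs';

interface Product { id: number; title: string; }

export class ProductsFacade {
  // BehaviorSubject so late subscribers get the current value.
  private productsSubject = new BehaviorSubject<Product[]>([]);

  // Replaces reducer + store + selector: a method returning an observable.
  products$(): Observable<Product[]> {
    return this.productsSubject.asObservable();
  }

  // Replaces action + effect: produce the data, then push it.
  loadProducts(): void {
    const products: Product[] = [{ id: 1, title: 'Example' }]; // stand-in for an HTTP call
    this.productsSubject.next(products);
  }
}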
The rxjs knowledge you need in order to use ngrx will already make you competent at using bare rxjs yourself anyway.
If you have several components that depend on some common data, then you still don't need ngrx, as the basic service/facade pattern explicitly handles this already.
If several services depend on common data between them, then you just make a common service between these services. You still don't need ngrx. It's services all the way down, just like it is components all the way down.
For me Ngrx doesn't look so good on the bottom line.
It is essentially a bloated and over-engineered Enterprise™🏢👨💼🤮 grade Rxjs Subject, when you could have just used the good old and trusty Rxjs Subject. Listen to me kids, life is too short for unnecessary complexity. Stick to the bare necessities. The simple bare necessities. Forget about your worries and your strife.
I've been working with NgRx for over three years now. I used it on small projects, where it was handy but unnecessary, and I used it in applications where it was a perfect fit. Meanwhile, I had the chance to work on a project which did not use it, and I must say it would have profited from it.
On the current project I was responsible for designing the architecture of a new FE app. I was tasked with completely refactoring the existing application, which for the same requirements used a non-NgRx approach and was buggy, difficult to understand and maintain, and undocumented. I decided to use NgRx there for the following reasons:
The application has more than one actor over the data. The server uses SSE to push state updates which are independent from user actions.
At application start we load most of the available data, which is then partially updated via SSE.
Various UI elements are enabled/disabled depending on multiple conditions which come from the BE and from user decisions.
The UI has multiple variations. Events from the BE can change currently visible UI elements (texts in dialogs), and even user actions might change how the UI looks and works (a recurring dialog can be replaced by a snackbar if the user clicked some button).
The state of multiple UI elements must be preserved, so that when the user leaves the page and comes back, the same content (or content updated via SSE) is visible.
As you can see, the requirements do not describe a standard CRUD web page. Doing it the "Angular" way brought such complexity to the code that it became super hard to maintain, and what's worse, by the time I joined the team the last two original members were leaving without any documentation of that custom-made, non-NgRx solution.
Now, a year after refactoring the app to use NgRx, I think I can sum up the pros and cons.
Pros:
The app is more organized. The state representation is easy to read, grouped by purpose or data origin, and is simple to extend.
We got rid of many factories, facades and abstract classes which had lost their purpose. The code is lighter, and components are 'dumber', with fewer hidden tricks coming from somewhere else.
Complicated state calculations are made simple using effects and selectors, and most components can now be fully functional just by injecting the store and dispatching actions or selecting the needed slice of the state, while handling multiple actions at once.
Because of updated app requirements we were forced to refactor the store already, and it was mostly Ctrl + C, Ctrl + V and some renaming.
Thanks to Redux DevTools it is easier to debug and optimize (yep, really).
This is most important: even though our state itself is unique, the store management we are using is not. It has support, it has documentation, and it's not impossible to find solutions to difficult problems on the internet.
Small perk: NgRx is another technology you can put on your CV :)
Cons:
My colleagues were new to NgRx and it took some time for them to adapt and fully understand it.
On some occasions we introduced an issue where some actions were dispatched multiple times, and it was difficult to find the cause and fix it.
Our Effects are huge, that's true. They can get messy, but that's what we have pull requests for. And if this code wasn't there, it would still end up somewhere else :)
Biggest issue? Actions are differentiated by their string type. Copy an action, forget to rename it and boom, something different happens than you expect, and you have no clue why (see the sketch after this list).
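A sketch of that copy-paste pitfall, assuming the createAction API from @ngrx/store (the action names are invented):

import { createAction } from '@ngrx/store';

export const saveDraft = createAction('[Editor] Save');

// Copied from the action above: the constant was renamed, but the
// type string was not. Every reducer and effect listening for
// '[Editor] Save' now also matches this action, and the compiler
// has no way to flag it.
export const publishDocument = createAction('[Editor] Save');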
As a conclusion, I would say that in our case NgRx was a great choice. It is demanding at first, but later everything feels natural and logical. Also, when you check the requirements, you'll notice that this is a special case. I understand the voices against NgRx, and in some cases I would support them, but not on this project. Could we have done it the 'Angular' way? Of course; it was done that way before, but it was a mess. It was still full of boilerplate code, things happening in different places without obvious reasons, and more.
Anyone who would have the chance to compare those two versions would say the NgRx version is better.
There is also a third option: keeping data in a service and using the service directly in the HTML, for instance *ngFor="let item of userService.users". When you update userService.users in the service after an add or update action, it is automatically rendered in the HTML; there is no need for any observables, events or store.
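A minimal sketch of that third option (UserService and UserListComponent are illustrative names):

import { Component, Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class UserService {
  users: string[] = [];

  add(user: string): void {
    this.users.push(user); // change detection re-renders bound templates
  }
}

@Component({
  selector: 'user-list',
  // The template reads the service's array directly.
  template: `<div *ngFor="let item of userService.users">{{ item }}</div>`,
})
export class UserListComponent {
  constructor(public userService: UserService) {}
}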
If the data in your app is used in multiple components, then some kind of service to share the data is required. There are many ways to do this.
A moderately complex app will eventually look like a front-end/back-end structure, with the data handling done in services, exposing the data via observables to the components.
At some point you will need to write some kind of API for your data services: how to get data in and out, queries, etc. There will be lots of rules, like immutability of the data and well-defined single paths to modify it. Not unlike the server backend, but much quicker and more responsive than the API calls.
Your API will end up looking like one of the many state management libraries that already exist. They exist to solve difficult problems. You may not need them if your app is simple.
NGRX sometimes has a lot of files and a lot of duplicate code. I am currently working on a fix for this: generic typed classes for certain NGRX state management situations that are very common inside an Angular project, like pagers and object loading from back-ends.
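Such a generic class might look roughly like this (a sketch of the idea, not an existing library API; all names are invented):

// One possible generic shape for the common "load an object from
// a back-end" situation.
interface Loadable<T> {
  data: T | null;
  loading: boolean;
  error: string | null;
}

function initialLoadable<T>(): Loadable<T> {
  return { data: null, loading: false, error: null };
}

function loading<T>(state: Loadable<T>): Loadable<T> {
  return { ...state, loading: true, error: null };
}

function loaded<T>(data: T): Loadable<T> {
  return { data, loading: false, error: null };
}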
Our company is developing a very complex single-page app (something like Excel) with vue.js. There are 10000+ components (each cell is a component) and each component has about 100 reactive props (data items). We also use vuex. It works, but we are worried about its performance (indeed, it performs a little slowly). We have heard that too much reactive data can bring poor performance.
I often hear people say that if it were rewritten in jQuery it would be faster.
My question is: can Vue handle this much reactive data? If not, what is the limit? And if my app performs poorly, is it really caused by Vue itself?
if it is rewritten by jQuery it will be faster
Even if that were true, it would make your app harder to maintain. But this statement is a false dichotomy, as if the choice of framework/library were the deciding factor in determining the application's performance. It's not. However, if you want the best performance, benchmarks have shown time and time again that a tuned vanilla JS application outperforms any framework.
The key to making anything perform well is to design (and implement) it properly. While Vue has many performance improvements built in, there are additional things you can do to improve performance, such as using functional (stateless) components.
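For instance, in Vue 2 a grid cell could be declared as a functional component, which avoids creating a full reactive component instance per cell (grid-cell and its prop are illustrative):

import Vue from 'vue';

// Functional component: no instance, no own reactive state, just a
// render function. Cheap to create, which matters at 10000+ cells.
Vue.component('grid-cell', {
  functional: true,
  props: { value: String },
  render(createElement, context) {
    return createElement('td', context.props.value);
  },
});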
You could also consider React. It doesn't come with the out-of-the-box performance tuning that Vue has, but it makes controlling these things easier. The end result (going back to my original point) will still largely depend on your implementation.
The following link proves that Vue/Vuex is not the problem; perhaps your design is flawed?
This simple test measures the creation of elements in an array and the output to the DOM through a sync and an async loop, on a VueJS reactive attribute. From the UX point of view, the async method offers a better experience as it allows for notifications.
Credits to Pablo Garaguso
I have looked at a few threads discussing passing navigation objects between view models in MvvmCross (e.g. here and here), and I wonder why MvvmCross doesn't have built-in support for serialization of complex types.
Let me clarify. If I have a navigation object that consists of a CustomerName (string) and RecentPurchases (List<Purchase>), where the Purchase type is a class with a few primitive-type properties, then when I pass this navigation object to ShowViewModel, on the receiving side I will get a correct CustomerName and null for RecentPurchases. List<Purchase> is not recognized by MvvmCross as being simple enough for serialization. This can easily be fixed by replacing RecentPurchases with SerializedRecentPurchases and assigning its value like this:
SerializedRecentPurchases = Mvx.Resolve<IMvxJsonConverter>()
.SerializeObject(RecentPurchases);
In a similar manner, the string is deserialized in the ViewModel's Init method.
It is all very simple, but I am a little puzzled why MvvmCross doesn't attempt to perform serialization saving developers from writing these lines of code again and again. I know we have to be careful about passing large amounts of data with navigation objects, but on the other hand it's quite common that navigation (or persistent state) objects may contain collections of simple complex types, so wouldn't it be more practical if MvvmCross supported this scenario out of the box?
The reasons why "simple serialisation" for navigation was introduced in v3 were:
We wanted to remove the dependency of MvvmCross on any Json serializer - we love Json.Net and we love ServiceStack Text but we wanted people to be able to ship apps with neither of these if they wanted to
We intended that it would be easy to switch back to Json if people wanted to
this should be possible using just one line in setup - but there's a bug currently logged against this - see https://github.com/MvvmCross/MvvmCross/issues/450
even with that open bug, it's still easy to do with ~4 lines using a base class and code like that shown in your question or in the linked question
there are also ways that the simple serialisation should be extensible to more complex objects - but those are also linked to that 450 issue.
We wanted to make it more obvious to people that serialisation was taking place (it feels like 'why can't I pass an object' is an FAQ)
We wanted to try to discourage people from serialising large objects
because this is slow
and because WindowsPhone in particular has quite small limits on the size of the Xaml Uri that can be used (there is a .Net Uri limit of ~2050 characters, but beneath that I believe the WP limit is smaller still - about 1100 characters)
wouldn't it be more practical if MvvmCross supported this scenario out of the box?
Possibly - and that is the intention of the "1 line setup change" which https://github.com/MvvmCross/MvvmCross/issues/450 is currently blocking
There are some situations where passing complex list-based objects might be convenient - and there are several platforms which don't have the navigation limits of WindowsPhone.
To help with this, one of the key objectives of MvvmCross v3 was "Project CHIMP" also known as "CrossLight". The aim of CHIMP was to split MvvmCross apart into separate CrossCore, Binding, Mvvm and plugin layers - the idea being that this structure should make it easier for others to build their own app frameworks. Because of this, it should be easy for others now to provide alternative frameworks - perhaps including completely different navigation service patterns.
There's more on Project Chimp/CrossLight in:
the v3 progress slideshow - http://slodge.blogspot.co.uk/2013/03/hot-tuna-mvvmcross-v3-progress.html
a Chimp is born - http://slodge.blogspot.co.uk/2013/03/a-chimp-is-born-mvvm-on-monodroid.html
CrossLight for Droid - http://slodge.blogspot.co.uk/2013/06/n30-crosslight-aka-project-chimp-n1.html
CrossLight for iOS - http://slodge.blogspot.co.uk/2013/09/n39-crosslight-on-xamariniosmonotouch.html
However, within MvvmCross itself I personally would still recommend against passing large complex objects during navigation - very few of my navigation objects are temporary, so to me it generally "feels better" to pass keys to objects rather than the objects themselves.
How does one pass a complex object to the target Page using the NavigationService.Navigate method?
Unfortunately you can't do that. It kind of makes sense, because the idea is to provide deep linking support for pages/views, but it's definitely annoying that you can't do it. The options you have are:
For small objects, you could serialise them and pass them to the next view in the query string, although I'd recommend against that approach (different browsers support different maximum URL lengths, and the object may also be out of date if the user bookmarks that page and returns to it).
Store the object in a global cache, from which the view being navigated to can access it. Not nice, but it will work.
The Navigation Framework source code is a part of the Silverlight Toolkit. You could modify this to support complex objects, but I'd strongly recommend against doing so.
Use the MVVM pattern, with one view model used to manage multiple views, and therefore the object would be available to all those views.
Hope this helps...
Chris
P.S. I discuss this in my book Pro Business Applications with Silverlight 4, although only in as much depth as above as there's not a particularly nice solution to the problem :).
Can someone explain to me what interface bloat in OOP is (preferably with an example)?
G'day,
Assuming you mean API and not GUI, for me I/F bloat can happen in several ways.
An API just keeps getting extended and extended with new functions without any form of segregation so you finish up with a monolithic header file that becomes hard to use.
The functions declared in an existing API keep getting new parameters added to their signatures so you have to keep upgrading and your existing applications are not backwards compatible.
Functions in an existing API keep getting overloaded with very similar variants which can lead to difficulty selecting the relevant function to be used.
To help with this you can:
Separate out the API into a series of headers and libraries so you can more easily control what parts you actually need. Any internal dependencies should be resolved automatically by the vendor so the user doesn't have to find out the dependencies by trial and error, e.g. that I need to include header file wibble.h when I only wanted to use the functions in the API declared in the shozbot.h header file.
Make upgrades to the API backwards compatible by introducing overloading where applicable. But you should group the overloaded functions into categories, e.g. if a new set of overloaded functions is added to an existing API, say our_api.h, to adapt it to a new technology, say SOA, then they are provided separately in their own header file our_api_soa.h in addition to the existing header our_api.h.
HTH
Think of an OO language where all methods are defined in Object, even though they are only meaningful for some subclasses. That would be the most extreme example.
Most Microsoft products?
Interface bloat is having too much on the screen at once, particularly elements that are little used or are confusing in their function. Probably an easier way to describe interface bloat is to look at something that does not have it: try Basecamp from 37signals. There are only a few tabs and a few links in the header.
Interface bloat can be remedied by collapsible panes (using JavaScript, for example), or drill-down menus that hide less often used choices until they are needed.
Interface bloat is the gradual addition of elements that turn what may have been a simple, elegant interface into one littered with buttons, menus, options, etc. all over the place that ruin the original cohesive feel of the application. One example that comes to mind for me is iTunes. In its early renditions it was quite simple, but it has, over time, added quite a lot of features that might qualify as bloat (iTunes DJ, Cover Flow, Genius).
Interface bloat is sometimes caused by trying to have every feature one click away, as in this humorous example:
Too many toolbar buttons
(Although funny, this example isn't fair to Firefox because in this example the user added all those toolbars)
A UI design technique called "progressive disclosure" is one way to reduce interface bloat. Only expose the most frequently-used features as a top-level click. If you have less-frequently-used features that are still valuable enough to include in your app, group them in a logical way, e.g. behind a dropdown menu or other navigation element.
Learning by example:
http://img46.imageshack.us/img46/5127/ofilematrix.png
An extreme example of interface bloat that most C++ programmers will be familiar with is std::basic_string: pages of member functions with only small variations. Most of these functions wouldn't have had to be member functions but could have been free functions in a string utility library.
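The same contrast can be sketched in TypeScript terms (the class and functions are invented to mirror the basic_string criticism):

// Bloated style: every small variation becomes a member function,
// so the type's interface grows page after page.
class FatString {
  constructor(private s: string) {}
  padLeft(width: number): string { return this.s.padStart(width); }
  padRight(width: number): string { return this.s.padEnd(width); }
  doubled(): string { return this.s + this.s; }
  // ...dozens more near-duplicates...
}

// Leaner style: keep the type minimal and put the variations in a
// utility module as free functions.
function padLeft(s: string, width: number): string {
  return s.padStart(width);
}
function padRight(s: string, width: number): string {
  return s.padEnd(width);
}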