How do I update my data based on the query parameter in vue-router - vue.js

I am new to vue-router and I am facing this problem.
Here is my code:
filter() {
  this.$router.push({
    query: {
      bathrooms: this.selectedBathroom,
      beds: this.selectedBed,
      minPrice: this.selectedMinPrice,
      maxPrice: this.selectedMaxPrice,
      minLandSize: this.selectedMinLandSize,
      maxLandSize: this.selectedMaxLandSize,
      minLotSize: this.selectedMinLotSize,
      maxLotSize: this.selectedMaxLotSize,
      minYear: this.selectedMinYear,
      maxYear: this.selectedMaxYear,
    },
  })
}
When the user clicks the filter button, the filter method runs, but the data doesn't update. And when the user goes back to the previous page, the query params are removed, but the data still isn't updated.
Is there a way I can work around this?

There is no path or name value here to actually initiate a change to the page route. If you want to simply reload the existing page with a new set of query parameters see this information from the documentation:
Note: If the destination is the same as the current route and only params are changing (e.g. going from one profile to another /users/1 -> /users/2), you will have to use beforeRouteUpdate to react to changes (e.g. fetching the user information).
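For example, here is a minimal sketch of reacting to query changes on the same route, either by watching $route.query or with the beforeRouteUpdate guard mentioned in the note; loadListings is a hypothetical method standing in for whatever fetches your filtered data:

// Sketch only (vue-router 3 style): loadListings is a hypothetical
// method that fetches filtered results using the given query params.
watch: {
  '$route.query': {
    immediate: true,
    handler(query) {
      this.loadListings(query)
    },
  },
},
// or, using the in-component guard from the note above:
beforeRouteUpdate(to, from, next) {
  this.loadListings(to.query)
  next()
},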
In your comments you mentioned that you are making a request to a server. It's hard to give a full answer without more code from your application, but query parameters might not be the most straightforward way to approach your problem. You should explore setting up a fetch request in your method, or using axios to interact with the server (assuming you're communicating via an API), to build a communication pattern that is more reliable and scales better.

Related

Fetch data after an update

VueJS + Quasar + Pinia + Axios
Single page application
I have an entity called user with 4 endpoints associated:
GET /users
POST /user
PUT /user/{id}
DELETE /user/{id}
When I load my page I call the GET and save the response slice of users inside a store (userStore).
POST and PUT return the created/updated user in the body of the response.
Is it good practice to manually update the slice of users in the store after calling one of these endpoints, or is it better to call the GET immediately after?
If you own the API or can be sure about what the PUT/POST methods return, you can use local state manipulation. Those endpoints should return the same shape of data that the GET endpoint returns for each item; otherwise, you might end up with incomplete or wrong data in the local state.
By mutating the state locally without making an extra GET request, the user can immediately see the change in the browser. It is also kinder to your server and to the user's data usage.
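As a rough sketch (the store name, the axios instance path, and the id field are assumptions; the endpoint paths follow the question), patching the store locally after a POST or PUT could look like this:

// Minimal sketch, assuming a Pinia store named 'user' and an axios
// instance exposed by a hypothetical boot file.
import { defineStore } from 'pinia'
import api from 'src/boot/axios' // hypothetical axios instance

export const useUserStore = defineStore('user', {
  state: () => ({ users: [] }),
  actions: {
    async fetchUsers() {
      this.users = (await api.get('/users')).data
    },
    async createUser(payload) {
      // POST returns the created user, so patch local state directly
      const created = (await api.post('/user', payload)).data
      this.users.push(created)
    },
    async updateUser(id, payload) {
      // PUT returns the updated user; replace it in place
      const updated = (await api.put(`/user/${id}`, payload)).data
      const i = this.users.findIndex(u => u.id === id)
      if (i !== -1) this.users.splice(i, 1, updated)
    },
  },
})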
However, if creating the resource (the user, in this case) is a really common operation performed by lots of users, then calling the GET endpoint to return a fresh slice would be better, since it is more likely to include records created by other users in the meantime. But in that case, listening to real-time events (e.g. via WebSockets) would be even better, to ensure everyone gets accurate, up-to-date data in real time.

How to return Vuex-generated page to client on initial Vue load?

I have a Vue / Nuxtjs app which displays lots of user-provided content (think of it as a crowdsourced blog). The content on the client is retrieved and stored in Vuex. When a page is loaded, it displays the current content and then uses fetch to get the updated data. Here is a typical component:
fetch() {
  this.$store.dispatch('feeds/refreshLatest')
},
computed: {
  feed() {
    return this.$store.state.feeds.latest
  }
}
where feeds/refreshLatest uses axios to retrieve the posts.
This works quite well. The problem is the initial load is very slow, especially on the front page which has to process and display dozens of articles.
I have SSR enabled, and would like the server to store the content, and then on initial load provide a rendered page to the client. However, the Vuex object on the server seems to be new for each request, and so the client has to wait for the entire set of articles to be fetched before anything is displayed, which is unacceptable. Doing all the fetches only on the client solves this problem, but it is still too slow.
I thought I could somehow reuse the same server-side Vuex store on each call and send it to the client with nuxtServerInit, but I don't see a way to share the store like that. Thank you for any pointers or other packages which could help.
If I understand correctly, the problem is that on every request the server waits for the API call to finish during rendering before the DOM is sent to the client, and because this happens every time, it is slow?
I solved similar issues using cookies, because cookies are also available during server-side rendering. I used the method below.
Store the data in a cookie after the initial API call, and serve the data from the cookie first. (If the cookie is present, do not call the API from the server.)
Then call the API from the client to update the data.
I use this library:
https://github.com/microcipcip/cookie-universal/tree/master/packages/cookie-universal-nuxt#readme
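For reference, a minimal sketch of that flow in a Nuxt 2 store/index.js. The feeds/setLatest mutation, the endpoint, and the use of @nuxtjs/axios are assumptions, and cookies are limited to roughly 4 KB, so this only works for small payloads:

// Sketch only: assumes cookie-universal-nuxt and @nuxtjs/axios are
// registered; 'feeds/setLatest' and the endpoint are hypothetical names.
export const actions = {
  async nuxtServerInit({ commit }, { app }) {
    const cached = app.$cookies.get('latestFeed')
    if (cached) {
      // Serve the cached feed immediately; the client refreshes it later.
      commit('feeds/setLatest', cached)
      return
    }
    const latest = await app.$axios.$get('/api/feeds/latest')
    commit('feeds/setLatest', latest)
    // maxAge is in seconds; keep the payload small enough for a cookie
    app.$cookies.set('latestFeed', latest, { path: '/', maxAge: 60 })
  },
}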

Change results URL in Alfresco AIkau faceted search page

I have some difficulties customizing the Aikau faceted search page in Alfresco, which may be more a matter of my lack of knowledge of Dojo/AMD.
What I want to do is to replace the document details page URL by a download URL.
I extended the Search Results widget to include my own custom module:
var searchResultWidget = widgetUtils.findObject(model.jsonModel, "id", "FCTSRCH_SEARCH_RESULT");
if (searchResultWidget) {
  searchResultWidget.name = "mynamespace/search/CustomAlfSearchResult";
}
I understand search result URLs are rendered this way:
AlfSearchResult module => uses SearchResultPropertyLink module => mixins _SearchResultLinkMixin renderer => bring the "generateSearchLinkPayload" function => renders URLs depending on the result type
I want to override this "generateSearchLinkPayload" function but I can't figure out the best way to do that.
Thanks in advance for the help!
This answer assumes you're able to use the latest version of Aikau (at the time of writing this is 1.0.61). Older versions might require slightly different overriding...
In order to do this you're going to need to override the createDisplayNameRenderer function of AlfSearchResult in your CustomAlfSearchResult widget. This will allow you to create an extension of alfresco/search/SearchResultPropertyLink.
If you want to take advantage of the download capabilities provided by the alfresco/services/DocumentService for downloading both documents and folders (as a zip), then you're going to want to change both the publishTopic and publishPayload of the SearchResultPropertyLink.
You should extend the getPublishTopic and generateSearchLinkPayload functions. For getPublishTopic you'll want to change the return value to "ALF_SMART_DOWNLOAD" (there are constants available for these strings in the alfresco/core/topics module). This topic tells the DocumentService to take care of figuring out whether the node is a folder or a document, and it will make an XHR request for the full node metadata (in order to get the contentUrl attribute, which is not included in the data returned by the Search API).
You should extend the generateSearchLinkPayload function so that, for document or folder types, the payload contains a nodes attribute: a single-element array containing the search result object that you wish to download.
I would recommend that you call this.inherited first to get the default payload and only update it for documents and folders.
Hopefully that all makes sense - if not, add a comment and I'll try to provide further assistance!
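For illustration, here is a skeleton of what the extended link widget might look like. This is only a sketch: the currentItem property, the type values, and the inherited function signatures are assumptions that you should verify against the _SearchResultLinkMixin source in your Aikau version.

// Rough sketch only — verify property names and signatures against
// your Aikau version before relying on this.
define(["dojo/_base/declare",
        "alfresco/search/SearchResultPropertyLink"],
  function(declare, SearchResultPropertyLink) {

  return declare([SearchResultPropertyLink], {

    getPublishTopic: function() {
      // A constant for this string is also available in alfresco/core/topics
      return "ALF_SMART_DOWNLOAD";
    },

    generateSearchLinkPayload: function() {
      // Get the default payload first, then only change it for the
      // types we want to download rather than navigate to.
      var payload = this.inherited(arguments);
      var type = this.currentItem && this.currentItem.type; // assumed property
      if (type === "document" || type === "folder") {
        payload = { nodes: [this.currentItem] };
      }
      return payload;
    }
  });
});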
This is the answer for 1.0.25.2 - unfortunately it's not quite so straightforward...
You still need to extend the alfresco/search/AlfSearchResult widget, however this time you need to extend the postCreate function (remembering to call this.inherited(arguments)). It's not possible to stop the original alfresco/search/SearchResultPropertyLink widget from being created... so it will be necessary to find it and destroy it.
The widget is not assigned to a variable, so it will be necessary to find it using dijit/registry. Use the byNode function from dijit/registry to find the widget assigned to this.nameNode and then call destroy on it (be sure to pass the argument true to preserve the DOM). However, you will then need to empty the DOM node so that you can start again...
Now you need to add in your extension to alfresco/search/SearchResultPropertyLink. Unfortunately, because the smart download capability is not available, you'll need to do more work. The difference here is that you'll need to make an XHR request to retrieve the full node metadata in order to obtain the contentURL. It's possible to publish a request to the DocumentService (via the "ALF_RETRIEVE_SINGLE_DOCUMENT_REQUEST" topic). However, you need to be aware that having the XHR step will not allow you to then proceed with the download as is. Instead you'll need to use an iframe download solution; I'd suggest you take a look at the changes in the pull request we recently made to solve this problem and backport them into your own solution.
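A skeleton of the postCreate override described above, sticking to the steps in this answer (the replacement link widget itself is left as a comment, since its configuration depends on your extension):

// Skeleton only — follows the steps above; instantiate your own
// extended link widget where indicated.
define(["dojo/_base/declare",
        "alfresco/search/AlfSearchResult",
        "dijit/registry",
        "dojo/dom-construct"],
  function(declare, AlfSearchResult, registry, domConstruct) {

  return declare([AlfSearchResult], {

    postCreate: function() {
      this.inherited(arguments);

      // Find and destroy the default SearchResultPropertyLink,
      // passing true to preserve the DOM it was attached to...
      var defaultLink = registry.byNode(this.nameNode);
      if (defaultLink) {
        defaultLink.destroy(true);
      }

      // ...then empty the node so you can start again...
      domConstruct.empty(this.nameNode);

      // ...and place your own extended link widget into this.nameNode here.
    }
  });
});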

Pass data across Hapi JS application

I want to detect current selected language from a domain (like es.domain.com, de.domain.com) so I need to pass it to all non static route handlers and to all views.
To detect the language I need the request object. But the global view context can only be updated where the request object is not accessible (in server.views({})). Likewise, server.bind (to pass data to route handlers) only works where the request object is not accessible.
Hapi version: 11.1.2
You could try something like this:
server.ext('onPreResponse', function (request, reply) {
    if (request.response.variety === 'view') {
        request.response.source.context.lang = request.path;
    }
    reply.continue();
});
This will attach a lang data point to the context that is being sent into the view. You'll have to extract the lang from the URL yourself, as request.path is probably not what you actually want.
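For example, if the language always comes from the subdomain, the extension could derive it roughly like this (the supported-language list and the fallback are assumptions):

// Sketch: derive the language from the subdomain (es.domain.com -> 'es').
// request.info.host may include a port, but the first '.' segment is
// still the subdomain; the supported list and 'en' fallback are assumptions.
server.ext('onPreResponse', function (request, reply) {
    if (request.response.variety === 'view') {
        var subdomain = request.info.host.split('.')[0];
        var supported = ['es', 'de', 'en'];
        var context = request.response.source.context ||
            (request.response.source.context = {});
        context.lang = supported.indexOf(subdomain) !== -1 ? subdomain : 'en';
    }
    reply.continue();
});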
Also, if you look here you'll see that a few pieces of request data are made available to every view via reply.view(). If the locale/language is available directly in one of those data points, or can be derived from them, you can skip the extension point approach entirely.
Again, this is assuming version 10+ of hapi. If you're using an older version, the extension point method is your best bet.

Flux without data caching?

Almost all examples of Flux involve caching data on the client side; however, I don't think I would be able to do this for a lot of my application.
In the system I am thinking about using React/Flux for, a single user can have hundreds of thousands of records of the main piece of data we store (and one record probably has at least 75 data properties). Caching this much data on the client side seems like a bad idea and probably makes things more complex.
If I were not using Flux, I would just have an ORM-like system that can talk to a REST API, in which case a request like userRepository.getById(123) would always hit the API regardless of whether I requested that data on the previous page. My idea is to just have the store expose these methods.
Does Flux consider it bad practice if a request for data always hits the API and never pulls from a local cache instance? Can I use Flux in a way where the majority of data retrieval requests always hit an API?
The closest you can sanely get to no caching is to reset any store state to null or [] when an action requesting new data comes in. If you do this you must emit a change event, or else you invite race conditions.
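A minimal sketch of that reset-then-notify pattern with the facebook flux Dispatcher (the action type names are made up for illustration):

// Sketch: reset the store when a new request starts and emit a change
// immediately, so stale data can't win a race against the new request.
var Dispatcher = require('flux').Dispatcher;
var EventEmitter = require('events').EventEmitter;

var dispatcher = new Dispatcher();

var UserStore = Object.assign({}, EventEmitter.prototype, {
  users: null, // null means "not loaded / loading"
  emitChange: function () {
    this.emit('change');
  }
});

dispatcher.register(function (action) {
  switch (action.type) {
    case 'USERS_REQUESTED':
      UserStore.users = null;
      UserStore.emitChange();
      break;
    case 'USERS_RECEIVED':
      UserStore.users = action.users;
      UserStore.emitChange();
      break;
  }
});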
As an alternative to flux, you can simply use promises and a simple mixin with an api to modify state. For example, with bluebird:
var promiseStateMixin = {
  thenSetState: function(updates, initialUpdates){
    // promisify setState
    var setState = this.setState.bind(this);
    var setStateP = function(changes){
      return new Promise(function(resolve){
        setState(changes, resolve);
      });
    };
    // if we have initial updates, apply them and ensure the state change happens
    return Promise.resolve(initialUpdates ? setStateP(initialUpdates) : null)
      // wait for our main updates to resolve (Promise.props resolves
      // every promise-valued property of the updates object)
      .then(function(){
        return Promise.props(updates);
      })
      // apply our unwrapped updates
      .then(function(resolvedUpdates){
        return setStateP(resolvedUpdates);
      }).bind(this);
  }
};
And in your components:
handleRefreshClick: function(){
  this.thenSetState(
    // users is Promise<User[]>
    {users: Api.Users.getAll(), loading: false},
    // we can't do our own setState due to unlikely race conditions
    // instead we supply our own here, but don't worry, the
    // getAll request is already running
    // this argument is optional
    {users: [], loading: true}
  ).catch(function(error){
    // the rejection reason for our getUsers promise
    // `this` is our component instance here
    error.users
  });
}
Of course this doesn't prevent you from using Flux when and where it makes sense in your application. For example, react-router is used in many React projects, and it uses Flux internally. React and related libraries/patterns are designed to only help where desired, and never control how you write each component.
I think the biggest advantage of using Flux in this situation is that the rest of your app doesn't have to care that data is never cached, or that you're using a specific ORM system. As far as your components are concerned, data lives in stores, and data can be changed via actions. Your actions or stores can choose to always go to the API for data or cache some parts locally, but you still win by encapsulating this magic.