How does Blazor carry objects across servers when load balancing - blazor-server-side

I have some Blazor server side code that I can reduce down to this little bit:
@foreach (var employee in Employees)
{
    <tr>
        <td class="btn-group-sm">
            <button class="btn btn-outline-danger"
                    @onclick="() => HandleDelete(employee)">
                Delete
            </button>
        </td>
    </tr>
}
On a click, that calls HandleDelete(employee), passing the employee object that Blazor has persisted (in per-session state, I assume) for use when the click occurs.
However, if I have multiple servers behind a load balancer, what happens if the submit event is sent to a different server? That other server won't have the employee object (I believe).
Does this all depend on the HTTPS session staying connected to the original server, with all subsequent communication going to that server?
And if so, what happens in a fail-over situation? In that case it's definitely a new server getting that submit action.

Related

How to return Vuex-generated page to client on initial Vue load?

I have a Vue / Nuxtjs app which displays lots of user-provided content (think of it as a crowdsourced blog). The content on the client is retrieved and stored in Vuex. When a page is loaded, it displays the current content and then uses fetch to get the updated data. Here is a typical component:
fetch() {
  this.$store.dispatch('feeds/refreshLatest')
},
computed: {
  feed() {
    return this.$store.state.feeds.latest
  }
}
where feeds/refreshLatest uses axios to retrieve the posts.
This works quite well. The problem is the initial load is very slow, especially on the front page which has to process and display dozens of articles.
I have SSR enabled, and would like the server to store the content, and then on initial load provide a rendered page to the client. However, the Vuex object on the server seems to be new for each request, and so the client has to wait for the entire set of articles to be fetched before anything is displayed, which is unacceptable. Doing all the fetches only on the client solves this problem, but it is still too slow.
I thought I could somehow use the same server-side Vuex store on each call and send it to the client with nuxtServerInit, but I don't see a way to share the Vuex store across requests. Thank you for any pointers or other packages which could help.
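(To make the idea being described concrete: nuxtServerInit with a module-level cache that survives across SSR requests would look roughly like the sketch below. The action, mutation, and helper names are made up, and such a cache would be shared by every visitor.)
// store/index.js — rough sketch only; names are made up
let cachedLatest = null // module-level, so it survives across SSR requests

export const actions = {
  async nuxtServerInit({ commit }) {
    if (!cachedLatest) {
      cachedLatest = await fetchLatestArticles() // hypothetical API helper
    }
    commit('feeds/setLatest', cachedLatest) // assumed mutation name
  }
}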
To clarify the question: after the fetch finishes during server-side rendering, the DOM is sent down to the client, and this whole process runs on every request, which is what makes it slow?
I solved similar issues using cookies, because cookies are also available during server-side rendering. I used the method below.
Store the data in a cookie after the initial API call, and serve the data in the cookie to the client first. (If the cookie is present, do not call the API from the server.)
Call the API from the client to update the data.
I use this library:
https://github.com/microcipcip/cookie-universal/tree/master/packages/cookie-universal-nuxt#readme
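A rough sketch of that flow with cookie-universal-nuxt (the cookie name and the store action/mutation names are assumptions, not the poster's actual code; note cookies are limited to roughly 4 KB, so this only suits small payloads):
// in the page component — fetch() runs on the server for the first page load
async fetch() {
  const cached = this.$cookies.get('latest-feed') // cookie name is made up
  if (cached) {
    // render immediately from the cookie, then refresh in the background
    this.$store.commit('feeds/setLatest', cached) // assumed mutation name
    this.$store.dispatch('feeds/refreshLatest')
  } else {
    await this.$store.dispatch('feeds/refreshLatest')
    this.$cookies.set('latest-feed', this.$store.state.feeds.latest, {
      path: '/',
      maxAge: 60 * 5 // keep the cached feed for five minutes
    })
  }
}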

Why do browsers with an "offline" option still behave mostly like apps "online"?

tl;dr: what's the logic behind browsers (Chrome, FF, Safari) behaving as an app that's online after clicking offline and not simply... go offline?
CPP, FOP, STP
I have a small socket.io app that fetches from Twitter's API to make an image gallery.
I wanted to style the divs that create a frame around the photos, but found that while the app is running, whenever a new image was added, Chrome emits a "purple pulsing" (hereafter referred to as CPP) in the dev tools that kicked me out of the div I wanted to style and (rudely) put me at its parent div (the Gallery proper, if you will).
Voilà:
I started by shutting off my WiFi, which solved the problem with two drawbacks:
remembered the offline option in the network panel
needed a connection to read the socket.io docs :~)
Next I tried the offline option and found that, like the production version, CPP reëmerged, the image requests logging net::ERR_INTERNET_DISCONNECTED.
I realized that I could probably set the option reconnection: false in the socket.io bit, but alas, this novella of a question (which contains multitudes) still beckoned:
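(For reference, that option goes on the client's connect call:)
// ask the client not to re-establish the connection once it drops
var socket = io.connect('/', { reconnection: false });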
The Actual Question(s)
What chez Google (and Firefox (Orange Pulse), Safari (Transparent Pulse), et al.) is the logic behind this behavior?
Why not Truly Sever the relevant tab's connection?
Better yet, why not let the poor developer both hold fast to their element and acknowledge visually that new elements are being thrown in?
The images are still fetched (!) which makes the Offline option seem even more misleading.
The docs from Google reference PWAs and those with service workers... does
Check the Offline checkbox to simulate a completely offline network experience.
apply only to them?
The Code that Kinda Could:
Here are the ~20 relevant lines at play (and here's the whole gig):
// app.js
var fs = require('fs')
var Twit = require('twit')
var app = require('http').createServer(handler)
var io = require('socket.io')(app)
var T = new Twit(config) // config holds the Twitter API credentials (defined elsewhere)
var stream = T.stream('statuses/filter', { track: '#MyHashtag' })
// broadcast each matching tweet to every connected client
stream.on('tweet', function (tweet) { io.sockets.emit('tweet', tweet) })
// serve index.html for every request
function handler(request, response) {
  fs.createReadStream(__dirname + '/index.html').pipe(response)
}
... and the index.html's relevant script:
// index.html
var socket = io.connect('/');
socket.on('tweet', function (tweet) {
  // someConditions/foo stand in for the app's real display checks
  if (someConditions === foo) {
    tweet_container.innerHTML =
      '<img src="' + tweet.user.profile_image_url + '" />';
  }
});
Nota Bene: I realize this question contains questions germane to polling, streams, networking, and topics whose names I'm not even familiar with, but my primary curiosity is "what's the logic behind behaving as an app that's online after clicking offline and not simply... going offline?" (and behaving as it does when disconnecting from WiFi).
P.P.S. here's a quote from knee-deep in the socket.io docs:
If a certain client is not ready to receive messages (because of network slowness or other issues, or because they’re connected through long polling and is in the middle of a request-response cycle), if it doesn’t receive ALL the tweets related to bieber your application won’t suffer.
In that case, you might want to send those messages as volatile messages.
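In this app's terms, the server-side emit from app.js above would become (a small sketch using socket.io's volatile flag):
// a client that isn't ready to receive right now will simply miss the tweet
stream.on('tweet', function (tweet) {
  io.sockets.volatile.emit('tweet', tweet)
})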

Socket.io Rooms in a Hostile Network Environment?

I have a very frustrating problem with a client's network environment, and I'm hoping someone can lend a hand in helping me figure this out...
They have an app that for now is written entirely inside of VBA for Excel. (No laughing.)
Part of my helping them improve their product and user experience involved converting their UI from VBA form elements to a single WebBrowser element that houses a rich web app which communicates between Excel and their servers. It does this primarily via a socket.io server/connection.
When the user logs in, a connection is made to a room on the socket server.
Initial "owner" called:
socket.on('create', function (roomName, userName) {
  socket.username = userName;
  socket.join(roomName);
});
Followup "participant" called:
socket.on('adduser', function (userName, roomName){
socket.username = userName;
socket.join(roomName);
servletparam = roomName;
var request = require('request');
request(bserURL + servletparam, function (error, response, body) {
io.sockets.to(roomName).emit('messages', body);
});
servletparam = roomName + '|' + userName;
request( baseURL + servletparam, function (error, response, body) {
io.sockets.to(roomName).emit('participantList', body);
});
});
This all worked beautifully well until we got to the point where their VBA code would lock everything up, causing the socket connection to get lost. When the client surfaces from its forced, VBA-induced pause (which lasts anywhere from 20 seconds to 3 minutes), I try to join the room again by passing an onclick to an HTML element that triggers a script to rejoin. Oddly, that doesn't work. However, if I wait a few seconds and click the element by hand, it does rejoin the room. Yes, the click is getting received from the Excel file... we see the message reach the socket server, but it doesn't allow that call to rejoin the room.
Here's what makes this really hard to debug. There's no way to see a console in VBA's WebBrowser object, so I use weinre as a remote debugger, but a) it seems not to output logs and errors to the console unless I trigger them from the console itself, and b) it loses its connection when socket.io does, and I'm dead in the water.
Now, for completeness, if I remove the .join() calls and the .to() calls, it all works like we'd expect it to minus all messages being written into a big non-private room. So it's an issue with rejoining rooms.
As a long-time user of StackOverflow, I know that a long question with very little code is frowned upon, but there is absolutely nothing special about this setup (which is likely part of the problem). It's just simple emits and broadcasts (from the client). I'm happy to fill anything in based on followup questions.
To anyone who might run across this in the future...
The answer is to manage your room reconnection on the server side of things. If your client can't make reliable connections, or is getting disconnected a lot, the trick is to keep track of the rooms on the server side and join them again when the client reconnects.
The other piece of this that was a stumper was that the chat server and the web UI weren't on the same domain, so I couldn't share cookies to know who was connecting. In their case there was no need to host them in two different places, so I merged them, had Express serve the UI, and then when the client surfaced after a forced disconnect, I'd look at their user ID cookie, match them to the rooms I had tracked for them on the server, and rejoin them.
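A minimal sketch of that server-side bookkeeping (the cookie-parsing helper and the user-to-rooms map are assumptions, not the actual code):
var roomsByUser = {}; // userId -> list of room names, kept on the server

io.sockets.on('connection', function (socket) {
  // read the user ID from the cookie sent with the handshake (assumed helper)
  var userId = parseUserIdCookie(socket.handshake.headers.cookie);

  // rejoin every room this user was in before the disconnect
  (roomsByUser[userId] || []).forEach(function (roomName) {
    socket.join(roomName);
  });

  socket.on('create', function (roomName, userName) {
    socket.username = userName;
    socket.join(roomName);
    roomsByUser[userId] = (roomsByUser[userId] || []).concat(roomName);
  });
});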

dojo load js script and then execute it

I am trying to load a template with XHR and then append it to the page in some div.
The problem is that the page loads the script but doesn't execute it.
The only solution I found is to add a flag in the page (say: "Splitter"): before the splitter I put the JS code, and after the splitter I add the HTML code; when getting the template by Ajax, I split it. Here is an example:
the data I request by ajax is:
//js code:
work_types = <?php echo $work_types; ?>; //json data
<!-- Splitter -->
html code:
<div id="work_types_container"></div>
so the callback returns 'data', which I simply split and execute like this:
data = data.split("<!-- Splitter -->");
dojo.query("#some_div").append(data[1]); // html part
eval(data[0]); // js part
This works for me, but it doesn't seem very professional!
Is there another way in Dojo to make it work?
If you're using Dojo, it might be worth looking at the dojox/layout/ContentPane module (reference guide). It's quite similar to the dijit/layout/ContentPane variant, but with one special extension: it allows executing the JavaScript on that page (using eval()).
So if you don't want to do all that work by yourself, you could do something like:
<div data-dojo-type="dojox/layout/ContentPane" data-dojo-props="href: myXhrUrl, executeScripts: true"></div>
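Or, if you prefer to create the pane programmatically, a small sketch (the URL and node ID are placeholders, not values from the question):
require(["dojox/layout/ContentPane", "dojo/domReady!"], function (ContentPane) {
  var pane = new ContentPane({
    href: "/templates/work_types.php", // placeholder URL for the template
    executeScripts: true               // run the script blocks in the response
  }, "work_types_container");          // placeholder node ID
  pane.startup();
});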
If you're concerned about it being a DojoX module (DojoX will disappear in Dojo 2.0), the module is labeled as maintained, so it has a higher chance of being integrated into dijit in later versions.
As an answer to your eval() safety question (in the comments): well, it's allowed, of course, or they wouldn't have a function called eval(). But it is indeed less secure, because the client in effect trusts the server and executes everything the server sends it.
Normally there are no problems, unless the server sends malicious content (due to an issue on your server, or a man-in-the-middle attack), which will then be executed, causing an XSS vulnerability.
In the ideal world the server only sends data, and the client interprets that data and renders it itself. In this design the client only trusts data from the server, so no malicious logic can be executed (and there is no XSS vulnerability).
That scenario is unlikely, and the ideal-world solution is not even possible in many cases, since the initial page request (loading your web page) is a similar scenario in which the client executes whatever the server sends.
Web application security is not about being 100% safe (that's impossible); it's about leaving as few open doors as possible for attackers. It's up to you to decide what you consider safe, and to verify whether the "ideal world" solution is possible in this specific scenario (it might not be, or it might take too much time compared to the other solution).

Stateful session with nhibernate and asp.net

Let's say I have a client who is filling out a form on a website; the underlying persistence layer is NHibernate.
Now the series of events goes like this:
the user fills out the form.
he submits the form.
the NHibernate session factory is created and the customer object is saved to the database.
the database commits the insert, with the id assigned by NHibernate's native generator.
but a mishap happens before the response from the server reaches the client: the connection to the client drops.
the client sees a "page not found" or "request timed out" error, and is left wondering whether he is registered yet.
so he presses the refresh button, and the same set of data (although it is already committed to the database) is sent to the server again.
the server sees the data and registers the same customer again with a different id.
So the problem is: the same customer entry has now been duplicated because the connection was cut off.
Now can someone tell me how to proceed in this scenario, so that even if the response is lost and the customer presses refresh, only one entry is recognized and saved?
Thoughts:
1) You don't want to create the NHibernate SessionFactory on every request. This should be created once and then used by all future requests. It's a heavy-weight operation. Only the Sessions need to be created each request.
2) Manage the transaction in a high level method - to reduce the likelihood of something going wrong AFTER you've committed the transaction, but BEFORE the client has a response.
3) Guard against the "Refresh" resubmitting by having the submit redirect to a different page which presents the information (the Post/Redirect/Get pattern). This page should not submit anything.
4) Guard against a resubmit server side by validating the submission against previously submitted credentials. Inform the client if they've previously registered and provide them with a means to access the saved details (Password recovery for example.)
So, for example, if your users are keyed by email address, then in the page load event perform the following steps: query for an existing customer with the submitted email address, and if one is found, skip the insert and show the user their existing registration (or the recovery path from point 4) instead of creating a duplicate.