How can I restrict data stored on GUN.js browser peers to data they subscribe to ONLY? - gun

I'm new to GUN and it looks very promising for the project I'm working on. One thing I haven't been able to make happen is restricting data on a browser peer to just data that it requests or subscribes to with on(). The following example is my very simple test setup: I have two distinct "conversations", each represented by a distinct data node. One browser gets and puts to a "blue" data node, and the other browser gets and puts to a "red" node. Both browsers sync up to a single server peer. I'd like the server peer to store a copy of all data and each browser to only store the data it subscribes to. Using version 0.9.2.
On Browser1, I run the following:
var peers = ['http://localhost:8080/gun',];
var gun = Gun(peers);
var blue = gun.get('blue');
blue.on(data => console.log('Blue update!', data.message));
On Browser2, I run the following:
var peers = ['http://localhost:8080/gun',];
var gun = Gun(peers);
var red = gun.get('red');
red.on(data => console.log('Red update!', data.message));
The server node runs this:
var http = require('http');
var server = http.createServer();
var Gun = require('gun');
var gun = Gun({web: server});
server.listen(8080, function () {
    console.log('Server listening on http://localhost:8080/gun');
});
I then use the console of each browser peer to post some data to the node it is subscribed to. I would expect that each browser's local storage should only contain data for the node it subscribes to, while the server's data.json should contain data for both nodes.
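For example, the console command on Browser1 looks roughly like this (the message text is just a placeholder; Browser2 does the same against its red node):
// run in Browser1's console; Browser2 would do the equivalent put on red
blue.put({message: 'hello from blue'});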
What I see is that the server stores all data as expected, but when I inspect localStorage I see that the browsers are also storing everything, even data they've never requested. Is this the intended behavior, or am I missing something? I thought browser peers only stored data they subscribe to. While it makes sense for server peers to replicate data for redundancy in case of failures, I wouldn't want my app clients themselves to store countless conversations that they're not a part of.
Thanks for the help!

Related

Nodejs how to pass parameters into an exported route from another route?

Suppose I export "/route1" from route1.js; how would I pass parameters into this route from "/route2", which is defined in route2.js?
route1.js
module.exports = (app) => {
  app.post('/route1', (req, res) => {
    console.log(req.body);
  });
}
route2.js
const express = require('express');
const app = express();
//import route1 from route1.js
const r1 = require('./route1')(app);
app.post('/route2', (req, res) => {
  // how to pass parameters?
  app.use(???, r1) ?
})
In short, route 1 output depends on the input from route 2.
You don't pass parameters from one route to another. Each route is a separate client request. HTTP, by itself, is stateless: each request stands on its own.
If you describe the actual real-world problem you're trying to solve, we can help you with the various tools there are for managing state from one request to the next in HTTP servers. But we really need to know what the real-world problem is to know what best to suggest.
The general tools available are:
Set a cookie as part of the first response with some data in it. On the next request sent from that client, the cookie will be sent along with it, so the server can see what that data is.
Create a server-side session object (using express-session, probably) and set some data in it. On the second request, you can then access the session object to get the previously set data.
Return the data to the client in the first response and have the client send it back in the second request, either in the query string, form fields, or custom headers. This is the truly stateless way of doing things on the server: any required state is kept on the client.
Which option results in the best design depends entirely upon what problem you're actually trying to solve.
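To make the second option concrete, here's a minimal sketch using express-session; the filter field is just a made-up stand-in for whatever data your /route2 handler needs to hand to /route1 on a later request:
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

// /route2 stashes some data in the server-side session for this client
app.post('/route2', (req, res) => {
  req.session.filter = req.body.filter;
  res.sendStatus(204);
});

// a later request to /route1 from the same client can read it back
app.post('/route1', (req, res) => {
  res.json({ filter: req.session.filter || null });
});

app.listen(3000);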
FYI, you NEVER embed one route in another like you showed in your question:
app.post('/route2', (req, res) => {
  // how to pass parameters?
  app.use(???, r1) ?
})
What that would do is install a new, permanent app.use() handler that's in force for all incoming requests every time your app.post() route was hit. They would accumulate forever.
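If the underlying goal is simply for both routes to run the same logic, the usual pattern is to pull that logic out into a plain function (or module) that each handler calls; a rough sketch, with made-up file and function names:
// shared.js -- hypothetical module holding the logic both routes need
module.exports.processUsers = (params) => {
  // ...whatever /route1 currently does with req.body...
  return { received: params };
};

// routes.js -- both handlers call the shared function directly
const { processUsers } = require('./shared');

module.exports = (app) => {
  app.post('/route1', (req, res) => res.json(processUsers(req.body)));
  app.post('/route2', (req, res) => res.json(processUsers({ from: 'route2', ...req.body })));
};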

How to put data in gundb at server side as a peer

I thought the gun instance on the server was also one of the peers, but when I put data on the server, the client peer can't get the data.
Here is my simple test code.
global.gun.get('servertest').put('yes'); // at server side
gun.get('servertest').once(console.log); // at client side
And it prints undefined.
Please let me know how to use a gun instance on the server side.
On the server, run this to actually accept remote connections:
var server = require('http').createServer().listen(8080);
var gun = Gun({web: server});
On the client, run this to connect to your server:
var gun = Gun({peers: ["http://server-ip-or-hostname:8080/gun"]})
As a side note, even if you establish a peer connection to get your data, you still need to handle undefined, as once() might fire several times as data is coming in.
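A minimal way to guard for that in the callback might look like this (sketch only):
gun.get('servertest').once(function (data) {
  if (data === undefined) { return; }  // nothing resolved locally yet; a later reply may carry the value
  console.log('servertest =', data);
});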
Relevant links:
https://gun.eco/docs/Installation#server:
https://github.com/amark/gun/tree/master/examples
https://github.com/skiqh/gun-cli
EDIT:
To be more explicit about my side note above: the once callback on your client getting undefined for non-local data is actually by design. It means the client does not have the requested data available yet. It will, however, request it from its peers, which will try to answer with what they themselves can resolve (locally or from their respective peers). These answers will trigger the callback again (if they make it through the CRDT algorithm, I think).
Getting undefined on the client could also mean the server's response timed out and GUN considered the request unanswered. You can prolong the waiting time with .once(callback_function, {wait: time_in_milliseconds}).
As per Hadar's answer, try using on() instead of once() to mitigate race conditions, i.e. your client requesting data from the server before you actually wrote it. Once you got your data and don't want any more updates, you can unsubscribe with gun.get('servertest').off()
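Putting those two pieces together, a sketch of the subscribe-then-unsubscribe pattern could look like this:
var node = gun.get('servertest');
node.on(function (data) {
  if (data === undefined) { return; }  // not resolved yet; keep listening
  console.log('servertest =', data);
  node.off();                          // we have a value, stop receiving further updates
});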
Also, it might be noteworthy that GUN instances are not magically linked; having two of them connected does not mean they are one and the same in any way. Conceptually, they are peers in a distributed system, which in GUN's case gives you eventual consistency with all the limits and tradeoffs associated with that.
@skiqh
Hello, thanks for your answer.
I did initialize the gun instance on both the server and the client.
server
let server = https.createServer(options, app);
server.listen(port);
let gun = Gun({ file: 'data', web: server });
global.gun = gun; // <-- my gun instance on the server side
global.gun.get('servertest').put('yes'); // <-- I tried to put data
// listening~~~~~
client
window.G = G;
let opt = {};
opt.store = RindexedDB(opt);
opt.localStorage = false;
opt.peers = ['https://my.link/gun'];
G.gun = Gun(opt); // <-- my gun instance on client
gun.get('servertest').once(console.log) // <-- it prints "undefined" even though the server put data here!
I really want to know how to use methods like .put(), .get(), .on(), etc. on the server side with the gun instance.
I tried doing this but failed, as shown by the result attached to my post.
Please let me know what I'm doing wrong and what the correct way is.
Thank you
Try gun.on instead of once; on will subscribe to all changes.
Your example should work if you run .once only after you write something to the server.
Using gun.on on the client should work regardless and will trigger the moment the server writes something.
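For example, on the client (using the G.gun instance from the snippet above), a subscription could look like:
// fires whenever the server's put() reaches this peer
G.gun.get('servertest').on(function (data) {
  console.log('servertest update:', data);
});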

Akka HTTP Source Streaming vs regular request handling

What is the advantage of using Source Streaming vs the regular way of handling requests? My understanding is that in both cases:
The TCP connection will be reused
Back-pressure will be applied between the client and the server
The only advantage of Source Streaming I can see is if there is a very large response and the client prefers to consume it in smaller chunks.
My use case is that I have a very long list of users (millions), and I need to call a service that performs some filtering on the users, and returns a subset.
Currently, on the server side I expose a batch API, and on the client, I just split the users into chunks of 1000, and make X batch calls in parallel using Akka HTTP Host API.
I am considering switching to HTTP streaming, but cannot quite figure out what the value would be.
You are missing one other huge benefit: memory efficiency. With a streamed pipeline (client to server to client), all parties safely process data without running the risk of blowing up their memory allocation. This is particularly useful on the server side, where you always have to assume the clients may do something malicious...
Client Request Creation
Suppose the ultimate source of your millions of users is a file. You can create a stream source from this file:
val userFilePath : java.nio.file.Path = ???
val userFileSource = akka.stream.scaladsl.FileIO.fromPath(userFilePath)
This source can then be used to create your http request, which will stream the users to the service:
import akka.http.scaladsl.model.HttpEntity.{Chunked, ChunkStreamPart}
import akka.http.scaladsl.model.{RequestEntity, ContentTypes, HttpRequest}
val httpRequest : HttpRequest =
  HttpRequest(uri = "http://filterService.io",
              entity = Chunked.fromData(ContentTypes.`text/plain(UTF-8)`, userFileSource))
This request will now stream the users to the service without consuming the entire file into memory. Only chunks of data will be buffered at a time, therefore, you can send a request with potentially an infinite number of users and your client will be fine.
Server Request Processing
Similarly, your server can be designed to accept a request with an entity that can potentially be of infinite length.
Your question says the service will filter the users; assuming we have a filtering function:
val isValidUser : (String) => Boolean = ???
This can be used to filter the incoming request entity and create the entity that will feed the response:
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.model.HttpResponse
import akka.http.scaladsl.model.ContentTypes
import akka.http.scaladsl.model.HttpEntity.Chunked
import akka.stream.scaladsl.Source
import akka.util.ByteString

val route = extractDataBytes { userSource =>
  val responseSource : Source[ByteString, _] =
    userSource
      .map(_.utf8String)      // decode each incoming chunk
      .filter(isValidUser)    // keep only the valid users
      .map(ByteString.apply)  // re-encode for the response
  complete(HttpResponse(entity = Chunked.fromData(ContentTypes.`text/plain(UTF-8)`, responseSource)))
}
Client Response Processing
The client can similarly process the filtered users without reading them all into memory. We can, for example, dispatch the request and send all of the valid users to the console:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http

implicit val system : ActorSystem = ActorSystem() // on Akka 2.6+ this also provides the stream materializer
import system.dispatcher                          // ExecutionContext for the Future combinators

Http()
  .singleRequest(httpRequest)
  .foreach { response =>
    response
      .entity
      .dataBytes
      .map(_.utf8String)
      .runForeach(println) // materialize the stream and print each chunk
  }

My SQL Server Can Only Handle 2 players?

I am developing a game using TCP. The clients send to and listen to the server over TCP. When the server receives a request, it consults the database (SQL Server Express / Entity Framework) and sends a response back to the client.
I'm trying to make an MMORPG, so I need to know all the players' locations frequently; I used a System.Timer to ask the server for the locations of the players around me.
The problem:
If I configure the timer to trigger, every 500ms, a method that asks the server for the current players' locations, I can open 2 instances of the client app, but it's laggy. If I configure it to trigger every 50ms, then when I open the second instance, SQL Server often throws this exception:
"The connection was not closed. The connection's current state is open."
I mean, what the hell? I know I am requesting A LOT of things from the database in a short period, but how do real games deal with this?
Here is one method that throws the error when SQL Server seems to be overloaded (on the second line of the method):
private List<CharacterDTO> ListAround()
{
    List<Character> characters = new List<Character>();
    characters = ObjectSet.Character.AsNoTracking().Where(x => x.IsOnline).ToList();
    return GetDto(characters);
}
Your real problem is that ObjectSet is not thread-safe. You should be creating a new database context inside ListAround and disposing of it when you are done with it, not re-using the same context over and over again.
private List<CharacterDTO> ListAround()
{
    List<Character> characters = new List<Character>();
    using (var ObjectSet = new TheNameOfYourDataContextType())
    {
        characters = ObjectSet.Character.AsNoTracking().Where(x => x.IsOnline).ToList();
        return GetDto(characters);
    }
}
I resolved the problem by changing the strategy. Now I don't update the players' positions to the database in real time. Instead, I created a list in the server's RAM, so I manage only this list and eventually flush the information to the database.

VEMap and a GeoRSS feed(hosted separately)

The scenario is as follows:
A WCF web service exists that outputs a valid GeoRSS feed. It lives in its own domain, as a number of different applications have access to it.
A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object).
Now, VEMap can accept an input feed in this format via the following:
var layer = new VEShapeLayer();
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
map.ImportShapeLayerData(veLayerSpec, onComplete, true);
onComplete is a callback function I'm using to replace the default pin graphic with something custom.
The question is in regard to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS Simple format). I've realized the feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format.
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);
When I do this, I get the VEMap error ("z is null"). This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (e.g. "feed.xml"), there is no error.
The order of operations is currently: remote feed -> local handler -> VEMap import
If I'm overcomplicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
The format I have above is actually very close to what I needed. A similar solution was found by Mike McDougall. Although I was already passing the RSS feed directly through the handler (writing the read stream directly), I just needed to specify the following from within the handler:
context.Response.ContentType = "text/xml";
context.Response.ContentEncoding = System.Text.Encoding.UTF8;
With the above fix, I'm able to have a remote GeoRSS feed successfully load a separately hosted Virtual Earth map instance.