How to make sure client doesn't sync when switching from dev to live server - gun

When I change the url in the client from localhost to myLiveServer.com, how do I make sure that the localStorage doesn't get synced with the live data?
Do I need to configure that somewhere and extend the url like...
localhost:8080/gun/dev and myLiveServer.com/gun/live

Yes, it is probably wise to do something like this:
var gun = Gun('http://localhost:8080/gun').get('development');
And then in production, do something like:
var gun = Gun('http://myserver.com/gun').get('uniqueAppName');
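If you'd rather not edit the file by hand when switching environments, one option (just a sketch; the NODE_ENV check and names are my own assumptions, not anything Gun requires) is to pick the peer and namespace from the environment:
var isProd = process.env.NODE_ENV === 'production';
var peer = isProd ? 'http://myserver.com/gun' : 'http://localhost:8080/gun';
// use a separate root key per environment so dev data never mixes with live data
var gun = Gun(peer).get(isProd ? 'uniqueAppName' : 'development');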
Great question though, maybe this is something we can improve/make easier in the future.
Let me know if this works.


Listen to data changes in Gun server

How can I listen to the changes occurring in the Gun server database?
My server is listening like below:
var Gun = require('gun');
var http = require('http');
var server = http.createServer();
var gun = Gun({web: server});
server.listen(port, function () {
  console.log('Server listening on ...');
});
When I put data from the Gun client, my data.json file gets updated. Now I need to listen to all the changes happening in the db. I think I can listen on a particular node using the gun.on method. Is there a way to listen to all the changes/change requests coming from the client?
@ajmal-m-a yes, via the "wire spec" (you'll need to understand the graph format; here is a tech talk where I explain it in 30min on stage):
gun.on('in', function(msg){}) where gun is the root.
Note: You'll need to understand how to handle the middleware event hook system - your listener will need to remember to call this.to.next(msg).
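As a rough illustration of that hook (following the call named above; the logging is just for demonstration, and a real listener would filter messages based on the graph format):
var gun = Gun({web: server});
gun.on('in', function (msg) {
  // msg is a raw wire message in Gun's graph format
  console.log('incoming wire message:', JSON.stringify(msg));
  this.to.next(msg); // always pass the message along the middleware chain, or syncing will stall
});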
A good and simple resource to look at is this:
https://github.com/zrrrzzt/bullet-catcher
Does this help?
(Sorry for the delay in answering the question, we were headed into a conference at that time, and it got buried under a bunch of emails)

Convert Minecraft Mod into Server Plugin

I have been developing Forge Minecraft mods for some time now. I was wondering if it is possible to actually put them on a server. I can't seem to find a direct answer. I know that this may not be the place to ask, but I am just dying to know. Please let me know if I can do it, and if so, how.
You can't turn a MC mod into a server plugin because Bukkit and Forge are different things (take it from a plugin dev); however, you can make a mod work on Forge servers.
When making a Forge server, all of your server's players will have to have the client mod, in addition to the Forge client, installed for it to work, so be prepared.
The first step is to actually install the server. Next, go to your Forge server folder and upload the mod(s) into the folder. Then restart your server and boom, it's there. Server mods can be found here.
So if you didn't want to read all of this, the gist is: no, you cannot, but your mods can be used on a Forge server (not as plugins for a Bukkit or Spigot server), and your players will need a client-side modpack. Hope this helped!
If you want to make your mod server-side only, without your "clients" having to download it, add acceptableRemoteVersions = "*" to your @Mod line.
That way players don't have to have the mod installed to connect to the server, so you can have plugins/mods like dynmap or whotookmycookies without the players needing the mod as well.
If you want to put them on a server, yes you can. But keep in mind that if you developed solely for single-player worlds, you'll run into "Side" issues.
Code will break because some code is specifically client-side only, and some code is server-side only.
You will have to add some syncing code with packets to make sure all your graphical things still happen and user input is returned to you.
Single-player worlds are way less picky in that respect. So make sure you run all your code on the "proper" side, and remember that if (!world.isRemote) (a test to see if you are running server-side) is your friend.
@Mod(modid = MyMod.MODID, name = MyMod.NAME, version = MyMod.VERSION, acceptableRemoteVersions = "*") // "*" accepts clients that don't have the mod
public class MyMod {
    ....
}

returning absolute vs relative URIs in REST API

Suppose the DogManagementPro program is an application written in a client/server architecture, where the customer who buys it is supposed to run the server on his own PC and access it either locally or remotely.
Suppose I want to support a "list all dogs" operation in the DogManagementPro REST API.
So a GET to http://localhost/DogManagerPro/api/dogs should fetch the following response now:
<dogs>
<dog>http://localhost/DogManagerPro/api/dogs/ralf</dog>
<dog>http://localhost/DogManagerPro/api/dogs/sparky</dog>
</dogs>
Now, when I want to access it remotely on my local LAN [the local IP of my machine is 192.168.0.33],
what should a GET to http://192.168.0.33:1234/DogManagerPro/api/dogs fetch?
Should it be:
<dogs>
<dog>http://localhost/DogManagerPro/api/dogs/ralf</dog>
<dog>http://localhost/DogManagerPro/api/dogs/sparky</dog>
</dogs>
or perhaps:
<dogs>
<dog>http://192.168.0.33/DogManagerPro/api/dogs/ralf</dog>
<dog>http://192.168.0.33/DogManagerPro/api/dogs/sparky</dog>
</dogs>
?
Some people argue that I should sidestep the problem altogether by returning just a path element, like so:
<dogs>
<dog>/DogManagerPro/api/dogs/ralf</dog>
<dog>/DogManagerPro/api/dogs/sparky</dog>
</dogs>
What is the best way?
I've personally always used non-absolute URLs. It solves a few other problems as well, such as working behind reverse/caching proxies.
It's a bit more complicated for the client though, and if they want to store the document as-is, it may mean they also need to store the base URL, or expand the inner URLs.
If you do choose to go the full-URL route, I would not recommend using HTTP_HOST; instead, set up multiple vhosts and an environment variable, and use that.
This solves the issue if you later need proxies in front of your origin server.
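A tiny Node sketch of the environment-variable idea (PUBLIC_BASE_URL is a name I made up; each deployment or vhost would set its own value):
// the public base URL is configuration, not derived from the request
var baseUrl = process.env.PUBLIC_BASE_URL || 'http://localhost:8080';

function dogUri(name) {
  return baseUrl + '/DogManagerPro/api/dogs/' + name;
}
console.log(dogUri('ralf')); // http://localhost:8080/DogManagerPro/api/dogs/ralf unless configured otherwise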
I would say absolute URLs, created based on the Host header that the client sent:
<dogs>
<dog>http://192.168.0.33:1234/DogManagerPro/api/dogs/ralf</dog>
<dog>http://192.168.0.33:1234/DogManagerPro/api/dogs/sparky</dog>
</dogs>
The returned URIs should be something the client is able to resolve.
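For example, here is a rough Node/Express sketch of the Host-header approach (Express, the route, and the dog data are my assumptions, purely for illustration):
var express = require('express');
var app = express();
var dogs = ['ralf', 'sparky'];

app.get('/DogManagerPro/api/dogs', function (req, res) {
  // req.headers.host is whatever name/port the client actually used to reach us
  var base = req.protocol + '://' + req.headers.host;
  var body = '<dogs>' + dogs.map(function (name) {
    return '<dog>' + base + '/DogManagerPro/api/dogs/' + name + '</dog>';
  }).join('') + '</dogs>';
  res.type('application/xml').send(body);
});

app.listen(1234);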

NodeJS - use remote module?

I'm working with node and would like to include a module stored on a remote server in my app.
I.E. I'd like to do something along these lines (which does not work as is):
var remoteMod = require('http:// ... url to my remote module ... ');
As a workaround I'd be happy with just grabbing the contents of the remote file and parsing out what I need if that's easier - though I haven't had much luck with that either. I have a feeling I'm missing something basic here (as I'm a relative beginner with node), but couldn't turn up anything after scouring the docs.
EDIT:
I own both local and remote servers so I'm not concerned with security issues here.
If I'm just going to grab the file contents, I'd like to do so synchronously. Using require('http').get can get me the file, but working from within the callback is not optimal for what I'm trying to do. I'd really be looking for something akin to PHP's fopen function - if that's even doable with Node.
Running code loaded from another server is very dangerous. What if someone can modify this code? That person would be able to run any code they want on your server.
You can grab the remote file via plain HTTP, though:
http://nodejs.org/docs/v0.4.6/api/http.html#http.get
require('http').get({host: 'www.example.com', path: '/mystaticfile.txt'}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () { /* body now holds the remote file's contents */ });
});
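If the goal is to actually use the fetched code as a module (and you trust the remote server, as you say you do), one hedged option is to run the downloaded source through Node's built-in vm module; the sandbox shape below is only an illustration:
var vm = require('vm');

// src is the remote file's contents, e.g. collected in the http.get callback above
function loadRemoteModule(src) {
  var sandbox = { module: { exports: {} }, require: require, console: console };
  vm.runInNewContext(src, sandbox); // executes the remote source
  return sandbox.module.exports;    // use the result like a normal module
}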

How do you dynamically edit robots.txt in a load balanced environment?

Looks like we are going to have to start load balancing our webservers here soon.
We have a feature request to edit robots.txt dynamically, which is not a problem for one host; however, once we get our load balancer up and going, it sounds like I will have to scp the file over to the other host(s).
This sounds extremely 'bad'. How would you handle this situation?
I already let the client edit the 'robots' meta tag, which (IMO) should effectively do the same thing he wants from editing robots.txt, but I really don't know that much about SEO.
Maybe there is a completely different way of handling this?
UPDATE
Looks like we will store it in S3 for now and memcache it frontside...
HOW WE ARE DOING IT NOW
So we are using merb. I mapped a route to our robots.txt like so:
match('/robots.txt').to(:controller => 'welcome', :action => 'robots')
Then the relevant code looks like this:
def robots
  @cache = MMCACHE.clone
  begin
    robot = @cache.get("/robots/robots.txt")
  rescue
    robot = S3.get('robots', "robots.txt")
    @cache.set("/robots/robots.txt", robot, 0)
  end
  @cache.quit
  return robot
end
I might have the app edit the contents of robots.txt and have the user input saved to a database. Then at certain intervals, have a background process pull the latest from the DB and push to your servers.
An alternative would be to have the reverse proxy that is doing your load balancing treat robots.txt differently. You could serve it directly from the reverse proxy or have all requests for that file go to a single server. This makes a lot of sense, since robots.txt is going to be requested relatively infrequently.
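As a toy Node sketch of that idea (a hand-rolled pass-through proxy; the backend addresses are made up, and in practice you'd do this in whatever balancer/proxy you already run):
var http = require('http');
var backends = ['10.0.0.11', '10.0.0.12'];
var robotsHost = '10.0.0.11'; // the one server allowed to answer /robots.txt
var i = 0;

http.createServer(function (req, res) {
  var target = (req.url === '/robots.txt') ? robotsHost : backends[i++ % backends.length];
  var upstreamReq = http.request({ host: target, path: req.url, method: req.method, headers: req.headers }, function (upstream) {
    res.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(res); // stream the backend's response straight to the client
  });
  req.pipe(upstreamReq); // forward the original request body, if any
}).listen(8080);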
I'm not sure if you've settled this yet. If so, ignore. (UPDATE: I see a note on your original post, but this may be useful regardless.)
If you map a call to robots.txt to an HTTP handler or similar, you can generate the response from, say, a DB.
Serve it via whatever dynamic content generation you are using. It's just a file, nothing special.