I've been spending hours upon hours on this problem, but to no avail.
EDIT: solution found (see my answer)
Project background
I'm building a project in Symfony2 which requires a module for uploading large files. I've opted for Node.js and Socket.IO (I had to learn them from scratch, so I might be missing something basic).
I'm combining these with the HTML5 File and FileReader APIs to send the file in slices from the client to the server.
Preliminary tests showed this approach working great as a standalone app, where everything was handled and served by Node.js, but integration with Apache and Symfony2 seems problematic.
The application has an unsecured and a secured section. My goal is to use Apache on ports 80 and 443 to serve the bulk of the app built in Symfony2, and Node.js with Socket.IO on port 8080 for file uploads. The client-side page connecting to the socket will be served by Apache, but the socket itself will run via Node.js. The upload module has to run over HTTPS, as the page resides in a secured environment with an authenticated user.
The problem is that events sent with socket.emit or socket.send don't seem to work. Client to server or server to client, it makes no difference: nothing happens and there are no errors.
The code
The code shown is a simplified version of my code, without the clutter and sensitive data.
Server
var httpsModule = require('https'),
    fs = require('fs'),
    io = require('socket.io');

var httpsOptions =
{
    key: fs.readFileSync('path/to/key'),
    cert: fs.readFileSync('/path/to/cert'),
    passphrase: "1234lol"
};

var httpsServer = httpsModule.createServer(httpsOptions);
var ioServer = io.listen(httpsServer);
httpsServer.listen(8080);
ioServer.sockets.on('connection', function(socket)
{
    // This event gets bound, but never fires
    socket.on('NewFile', function(data)
    {
        // To make sure something is happening
        console.log(data);
        // Process the new file...
    });

    // Oddly, this one does fire
    socket.on('disconnect', function()
    {
        console.log("Disconnected");
    });
});
Client
// This is a Twig template, so I'll give an excerpt
{% block javascripts %}
    {{ parent() }}
    <script type="text/javascript" src="https://my.server:8080/socket.io/socket.io.js"></script>
    <script type="text/javascript">
        var socket = io.connect("my.server:8080",
        {
            secure: true,
            port: 8080
        });

        // Imagine this is the function initiating the file upload.
        // file is an object containing metadata about the file, like filename, size, etc.
        function uploadNewFile(file)
        {
            socket.emit("NewFile", file);
        }
    </script>
{% endblock %}
So the problem is...
Of course there's much more to the application than this, but this is where I'm stuck. The page loads perfectly without errors, but the emit events never fire or reach the server (except for the disconnect event). I've tried the message event on both client and server to check whether the problem was limited to custom events, but that didn't work either. I'm guessing something is blocking client-server communication (it isn't the firewall, I've checked).
I'm completely at a loss here, so please help.
After some painstaking debugging, I've found what was wrong with my setup. Might as well share my findings, although they are (I think) unrelated to Node.js, Socket.IO or Apache.
As I mentioned, my question had simplified code to show you my setup without clutter. I was, however, setting up the client through an object, using the properties to configure the socket connection. Like so:
var MyProject = {};
MyProject.Uploader =
{
    location: 'my.server:8080',
    socket: io.connect(location,
    {
        secure: true,
        port: 8080,
        query: "token=blabla"
    }),
    // ...lots of extra properties and methods
};
The problem lay in the use of location as a property name. Strictly speaking it is not a reserved word in JavaScript, but in a browser a bare location resolves to the global window.location, which makes for some strange behaviour in this case. I found that strange enough to test further. I had also noticed I was referencing the property incorrectly: I forgot to use this.location when connecting to the socket. So I changed it to this, just as a test.
var MyProject = {};
MyProject.Uploader =
{
    location: 'my.server:8080',
    socket: io.connect(this.location,
    {
        secure: true,
        port: 8080,
        query: "token=blabla"
    }),
    // ...lots of extra properties and methods
};
But to no avail; I was still not getting data over the socket. So the next step seemed logical in my frustration-driven debugging rage: changing the property name. That fixed everything!
var MyProject = {};
MyProject.Uploader =
{
    socketLocation: 'my.server:8080',
    socket: io.connect(this.socketLocation,
    {
        secure: true,
        port: 8080,
        query: "token=blabla"
    }),
    // ...lots of extra properties and methods
};
This approach worked perfectly; I was getting loads of debug messages. SUCCESS!!
Whether it is expected behaviour in JavaScript for a global like location to win out over (or whatever is happening here, "to hijack" feels like a better way of putting it to me right now) an object property of the same name, I don't know. I only know I'm steering clear of such names from now on!
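In hindsight, here is a minimal sketch of the pitfall (assuming the object literal is evaluated in a top-level browser script):

var MyProject = {};
MyProject.Uploader =
{
    location: 'my.server:8080',
    // `this` inside an object literal is NOT the object being built; it keeps
    // whatever value it has in the enclosing scope. In a top-level browser
    // script that is `window`, so both a bare `location` and `this.location`
    // resolve to window.location (the page's URL object), not the string
    // defined one line up.
    whatIsThis: this.location
};
console.log(MyProject.Uploader.whatIsThis); // Location object, not 'my.server:8080'

So io.connect was most likely receiving window.location rather than the intended address string, which would explain the silent failure.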
Hope it helps anyone out there!
I am running an ipfs js instance which is working well, but I get some errors with web sockets and I am unsure why it is even calling a local IP.
Firefox can't establish a connection to the server at ws://127.0.0.1:8081/p2p/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64.
In Firefox on the deployed site there are no errors, but in Safari:
The page at https://alpha.nodenogg.in/ was not allowed to run insecure content from ws://127.0.0.1:8081/p2p/Qmbut9Ywz9YEDrz8ySBSgWyJk41Uvm2QJPhwDJzJyGFsD6.
Here is some of the code I am using; any pointers as to where this error is coming from would be great. The site does what I want with IPFS, however, so I am not sure what this error relates to. Thank you
import VueIpfs from 'ipfs'

const ipfs = VueIpfs.create()
let node // will hold the resolved ipfs node instance

mounted: function () {
    // console.log(VueIpfs)
    this.getIpfsNodeInfo()
},
methods: {
    async getIpfsNodeInfo() {
        try {
            // Await the ipfs node instance.
            node = await ipfs
        } catch (err) {
            // Set error status text.
            this.status = `Error: ${err}`
        }
    },
    onFileSelected(event) {
        this.selectedFile = event.target.files[0]
        this.saveIPFS()
    },
    async saveIPFS() {
        try {
            this.fileContents = await node.add(this.selectedFile)
            this.getIPFS()
        } catch (err) {
            // Set error status text.
            this.status = `Error: ${err}`
        }
    },
}
This is indeed not an issue with your code.
I believe you are using webrtc-star for transport and discovery (it is the default for the browser environment in js-ipfs; if you did not customize it, you should have it).
So, you use webrtc-star to discover other peers to talk to. Once you get to know these peers, your node learns all the multiaddrs each peer is announcing to the network. There will be nodes announcing several addresses, some of which are local IP addresses. js-libp2p has a feature to specify announce and noAnnounce addresses in its configuration, which lets you avoid announcing your local addresses and announce only the public ones. However, this feature is not widely known at the moment.
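For illustration, a rough sketch of what that configuration might look like when creating the node; the option names follow the IPFS config format at the time of writing and may vary between versions, and the multiaddrs are placeholders, so treat this as an assumption rather than a recipe:

const ipfs = VueIpfs.create({
    config: {
        Addresses: {
            // Advertise only a public, dialable address to the network...
            Announce: ['/dns4/node.example.com/tcp/4002/wss'],
            // ...and withhold local addresses like the one that triggers those
            // ws://127.0.0.1 dial attempts in other peers.
            NoAnnounce: ['/ip4/127.0.0.1/tcp/8081/ws']
        }
    }
})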
From a libp2p/IPFS standpoint, however, we need to find a better way of catching and logging these errors, since they look like errors in your node/code but are actually the result of bad multiaddrs that other peers propagate to the network.
I hope that answers your question; we will look into a patch to get rid of these errors.
In order to support running my ASP.NET Core application on Linux with a reverse proxy (nginx in this case), I had to add the following code snippet:
// Forward headers in order to be able to operate behind a reverse proxy
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
});

// The above does not appear to be enough to get the right redirect URI result when
// logging in with OpenID Connect. This code snippet from
// https://github.com/aspnet/Docs/issues/2384 fixed it.
app.Use((context, next) =>
{
    if (context.Request.Headers.TryGetValue(XForwardedPathBase, out StringValues pathBase))
    {
        context.Request.PathBase = new PathString(pathBase);
    }
    if (context.Request.Headers.TryGetValue(XForwardedProto, out StringValues proto))
    {
        context.Request.Protocol = proto;
    }
    return next();
});
I'm unable to find any definitive advice on whether I can leave this enabled by default, or whether I should put it behind an explicit configuration flag.
It seems to me that this could have strange effects if these headers were sent when a reverse proxy isn't being used. I can't think of a way it could be exploited, but perhaps I'm missing something.
So, is there an advantage to introducing a flag for this specific piece of configuration, aside from perhaps a very minor performance improvement?
No, don't leave this in without a reverse proxy; it's dangerous. Clients could use it to provide false values (spoofing) and trick any app logic that checks these values.
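For example, a sketch of how any client could forge these headers when no proxy is there to strip them (the URL and values here are placeholders):

// X-Forwarded-* are plain request headers; without a trusted reverse proxy
// overwriting them, the middleware above will believe whatever the client sends.
fetch('https://your-app.example/some-endpoint', {
    headers: {
        'X-Forwarded-For': '127.0.0.1',  // pose as a local/trusted address
        'X-Forwarded-Proto': 'https'     // pretend the original call was HTTPS
    }
});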
I need your help to solve an issue.
I use Express and Socket.IO, and I am trying to make sure my client-side JavaScript (my client-side socket connection) is loaded before the server-side code emits.
Currently, I do this :
Server-side :
app.get('/branche/:branche', function(req, res) {
    var branche = req.params.branche;
    // listCommandeApplis is defined elsewhere in the app
    res.render('branchebuild', {branchname: branche, listCommande: listCommandeApplis}, function(err, html) {
        if (html) { // callback as soon as the template has been rendered
            res.send(html);
            io.sockets.emit("resultStep", "foo");
        }
    });
});
Client-side (in a socket.js file) :
socket.on("resultStep", function(foo) {
console.log("coucou", foo);
});
The issue is that the server side emits faster than the client side loads, so the client can't catch the event with my socket.js function.
How can I solve this kind of problem?
Thanks for your advice.
Your emit() call runs just after the HTML file is served to the client. Obviously, the client won't be quick enough to render the page, interpret the JS code and start listening to the web socket before that emit happens.
You need to either:
Add in your JS code a client-side emit() that the server will respond to with its own emit(). This ensures the client has fully rendered the page before interacting with the server (see the sketch below).
Pass "foo" through the res.render() method. It will be sent to the client immediately, without a single web socket call.
I've run into a problem with the request/respond pattern of EasyNetQ when using it on our server (Windows Server 2008). I'm not able to reproduce it locally at the moment.
The setup is we have 2 Windows services (running as console applications for testing) which are connected through the request/respond pattern in EasyNetQ. This had been working as expected until recently, when on the server the request side stopped "consuming" the responses until after the request times out.
I have included 2 links to pastebin containing the console logging of EasyNetQ, which will hopefully make my problem a bit clearer.
RequestSide
RespondSide
Besides that, my request code looks like this:
var request = new foobar();
var response = _bus.Request<foobar, foobar2>(request);
and on the respond side:
var response = new foobar2();
_bus.Respond<foobar, foobar2>(request =>
{
    try
    {
        ....
        return response;
    }
    catch (Exception e)
    {
        ....
        return response;
    }
});
As I've said, the request side sends the request as expected and the respond side consumes/catches it. This works as it should, but when the respond side is done processing and responds (which it does, the messages can be seen in the RabbitMQ management UI), the request side doesn't consume/catch the response until after the request has timed out (the default timeout is 10s; I tried setting it to 60s as well, which makes no difference). This is also evident in the logs linked above: on the RequestSide you'll see the 5 or so messages received from the response queue for requests that had previously timed out.
I've tried using RespondAsync in case the processing was taking too long and messing something up; that didn't help. I tried using both RespondAsync and RequestAsync, which just messed everything up even more (I was probably doing something wrong with the request :)).
I might be missing something, but I'm not sure what to try from here.
EDIT: I noticed I messed something up, and have added more context below.
The IBus used for the request/response is created and injected with Ninject:
class FooModule : NinjectModule
{
    public override void Load()
    {
        Bind<IBus>().ToMethod(ctx => RabbitHutch.CreateBus("host=localhost", x => x.Register<IEasyNetQLogger>(_ => logger))).InSingletonScope();
    }
}
And it's all tied together by the service being constructed using Topshelf with Ninject like so:
static void Main(string[] args)
{
    HostFactory.Run(x =>
    {
        x.UseNinject(new FooModule());
        x.Service<FooService>(s =>
        {
            s.ConstructUsingNinject();
            s.WhenStarted((service, control) => service.Start(control));
            s.WhenStopped((service, control) => service.Stop(control));
        });
        x.RunAsLocalSystem();
    });
}
The Topshelf setup has all been tested pretty thoroughly and it works as intended, and should not really be relevant for the request/respond problem, but I thought I would provide a bit more context.
I had this same issue. My problem was that I set the timeout only on the respond side but not on the request side; after I set the timeout on both sides it worked fine.
My connection string, for example:
host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword
I want to achieve the following functionality.
That is:
1) in case of connecting to worklight server successfully, Direct Update is available.
2) in case of failing to connect to worklight server, the app can run offline.
Below is my configuration in "initOptions.js".
// # Should application automatically attempt to connect to Worklight Server on application start up
// # The default value is true.
connectOnStartup : true,

// # The callback function to invoke in case application fails to connect to Worklight Server
onConnectionFailure: function () {
    alert("onConnectionFailure");
    doDojoReady();
},

// # Worklight server connection timeout
timeout: 10 * 1000,

// # How often heartbeat request will be sent to Worklight Server
heartBeatIntervalInSecs: 20 * 60,

// # Should application produce logs
// # Default value is true
//enableLogger: false,

// # The options of busy indicator used during application start up
busyOptions: {text: "Loading..."}
But it doesn't work.
Any idea?
Direct Update happens only when a connection to the server is available. From the way you phrased your question, your problem is that when the app cannot connect to the server it doesn't work "offline". So your question has nothing to do with Direct Update (if it does, re-phrase your question appropriately).
What you should do is read the training material for working offline in Worklight.
You are not specifying what "doesn't work". Do you get the alert you've placed in onConnectionFailure? What does your doDojoReady function look like?
I too am using Dojo in Worklight.
My practice is to have Worklight configured not to connect on startup:
var wlInitOptions = {
    connectOnStartup : false
};
In my WL init I then initialise my Dojo app,
function wlCommonInit() {
    loadDojoLayers();
}
requiring whatever layers I'm using, and then doing the actual Dojo parsing:
require([ "dojo/parser",
"myApp/appController",
"dojo/domReady!"
],
function(parser, appController) {
parser.parse().then (function(){
appController.init();
});
});
Finally, now that WL, Dojo, and myApp are all ready, I attempt the WL connection, calling this method from my appController.init():
connectWLServer: function() {
    // possibly WL.Client.login(realm) here
    var options = {
        onSuccess: lang.hitch(this, "connectedWLServer"),
        onFailure: lang.hitch(this, "connectWLServerFailed")
    };
    WL.Client.connect(options);
}
Any Direct Update activity happens at this point. Note that the app as a whole keeps going whether or not the connection works, but clearly we can run appropriate code in the success and failure cases. Depending on exactly what authentication is needed, an explicit login call may be required: adapter-based authentication can't happen automatically from inside connect().
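For completeness, a sketch of what those two callbacks might do; the bodies here are assumptions, only the callback names come from the options object above:

connectedWLServer: function() {
    // Connected: any pending Direct Update has been offered by this point.
    console.log("Connected to Worklight server");
},

connectWLServerFailed: function(err) {
    // No connection: carry on offline with local resources only.
    console.log("Could not reach Worklight server, running offline", err);
}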