Connecting deepstream nodes directly - deepstream.io

Deepstream docs:
For smaller clusters it used to be possible to connect deepstream nodes directly in a full-mesh configuration (everyone-to-everyone). This feature has been deprecated in its current incarnation, but will soon be replaced by a more scalable (and hopefully slightly smarter) direct-message-connector plugin based on the Small World Network Paradigm.
Is it possible to create the described (but deprecated) mesh with a deepstream cluster? I wasn't able to find any real example of this.
An example would be a chat application. It would run on each user's desktop, and each instance would start its own deepstream server. There would be some discovery logic to connect to other instances on the same LAN, and the clients would sync data with each other through the deepstream servers running on their desktops.
I know IPFS follows this sort of concept, but I wanted something more application-based, and deepstream seemed like a good place to start.
Edit:
I did just find this: https://deepstreamhub.com/tutorials/protocols/webrtc-full-mesh/
-- I'm interested in understanding why this might not be the most scalable solution and whether there are possible workarounds.

Clustering deepstream servers is currently only available as part of our enterprise offering [1]. We've built a decentralized clustering mechanism allowing it to scale to millions of concurrent connections and billions of messages.
If you're looking to build a chat application, you wouldn't have a deepstream server running on each person's computer. What you would do is either:
set up one deepstream server [2] (we've found that an individual server can easily handle ~100 000 connected clients)
create an application on deepstreamHub [3] (deepstreamHub is our hosted version of deepstream where you don't need to run any servers yourself).
Each user of your chat application has a deepstream client that connects to the server. These clients are WebSocket-based and are able to send/receive messages and sync data for your chat application.
Take a look at some of the example apps [4] we've built; these include some chat apps as well as other demos you might find interesting.
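For illustration, a minimal chat client using the open-source deepstream JavaScript client could look something like the sketch below (v3-style deepstream.io-client-js API; the host/port, credentials, event name and record name are placeholders I've made up, not taken from the answer):

    // Sketch of a deepstream chat client; names and addresses are illustrative.
    const deepstream = require('deepstream.io-client-js');

    const client = deepstream('localhost:6020');

    client.login({ username: 'alice' }, (success: boolean) => {
      if (!success) {
        throw new Error('login failed');
      }

      // Publish/subscribe messaging: every connected client receives the event.
      client.event.subscribe('chat/general', (msg: { user: string; text: string }) => {
        console.log(`${msg.user}: ${msg.text}`);
      });
      client.event.emit('chat/general', { user: 'alice', text: 'hello everyone' });

      // Data sync: a record is a small JSON document kept in sync across clients.
      const presence = client.record.getRecord('presence/alice');
      presence.set({ status: 'online', lastSeen: Date.now() });
      presence.subscribe('status', (status: string) => console.log('status:', status));
    });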
[1] deepstream enterprise
[2] deepstream open source
[3] deepstreamHub
[4] example applications

Related

Is LoRaWan only accessible with an internet connection?

I'm planning to build an IoT project for an oil palm plantation using an Arduino and an Android mobile application for my final-year project at university. As plantations have little to no communication signal, including WiFi, is it possible to implement LoRaWAN without access to the internet or the use of a web-based application?
The LoRaWAN node does not need any communication channel other than LoRaWAN, of course. It would not make any sense otherwise. ;-)
The gateway, however, does need a connection to the server application that acts as the central instance for your use case. Usually this is an existing LoRaWAN cloud service such as The Things Network (TTN) with your application connected behind it, but in theory you could connect the gateway to your very own network server, making your whole network independent. This is possible because LoRa uses frequency bands that are free to use (ISM bands), so anyone can become a "network operator". The TTN software is available as open source, for example.
The connection from the gateway to the central server is usually made via existing Ethernet/WiFi infrastructure or mobile internet (3G/4G), whatever suits best.
Besides, the LoRa modules available for Arduinos can be used for a low-level, point-to-point LoRa (not LoRaWAN) connection between two such modules, with no gateway involved. Maybe that is an option for your use case, too.
LoRaWAN uses a gateway connected to some kind of cloud, for example the TTN network, which is community-based. If you live in a bigger city you have a good chance of having a TTN gateway in your area.
You can, however, connect two LoRa nodes together to get a point-to-point connection. You can send data from Node 1, which is connected to some kind of sensor and battery-powered, to Node 2, which is stationary and stores all the data to a flash drive, for example. From this flash drive you can import the data to a website, or you could use an application like Node-RED to display the data on a dashboard.
Here you will find instructions on how to send data from one LoRa node to another.
Here you will find instructions on how to use Node-RED to display your LoRa data. You will have to change the input from the TTN cloud to a text file on your Raspberry Pi, or whatever gateway you use. (Optional)
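For that last step, a small script that reads the stored readings and serves them to a dashboard could look roughly like this; the data file location and the "timestamp,value" line format are purely assumptions for illustration:

    // Hypothetical sketch: read LoRa readings appended to a text file by the
    // stationary node and expose them as JSON so a dashboard (e.g. Node-RED's
    // HTTP-request node) can poll them. File path and line format are assumed.
    import * as fs from 'fs';
    import * as http from 'http';

    const DATA_FILE = '/home/pi/lora-data.txt'; // assumed location on the Raspberry Pi

    function readReadings(): { timestamp: string; value: number }[] {
      if (!fs.existsSync(DATA_FILE)) return [];
      return fs
        .readFileSync(DATA_FILE, 'utf8')
        .split('\n')
        .filter((line) => line.trim().length > 0)
        .map((line) => {
          const [timestamp, value] = line.split(',');
          return { timestamp, value: Number(value) };
        });
    }

    // Serve all readings as JSON on port 8080.
    http
      .createServer((_req, res) => {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(readReadings()));
      })
      .listen(8080);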

What is a Cheaper, more pragmatic way to store Synced Data for a UWP App?

I am building a UWP app that targets the x86, x64 and ARM platforms. I want to replace the current implementation, which uses Azure for the backend (an App Service and a SQL Server), because of the high price and because my Pay-As-You-Go subscription does not allow me to set a spending limit.
I thought about using a local database, but I don't know if that would be a solution, since I want the user to be able to have their data synced across PC and phone, for example. I am also OK with giving up the idea of a structured database in favour of structured files (like XML) if I can find a way to keep them somewhere in the cloud (I could then read/write them from the client app, with no need for an App Service).
Are there any free, non-trial alternatives to Azure? Or should I look more into the file storage implementation? Thanks in advance.
Instead of Azure you could use another web hosting solution to publish your API. Azure also offers small free plans that might be sufficient.
An alternative would be to request access and store/sync data to the user's OneDrive. Every user logged in with a Microsoft account has OneDrive storage available, so this is a good middle ground which is still free for you. A nice introduction to this can be found in this article.
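At the REST level, storing and reading a small sync file in the app's own OneDrive folder comes down to two Microsoft Graph calls, roughly as in the sketch below (the access token is assumed to come from the usual MSAL/OAuth sign-in with the Files.ReadWrite.AppFolder scope; the file name and data shape are placeholders):

    // Rough sketch of syncing app data through the user's OneDrive via Microsoft Graph.
    // Token acquisition is assumed to happen elsewhere; file name is illustrative.
    const GRAPH = 'https://graph.microsoft.com/v1.0';

    async function uploadAppData(accessToken: string, data: unknown): Promise<void> {
      // "special/approot" is the app's own folder inside the user's OneDrive.
      const res = await fetch(`${GRAPH}/me/drive/special/approot:/appdata.json:/content`, {
        method: 'PUT',
        headers: {
          Authorization: `Bearer ${accessToken}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(data),
      });
      if (!res.ok) throw new Error(`upload failed: ${res.status}`);
    }

    async function downloadAppData(accessToken: string): Promise<unknown> {
      const res = await fetch(`${GRAPH}/me/drive/special/approot:/appdata.json:/content`, {
        headers: { Authorization: `Bearer ${accessToken}` },
      });
      if (res.status === 404) return null; // nothing synced yet
      if (!res.ok) throw new Error(`download failed: ${res.status}`);
      return res.json();
    }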
UWP also offers the roaming folder (ApplicationData.Current.RoamingFolder), where you can store small files that are synced across the user's devices. Unfortunately this is less reliable, because you cannot control when the sync happens and cannot resolve conflicts.
I have successfully migrated to another cloud platform: Heroku. In my opinion, at least for small apps, Heroku offers the best solution both technology-wise and price-wise.
I am now able to have a web service hosted for free in the cloud, without worrying about traffic and the number of requests. Of course you can scale up if you want better performance, but you can start with a free plan. I also have a PostgreSQL database hosted in the cloud, likewise for free (up to 10,000 records, and it is just $9/month if I want to upgrade to 10 million). You will never find an offer like this for free on Azure.
I had to learn a bit of Node.js (Heroku supports a lot of languages for backend services, but .NET is not one of them), but it was totally worth it!
Another option that is now starting to gain more and more popularity is Firebase. I will certainly also check that out for my future apps.
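For reference, the kind of Node backend this ends up being is quite small. A sketch with Express and the pg driver might look like this (the "notes" table and routes are made up; Heroku supplies DATABASE_URL and PORT as environment variables):

    // Minimal sketch of a Heroku-hosted API backed by Heroku Postgres.
    // The "notes" table and routes are illustrative assumptions.
    import express from 'express';
    import { Pool } from 'pg';

    const pool = new Pool({
      connectionString: process.env.DATABASE_URL,
      ssl: { rejectUnauthorized: false }, // Heroku Postgres connections use SSL
    });

    const app = express();
    app.use(express.json());

    app.get('/notes', async (_req, res) => {
      const { rows } = await pool.query('SELECT id, body FROM notes ORDER BY id');
      res.json(rows);
    });

    app.post('/notes', async (req, res) => {
      const { rows } = await pool.query(
        'INSERT INTO notes (body) VALUES ($1) RETURNING id, body',
        [req.body.body]
      );
      res.status(201).json(rows[0]);
    });

    app.listen(Number(process.env.PORT) || 3000);

The UWP client then simply calls these endpoints over HTTPS instead of the old App Service.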

Web UI to manage computer machines in the network

I'm looking for a platform with Web UI access that allows me to do the following:
Maintain a list of computers and add / remove based on their IP address.
Provide the SSH information for each computer machine.
Monitor whether the machines are up (ping?).
Restart the machines from the web UI, using the SSH information on the backend of the application.
I'm close to starting to build such an app myself, since I can't seem to find anything like it on the internet. Any clues as to whether such an application already exists?
You might want to take a look at MeshCentral: https://meshcentral.com/ - you can add systems that you are managing and do some remote operations.
http://info.meshcentral.com/: MeshCentral is open source and is both a peer-to-peer technology with a wide array of uses and a web service targeted at remote monitoring and management of computers and devices. Users can manage all their devices from a single web site, no matter the location of the computers or whether they are behind routers or proxies.
If you are looking for source code, you could take a look at the "Open Manageability Developer's Toolkit": http://opentools.homeip.net/open-manageability. This tool was built for managing systems with Intel Active Management Technology, but it does a lot of what you are looking for. You can download the source and see if you can use any of it if you decide to write your own UI.
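If you do end up writing your own tool, the backend part of the requirements (ping check, restart over SSH) is fairly small. A sketch that shells out to the system ping and ssh binaries (host list, user names and the reboot command below are assumptions, and key-based SSH auth is assumed):

    // Sketch of the backend pieces for a "manage my machines" web UI:
    // liveness via ping, restart via the system ssh client.
    import { execFile } from 'child_process';
    import { promisify } from 'util';

    const run = promisify(execFile);

    interface Machine {
      ip: string;
      sshUser: string;
    }

    const machines: Machine[] = [
      { ip: '192.168.1.10', sshUser: 'admin' },
      { ip: '192.168.1.11', sshUser: 'admin' },
    ];

    async function isUp(machine: Machine): Promise<boolean> {
      try {
        await run('ping', ['-c', '1', '-W', '2', machine.ip]); // Linux ping flags
        return true;
      } catch {
        return false;
      }
    }

    async function restart(machine: Machine): Promise<void> {
      // Relies on SSH keys being set up and the user being allowed to run "sudo reboot".
      await run('ssh', [`${machine.sshUser}@${machine.ip}`, 'sudo reboot']);
    }

    async function main() {
      for (const m of machines) {
        console.log(m.ip, (await isUp(m)) ? 'up' : 'down');
      }
    }

    main();

A small web framework in front of these two functions would give you the UI part.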

What's the best way to monitor rabbitmq to make sure everything is running smoothly?

Many times, I get:
- Frozen: the load goes to 5.0 and I can't use my box.
- It just doesn't work.
Do the following steps:
1. rabbitmq-plugins enable rabbitmq_management
2. service rabbitmq-server restart
3. Browse to http://rabbitmq-server-ip:15672
4. Log in with username: guest, password: guest
Don't forget to change your password later (e.g. rabbitmqctl change_password guest <new-password>).
As sheki notes, rabbitmqctl is your first port of call for diagnostics, and for building monitoring on top of, but it's not suitable for actual monitoring directly, since it is a manual command-line tool.
I've found DataDog very good to monitor both the MQ details, plus the host platform in parallel. e.g. you can watch the queue levels and set alerts on queues backing-up, while also watching the CPU/memory/IO inflicted by these queue levels. It really helps to get ratios of resource usage, and the alerts are good. Having a uniform platform for both infrastructure and application level monitoring is surprisingly rare, but speeds up diagnoses of production issues hugely.
NewRelic is similar and also has a RabbitMQ plugin. Although I've not used this plugin specifically, I've used NR for years and found it invaluable in diagnosing operational issues.
AppDynamics is another example. Similarly this allows you to drill down into your app from a high-level dashboard, and visually navigate from problems to causes. It's especially good with visualising the network of a distributed application across various services/servers. I've used this, for example, to find complex problems in .NET applications and SQL Server clusters using 3rd party Web Services (e.g. latency and its consequences to your app over chatty protocols). These things are very difficult to diagnose, especially for developers who are limited to checking their code. Diagnosing operational issues requires a much broader picture.
I gave up trying to even install and configure Nagios. I know it's the 'best' but it's the best of an old breed of self-configured beasts which we don't have time to manage. I didn't even get it going... and eventually turned to the more 'modern' cloud approach. Once you get over the trust factor, it's pretty liberating.
I'm using these APM platforms together* to aggregate data from:
Windows O/S level Event Logs/Services
Linux O/S level
AWS console level
RDS, EC2
Apache
MySQL
App integrations / custom NR plugins I've written
RabbitMQ
*NewRelic can feed into Datadog! So if you are already using NR you don't need to install DD on those hosts as well.
Being able to view all these levels together gives you a view on the publishers, middleware, MQ servers, workers and front-end app - all in one dashboard.
I would highly recommend an approach like this, because just looking at one server alone leads you to a lot of head-scratching. Seeing an entire stack in one customisable dashboard is just so illuminating it takes most of the guesswork out of it.
Worried about installing these things? I found New Relic to be especially lightweight and unobtrusive. AppDynamics seemed to stress the host a bit more, but mostly that's because you had to run the visualisation tools on the host (this may have changed). DataDog seems performant, but creates a lot of control panels/icons on the target host (perhaps just a visual impression).
To a four year old question - this answer probably wasn't available in 2011, but in 2015 these once 'startup' style APM services are just tens or hundred dollars a month for an unbelievably rich enterprise-level solution.
There are a bunch of RabbitMQ monitoring plugins available for different monitoring systems like Nagios, Zabbix, etc.
Look at http://www.rabbitmq.com/how.html#management
Using rabbitmqctl is the most straightforward way to check the status of the node.
$ rabbitmqctl status
This should tell you the status of the RabbitMQ node.
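If you want to build a basic check on top of rabbitmqctl, wrapping it in a small script is enough to start with. A sketch (it assumes the script runs on the RabbitMQ host with permission to call rabbitmqctl; the watched queue name and the backlog threshold are made up):

    // Sketch: a crude health check built on top of rabbitmqctl.
    import { execFile } from 'child_process';
    import { promisify } from 'util';

    const run = promisify(execFile);

    async function nodeIsRunning(): Promise<boolean> {
      try {
        await run('rabbitmqctl', ['status']); // non-zero exit code means the node is unreachable
        return true;
      } catch {
        return false;
      }
    }

    async function queueBacklog(queue: string): Promise<number> {
      // "list_queues name messages" prints one tab-separated "<name> <count>" line per queue.
      const { stdout } = await run('rabbitmqctl', ['list_queues', 'name', 'messages']);
      for (const line of stdout.split('\n')) {
        const [name, messages] = line.split('\t');
        if (name === queue) return Number(messages);
      }
      return 0;
    }

    async function main() {
      if (!(await nodeIsRunning())) {
        console.error('ALERT: RabbitMQ node is down');
        return;
      }
      const backlog = await queueBacklog('work');
      if (backlog > 1000) console.warn(`ALERT: queue "work" is backing up (${backlog} messages)`);
    }

    main();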
If you have PRTG (or any probe system with an HTTP sensor check), you can check the server status as described on the following page:
https://blog.cdemi.io/monitoring-rabbitmq-in-prtg/
In particular you have to enable the management plugin:
The rabbitmq-management plugin provides an HTTP-based API for management and monitoring of your RabbitMQ server, along with a browser-based UI and a command-line tool, rabbitmqadmin. The management plugin is included in the RabbitMQ distribution. To enable it, run rabbitmq-plugins enable rabbitmq_management on the RabbitMQ nodes. For more details on the management plugin refer to the RabbitMQ documentation.
The web UI is located at http://server-name:15672/. The HTTP API and its documentation are both located at http://server-name:15672/api/.
Once done, you can check the overview of your server with the API:
http://server-name:15672/api/overview
This returns a JSON document with all the details about the server: active connections, queues, etc.
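A scripted version of that check is straightforward. The sketch below polls /api/overview with HTTP basic auth (host, credentials and what you do with the numbers are placeholders):

    // Sketch: poll the RabbitMQ management HTTP API and report a few key numbers.
    const BASE = 'http://server-name:15672/api';
    const AUTH = 'Basic ' + Buffer.from('guest:guest').toString('base64');

    async function checkOverview(): Promise<void> {
      const res = await fetch(`${BASE}/overview`, { headers: { Authorization: AUTH } });
      if (!res.ok) throw new Error(`management API returned ${res.status}`);
      const overview = await res.json();

      // object_totals and queue_totals are part of the /api/overview payload.
      console.log('connections:', overview.object_totals.connections);
      console.log('queues:', overview.object_totals.queues);
      console.log('messages ready:', overview.queue_totals.messages_ready);
    }

    checkOverview().catch((err) => console.error('RabbitMQ check failed:', err));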
This command will help you: service rabbitmq-server status
Or try service rabbitmq-server stop and service rabbitmq-server start, then service rabbitmq-server status.

RabbitMQ - Basic newbie questions

Our scenario: dozens of Windows laptops which are occasionally connected to the network. We need to store simple data records on each laptop, then have these reliably transferred to a service running on the network once a connection is available. We are considering RabbitMQ on each laptop, feeding data to a "main" RabbitMQ on the network. This is a Fortune 100 company, and packaging etc. is a concern.
Question 1: In general, does Rabbit make sense here? If not, any suggestions for an approach?
Question 2: When I installed RabbitMQ on Windows I had to manually install Erlang first. Are there packaging/deployment options which are simpler/more friendly? (Their IT people can do all the normal deployment stuff, including creating a Windows service, but installing Erlang on user machines might raise eyebrows...)
Thanks for any help from those of you who've been there, done that with Rabbit.
Question 1: What you need is a store-and-forward mechanism. RabbitMQ can be used for that, by using the Shovel plug-in to take care of moving messages from the local Rabbit to the remote one (handling reconnection, retries, etc. for you).
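To make that concrete, a dynamic shovel can be declared on each laptop's local broker over the management HTTP API, roughly as sketched below (requires the rabbitmq_shovel plugin; all URIs, queue names and credentials are placeholders):

    // Sketch: declare a dynamic shovel on the laptop's local broker so that messages
    // written to a local queue are forwarded to the central RabbitMQ when reachable.
    // Names, URIs and credentials are illustrative assumptions.
    const LOCAL_API = 'http://localhost:15672/api';
    const AUTH = 'Basic ' + Buffer.from('guest:guest').toString('base64');

    async function createShovel(): Promise<void> {
      const shovel = {
        value: {
          'src-uri': 'amqp://localhost',
          'src-queue': 'records.outbox',                           // local store-and-forward queue
          'dest-uri': 'amqp://user:pass@central-rabbit.example.com',
          'dest-queue': 'records.inbox',                           // queue on the central broker
          'reconnect-delay': 30,                                   // retry every 30s while offline
        },
      };

      // PUT /api/parameters/shovel/<vhost>/<name>  (%2F is the default "/" vhost)
      const res = await fetch(`${LOCAL_API}/parameters/shovel/%2F/laptop-to-central`, {
        method: 'PUT',
        headers: { Authorization: AUTH, 'Content-Type': 'application/json' },
        body: JSON.stringify(shovel),
      });
      if (!res.ok) throw new Error(`failed to create shovel: ${res.status}`);
    }

    createShovel().catch((err) => console.error(err));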
Question 2: The answer is related to Question 1. RabbitMQ + Shovel is conceptually suitable for your store-and-forward needs, but if it is, alas, not technologically acceptable, you may want to consider simpler/cruder approaches like... SMTP!
If the Windows laptops are backed by a Windows infrastructure, the most logical choice is MSMQ, which offers this out of the box, i.e. store and forward from clients to server(s). It is easy to install by policy and to administer.