CPU Mining on a Private Peercoin Network

I have set up a private Peercoin network with two nodes on VirtualBox. Both clients are able to connect, and I am able to start a separate private blockchain.
I have started mining using the setgenerate true command.
Now I want to achieve the same thing using the latest client. I am able to connect the two nodes with no outside connections, and I want to start mining just as I did with the older version, but it seems the setgenerate command has been removed.
How can I start CPU mining on the new client?

Get a mining program such as cgminer that supports Peercoin and can talk to a local full node over RPC, then configure it to point at one of your clients.
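For example (a rough sketch only; Peercoin's configuration follows the usual bitcoind-style conventions, but the port number and credentials below are placeholders to check against your own setup), you would enable the RPC server in the node's peercoin.conf:

    server=1
    rpcuser=miner
    rpcpassword=some-long-random-string
    rpcallowip=127.0.0.1
    rpcport=9902

and then point the miner at that node, for instance with cgminer's standard -o/-u/-p options:

    cgminer -o http://127.0.0.1:9902 -u miner -p some-long-random-string

Whether a particular miner build still supports CPU mining (and Peercoin's SHA-256 proof-of-work) varies, so check the miner's documentation; the node itself only needs its RPC interface enabled.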


Is LoRaWAN only accessible with an internet connection?

I'm planning to build an IoT project for an oil palm plantation using an Arduino and an Android mobile application for my final year project at university. As plantations have little to no communication coverage, including WiFi, is it possible to implement LoRaWAN without access to the internet or the use of a web-based application?
A LoRaWAN node does not need any communications channel other than LoRaWAN itself, of course. It would not make any sense otherwise. ;-)
The gateway, however, does need a connection to the server application that acts as the central instance for your use case. Usually this is an existing LoRaWAN cloud service such as The Things Network (TTN) with your application connected behind it, but in theory you could connect the gateway to your very own central server, making your whole network independent. This is possible because LoRa uses licence-free frequency bands (ISM bands), so anyone can become a "network operator". The TTN software is available as open source, for example.
The connection from the gateway to the central server is usually made via existing Ethernet/WiFi infrastructure or mobile internet (3G/4G), whatever suits best.
Besides, the LoRa modules available for Arduinos can be used for a low-level, point-to-point LoRa (not LoRaWAN) connection between two such modules. No gateway is needed here. Maybe that is an option for your use case, too.
LoRaWAN uses a gateway connected to some kind of cloud, for example the community-based TTN network. If you live in a bigger city, chances are good that there is a TTN gateway in your area.
You can, however, connect two LoRa nodes together to get a point-to-point connection. You can send data from Node1, which is connected to some kind of sensor and battery-powered, to Node2, which is stationary and stores all the data to a flash drive, for example. From this flash drive you can import the data into a website, or you could use an application like Node-RED to display the data on a dashboard.
Here you will find instructions on how to send data from one LoRa node to another.
Here you will find instructions on how to use Node-RED to display your LoRa data. You will have to change the input from the TTN cloud to a text file on your Raspberry Pi, or whatever gateway you use. (Optional)
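As a minimal sketch of the point-to-point option described in both answers (assuming an SX127x-based module and the common Arduino "LoRa" library by Sandeep Mistry; the frequency, payload and timing are placeholders to adapt), the two nodes could look like this:

    // Node1 (sender): reads a sensor and broadcasts a packet every 10 seconds.
    #include <SPI.h>
    #include <LoRa.h>

    void setup() {
      Serial.begin(9600);
      // 868E6 for EU, 915E6 for US, 433E6 for Asia -- must match the receiver
      // and your local regulations.
      if (!LoRa.begin(868E6)) {
        Serial.println("LoRa init failed");
        while (true);
      }
    }

    void loop() {
      LoRa.beginPacket();
      LoRa.print("temp=23.5");   // placeholder payload from your sensor
      LoRa.endPacket();
      delay(10000);
    }

    // Node2 (receiver) is a separate sketch with the same setup(); its loop()
    // just prints whatever arrives -- writing to an SD card / flash drive would
    // replace the Serial calls.
    void loop() {
      int packetSize = LoRa.parsePacket();
      if (packetSize) {
        while (LoRa.available()) {
          Serial.print((char)LoRa.read());
        }
        Serial.println();
      }
    }

No gateway or internet connection is involved in this setup; range, duty-cycle limits and any addressing or acknowledgement scheme are up to you.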

Connecting deepstream nodes directly

Deepstream docs:
For smaller clusters it used to be possible to connect deepstream nodes directly in a full-mesh configuration (everyone-to-everyone). This feature has been deprecated in its current incarnation, but will soon be replaced by a more scalable (and hopefully slightly smarter) direct-message-connector plugin based on the Small World Network Paradigm.
Is it possible to create the described (but deprecated) mesh with a deepstream cluster? I wasn't able to find any real example of this.
An example idea is a chat application. This application would run on each user's desktop, and each instance would start its own deepstream server. There would be some discovery logic to connect to other instances on the same LAN, and the clients would sync data with each other through their own deepstream servers running on their desktops.
I know IPFS follows this sort of idea, but I wanted this to be more application-based, and deepstream seemed like a good place to start.
Edit:
I did just find this: https://deepstreamhub.com/tutorials/protocols/webrtc-full-mesh/
-- I'm interested in understanding why this might not be the most scalable solution and whether there are possible workarounds.
Clustering deepstream servers is currently only available as part of our enterprise offering [1]. We've built a decentralized clustering mechanism allowing it to scale to millions of concurrent connections and billions of messages.
If you're looking to build a chat application you wouldn't have a deepstream server running on each person's computer. What you would do is either:
set up one deepstream server [2] (we've found that an individual server can easily handle ~100 000 connected clients)
create an application on deepstreamHub [3] (deepstreamHub is our hosted version of deepstream where you don't need to run any servers yourself).
Each user of your chat application has a deepstream client that connects to the server. These clients are websocket-based and are able to send/receive messages and sync data for your chat application.
Take a look at some of the example apps [4] we've built, these include some chat apps as well as other demos you might find interesting.
[1] deepstream enterprise
[2] deepstream open source
[3] deepstreamHub
[4] example applications

Is a Google Compute Engine virtual machine highly available?

So I have a cloud virtual machine on Google Compute Engine; does this mean it is highly available by nature? If the VM is running on a single piece of hardware on GCE and that hardware breaks, the VM could go down. Is the VM running on some kind of RAID, but for servers, so that if one machine goes down another machine will pick up and continue running the VM? Thanks.
The machine itself is not highly available. However, Google takes several steps to increase reliability:
Storage is replicated and independent of the physical machine the VM is running on (obviously not for local SSD). This means that even if the physical machine catches on fire, only the "runtime" state is lost but the attached disks are fine.
VMs can live-migrate. This is a setting you can control. If enabled, the VM will be migrated to a different physical machine on maintenance events. Live-migration can lead to brief performance degradation while memory etc. is synced to the other host but the machine is not shut down / restarted.
Even when the physical host suddenly dies, you can set your instance to restart automatically on a new machine. If you plan to use this mode, make sure your instance is able to cleanly boot to serving state without manual intervention.
If you need high availability, the best approach is to spread your instances among zones of the same region and use a network or HTTP(S) load balancer. The load balancer will automatically stop sending traffic to a machine if it becomes unhealthy. Also see this short YouTube video on Google's network architecture for more info.
For high availability of your application data, there are highly available options like Datastore for database-like usage and Cloud Storage for file-oriented data. Keep in mind that Cloud SQL also runs on a single instance/physical machine, which means that you have to set up slaves/replicas to get high availability. However, you can also do that with your favorite DB system on plain Compute Engine instances if you are willing to maintain them yourself.

Can ns-3 emu be applied on different machines in a WAN?

We are currently considering whether ns-3 satisfies our requirements. We're looking for a convenient tool to run on distributed devices in the real network (over every kind of possible connection) and capture network performance data (like a sniffer). I realize that the primary purpose of ns-3 is to simulate a network topology on a single machine, but its emu module sounds promising, and the flow monitor could save us effort on data capture.
In the following link
http://www.nsnam.org/wiki/HOWTO_make_ns-3_interact_with_the_real_world
it is stated that ns-3 emu can be used to inject simulated nodes that interact with a real live network, and three kinds of testbed are given. However, the first solution, the VMware virtual machine testbed, still works within a LAN -- in promiscuous mode the virtual machines' network cards listen to all LAN broadcasts, so the emu-udp-echo server and client can find each other.
My question is: is it possible to run the emu-udp-echo server/client on different physical systems at different positions in a wide-area network?
For example, in different cities or on different network providers, given the IP address of the hardware where the other ns-3 node is running? If it is possible, how can I specify the "real" IP address and port for the node instead of assigning a virtual IPv4 address?
Thanks a lot.
Yes, while the documentation describes how to perform this using virtual machines, this can be done in general on real hardware. Since that HOWTO was written, there has been additional work on providing helpers for running this type of experiment, including running on PlanetLab testbed machines. This documentation describes the generalized file descriptor NetDevice, added to the ns-3.17 release: http://www.nsnam.org/docs/release/3.19/models/html/fd-net-device.html. A similar example to the one described in that HOWTO is found in the file fd-emu-udp-echo.cc.
When using emulation mode on real networks, configuration of the MAC addresses and IP addresses must be done carefully. First, the device must be able to be put into promiscuous mode. Second, the MAC address needs to be different than the hardware address of the NIC. If you intend to be riding on top of an active NIC with existing IP address (in use for other Internet traffic), you'll need to have another IP address for ns-3 that is within the right link subnet. If instead you want to dedicate the NIC to ns-3 use, then do not assign the IP address to the host NIC and just assign it to the ns-3 configuration.
The PlanetLab example also shows another configuration that uses Tap devices to send data to/from PlanetLab testbed nodes. Some of this configuration is specific to how PlanetLab works, but the use of Tap device bridged to an ns-3 device may also facilitate emulation.
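To make that concrete, here is a minimal sketch (modeled on the fd-emu-udp-echo.cc example; the device name and IP address are placeholders for a real NIC and a spare address on its real subnet, and the program needs raw-socket privileges, e.g. sudo) showing how one ns-3 node is bound to a real NIC with the EmuFdNetDeviceHelper:

    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/internet-module.h"
    #include "ns3/fd-net-device-module.h"

    using namespace ns3;

    int main (int argc, char *argv[])
    {
      std::string deviceName = "eth0";       // real NIC to attach to (placeholder)
      std::string localIp = "203.0.113.10";  // spare address on that NIC's real subnet (placeholder)
      std::string mask = "255.255.255.0";

      // Emulation requires the real-time scheduler and real checksums.
      GlobalValue::Bind ("SimulatorImplementationType",
                         StringValue ("ns3::RealtimeSimulatorImpl"));
      GlobalValue::Bind ("ChecksumEnabled", BooleanValue (true));

      NodeContainer nodes;
      nodes.Create (1);
      InternetStackHelper internet;
      internet.Install (nodes);

      // Attach the node to the real NIC and give the emulated device its own MAC.
      EmuFdNetDeviceHelper emu;
      emu.SetDeviceName (deviceName);
      NetDeviceContainer devices = emu.Install (nodes.Get (0));
      Ptr<NetDevice> device = devices.Get (0);
      device->SetAttribute ("Address", Mac48AddressValue (Mac48Address::Allocate ()));

      // Assign the "real" IP address to the emulated interface.
      Ptr<Ipv4> ipv4 = nodes.Get (0)->GetObject<Ipv4> ();
      uint32_t interface = ipv4->AddInterface (device);
      ipv4->AddAddress (interface, Ipv4InterfaceAddress (Ipv4Address (localIp.c_str ()),
                                                         Ipv4Mask (mask.c_str ())));
      ipv4->SetMetric (interface, 1);
      ipv4->SetUp (interface);

      // Install a UdpEchoServerHelper here on one machine and a UdpEchoClientHelper
      // (pointed at the other machine's address) on the other, as in fd-emu-udp-echo.cc.

      Simulator::Stop (Seconds (60.0));
      Simulator::Run ();
      Simulator::Destroy ();
      return 0;
    }

Run one such program on each physical machine with an address valid for its own attached network; the emulated device only sends and receives raw frames on that local link, so traffic between distant sites still has to be carried by normal IP routing (or a tunnel/VPN between the sites), not by ns-3 itself.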

How to simulate a large network of machines for testing?

Currently, I am writing an application that utilizes WMI to scan all the computers on our Active Directory network.
I'm interested in testing the program against all flavors of Windows machines in a testing environment.
Is there a way to simulate this environment in VMware or something?
Any ideas?
VMware works well and can host many virtual computers on a single physical computer. You can also put the virtual computers on your Active Directory network.
If your goal is to set up a separate large network for testing that has its own AD server, you can look into Amazon EC2. The advantage here is that once you set up your servers, you can turn them on and off as needed and only pay for the time actually used ($0.12 per hour).
http://aws.amazon.com/
You can use network simulation: http://en.wikipedia.org/wiki/Network_simulation
and a good GPL tool is http://www.nsnam.org/
You have two options.
You probably have it right: with VMware this is easy; try looking for cloning tools. If you plan on simply copying and pasting the image, you will run into several problems (repeated computer GUIDs, repeated network computer names, etc.).
You can also "mock" the WMI response by wrapping the WMI methods that you want to call behind an interface, using Rhino Mocks or NMock if you are working in .NET (which I assume you are).