How to connect bitcore-lib to a running bitcoin full node

I want to run a full node on my box and write a program with bitcore-lib that can get the balance of a given address and also transfer money, using the running full node on my box. I would really appreciate any pointers on how to achieve this.

Use npm to install bitcore-lib. It should also come with its own version of bitcoind (unsure if they've switched over to bcoin). If not, or if you're unsure, you can download and set up your own bitcoind node by cloning the bitcoin repository and following the docs on setting it up on your machine (macOS, Linux, Windows, etc.).
Then, to configure bitcore-lib to connect to your node, you can configure your bitcore-node.json file to look something like this:
{
  "network": "livenet",
  "port": 3001,
  "services": [
    "bitcoind",
    "insight-api",
    "insight-ui",
    "web"
  ],
  "servicesConfig": {
    "bitcoind": {
      "connect": [
        {
          "rpcuser": "bitcoin",
          "rpcpassword": "local321",
          "zmqpubrawtx": "tcp://127.0.0.1:28332"
        }
      ]
    }
  }
}
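Once the node and the insight-api service are up, a minimal sketch for reading an address balance over that API could look like the following; the /insight-api route prefix and the address are assumptions, so adjust them to your setup.
// Minimal sketch: query an address balance through the insight-api service
// configured above. The route prefix and address below are assumptions.
var http = require('http');

var address = '1BitcoinEaterAddressDontSendf59kuE'; // hypothetical example address
http.get('http://localhost:3001/insight-api/addr/' + address + '/balance', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('Balance (satoshis): ' + body);
  });
});
For transfers, bitcore-lib's Transaction class lets you build and sign a transaction locally, which you would then broadcast through the node (insight-api exposes a transaction-broadcast endpoint, if I recall correctly).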
I want to run a full node on my box and write a program with bitcore-lib that can get the balance of a given address and also transfer money, using the running full node on my box.
If you're interested in building applications on top of bitcoin and you're fine using JavaScript, you should take a look at bcoin (bcoin.io). It's a full-node implementation written in node.js with great tutorials on how to use its fleshed-out API. If you have issues, they also have an open Slack team where you can ask the developers for help. bitcore-lib, while a front runner in the past, is not as well supported and suffers from a host of issues.

Related

Building Superset locally encounters missing static assets

I'm trying to build Superset locally using docker-compose.
After cloning the repository, I modify docker-compose.yml so that it builds images from local source code instead of pulling from Docker Hub. My modifications include:
In service db, change the Postgres image from image: postgres:14 to image: postgres:10, since the service cannot be built properly with Postgres 14.
In the services superset, superset-init, superset-worker, superset-worker-beat and superset-tests-worker, change image: *superset-image to build: . so that Docker builds the application from local source code (sketched below).
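In outline, the relevant fragment of my modified docker-compose.yml looks something like this (abridged sketch; unrelated keys omitted):
services:
  db:
    image: postgres:10          # was: postgres:14
  superset:
    build: .                    # was: image: *superset-image
  superset-init:
    build: .
  superset-worker:
    build: .
  superset-worker-beat:
    build: .
  superset-tests-worker:
    build: .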
However, after running docker-compose build and then docker-compose up, I got a blank screen. I checked the logs and found that a lot of asset files are missing; for example, /static/assets/images/loading.gif is missing, which results in the blank screen.
What is wrong or missing in my configuration steps? Please help me.
I finally figured it out: the Superset frontend's packages are installed inside the superset_node container at runtime rather than while the image is built. That's why, even after superset_node is built, we have to wait (in my case about 15-20 more minutes) for the install to complete. Another point to note is that this installation takes up a lot of memory, so make sure you allocate enough RAM (in my case I allocated 16GB to Docker).
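To see when that install actually finishes rather than guessing, you can follow the container's logs and wait for the frontend build output to settle (container name as above):
docker logs -f superset_node
Once the log output goes quiet, reload the page and the assets should be served.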

Puppet - how to set up a conditional to check if a package is installed and, if it is not installed, skip it?

Sorry, but I'm starting my journey in Puppet and I'm facing my first challenge with the tool.
I want to check in Puppet whether a package is installed on the server; if it is, guarantee it is running, and if it is NOT installed, just skip to the next task.
class profile::windows::rapid7 {
  $manage_rapid7 = lookup('manage_rapid7', Optional[Boolean], 'first', true)
  $rapid7_filepath = 'C:\Program Files\Rapid7\Insight Agent\ir_agent.exe'
  $rapid7_service_exists = find_file($rapid7_filepath)

  if $facts['kernel'] == 'Windows' {
    if $manage_rapid7 {
      if $rapid7_service_exists {
        service { 'ir_agent':
          ensure => 'running',
          enable => true,
        }
      }
    }
  }
}
As you can see above, when rapid7 is installed I make sure it is running, but I have some servers that don't have this package, and because of this I'm getting an error.
So my question is:
Is it possible to just skip this task when this package isn't installed?
Best Regards,
I want to check in Puppet whether a package is installed on the server; if it is, guarantee it is running, and if it is NOT installed, just skip to the next task.
The most appropriate approach is for Puppet to know, based on node identity, whether the package is supposed to be installed. This is the realm of node classification, generally realized via node blocks, an external node classifier, and/or node-specific external data. Using these tools, you would inform Puppet during catalog building whether to apply the class that manages the service in question, and with which class parameters. It would be typical to also choose on the same basis whether to apply a class that manages the package itself. You can find many examples of this approach on the Forge.
But where you do need to feed data about the current state of a node into the catalog building process, the Puppet mechanism that serves is facts. It would be possible to write a custom or external fact that tested whether the package was installed. You could then write Puppet conditional statements to influence the contents of the node's catalog based on the value of that fact. Note well, however, that this speaks to node state before the run, and also that it is superfluous if you want Puppet to manage the installation status of the package.
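As a rough sketch of that fact-based approach (the fact name rapid7_installed is invented for illustration): note that find_file() in the code above is evaluated on the Puppet server during catalog compilation, not on the agent, which is why it cannot see the agent's file system. A custom fact, by contrast, runs on the node itself:
# modules/profile/lib/facter/rapid7_installed.rb
# Custom fact: true when the Rapid7 agent binary exists on the node.
Facter.add('rapid7_installed') do
  confine kernel: 'windows'
  setcode do
    File.exist?('C:\Program Files\Rapid7\Insight Agent\ir_agent.exe')
  end
end
The profile can then key off the fact:
if $facts['rapid7_installed'] {
  service { 'ir_agent':
    ensure => 'running',
    enable => true,
  }
}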

dbt deps command results in "Unable to connect to registry hub"

When running dbt deps, I get back this error message:
Running with dbt=0.17.0
Error sending message, disabling tracking
Encountered an error:
Unable to connect to registry hub
What's happening here, and how can I work around it?
First of all, it's worth understanding what's going on here. It looks like you're trying to install a package from the dbt hub site (hub.getdbt.com) — if you open up your packages.yml file, you'll find something like this:
packages:
  - package: package-owner/package-name
    version: 0.1.0
When you run dbt deps (at a high level):
dbt sends a request to hub.getdbt.com.
From hub.getdbt.com, a request is sent to GitHub to download the package.
The package is copied into your project.
This error occurs when dbt cannot connect to the hub site after repeated attempts. First off, we recommend you retry the dbt deps command; sometimes it's just a blip in connectivity that goes away on the second try.
If the error persists, there may be a few different reasons for it:
hub.getdbt.com might be unavailable. This happens but is relatively rare. You can navigate to hub.getdbt.com to check if this is the case. Also check the Netlify status page to see if there are any issues.
GitHub might be down — you can check this by going to the GitHub status page.
Finally, it may be that a firewall rule or antivirus software on your computer is rejecting the request. Talk to your IT team to find out if this is the case and whether that restriction can be removed.
We generally recommend using the hub syntax for packages; however, if you need to work around it, you can consider using the git syntax (docs) or installing the package from a local directory (docs).
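For example, in packages.yml (the repository URL, revision, and local path here are purely illustrative):
packages:
  # git syntax: install directly from a git remote
  - git: "https://github.com/dbt-labs/dbt-utils.git"
    revision: 0.8.0
  # local syntax: install from a directory on disk
  - local: ./local-packages/my_package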

How do I build and run the code from github for NS3 in the link provided

How do I build and run the code from GitHub for ns-3 in the link provided below?
https://github.com/mkheirkhah/mptcp
It already comes with ns-3 installation steps for MPTCP.
https://github.com/mkheirkhah/mptcp
These are the installation steps; follow them and you'll get it working:
We have tested this code on Mac (with llvm-gcc42 and Python 2.7.3-11) and several Linux distributions (e.g. Red Hat with gcc 4.4.7 or Ubuntu 16.04 with gcc 5.4.0).
Clone the MPTCP's repository
git clone https://github.com/mkheirkhah/mptcp.git
Configure and build
CXXFLAGS="-Wall" ./waf configure build
Run a simulation
./waf --run "mptcp"
https://github.com/Kashif-Nadeem/ns-3-dev-git is the more recent fork of https://github.com/teto/ns-3-dev-git/wiki, which started as mkheirkhah's fork.
It should work with the latest ns-3. Compared to mkheirkhah's approach (I haven't checked whether it is still valid), it tries to reuse the TCP socket code so that it can be used with TCP socket applications. You can read more details at https://www.researchgate.net/publication/313623789_An_Implementation_of_Multipath_TCP_in_ns3

How to manage a project folder via ssh?

Developing full-stack web apps, I would like to have all my code and build logic on a Linux machine (i.e. git, docker containers and other terminal commands), but keep all my development workflow on my Windows machine (my IDE, web browser and REST client), accessing the Linux box via SSH.
I've managed to do all of that except for the IDE: I could only edit individual files via SSH instead of managing a folder as a project. So right now I use VSCode on the Linux machine (Ubuntu), and it's the last thing preventing me from dropping the graphical interface and installing Ubuntu Server on it.
And no, I don't want to use Vim or Emacs. I want to use VSCode, or another modern IDE, but preferably VSCode.
Try using the Remote VSCode plugin as explained here: Using Remote VSCode
This discussion is exactly about your problem: VSCode issue 13643 on GitHub.
EDIT: I have recently found a new VSCode plugin on GitHub: vs-deploy. It was designed to deploy files and folders remotely very quickly. It seems to be working and I haven't found any bugs so far. It works with FTP, SFTP (SSH) and many other protocols.
The SSH.NET NuGet package can be used quite nicely to copy files and folders.
Here is an example:
using System.IO;
using Renci.SshNet;

var host = "YourServerIpAddress";
var port = 22;
var user = "root"; // TODO: fix
var yourPathToAPrivateKeyFile = @"C:\Users\Bob\mykey"; // Use certificate for login
var authMethod = new PrivateKeyAuthenticationMethod(user, new PrivateKeyFile(yourPathToAPrivateKeyFile));
var connectionInfo = new ConnectionInfo(host, port, user, authMethod);

using (var client = new SftpClient(connectionInfo))
{
    client.Connect();
    if (client.IsConnected)
    {
        // TODO: Copy folders recursively etc.
        var source = new DirectoryInfo(@"C:\your\project\publish\path");
        foreach (var file in source.GetFiles())
        {
            // Upload each file, overwriting any existing remote copy
            client.UploadFile(File.OpenRead(file.FullName), $"/home/yourUploadPath/{file.Name}", true);
        }
    }
}
When you create an upload console application using the code above, you should be able to trigger an upload automatically via post-build events by adding a section to your project file:
<Target Name="PostBuild" AfterTargets="PostBuildEvent">
  <Exec Command="path to execute your script or application" />
</Target>
If you prefer to do the same but more manual you can perform a
dotnet build --configuration Release
followed by a
dotnet publish ~/projects/app1/app1.csproj
and then use the code above to perform an upload.
Search for the extension SSHExtension developed by Vitaly Kondratiev.
Install the extension.
Then edit the server list JSON with the server details, e.g.:
"sshextension.serverList": [
{
"name": "Kuberntes 212",
"host": "10.64.234.54",
"port": 22,
"username": "root",
"password": "byebye"
}
]
Save the file.
Then log in using Ctrl+Shift+P and choose "sshextension: Open ssh extension category". It will create a session for you.
More easily, if you need the entire directory structure in your local workspace, use the extension ftp-simple in VSCode. It works wonders, trust me.
Install ftp-simple in VSCode.
Press Ctrl+Shift+P.
Select ftp-simple: config.
Configure your settings:
[
  {
    "name": "Kubernetes 212",
    "host": "10.75.64.2",
    "port": 22,
    "type": "sftp",
    "username": "root",
    "password": "byebye",
    "path": "/home/vinod/",
    "autosave": true,
    "confirm": true
  }
]
Save the file.
Now press Ctrl+Shift+P
and select ftp-simple: remote directory to workspace.
Voilà! Your work is done; life is simple.