running bitcoin-qt so it loads data from a local testnet (just 2 nodes on my own pc)

I'm running a local testnet from freewil/bitcoin-testnet-box, which I built and ran locally (through Docker*), with the command:
docker run -ti --name btcdev -P -p 49020:19000 bitcoin-testnet-box
which was inspired by the advice in this issue on GitHub; anyway, here is how it looked:
# Running bitcoin-dev-box, and mapping its internal port 19000 to your localhost:49020
$ docker run -ti --name btcdev -P -p 49020:19000 poliver/bitcoin-dev-box
the advice on why to run it that way goes as follows:
The connect parameter is the server address.
If you leave it blank it will connect to the bitcoin network directly.
In the case above it's going to connect to your bitcoin testnet running inside the docker container.
It's connecting to localhost:49020 which should be talking to the network inside the docker container if you mapped it to that port when you started bitcoin-dev-box.
then I ran bitcoin-qt, pointing it at the mapped port with the connect parameter described above:
$ bitcoin-qt -connect=localhost:49020
but it still seems that it's not connecting to my local testnet. Here's a screenshot of bitcoin-qt:
output of 'docker ps':
Alright, so here comes the question.
QUESTION:
how can I configure bitcoin-qt (or another wallet) so that it loads only the data from my local testnet of just two nodes on my own machine, which looks something like this:
bitcoin-cli -datadir=1 getinfo
{
"version" : 90300,
"protocolversion" : 70002,
"walletversion" : 60000,
"balance" : 0.00000000,
"blocks" : 0,
"timeoffset" : 0,
"connections" : 1,
"proxy" : "",
"difficulty" : 0.00000000,
"testnet" : false,
"keypoololdest" : 1413617762,
"keypoolsize" : 101,
"paytxfee" : 0.00000000,
"relayfee" : 0.00001000,
"errors" : ""
}
bitcoin-cli -datadir=2 getinfo
{
"version" : 90300,
"protocolversion" : 70002,
"walletversion" : 60000,
"balance" : 0.00000000,
"blocks" : 0,
"timeoffset" : 0,
"connections" : 1,
"proxy" : "",
"difficulty" : 0.00000000,
"testnet" : false,
"keypoololdest" : 1413617762,
"keypoolsize" : 101,
"paytxfee" : 0.00000000,
"relayfee" : 0.00001000,
"errors" : ""
}
*so that I could set the IP address myself; is there a way to do that while running it locally, without using Docker?

The easiest way to run bitcoin-qt with the testnet-box is
make start-gui
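If you prefer to launch the GUI by hand, that target essentially points bitcoin-qt at one of the two node datadirs. A rough, unverified equivalent from inside the testnet-box directory would be (stop the bitcoind that already holds that datadir first, since two processes can't share it):
$ bitcoin-qt -datadir=1
That way the GUI reads the same chain and wallet as node 1, so it only ever sees your two-node network.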

Related

Amazon Cloudwatch only receiving mem_used_percent and nothing else, despite numerous other metrics specified in config

I am trying to get CloudWatch running properly on my Lightsail instance, which I appear to have achieved with only partial success.
I have run the wizard using sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard, which produced a config file outlining numerous metrics, including CPU, memory, and disk usage, as outlined here. The service loads and starts with the config file and doesn't complain about invalid JSON (this did happen a few times, but I fixed it).
I can stop the service with sudo amazon-cloudwatch-agent-ctl -a stop
I then reload the config with sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -s -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
Verify the service is running: sudo amazon-cloudwatch-agent-ctl -a status
Which outputs this:
{
"status": "running",
"starttime": "2022-01-10T21:53:12+00:00",
"configstatus": "configured",
"cwoc_status": "stopped",
"cwoc_starttime": "",
"cwoc_configstatus": "not configured",
"version": "1.247349.0b251399"
}
Logging into my CloudWatch console, I can see the data being received, and the single line appearing on the graph there corresponds to the times that I started and stopped the service, so it's definitely doing something. And yet... the only metric that appears on that graph is mem_used_percent. Why? Why only this one metric? Where is the rest of my data pertaining to CPU, etc.? What am I doing wrong?
Here is my config.json, which, as I said, is being loaded by the service without issue.
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "metrics": {
    "append_dimensions": {
      "ImageID": "${aws:ImageId}",
      "InstanceId": "${aws:InstanceId}",
      "InstanceType": "${aws:InstanceType}"
    },
    "metrics_collected": {
      "cpu": {
        "resources": [
          "*"
        ],
        "measurement": [
          "cpu_usage_active"
        ],
        "metrics_collection_interval": 60,
        "totalcpu": false
      },
      "disk": {
        "measurement": [
          "free",
          "total",
          "used",
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_active",
          "mem_available",
          "mem_available_percent",
          "mem_free",
          "mem_total",
          "mem_used",
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      },
      "netstat": {
        "measurement": [
          "tcp_established",
          "udp_socket"
        ]
      }
    }
  }
}
Any help greatly appreciated here. TIA.
You likely haven't fetched the configuration yet.
Check the logfile, i.e. /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log, to see which inputs are loaded:
2022-05-18T10:18:57Z I! Loaded inputs: mem disk
To fetch the configuration, do as follows (you'll need to adapt this to your environment - this is for systemd, on-premise, without SSM):
sudo amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
sudo systemctl restart amazon-cloudwatch-agent.service
After:
2022-05-18T11:45:05Z I! Loaded inputs: mem net netstat swap cpu disk diskio
Maybe you are facing the same issue I did. In my case, two configuration JSON files
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
were merged.
The files are then translated to /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml.
When I checked that file, only the mem definition from /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json had been taken. Thus, I deleted that file and restarted the service:
sudo systemctl restart amazon-cloudwatch-agent
After the restart, the toml file contained what I expected and the metrics were in place.
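To check whether you're in the same situation, list the fragment directory and see which collectors made it into the translated file (this assumes the translated toml uses telegraf-style [inputs.*] tables; adjust paths to your install):
sudo ls -l /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/
sudo grep '\[inputs' /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml
If a stale fragment is shadowing your config.json, deleting it and restarting as above should bring the missing metrics back.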

Selenoid[/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory]

While working with Selenoid in Docker, I can see this error in the docker logs: "[/usr/bin/selenoid: browsers config: read error: open /etc/selenoid/browsers.json: no such file or directory]". My volume mapping is "-v $PWD/config/:/etc/selenoid/:ro". If I do "cat $PWD/config/browsers.json", my browsers.json content is printed, and I can also validate manually that the file is present.
Below are the commands I am using. I am executing them directly through Jenkins. Locally the same exact commands work fine, but in Jenkins they give the error.
mkdir -p config
cat <<EOF > $PWD/config/browsers.json
{
  "firefox": {
    "default": "57.0",
    "versions": {
      "57.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      },
      "58.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      },
      "59.0": {
        "image": "selenoid/firefox:90.0",
        "port": "4444",
        "path": "/wd/hub"
      }
    }
  }
}
EOF
chmod +rwx $PWD/config/browsers.json
cat $PWD/config/browsers.json
docker pull aerokube/selenoid:latest
docker pull aerokube/cm:latest
docker pull aerokube/selenoid-ui:latest
docker pull selenoid/video-recorder:latest-release
docker pull selenoid/vnc_chrome:92.0
docker pull selenoid/vnc_firefox:90.0
docker stop selenoid ||true
docker rm selenoid ||true
docker run -d --name selenoid -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock \
    -v $PWD/config/:/etc/selenoid/:ro aerokube/selenoid
The error is self-explanatory: you don't have browsers.json in the directory you are mounting to /etc/selenoid inside the container. I would recommend using absolute paths instead of the $PWD variable.
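For example, a sketch of the same run with an absolute path (the workspace path below is hypothetical; substitute the actual path of your Jenkins workspace, e.g. from echo $WORKSPACE):
docker run -d --name selenoid -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /var/lib/jenkins/workspace/my-job/config/:/etc/selenoid/:ro \
    aerokube/selenoid
This matters especially when the Docker daemon that Jenkins talks to is not on the same filesystem the job runs on, since bind-mount paths are resolved on the daemon's host.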

nodejs loopback api is not running while trying to run in pm2

My pm2 ecosystem.config.js configuration is like below. HGBackend is not running; the others are running in pm2:
module.exports = {
  apps: [
    {
      name: "HGBackend",
      cwd: "hgbackend/server",
      script: "config.json"
    },
    {
      name: "HGBlockchain",
      cwd: "hgblockchain/localgrammes",
      script: "index.js"
      // args: "start:staging"
      // instances: 4,
      // exec_mode: "cluster"
    },
    {
      name: "HGWeb",
      cwd: "hgweb/src/server",
      script: "server.js",
      description: ""
    }
  ]
}
All are working except HGBackend. HGBackend is a Loopback API; the others are React and Express APIs.
What could be the cause of HGBackend not running? Can anyone help me?
For the HGWeb and HGBlockchain applications you are executing the relevant entry script (server.js or index.js) in the script configuration, but for HGBackend you are passing config.json as the script. For Loopback you also need to execute the server.js.
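A sketch of the corrected entry, assuming the Loopback entry point is server.js inside hgbackend/server (the standard LoopBack layout; adjust if yours differs):
{
  name: "HGBackend",
  cwd: "hgbackend/server",
  script: "server.js" // run the app's entry point, not its config file
}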
Yes, I have corrected it. Actually the problem was in the datasources.json file. Thanks.

How to configure Sensu with RMQ and InfluxDB

I am trying to get started with a monitoring server solution. I got the Sensu clients, RabbitMQ, and Uchiwa configured, but then I tried using Graphite and there were so many parts to configure that I tried InfluxDB instead. I am now stuck configuring Sensu to InfluxDB.
Is there a part missing in the below configuration?
Client [Sensu] > RabbitMQ <> Sensu Server <> InfluxDB <> Grafana
Any suggestions?
cat influx.json
{
  "influxdb": {
    "hosts": ["192.168.1.1"],
    "host": "192.168.1.1",
    "port": "8086",
    "database": "sensumetrics",
    "time_precision": "s",
    "use_ssl": false,
    "verify_ssl": false,
    "initial_delay": 0.01,
    "max_delay": 30,
    "open_timeout": 5,
    "read_timeout": 300,
    "retry": null,
    "prefix": "",
    "denormalize": true,
    "status": true
  }
}
cat handler.json
{
  "handlers": {
    "influxdb": {
      "type": "pipe",
      "command": "/opt/sensu/embedded/bin/metrics-influxdb.rb"
    }
  }
}
checks1:
{
  "checks": {
    "check_memory_linux": {
      "handlers": ["influxdb", "default"],
      "command": "/opt/sensu/embedded/bin/check-memory-percent.rb -w 90 -c 95",
      "interval": 60,
      "occurrences": 5,
      "subscribers": ["TEST"]
    }
  }
}
checks2:
{
  "checks": {
    "check_cpu_linux-elkctrl-pipe": {
      "type": "metric",
      "command": "/opt/sensu/embedded/bin/check-cpu.rb -w 80 -c 90",
      "subscribers": ["TEST"],
      "interval": 10,
      "handlers": ["debug", "influxdb"]
    }
  }
}
To use InfluxDB to persist your data, you must have:
InfluxDB plugin installed (also, installation and usage instructions here)
Definitions for the plugin (an influxdb.json containing at least the host, port, user, password, and database to be used by Sensu)
The definition, like other config files, must be in /etc/sensu/conf.d/
Handler configuration set properly (also in conf.d)
Mutator for InfluxDB (extensions)
Your checks must send results to the handler, so their definition must contain:
"handlers": [
"influxdb"
]
Or whatever name you gave your handler.
In any case, if the influxdb config you provided above is the full extent of your configuration, it seems to be missing the username/password attributes required by the InfluxDB configuration. If they're present but just not included in the post, no big deal. However, I'd recommend checking your Sensu logs:
grep -i influxdb /var/log/sensu/sensu-server.log
And see whether the check results are getting sent to your InfluxDB instance. If they are, you should be seeing an error that points a bit more toward what's going on.
You can also check your InfluxDB logs to see if they're getting a POST from your Sensu server:
journalctl -u influxdb.service -f
But yeah, if the username/password is missing from the configuration, that'd be the first place I'd start.
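For reference, a minimal influx.json with the credential attributes added might look like the following; the user and password values are placeholders, and the attribute names follow the plugin requirements listed above:
{
  "influxdb": {
    "host": "192.168.1.1",
    "port": "8086",
    "user": "sensu",
    "password": "changeme",
    "database": "sensumetrics"
  }
}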

How to attach pre-uploaded SSL cert to ELB in CloudFormation template?

I've been trying to attach an SSL certificate that I'm currently using for one of my Elastic Load Balancing instances to a new CloudFormation template, but each time I get:
Server Certificate not found for the key
And then the CloudFormation stack starts to roll back at that point.
"Listeners" : [
{
"LoadBalancerPort" : "443",
"InstancePort" : "80",
"SSLCertificateId" : "start_certname_com",
"Protocol" : "HTTPS"
},...
Amazon is asking for the ARN of the SSL certificate to use, and I believe this string is correct, since it is exactly what appears in the dropdown of the currently configured ELB, which takes 443 to port 80 on the instances.
Am I missing something on my Listener?
You can derive the ARN for a certificate in CloudFormation with only the certificate name, so there is no need to run a command-line tool and hard-code the value into your CloudFormation template.
"Parameters":{
"Path":{
"Description":"AWS Path",
"Default":"/",
"Type":"String"
}
}
...
"Listeners" : [
{
"LoadBalancerPort" : "443",
"InstancePort" : "80",
"SSLCertificateId" : {
"Fn::Join":[
"",
[
"arn:aws:iam::",
{
"Ref":"AWS::AccountId"
},
":server-certificate",
{
"Ref":"Path"
},
"start_certname_com"
]
]
},
"Protocol" : "HTTPS"
},...
This determines your account ID with the {"Ref":"AWS::AccountId"} pseudo parameter and combines it with the other elements needed to form the ARN. Note that I'm using a parameter called Path in case you've set a path for your certificate; if not, the default of "/" works fine.
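For example, if your certificate was uploaded under a non-default path, you could pass it in when creating the stack (the stack and template names here are hypothetical):
aws cloudformation create-stack --stack-name my-elb-stack \
    --template-body file://elb.template \
    --parameters ParameterKey=Path,ParameterValue=/my-certs/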
This solution was mentioned by @Tristan and is an extension of merrix143243's solution.
I've actually figured out how to do this while waiting for an answer. You need to use the IAM CLI tools provided by Amazon and then run the command:
iam-servercertgetattributes -s certname
This will provide you a string like:
arn:aws:iam::123456789123:server-certificate/start_certname_com
This is the value you place in the "SSLCertificateId" field.
The setup instructions for the IAM command line tools (CLI) can be found at:
http://docs.aws.amazon.com/IAM/latest/CLIReference/Setup.html
Download the tool kit from aws here
http://aws.amazon.com/developertools/AWS-Identity-and-Access-Management/4143
All in all, your final block will look like:
"Listeners" : [
{
"LoadBalancerPort" : "443",
"InstancePort" : "80",
"SSLCertificateId" : "arn:aws:iam::123456789123:server-certificate/start_certname_com",
"Protocol" : "HTTPS"
},...
Here's how you get the long cert name with the latest AWS CLI:
pip install awscli
aws iam list-server-certificates
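The Arn field in the output is the value you need; an abridged, illustrative result (values hypothetical) looks like:
{
  "ServerCertificateMetadataList": [
    {
      "Path": "/",
      "ServerCertificateName": "start_certname_com",
      "Arn": "arn:aws:iam::123456789123:server-certificate/start_certname_com"
    }
  ]
}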