1) I set up a private Ethereum network using the following command:
$ geth --genesis <genesis json file path> --datadir <some path to an empty folder> --networkid 123 --nodiscover --maxpeers 0 console
2) Created an account
3) Then started the miner using the miner.start() command.
After a while, ether started getting added to my account automatically, yet I don't have any pending transactions in my private network. So where could my miner be getting the ether from?
Even though I didn't initiate any transactions on my network, I could see what looked like transactions being recorded in the logs once I started the miner.
The log is as follows:
I0118 11:59:11.696523 9427 backend.go:584] Automatic pregeneration of ethash DAG ON (ethash dir: /Users/minisha/.ethash)
I0118 11:59:11.696590 9427 backend.go:591] checking DAG (ethash dir: /Users/minisha/.ethash)
I0118 11:59:11.696728 9427 miner.go:119] Starting mining operation (CPU=4 TOT=5)
true
> I0118 11:59:11.703907 9427 worker.go:570] commit new work on block 1 with 0 txs & 0 uncles. Took 7.109111ms
I0118 11:59:11.704083 9427 ethash.go:220] Generating DAG for epoch 0 (size 1073739904) (0000000000000000000000000000000000000000000000000000000000000000)
I0118 11:59:12.698679 9427 ethash.go:237] Done generating DAG for epoch 0, it took 994.61107ms
I0118 11:59:15.163864 9427 worker.go:349]
And my genesis block code is as follows:
{
  "nonce": "0xdeadbeefdeadbeef",
  "timestamp": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "extraData": "0x0",
  "gasLimit": "0x8000000",
  "difficulty": "0x400",
  "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "coinbase": "0x3333333333333333333333333333333333333333",
  "alloc": {
  }
}
Since my network is isolated and has only one node (no peers), I am quite confused by this behaviour. Any insights would be greatly appreciated.
Your client is mining empty blocks (containing no transactions) and collecting the reward for each mined block, which is 5 ETH per block.
If you want to prevent empty blocks in your private blockchain, you could consider using the eth client (the C++ implementation) instead.
With the geth client, you can use a JavaScript script that modifies the client's behaviour. Any script can be loaded with the js command: geth js script.js.
var mining_threads = 1;

function checkWork() {
    if (eth.getBlock("pending").transactions.length > 0) {
        if (eth.mining) return;
        console.log("== Pending transactions! Mining...");
        miner.start(mining_threads);
    } else {
        miner.stop(0); // This param means nothing
        console.log("== No transactions! Mining stopped.");
    }
}

eth.filter("latest", function(err, block) { checkWork(); });
eth.filter("pending", function(err, block) { checkWork(); });
checkWork();
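If you prefer to keep an interactive console open, the same script can also be loaded from within it using loadScript (the filename here is just an example):

> loadScript("mine_when_needed.js")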
You can also try EmbarkJS, which can run the geth client with the mineWhenNeeded option on a private network. It will then only mine when new transactions come in.
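For reference, a minimal sketch of what that option looks like in an Embark blockchain config. Treat the file name and surrounding keys as assumptions, since the layout varies across Embark versions; only mineWhenNeeded itself comes from the answer above:

// config/blockchain.js (sketch; structure varies by Embark version)
module.exports = {
  development: {
    networkId: 123,        // matches the --networkid used in the question
    mineWhenNeeded: true   // mine only when transactions are pending
  }
};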
Since Redis 6.2.7 (and Redis 7), an existing Lua script stopped working with the error message:
"ERR user_script:6: Attempt to modify a readonly table script: 2f405679dab26da46ec86d29bded48f66a99ff64, on #user_script:6."
The script works fine with Redis 6.2.6. I did not find any breaking changes in the latest Redis release notes.
Any clue? Thanks!
Here's the script:
-- returns valid task ID if successful, nil if no tasks
local tenantId = unpack(ARGV)
local activeTenantsSet, activeTenantsList, tenantQueue = unpack(KEYS)

-- next task lua based function - return nil or taskId
function next ()
    local task = redis.call('ZPOPMAX', tenantQueue)
    if table.getn(task) == 0 then
        redis.call('SREM', activeTenantsSet, tenantId)
        redis.call('LREM', activeTenantsList, 0, tenantId)
        return nil
    end
    redis.call('SADD', activeTenantsSet, tenantId)
    redis.call('RPUSH', activeTenantsList, tenantId)
    return task[1]
end
-------------
return next()
Try adding 'local' in front of 'function next':
local function next ()
...
Without local, the script assigns next into the global environment (overwriting Lua's built-in next function), and since Redis 6.2.7 the script sandbox's global table is read-only, which is exactly what produces the "Attempt to modify a readonly table" error.
There's a fix for this in a new version of Sentry (the patch also applies cleanly to the Lua scripts going back to some older versions):
https://github.com/getsentry/sentry/pull/34416/commits/7c57fe7b17f613fecc47a56c22fff3a20a958496
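Applied to the script in the question, the fix is that single declaration; everything else stays as it was:

-- returns valid task ID if successful, nil if no tasks
local tenantId = unpack(ARGV)
local activeTenantsSet, activeTenantsList, tenantQueue = unpack(KEYS)

-- declared local so nothing is written to the (now read-only) global table
local function next ()
    local task = redis.call('ZPOPMAX', tenantQueue)
    if table.getn(task) == 0 then
        redis.call('SREM', activeTenantsSet, tenantId)
        redis.call('LREM', activeTenantsList, 0, tenantId)
        return nil
    end
    redis.call('SADD', activeTenantsSet, tenantId)
    redis.call('RPUSH', activeTenantsList, tenantId)
    return task[1]
end

return next()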
I am new to ADF (EJB/JPA, not Business Components). When users work with our new app, developed on JDeveloper 12.2.1.2.0, after an hour of activity the system loses the current record. Note that the object lost is the parent object.
I tried changing the session-timeout (knowing that it will affect the inactivity time).
public List<SelectItem> getSProvMasterSelectItemList() {
    List<SelectItem> sProvMasterSelectItemList = new ArrayList<SelectItem>();
    DCIteratorBinding lBinding = ADFUtils.findIterator("pByIdIterator"); /* even after 1 hour, lBinding is not null */
    Row pRow = lBinding.getCurrentRow(); /* but lBinding.getCurrentRow() is null */
    DCDataRow objRow = (DCDataRow) pRow;
    Prov prov = (Prov) objRow.getDataProvider();
    if (!StringUtils.isEmpty(prov)) {
        String code = prov.getCode();
        if (StringUtils.isEmpty(code)) {
            return sProvMasterSelectItemList;
        } else {
            List<Lov> mProvList = getSessionEJBBean().getProvFindMasterProv(code);
            sProvMasterSelectItemList.add(new SelectItem(null, " "));
            for (Lov pMaster : mProvList) {
                sProvMasterSelectItemList.add(new SelectItem(pMaster.getId(), pMaster.getDescription()));
            }
        }
    }
    return sProvMasterSelectItemList;
}
I expect to be able to read the current record at any time, especially since it is the master block and one record is available.
This looks like a classic case of a misconfigured Application Module.
Cause: your Application Module is timing out and releasing its transaction before the official adfc-config timeout value is reached.
To fix:
Go to the Application Module containing this VO > Configuration > edit the default configuration > set Idle Instance Timeout to the same value as your ADF session timeout (take the time to validate the other configuration values as well).
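Under the hood, that dialog edits the Application Module's bc4j.xcfg. As a rough sketch only: the property names below are the standard BC4J pooling ones, but the values and surrounding attributes are assumptions, shown here for a one-hour timeout in milliseconds:

<!-- bc4j.xcfg (sketch): idle instance timeout raised to one hour -->
<AM-Pooling jbo.ampool.maxinactiveage="3600000"
            jbo.ampool.timetolive="3600000"/>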
I'm trying to integrate a Datadog monitor check on the sshd process into my Terraform codebase, but I'm getting:
datadog_monitor.host_is_up2: error updating monitor: API error 400 Bad Request: {"errors":["The value provided for parameter 'query' is invalid"]}
What I did was copy the monitor query I had created in the Datadog panel and paste it into the tf file:
resource "datadog_monitor" "host_is_up2" {
name = "host is up"
type = "metric alert"
message = "Monitor triggered"
escalation_message = "Escalation message"
query = "process.up.over('process:ssh').last(4).count_by_status()"
thresholds {
ok = 0
warning = 1
critical = 2
}
notify_no_data = false
renotify_interval = 60
notify_audit = false
timeout_h = 60
include_tags = true
silenced {
"*" = 0
}
}
Of course, the example query "avg(last_1h):avg:aws.ec2.cpu{environment:foo,host:foo} by {host} > 2" works.
What's the right way to check, via the Datadog API or Terraform, whether a specific service like sshd is up or not?
There are two errors in your code:
The type used is wrong. It should be service check instead of metric alert.
You need to enclose process.up in a pair of single quotes ('').
Once that is done, your code will run flawlessly.
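Applied to the resource above, a sketch of the corrected monitor (keeping the question's thresholds; the remaining attributes from the original resource can stay as they were):

resource "datadog_monitor" "host_is_up2" {
  name    = "host is up"
  type    = "service check"
  message = "Monitor triggered"

  # process checks are queried as service checks, with process.up quoted
  query = "'process.up'.over('process:ssh').last(4).count_by_status()"

  thresholds {
    ok       = 0
    warning  = 1
    critical = 2
  }
}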
I am attempting to set up a small test environment (homelab) using CentOS 6.6, Rancid 3.1, Looking Glass, and some Cisco switches/routers, with httpd acting as the handler. I have picked up a little Perl by means of this endeavor, but Python (more 2 than 3) is my background. Right now, everything on the Rancid side of things works without issue: bin/clogin successfully logs into all of the equipment in the router.db file, and logging of the configs works as expected. All switches/routers to be accessed are available and online, verified by ssh connection to the devices as well as by using bin/clogin.
Right now, I have placed the lg.cgi and lgform.cgi files into /var/www/cgi-bin/, which allows the forms to be run as CGI scripts. I had to modify the files to split on ';' instead of ':' due to the change in the .db file format in Rancid 3.1: @record = split('\:', $_); was replaced with @record = split('\;', $_); etc. Once that change was made, I was able to load lgform.cgi with the proper router.db parsing. At this point, it seemed like everything should be good to go. But when I attempt to ping from one of those devices out to 8.8.8.8, the form correctly redirects to lg.cgi, and the page loads, but with
main is unavailable. Try again later.
as the error, where 'main' is the router hostname. Using this output, I was able to find the function responsible for it. Here it is before I added anything:
sub DoRsh
{
    my ($router, $mfg, $cmd, $arg) = @_;
    my($ctime) = time();
    my($val);
    my($lckobj) = LockFile::Simple->make(-delay => $lock_int,
        -max => $max_lock_wait, -hold => $max_lock_hold);

    if ($pingcmd =~ /\d$/) {
        `$pingcmd $router`;
    } else {
        `$pingcmd $router 56 1`;
    }
    if ($?) {
        print "$router is unreachable. Try again later.\n";
        return(-1);
    }
    if ($LG_SINGLE) {
        if (! $lckobj->lock("$cache_dir/$router")) {
            print "$router is busy. Try again later.\n";
            return(-1);
        }
    }
    $val = &DoCmd($router, $mfg, $cmd, $arg);
    if ($LG_SINGLE) {
        $lckobj->unlock("$cache_dir/$router");
    }
    return($val);
}
In order to dig in a little deeper, I peppered that function with several print statements. Here is the modified function, followed by the output from the loaded lg.cgi page:
sub DoRsh
{
    my ($router, $mfg, $cmd, $arg) = @_;
    my($ctime) = time();
    my($val);
    my($lckobj) = LockFile::Simple->make(-delay => $lock_int,
        -max => $max_lock_wait, -hold => $max_lock_hold);

    if ($pingcmd =~ /\d$/) {
        `$pingcmd $router`;
    } else {
        `$pingcmd $router 56 1`;
    }
    print "About to test the ($?) branch.\n";
    print "Also who is the remote_user?:' $remote_user'\n";
    print "What about the ENV{REMOTE_USER} '$ENV{REMOTE_USER}'\n";
    print "Here is the ENV{HOME}: '$ENV{HOME}'\n";
    if ($?) {
        print "$lckobj is the lock object.\n";
        print "@_ something else to look at.\n";
        print "$? whatever this is suppose to be....\n";
        print "Some variables:\n";
        print "$mfg is the mfg.\n";
        print "$cmd was the command passed in with $arg as the argument.\n";
        print "$pingcmd $router\n";
        print "$cloginrc - Is the cloginrc pointing correctly?\n";
        print "$LG_SINGLE the next value to be tested.\n";
        print "$router is unreachable. Try again later.\n";
        return(-1);
    }
    if ($LG_SINGLE) {
        if (! $lckobj->lock("$cache_dir/$router")) {
            print "$router is busy. Try again later.\n";
            return(-1);
        }
    }
    $val = &DoCmd($router, $mfg, $cmd, $arg);
    if ($LG_SINGLE) {
        $lckobj->unlock("$cache_dir/$router");
    }
    return($val);
}
OUTPUT:
About to test the (512) branch.
Also who is the remote_user?:' '
What about the ENV{REMOTE_USER} ''
Here is the ENV{HOME}: '.'
LockFile::Simple=HASH(0x1a13650) is the lock object.
main cisco ping 8.8.8.8 something else to look at.
512 whatever this is suppose to be....
Some variables:
cisco is the mfg.
ping was the command passed in with 8.8.8.8 as the argument.
/bin/ping -c 1 main
./.cloginrc - Is the cloginrc pointing correctly?
1 the next value to be tested.
main is unreachable. Try again later.
I can provide the code from where DoRsh is called, if necessary, but it looks essentially like this: &DoRsh($router, $mfg, $cmd, $arg);.
From what I can tell, the '$?' special variable (or at least according to this reference it is a special var) is returning the value 512, which is causing that branch to test true. The problem is I don't know what that 512 means, nor where it comes from. Using the ref site's description ("The status returned by the last pipe close, backtick (``) command, or system operator.") and the shape of the conditional tree above, I can see that it is an error of some kind, but I don't know how else to proceed with this inspection. I'm wondering if maybe it is in response to some permission issue, since the remote_user variable is null when I didn't expect it to be. Any guidance anyone may be able to provide would be helpful. Furthermore, if there is any information that I may have skipped over, that I didn't think to include, or that may prove helpful, please ask, and I will provide it to the best of my ability.
Maybe put in something like:
my $pingret = `$pingcmd $router 56 1`;
print 'Ping result was: ' . $pingret;
and check the returned strings?
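To go a bit further, here is a small sketch (plain Perl, not Looking Glass specific) that also decodes $?: the child's exit code lives in the high byte, so 512 corresponds to ping exiting with status 2, which for /bin/ping generally indicates the host could not be resolved or reached at all:

my $output = `$pingcmd $router 56 1`;  # capture the ping output
my $exit_code = $? >> 8;               # high byte of $? is the exit code: 512 >> 8 == 2
print "Ping output:\n$output";
print "Ping exit code: $exit_code\n";  # 0 = reachable, non-zero = failure
if ($exit_code != 0) {
    print "$router is unreachable (ping exit code $exit_code). Try again later.\n";
}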
I have the same code on two servers.
I have a job for adding a batch job to the queue:
static void Job_ScheduleBatch2(Args _args)
{
    BatchHeader       batHeader;
    BatchInfo         batInfo;
    RunBaseBatch      rbbTask;
    str sParmCaption = "My Demonstration (b351) 2014-22-09T04:09";
    SysRecurrenceData sysRecurrenceData = SysRecurrence::defaultRecurrence();
    ;
    sysRecurrenceData = SysRecurrence::setRecurrenceStartDateTime(sysRecurrenceData, DateTimeUtil::utcNow());
    sysRecurrenceData = SysRecurrence::setRecurrenceUnit(sysRecurrenceData, SysRecurrenceUnit::Minute, 1);

    rbbTask = new Batch4DemoClass();
    batInfo = rbbTask.batchInfo();
    batInfo.parmCaption(sParmCaption);
    batInfo.parmGroupId(""); // The "Empty batch group".

    batHeader = BatchHeader::construct();
    batHeader.addTask(rbbTask);
    batHeader.parmRecurrenceData(sysRecurrenceData);
    //batHeader.parmAlerts(NoYes::Yes, NoYes::Yes, NoYes::Yes, NoYes::Yes, NoYes::Yes);
    batHeader.addUserAlerts(curUserId(), NoYes::No, NoYes::No, NoYes::No, NoYes::Yes, NoYes::No);
    batHeader.save();

    info(strFmt("'%1' batch has been scheduled.", sParmCaption));
}
and I have the batch class itself:
class Batch4DemoClass extends RunBaseBatch
{
    int methodVariable1;
    int metodVariable2;

    #define.CurrentVersion(1)
    #localmacro.CurrentList
        methodVariable1,
        metodVariable2
    #endmacro
}

public container pack()
{
    return [#CurrentVersion, #CurrentList];
}

public void run()
{
    // The purpose of your job.
    info(strFmt("Repeating batch job: hello from Batch4DemoClass.run at %1",
        DateTimeUtil::toStr(DateTimeUtil::utcNow())));
}

public boolean unpack(container _packedClass)
{
    Version version = RunBase::getVersion(_packedClass);

    switch (version)
    {
        case #CurrentVersion:
            [version, #CurrentList] = _packedClass;
            break;
        default:
            return false;
    }
    return true;
}
On one server it runs, and in the batch history I can see it saving messages to the batch log; on the other server it does nothing. One server is R2 (running), the other is R3 (not running).
Both sets of code are compiled to CIL. Both servers are batch servers. I can see the batch jobs in USMF/System administration/Area page inquiries. The only difference I can find is that jobs on R2 have the AOS instance name filled in under General, while on R3 it is empty. On R2 I can see the info messages in the log and batch history, but there is nothing on R3.
Any idea how to get the batch jobs running?
See how to configure an AOS instance as a batch server.
There are 3 preconditions:
Marked as a batch server (a quick way to check this from code is sketched after this list)
Time interval set correctly
Batch group set correctly
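For the first precondition, a small sanity-check job could look like this (a sketch that assumes the AX 2012 SysServerConfig table and its EnableBatch flag):

static void CheckBatchServers(Args _args)
{
    SysServerConfig serverConfig;

    // List each AOS instance and whether it is marked as a batch server.
    while select serverConfig
    {
        info(strFmt("AOS %1: batch enabled = %2",
            serverConfig.ServerId,
            serverConfig.EnableBatch));
    }
}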
Update:
Deleting the user of a batch job may make the batch job never complete. Once the execute queue has filled, no further progress will happen.
Deleting the offending batch jobs is problematic, as only the owner can do so! In that case, consider changing BatchJob.aosValidateDelete.
Yes, clearing the tables is worth a try; also, a screenshot of the already running batch jobs and their status might help further.