Minecraft Spigot server error: Block at x, y, z is Block{minecraft:block} but has net.minecraft.world.level.block.entity.TileEntityChest

I own a private SMP server, but the console is being spammed with messages that look like this:
[21:57:17] [Server thread/ERROR]: Block at -1151, 69, 777 is Block{minecraft:barrier} but has net.minecraft.world.level.block.entity.TileEntityChest#6a71ab80. Bukkit will attempt to fix this, but there may be additional damage that we cannot recover.
What is this, and how can I fix it?

Apache/PHP7.3 running in Docker randomly drops connection with empty response

I have found several similar questions:
APACHE, PHP Server return randomly empty response
https://serverfault.com/questions/66662/apache-gives-empty-reply
and others
However, these do not seem to help me find the cause. I can replicate the behaviour by reloading a specific page ~20 times.
I am running the current apache2 (2.4.38-3+deb10u4). I tried disabling opcache and removing MaxRequestsPerChild, with no effect.
The Apache log does not show any error; the request is not even logged.
Setting USE_ZEND_ALLOC=0 seems to have no effect and the problem persists.
I installed mod_forensic, which shows that the request came in, but no error or finished request is then logged.
The container runs in Kubernetes and I cannot replicate the issue locally when running directly with Docker, which is why I think this might be caused by some memory setting. However, I couldn't find what is causing it, as there is no single error message.
Can you think of any reason why this might be happening?
Edit1:
I tried setting the log level to trace:
https://gist.github.com/knyttl/861e8a0fe5651408df37cd5c3874946b
The request is handled and then you can see:
[Tue Oct 20 08:37:55.825454 2020] [core:trace4] [pid 1] mpm_common.c(536): mpm child 388 (gen 2/slot 4) exited
With no error and no response.
Edit2:
I updated to php7.4 and the issue persists.
I finally found it:
The process is being silently killed by the OOM killer on the host machine:
[4019392.626796] Memory cgroup out of memory: Kill process 4178127 (apache2) score 1137 or sacrifice child
[4019392.636520] Killed process 4178127 (apache2) total-vm:143960kB, anon-rss:22856kB, file-rss:10472kB, shmem-rss:28228kB
This is never logged within the container so it was hard to find.
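If you suspect the same thing, one way to confirm it from inside the container is to compare the cgroup memory limit with the current usage. Below is a minimal sketch that assumes cgroup v1 paths (adjust to /sys/fs/cgroup/memory.max and memory.current on cgroup v2); in Kubernetes, kubectl describe pod also reports a last state of OOMKilled for the affected container.
# Rough check of the container's memory headroom (cgroup v1 paths assumed).
def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

limit = read_int("/sys/fs/cgroup/memory/memory.limit_in_bytes")
usage = read_int("/sys/fs/cgroup/memory/memory.usage_in_bytes")
print("limit:    %.1f MiB" % (limit / 1024.0 / 1024.0))
print("usage:    %.1f MiB" % (usage / 1024.0 / 1024.0))
print("headroom: %.1f MiB" % ((limit - usage) / 1024.0 / 1024.0))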
Why don't you use Jorge's answers?
Finally solved it by adding this to /etc/apache2/envvars:
export USE_ZEND_ALLOC=0
https://serverfault.com/a/66759

Error while running query on Impala with Superset

I'm trying to connect Impala to Superset. When I test the connection, it prints "Seems OK!", and when I browse the databases in the SQL Editor on the left side, it shows all of them without problems.
Preview of Databases/Tables
But when I write a query and click on "Run Query", it gives the error: "Could not start SASL: b'Error in sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Ticket expired)'"
Error running query
I'm running Superset with SSL in production mode (with Gunicorn), and Impala with SSL in a Kerberized Hadoop cluster. My Impala database config is:
Impala Config
And in the extras I put:
{
    "metadata_params": {},
    "engine_params": {
        "connect_args": {
            "port": 21050,
            "use_ssl": "True",
            "ca_cert": "path/to/my/ca_cert.pem",
            "auth_mechanism": "GSSAPI"
        }
    },
    "metadata_cache_timeout": {},
    "schemas_allowed_for_csv_upload": []
}
How can I solve this error? My Superset log only shows:
Triggering query_id: 65
INFO:superset.views.core:Triggering query_id: 65
Query 65: Running query on a Celery worker
INFO:superset.views.core:Query 65: Running query on a Celery worker
Versions: Superset 0.36.0, Impyla 0.16.2
I was able to fix this error with these steps:
1 - Created a service user for the celery worker, created a Kerberos ticket for it, and set up a crontab to renew the ticket.
2 - Ran the celery worker as this service user instead of running it as root.
3 - Killed a celery worker that was running on another machine of my cluster.
4 - Restarted Impala and Superset.
I think this error occurred because, for some queries, instead of using the celery worker on my Superset machine, Superset was using the celery worker on another machine that had no valid Kerberos ticket. I was able to track this down because the celery worker log showed a failed connection to the worker on the other machine while a query was running.
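For anyone debugging a similar setup, it can help to take Superset out of the picture and test the same connection settings directly with Impyla, run as the same user the Celery worker runs as. This is only a sketch: the host name is a placeholder, and it assumes a valid Kerberos ticket in that user's cache.
from impala.dbapi import connect

# Same settings as the "connect_args" above; host and cert path are placeholders.
conn = connect(
    host="impala-host.example.com",
    port=21050,
    use_ssl=True,
    ca_cert="path/to/my/ca_cert.pem",
    auth_mechanism="GSSAPI",
    kerberos_service_name="impala",  # the usual default for Impala
)
cur = conn.cursor()
cur.execute("SHOW DATABASES")
for row in cur.fetchall():
    print(row)
If this works as the service user but fails as root (or on the other machine), that points to the per-user ticket cache, which matches the fix above.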

ncclient: connecting to a NETCONF server

I want to use the Python library ncclient 0.6.6 with Python 2.7.15 to connect to a NETCONF server (netopeer2) and read out the running config.
I tried to follow the example from the manual, running this code in the console:
from ncclient import manager

with manager.connect(host="*the IP adress*", port=*the port*, timeout=None, username="*user*", password="*pwd*") as m:
    c = m.get_config(source='running').data_xml
    with open("%s.xml" % host, 'w') as f:
        f.write(c)
As described in the manual, I tried to disable public-key authentication by setting allow_agent and look_for_keys to False. Unfortunately, this does not work properly, because I get this error message:
File "<stdin>", line 1, in <module>
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/manager.py", line 177, in connect
return connect_ssh(*args, **kwds)
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/manager.py", line 143, in connect_ssh
session.connect(*args, **kwds)
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/transport/ssh.py", line 481, in connect
raise SSHUnknownHostError(known_hosts_lookup, fingerprint)
ncclient.transport.errors.SSHUnknownHostError: Unknown host key [e3:8d:35:a9:43:f9:3c:8a:f4:d3:88:5b:a9:36:93:59] for [[192.168.56.2]:1831]
I do not get why it still complains about the unknown host key, even though I explicitly disabled public-key authentication.
The netopeer NETCONF server is definitely running: I get a "Hello" message as soon as I SSH into it from the terminal.
Did I miss something?
m = manager.connect(host="172.17.0.2", port=830, username="netconf", password="netconf", hostkey_verify=False)
This did the trick; hostkey_verify has to be False.
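Worth noting: allow_agent and look_for_keys only control how the client authenticates itself (public-key authentication), while the SSHUnknownHostError above comes from host-key verification of the server, which is what hostkey_verify=False turns off. A fuller, hedged sketch combining both (host, port and credentials are placeholders):
from ncclient import manager

# Connect to the NETCONF server and dump its running config to a file.
# hostkey_verify=False skips the known_hosts check that raised the error above.
with manager.connect(
    host="192.168.56.2",     # placeholder address
    port=830,
    username="netconf",
    password="netconf",
    hostkey_verify=False,
    allow_agent=False,       # do not try the SSH agent
    look_for_keys=False,     # do not try local key files
    timeout=30,
) as m:
    running = m.get_config(source='running').data_xml
    with open("running-config.xml", "w") as f:
        f.write(running)
Disabling host-key verification is fine for a lab setup; for anything else, adding the server's key to known_hosts is the safer option.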

Error running topology in production cluster with Apache Storm 1.0.0, topology does not start

I have a topology that runs well on a local cluster.
But when I try to run it on a production cluster, the following happens:
The nimbus is up
The storm UI is up
The two workers I use are up
Zookeper is up
I run storm with
storm jar myjar.jar MyClass
Nimbus submits the topology
The topology and the workers appear in the Storm UI
BUT:
The topology does not start despite the fact that its status is ACTIVE
The log file of the topology does not appear in the workers.
I see the following in supervisor.log on the worker:
2016-04-15 13:18:19.831 o.a.s.d.supervisor [WARN] There was a connection problem with nimbus. #error {
:cause jobs-rec-storm-nimbus
:via
[{:type java.lang.RuntimeException
:message org.apache.storm.thrift.transport.TTransportException: java.net.UnknownHostException: jobs-rec-storm-nimbus
:at [org.apache.storm.security.auth.TBackoffConnect retryNext TBackoffConnect.java 64]}
{:type org.apache.storm.thrift.transport.TTransportException
:message java.net.UnknownHostException: jobs-rec-storm-nimbus
:at [org.apache.storm.thrift.transport.TSocket open TSocket.java 226]}
{:type java.net.UnknownHostException
:message jobs-rec-storm-nimbus
:at [java.net.AbstractPlainSocketImpl connect AbstractPlainSocketImpl.java 184]}]
:trace
[[java.net.AbstractPlainSocketImpl connect AbstractPlainSocketImpl.java 184]
[java.net.SocksSocketImpl connect SocksSocketImpl.java 392]
[java.net.Socket connect Socket.java 589]
[org.apache.storm.thrift.transport.TSocket open TSocket.java 221]
[org.apache.storm.thrift.transport.TFramedTransport open TFramedTransport.java 81]
[org.apache.storm.security.auth.SimpleTransportPlugin connect SimpleTransportPlugin.java 103]
[org.apache.storm.security.auth.TBackoffConnect doConnectWithRetry TBackoffConnect.java 53]
[org.apache.storm.security.auth.ThriftClient reconnect ThriftClient.java 99]
[org.apache.storm.security.auth.ThriftClient <init> ThriftClient.java 69]
[org.apache.storm.utils.NimbusClient <init> NimbusClient.java 106]
[org.apache.storm.utils.NimbusClient getConfiguredClientAs NimbusClient.java 78]
[org.apache.storm.utils.NimbusClient getConfiguredClient NimbusClient.java 41]
[org.apache.storm.blobstore.NimbusBlobStore prepare NimbusBlobStore.java 268]
[org.apache.storm.utils.Utils getClientBlobStoreForSupervisor Utils.java 462]
[org.apache.storm.daemon.supervisor$fn__9590 invoke supervisor.clj 942]
[clojure.lang.MultiFn invoke MultiFn.java 243]
[org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9351$fn__9369 invoke supervisor.clj 582]
[org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9351 invoke supervisor.clj 581]
[org.apache.storm.event$event_manager$fn__8903 invoke event.clj 40]
[clojure.lang.AFn run AFn.java 22]
[java.lang.Thread run Thread.java 745]]}
2016-04-15 13:18:19.831 o.a.s.d.supervisor [INFO] Finished downloading code for storm id jobs-KafkaMigration-topology-3-1460740616
2016-04-15 13:18:19.850 o.a.s.d.supervisor [INFO] Missing topology storm code, so can't launch worker with assignment ...(some more numbers)
So I assume that I have a connection problem with nimbus, but the config file on the worker is:
storm.zookeeper.servers:
- "192.168.22.209"
- "192.168.22.216"
- "192.168.22.217"
storm.local.dir: "/app/home/storm"
storm.zookeeper.root: "/storm-prod"
#
nimbus.seeds: ["192.168.120.96"]
And if I ping the nimbus IP from the workers, it responds fine.
Where is the error, and how can I fix it?
Thanks!
What appears to happen in this context is that the Storm supervisor resolves nimbus from whatever is configured in the storm.yaml seeds/host the first time, and from then on uses the nimbus hostname to download the topology artifacts.
If that is correct, working name resolution is mandatory for a cluster setup. This is far from ideal, especially when using containers in an orchestrated environment like Kubernetes.
The current workaround I'm using is adding
storm.local.hostname: "<local.ip.value>"
to the storm.yaml
Thanks to #bastien, who provided the tip on the Storm user mailing list.
I ran into a similar issue. It turned out my firewall rules were blocking the supervisor ports. Make sure the supervisor and nimbus can talk to each other.
I found that the hostnames of the boxes need to match what I was calling them in the /etc/hosts file.
In the hosts file I had
xxx.xxx.xxx.xxx nimbus
but the hostname on the box itself was different, and Storm was pulling the hostname from the OS.
Changing the hostname on the OS of the nimbus server resolved my issue.
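Several of these fixes come down to name resolution: the supervisor ends up looking for nimbus by hostname (jobs-rec-storm-nimbus in the log above) rather than by the IP in nimbus.seeds. A quick way to check this from a supervisor box is a small sketch like the one below; the hostname is taken from the UnknownHostException above, so replace it with whatever your own log shows.
import socket

# Try to resolve the nimbus hostname the supervisor is complaining about.
name = "jobs-rec-storm-nimbus"
try:
    print("%s -> %s" % (name, socket.gethostbyname(name)))
except socket.gaierror as exc:
    print("cannot resolve %s: %s" % (name, exc))
    print("add it to /etc/hosts or DNS, or set storm.local.hostname as above")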

How to get detailed log/info about rabbitmq connection action?

I have a Python program connecting to a RabbitMQ server. When the program starts, it connects fine. But when the RabbitMQ server restarts, my program cannot reconnect to it; the only error left is "Socket closed" (produced by kombu), which is not very informative.
I want detailed information about the connection failure. On the server side there is nothing useful in the RabbitMQ log file either; it just says "connection failed" with no reason given.
I tried the trace plugin (https://www.rabbitmq.com/firehose.html) and found there was no trace info published to the amq.rabbitmq.trace exchange when the connection failure happened. I enabled the plugin with:
rabbitmq-plugins enable rabbitmq_tracing
systemctl restart rabbitmq-server
rabbitmqctl trace_on
and then I wrote a client to get messages from the amq.rabbitmq.trace exchange:
#!/bin/env python
from kombu.connection import BrokerConnection
from kombu.messaging import Exchange, Queue, Consumer, Producer

def on_message(body, message):
    # kombu callbacks receive (body, message); there is no `self` here
    print("RECEIVED MESSAGE: %r" % (body, ))
    message.ack()

def main():
    # note the '@' before the host; a '#' there is parsed as a URL fragment
    conn = BrokerConnection('amqp://admin:pass@localhost:5672//')
    channel = conn.channel()
    queue = Queue('debug', channel=channel, durable=False)
    queue.queue_declare()  # make sure the queue exists before binding it
    # '#' receives every firehose message (publish.* and deliver.*);
    # the original key 'publish.amq.rabbitmq.trace' matches almost nothing
    queue.bind_to(exchange='amq.rabbitmq.trace', routing_key='#')
    consumer = Consumer(channel, queue)
    consumer.register_callback(on_message)
    consumer.consume()
    while True:
        conn.drain_events()

if __name__ == '__main__':
    main()
I also tried to get some debug logs from the RabbitMQ server. I reconfigured rabbitmq.config according to https://www.rabbitmq.com/configure.html and set log_levels to
{log_levels, [{connection, info}]}
but as a result the RabbitMQ server failed to start. It seems like the official doc does not match my version; my RabbitMQ server is 3.3.5. However,
{log_levels, [connection,debug,info,error]}
or
{log_levels, [connection,debug]}
work, but with these there is no DEBUG info showing in the logs, and I don't know whether that is because the log_levels configuration is not taking effect or because no DEBUG output is ever produced.
I know that this answer comes massively late, but for future readers, this worked for me:
[
  {rabbit,
    [
      {log_levels, [{connection, debug}, {channel, debug}]}
    ]
  }
].
Basically, you just need to wrap the parameters you want to set in whichever module/plugin they belong to.
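The config above covers the server side. On the client side, you can often get more detail than "Socket closed" by turning up Python logging for kombu and its underlying py-amqp transport before creating the connection; a minimal sketch (the logger names 'kombu' and 'amqp' are, to the best of my knowledge, the ones those libraries use):
import logging

# Log everything the AMQP client does, so the reconnect failure has context.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
logging.getLogger("amqp").setLevel(logging.DEBUG)   # py-amqp transport
logging.getLogger("kombu").setLevel(logging.DEBUG)  # kombu itself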