Duplicate field 'Status' when I try to run 'pio status'

Whenever I run pio status I get the following error:
[INFO] [Management$] Inspecting PredictionIO...
[INFO] [Management$] PredictionIO 0.13.0 is installed at /Users/prvns/tools/PredictionIO-0.13.0
[INFO] [Management$] Inspecting Apache Spark...
[INFO] [Management$] Apache Spark is installed at /Users/prvns/tools/PredictionIO-0.13.0/vendors/spark-2.3.1-bin-hadoop2.7
[INFO] [Management$] Apache Spark 2.3.1 detected (meets minimum requirement of 1.6.3)
[INFO] [Management$] Inspecting storage backend connections...
[INFO] [Storage$] Verifying Meta Data Backend (Source: ELASTICSEARCH)...
[ERROR] [Management$] Unable to connect to all storage backends successfully.
The following shows the error message from the storage backend.
PUT http://localhost:9200/pio_meta/_mapping/engine_instances: HTTP/1.1 400 Bad Request
{"error":{"root_cause":[{"type":"parse_exception","reason":"Failed to parse content to map"}],"type":"parse_exception","reason":"Failed to parse content to map","caused_by":{"type":"json_parse_exception","reason":"Duplicate field 'status'\n at [Source: org.elasticsearch.common.compress.DeflateCompressor$1#6b496f00; line: 1, column: 462]"}},"status":400} (org.apache.predictionio.shaded.org.elasticsearch.client.ResponseException)
Dumping configuration of initialized storage backend sources.
Please make sure they are correct.
Source Name: ELASTICSEARCH; Type: elasticsearch; Configuration: HOME -> /usr/local/Cellar/elasticsearch/6.2.4/, HOSTS -> localhost, PORTS -> 9200, SCHEMES -> http, TYPE -> elasticsearch
My pio-env.sh looks like this:
SPARK_HOME=$PIO_HOME/vendors/spark-2.3.1-bin-hadoop2.7
POSTGRES_JDBC_DRIVER=$PIO_HOME/lib/postgresql-42.0.0.jar
MYSQL_JDBC_DRIVER=$PIO_HOME/lib/mysql-connector-java-5.1.41.jar
PIO_FS_BASEDIR=$HOME/.pio_store
PIO_FS_ENGINESDIR=$PIO_FS_BASEDIR/engines
PIO_FS_TMPDIR=$PIO_FS_BASEDIR/tmp
PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta
PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event
PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model
PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=ELASTICSEARCH
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9200
PIO_STORAGE_SOURCES_ELASTICSEARCH_SCHEMES=http
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/Cellar/elasticsearch/6.2.4/
Why is this not working?

I was using Elasticsearch 6.x. I replaced it with Elasticsearch 5.x and it worked.
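For reference, downgrading should only require pointing the storage source at the 5.x install, assuming the 5.x node is the one actually listening on localhost:9200. The Homebrew path below is a hypothetical example; adjust it to wherever your Elasticsearch 5.x distribution lives:
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9200
PIO_STORAGE_SOURCES_ELASTICSEARCH_SCHEMES=http
# Hypothetical path -- replace with the location of your Elasticsearch 5.x install
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/usr/local/Cellar/elasticsearch@5.6/5.6.16/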

Related

X-Ray daemon doesn't receive any data from Envoy

I have a service running a task definition with three containers:
the service itself
Envoy
the X-Ray daemon
I want to trace and monitor how my services interact with each other using X-Ray, but I don't see any data in X-Ray.
I can see the request logs and everything else in the Envoy logs, and there are no error messages about a missing connection to the X-Ray daemon.
The Envoy container has three environment variables:
APPMESH_VIRTUAL_NODE_NAME = mesh/mesh-name/virtualNode/service-virtual-node
ENABLE_ENVOY_XRAY_TRACING = 1
ENVOY_LOG_LEVEL = trace
The X-Ray daemon is pretty plain and has just a name and an image (amazon/aws-xray-daemon:1).
But when looking at the logs of the X-Ray daemon, there is only the following:
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Initializing AWS X-Ray daemon 3.0.0
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] Using buffer memory limit of 76 MB
2022-05-31T14:48:05.042+02:00 2022-05-31T12:48:05Z [Info] 1216 segment buffers allocated
2022-05-31T14:48:05.051+02:00 2022-05-31T12:48:05Z [Info] Using region: eu-central-1
2022-05-31T14:48:05.788+02:00 2022-05-31T12:48:05Z [Error] Get instance id metadata failed: RequestError: send request failed
2022-05-31T14:48:05.788+02:00 caused by: Get http://169.254.169.254/latest/meta-data/instance-id: dial tcp xxx.xxx.xxx.254:80: connect: invalid argument
2022-05-31T14:48:05.789+02:00 2022-05-31T12:48:05Z [Info] Starting proxy http server on 127.0.0.1:2000
As far as I have read, the error you can see in these logs doesn't affect the functionality (https://repost.aws/questions/QUr6JJxyeLRUK5M4tadg944w).
I'm pretty sure I'm missing a configuration or an access right.
The same setup is already running on staging, but I set that up several weeks ago and I can't find any differences between the configurations.
Thanks in advance!
In my case, I had made a copy-paste mistake: a trailing line break was copied into the name of the environment variable ENABLE_ENVOY_XRAY_TRACING, which wasn't visible in the overview, only inside the text field.
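For anyone checking the same thing: the variable names in the Envoy container definition must not carry any trailing whitespace or line breaks. A minimal sketch of the relevant environment block in the task definition, using the values from the question, looks like this:
"environment": [
  { "name": "APPMESH_VIRTUAL_NODE_NAME", "value": "mesh/mesh-name/virtualNode/service-virtual-node" },
  { "name": "ENABLE_ENVOY_XRAY_TRACING", "value": "1" },
  { "name": "ENVOY_LOG_LEVEL", "value": "trace" }
]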

Vault backend storage migration from S3 to DynamoDB

I am trying to migrate the Vault storage backend from an S3 bucket to DynamoDB. The new Vault HA cluster is in Kubernetes:
vault-2 1/1 Running 0 10s
vault-1 1/1 Running 0 15s
vault-0 1/1 Running 0 15s
I have used the below command to migrate the vault storage backend
vault operator migrate -config ~/Desktop/migrate.hcl
MacBook-Pro vaultconfig % cat ~/Desktop/migrate.hcl
storage_source "s3" {
  bucket = "db-vault-cluster"
}

storage_destination "dynamodb" {
  region     = "us-east-1"
  access_key = "***"
  secret_key = "***"
  table      = "demandbase-vault"
}
The migration completed successfully, but I was not able to view the secrets. When I restarted the pods, Vault did not come up again and I got the error below:
2022-02-05T11:46:56.787Z [INFO] core: loaded wrapping token key
2022-02-05T11:46:56.803Z [INFO] core: successfully setup plugin catalog: plugin-directory=""
2022-02-05T11:46:56.821Z [INFO] core: successfully mounted backend: type=system path=sys/
2022-02-05T11:46:56.822Z [INFO] core: successfully mounted backend: type=identity path=identity/
2022-02-05T11:46:56.822Z [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2022-02-05T11:46:58.753Z [INFO] core: pre-seal teardown starting
2022-02-05T11:46:58.753Z [INFO] core: pre-seal teardown complete
2022-02-05T11:46:58.753Z [ERROR] core: post-unseal setup failed: error="error fetching default policy from store: failed to read policy: decryption failed: cipher: message authentication failed"
Can someone help me migrate the storage backend?
Are you using the same seal stanza in your configuration? It looks like Vault finds the storage but can't decrypt it.
Copy the seal part of the original (S3) configuration to the new (DynamoDB) configuration.
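To illustrate: the new DynamoDB configuration needs the exact same seal stanza as the old S3 one. If the old cluster used AWS KMS auto-unseal, for example, the carried-over part would look roughly like the sketch below (the key ID is a placeholder). With the default Shamir seal there is no seal stanza at all, and the new cluster must instead be unsealed with the original unseal keys.
storage "dynamodb" {
  region = "us-east-1"
  table  = "demandbase-vault"
}

# Must match the seal stanza of the original S3-backed configuration (placeholder key ID)
seal "awskms" {
  region     = "us-east-1"
  kms_key_id = "arn:aws:kms:us-east-1:111122223333:key/REPLACE-ME"
}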

Manual AWS X-Ray traces not showing even though they are sent

I'm sending X-Ray information from Python manually (no Django, Flask, etc.). I can see the X-Ray information being sent in the logs, for example:
Jan 24 16:50:17 ip-172-16-7-143 python3[10700]: DEBUG:sending: {"format":"json","version":1}
Jan 24 16:50:17 ip-172-16-7-143 python3[10700]: {"aws": {"xray": {"sdk": "X-Ray for Python", "sdk_version": "2.4.3"}}, "end_time": 1579884617.5194468, "id": "c59efdf40abecd22", "in_progress": false, "name": "handle request", "service": {"runtime": "CPython", "runtime_version": "3.6.9"}, "start_time": 1579884515.5117097, "trace_id": "1-5e2b1fe3-c1c3cbc802cae49e9c364371"} to 127.0.0.1:2000.
But nothing shows up in the console. I've tried all the different filters and time frames, but nothing shows up. Where should I be looking?
UPDATE:
Adding the X-Ray daemon logs:
2020-01-24T01:50:35Z [Info] Initializing AWS X-Ray daemon 3.2.0
2020-01-24T01:50:35Z [Info] Using buffer memory limit of 9 MB
2020-01-24T01:50:35Z [Info] 144 segment buffers allocated
2020-01-24T01:50:35Z [Info] Using region: us-east-2
2020-01-24T01:50:35Z [Info] HTTP Proxy server using X-Ray Endpoint : https://xray.us-east-2.amazonaws.com
2020-01-24T01:50:35Z [Info] Starting proxy http server on 127.0.0.1:2000
From the log it looks like your X-Ray daemon never received any trace segments; otherwise there would be a log line like "[Info] Successfully sent batch of 1 segments (0.100 seconds)".
Are you using the official X-Ray Python SDK? How does the "manual sending" work? Please verify that the daemon and your application are running in the same network environment. For example, if the daemon is running in a container, make sure its UDP port 2000 is reachable from the application, and vice versa.
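If you are on the official SDK, a minimal sketch like the one below may help rule out configuration issues; the service name is just an example, the daemon address is the default already shown in your logs, and sampling is disabled so every segment is actually emitted:
from aws_xray_sdk.core import xray_recorder

# Point the recorder at the daemon explicitly and disable sampling so
# every segment is sent ("manual-xray-test" is only an example name).
xray_recorder.configure(
    service='manual-xray-test',
    daemon_address='127.0.0.1:2000',
    sampling=False,
)

segment = xray_recorder.begin_segment('handle request')
try:
    pass  # the actual work being traced goes here
finally:
    xray_recorder.end_segment()
If the daemon still logs no "Successfully sent batch" line afterwards, the UDP packets are most likely not reaching it.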

javax.net.ssl.SSLHandshakeException while using protocol-selenium plugin nutch

I am trying to index this page using the Apache Nutch protocol-selenium plugin, but when running the parsechecker command it throws an SSLHandshakeException:
bin/nutch parsechecker -Dplugin.includes='protocol-selenium|parse-tika' -Dselenium.grid.binary=/usr/bin/geckodriver -Dselenium.enable.headless=true -followRedirects -dumpText https://us.vwr.com/store/product?partNum=68300-353
Fetch failed with protocol status: exception(16), lastModified=0: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
When I tried protocol-httpclient, Nutch was able to crawl the content of the page, but it does not crawl the dynamic content because httpclient does not support it. I have also tried protocol-interactiveselenium, but with that I get the same SSL handshake issue.
I have downloaded the certificate and installed it in the JRE as well, but I am still facing the same issue.
Version: Nutch 1.16
Update 1
When I checked hadoop.log, it shows the error below:
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
... 12 more
I think this is related to NUTCH-2649. For protocol-httpclient and protocol-http, Nutch currently has a dummy TrustManager for the connection (i.e. we don't validate the certificates). As described in NUTCH-2649, protocol-selenium does not use the custom TrustManager and tries to properly validate the certificate.
That being said, adding the certificate to the JVM should solve the issue for this specific domain. Perhaps Selenium does not have access to the list of allowed certificates.
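For completeness, importing the site's certificate into the default JVM truststore typically looks like the command below; the alias and file name are placeholders, changeit is the default truststore password, and the cacerts path shown is the Java 8 layout:
# Import the downloaded certificate into the truststore of the JVM that runs Nutch
keytool -importcert \
  -alias us-vwr-com \
  -file us.vwr.com.crt \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit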

Glassfish 5 error: GRIZZLY0205 Post too large

GF5 build 1, Java EE 7 + PrimeFaces 6.1: when trying to upload a photo of ~2 MB in a p:textEditor component, I always get the error:
Severe: java.lang.IllegalStateException: GRIZZLY0205: Post too large
Setting "Max Post Size" to -1 or any >1mljn value in Configurations - server config - Network Config - Network Listeners - http-listener-1 doesn't help. The same on GF 4.1
This is x-www-form-urlencoded content, so we need to set the max-form-post-size parameter. It isn't exposed via the UI, but you can configure it with the asadmin command:
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.max-form-post-size-bytes=-1
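You can read the value back to confirm it was applied, and restart the domain if the running listener still rejects large posts (this assumes the default server-config target used above):
# Verify the configured value
asadmin get configs.config.server-config.network-config.protocols.protocol.http-listener-1.http.max-form-post-size-bytes
# Restart so the listener picks up the change
asadmin restart-domain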