I have upgraded my OpenSearch cluster from 1.3 to 2.4.1 (using RPM, just following the instructions on https://opensearch.org/). The cluster runs on the OpenJDK 17.0.5 that comes with the RPM.
I used to be able to leverage the SQL plugin like so:
curl --location --request POST 'https://my_cluster/_plugins/_sql' --header 'Content-Type: application/json' \
--data-raw '{ "query": "SELECT date, @timestamp, value FROM profiler_outputs-*", "fetch_size": 100}'
But since I upgraded, I get:
{'error':
{'reason': 'There was internal problem at backend',
'details': 'Index type [profiler_outputs-*] does not exist',
'type': 'IllegalArgumentException'},
'status': 500}
If I use an actual index name instead of a pattern (e.g. 'profiler_outputs-2023-02-09'), it works.
If I use '_plugins/_sql?format=json' it also works, i.e. it gives me the result in JSON format.
It also works in CSV. But in JDBC (the default response format) it always errors out if I use an index pattern. Is this a bug, or am I missing something? I need the JDBC format so that I can paginate the results.
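For context on why the JDBC format matters here: it is the response format that carries the pagination cursor. Below is a rough sketch of the cursor loop in Python using the requests library; the host, credentials, and index name are placeholders taken from the question, and the concrete index name is used to sidestep the pattern error.

import requests

SQL_ENDPOINT = "https://my_cluster/_plugins/_sql"   # placeholder host from the question
AUTH = ("admin", "admin")                            # assumption: basic-auth credentials

# First page: the default (JDBC) format returns a "cursor" when fetch_size is set
body = {
    "query": "SELECT date, @timestamp, value FROM profiler_outputs-2023-02-09",
    "fetch_size": 100,
}
page = requests.post(SQL_ENDPOINT, json=body, auth=AUTH).json()
rows = page.get("datarows", [])

# Subsequent pages: post only the cursor back until the response stops returning one
while "cursor" in page:
    page = requests.post(SQL_ENDPOINT, json={"cursor": page["cursor"]}, auth=AUTH).json()
    rows.extend(page.get("datarows", []))

print(f"fetched {len(rows)} rows")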
After a recent upgrade to composer-2.1.0-airflow-2.3.4, the GCSToBigQueryOperator is no longer able to find data in buckets to upload to BigQuery.
All other aspects of the DAGs still work.
The usage is as follows:
gcs_to_bq = GCSToBigQueryOperator(
    task_id=f"transfer_{data_type}_to_bq_task",
    bucket=os.environ["GCS_BUCKET"],
    source_objects=file_names,
    destination_project_dataset_table=os.environ["GCP_PROJECT"] + f".creditsafe.{data_type}",
    schema_object=f"dags/schema/creditsafe/{data_type}.json",
    source_format="CSV",
    field_delimiter="|",
    quote_character="",
    max_bad_records=0,
    create_disposition="CREATE_IF_NEEDED",
    ignore_unknown_values=True,
    allow_quoted_newlines=True,
    allow_jagged_rows=True,
    write_disposition="WRITE_TRUNCATE",
    gcp_conn_id="google_cloud_default",
    skip_leading_rows=1,
    dag=dag,
)
The error from the API is:
google.api_core.exceptions.NotFound: 404 GET
{ "error": { "code": 400, "message": "Unknown output format: media:", "errors": [ { "message": "Unknown output format: media:", "domain": "global", "reason": "invalidAltValue", "locationType": "parameter", "location": "alt" } ] } }
The error delivered by Cloud Composer is:
google.api_core.exceptions.NotFound: 404 GET https://storage.googleapis.com/download/storage/v1/b/[BUCKET_HIDDEN]/o/data%2Fcreditsafe%2FCD01%2Ftxt%2F%2A.txt?alt=media: No such object: [BUCKET_HIDDEN]/data/creditsafe/CD01/txt/*.txt: ('Request failed with status code', 404, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
I can't see the cause of the error. The reference to the GCS location has not changed and appears correct while the gcp_conn_id appears sufficient for all other tasks. I'm at a loss.
A fix for the issue above has now been made:
https://github.com/apache/airflow/pull/28444
It is unclear how long it will take for this fix to be integrated into the Cloud Composer libraries.
GCSToBigQueryOperator does not support the wildcard *.csv; a sketch of one possible workaround follows the steps below. For your requirement, you can try the following:
You can attach to a pod in the Composer environment by running these commands:
gcloud container clusters get-credentials --region __GCP_REGION__ __GKE_CLUSTER_NAME__
kubectl get pods -n [Namespace]
kubectl exec -it [Worker] -n [Namespace] -- bash
Run the command below to identify the installed Google provider package version:
pip list | grep -i goo | grep provider
If the output of the above command shows a version other than 8.3.0, pin the package to apache-airflow-providers-google==8.3.0.
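If the wildcard itself is what is tripping the operator up, one possible workaround is to expand it into an explicit list of objects before building the task. Below is a minimal sketch, assuming the bucket/prefix layout from the question and the GCSHook that ships with the Google provider; the names are illustrative, not a confirmed fix.

import os
from typing import List

from airflow.providers.google.cloud.hooks.gcs import GCSHook

def list_txt_objects(bucket: str, prefix: str) -> List[str]:
    """Expand a 'prefix/*.txt' style pattern into explicit GCS object names."""
    hook = GCSHook(gcp_conn_id="google_cloud_default")
    # GCSHook.list returns the object names found under the given prefix
    return [name for name in hook.list(bucket, prefix=prefix) if name.endswith(".txt")]

# Build source_objects explicitly instead of passing a wildcard to the operator
file_names = list_txt_objects(os.environ["GCS_BUCKET"], "data/creditsafe/CD01/txt/")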
Release 8.5.0 of apache-airflow-providers-google, which comes with Airflow (>2.3.4 and <2.5.1), introduced several critical regressions, notably:
The GCSToBigQuery operator is broken, as it ignores its options: https://github.com/apache/airflow/pull/27961
This means that all custom settings specified in the operator (delimiter, formatting, null values, wildcards on files) are no longer sent to BigQuery, leading to unexpected results.
Until Google releases a Composer version based on Airflow 2.5.1, the workaround is to upgrade the apache-airflow-providers-google library (or to use a Composer version based on Airflow <=2.2.5).
There is no need to connect via gcloud/kubectl to change the apache-airflow-providers-google version; you can change it directly in the Composer UI via the PyPI Packages page (or via the Terraform provider).
I can confirm that on the latest Composer as of today, composer-1.20.4-airflow-2.4.3, configuring apache-airflow-providers-google==8.8.0 (the latest) solves those issues for me.
But as mentioned previously, this is only a workaround and your mileage may vary...
See the Cloud Composer documentation on configuring custom PyPI packages; a quick way to verify which provider version actually ends up installed is sketched below.
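A minimal check, using only the Python standard library (it can be run from a PythonOperator task or an interactive shell on a worker):

from importlib.metadata import version

# Prints the installed Google provider version, e.g. "8.8.0"
print(version("apache-airflow-providers-google"))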
#Xray(requirement = "QA", test = "TM-3553" ,ProjectName="QA")
#Test()
public void GETGradeForGuestStudent() {
}
Why is the execution not getting mapped to TM-3553 in Jira? Instead, it always gets mapped to TM-3601.
My TestNG report XML: pastebin.com/iHc4hJmD
In a Jenkins post-build action I am calling this sh command:
token=$(curl -H "Content-Type: application/json" -X POST --data @"./cloud_auth.json" https://xray.cloud.xpand-it.com/api/v2/authenticate | tr -d '"')
curl -H "Content-Type: application/xml" -X POST -H "Authorization: Bearer $token" --data @"./target/surefire-reports/testng-results.xml" "https://xray.cloud.xpand-it.com/api/v2/import/execution/testng?projectKey=TM&testExecKey=TM-3563"
The TestNG XML report you shared seems to have the correct format, as detailed here. The Test issue key is being mapped to an attribute named "test" under the <attributes> element, which in turn is inside the <test-method> element.
I changed your XML report so that the first reference to TM-3553 is one of my existing Tests ... and it worked fine in my scenario.
Therefore, I think your scenario needs to be analyzed by the Xray team in more depth to figure out exactly why this association is not being made on the Xray side.
Let me just add that the annotation you shared is not correct, as "requirement" must be an issue key and "ProjectName" doesn't exist / isn't supported by Xray during the import process.
Therefore, this needs to be changed:
#Xray(requirement = "QA", test = "TM-3553" ,ProjectName="QA")
to something like:
#Xray(requirement = "TM-1243", test = "TM-3553")
Another note about the TestNG XML report: it contains several references to the same test method GETGradeForGuestStudent, which I find weird at first sight, but it depends on how you're running the tests.
I'm using Karate v0.9.6, and it's a wonderful tool.
I have >1000 scenarios, and each of them needs a token to work, so I use callSingle in karate-config.js for creating and caching tokens. I use the standalone JAR file.
Part of my karate-config.js:
var auth_cfg = {
    server: env,
    credentials: karate.properties['credentials']
};
var result = karate.callSingle('classpath:credentials/generate_tokens.feature', auth_cfg);
I'm using a .sh file like this:
rm -rf target &&
java -Xmx2048m \
-Dlogback.configurationFile=cfg/logs_debug.xml \
-jar \
-Dcredentials=data.json \
karate-1.0.1.jar -e https://my-server/ \
--tags ~fixme \
--tags ~ignore \
--threads 4 \
features/services/simple_plan.feature
And it has been working perfectly on v0.9.6 for a long time.
But when I try to upgrade to v1.0 or v1.0.1, I get an error:
org.graalvm.polyglot.PolyglotException: not found: credentials/generate_tokens.feature
I found this issue: https://github.com/intuit/karate/issues/1515
But the examples are not working for me. I have tried using "file:" and karate.properties['karate.config.dir'] + '/features/auth/auth.feature'.
I always get the error:
not found: credentials/generate_tokens.feature
Has anyone else faced this problem?
As you can see in the discussion of #1515 - this is why we'd really like more folks to try the RC versions and give us feedback (we spent months on this) instead of waiting for 1.0.
Yours seems to be an edge case where you are using the stand-alone JAR and a custom shell script.
My first suggestion is to use the -w flag. This is a new argument in 1.0 which can set the "current working directory", but it should default correctly in your case.
The second suggestion is to set the classpath for the JVM. Use this as a reference: https://stackoverflow.com/a/58398958/143475 - and once you do that, classpath: will work as you expect.
Else please follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue - and I have to say that unless you can help us fix this, you may have to remain on 0.9.6 for a while. Sorry.
Peter Thomas, thank you for the fast response!
I modified the callSingle function and the .feature file which is called by callSingle, changing "classpath:" to "file:":
karate.callSingle('file:credentials/generate_tokens.feature', auth_cfg)
and:
# read credentials
* def authdata = read('file:credentials/' + credentials)
And it works now. Before, when I changed classpath: to file:, I probably made a mistake.
Thanks for such an awesome testing framework!
I have had a PowerShell scripted call to the IBM uDeploy command line client (udclient) in my TFS CI build process for some time now.
My udclient call is scripted like so:
udclient.cmd -weburl $uDeployServer -authtoken $authToken "importVersions" $requestJson
... and my JSON file ($requestJson) content looks like this:
{
    "component": "[uDeploy component name]",
    "properties": {
        "version": "[component version]"
    }
}
These requests, and subsequent udclient version deploy requests, have been working as expected until recently. However, a couple of weeks ago, the version import requests started to fail mysteriously.
In the uDeploy UI, in the Version Import History tab in the Component Configuration, I can see the failed Import Requests.
However, when I open the Output Log for inspection, it is empty.
The Error Log contains only this:
"The version import failed for the following reasons:
JSONObject["value"] not found."
Manual version import from the uDeploy UI still works as expected.
Also, once manual intervention has been applied to complete the version import in the CI build, the subsequent version deploy request executes without any problems.
I'm no expert in Java, but the error seemed to suggest something was amiss with the JSON file. However, to test my JSON (I'm using PowerShell 5, so no Test-Json until PowerShell 6), execution of the following PowerShell script:
try {
    $json = Get-Content -Path [component version import].json | ConvertFrom-Json
    Write-Host "JSON is valid."
} catch {
    Write-Host "JSON is dodgy."
}
... returns:
JSON is valid.
So, what's going on? Could it be something to do with encoding in the JSON file or some such??
Ideas and insights appreciated; thanks for looking.
I scripted the REST API call in native PowerShell:
Invoke-RestMethod -Uri "$uDeployServer/cli/component/integrate" -Method Put -Headers $headers -ContentType "application/json" -Body $json
The request was sent without issue but sadly, as with the udclient call, the same error persisted.
Looking at the failed version import request record in the uDeploy UI, aside from the vague message in the error log, the request's Input Properties showed only two properties (successful requests show many properties from the component configuration):
version (value correctly read from the JSON file provided)
description (value blank)
I added a new property 'description' to my request JSON; the file content now looks like this:
{
    "component": "[uDeploy component name]",
    "properties": {
        "version": "[component version]",
        "description": "[description]"
    }
}
Hey presto! The version import request now executes successfully.
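For anyone driving the same import from Python instead of PowerShell, here is a rough equivalent of the call above. The server URL and auth header are placeholders/assumptions; the payload mirrors the corrected JSON, including the 'description' property.

import requests

UDEPLOY_SERVER = "https://udeploy.example.com"      # placeholder server URL
HEADERS = {"Authorization": "Bearer <auth token>"}  # assumption: adjust to your auth scheme

payload = {
    "component": "[uDeploy component name]",
    "properties": {
        "version": "[component version]",
        # Including "description" is what resolved the
        # 'JSONObject["value"] not found' failure described above
        "description": "[description]",
    },
}

resp = requests.put(f"{UDEPLOY_SERVER}/cli/component/integrate", headers=HEADERS, json=payload)
resp.raise_for_status()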
I need to export data for various event types via URL (REST API), which can be manually downloaded from the analytics console (Administration -> Export Data).
These event types include "Custom Activity", "Server Logs", etc., as listed in the Event Type drop-down.
I found that API documentation is available for IBM MobileFirst Platform version 7.1.0, and I was able to export the data via URL in v7.1.0.
For v7.1.0:
http://www.ibm.com/support/knowledgecenter/en/SSHSCD_7.1.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_raw_reports.html
But when I moved to IBM MobileFirst Platform version 7.0.0, I did not find a similar API exposed to export this event type data as in v7.1.0.
For v7.0.0:
http://www.ibm.com/support/knowledgecenter/en/SSHSCD_7.0.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_raw_reports.html
If anyone has tried to export data for various event types in v7.0.0 via URL (API), please help with any documentation or the exact URL that can be used to export this data.
Thanks.
This is working in 7.0:
curl -XGET -G 'http://<hostname>:<port>/analytics-service/data/exportData/apps/worklight/export' --data-urlencode 'query={"level":"","server":"","format":"csv","timeInterval":"day","timeAmount":-50,"startDate":1475269200000,"endDate":1482271200000,"event":"server_logs","limit":10,"offset":0,"tenant":"worklight"}' -u admin:admin -o export.csv
Note the URL is: /data/exportData/apps/worklight/export
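A rough Python equivalent of that curl call, for scripting the export (hostname, port, and credentials are placeholders; the query parameters mirror the ones above):

import json
import requests

# Placeholders: substitute your analytics server host, port, and credentials
URL = "http://<hostname>:<port>/analytics-service/data/exportData/apps/worklight/export"

query = {
    "level": "",
    "server": "",
    "format": "csv",
    "timeInterval": "day",
    "timeAmount": -50,
    "startDate": 1475269200000,
    "endDate": 1482271200000,
    "event": "server_logs",
    "limit": 10,
    "offset": 0,
    "tenant": "worklight",
}

# The whole query object is passed as a single URL-encoded 'query' parameter,
# matching the --data-urlencode usage in the curl example
resp = requests.get(URL, params={"query": json.dumps(query)}, auth=("admin", "admin"))
resp.raise_for_status()

with open("export.csv", "wb") as f:
    f.write(resp.content)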