Apache Flink provides a set of REST APIs. While my Flink job is running, I tried to access it using /jobs, which gave me the metadata. When I tried to fetch the results using /job/:jobid/execution-result, I got {status: "RUNNING"}. Upon stopping the job I get this response:
{
  "status": {
    "id": "COMPLETED"
  },
  "job-execution-result": {
    "id": jobid,
    "application-status": "CANCELED",
    "accumulator-results": {},
    "net-runtime": 1872358
  }
}
Is there a way I can get the results that get printed on Flink's stdout using any other API call?
Related
My axios GET call takes a few seconds before the result status becomes 'succeeded', as in the response below:
{
  "status": "succeeded",
  "createdDateTime": "2022-12-01T02:42:04Z",
  "lastUpdatedDateTime": "2022-12-01T02:42:06Z",
  "analyzeResult": {
    "version": "3.2.0",
    "modelVersion": "2022-04-30",
    "readResults": [
      {
        "page": 1,
        "angle": 12.8499,
        "width": 1
However, sometimes I get the response below, with status running and no analyzeResult:
{
  "status": "running",
  "createdDateTime": "2022-12-01T02:42:04Z",
  "lastUpdatedDateTime": "2022-12-01T02:42:06Z",
What is the right way to keep calling the API until the status is 'succeeded'?
I tried the code below, both with async/await and with then/catch:
const result= await axios.get(url,headers)
console.log(result.data)
This has nothing to do with async/await. If I understand correctly, the backend sends a response containing the status key, and the front end continues executing as soon as that response is received; it does not care about the status value.
Solution: either of the following works.
Ask the backend to wait until the operation completes and only then send the response.
Alternatively, have the backend send a response with the status key as soon as the operation starts, and expose another API for checking the operation's status. This is useful when operations take more time.
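A minimal sketch of the client side of the second option, assuming an endpoint that returns a { status: ... } object: the loop below keeps requesting until the status is "succeeded" (the interval and retry limit are arbitrary choices, and fetchStatus stands in for your axios call).

```javascript
// Poll a status endpoint until it reports "succeeded" (or a retry limit is hit).
// `fetchStatus` stands in for your call, e.g. () => axios.get(url, { headers }).then(r => r.data)
async function pollUntilSucceeded(fetchStatus, { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const data = await fetchStatus();
    if (data.status === "succeeded") return data; // analyzeResult is present here
    if (data.status === "failed") throw new Error("operation failed");
    // wait before the next poll
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for status 'succeeded'");
}
```

Usage would look like `const result = await pollUntilSucceeded(() => axios.get(url, { headers }).then(r => r.data));`.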
I am trying to create a controller service using the NiFi REST API, but I am blocked, because when I try:
InvokeHTTP
POST
https://hostname/nifi-api/controller/controller-services
using this JSON:
{
  "revision": {
    "version": 0
  },
  "disconnectedNodeAcknowledged": false,
  "component": {
    "name": "DMCS_try",
    "type": "org.apache.nifi.distributed.cache.server.map.DistributedMapCacheServer",
    "bundle": {
      "group": "org.apache.nifi",
      "artifact": "nifi-distributed-cache-services-nar",
      "version": "1.9.0.3.4.1.1-4"
    },
    "state": "ENABLED",
    "properties": {
      "Port": "4555",
      "Maximum Cache Entries": "10000",
      "Eviction Strategy": null,
      "Persistence Directory": null,
      "SSL Context Service": null
    }
  }
}
I get this error:
Node XXXXXXXXX is unable to fulfill this request due to: Unable to modify the controller. Contact the system administrator.
Controller services can be created in two different places. One place is in the flow, as part of a process group, so they can be used by processors; the other is at the controller level, for use by reporting tasks.
The URL you specified creates a service at the controller level, and therefore whatever identity you authenticate as needs permission to modify the controller (WRITE on /controller). The error message is saying you don't have that permission.
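If the service is meant to be used by processors, a sketch of targeting the process-group-scoped endpoint instead (the hostname, the pgId placeholder, and the fetch-based client are assumptions; the payload is the same "revision"/"component" body shown above):

```javascript
// Build the process-group-scoped URL for creating a controller service.
// hostname and pgId are placeholders for your NiFi host and process group id.
function controllerServiceUrl(hostname, pgId) {
  return `https://${hostname}/nifi-api/process-groups/${pgId}/controller-services`;
}

// POST the same JSON payload to that URL (any HTTP client works).
async function createService(hostname, pgId, payload) {
  const res = await fetch(controllerServiceUrl(hostname, pgId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

Creating the service inside a process group only requires write access to that group, not to the controller itself.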
I need to call a Jenkins job using its API through Postman. This job requires parameters (HOST, VERBOSITY and PMSP).
Auth works using a Jenkins token, and the header Content-Type: application/json is used.
I tried calling the endpoint https://jenkins_server/job/job_name/build/api/json with the following body, but nothing is submitted and the job doesn't run.
I also tried the endpoint https://jenkins_server/job/job_name/buildWithParameters/api/json with the same body. I get 201 Created (the job runs), but no parameters are passed to the job.
{
  "parameter": [
    {
      "name": "HOSTS",
      "value": "[linux]\n1.2.3.4"
    },
    {
      "name": "VERBOSITY",
      "value": "vv"
    },
    {
      "name": "SANS_PMSP",
      "value": true
    }
  ]
}
Is my JSON well constructed? Which endpoint do I need to call?
If it's Postman that you would like to focus on, you can import a curl command straight into the application.
This creates a new request for you to use, populated from the details in the command.
From there, you can adjust the URL to point at the location you need.
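On the endpoint itself: buildWithParameters generally expects the parameters as a query string (or form fields) rather than a JSON body, which would explain the 201 with no parameters applied; this is worth verifying against your Jenkins version. A sketch of building such a request, using the parameter names from the question:

```javascript
// Build a buildWithParameters URL with the job parameters as a query string.
// baseUrl and the parameter names/values are taken from the question; adjust to your job.
function buildWithParametersUrl(baseUrl, params) {
  const query = new URLSearchParams(params).toString();
  return `${baseUrl}/buildWithParameters?${query}`;
}

const url = buildWithParametersUrl("https://jenkins_server/job/job_name", {
  HOSTS: "[linux]\n1.2.3.4",
  VERBOSITY: "vv",
  SANS_PMSP: "true",
});
// POST this URL with your auth headers; no JSON body is needed.
```

URLSearchParams takes care of percent-encoding values such as the embedded newline in HOSTS.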
I'm trying to use the Flight Low-fare Search endpoint, and I realised that it isn't filtering properly by maxPrice.
For example, calling the endpoint below with maxPrice=100:
https://test.api.amadeus.com/v1/shopping/flight-offers?origin=MAD&destination=BIO&departureDate=2018-12-07&returnDate=2018-12-10&adults=1&maxPrice=100&currency=EUR
I'm getting the following result, so I think there is an error:
{
  "price": {
    "total": "185.09",
    "totalTaxes": "39.09"
  },
  "pricePerAdult": {
    "total": "185.09",
    "totalTaxes": "39.09"
  }
}
This bug has been fixed and deployed in production.
I'm a newbie with Apache Nutch, writing a client to use it via REST.
I succeeded in all the steps (INJECT, FETCH, ...), but in the last step, when trying to index to Solr, it fails to pass the parameter.
The request (formatted for readability):
{
  "args": {
    "batch": "1463743197862",
    "crawlId": "sample-crawl-01",
    "solr.server.url": "http://x.x.x.x:8081/solr/"
  },
  "confId": "default",
  "type": "INDEX",
  "crawlId": "sample-crawl-01"
}
The Nutch logs:
java.lang.Exception: java.lang.RuntimeException: Missing SOLR URL. Should be set via -D solr.server.url
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Is that implemented - passing the parameter through to the Solr plugin?
You need to create/update a configuration using the /config/create/ endpoint, with a POST request and a payload similar to:
{
  "configId": "solr-config",
  "force": "true",
  "params": { "solr.server.url": "http://127.0.0.1:8983/solr/" }
}
In this case I'm creating a new configuration and specifying the solr.server.url parameter. You can verify this worked with a GET request to /config/solr-config (solr-config being the configId specified previously); the output should contain all the default parameters (see https://gist.github.com/jorgelbg/689b1d66d116fa55a1ee14d7193d71b4 for an example of the default output). If everything worked, the returned JSON should show the solr.server.url option with the desired value: https://gist.github.com/jorgelbg/689b1d66d116fa55a1ee14d7193d71b4#file-nutch-solr-config-json-L464.
After this, just hit the /job/create endpoint to create a new INDEX job; the payload should be something like:
{
  "type": "INDEX",
  "confId": "solr-config",
  "crawlId": "crawl01",
  "args": {}
}
The idea is that you need to pass the configId you created (with solr.server.url specified) along with the crawlId and other args. This should return something similar to:
{
  "id": "crawl01-solr-config-INDEX-1252914231",
  "type": "INDEX",
  "confId": "solr-config",
  "args": {},
  "result": null,
  "state": "RUNNING",
  "msg": "OK",
  "crawlId": "crawl01"
}
Bottom line: you need to create a new configuration with solr.server.url set, instead of specifying it through the args key in the JSON payload.
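The two steps above can be sketched end to end like this (the Nutch server address and the fetch-based client are assumptions; the payloads match those shown above):

```javascript
// Sketch: create the config carrying solr.server.url, then start the INDEX job against it.
const NUTCH = "http://localhost:8081"; // assumption: address of the Nutch REST server

const configPayload = {
  configId: "solr-config",
  force: "true",
  params: { "solr.server.url": "http://127.0.0.1:8983/solr/" },
};

const jobPayload = {
  type: "INDEX",
  confId: "solr-config", // references the config created in step 1
  crawlId: "crawl01",
  args: {},
};

async function post(path, payload) {
  const res = await fetch(`${NUTCH}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}

async function run() {
  await post("/config/create", configPayload);       // step 1: config with solr.server.url
  const job = await post("/job/create", jobPayload); // step 2: INDEX job referencing it
  console.log(job.state);                            // "RUNNING" if the job started
}
```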