I noticed that in my apps, endpoints just disappear after a while with no indication of why. Example: I started the application last night; this morning I curled the endpoint and got
curl -H "Content-type: application/json" http://localhost:8081
{
"_links" : {
"profile" : {
"href" : "http://localhost:8081/profile"
}
}
}
This is what it looks like after I restarted the service:
curl -H "Content-type: application/json" http://localhost:8081
{
"_links" : {
"roleAssignments" : {
"href" : "http://localhost:8081/roleAssignments"
},
"invitations" : {
"href" : "http://localhost:8081/invitations"
},
"tenantProfiles" : {
"href" : "http://localhost:8081/tenantProfiles"
},
"roles" : {
"href" : "http://localhost:8081/roles"
},
"companies" : {
"href" : "http://localhost:8081/companies"
},
"permissions" : {
"href" : "http://localhost:8081/permissions"
},
"accounts" : {
"href" : "http://localhost:8081/accounts"
},
"profile" : {
"href" : "http://localhost:8081/profile"
}
}
}
It is super hard to reproduce reliably; mostly I could only trigger the behavior again by waiting for a longer period. Any ideas what's going on here?
I've been having trouble with this too and eventually came across https://jira.spring.io/browse/DATAREST-1505, which describes the issue. It's been fixed in the latest version of Spring Boot (2.2.7), so hopefully updating to that will fix your issue as well.
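If you're on Maven, the upgrade is just a parent version bump (a sketch assuming you use the Spring Boot starter parent; adjust accordingly for Gradle):
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.7.RELEASE</version>
</parent>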
Related
I have two WireMock mapping JSON files with the same URL. In the first mapping file, I only have xDate as a query parameter. In the second mapping file, I have both xDate and yType as query parameters.
How do I set up the stubs so that when I hit the URL with both parameters, the correct mapping/file is returned?
1st mapping JSON file:
{
"request" : {
"customMatcher" : {
"name" : "is-today",
"parameters" : {
"queryParamName" : "xDate",
"dateFormat": "yyyy-MM-dd"
}
},
"urlPathPattern" : "/myUrl",
"method" : "GET"
},
"response" : {
"status" : 200,
"bodyFileName" : "body1.json",
"headers" : {
"Server" : "Apache-Coyote/1.1",
"Content-Type" : "application/json"
}
}
}
2nd mapping JSON file:
{
"request" : {
"customMatcher" : {
"name" : "is-today",
"parameters" : {
"queryParamName" : "xDate",
"dateFormat": "yyyy-MM-dd"
}
},
"queryParameters":{
"yType" : {
"equalTo": "Value"
}
},
"urlPathPattern" : "/myUrl",
"method" : "GET"
},
"response" : {
"status" : 200,
"bodyFileName" : "body2.json",
"headers" : {
"Server" : "Apache-Coyote/1.1",
"Content-Type" : "application/json"
}
}
}
When I was testing it, it always hit the 1st mapping. Even when I hit the URL with both parameters, it still went to the 1st mapping.
I tried putting a "priority" value in the 1st and 2nd mapping files, but somehow it's not working properly for me, as shown below.
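This is roughly how I placed the "priority" values; as far as I understand, lower numbers are matched first, so the more specific stub (the one with both parameters) gets the lower value, while the 1st mapping gets "priority" : 2. The values themselves are just examples:
{
  "priority" : 1,
  "request" : {
    "urlPathPattern" : "/myUrl",
    "method" : "GET",
    "customMatcher" : {
      "name" : "is-today",
      "parameters" : {
        "queryParamName" : "xDate",
        "dateFormat" : "yyyy-MM-dd"
      }
    },
    "queryParameters" : {
      "yType" : {
        "equalTo" : "Value"
      }
    }
  },
  "response" : {
    "status" : 200,
    "bodyFileName" : "body2.json"
  }
}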
I have two microservices; one of them needs to load all operator names/codes at boot and index them in a RadixTree.
I am trying to load around 36,000 records using Feign/Spring Data REST. It works, but I noticed that approximately half of the response size comes from the links:
{
"_embedded" : {
"operatorcode" : [ {
"enabled" : true,
"code" : 9320,
"operatorCodeId" : 110695,
"operatorName" : "Afghanistan - Kabul/9320",
"operatorId" : 1647,
"activationDate" : "01-01-2008",
"deactivationDate" : "31-12-2099",
"countryId" : 1,
"_links" : {
"self" : {
"href" : "http://10.44.0.51:8083/operatorcode/110695"
},
"operatorCode" : {
"href" : "http://10.44.0.51:8083/operatorcode/110695{?projection}",
"templated" : true
},
"operator" : {
"href" : "http://10.44.0.51:8083/operatorcode/110695/operator"
}
}
}
...
]
}
Is there any way to stop sending back the _links? In my case they are not used. I tried setting use-hal-as-default-JSON-media-type: false and using projections, but did not succeed.
I am not sure that this is the correct way to do it, but you can try something like this:
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;

@Bean
public Jackson2ObjectMapperBuilder jacksonBuilder() {
    Jackson2ObjectMapperBuilder b = new Jackson2ObjectMapperBuilder();
    // Apply the mix-in to every serialized type so the "_links" property is dropped globally
    b.mixIn(Object.class, IgnorePropertiesInJackson.class);
    return b;
}

@JsonIgnoreProperties({"_links"})
private abstract static class IgnorePropertiesInJackson {
}
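If stripping _links globally via the mix-in feels too blunt, another option is to bypass Spring Data REST for this one bulk read and expose a plain Spring MVC endpoint that returns the entities directly, so no HAL envelope is produced at all. This is only a sketch; the OperatorCode entity and OperatorCodeRepository names are assumptions based on the payload in the question:
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OperatorCodeBulkController {

    // Hypothetical Spring Data JPA repository for the OperatorCode entity (findAll() returning a List)
    private final OperatorCodeRepository repository;

    public OperatorCodeBulkController(OperatorCodeRepository repository) {
        this.repository = repository;
    }

    // Plain JSON list with no _embedded/_links wrapper, intended for the boot-time bulk load;
    // the path is deliberately different from the Data REST /operatorcode endpoint
    @GetMapping("/operatorcodes/bulk")
    public List<OperatorCode> findAllForBulkLoad() {
        return repository.findAll();
    }
}
The Feign client on the consuming side can then bind straight to a list of plain DTOs without any HAL handling.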
I want to extract content from a PDF file and be able to search within that content using Elasticsearch.
I installed elasticsearch/elasticsearch-mapper-attachments/2.6.0.
I created a new index named "docs".
I created a file named "tmp.json" with this content:
{"title": "file.pdf", "file": "IkdvZCBTYXZlIHRoZSBRdWVlbiIgKGFsdGVybmF0aXZlbHkgIkdvZCBTYXZlIHRoZSBLaW5nIg=="}
Then I executed the following:
curl -X PUT "http://localhost:9200/docs/attachment/_mapping" -d '{
"attachment": {
"properties" : {
"file" : {
"type" : "attachment",
"fields" : {
"title" : {"store":"yes"},
"file":{
"type":"string",
"term_vector":"with_positions_offsets",
"store":"yes"}
}
}
}
}
}'
and the following:
curl -X POST "http://localhost:9200/docs/attachment" -d @tmp.json
The problem is that the content is stored exactly as it is in the file (still base64-encoded). I was expecting the content to be decoded, like so:
base64.b64decode("IkdvZCBTYXZlIHRoZSBRdWVlbiIgKGFsdGVybmF0aXZlbHkgIkdvZCBTYXZlIHRoZSBLaW5nIg==")
That gives :
b'"God Save the Queen" (alternatively "God Save the King"'
To encode in base64, here is what I do:
import base64
import json

fname = 'file.pdf'
# Read the PDF and base64-encode it so it can be embedded in the JSON payload
file64 = base64.b64encode(open(fname, "rb").read()).decode('ascii')
with open('tmp.json', 'w') as f:
    json.dump({"file": file64, "title": fname}, f)
I would like to be able to see the content using Kibana (but for now I see only the base64 data...).
This didn't work:
curl -X PUT "http://localhost:9200/docs/attachment/_mapping" -d '{
"attachment": {
"properties" : {
"content" : {
"type" : "attachment",
"fields" : {
"title" : {"store":"yes"},
"content":{
"type":"string",
"term_vector":"with_positions_offsets",
"store":"yes"}
}
}
}
}
}'
This worked, and I can see the content of the PDF through Kibana:
curl -X PUT "http://localhost:9200/docs" -d '{
"mappings" : {
"attachment" : {
"properties" : {
"content" : {
"type" : "attachment",
"fields" : {
"content" : { "store" : "yes" },
"author" : { "store" : "yes" },
"title" : { "store" : "yes"},
"date" : { "store" : "yes" },
"keywords" : { "store" : "yes", "analyzer" : "keyword" },
"name" : { "store" : "yes" },
"content_length" : { "store" : "yes" },
"content_type" : { "store" : "yes" }
}
}
}
}
}
}'
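With that index in place, a quick way to check that extraction works is to index a document whose base64 payload is in the content field and then search for a word from the decoded text. This is a sketch; depending on the mapper-attachments version, the extracted text may be queryable on the base field (as below) or on a content sub-field:
curl -X POST "http://localhost:9200/docs/attachment" -d '{
  "content" : "IkdvZCBTYXZlIHRoZSBRdWVlbiIgKGFsdGVybmF0aXZlbHkgIkdvZCBTYXZlIHRoZSBLaW5nIg=="
}'
curl -X POST "http://localhost:9200/docs/attachment/_search" -d '{
  "query" : { "match" : { "content" : "Queen" } }
}'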
I am trying to restart the MapReduce JobTracker through the Cloudera Manager API. The status of the JobTracker is as follows:
local-iMac-399:$ curl -u 'admin:admin' 'http://hadoop-namenode.dev.com:7180/api/v6/clusters/Cluster%201/services/mapreduce/roles/mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86'
{
"name" : "mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86",
"type" : "JOBTRACKER",
"serviceRef" : {
"clusterName" : "cluster",
"serviceName" : "mapreduce"
},
"hostRef" : {
"hostId" : "24259373-7e71-4089-8251-faf055e42ad7"
},
"roleUrl" : "http://hadoop-namenode.dev.com:7180/cmf/roleRedirect/mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86",
"roleState" : "STARTED",
"healthSummary" : "GOOD",
"healthChecks" : [ {
"name" : "JOB_TRACKER_FILE_DESCRIPTOR",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_GC_DURATION",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_HOST_HEALTH",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_LOG_DIRECTORY_FREE_SPACE",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_SCM_HEALTH",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_UNEXPECTED_EXITS",
"summary" : "GOOD"
}, {
"name" : "JOB_TRACKER_WEB_METRIC_COLLECTION",
"summary" : "GOOD"
} ],
"configStalenessStatus" : "STALE",
"haStatus" : "ACTIVE",
"maintenanceMode" : false,
"maintenanceOwners" : [ ],
"commissionState" : "COMMISSIONED",
"roleConfigGroupRef" : {
"roleConfigGroupName" : "mapreduce-JOBTRACKER-BASE"
}
}
local-iMac-399:$
I don't know how to use the API to restart just the JobTracker.
I tried to restart the Hive service using the following command, but got an error:
local-iMac-399:$curl -X POST -u 'admin:admin' 'http://hadoop-namenode.dev.com:7180/api/v6/clusters/Cluster%201/services/hive/roleCommands/restart'
{
"message" : "No content to map due to end-of-input\n at [Source: org.apache.cxf.transport.http.AbstractHTTPDestination$1#4169c499; line: 1, column: 1]"
}
I would appreciate it if someone could help me understand how to use the Cloudera Manager API.
The "No content to map due to end-of-input" error means the restart request was sent without a JSON body; the roleCommands/restart endpoint expects a body listing the role names to restart. Based on the information provided, this is how you'd invoke the CM API JobTracker restart:
curl -u 'admin:admin' -X POST -H "Content-Type:application/json" -d '{"items":["mapreduce-JOBTRACKER-0675ebab2b87e3869e0d90167cf4bf86"]}' 'http://hadoop-namenode.dev.com:7180/api/v6/clusters/Cluster%201/services/mapreduce/roleCommands/restart'
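The restart call returns a list of command objects; you can then poll a command by its id to see when the restart has finished (the 12345 below is a placeholder for the id that comes back in that response):
curl -u 'admin:admin' 'http://hadoop-namenode.dev.com:7180/api/v6/commands/12345'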
I have set up a "checkbox group" with the five schedule states in our organization's workspace. I would like to query using the Lookback API with the selected schedule states as filters. Since the LBAPI is driven by ObjectIDs, I need to pass in the ID representations of the schedule states, rather than their names. Is there a quick way to get these IDs so I can relate them to the checkbox entries?
The Lookback API will accept string-valued ScheduleStates as query arguments. Thus the following query:
{
find: {
_TypeHierarchy: "HierarchicalRequirement",
"ScheduleState": "In-Progress",
__At:"current"
}
}
works correctly for me. If you want/need OIDs though, add &fields=true to the end of your REST query URL and you'll notice the following information coming back:
GeneratedQuery: {
{ "fields" : true,
"find" : { "$and" : [ { "_ValidFrom" : { "$lte" : "2013-04-18T20:00:25.751Z" },
"_ValidTo" : { "$gt" : "2013-04-18T20:00:25.751Z" }
} ],
"ScheduleState" : { "$in" : [ 2890498684 ] },
"_TypeHierarchy" : { "$in" : [ -51038,
2890498773,
10487547445
] },
"_ValidFrom" : { "$lte" : "2013-04-18T20:00:25.751Z" }
},
"limit" : 10,
"skip" : 0
}
}
You'll notice the ScheduleState OID here:
"ScheduleState" : { "$in" : [ 2890498684 ] }
So you could run a couple of sample queries on different ScheduleStates and find their corresponding OIDs.
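Once you have the OIDs, the checkbox-driven query can pass them all in a single $in clause, mirroring the generated query above. This is a sketch: the first OID is the one from the output above, the second is a placeholder for whatever your other sample queries return, and OIDs are workspace-specific:
{
  find: {
    _TypeHierarchy: "HierarchicalRequirement",
    "ScheduleState": { "$in": [ 2890498684, <otherScheduleStateOID> ] },
    __At: "current"
  }
}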