Cisco ISE API POST (500 Error)

I am having difficulty creating a new endpoint in Cisco ISE using their API. Here is my code:
import json
import requests
from requests.auth import HTTPBasicAuth

# LAB Endpoint
API_ENDPOINT = "myurl.com:9060/ers/config/endpoint"

data = {
    "ERSEndPoint": {
        "name": "name",
        "description": "description",
        "mac": "99:99:99:99:99:99",
        "profileId": "profileId",
        "staticProfileAssignment": "false",
        "groupId": "groupId",
        "staticGroupAssignment": "false",
        "portalUser": "portalUser",
        "identityStore": "identityStore",
        "identityStoreId": "identityStoreId",
        "customAttributes": {
            "customAttributes": {
                "key1": "value1",
                "key2": "value2"
            }
        }
    }
}

headers = {'Content-Type': 'application/json',
           'Accept': 'application/json'}

r = requests.post(url=API_ENDPOINT, data=json.dumps(data), headers=headers,
                  auth=HTTPBasicAuth('user', 'pwd'))
print(r.text)
I keep receiving a 500 error.
> { "ERSResponse" : {
> "operation" : "POST-create-endpoint",
> "messages" : [ {
> "title" : "CREATE: DB internal error during CRUD operation Unable to create the endpoint. ORA-02291: integrity constraint
> (CEPM.REF_ROLE_MASTER) violated - parent key not found\n",
> "type" : "ERROR",
> "code" : "CRUD operation exception"
> } ],
> "link" : {
> "rel" : "related",
> "href" : "https://ezlrtvise22.msstore.microsoftstore.com:9060/ers/config/endpoint",
> "type" : "application/xml"
> } } }
It seems like it keeps throwing some weird Oracle DB key error. Any suggestions? I have tried using a REST Firefox extension to test as well and I still get the same error. Thanks!

The values for groupId and profileId need to be the respective UUIDs of the group/profile you want to put this device in. (Or you can leave out the profileId key entirely, since it's not required.)
See https://example.com:9060/ers/config/endpointgroup to list all the endpoint groups (with their UUIDs).
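For example, here is a minimal Python sketch of that flow, assuming the same lab host and credentials as the question; the group name "Workstation" is hypothetical, the SearchResult/resources response shape may differ slightly between ISE versions, and verify=False is only for a lab with a self-signed certificate:

import requests

BASE = "https://myurl.com:9060/ers/config"   # assumption: same lab host as above
AUTH = ("user", "pwd")
HEADERS = {"Content-Type": "application/json", "Accept": "application/json"}

# 1. List the endpoint groups and pick the UUID of the one you want ("Workstation" is hypothetical).
groups = requests.get(BASE + "/endpointgroup", headers=HEADERS, auth=AUTH, verify=False).json()
group_id = next(g["id"] for g in groups["SearchResult"]["resources"] if g["name"] == "Workstation")

# 2. Create the endpoint with the real UUID; profileId is simply omitted since it is optional.
payload = {
    "ERSEndPoint": {
        "name": "name",
        "mac": "99:99:99:99:99:99",
        "groupId": group_id,
        "staticGroupAssignment": True
    }
}
r = requests.post(BASE + "/endpoint", json=payload, headers=HEADERS, auth=AUTH, verify=False)
print(r.status_code, r.text)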

Related

Get a list of Jenkins nodes by label via the REST API

I need to GET a list of nodes that contain a certain label.
I know how to do that by getting the entire node list via the Jenkins REST API and then fetching each node, also via the REST API, and checking its labels - but that is too many API calls.
I could also create a job that writes the node list for a given label somewhere, but that is a bad approach: a Jenkins job triggered remotely has no return value, so I cannot know when it has finished and would have to read the results from wherever the job stored them.
I need a way to get the list of nodes with a given label in a single API call.
You can make a single API call to <JENKINS_URL>/computer/api/json (or <JENKINS_URL>/computer/api/python for a Python API), which returns a list of all nodes and their properties.
One of those properties is the list of assigned labels, so just go over all nodes and extract the ones that contain the label you need.
Here is an example of the returned object:
{
  "_class" : "hudson.model.ComputerSet",
  "busyExecutors" : 0,
  "computer" : [
    {
      "_class" : "hudson.model.Hudson$MasterComputer",
      "actions" : [ ],
      "assignedLabels" : [
        {
          "name" : "built-in"
        }
      ],
      "description" : "the Jenkins controller's built-in node",
      "displayName" : "Built-In Node",
      "executors" : [
        { },
        { }
      ],
      "icon" : "symbol-computer",
      "iconClassName" : "symbol-computer",
      "idle" : true,
      "jnlpAgent" : false,
      "launchSupported" : true,
      "loadStatistics" : {
        "_class" : "hudson.model.Label$1"
      },
      "manualLaunchAllowed" : true,
      "monitorData" : {
        "hudson.node_monitors.SwapSpaceMonitor" : {
          "_class" : "hudson.node_monitors.SwapSpaceMonitor$MemoryUsage2",
          "availablePhysicalMemory" : 6938730496,
          "availableSwapSpace" : 6906019840,
          "totalPhysicalMemory" : 16885276672,
          "totalSwapSpace" : 21046026240
        },
        "hudson.node_monitors.TemporarySpaceMonitor" : {
          "_class" : "hudson.node_monitors.DiskSpaceMonitorDescriptor$DiskSpace",
          "timestamp" : 1653907906021,
          "path" : "C:\\Windows\\Temp",
          "size" : 426696622080
        },
        "hudson.node_monitors.DiskSpaceMonitor" : {
          "_class" : "hudson.node_monitors.DiskSpaceMonitorDescriptor$DiskSpace",
          "timestamp" : 1653907905929,
          "path" : "C:\\ProgramData\\Jenkins\\.jenkins",
          "size" : 426696622080
        },
        "hudson.node_monitors.ArchitectureMonitor" : "Windows 10 (amd64)",
        "hudson.node_monitors.ResponseTimeMonitor" : {
          "_class" : "hudson.node_monitors.ResponseTimeMonitor$Data",
          "timestamp" : 1653907905941,
          "average" : 0
        },
        "hudson.node_monitors.ClockMonitor" : {
          "_class" : "hudson.util.ClockDifference",
          "diff" : 0
        }
      },
      "numExecutors" : 2,
      "offline" : false,
      "offlineCause" : null,
      "offlineCauseReason" : "",
      "oneOffExecutors" : [ ],
      "temporarilyOffline" : false
    }
  ],
  "displayName" : "Nodes",
  "totalExecutors" : 2
}
You are interested in the assignedLabels array - notice that it can contain multiple labels.
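For instance, here is a minimal Python sketch of that filtering; the controller URL and label are placeholders, and you would pass auth=(user, api_token) if anonymous read access is disabled:

import requests

JENKINS_URL = "https://jenkins.example.com"   # hypothetical controller URL
wanted_label = "linux"                        # hypothetical label

# One call returns every node; ?tree=computer[displayName,assignedLabels[name]] can trim the payload.
data = requests.get(JENKINS_URL + "/computer/api/json").json()

# Keep only the nodes whose assignedLabels contain the wanted label.
nodes = [
    node["displayName"]
    for node in data["computer"]
    if any(label["name"] == wanted_label for label in node.get("assignedLabels", []))
]
print(nodes)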

How to get the right stub when multiple mappings match the same URL with different query parameters

I have two WireMock mapping JSON files with the same URL. In the first mapping JSON file, I only have xDate as a query parameter. In the second mapping JSON file, I have both xDate and yType as query parameters.
How do I set up the stubs so that when I hit the URL with both parameters, it returns the correct mapping/file information?
1st mapping JSON file:
"request" : {
  "customMatcher" : {
    "name" : "is-today",
    "parameters" : {
      "queryParamName" : "xDate",
      "dateFormat" : "yyyy-MM-dd"
    }
  },
  "urlPathPattern" : "/myUrl",
  "method" : "GET"
},
"response" : {
  "status" : 200,
  "bodyFileName" : "body1.json",
  "headers" : {
    "Server" : "Apache-Coyote/1.1",
    "Content-Type" : "application/json"
  }
}
2nd mapping JSON file:
"request" : {
  "customMatcher" : {
    "name" : "is-today",
    "parameters" : {
      "queryParamName" : "xDate",
      "dateFormat" : "yyyy-MM-dd"
    }
  },
  "queryParameters" : {
    "yType" : {
      "equalTo" : "Value"
    }
  },
  "urlPathPattern" : "/myUrl",
  "method" : "GET"
},
"response" : {
  "status" : 200,
  "bodyFileName" : "body2.json",
  "headers" : {
    "Server" : "Apache-Coyote/1.1",
    "Content-Type" : "application/json"
  }
}
When I test it, requests always hit the first mapping JSON. Even when I hit the URL with both input parameters, it still goes to the first mapping.
I tried putting a "priority" value on the first and second mapping files, but somehow it's not working properly for me.

Create Custom field in Salesforce using Tooling API

I am trying to create a custom field using the Tooling API in Salesforce. To try out the Tooling API first, I used Workbench, but it shows the following error:
JSON Parser Error:
message: Cannot deserialize instance of complexvalue from VALUE_STRING value text or request may be missing a required field at [line:5, column:25]
errorCode: JSON_PARSER_ERROR
Following is the JSON body I am using:
{
  "DeveloperName" : "CusField",
  "Metadata" : {
    "type" : "text",
    "description" : "test",
    "inlineHelpText" : "testhelp",
    "label" : "cus Field",
    "required" : false,
    "precision" : null,
    "length" : 255,
    "unique" : false,
    "externalId" : false,
    "trackHistory" : false
  },
  "TableEnumOrId" : "Account",
  "ManageableState" : "installed"
}
Please let me know what is wrong with the body.
Thanks in advance.
There are a few things wrong with the body.
Remove ManageableState.
Do not include DeveloperName and TableEnumOrId; instead, use FullName as shown below.
Capitalize the field type: Text.
Here's a working post body:
{
  "FullName" : "Account.CusField__c",
  "Metadata" : {
    "type" : "Text",
    "description" : "test",
    "inlineHelpText" : "testhelp",
    "label" : "cus Field",
    "required" : false,
    "precision" : null,
    "length" : 255,
    "unique" : false,
    "externalId" : false,
    "trackHistory" : false
  }
}
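If it helps, here is a minimal Python sketch of posting that body through the Tooling API REST endpoint; the instance URL, access token, and API version below are placeholders:

import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "00D...session_id"                         # placeholder session id / OAuth token
API_VERSION = "v57.0"                                     # example version

body = {
    "FullName": "Account.CusField__c",
    "Metadata": {
        "type": "Text",
        "label": "cus Field",
        "length": 255,
        "required": False
    }
}

# CustomField is a Tooling API sObject, so we POST to its sobjects endpoint.
r = requests.post(
    INSTANCE_URL + "/services/data/" + API_VERSION + "/tooling/sobjects/CustomField",
    json=body,
    headers={"Authorization": "Bearer " + ACCESS_TOKEN}
)
print(r.status_code, r.json())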

How to query and list all types within an elasticsearch index?

Problem: What is the most correct way to simply query for and list all types within a specific index (and all indices) in elasticsearch?
I've been reading through the reference and API but can't seem to find anything obvious.
I can list indices with the command:
$ curl 'localhost:9200/_cat/indices?v'
I can get stats (which don't seem to include types) with the command:
$ curl localhost:9200/_stats
I'd expect that there'd be a straightforward command as simple as:
$ curl localhost:9200/_types
or
$ curl localhost:9200/index_name/_types
Thanks for any help you can offer.
What you call "type" is actually a "mapping type" and the way to get them is simply by using:
curl -XGET localhost:9200/_all/_mapping
Now, since you only want the names of the mapping types, you don't need to install anything; you can simply use Python to extract just what you want from that response:
curl -XGET localhost:9200/_all/_mapping | python -c 'import json,sys; resp=json.load(sys.stdin); types=[t for index in resp for t in resp[index]["mappings"]]; print(list(types))'
The Python script does something very simple, i.e. it iterates over all the indices and mapping types and only retrieves the latter's names:
import json, sys
resp = json.load(sys.stdin)
types = [t for index in resp for t in resp[index]["mappings"]]
print(list(types))
UPDATE
Since you're using Ruby, the same trick is available by using Ruby code:
curl -XGET localhost:9200/_all/_mapping | ruby -e "require 'rubygems'; require 'json'; resp = JSON.parse(STDIN.read); resp.each { |index, indexSpec| indexSpec['mappings'].each { |type, fields| puts type } }"
The Ruby script looks like this:
require 'rubygems'
require 'json'

resp = JSON.parse(STDIN.read)
resp.each { |index, indexSpec|
  indexSpec['mappings'].each { |type, fields|
    puts type
  }
}
You can also query a single index with the _mapping API, so you only see the "mappings" section of that index.
For example: curl -XGET http://localhost:9200/YourIndexName/_mapping?pretty
You will get something like this:
{
  "YourIndexName" : {
    "mappings" : {
      "mapping_type_name_1" : {
        "properties" : {
          "dateTime" : {
            "type" : "date"
          },
          "diskMaxUsedPct" : {
            "type" : "integer"
          },
          "hostName" : {
            "type" : "keyword"
          },
          "load" : {
            "type" : "float"
          },
          "memUsedPct" : {
            "type" : "float"
          },
          "netKb" : {
            "type" : "long"
          }
        }
      },
      "mapping_type_name_2" : {
        "properties" : {
          "dateTime" : {
            "type" : "date"
          },
          "diskMaxUsedPct" : {
            "type" : "integer"
          },
          "hostName" : {
            "type" : "keyword"
          },
          "load" : {
            "type" : "float"
          },
          "memUsedPct" : {
            "type" : "float"
          }
        }
      }
    }
  }
}
mapping_type_name_1 and mapping_type_name_2 are the types in this index, and you can also see the structure of these types.
Good explanation about mapping_types is here: https://logz.io/blog/elasticsearch-mapping/
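The same extraction can be done programmatically; here is a minimal Python sketch of the per-index approach (the index name is a placeholder):

import requests

# Ask one index for its mappings and print the mapping type names (the keys under "mappings").
resp = requests.get("http://localhost:9200/YourIndexName/_mapping").json()
for index, spec in resp.items():
    print(index, list(spec["mappings"].keys()))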
// Uses Apache HttpClient, Apache Commons IO and json-lib (net.sf.json).
private Set<String> getTypes(String indexName) throws Exception {
    HttpClient client = HttpClients.createDefault();
    // Ask the index for its mappings and read the raw JSON response.
    HttpGet mappingsRequest = new HttpGet(getServerUri() + "/" + indexName + "/_mappings");
    HttpResponse mappingsResponse = client.execute(mappingsRequest);
    String response = IOUtils.toString(mappingsResponse.getEntity().getContent(), Charset.defaultCharset());
    System.out.println(response);
    // The mapping type names are the keys of the index's "mappings" object.
    String mappings = ((JSONObject) JSONSerializer.toJSON(JSONObject.fromObject(response).get(indexName).toString())).get("mappings").toString();
    Set<String> types = JSONObject.fromObject(mappings).keySet();
    return types;
}

How do I set up my json schema structure

I'm trying to figure out how a json schema should be implemented (as standardized as possible).
I have noticed that if I define a schema for a form using the v4 draft, I cannot express the requirements my project has. So I created a schema that uses the v4 schema ("$schema": "http://json-schema.org/draft-04/schema#") and gave it a custom id for the project; let's call it projectschema#. This schema validates, so all is good standards-wise. I have added two values to the type enum.
I then use this schema as $schema for another schema that describes form properties and validations, the formschema#. This schema too validates, this time against the projectschema#.
Now, as documented on www.json-schema.org, there's also a hyper-schema which allows the definition of links. Useful, as I can define where to POST the form to, or even where to get valueSets to use in the form (i.e. a rest service to get a list of user titles).
However, the v4 schema itself does not support links. I see how the v4 hyper-schema draft does support links, and is referencing the v4 schema draft, but I cannot figure out how to implement the hyper-schema, which probably means I'm missing some fundamental part of the 'how to use and implement json schema' knowledge.
I found the following on http://json-schema.org/latest/json-schema-hypermedia.html:
JSON Schema is a JSON based format for defining the structure of JSON data. This document specifies hyperlink- and hypermedia-related keywords of JSON Schema.
The term JSON Hyper-Schema is used to refer to a JSON Schema that uses these keywords.
If the draft hyper-schema uses the draft schema keywords, then why is the 'links' keyword nowhere to be found in the schema?
Is my (or any) custom schema actually a hyper schema? And if so, is anything that implements a (custom or draft) json schema called a hyper schema?
I could fire off a hundred questions. Main question: what is the relation between a Schema and a Hyper Schema, and how should I implement a schema for a form that needs more types than defined in the v4 draft?
Sorry for the length of this answer. Hopefully it's helpful.
I too struggled to understand how to validate a particular link in Hyper-Schema, so I implemented each link as a base JSON Schema and then tied the links together with a Hyper-Schema.
Definitions (definitions.json):
{
  "$schema" : "http://json-schema.org/schema#",
  "definitions" : {
    "id" : {
      "type" : "integer",
      "minimum" : 1,
      "exclusiveMinimum" : false
    },
    "foreign_key_id" : {
      "$ref" : "#/definitions/id"
    },
    "season_name" : {
      "type" : "string",
      "minLength" : 1,
      "maxLength" : 1,
      "pattern" : "^[A-T]{1,1}$"
    },
    "currency" : {
      "type" : "integer"
    },
    "shares" : {
      "type" : "integer"
    },
    "username" : {
      "type" : "string",
      "minLength" : 1,
      "maxLength" : 19,
      "pattern" : "^[^ ]{1,19}$"
    },
    "name" : {
      "type" : "string",
      "minLength" : 1,
      "maxLength" : 64,
      "pattern" : "^[A-Za-z0-9][A-Za-z0-9_\\- ]*$"
    },
    "email" : {
      "type" : "string",
      "format" : "email"
    },
    "timestamp" : {
      "type" : "string",
      "format" : "date-time"
    }
  }
}
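Each of these files is an ordinary JSON Schema, so it can be exercised on its own with any validator. A minimal Python sketch, assuming the jsonschema package is installed and the file above is saved as definitions.json:

import json
import jsonschema  # assumption: pip install jsonschema

with open("definitions.json") as f:
    definitions = json.load(f)

# Validate sample values against the shared "season_name" definition.
season_schema = definitions["definitions"]["season_name"]
jsonschema.validate("A", season_schema)       # passes: one character in the A-T range
try:
    jsonschema.validate("AB", season_schema)  # fails: maxLength is 1
except jsonschema.ValidationError as e:
    print(e.message)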
Base object schema:
{
  "$schema" : "http://json-schema.org/schema#",
  "type" : "object",
  "properties" : {
    "id" : { "$ref" : "definitions.json#/definitions/id" },
    "season_name" : { "$ref" : "definitions.json#/definitions/season_name" },
    "user_id" : { "$ref" : "definitions.json#/definitions/foreign_key_id" },
    "coins" : { "$ref" : "definitions.json#/definitions/currency" },
    "bonus_coins" : { "$ref" : "definitions.json#/definitions/currency" },
    "created_at" : { "$ref" : "definitions.json#/definitions/timestamp" },
    "updated_at" : { "$ref" : "definitions.json#/definitions/timestamp" }
  },
  "required" : [
    "id",
    "season_name",
    "user_id",
    "coins",
    "bonus_coins",
    "created_at",
    "updated_at"
  ],
  "additionalProperties" : false
}
POST schema (account_request_post.json):
{
  "$schema" : "http://json-schema.org/schema#",
  "type" : "object",
  "properties" : {
    "season_name" : { "$ref" : "definitions.json#/definitions/season_name" },
    "user_id" : { "$ref" : "definitions.json#/definitions/foreign_key_id" }
  },
  "required" : [
    "season_name",
    "user_id"
  ],
  "additionalProperties" : false
}
Hyper Schema:
{
  "$schema" : "http://json-schema.org/schema#",
  "type" : "object",
  "links" : [
    {
      "description" : "Create a new account.",
      "href" : "accounts",
      "method" : "POST",
      "rel" : "create",
      "title" : "Create",
      "schema" : { "$ref" : "account_request_post.json#" }
    },
    {
      "description" : "List accounts.",
      "href" : "accounts",
      "method" : "GET",
      "rel" : "index",
      "title" : "List"
    }
  ]
}
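A client needs nothing special to consume the Hyper-Schema: it is plain JSON, so the links array can be read directly. A minimal Python sketch (the file name hyper_schema.json is hypothetical):

import json

# Load the Hyper-Schema shown above.
with open("hyper_schema.json") as f:
    hyper = json.load(f)

# Find the "create" link and the request schema it points to.
create = next(link for link in hyper["links"] if link["rel"] == "create")
print(create["method"], create["href"])  # POST accounts
print(create["schema"]["$ref"])          # account_request_post.json#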
JSON Hyper-Schema is the part of the JSON Schema standard dedicated to hyperlink and hypermedia keywords and rules.
The "links" keyword is defined in the hyper-schema section of the draft. It is indeed part of JSON Schema, even though it is defined in a separate draft document.
If you are defining an API interface, it is likely you want to use hyper-schema. If you are just defining validation contracts, plain JSON Schema keywords are enough.