In GoodData's ETL tool there is a key-value store that you can use to keep state between ETL runs: http://developer.gooddata.com/cloudconnect/manual/lookup-table-functions-ctl2.html
Is there a way to set and read these values through the REST API?
There is something called project metadata. It holds metadata on a per-project level. It is what you can see if you go to the Project explorer in CloudConnect and look at the customer properties.
The data can be read like this:
GET /gdc/projects/<projectName>/dataload/metadata
You can also read just a particular key:
GET /gdc/projects/<projectName>/dataload/metadata/<key>
And update an existing key:
PUT /gdc/projects/<projectName>/dataload/metadata/<key>
Or delete it:
DELETE /gdc/projects/<projectName>/dataload/metadata/<key>
Or create a new one:
POST /gdc/projects/<projectName>/dataload/metadata/ {"metadataItem" : {"key" : "some_key", "val" : "some_val"}}
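For completeness, here is a minimal sketch of calling these endpoints with Python's requests library. It assumes you already have an authenticated session (for example a requests.Session carrying your GoodData auth cookies); the host, project id and key names are placeholders, and the PUT payload shape is assumed to mirror the POST payload:
import requests

host = "https://secure.gooddata.com"
project = "project_id"  # the <projectName> part of the endpoints above
session = requests.Session()  # assumed to already carry valid GoodData authentication
base = f"{host}/gdc/projects/{project}/dataload/metadata"

# Create a new key
session.post(base, json={"metadataItem": {"key": "some_key", "val": "some_val"}})

# Read all keys, or a single key
print(session.get(base).json())
print(session.get(f"{base}/some_key").json())

# Update an existing key (payload shape assumed to mirror the POST payload), then delete it
session.put(f"{base}/some_key", json={"metadataItem": {"key": "some_key", "val": "new_val"}})
session.delete(f"{base}/some_key")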
Another way is to use the GoodData Ruby SDK (https://github.com/gooddata/gooddata-ruby/):
client = GoodData.connect('username', 'pass')
project = client.projects('project_id')
metadata = project.metadata   # reads all metadata for the project
puts metadata.inspect
You can also set a metadata value like this:
project.set_metadata('key', 'val')
I'm new to Symfony and API Platform, and I want to develop an API with specific routes.
What I'm looking to do is create a POST request with nested resources to add relationships between tables.
For example, I have 3 tables (users, periods, articles). I need to create a POST request to add a new article with the following structure:
URL: api/:userid/:period/item
:userID = user ID
:period = Period name
name = element name
This request must create a new article in my "articles" table, storing the user identifier, the period name, and the article name passed as parameters.
So my question is: how do I pass multiple parameters in my path and save them in the database using API Platform?
Thanks in advance!
You can use custom routes with API Platform, which allow you to create a route that corresponds to a custom query, but you need to have this data available before setting it in your API Platform path.
First of all, I would use the query builder to create the query that gets the data you need; then you can use your method directly in your entity (more here: https://api-platform.com/docs/core/controllers/).
You can set the route you want inside the path of the route and declare the different arguments you need like this:
'path' => '/books/{id}/publication'
Here id is the argument coming from your repository function.
I want to build a temporary FeatureClass which contains temporary Features, such as points, that are not needed later in the program.
I searched the ArcObjects API reference, but I can't find an efficient way to solve this problem. So how can I build a temporary "container" to store some temporary Features?
Should I first use CreateFeatureClass to build a real FeatureClass and later delete it? I don't like this approach because I have to deal with the CLSID machinery.
PS: This "container" must be able to return a Cursor.
I think you should use an InMemoryWorkspace.
// Create an in-memory workspace factory and an in-memory workspace
IWorkspaceFactory2 objWorkspaceFactory = new InMemoryWorkspaceFactoryClass();
IWorkspaceName objWorkspaceName = objWorkspaceFactory.Create(string.Empty, p_strName, null, 0);
// Cast the workspace name to IName and open the workspace
IName objName = (IName)objWorkspaceName;
IWorkspace objWorkspace = (IWorkspace)objName.Open();
Now, using this workspace, you can create temporary feature classes (perform searches, get cursors, and then delete the feature classes).
I believe that in your case an in-memory workspace is more efficient than working with a shapefile or a personal geodatabase.
You can use the IScratchWorkspaceFactory2 interface, which is used to create temporary personal geodatabases in the temp directory. You can find this directory by looking at the %TEMP% environment variable. The scratch personal geodatabase will have the name mx<N>.mdb, where <N> is the lowest positive number that uniquely identifies the geodatabase.
IScratchWorkspaceFactory2 factory = new ScratchWorkspaceFactoryClass();
var selectionContainer = factory.DefaultScratchWorkspace;
I am using Camunda as the tool for orchestration of microservices. At a later time, I need the generated process instance id to continue a particular process by using it in messageEventReceived(). Code as follows:
val processid = getProcessID(key1, key2)   // process instance id previously saved by us
val runtimeService = processengine.getRuntimeService
val subscription = runtimeService.createEventSubscriptionQuery
  .eventType("message")
  .eventName(eventname)
  .processInstanceId(processid)
  .singleResult
runtimeService.messageEventReceived(subscription.getEventName, subscription.getExecutionId)
At the moment, the processid is saved to and later retrieved from the database using the getProcessID(...) function when necessary. Is this proper?
Does Camunda already keep the list of process ids in its own database? If so, how do I retrieve a particular process instance id given only composite key(s)? Is that even possible?
That is the common way. You can also use the public API to get the process instance and its id via the process definition key.
See the following example from the documentation:
runtimeService.createProcessInstanceQuery()
.processDefinitionKey("invoice")
.list();
For your given example there is also a simpler way. It is possible to correlate the message via the runtime service.
See this example from the documentation:
runtimeService.createMessageCorrelation("messageName")
.processInstanceBusinessKey("AB-123")
.setVariable("payment_type", "creditCard")
.correlate();
You can use
runtimeService.createProcessInstanceQuery().list();
The query supports fluent criteria for filtering, for example on the process definition key, variables, businessKey, ...
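If you talk to Camunda through its REST API instead of the embedded Java API, the same query and correlation are available over HTTP. A minimal sketch with Python's requests; the engine URL, definition key, business key and variable are placeholders taken from the examples above:
import requests

base = "http://localhost:8080/engine-rest"  # placeholder Camunda REST API root

# Find process instances by process definition key and business key
instances = requests.get(
    f"{base}/process-instance",
    params={"processDefinitionKey": "invoice", "businessKey": "AB-123"},
).json()
process_instance_id = instances[0]["id"] if instances else None

# Correlate a message by business key and pass a variable along
requests.post(
    f"{base}/message",
    json={
        "messageName": "messageName",
        "businessKey": "AB-123",
        "processVariables": {"payment_type": {"value": "creditCard", "type": "String"}},
    },
)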
I am using an indexer to sync data from my SQL database to Azure Search. I have a field in my SQL view which contains XML data; the column contains a list of strings. The corresponding field in my Azure Search index is a Collection(Edm.String).
On checking some documentation, I found that the indexer does not convert XML (SQL) to Collection (Azure Search).
Is there any workaround for how I can create the Collection from the XML data?
P.S. I am extracting the data from a view, so I can change the XML to JSON if needed.
UPDATE on October 17, 2016: Azure Search now automatically converts a string coming from a database to a Collection(Edm.String) field if the data represents a JSON string array, for example: ["blue", "white", "red"].
Old response: Great timing, we just added a new "field mappings" feature that allows you to do this. This feature will be deployed sometime early next week. I will post a comment on this thread when it is rolled out in all datacenters.
To use it, you indeed need to use JSON. Make sure your source column contains a JSON array, for example ["hello", "world"]. Then, update your indexer definition to contain the new fieldMappings property:
"fieldMappings" : [ { "sourceFieldName" : "YOUR_SOURCE_FIELD", "targetFieldName" : "YOUR_TARGET_FIELD", "mappingFunction" : { "name" : "jsonArrayToStringCollection" } } ]
NOTE: You'll need to use API version 2015-02-28-Preview to add fieldMappings.
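As an illustration, here is a minimal sketch of adding the field mapping by updating the indexer definition over the REST API with Python's requests library. The service name, indexer, data source, index and api-key are placeholders; only the fieldMappings part comes from the snippet above:
import requests

service = "your-search-service"   # placeholder service name
api_key = "YOUR_ADMIN_API_KEY"    # placeholder admin key
url = f"https://{service}.search.windows.net/indexers/your-indexer?api-version=2015-02-28-Preview"

indexer_definition = {
    "name": "your-indexer",
    "dataSourceName": "your-datasource",
    "targetIndexName": "your-index",
    "fieldMappings": [
        {
            "sourceFieldName": "YOUR_SOURCE_FIELD",
            "targetFieldName": "YOUR_TARGET_FIELD",
            "mappingFunction": {"name": "jsonArrayToStringCollection"},
        }
    ],
}

resp = requests.put(
    url,
    headers={"Content-Type": "application/json", "api-key": api_key},
    json=indexer_definition,
)
resp.raise_for_status()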
HTH,
Eugene
I have a field field10 which got created by accident when I updated a particular record in my index. I want to remove this field and all its contents from my index, and recreate it with the mapping below:
"mytype":{
"properties":{
"field10":{
"type":"string",
"index":"not_analyzed",
"include_in_all":"false",
"null_value":"null"
}
}
}
When I try to create this mapping using the Put Mapping API, I get an error: {"error":"MergeMappingException[Merge failed with failures {[mapper [field10] has different index values, mapper [field10] has different index_analyzer, mapper [field10] has different search_analyzer]}]","status":400}.
How do I change the mapping of this field? I don't want to reindex millions of records just for this small accident.
Thanks
AFAIK, you can't remove a single field and recreate it.
You also can't just modify a mapping and have everything reindexed automagically. Imagine that you don't store _source: how could Elasticsearch know what your data looked like before it was indexed?
But you can probably modify your mapping using a multifield, with field10.field10 keeping the old mapping and field10.new using the new analyzer.
If you don't reindex, only new documents will have content in field10.new.
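For illustration, a hedged sketch of what that multifield addition could look like over the REST API, using Python's requests. The index name, type name and localhost URL are assumptions, and the top-level field10 definition must match your existing mapping so the merge does not conflict:
import requests

# Hypothetical index/type names; field10 keeps its current top-level mapping,
# while a sub-field field10.new is added with the desired settings.
mapping = {
    "mytype": {
        "properties": {
            "field10": {
                "type": "string",  # must match the existing top-level mapping of field10
                "fields": {
                    "new": {
                        "type": "string",
                        "index": "not_analyzed",
                        "include_in_all": "false",
                        "null_value": "null",
                    }
                },
            }
        }
    }
}

r = requests.put("http://localhost:9200/myindex/_mapping/mytype", json=mapping)
print(r.json())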
If you want to manage old documents, you have to:
Send all your docs again (this will update everything), i.e. reindex; you can use the scan & scroll API to fetch your old documents (see the reindex sketch after the update example below)
Try to update your docs with the Update API
For the second option, you can probably run a query like:
curl -XPOST localhost:9200/crunchbase/person/1/_update -d '{
"script" : "ctx._source.field10 = ctx._source.field10"
}'
But, as you can see, you have to run it document by document, and I think it will take more time than reindexing everything with the Bulk API.
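If you go the reindex route instead, here is a minimal sketch using the scan and bulk helpers of the official Python client. It assumes you have already created new_index with the corrected mapping; the index names and connection URL are placeholders:
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan, bulk

es = Elasticsearch("http://localhost:9200")

def actions():
    # scan & scroll over the old index and re-emit every document into the new one
    for hit in scan(es, index="old_index", query={"query": {"match_all": {}}}):
        yield {
            "_index": "new_index",
            "_type": hit["_type"],  # kept for older Elasticsearch versions that still use types
            "_id": hit["_id"],
            "_source": hit["_source"],
        }

bulk(es, actions())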
Does it help?