I am using the following RallyApi service to communicate with RallyDev:
https://rally1.rallydev.com/slm/webservice/1.40/RallyService
I have the following method:
public HierarchicalRequirement GetFeedbackById(string usid)
{
    var query = string.Format("(FormattedID = \"{0}\")", usid);
    const string orderByString = "CreationDate desc";
    var rallyService = GetRallyService();
    var rtnval = rallyService.query(Workspace, Projs["XXX"], true, true, "HierarchicalRequirement",
        query, orderByString, true, 1, 20).Results[0] as HierarchicalRequirement;
    return rtnval;
}
Although I can successfully retrieve the "HierarchicalRequirement" object using the "FormattedID", I am not able to load the associated "ConversationPost" objects for this story, since all the nested complex objects of the "HierarchicalRequirement" contain only the "ref" and "reffield" properties and nothing else.
Could you please let me know if there is a way to eagerly load all the associated discussions when querying for the story, or whether there is a query like the following:
rallyService.query(Workspace, Projs["XXX"], true, true, "ConversationPost", query, orderByString, true, 1, 20)
Using the above, can I search for discussions (ConversationPost) by FormattedID?
Thanks for your help.
Regards,
Varun
You're right on target with your use of rallyService.read(). With SOAP, even with fetchFullObjects=true, any Artifact attributes that are themselves Rally objects are hydrated only with refs to those objects.
Especially if you're just getting started with building your integration, I'd highly recommend using REST:
http://developer.help.rallydev.com/rest-apis
instead of SOAP.
REST is more robust and more performant, and the soon-to-be-released Webservices API 1.41 will be the final API release to have SOAP support. Webservices 2.x will be REST-only, so using REST will be essential for anyone wanting new Webservices features moving forward.
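If it helps, here is a rough REST sketch (untested, and in Java purely for illustration). A common approach is to query ConversationPost by the owning Artifact's ref rather than by FormattedID, so you would first resolve the story's ref (e.g. from your existing story query) and then query the posts. The object ID 12345 and the auth header below are placeholders:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class RallyDiscussions {
    public static void main(String[] args) throws Exception {
        // Query ConversationPost by the owning artifact's ref (placeholder object ID).
        String query = URLEncoder.encode("(Artifact = /hierarchicalrequirement/12345)", "UTF-8");
        URL url = new URL("https://rally1.rallydev.com/slm/webservice/1.40/conversationpost"
                + "?query=" + query + "&fetch=true");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Basic <base64 user:password>"); // placeholder credentials
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line); // the JSON QueryResult containing the ConversationPosts
            }
        }
    }
}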
I am using Pivotal GemFire 9.0.0 with 1 Locator and 1 Server. The Server has a Region called "submissions", like below -
<gfe:replicated-region id="submissionsRegion" name="submissions"
statistics="true" template="replicateRegionTemplate">
...
</gfe:replicated-region>
I am getting Region as null when executing the following code -
Region<K, V> region = clientCache.getRegion("submissions");
Surprisingly, the same ClientCache returns all the records when I query using OQL and QueryService as shown below -
String queryString = "SELECT * FROM /submissions";
QueryService queryService = clientCache.getQueryService();
Query query = queryService.newQuery(queryString);
SelectResults results = (SelectResults) query.execute();
I am initializing my ClientCache like this -
ClientCache clientCache = new ClientCacheFactory()
.addPoolLocator("localhost", 10479)
.set("name", "MyClientCache")
.set("log-level", "error")
.create();
I am really baffled by this. Any pointers or help would be great.
You need to configure your ClientCache (either through a cache.xml or pure GemFire API) with the regions as well. Using your example:
ClientRegionFactory regionFactory = clientCache.createClientRegionFactory(ClientRegionShortcut.PROXY);
Region region = regionFactory.create("submissions");
ClientRegionShortcut.PROXY is used here just for the sake of simplicity; you should use the shortcut that best meets your needs.
The OQL works as expected because you are obtaining the QueryService through the ClientCache.getQueryService() method (instead of ClientCache.getLocalQueryService()), so the query is actually executed on Server Side.
You can get more information about how to configure the Client/Server topology in
Client/Server Configuration.
Hope this helps.
Cheers.
Yes, you need to "define" the corresponding client-side Region, matching the server-side REPLICATE Region by name (i.e. "submissions"). This requirement is actually independent of the server Region's DataPolicy type (e.g. REPLICATE or PARTITION).
This is necessary since not every client wants to know about, or even needs to have, data/events from every possible server Region. Of course, this is also configurable through subscriptions and "interest registration" (with Client/Server Event Messaging, or alternatively, CQs), as sketched below.
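For instance, here is a rough sketch of a client-side CQ using the plain GemFire API (classes from com.gemstone.gemfire.cache.query and com.gemstone.gemfire.cache.util; it assumes your pool was created with subscriptions enabled, e.g. via ClientCacheFactory.setPoolSubscriptionEnabled(true)):

// Rough sketch: a continuous query on "submissions" delivering events to the client.
CqAttributesFactory cqAttributes = new CqAttributesFactory();
cqAttributes.addCqListener(new CqListenerAdapter() {
    @Override
    public void onEvent(CqEvent event) {
        // Illustrative only; react to created/updated/destroyed entries here.
        System.out.println("CQ event for key: " + event.getKey());
    }
});

QueryService queryService = clientCache.getQueryService();
CqQuery cq = queryService.newCq("allSubmissions",
        "SELECT * FROM /submissions", cqAttributes.create());
cq.executeWithInitialResults(); // or cq.execute() if you only want subsequent events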
Anyway, you can completely avoid the use of the GemFire API directly or even GemFire's native cache.xml (highly recommend avoiding) by using either SDG's XML namespace...
<gfe:client-cache properties-ref="gemfireProperties" ... />
<gfe:client-region id="submissions" shortcut="PROXY"/>
Or by using Spring JavaConfig with SDG's API...
@Configuration
class GemFireConfiguration {

    Properties gemfireProperties() {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("log-level", "config");
        ...
        return gemfireProperties;
    }

    @Bean
    ClientCacheFactoryBean gemfireCache() {
        ClientCacheFactoryBean gemfireCache = new ClientCacheFactoryBean();
        gemfireCache.setClose(true);
        gemfireCache.setProperties(gemfireProperties());
        ...
        return gemfireCache;
    }

    @Bean(name = "submissions")
    ClientRegionFactoryBean submissionsRegion(GemFireCache gemfireCache) {
        ClientRegionFactoryBean submissions = new ClientRegionFactoryBean();
        submissions.setCache(gemfireCache);
        submissions.setClose(false);
        submissions.setShortcut(ClientRegionShortcut.PROXY);
        ...
        return submissions;
    }

    ...
}
The "submissions" Region can be wrapped with SDG's GemfireTemplate, which will handle getting the "correct" QueryService on your behalf when running queries using the find(..) method.
Of course, you may be interested in making your client "submissions" Region a CACHING_PROXY too. You will then need to register "interest" in the keys or data of interest. CQs are the best way to do this, as they use query criteria to define the data of "interest".
CACHING_PROXY is exactly what it sounds like: it caches data locally in the client based on the interest policies. This also gives you the ability to use the "local" QueryService to query data locally, avoiding the network hop.
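A quick sketch of that combination with the plain GemFire API (interest registration requires a subscription-enabled pool; the key/value types are illustrative):

// Sketch: CACHING_PROXY Region with interest in all keys, then a purely local query.
Region<String, Object> submissions = clientCache
        .<String, Object>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .create("submissions");
submissions.registerInterest("ALL_KEYS");

// Local query: runs against the client's local cache, no network hop.
QueryService localQueryService = clientCache.getLocalQueryService();
SelectResults<?> local = (SelectResults<?>)
        localQueryService.newQuery("SELECT * FROM /submissions").execute();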
Anyway, many options here.
Cheers,
John
I am trying to migrate my code to Core.
I was using DocumentDB TransientFaultHandling package, but I can't seem to find it for a Core library.
Is it still best practice to use it, or are there other options for achieving the same results?
TIA
The current SDK (both Core and Full Framework) already includes the fault handling that was part of the TransientFaultHandling package. It is not exactly the same, since you can't define custom exponential back-off logic, but it covers the most common scenarios.
It's on the ConnectionPolicy settings:
var _dbClient = new DocumentClient(new Uri("Db_uri"), "Db_key", new ConnectionPolicy()
{
    MaxConnectionLimit = 100,
    ConnectionMode = ConnectionMode.Direct,
    ConnectionProtocol = Protocol.Tcp,
    RetryOptions = new RetryOptions() { MaxRetryAttemptsOnThrottledRequests = 3, MaxRetryWaitTimeInSeconds = 60 }
});
Ember: 1.5.1 ember.js
Ember Data: 1.0.0-beta.7.f87cba88
I have a need for asymmetrical (de)serialization for one relationship type: sideloaded records on deserializing and embedded on serializing.
I have asked for this in the standard way:
RailsEmberTest.PlanItemSerializer = DS.ActiveModelSerializer.extend(DS.EmbeddedRecordsMixin, {
    attrs: {
        completions: { serialize: 'records', deserialize: 'ids' } // embedded: 'always'
    }
});
However, it doesn't seem to work. Following the execution through, I find that at line 498 of Ember Data, the serializer decides whether or not to embed a relationship:
embed = attrs && attrs[key] && attrs[key].embedded === 'always';
At this stage, the attrs hash is well-formed, with completions containing the attributes as above. However, this line results in embed being false, and consequently the record is not embedded.
Overriding the value of embed to true makes it all hunky-dory.
Any ideas why Ember Data is ignoring the settings? I suspect that in my version the only available option is embedded, and that I need to upgrade to a later version to take advantage of the asymmetrical serialize and deserialize settings.
However, given the possible manifold changes I am fearful of upgrading!
I'd be very grateful for your advice.
Courtesy of the London Ember meetup, I now know that it was simply down to the version of Ember Data! Now upgraded to the latest beta with no trouble.
A few months back I was working on an OData WCF project and had some problems with parsing custom headers for token auth (apiKey).
At that time, being quite a noob (still am!), I posted this SO question: JayData oData request with custom headers
Today I am working on a new project with the JayData OData server and client libraries, and this:
application.context.prepareRequest = function (r) {
r[0].headers['apikey'] = '123456';
};
was working fine until I had to do a MERGE request. I found out that the MERGE request was somehow overriding my headers, so I investigated further.
At first it appears that in oDataProvider.js (~line 617), in the _saveRest method, the headers are not inherited:
request = {
requestUri: this.providerConfiguration.oDataServiceHost + '/',
headers: {
MaxDataServiceVersion: this.providerConfiguration.maxDataServiceVersion
}
};
but a few lines later we get:
this.context.prepareRequest.call(this, requestData);
which "should" call my own prepareRequest, but doesnt... Instead it still points to:
//Line 11302 jaydata.js
prepareRequest: function () { },
which of course does... nothing! Funnily enough, when you execute a simple GET, the same code, supposedly on the same context instance, works and points to my prepareRequest override.
I can assert with reasonable confidence that the context used for GET and MERGE is somehow not the same instance. I can't see, however, any place where the context instance is reassigned.
Has anyone got a clue?
PS: this is NOT a CORS issue. My OPTIONS request passes fine, and manually feeding the headers in oDataProvider works.
More
I followed the lead on different context instances and found something interesting. Calling EntitySet.save() ends up calling the EntityContext constructor. See the trace:
$data.Class.define.constructor (jaydata.js:10015)
EntityContext (VM110762:7)
Service (VM110840:8)
storeToken.factory (jaydata.js:14166)
$data.Class.define._getContextPromise (jaydata.js:13725)
$data.Class.define._getStoreContext (jaydata.js:13700)
$data.Class.define._getStoreEntitySet (jaydata.js:13756)
$data.Class.define.EntityInstanceSave (jaydata.js:13837)
$data.Entity.$data.Class.define.save (jaydata.js:9774)
(anonymous function) (app.js:162) // My save()
That explains why I get two different instances...
Hack
Replacing the prepareRequest function directly in the class definition works, but it's ugly! For now I can cope with this:
$data.EntityContext.prototype.prepareRequest = function (r) {
r[0].headers['apikey'] = '12345';
};
This works fine as long as you only need to talk to a single endpoint.
Final word based on my experience
As much as I like JayData, it is obvious that they created a monster and it's getting out of their hands (a poor forum, no community, half-documented, ...).
I chose JD because I was lazy and wanted to keep working with my old WCF DataService. Switching to Web API seemed wrong or too much work for me.
Also, as a .NET dev, I liked the strong typing of my entities and the ability to work with a concrete model generated from the JayData tools. However, in the end, it was adding confusion: every time my server-side model changed, I had to fetch the new metadata and scaffold a new entityModel.
I ended up switching to Web API and migrating my data service layer to Breeze. And seriously, it's a breeze to work with!
The documentation is absolutely brilliant, and here on SO you can always count on Ward or Jay Traband to reply with a great deal of professionalism.
In the end I realize this should probably be more of a wiki entry than a question...
I need to obtain the parent issue of a given issue via the SOAP API, or even using the database. It seems like a very basic objective; however, I haven't found any useful information on the internet. Also, I couldn't find any field in Jira's DB tables (jiraissue) that stores the parent issue of an issue.
Additional info: Jira 5.1, C# .NET
As far as I know there is no way to do this using SOAP directly.
One possible solution would be to use the Jira Scripting Suite. You can create a post-function, run on the transition into the Open status, that copies the parent to a custom field using getParentObject; then you could use the SOAP function getCustomFields to read the parent. A sketch of the idea follows.
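For illustration only, here is roughly what such a post-function would do, expressed with the Jira 5.x Java API (in the Scripting Suite itself you would write the equivalent calls in Jython; the "Parent Key" custom field name is made up):

import com.atlassian.jira.component.ComponentAccessor;
import com.atlassian.jira.issue.Issue;
import com.atlassian.jira.issue.MutableIssue;
import com.atlassian.jira.issue.fields.CustomField;

public class CopyParentKeyFunction {
    public void copyParentKey(MutableIssue issue) {
        Issue parent = issue.getParentObject(); // null unless the issue is a sub-task
        if (parent != null) {
            CustomField parentKeyField = ComponentAccessor.getCustomFieldManager()
                    .getCustomFieldObjectByName("Parent Key"); // hypothetical field
            issue.setCustomFieldValue(parentKeyField, parent.getKey());
        }
    }
}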
Another solution via REST API:
Issue issue = getRestClient().getIssueClient().getIssue(task.getKey(), new NullProgressMonitor());
Field issueParent = issue.getField("parent");
if (issueParent != null) {
    JSONObject jsonParent = (JSONObject) issueParent.getValue();
    BasicIssue parsedIssue = null;
    try {
        parsedIssue = new BasicIssueJsonParser().parse(jsonParent);
    } catch (JSONException e1) {
        e1.printStackTrace();
    }
    System.out.println("parent key: " + parsedIssue.getKey());
}