I need a good example of using the Lookback API to get the data for a burn-up chart. I see some limited questions and responses on the API, but no examples of how I would use it to do so. I need to get the current scope in story points and the story points completed.
Sorry for the scarcity of available examples. More and better examples will be coming as the LBAPI beta matures. I'd definitely recommend that you become familiar with the Lookback API (LBAPI) Documentation, as there are good examples there for formulating queries.
For a burnup, let's say you want to get the state Snapshots for an Iteration going from 15-Jan-2013 through 30-Jan-2013, and that the Iteration applies to a Project hierarchy that is four-deep. The following LBAPI query would obtain the PlanEstimate, ToDo, and Schedule State for Stories scheduled into that Iteration:
{
    find: {
        _TypeHierarchy: "HierarchicalRequirement",
        Children: null,
        _ValidFrom: {
            $gte: "2013-01-15TZ",
            $lt: "2013-01-30TZ"
        },
        Iteration: {
            $in: [
                12345678910,
                12345678911,
                12345678912,
                12345678913
            ]
        }
    },
    fields: [
        "PlanEstimate",
        "ToDo",
        "ScheduleState"
    ]
}
Where:
$in: [
    12345678910,
    12345678911,
    12345678912,
    12345678913
]
are the ObjectIDs of the Iterations named "Iteration 1". It's probably easiest to get these ObjectIDs from a standard WSAPI query on Iterations: (Name = "Iteration 1"). For an Iteration copied into a four-deep Project hierarchy, we would see four Iteration OIDs similar to the above.
For charting, the toughest part right now is finding an easy way to deal with the time-series data. The most robust way to query and process LBAPI data at the moment is to work directly against the REST endpoint and process the returned JSON results in your own code.
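As a rough sketch of that approach, here is what a direct query might look like from JavaScript with fetch. The workspace ObjectID, endpoint path and authentication header are placeholders you would adapt to your subscription, and the aggregation at the end is deliberately naive (a real burnup buckets snapshots by day):

// Sketch only: workspace OID, API key and the summing logic are placeholders.
var WORKSPACE_OID = 12345;   // your workspace ObjectID
var LBAPI_URL = 'https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/' +
    WORKSPACE_OID + '/artifact/snapshot/query.js';

var query = {
    find: {
        _TypeHierarchy: 'HierarchicalRequirement',
        Children: null,
        _ValidFrom: { $gte: '2013-01-15TZ', $lt: '2013-01-30TZ' },
        Iteration: { $in: [12345678910, 12345678911, 12345678912, 12345678913] }
    },
    fields: ['PlanEstimate', 'ToDo', 'ScheduleState'],
    hydrate: ['ScheduleState']   // ask for display values rather than internal ids
};

fetch(LBAPI_URL, {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        zsessionid: '_YOUR_API_KEY'   // placeholder credential; basic auth also works
    },
    body: JSON.stringify(query)
}).then(function (res) {
    return res.json();
}).then(function (data) {
    // Each entry in data.Results is one state snapshot of a story.
    var scope = 0, accepted = 0;
    data.Results.forEach(function (s) {
        scope += s.PlanEstimate || 0;
        if (s.ScheduleState === 'Accepted') {
            accepted += s.PlanEstimate || 0;
        }
    });
    console.log('scope:', scope, 'accepted:', accepted);
});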
For JavaScript apps, the preferred toolkit for processing the data and turning it into a chart is AppSDK2, specifically the SnapshotStore.
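For completeness, a minimal sketch of loading the same snapshots through the SnapshotStore inside an AppSDK2 app might look like this (the config names reflect my reading of the SDK docs, so double-check them against the version you are running; the Iteration OIDs are placeholders):

// Sketch only: assumes it runs inside a Rally App where Ext and AppSDK2 are available.
var store = Ext.create('Rally.data.lookback.SnapshotStore', {
    fetch: ['PlanEstimate', 'ToDo', 'ScheduleState'],
    hydrate: ['ScheduleState'],   // return display values instead of internal ids
    filters: [
        { property: '_TypeHierarchy', value: 'HierarchicalRequirement' },
        { property: 'Children', value: null },
        { property: '_ValidFrom', operator: '>=', value: '2013-01-15TZ' },
        { property: '_ValidFrom', operator: '<', value: '2013-01-30TZ' },
        { property: 'Iteration', operator: 'in',
          value: [12345678910, 12345678911, 12345678912, 12345678913] }
    ]
});

store.load({
    callback: function (records) {
        // records are snapshot models; hand them to Lumenize or your own charting code
        console.log('loaded ' + records.length + ' snapshots');
    }
});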
The Lumenize JavaScript library is separate from the LBAPI, but it was developed by Rally's director of analytics and is bundled in the SDK. You can find some examples of using the LBAPI and Lumenize to produce charts in some Rally-internal and Rally-customer Hackathon projects here:
https://github.com/RallyHackathon
Please be cautious with these examples for a couple of reasons:
Several aspects of the Lumenize namespace will be changing/renamed for clarity
There's a bug in the current version of Lumenize where its timeSeriesCalculator does not correctly account for stories deleted or reparented.
Hopefully an updated version of AppSDK2 will be bundled and released soon to consolidate the Lumenize namespace and resolve the bug, so that there is better glue between AppSDK2 and the LBAPI for JavaScript app development.
Unfortunately, the .NET, Java and Python toolkits have not yet been updated to support the Lookback API. As a result, you'll have to do an HTTP POST to the Lookback API's REST endpoint directly, with a body similar to the one Mark W listed above and Content-Type 'application/json'.
I'd recommend using the Chrome extension 'XHR Poster' to experiment with what you're sending from a browser:
https://chrome.google.com/webstore/detail/xhr-poster/akdbimilobjkfhgamdhneckaifceicen
I decided to take my application to the next level by creating a RESTful API.
I think I understand the general principles, and I have read some tutorials.
My model is pretty simple. I have Projects and Tasks.
So to get the list of Tasks for a Project you call:
GET /project/:id/tasks
to get a single Task:
GET /task/:id
To create a Task in a Project
POST /task
payload: { projectId: :id }
To edit a Task
PATCH /task/:taskId
payload: { data to be changed }
etc...
So far, so good.
But now I want to implement an operation that moves a Task from one Project to another.
My first guess was to do:
PATCH /task/:taskId
payload: { projectId: :projectId }
but I do not feel comfortable with revealing the internal structure of my backend to the frontend.
Of course, it is just a convention and has nothing to do with security, but I would feel better with something like:
PATCH /task/:taskId
payload: { newProject: :projectId }
where there is no direct relation between the 'newProject' and the real column in the database.
But then, the next operation comes.
I want to copy ALL tasks from Project A to Project B with one API call.
PUT /task
payload: { fromProject: :projectA, toProject: :projectB }
Is it a correct RESTful approach? If not - what is the correct one?
What is missing here is "a second verb".
You can see that we are creating new tasks, hence 'PUT', but we are also 'copying', which is implied by fromProject and toProject.
Is it a correct RESTful approach? If not - what is the correct one?
To begin, think about how you would do it in a web browser: the world wide web is the reference implementation for the REST architectural style.
One of the first things that you will notice: on the web, we are almost always using POST to make changes to the server. You fill in a form in a browser, submit the form, the browser takes information from the input controls of the form to create the HTTP request body, the server figures out how to do the work that is described.
What we have in HTTP is a standardized semantics for messages that manipulate individual documents ("resources"); doing useful work is a side effect of manipulating documents (see Webber 2011).
The trick of POST is that it is the method whose standardized meaning includes the case where "this method isn't worth standardizing" (see Fielding 2009).
POST /2cc3e500-77d5-4d6d-b3ac-e384fca9fb8d
Content-Type: text/plain

Bob,
Please copy all of the tasks from project A to project B
The request line and headers here are metadata in the transfer of documents over a network domain. That is to say, that's the information we are sharing with the general purpose HTTP application.
The actual underlying business semantics of the changes we are making to documents is not something that the HTTP application cares about -- that's the whole point, after all.
That said - if you are really trying to do manipulation of document hierarchies in general purpose and standardized way, then you should maybe see if your problem is a close match to the WebDAV specifications (RFC 2291, RFC 4918, RFC 3253, etc).
If the constraints described by those documents are acceptable to you, then you may find that a lot of the work has already been done.
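If you do end up exposing the copy as a machine-friendly operation, one hedged way to model it (the /copy-task-jobs resource name is invented for illustration) is to POST a document that describes the copy request and let the server carry it out:

// Sketch only: '/copy-task-jobs' is a made-up resource representing the copy request.
fetch('/copy-task-jobs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        fromProject: 'projectA-id',   // placeholder identifiers
        toProject: 'projectB-id'
    })
}).then(function (res) {
    // e.g. 201 Created, with Location pointing at a document describing the copy
    console.log(res.status, res.headers.get('Location'));
});

The shape is the same as the plain-text message above: the client submits a document describing the work, and POST's deliberately loose standardized meaning covers it.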
I'm trying to figure out if FeathersJS suits my needs. I have looked at several examples and use cases. FeathersJS uses a set of request methods: find, get, create, update, patch and delete. No other methods, let alone custom methods, can be implemented and used, as confirmed in this other SO post.
Let's imagine an application where users can save their app settings. Setting method conventions aside, I would create an endpoint describing the action that is performed by the user. In this case we could have, for instance, /saveSettings, knowing there won't be any setting finding, creating, updating (only some patching) or deleting. I might also need a /getSettings route.
My question is: can every action be reduced down to these request methods? To me, these actions are strongly bound to a specific collection/model. Sometimes, we need to create actions that are not bound to a single collection and could potentially interact with more than one collection/model.
For this example, I'm guessing it would be translated in FeathersJS into a service named Setting which would hold two methods: get() and patch().
If that is the correct approach, it looks to me as if this solution is more server-oriented than client-oriented in the sense that we have to know, client-side, what underlying collection is going to get changed or affected. It feels like we are losing some level of freedom by not having some kind of routing between endpoints and services (like we have in vanilla ExpressJS).
Here's another example: I have a game character that can skill up. When the user decides to skill up a particular skill, a request is sent to the server. This endpoint could look like POST /skillUp. What would it be in FeathersJS? Implementing SkillUpService#create?
I hope you get the issue I'm trying to highlight here. Do you have some ideas to share or recommendations on how to organize the API in this particular framework?
I'm not an expert on FeathersJS, but if you build your database and models with good logic, these methods are all you need:
For the settings example, saveSettings corresponds to setting.patch({options}), i.e. the route settings/:id (method PATCH) with the options in the body, since the user already has some default settings (created with the user). getSettings would correspond to setting.find(query).
To create the user AND the settings, I guess you have a method that calls setting.create({defaultOptions}) when the user CREATE route is called. That would be the right way.
For the skillUp route, it depends on the design of your database, but I guess it would be something like a table that gives you the level/skills/character, so you need a service for that specific table and would call skillLevel.patch({character, level}). A rough sketch of these calls follows below.
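To make that concrete, here is a rough sketch of those calls with the Feathers client; the service names, ids and fields are invented for the example, so adapt them to your own models:

// Sketch only: a REST Feathers client; URL, service names and ids are placeholders.
const feathers = require('@feathersjs/feathers');
const rest = require('@feathersjs/rest-client');
const axios = require('axios');

const app = feathers();
app.configure(rest('http://localhost:3030').axios(axios));

// "saveSettings" -> PATCH on the settings service
app.service('settings').patch(1, { theme: 'dark', notifications: false });

// "getSettings" -> find (or get) on the same service
app.service('settings').find({ query: { userId: 42 } });

// "skillUp" -> PATCH on whatever service owns the character/skill table
app.service('skill-levels').patch(7, { level: 3 });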
In addition to the correct answer that @gui3 has already given, it is probably worth pointing out that Feathers is intentionally restrictive in order to help you create RESTful APIs which focus on resources (data) and a known set of methods you can execute on them.
Aside from the answer you linked, this is also explained in more detail in the FAQ, and an introduction to REST API design and why Feathers does what it does can be found in this article: Design patterns for modern web APIs. These are best practices that helped scale the internet (specifically the HTTP protocol) to what it is today and can work really well for creating APIs. If you still want to use the routes you are suggesting (which are not RESTful), then Feathers is not the right tool for the job.
One strategy you may want to consider is using a request parameter in a POST body such as { "action": "type" } and using a switch statement to conditionally perform the desired action. An example of this strategy is discussed in this tutorial.
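A rough sketch of that switch-based strategy as a single custom Feathers service (the action names and payloads are invented):

// Sketch only: a custom service whose create() dispatches on an "action" field.
class CharacterActionsService {
    async create(data, params) {
        switch (data.action) {
            case 'skillUp':
                // look up the character/skill row and bump it; details depend on your model
                return { action: 'skillUp', skill: data.skill, ok: true };
            case 'resetSkills':
                return { action: 'resetSkills', ok: true };
            default:
                throw new Error('Unknown action: ' + data.action);
        }
    }
}

// Server-side registration, so that POST /character-actions hits create():
// app.use('/character-actions', new CharacterActionsService());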
Pretty simple, you'd think, given the popularity of both, but I am encountering a few hurdles.
I am using scaphold.io to be able to quickly show off a working UI. That is, if Vue can interact with Scaphold.
I have investigated two resources:
https://github.com/kristianmandrup/vue2-apollo-scaphold
Which seems to be a Scaphold production. Tried it. Many, many vague bugs.
Then there is also:
https://github.com/Akryum/vue-apollo
But this is too much. I don't need a server; the server is on Scaphold.
I also tried building the whole thing up by using the tutorial on howtographql, but this one is also outdated.
Ideally I want to instantiate an up-to-date Vue 2 app using (I guess) the npm vue-cli, then install only the required Apollo (or whatever, but I guess Apollo) add-ons that I need. The minimum.
Shouldn't be too hard, I'll figure it out eventually, but some help is more than welcome.
You can consume a GraphQL API using your favorite regular request module (ajax, fetch, axios). Take the Scaphold docs for example, but in the callback do this.vueUserData = body.data.getUser;
instead of
console.log(JSON.stringify(body, null, 2));
(edited to add one gotcha I remembered: if you encounter a problem where the callback doesn't know that this is supposed to be the component, you can do var self = this before the request function, then reference self.vueUserData instead.)
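Putting those pieces together, here is a minimal sketch of a Vue 2 instance doing this with plain fetch; the Scaphold endpoint URL, the query and the id are placeholders loosely modeled on the Scaphold docs:

// Sketch only: endpoint, query and id are placeholders.
new Vue({
    el: '#app',
    data: { vueUserData: null },
    created: function () {
        var self = this;   // the gotcha mentioned above: keep a reference to the component
        fetch('https://us-west-2.api.scaphold.io/graphql/your-app-alias', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
                query: 'query GetUser($id: ID!) { getUser(id: $id) { id username } }',
                variables: { id: 'VXNlcjox' }   // placeholder id
            })
        }).then(function (res) {
            return res.json();
        }).then(function (body) {
            self.vueUserData = body.data.getUser;
        });
    }
});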
There is a ton of documentation on academic theory and best practices for managing versioning of RESTful web services; however, I have not seen much discussion of how multiple REST API versions interact with data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a destructive change to a database table that causes you to have to increment your major API version to v2.
Now at any given time, users could be interacting with the v1 web service and the v2 web service at the same time and creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response; as long as the changes are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to the clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
1. Turn the previous version off and filter all requests to it, replying with a 301 and the new location (a minimal sketch follows below).
2. If 1. is impossible, you need to maintain both the previous and the current version of the API. Since this can be time- and money-consuming, it should be done only for a limited time, and eventually the previous version should be turned off.
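For option 1, a minimal sketch of the filtering/redirect in an Express-style gateway (paths are illustrative):

// Sketch only: send every retired /v1/* request to its /v2/* counterpart.
const express = require('express');
const app = express();

app.use('/v1', (req, res) => {
    // 301 tells clients the move is permanent; use 308 if the HTTP method must be preserved
    res.redirect(301, '/v2' + req.url);
});

// ... register the real /v2 routes here ...

app.listen(3000);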
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, keeping in mind that running two versions at once is only temporary. But as I wrote earlier, the DB model should never be exposed directly to the clients - this can help you avoid a lot of problems.
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change doesn't mean the underlying implementation cannot change. You can modify the v1 implementation code to set a default value, omit the saving of a field, return an unchecked exception, or do some kind of computational logic that helps the v1 API remain compatible with the shared datasource (a rough sketch follows below). Then, implement a better, cleaner, more idealistic implementation in v2.
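As a rough illustration of that idea (the handler, field names and data-access call are invented), the v1 code can fill in a column that only exists because of v2:

// Sketch only: a v1 handler supplying a default for a column that v2 made mandatory.
const db = require('./db');   // hypothetical data-access module

async function createTaskV1(req, res) {
    const task = {
        title: req.body.title,
        projectId: req.body.projectId,
        // v2 clients send "priority"; v1 clients never will, so default it here
        // instead of letting the shared table reject the insert.
        priority: 'normal'
    };
    const saved = await db.tasks.insert(task);
    // the v1 response shape stays exactly as it always was
    res.status(201).json({ id: saved.id, title: saved.title, projectId: saved.projectId });
}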
When you are going to change anything in your API structure that can change the response, you must increase your API version.
For example, you have this request and response:
request POST: a, b, c, d
res: {a, b, c+d}
and you are going to add 'e', fetched from the database, to your response.
If current client versions do not depend on 'e', you can add it in your current API version.
But if your new changes alter the existing responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent breaking clients.
Changes to the request inputs work the same way.
I'm trying to benchmark / do performance testing of APIs at my work. The client-facing side is in REST format, while the backend data is retrieved via SOAP messages. My question is: can some of you share your thoughts on how you implement this (if you have done so in the past or are doing it now)? I am basically interested in the average response time it takes for the API to return results to the client.
Please let me know if you need any additional information to answer the question.
Could not say it any better than Mark, really: http://www.mnot.net/blog/2011/05/18/http_benchmark_rules
Maybe you should give JMeter a try.
You can try using Apache Benchmark (ab). It is simple and quick.
JMeter gives you additional flexibility, like adding functional cases along with the performance details. The results will be broadly similar to the Apache Benchmark tool.
The most detailed option - giving functional test results, performance counter settings, call response time details, and CPU and memory changes, along with load/stress results under different bandwidth and browser settings - is Visual Studio Team System.
I used VSTS 2010 for performance testing. GET and POST are straightforward; PUT and DELETE need the coded version of a web test.
If you are trying to test the REST -> SOAP calls, one more thing you can consider is having some stubs created for the backend. This way you can performance-test REST -> stub, followed by stub -> SOAP. This will help in analyzing the individual components (a rough stub sketch follows below).
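A rough sketch of such a backend stub in Node/Express (the endpoint path and the SOAP envelope are placeholders) that gives the REST layer a fixed-latency backend to test against:

// Sketch only: a stand-in for the real SOAP backend with a fixed, known latency.
const express = require('express');
const app = express();

const CANNED_RESPONSE =
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body><GetDataResponse><Result>stubbed</Result></GetDataResponse></soap:Body>' +
    '</soap:Envelope>';

app.post('/backend/soap', (req, res) => {
    // constant latency, so differences you measure come from the REST layer itself
    setTimeout(() => res.type('text/xml').send(CANNED_RESPONSE), 50);
});

app.listen(8080, () => console.log('SOAP stub listening on 8080'));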