I want to create multiple issues at once using the /rest/api/2/issue/bulk endpoint.
However, I want the whole call to fail if ANY of the tickets fails. Right now it creates the tickets that are valid, but I would prefer it to add no tickets at all if at least one of them fails.
Is there a way to do it? Thanks!
If the response from the server is an error message (i.e. the request failed), why not use that as the point to stop processing any further requests?
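If all-or-nothing behaviour matters more than the single round trip, one way to apply that advice is to skip the bulk endpoint, create the issues one at a time against /rest/api/2/issue, and stop at the first failure (then, if you want a rollback, delete whatever was already created). Below is a rough, illustrative Scala sketch of that loop; the base URL, credentials and payloads are placeholders you would fill in.

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}
import scala.collection.mutable.ListBuffer

object CreateIssuesSequentially {
  // Placeholder values: base URL and pre-serialised JSON issue bodies.
  val baseUrl = "https://jira.example.com/rest/api/2/issue"
  val issuePayloads: Seq[String] = Seq.empty // one JSON body per ticket to create

  def main(args: Array[String]): Unit = {
    val client = HttpClient.newHttpClient()
    val created = ListBuffer[String]() // raw responses of issues created so far (contain the new keys)

    val allSucceeded = issuePayloads.forall { body =>
      val request = HttpRequest.newBuilder(URI.create(baseUrl))
        .header("Content-Type", "application/json")
        // .header("Authorization", "Basic ...") // supply your own credentials
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
      val response = client.send(request, HttpResponse.BodyHandlers.ofString())
      if (response.statusCode() == 201) { created += response.body(); true }
      else false // a failure: stop sending any further requests
    }

    // If !allSucceeded, parse the stored responses for the new issue keys and
    // DELETE those issues to approximate all-or-nothing behaviour.
  }
}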
We have a concurrency issue in our system. This one occurs mainly during burst load through our API from an external system and is not reproducible manually.
So I would like to create a Gatling test to 1) reproduce it whenever I want and 2) check that we have solved the issue.
1) I am done with the first point. I have created two requests checking for status 201 and I run them with many users.
2) The issue allows the creation of two resources with the same unique value. The expected behaviour is that one of them is created and the others fail with status 409. But I have no idea how to check that exactly one of the requests completes with 201 while all the others fail with 409.
Can we do a kind of post-check on all requests with Gatling?
Thanks
Store the unique values already seen in a global ConcurrentHashMap and compute the expected value of the is check with a function, based on presence in the map (201 if missing, 409 if already present).
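Here is a rough sketch of that idea with the Scala DSL, assuming a hypothetical /resources endpoint, a feeder supplying uniqueValue, and Gatling 3.7+ EL syntax (older versions use ${...} instead of #{...}). One caveat: whichever response happens to be checked first claims the 201, which is not necessarily the request the server actually accepted, so under heavy concurrency this can misreport; that is the messiness the next answer alludes to.

import java.util.concurrent.ConcurrentHashMap

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class UniqueValueSimulation extends Simulation {

  val httpProtocol = http.baseUrl("http://localhost:8080") // placeholder base URL

  // Shared by all virtual users: unique values for which a creation was already attempted.
  val seen = ConcurrentHashMap.newKeySet[String]()

  // Every user posts the same unique value, so only one creation should succeed.
  val feeder = Iterator.continually(Map("uniqueValue" -> "duplicated-key"))

  val createResource = http("create resource")
    .post("/resources")
    .body(StringBody("""{"uniqueValue":"#{uniqueValue}"}""")).asJson
    .check(status.is { session =>
      val key = session("uniqueValue").as[String]
      // The first response checked for a given key is expected to be 201, every later one 409.
      if (seen.add(key)) 201 else 409
    })

  val scn = scenario("duplicate creation").feed(feeder).exec(createResource)

  setUp(scn.inject(atOnceUsers(50)).protocols(httpProtocol))
}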
I don't think you can achieve what you're after with a check on the call itself, as Gatling users have no visibility of the results returned to other users, so you have no way of knowing whether the successful (201) request has already been made (short of some very messy hacking using a check transformer).
But you could use simulation level assertions to do this.
So you have your request where you assert that you expect a 201 response
http("my request")
.get("myUrl")
.check(status.is(201))
This should result in all but one of these requests failing in the simulation, which you can express using the assertion...
setUp(
  myScenario.inject(
    ...
  )
).assertions(
  details("my request").successfulRequests.count.is(1)
)
I have 2 test cases.
One test case tests the authentication and gets the token info in the header. I want to store it and use it in the other test case.
I tried a store processor in the 1st test case and tried retrieving the value from the 2nd test case, but it says "could not find the key." Is this approach wrong?
I also tried the queue/dequeue option. In this case I got:
Time out waiting for queue to return a result.
How can I achieve this simple case?
Authentication should be mocked in all the other tests, since you'll be testing the functionality and not the actual response from the server.
Take a look at this link:
https://docs.mulesoft.com/munit/2.2/mock-event-processor
For a RESTful API that I'm creating, I need some functionality that gets a resource, but if it doesn't exist, creates it and then returns it. I don't think this should be the default behaviour of a GET request. I could enable this functionality via a certain parameter passed to the GET request, but that seems a little bit dirty.
The main point is that I want to do only one request for this, as these requests are going to be made from mobile devices that potentially have a slow internet connection, so I want to limit the number of requests as much as possible.
I'm not sure if this fits in the RESTful world, but if it doesn't, it will disappoint me, because it will mean I have to make a little hack on the REST idea.
Does anyone know of a RESTful way of doing this, or otherwise, a beautiful way that doesn't conflict with the REST idea?
Does the client need to provide any information as part of the creation? If so, then you really need to separate out GET and POST, as otherwise you would need to send that information with each GET, and that would be very ugly.
If instead you are sending a GET without any additional information, then there's no reason why the backend can't create the resource if it doesn't already exist before returning it. Depending on how long it takes to create the resource, you might want to think about going asynchronous and using 202 as per other answers, but that means your client has to handle (yet) another response code, so it might be better off just waiting for the resource to be finalised and returned.
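To illustrate that backend behaviour, here is a minimal, framework-free Scala sketch; Resource, Repository and handleGet are made-up names, and whether the lazily created case answers 200 or 201 is your choice.

case class Resource(id: String, payload: String = "")

trait Repository {
  def findById(id: String): Option[Resource]
  def insert(resource: Resource): Resource
}

object GetOrCreateHandler {
  // Create-on-first-GET: the client always issues a plain GET and always gets the resource back.
  def handleGet(repository: Repository, id: String): (Int, Resource) =
    repository.findById(id) match {
      case Some(existing) => (200, existing)                        // already there
      case None           => (201, repository.insert(Resource(id))) // created lazily on the miss
    }
}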
Very simple:
Send a HEAD request and examine the response code: either 404 or 200. If you need the body, use GET.
If it's not available, perform a PUT or POST; the server should respond with 201 Created and a Location header containing the URL of the newly created resource.
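For the record, a quick client-side sketch of that flow in Scala over java.net.http (URL and body are placeholders): probe with HEAD, create on a 404, then GET if you need the representation. It does cost up to three round trips, which is the trade-off against the create-on-GET approach above.

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object HeadThenCreate {
  def main(args: Array[String]): Unit = {
    val client = HttpClient.newHttpClient()
    val uri = URI.create("https://api.example.com/things/42") // placeholder resource URL

    // 1. Probe for existence without transferring a body: expect 200 or 404.
    val head = HttpRequest.newBuilder(uri).method("HEAD", HttpRequest.BodyPublishers.noBody()).build()
    val exists = client.send(head, HttpResponse.BodyHandlers.discarding()).statusCode() == 200

    // 2. If the probe came back 404, create the resource (expect 201 Created + Location).
    if (!exists) {
      val put = HttpRequest.newBuilder(uri)
        .PUT(HttpRequest.BodyPublishers.ofString("""{"id": 42}"""))
        .build()
      client.send(put, HttpResponse.BodyHandlers.discarding())
    }

    // 3. Fetch the representation if the body is needed.
    val body = client.send(HttpRequest.newBuilder(uri).GET().build(),
      HttpResponse.BodyHandlers.ofString()).body()
    println(body)
  }
}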
I'm using signed POST to upload files directly to Amazon S3. I had some trouble with the signature of the policy using PHP, but I finally fixed it, and here is the sample of code.
This XHR request is sent from JavaScript and I'm waiting for an answer from Amazon. At first I was using success_action_status, setting it to 201 to get the XML response.
What I'd like to do is use success_action_redirect to call a script on my server that creates a record in the database.
The reason is that I could create the record in the database and, if anything goes wrong at this stage, return an error message directly at that point. It also saves me another AJAX request to my server.
So I've tried to set this up, specifying success_action_redirect as http:\\localhost\callback.php, where I have a script waiting for some parameters.
But it looks like this script is never called and the response of the xhr.send() is empty.
I think it's a cross-domain issue and I'm wondering if it would be possible to use JSONP somehow to bypass this?
Any ideas?
UPDATE
Apparently XHR follows redirects natively, so it should work, but when I specify success_action_redirect it returns the error "Server responded with 0 code".
At first I thought it was because the redirect URL pointed at my local server, so I changed it to a publicly accessible server, but no luck.
Anyone knows why it's returning this error message?
I also ran into this problem, and it seems like nobody has a solution to it.
The only workaround I have found involves a second XHR request to execute the callback manually. For that, success_action_status should be used: with it you get a 201 response if the upload was successful, and you can then start a second request for the actual callback. For me this looks like the only possible solution at the moment.
Any other solutions?
In Silverlight I have the following problem: if you fire multiple requests to the web service, the responses might not come back in order. That is, if the first request takes longer than the following ones, its response will arrive last:
1. Sending request A.. (takes longer for some reason)
2. Sending request B..
3. Sending request C..
4. ...
5. Receiving response B
6. Receiving response C
7. Receiving response A
Now, in my scenario I am only interested in the most recent request that was made. So the responses to A and B should be discarded and C should be kept as the only accepted response.
What is the best approach to manage this? I came up with this solution so far:
Pass a generated GUID as the user object when sending the request and store that value somewhere. Since every response will carry its respective GUID, you can then filter out the stale responses. A request counter instead of a GUID would work as well.
Now I wonder if there are any better approaches to this. Maybe there are some out-of-the-box features to make this possible? Any ideas are welcome.
I take a similar approach in my non-WCF ASP.NET web services, though I use the DateTime of the request instead and just store the DateTime of the most recent request. This way I can do a direct less-than comparison to determine whether the returning call is the most recent or not.
I did look into canceling old service calls before making new ones, but there is no CancelAsync call for web services in Silverlight and I have been unable to find an equivalent way of doing this.
Both of these approaches are what I used when I worked on a real-time system with a lot of service calls. Basically, just have some way to keep track of order (an incrementing variable, a timestamp, etc.), then keep track of the highest-numbered response received so far. If the current response is lower than the highest, drop it.
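The pattern itself is language-agnostic; here is a minimal sketch of the counter variant (written in Scala purely for illustration, not Silverlight C#): tag each request with an increasing id and only handle a response whose id is higher than anything accepted so far.

import java.util.concurrent.atomic.AtomicLong

object LatestResponseOnly {
  // Tag each outgoing request with a monotonically increasing id and remember
  // the id of the newest response accepted so far.
  private val nextRequestId = new AtomicLong(0L)
  private val latestAccepted = new AtomicLong(-1L)

  def send(fire: Long => Unit): Long = {
    val id = nextRequestId.incrementAndGet()
    fire(id) // start the async call, carrying `id` as the user/state object
    id
  }

  def onResponse(id: Long)(handle: => Unit): Unit = {
    // Handle only if this response is newer than everything seen so far; otherwise drop it.
    val newest = latestAccepted.get()
    if (id > newest && latestAccepted.compareAndSet(newest, id)) handle
  }
}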