I'm trying to log a meal entry using the Jawbone API. The API documentation describes a sub_type parameter that is supposed to control the meal type (breakfast/lunch/dinner). However, it looks like this parameter doesn't control anything and everything is driven by the time_created/tz parameters. Could anyone help me understand the exact logic that decides where a logged meal goes: Breakfast, Lunch, Dinner, or Snack?
Update 20/09/2016:
I'd like to be able to log a meal (as Breakfast, Lunch, Dinner, or Snack) and see the result in the user feed under that meal type (https://jawbone.com/up/food/meals). For now I'm only interested in these four meal types, for compatibility with our app.
I've found that this can be achieved by setting time_created = (${begin_of_the_day} + mealTypeAdjustment), where mealTypeAdjustment is:
- 7h for Breakfast
- 13h for Lunch
- 19h for Dinner
These numbers are just my assumption and have worked so far, but there is no Jawbone documentation about this logic, so my questions are:
1) How can I control, using time_created, where a logged meal appears in the user feed (Breakfast, Lunch, Dinner, or Snack)?
2) I still haven't figured out how to log a Snack. A few times I managed to do it by randomizing the input parameters, but unfortunately I can't reproduce it now.
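For reference, this is how I currently compute those time_created values. Again, this is only a sketch of my own assumption (nothing from the Jawbone docs), and the zone used here has to match the tz parameter I send:

import java.time.LocalDate;
import java.time.ZoneId;

// Sketch of my undocumented assumption: local midnight plus a fixed offset per meal type.
public class MealTimes {
    public static void main(String[] args) {
        ZoneId tz = ZoneId.of("Europe/London");               // must match the tz request parameter
        long startOfDay = LocalDate.now(tz).atStartOfDay(tz).toEpochSecond();

        long breakfast = startOfDay + 7 * 3600;               // ~07:00 local time
        long lunch     = startOfDay + 13 * 3600;              // ~13:00 local time
        long dinner    = startOfDay + 19 * 3600;              // ~19:00 local time

        System.out.printf("breakfast=%d lunch=%d dinner=%d%n", breakfast, lunch, dinner);
    }
}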
The sub_type is just a piece of metadata about the meal in case you would like to classify a meal as breakfast/lunch/dinner.
Where a meal entry appears in a user's feed is dictated by time_created. In fact, there is no direct connection between time_created and sub_type.
Here are all the meal sub_type values:
sub_type     | value
-------------|------
Breakfast    | 1
Lunch        | 2
Dinner       | 3
Pre-Workout  | 4
Post-Workout | 5
Snack        | 6
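For illustration, a meal-creation request might look like the sketch below. Treat it as a hedged example only: the endpoint path is what I remember from the old UP API docs, ACCESS_TOKEN is a placeholder, and the actual meal payload fields (name, items, etc.) are omitted; only the sub_type, time_created and tz parameters from the question are shown. The feed placement comes from time_created/tz, while sub_type=2 is just metadata.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LogMeal {
    public static void main(String[] args) throws Exception {
        long timeCreated = System.currentTimeMillis() / 1000L;   // e.g. one of the "lunch window" values from the question

        // Form-encoded body: sub_type is metadata, time_created/tz decide the feed slot.
        String form = "sub_type=2"                               // 2 = Lunch (see table above)
                + "&time_created=" + timeCreated
                + "&tz=Europe/London";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://jawbone.com/nudge/api/v.1.1/users/@me/meals"))  // endpoint path is an assumption
                .header("Authorization", "Bearer ACCESS_TOKEN")
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}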
I have a server with two API endpoints: /migrate/start and /migrate/end.
For each request, I log the ID of the user being migrated (field usrid="") and the API called (field api="").
Users call /migrate/start, then call /migrate/end. I would like to write a Splunk query to list the user IDs that are being migrated, i.e. those that called /migrate/start but have yet to call /migrate/end. How would I write that query?
Thank you
Assuming you have only these two API calls (start/end) in the logs, you can use the stats command to do this.
| your_search
| stats values(api) as api by usrid
| where api!="/migrate/end"
This groups all API calls per user and keeps only the users that have not yet called /migrate/end.
The general method is to get all the start and end events and match them up by user ID. Take the most recent event for each user and throw out the ones that are "/migrate/end". What's left are all the in-progress migrations. Something like this:
index=foo (api="/migrate/start" OR api="/migrate/end")
| stats latest(api) as api by usrid
| where api="/migrate/start"
While using the data-driven feature in the Karate framework, I see that the generated report only shows the title as configured in the Scenario Outline and does not include the values used from the Examples table. This leaves testers confused about which data is being used, and it takes time to expand each scenario to find out; so I want to be able to pass variables into the Scenario/Scenario Outline title in the report. Please take a look at the example below.
E.g.
Feature: Login Feature
Background:
* configure headers = { 'Webapp-Version': '1.0.0'}
Scenario Outline: As a <description> user, I want to get the corresponding response_code <status_code>
Given def path = 'classpath:features/Authentication/authentication.feature'
And def signIn = call read(path) {username: '<username>', password: '1234567890'}
Then match signIn.status == <status_code>
Examples:
| username       | status_code | description  |
| test@gmail.com | 200         | valid user   |
| null           | 400         | invalid user |
My expected result: the generated report should fill in the values from the Examples table for the "status_code" and "description" fields.
-> As a valid user user, I want to get the corresponding response_code 200.
Please share your ideas and comments on it.
Thanks,
Learn.
This is not supported. Just use the print syntax and you will see the values in the report.
EDIT: okay, this will be possible in the next version: https://github.com/intuit/karate/issues/553
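For example, something like this inside the Scenario Outline (just a sketch; username and status_code are the Examples columns from the question, which Karate exposes as variables):
* print 'running with username:', username, 'expecting status:', status_code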
I want to write a feature file that will pass unique data every time I run my test.
Feature: Create Facebook account
Scenario Outline: Create new account
Given I go to facebook.com
And I enter "<First_Name><Last_name><DOB><Password><ConfirmPassword><Email><ConfirmEmail>"
When I click on Create account
Then I should get welcome to facebook message
Examples:
| First_Name | Last_name | DOB | Password | ConfirmPassword | Email | ConfirmEmail |
| Gary | English | 11/01/1989 | test123 | test123 | gar@mail.com | gar@mail.com |
| Barry | Smith | 01/11/1982 | test123 | test123 | bar@mail.com | bar@mail.com |
My question is:
When I run the above scenario, two Facebook accounts will be created. When I commit my code and the tests run every morning, they will fail unless I change the email every time to make it unique, since any system will check whether the email address provided already exists.
How do I tackle this so that I don't have to change the data in my Create account feature file every time?
I hope someone has come across this issue before.
Then don't put it in the feature file. Build a randomizer into the step method. You should also change the feature file to reflect this:
And I enter "<First_Name>, <Last_name>, <DOB>, <Password>, <ConfirmPassword>, <Email>, <ConfirmEmail>"
And uses a random email
Or if you'd rather not build a separate condition, then use a keyword to signal that the value should be randomized (my preferred method):
|Barry|Smith|01/11/1982|test123|test123|[random]|[random]|
So when passed to the registration method, if the value doesn't validate as a legitimate email address (you are doing validation, right?), check for the [random] keyword. If it is there, then build a random valid email.
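A minimal sketch of that idea in Java (the class and method names here are made up for illustration, not taken from the question):

import java.util.UUID;
import java.util.regex.Pattern;

// Hypothetical helper for the "[random]" keyword approach described above.
public class TestDataResolver {

    private static final Pattern SIMPLE_EMAIL = Pattern.compile("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");

    public static String resolveEmail(String value) {
        if (SIMPLE_EMAIL.matcher(value).matches()) {
            return value;                                 // a real address from the table, use it as-is
        }
        if ("[random]".equals(value)) {
            // build a random but valid, unique address so repeated runs never collide
            return "user-" + UUID.randomUUID().toString().substring(0, 8) + "@example.test";
        }
        throw new IllegalArgumentException("Not an email and not [random]: " + value);
    }
}

The registration step then calls something like TestDataResolver.resolveEmail(email) before filling in the form.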
I have a list of IDs and values in a file and would like to use Postman to connect to an API and update the records found in this file.
I tried with the Runner but am stuck on writing the syntax.
The answer is pretty simple and very well explained on this page
You can start with a basic PUT/POST: try to modify one single data set with static values to determine how the final request needs to be built. In my case the API accepted only raw JSON-formatted payloads.
As soon as you have your static Postman request running, you can start automating it by determining which parts should be replaced. This data should come from a data file (JSON or CSV). The schema is important for Postman to understand the data. As a reference, the example below assumes I want to replace an ID and a value. My data file has one more column, which is not a problem.
+--------+--------+--------+
| id | email | value |
+--------+--------+--------+
| data 1 | data 1 | data 1 |
+--------+--------+--------+
| data 2 | data 2 | data 2 |
+--------+--------+--------+
| data 3 | data 3 | data 3 |
+--------+--------+--------+
Column two (email) will be ignored and not used. Notice how "id" and "value" are written in the header.
I would like to replace the ID, which needs to be attached to the API endpoint, and update a value within the dataset for that ID. Replacing the static parts with variables like {{variable}} tells Postman to fill in dynamic data there.
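For example, with a hypothetical endpoint such as PUT https://api.example.com/records/{{id}} and a raw JSON body of {"value": "{{value}}"}, a JSON data file matching the table above could look like this (the values are made up; what matters is that the key names match the {{variable}} names):

[
  { "id": "1001", "email": "one@example.com", "value": "new value 1" },
  { "id": "1002", "email": "two@example.com", "value": "new value 2" },
  { "id": "1003", "email": "three@example.com", "value": "new value 3" }
]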
Notice that Postman flags the variable attached to the URL as not defined in the environment; if you did not set it up in the environment, this is expected and it will still work with data files.
I used simple tests to confirm that the data from the file made it into my request:
tests["URL has ID"] = responseURL.has(data.id);
tests["Body contains SFID"] = responseBody.has(data.value);
If you reach this point, all that is left to do is go to the Runner page, select the request to run, add the data file (preview it to check everything looks okay), and run it.
My team has settled on MSpec as our BDD testing framework, which from their usage so far looks really good, but I'm struggling with the documentation/Google to find anything similar to SpecFlow's 'Scenario Outline'. I've shown an example of this below; basically, it allows you to write one 'test' and run it multiple times from a table (Examples) of inputs/expected outputs. I'll be embarrassed if the answer turns out to be a LMGTFY, but I've not been able to find anything myself, and I don't want to tell the team it's not possible if I've simply not found how to do it in MSpec (or not understood MSpec properly). I wonder if this is why, in some of the pros/cons for MSpec, the number of classes you can end up with is listed as a negative.
Example of SpecFlow Scenario Outline
Scenario Outline: Successfully Convert Seconds to Minutes Table
When I navigate to Seconds to Minutes Page
And type seconds for <seconds>
Then assert that <minutes> minutes are displayed as answer
Examples:
| seconds | minutes |
| 1 day, 1 hour, 1 second | 1500 |
| 5 days, 3 minutes | 7203 |
| 4 hours | 240 |
| 180 seconds | 3 |
From: https://gist.github.com/angelovstanton/615da65a8f821d7a43c92ef9e2fd0b01#file-energyandpowerconvertcalculator-feature
Short answer: this is currently not supported by MSpec. We planned this several years back, but the contribution never made it back into master.
If you want scenario outlines, either use a different framework or create parameterized static methods in a helper class and call them from your context classes, which will leave you with one class per scenario.