How do I track Amazon packages by the "TBA" number?

I have a tracking number from Amazon that starts with TBA that I'd like to track via their API. I've seen their getPackageTrackingDetails endpoint but it takes an integer as input and I get an error when I try to use a TBA number on that endpoint. I know it is possible somehow, since AfterShip can do it (just enter a valid tracking number that starts with TBA). I cannot find in Amazon's docs how to do it and Amazon customer support doesn't know how to do it, either.

You have to distinguish between the packageNumber (which is an integer) and the trackingNumber (which is a string). When creating your shipment, you will get the packageNumber. With that number you can call getPackageTrackingDetails.
The Shipping API seems to be the right one to use. See https://github.com/amzn/selling-partner-api-docs/blob/main/references/shipping-api/shipping.md#get-shippingv1trackingtrackingid
The getTrackingInformation operation accepts a tracking number as an input parameter.
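If so, the call is a plain GET with the TBA string as a path parameter. A minimal Python sketch, assuming the North America endpoint and an LWA access token obtained elsewhere (SP-API normally also requires request signing, elided here):

import requests

ENDPOINT = "https://sellingpartnerapi-na.amazon.com"  # assumption: NA region
ACCESS_TOKEN = "Atza|..."                             # assumption: LWA access token

def get_tracking(tracking_id):
    # getTrackingInformation: GET /shipping/v1/tracking/{trackingId}
    resp = requests.get(
        ENDPOINT + "/shipping/v1/tracking/" + tracking_id,
        headers={"x-amz-access-token": ACCESS_TOKEN},
    )
    resp.raise_for_status()
    return resp.json()

print(get_tracking("TBA123456789012"))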

Looking through the API documentation, there doesn't seem to be a good way to go from TBA number (if you can't just cut off those first three letters) to Package ID.
My order of operations on fixing this problem:
Chop the first three letters off the TBA variable you have, convert to integer, try it. Per Andrew Morton's comment.
What AfterShip may also be doing is going from the order ID. If the TBA number is closely related to the order ID, the Amazon API will let you go from Order ID -> Fulfillment Shipment -> Fulfillment Package -> Package ID. You could then use the Package ID to get your package information (see the sketch below). So I'd look at order IDs as well as package IDs to see if you can convert one to the other.
Given Stevish's comment, it's possible that the "TBA" prefix can be cut off the tracking number and the rest used as a package number when there is only one package in the order, but things get more complicated in other situations.
If you're working on a site that has the seller Order ID stored on your side, that seems to be how Amazon intends you to get at it through the API.
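To illustrate that Order ID route, here is a rough Python sketch against the FBA Outbound API. The paths, field names, and auth header are assumptions based on the 2020-07-01 version of that API; token acquisition and request signing are elided:

import requests

ENDPOINT = "https://sellingpartnerapi-na.amazon.com"   # assumption: NA region
HEADERS = {"x-amz-access-token": "Atza|..."}           # assumption: LWA token

def package_numbers_for_order(seller_fulfillment_order_id):
    # Order ID -> fulfillment shipments -> packages -> integer package numbers
    resp = requests.get(
        ENDPOINT + "/fba/outbound/2020-07-01/fulfillmentOrders/"
        + seller_fulfillment_order_id,
        headers=HEADERS,
    )
    resp.raise_for_status()
    payload = resp.json()["payload"]
    return [
        pkg["packageNumber"]
        for shipment in payload.get("fulfillmentShipments", [])
        for pkg in shipment.get("fulfillmentShipmentPackage", [])
    ]

def tracking_details(package_number):
    # getPackageTrackingDetails takes the integer package number, not the TBA string
    resp = requests.get(
        ENDPOINT + "/fba/outbound/2020-07-01/tracking",
        params={"packageNumber": package_number},
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["payload"]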

How can I fetch (via GET) all JIRA issues? Do I go to the Search node?

It looks like /api/2/project easily returns all projects in a JIRA instance in JSON format.
I'd like to do the same for issues, but this does not appear to exist.
Is /api/2/search the standard way to do a mass dump like this? And what is the best way to regularly sync this to a database? Would I do something like search (update date > [last entry in database]) and then go through the pagination? Surely I can't be the first person attempting this, though I see no similar guide anywhere online (I checked JIRA's own docs; there's no real mass-issue-export guide).
EDIT: Okay, it looks like search really is the "issue dump", and not the issue node, which, contrary to their documentation, does not default to a collection but is really for creating issues or fetching one at a time. I'll probably go the route of updated > [whatever the last date in the DB is].
Unless you have very few issues, you can't fetch all of them at once.
What you can do is execute the search step by step.
For example, let's say you have 1324 JIRA issues. In order to retrieve all of them, you have to execute a search similar to this several times:
/rest/api/2/search?maxResults=100&startAt=0
This will retrieve the first 100 JIRA issues, starting from 0.
How to get the others?
When you execute the search, a field named total is returned. That field is the total number of JIRA issues in your system (1324 in this example).
The next query will be:
/rest/api/2/search?maxResults=100&startAt=100
Repeat this operation, incrementing the value of startAt by 100 every time, until all the issues are returned.
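Putting that together, a minimal Python sketch of the pagination loop, assuming basic auth and placeholder base URL and JQL:

import requests

JIRA_URL = "https://jira.example.com"  # placeholder: your JIRA base URL
AUTH = ("username", "password")        # placeholder: basic-auth credentials

def fetch_all_issues(jql="updated >= -7d", page_size=100):
    # Page through /rest/api/2/search, bumping startAt by page_size each
    # round, until `total` issues have been collected.
    issues = []
    start_at = 0
    while True:
        resp = requests.get(
            JIRA_URL + "/rest/api/2/search",
            params={"jql": jql, "startAt": start_at, "maxResults": page_size},
            auth=AUTH,
        )
        resp.raise_for_status()
        data = resp.json()
        issues.extend(data["issues"])
        start_at += page_size
        if start_at >= data["total"]:
            return issues

For the incremental-sync approach from the question, swap in a JQL filter like updated > "<last date in DB>" and keep the same loop.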

How to list job IDs from all users?

I'm using the Java API to query for all job IDs using the code below:
Bigquery.Jobs.List list = bigquery.jobs().list(projectId);
list.setAllUsers(true);
JobList response = list.execute(); // runs the request and returns the first page
but it doesn't list job IDs that were run by a Client ID for web applications (i.e. Metric Insights). I'm using private-key authentication.
Using the command-line tool, 'bq ls -j' in turn gives me only the Metric Insights job IDs, but not the ones run with private-key auth. Is there a get-all method?
The reason I'm doing this is to get better visibility into which queries are eating up our data usage. We have multiple sources of queries: Metric Insights, in-house automation, some done manually, etc.
As of version 2.0.10, the bq client supports API authorization using service account credentials. You can specify a particular service account with the following flags:
bq --service_account your_service_account_here@developer.gserviceaccount.com \
   --service_account_credential_store my_credential_file \
   --service_account_private_key_file mykey.p12 <your_commands, etc>
Type bq --help for more information.
My hunch is that listing jobs for all users is broken, and nobody has mentioned it since there is usually a workaround. I'm currently investigating.
Jordan: it sounds like you're homing in on what we want to do. For all access that we've allowed into our project/dataset, we want to produce an aggregate report of the "totalBytesProcessed" for all queries executed.
The problem we're struggling with is that we have a handful of distinct Java programs accessing our data, a third-party service (Metric Insights), and 7-8 individual users who have query access via the web interface. Fortunately the incoming data has only one source, so explaining that cost is simple. For queries, though, I'm somewhat blind at the moment (and it appears queries will be the bulk of the monthly bill).
It would be ideal if I could get the underlying data for this report with a single listing made under one top-level auth. From the timestamps and the actual SQL text, I think I could then attribute each query to a source.
One thing that might make this problem far easier is if there were more information in the job record (or some text adornment in the job_id for queries). I don't see a way to assign my own job IDs to queries (perhaps I missed it?), and perhaps recording some source information in the job record would be possible? Just thinking out loud now...
There are three tables you can query for this.
region-**.INFORMATION_SCHEMA.JOBS_BY_{USER, PROJECT, ORGANIZATION}
Where ** should be replaced by your region.
Example query for JOBS_BY_USER in the eu region:
select
  count(*) as num_queries,
  date(creation_time) as date,
  sum(total_bytes_processed) as total_bytes_processed,
  sum(total_slot_ms) as total_slot_ms_cost
from
  -- one row per job; joining jobs against their referenced_tables array
  -- would double-count queries that touch more than one table
  `region-eu.INFORMATION_SCHEMA.JOBS_BY_USER`
group by
  2
order by
  2 desc, total_bytes_processed desc;
Documentation is available at:
https://cloud.google.com/bigquery/docs/information-schema-jobs

CategoryId in venues search not working correctly

In the Foursquare API documentation for "Search venues" (https://developer.foursquare.com/docs/venues/search) it states:
"categoryId - A comma separated list of categories to limit results to. This is an experimental feature and subject to change or may be unavailable. If you specify categoryId you may also specify a radius. If specifying a top-level category, all sub-categories will also match the query."
I realise it's supposed to be experimental, but when I provide the Food category, i.e. 4d4b7105d754a06374d81259, it only returns a few local results; the rest are miles away. However, if I execute the same search on the website using the Food category, it correctly returns lots of results. So I'm assuming the last bit, "If specifying a top-level category, all sub-categories will also match the query", is not working, i.e. it's not searching sub-categories?
Is there a fix or workaround for this?
Thanks,
Neil Pepper
You're making a /venues/search request with its default intent of intent=checkin. This filters to nearby results, heavily biased by distance, since it's trying to guess where the user might be checking in.
Foursquare Explore uses the /venues/explore endpoint and attempts to return recommended results for a query. If you want to get the sorts of results you get in that tool, call /venues/explore?section=food
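For instance, a minimal sketch using Python's requests library, assuming the v2 userless-auth parameters (client_id, client_secret, and a v version date) and a placeholder lat/long:

import requests

resp = requests.get(
    "https://api.foursquare.com/v2/venues/explore",
    params={
        "client_id": "CLIENT_ID_HERE",          # placeholder app credentials
        "client_secret": "CLIENT_SECRET_HERE",
        "v": "20130815",                        # API version date
        "ll": "40.7,-74.0",                     # placeholder lat,long to search around
        "section": "food",                      # recommended food venues
    },
)
resp.raise_for_status()
for item in resp.json()["response"]["groups"][0]["items"]:
    print(item["venue"]["name"])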

Cost comparison using Solr

I plan to build something like pricegrabber.com/google product search.
Assume I already have the data available in a huge table. I plan to submit it all to Solr, which solves the problem of search. However, I am not sure how to do the comparison. I could run a group-by query (on UPC/SKU) on the DB for the products returned by Solr, but I don't want to do that. I want the product-comparison data returned to me along with the search results from Solr itself.
What do you think my schema should be? Do you think this use case can be solved entirely by Solr/Sphinx?
You need 'result grouping' or 'field collapsing' support to properly handle it.
In Solr, the feature is not available in any released version and is still under development. If you are willing to use an unreleased version of Solr, you can get the details here.
Sphinx supports result grouping, and I used it a long time ago in a similar project. You can get more details here.
An alternative strategy could be to preprocess your data so that only a single record per UPC/SKU gets inserted in the index. Each record can have a separate field containing the ids of all the items with the same UPC/SKU.
Doing a database GROUP BY on the products returned by Solr may not be enough. For example, if products A and B have the same UPC and a certain query matches A but not B, then you will not get both A and B in your result set.
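A minimal sketch of that preprocessing step, assuming each product row is a dict with hypothetical "upc" and "id" keys:

from collections import defaultdict

def collapse_by_upc(products):
    # Collapse duplicates so only a single record per UPC/SKU gets indexed;
    # the surviving record carries the ids of every item sharing that UPC.
    grouped = defaultdict(list)
    for product in products:
        grouped[product["upc"]].append(product)
    docs = []
    for upc, items in grouped.items():
        doc = dict(items[0])                        # one representative record
        doc["item_ids"] = [p["id"] for p in items]  # ids of all matching items
        docs.append(doc)
    return docs

Each returned doc can then be indexed as a single Solr document, sidestepping the grouping limitation at query time.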

Flickr Geo queries not returning any data

I cannot get the Flickr API to return any data for lat/lon queries.
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&has_geo=1&extras=geo&bbox=0,0,180,90
This should return something, anything. It doesn't work if I use lat/lon either. I can get some photos returned if I look up a place_id first and then use that in the query, except then all the photos returned are from anywhere and not from that place ID.
E.g.,
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&placeId=8iTLPoGcB5yNDA19yw
I deleted my key, obviously; replace it with yours to test.
Any help appreciated, I am going mad over this.
I believe the Flickr API won't return any results if you don't put additional search terms in your query. If I recall correctly from the documentation, this is treated as an unbounded search. Here is a quote from the documentation:
Geo queries require some sort of limiting agent in order to prevent the database from crying. This is basically like the check against "parameterless searches" for queries without a geo component.
A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future).
My app uses the same kind of geo searching, so what I do is put in an additional search term of the minimum date taken, like so:
http://api.flickr.com/services/rest/?method=flickr.photos.search&media=photo&api_key=KEY_HERE&has_geo=1&extras=geo&bbox=0,0,180,90&min_taken_date=2005-01-01 00:00:00
Oh, and don't forget to sign your request and fill in the api_sig field. My experience is that geo-based searches don't behave consistently unless you attach your api_key and sign your search. For example, I would sometimes get search results and then later, with the same search, get no images when I didn't sign my query.
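A short Python sketch of that signing step, assuming the legacy Flickr scheme where api_sig is the MD5 hex digest of your API secret followed by every parameter name and value concatenated in sorted key order (SECRET_HERE is a placeholder):

import hashlib

def flickr_sign(params, secret="SECRET_HERE"):
    # Legacy signing: md5(secret + key1 + value1 + key2 + value2 + ...),
    # with the keys sorted alphabetically.
    payload = secret + "".join(k + str(params[k]) for k in sorted(params))
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

params = {
    "method": "flickr.photos.search",
    "api_key": "KEY_HERE",
    "media": "photo",
    "has_geo": "1",
    "extras": "geo",
    "bbox": "0,0,180,90",
    "min_taken_date": "2005-01-01 00:00:00",
}
params["api_sig"] = flickr_sign(params)
# Send params as the query string to http://api.flickr.com/services/rest/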