Error itgensql005 when fetching serial number from Exact Online GoodsDeliveryLines for upload to Freshdesk ticket - sql

I want to exchange information between Exact Online and Freshdesk based on deliveries (Exact Online Accounts -> Freshdesk Contacts, Exact Online deliveries -> Freshdesk tickets).
The serial number of delivered goods is available in neither the ExactOnlineREST..GoodsDeliveryLines table nor in ExactOnlineXML..DeliveryLines.
The following query lists all columns that are also documented on Exact Online REST API GoodsDeliveryLines:
select * from goodsdeliverylines
All other fields documented for the REST API are included in GoodsDeliveryLines; only the serial numbers and batch numbers are missing.
I've tried - as with ExactOnlineXML tables, where columns only come into existence when actually specified - to use:
select stockserialnumbers from goodsdeliverylines
This raises however an error:
itgensql005: Unknown identifier 'stockserialnumbers'.
How can I retrieve the serial numbers?

StockSerialNumbers is an array; the Exact Online documentation describes it as:
Collection of batch numbers
so for every delivery line there can be 0, 1 or more serial numbers included.
These serial numbers were not available until recently; please make sure you upgrade to at least build 16282 of the Exact Online SQL provider. It should then work using a query on a separate table:
select ssrdivision
, ssritemcode
, ssrserialnumber
from GoodsDeliveryLineSerialNumbers
Output:
ssrdivision | ssritemcode | ssrserialnumber
----------- | ----------- | ---------------
868,035 | OUT30074 | 132
868,035 | OUT30074 | 456
Use of serial numbers may require additional Exact Online modules such as "Trade", but if you can see them in the web user interface, you already have them. If you get an HTTP 401 Unauthorized, you don't have the module for serial numbers.

Since stockserialnumbers is actually a list and not a single field, you have to query it using the entity GoodsDeliveryLineSerialNumbers, which you can find in the latest release.
select * from GoodsDeliveryLineSerialNumbers
If you execute the above query, you will get the fields for GoodsDeliveryLine and those of the underlying serial numbers. The latter fields are prefixed with Ssr to disambiguate both entities. This means you don't need an additional join on GoodsDeliveryLine, which may benefit performance.
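Since the Ssr-prefixed rows repeat the parent delivery line's columns, the flat result set can be regrouped client-side when you need one list of serial numbers per line. A minimal Python sketch (the row values are illustrative, taken from the output above):

```python
from collections import defaultdict

# Illustrative rows, as returned by:
#   select ssrdivision, ssritemcode, ssrserialnumber
#   from GoodsDeliveryLineSerialNumbers
rows = [
    (868035, "OUT30074", "132"),
    (868035, "OUT30074", "456"),
]

# Regroup the flat result set into one list of serial numbers
# per (division, item code) pair.
serials_per_line = defaultdict(list)
for division, item_code, serial in rows:
    serials_per_line[(division, item_code)].append(serial)

print(serials_per_line[(868035, "OUT30074")])  # ['132', '456']
```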

Related

How do I track Amazon packages by the "TBA" number?

I have a tracking number from Amazon that starts with TBA that I'd like to track via their API. I've seen their getPackageTrackingDetails endpoint but it takes an integer as input and I get an error when I try to use a TBA number on that endpoint. I know it is possible somehow, since AfterShip can do it (just enter a valid tracking number that starts with TBA). I cannot find in Amazon's docs how to do it and Amazon customer support doesn't know how to do it, either.
You have to distinguish between the packageNumber (which is an integer) and the trackingNumber (which is a string). When creating your shipment, you will get the packageNumber. With that number you can call getPackageTrackingDetails.
The Shipping API seems to be the right one to use. See https://github.com/amzn/selling-partner-api-docs/blob/main/references/shipping-api/shipping.md#get-shippingv1trackingtrackingid
The getTrackingInformation operation accepts a tracking number as an input parameter.
Looking through the API documentation, there doesn't seem to be a good way to go from TBA number (if you can't just cut off those first three letters) to Package ID.
My order of operations on fixing this problem:
Chop the first three letters off the TBA number you have, convert the remainder to an integer, and try it. Per Andrew Morton's comment.
What AfterShip may also be doing is going from the order ID. If the TBA number is closely related to the order ID, the Amazon API will give you the information to go from Order ID -> Fulfillment Shipment -> Fulfillment Package -> Package ID. You could then use the Package ID to get your package information. So I'd look at Order IDs as well as Package IDs to see if you can convert one to the other.
Given Stevish's comment, it's possible that the "TBA" prefix can be cut off the tracking number and the remainder used as a package number when there is only one package in the order, but things get more complicated in other situations.
If you're working on a site that has the seller Order ID stored on your side, that seems to be the intent they have for getting it through the API.
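The first step above might be sketched like this (Python for illustration; whether the remaining digits really are a valid packageNumber is exactly what you would then test against the API):

```python
def tba_to_package_number(tracking_number: str):
    """Strip a leading 'TBA' and try to interpret the rest as an
    integer package number. Returns None when that is not possible."""
    if not tracking_number.startswith("TBA"):
        return None
    rest = tracking_number[3:]
    return int(rest) if rest.isdigit() else None

print(tba_to_package_number("TBA123456789"))  # 123456789
print(tba_to_package_number("1Z999AA10123456784"))  # None (not a TBA number)
```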

Create master table for status column

I have a table that represents a request sent through the frontend:
coupon_fetching_request
---------------------------------------------------------------
request_id | request_time | requested_by | request_status
Above I tried to create a table that addresses the issue.
Here request_status is an integer. It could have values such as the following.
1 : request successful
2 : request failed due to incorrect input data
3 : request failed in otp verification
4 : request failed due to internal server error
That table is very simple, and the status is used to let the frontend know what happened to the sent request. I had a discussion with my team, and the other developers proposed that we should have a status representation table. On the database side we are not going to need this status, but the team said that in the future we may need to show simple output from the database listing the status of all requests. According to the YAGNI principle, I don't think it is a good idea.
Currently I have coded the conversion of the returned request_status value to a descriptive value at the frontend. I tried to convince the team that I could create an enumeration at the business layer to represent the meaning of the status, or add documentation at the frontend and in Java, but failed to convince them.
The table proposed is as follows
coupon_fetching_request_status
---------------------------------------------------
status_id | status_code | status_description
My question is: is it necessary to create a table for such a simple status in similar cases?
I tried to create a simple example to address the problem. In reality the table represents a Discount Coupon Code Request, with the status indicating whether the code was successfully fetched.
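The business-layer enumeration mentioned in the question could be as small as this (Python for illustration; the codes and descriptions follow the list above):

```python
from enum import IntEnum

class RequestStatus(IntEnum):
    """Status codes stored in coupon_fetching_request.request_status."""
    SUCCESSFUL = 1
    INCORRECT_INPUT = 2
    OTP_VERIFICATION_FAILED = 3
    INTERNAL_SERVER_ERROR = 4

DESCRIPTIONS = {
    RequestStatus.SUCCESSFUL: "request successful",
    RequestStatus.INCORRECT_INPUT: "request failed due to incorrect input data",
    RequestStatus.OTP_VERIFICATION_FAILED: "request failed in otp verification",
    RequestStatus.INTERNAL_SERVER_ERROR: "request failed due to internal server error",
}

# Map a raw status value from the database to its description.
print(DESCRIPTIONS[RequestStatus(3)])  # request failed in otp verification
```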
It really depends on your use case.
To start with: in your main table, you are already storing request_status as an integer, which is a good thing (if you were storing the whole description, like 'request successful', that would not be efficient).
The main question is: will you eventually need to display that data in a human-readable format?
If no, then it is probably useless to create a representation table.
If yes, then having a representation table would be a good thing, instead of adding some code in the presentation layer to do the transcodification; let the data live in the database, and the frontend take care of presentation only.
Since this table can be easily created when needed, a pragmatic approach would be to hold on until you have a real need for the representation table.
You should create the reference table in the database. You currently have business logic on the application side, interpreting data stored in the database. This seems dangerous.
What does "dangerous" mean? It means that ad-hoc queries on the database might need to re-implement the logic. That is prone to error.
It means that if you add a reporting front end, then the reports have to re-implement the logic. That is prone to error and a maintenance nightmare.
It means that if you have another developer come along, or another module implemented, then the logic might need to be re-implemented. Red flag.
The simplest solution is to have a reference table to define the official meanings of the codes. The application should use this table (via join) to return the strings. The application should not be defining the meaning of codes stored in the database. YAGNI doesn't apply, because the application is so in need of this information that it implements the logic itself.

Get top 3 results

I have a query using the analysis type count, grouped by type; it returns 12 different groups with varying values.
Would it be possible to get only the 3 groups with the highest count from that query?
The Keen API doesn't (as of October 2015) support this directly, although it is a commonly requested feature. It may be added in the future but there is currently no timeline for that.
The best workaround is to do the sorting and trimming on the client side once the response has been received. This should only take a few lines of code in most programming languages. If you're working from a command line (e.g. via curl) then you could use jq to do it:
curl "https://api.keen.io/3.0/projects/...<insert your query URL>..." > result.json
cat result.json | jq '.result | sort_by(.result) | reverse | .[:3]'
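The same client-side sort-and-trim in Python, assuming a group-by response shape of {"result": [{"type": ..., "result": n}, ...]} (the sample data here is illustrative):

```python
# Illustrative response with one count per group.
response = {
    "result": [
        {"type": "a", "result": 7},
        {"type": "b", "result": 42},
        {"type": "c", "result": 3},
        {"type": "d", "result": 19},
    ]
}

# Sort the groups by their count, descending, and keep the top 3.
top3 = sorted(response["result"], key=lambda g: g["result"], reverse=True)[:3]
print([g["type"] for g in top3])  # ['b', 'd', 'a']
```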
Hope that helps! (Disclosure: I'm a platform engineer at Keen.)

Ignore similar values and not treat as duplicate records

I'm writing a SELECT query in SQL Server and I ran into a problem.
I have two rows like this:
ID | Address | City | Zip
--- | ------------- | -------- | -----
1 | 123 Wash Ave. | New York | 10035
1 | 123 Wash Ave | New York | 10035
I have many addresses that are the same, but some of them differ only by a dot or some other small difference.
They are almost identical, so how can I find all such cases?
Using the UPS Online APIs, our solution was not to correct the error, but to help sort the results so the one that best represents the correct answer comes first.
With the results returned by UPS, we would run various filters against the original source address and each of the returned responses, then use a weighting system to sort the results and present them to our CSR, who selects the most logical "correctly formatted" answer from UPS.
Thus we build a score card from the result set, such as the number of incorrect digits in the ZIP code (which catches fat-fingering).
Another measure removes all punctuation marks and ranks how close the address is then.
Lastly we pass the results through a matrix of standard substitutions [ST for STREET] and do a final ranking.
From all these scores we sort and present the results from most likely to least likely to an individual who then selects the correct answer to save in our database.
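The scoring passes described above might be sketched like this (Python for illustration; the substitution table and the sample values are assumptions, not the actual UPS filter set):

```python
import re

# Illustrative abbreviation table; a real one would be much larger.
SUBSTITUTIONS = {"ST": "STREET", "AVE": "AVENUE", "RD": "ROAD"}

def normalize(address: str) -> str:
    """Upper-case, drop punctuation, and expand standard abbreviations."""
    words = re.sub(r"[^\w\s]", "", address.upper()).split()
    return " ".join(SUBSTITUTIONS.get(w, w) for w in words)

def zip_penalty(zip_a: str, zip_b: str) -> int:
    """Count differing digits (catches fat-fingered ZIP codes)."""
    return sum(a != b for a, b in zip(zip_a, zip_b))

# The two near-duplicate rows from the question normalize identically:
print(normalize("123 Wash Ave.") == normalize("123 Wash Ave"))  # True
print(zip_penalty("10035", "10085"))  # 1
```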
Correcting these errors serves two purposes:
1) We look good to our customers by having the correct address information on the billing (not just close enough).
2) We save on secondary charges from UPS by not being billed for incorrect addresses.

Should I initialize my AUTO_INCREMENT id column to 2^32+1 instead of 0?

I'm designing a new system to store short text messages [sic].
I'm going to identify each message by a unique identifier in the database, and use an AUTO_INCREMENT column to generate these identifiers.
Conventional wisdom says that it's okay to start with 0 and number my messages from there, but I'm concerned about the longevity of my service. If I make an external API, and make it to 2^31 messages, some people who use the API may have improperly stored my identifier in a signed 32-bit integer. At this point, they would overflow or crash or something horrible would happen. I'd like to avoid this kind of foo-pocalypse if possible.
Should I "UPDATE message SET id=2^32+1;" before I launch my service, forcing everyone to store my identifiers as signed 64-bit numbers from the start?
If you want to achieve your goal and avoid the problems that cletus mentioned, the solution is to set your starting value to 2^32+1. There are still plenty of IDs to go, and the value won't fit in a 32-bit integer, signed or otherwise.
Of course, documenting the value's range and providing guidance to your API or data consumers is the only right solution. Someone's always going to try to stick a long into a char and wonder why it doesn't work (always).
What if you provided a set of test suites, or a test service, that used messages in the "high but still valid" range, and persuaded your service users to validate their code with it? Starting at an arbitrary value for defensive reasons seems a little weird to me; providing sanity tests sits right with me.
Actually 0 can be problematic with many persistence libraries. That's because they use it as some sort of sentinel value (a substitute for NULL). Rightly or wrongly, I would avoid using 0 as a primary key value. Convention is to start at 1 and go up. With negative numbers you're likely just to confuse people for no good reason.
If everyone alive on the planet sent one message per second every second non-stop, your counter wouldn't wrap until the year 2050 using 64 bit integers.
Probably just starting at 1 would be sufficient.
(But if you did start at the lower bound, it would extend into the start of 2092.)
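The arithmetic behind those dates, taking roughly 7 billion senders as the world population (an approximation, as in the answer):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
POPULATION = 7_000_000_000  # rough world population; an assumption

# Messages consumed per year if everyone sends one per second:
per_year = POPULATION * SECONDS_PER_YEAR

signed_years = 2**63 // per_year    # starting at 1, the signed 64-bit range
unsigned_years = 2**64 // per_year  # starting at the lower bound, full range

print(signed_years, unsigned_years)  # 41 83 -> roughly 2050 and 2092 from 2008
```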
Why use incrementing IDs? These require locking and will kill any plans for distributing your service over multiple machines. I would use UUIDs. API users will likely store these as opaque character strings, which means you can probably change the scheme later if you like.
If you want to ensure that messages have an order, implement the ordering like a linked list:
---
id: 61746144-3A3A-5555-4944-3D5343414C41
msg: "Hello, world"
next: 006F6F66-0000-0000-655F-444E53000000
prev: null
posted_by: jrockway
---
id: 006F6F66-0000-0000-655F-444E53000000
msg: "This is my second message EVER!"
next: 00726162-0000-0000-655F-444E53000000
prev: 61746144-3A3A-5555-4944-3D5343414C41
posted_by: jrockway
---
id: 00726162-0000-0000-655F-444E53000000
msg: "OH HAI"
next: null
prev: 006F6F66-0000-0000-655F-444E53000000
posted_by: jrockway
(As an aside, if you are actually returning the results as YAML, you can use & and * references instead of just using the IDs as data. Then the client will get the linked-list structure "for free".)
One thing I don't understand is why developers don't grasp that they don't need to expose their AUTO_INCREMENT field. For example, richardtallent mentioned using GUIDs as the primary key. I say go one better: use a 64-bit int for your table ID/primary key, but also use a GUID, or something similar, as your publicly exposed ID.
An example Message table:
Name | Data Type
-------------------------------------
Id | BigInt - Primary Key
Code | Guid
Message | Text
DateCreated | DateTime
Then your data looks like:
Id | Code | Message | DateCreated
-------------------------------------------------------------------------------
1 | 81e3ab7e-dde8-4c43-b9eb-4915966cf2c4 | ....... | 2008-09-25T19:07:32-07:00
2 | c69a5ca7-f984-43dd-8884-c24c7e01720d | ....... | 2007-07-22T18:00:02-07:00
3 | dc17db92-a62a-4571-b5bf-d1619210245a | ....... | 2001-01-09T06:04:22-08:00
4 | 700910f9-a191-4f63-9e80-bdc691b0c67f | ....... | 2004-08-06T15:44:04-07:00
5 | 3b094cf9-f6ab-458e-965d-8bda6afeb54d | ....... | 2005-07-16T18:10:51-07:00
Where Code is what you would expose to the public whether it be a URL, Service, CSV, Xml, etc.
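A sketch of that internal-id/public-code split in Python (uuid4 stands in for the Guid column; MessageStore and its methods are illustrative names, not a real API):

```python
import uuid

class MessageStore:
    """Internal integer ids for joins; opaque public codes for the API."""

    def __init__(self):
        self._next_id = 1
        self._by_code = {}

    def add(self, message: str) -> str:
        code = str(uuid.uuid4())          # publicly exposed identifier
        self._by_code[code] = (self._next_id, message)
        self._next_id += 1                # AUTO_INCREMENT stays private
        return code

    def get(self, code: str) -> str:
        return self._by_code[code][1]

store = MessageStore()
code = store.add("Hello, world")
print(store.get(code))  # Hello, world
```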
Don't want to be the next Twitter, eh? lol
If you're worried about scalability, consider using a GUID (uniqueidentifier) instead.
They are only 16 bytes (twice that of a bigint), but they can be assigned independently on multiple database or BL servers without worrying about collisions.
Since they are random, use NEWSEQUENTIALID() (in SQL Server) or a COMB technique (in your business logic or pre-MSSQL 2005 database) to ensure that each GUID is "higher" than the last one (speeds inserts into your table).
If you start with a number that high, some "genius" programmer will either subtract 2^32 to squeeze it in an int, or will just ignore the first digit (which is "always the same" until you pass your first billion or so messages).