Paymill API get details of all transactions within two timestamps - paymill

How do I get details of all transactions between two timestamps?
I tried the following:
https://api.paymill.com/v2.1/subscriptions?created_at=1378987463&&created_at=1378987463

https://api.paymill.com/v2.1/subscriptions?created_at=timestamp1-timestamp2
Note that your timestamps are the same; in that case you can just use created_at=timestamp.
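For example, to fetch subscriptions created between two (hypothetical) Unix timestamps, both endpoints go into a single created_at parameter separated by a hyphen:
https://api.paymill.com/v2.1/subscriptions?created_at=1378987463-1379073863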

Related

Replacing incorrectly entered dates in sql server

I have run a query in SQL Server. There is a name category, and every category has a date that it started... However, sometimes data was incorrectly entered in the front end, so when I do the data pull it returns two start dates per category when in reality just the earliest date should be present. Is there any SQL code I can throw into this join query that replaces all situations where a category has two dates with the earliest one?
From what I understand, you need to use the MIN() function to get only the earliest entered event when querying your table. You can achieve this by using something similar to the following:
SELECT
categoryName,
MIN(categoryDate) AS earliestDate
FROM Category
GROUP BY categoryName
However, I am not sure this is what you need, since we have no dataset to verify against. If you can explain more clearly what you need to achieve, we can help you better.
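If you need to fold this back into the join query you mentioned, one option (a sketch reusing the Category table and column names from the example above; your real names will differ) is to join the per-category minimum back to the detail rows so only the earliest start date survives:
SELECT c.categoryName, c.categoryDate
FROM Category AS c
INNER JOIN (
    -- earliest start date per category
    SELECT categoryName, MIN(categoryDate) AS earliestDate
    FROM Category
    GROUP BY categoryName
) AS m
    ON m.categoryName = c.categoryName
   AND m.earliestDate = c.categoryDate
Any later, incorrectly entered start dates are then filtered out before you join to the rest of your tables.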

Double or triple timestamp issue

I am using SQL assistant and my data brings in snapshots from a huge database in the form of timestamps. Occasionally the snapshots bring in multiples per hour. The data is correct, multiple snapshots do happen from time to time within an hour, not always but it does happen.
I am bringing this into Spotfire and viewing it by the hour, and when more than one snapshot happens in the hour the data shows as doubled.
I only want to display one per hour, preferably the last (max) timestamp for the hour. For example, for the 7 am hour the data has a snapshot for 7:10 am and one for 7:55 am.
These are correct, but I only want to display the last (max) timestamp, 7:55 am in this case. I can't figure the issue out in Spotfire, so I am leaning towards a fix in SQL. How can I display only one row per hour?
You'd do this similarly to how you'd probably do it in SQL -- using a ranking/rownumber function.
The basic way Rank in Spotfire works is Rank(Order columns, order direction, partitioned columns, tie method)
You need to partition by the combination of Date and Hour, and then sort descending by your timestamp column.
So the code to identify the rows that you want to isolate should be something along the lines of:
Rank([TimestampColumn], "desc", Date([TimestampColumn]), Hour([TimestampColumn]), "ties.method=first")
What you do with it from here depends on how you plan to use the data. For example, you can Limit Data Using Expression and set the code above = 1, which will limit your table accordingly (helpful if you don't want your users to accidentally forget to filter), or you can create a calculated column which turns it into a flag of some form, like here:
If(Rank([TimestampColumn], "desc", Date([TimestampColumn]), Hour([TimestampColumn]), "ties.method=first") = 1, "Latest", "Duplicate")
This allows your users to filter by this property, so they still have the option to look at the extra rows.
Ultimately, though, if you want to only ever see these rows, and have no use for the earlier records, I'd probably do it in SQL, if you have that ability. This reduces the number of rows you have to load into your analytic.
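If you do go the SQL route, here is a sketch of that approach (the Snapshots table and SnapshotTime column are hypothetical placeholders, and the date/hour functions are SQL Server syntax, so adjust for your dialect): it ranks snapshots within each calendar hour and keeps only the latest one.
SELECT *
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (
               PARTITION BY CAST(SnapshotTime AS DATE), DATEPART(HOUR, SnapshotTime)
               ORDER BY SnapshotTime DESC
           ) AS rn  -- 1 = latest snapshot within that hour
    FROM Snapshots AS s
) AS ranked
WHERE rn = 1
With the 7 am example above, only the 7:55 am snapshot survives.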

BigQuery - Why is UNNEST operator not required for pulling transactions data in Google Analytics?

SELECT SUM(totals.totalTransactionRevenue)
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`
WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170701';
Transactions are product-level scope, and one session can have multiple transactions. So a session could hold an array of transactions. In such a case, why is the UNNEST operator not required to run this query?
Thanks.
That's because the field you selected is a simple INTEGER field and not an ARRAY. You can check this by going to the Schema tab of the table. If the field contains an ARRAY, it (or its parent field) should have the type "RECORD - REPEATED".
In the example you have given, you're summing transaction revenue with SUM(totals.totalTransactionRevenue). Note that totals contains aggregated data (as an integer, as the previous post explains), already unnested for you; that's why you don't need UNNEST to read a field under totals.
You are correct in that if you wanted to ask another question from product-level data, which hasn't already been aggregated for you in totals (for example, all of your transaction IDs from yesterday), then you would need to unnest.
Also note that when you UNNEST you'll be duplicating the rows of totals, so be careful when using both in the same query, as you could end up double counting.
This previous answer explains this further, with some examples:
Unnest and totals.timeOnSite (BigQuery and Google Analytics data)
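For contrast, here is a sketch of a product-level query that does need UNNEST (field names follow the public GA sample schema, where hits and hits.product are REPEATED records):
SELECT p.v2ProductName, SUM(p.productRevenue) AS productRevenue
FROM `bigquery-public-data.google_analytics_sample.ga_sessions_*`,
     UNNEST(hits) AS h,
     UNNEST(h.product) AS p
WHERE _TABLE_SUFFIX BETWEEN '20170701' AND '20170701'
GROUP BY p.v2ProductName;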

Yodlee - how to get only new transactions

I am looking for a way to get only 'new' transactions from a specific account item, i.e. only transactions that were posted to the account after I made the previous transaction fetch/search.
For example I have the following scenario:
I have added an item for a consumer. Let's say the consumer has one account item named 'BankAccount1'.
I fetch/search ALL transactions for BankAccount1 and store the transactions locally.
Now I need a way to get only new transactions on a periodic basis, i.e. only transactions that were posted to 'BankAccount1' after the previous fetch/search call. Is it possible to do this, or do I need to get all transactions every time and just skip transactions whose Id is already present locally? If the transaction Id is unique and incremental (is it?), maybe it's possible to save the last fetched transaction Id and next time get only transactions with Id > prevFetchId (what API should I use if this is possible)?
p.s.
I am using the container-based approach with the REST API.
As per your question, I can infer that you are going to store transactions locally in your DB. In that case, Yodlee recommends using Procedural Data Extracts, which let you keep your DB in sync with the Yodlee cloud. You can find more details about it here.
Yodlee recommends passing a date range in the executeUserSearchRequest API to get the transactions for any specific duration, as fetching only new transactions may cause some issues. This is why Yodlee recommends having a few days of overlap, which will help you avoid missing any transactions.
Transaction ID would be unique but it may not be incremental.

Pulling current date queue

I have a view that lists the employee (EmpID), the request number (ReqNo), the date the request was opened (OpenDate), and the date it was moved to the next step in the process (AssignDate). What I am trying to do is get an average of the daily queue size. If EmpID 001 has 20 requests on 1/1/13, then 24 on 1/2/13 and 21 on 1/3/13, the average over 3 days should be 21.66, rounded up to 22. I have the following view:
CREATE VIEW EmpReqs
AS
SELECT [EmpID], [OpenDate], [AssignDate], [ReqID]
FROM [Metrics].[dbo].[Assignments]
WHERE OpenDate BETWEEN '01/01/2013' AND '12/31/2013' AND
[EmpID] IS NOT NULL AND
[ReqNo] NOT LIKE 'M%'
I then wrote a query to pull individual employees' queues per day:
/* First attempt to generate daily queue #s */
SELECT * FROM BLReqs
WHERE [BusLiaison] LIKE 'PN' AND
[OpenDate] <= '11/15/2013' AND
[AssignDate] > '11/15/2013'
Because no one has attempted to pull this information before, I have no way of verifying how accurate the above is. I tried using current dates, since I can see those in our database to compare, but the code doesn't work; nothing is returned when I change the dates to 2014 and run my query.
What is the easiest way to verify that my code is correct, short of manually counting a day's queue?
Can anyone see any issues with the above scripts?
Is there a way to get the above code to work with current dates?
This question is really hard to answer because it is kind of broad and has little information at the same time. I'll try anyway:
Because no one has attempted to pull this information before, I have no way of verifying how accurate the above is.
Try checking the result of this query for a few sampled dates.
I tried using current dates, since I can see those in our database to compare, but the code doesn't work; nothing is returned when I change the dates to 2014 and run my query.
So clearly, the query is not working. You should probably find out why. Run the query for a date for which you know it should return results but doesn't. Remove conditions one by one to see which one is incorrectly removing all rows. This should be enough to identify the bug.
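A sketch of that debugging approach, using the table and column names from your second query: count how many rows each condition keeps when applied on its own, so you can see which one filters everything out (shown here with a 2014 test date).
/* How many rows survive each condition by itself? */
SELECT
    COUNT(*) AS total_rows,
    SUM(CASE WHEN [BusLiaison] LIKE 'PN' THEN 1 ELSE 0 END) AS liaison_matches,
    SUM(CASE WHEN [OpenDate] <= '11/15/2014' THEN 1 ELSE 0 END) AS opened_by_date,
    SUM(CASE WHEN [AssignDate] > '11/15/2014' THEN 1 ELSE 0 END) AS assigned_after_date
FROM BLReqs
Whichever column comes back as zero points at the condition that is removing all of your rows.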
Can anyone see any issues with the above scripts?
No, looks fine. A very simple query. That's why I said that we have too little information. There is some key piece of information missing that allows us to find the bug.
Is there a way to get the above code to work with current dates?
Stop staring at the code and hoping for a revelation. Debug it. Experiment.