Has anyone come up with an elegant way to search data stored on Authorize.net's Customer Information Manager (CIM)?
Based on their XML Guide, there don't appear to be any search capabilities at all. That's a huge shortcoming.
As I understand it, the selling point for CIM is that the merchant doesn't need to store any customer information. They merely store a unique identifier for each and retrieve the data as needed. This may be great from a PCI Compliance perspective, but it's horrible from a flexibility standpoint.
A simple search like "Show me all orders from Texas" suddenly becomes very complicated.
How are the rest of you handling this problem?
The short answer is, you're correct: There is no API support for searching CIM records. And due to the way it is structured, there is no easy way to use CIM alone for searching all records.
To search them in the manner you describe:
Use getCustomerProfileIdsRequest to get all the customer profile IDs you have stored.
For each of the CustomerProfileIds returned by that request, use getCustomerProfileRequest to get the specific record for that client.
Examine each record at that time, looking for the criteria you want, and store the pertinent records in some other structure: a class, a multi-dimensional array, an ADO DataTable, whatever.
Yes, that's onerous. But it is literally the only way to proceed.
The previously mentioned reporting API applies only to transactions, not the Customer Information Manager.
Note that you can collect the kind of data you want at the time of recording a transaction, and as long as you don't make it personally identifiable, you can store it locally.
For example, you could run a request for all your CIM customer profile records, and store the state each customer is from in a local database.
If all you store is the state, then you can work with those records, because nothing ties the state to a specific customer record. Going forward, you could write logic to update the local state store whenever customer profile records are created or updated.
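A rough sketch of that approach in Python (the get_profile_ids / get_profile wrappers and the response shape are hypothetical stand-ins for the getCustomerProfileIdsRequest / getCustomerProfileRequest calls; the real SDK calls will differ):

import sqlite3

# Hypothetical thin wrappers around the CIM XML API, not a real SDK interface.
from my_cim_client import get_profile_ids, get_profile

conn = sqlite3.connect("local_stats.db")
conn.execute("CREATE TABLE IF NOT EXISTS profile_states (state TEXT)")

for profile_id in get_profile_ids():       # getCustomerProfileIdsRequest
    profile = get_profile(profile_id)      # getCustomerProfileRequest
    state = profile["bill_to"]["state"]    # assumed response shape
    # Store only the state: nothing ties it back to a specific customer.
    conn.execute("INSERT INTO profile_states (state) VALUES (?)", (state,))

conn.commit()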
I realize this probably isn't what you wanted to hear, but them's the breaks.
This is likely to be VERY slow and inefficient, but here is one method: request an array of all the customer IDs, and then check each one for the field you want... in my case I wanted a search-by-email function in PHP:
$cimData = new AuthorizeNetCIM;

// Fetch the full list of customer profile IDs.
$profileIds = $cimData->getCustomerProfileIds();
$array = $profileIds->xpath('ids');
$authnet_cid = null;

/*
this seems ridiculously inefficient...
gotta be a better way to look up a customer based on email
*/
foreach ($array[0]->numericString as $ids) { // loop over every stored profile ID
    $response = $cimData->getCustomerProfile($ids); // fetch the individual profile to check for a match
    // put the kettle on
    if ($response->xml->profile->email == $email) {
        $authnet_cid = $ids;
        $oldCustomerProfile = $response->xml->profile;
        break; // match found, no need to keep looping
    }
}
// now that the tea is ready (cream, sugar, biscuits), you might have your search result!
CIM's primary purpose is to take PCI compliance issues out of your hands by allowing you to store customer data, including credit cards, on their server and then access it using only a unique ID. If you want to do reporting, you will need to keep track of that kind of information yourself. Since there are no PCI compliance issues with storing customer addresses, etc., it's realistic to do this yourself. Basically, this is the kind of thing that needs to get fleshed out during the design phase of the project.
They do have a new reporting API which may offer you this functionality. If it does not, it's very possible it will be offered in the near future, as Authnet is currently actively rolling out lots of new features to their APIs.
I've watched a tutorial about DDD which says that if an aggregate root SnackMachine has more than 30 child elements, the child elements should be in a separate aggregate. For example, SnackMachine has lots of PurchaseLogs (more than 30), so it is better for PurchaseLog to be in a separate aggregate. Why is that?
The reason for limiting the overall size of an aggregate is because you always load the full aggregate into memory and you always store the full aggregate transactionally. A very large aggregate would cause technical problems.
That said, there is no such "30 child elements" rule in aggregate design, and it sounds arbitrary as a rule. For example, having fewer very large child elements could be technically worse than having 30 very light child elements. A good way of storing aggregates is as JSON documents, given that you'll always read and write the documents as atomic operations. If you think of it this way, you'll realise that an aggregate design that implies a very large or even ever-growing child collection will eventually cause problems. A PurchaseLog sounds like an ever-growing collection.
The second part of the rule that says "put it in a separate aggregate" is also not correct. You don't create aggregates because you need to store some data and it doesn't fit into an existing aggregate. You create aggregates because you need to implement some business logic and this business logic will need some data, so you put both things together in an aggregate.
So, although the points you raise in your question are things to take into consideration when designing aggregates to avoid technological problems, I'd suggest you focus your attention on the actual responsibilities of the aggregate.
In your example, what are the responsibilities of the SnackMachine? Does it really need the (full) list of PurchaseLogs? What operations will the SnackMachine expose? Let's say that it exposes PurchaseProduct(productId) and LoadProduct(productId, quantity). To execute its business logic, this aggregate would need a list of products and keep count of their available quantity, but it wouldn't need to store the purchase log. Instead, at every Purchase, it could publish an event ProductPurchased(SnackMachineId, ProductId, Date, AvailableQuantity). Then external systems could subscribe to this event. One subscriber could register the PurchaseLog for reporting purposes and another subscriber could send someone to reload the machine when the stock was lower than X.
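A minimal C# sketch of that event-publishing shape (the IEventPublisher abstraction and all member names here are illustrative assumptions, not part of the original design):

using System;
using System.Collections.Generic;

public record ProductPurchased(int SnackMachineId, int ProductId, DateTime Date, int AvailableQuantity);

public interface IEventPublisher
{
    void Publish(object domainEvent);
}

public class SnackMachine
{
    private readonly Dictionary<int, int> _stock = new();
    private readonly IEventPublisher _publisher;

    public int Id { get; }

    public SnackMachine(int id, IEventPublisher publisher)
    {
        Id = id;
        _publisher = publisher;
    }

    public void LoadProduct(int productId, int quantity)
    {
        // Track available quantity; no purchase history is kept on the aggregate.
        _stock[productId] = _stock.GetValueOrDefault(productId) + quantity;
    }

    public void PurchaseProduct(int productId)
    {
        if (_stock.GetValueOrDefault(productId) <= 0)
            throw new InvalidOperationException("Product out of stock.");

        _stock[productId]--;

        // Announce the purchase; subscribers can build the PurchaseLog or trigger restocking.
        _publisher.Publish(new ProductPurchased(Id, productId, DateTime.UtcNow, _stock[productId]));
    }
}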
If PurchaseLog is not its own aggregate then it implies that it can only be retrieved or added as part of the child collection of SnackMachine.
Therefore, each time you want to add a PurchaseLog, you'd retrieve the SnackMachine with its child PurchaseLogs and add the PurchaseLog to that collection, then save the changes on your unit of work.
Did you really need to retrieve 30+ purchase logs which are redundant for the purpose of the use case of creating a new purchase log?
Application Layer - Option 1 (PurchaseLog is an owned entity of SnackMachine)
// Retrieve the snack machine from repo, along with child purchase logs
// Assuming 30 logs, this would retrieve 31 entities from the database that
// your unit of work will start tracking.
SnackMachine snackMachine = await _snackMachineRepository.GetByIdAsync(snackMachineId);
// Ask snack machine to add a new purchase log to its collection
snackMachine.AddPurchaseLog(date, quantity);
// Update
await _unitOfWork.SaveChangesAsync();
Application Layer - Option 2 (PurchaseLog is an aggregate root)
// Get a snackmachine from the repo to make sure that one exists
// for the provided id. (Only 1 entity retrieved);
SnackMachine snackMachine = await _snackMachineRepository.GetByIdAsync(snackMachineId);
// Create purchase log
PurchaseLog purchaseLog = new(
snackMachine,
date,
quantity);
await _purchaseLogRepository.AddAsync(purchaseLog);
await _unitOfWork.SaveChangesAsync();
PurchaseLog - option 2
class PurchaseLog
{
    private readonly int _snackMachineId;
    private readonly DateTime _date;
    private readonly int _quantity;

    public PurchaseLog(
        SnackMachine snackMachine,
        DateTime date,
        int quantity)
    {
        _snackMachineId = snackMachine?.Id ?? throw new ArgumentNullException(nameof(snackMachine));
        _date = date;
        _quantity = quantity;
    }
}
The second option follows the contours of your use case more accurately and also results in far less I/O against the database.
I am working on a scenario where I have invoices available in my Data Lake Store.
Invoice example (extremely simplified):
{
    "business_guid": "b4f16300-8e78-4358-b3d2-b29436eaeba8",
    "ingress_timestamp": 1523053808,
    "client": {
        "name": "Jake",
        "age": 55
    },
    "transactions": [
        {
            "name": "peanut",
            "amount": 100
        },
        {
            "name": "avocado",
            "amount": 2
        }
    ]
}
All invoices are stored in ADLS and can be queried. But I would like to provide access to the same data inside an ADLA (Azure Data Lake Analytics) database as well.
I am not an expert on unstructured data: I have an RDBMS background. Taking that into consideration, I can only think of 2 possible scenarios:
Two or three tables: invoice, client (which could be folded into invoice), and transaction. In this scenario, I would have to create an invoice ID to be able to build relationships between those tables.
One table: client info could be denormalized into the invoice data, and transactions could (maybe) be defined as a SQL.ARRAY<SQL.MAP<string, object>>.
I have three main questions:
What is the correct way of doing this? Solution 1 seems much better structured.
If I go with solution 1, how do I properly create an ID (probably GUID)? Is it acceptable to require ID creation when working with ADL?
Is there another solution I am missing here?
Thanks in advance!
This type of question is a bit like asking whether you prefer your sauce on the pasta or next to the pasta :). The answer is: it depends.
To answer your 3 questions more seriously:
#1 has the benefit of being normalized, which works well if you want to operate on the data separately (e.g., just clients, just invoices, just transactions), want the benefits of normalization, need the right indexing, and don't want to be limited by row-size limits (e.g., your array of maps would need to fit into a row). So I would recommend that approach, unless your transaction data is always small, you always access the data together, and you mainly search on the column data.
U-SQL per se has no understanding of the hierarchy of a JSON document. Thus, you would have to write an extractor that turns your JSON into rows in a way that either gives you the correlation of the parent to the child (normally done by stepwise downward navigation with CROSS APPLY), using the key value of the parent data item as the foreign key, or has the extractor generate the key (as an int or GUID).
There are some sample JSON extractors on the U-SQL GitHub site (start at http://usql.io) that can get you started with the JSON-to-rowset conversion. Note that you will probably want to optimize the extraction at some point to be JSON-reader based, so that you can process larger documents without loading them into memory.
I am new to redis and I am trying to figure out how redis can be used.
So please let me know if this is a right way to build an application.
I am building an application which has only one data source. I am planning to run a job on a nightly basis to get the data into a file.
Now I have a front end application, that needs to render this data in different formats.
Example application use case
Download applications processed by a university on a nightly basis.
Display how many applications got approved or rejected.
Display number of applications by state.
Let user search for an application by application id.
Instead of using a relational database like postgres/mysql, I am thinking about using redis. I am planning to store data in the following ways:
Application id -> Application details
State -> List of application ids
Approved -> List of application ids (By date ?)
Declined -> List of application ids (By date ?)
Is this the correct way to store data in redis?
Also, if someone queries for all applications in California for a certain date, I will be able to pull the application IDs in one call, but to get the details for each application, do I need to make another request?
Word of caution:
Instead of using postgres/mysql like relational database, I am thinking about using redis.
Why? Redis is an amazing database, but don't use the right hammer for the wrong nail. Use Redis if you need real-time performance at scale, but don't try to make it replace an RDBMS if that's what you actually need.
Answer:
Fetching data efficiently from Redis to answer your queries depends on how you'll be storing it. Therefore, to determine the "correct" data model, you first need to define your queries. The data model you proposed is just a description of the data - it doesn't really say how you're planning to store it in Redis. Without more details about the queries, I would store the data as follows:
Store the application details in a Hash (e.g. app:<id>)
Store the application IDs per state in a Set (e.g. apps:<state>)
Store the approved/rejected applications in two Sorted Sets, the id being the member and the date being the score
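A minimal Python sketch of that layout using the redis-py client (key names, fields, and sample values are illustrative):

import redis

r = redis.Redis()

def store_application(app_id, details, state, status, date_ts):
    # Application details in a Hash, e.g. app:<id>
    r.hset(f"app:{app_id}", mapping=details)
    # Per-state index in a Set, e.g. apps:<state>
    r.sadd(f"apps:{state}", app_id)
    # Approved/rejected index in a Sorted Set, scored by date (Unix timestamp)
    r.zadd(f"apps:{status}", {app_id: date_ts})

store_application(42, {"name": "Jane Doe", "state": "CA"}, "CA", "approved", 1523053808)

# All approved application IDs in a given date range, in one call:
ids = r.zrangebyscore("apps:approved", 1523000000, 1523999999)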
Also if someone queries for all applications in california for a certain date, I will be able to pull application ids in one call but to get details for each application, do I need to make another request?
Again, that depends on the data model but you can use Lua scripts to embed this logic and execute it in one call to the database.
First of all, you can use a Hash to store structured data. With Sorted Sets (ZSETs) and plain Sets you can create indexes for ordered or unordered access. (Depending on your requirements, of course. Make a list of how you want to access your data.)
It is possible to get all the data for an index as JSON in one go with a simple Redis script (this example uses an unordered Set):
local bulkToTable = function(bulk)
    local retTable = {}
    for index = 1, #bulk, 2 do
        local key = bulk[index]
        local value = bulk[index + 1]
        retTable[key] = value
    end
    return retTable
end

local functionSet = redis.call("SMEMBERS", "app:functions")
local returnObj = {}
for index = 1, #functionSet, 1 do
    returnObj[index] = bulkToTable(redis.call("HGETALL", "app:function:" .. functionSet[index]))
    returnObj[index]["functionId"] = functionSet[index]
end
return cjson.encode(returnObj)
More information about Redis scripts: http://www.redisgreen.net/blog/intro-to-lua-for-redis-programmers/
I'm new to Backbone.js. I'm intrigued by the idea that you can just supply a URL to a collection and then proceed to create, update, delete, and get models from that collection and it handle all the interaction with the API.
In the small task-management sample applications and numerous demos I've seen on the web, collection.fetch() is used to pull down all models from the server and then do something with them. However, more often than not, in a real application you don't want to pull down hundreds of thousands or even millions of records by issuing a GET to the API.
Using the baked-in collection.sync method, how can I specify parameters to GET specific record sets? For example, I may want to GET records with a date of 2/1/2014, or GET records owned by a specific user ID.
In this question, collection.find is used to do this, but does that still pull down all records to the client first and then "find" them, or does the collection.sync method know to specify arguments when doing a GET to the server?
You do use fetch, but you provide options as seen in collection.fetch([options]).
So for example to obtain the one model where id is myIDvar:
collection.fetch({
    data: { id: myIDvar },
    success: function (collection, response, options) {
        // do a little dance;
    }
});
My offhand recollection is that find, findWhere, and where all operate on models that have already been downloaded, with the filtering taking place on the client. I believe with fetch the filtering takes place on the server side.
You can implement some kind of pagination on the server side and update your collection with a limited number of records. That way, all your data will stay up to date with the backend.
You can do it by overriding the fetch method with your own implementation, or by specifying params.
For example:
collection.fetch({data: {page: 3}})
You can also use the findWhere method here:
collection.findWhere(attributes)
I'm optimizing the memory load (~2GB, offline accounting and analysis routine) of this line:
l2 = Photograph.objects.filter(**(movie.get_selectors())).values()
Is there a way to convince Django to skip certain columns when fetching values()?
Specifically, the routine obtains all rows of the table matching certain criteria (the DB is optimized and performs the query very quickly), but the result is a bit too much for Python to handle: there is a long string referenced in each row, storing the URLs for thumbnails.
I only really need three fields from each row, but if all the fields are included, each row suddenly consumes about 5 kB, which sadly pushes the RAM to the limit.
The values(*fields) function allows you to specify which fields you want.
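For example (the three field names here are illustrative stand-ins for the ones you actually need):

l2 = Photograph.objects.filter(**movie.get_selectors()).values('id', 'title', 'created_at')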
Check out the QuerySet method only. When you declare that you only want certain fields to be loaded immediately, the QuerySet manager will not pull in the other fields on your object until you try to access them.
If you have to deal with ForeignKeys that must also be pre-fetched, then also check out select_related.
The two links above to the Django documentation have good examples that should clarify their use.
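A short sketch of both, again with illustrative field names and a hypothetical camera ForeignKey:

# Load only the needed columns; every other field is deferred until first access.
photos = Photograph.objects.filter(**movie.get_selectors()).only('id', 'title', 'created_at')

# Follow a ForeignKey in the same query to avoid one extra query per row.
photos = (Photograph.objects
          .filter(**movie.get_selectors())
          .select_related('camera')
          .only('id', 'title', 'camera__model'))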
Take a look at the Django Debug Toolbar. It comes with a debugsqlshell management command that lets you see the SQL queries being generated, along with the time taken, as you play around with your models in a Django/Python shell.