Creating Customer on Balanced Payments

I am confused by the Balanced Payments documentation, specifically for creating customers. The Balanced docs say to create a customer with this code:
$marketplace = Balanced\Marketplace::mine();
$customer = $marketplace->customers->create(array(
    'address' => array(
        'postal_code' => '48120',
    ),
    'dob_month' => '7',
    'dob_year' => '1963',
    'name' => 'Henry Ford',
));
The goal is to get a JSON response like this:
{
    "customers": [
        {
            "address": {
                "postal_code": "48120",
                //more key -> value pairs
            },
            //more key -> value pairs
            "href": "/customers/CU3SSJgvA5Z69kt05MusbPeE",
        }
    ]
}
The problem that I am having is that I cannot find any reference as to how to send the info to Balanced. Do I use balanced.js to tokenize it the same way I tokenize a credit card?

Take a look at https://docs.balancedpayments.com/1.1/api/customers/#create-a-customer . You would use one of the clients, such as the Python or PHP clients found at https://docs.balancedpayments.com/1.1/overview/getting-started/#client-libraries .
Balanced.js is just for card tokenization to aid with PCI compliance -- using it, sensitive card information is posted directly to Balanced and never has to touch your own servers.
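For completeness, here is a rough sketch of the server-side half in PHP, assuming the 1.1 balanced-php client is installed and autoloaded per its README. It is untested, and the card-association call is taken from the 1.1 docs rather than verified against a live marketplace:

// Sketch only: assumes the balanced-php 1.1 client; not verified against a live marketplace.
Balanced\Settings::$api_key = 'YOUR_SECRET_API_KEY';

// 1. Create the customer server-side (same call as in your question).
$customer = Balanced\Marketplace::mine()->customers->create(array(
    'address' => array('postal_code' => '48120'),
    'dob_month' => '7',
    'dob_year' => '1963',
    'name' => 'Henry Ford',
));

// 2. The card href comes back from balanced.js in the browser; only this
//    token-like href ever reaches your server, never the raw card number.
$card_href = $_POST['card_href']; // e.g. "/cards/CC..." (hypothetical form field)

// 3. Attach the tokenized card to the customer.
$card = Balanced\Card::get($card_href);
$card->associateToCustomer($customer);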


GROQ: Query one-to-many relationship with parameter as query input

I have a blog built in NextJS, backed by Sanity. I want to start tagging posts with tags/categories.
Each post may have many categories.
Category is a reference on post:
defineField({
  name: 'category',
  title: 'Category',
  type: 'array',
  of: [
    {
      type: 'reference',
      to: [
        {
          type: 'category',
        },
      ],
    },
  ],
}),
This is my GROQ query:
*[_type == "post" && count((category[]->slug.current)[@ in ['dogs']]) > 0] {
  _id,
  title,
  date,
  excerpt,
  coverImage,
  "slug": slug.current,
  "author": author->{name, picture},
  "categories": category[]-> {name, slug}
}
The above works when it is hardcoded, but swapping out 'dogs' for $slug, for example, causes the query to fail (where $slug is a provided param):
*[_type == "post" && count((category[]->slug.current)[@ in [$slug]]) > 0]
{
$slug: 'travel'
}
How do I make the above dynamic?
I can't believe it. Rookie mistake. I needed to pay more attention in the Sanity IDE. (To be fair there was a UI bug that hid the actual issue)
The param should not contain the $. E.g. the following works in the GROQ IDE:
{
slug: 'travel'
}
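For completeness, the same rule applies when running the query from code: the query keeps the $slug placeholder and the params object uses the bare key. A minimal sketch assuming @sanity/client (project id, dataset, and API version are placeholder values):

import {createClient} from '@sanity/client'

const client = createClient({
  projectId: 'your-project-id', // hypothetical values
  dataset: 'production',
  apiVersion: '2023-01-01',
  useCdn: true,
})

const query = `*[_type == "post" && count((category[]->slug.current)[@ in [$slug]]) > 0]{
  _id,
  title,
  "slug": slug.current,
  "categories": category[]->{name, slug}
}`

// $slug in the query, plain "slug" in the params object:
const posts = await client.fetch(query, {slug: 'travel'})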

Google Docs API for creating invoice containing table of variable number of rows

I have a template file for my invoice with a table containing a sample row, but I want to add more rows dynamically based on a given array's size, and write the cell values from the array...
Template's photo
I've been struggling for almost 3 days now.
Is there any easy way to accomplish that?
Here's the template file: Link to the Docs file(template)
And here's a few sample arrays of input data to be replaced in the Template file:
[
    [
        "Sample item 1s",
        "Sample Quantity 1",
        "Sample price 1",
        "Sample total 1"
    ],
    [
        "Sample item 2",
        "Sample Quantity 2",
        "Sample price 2",
        "Sample total 2"
    ],
    [
        "Sample item 3",
        "Sample Quantity 3",
        "Sample price 3",
        "Sample total 3"
    ]
]
Now, the length of the parent array can vary depending on the number of items in the invoice, and that's the only problem I'm struggling with.
And... yeah, this is a duplicate question. I've found another question on the same topic, but looking at the answers and comments, everyone is saying that they don't understand the question, whereas it looks perfectly clear to me.
Google Docs Invoice template with dynamically items row from Google Sheets
I think the person who asked that question has already given up on it. :(
By the way, I am using the PHP API (Google API Client Library for PHP), and my code for replacing dummy text in a Google Docs document with the actual data is given below:
public function replaceTexts(array $replacements, string $document_id) {
    # code...
    $req = new Docs\BatchUpdateDocumentRequest();
    // var_dump($replacements);
    // die();
    foreach ($replacements as $replacement) {
        $target = new Docs\SubstringMatchCriteria();
        $target->text = "{{" . $replacement["targetText"] . "}}";
        $target->setMatchCase(false);
        $req->setRequests([
            ...$req->getRequests(),
            new Docs\Request([
                "replaceAllText" => [
                    "replaceText" => $replacement["newText"],
                    "containsText" => $target
                ]
            ]),
        ]);
    }
    return $this->docs_service->documents->batchUpdate(
        $document_id,
        $req
    );
}
A possible solution would be the following:
1. First prep the document by removing every row from the table apart from the title.
2. Get the full document tree from the Google Docs API. This is a simple call with the document id:
$doc = $service->documents->get($documentId);
3. Traverse the document object returned to get to the table and then find the location of the right cell. This could be done by looping through the elements in the body object until one with the right table field is found. Note that this may not necessarily be the first one, since in your template the section with the {{CustomerName}} placeholder is also a table. So you may have to find a table whose first cell has a text value of "Item".
4. Add a new row to the table. This is done by creating a request with the shape:
[
    'insertTableRow' => [
        'tableCellLocation' => [
            'rowIndex' => 1,
            'columnIndex' => 1,
            'tableStartLocation' => [
                'index' => 177
            ]
        ]
    ]
]
The tableStartLocation->index element is the start index of the table the row is being inserted into, i.e. body->content[i]->table->startIndex. Send the request.
5. Repeat steps 2 and 3 to get the updated $doc object, and then access the newly created cell, i.e. body->content[i]->table->tableRows[j]->tableCells[k]->content->paragraph->elements[l]->startIndex.
6. Send a request to update the text content of the cell at the location of the startIndex from step 5, i.e.:
[
    'insertText' => [
        'location' => [
            'index' => 206,
        ],
        'text' => 'item_1'
    ]
]
7. Repeat steps 5 and 6, but for the next cell. Note that after each update you need to fetch an updated version of the document object because the indexes change after inserts.
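To tie the steps together, here is a rough PHP sketch using the same $this->docs_service and Docs\* classes as in the question (the appendInvoiceRows name is made up, and $rows is the nested array from the question). It is untested and deliberately naive, so treat it as an outline of the steps above rather than working code:

public function appendInvoiceRows(array $rows, string $document_id) {
    foreach ($rows as $row) {
        // Steps 2-3: fetch a fresh copy of the document and locate the items table.
        // Naive: takes the last table in the body; a robust version would check
        // that the first cell of the table reads "Item".
        $doc = $this->docs_service->documents->get($document_id);
        $table_element = null;
        foreach ($doc->getBody()->getContent() as $element) {
            if ($element->getTable() !== null) {
                $table_element = $element;
            }
        }

        // Step 4: append an empty row below the current last row.
        $row_count = count($table_element->getTable()->getTableRows());
        $req = new Docs\BatchUpdateDocumentRequest(['requests' => [
            new Docs\Request([
                'insertTableRow' => [
                    'tableCellLocation' => [
                        'tableStartLocation' => ['index' => $table_element->getStartIndex()],
                        'rowIndex' => $row_count - 1,
                    ],
                    'insertBelow' => true,
                ],
            ]),
        ]]);
        $this->docs_service->documents->batchUpdate($document_id, $req);

        // Steps 5-7: re-fetch, find the new row's cells, and write the values.
        // Cells are written right to left so the earlier startIndex values stay
        // valid within this single batch.
        $doc = $this->docs_service->documents->get($document_id);
        foreach ($doc->getBody()->getContent() as $element) {
            if ($element->getTable() !== null) {
                $table_element = $element;
            }
        }
        $cells = $table_element->getTable()->getTableRows()[$row_count]->getTableCells();
        $text_requests = [];
        foreach (array_reverse($row, true) as $col => $value) {
            $index = $cells[$col]->getContent()[0]->getParagraph()->getElements()[0]->getStartIndex();
            $text_requests[] = new Docs\Request([
                'insertText' => [
                    'location' => ['index' => $index],
                    'text' => $value,
                ],
            ]);
        }
        $req = new Docs\BatchUpdateDocumentRequest(['requests' => $text_requests]);
        $this->docs_service->documents->batchUpdate($document_id, $req);
    }
}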
To be honest, this approach is pretty cumbersome, and it's probably more efficient to insert all the data into a spreadsheet and then embed the spreadsheet into your Docs document. Information on that can be found here: How to insert an embedded sheet via Google Docs API?
As a final note, I created a copy of your template and used the "Try this method" feature in the API documentation to validate my approach so some of the PHP syntax may be a bit off, but I hope you get the general idea.

ReferenceArrayInput usage with relationships on React Admin

I have followed the doc for the ReferenceArrayInput (https://marmelab.com/react-admin/Inputs.html#common-input-props) but it does not seem to be working with relationship fields.
For example, I have this many-to-many relation for my Users (serialized version):
Coming from (raw response from my API):
I have set up the ReferenceArrayInput as follows:
<ReferenceArrayInput source="profiles" reference="profiles">
    <SelectArrayInput optionText="label" />
</ReferenceArrayInput>
I think it's making the appropriate calls:
But here is my result:
Any idea what I'm doing wrong?
Thanks in advance for your help!
According to the docs, ReferenceArrayInput expects a source prop pointing to an array of ids (an array of primitive values), not an array of objects with an id. It looks like you are already transforming the raw response from your API, so if you transform it a bit more, mapping [{id}] to [id], it should work.
If other parts of your app expect profiles to be an array of objects, just create a new entry like profilesIds or _profiles.
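For example, the reshaping could live in a custom dataProvider right where the API response is mapped to records (a sketch; raw stands for whatever your API returned):

// Sketch: turn [{ id: 1 }, { id: 2 }] into [1, 2] for ReferenceArrayInput,
// keeping the original objects under another key if the rest of the app needs them.
const record = {
  ...raw,
  _profiles: raw.profiles,
  profiles: raw.profiles.map((p) => p.id),
};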
As gstvg said, ReferenceArrayInput expects an array of primitive values, not an array of objects.
If your current record is like below:
{
  "id": 1,
  "tags": [
    { id: 'programming', name: 'Programming' },
    { id: 'lifestyle', name: 'Lifestyle' }
  ]
}
And you have a resource /tags, which returns all tags like:
[
  { id: 'programming', name: 'Programming' },
  { id: 'lifestyle', name: 'Lifestyle' },
  { id: 'photography', name: 'Photography' }
]
Then you can do something like this (it will select the tags of the current record):
<ReferenceArrayInput
  reference="tags"
  source="tags"
  parse={(value) => value && value.map((v) => ({ id: v }))}
  format={(value) => value && value.map((v) => v.id)}
>
  <AutocompleteArrayInput />
</ReferenceArrayInput>

Is there a way to use the graphLookup aggregation pipeline stage for arrays?

I am currently working on an application that uses MongoDB as the data repository. I am mainly concerned with the $graphLookup query to establish links between different people, based on what flights they took. My documents contain an array field that in turn contains key-value pairs. I need to establish the links based on one of the key:value pairs of that array.
I have already tried some aggregation pipeline queries with $graphLookup as one of the stages and they have all worked fine. But now that I am trying to use it with an array, I am drawing a blank.
Below is the array field from the first document:
"movementSegments":[
{
"carrierCode":"MO269",
"departureDateTimeMillis":1550932676000,
"arrivalDateTimeMillis":1551019076000,
"departurePort":"DOH",
"arrivalPort":"LHR",
"departurePortText":"HAMAD INTERNATIONAL AIRPORT",
"arrivalPortText":"LONDON HEATHROW",
"serviceNameText":"",
"serviceKey":"BA007_1550932676000",
"departurePortLatLong":"25.273056,51.608056",
"arrivalPortLatLong":"51.4706,-0.461941",
"departureWeeklyTemporalSpatialWindow":"DOH_8",
"departureMonthlyTemporalSpatialWindow":"DOH_2",
"arrivalWeeklyTemporalSpatialWindow":"LHR_8",
"arrivalMonthlyTemporalSpatialWindow":"LHR_2"
}
]
The other document has the below field:
"movementSegments":[
{
"carrierCode":"MO269",
"departureDateTimeMillis":1548254276000,
"arrivalDateTimeMillis":1548340676000,
"departurePort":"DOH",
"arrivalPort":"LHR",
"departurePortText":"HAMAD INTERNATIONAL AIRPORT",
"arrivalPortText":"LONDON HEATHROW",
"serviceNameText":"",
"serviceKey":"BA003_1548254276000",
"departurePortLatLong":"25.273056,51.608056",
"arrivalPortLatLong":"51.4706,-0.461941",
"departureWeeklyTemporalSpatialWindow":"DOH_4",
"departureMonthlyTemporalSpatialWindow":"DOH_1",
"arrivalWeeklyTemporalSpatialWindow":"LHR_4",
"arrivalMonthlyTemporalSpatialWindow":"LHR_1"
},
{
"carrierCode":"MO270",
"departureDateTimeMillis":1548254276000,
"arrivalDateTimeMillis":1548340676000,
"departurePort":"DOH",
"arrivalPort":"LHR",
"departurePortText":"HAMAD INTERNATIONAL AIRPORT",
"arrivalPortText":"LONDON HEATHROW",
"serviceNameText":"",
"serviceKey":"BA003_1548254276000",
"departurePortLatLong":"25.273056,51.608056",
"arrivalPortLatLong":"51.4706,-0.461941",
"departureWeeklyTemporalSpatialWindow":"DOH_4",
"departureMonthlyTemporalSpatialWindow":"DOH_1",
"arrivalWeeklyTemporalSpatialWindow":"LHR_4",
"arrivalMonthlyTemporalSpatialWindow":"LHR_1"
}
]
And I am running the below query:
db.person_events.aggregate([
    { $match: { eventId: "22446688" } },
    {
        $graphLookup: {
            from: 'person_events',
            startWith: '$movementSegments.carrierCode',
            connectFromField: 'carrierCode',
            connectToField: 'carrierCode',
            as: 'carrier_connections'
        }
    }
])
The above query creates an array field in the document, but there are no values in it. As per my expectation, both documents should get linked based on the carrier code.
Just to be clear about the query: the documents contain an eventId field, and the $match stage returns one document to me.
Well, I don't know how I missed it, but here is the solution to my problem, which gives me the required results:
db.person_events.aggregate([
    { $match: { eventId: "22446688" } },
    {
        $graphLookup: {
            from: 'person_events',
            startWith: '$movementSegments.carrierCode',
            connectFromField: 'movementSegments.carrierCode',
            connectToField: 'movementSegments.carrierCode',
            as: 'carrier_connections'
        }
    }
])
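Not required for the fix, but the standard $graphLookup options maxDepth and depthField (both documented MongoDB options) can keep the traversal and the output in check; a sketch:

db.person_events.aggregate([
    { $match: { eventId: "22446688" } },
    {
        $graphLookup: {
            from: 'person_events',
            startWith: '$movementSegments.carrierCode',
            connectFromField: 'movementSegments.carrierCode',
            connectToField: 'movementSegments.carrierCode',
            as: 'carrier_connections',
            maxDepth: 0,           // stop after the first lookup pass (no recursion)
            depthField: 'hops'     // records how many hops away each match was
        }
    }
])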

ElasticSearch analyzed fields

I'm building my search but need to analyze one field with different analyzers. My problem is that for one field I need an analyzer for stemming (snowball) and also one that keeps the full value as a single token (keyword). I can get this to work with the following index settings:
curl -X PUT "http://localhost:9200/$IndexName/" -d '{
    "settings": {
        "analysis": {
            "analyzer": {
                "analyzer1": {
                    "type": "custom",
                    "tokenizer": "keyword",
                    "filter": [ "standard", "lowercase", "stop", "snowball", "my_synonyms" ]
                }
            },
            "filter": {
                "my_synonyms": {
                    "type": "synonym",
                    "synonyms_path": "synonyms.txt"
                }
            }
        }
    },
    "mappings": {
        "product": {
            "properties": {
                "title": {
                    "type": "string",
                    "search_analyzer": "analyzer1",
                    "index_analyzer": "analyzer1"
                }
            }
        }
    }
}';
The problem comes when searching on a single word in the title field. If it's populated with "The Cat in the Hat", it gets stored as the single token "The Cat in the Hat", but if I search for cats I get nothing returned.
Is this even possible to accomplish or do I need to have 2 separate fields and analyze one with keyword and the other with snowball?
I'm using NEST in VB code to index the data, if that matters.
Thanks
Robert
You can apply two different analyzers to the same field using the fields property (previously known as multi fields).
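In raw index terms, the mapping that the NEST calls below produce looks roughly like this (a sketch for Elasticsearch 1.x; the bar field and the stemmed suffix are just the illustrative names used in the examples):

"mappings": {
    "foo": {
        "properties": {
            "bar": {
                "type": "string",
                "analyzer": "keyword",
                "fields": {
                    "stemmed": {
                        "type": "string",
                        "analyzer": "snowball"
                    }
                }
            }
        }
    }
}

Queries can then target bar for exact (keyword) matches and bar.stemmed for stemmed matches.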
My VB.NET is a bit rusty, so I hope you don't mind the C# examples. If you're using the latest code from the dev branch, Fields was just added to each core mapping descriptor so you can now do this:
client.Map<Foo>(m => m
    .Properties(props => props
        .String(s => s
            .Name(o => o.Bar)
            .Analyzer("keyword")
            .Fields(fs => fs
                .String(f => f
                    .Name(o => o.Bar.Suffix("stemmed"))
                    .Analyzer("snowball")
                )
            )
        )
    )
);
Otherwise, if you're using NEST 1.0.2 or earlier (which you likely are), you have to accomplish this via the older multi field type way:
client.Map<Foo>(m => m
    .Properties(props => props
        .MultiField(mf => mf
            .Name(o => o.Bar)
            .Fields(fs => fs
                .String(s => s
                    .Name(o => o.Bar)
                    .Analyzer("keyword"))
                .String(s => s
                    .Name(o => o.Bar.Suffix("stemmed"))
                    .Analyzer("snowball"))
            )
        )
    )
);
Both ways are supported by Elasticsearch and will do the exact same thing: apply the keyword analyzer to the primary bar field and the snowball analyzer to the bar.stemmed field. stemmed, of course, is just the suffix I chose in these examples; you can use whatever suffix name you desire. In fact, you don't need to add a suffix at all; you can name the multi field something completely different from the primary field.