Golang access raw Podio field values - api

Hi Podio people (and maybe more specifically Andreas),
I'm trying to dig deeper into the Golang API library, but I'm bumping into my rookie Golang skills.
After doing a client.getItems(...) call, I want to loop over the fields inside the items and grab only the relevant portions. The end goal is to create a much-simplified JSON object like this:
{
  1000: "John",                               // key = app field id, value = text
  5490: [{item_id: 4031294, app_id: 94392}],  // relations
  5163: [1, 2, 5]                             // categories
}
However, I cannot seem to get hold of the nested Values struct {} inside item.Fields. I tried using reflect, but without any luck.
Could someone help me complete this code, please?
for _, field := range item.Fields {
    switch field.PartialField.Type {
    case "text":
        simpleValue := field.Values.Value // not working as I can't access Value in struct {}
    }
}
Greetings,
PJ

Try a type assertion:
myTexts := field.Values.([]TextValue)
You can also use the two-value form to check whether the assertion succeeded, so your program doesn't panic:
myTexts, assertionSucceeded := field.Values.([]TextValue)
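Putting that together, here is a minimal, self-contained sketch of the pattern. The value types (TextValue, AppValue, CategoryValue) and their fields are stand-ins for whatever your version of the podio-go library actually defines, so treat those names as assumptions:

package main

import (
    "encoding/json"
    "fmt"
)

// Stand-in types: the real ones come from the podio-go library and may
// have different names and fields.
type TextValue struct{ Value string }
type AppValue struct{ ItemId, AppId int64 }
type CategoryValue struct{ Id int }

type Field struct {
    FieldId int64
    Type    string
    Values  interface{} // holds one of the typed slices above
}

func main() {
    fields := []Field{
        {1000, "text", []TextValue{{Value: "John"}}},
        {5490, "app", []AppValue{{ItemId: 4031294, AppId: 94392}}},
        {5163, "category", []CategoryValue{{1}, {2}, {5}}},
    }

    simple := map[int64]interface{}{}
    for _, field := range fields {
        switch field.Type {
        case "text":
            // Two-value assertion: ok is false instead of panicking
            // when Values is not a []TextValue.
            if texts, ok := field.Values.([]TextValue); ok && len(texts) > 0 {
                simple[field.FieldId] = texts[0].Value
            }
        case "app":
            if apps, ok := field.Values.([]AppValue); ok {
                simple[field.FieldId] = apps
            }
        case "category":
            if cats, ok := field.Values.([]CategoryValue); ok {
                ids := make([]int, 0, len(cats))
                for _, c := range cats {
                    ids = append(ids, c.Id)
                }
                simple[field.FieldId] = ids
            }
        }
    }

    out, _ := json.Marshal(simple)
    fmt.Println(string(out)) // {"1000":"John","5163":[1,2,5],"5490":[{"ItemId":4031294,"AppId":94392}]}
}

With the real library you would swap the stand-in types for its value types and range over item.Fields instead of the fields slice above.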

Related

How to safely delete a record?

CONTEXT
I created an app which handles todos. I want to be able to delete todos based on an id I get from the url
import vweb
import sqlite

struct App {
    vweb.Context
}

[post]
["/todo/:id/delete"]
pub fn (mut app App) delete_todo_response(id string) vweb.Result {
    db := sqlite.open("dist/database.db") or {
        return app.redirect("/todo")
    }
    db.exec_none('DELETE FROM todo WHERE id = $id') // id is not escaped
    return app.redirect("/todo")
}

fn main() {
    vweb.run<App>(80)
}
PROBLEM
As you can see, the id is not escaped. I feel this is not an ideal or secure way to do this.
QUESTIONS
How can one escape values using exec(), exec_one(), or exec_none()?
Is the ORM capable of deleting a record for me based on a struct, like it is possible with select and insert?
As far as I know, there is no standard way to escape SQLite queries.
However, you can indeed use the ORM.
If you have declared your Todo struct, this should do it:
sql db {
    delete from Todo where id == id
}

String with variable inside that can dynamically change

I'm trying to set up an API in Golang. For specific needs, I want to have an environment variable containing a URL as a string (i.e., "https://subdomain.api.com/version/query"), and I want to be able to modify the subdomain and version parts within an API call.
I have no clue on how I could achieve this.
Thanks for your time,
Paul
There are many ways. One that allows the URL to be configured from the environment and then filled in dynamically at runtime is to use a template.
You could expect a template from the ENV:
apiUrlFromEnv := "https://{{.Subdomain}}.api.com/{{.Version}}/query" // get from env
Modified from the docs:
type API struct {
    Subdomain string
    Version   string
}

api := API{"testapi", "1.1"}
tmpl, err := template.New("api").Parse(apiUrlFromEnv)
if err != nil { panic(err) }
err = tmpl.Execute(os.Stdout, api) // or execute into a bytes.Buffer to get the URL back as a string
if err != nil { panic(err) }
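For reference, here is a fully runnable version of that sketch, executing into a bytes.Buffer so you get the URL back as a string (the environment variable name API_URL_TEMPLATE is made up):

package main

import (
    "bytes"
    "fmt"
    "os"
    "text/template"
)

type API struct {
    Subdomain string
    Version   string
}

func main() {
    // Hypothetical env var name; fall back to a default for the demo.
    apiUrlFromEnv := os.Getenv("API_URL_TEMPLATE")
    if apiUrlFromEnv == "" {
        apiUrlFromEnv = "https://{{.Subdomain}}.api.com/{{.Version}}/query"
    }

    tmpl, err := template.New("api").Parse(apiUrlFromEnv)
    if err != nil {
        panic(err)
    }

    var buf bytes.Buffer
    if err := tmpl.Execute(&buf, API{Subdomain: "testapi", Version: "1.1"}); err != nil {
        panic(err)
    }

    fmt.Println(buf.String()) // https://testapi.api.com/1.1/query
}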
The simplest way is to use fmt.Sprintf.
fmt.Sprintf(format string, a ...interface{}) string
As you can see, this function returns a new formatted string, and it is part of the standard library. Furthermore, you can use indexing to place arguments in the template:
In Printf, Sprintf, and Fprintf, the default behavior is for each formatting verb to format successive arguments passed in the call. However, the notation [n] immediately before the verb indicates that the nth one-indexed argument is to be formatted instead.
fmt.Sprintf("%[2]d %[1]d\n", 11, 22) // "22 11"
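Applied to the URL from the question, a minimal sketch could look like this (the variable names, and where their values come from, are just placeholders):

package main

import "fmt"

func main() {
    subdomain := "subdomain" // e.g. read from an env var of your choosing
    version := "version"
    url := fmt.Sprintf("https://%s.api.com/%s/query", subdomain, version)
    fmt.Println(url) // https://subdomain.api.com/version/query
}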
But if you want to use named variables, you should use the text/template package.

Google diff-match-patch : How to unpatch to get Original String?

I am using the Google diff-match-patch Java library to create a patch between two JSON strings, and I store the patch in the database.
diff_match_patch dmp = new diff_match_patch();
LinkedList<Patch> diffs = dmp.patch_make(latestString, originalString);
String patch = dmp.patch_toText(diffs); // Store patch to DB
Now, is there any way to use this patch to re-create the originalString by passing in the latestString?
I googled this and found a very old comment on the Google diff-match-patch wiki saying:
Unpatching can be done by just looping through the diff, swapping
DIFF_INSERT with DIFF_DELETE, then applying the patch.
But I did not find any useful code that demonstrates this. How could I achieve this with my existing code? Any pointers or code references would be appreciated.
Edit:
The problem I am facing is: in the front end I show a revisions module that lists all the transactions of a particular fragment (take, for example, an employee's details), such as which user updated which details. I recreate the fragment JSON by reverse-applying each patch to get the data for each transaction and show it as a table (using http://marianoguerra.github.io/json.human.js/). But some of the resulting JSON is not valid JSON, and I am getting a JSON.parse error.
I was looking to do something similar (in C#), and what is working for me with a relatively simple object is the patch_apply method. This use case seems somewhat missing from the documentation, so I'm answering here. The code is C#, but the API is cross-language:
static void Main(string[] args)
{
    var dmp = new diff_match_patch();
    string v1 = "My Json Object";
    string v2 = "My Mutated Json Object";
    var v2ToV1Patch = dmp.patch_make(v2, v1);
    var v2ToV1PatchText = dmp.patch_toText(v2ToV1Patch); // Persist text to db
    string v3 = "Latest version of JSON object";
    var v3ToV2Patch = dmp.patch_make(v3, v2);
    var v3ToV2PatchTxt = dmp.patch_toText(v3ToV2Patch); // Persist text to db
    // Time to re-hydrate the objects
    var altV3ToV2Patch = dmp.patch_fromText(v3ToV2PatchTxt);
    var altV2 = dmp.patch_apply(altV3ToV2Patch, v3)[0].ToString(); // .get(0) in Java, I think
    var altV2ToV1Patch = dmp.patch_fromText(v2ToV1PatchText);
    var altV1 = dmp.patch_apply(altV2ToV1Patch, altV2)[0].ToString();
}
I am attempting to retrofit this as an audit log, where previously the entire JSON object was saved. As the audited objects have become more complex, the storage requirements have increased dramatically. I haven't yet applied this to the large, complex objects, but it is possible to check whether a patch was successful by inspecting the second element of the array returned by patch_apply. This is an array of boolean values, all of which should be true if the patch applied correctly. Checking this helps confirm that the object can be re-hydrated from the stored patches, rather than only finding out via a parsing error. My prototype C# method looks like this:
private static bool ValidatePatch(object[] patchResult, out string patchedString)
{
    patchedString = patchResult[0] as string;
    var successArray = patchResult[1] as bool[];
    foreach (var b in successArray)
    {
        if (!b)
            return false;
    }
    return true;
}

belongsTo only being set on first and last member of hasMany

My adapter uses findHasMany to load child records for a hasMany relationship.
My findHasMany adapter method is directly based on the test case for findHasMany. It retrieves the contents of the hasMany on demand, and eventually does the following two operations:
store.loadMany(type, hashes);
// ...
store.loadHasMany(record, relationship.key, ids);
(The full code for the findHasMany is below, in case the issue is there, but I don't think so.)
The really strange behavior is: it seems that somewhere within loadHasMany (or in some subsequent async process) only the first and last child records get their inverse belongsTo property set, even though all the child records are added to the hasMany side. I.e., if posts/1 has 10 comments, this is what I get, after everything has loaded:
var post = App.Posts.find('1');
post.get('comments').objectAt(0).get('post'); // <App.Post:ember123:1>
post.get('comments').objectAt(1).get('post'); // null
post.get('comments').objectAt(2).get('post'); // null
// ...
post.get('comments').objectAt(8).get('post'); // null
post.get('comments').objectAt(9).get('post'); // <App.Post:ember123:1>
My adapter is a subclass of DS.RESTAdapter, and I don't think I'm overloading anything in my adapter or serializer that would cause this behavior.
Has anybody seen something like this before? It's weird enough that I thought someone might know why it's happening.
Extra
Using findHasMany lets me load the contents of the hasMany only when the property is accessed (valuable in my case because calculating the array of IDs would be expensive). So, say I have the classic posts/comments example models; the server returns this for posts/1:
{
  post: {
    id: 1,
    text: "Linkbait!",
    comments: "/posts/1/comments"
  }
}
Then my adapter can retrieve /posts/1/comments on demand, which looks like this:
{
  comments: [
    {
      id: 201,
      text: "Nuh uh"
    },
    {
      id: 202,
      text: "Yeah huh"
    },
    {
      id: 203,
      text: "Nazi Germany"
    }
  ]
}
Here is the code for the findHasMany method in my adapter:
findHasMany: function(store, record, relationship, details) {
  var type = relationship.type;
  var root = this.rootForType(type);
  var url = (typeof(details) == 'string' || details instanceof String) ? details : this.buildURL(root);
  var query = relationship.options.query ? relationship.options.query(record) : {};
  this.ajax(url, "GET", {
    data: query,
    success: function(json) {
      var serializer = this.get('serializer');
      var pluralRoot = serializer.pluralize(root);
      var hashes = json[pluralRoot]; //FIXME: Should call some serializer method to get this?
      store.loadMany(type, hashes);
      // add ids to record...
      var ids = [];
      var len = hashes.length;
      for (var i = 0; i < len; i++) {
        ids.push(serializer.extractId(type, hashes[i]));
      }
      store.loadHasMany(record, relationship.key, ids);
    }
  });
}
Solution
Override the DS.RelationshipChange.getByReference method by inserting the following code into your app:
DS.RelationshipChange.prototype.getByReference = function(reference) {
  var store = this.store;

  // return null or undefined if the original reference was null or undefined
  if (!reference) { return reference; }

  if (reference.record) {
    return reference.record;
  }

  return store.materializeRecord(reference);
};
Yes, this is overriding a private, internal method in Ember Data. Yes, it may break at any time with any update. I'm pretty sure this is a bug in Ember Data, but I'm not 100% certain this is the right solution. But it does solve this problem, and possibly other relationship-related problems.
This fix is designed to be applied to Ember Data master as of 29 Apr 2013.
Reason
DS.Store.loadHasMany calls DS.Model.hasManyDidChange, which retrieves references for all the child records and then sets the hasMany's content to the array of references. This kicks off a chain of observers, eventually calling DS.ManyArray.arrayContentDidChange, in which the first line is this._super.apply(this, arguments);, calling the superclass method Ember.Array.arrayContentDidChange. That Ember.Array method includes an optimization that caches the first and last object in the array and calls objectAt on only those two array members. So there's the part that singles out the first and last record.
Next, since DS.RecordArray implements an objectAtContent method (from Ember.ArrayProxy), the objectAtContent implementation calls DS.Store.recordForReference, which in turn calls DS.Store.materializeRecord. This last function adds a record property to the reference that is passed in as a side effect.
Now we get to what I think is a bug. In DS.ManyArray.arrayContentDidChange, after calling the superclass method, it loops through all the new references and creates a DS.RelationshipChangeAdd instance that encapsulates the owner and child record references. But the first line inside the loop is:
var reference = get(this, 'content').objectAt(i);
Unlike what happens above to the first and last record, this calls objectAt directly on the Ember.NativeArray and bypasses the ArrayProxy methods including the objectAtContent hook, which means that DS.Store.materializeRecord--which adds the record property on the reference object--may have never been called on some references.
Next, the relationship changes created in the loop are immediately afterward (in the same run loop) applied with this call tree: DS.RelationshipChangeAdd.sync -> DS.RelationshipChange.getFirstRecord -> DS.RelationshipChange.getByReference. This last method expects the reference object to have a record property. However, the record property is only set on the first and last reference objects, for reasons explained above. Therefore, for all but the first and last records, the relationship fails to be established because it doesn't have access to the child record object!
The above fix calls DS.Store.materializeRecord whenever the record property doesn't exist on the reference. The last line in the function is the only thing added. On the one hand, it looks like this was the original intention: that var store = this.store; line in the original declares a variable that isn't otherwise used in the function, so what's it there for? Also, without the added line, the function doesn't always return a value, which is a little unusual for a function which is expected to do so. On the other hand, this could lead to mass materialization in some cases where that would be undesirable (but, the relationships just won't work without it in some cases, it seems).
Possibly related
The "chain of observers" I mentioned takes a bit of an odd path. The initiating event was setting the content property on a DS.ManyArray, which extends Ember.ArrayProxy--therefore the content property has a dependent property arrangedContent. Importantly, the observers on arrangedContent are executed before observers on content are executed (see Ember.propertyDidChange). However, the default implementation of Ember.ArrayProxy.arrangedContentArrayDidChange simply calls Ember.Array.arrayContentDidChange, which DS.ManyArray implements! The point being, this looks like a recipe for some code to execute in an unintended order. That is, I think Ember.ManyArray.arrayContentDidChange may getting executed earlier than expected. If this is the case, the above mentioned code that expects the record property to already exist on all references may have been expecting this reasonably, as one of the observers directly on the content property may call DS.Store.materializeRecord on each reference. But I haven't dug deep enough to find out if this is true.

Notes attributes syntax changed? - Notes attributes via API call fails for me

Hello Shopify Developers!
I'm having an issue with the note attributes via an API call. It used to work up until a month ago, and then things started to go sideways. Has any syntax changed? Here is a snippet of my code that returns an error in the for loop.
Error message: "Undefined index: note_attribute" right at the foreach line.
// Overwrite custom status field if it's defined in note-attributes
if (array_key_exists('note-attributes', $o))
{
    // For whatever reason, the note-attributes are formatted
    // differently if there's only one key => value pair
    // ( * see examples at end of this file )
    // If the note-attribute array has the key 'name' in it, it's just a single pair.
    // Otherwise, the note-attribute array would be numerically indexed with keys 0,1,2.. etc
    if (array_key_exists('name', $o['note-attributes']['note_attribute']))
    {
        if ($o['note-attributes']['note_attribute']['name'] == "custom_status")
            $arr_tmp[7] = $o['note-attributes']['note_attribute']['value'];
    }
    else
    {
        foreach ($o['note-attributes']['note_attribute'] as $na) // Fails here
        {
            if ($na['name'] == "custom_status")
                $arr_tmp[7] = $na['value'];
        }
    }
}
Your help is much appreciated. Thank you.
The issue here was due to a change in XML node syntax: Shopify had a regression that changed note-attributes to note_attributes in the response, and it has since been changed back.