Django + PostgreSQL: Synchronize database with external source

I'm looking for a way to synchronize some tables in my database with an external source. I would check this external source every couple of hours for changes and, if there are any, update my database.
My question is whether anybody knows of an existing app that accepts some kind of data structure (a Python dict) and translates it into database entries.
Here is an example: let's say I have the following models:
from django.db import models

class Language(models.Model):
    tag = models.CharField(max_length=10)
    name = models.CharField(max_length=50)

class Unit(models.Model):
    label = models.CharField(max_length=10)
    name = models.CharField(max_length=200)

class UnitTranslations(models.Model):
    unit = models.ForeignKey(Unit, on_delete=models.CASCADE)
    language = models.ForeignKey(Language, on_delete=models.CASCADE)
    name = models.CharField(max_length=200)
So I could have corresponding data that looks like this:
data = [
    {
        "label": "AB",
        "name": "some random name",
        "translations": [
            {
                "name": "some random name in german",
                "language": {
                    "tag": "de",
                    "name": "german"
                }
            },
            {
                "name": "some random name in spanish",
                "language": {
                    "tag": "es",
                    "name": "spanish"
                }
            },
        ]
    },
    {
        "label": "OR",
        "name": "some other random name",
        "translations": [
            {
                "name": "some other random name in german",
                "language": {
                    "tag": "de",
                    "name": "german"
                }
            },
            {
                "name": "some other random name in spanish",
                "language": {
                    "tag": "es",
                    "name": "spanish"
                }
            },
        ]
    },
]
So, I would like to take this structure (plus some meta information on what the actual models are, which attributes can be treated as IDs, and so on) and have my program automatically update the necessary tables.
To finish my example, these are, in short, the main steps that need to be performed:
Is there already a Unit with label "AB"? (otherwise create it)
Does it have the name "some random name"? (otherwise change it)
Do we have the languages German and Spanish? (otherwise create them)
Do we have translations for german and spanish? (otherwise create them)
Are these translations "some random name in german" and "some random name in spanish" respectively? (otherwise change them)
But of course, the structure can be more complex. That's why I would be thankful for any library or app (or idea) that makes my life a bit easier.
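For reference, a minimal sketch of the steps above, using Django's update_or_create and assuming label and tag are the identifying attributes (as the question implies), could look like this:

def sync_units(data):
    for entry in data:
        # Steps 1-2: match the Unit on its label; create it or update its name
        unit, _ = Unit.objects.update_or_create(
            label=entry["label"],
            defaults={"name": entry["name"]},
        )
        for translation in entry["translations"]:
            # Step 3: match the Language on its tag; create it if missing
            language, _ = Language.objects.update_or_create(
                tag=translation["language"]["tag"],
                defaults={"name": translation["language"]["name"]},
            )
            # Steps 4-5: match the translation on (unit, language); set its name
            UnitTranslations.objects.update_or_create(
                unit=unit,
                language=language,
                defaults={"name": translation["name"]},
            )

This handles the flat case; nested or more complex structures would need the same pattern applied recursively, with the lookup attributes passed in as metadata.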

Related

Importing Data to Contentful programmatically from a json file

I am trying to import some data programmatically into Contentful. I am following the docs here, and running the command inside my integrated terminal:
contentful space import --config config.json
Where the config file is
{
  "spaceId": "abc123",
  "managementToken": "112323132321adfWWExample",
  "contentFile": "./dataToImport.json"
}
And the dataToImport.json file is
{
  "data": [
    {
      "address": "11234 New York City"
    },
    {
      "address": "1212 New York City"
    }
  ]
}
The thing is, I don't understand what format my dataToImport.json should be in, and what is missing inside this file or in my config file, so that the array of addresses from the .json file gets added as new entries to an already created content model inside the Contentful UI, shown in the screenshot below.
I am not specifying the content model for the data to go into, so I believe that is one issue, and I don't know how to do that. An example or repo would help me out greatly.
The types of data you can import are listed in their documentation.
Your JSON's top level should say "entries", not "data", if new content of a content type is what you would like to import.
This is an example of a blog post, as per the content model of the tutorial they provide.
The only thing I haven't worked out yet is where the user id goes, so I substituted the content type 'person' that is also provided in their tutorial (I think it's called Gatsby Starter). Note that the -- remarks below are annotations only; JSON does not allow comments, so remove them before importing.
{"entries": [
{
"sys": {
"space": {
"sys": {
"type": "Link",
"linkType": "Space",
"id": "theSpaceIdToReceiveYourImport"
}
},
"type": "Entry",
"createdAt": "2019-04-17T00:56:24.722Z",
"updatedAt": "2019-04-27T09:11:56.769Z",
"environment": {
"sys": {
"id": "master",
"type": "Link",
"linkType": "Environment"
}
},
"publishedVersion": 149, -- these are not compulsory, you can skip
"publishedAt": "2019-04-27T09:11:56.769Z", -- you can skip
"firstPublishedAt": "2019-04-17T00:56:28.525Z", -- you can skip
"publishedCounter": 3, -- you can skip
"version": 150,
"publishedBy": { -- this is an example of a linked content
"sys": {
"type": "Link",
"linkType": "person",
"id": "personId"
}
},
"contentType": {
"sys": {
"type": "Link",
"linkType": "ContentType",
"id": "blogPost" -- here should be your content type 'RealtorProperties'
}
}
},
"fields": { -- here should go your content type fields, i can't see it in your post
"title": {
"en-US": "Test 1"
},
"slug": {
"en-US": "Test-1"
},
"description": {
"en-US": "some description"
},
"body": {
"en-US": "some body..."
},
"publishDate": {
"en-US": "2016-12-19"
},
"heroImage": { -- another example of a linked content
"en-US": {
"sys": {
"type": "Link",
"linkType": "Asset",
"id": "idOfTHisImage"
}
}
}
}
},
--another entry, ...]}
Have a look at this repo. I am also trying to figure this out. It looks like there are quite a lot of fields that need to be included in the JSON file. I was hoping there'd be a simple solution, but it seems you (me too, actually) will need to create scripts to "convert" your JSON file into data Contentful can read and import.
I'll let you know if I find anything better.
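For what it's worth, a conversion script along those lines could look like this hedged sketch; the content type id "realtorProperties" and the single "address" field are assumptions based on the question:

import json

SPACE_ID = "abc123"                 # spaceId from config.json
CONTENT_TYPE = "realtorProperties"  # your content type id (assumed)

# Read the flat list of addresses from the original file
with open("dataToImport.json") as f:
    raw = json.load(f)["data"]

# Wrap each address in the sys/fields envelope the import tool expects
entries = []
for item in raw:
    entries.append({
        "sys": {
            "space": {
                "sys": {"type": "Link", "linkType": "Space", "id": SPACE_ID}
            },
            "type": "Entry",
            "contentType": {
                "sys": {"type": "Link", "linkType": "ContentType", "id": CONTENT_TYPE}
            },
        },
        "fields": {
            "address": {"en-US": item["address"]}
        },
    })

with open("contentfulImport.json", "w") as f:
    json.dump({"entries": entries}, f, indent=2)

You would then point "contentFile" at contentfulImport.json and run the same contentful space import command.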

PUT inventory custom fragment

This is my device inventory with the custom tasks array:
{
  ...
  "c8y_IsDevice": {},
  "tasks": [
    {
      "task_status": "NEW",
      "task_id": "1",
      "task_data": {
        ...
      }
    },
    {
      "task_status": "DONE",
      "task_id": "2",
      "task_data": {
        ...
      }
    },
    ...
  ]
  ...
}
I want to create an MQTT/SmartREST PUT template to update a task by id and status.
For example: 800,[task_id],[task_status]
I am not able to find a way to do this; in particular, it's a JSON array, and all my attempts end up overwriting the complete array.
Maybe there's something like a condition: if task_id = x -> set task_status = y.
Thank you.
You can only replace the whole fragment. There is no way to partially modify fragments.
One way to do it is to get the whole array, use it to create a new one locally with the changes you want to make, and finally PUT it back into the database. It is not an elegant solution, but it has been working for me.
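A hedged sketch of that read-modify-write loop against Cumulocity's REST inventory API (the tenant URL, credentials, and device id are placeholders); SmartREST itself cannot do this partially:

import requests

BASE = "https://TENANT.cumulocity.com"
AUTH = ("USER", "PASSWORD")
DEVICE_ID = "12345"

def set_task_status(task_id, new_status):
    url = f"{BASE}/inventory/managedObjects/{DEVICE_ID}"
    # Read the current tasks fragment from the managed object
    tasks = requests.get(url, auth=AUTH).json().get("tasks", [])
    # Change only the matching task locally
    for task in tasks:
        if task["task_id"] == task_id:
            task["task_status"] = new_status
    # PUT the whole fragment back; top-level fragments are replaced wholesale
    requests.put(url, json={"tasks": tasks}, auth=AUTH)

set_task_status("1", "DONE")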
Thanks for the info, but I still have a question about updating an array.
Following your answers, I now want to update the whole fragment.
This is my inventory:
"tasks": [
{
"address": {
"street": "Street",
"street_number": "1"
},
"description": "Test Description",
"id": "1",
"status": "NEW"
},
{
"address": {
"street": "Street2",
"street_number": "2"
},
"description": "Test Description 2",
"id": "2",
"status": "DONE"
}
]
My template:
801,<$.tasks.status>,<$.tasks.description>,<$.tasks.address.street>,<$.tasks.address.street_number>
Template screenshot
Now I publish:
//801,SERIAL,status,description,street_name,street_nr
801,SERIAL,NEW,1,2,3
Of course, this overwrites the array and just sets a JSON object tasks:
"tasks": {
"address": {
"street": "2",
"street_number": "3"
},
"description": "1",
"status": "NEW"
}
So I tried tasks[*] and tasks[] in my template (like in response templates), but this doesn't work either. I don't get it; maybe you can give me a small example of putting a complete fragment with an array inside.

Backand (Back&) schema setup for user favorites

I am struggling with setting up the notion of user favorites in the Backand schema editor. An example of what I am trying to do is as follows:
Say I have a website selling cars. Users are authenticated, and they can "favorite" any car they like. When favorited, that specific car object or id should be added to the user's favorites property so that the user can return to or call upon their favorites list at any time.
In Back&'s schema editor I have added a collection property to my user object, but I am completely confused because this just creates a one-to-many relationship.
The Back& documentation is pretty lacking at this point, so I figured I'd bring it up here so others who run into this can see it as well. What is the best way to accomplish this common functionality in Back&? Any help would be greatly appreciated.
I think what you are actually trying to do needs a many-to-many relation, which is well documented here and has a CodePen example here.
i.e. each user can favorite many cars and each car can be a favorite of many users
So the JSON would look something like this:
[
  {
    "name": "user_car",
    "fields": {
      "user": {
        "object": "user"
      },
      "car": {
        "object": "car"
      }
    }
  },
  {
    "name": "user",
    "fields": {
      "user_car": {
        "collection": "user_car",
        "via": "user"
      },
      "email": {
        "type": "string"
      },
      "firstName": {
        "type": "string"
      },
      "lastName": {
        "type": "string"
      }
    }
  },
  {
    "name": "car",
    "fields": {
      "user_car": {
        "collection": "user_car",
        "via": "car"
      },
      "name": {
        "type": "string"
      },
      "model": {
        "type": "int"
      }
    }
  }
]
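With that schema, "favoriting" a car just means creating a user_car row. A hedged sketch using Backand's REST object endpoint (the /1/objects/{name} route, as their docs described it at the time; the ids and the auth token are placeholders):

import requests

BASE = "https://api.backand.com/1/objects"
HEADERS = {"Authorization": "Bearer ACCESS_TOKEN"}  # from Backand sign-in

def favorite_car(user_id, car_id):
    # Creating a user_car row links one user to one car; the set of
    # rows forms the many-to-many "favorites" relation
    response = requests.post(
        f"{BASE}/user_car",
        json={"user": user_id, "car": car_id},
        headers=HEADERS,
    )
    return response.json()

Reading a user's favorites back is then just a filtered query on the user_car junction object.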

Schema to load JSON to Google BigQuery

Suppose I have the following JSON, which is the result of parsing URL parameters from a log file.
{
  "title": "History of Alphabet",
  "author": [
    {
      "name": "Larry"
    }
  ]
}
{
  "title": "History of ABC"
}
{
  "number_pages": "321",
  "year": "1999"
}
{
  "title": "History of XYZ",
  "author": [
    {
      "name": "Steve",
      "age": "63"
    },
    {
      "nickname": "Bill",
      "dob": "1955-03-29"
    }
  ]
}
All the fields in the top level ("title", "author", "number_pages", "year") are optional, and so are the fields in the second level, inside "author", for example.
How should I make a schema for this JSON when loading it to BQ?
A related question: suppose there is another similar table, but the data is from a different date, so it's possible to have a different schema. Is it possible to query across these 2 tables?
How should I make a schema for this JSON when loading it to BQ?
The following schema should work. You may want to change some of the types (e.g. maybe you want the dob field to be a TIMESTAMP instead of a STRING), but the general structure should be similar. Since fields are NULLABLE by default, all of them should handle not being present for a given row; author is marked REPEATED because it holds an array of records.
[
  {
    "name": "title",
    "type": "STRING"
  },
  {
    "name": "author",
    "type": "RECORD",
    "mode": "REPEATED",
    "fields": [
      {
        "name": "name",
        "type": "STRING"
      },
      {
        "name": "age",
        "type": "STRING"
      },
      {
        "name": "nickname",
        "type": "STRING"
      },
      {
        "name": "dob",
        "type": "STRING"
      }
    ]
  },
  {
    "name": "number_pages",
    "type": "INTEGER"
  },
  {
    "name": "year",
    "type": "INTEGER"
  }
]
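For the load itself, here is a hedged sketch using the google-cloud-bigquery Python client; the dataset, table, and file names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()

# Mirror of the JSON schema above; RECORD + REPEATED models the author array
schema = [
    bigquery.SchemaField("title", "STRING"),
    bigquery.SchemaField("author", "RECORD", mode="REPEATED", fields=[
        bigquery.SchemaField("name", "STRING"),
        bigquery.SchemaField("age", "STRING"),
        bigquery.SchemaField("nickname", "STRING"),
        bigquery.SchemaField("dob", "STRING"),
    ]),
    bigquery.SchemaField("number_pages", "INTEGER"),
    bigquery.SchemaField("year", "INTEGER"),
]

job_config = bigquery.LoadJobConfig(
    schema=schema,
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)

# books.json must be newline-delimited: one JSON object per line
with open("books.json", "rb") as f:
    job = client.load_table_from_file(f, "mydataset.books", job_config=job_config)
job.result()  # wait for the load job to finish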
A related question: suppose there is another similar table, but the data is from a different date, so it's possible to have a different schema. Is it possible to query across these 2 tables?
It should be possible to union two tables with differing schemas without too much difficulty.
Here's a quick example of how it works over public data (kind of a silly example, since the tables contain zero fields in common, but it shows the concept):
SELECT * FROM
(SELECT * FROM publicdata:samples.natality),
(SELECT * FROM publicdata:samples.shakespeare)
LIMIT 100;
Note that you need the SELECT * around each table or the query will complain about the differing schemas.

How to search this set of fields using query_string syntax in elasticsearch

It could be that I've created my index wrong, but I have a lead index with variable field names that I need to search through. I created a sub-object called fields that contains name and value. Sample:
[
  {
    "name": "first_name",
    "value": "XXX"
  },
  {
    "name": "last_name",
    "value": "XXX"
  },
  {
    "name": "email",
    "value": "X0#yahoo.com"
  },
  {
    "name": "address",
    "value": "X Thomas RD Apt 1023"
  },
  {
    "name": "city",
    "value": "phoenix"
  },
  {
    "name": "state",
    "value": "AZ"
  },
  {
    "name": "zip",
    "value": "12345"
  },
  {
    "name": "phone",
    "value": "5554448888"
  },
  {
    "name": "message",
    "value": "recently had XXXX"
  }
]
The name field is not_analyzed, and the value field is indexed both ways, analyzed and not, as .search and .exact respectively.
I thought I could get the results I want from a query_string query, doing something like:
+fields.name: first_name +fields.value.exact: XXX
But it doesn't quite work the way I thought. I figure it's because I'm trying to use this like MySQL instead of NoSQL, and there is a fundamental brain shift I still need to make.
While the approach you are taking should probably work with enough effort, you are much better off having explicit field names for everything, e.g.:
{
  "name.first_name": "XXX",
  "name.last_name": "XXX",
  etc...
}
Then your query_string looks like this:
name.first_name:XXX
If you are new to Elasticsearch, play around with things before you add your mappings. The dynamic defaults should kick in and things will work. You can then add mappings to get fine-grained control over the field behavior.
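To make the flattened approach concrete, here is a hedged sketch using the elasticsearch-py client (7.x-style calls); the index name "leads" and the sample document are illustrative:

from elasticsearch import Elasticsearch

es = Elasticsearch()

# Index a lead with explicit field names instead of name/value pairs;
# dynamic mapping will create the fields on first use
es.index(index="leads", id=1, body={
    "name.first_name": "XXX",
    "name.last_name": "XXX",
    "email": "X0@yahoo.com",
    "city": "phoenix",
})
es.indices.refresh(index="leads")

# query_string can now target the field directly
result = es.search(index="leads", body={
    "query": {"query_string": {"query": "name.first_name:XXX"}}
})
print(result["hits"]["total"])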