Populating Django model SQL tables with a JSON file

I would like to use a JSON file to populate an instance of a Django model. I have essentially flattened the structure in the JSON to a few tables/classes. How do you map the JSON data to the Django tables?
What is the most efficient way of doing this?
Thanks.

$ python manage.py loaddata yourjsonfile.json
Let's say you want to populate the standard Django user table with 2 users: John Lennon and Yoko Ono. Your JSON will look something like this:
[
    {
        "pk": 1,
        "model": "auth.user",
        "fields": {
            "username": "john",
            "first_name": "John",
            "last_name": "Lennon",
            "is_active": true,
            "is_superuser": true,
            "is_staff": true,
            "last_login": "2015-06-03T14:07:31.392Z",
            "groups": [],
            "user_permissions": [],
            "password": "pbaasdf_sha256$12001$9Ser7lc1k1pWQFqk0x3u/T6I3",
            "email": "john@lennon.com",
            "date_joined": "2015-03-10T15:38:34.406Z"
        }
    },
    {
        "pk": 2,
        "model": "auth.user",
        "fields": {
            "username": "yoko",
            "first_name": "Yoko",
            "last_name": "Ono",
            "is_active": true,
            "is_superuser": false,
            "is_staff": false,
            "last_login": "2015-05-19T13:36:58.444Z",
            "groups": [],
            "user_permissions": [],
            "password": "baasdf_sha256$12cJskLs9Ser7lc1k1pWQFqk0x3u/T6I3",
            "email": "yoko@ono.com",
            "date_joined": "2014-05-19T13:36:58.444Z"
        }
    }
]

"Providing initial data for models"
It’s sometimes useful to pre-populate your database with hard-coded data when you’re first setting up an app. You can provide initial data via fixtures.
A fixture is a collection of data that Django knows how to import into a database. The most straightforward way of creating a fixture if you’ve already got some data is to use the manage.py dumpdata command. Or, you can write fixtures by hand; fixtures can be written as JSON, XML or YAML (with PyYAML installed) documents. The serialization documentation has more details about each of these supported serialization formats.
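If your JSON is not already in the fixture format shown above, you can either reshape it into that structure and run loaddata, or load it programmatically through the ORM. Below is a minimal sketch of the programmatic route; the project, app, model, file, and field names (myproject, myapp, Book, books.json, title/author/year) are hypothetical placeholders, not anything from the question.
import json
import os

import django

# Assumes DJANGO_SETTINGS_MODULE points at your settings module (placeholder name here).
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()

from myapp.models import Book  # hypothetical flattened model

def load_records(path):
    """Read a list of flat JSON objects and insert them in a single query."""
    with open(path) as fh:
        records = json.load(fh)

    objs = [
        Book(title=rec["title"], author=rec["author"], year=rec["year"])
        for rec in records
    ]
    Book.objects.bulk_create(objs)

if __name__ == "__main__":
    load_records("books.json")
bulk_create keeps the number of INSERT statements down, which is usually the main efficiency concern when loading a file like this.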

Azure DevOps API: expand release definitions recursively (I need workflowTasks from deployPhases from environments)

Does the Azure DevOps REST API allow me to expand multiple levels? When using the release definitions, I specifically need the workflowTasks, which are buried a couple of lists deep.
More context:
I'm optimizing an Azure DevOps extension that scans pipelines for compliance. Right now there's a rule that scans the workflowTasks. To get the information required on all relevant pipelines, we make the following call to the Azure DevOps API for each release definition:
https://vsrm.dev.azure.com/{Organization}/{Project}/_apis/release/definitions/{definitionID}?api-version=6.0
This returns a fully populated release definition, including environments like this:
"environments": [{
"id": 10,
"name": "Stage 1",
... etc
"deployPhases": [{
"deploymentInput": {
"parallelExecution": {
"parallelExecutionType": "none"
},
...etc
"rank": 1,
...etc
"workflowTasks": [{
"environment": {},
"taskId": "obfuscated",
"version": "2.*",
"name": "obfuscated",
"refName": "",
"enabled": true,
"alwaysRun": false,
"continueOnError": false,
"timeoutInMinutes": 0,
"retryCountOnTaskFailure": 0,
"definitionType": "task",
"overrideInputs": {},
"condition": "succeeded()",
"inputs": {
"template": "obfuscated",
"assets": "obfuscated",
"duration": "60",
"title": "",
"description": "",
"implementationPlan": "obfuscated"
}
}, {
"environment": {},
"taskId": "obfuscated",
"version": "2.*",
"name": "obfuscated",
"refName": "",
"enabled": true,
"alwaysRun": false,
"continueOnError": false,
"timeoutInMinutes": 0,
"retryCountOnTaskFailure": 0,
"definitionType": "task",
"overrideInputs": {},
"condition": "succeeded()",
"inputs": {
"changeClosureCode": "1",
"changeClosureComments": "Successful implementation",
"changeId": ""
}
}
]
}
],
...etc
}
],
But when I try and get the list as a whole, using the following URL:
https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/definitions?$expand=environments&api-version=6.0
My environments arrays (there is one for each definition, obviously) look nothing like the previous one. They don't include deployPhases (not even as an empty array).
Since we have 2300 release definitions, you can imagine how inconvenient it is to call the release/definitions/{definitionID} endpoint instead of the release/definitions one that fetches all of them at the same time.
Is there a way to expand the release/definitions call to fetch all environments including workflowTasks and maybe other stuff? Is there a syntax that allows for this? Something like $expand=environments>deployPhases>workflowTasks?
I am afraid there is no such syntax that allows you to fetch all environments including workflowTasks.
You could use the REST API with a PowerShell script to fetch all environments including workflowTasks.
A sample PowerShell script:
$url = "https://vsrm.dev.azure.com/{Organization}/{Project}/_apis/release/definitions/{definitionID}?api-version=6.0"
Write-Host "URL: $url"
$pipeline = Invoke-RestMethod -Uri $url -Method Get -Headers #{
Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
}
Write-Host "workflowTasks = $($pipeline.environments.deployPhases.workflowTasks | ConvertTo-Json -Depth 100)"
The output is the workflowTasks of each deploy phase in every environment, serialized as JSON.
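Since the list endpoint cannot be expanded that deep, the per-definition loop can also be scripted outside a pipeline. Here is a rough Python sketch of the same approach using the requests library; the organization, project, and personal access token values are placeholders.
import requests

ORG = "your-organization"            # placeholder
PROJECT = "your-project"             # placeholder
PAT = "your-personal-access-token"   # placeholder

base = f"https://vsrm.dev.azure.com/{ORG}/{PROJECT}/_apis/release"
auth = ("", PAT)  # a PAT goes in the password field of basic auth

# One call to enumerate all release definitions (shallow objects with ids).
definitions = requests.get(f"{base}/definitions?api-version=6.0", auth=auth).json()["value"]

# One call per definition to get the fully expanded object with workflowTasks.
for d in definitions:
    full = requests.get(f"{base}/definitions/{d['id']}?api-version=6.0", auth=auth).json()
    for env in full.get("environments", []):
        for phase in env.get("deployPhases", []):
            for task in phase.get("workflowTasks", []):
                print(full["name"], env["name"], task.get("name"))
With 2300 definitions this is still 2300 calls, so in practice you would want to throttle or parallelize the loop and cache the results.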

External pagination vuetify data-table

I have an object returned from an API in the following format:
{
    "count": 0,
    "result": [
        {
            "id": "5dfbb8d5b2f68faf05688997",
            "createdOn": "1577121878136",
            "hash": "d7a3a552a2c1a765b3bcd935980a1982",
            "modifiedOn": "1577121878136",
            "company": {},
            "home": {},
            "first_name": "string",
            "last_name": "string",
            "isActive": true,
            "funding": "string",
            "medication": "string",
            "startDate": "string"
        }
    ]
}
I am using Vuex to store the data. How do I use Vuetify's v-data-table with pagination? The backend is paginated, and my API call is axios.get('..../?page=' + page + '&count=' + count), so I need to pass page and count as arguments to get paginated data.
I would be grateful if you could link me to some resources on how to do this. I tried but was not able to achieve it and did not find any material on it. I followed Vuetify's docs for external pagination, but they assume the data is only externally paginated; they do not link it to the Vuex store.
Call v-data-table with these parameters:
<v-data-table
    :headers="yourHeadersArrayOfObject"
    :items="yourItemsArrayOfObject"
    :page="yourCurrentPageNumber"
    :items-per-page="yourItemsPerPage"
    @update:page="pageUpdateFunction"
></v-data-table>
In your methods, handle the page update:
pageUpdateFunction(newPageNumber) {
    console.log(newPageNumber);
    // make the axios request for the new page here and update the variables
},

Ruckus SmartZone API

I am having issues when trying to create a Zone using the API.
I can create the zone with the basic info, but as soon as I want to add another property (specifically "location") I get an error.
This is the dataset I use for the POST:
def id_prov = {
    "domainId": "$DomainId",
    "name": "$ZoneName",
    "login": {
        "apLoginName": "xxxxx",
        "apLoginPassword": "xxxxx"
    },
    "description": "$jira_summ",
    "version": "3.5.1.0.1010",
    "countryCode": "ZA"
    "location": "$CalledStationName_val",
}
The API creates everything until I either include the "location" property in the original POST or try a PUT or PATCH afterwards.
Result value:
{"message":["object instance has properties which are not allowed by the schema: [\"location\"]"],"errorCode":101,"errorType":"Bad HTTP request"}
Anyone come across this or have any ideas on how to get this working?
Thanks
A comma is required after "countryCode": "ZA". The POST payload should look like this:
def id_prov = {
    "domainId": "$DomainId",
    "name": "$ZoneName",
    "login": {
        "apLoginName": "xxxxx",
        "apLoginPassword": "xxxxx"
    },
    "description": "$jira_summ",
    "version": "3.5.1.0.1010",
    "countryCode": "ZA",
    "location": "$CalledStationName_val",
}
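As a side note, one way to avoid this class of error entirely is to build the payload as a native dict/map and let the HTTP client serialize it, so a missing comma can never happen. A small Python sketch with the requests library, purely for illustration; the controller URL, endpoint path, and variable values are placeholders standing in for the templated $... fields above, and authentication (service ticket or session cookie) is omitted.
import requests

# Placeholder values standing in for the templated $... fields above.
domain_id = "your-domain-id"
zone_name = "your-zone-name"
jira_summ = "zone description"
called_station_name = "site location"

payload = {
    "domainId": domain_id,
    "name": zone_name,
    "login": {"apLoginName": "xxxxx", "apLoginPassword": "xxxxx"},
    "description": jira_summ,
    "version": "3.5.1.0.1010",
    "countryCode": "ZA",
    "location": called_station_name,
}

# Placeholder URL -- substitute your controller host and the API version/path
# from your SmartZone API reference.
resp = requests.post(
    "https://controller.example.com:8443/wsg/api/public/<version>/rkszones",
    json=payload,   # requests serializes the dict, so the JSON is always well formed
    verify=False,   # only if the controller uses a self-signed certificate
)
resp.raise_for_status()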

Best way to edit a nested protobuf field in react-admin?

For our admin page, we are using the basic
<SimpleForm>
    <TextInput>
    ...
</SimpleForm>
pattern.
One of our fields, however, is a nested protobuf object. Although it's nested, most of the fields of this protobuf object are fairly basic. I tried using dot notation like
<TextInput
    label="nestedField"
    source="ProtobufObject.nestedField"
/>
but that doesn't seem to work. Is there a more straightforward way of handling this other than creating a custom input?
I don't know anything about ProtobufObject other than what I just read on the Google site. I have an API that returns the JSON below. To show the event name, use source="event.eventName". If your payload isn't JSON, you'll need to convert it in your data provider.
[
    {
        "competitionName": "XYZ",
        "competitionStatus": "SCHEDULED",
        "id": 107,
        "createdAt": "2018-08-30T03:53:37.000Z",
        "updatedAt": "2018-08-30T03:53:37.000Z",
        "event": {
            "eventName": "XYZ",
            "eventDesc": "Something",
            "startDate": "2018-09-13T07:00:00.000Z",
            "startTime": "2018-08-30T01:00:58.482Z",
            "endTime": "2018-08-30T03:00:58.484Z",
            "eventLocation": "Gym",
            "competitionId": 107,
            "createdAt": "2018-08-30T03:53:37.000Z",
            "updatedAt": "2018-08-30T03:53:37.000Z"
        }
    }
]
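Just to sketch the conversion idea: if your API happens to be backed by Python, the nested protobuf message can be turned into a plain JSON-friendly dict before it ever reaches react-admin, so dot-notation sources like event.eventName work exactly as with the JSON above. This assumes a Python backend and a hypothetical generated message module; the equivalent conversion could instead live in your JavaScript data provider.
from google.protobuf.json_format import MessageToDict

from myservice import competition_pb2  # hypothetical generated protobuf module

def competition_to_dict(msg):
    """Convert a protobuf message to a nested dict that serializes to JSON."""
    # By default field names come out in lowerCamelCase, matching the payload
    # shown above (eventName, eventDesc, ...).
    return MessageToDict(msg)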

AWS Data Pipeline: CSV data from S3 to DynamoDB

I am trying to transfer CSV data from an S3 bucket to DynamoDB using AWS Data Pipeline. Following is my pipeline script; it is not working properly.
CSV file structure:
Name, Designation,Company
A,TL,C1
B,Prog, C2
DynamoDB: N_Table, with Name as the hash key
{
    "objects": [
        {
            "id": "Default",
            "scheduleType": "cron",
            "name": "Default",
            "role": "DataPipelineDefaultRole",
            "resourceRole": "DataPipelineDefaultResourceRole"
        },
        {
            "id": "DynamoDBDataNodeId635",
            "schedule": {
                "ref": "ScheduleId639"
            },
            "tableName": "N_Table",
            "name": "MyDynamoDBData",
            "type": "DynamoDBDataNode"
        },
        {
            "emrLogUri": "s3://onlycsv/error",
            "id": "EmrClusterId636",
            "schedule": {
                "ref": "ScheduleId639"
            },
            "masterInstanceType": "m1.small",
            "coreInstanceType": "m1.xlarge",
            "enableDebugging": "true",
            "installHive": "latest",
            "name": "ImportCluster",
            "coreInstanceCount": "1",
            "logUri": "s3://onlycsv/error1",
            "type": "EmrCluster"
        },
        {
            "id": "S3DataNodeId643",
            "schedule": {
                "ref": "ScheduleId639"
            },
            "directoryPath": "s3://onlycsv/data.csv",
            "name": "MyS3Data",
            "dataFormat": {
                "ref": "DataFormatId1"
            },
            "type": "S3DataNode"
        },
        {
            "id": "ScheduleId639",
            "startDateTime": "2013-08-03T00:00:00",
            "name": "ImportSchedule",
            "period": "1 Hours",
            "type": "Schedule",
            "endDateTime": "2013-08-04T00:00:00"
        },
        {
            "id": "EmrActivityId637",
            "input": {
                "ref": "S3DataNodeId643"
            },
            "schedule": {
                "ref": "ScheduleId639"
            },
            "name": "MyImportJob",
            "runsOn": {
                "ref": "EmrClusterId636"
            },
            "maximumRetries": "0",
            "myDynamoDBWriteThroughputRatio": "0.25",
            "attemptTimeout": "24 hours",
            "type": "EmrActivity",
            "output": {
                "ref": "DynamoDBDataNodeId635"
            },
            "step": "s3://elasticmapreduce/libs/script-runner/script-runner.jar,s3://elasticmapreduce/libs/hive/hive-script,--run-hive-script,--hive-versions,latest,--args,-f,s3://elasticmapreduce/libs/hive/dynamodb/importDynamoDBTableFromS3,-d,DYNAMODB_OUTPUT_TABLE=#{output.tableName},-d,S3_INPUT_BUCKET=#{input.directoryPath},-d,DYNAMODB_WRITE_PERCENT=#{myDynamoDBWriteThroughputRatio},-d,DYNAMODB_ENDPOINT=dynamodb.us-east-1.amazonaws.com"
        },
        {
            "id": "DataFormatId1",
            "name": "DefaultDataFormat1",
            "column": [
                "Name",
                "Designation",
                "Company"
            ],
            "columnSeparator": ",",
            "recordSeparator": "\n",
            "type": "Custom"
        }
    ]
}
Out of the four steps in the pipeline, two finish during execution, but it does not execute completely.
Currently (2015-04) the default import pipeline template does not support importing CSV files.
If your CSV file is not too big (under 1 GB or so), you can create a ShellCommandActivity to convert the CSV to the DynamoDB JSON format first and then feed that to the EmrActivity that imports the resulting JSON file into your table.
As a first step you can create a sample DynamoDB table including all the field types you need, populate it with dummy values, and then export the records using a pipeline (the Export/Import button in the DynamoDB console). This will give you an idea of the format expected by the import pipeline. The type names are not obvious, and the import activity is very sensitive about the correct case (e.g. you should have bOOL for a boolean field).
Afterwards it should be easy to create an awk script (or any other text converter; at least with awk you can use the default AMI image for your shell activity), which you can feed to your ShellCommandActivity. Don't forget to enable the "staging" flag, so your output is uploaded back to S3 for the import activity to pick it up.
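For illustration, here is roughly what such a converter could look like in Python rather than awk. It assumes every column is a string attribute and that the expected layout is one JSON object per line; verify the exact attribute names, type descriptors, and record layout against a real export of your sample table before relying on it, since the import activity is picky about those.
import csv
import json
import sys

def csv_to_dynamodb_lines(csv_path):
    # Read the CSV (header row expected) and emit one JSON object per item.
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            # {"s": value} marks a string attribute; other types (n, bOOL, ...)
            # must match the casing seen in your sample export exactly.
            item = {name.strip(): {"s": value.strip()} for name, value in row.items()}
            yield json.dumps(item)

if __name__ == "__main__":
    for line in csv_to_dynamodb_lines(sys.argv[1]):
        print(line)
Run inside the ShellCommandActivity with the staged input file as the argument and redirect stdout to the staged output directory, so the converted file is uploaded back to S3 for the import step.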
If you are using the template data pipeline for importing data from S3 to DynamoDB, these data formats won't work. Instead, use the format described in the link below for the input S3 data file: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-importexport-ddb-pipelinejson-verifydata2.html
This is the format of the output file generated by the template data pipeline that exports data from DynamoDB to S3.
Hope that helps.
I would recommend using the CSV data format provided by Data Pipeline instead of a custom one.
For debugging the errors on the cluster, you can look up the job flow in the EMR console and look at the log files for the tasks that failed.
See the link below for a solution that works (in the question section), albeit for EMR 3.x. Just change the delimiter to "columnSeparator": ",". Personally, I wouldn't use CSV unless you are certain the data is sanitized correctly.
How to upgrade Data Pipeline definition from EMR 3.x to 4.x/5.x?