PostgreSQL - Update inner JSON

I have a column jdata of type jsonb inside a table JTABLE. A sample jdata value looks like this:
{
  "id": 12,
  "address": {
    "houseName": {
      "name": "Jackson",
      "lang": "ENG"
    }
  }
}
How can I write a query to update lang to another value for this?
I tried this and it doesn't seem to work:
UPDATE JTABLE SET jdata -> 'address'->'houseName'-> 'lang' = '"DEU"' where jdata->>'id' = '12';
This doesn't work! Any help?
EDIT:
This overwrites my value, and I get this when I run it:
{
  "id": 12,
  "address": {
    "houseName": {
      "lang": "DEU"
    }
  }
}
I lost the name key.
I'm trying this query now:
SELECT jsonb_set(jdata, '{address,houseName}', '{"lang":"DEU"}'::jsonb) FROM JTABLE where jdata->>'id' = '12';

Your path is wrong; the second argument should be the path to the JSON key you wish to update.
The query to preview your update should look like:
SELECT jsonb_set(jdata, '{address,houseName,lang}', '"DEU"') FROM JTABLE WHERE jdata->>'id' = '12';
The final query to update:
UPDATE JTABLE SET jdata = jsonb_set(jdata, '{address,houseName,lang}', '"DEU"') WHERE jdata->>'id' = '12';
Also, there is no need to cast the new value to jsonb; a quoted literal such as '"DEU"' works as-is.
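For reference, a minimal end-to-end sketch using the table and column from the question (the verification SELECT is my addition):

-- update the nested key in place
UPDATE JTABLE
SET jdata = jsonb_set(jdata, '{address,houseName,lang}', '"DEU"')
WHERE jdata->>'id' = '12';

-- verify: the sibling "name" key is preserved
SELECT jdata->'address'->'houseName' FROM JTABLE WHERE jdata->>'id' = '12';
-- returns {"name": "Jackson", "lang": "DEU"}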


Create partition on BigQuery table using Terraform

Description
I have a list of BigQuery tables to be created using Terraform, but I need partitioning only for specific tables.
Here is an example:
locals {
  path = "../../../../../../../../db"
  gcp_bq_tables = [
    "my_table1",
    "my_table1_daily",
    "my_table2",
    "my_table2_daily"
  ]
}
And the Terraform script to create the tables:
resource "google_bigquery_table" "gcp_bq_tables" {
for_each = toset(local.gcp_bq_tables)
dataset_id = google_bigquery_dataset.gcp_bq_db.dataset_id
table_id = each.value
schema = file("${local.path}/schema/${each.value}.json")
labels = {
env = var.env
app = var.app
}
}
For those I need partitioning on a timestamp, with type DAY, but the columns are different:
The partition column would be my_ts_column_table1 for my_table1.
The partition column would be my_last_modified_column_table2 for my_table2.
How do I write the Terraform script in this scenario?
My exploration
I found a way to do it in the Terraform documentation, but I'm not sure how it works for multiple tables and how to specify a different partition column for each.
In this case it might be best to use a dynamic block [1] with the for_each meta-argument [2] to achieve what you want. The code would have to be changed to:
resource "google_bigquery_table" "gcp_bq_tables" {
for_each = toset(local.gcp_bq_tables)
dataset_id = google_bigquery_dataset.gcp_bq_db.dataset_id
table_id = each.value
schema = file("${local.path}/schema/${each.value}.json")
dynamic "time_partitioning" {
for_each = each.value == "table1" || each.value == "table2" ? [1] : []
content {
type = "DAY"
field = each.value == "table1" ? "my_ts_column_table1" : "my_last_modified_column_table2"
}
}
labels = {
env = var.env
app = var.app
}
}
[1] https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks
[2] https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
I hope this solution can help.
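A variant of the same idea that scales a bit better as tables are added: keep the per-table partition columns in a map and drive the dynamic block from a lookup. This is only a sketch; the partition_fields local is my own name, not something from the question:

locals {
  # hypothetical lookup: table name -> partition column
  partition_fields = {
    my_table1 = "my_ts_column_table1"
    my_table2 = "my_last_modified_column_table2"
  }
}

resource "google_bigquery_table" "gcp_bq_tables" {
  for_each   = toset(local.gcp_bq_tables)
  dataset_id = google_bigquery_dataset.gcp_bq_db.dataset_id
  table_id   = each.value
  schema     = file("${local.path}/schema/${each.value}.json")

  # emit time_partitioning only for tables listed in the map
  dynamic "time_partitioning" {
    for_each = contains(keys(local.partition_fields), each.value) ? [1] : []
    content {
      type  = "DAY"
      field = local.partition_fields[each.value]
    }
  }

  labels = {
    env = var.env
    app = var.app
  }
}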
You can use a JSON configuration file to create your tables with partitions dynamically.
tables.json file:
{
  "tables": {
    "my_table1": {
      "dataset_id": "my_dataset",
      "table_id": "my_table",
      "schema_path": "folder/myschema.json",
      "partition_type": "DAY",
      "partition_field": "partitionField",
      "clustering": [
        "field",
        "field2"
      ]
    },
    "my_table2": {
      "dataset_id": "my_dataset",
      "table_id": "my_table2",
      "schema_path": "folder/myschema2.json",
      "partition_type": "DAY",
      "partition_field": "partitionField2",
      "clustering": [
        "field",
        "field2"
      ]
    }
  }
}
Then load the tables in a Terraform locals file.
locals.tf file:
locals {
  tables = jsondecode(file("${path.module}/resource/tables.json"))["tables"]
}
I put a default partition on the myDefaultDate field in the variables.tf file:
variable "time_partitioning" {
description = "Configures time-based partitioning for this table. cf https://www.terraform.io/docs/providers/google/r/bigquery_table.html#field"
type = map(string)
default = {
type = "DAY"
field = "myDefaultDate"
}
}
In the resource.tf file, I used a dynamic block:
if the partition exists for the current table in the JSON metadata configuration file tables.json, I take it;
otherwise I take the default partition given by the variables.tf file.
resource.tf file:
resource "google_bigquery_table" "tables" {
for_each = local.tables
project = var.project_id
dataset_id = each.value["dataset_id"]
table_id = each.value["table_id"]
clustering = try(each.value["clustering"], [])
dynamic "time_partitioning" {
for_each = [
var.time_partitioning
]
content {
type = try(each.value["partition_type"], time_partitioning.value["type"])
field = try(each.value["partition_field"], time_partitioning.value["field"])
expiration_ms = try(time_partitioning.value["expiration_ms"], null)
require_partition_filter = try(time_partitioning.value["require_partition_filter"], null)
}
}
schema = file("${path.module}/resource/schema/${each.value["schema_path"]}")
}
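For completeness, the file layout these snippets assume, inferred from the paths above (a sketch):

.
├── locals.tf
├── variables.tf
├── resource.tf
└── resource/
    ├── tables.json
    └── schema/
        └── folder/
            ├── myschema.json
            └── myschema2.json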

Terraform dynamic block using a list of maps

I have a Terraform variable:
variable "volumes" {
default = [
{
"name" : "mnt",
"value" : "/mnt/cvdupdate/"
},
{
"name" : "efs",
"value" : "/var"
},
]
}
and I am trying to create a dynamic block:
dynamic "volume" {
for_each = var.volumes == "" ? [] : [true]
content {
name = volume["name"]
}
}
but I get an error when I run plan:
│ name = volume["name"]
│
│ The given key does not identify an element in this collection value.
The desired output would be:
volume {
  name = "mnt"
}
volume {
  name = "efs"
}
What is wrong with my code?
Since you are using for_each, you should reference each element through volume.value. Also, your condition is incorrect. It all should be:
dynamic "volume" {
for_each = var.volumes == "" ? [] : var.volumes
content {
name = volume.value["name"]
}
}
As you are creating an if-else-like condition to feed the for_each loop, the condition needs to yield the collection to iterate over: https://developer.hashicorp.com/terraform/language/meta-arguments/for_each
You need to replace [true] with var.volumes to pass the actual list:
for_each = var.volumes == "" ? [] : var.volumes
Then access each element in the content block through .value:
content {
  name = volume.value["name"]
}
The final working code is below, as @Marcin posted.
dynamic "volume" {
for_each = var.volumes == "" ? [] : var.volumes
content {
name = volume.value["name"]
}
}
You can simply use for_each = var.volumes[*]:
dynamic "volume" {
for_each = var.volumes[*]
content {
name = volume.value["name"]
}
}
or:
dynamic "volume" {
for_each = var.volumes[*]
content {
name = volume.value.name # <------
}
}
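As an aside, note that var.volumes == "" compares a list against a string and can never be true, so the guard contributes nothing; iterating the list directly produces no volume blocks when the list is empty. A minimal sketch of the simplified form:

dynamic "volume" {
  for_each = var.volumes
  content {
    name = volume.value["name"]
  }
}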

Azure Stream Analytics: Get Array Elements by name

I was wondering if it is possible to get the elements of the array by property name rather than by position. For example, this is my incoming data:
{
  "salesdata": {
    "productsbyzone": {
      "zones": [
        { "eastzone": "shirts, trousers" },
        { "westzone": "slacks" },
        { "northzone": "gowns" },
        { "southzone": "maxis" }
      ]
    }
  }
}
I intend to move this to a SQL database, and I have a column in the database for each zone. I was successfully using the following query until I realized that the position of the zones changes from one JSON message to the next:
WITH
salesData AS
(
    SELECT
        GetArrayElement(c.salesdata.productsbyzone.zones, 0) as eastzone,
        GetArrayElement(c.salesdata.productsbyzone.zones, 1) as westzone,
        GetArrayElement(c.salesdata.productsbyzone.zones, 2) as northzone,
        GetArrayElement(c.salesdata.productsbyzone.zones, 3) as southzone
    FROM [sales-data] as c
)
SELECT
    eastzone.eastzone as PRODUCTS_EAST,
    westzone.westzone as PRODUCTS_WEST,
    northzone.northzone as PRODUCTS_NORTH,
    southzone.southzone as PRODUCTS_SOUTH
INTO PRODUCTSDATABASE
FROM salesData
I need a way to reference these fields by name rather than by position.
I recommend a solution: use a JavaScript UDF in the Azure Stream Analytics job to normalize the order of the columns.
Please refer to my sample:
Input data (with the order shuffled):
{
  "salesdata": {
    "productsbyzone": {
      "zones": [
        { "westzone": "slacks" },
        { "eastzone": "shirts, trousers" },
        { "northzone": "gowns" },
        { "southzone": "maxis" }
      ]
    }
  }
}
JS UDF code:
function test(arg) {
    var z = arg;
    var obj = {
        eastzone: "",
        westzone: "",
        northzone: "",
        southzone: ""
    };
    for (var i = 0; i < z.length; i++) {
        switch (Object.keys(z[i])[0]) {
            case "eastzone":
                obj.eastzone = z[i]["eastzone"];
                continue;
            case "westzone":
                obj.westzone = z[i]["westzone"];
                continue;
            case "northzone":
                obj.northzone = z[i]["northzone"];
                continue;
            case "southzone":
                obj.southzone = z[i]["southzone"];
                continue;
        }
    }
    return obj;
}
You can define the order you want in the obj initializer.
SQL:
WITH
c AS
(
    SELECT
        udf.test(jsoninput.salesdata.productsbyzone.zones) as result
    FROM jsoninput
),
b AS
(
    SELECT
        c.result.eastzone as east,
        c.result.westzone as west,
        c.result.northzone as north,
        c.result.southzone as south
    FROM c
)
SELECT
    b.east, b.west, b.north, b.south
INTO
    jaycosmos
FROM
    b
Hope it helps you.
You can use GetArrayElement to return an array element and then access each property. Please refer to the query below:
WITH
salesData AS
(
    SELECT
        GetArrayElement(s.salesdata.productsbyzone.zones, 0) as z
    FROM [sales-data] as s
)
SELECT
    z.eastzone,
    z.westzone,
    z.northzone,
    z.southzone
INTO PRODUCTSDATABASE
FROM salesData

How to update an inner record in Elm

I have this model:
type alias Model =
    { exampleId : Int
    , groupOfExamples : GroupExamples
    }

type alias GroupExamples =
    { groupId : Int
    , results : List String
    }
In my update function, updating exampleId would look like this:
{ model | exampleId = updatedValue }
But what if I need to update, for example, just the results value inside GroupExamples?
The only way to do it in the language, without anything extra, is to alias the nested record and rebuild the outer one:
let
    examples =
        model.groupOfExamples

    newExamples =
        { examples | results = [ "whatever" ] }
in
{ model | groupOfExamples = newExamples }
There is also the focus package, which would allow you to:
set ( groupOfExamples => results ) [ "whatever" ] model
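If the nested update comes up often, a small helper keeps update readable. A minimal sketch; the names updateGroup and setResults are mine, not from the question:

-- apply a function to the nested GroupExamples record
updateGroup : (GroupExamples -> GroupExamples) -> Model -> Model
updateGroup f model =
    { model | groupOfExamples = f model.groupOfExamples }

-- replace just the results list
setResults : List String -> Model -> Model
setResults newResults =
    updateGroup (\group -> { group | results = newResults })

With that, the update becomes setResults [ "whatever" ] model.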

Delete fields from array in MongoDB using Java driver

I need to delete all elements of the devices array that match the condition state = 0.
This is my MongoDB document:
{ "_id" : { "$oid" : "53e53553b76000127cb1ab80"} ,
"city" : "Some Place" ,
"devices" :
[ { "guid" : "local" , "brand" : "SSSS" , "state" : 1} ,
{ "guid" : "local2" , "brand" : "DDD" , "state" : 0} ,
{ "guid" : "local2" , "brand" : "DDD" , "state" : 0} ,
{ "guid" : "local2" , "brand" : "DDD" , "state" : 0}] ,
"phone": 8888888888,
"sex" : "Male"
}
This is the Java code I'm trying:
DBCollection collection = db.getCollection(usersCollection);
BasicDBObject query2 = new BasicDBObject("_id", id);
query2.append("devices.guid", device);
query2.append("devices.state", 0);
collection.remove(query2);
But this query deletes all devices from the document.
Thanks in advance!
You need to do the equivalent of:
db.usersCollection.update(
    { "_id": "yourId" },
    { $pull:
        { "devices":
            { $elemMatch:
                {
                    "guid": "local2",
                    "state": 0
                }
            }
        }
    }
)
where you replace "yourId" with the ID you want to query.
That is, instead of using remove, use update with $pull in the BasicDBObject. I can't test this right now, but try:
DBCollection collection = db.getCollection(usersCollection);
BasicDBObject query = new BasicDBObject("_id", id);
BasicDBObject deviceToDelete = new BasicDBObject("guid", device);
deviceToDelete.append("state", 0);
BasicDBObject obj = new BasicDBObject("devices", deviceToDelete);
collection.update(query, new BasicDBObject("$pull", obj));
Let me know if this works (again I can't test because I don't have mongod on my machine).
I fixed the problem following this answer.
This code works:
DBCollection collection = db.getCollection(usersCollection);
BasicDBObject match = new BasicDBObject("_id", id);
BasicDBObject update2 = new BasicDBObject("devices",
        new BasicDBObject("guid", device).append("state", 0));
collection.update(match, new BasicDBObject("$pull", update2));
You cannot use the remove function to remove an embedded document; you need to use update instead. This is the function call you need to implement in Java:
db.userCollection.update(
    { _id: ObjectId("53e53553b76000127cb1ab80") },
    { $pull:
        { devices: { state: 0 } }
    }
)
Notice that the update function takes two arguments: the first one finds the document and the second one modifies it.
DBCollection collection = db.getCollection(usersCollection);
BasicDBObject query = new BasicDBObject("_id", new ObjectId("53e53553b76000127cb1ab80"));
BasicDBObject condition = new BasicDBObject("state", 0);
BasicDBObject array = new BasicDBObject("devices", condition);
BasicDBObject update = new BasicDBObject("$pull", array);
collection.update(query, update);
I didn't test the Java code, but I tested the function call in MongoDB.
If it doesn't work, check the implementation first.
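For anyone on the newer driver, the same $pull with the modern com.mongodb.client API would look roughly like this. This is only a sketch; the database and collection names are assumptions, not taken from the question:

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;
import org.bson.types.ObjectId;

// connect to localhost and grab the collection (names are assumptions)
MongoCollection<Document> users = MongoClients.create()
        .getDatabase("mydb")
        .getCollection("users");

// pull every devices element whose state is 0
users.updateOne(
        Filters.eq("_id", new ObjectId("53e53553b76000127cb1ab80")),
        Updates.pull("devices", new Document("state", 0)));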