I have a problem converting a string column to JSON.
Currently, some of the data converts without issue, since those string values parse as JSON. The problem occurs when a string value contains an apostrophe. To be able to convert the value I replace ' with ", but one field inside the JSON sometimes contains an apostrophe because it is a name.
CAST(json_extract(replace(replace(replace(column, '"', '"'), '''', '"'), 'Name"s', 'Names'), '$.items') AS array<json>) AS column_list,
What I do now is apply many replace calls to handle the names containing " or ', but when I counted the names with apostrophes, there were far too many for my current solution to scale. Any recommendation on this one?
{
    'name': 'Sam',
    'age': 45,
    'cars': [
        'car1': {
            'name': "Hondas Lire'q Car"
        },
        'car2': {
            'name': "Toyota's Car"
        },
        'car3': {
            'name': "Happys' Car"
        }
    ]
}
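The root cause is that a blind character replacement cannot tell a delimiter apostrophe from one inside a value; only a parser that understands quoting can. If the data can be preprocessed outside SQL before loading, one option is to parse the Python-literal-style string and re-serialize it as valid JSON. A minimal sketch, assuming a simplified record shape (the `car` field here is a stand-in for the asker's structure):

```python
import ast
import json

# The problematic input: single-quoted keys/values, with a double-quoted
# value containing an embedded apostrophe.
raw = "{'name': 'Sam', 'age': 45, 'car': {'name': \"Toyota's Car\"}}"

# A real parser respects the quoting, so the apostrophe inside
# "Toyota's Car" is left untouched, unlike a blind replace(').
obj = ast.literal_eval(raw)

# Re-serialize as valid JSON with double quotes throughout.
clean = json.dumps(obj)
print(clean)
```

After this preprocessing, the `json_extract`/`CAST` in SQL works on clean JSON and no replace chain is needed.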
I am working on a Go project with a MongoDB database. I have a collection with the following records:
[
    {
        "_id": 1,
        "vals": [
            "110",
            "2211"
        ]
    },
    {
        "_id": 2,
        "vals": [
            "Abcd",
            "102"
        ]
    }
]
I want a search where, if I pass "11001", the first record is returned. I have not found any solution for this. I have tried the following query:
db.getCollection('ColName').find({"vals":{"$regex": "^11001", "$options": "i"}})
The strings saved in the DB are shorter than the one I pass to the search. If I pass "110" or "11" it gives results, but my requirement is the reverse: I have the full string and need to match it against stored values of 2, 3, or 4 characters.
This can be solved with a regex.
db.getCollection('ColName').find({"vals":{"$regex": "^110(01)?", "$options": "i"}})
will work for you.
? in a regex makes the preceding group optional: it matches 0 or 1 occurrences.
I can't find anything about how to do this type of query in FaunaDB. I need to select only specific fields from a document, not all of them. I can select one field using the Select function, like below:
serverClient.query(
q.Map(
q.Paginate(q.Documents(q.Collection('products')), {
size: 12,
}),
q.Lambda('X', q.Select(['data', 'title'], q.Get(q.Var('X'))))
)
)
Forget the selectAll function; it is deprecated.
You can also return an object literal, like this:
serverClient.query(
q.Map(
q.Paginate(q.Documents(q.Collection('products')), {
size: 12,
}),
q.Lambda(
'X',
{
title: q.Select(['data', 'title'], q.Get(q.Var('X'))),
otherField: q.Select(['data', 'other'], q.Get(q.Var('X')))
}
)
)
)
Also, you have mismatched quotation marks in your question at ['data, title']; it should be ['data', 'title'].
One way to achieve this would be to create an index that returns the values required. For example, if using the shell:
CreateIndex({
name: "<name of index>",
source: Collection("products"),
values: [
{ field: ["data", "title"] },
{ field: ["data", "<another field name>"] }
]
})
Querying that index then returns the fields defined in the values of the index.
Map(
Paginate(
Match(Index("<name of index>"))
),
Lambda("product", Var("product"))
)
Although these examples are written for the shell, they can easily be used from driver code by adding a q. prefix in front of each built-in function.
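Conceptually, both approaches (the object literal and the index values) are just extracting a fixed set of paths from each document. A toy sketch of that projection in plain Python, with made-up documents (the field names and shapes are assumptions, not Fauna's wire format):

```python
def select_path(doc, path):
    # Walk a nested dict along a list of keys, analogous to FQL's Select.
    for key in path:
        doc = doc[key]
    return doc

documents = [
    {"data": {"title": "Widget", "other": "blue", "price": 10}},
    {"data": {"title": "Gadget", "other": "red", "price": 20}},
]

# Only the requested fields come back, mirroring the index's `values`
# (or the Lambda returning an object literal).
projected = [
    {"title": select_path(d, ["data", "title"]),
     "other": select_path(d, ["data", "other"])}
    for d in documents
]
print(projected)
```

The index-based version has the advantage that the projection happens server-side, so the unwanted fields never leave the database.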
I'm on Postgres and have a table orders with a data column of type jsonb. Here's a condensed example of the data in one of them; the keys are UUIDs and each value is an object of { id, value }:
{
'36462bd9-4ffa-4ee3-9a04-c2eb7575fe6c': {
id: '',
value: '2020-04-20T01:32:14.017Z',
},
'9baaed61-1275-4bbc-ae4f-2994ec9f7fda': { id: '4', value: 'Paper Towels' },
}
How can I run operations such as finding any order where data has a given UUID key (i.e. 9baaed61-1275-4bbc-ae4f-2994ec9f7fda) whose value contains { id: '4' }?
You can use the containment operator @>
select *
from the_table
where data @> '{"9baaed61-1275-4bbc-ae4f-2994ec9f7fda": {"id": "4"}}';
This assumes that the invalid JSON id: '4' from your question is really stored as "id":"4". If the value is stored as a number: "id": 4 then you need to use that in the comparison value.
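For intuition, jsonb containment asks whether every key/value pair in the right-hand operand appears (recursively) in the left-hand one; extra keys on the left, like the value field here, are ignored. A simplified Python sketch of that semantics, using the sample row from the question:

```python
import json

def jsonb_contains(left, right):
    # Recursive containment check, a simplification of Postgres's @> :
    # every key/value in `right` must appear (recursively) in `left`.
    if isinstance(right, dict):
        return isinstance(left, dict) and all(
            k in left and jsonb_contains(left[k], v) for k, v in right.items()
        )
    if isinstance(right, list):
        return isinstance(left, list) and all(
            any(jsonb_contains(item, r) for item in left) for r in right
        )
    return left == right

row = json.loads('''{
  "36462bd9-4ffa-4ee3-9a04-c2eb7575fe6c": {"id": "", "value": "2020-04-20T01:32:14.017Z"},
  "9baaed61-1275-4bbc-ae4f-2994ec9f7fda": {"id": "4", "value": "Paper Towels"}
}''')

print(jsonb_contains(row, {"9baaed61-1275-4bbc-ae4f-2994ec9f7fda": {"id": "4"}}))
```

Note the string/number distinction matters in the comparison, just as it does for the real operator.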
I have a Parameters_A column in table A which has data like this :
Number,3771|ScheduleTime,0.00:00:00|LastData,|DP_AddPaymentDetails_URL,NULL|DP_URL,https://facebook.com
I need to move it into Parameters_B column in table B with foreign key (ID_A) from table A like this:
[
{
"Name":"Number",
"Value":"3771"
},
{
"Name":"ScheduleTime",
"Value":"0.00:00:00"
},
{
"Name":"LastData",
"Value":""
},
{
"Name":"DP_AddPaymentDetails_URL",
"Value":"NULL"
},
{
"Name":"DP_URL",
"Value":"https://facebook.com"
}
]
JSON is just text. Assuming the input values contain no single or double quotes, you can build a JSON string in any SQL Server version. To do so you need to replace:
, with ", "Value":"
| with "},{"Name":"
Add [{"Name":" at the front and
"}] at the end
to get a proper JSON string.
This query:
select
'[{"Name":"' +
replace(
replace('Number,3771|ScheduleTime,0.00:00:00|LastData,|DP_AddPaymentDetails_URL,NULL|DP_URL,https://facebook.com',',','", "Value":"')
,'|','"},{"Name":"')
+'"}]'
Produces:
[{"Name":"Number", "Value":"3771"},{"Name":"ScheduleTime", "Value":"0.00:00:00"},{"Name":"LastData", "Value":""},{"Name":"DP_AddPaymentDetails_URL", "Value":"NULL"},{"Name":"DP_URL", "Value":"https://facebook.com"}]
After formatting:
[
{"Name":"Number", "Value":"3771"},
{"Name":"ScheduleTime", "Value":"0.00:00:00"},
{"Name":"LastData", "Value":""},
{"Name":"DP_AddPaymentDetails_URL", "Value":"NULL"},
{"Name":"DP_URL", "Value":"https://facebook.com"}
]
Parsing this string to extract the individual values, though, requires SQL Server 2016 or later (OPENJSON and the other built-in JSON functions).
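The replace chain is easy to verify outside SQL Server. A quick Python mirror of the same three steps, confirming the result parses as JSON:

```python
import json

raw = "Number,3771|ScheduleTime,0.00:00:00|LastData,|DP_AddPaymentDetails_URL,NULL|DP_URL,https://facebook.com"

# Same steps as the T-SQL: comma becomes the name/value separator,
# pipe becomes the object boundary, then wrap with the opening and
# closing fragments.
s = raw.replace(",", '", "Value":"').replace("|", '"},{"Name":"')
json_str = '[{"Name":"' + s + '"}]'

parsed = json.loads(json_str)
print(parsed[0])
```

The order of the replacements does not matter here because the two delimiters never produce each other, but the no-quotes assumption from above is essential: an embedded " in any value would break the output.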
I have a table with the following structure:
and the following data in it:
[
{
"addresses": [
{
"city": "New York"
},
{
"city": "San Francisco"
}
],
"age": "26.0",
"name": "Foo Bar",
"createdAt": "2016-02-01 15:54:25 UTC"
},
{
"addresses": [
{
"city": "New York"
},
{
"city": "San Francisco"
}
],
"age": "26.0",
"name": "Foo Bar",
"createdAt": "2016-02-01 15:54:16 UTC"
}
]
What I'd like to do is recreate the same table (same structure) but with only the latest version of each row. In this example, let's say I'd like to group everything by name and take the row with the most recent createdAt.
I tried something like this: Google Big Query SQL - Get Most Recent Column Value, but I couldn't get it to work with record and repeated fields.
I really hoped someone from the Google team would answer this question, as it is a very frequent topic/problem here on SO. BigQuery is definitely not friendly enough when it comes to writing nested/repeated results back to BigQuery from a query.
So I will share the workaround I found a relatively long time ago. I do not like it, but (and that is why I hoped for an answer from the Google team) it works. I hope you will be able to adapt it to your particular scenario.
So, based on your example, assume you have the table as above, and you expect to get the most recent records based on the createdAt column (here, the record created at 15:54:25).
The code below does this:
SELECT name, age, createdAt, addresses.city
FROM JS(
( // input table
SELECT name, age, createdAt, NEST(city) AS addresses
FROM (
SELECT name, age, createdAt, addresses.city
FROM (
SELECT
name, age, createdAt, addresses.city,
MAX(createdAt) OVER(PARTITION BY name, age) AS lastAt
FROM yourTable
)
WHERE createdAt = lastAt
)
GROUP BY name, age, createdAt
),
name, age, createdAt, addresses, // input columns
"[ // output schema
{'name': 'name', 'type': 'STRING'},
{'name': 'age', 'type': 'INTEGER'},
{'name': 'createdAt', 'type': 'INTEGER'},
{'name': 'addresses', 'type': 'RECORD',
'mode': 'REPEATED',
'fields': [
{'name': 'city', 'type': 'STRING'}
]
}
]",
"function(row, emit) { // function
var c = [];
for (var i = 0; i < row.addresses.length; i++) {
c.push({city:row.addresses[i]});
};
emit({name: row.name, age: row.age, createdAt: row.createdAt, addresses: c});
}"
)
The way the above code works: it implicitly flattens the original records; finds the rows that belong to the most recent record (partitioned by name and age); and assembles those rows back into their respective records. The final step is processing with a JS UDF to build the proper schema, so the result can actually be written back to a BigQuery table as nested/repeated rather than flattened.
The last step is the most annoying part of this workaround, as it needs to be customized each time for the specific schema(s).
Please note: in this example there is only one nested field inside the addresses record, so the NEST() function worked. In scenarios where you have more than one field inside, the above approach still works, but you need to concatenate those fields to put them inside NEST(), and then do extra splitting of those fields inside the JS function, etc.
You can see examples in the answers below:
Create a table with Record type column
create a table with a column type RECORD
How to store the result of query on the current table without changing the table schema?
I hope this is a good foundation for you to experiment with and make your case work!
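The flatten / pick-latest / re-nest pipeline is easier to see outside SQL. A small Python sketch of the same three steps on the sample data (string comparison of createdAt works here only because the timestamps share one fixed format):

```python
from collections import defaultdict

# Flattened rows, as BigQuery sees them after implicit flattening:
rows = [
    {"name": "Foo Bar", "age": 26, "createdAt": "2016-02-01 15:54:25 UTC", "city": "New York"},
    {"name": "Foo Bar", "age": 26, "createdAt": "2016-02-01 15:54:25 UTC", "city": "San Francisco"},
    {"name": "Foo Bar", "age": 26, "createdAt": "2016-02-01 15:54:16 UTC", "city": "New York"},
    {"name": "Foo Bar", "age": 26, "createdAt": "2016-02-01 15:54:16 UTC", "city": "San Francisco"},
]

# Step 1: most recent createdAt per (name, age) partition
# (the MAX(...) OVER(PARTITION BY ...) in the query).
latest = {}
for r in rows:
    key = (r["name"], r["age"])
    latest[key] = max(latest.get(key, ""), r["createdAt"])

# Step 2: keep only rows from the latest version, re-nesting the cities
# back into a repeated addresses field (the NEST() + JS UDF part).
nested = defaultdict(list)
for r in rows:
    key = (r["name"], r["age"])
    if r["createdAt"] == latest[key]:
        nested[(key, r["createdAt"])].append({"city": r["city"]})

result = [
    {"name": k[0][0], "age": k[0][1], "createdAt": k[1], "addresses": cities}
    for k, cities in nested.items()
]
print(result)
```

On modern standard SQL the same effect is usually achieved with ARRAY_AGG and window functions, but the sketch above mirrors the legacy-SQL workaround step for step.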