Is there a way to create a JSON object in Oracle for parent-child relationship data, for example an organizational structure? The table contains:
EmpId  Name    Title    ManagerId
1      John    GM       0
2      Smith   Manager  1
3      Jason   Manager  1
4      Will    IP1      3
5      Jade    AM       3
6      Mark    IP2      5
7      Jane    AM2      5
8      Tamara  M1       1
9      Dory    M2       1
Something like the JSON object below is expected:
{
'name': 'John',
'title': 'GM',
'children': [
{ 'name': 'Smith', 'title': 'manager' },
{ 'name': 'Jason', 'title': 'manager',
'children': [
{ 'name': 'Will', 'title': 'IP1' },
{ 'name': 'Jade', 'title': 'AM',
'children': [
{ 'name': 'Mark', 'title': 'IP2' },
{ 'name': 'Jane', 'title': 'AM2' }
]
}
]
},
{ 'name': 'Tamara', 'title': 'M1' },
{ 'name': 'Dory', 'title': 'M2' }
]
}
Oracle Database 12.2 does have a number of JSON generation functions, but these are of limited use here: you need to build up the document recursively, which I believe requires a bit of hand-crafting.
First use a recursive query to create the org chart, recording each person's level in the hierarchy.
Then build the JSON:
- If the next row's level is greater than the current row's, the employee is a manager and you need to start a child array. Otherwise, return a JSON object for the current row.
- If the current row is the last in the tree, close N arrays and objects, where N is the row's depth in the tree minus one.
- Otherwise, if the next row is at a shallower level than the current, close (current level - next level) arrays and objects.
- Then, if the next level is less than or equal to the current, add a comma.
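The open/close bookkeeping above can be sketched in plain Python (a minimal sketch of the same idea, not part of the Oracle solution; `org_json` and the tuple layout are illustrative only):

```python
import json

def org_json(rows):
    """rows: depth-first-ordered (name, title, lvl) tuples, root at lvl 1."""
    out = []
    for i, (name, title, lvl) in enumerate(rows):
        nxt = rows[i + 1][2] if i + 1 < len(rows) else None
        if nxt is not None and nxt > lvl:
            # the employee has reports: open a children array
            out.append('{"name": %s, "title": %s, "children": ['
                       % (json.dumps(name), json.dumps(title)))
        else:
            out.append(json.dumps({"name": name, "title": title}))
        if nxt is None:
            out.append(']}' * (lvl - 1))    # last row: close everything still open
        elif nxt < lvl:
            out.append(']}' * (lvl - nxt))  # close (lvl - nxt) arrays and objects
        if nxt is not None and nxt <= lvl:
            out.append(',')                 # a sibling (or uncle) follows
    return ''.join(out)

rows = [('John', 'GM', 1), ('Smith', 'Manager', 2), ('Jason', 'Manager', 2),
        ('Will', 'IP1', 3), ('Jade', 'AM', 3), ('Mark', 'IP2', 4),
        ('Jane', 'AM2', 4), ('Tamar', 'M1', 2), ('Dory', 'M2', 2)]
doc = json.loads(org_json(rows))  # parses, so the brackets balance
```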
Which gives something like:
create table t (
EmpId int,
Name varchar2(10),
Title varchar2(10),
ManagerId int
);
insert into t values (1, 'John', 'GM' , 0 );
insert into t values (2, 'Smith', 'Manager' , 1 );
insert into t values (3, 'Jason', 'Manager' , 1 );
insert into t values (4, 'Will', 'IP1' , 3 );
insert into t values (5, 'Jade', 'AM' , 3 );
insert into t values (6, 'Mark', 'IP2' , 5 );
insert into t values (7, 'Jane', 'AM2' , 5 );
insert into t values (8, 'Tamar', 'M1' , 1 );
insert into t values (9, 'Dory', 'M2' , 1 );
commit;
with chart (
empid, managerid, name, title, lvl
) as (
select empid, managerid,
name, title, 1 lvl
from t
where empid = 1
union all
select t.empid, t.managerid,
t.name, t.title,
lvl + 1 lvl
from chart c
join t
on c.empid = t.managerid
) search depth first by empid set seq,
jdata as (
select case
/* The employee has reports */
when lead ( lvl ) over ( order by seq ) > lvl then
'{"name": "' || name ||
'", "title": "' || title ||
'", "children": ['
else
json_object ( 'name' value name, 'title' value title )
end ||
case
/* Close arrays & objects */
when lead ( lvl ) over ( order by seq ) is null then
lpad ( ']}', ( lvl - 1 ) * 2, ']}' )
when lead ( lvl ) over ( order by seq ) < lvl then
lpad ( ']}', ( lvl - lead ( lvl ) over ( order by seq ) ) * 2, ']}' )
end ||
case
/* Add closing commas */
when lead ( lvl ) over ( order by seq ) <= lvl then
','
end j,
lead ( lvl ) over ( order by seq ) nlvl,
seq, lvl
from chart
)
select json_query (
listagg ( j )
within group ( order by seq ),
'$' returning varchar2 pretty
) chart_json
from jdata;
CHART_JSON
{
"name" : "John",
"title" : "GM",
"children" :
[
{
"name" : "Smith",
"title" : "Manager"
},
{
"name" : "Jason",
"title" : "Manager",
"children" :
[
{
"name" : "Will",
"title" : "IP1"
},
{
"name" : "Jade",
"title" : "AM",
"children" :
[
{
"name" : "Mark",
"title" : "IP2"
},
{
"name" : "Jane",
"title" : "AM2"
}
]
}
]
},
{
"name" : "Tamar",
"title" : "M1"
},
{
"name" : "Dory",
"title" : "M2"
}
]
}
These are my tables:
CREATE TABLE product (
product_id serial PRIMARY KEY,
name VARCHAR ( 50 ),
size VARCHAR ( 50 )
)
CREATE TABLE country (
country_id serial PRIMARY KEY,
name VARCHAR ( 50 ),
product_id INT
)
CREATE TABLE color (
color_id serial PRIMARY KEY,
name VARCHAR ( 50 ),
product_id INT
)
I want my query to return the list of products in this way:
The query result needs to have two objects: meta and result.
The result needs to be paginated with 10 objects per page, and meta should include the total count of filtered products plus counts for the products' attributes.
When country is filtered, I want to see the other countries' names and counts as well, not only the filtered country (same for color).
If color is filtered, I don't want to see countries that are not available with this color for the products we have (and vice versa):
{
"meta": {
"count" : 200,
"next_page": true,
"colors": [
{"id": 1, "name": "red", "count": 5},
{"id": 2, "name": "white", "count": 10}
],
"countries": [
{"id": 1, "name": "Germany", "count": 120},
{"id": 2, "name": "Albania", "count": 201}
],
"sizes": [
{"id": 1, "name": "Big", "count": 45},
{"id": 2, "name": "Small", "count": 63}
]
},
"result": [
{
"product_name" : "Milk",
"color": "White",
"country": "Germany"
},
{
"product_name" : "Milk2",
"color": "White",
"country": "Germany"
},
{
"product_name" : "Milk3",
"color": "White",
"country": "Germany"
}
]
}
This is what I've done:
WITH results as (
SELECT
product.id,
product.name,
product.size,
color.name,
country.name
FROM product
LEFT JOIN color ON color.product_id = product.id
LEFT JOIN country ON country.product_id = product.id
WHERE color.name = ANY('{White}')
)
SELECT
(
SELECT
jsonb_build_object(
'count', count.full_count,
'next_page', count.full_count - (1 * 10) > 0
)
FROM (SELECT count(id) AS full_count FROM results) AS count
) AS meta,
(
SELECT jsonb_agg(result_rows)
FROM
(SELECT * FROM results
LIMIT 10
OFFSET (1-1) * 10) AS result_rows
) AS result
I've tried lots of things and couldn't get the names and counts of countries and colors, so I didn't include that part of the query. BTW, a slight change in the result the query returns is acceptable.
Any help is highly appreciated. I'm using the latest version of PostgreSQL. You can see this type of query used on eBay's search results page, where the filter properties change as you select different filters, so the available choices and counts correspond to your current filters.
First of all, your data model looks strange (or wrong). I would suggest the following data model instead, so that you can store several products with the same size and/or the same color and/or the same country. The only limitation here is that one product may only have one size, one color, and one country; you would have to create new tables if you wanted to manage one-to-many relationships.
CREATE TABLE size (
size_id serial PRIMARY KEY,
name VARCHAR ( 50 )
) ;
CREATE TABLE country (
country_id serial PRIMARY KEY,
name VARCHAR ( 50 )
) ;
CREATE TABLE color (
color_id serial PRIMARY KEY,
name VARCHAR ( 50 )
) ;
CREATE TABLE product (
product_id serial PRIMARY KEY,
name VARCHAR ( 50 ),
size_id INT CONSTRAINT rf_size REFERENCES size (size_id) MATCH SIMPLE,
color_id INT CONSTRAINT rf_color REFERENCES color (color_id) MATCH SIMPLE,
country_id INT CONSTRAINT rf_country REFERENCES country (country_id) MATCH SIMPLE
) ;
Then you can get your expected result with the following query:
WITH global_list AS (
SELECT p.size_id, p.color_id, p.country_id
, s.name AS size_name, clr.name AS color_name, cty.name AS country_name
, count(*) AS product_count
FROM product AS p
INNER JOIN country AS cty
ON cty.country_id = p.country_id
INNER JOIN color AS clr
ON clr.color_id = p.color_id
INNER JOIN size AS s
ON s.size_id = p.size_id
GROUP BY p.size_id, size_name, p.color_id, color_name, p.country_id, country_name
), result_list AS (
SELECT jsonb_build_object('product_name',p.name,'color',clr.name,'country',cty.name, 'size', s.name) AS result
, count(*) OVER () AS total_count
FROM product AS p
INNER JOIN country AS cty
ON cty.country_id = p.country_id
INNER JOIN color AS clr
ON clr.color_id = p.color_id
INNER JOIN size AS s
ON s.size_id = p.size_id
WHERE cty.name = COALESCE('Albania', cty.name) -- enter here the country filter criteria if any, like 'Germany', or NULL if no country criteria
AND clr.name = COALESCE(NULL, clr.name) -- enter here the color filter criteria if any, like 'White', or NULL if no color criteria
AND s.name = COALESCE(NULL, s.name) -- enter here the size filter criteria if any, like 'Medium', or NULL if no size criteria
), country_list AS (
SELECT jsonb_build_object('id', gl.country_id, 'name', gl.country_name, 'count', sum(gl.product_count)) AS country
FROM global_list AS gl
WHERE gl.color_name = COALESCE(NULL, gl.color_name) -- same color criteria as above
AND gl.size_name = COALESCE(NULL, gl.size_name) -- same size criteria as above
GROUP BY gl.country_id, gl.country_name
), color_list AS (
SELECT jsonb_build_object('id', gl.color_id, 'name', gl.color_name, 'count', sum(gl.product_count)) AS color
FROM global_list AS gl
WHERE gl.country_name = COALESCE('Albania', gl.country_name) -- same country criteria as above
AND gl.size_name = COALESCE(NULL, gl.size_name) -- same size criteria as above
GROUP BY gl.color_id, gl.color_name
), size_list AS (
SELECT jsonb_build_object('id', gl.size_id, 'name', gl.size_name, 'count', sum(gl.product_count)) AS size
FROM global_list AS gl
WHERE gl.country_name = COALESCE('Albania', gl.country_name) -- same country criteria as above
AND gl.color_name = COALESCE(NULL, gl.color_name) -- same color criteria as above
GROUP BY gl.size_id, gl.size_name
)
SELECT (SELECT jsonb_build_object('result', jsonb_agg(result)) FROM result_list LIMIT 10 OFFSET 0)
|| jsonb_build_object('meta'
, jsonb_build_object( 'count', (SELECT total_count FROM result_list LIMIT 1)
, 'next_page', (SELECT total_count > 10 FROM result_list LIMIT 1)
, 'countries', (SELECT jsonb_agg(country) FROM country_list)
, 'colors', (SELECT jsonb_agg(color) FROM color_list)
, 'sizes', (SELECT jsonb_agg(size) FROM size_list)
)
)
PS: the first query, global_list, could be implemented as a view.
The result looks like:
{
"meta": {
"count": 2,
"sizes": [
{
"id": 1,
"name": "Small",
"count": 1
},
{
"id": 2,
"name": "Medium",
"count": 1
}
],
"colors": [
{
"id": 1,
"name": "White",
"count": 1
},
{
"id": 3,
"name": "Blue",
"count": 1
}
],
"countries": [
{
"id": 1,
"name": "Germany",
"count": 1
},
{
"id": 3,
"name": "Albania",
"count": 2
}
],
"next_page": false
},
"result": [
{
"size": "Medium",
"color": "White",
"country": "Albania",
"product_name": "Milk2"
},
{
"size": "Small",
"color": "Blue",
"country": "Albania",
"product_name": "Milk3"
}
]
}
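The cross-facet rule used above (each facet list applies every active filter except its own) can be sketched in plain Python; with the same sample filter (country = 'Albania') it reproduces the counts shown. The in-memory `products` list is hypothetical sample data, not the query's actual rows:

```python
from collections import Counter

products = [
    {"name": "Milk",  "color": "White", "country": "Germany", "size": "Small"},
    {"name": "Milk2", "color": "White", "country": "Albania", "size": "Medium"},
    {"name": "Milk3", "color": "Blue",  "country": "Albania", "size": "Small"},
]

def facet_counts(items, filters, facet):
    # apply every active filter EXCEPT the facet's own, then count its values
    others = {k: v for k, v in filters.items() if k != facet and v is not None}
    kept = [p for p in items if all(p[k] == v for k, v in others.items())]
    return Counter(p[facet] for p in kept)

filters = {"country": "Albania", "color": None, "size": None}
# the country facet still shows every country matching the *other* filters
country_counts = facet_counts(products, filters, "country")
color_counts = facet_counts(products, filters, "color")
```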
All details are in the dbfiddle.
I have a JSON structure in a field that looks like this. I'm trying to extract every task in every category, there could be any number of tasks or categories.
I've got part of the way there by extracting a single category, but can't seem to do it for every task in every category.
"tasks": {
"category-business": [
{
"dateCompleted": {
"_seconds": 1653672655,
"_nanoseconds": 791000000
},
"slug": "task-alpha",
"status": "completed"
},
{
"dateCompleted": {
"_seconds": 1654516259,
"_nanoseconds": 796000000
},
"slug": "task-bravo",
"status": "completed"
}
],"category-community": [
{
"dateCompleted": {
"_seconds": 1654709063,
"_nanoseconds": 474000000
},
"slug": "task-papa",
"status": "completed"
},
{
"dateCompleted": {
"_seconds": 1654709841,
"_nanoseconds": 764000000
},
"slug": "task-zebra",
"status": "completed"
}
]}
Here's the query so far
SELECT
*
FROM
(
SELECT
ARRAY(
SELECT
STRUCT(
TIMESTAMP_SECONDS(
CAST(
JSON_EXTRACT_SCALAR(business_tasks, '$.dateCompleted._seconds') AS INT64
)
) AS dateCompleted,
json_extract_scalar(business_tasks, '$.slug') AS task_slug,
json_extract_scalar(business_tasks, '$.status') AS status
)
FROM
UNNEST(
json_extract_array(DATA, '$.tasks.category-business')
) business_tasks
) AS items
FROM
`table`
)
This extracts just the information in category-business.
What I'm trying to do is expand category-community and any other children underneath the tasks key. The real data has at least 10 categories and 50 tasks.
I think I need another round of UNNEST and json_extract_array, but I can't quite work out the correct order.
Consider the approach below:
create temp function get_keys(input string) returns array<string> language js as """
return Object.keys(JSON.parse(input));
""";
create temp function get_values(input string) returns array<string> language js as """
return Object.values(JSON.parse(input));
""";
create temp function get_leaves(input string) returns string language js as '''
function flattenObj(obj, parent = '', res = {}){
for(let key in obj){
let propName = parent ? parent + '.' + key : key;
if(typeof obj[key] == 'object'){
flattenObj(obj[key], propName, res);
} else {
res[propName] = obj[key];
}
}
return JSON.stringify(res);
}
return flattenObj(JSON.parse(input));
''';
create temp table temp_table as (
select
split(key, '.')[offset(0)] as category,
split(key, '.')[offset(1)] as offset,
split(key, '.')[offset(2)] || ifnull(split(key, '.')[safe_offset(3)], '') as key,
val, format('%t', t) row_id
from your_table t, unnest([struct(get_leaves(json_extract(data, '$.tasks')) as leaves)]),
unnest(get_keys(leaves)) key with offset
join unnest(get_values(leaves)) val with offset using(offset)
);
execute immediate (
select '''
select * except(row_id) from temp_table
pivot (any_value(val) for key in ("''' || keys || '"))'
from (
select string_agg(key, '","') keys
from (select distinct key from temp_table)
)
);
If applied to the sample data in your question, the output is one row per task (a category plus array offset pair), with a pivoted column for each task field.
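The get_leaves helper flattens the nested JSON into dotted keys like category-business.0.slug, which the split(key, '.') calls then pick apart into category, array offset, and field name. A Python sketch of the same flattening (just the idea, not BigQuery code):

```python
import json

def flatten(obj, parent=''):
    """Flatten nested dicts/lists into {'dotted.path': leaf} pairs,
    mirroring the JS get_leaves UDF."""
    res = {}
    items = obj.items() if isinstance(obj, dict) else enumerate(obj)
    for key, val in items:
        path = f'{parent}.{key}' if parent else str(key)
        if isinstance(val, (dict, list)):
            res.update(flatten(val, path))
        else:
            res[path] = val
    return res

tasks = json.loads('''{
  "category-business": [
    {"dateCompleted": {"_seconds": 1653672655, "_nanoseconds": 791000000},
     "slug": "task-alpha", "status": "completed"}
  ]
}''')
leaves = flatten(tasks)
# keys come out as "category-business.0.slug",
# "category-business.0.dateCompleted._seconds", and so on
```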
DML only:
with category_level as (
select
coalesce(
json_query_array(DATA.tasks[a], '$.category-business')
, json_query_array(DATA.tasks[a], '$.category-community')
, json_query_array(DATA.tasks[a], '$.category-3')
, json_query_array(DATA.tasks[a], '$.category-4')
, json_query_array(DATA.tasks[a], '$.category-5')
, json_query_array(DATA.tasks[a], '$.category-6')
, json_query_array(DATA.tasks[a], '$.category-7')
, json_query_array(DATA.tasks[a], '$.category-8')
, json_query_array(DATA.tasks[a], '$.category-9')
, json_query_array(DATA.tasks[a], '$.category-10')
) category_array
from table
left join unnest(generate_array(0, 100)) a
where DATA.tasks[a] is not null
)
select
timestamp_seconds(cast(json_extract_scalar(b.dateCompleted._seconds) as int64)) dateCompleted
, json_extract_scalar(b.slug) slug
, json_extract_scalar(b.status) status
from category_level
left join unnest(category_array) b
https://console.cloud.google.com/bigquery?sq=1013309549723:fe8b75122e5b4b549e8081df99584c81
new version:
select
timestamp_seconds(cast(regexp_extract_all(to_json_string(json_extract(DATA,'$.tasks')), r'"_seconds":(\d*)')[offset(a)] as int64)) dateCompleted
, regexp_extract_all(to_json_string(json_extract(DATA,'$.tasks')), r'"slug":"([a-z\-]*)"')[offset(a)] task_slug
, regexp_extract_all(to_json_string(json_extract(DATA,'$.tasks')), r'"status":"([a-z\-]*)"')[offset(a)] status
from table
join unnest(generate_array(0,-1+array_length(regexp_extract_all(to_json_string(json_extract(DATA,'$.tasks')), r'"slug":"([a-z\-]*)"')))) a
https://console.cloud.google.com/bigquery?sq=1013309549723:9f43bd653ba14589b31a1f5673adcda7
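The regex version works because each task contributes exactly one _seconds, one slug, and one status match, in the same document order, so the three extracted arrays line up by offset. A Python sketch of the same alignment, over a JSON string reconstructed from the question's sample data:

```python
import re

# serialized tasks object (values taken from the question's sample)
data = ('{"category-business":[{"dateCompleted":{"_seconds":1653672655,'
        '"_nanoseconds":791000000},"slug":"task-alpha","status":"completed"},'
        '{"dateCompleted":{"_seconds":1654516259,"_nanoseconds":796000000},'
        '"slug":"task-bravo","status":"completed"}]}')

# same patterns as the BigQuery REGEXP_EXTRACT_ALL calls
seconds = re.findall(r'"_seconds":(\d*)', data)
slugs = re.findall(r'"slug":"([a-z\-]*)"', data)
statuses = re.findall(r'"status":"([a-z\-]*)"', data)

# one entry per task, in document order, so zipping reassembles the rows
# just like the UNNEST(generate_array(...)) offset join
tasks = list(zip(slugs, statuses, (int(s) for s in seconds)))
```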
I have the following tables:
create table students
(
id int,
name varchar(10)
)
create table subjects
(
subjectId int,
studentId int,
subject varchar(12)
)
create table marks
(
studentId int,
subjectId int,
marks int
)
create table sports
(
sportId int,
studentId int,
name varchar(12)
)
with the following data:
insert into students values(1, 'Rusty');
insert into subjects values(1, 1, 'math')
insert into subjects values(2, 1, 'science')
insert into marks values(1,1,50)
insert into marks values(1,2,60)
insert into sports values(1, 1, 'soccer')
insert into sports values(2, 1, 'baseball')
I want to write a query in SQL Server to get the following output:
studentId = 1
{
"id": 1,
"name": "Rusty",
"subjects" : [
{
"name": "math",
"marks": 50
},
{
"name": "science",
"marks": 60
}
],
"sports": [
{
"name": "soccer"
},
{
"name": "baseball"
}
]
}
I tried the following query
select *
from students s
join subjects su on (s.id = su.studentId)
join sports sp on (s.id = sp.studentId)
where s.id = 1
for json auto
and here is the output:
[
{
"id": 1,
"name": "Rusty",
"su": [
{
"subjectId": 1,
"studentId": 1,
"subject": "math",
"sp": [
{
"sportId": 1,
"studentId": 1,
"name": "soccer"
}
]
},
{
"subjectId": 1,
"studentId": 1,
"subject": "science",
"sp": [
{
"sportId": 1,
"studentId": 1,
"name": "soccer"
}
]
},
{
"subjectId": 1,
"studentId": 1,
"subject": "math",
"sp": [
{
"sportId": 1,
"studentId": 1,
"name": "baseball"
}
]
},
{
"subjectId": 1,
"studentId": 1,
"subject": "science",
"sp": [
{
"sportId": 1,
"studentId": 1,
"name": "baseball"
}
]
}
]
}
]
For your desired output you can use correlated subqueries for sports and subjects that generate their own JSON, using FOR JSON PATH, and include that information as an array of nested objects in your main JSON output by way of JSON_QUERY (Transact-SQL), e.g.:
/*
* Data setup...
*/
create table students (
id int,
name varchar(10)
);
create table subjects (
subjectId int,
studentId int,
subject varchar(12)
);
create table marks (
studentId int,
subjectId int,
marks int
);
create table sports (
sportId int,
studentId int,
name varchar(12)
);
insert into students (id, name) values
(1, 'Rusty');
insert into subjects (subjectId, studentId, subject) values
(1, 1, 'math'),
(2, 1, 'science');
insert into marks (studentId, subjectId, marks) values
(1,1,50),
(1,2,60);
insert into sports (sportId, studentId, name) values
(1, 1, 'soccer'),
(2, 1, 'baseball');
/*
* Example query...
*/
select
students.id,
students.name,
json_query(( --<<-- doubled brackets
select
subjects.subject,
marks.marks
from subjects
join marks
on marks.subjectId = subjects.subjectId
and marks.studentId = subjects.studentId
where subjects.studentId = students.id
for json path
)) as [subjects],
json_query(( --<<-- doubled brackets
select
sports.name
from sports
where sports.studentId = students.id
for json path
)) as [sports]
from students
where students.id = 1
for json path, without_array_wrapper;
Which yields the JSON output:
{
"id": 1,
"name": "Rusty",
"subjects": [
{
"subject": "math",
"marks": 50
},
{
"subject": "science",
"marks": 60
}
],
"sports": [
{
"name": "soccer"
},
{
"name": "baseball"
}
]
}
I don't have a recursive relationship here, but a problem of several child tables having the same parent, and having multiple records in each child table. Here's an example:
create table #Schedules (
ScheduleId int,
ScheduleName varchar(20)
);
create table #ScheduleThings (
ScheduleThingId int,
ScheduleId int,
Thing decimal(18,2));
create table #ScheduleOtherThings (
ScheduleOtherThingId int,
ScheduleId int,
OtherThing varchar(50));
I insert some typical sample data:
insert into #Schedules (
ScheduleId,
ScheduleName )
values
(1, 'A'),
(2, 'B');
insert into #ScheduleThings (
ScheduleThingId,
ScheduleId,
Thing )
values
(1, 1, 10.22),
(2, 1, 11.02),
(3, 1, 11.89),
(4, 2, 19.23),
(5, 2, 20.04),
(6, 2, 20.76),
(7, 2, 21.37);
insert into #ScheduleOtherThings (
ScheduleOtherThingId,
ScheduleId,
OtherThing )
values
(1, 1, 'Always'),
(2, 1, 'Sometimes'),
(3, 2, 'Seldom'),
(4, 2, 'Always'),
(5, 2, 'Never');
declare #results table (result xml);
I've then tried 2 similar approaches (3 or 4 actually), but here is one:
insert into #Results (
result )
select fr.result from (
select
s.ScheduleId as [schedules.schedule_id],
s.ScheduleName as [schedules.schedule_name],
st.ScheduleThingId as [schedules.schedule_things.schedule_thing_id],
st.Thing as [schedules.schedule_things.thing],
sot.ScheduleOtherThingId as [schedules.schedule_other_things.schedule_other_thing_id],
sot.OtherThing as [schedules.schedule_other_things.other_thing]
from #Schedules s
join #ScheduleThings st
on st.ScheduleId = s.ScheduleId
join #ScheduleOtherThings sot
on sot.ScheduleId = s.ScheduleId
where s.ScheduleId = 1
and st.ScheduleThingId < 3
for json path, root('schedules') ) fr(result) ;
select * from #Results;
This attempt gives me:
{
"schedules": [
{
"schedules": {
"schedule_id": 1,
"schedule_name": "A",
"schedule_things": {
"schedule_thing_id": 1,
"thing": 10.22
},
"schedule_other_things": {
"schedule_other_thing_id": 1,
"other_thing": "Always"
}
}
},
{
"schedules": {
"schedule_id": 1,
"schedule_name": "A",
"schedule_things": {
"schedule_thing_id": 1,
"thing": 10.22
},
"schedule_other_things": {
"schedule_other_thing_id": 2,
"other_thing": "Sometimes"
}
}
},
and removing 'schedules' from the dot notation entirely has no significant impact:
{
"schedules": [
{
"schedule_id": 1,
"schedule_name": "A",
"schedule_things": {
"schedule_thing_id": 1,
"thing": 10.22
},
"schedule_other_things": {
"schedule_other_thing_id": 1,
"other_thing": "Always"
}
},
{
"schedule_id": 1,
"schedule_name": "A",
"schedule_things": {
"schedule_thing_id": 1,
"thing": 10.22
},
"schedule_other_things": {
"schedule_other_thing_id": 2,
"other_thing": "Sometimes"
}
},
What I need (and what I think is the proper JSON structure) is like:
{
"schedules": [
{
"schedule_id": 1,
"schedule_name": "A",
"schedule_things": [
{
"schedule_thing_id": 1,
"thing": 10.22
},
{
"schedule_thing_id": 2,
"thing": 11.02
}
],
"schedule_other_things": [
{
"schedule_other_thing_id": 1,
"other_thing": "Always"
},
{
"schedule_other_thing_id": 2,
"other_thing": "Sometimes"
}
]
}
]
}
In other words, the attributes of the parent 'Schedule' record appear one time, followed by an array of all child ScheduleThings, followed by an array of all child ScheduleOtherThings, etc.
I don't understand yet why my dot specifications don't make it clear which attributes belong to the root object, and therefore that I don't need those attributes repeated. But I especially don't understand why the entire dataset is flattened, even though I think I've used the dot notation to make the parent-child relationships explicit.
You could try nesting the calls to for json
Such as...
select
fr.result
from
(
select
s.ScheduleId as [schedules.schedule_id],
s.ScheduleName as [schedules.schedule_name],
(
SELECT ScheduleThingId, Thing
FROM #ScheduleThings
WHERE ScheduleId = s.ScheduleId
AND ScheduleThingId < 3
FOR JSON PATH
)
AS [schedules.schedule_things],
(
SELECT ScheduleOtherThingId, OtherThing
FROM #ScheduleOtherThings
WHERE ScheduleId = s.ScheduleId
FOR JSON PATH
)
AS [schedules.schedule_other_things]
from
#Schedules s
where
s.ScheduleId = 1
for json path, root('schedules')
)
fr(result) ;
Demo : https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=e9a9c55b2daaac4e0f48d52a87bfede9
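The nested subqueries sidestep the row-multiplying joins by building each child array separately and attaching it to the parent. The same shape, built in plain Python over hypothetical in-memory rows mirroring the sample tables, looks like:

```python
# parent rows and two independent child-row lists (sample data from the question)
schedules = [{"ScheduleId": 1, "ScheduleName": "A"}]
things = [(1, 1, 10.22), (2, 1, 11.02), (3, 1, 11.89)]   # (id, schedule_id, thing)
other_things = [(1, 1, "Always"), (2, 1, "Sometimes")]   # (id, schedule_id, other)

result = {"schedules": [
    {
        "schedule_id": s["ScheduleId"],
        "schedule_name": s["ScheduleName"],
        # each child table becomes its own array: no cross join, no repetition
        "schedule_things": [
            {"schedule_thing_id": tid, "thing": t}
            for tid, sid, t in things
            if sid == s["ScheduleId"] and tid < 3
        ],
        "schedule_other_things": [
            {"schedule_other_thing_id": oid, "other_thing": o}
            for oid, sid, o in other_things
            if sid == s["ScheduleId"]
        ],
    }
    for s in schedules
]}
```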
My (simplified) SQLite tables are like this:
create table customers (
id integer primary key autoincrement,
contact_name text,
billaddr_id integer references addresses(id)
);
create table addresses (
id integer primary key autoincrement,
address text
);
And here are the result classes (generated from the SQL by dbicdump):
Test::DB::Schema::Result::Customer->table("customers");
Test::DB::Schema::Result::Customer->add_columns(
"id",
{ data_type => "integer", is_auto_increment => 1, is_nullable => 0 },
"contact_name",
{ data_type => "text", is_nullable => 1 },
"billaddr_id",
{ data_type => "integer", is_foreign_key => 1, is_nullable => 1 },
);
Test::DB::Schema::Result::Customer->set_primary_key("id");
Test::DB::Schema::Result::Address->table("addresses");
Test::DB::Schema::Result::Address->add_columns(
"id", { data_type => "integer", is_auto_increment => 1, is_nullable => 0 },
"address", { data_type => "text", is_nullable => 1 },
);
Test::DB::Schema::Result::Address->set_primary_key("id");
Test::DB::Schema::Result::Address->has_many(
"customers",
"Test::DB::Schema::Result::Customer",
{ "foreign.billaddr_id" => "self.id" },
{ cascade_copy => 0, cascade_delete => 0 },
);
Test::DB::Schema::Result::Customer->belongs_to(
"billaddr",
"Test::DB::Schema::Result::Address",
{ id => "billaddr_id" },
{
is_deferrable => 0,
join_type => "LEFT",
on_delete => "NO ACTION",
on_update => "NO ACTION",
},
);
This bit of code:
my $data = {
contact_name => 'Jim Customer',
billaddr => {
address => 'Address...',
},
};
my $newcustomer = $c->schema->resultset('Customer')->create($data);
results in this database update:
SELECT me.id, me.address FROM addresses me WHERE ( ( me.address = ? ) ): 'Address...'
BEGIN WORK
SELECT me.id, me.address FROM addresses me WHERE ( ( me.address = ? ) ): 'Address...'
INSERT INTO addresses ( address ) VALUES ( ? ): 'Address...'
INSERT INTO partners ( billaddr_id, contact_name ) VALUES ( ?, ? ) : '10', 'Jim Customer'
COMMIT
Why does it do a select before the insert? Because it's checking to see if an address with the same value of the 'address' column already exists. If it does exist, the ID of that address is reused, like this:
SELECT me.id, me.address FROM addresses me WHERE ( ( me.address = ? ) ): 'Address...'
INSERT INTO partners ( billaddr_id, contact_name ) VALUES ( ?, ? ): '10', 'Another Customer with the same address'
But that's not what I want! I want separate addresses for separate customers, even if they happen to live in the same place at the moment.
How can I make DBIx::Class create a new row in the addresses table every time?
Thanks to abraxxa's comments, I've been pointed in the right direction and have done more reading and testing with DBIx::Class:Schema.
Generating the table from the Schema classes, rather than the other way round, seems like the way to go, especially if it will make future upgrades to the database easier.
I've boiled the problem down to the following example code:
Test.pl:
#!/usr/bin/perl
use Test::DB::Schema;
my $schema = Test::DB::Schema->connect(
"dbi:SQLite:dbname=dbicsl_test.db", '', '', {}
);
$schema->deploy({ add_drop_table => 1 } , '.');
$schema->storage->debug(1);
my $data1 = {
text => 'Fred',
table2 => {
text => 'abc',
}
};
my $new1 = $schema->resultset('Table1')->create($data1);
my $data2 = {
text => 'Jim',
table2 => {
text => 'xyz',
}
};
my $new2 = $schema->resultset('Table1')->create($data2);
my $data3 = {
text => 'Emily',
table2 => {
text => 'abc',
}
};
my $new3 = $schema->resultset('Table1')->create($data3);
Test::DB::Schema::Result::Table1.pm:
package Test::DB::Schema::Result::Table1;
use base 'DBIx::Class::Core';
__PACKAGE__->table("table1");
__PACKAGE__->add_columns(
"id",
{ data_type => "integer", is_auto_increment => 1, is_nullable => 0 },
"text",
{ data_type => "text", is_nullable => 1 },
"table2_id",
{ data_type => "integer", is_foreign_key => 1, is_nullable => 0 },
);
__PACKAGE__->set_primary_key("id");
__PACKAGE__->has_one(
table2 =>
"Test::DB::Schema::Result::Table2",
{ 'foreign.id' => 'self.table2_id' },
);
1;
Test::DB::Schema::Result::Table2:
package Test::DB::Schema::Result::Table2;
use base 'DBIx::Class::Core';
__PACKAGE__->table("table2");
__PACKAGE__->add_columns(
"id",
{ data_type => "integer", is_auto_increment => 1, is_nullable => 0 },
"text",
{ data_type => "text", is_nullable => 0 },
);
__PACKAGE__->set_primary_key("id");
1;
And here's the output:
SELECT me.id, me.text FROM table2 me WHERE ( me.text = ? ): 'abc'
BEGIN WORK
SELECT me.id, me.text FROM table2 me WHERE ( me.text = ? ): 'abc'
INSERT INTO table2 ( text) VALUES ( ? ): 'abc'
INSERT INTO table1 ( table2_id, text) VALUES ( ?, ? ): '1', 'Fred'
COMMIT
SELECT me.id, me.text FROM table2 me WHERE ( me.text = ? ): 'xyz'
BEGIN WORK
SELECT me.id, me.text FROM table2 me WHERE ( me.text = ? ): 'xyz'
INSERT INTO table2 ( text) VALUES ( ? ): 'xyz'
INSERT INTO table1 ( table2_id, text) VALUES ( ?, ? ): '2', 'Jim'
COMMIT
SELECT me.id, me.text FROM table2 me WHERE ( me.text = ? ): 'abc'
INSERT INTO table1 ( table2_id, text) VALUES ( ?, ? ): '1', 'Emily'
So the database now looks like
table1.id table1.text table1.table2_id
1 Fred 1
2 Jim 2
3 Emily 1
table2.id table2.text
1 abc
2 xyz
whereas I expected / hoped for:
table1.id table1.text table1.table2_id
1 Fred 1
2 Jim 2
3 Emily 3
table2.id table2.text
1 abc
2 xyz
3 abc
Why does it reuse 1/abc when I haven't told it to make the table2.text column unique?