So I've looked at the samples that use LoadDocument, but that requires the Id of the document you're loading. In my case it's the exact opposite: I need all of the primary documents that have a related document, where the relationship is stored in the related document as the id of the primary document.
How do I write it?
I.e.
Primary = {Id, Name}
Secondary = {Id, Name, PrimaryId}
So I want to get all of the Primary for which a Secondary (with other criteria about the secondary) exists.
In LINQ I'd just write from p in session.Primary where session.Secondary.Any(s => s.PrimaryId == p.Id && whateverElse) select p, but this doesn't work because the query provider doesn't understand it.
I've tried to construct this as a manual index, but I can't figure out how to create the link to the related document, since LoadDocument only takes the ID of the related document, which I don't have from the primary document.
I am creating related tables in SQLite and am wondering what the most efficient way to make them relate to each other is.
CREATE TABLE cards_name (id INTEGER PRIMARY KEY, name TEXT, rarity TEXT);
CREATE TABLE card_story (id INTEGER PRIMARY KEY, name_id INTEGER, story TEXT);
I have already entered some data into the first table, and I was wondering how to add data to the second table without having to look up the INTEGER PRIMARY KEY every time (perhaps by using the card's name?).
26|Armorsmith|Rare
27|Auchenai Soulpriest|Rare
28|Avenging Wrath|Epic
29|Bane of Doom|Epic
For instance, I would like to enter the story of Armorsmith as "She accepts guild funds for repairs!" into the story column by using her name (Armorsmith) instead of her ID (26).
Thanks
The task you are describing should be taken care of at the application level, not at the database level.
You can create a GUI where you select the name of a card, but the underlying value sent back to the database is the card's id, and that id gets stored in the story table, establishing the relationship between the card and the story.
I would like to enter the story of Armorsmith as "She accepts guild funds for repairs!" into story TEXT by using her name(Armorsmith) instead of ID(26).
You can insert into one table from another table. Instead of hard coding the values, you can get them from a select. So long as the rows returned by the select match the rows needed by the insert it'll work.
insert into card_story
(name_id, story)
select id, :story
from cards_name
where name = :name
The insert needs a name_id and a story. The select returns the matching ids, and the story text is supplied as a literal parameter.
This statement would be executed with two parameters, one containing the text of the story, and one containing the name of the person. So you might write something like this (the exact details depend on your programming language and SQL interface library).
sql.execute(
name: "Armorsmith",
story: "She accepts guild funds for repairs!"
)
That is the equivalent of:
insert into cards_story
(name_id, story)
select id, 'She accepts guild funds for repairs!'
from cards_name
where name = 'Armorsmith'
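In Python's sqlite3, for instance, that parameterized statement could be run like the following sketch (an in-memory database and the question's schema, so the whole thing is self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use your database file in practice
conn.executescript("""
    CREATE TABLE cards_name (id INTEGER PRIMARY KEY, name TEXT, rarity TEXT);
    CREATE TABLE card_story (id INTEGER PRIMARY KEY, name_id INTEGER, story TEXT);
    INSERT INTO cards_name VALUES (26, 'Armorsmith', 'Rare');
""")

# Named parameters (:name, :story) keep the SQL identical to the statement above.
conn.execute(
    """
    INSERT INTO card_story (name_id, story)
    SELECT id, :story FROM cards_name WHERE name = :name
    """,
    {"name": "Armorsmith", "story": "She accepts guild funds for repairs!"},
)

print(conn.execute("SELECT name_id, story FROM card_story").fetchall())
# → [(26, 'She accepts guild funds for repairs!')]
```

The story row ends up with name_id 26 even though the caller never looked that id up.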
Note that you'll want to make a few changes to your schema...
Declare name unique else you might get multiple cards for one name.
Like name TEXT UNIQUE.
Since you're looking up cards by name, you probably want to prevent there being multiple cards with the same name. That's just complexity you don't need to deal with.
Declare your foreign keys.
Like name_id INTEGER REFERENCES cards_name(id).
This has multiple benefits. One is that the declared relationship reminds you to index name_id (SQLite does not index foreign key columns automatically), which makes looking up stories by name_id faster.
The other is it enforces "referential integrity" which is a fancy way of saying it makes sure that every story has a name to go with it. If you try to delete a card_name it will balk unless the card_story is deleted first. You can also use things like on delete cascade to do the cleanup for you.
However, SQLite does not have foreign keys on by default. You have to turn them on. It's a very good idea to do so.
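Putting those changes together, the revised schema might look like this sketch (again via Python's sqlite3; note that PRAGMA foreign_keys is per-connection and must be issued each time you connect):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforcement is off by default

conn.executescript("""
    CREATE TABLE cards_name (
        id     INTEGER PRIMARY KEY,
        name   TEXT UNIQUE,          -- one card per name
        rarity TEXT
    );
    CREATE TABLE card_story (
        id      INTEGER PRIMARY KEY,
        name_id INTEGER REFERENCES cards_name(id) ON DELETE CASCADE,
        story   TEXT
    );
""")

conn.execute("INSERT INTO cards_name VALUES (26, 'Armorsmith', 'Rare')")

# A story pointing at a nonexistent card is now rejected:
try:
    conn.execute("INSERT INTO card_story (name_id, story) VALUES (999, 'orphan')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

With ON DELETE CASCADE, deleting a card also removes its stories instead of balking.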
I know how to make a Solr atomic update based on the document's unique key, but I don't know whether it's possible to update a bunch of documents based on another field (not the unique key).
Below is an example of what I need:
For example, I have the fields id (unique key), name, and status. I want to update
the "name" in all documents where "status" is X.
Can I do that, or am I forced to use the unique key?
Thanks.
You cannot do that: the unique key is required, since an atomic update changes only one document at a time. From a previous discussion:
That is not a feature available in Solr.
You can update a full document or do a partial update of a single
document based on its unique key
http://lucene.472066.n3.nabble.com/Update-multiple-documents-in-one-query-td4070337.html
As discussed in that thread you would probably need to write a script that would pull each document up and issue the atomic update separately.
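Such a script could look roughly like the sketch below (Python with only the standard library; the core URL, row limit, and the id/name/status field names are assumptions taken from the question):

```python
import json
import urllib.request

SOLR_CORE = "http://localhost:8983/solr/mycore"  # hypothetical core URL

def build_atomic_updates(docs, new_name):
    """Turn matched documents into one atomic-update command per unique id."""
    return [{"id": doc["id"], "name": {"set": new_name}} for doc in docs]

def rename_where_status(status, new_name):
    # 1) Find the ids of every document whose status matches.
    query = f"{SOLR_CORE}/select?q=status:{status}&fl=id&rows=10000&wt=json"
    with urllib.request.urlopen(query) as resp:
        docs = json.load(resp)["response"]["docs"]
    # 2) Post the per-document atomic updates, each keyed by its unique id.
    payload = json.dumps(build_atomic_updates(docs, new_name)).encode()
    req = urllib.request.Request(
        f"{SOLR_CORE}/update?commit=true",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

The query/update round trip is the price of not having multi-document atomic updates: Solr still sees one "set" command per unique key.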
I am currently working with a database for a social media application. The database runs on PostgreSQL, and I have run into a logic issue.
There are two different types of content that can be submitted, topics and posts, each with its own primary key.
All of these items can have media attached to them. In my media table I have the columns content_type_id and content_id, where content_type_id is a key into a lookup table of the different content types, and content_id is the primary key in the table where that particular piece of content is stored.
The issue I have run into is that I cannot create a foreign key on content_id, because depending on content_type_id it could refer to one of two tables. Is there a way to set up a foreign key that looks at the proper table depending on the value of the content_type_id column?
I'm not sure I understand your question, but you have a design problem. If I've interpreted it right, maybe you need a design along the following lines, though I can't know that without seeing your current design.
In this design:
CONTENT_TYPE can be a POST or TOPIC.
MEDIA can have 1 CONTENT_TYPE (POST or TOPIC).
CONTENT_TYPE can be related to N MEDIA.
The issue was resolved. Rather than having each table use its own sequence for the primary key, a single sequence is used across all the tables, and the entity-type lookup table becomes an entity map table mapping the now-global id to the type of entity it is (post, topic, etc.). There is no longer any need for a secondary table to differentiate whether a primary key refers to a post or a topic.
For example, before, when a post was created it was given a sequential id as a primary key (1, 2, 3, 4, ...), and when a topic was created the same thing happened: sequential keys (1, 2, 3, 4, ...).
When media was stored in the media table, it would have the issue of duplicate entity keys (both the post with id 1 and the topic with id 1 might have a picture). This was originally going to be resolved by an additional column in the media table to differentiate between posts and topics.
By having the same sequence used for both posts and topics, they no longer share any primary keys, so an entity-type column is no longer needed to differentiate between the two, and the primary key of both topics and posts can be joined directly against the media table's entity id.
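In PostgreSQL this is done by pointing both tables' id defaults at one shared sequence (e.g. DEFAULT nextval('entity_seq')). The idea can be sketched portably with Python's sqlite3 by drawing every id from a single entity_map table (table and column names here are illustrative, not from the original schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One global id space: every post and topic first claims a row here.
    CREATE TABLE entity_map (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,
        kind TEXT  -- 'post' or 'topic'
    );
    CREATE TABLE posts  (id INTEGER PRIMARY KEY REFERENCES entity_map(id), body  TEXT);
    CREATE TABLE topics (id INTEGER PRIMARY KEY REFERENCES entity_map(id), title TEXT);
    CREATE TABLE media  (entity_id INTEGER REFERENCES entity_map(id), url TEXT);
""")

def new_entity(kind):
    # Claim the next global id, recording what kind of entity owns it.
    cur = conn.execute("INSERT INTO entity_map (kind) VALUES (?)", (kind,))
    return cur.lastrowid

post_id = new_entity("post")
conn.execute("INSERT INTO posts VALUES (?, ?)", (post_id, "hello"))
topic_id = new_entity("topic")
conn.execute("INSERT INTO topics VALUES (?, ?)", (topic_id, "general"))

# Ids never collide, so media needs only entity_id, with a real foreign key:
conn.execute("INSERT INTO media VALUES (?, ?)", (post_id, "pic.png"))
print(post_id, topic_id)  # → 1 2
```

Because posts and topics draw from the same id pool, media.entity_id can carry an ordinary foreign key to entity_map instead of a conditional one.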
I'm developing an online shop application which includes a table named 'tbl_items'. The primary key for each item is the 'item_id' field.
Now, I want to add an option for each item posted on the shop to be associated with multiple pictures describing the item (unlimited amount of pictures per item), so I created another table called 'tbl_item_pictures' which includes two columns - 'item_id' and the url of the picture (varchar with the size of 2083).
I believe this structure isn't the best, and it might be because it's already late where I live and I just can't think of a better solution, but I'm kind of lost. I would really not like to leave the table without a primary key, nor do I want to assign a primary key to both of my fields.
Any ideas of what I can add/change in my current structure to make this work?
This is a very common design pattern, and putting both columns into a PK is the normal solution. If you don't do this you will potentially have multiple links from an item to the same picture.
There's nothing wrong with putting both columns into a PK for this.
Update:
to recap....
1 - Put your pictures into their own table, with an ID column and the url.
2 - In your linking table, use tbl_itemID and pictureID, and have them both be part of the PK for the lookup table.
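A minimal sketch of that recap, using Python's sqlite3 (the picture table and ids are made up for illustration; the item table name comes from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tbl_items (item_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tbl_pictures (picture_id INTEGER PRIMARY KEY, url VARCHAR(2083));
    CREATE TABLE tbl_item_pictures (
        item_id    INTEGER REFERENCES tbl_items(item_id),
        picture_id INTEGER REFERENCES tbl_pictures(picture_id),
        PRIMARY KEY (item_id, picture_id)   -- composite PK: no duplicate links
    );
""")

conn.execute("INSERT INTO tbl_items VALUES (1, 'Mug')")
conn.execute("INSERT INTO tbl_pictures VALUES (10, 'http://example.com/mug.jpg')")
conn.execute("INSERT INTO tbl_item_pictures VALUES (1, 10)")

# Linking the same picture to the same item twice violates the composite PK:
try:
    conn.execute("INSERT INTO tbl_item_pictures VALUES (1, 10)")
except sqlite3.IntegrityError as e:
    print("duplicate link rejected:", e)
```

The composite primary key is exactly what prevents the "multiple links from an item to the same picture" problem described above.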
You have 3 possibilities:
1.) Have no primary key. At the moment you don't seem to need one.
2.) If item_id and url are unique together, use both as the primary key.
3.) Add a third column (like picture_id) and fill it manually or automatically from a sequence.
Good luck!
I would make item_id a foreign key in the pictures table. It's OK if there's no primary key, unless you want to add an additional autonumber column to have that distinction.
However, the way you have it is fine. Every picture carries the item_id it's attached to.
Think of it as a simple lookup table.
This is how my table looks:
CREATE TABLE pics(
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT,
page INTEGER,
w INTEGER,
h INTEGER,
FOREIGN KEY(page) REFERENCES pages(id) ON DELETE CASCADE,
UNIQUE(name, page)
);
CREATE INDEX "myidx" ON "pics"("page"); -- is this needed?
So UNIQUE(name, page) should create an index. But is that index enough to make queries fast that involve the page field only, like selecting a set of pics WHERE page = ?, or a JOIN ... ON pics.page = pages.id? Or should I create another index (myidx) just for the page field?
As stated, you will need your other myidx index, because your UNIQUE index specifies name first. In other words, it can be used to query by:
name
name and page
But not by page alone.
Your other option is to reorder the UNIQUE index and place the page column first. Then it can be used for page-only queries, but it will no longer help name-only queries.
Think of a composite index as a phone book. The phone book is sorted by last name, then first name. If you're given the name Bob Smith, you can quickly find the S section, then Sm, then all the Smiths, then eventually Bob. This is fast because you have both keys in the index. Since the book is organized by last name first, it would also be just as trivial to find all the Smith entries.
Now imagine trying to find all the people named Bob in the entire phone book. Much harder, right?
This is analogous to how the index is organized on disk. Finding all the rows with a certain page value when the index is sorted in (name, page) order basically results in a sequential scan of all the rows, looking one by one for anything that has that page.
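You can see this directly with SQLite's EXPLAIN QUERY PLAN (a sketch via Python's sqlite3; the exact plan wording varies between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pages(id INTEGER PRIMARY KEY);
    CREATE TABLE pics(
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT,
        page INTEGER,
        w INTEGER,
        h INTEGER,
        FOREIGN KEY(page) REFERENCES pages(id) ON DELETE CASCADE,
        UNIQUE(name, page)
    );
""")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the strategy.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT * FROM pics WHERE name = 'a'"))  # SEARCH using the unique index
print(plan("SELECT * FROM pics WHERE page = 1"))    # SCAN: page isn't the leading column

conn.execute('CREATE INDEX myidx ON pics(page)')
print(plan("SELECT * FROM pics WHERE page = 1"))    # now a SEARCH using myidx
```

The page-only query flips from a full-table SCAN to an index SEARCH once myidx exists, which is exactly the answer to the question above.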
For more information on how indexes work, I recommend reading through Use the Index, Luke.
You have to analyze the queries that will use this table and determine which fields they filter and sort by. Index the fields that are used the most.