Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 5 years ago.
This is a rather conceptual question:
I am working on a database with tables "product-information" and "buying-transactions". So far, the product-information table has a "price" column and the buying-transactions table has an "amount" column.
But, some products are supposed to be paid per piece, and some have a price per weight.
Now I am unsure how to go about this without allowing decimal values for amounts.
Should I give each product a flag indicating whether it is priced per weight, and do the further calculations in the surrounding program? That seems impractical, since it makes aggregation in queries nearly impossible. Or should I allow decimal amounts but prohibit them in the user interface for items bought by piece, which again requires a flag?
What is the most sensible approach here?
Basically imagine a database containing receipts from groceries shopping and the appropriate information for each product. The user would insert the contents of a cart and the sum total would be calculated and spit out by the program, as well as the calculated price for each article to be paid per weight.
I'm sorry for the stupidity of the question.
Here's what we do for LedgerSMB, and I think the solution works relatively well.
Items are all priced per "unit." Items have a price per unit and a unit descriptor (human readable).
Items sold per piece have a unit of "piece". Items sold per weight have a unit like "kg", "oz", "g", or "T".
Price and quantity sold are both NUMERIC types with no precision specified (so, in PostgreSQL at least, you have no precision limits).
Our table structure looks something like this (simplified for this question):
CREATE TABLE parts (
    id SERIAL PRIMARY KEY,
    sku VARCHAR NOT NULL,
    unit VARCHAR(5) NOT NULL,
    sell_price NUMERIC,
    last_cost NUMERIC,
    description TEXT,
    obsolete BOOL
);
CREATE UNIQUE INDEX parts_sku_active_idx ON parts(sku) WHERE obsolete IS NOT TRUE;
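As a sketch of why per-unit pricing keeps aggregation simple, here is a toy version in Python with an in-memory SQLite database (the sales table and all of the data are invented for illustration; LedgerSMB's real schema differs):

```python
import sqlite3

# In-memory sketch of the simplified per-unit pricing scheme above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parts (
    id INTEGER PRIMARY KEY,
    sku TEXT NOT NULL,
    unit TEXT NOT NULL,         -- 'piece', 'kg', ...
    sell_price NUMERIC          -- price per one unit, whatever the unit is
);
CREATE TABLE sales (
    part_id INTEGER REFERENCES parts(id),
    qty NUMERIC                 -- pieces or kilograms, depending on the unit
);
""")
conn.executemany("INSERT INTO parts VALUES (?, ?, ?, ?)", [
    (1, "APPLE",  "kg",    2.50),  # priced per kilogram
    (2, "BASKET", "piece", 4.00),  # priced per piece
])
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    (1, 1.2),   # 1.2 kg of apples
    (2, 3),     # 3 baskets
])
# Because everything is "price per unit", one aggregate covers both kinds:
total = conn.execute("""
    SELECT SUM(s.qty * p.sell_price)
    FROM sales s JOIN parts p ON p.id = s.part_id
""").fetchone()[0]
```

The point is that the piece/weight distinction lives entirely in the `unit` descriptor; queries never need to branch on it.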
I'm currently working with the Sabre SOAP APIs for Air (flights), using BargainFinderMax (OTA_AirLowFareSearchRQ) to search for flight availability. In the request there is a parameter, ResponseType, that defines the format of the response data.
My question is: is there any response type that will return the results grouped by price? For example, a group with price = $1000 would contain multiple flight options (with different timings). For now I can only get the OTA and GIR response types, which show separate itineraries having the same price, as shown in the image below:
It has two itineraries with the same data (same price) but different legs. What I'm actually looking for is for itineraries with the same price to be grouped together in a single element.
This is the same as the response returned by Travelport if you make a LowFareSearch request and set SolutionResult="false": it gives PricePoint results, i.e. itineraries grouped under a single price point. Can this be done in Sabre?
ResponseType can only have those two values, as stated in the request documentation: "ResponseType: specifies the type of the response; valid values: 'OTA' - regular OTA response, 'GIR' - Grouped Itinerary Response."
If not used, it defaults to OTA.
Anyway, even though it is harder for a person to read, GIR groups almost everything in order to avoid duplicating data. But since the price of the whole itinerary is inside the itinerary element, the only way to do what you want is to loop through the itineraries and group them yourself; that can be done with either OTA or GIR. There is nothing built in for it.
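The loop-and-group step described above can be sketched like this (the itinerary dicts are invented stand-ins, not Sabre's actual OTA or GIR structure):

```python
from collections import defaultdict

def group_by_price(itineraries):
    """Group itinerary dicts that share the same total price."""
    groups = defaultdict(list)
    for itin in itineraries:
        groups[itin["total_price"]].append(itin)
    return dict(groups)

# Invented sample data: two itineraries at the same price, one cheaper.
itineraries = [
    {"id": "IT1", "total_price": 1000, "legs": ["JFK-LHR 08:00"]},
    {"id": "IT2", "total_price": 1000, "legs": ["JFK-LHR 14:30"]},
    {"id": "IT3", "total_price": 1200, "legs": ["JFK-LHR 10:15"]},
]
# Each key is one "price point" holding all itineraries at that price.
price_points = group_by_price(itineraries)
```

Applied to a parsed OTA or GIR response, this gives you the same price-point grouping Travelport returns natively.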
I have a report that runs daily, and I want to send its output to a CSV file. Due to the nature of the report, some data can be lost from time to time: new data is generated while the job is executing, and since it is a lengthy job, some of it is occasionally lost in the process.
Is there a way to cross-check on a daily basis that no data from the previous day has been lost? Perhaps with a tick or cross at the end of each row to show whether the data has been exported to the CSV?
I am working with sensitive information, so I can't share any of the report details.
This is a fairly common question. Without specifics, it's very hard to give you a concrete answer - but here are a few solutions I've used in the past.
Typically, such reports have "grand total" lines: your widget report might be broken down by month, region, salesperson, product type, etc., but you usually have a "total widgets sold" line. If that total is a quick query (you may need to remove joins and other refinements), then re-running it after you've generated the report data lets you compare the fresh grand total with the one at the end of the report. If they differ, you know the data changed while the report was running.
Another option, specific to SQL Server, is to use a checksum over the data you're reporting on. If the checksum changes between the start and end of the reporting run, you know you've had data changes.
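Outside SQL Server, the same before/after comparison can be sketched by fingerprinting the source rows yourself (a hypothetical helper, not a built-in feature of any database):

```python
import hashlib

def data_fingerprint(rows):
    """Order-insensitive fingerprint of an iterable of rows."""
    h = hashlib.sha256()
    for row in sorted(repr(r) for r in rows):
        h.update(row.encode("utf-8"))
    return h.hexdigest()

# Invented sample rows: fingerprint before the report run...
before = data_fingerprint([(1, "widget", 9.99), (2, "gadget", 4.50)])
# ...the report runs; meanwhile one row's price changes...
after = data_fingerprint([(1, "widget", 9.99), (2, "gadget", 4.75)])
# ...so the fingerprints no longer match.
changed = before != after
```

If `changed` is true, the data moved under the report and the run should be flagged or repeated.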
Finally - and most dramatically - if the report's accuracy is critical, you can store the fact that a particular row was included in a reporting run. This makes your report much more complex, but it allows you to be clear that you've included all the data you need. For instance:
insert into reporting_history
select @reportID, widget_sales_id
from widget_sales
--- reporting logic here
select widgets.cost,
       widget_sales.date,
       widget_sales.price,
       widget_sales......
from widgets
inner join widget_sales on ...
inner join reporting_history on reporting_history.widget_sales_id = widget_sales.widget_sales_id
---- all your other logic
I know this is a strange question!
I need to create a table called "Smartphone".
The question is about the number of columns.
Must I create a column for every characteristic?
(Model
Band
SIM
Telephone operator
Shell material
Operating System
CPU
GPU
ROM
RAM
Storage expansion
Size
Type
Resolution
Ringtones
Audio
Video
Image
E-book
FM Radio
Earphones
Data transfer
Internet
Camera
Battery
Languages
Posts
Entry
TV
GPS
JAVA
WIFI
Bluetooth
Gravity Sensor
Multi-Touch
Dimensions
Weight
Standby time
Size
Weight
Accessories
)
Or do I just create columns for the general characteristics and then add one last column called "moreinformations" into which I put everything else...
Note that I need to display all of this information and filter by specific characteristics.
What is the best practice?
I would put every value which repeats and is not numeric (model bands, SIM, operators, shell material, etc.) into separate tables and then reference them with foreign keys. Example:
CREATE TABLE systems
(
ID int not null primary key,
name varchar(50),
-- other parameters
)
Other values, like Bluetooth and Wi-Fi, would be boolean if the only states are "present" and "absent", or numeric if there can be more values.
If a phone can have several bands (or other multi-valued attributes), I would create another table, BAND_LINKS, which links smartphones and bands. For example, if band1 has ID=1, band2 has ID=2, and smartphone1 has ID=1, then I would insert into BAND_LINKS the values (1,1) and (1,2), which means that bands 1 and 2 are connected to the smartphone with ID=1. Of course I would add procedures to improve performance.
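A minimal sketch of that link table in SQLite (all names and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE smartphones (id INTEGER PRIMARY KEY, model TEXT);
CREATE TABLE bands       (id INTEGER PRIMARY KEY, name TEXT);
-- Junction table: one row per (smartphone, band) pair.
CREATE TABLE band_links (
    smartphone_id INTEGER REFERENCES smartphones(id),
    band_id       INTEGER REFERENCES bands(id),
    PRIMARY KEY (smartphone_id, band_id)
);
""")
conn.execute("INSERT INTO smartphones VALUES (1, 'Phone A')")
conn.executemany("INSERT INTO bands VALUES (?, ?)",
                 [(1, 'GSM 900'), (2, 'LTE B7')])
# Phone A supports both bands: rows (1,1) and (1,2) in the link table.
conn.executemany("INSERT INTO band_links VALUES (?, ?)", [(1, 1), (1, 2)])

# Filtering phones by a specific band -- the query the asker needs.
models = [r[0] for r in conn.execute("""
    SELECT s.model FROM smartphones s
    JOIN band_links bl ON bl.smartphone_id = s.id
    JOIN bands b ON b.id = bl.band_id
    WHERE b.name = 'LTE B7'
""")]
```

This is why the link-table design beats a "moreinformations" text column: the band filter is a plain join instead of string parsing.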
We have a business process that requires taking a "snapshot" of portions of a client's data at a point in time, and being able to regurgitate it later. The data set has some oddities though that make the problem interesting:
The data is pulled from several databases, some of which are not ours.
The list of fields that could possibly be pulled is somewhere between 150 and 200.
The list of fields that are typically pulled is somewhere between 10 and 20.
Each client can pull a custom set of fields for storage, this set is pre-determined ahead of time.
For example (and I have vastly oversimplified these):
Client A decides on Fridays to take a snapshot of customer addresses (1 record per customer address).
Client B decides on alternate Tuesdays to take a snapshot of summary invoice information (1 record per type of invoice).
Client C monthly summarizes hours worked by each department (1 record per department).
When each of these periods happen, a process goes out and fetches the appropriate information for each of these clients... and does something with them.
Sounds like an historical reporting system, right? It kind of is. The data is later parsed up and regurgitated in a variety of formats (XML, CSV, Excel, text files, etc.) depending on the client's needs.
I get to rewrite this.
Since we don't own all of the databases, I can't just keep references to the data around. Some of that data is overwritten periodically anyway. I actually need to find the appropriate data and set it aside.
I'm hoping someone has a clever way of approaching the table design for such a beast. The methods that come to mind, all with their own drawbacks:
1. A dataset table (data set id, date captured, etc.); a data table (data set id, row number, "data as a blob of crap").
2. A dataset table (data set id, date captured, etc.); a data table (data set id, row number, possible field 1, possible field 2, possible field 3, ..., possible field x (where x > 150)).
3. A dataset table (data set id, date captured, etc.); a field table (1 row per possible field type); a selected-field table (1 row for each field the client has selected); one table for each primitive data type possible (varchar, decimal, integer), keyed on selected field, data set id, row, and position, where the data is the single field value.
The first is the easiest to implement, but the "blob of crap" would have to be engineered to be parseable so it can be broken down into reportable fields. It isn't very database-friendly and isn't reportable. It doesn't feel right.
The second is a horror show of columns. Shudder.
The third sounds right, but kind of doesn't. It's 3NF (yes, I'm old), so it feels right that way. However, reporting on the table screams of "rows that should have been columns" problems: it's fairly useless to select from outside of a program.
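For concreteness, here is a stripped-down SQLite sketch of the third option, showing both the typed value table and the "rows that should have been columns" pivot it forces on you (all names and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data_sets (id INTEGER PRIMARY KEY, captured_at TEXT);
CREATE TABLE fields    (id INTEGER PRIMARY KEY, name TEXT, dtype TEXT);
-- One value table per primitive type; only varchar shown here.
CREATE TABLE varchar_values (
    data_set_id INTEGER REFERENCES data_sets(id),
    field_id    INTEGER REFERENCES fields(id),
    row_num     INTEGER,
    value       TEXT
);
""")
conn.execute("INSERT INTO data_sets VALUES (1, '2014-01-03')")
conn.executemany("INSERT INTO fields VALUES (?, ?, 'varchar')",
                 [(1, 'street'), (2, 'city')])
# Two captured customer-address rows, stored field-by-field.
conn.executemany("INSERT INTO varchar_values VALUES (1, ?, ?, ?)", [
    (1, 1, '1 Main St'), (2, 1, 'Springfield'),
    (1, 2, '9 Oak Ave'), (2, 2, 'Shelbyville'),
])
# Pivoting back requires one self-join per field -- the reporting pain point.
rows = conn.execute("""
    SELECT s.value, c.value
    FROM varchar_values s
    JOIN varchar_values c
      ON c.data_set_id = s.data_set_id AND c.row_num = s.row_num
    WHERE s.field_id = 1 AND c.field_id = 2
    ORDER BY s.row_num
""").fetchall()
```

The self-join-per-field query is exactly why this design is "fairly useless to select from outside of a program": with 20 selected fields, reconstituting one snapshot row takes 20 joins.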
What are your thoughts?
RE: "where hundreds of columns possible"
The limit is 1,024 columns per table in SQL Server:
http://msdn.microsoft.com/en-us/library/ms143432.aspx
I am in the process of adding accounts receivable to one of my webapps. Essentially, I want to be able to create sales invoices and record payments received.
The reports I generate are:
statement with balance outstanding
invoice
receipt
To create a statement, I was thinking of doing a union of receipts and invoices ordered by date.
I also need to cater for refunds/credits, which I am doing by treating a refund as a receipt with a negative amount, and a credit as an invoice with a negative amount.
All the invoices/receipts are exported to a full accounting package (so no double-entry system is required at this end).
What I have come up with is:
INVOICES
id
customer_id
total
tax_amount
reference
user_id
created
INVOICE_LINES
id
invoice_id
description
qty
unit_price
total
tax_amount
RECEIPTS
id
customer_id
reference
internal_notes
amount
user_id
created
Is there anything that I am missing?
Would a single transactions table be simpler than having separate invoice/receipt tables?
Another thought: is it normal to link a receipt to an invoice? What if a receipt is for multiple invoices?
Any advice appreciated (simplicity is the goal).
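The statement-as-union idea above can be sketched with SQLite, trimmed to the columns a statement needs (all data invented; refunds/credits appear simply as negative amounts, as described):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE invoices (id INTEGER PRIMARY KEY, customer_id INTEGER,
                       total NUMERIC, created TEXT);
CREATE TABLE receipts (id INTEGER PRIMARY KEY, customer_id INTEGER,
                       amount NUMERIC, created TEXT);
""")
conn.execute("INSERT INTO invoices VALUES (1, 7, 100.0, '2015-01-01')")
conn.execute("INSERT INTO invoices VALUES (2, 7, -20.0, '2015-01-05')")  # credit
conn.execute("INSERT INTO receipts VALUES (1, 7,  50.0, '2015-01-03')")

# Statement: invoices raise the balance, receipts lower it.
statement = conn.execute("""
    SELECT created, 'invoice' AS kind, total   AS amount
    FROM invoices WHERE customer_id = 7
    UNION ALL
    SELECT created, 'receipt' AS kind, -amount AS amount
    FROM receipts WHERE customer_id = 7
    ORDER BY created
""").fetchall()
balance = sum(amount for _, _, amount in statement)
```

The outstanding balance falls out of the same union that produces the dated statement lines, so no extra bookkeeping table is needed for this report.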
Look at the "Library of Free Data Models" from DatabaseAnswers.org
They have many basic designs that should inspire you.
For example "Accounting Systems"
Have a look at this similar question: Database schema design for a double entry accounting system? I came across it while googling for 'bookkeeping database design', as I reckon you'll easily find that free or relatively low-priced databases already exist; as you say, simplicity is the goal.