How to match some loans to lendings with equal sum result? - sql

I have three tables in PostgreSQL: loan, lending, and matching.
Loan table
Id|amount|status|is_matched
1|500|active|true
2|500|active|false
3|500|active|false
4|1000|inactive|false
5|1000|active|false
6|5000|active|false
Lending table
Id|amount|status|is_matched
1|1000|active|false
2|1000|active|false
3|1000|active|false
4|2000|active|false
5|2000|active|false
I want to match loan and lending amounts with an equal SUM result for rows with status = 'active' and is_matched = false. For example, the amounts of loans 2 and 3 will be matched to lending 1 because SUM(amount of loan 2 through loan 3) equals the amount of lending 1. The matches are then inserted into the matching table like below:
Matching table
loan_id|lending_id
2|1
3|1
5|2
6|3
6|4
6|5
After the rows are inserted, they must not be matched again (is_matched is set to true).
I've been stuck on this for three days and, for lack of experience, don't have an idea for the SQL query. In my mind it probably involves SUM with some condition, but I can't put the query together for this case.
Any idea how to write the SQL query to match and then insert data like that?
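One way to approach it (a sketch only, assuming PostgreSQL window functions, that rows are matched in id order, and that the active, unmatched totals on the two sides line up): give each side a running total and pair rows whose cumulative intervals overlap.
WITH l AS (
    SELECT id,
           SUM(amount) OVER (ORDER BY id) - amount AS lo,
           SUM(amount) OVER (ORDER BY id)          AS hi
    FROM loan
    WHERE status = 'active' AND is_matched = false
), le AS (
    SELECT id,
           SUM(amount) OVER (ORDER BY id) - amount AS lo,
           SUM(amount) OVER (ORDER BY id)          AS hi
    FROM lending
    WHERE status = 'active' AND is_matched = false
)
INSERT INTO matching (loan_id, lending_id)
SELECT l.id, le.id
FROM l
JOIN le ON l.lo < le.hi AND le.lo < l.hi;  -- cumulative ranges overlap

-- Then flag the matched rows (run everything in one transaction):
UPDATE loan    SET is_matched = true WHERE id IN (SELECT loan_id    FROM matching);
UPDATE lending SET is_matched = true WHERE id IN (SELECT lending_id FROM matching);
On the sample data this produces exactly the matching rows shown above. If the grand totals on the two sides differ, the tail rows overlap only partially, so you would need an extra rule before setting is_matched for those.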

Related

How to atomically insert records with unique ordinal numbers specific to a certain column?

I have a database with two tables. The first one holds customers (most columns truncated for brevity):
CustomerId|Name
1|Wipers
2|Wipers Central
3|Ministores
4|Minerals
The second one holds their respective codes, that are used on documents:
CustomerId|CodePrefix|Ordinal
1|WIP|1
2|WIP|2
3|MIN|1
4|MIN|2
A customer code consists of a three-letter prefix and an ordinal number. A unique constraint is applied to CodePrefix and Ordinal.
I'm struggling to write an SQL statement that inserts new codes into the second table when multiple users try to assign document codes to customers with similar names at the same time.
What I've tried:
INSERT INTO CustomerCodes (CustomerId, CodePrefix, Ordinal)
VALUES (1, 'WIP',
    (SELECT ISNULL(MAX(Ordinal), 0) + 1
     FROM CustomerCodes
     WHERE CodePrefix = 'WIP'))
It works most of the time, but with many simultaneous operations I get a unique constraint violation.
What should I do to make it work regardless of how many concurrent transactions happen at the same time?
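One common pattern (a sketch, assuming SQL Server as the ISNULL suggests) is to serialize readers of the same prefix with UPDLOCK/HOLDLOCK, so two sessions cannot read the same MAX(Ordinal):
BEGIN TRANSACTION;

INSERT INTO CustomerCodes (CustomerId, CodePrefix, Ordinal)
SELECT 1, 'WIP', ISNULL(MAX(Ordinal), 0) + 1
FROM CustomerCodes WITH (UPDLOCK, HOLDLOCK)  -- key-range lock held until commit
WHERE CodePrefix = 'WIP';

COMMIT TRANSACTION;
An alternative is sp_getapplock keyed on the prefix; either way you trade some concurrency per prefix for correctness.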

SQL query to Manage Visas

I have an Employees table, a Visas table, and a Mobilization table. I need to write a SQL query that returns EmployeeIdno, Name, Surname, Designation, MobilizationIdno, and the most recent visa expiry date based on country, all in one row.
The problem is that an employee can have multiple visas, and each visa must be added as extra columns at the end (I currently get multiple rows, but I need one row per employee with the visas as columns).
Example of what I am receiving:
'001','john','Doe','Developer','Mob1','Visa1','2018-02-18'
'001','john','Doe','Developer','Mob1','Visa2','2018-02-19'
Example of what I need:
'001','john','Doe','Developer','Mob1','Visa1','2018-02-18','Visa2','2018-02-19','Visa3','2018-02-20'....
The number of visas assigned to a person can increase or decrease.
Employees Table:
IDNO,NAME,SURNAME,DESIGNATION
VISAS table:
IDNO,VISATYPE,IDNO,ISSUEDATE,EXPIREDATE
VISATYPES table:
IDNO,NAME,COUNTRYIDNO
COUNTRIES table:
IDNO,NAME
Mobilization List:
IDNO,EMPLOYEEIDNO,MOBDATE
Mobilization Types:
IDNO,MOBTYPE,COMPANY,LOCATION,EXPECTEDMOBDATE
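If your database supports window functions, one rough way to get one row per employee is conditional aggregation over a visa row number (a sketch only: the join columns are guesses from the schema above, the mobilization join is omitted for brevity, and you extend the column list up to the maximum number of visas you expect):
WITH v AS (
    SELECT e.IDNO, e.NAME, e.SURNAME, e.DESIGNATION,
           vt.NAME AS VISANAME, vi.EXPIREDATE,
           ROW_NUMBER() OVER (PARTITION BY e.IDNO
                              ORDER BY vi.EXPIREDATE DESC) AS rn
    FROM EMPLOYEES e
    JOIN VISAS vi     ON vi.IDNO = e.IDNO        -- assumption: this IDNO references the employee
    JOIN VISATYPES vt ON vt.IDNO = vi.VISATYPE   -- assumption
)
SELECT IDNO, NAME, SURNAME, DESIGNATION,
       MAX(CASE WHEN rn = 1 THEN VISANAME   END) AS VISA1,
       MAX(CASE WHEN rn = 1 THEN EXPIREDATE END) AS VISA1_EXPIRE,
       MAX(CASE WHEN rn = 2 THEN VISANAME   END) AS VISA2,
       MAX(CASE WHEN rn = 2 THEN EXPIREDATE END) AS VISA2_EXPIRE,
       MAX(CASE WHEN rn = 3 THEN VISANAME   END) AS VISA3,
       MAX(CASE WHEN rn = 3 THEN EXPIREDATE END) AS VISA3_EXPIRE
FROM v
GROUP BY IDNO, NAME, SURNAME, DESIGNATION;
A truly variable number of columns needs dynamic SQL that builds the SELECT list from the visa count, because a plain SQL query must have a fixed column list.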

SQL Query and Sort From Multiple Tables

I'm working with SQL via a NOVA Oracle DB. I need to know how to query multiple tables and arrange the results sorted by the highest values. Here are a few lines of code to reflect the three tables:
INSERT INTO VEHICLES
(vehicleVIN,vehicleType,vehicleMake,vehicleModel,vehicleWhereFrom,vehicleWholesaleCost,vehicleTradeID)
VALUES
('147258HHE91K3RT','compact','chevrolet','spark','Maryland',20583.00,NULL);
INSERT INTO VEHICLES
(vehicleVIN,vehicleType,vehicleMake,vehicleModel,vehicleWhereFrom,vehicleWholesaleCost,vehicleTradeID)
VALUES
('789456ERT0923RFB6','Midsize','ford','Taurus','washington, d.c.',25897.22,1);
INSERT INTO VEHICLES
(vehicleVIN,vehicleType,vehicleMake,vehicleModel,vehicleWhereFrom,vehicleWholesaleCost,vehicleTradeID)
VALUES
('1234567890QWERTYUIOP','fullsize','Lincoln','towncar','Virginia',44222.10,NULL);
AND
INSERT INTO SALES
(saleID,grossSalePrice,vehicleStatus,saleDate,saleMileage,customerID,salespersonID,vehicleVIN)
VALUES
(1,25987.28,'sold',date '2012-10-15',10,1,1,'147258HHE91K3RT');
INSERT INTO SALES
(saleID,grossSalePrice,vehicleStatus,saleDate,saleMileage,customerID,salespersonID,vehicleVIN)
VALUES
(2,29999.99,'sold',date '2012-10-17',50087,2,2,'789456ERT0923RFB6');
INSERT INTO SALES
(saleID,grossSalePrice,vehicleStatus,saleDate,saleMileage,customerID,salespersonID,vehicleVIN)
VALUES
(3,47490.88,'sold',date '2012-11-05',30,3,3,'1234567890QWERTYUIOP');
AND
INSERT INTO CUSTOMERS
(customerID,customerFirName,customerLasName,customerMiName,customerStreet,customerState,customerCity,customerZip)
VALUES
(1,'Regorna','Trasper','J','11111 Address Way','Maryland','Hollywood','20636');
INSERT INTO CUSTOMERS
(customerID,customerFirName,customerLasName,customerMiName,customerStreet,customerState,customerCity,customerZip)
VALUES
(2,'Bob','Seagram','A','22222 Seagram Lane','Texas','Houston','77001');
INSERT INTO CUSTOMERS
(customerID,customerFirName,customerLasName,customerMiName,customerStreet,customerState,customerCity,customerZip)
VALUES
(3,'Sally','Anderson','P','33333 Pheonix Drive','Arizona','Pheonix','85001');
Obviously there are other tables that come into play here (salesperson, etc.), however these are the only tables needed for the query. The query I want to pull needs to show the total count of sales for each model, sorted by the highest values, and the total count of sales for each zip code, sorted by the highest values. An example (using the data provided above) would look similar to this:
MODEL|NUMBER OF SALES|ZIP CODE|NUMBER OF SALES
spark|1|20636|1
Taurus|1|77001|1
towncar|1|85001|1
The results need to be sorted by highest values, based on the number of sales. I'm also trying to accomplish this via a single SELECT query.
I've tried some ideas, but haven't been able to find anything that hits the home run yet. Thanks for the help!
See if this is what you're after:
SELECT DISTINCT v.VEHICLEMODEL, COUNT(*) OVER (PARTITION BY v.VEHICLEMODEL) "CAR_SALES"
     , c.CUSTOMERZIP, COUNT(*) OVER (PARTITION BY c.CUSTOMERZIP) "TOTAL_SALES_AT_ZIP"
FROM SALES s, VEHICLES v, CUSTOMERS c
WHERE s.VEHICLEVIN = v.VEHICLEVIN
  AND c.CUSTOMERID = s.CUSTOMERID
ORDER BY 2 DESC, 4 DESC

Repetition record on sql query for 3 tables

I have this query that returns all rows for one user:
$strSQL = "SELECT * FROM customer , bills , vouchers
WHERE
bills.bills_CustomerName = customer.customer_Name and
vouchers.vouchers_CustomerName = customer.customer_Name and
bills.bills_CustomerName like '%".$_POST["MyName"]."%'
";
The problem is that each row is repeated twice; the customer table is related to the bills table and to the vouchers table by one FK column.
bills table :
bills_ID - bills_CustomerName - bills_Total
customer table :
customer_ID - customer_Name - customer_Tell
vouchers table :
vouchers_ID - vouchers_CustomerName - vouchers_Total
This is what we get:
Name Total Tell
kam johin 100 0444444444
kam johin 100 0444444444
mak pop 200 0588888888
mak pop 200 0588888888
If customer to bill is one-to-many and customer to voucher is also one-to-many then you have what is sometimes known as a "chasm trap", and you will have to aggregate child values from bill and voucher before joining to customer.
Or perhaps your data model should be more like Customer->Bill->Voucher, in which case you need to include in the voucher table a foreign key to the Bill to which the voucher relates.
BTW you could probably use some surrogate key for customer - what happens when two different customers have the same name?
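A sketch of the first option (aggregating before joining), assuming the column names above; the subqueries collapse bills and vouchers to one row per customer before the join, and the search term should be bound as a parameter rather than concatenated from $_POST:
SELECT c.customer_Name, c.customer_Tell, b.bills_Total, v.vouchers_Total
FROM customer c
LEFT JOIN (
    SELECT bills_CustomerName, SUM(bills_Total) AS bills_Total
    FROM bills
    GROUP BY bills_CustomerName
) b ON b.bills_CustomerName = c.customer_Name
LEFT JOIN (
    SELECT vouchers_CustomerName, SUM(vouchers_Total) AS vouchers_Total
    FROM vouchers
    GROUP BY vouchers_CustomerName
) v ON v.vouchers_CustomerName = c.customer_Name
WHERE c.customer_Name LIKE ?   -- bind '%name%' here instead of concatenating user input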

Approach to a Bin Packing sql problem

I have a problem in SQL where I need to generate a packing list from a list of transactions.
Data Model
The transactions are stored in a table that contains:
transaction id
item id
item quantity
Each transaction can have multiple items (and consequently multiple rows with the same transaction id). Each item then has a quantity from 1 to N.
Business Problem
The business requires that we create a packing list, where each line item in the packing list contains the count of each item in the box.
Each box can only contain 160 items (they all happen to be the same size/weight). Based on the total count of the order we need to split items into different boxes (sometimes splitting even the individual item's collection into two boxes)
So the challenge is to take that data schema and come up with the result set that includes how many of each item belong in each box.
I am currently brute forcing this in some not so pretty ways and wondering if anyone has an elegant/simple solution that I've overlooked.
Example In/Out
We really need to isolate how many of each item end up in each box...for example:
Order 1:
100 of item A, 100 of item B, 140 of item C
This should result in three rows in the result set:
Box 1: A (100), B (60)
Box 2: B (40), C (120)
Box 3: C (20)
Ideally the query would be smart enough to put all of C together, but at this point - we're not too concerned with that.
How about something like
SELECT SUM([Item quantity]) as totalItems
, SUM([Item quantity]) / 160 as totalBoxes
, SUM([Item quantity]) % 160 as amountInLastBox
FROM [Transactions]
GROUP BY [Transaction Id]
Let me know what fields in the resultset you're looking for and I could come up with a better one
I was looking for something similar, and all I could achieve was expanding the rows to the number of item counts in a transaction and grouping them into bins. Not very elegant, though. Moreover, because string aggregation is still very cumbersome in SQL Server (Oracle, I miss you!), I have to leave the last part out, i.e. putting the counts in one single row.
My solution is as follows:
Example transactions table:
INSERT INTO transactions
(trans_id, item, cnt) VALUES
('1','A','50'),
('2','A','140'),
('3','B','100'),
('4','C','80');
GO
Create a dummy sequence table, which contains numbers from 1 to 1000 (I assume that maximum number allowed for an item in a single transaction is 1000):
CREATE TABLE numseq (n INT NOT NULL IDENTITY) ;
GO
INSERT numseq DEFAULT VALUES ;
WHILE SCOPE_IDENTITY() < 1000 INSERT numseq DEFAULT VALUES ;
GO
Now we can generate a temporary table from transactions table, in which each transaction and item exist "cnt" times in a subquery, and then give numbers to the bins using division, and group by bin number:
SELECT bin_id, item, count(*) count_in_bin
INTO result
FROM (
SELECT t.item, ((row_number() over (order by t.item, s.n) - 1) / 160) + 1 as bin_id
FROM transactions t
INNER JOIN numseq s
ON t.cnt >= s.n -- join conditionally to repeat transaction rows "cnt" times
) a
GROUP BY bin_id, item
ORDER BY bin_id, item
GO
Result is:
bin_id item count_in_bin
1 A 160
2 A 30
2 B 100
2 C 30
3 C 50
In Oracle, the last step would be as simple as that:
SELECT bin_id, WM_CONCAT(CONCAT(item,'(',count_in_bin,')')) contents
FROM result
GROUP BY bin_id
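If you are on SQL Server 2017 or later, STRING_AGG makes that last step just as short (a sketch, assuming the result table built above):
SELECT bin_id,
       STRING_AGG(CONCAT(item, '(', count_in_bin, ')'), ', ') AS contents
FROM result
GROUP BY bin_id;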
This isn't the prettiest answer, but I am using a similar method to keep track of stock items through an order process; it is easy to understand and may lead you to develop a better method than I have.
I would create a table called "PackedItem" or something similar. The columns would be:
packed_item_id (int) - Primary Key, Identity column
trans_id (int)
item_id (int)
box_number (int)
Each record in this table represents 1 physical unit you will ship.
Let's say someone adds a line to transaction 4 with 20 of item 12: I would add 20 records to the PackedItem table, all with the transaction ID, the item ID, and a NULL box number. If a line is updated, you need to add or remove records from the PackedItem table so that there is always a 1:1 correlation.
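For instance, the 20 units from that example line could be inserted set-based with any numbers table (the numseq table from the answer above would do); trans_id 4 and item_id 12 are just the example values:
INSERT INTO PackedItem (trans_id, item_id, box_number)
SELECT 4, 12, NULL          -- one row per physical unit, not yet boxed
FROM numseq
WHERE n <= 20;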
When the time comes to ship, you can simply
SELECT TOP 160 * FROM PackedItem WHERE trans_id = 4 AND box_number IS NULL
and set the box_number on those records to the next available box number, until no records remain where the box_number is NULL. This can be done with an UPDATE statement inside a WHILE loop along the following lines.
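Something like this, assuming SQL Server and the PackedItem columns above (the @box counter starts at 1 here; seed it from MAX(box_number) if boxes already exist):
DECLARE @box int = 0;

WHILE EXISTS (SELECT 1 FROM PackedItem WHERE trans_id = 4 AND box_number IS NULL)
BEGIN
    SET @box = @box + 1;

    -- Grab the next (up to) 160 unboxed units and stamp them with the box number.
    WITH next_box AS (
        SELECT TOP (160) box_number
        FROM PackedItem
        WHERE trans_id = 4 AND box_number IS NULL
        ORDER BY item_id, packed_item_id   -- keeps each item's units together
    )
    UPDATE next_box SET box_number = @box;
END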
You can now easily get your desired packing list by querying this table as follows:
SELECT box_number, item_id, COUNT(*) AS Qty
FROM PackedItem
WHERE trans_id = 4
GROUP BY box_number, item_id
Advantages - easy to understand, fairly easy to implement.
Pitfalls - if the table gets out of sync with the lines on the transaction, the final result can be wrong; this table will get many records in it and will be extra work for the server. Each ID field will need to be indexed to keep performance good.