I am working with a database in MS Access.
I have a table (Table A) with different categories (criteria).
I have another table (Table B), where I have to pull values from Table A based on two categories (year and amount).
For example,
From table B, the cost is $15,000, so we go to table A and find the contingency from year 2018 which falls between $0-$20,000 and report a contingency of 25%.
Is there a way to go about this? I've been racking my brain trying to use nested IIf() and And expressions, but I can't figure it out.
Add both tables to a query.
Join on C_YEAR.
Use BETWEEN ... AND to grab the appropriate range, and hence the appropriate contingency.
Something like:
SELECT tableA.CONTINGENCY
FROM tableA INNER JOIN tableB ON tableA.C_YEAR = tableB.C_YEAR
WHERE tableB.COST BETWEEN tableA.MIN_VALUE AND tableA.MAX_VALUE;
It looks like the contingency rates for all the years are the same:
25% for values between $0 and $20,000
15% for values between $20,001 and $200,000
10% for values between $200,001 and $100,000,000
Is this always the case, or just based on the sample data you're using?
Is it possible you'll have data where the cost is > $100,000,000? If so, how should the data be handled?
I'm just wondering if there's not a better way to represent your contingency rate rules. Otherwise, I'd agree with Rene about joining the tables and adding a WHERE condition to get the rates you need. I'd also add that, since you'll always be pulling the rate from the query, you don't need an actual field on Table B to store the contingency rate.
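For example, a sketch of that idea, reusing the table and column names from the query above, with the rate computed at query time instead of stored:
SELECT b.*, a.CONTINGENCY
FROM tableB AS b INNER JOIN tableA AS a ON b.C_YEAR = a.C_YEAR
WHERE b.COST BETWEEN a.MIN_VALUE AND a.MAX_VALUE;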
Consider a DLookUp in an UPDATE query. The domain aggregate is needed for the query to be updateable. (This assumes C_YEAR and COST are numeric fields; text fields would need quote delimiters in the concatenated criteria.)
UPDATE tableB b
SET b.Contingency = DLookUp("CONTINGENCY", "tableA", "[C_YEAR] = " & b.[C_YEAR] &
    " AND [MIN_VALUE] <= " & b.[COST] & " AND [MAX_VALUE] >= " & b.[COST])
Good evening. Could someone please help me with the following? I am trying to join two tables. The first is wbr_global.gl_ap_details, which stores historic GL information. The second table, sandbox.utr_fixed_mapping, is where account mapping is stored. For example, account number 60820 is mapped as Employee Relation. The first table needs the mapping from the second table, linked on the account number. The output I am getting is not right and is way too big. Any help would be appreciated!
select sandbox.utr_fixed_mapping_na.new_mapping_1, sum(wbr_global.gl_ap_details.amount)
from wbr_global.gl_ap_details
LEFT JOIN sandbox.utr_fixed_mapping_na ON wbr_global.gl_ap_details.account_number = sandbox.utr_fixed_mapping_na.account_number
Where gl_ap_details.cost_center = '1172'
and gl_ap_details.period_name = 'JUL-21'
and gl_ap_details.ledger_name = 'Amazon.com, Inc.'
Group by 1;
I tried adding the CAST function, but after the query had run for 5000 seconds I canceled it.
The query itself appears OK, but needs minor changes. Learn to use table aliases; this way you don't have to keep typing the long database.table.column references all over, and the SQL is easier to read anyhow.
Notice the aliases "gl" and "fm" declared after the table names; these aliases are then used to represent the columns. Easier to read, wouldn't you agree?
Added GL Account number as described below the query.
select
gl.account_number,
fm.new_mapping_1,
sum(gl.amount)
from
wbr_global.gl_ap_details gl
LEFT JOIN sandbox.utr_fixed_mapping_na fm
ON gl.account_number = fm.account_number
Where
gl.cost_center = '1172'
and gl.period_name = 'JUL-21'
and gl.ledger_name = 'Amazon.com, Inc.'
Group by
gl.account_number,
fm.new_mapping_1
Now, as for your query returning null: this just means that there are records within the gl_ap_details table with an account number that is not found in the utr_fixed_mapping_na table. So, to see WHICH GL account numbers do NOT exist, I have added the account number to the query. It's possible there are MULTIPLE records in gl_ap_details that are not found in the mapping table. So, you may get
GLAccount Description SumOfAmount
glaccount1 null $someAmount
glaccount37 null $someAmount
glaccount49 null $someAmount
glaccount72 Depreciation $someAmount
glaccount87 Real Estate $someAmount
glaccount92 Building $someAmount
glaccount99 Salaries $someAmount
I obviously made up the glaccounts just to show the purpose. You may have multiple missing accounts, where the null row's total amount is actually masking how many different GL account numbers were NOT found.
Once you find which are missing, you can check / confirm they SHOULD be in the mapping table.
FEEDBACK.
Now that you know about the missing numbers, let's consider a Cartesian result. If there are multiple entries in the mapping table for the same G/L account number, you will get a Cartesian result, thus bloating your numbers. To clarify, let's say your mapping table has
Mapping file.
GL Descr1 NewMapping
1 test Salaries
1 testView Buildings
1 Another Depreciation
And your GL_AP_Details has
GL Amount
1 $100
Your total for the query would come out to $300, because the query joins AP Details G/L #1 to EACH of the entries in the mapping file, thus bloating the amount. You could also add a COUNT(*) AS NumberOfEntries to the query to see how many transactions it THINKS it is processing. Is there some unique ID in the GL_AP_Details table? If so, you could also do a count of DISTINCT ID values. If they differ (the distinct count is lower than the number of entries), I think THAT is your culprit.
select
fm.new_mapping_1,
sum(gl.amount),
count(*) as NumberOfEntries,
count( distinct gl.UniqueIdField ) as DistinctTransactions
from
wbr_global.gl_ap_details gl
LEFT JOIN sandbox.utr_fixed_mapping_na fm
ON gl.account_number = fm.account_number
Where
gl.cost_center = '1172'
and gl.period_name = 'JUL-21'
and gl.ledger_name = 'Amazon.com, Inc.'
Group by
fm.new_mapping_1
Might you also need to limit the mapping table for a specific prophecy or mec view?
If you "think" that the result of an aggregate is wrong, then the easiest way to verify this is to select the individual rows that correlate to 1 record in the aggregate output and inspect the records, looking for duplications.
For instance, pick 'Building Management':
SELECT fixed.new_mapping_1, details.amount, *
FROM wbr_global.gl_ap_details details
LEFT JOIN sandbox.utr_fixed_mapping_na fixed ON details.account_number = fixed.account_number
WHERE details.cost_center = '1172'
AND details.period_name = 'JUL-21'
AND details.ledger_name = 'Amazon.com, Inc.'
AND fixed.new_mapping_1 = 'Building Management'
Notice that we tack a ,* onto the end of the projection; this will show you everything the query has access to. Look for repeating sections of data that you were not expecting; then, depending on which table they originate from, you might add additional criteria to the JOIN or to the WHERE, or you might need to group by additional columns.
This type of issue is really hard to comment on in a forum like this, because it is highly specific to your schema and the data contained within it, making solutions highly subjective to criteria you are not likely to publish online.
Generally, if you think a calculation is wrong, you need to manually compute it to verify it. The advice above helps you inspect the data your query is using; you should either construct your own query or use other tools to build the data set that lets you manually compute the correct values, then work them back into, or replace, your original query.
The speed issues are out of scope here; we can comment on the poor schema design, but I suspect you don't have a choice. In the utr_fixed_mapping_na table you should make account_number the same column type as the source data, or add a new column that has the data in the original type; then you can set up indexes on the columns to improve the speed of the join.
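A sketch of that last suggestion, assuming a PostgreSQL-style dialect and integer account numbers (neither is confirmed in the question; the new column and index names are made up):
-- add a copy of account_number in the source data's type, then index it
ALTER TABLE sandbox.utr_fixed_mapping_na ADD COLUMN account_number_int INTEGER;
UPDATE sandbox.utr_fixed_mapping_na SET account_number_int = CAST(account_number AS INTEGER);
CREATE INDEX idx_utr_fm_acct ON sandbox.utr_fixed_mapping_na (account_number_int);
The join would then use gl.account_number = fm.account_number_int, letting the index do its work.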
I read many threads but didn't find the right solution to my problem. It's comparable to this thread.
I have a query which gathers data and writes it, via a shell script, into a CSV file:
SELECT
'"Dose History ID"' = d.dhs_id,
'"TxFieldPoint ID"' = tp.tfp_id,
'"TxFieldPointHistory ID"' = tph.tph_id,
...
FROM txfield t
LEFT JOIN txfieldpoint tp ON t.fld_id = tp.fld_id
LEFT JOIN txfieldpoint_hst tph ON fh.fhs_id = tph.fhs_id
...
WHERE d.dhs_id NOT IN ('1000', '10000')
AND ...
ORDER BY d.datetime,...;
This is based on a very big database with lots of tables and machine values. I picked my columns of interest and linked them by their built-in table IDs. Now I have to reduce my result, where I get many rows with the same values and just the IDs changed. I just need one (the first) row of "tph.tph_id", with mechanics like
WHERE "Rownumber" is 1
or something like this. So far I couldn't implement a proper subquery or use the ROW_NUMBER() SQL function. Your help would be very much appreciated. The result looks like this and, based on the last ID, I just need one row for each of these numbers (all IDs are not strictly consecutive).
A01";261511;2843119;714255;3634457;
A01";261511;2843113;714256;3634457;
A01";261511;2843113;714257;3634457;
A02";261512;2843120;714258;3634464;
A02";261512;2843114;714259;3634464;
....
I think "GROUP BY" may suit your needs.
You can group rows with the same values for a set of columns into a single row
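For example, a minimal sketch of that idea; last_id below stands in for whichever column produces the final ID in your CSV, the elided FROM/JOIN/WHERE clauses are reused from your query, and MIN() arbitrarily keeps the lowest of the other IDs per group:
SELECT MIN(d.dhs_id) AS dhs_id,
MIN(tp.tfp_id) AS tfp_id,
MIN(tph.tph_id) AS tph_id,
last_id
FROM ...
WHERE ...
GROUP BY last_id;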
I have a table in MS Access which has stock prices arranged like
Ticker1, 9:30:00, $49.01
Ticker1, 9:30:01, $49.08
Ticker2, 9:30:00, $102.02
Ticker2, 9:30:01, $102.15
and so on.
I need to do some calculation where I need to compare prices in 1 row, with the immediately previous price (and if the price movement is greater than X% in 1 second, I need to report the instance separately).
If I were doing this in Excel, it would be a fairly simple formula, but I have a few million rows of data, so that's not an option.
Any suggestions on how I could do it in MS Access?
I am open to any kind of solutions (with or without SQL or VBA).
Update:
I ended up trying to traverse my records by using ADODB.Recordset in nested loops. Code below. I thought it was a good idea, and the logic worked for a small table (20k rows). But when I ran it on a larger table (3m rows), Access ballooned to its 2 GB limit without finishing the task (because of temporary tables; the size of the original table was more like ~300 MB). Posting it here in case it helps someone with smaller data sets.
'rstTickers, rstDates, rst, the connection cn and the counter i are assumed to be set up before these loops
Do While Not rstTickers.EOF
    myTicker = rstTickers!ticker
    rstDates.MoveFirst
    Do While Not rstDates.EOF
        myDate = rstDates!Date_Only
        strSql = "select * from Prices where ticker = """ & myTicker & """ and Date_Only = #" & myDate & "#" 'get all prices for a given ticker for a given date
        rst.Open strSql, cn, adOpenKeyset, adLockOptimistic 'I needed to do this to open in editable mode
        rst.MoveFirst
        sPrice1 = rst!Open_Price 'seed the "previous price" with the first row of the day
        rst!Row_Num = i
        rst.MoveNext
        Do While Not rst.EOF
            i = i + 1
            rst!Row_Num = i
            rst!Previous_Price = sPrice1
            sPrice2 = rst!Open_Price
            rst!Price_Move = Round(Abs((sPrice2 / sPrice1) - 1), 6) 'fractional move vs. the previous row
            sPrice1 = sPrice2 'the current price becomes the previous price for the next row
            rst.MoveNext
        Loop
        i = i + 1
        rst.Close
        rstDates.MoveNext
    Loop
    rstTickers.MoveNext
Loop
If the data is always one second apart without any milliseconds, then you can join the table to itself on the ticker ID and the time, offset by one second.
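Something like this sketch in Access SQL, using the question's columns (Time_Date as the timestamp, per the update below) and a hypothetical 1% threshold:
SELECT a.Ticker, a.Time_Date, b.Open_Price AS Previous_Price, a.Open_Price,
Round(Abs(a.Open_Price / b.Open_Price - 1), 6) AS Price_Move
FROM Prices AS a, Prices AS b
WHERE a.Ticker = b.Ticker
AND a.Time_Date = DateAdd("s", 1, b.Time_Date)
AND Abs(a.Open_Price / b.Open_Price - 1) > 0.01;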
Otherwise, if there is no sequence counter of some sort to join on, then you will need to create one. You can do this by doing a "ranking" query. There are multiple approaches to this. You can try each and see which one works the fastest in your situation.
One approach is to use a subquery that returns the number of rows that come before the current row. Another approach is to join the table to itself on all the rows before it and do a GROUP BY and a count. Both approaches produce the same results, but depending on the nature of your data, how it's structured, and what indexes you have, one approach will be faster than the other.
Once you have a "rank column", you do the procedure described in the first paragraph, but instead of joining on an offset of time, you join on an offset of rank.
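A sketch of the subquery approach in Access SQL, again with the question's column names; each row's rank is the count of earlier rows for the same ticker on the same date:
SELECT p.Ticker, p.Date_Only, p.Time_Date, p.Open_Price,
(SELECT Count(*) FROM Prices AS q
WHERE q.Ticker = p.Ticker
AND q.Date_Only = p.Date_Only
AND q.Time_Date < p.Time_Date) AS RankNum
FROM Prices AS p;
Saved as a query, this can then be joined to itself on Ticker and Date_Only with RankNum offset by one, as described in the first paragraph.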
I ended up moving my data to a SQL Server (which had its own issues). I added a row number column (Row_Num) like this
ALTER TABLE Prices ADD Row_Num INT NOT NULL IDENTITY (1,1)
It worked for me (I think) because my underlying data was already in the order that I needed it to be in. I've read enough comments saying you shouldn't rely on this, because you don't know in what order the server stores the data.
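If insertion order ever can't be trusted, a safer SQL Server alternative is to derive the row number from the timestamp itself; a sketch using the question's columns:
SELECT Ticker, Date_Only, Time_Date, Open_Price,
ROW_NUMBER() OVER (PARTITION BY Ticker, Date_Only ORDER BY Time_Date) AS Row_Num
FROM Prices;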
Anyway, after that it was a join on itself. It took me a while to figure out the syntax (I am new to SQL). Adding the SQL here for reference (works on SQL Server but not Access).
Update A Set Previous_Price = B.Open_Price
FROM Prices A INNER JOIN Prices B
ON A.Date_Only = B.Date_Only
WHERE ((A.Ticker=B.Ticker) AND (A.Row_Num=B.Row_Num+1));
BTW, I first had to add the column Date_Only like this (works in Access but not SQL Server):
UPDATE Prices SET Prices.Date_Only = Format([Time_Date],"mm/dd/yyyy");
I think the solution for row numbers described by @Rabbit should work better (broadly speaking). I just haven't had the time to try it out. It took me a whole day to get this far.
I am having an issue with a query which returns results that are very far from reality (not only does it make no sense at all, but I can also calculate the correct answer using filters).
I am building a KPI database for work, and this query returns KPIs by employee by period. I have a very similar query, from which this one is derived, that returns KPIs by sector by period and gives exactly the results I have calculated using a spreadsheet. I really have no idea what happens here. Basically, I want to sum a few measures that are in the maintenances table, like temps_requete_min, temps_analyse_min, temps_maj_min and temps_rap_min, then create a subtotal, AND present these measures as hours (the measures are stored in minutes, thus the divide by 60).
SELECT
[anal].[prenom] & " " & [anal].[nom] AS Analyste,
maint.periode, maint.annee,
Round(Sum(maint.temps_requete_min)/60,2) AS REQ,
Round(Sum(maint.temps_analyse_min)/60,2) AS ANA,
Round(Sum(maint.temps_maj_min)/60,2) AS MAJ,
Round(Sum(maint.temps_rap_min)/60,2) AS RAP,
Round((Sum(maint.temps_requete_min)+Sum(maint.temps_analyse_min)+Sum(maint.temps_maj_min)+Sum(maint.temps_rap_min))/60,2) AS STOTAL,
Count(maint.periode) AS Nombre,
a.description
FROM
rapports AS rap,
analyste AS anal,
maintenances AS maint,
per_annuelle,
annees AS a
WHERE
(((rap.id_anal_maint)=anal.id_analyste) And
((maint.id_fichier)=rap.id_rapport) And
((maint.maint_effectuee)=True) And
((maint.annee)=per_annuelle.annee) And
((per_annuelle.annee)=a.annees))
GROUP BY
[anal].[prenom] & " " & [anal].[nom],
maint.periode,
maint.annee,
a.description,
anal.id_analyste
ORDER BY
maint.annee, maint.periode;
All measures are many orders of magnitude higher than what they should be. I suspect that my Count() is wrong, but I can't see what would be wrong with the sums :|
Edit: Finally I have come up with this query, which shows the same measures I calculated using Excel, based on the advice given in the comments and the answer provided. Many thanks to everyone. What I would like to know, however, is why it makes a difference to use explicit joins rather than implicit joins (a WHERE clause on the PKs).
SELECT
maintenances.periode,
[analyste].[prenom] & " " & analyste.nom,
Round(Sum(maintenances.temps_requete_min)/60,2) AS REQ,
Round(Sum(maintenances.temps_analyse_min)/60,2) AS ANA,
Round(Sum(maintenances.temps_maj_min)/60,2) AS MAJ,
Round(Sum(maintenances.temps_rap_min)/60,2) AS RAP,
Round((Sum(maintenances.temps_requete_min)+Sum(maintenances.temps_analyse_min)+Sum(maintenances.temps_maj_min)+Sum(maintenances.temps_rap_min))/60,2) AS STOTAL,
Count(maintenances.periode) AS Nombre
FROM
(maintenances INNER JOIN rapports ON maintenances.id_fichier = rapports.id_rapport)
INNER JOIN analyste ON rapports.id_anal_maint = analyste.id_analyste
GROUP BY analyste.prenom, analyste.nom, maintenances.periode
In this case, the problem is typically that your joins are bringing together multiple dimensions. You end up doing a cross product across two or more categories.
The fix is to do the summaries independently along each dimension. That means that the "from" clause contains subqueries with group bys, and these are then joined together. The group by would disappear from the outer query.
This would suggest having a subquery such as:
from (select maint.periode, maint.annee,
Round(Sum(maint.temps_requete_min)/60,2) AS REQ,
Round(Sum(maint.temps_analyse_min)/60,2) AS ANA,
Round(Sum(maint.temps_maj_min)/60,2) AS MAJ,
Round(Sum(maint.temps_rap_min)/60,2) AS RAP,
Round((Sum(maint.temps_requete_min)+Sum(maint.temps_analyse_min) +Sum(maint.temps_maj_min)+Sum(maint.temps_rap_min))/60,2) AS STOTAL,
Count(maint.periode) AS Nombre
from maintenances maint
group by maint.periode, maint.annee
) m
I say "such as" because without a layout of the tables, it is difficult to see exactly where the problem is and what the exact solution is.
I have an INSERT query that is pulling data from two tables and inserting that data into a third table. Everything appears to be working fine except that the COUNT part of the query isn't returning the results I would expect.
The first set of tables this query runs is MIUsInGrid1000 (number of rows = 1) and Results1000 (number of rows = 24). The number that is returned from the Count part of the query is 24 instead of being 1 like I would have expected.
The next set of tables is MIUsInGrid1000 (number of rows = 3) and Results1000 (number of rows = 30). The number that is returned from the Count part of the query is 90 instead of being 3 like I would have expected.
It appears that the product of the two counts is what is being returned to me and I can't figure out why that is. If I take out the references to the Results tables then the query works the way I would expect. I think that I am misunderstanding how at least some part of this works. Can someone explain why this is not working as I expected it to?
strQuery1 = "Insert Into MIUsInGridAvgs (NumberofMIUs, ProjRSSI, RealRSSI, CenterLat, CenterLong) " & _
"Select Count(MIUsInGrid" & i & ".MIUID), Avg(MIUsInGrid" & i & ".ProjRSSI), Avg(MIUsInGrid" & i & ".RealRSSI), Avg(Results" & i & ".Latitude), Avg(Results" & i & ".Longitude) " & _
"From MIUsInGrid" & i & ", Results" & i & " "
It seems logical to me that if you are joining two tables, one with 1 row and the other with 24 rows, there is the possibility of having a result set of 24 rows.
I notice you have not included a WHERE clause in your SQL (perhaps for brevity), but if you don't have that you are doing a CROSS JOIN (or cartesian join) between the tables and this will provide unexpected results.
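For illustration, the FROM clause rewritten with a join condition; MIUID is only a guess at the linking column, since the actual relationship between the two tables isn't shown in the question:
"From MIUsInGrid" & i & " Inner Join Results" & i & _
" On MIUsInGrid" & i & ".MIUID = Results" & i & ".MIUID " 'MIUID here is a hypothetical linking column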
The COUNT function will count all rows in the result; to determine how many "distinct" IDs there are, you can use the answer provided by Tomalak.
This should solve your immediate problem
Count(DISTINCT MIUsInGrid" & i & ".MIUID)
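One caveat: if this SQL is executed by the Access database engine rather than a server backend, Jet/ACE SQL does not support COUNT(DISTINCT ...). The usual workaround is to count over a DISTINCT subquery, sketched here with the question's table name:
SELECT Count(*) AS NumberofMIUs
FROM (SELECT DISTINCT MIUID FROM MIUsInGrid1000) AS t;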
The naked COUNT function counts the non-NULL values, not the distinct values, unless you tell it to switch behavior by using DISTINCT.
When two tables are joined the way you do it (you build a Cartesian product), the number of resulting rows is the number of rows in the one table times the number of rows in the other table.
This leads me to the suspicion that you are missing a join condition.
Apart from that, I find it bewildering that you have a number of obviously identical tables that are distinguished only by a number in their name. This most certainly is a substantial design flaw in the database.
The way I usually figure these things out is to not use any aggregates first to see what my result sets will be. Then, I start adding in the aggregate functions.