I have a chart showing Issued Qty for all transactions from 'From Location' to 'To Location' as follows.
Now I want to show one more column. In it I want to show transaction qty from 'To Location' to 'From Location'.
To elaborate, the first row shows the Qty issued from Pharmacy to 3rd Flr - NEW AC WARD. So in the next column, I wish to show
the qtys issued from 3rd Flr - NEW AC WARD to Pharmacy.
Some trick using set analysis would do, but I am not well versed in set analysis. Please help.
I tried something like this. But this is not working. :(
=Sum ({$<[From_Location_Name] = {[To_Location_Name]}, [To_Location_Name] = {[From_Location_Name]}>}[MatlMoveIssuedQty])
I also tried the following, but it did not work:
=Sum ({$<[From_Location_Name] = p(To_Location_Name)>}[MatlMoveIssuedQty])
I think what you're trying to do isn't possible because you can't tell QlikView to honour the dimensionality (keep everything on the FROM line) and ignore it at the same time (set analysis TO=FROM). QV has a lot of intelligence built into its handling of dimensions and the associativity of the data behind those dimensions.
I do have 2 possibilities for you to display your data though.
The simplest method would be a pivot table with FROM down the left and TO across the top. It could potentially get messy if your dimensions contain many values.
The second way would be to create an orphan table of the Locations so that users can select them without breaking the underlying associativity. That involves a few more steps.
First, in the script. The important part is the second load, which creates a distinct list of the combinations of FROM and TO and names the fields something that does not associate with anything else in the data model.
MEDS:
load * inline [
FROM, TO, QTY
Pharm, 3rd, 1
Pharm, 1st, 2
Pharm, 2nd, 3
2nd, Pharm, 45
3rd, Pharm, 6
1st, Pharm, 76
3rd, Pharm, 53
];
LOCS:
load distinct
[FROM] as LOC1,
[TO] as LOC2
resident MEDS;
Then you can build these 2 objects next to each other to show your TO and FROM figures.
Notice that when nothing is selected the 2 objects are equivalent but with different sorting. Depending on what you're trying to achieve, a calculation condition might be in order to guide users to analyse one location at a time.
I have a requirement for a dynamic report where the user can select which columns they want to display. This is not a problem for me to do using a render variable; however, the measures are not rolling up.
As an example, I have age, gender and sales. This generates, say, ages 20 and 25, and obviously 2 genders, resulting in 4 rows.
When you remove gender using the static choices in the prompt page, it keeps 4 rows, just without displaying that column. I understand this is the nature of 'rendering' (or not) the column.
What I need is for the measures to roll up to what columns are left, which would show 2 rows, and a total. Or even remove all columns, and have just an overall total sales left.
I can't really use conditional blocks to create every combination, as there are going to be 20+ columns in the report.
Thanks in advance!!
Change your conditionally rendered field to something like:
case when ?render_gender? then [Gender] else '' end
This should collapse your four rows to two.
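Purely as an illustration of why this rolls the measure up (the SalesFact table and its columns are made up, not from your report): once the rendered expression returns a constant, the grouping key loses that dimension, so the aggregate collapses over it. In plain SQL the same idea looks like this, with 1 = 0 standing in for the ?render_gender? prompt being false:

SELECT Age,
       CASE WHEN 1 = 0 THEN Gender ELSE '' END AS Gender,
       SUM(Sales) AS Sales
FROM   SalesFact
GROUP BY Age,
         CASE WHEN 1 = 0 THEN Gender ELSE '' END;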
By user sorting I mean that as a user on the site you see a bunch of items, and you are supposed to be able to reorder them (I'm using jQuery UI).
The user only sees 20 items on each page, but the total number of items can be thousands.
I assume I need to add another column in the table for custom ordering.
If the user sees items from 41-60, and he sorts them like:
41 = 2nd
42 = 1st
43 = fifth
etc.
I can't just set the ordering column to 2,1,5.
I would need to go through the entire table and change each record.
Is there any way to avoid this and somehow sort only the current selection?
Add another column to store the custom order, just as you suggested yourself. You can avoid having to reassign every row's value by using a REAL-typed column: for new rows, you still use an increasing integer sequence for the column's value, but when a user reorders a row, the decimal data type lets you set the moved row's value to ½ (previous row's value + next row's value), so only that single row needs updating. You have two special cases to take care of, namely when a user moves a row to the very beginning or end of the list. In that case, just use min - 1 or max + 1, respectively.
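A minimal sketch of that single-row update (the items table, its columns, and the ids are assumptions, and the neighbours' values are assumed to have been read by the application beforehand):

-- Hypothetical schema: items(id INTEGER PRIMARY KEY, sort_order REAL).
-- Suppose the application already read the new neighbours' values:
-- previous row's sort_order = 300.0, next row's sort_order = 400.0.
-- Only the moved row (hypothetical id 42) needs to be touched:
UPDATE items
SET sort_order = (300.0 + 400.0) / 2   -- 350.0, halfway between the neighbours
WHERE id = 42;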
This approach is the simplest I can think of, but it also has some downsides. First, it has a theoretical limitation because the data type only has double precision: after a finite number of reorderings, the values become too close together for their average to be a distinct number. But that's really only a theoretical limit you should never reach in practical applications. Also, the column will use 8 bytes per row, which is probably much more than you actually need.
If your application might scale to the point where those 8 bytes matter, or where you might have users that overeagerly reorder rows, you should instead stick to an INTEGER column and use multiples of a constant number as the default values (e.g. 100, 200, 300, ...). You still use the update formula from above, but whenever two values become too close together, you reassign all values. By tweaking the constant multiplier to the average table size and user behaviour, you can control how often this expensive operation has to be done.
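A sketch of that periodic renumbering, assuming the same hypothetical items table (now with an INTEGER sort_order) and PostgreSQL-style UPDATE ... FROM syntax:

-- Reassign gapped values (100, 200, 300, ...) in the current order.
-- Run this whenever two neighbouring sort_order values get too close.
UPDATE items
SET sort_order = ranked.new_order
FROM (
    SELECT id, ROW_NUMBER() OVER (ORDER BY sort_order) * 100 AS new_order
    FROM items
) AS ranked
WHERE items.id = ranked.id;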
There are a couple of ways I can think of to do this. One would be to use a SELECT-from-SELECT style statement, something like this:
SELECT *
FROM (
    SELECT col1, col2, col3...
    FROM ...
    WHERE ...
    LIMIT n,m
) as Table_A
ORDER BY ...
The second option would be to use temp tables such as:
INSERT INTO temp_table_A SELECT ... FROM ... WHERE ... LIMIT n,m;
SELECT * FROM temp_table_A ORDER BY ...
Another option to look at would be a jQuery plugin like DataTables.
One way I can think of is:
Add a new column (if feasible) or create a new table for holding the order of the items.
On any page you will show around 20 items based on the initial ordering.
Using jQuery's Draggable, you can send updates to this table.
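As a sketch, that order-holding table could look like the following (table and column names are assumptions); each drag event then only needs to update the rows whose position actually changed.

-- Hypothetical ordering table, kept separate from the items themselves.
CREATE TABLE item_order (
    item_id    INT PRIMARY KEY,  -- the item being ordered
    sort_order INT NOT NULL      -- position used when listing items
);

-- Example of the update sent from the drag handler for one moved item:
UPDATE item_order SET sort_order = 42 WHERE item_id = 7;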
I think you can do this with an extra column.
First, you could prepopulate this new column with a default sort order and then allow the user to interactively modify it with the drag and drop of jquery-ui.
Let's say this user has 100 items in the table. You set the values in the order column to [1,2,3,...,99,100]. I suggest that you run a script on the original table to set all items to a default sort order.
Now going back to your example where the user is presented with items 41-60: the initial presentation in their browser would rank those at orders [41,42,43,...,59,60]. You might also need to save the lowest order that appears in this subset, in this case 41. Or better yet, save the entire array of rankings and restore the exact same numbers in the new order. This covers the case where they select a set of records that are not already consecutively ordered, perhaps because they belong to someone else.
To demonstrate what I mean: when they reorder them in the page, your javascript reassigns those same numbers back to the subset in the new order. Like this:
item A : 41
item B : 45
item C : 46
item D : 47
item E : 51
item F : 54
item G : 57
Then, when the user drags them into a new order, you reassign those same numbers like this:
item D : 41
item F : 45
item E : 46
item A : 47
item C : 51
item B : 54
item G : 57
This should also work if the subset is consecutive.
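A sketch of how that reassignment could be applied on the server in one statement (the ids 101-107 standing in for items A-G, the table, and the column names are all assumptions):

-- Write the saved rank values back to the same subset in its new order.
UPDATE items
SET sort_order = CASE id
    WHEN 104 THEN 41   -- item D
    WHEN 106 THEN 45   -- item F
    WHEN 105 THEN 46   -- item E
    WHEN 101 THEN 47   -- item A
    WHEN 103 THEN 51   -- item C
    WHEN 102 THEN 54   -- item B
    WHEN 107 THEN 57   -- item G
END
WHERE id IN (101, 102, 103, 104, 105, 106, 107);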
I stumbled across this website and instantly fell in love. Let me be completely honest: I have little to NO knowledge of Access. I told my manager this and he still insists that I "can figure it out", which I highly doubt. So here I am asking for help. On to the question:
Where are the SQL code gurus? haha
I have 2 tables, "Found" & "Missing", both showing inventory adjustments for our building within the company. (Amazon)
I believe I have the process figured out, but I have no idea how it looks within Access.
Step 1: Group by ASIN (basically the numerical version of a barcode)
Step 2: Determine the +/- for the grouped ASINs in both lists
Step 3: Use TOP function to find the largest negative adjustments
There is a total of 3000+ records in both spreadsheets, but hopefully if I can figure out the process then the input/output wouldn't matter.
I thought maybe I needed a unique identifier? Bin(location) + ASIN(barcode) + Quantity
As you can see.. I have been thinking, organizing, and praying someone can help!
Here is a dummy example of the "Found" spreadsheet; the "Missing" spreadsheet has the exact same format, with the only difference being an "M" instead of an "F" under "Reason Code".
Hopefully this is enough information, I know it's a cluster.... thanks guys!
Date FC Application Name IOG ID IOG Name Container Id GL Product Group ASIN Processed By Reason Code Quantity Item Cost
1/5/2014 RIC1 FCICQACountService 1234 Doll Inc. P-1-A101xxx Toy B000000001 unknown1 F -1 12.34
1/5/2014 RIC1 FCICQACountService 1334 Amazon P-1-A101xxx Drugstore B000000002 unknown2 F -1 10.36
1/5/2014 RIC1 FCICQACountService 1432 Amazon P-1-A102xxx Office Product B000000003 unknown3 F -13 50.50
1/5/2014 RIC1 FCICQACountService 1442 Amazon P-1-A102xxx Office Product B000000004 unknown4 F -2 223.62
1/5/2014 RIC1 FCICQACountService 1337 Hope Inc. P-1-A102xxx Office Product B000000005 unknown5 F -1 100.99
I take it that by "spreadsheet", you actually mean "table". It might be a good idea to find a good primer on SQL and relational databases in general.
You've got a pretty good start, though. You've identified what you want. Note that in SQL, this is what you usually do; you think more about the result you want than you do the process of getting it. Each of your points suggests a keyword or function that will go into your query:
1) "Group by ASIN (basically the numerical version of a barcode)": You probably want to use the GROUP BY keyword.
2) "Determine the +/- for the grouped ASINs in both lists": Sounds like you want to SUM up a column here.
3) "Use TOP function to find the largest negative adjustments": Obviously, you already know you want TOP. The piece you're missing, though, is that the "largest negative" part suggests you want to use ORDER BY, and you want the smallest (largest magnitude negative) first. That will make sure that the right row is on top when it takes the top one.
So putting all that together, the only thing you need to figure out is the syntax. Your end query probably looks something like this:
SELECT TOP 1 ASIN, SUM(Quantity) AS TotalQuantity
FROM Found
GROUP BY ASIN
ORDER BY SUM(Quantity);
This will calculate the sum of Quantity for each group of rows that has the same ASIN, and the result will be a set of rows that contain the ASIN and the total Quantity for that ASIN. Then it sorts the rows using the total quantity, with the smallest (most negative) row on top. The TOP then cuts off all the other rows. You could optionally leave out the TOP 1 if you want to see all the rows.
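If you also need the Missing table in the same total (your step 2 mentions both lists), one sketch, assuming both tables have the same columns, is to save a UNION ALL query first (named, say, CombinedAdjustments; that name is made up) and then run the same grouping over it. Access's SQL view doesn't accept comments, so the two statements below would be two separate saved queries:

SELECT ASIN, Quantity FROM Found
UNION ALL
SELECT ASIN, Quantity FROM Missing;

SELECT TOP 1 ASIN, SUM(Quantity) AS TotalQuantity
FROM CombinedAdjustments
GROUP BY ASIN
ORDER BY SUM(Quantity);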
By the way, this SUM function is a little special. It's what we call an aggregate function. That's because it does something with a bunch of values across many rows. Not all functions are like that in SQL, but this one is.
If this isn't exactly what you're looking for, I hope it's enough to get you off the ground. Good luck.
I've been asked to modify a report (which unfortunately was written horribly!! not by me!) to include a count of days. Please note that "Days" is not calculated using "StartDate" & "EndDate" below. The problem is that there are multiple rows per record (users want to see the detail for startdate & enddate), so my total for "Days" counts the value once for each row. How can I show the total only once, without it repeating down the column?
This is what the data looks like right now:
ID Description startdate enddate Days
REA145681 Emergency 11/17/2011 11/19/2011 49
REA145681 Emergency 12/6/2011 12/9/2011 49
REA145681 Emergency 12/10/2011 12/14/2011 49
REA146425 Emergency 11/23/2011 12/8/2011 54
REA146425 Emergency 12/9/2011 12/12/2011 54
I need this:
ID Description startdate enddate Days
REA145681 Emergency 11/17/2011 11/19/2011 49
REA145681 Emergency 12/6/2011 12/9/2011
REA145681 Emergency 12/10/2011 12/14/2011
REA146425 Emergency 11/23/2011 12/8/2011 54
REA146425 Emergency 12/9/2011 12/12/2011
Help please. This is how the users want to see the data.
Thanks in advance!
Liz
--- Here is the query simplified:
select id
,description
,startdate -- users want to see all start dates and enddates
,enddate
,days = datediff(d,Isnull(actualstardate,anticipatedstartdate) ,actualenddate)
from table
As you didn't provide the data of your tables, I'll operate over your result as if it were a table. This will give you what you're looking for:
select *,
case row_number() over (partition by id order by id)
when 1 then days
end
from t
Edit:
Looks like you DID add some SQL code. This should be what you're looking for:
select *,
case row_number() over (partition by id order by id)
when 1 then
datediff(d,Isnull(actualstardate,anticipatedstartdate) ,actualenddate)
end
from t
That is a task for the reporting tool. You will have to write something like the next code in the Display Properties of the Days field:
if RowNumber > 1 AND id = previous_row(id)
then -- hide the value of Days
Colour = BackgroundColour
Days = NULL
Days = ' '
Display = false
... (anything that works)
So they want the output to be exactly the same except that they don't want to see the days listed multiple times for each ID value? And they're quite happy to see the ID and Description repeatedly but the Days value annoys them?
That's not really an SQL question. SQL is about which rows, columns and derived values are supposed to be presented in what order and that part seems to be working fine.
Suppressing the redundant occurrences of the Days value is more a matter of using the right tool. I'm not up on the current tools but the last time I was, QMF was very good for this kind of thing. If a column was the basis for a control break, you could, in effect, select an option for that column that told it not to repeat the value of the control break repeatedly. That way, you could keep it from repeating ID, Description AND Days if that's what you wanted. But I don't know if people are still using QMF and I have no idea if you are. And unless the price has come way down, you don't want to go out and buy QMF just to suppress those redundant values.
Other tools might do the same kind of thing but I can't tell you which ones. Perhaps the tool you are using to do your reporting - Crystal Reports or whatever - has that feature. Or not. I think it was called Outlining in QMF but it may have a different name in your tool.
Now, if this report is being generated by an application program, that is a different kettle of fish. An application could handle that quite nicely. But most people use end-user reporting tools to do this kind of thing, to avoid the greater cost involved in writing programs.
We might be able to help further if you specify what tool you are using to generate this report.
It would seem that there is a much simpler way to state the problem. Please see Edit 2, following the sample table.
I have a number of different products on a production line. I have the date that each product entered production. Each product has two identifiers: item number and serial number. I have the total number of labour hours for each product by item number and by serial number (i.e. I can tell you how many hours went into each object that was manufactured and what the average build time is for each kind of object).
I want to determine how (if) varying the length of production runs affects the average time it takes to build a product (item number). A production run is the sequential production of multiple serial numbers for a single item number. We have historical records going back several years with production runs varying in length from 1 to 30.
I think to achieve this, I need to be able to assign 'run id'. To me, that means building a query that sorts by start date and calculates a new unique value at each change in item number. If I knew how to do that, I could solve the rest of the problem on my own.
So that suggests a series of related questions:
Am I thinking about this the right way?
If I am on the right track, how do I generate those run id values? Calculate and store is an option, although I have a (misguided?) preference for direct queries. I know exactly how I would generate the run numbers in Excel, but I have a (misguided?) preference to do this in the database.
If I'm not on the right track, where might I find that track? :)
Edit:
Table structure (simplified) with sample data:
AutoID Item Serial StartDate Hours RunID (proposed calculation)
1 Legend 1234 2010-06-06 10 1
3 Legend 1235 2010-06-07 9 1
2 Legend 1237 2010-06-08 8 1
4 Apex 1236 2010-06-09 12 2
5 Apex 1240 2010-06-10 11 2
6 Legend 1239 2010-06-11 10 3
7 Legend 1238 2010-06-12 8 3
I have shown that start date, serial, and AutoID are mutually unrelated. I have shown the expectation that labour goes down as the run length increases (but this is a 'fact' only via received wisdom, not data analysis). I have shown what I envision as the heart of the solution, that being a RunID that reflects sequential builds of a single item. I know that if I could get that RunID, I could group by run to get counts, averages, totals, max, min, etc. In addition, I could do something like hours / (hours of the first unit in the run) to get the percentage change from the start of the run. At that point I could graph the trends associated with different run lengths, either globally across all items or on a per-item basis. (At least I think I could do all that. I might have to muck about a bit, but I think I could get it done.)
Edit 2: This problem would appear to be: how do I get the 'starting' member (earliest start date) of each run when I don't already have a runID? (The runID shown in the sample table does not exist and I was originally suggesting that being able to calculate runID was a potentially viable solution.)
AutoID Item
1 Legend
4 Apex
6 Legend
I'm assuming that having learned how to find the first member of each run that I would then be able to use what I've learned to find the last member of each run and then use those two results to get all other members of each run.
Edit 3: my version of a query that uses the AutoID of the first item in a run as the RunID for all units in a run. This was built entirely from samples and direction provided by Simon, who has the accepted answer. Using this as the basis for grouping by run, I can produce a variety of run statistics.
SELECT first_product_of_run.AutoID AS runID, run_sibling.AutoID AS itemID, run_sibling.Item, run_sibling.Serial, run_sibling.StartDate, run_sibling.Hours
FROM (SELECT first_of_run.AutoID, first_of_run.Item, first_of_run.Serial, first_of_run.StartDate, first_of_run.Hours
FROM dbo.production AS first_of_run LEFT OUTER JOIN
dbo.production AS earlier_in_run ON first_of_run.AutoID - 1 = earlier_in_run.AutoID AND
first_of_run.Item = earlier_in_run.Item
WHERE (earlier_in_run.AutoID IS NULL)) AS first_product_of_run LEFT OUTER JOIN
dbo.production AS run_sibling ON first_product_of_run.Item = run_sibling.Item AND first_product_of_run.AutoID <> run_sibling.AutoID AND
first_product_of_run.StartDate < run_sibling.StartDate LEFT OUTER JOIN
dbo.production AS product_between ON first_product_of_run.Item <> product_between.Item AND
first_product_of_run.StartDate < product_between.StartDate AND product_between.StartDate < run_sibling.StartDate
WHERE (product_between.AutoID IS NULL)
Could you describe your table structure some more? If the "date that each product entered production" is a full time stamp, or if there is a sequential identifier across products, you can write queries to identify the first and last products of a run. From that, you can assign IDs to or calculate the length of the runs.
Edit:
Once you've identified 1,4, and 6 as the start of a run, you can use this query to find the other IDs in the run:
select first_product_of_run.AutoID, run_sibling.AutoID
from first_product_of_run
left join production run_sibling on first_product_of_run.Item = run_sibling.Item
and first_product_of_run.AutoID <> run_sibling.AutoID
and first_product_of_run.StartDate < run_sibling.StartDate
left join production product_between on first_product_of_run.Item <> product_between.Item
and first_product_of_run.StartDate < product_between.StartDate
and product_between.StartDate < run_sibling.StartDate
where product_between.AutoID is null
first_product_of_run can be a temp table, table variable, or sub-query that you used to find the start of a run. The key is the where product_between.AutoID is null. That restricts the results to only pairs where no different items were produced between them.
Edit 2, here's how to get the first of each run:
select first_of_run.AutoID
from
(
select product.AutoID, product.Item, MAX(previous_product.StartDate) as PreviousDate
from production product
left join production previous_product on product.AutoID <> previous_product.AutoID
and product.StartDate > previous_product.StartDate
group by product.AutoID, product.Item
) first_of_run
left join production earlier_in_run
on first_of_run.PreviousDate = earlier_in_run.StartDate
and first_of_run.Item = earlier_in_run.Item
where earlier_in_run.AutoID is null
It's not pretty, and will break if StartDate is not unique. The query could be simplified by adding a sequential and unique identifier with no gaps. In fact, that step will probably be necessary if StartDate is not unique. Here's how it would look:
select first_of_run.AutoID
from production first_of_run
left join production earlier_in_run
on (first_of_run.Sequence - 1) = earlier_in_run.Sequence
and first_of_run.Item = earlier_in_run.Item
where earlier_in_run.AutoID is null
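A sketch (assuming SQL Server, to match the dbo.production references above) of how that gap-free Sequence could be generated into a temp table before running the query above:

-- Copy production into a temp table with a gap-free, unique Sequence
-- assigned in StartDate order (AutoID breaks ties if StartDate repeats).
SELECT AutoID, Item, Serial, StartDate, Hours,
       ROW_NUMBER() OVER (ORDER BY StartDate, AutoID) AS Sequence
INTO   #production_seq
FROM   dbo.production;

The query above would then join #production_seq to itself in place of production.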
Using outer joins to find where things aren't still twists my brain, but it's a very powerful technique.