When I fill my DGV from a bound Access table, it orders by Primary ID only up to a point. I say that because I have 1200 records and the DGV is filled in the following order, looking just at the Primary ID column:
1200
1199
1198
1197
1196
1
2
3
4
.....
1195
using the code below:
Me.ClientListTableAdapter.Fill(Me.ReportGenieDatabaseDataSet2.ClientList)
Sorry if this is vague, but it's all I've got. I hope to be able to order by descending ID, just like the Access table shows.
Also, when I look up the .Last value, it returns 1195.
There is no inherent order in a table; you can often find that from one query to the next, records are returned in different orders. You MUST therefore use a SQL statement to set the order of the data returned. You are likely already using a SQL statement to retrieve this data; simply add an ORDER BY clause to that SQL.
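For example, a minimal sketch of the amended SELECT, assuming the TableAdapter reads from a table called ClientList whose key column is named ID (neither name is confirmed in the question):
-- Hypothetical sketch: table and column names are assumptions.
SELECT * FROM ClientList ORDER BY ID DESC
Putting the ORDER BY in the TableAdapter's main query means the Fill call above returns rows in descending ID order.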
I'm new to SQL and Access and am trying to take an entry from InventoryTransactions.Quantity and sum it with a field from another table, MasterInventory.QuantityOnHand. I know there is a way to do this with queries and forms, but I'm hitting a bit of a roadblock. Any help will be much appreciated!
Example
InventoryTransactions (table)
ID | TransactionItem | TransactionType | Quantity
[ID] = AutoNumber
[TransactionItem] = Lookup referencing [MasterInventory].[ID] and [MasterInventory].[ItemName] (checking Y/N from [MasterInventory].[Consumable])
[TransactionType] = Addition or Removal (add, subtract), from [TransactionTypeTable]
[Quantity] = Number
MasterInventory (table)
Here I want my Quantity entry from the table above to be added to or subtracted from (depending on the entry in the TransactionType field) [QuantityOnHand] in this MasterInventory table.
Say I've got 20 of something in [MasterInventory].[QuantityOnHand] and I enter 20 into [ItemTransactions].[Quantity],
having selected "Addition" in [ItemTransactions].[TransactionType].
The value in [MasterInventory].[QuantityOnHand] should then be updated to 40 for the corresponding [MasterInventory].[ID] (or to 0 if I had selected "Removal" in [ItemTransactions].[TransactionType]).
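A minimal sketch of the update this implies, written as an Access parameter query (the parameter names prmItemID, prmQty, and prmType are invented for illustration; in practice you would run it from a form event or a saved query):
-- Hypothetical sketch: bracketed names are Access parameters.
UPDATE MasterInventory
SET QuantityOnHand = QuantityOnHand + IIf([prmType] = 'Addition', [prmQty], -[prmQty])
WHERE ID = [prmItemID];
With QuantityOnHand = 20, prmQty = 20, and prmType = 'Addition' this yields 40; with 'Removal' it yields 0, matching the example above.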
Please let me know if you need clarification.
Hi, I was wondering if there is a way to split long column values. I am using SSRS to get the distinct values, with the number of product IDs against each category, into a matrix/pivot table. The problem is that the number of distinct categories (135 of them) makes it a nightmare to make the report look pretty, shall we say. Is there a dynamic way to split the columns into groups of, say, 10 to make the table nicer and easier to read? I thought of using the IN operator with a list of values, but that means managing the data every time a new category gets added. Is there a dynamic way to present the data in the best way possible?
I am also open to suggestions on making the report nicer if anyone has any thoughts. I am new to SSRS and still getting to grips with it.
Here is an example of my problem: (screenshot omitted)
Are your column names coming back from the database under the SubCat field you mention in the comments above? If so, I imagine your dataset looks something like this:
SubCat  | LogNo
--------+------
SubCatA |    34
SubCatB |    65
SubCatC |   120
SubCatD |     8
SubCatE |    19
You can edit this so that an index for each individual category is returned as well, using the ROW_NUMBER() function. Add the field
ROW_NUMBER() OVER (ORDER BY SubCat ASC) AS ColID
to your query. This will result in the following:
SubCat  | LogNo | ColID
--------+-------+------
SubCatA |    34 |     1
SubCatB |    65 |     2
SubCatC |   120 |     3
SubCatD |     8 |     4
SubCatE |    19 |     5
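For reference, a sketch of what the full dataset query might look like; the original query isn't shown, so the table name CategoryLog and the ProductID column are placeholders:
-- Hypothetical sketch: only the ROW_NUMBER() column is essential here.
SELECT SubCat,
       COUNT(ProductID) AS LogNo,
       ROW_NUMBER() OVER (ORDER BY SubCat ASC) AS ColID
FROM CategoryLog
GROUP BY SubCat;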
Now that there is a numeric identifier for each column, you can apply some logic to arrange the columns neatly on the page.
This solution involves a Tablix nested inside a Matrix, itself nested inside another Matrix, as follows.
First create a Matrix (Matrix1) and set its data source to your dataset. Set the Row Group properties to group on the following expression, where "4" is the number of columns you wish to display horizontally:
=CInt(Floor((Fields!ColID.Value - 1) / 4))
Then, in the data section of the Matrix (bottom-right corner), insert a rectangle, and on it insert a new Matrix (Matrix 2). Remove the leftmost row. Set the column header to the column name SubCat; this will automatically set the column grouping to SubCat.
Finally, in the data section of Matrix 2, add a new rectangle and add a Tablix to it. Remove the header row and set the Tablix to be one column wide only. Set the data to the information you wish to display, i.e. LogNo.
Then delete the leftmost and topmost row/column from Matrix 1 to make it look tidier (note: delete the column/row only, not the associated groups!).
Then, when the report is run, it should look similar to the following (screenshot omitted). Note that in my example SubCat = ColName and LogNo = NumItems, and I have multiple values per SubCat.
Hopefully you find this helpful. If not, please ask for clarification.
Can you do something like this?
The following gives the steps, in two columns, down then across (screenshots omitted).
So, I have a table that I created from an even larger table using this statement in SQL:
sSql = "SELECT InputID, EstimateLogNumber INTO [EstimateTable] FROM [dbo_tblInput]"
I have added a new column to [EstimateTable] called [Options], which should sequentially hold a field value of 'Option A', 'Option B', 'Option C', etc. for each [InputID] where [EstimateLogNumber] is different. The [InputID] is ordered chronologically, though I wouldn't mind descending it to be safe. I am pretty clueless about how to go about this, and whether it is even possible with SQL, but I would love to learn!
InputID | EstimateLogNumber | Options
--------+-------------------+------------------------------------------------------
220     | 0008305           | Option A (this is what I want for every change in ID)
221     | 0009505           |
222     | 0008505           |
223     | 0008505           |
224     | 0008505           |
Thanks!
Kurt
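A sketch of one possible approach, assuming the table lives on SQL Server (the dbo_ prefix suggests a linked SQL Server table, and plain Access SQL has no ranking functions): DENSE_RANK() numbers each distinct [EstimateLogNumber] by its first appearance, and CHAR(64 + n) turns that number into a letter.
-- Hypothetical sketch: letters run out after 26 distinct log numbers.
UPDATE e
SET e.Options = 'Option ' + CHAR(64 + d.GrpNo)
FROM [EstimateTable] AS e
JOIN (SELECT EstimateLogNumber,
             DENSE_RANK() OVER (ORDER BY MIN(InputID)) AS GrpNo
      FROM [EstimateTable]
      GROUP BY EstimateLogNumber) AS d
  ON d.EstimateLogNumber = e.EstimateLogNumber;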
I have data in an Excel spreadsheet that I import into Access 2007. There is a candidate key (CN). For rows with the same CN, the data differs in all columns. An example is below (the real data has 100 columns, and MsgNum might vary more often; I haven't confirmed this pattern with other instances yet, so although I tried to select on it, the solution should probably not rely on the combination of CN and MsgNum being unique).
Date       | CN           | MsgNum
-----------+--------------+-------
2012-01-03 | 111-111-1111 |    101
2012-01-04 | 222-222-2222 |    101
2012-01-05 | 222-222-2222 |    202
2012-01-05 | 333-333-3333 |    101
2012-01-05 | 333-333-3333 |    202
2012-01-04 | 444-444-4444 |    101
2012-01-04 | 444-444-4444 |    101
I do not have access to SQL Server. All I have is Access 2007. I don't want to use Excel's remove duplicates procedure because the data that gets to me comes from Access before being exported to Excel, so I'm trying to find a solution to remove the duplicates through Access.
Using SQL in Query Design in Access, I tried a subquery in the WHERE clause that groups by CN, keeping those with a count of 1, but that removes all instances rather than keeping at least one.
I tried selecting just two columns (CN and MIN(MsgNum)), grouping appropriately, and that gives me what I want; but when I run it with all the columns specified (100 columns in all), I still get duplicates.
I tried the Find Duplicates Query Wizard on a single column, returning the rest of the columns, and that works for isolating the duplicates in a view; but since I cannot set up any primary keys, I am not sure how to join the tables. When running the previous MIN query with all columns, I get the same problem as before.
I was trying to set up something in the WHERE clause that compared the combination of two columns, but I read that that cannot be done. So I am at a loss for how to solve this issue, where there is a candidate key but the duplicate records on that column differ in the others. What I want is what Excel 2007's Remove Duplicates procedure does: remove duplicates based on one column while retaining the rest of the row.
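One way to do this entirely inside Access is to first add an AutoNumber column to the imported table (called RowID below; the table name ImportedData is also invented) so every row gets an identifier, then delete each row that is not the first one for its CN:
-- Hypothetical sketch: Access 2007 has no ROW_NUMBER(), hence the AutoNumber.
DELETE FROM ImportedData
WHERE RowID NOT IN
      (SELECT Min(RowID) FROM ImportedData GROUP BY CN);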
For ten years we've been using the same custom sorting on our tables, and I'm wondering whether there is another solution that involves fewer updates, especially since we would now like to have a replication/publication date and don't want our replication to replicate unnecessary entries. I had a look at nested sets, but they don't seem to do the job for us.
Base table:
id | a_sort
---+-------
 1 |     10
 2 |     20
 3 |     30
After inserting an entry that belongs at the second position:
insert into table (a_sort) values(15)
id | a_sort
---+-------
 1 |     10
 2 |     20
 3 |     30
 4 |     15
Ordering the table with:
select * from table order by a_sort
and resorting all the a_sort entries (updating at least ids 2, 3, and 4) will of course produce the desired output:
id | a_sort
---+-------
 1 |     10
 4 |     20
 2 |     30
 3 |     40
The column names, the column count, the datatypes, a possible join, possible triggers, and the way the resorting is done are all irrelevant to the problem. Also, we've found some pretty neat ways to do this task fast.
The only question: how the heck can we reduce the updates in the DB to one or two at most?
Seems like an awfully common problem.
The captain obvious in me once thought: "use an a_sort float(53), and insert using a fixed value of ordervaluefirstentry + abs(ordervaluefirstentry - ordervaluenextentry)/2".
But this would only allow around 1040 "in between" entries, so never resorting seems a bit problematic ;)
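For illustration, the midpoint insert under that scheme, using the sample rows above (a_sort 10 and 20):
-- Hypothetical sketch: no existing row is updated; float precision
-- eventually runs out after repeated halving at the same spot.
insert into table (a_sort) values ((10.0 + 20.0) / 2)  -- lands at 15.0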
You really didn't describe what you're doing with this data, so forgive me if this is a crazy idea for your situation:
You could make a sort of 'linked list' where, instead of a column of sort values, you have a column holding the id of the 'next highest valued' row. This would decrease the number of updates to a maximum of 2.
You can make it doubly linked and also have a column for next lowest, which would bring the maximum number of updates to 3.
See:
http://en.wikipedia.org/wiki/Linked_list
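A minimal sketch of the singly linked variant, with invented table and column names (items, next_id):
-- Hypothetical sketch: next_id points at the row that sorts immediately
-- after this one; NULL marks the last row in the ordering.
CREATE TABLE items (
    id      INT PRIMARY KEY,
    next_id INT NULL REFERENCES items(id)
);
-- Insert a new row (id = 4) between rows 1 and 2: one insert plus one update.
INSERT INTO items (id, next_id) VALUES (4, 2);
UPDATE items SET next_id = 4 WHERE id = 1;
Reading the rows back in sort order then takes a recursive query (e.g. a recursive CTE) instead of a simple ORDER BY.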