Dynamic DataSet update - VB.NET

I am creating a project in VB.NET in which one of the reports requires that employee names be displayed as column names, with whatever work each employee has done over a stated period appearing in the rows below that employee's column.
As is clear, the columns will have to be added at runtime. I am using an ODBC data source to populate the grid. Also, since a loop is needed to find the work done by each employee individually, the number of rows under one column may be fewer or greater than the rows in the next column.
Is there a way to create an empty DataTable and then update its contents column by column, rather than adding data row by row?
Regards

A table consists of rows and columns: the rows hold the data, the columns define the data.
So you have no choice but to add at least as many rows as your longest column will need. You can fill the other columns with empty values; that should give you the view you need.
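As a minimal sketch of that padding approach (the employee names, work items, and the DataGridView1 binding are illustrative placeholders, not part of the original question):

```vb
' Sketch: build a DataTable with one column per employee, padding
' shorter columns with empty strings so every column has the same
' number of rows. All names here are placeholders.
Dim workByEmployee As New Dictionary(Of String, List(Of String)) From {
    {"Alice", New List(Of String) From {"Task A", "Task B"}},
    {"Bob", New List(Of String) From {"Task C"}}
}

Dim table As New DataTable()
Dim maxRows As Integer = 0

' One column per employee; track the longest work list.
For Each kvp In workByEmployee
    table.Columns.Add(kvp.Key, GetType(String))
    maxRows = Math.Max(maxRows, kvp.Value.Count)
Next

' Add maxRows rows, filling missing cells with empty strings.
For i As Integer = 0 To maxRows - 1
    Dim row As DataRow = table.NewRow()
    For Each kvp In workByEmployee
        row(kvp.Key) = If(i < kvp.Value.Count, kvp.Value(i), String.Empty)
    Next
    table.Rows.Add(row)
Next

DataGridView1.DataSource = table ' bind the padded table to the grid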

Wouldn't it be better to simply switch the table orientation?
If most of your columns are names, or perhaps some kind of grouping (I don't know),
then you'd have one column for each piece of data you could display,
and you'd add a row for each name with its stats dynamically, which is the more common approach.
I'm only suggesting that because I don't know your full table structure.

Related

Transposing query results using SQL

I am currently having a problem and need your help. In my data table, one column contains all the information I need, but it is stored as JSON, so I have to use a SQL query to parse that information.
However, the parsed information comes out as several columns, and that's not what I need. If I want to turn that information into rows instead, i.e. transpose the table result, what function should I use? Thanks in advance. My table result contains several rows and several columns.
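Without the actual schema this can only be a sketch, but in SQL Server this kind of columns-to-rows transpose is usually done with UNPIVOT; the table and column names below are assumptions:

```sql
-- Sketch: turn parsed columns back into (name, value) rows.
-- ParsedResult, Id, Attr1..Attr3 are placeholder names; all the
-- columns listed in the IN clause must share a compatible type.
SELECT Id, AttributeName, AttributeValue
FROM ParsedResult
UNPIVOT (
    AttributeValue FOR AttributeName IN (Attr1, Attr2, Attr3)
) AS up;
```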

SQL Server: Persisting computed column

I have some order related tables. Order, OrderLines, OrderWarehouse etc.
The Orders table contains some computed columns (TotalNetPrice, TotalVATPrice etc).
I need them to be persisted columns as I need to include them in some indexes.
My question:
The columns themselves call functions which take the order tables Id field and return the calculated value.
Is there a way of forcing the values to be recalculated when ANY of the Order related tables change (as they all effect the calculations)?
Edit: (...without writing triggers)
I am not sure about persisting the computed column while still having it recalculate whenever one of its inputs changes. However, I would approach this with a view: when you select from the view it acts much like a table, and its computed columns are updated on the fly. When you need to save new data, you save it to the underlying tables rather than to the view.
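A rough sketch of that view approach, assuming an OrderLines table with quantity, unit price, and VAT rate columns (all names here are guesses at the schema, not taken from the question):

```sql
-- Sketch: totals are recomputed on every read, so they are always
-- current whenever any of the underlying order tables change.
CREATE VIEW dbo.OrdersWithTotals
AS
SELECT o.Id,
       o.OrderDate,
       SUM(ol.Quantity * ol.UnitPrice)              AS TotalNetPrice,
       SUM(ol.Quantity * ol.UnitPrice * ol.VATRate) AS TotalVATPrice
FROM dbo.Orders o
JOIN dbo.OrderLines ol ON ol.OrderId = o.Id
GROUP BY o.Id, o.OrderDate;
```

The trade-off: a plain view cannot carry indexes. If the indexing requirement still stands, an indexed view might cover it, but that comes with SQL Server's indexed-view restrictions (SCHEMABINDING, COUNT_BIG(*) with GROUP BY, no outer joins, etc.).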

How can I update lookup fields in a row with randomly chosen rows from the related table in access SQL?

I have a table with 5 fields that are all the same. They each can hold a reference to a row from another table with relationships. I want to update all of these fields at the same time on a row, but with a randomly selected row from the table for each field (with no duplicates). I am not sure how in access SQL you can update a lookup/relationship field like this. Any advice is greatly appreciated.
The simple answer is that you can't, at least not the way it appears you would like to. The closest thing possible would be to create an Insert query with parameters and then feed in your 5 values using VBA. Since you will have to use VBA anyway, you may as well go the whole hog and conduct the entire process with Recordsets.
But that's not the fiddly part; (relatively speaking) selecting your source values is. What you will need to do is open a Recordset on your source table and hook it up to your random-no-duplicates logic to select your 5 record references; then you open a Recordset on your destination table and drop them into the appropriate fields.
This tutorial will get you started on Recordsets: http://www.utteraccess.com/wiki/index.php/Recordsets_for_Beginners
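Putting the two steps together, a rough VBA sketch might look like the following; the table names, the Lookup1..Lookup5 field names, and the ORDER BY Rnd(ID) random-pick trick are all assumptions, not part of the original question:

```vb
' Sketch of the Recordset approach. SourceTable, DestTable and
' Lookup1..Lookup5 are placeholder names.
Sub FillRandomLookups()
    Dim db As DAO.Database
    Dim src As DAO.Recordset, dst As DAO.Recordset
    Dim ids(1 To 5) As Long
    Dim i As Integer

    Set db = CurrentDb()

    ' 1. Collect 5 distinct random ids from the source table.
    '    Seeding Rnd with the ID column shuffles the rows; call
    '    Randomize first if you want a different order each run.
    Randomize
    Set src = db.OpenRecordset( _
        "SELECT TOP 5 ID FROM SourceTable ORDER BY Rnd(ID)", dbOpenSnapshot)
    i = 1
    Do While Not src.EOF And i <= 5
        ids(i) = src!ID
        i = i + 1
        src.MoveNext
    Loop
    src.Close

    ' 2. Drop them into the five lookup fields of the destination row.
    Set dst = db.OpenRecordset("DestTable", dbOpenDynaset)
    dst.Edit
    For i = 1 To 5
        dst.Fields("Lookup" & i).Value = ids(i)
    Next i
    dst.Update
    dst.Close
End Sub
```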

Method to create calculated column from all columns in PowerPivot model

I want to compare data in two PowerPivot tables.
Is there a method in PowerPivot to compare two tables of data?
Or alternatively ...
I've created a "key" calculated column (a concatenation of 6 columns using '&'), and now I want to create a calculated column from all the remaining data - about 100 columns.
Is there a method / function that will allow me create that calculated column?
Edit: the reason is to perform data comparison checks on data before and after a data migration. Additionally, PowerPivot was dictated as the technology of choice for this solution; one of the Red Gate comparison tools would have been much easier.
The best answer I could find was what I was originally doing.
Create a string concatenation of the 6 key columns as a CompoundKey Column
Create a string concatenation of the 100 (approx) data columns as a CombinedData Column
After initially checking that both tables had the same number of observations, I ordered the data in each table by the CompoundKey and compared table1.CompoundKey to table2.CompoundKey and table1.CombinedData to table2.CombinedData.
This enabled me to find the Keys that were different between the two datasets then additionally to find any rows of data that were different for matching key rows.
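As a sketch, the two calculated columns look something like this in DAX; the column names are placeholders, and since DAX has no function that concatenates "all columns" automatically, the full list has to be written out by hand (or generated from the table's column metadata outside PowerPivot):

```dax
// Sketch: calculated columns for the comparison. A separator that
// cannot occur in the data (here "|") avoids false matches such as
// "ab"&"c" colliding with "a"&"bc".
CompoundKey = [Key1] & "|" & [Key2] & "|" & [Key3]
            & "|" & [Key4] & "|" & [Key5] & "|" & [Key6]

CombinedData = [Col1] & "|" & [Col2] & "|" & [Col3]  // ... through [Col100]
```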

SQL Design Question

I have a table with 25 columns where 20 columns can have null values for some (30-40%) rows.
Now what is the cost of having rows with 20 null columns? Is this OK?
Or
is it a good design to have another table to store those 20 columns and add a ref to the first table?
This way I will only write to the second table when there are values.
I am using SQL server 2005. Will migrate to 2008 in future.
Only 20 columns are varchar; the rest are smallint or smalldatetime.
What I am storing:
These columns store different attributes of the row they belong to. These attributes can sometimes be null.
The table will hold roughly a billion rows.
Please comment.
You should describe the type of data you are storing. It sounds like some of those columns should be moved to another table.
For example, if you have several columns that represent multiple values of the same type of data, then I would say move them to another table. On the other hand, if you need this many columns to describe different kinds of data, then you may need to keep the table as it is.
So it kind of depends on what you are modelling.
Are there some circumstances where some of those columns are required? If so, then perhaps you should use some form of inheritance. For instance, if this were information about patients in a hospital, and there was some data that only made sense for female patients, then you could create a FemalePatients table with those columns. Those columns that must always be collected for female patients could then be declared NOT NULL in that separate table.
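A sketch of that inheritance idea, using the hospital example from above (table and column names are illustrative):

```sql
-- Sketch: columns that apply only to a subset of rows move to a
-- child table with a 1:1 foreign key back to the parent.
CREATE TABLE dbo.Patients (
    PatientId INT PRIMARY KEY,
    Name      VARCHAR(100) NOT NULL,
    Sex       CHAR(1)      NOT NULL
);

CREATE TABLE dbo.FemalePatients (
    PatientId     INT PRIMARY KEY
        REFERENCES dbo.Patients (PatientId),
    LastMammogram SMALLDATETIME NOT NULL  -- required here, so NOT NULL
);
```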
It depends on the data types (40 nullable ints will take basically the same space as 40 non-nullable ints, regardless of the values). In SQL Server the storage is fairly efficient with ordinary techniques, and in 2008 you also have the SPARSE column feature.
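For reference, SPARSE (available from SQL Server 2008) is just a column modifier; the table below is a made-up illustration:

```sql
-- Sketch: SPARSE columns store NULLs at near-zero cost, at the
-- price of extra storage overhead for the non-NULL values, so they
-- pay off when most rows are NULL in those columns.
CREATE TABLE dbo.Items (
    ItemId INT PRIMARY KEY,
    Attr1  VARCHAR(50) SPARSE NULL,
    Attr2  SMALLINT    SPARSE NULL
);
```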
If you do split the table vertically with an optional 1:1 relationship, there is a possibility of wrapping the two tables with a view and adding triggers on the view to make it updatable and hide the underlying implementation.
So there are plenty of options, many of which can be implemented after you see the data load and behavior.
Create tables based on the distinct sets of attributes you have. If some of your columns do not apply to some of your data, it makes sense to put that data in a table which doesn't have those columns. As far as possible, avoid repeating the same attribute in multiple tables. Make sure your data is in at least Boyce-Codd / 5th normal form and you won't go far wrong.