How to delete a row in Report Viewer? - vb.net

I am working with Report Viewer 2016 in VB.NET. I have a tablix with 10 columns and 21 rows, and the rows are filled with information from the database. Here is an example of a supermarket invoice report:
PRODUCT | PRICE | QUANTITY | TOTAL
rice    | $0    | 0        | $0
noodles | $5    | 2        | $10
sugar   | $3    | 2        | $6
milk    | $2    | 1        | $2
...
TOTAL RECORDS COUNTED: 3
The problem is the following: the 21 rows represent every product the supermarket carries, but some of them arrive empty because the customer does not buy all 21 products. I hide the empty rows using expressions, but the generated report still contains a blank space half a page long where the hidden rows used to be. Is there any way to delete a row with an expression, or to remove the blank space from the report?
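In SSRS a row cannot be deleted by an expression, but hiding the entire static row usually collapses it instead of leaving whitespace: select the row handle in the designer, open Row Visibility, and choose "Show or hide based on an expression". A minimal sketch, assuming a QUANTITY field of 0 or Nothing marks an unused row (the field name is an assumption):

=IIF(IsNothing(Fields!QUANTITY.Value) OrElse Fields!QUANTITY.Value = 0, True, False)

If only the individual textboxes are hidden, the row keeps its height, which is what produces the half page of blank space. Another option, if the rows come from a dataset, is to filter the empty rows out entirely (Tablix Properties > Filters) so they never render.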

Find current data set using two SQL tables storing separately historical insertions and deletions

Problem
I need to do daily syncs of our latest internal data to an external audit database that does not offer an update interface. In order to update some records, I need to first generate and send a deletion file to remove those records, and then follow with an insertion file containing the same records with their updated values.
An important detail is that all of the records in deletion files must match the external records verbatim, in order to be deleted.
Proposed approach
Currently I use two separate SQL tables to version control what I have inserted/deleted.
Let's say that right now the inserted_records table looks like this:
id | file_version | contract_id | customer_name | start_year
9  | 6            | 1           | Alice         | 2015
10 | 6            | 2           | Bob           | 2015
11 | 6            | 3           | Charlie       | 2015
Accompanied by a separate and empty deleted_records table with identical columns.
Now, if I want to
change the customer_name from Alice to Dave on line id 9
change the start_year for Bob from 2015 to 2020 on line id 10
Two new lines in inserted_records would be generated, lines 12 and 13, in turn creating a new insertion file 7.
id | file_version | contract_id | customer_name | start_year
9  | 6            | 1           | Alice         | 2015
10 | 6            | 2           | Bob           | 2015
11 | 6            | 3           | Charlie       | 2015
12 | 7            | 1           | Dave          | 2015
13 | 7            | 2           | Bob           | 2020
The original column values of lines 9 and 10 are then copied into the previously empty deleted_records table, in turn creating a new deletion file 1.
id | file_version | contract_id | customer_name | start_year
1  | 1            | 1           | Alice         | 2015
2  | 1            | 2           | Bob           | 2015
Now, if I were to send in the deletion file 1 first followed by the insertion file 7, I would get the result that I wanted.
Question
How can I query the current set of records, taking into account all insertions and deletions that have occurred? Assume that every record in deleted_records has a match in inserted_records, and that when there are multiple matches, the records with smaller file version numbers are deleted first.
I started by writing a query against inserted_records for the latest records grouped by contract_id:
select top 1 with ties *
from inserted_records
order by row_number() over (partition by contract_id order by file_version desc)
This gives me lines 11, 12, and 13, which is what I want in this particular example. But if we also wanted to delete line 11 with Charlie, the query would no longer work, as it does not take deleted_records into account, and I have no idea how to do that in SQL.
Furthermore, my gut tells me that this approach isn't solid, since there are two separate moving parts; perhaps there is a better approach to solve this?
How can I query the current set of records
I don't understand your question. Every SQL query is against the current set of records, if by that you mean the data currently in the database.
I do see a couple of problems.
Unless the table you're deleting from has a key defined, even an exact match on every column risks deleting more than one row.
You're performing an ad hoc update without UPDATE's transaction guarantee. I suppose the table you're updating is otherwise idle, and as a practical matter you don't have to worry about someone else (or you) re-inserting the deleted rows before your inserts arrive. But it's a problem waiting to happen.
If what you're trying to do is produce the set of rows that will be the result of a series of inserts and deletions, you haven't provided enough information to say how that could be done, or even if it's possible. There would have to be some way to uniquely identify rows, so that deletions and insertions can be associated. (They don't match on all columns, after all.) And you'd need some indication of order of operation, because it matters whether INSERT follows or precedes DELETE.
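That said, under the question's own stated assumptions (the full column set identifies a record, and deletions always cancel the smallest matching file_version first), a hedged SQL Server sketch is possible: rank the matching rows in both tables by file_version, so the n-th deletion cancels the n-th insertion, and keep the inserted rows with no counterpart.

-- A sketch, assuming a record is identified by (contract_id, customer_name, start_year)
with ranked_ins as (
    select *, row_number() over (partition by contract_id, customer_name, start_year
                                 order by file_version) as rn
    from inserted_records
),
ranked_del as (
    select *, row_number() over (partition by contract_id, customer_name, start_year
                                 order by file_version) as rn
    from deleted_records
)
select i.id, i.file_version, i.contract_id, i.customer_name, i.start_year
from ranked_ins i
left join ranked_del d
    on  d.contract_id   = i.contract_id
    and d.customer_name = i.customer_name
    and d.start_year    = i.start_year
    and d.rn            = i.rn   -- the n-th deletion cancels the n-th insertion
where d.id is null;              -- keep inserted rows that were never deleted

On the sample data this returns lines 11, 12, and 13; deleting Charlie's line 11 as well would add a matching row to deleted_records and drop it from the result.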

Using awk to export a mysql table to .csv

I've run into an issue while trying to understand awk for a class. We are supposed to take a table full of names and some other information and separate each field with "," to make it easier to export to .csv. So far, what I have removes all extra characters, including the initial "," tied to the first field. I'm down to two last issues with my script. The first is adding the "," to divide each field; I know this seems basic, but I'm having a hard time wrapping my head around it. The second is that $2 is occasionally followed by an extra initial standing in for a middle name, and I have no idea how to add that field to the lines that do not have an initial.
The table is the following:
+---------------------------------+------------+------+----------+
| Name | NumCourses | Year | Semester |
+---------------------------------+------------+------+----------+
| ABDULHADI, ASHRAF M | 2 | 1990 | 3 |
| ACHANTA, BALA | 2 | 1995 | 3 |
| ACHANTA, BALA | 2 | 1996 | 3 |
+---------------------------------+------------+------+----------+
My Code:
awk 'NR==3, N==6{gsub(","," "); gsub(/\|/, " "); gsub(/\+/," "); gsub(/\-/," "); print $1, $2, $3, $4, $5, $6}' awktest.txt
Output:
ABDULHADI ASHRAF M 2 1990 3
ACHANTA BALA 2 1995 3
ACHANTA BALA 2 1996 3
P.S. It should be noted that we were instructed to rip out the headers.
Expected Output:
ABDULHADI,ASHRAF,M,2,1990,3
ACHANTA,BALA,N/A,2,1995,3
ACHANTA,BALA,N/A,2,1996,3
Your approach to first remove punctuation characters is good. Keeping it, you could write:
awk -v OFS="," '{gsub(","," ");gsub(/\|/," ")}{$1=$1}NF==5{$2=$2",N/A"}NF>4' awktest.txt
Let's unwind it and understand what is happening:
awk -v OFS="," ' #Output field separator is set to comma
{
gsub(","," ") #Substitute any comma by space
gsub(/\|/," ") #Substitute any pipe by space
}
{$1=$1} #Force line rebuild so that OFS is used to separate output fields
NF==5{$2=$2",N/A"} #If there are only 5 fields, middle-name is missing, so append ",N/A" to 2nd field
NF>4 #Print resulting lines that have at least 5 fields (this gets rid of headers)
' awktest.txt
Output:
ABDULHADI,ASHRAF,M,2,1990,3
ACHANTA,BALA,N/A,2,1995,3
ACHANTA,BALA,N/A,2,1996,3
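If the goal is an actual .csv file, redirect the output (awktest.csv is just an example name):

awk -v OFS="," '{gsub(","," ");gsub(/\|/," ")}{$1=$1}NF==5{$2=$2",N/A"}NF>4' awktest.txt > awktest.csv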
Feel free to request further clarification if you need it.

Oracle SQL - Give each row in a result set a unique identifier depending on a value in a column

I have a result set, returned from a view, that lists items and the country they originated from. An example would be:
ID | Description | Country_Name
---+-------------+---------------
1  | Item 1      | United Kingdom
2  | Item 2      | France
3  | Item 3      | United Kingdom
4  | Item 4      | France
5  | Item 5      | France
6  | Item 6      | Germany
I want to query this data, returning all columns (there are more columns than ID, Description, and Country_Name; I've omitted them for brevity's sake) with an extra one added on, giving a unique value that depends on what is inside the Country_Name field:
ID | Description | Country_Name   | Country_Relation
---+-------------+----------------+-----------------
1  | Item 1      | United Kingdom | 1
2  | Item 2      | France         | 2
3  | Item 3      | United Kingdom | 1
4  | Item 4      | France         | 2
5  | Item 5      | France         | 2
6  | Item 6      | Germany        | 3
The reason behind this is that we're using a Jasper report and need to show these items with an asterisk next to them (or in this case a number) pointing to some details about the country. So the report would look like this:
Desc. Country
Item 1 United Kingdom(1)
Item 2 France(2)
Item 3 United Kingdom(1)
Item 4 France(2)
Item 5 France(2)
Item 6 Germany(3)
And then further down the report would be a field stating:
1: Here are some details about the UK
2: Here are some details about France
3: Here are some details about Germany
I'm having difficulty generating a unique number to go alongside each country: it should start at one each time the report is run, increment when a new country is found, and keep track of which number was assigned to which country. I would hazard a guess at using temporary tables to do such a thing, but that feels like overkill.
Question
Is this kind of thing possible in Oracle SQL or am I attempting to do something that is rather large and cumbersome?
Are there better ways of doing this inside of a Jasper report?
At the moment, I'm looking at just putting the subtext underneath each individual item and repeating the same information several times, rather than aggregating the items and showing the subtext once, just to avoid this situation. It's not clean, but it saves this rather odd hassle.
You are looking for dense_rank():
select t.*, dense_rank() over (order by country_name) as country_relation
from t;
I don't know if this can be done inside Jasper reports. However, it is easy enough to set up a view to handle this in Oracle.
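One caveat: ordering by country_name numbers the countries alphabetically (France = 1, Germany = 2, United Kingdom = 3). If the numbers must follow the order of first appearance, as in the example output above, a variant along these lines should work (still assuming the view is called t):

select s.*,
       dense_rank() over (order by first_id) as country_relation
from (
    select t.*,
           min(id) over (partition by country_name) as first_id  -- first ID seen per country
    from t
) s;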

Match equal values in column A between CSV files and copy the corresponding values from the other file in Excel

Here is the example:
File1.csv (already created)
Column A | Column B
1        | 1
6        | 3
12       | 4
18       | 5
File2.csv (already created)
Column A | Column C
2        | 2
6        | 4
12       | 5
File3.csv (what I want to create)
Column A | Column B | Column C
6        | 3        | 4
12       | 4        | 5
Observation: those CSV files are enormous (~1 million rows each), but I think there is no problem with that (I hope so), and they have exactly two columns each, like the example. VLOOKUP has not worked for me and I can't see the solution!
Observation 2: I can put the columns from the different files in the same sheet, but I still need to match them on the first column either way.
Put all the records in the same sheet, so that Excel is not forced to open a file every time the function is evaluated.
Then, in a third table (range $g$1:$h$1000), use this function to fetch, for each Column A value of table 2 (range $d$1:$e$1000), the matching Column B value from table 1 (range $a$1:$b$1000):
=VLOOKUP(d1;$a$1:$b$1000;2;FALSE)
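To build something like File3 on a single sheet, here is a minimal sketch, assuming File1's two columns are pasted into $a$1:$b$1000, File2's into $d$1:$e$1000, and the third table starts at $g$1 (all of these ranges are assumptions):

G1: =D1                                  (the ID from File2)
H1: =VLOOKUP($G1;$A$1:$B$1000;2;FALSE)   (Column B from File1; #N/A when the ID is not in File1)
I1: =E1                                  (Column C from File2)

Fill the three formulas down and filter out the rows where column H shows #N/A; what remains are the IDs present in both files, which is File3.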

Ransack search - select rows whose sum adds up to a given value

I'm using Ransack search with Ruby on Rails and trying to output between 1 and 6 random rows whose time values add up to a value specified by the search.
For example, search for rows whose time values add up to 40. In this case, ids 12 and 14 would be returned. Any combination of between 1 and 6 rows can be randomly output.
If a combination of 3 rows meets the criteria, then 3 rows should be output; likewise for 1, 2, 4, 5, or 6. If no single row or combination can be found, then the output should be nil.
id | title | time
----+-------------------------+-----------
26 | example | 10
27 | example | 26
14 | example | 20
28 | example | 50
12 | example | 20
20 | example | 6
Note - I'm not sure if Ransack search is the best tool to perform this type of query.
Thanks in advance
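Ransack builds WHERE-clause filters, so a subset-sum requirement like this is easier to handle in plain Ruby after fetching the candidate rows. A brute-force sketch, assuming an ActiveRecord model Item with a time column (both names are assumptions, and enumerating combinations gets expensive on large tables):

# Returns a random combination of 1..max_size rows whose time values
# sum to the target, or nil when no such combination exists.
def random_combination_summing_to(target, max_size: 6)
  items = Item.all.to_a.shuffle          # shuffle so ties are broken randomly
  (1..max_size).each do |size|
    items.combination(size).each do |combo|
      return combo if combo.sum(&:time) == target
    end
  end
  nil
end

random_combination_summing_to(40)        # e.g. the rows with ids 12 and 14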