Replace specific part in my data - sql

I would like to know the most efficient and secure way to replace some numbers. My table has two columns: Nummer and Vater. The Nummer column stores article numbers. The one ending in .1 is the 'main' article and the rest are its combinations (sometimes a main article has no combinations); together they make up one concrete product with its combinations. Numbers always consist of three parts separated by dots. Vater is always the main article number, as shown below:
Example 1:
Nummer | Vater
-------------------------------
003.10TT032.1 | 003.10TT032.1
003.10TT032.2L | 003.10TT032.1
003.10TT032.UY | 003.10TT032.1
Nummer column = varchar
Vater column = varchar
I want to be able to change the first two parts (n.n).
For example, I want to send an SQL query saying that they should be replaced with 9.4R53. Based on our example, the final results should then be as follows:
Nummer | Vater
----------------------
9.4R53.1 | 9.4R53.1
9.4R53.2L | 9.4R53.1
9.4R53.UY | 9.4R53.1
Example 2:
Current:
Nummer | Vater
-------------------------------
12.90D.1 | 12.90D.1
12.90D.089 | 12.90D.1
12.90D.2 | 12.90D.1
Replace to: 829.12
Result should be:
Nummer | Vater
-------------------------------
829.12.1 | 829.12.1
829.12.089 | 829.12.1
829.12.2 | 829.12.1
I made queries as follows:
Example 1 query:
update temp SET Nummer = replace(Nummer, '003.10TT032.', '9.4R53.'),
Vater = replace(Vater, '003.10TT032.1', '9.4R53.1')
WHERE Vater = '003.10TT032.1'
Example 2 query:
update temp SET Nummer = replace(Nummer, '12.90D.', '829.12.'),
Vater = replace(Vater, '12.90D.1', '829.12.1')
WHERE Vater = '12.90D.1'
My database has thousands of records, so I want to be sure this query is safe and cannot produce wrong results. Please advise whether it can stay like this or not.
So my questions are:
Is this query fine given how my articles are stored? (I want to avoid wrong replacements that could make a mess of the production data.)
Is there a better solution?

To answer your questions: yes, your solution works, and yes, there is a way to make it bullet-proof. Making it bullet-proof and reversible is what I suggest below. You will sleep better if you know you can:
A. Answer any question from angry people who ask you "what did you do to my product table?"
B. Reverse any change you have made to this table (without restoring a backup), including other people's mistakes (like wrong instructions).
So if you really have to be 100% confident of the output, I would not run it in one go. I suggest preparing the queries in a separate table and then running them in a loop with dynamic SQL.
It is a little cumbersome, but you can do it like this: create a dedicated table with the columns you need (batch_id, insert_date, etc.) and a column named execute_query NVARCHAR(MAX).
Then load that table by running a SELECT DISTINCT of the section you need to replace in your source table (using CHARINDEX to locate the second dot: start that CHARINDEX from the position of the first dot + 1).
In other words: you prepare all the queries (like the ones in your examples) one by one and store them in a table.
If you want to be totally safe, the update queries can include a WHERE source_table_id BETWEEN n AND n' (which you build with a GROUP BY on the source table). This ensures you can track which records you updated if you have to answer questions later.
Once this is done, you run a loop which executes each line one by one.
The advantage of this approach is that it keeps track of your changes; you can also prepare the rollback query at the same time as the update query. Then you know you can safely revert any change you have ever made to your product table.
Never truncate that table: it is your audit table. If someone asks what you did to the product catalogue, you can answer any question, even five years from now.
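A minimal sketch of that setup, assuming a query table named audit_queries and the temp table from your examples (all names here are illustrative, not prescriptive):
-- Audit table holding one prepared update and its rollback per batch.
CREATE TABLE dbo.audit_queries
(
    batch_id       INT IDENTITY(1,1) PRIMARY KEY,
    insert_date    DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    executed_date  DATETIME2     NULL,
    execute_query  NVARCHAR(MAX) NOT NULL,
    rollback_query NVARCHAR(MAX) NOT NULL
);

-- Prepare one update (and its rollback) per old/new prefix pair, e.g. your Example 1.
INSERT INTO dbo.audit_queries (execute_query, rollback_query)
VALUES
(N'UPDATE temp SET Nummer = REPLACE(Nummer, ''003.10TT032.'', ''9.4R53.''),
                   Vater  = REPLACE(Vater, ''003.10TT032.1'', ''9.4R53.1'')
   WHERE Vater = ''003.10TT032.1'';',
 N'UPDATE temp SET Nummer = REPLACE(Nummer, ''9.4R53.'', ''003.10TT032.''),
                   Vater  = REPLACE(Vater, ''9.4R53.1'', ''003.10TT032.1'')
   WHERE Vater = ''9.4R53.1'';');

-- Run the prepared queries one by one and stamp each as executed.
DECLARE @id INT, @sql NVARCHAR(MAX);
WHILE EXISTS (SELECT 1 FROM dbo.audit_queries WHERE executed_date IS NULL)
BEGIN
    SELECT TOP (1) @id = batch_id, @sql = execute_query
    FROM dbo.audit_queries
    WHERE executed_date IS NULL
    ORDER BY batch_id;

    EXEC sys.sp_executesql @sql;

    UPDATE dbo.audit_queries SET executed_date = SYSDATETIME() WHERE batch_id = @id;
END;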

This is a separate answer to show how to split the product ID into separate sections. If you have to update sections of the product ID, I think it is better to store them in separate columns:
DECLARE @ProductRef TABLE
(ID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
SrcNummer VARCHAR(255), DisplayNummer VARCHAR(255), SrcVater VARCHAR(255), DisplayVater VARCHAR(255),
NummerSectionA VARCHAR(255), NummerSectionB VARCHAR(255), NummerSectionC VARCHAR(255),
VaterSectionA VARCHAR(255), VaterSectionB VARCHAR(255), VaterSectionC VARCHAR(255) )
INSERT INTO @ProductRef (SrcNummer, SrcVater) VALUES ('003.10TT032.1','003.10TT032.1')
INSERT INTO @ProductRef (SrcNummer, SrcVater) VALUES ('003.10TT032.2L','003.10TT032.1')
INSERT INTO @ProductRef (SrcNummer, SrcVater) VALUES ('003.10TT032.UY','003.10TT032.1')
DECLARE @Separator CHAR(1)
SET @Separator = '.'
;WITH SeparatorPosition (ID, SrcNummer, NumFirstSeparator, NumSecondSeparator, SrcVater, VatFirstSeparator, VatSecondSeparator)
AS ( SELECT
ID,
SrcNummer,
CHARINDEX(@Separator,SrcNummer,0) AS NumFirstSeparator,
CHARINDEX(@Separator,SrcNummer, (CHARINDEX(@Separator,SrcNummer,0))+1 ) AS NumSecondSeparator,
SrcVater,
CHARINDEX(@Separator,SrcVater,0) AS VatFirstSeparator,
CHARINDEX(@Separator,SrcVater, (CHARINDEX(@Separator,SrcVater,0))+1 ) AS VatSecondSeparator
FROM @ProductRef )
UPDATE T
SET
NummerSectionA = SUB.NummerSectionA , NummerSectionB = SUB.NummerSectionB , NummerSectionC = SUB.NummerSectionC ,
VaterSectionA = SUB.VaterSectionA , VaterSectionB = SUB.VaterSectionB , VaterSectionC = SUB.VaterSectionC
FROM @ProductRef T
JOIN
(
SELECT
t.ID,
t.SrcNummer,
SUBSTRING (t.SrcNummer,1,s.NumFirstSeparator-1) AS NummerSectionA,
SUBSTRING (t.SrcNummer,s.NumFirstSeparator+1,(s.NumSecondSeparator-s.NumFirstSeparator-1) ) AS NummerSectionB,
RIGHT (t.SrcNummer,(LEN(t.SrcNummer)-s.NumSecondSeparator)) AS NummerSectionC,
t.SrcVater,
SUBSTRING (t.SrcVater,1,s.VatFirstSeparator-1) AS VaterSectionA,
SUBSTRING (t.SrcVater,s.VatFirstSeparator+1,(s.VatSecondSeparator-s.VatFirstSeparator-1) ) AS VaterSectionB,
RIGHT (t.SrcVater,(LEN(t.SrcVater)-s.VatSecondSeparator)) AS VaterSectionC
FROM @ProductRef t
JOIN SeparatorPosition s
ON t.ID = s.ID
) SUB
ON T.ID = SUB.ID
Then you only work on the correct product ID section.
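For example, with the sections populated, the prefix change from the question becomes a plain update of the two leading sections, after which the display values can be rebuilt. A sketch against the @ProductRef variable above:
-- Change the first two sections for the affected product only.
UPDATE @ProductRef
SET NummerSectionA = '9', NummerSectionB = '4R53',
    VaterSectionA = '9', VaterSectionB = '4R53'
WHERE NummerSectionA = '003' AND NummerSectionB = '10TT032';

-- Rebuild the display values from the three sections.
UPDATE @ProductRef
SET DisplayNummer = NummerSectionA + '.' + NummerSectionB + '.' + NummerSectionC,
    DisplayVater = VaterSectionA + '.' + VaterSectionB + '.' + VaterSectionC;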

Related

How to design a SQL table where a field has many descriptions

I would like to create a product table. This product has unique part numbers. However, each part number has various number of previous part numbers, and various number of machines where the part can be used.
For example the description for part no: AA1007
Previous part no's: AA1001, AA1002, AA1004, AA1005,...
Machine brand: Bosch, Indesit, Samsung, HotPoint, Sharp,...
Machine Brand Models: Bosch A1, Bosch A2, Bosch A3, Indesit A1, Indesit A2,....
I would like to create a table for this, but I am not sure how to proceed. What I have come up with so far is to create separate tables for Previous Part No, Machine Brand, and Machine Brand Models.
Question: what is the proper way to design these tables?
There are of course various ways to design the tables. A very basic way would be:
You could create tables like below. I added the columns ValidFrom and ValidTill, to identify at which time a part was active/in use.
It depends on your data whether the date datatype is enough or whether you need datetime to be more exact.
CREATE TABLE Parts
(
ID bigint NOT NULL
,PartNo varchar(100)
,PartName varchar(100)
,ValidFrom date
,ValidTill date
)
CREATE TABLE Brands
(
ID bigint NOT NULL
,Brand varchar(100)
)
CREATE TABLE Models
(
ID bigint NOT NULL
,BrandsID bigint NOT NULL
,ModelName varchar(100)
)
CREATE TABLE ModelParts
(
ModelsID bigint NOT NULL
,PartID bigint NOT NULL
)
Fill your data like:
INSERT INTO Parts VALUES
(1,'AA1007', 'Screw HyperFuturistic', '2017-08-09', '9999-12-31'),
(1,'AA1001', 'Screw Iron', '1800-01-01', '1918-06-30'),
(1,'AA1002', 'Screw Steel', '1918-07-01', '1945-05-08'),
(1,'AA1004', 'Screw Titanium', '1945-05-09', '1983-10-05'),
(1,'AA1005', 'Screw Futurium', '1983-10-06', '2017-08-08')
INSERT INTO Brands VALUES
(1,'Bosch'),
(2,'Indesit'),
(3,'Samsung'),
(4,'HotPoint'),
(5,'Sharp')
INSERT INTO Models VALUES
(1,1,'A1'),
(2,1,'A2'),
(3,1,'A3'),
(4,2,'A1'),
(5,2,'A2')
INSERT INTO ModelParts VALUES
(1,1)
To select all parts of a certain date (in this case 2013-03-03) of the "Bosch A1":
DECLARE @ReportingDate date = '2013-03-03'
SELECT B.Brand
,M.ModelName
,P.PartNo
,P.PartName
,P.ValidFrom
,P.ValidTill
FROM Brands B
INNER JOIN Models M
ON M.BrandsID = B.ID
INNER JOIN ModelParts MP
ON MP.ModelsID = M.ID
INNER JOIN Parts P
ON P.ID = MP.PartID
WHERE B.Brand = 'Bosch'
AND M.ModelName = 'A1'
AND P.ValidFrom <= @ReportingDate
AND P.ValidTill >= @ReportingDate
Of course there are several ways to handle historization of data.
ValidFrom and ValidTill (ValidTo) is one of my favourites, as it makes historical reports easy.
Unfortunately you have to maintain the historization yourself: when inserting a new row - for example for your screw - you have to "close" the old record by setting its ValidTill column before inserting the new one. Furthermore, you have to develop logic to handle deletes...
Well, that is quite a large topic. You will find tons of information on the web.
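For illustration, the "close and insert" step could look like the sketch below (the new part number AA1008, its name, and the dates are made up):
-- "Close" the record that is currently valid for part ID 1 ...
UPDATE Parts
SET ValidTill = '2020-12-31'
WHERE ID = 1
  AND ValidTill = '9999-12-31';

-- ... then insert the new row, valid from the following day.
INSERT INTO Parts (ID, PartNo, PartName, ValidFrom, ValidTill)
VALUES (1, 'AA1008', 'Screw Hypothetical', '2021-01-01', '9999-12-31');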
For the part number table, you can consider the following suggestion:
id | part_no | time_created
1 | AA1007 | 2017-08-08
1 | AA1001 | 2017-07-01
1 | AA1002 | 2017-06-10
1 | AA1004 | 2017-03-15
1 | AA1005 | 2017-01-30
In other words, you can add a datetime column which versions each part number. Note that I added an id column here that identifies the part itself; it is invariant over time and keeps track of each part even though the part number may change.
For time independent queries, you would join this table using the id column. However, the part number might also serve as a foreign key. Off the top of my head, if you were generating an invoice from a previous date, you might look up the appropriate part number at that time, and then join out to one or more tables using that part number.
For the other tables you mentioned, I do not see a similar requirement.
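A sketch of such a date-based lookup, assuming the versioned table above is called PartNumbers and the invoice date is 2017-07-15:
-- Find the part number that was current for part id 1 on the given date.
SELECT pn.part_no
FROM PartNumbers pn
WHERE pn.id = 1
  AND pn.time_created = (SELECT MAX(p2.time_created)
                         FROM PartNumbers p2
                         WHERE p2.id = pn.id
                           AND p2.time_created <= '2017-07-15');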

SQL Server: what's the best way to query a table whose name is stored in a column?

I'd like to put together a query but avoid using a cursor in order to do so. We have PDF files stored in multiple tables. One year for each table. So we have table names such as:
"Files_2012", "Files_2013", "Files_2014", etc.
We then have a master table (called Files) that contains which table the file is stored in.
Here's the layout:
=======================================
FILES
=======================================
FileId | RecordId | FileTableName
---------------------------------------
104 | 7108162 | Files_2013
105 | 7108162 | Files_2014
106 | 7108162 | Files_2013
The yearly tables would then look like this:
=======================================
FILES_2013
=======================================
FileId | FileData (varbinary)
---------------------------------------
104 | 0x255044462D312E340A25E2E3CFD30D...
106 | 0x897444462D312E340A25E2E3CFD30D...
=======================================
FILES_2014
=======================================
FileId | FileData (varbinary)
---------------------------------------
105 | 0x556044462D312E340A25E2E3CFD30D...
My query needs to return records based on the RecordId. So, in this example, all 3 of the Files.RecordId values are the same. I would need to return the FileData column for all 3 records, like this:
=======================================
My returned records
=======================================
FileId | FileData (varbinary)
---------------------------------------
104 | 0x255044462D312E340A25E2E3CFD30D...
105 | 0x556044462D312E340A25E2E3CFD30D...
106 | 0x897444462D312E340A25E2E3CFD30D...
How can I do this? If it helps, here's my query so far, although I may be way off. I'm storing the FileTableName values into a temporary table & was hoping to work with them that way, but I'm stuck after this:
DECLARE @recordId INT
CREATE TABLE #tmpFiles (FileId int, FileTableName varchar(100), FileData varbinary(max))
SET @recordId = 7108162
INSERT INTO #tmpFiles (FileId, FileTableName)
SELECT FileId, FileTableName
FROM dbo.Files
WHERE RecordId = @recordId
UPDATE t
SET t.FileData = f.FileData
FROM #tmpFiles t
INNER JOIN Files_2013 f ON t.FileId = f.FileId
Thanks for any help you can provide.
Assuming that you know table names in advance you can use UNION ALL:
DECLARE @recordId INT = 7108162;
WITH cte(FileId, FileData, Year) AS
(
SELECT FileId , FileData, 2012 AS [Year]
FROM FILES_2012
UNION ALL
SELECT FileId , FileData, 2013
FROM FILES_2013
UNION ALL
SELECT FileId , FileData, 2014
FROM FILES_2014
)
SELECT c.*
FROM FILES f
JOIN cte c
ON f.FileId = c.FileId
WHERE f.RecordId = @recordId;
If you don't know the table names (which I doubt, since they follow a common naming pattern), you need to use dynamic SQL, but reconsider the different options first; a sketch follows the excerpt below.
Read The Curse and Blessings of Dynamic SQL by Erland Sommarskog
SELECT * FROM sales + @yymm
This is a variation of the previous case, where there is a suite of
tables that actually do describe the same entity. All tables have the
same columns, and the name includes some partitioning component,
typically year and sometimes also month. New tables are created as a
new year/month begins.
In this case, writing one stored procedure per table is not really
feasible. Not the least, because the user may want to specify a date
range for a search, so even with one procedure per table you would
still need a dynamic dispatcher.
Now, let's make this very clear: this is a flawed table design. You
should not have one sales table per month, you should have one single
sales table, and the month that appears in the table name should be
the first column of the primary key in the united sales table. But you
may be stuck with a legacy application where you cannot easily change
the table design. And, admittedly, there are situations where
partitioning makes sense. The table may be huge (say over 10 GB in
size), or you want to be able to age out old data quickly. But in such
case you should do partitioning properly.
In the following, I will look at three approaches to deal with
partitioning without using dynamic SQL.
Possible solutions:
Partitioned Tables
Views and Partitioned Views
Compatibility Views
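If you really do end up needing dynamic SQL here, a minimal sketch could look like this: it builds the UNION ALL from the distinct FileTableName values for the requested record. Treat it as an illustration, not production code:
DECLARE @recordId INT = 7108162;
DECLARE @sql NVARCHAR(MAX);

-- Build one "SELECT ... FROM <yearly table>" per distinct table name for this record.
SELECT @sql = STUFF((
    SELECT ' UNION ALL SELECT FileId, FileData FROM ' + QUOTENAME(FileTableName)
    FROM dbo.Files
    WHERE RecordId = @recordId
    GROUP BY FileTableName
    FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 11, '');

SET @sql = N'SELECT f.FileId, d.FileData
             FROM dbo.Files f
             JOIN (' + @sql + N') d ON d.FileId = f.FileId
             WHERE f.RecordId = @recordId;';

EXEC sys.sp_executesql @sql, N'@recordId INT', @recordId = @recordId;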

Get all missing values between two limits in SQL table column

I am trying to assign ID numbers to records that are being inserted into an SQL Server 2005 database table. Since these records can be deleted, I would like these records to be assigned the first available ID in the table. For example, if I have the table below, I would like the next record to be entered at ID 4 as it is the first available.
| ID | Data |
| 1 | ... |
| 2 | ... |
| 3 | ... |
| 5 | ... |
The way that I would prefer this to be done is to build up a list of available IDs via an SQL query. From there, I can do all the checks within the code of my application.
So, in summary, I would like an SQL query that retrieves all available IDs between 1 and 99999 from a specific table column.
First build a table of all N IDs.
declare @allPossibleIds table (id integer)
declare @currentId integer
select @currentId = 1
while @currentId < 1000000
begin
insert into @allPossibleIds
select @currentId
select @currentId = @currentId+1
end
Then, left join that table to your real table. You can select MIN if you want, or you could limit @allPossibleIds to hold only values less than the max ID in your table.
select a.id
from @allPossibleIds a
left outer join YourTable t
on a.id = t.Id
where t.id is null
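On SQL Server 2005 and later, the same number table can also be generated set-based instead of with a WHILE loop; a sketch using a recursive CTE, capped at 99999 as in the question (YourTable is the placeholder name used above):
;WITH Numbers AS
(
    SELECT 1 AS id
    UNION ALL
    SELECT id + 1 FROM Numbers WHERE id < 99999
)
SELECT n.id
FROM Numbers n
LEFT OUTER JOIN YourTable t ON t.Id = n.id
WHERE t.Id IS NULL
OPTION (MAXRECURSION 0);  -- lift the default limit of 100 recursions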
Don't go for identity.
Let me give you an easy option while I work on a proper one.
Store the integers from 1 to 999999 in a table, say Insert_sequence.
Then write a stored procedure for the insertion:
you can easily identify the minimum value that is present in Insert_sequence but not in
your main table, store this value in a variable, and insert the row with the ID taken from that variable.
Regards
Ashutosh Arya
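A rough sketch of the lookup described above (Insert_sequence with an id column, and MainTable as a placeholder for your actual table; the final insert is schema-specific):
DECLARE @nextId INT;

-- Smallest sequence value not yet used as an ID in the main table.
SELECT @nextId = MIN(s.id)
FROM Insert_sequence s
LEFT JOIN MainTable m ON m.ID = s.id
WHERE m.ID IS NULL;

-- INSERT INTO MainTable (ID, ...) VALUES (@nextId, ...);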
You could also loop through the keys and, when you hit a free one, select it and exit the loop.
DECLARE @intStart INT, @loop bit
SET @intStart = 1
SET @loop = 1
WHILE (@loop = 1)
BEGIN
IF NOT EXISTS(SELECT [Key] FROM [Table] Where [Key] = @intStart)
BEGIN
SELECT @intStart as 'FreeKey'
SET @loop = 0
END
SET @intStart = @intStart + 1
END
GO
From there you can use the key as you please. Setting an @intStop variable to limit the loop would be no problem.
Why do you need a table from 1 to 999999? All the information you need is in your source table. Here is a query which gives you the minimal ID to insert into a gap.
It works for all combinations:
(2,3,4,5) -> 1
(1,2,3,5) -> 4
(1,2,3,4) -> 5
select min(t1.id)+1 from
(
select id from t
union
select 0
)
t1
left join t as t2 on t1.id=t2.id-1
where t2.id is null
Many people use an auto-incrementing integer or long value for the primary key of their tables, and it is often called ID or MyEntityID or something similar. This column, since it is just an auto-incrementing integer, often has nothing to do with the data being stored itself.
These types of "primary keys" are called surrogate keys. They have no meaning. Many people like these IDs to be sequential because it is "aesthetically pleasing", but this is a waste of time and resources. The database couldn't care less about which IDs are in use and which are not.
I would highly suggest you forget trying to do this and just leave the ID column auto-incrementing. You should also create an index on your table over the (subset of) columns that uniquely identify each record (and even consider using this index as your primary key index). In the rare cases where you would need all columns to accomplish that, an auto-incrementing primary key ID is extremely useful, because it may not be performant to create an index over all columns in the table. Even so, the database engine couldn't care less about this ID (e.g. which ones are in use and which are not).
Also consider that an integer-based ID has a maximum total of 4.2 BILLION IDs. It is quite unlikely that you'll exhaust the supply of integer-based IDs in any short amount of time, which further bolsters the argument for why this sort of thing is a waste of time and resources.

Query mySql with LIKE %...% and not pull false records

I have a database that contains two fields that collect multiple values. For instance, one is colors, where one row might be "red, blue, navyblue, lightblue, orange". The other field uses numbers, we'll call it colorID, where one row might be "1, 10, 23, 110, 239."
Now, let's say I want to SELECT * FROM my_table WHERE 'colors' LIKE %blue%; That query will give me all the rows with "blue," but also rows with "navyblue" or "lightblue" that may or may not contain "blue." Likewise, with colorID, a query for WHERE 'colorID' LIKE %1% will pull up a lot more rows than I want.
What's the correct syntax to properly query the database and only return correct results? FWIW, the fields are both set as TEXT (due to the commas). Is there a better way to store the data that would make searching easier and more accurate?
You really should look at changing your DB schema. One option would be to create a table that holds colours, with an INT as the primary key. You could then create a pivot (junction) table to link my_table to colours:
CREATE TABLE `colours` (
`id` INT NOT NULL ,
`colour` VARCHAR( 255 ) NOT NULL ,
PRIMARY KEY ( `id` )
) ENGINE = MYISAM
CREATE TABLE `mytable_to_colours` (
`mytable_id` INT NOT NULL ,
`colour_id` INT NOT NULL
) ENGINE = MYISAM
Your query could then look like this, where '1' is the ID of blue (and more likely how you would be referencing it):
SELECT *
FROM my_table
JOIN mytable_to_colours ON (my_table.id = mytable_to_colours.mytable_id)
WHERE colour_id = '1'
If you want to search in your existing table you can use the following query:
SELECT *
FROM my_table
WHERE colors LIKE 'blue,%'
OR colors LIKE '%,blue'
OR colors LIKE '%,blue,%'
OR colors = 'blue'
However, it is much better to create the colour and number tables and model these as many-to-many relationships.
EDITED: Just like @seengee has written.
MySQL has a REGEXP function that will allow you to match something like "[^a-z]blue|^blue". But you should really consider not doing it this way at all. A single table containing one row for each color (with multiple rows groupable by a common ID) would be far more scalable.
The standard answer would be to normalize the data by putting a colorSelID (or whatever) in this table, then having another table with two columns, mapping from 'colorSelID' to the individual colorIDs, so your data above would turn into something like:
other columns | colorSelId
other data | 1
Then in the colors table, you'd have:
colorSelId | ColorId
1 | 1
1 | 10
1 | 23
1 | 110
1 | 239
Then, when you want to find all the items that match colorID 10, you just search on colorID, and join that ColorSelId back to your main table to get all the items with a colorID of 10:
select *
from
main_table join color_table
on
main_table.ColorSelId=color_table.ColorSelId
where
color_table.colorId = 10
Edit: note that this will also probably speed up your searches a lot, at least assuming you index on ColorId in the color table, and ColorSelId in the main table. A search on '%x%' will (almost?) always do a full table scan, whereas this will use the index.
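For example, the supporting indexes could be created like this (index names are placeholders matching the sketch above):
CREATE INDEX ix_color_table_colorid ON color_table (colorId, ColorSelId);
CREATE INDEX ix_main_table_colorselid ON main_table (ColorSelId);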
Perhaps this will help you:
SELECT * FROM table WHERE column REGEXP "[X]"; -- where X is a number; returns all rows containing X in your column
SELECT * FROM table WHERE column REGEXP "^[X]"; -- where X is a number; returns all rows containing X as the first number in your column
Good luck!
None of the solutions suggested so far seem likely to work, assuming I understand your question. Short of splitting the comma-delimited string into a table and joining, you can do this (using 'blue' as an example):
WHERE CONCAT(', ', myTable.ValueList, ',') LIKE '%, blue,%'
If you aren't meticulous about spaces after commas, you would need to replace spaces in ValueList with empty strings as part of this code (and remove the space in ', ').

Table Variables: Is There a Cleaner Way?

I have a table that stores various clients I have done work for, separated by Government and Commercial columns. The problem is that there could be an uneven number of clients in either column. When I do a SELECT, I end up with NULL values in irregular places because I can't think of a clean way to order the result set. For example, the following is possible with a straight SELECT (no ORDER BY):
Government | Commercial
DOD | IBM
DOS | Microsoft
| Novell
DVA | Oracle
As you can see, there is a NULL value in the Government column because there are more commercial clients than government. This could change at any time and there's no guarantee which column will have more values. To eliminate rendering with a blank value in the middle of a result set, I decided to perform two separate SELECTs into table variables (one for the Government clients and another for the Commercial) and then SELECT one final time, joining them back together:
DECLARE @Government TABLE
(
Row int,
PortfolioHistoryId uniqueidentifier,
Government varchar(40),
GovernmentPortfolioContentId uniqueidentifier
)
DECLARE @Commercial TABLE
(
Row int,
PortfolioHistoryId uniqueidentifier,
Commercial varchar(40),
CommercialPortfolioContentId uniqueidentifier
)
INSERT INTO @Government
SELECT
(ROW_NUMBER() OVER (ORDER BY Government)) AS Row,
PortfolioHistoryId,
Government,
GovernmentPortfolioContentId
FROM dbo.PortfolioHistory
WHERE Government IS NOT NULL
INSERT INTO @Commercial
SELECT
(ROW_NUMBER() OVER (ORDER BY Commercial)) AS Row,
PortfolioHistoryId,
Commercial,
CommercialPortfolioContentId
FROM dbo.PortfolioHistory
WHERE Commercial IS NOT NULL
SELECT
g.Government,
c.Commercial,
g.GovernmentPortfolioContentId,
c.CommercialPortfolioContentId
FROM @Government AS g
FULL OUTER JOIN @Commercial AS c ON c.Row = g.Row
I'm not necessarily unhappy with this query (maybe I should be), but is there a cleaner way to implement this?
From a design point of view, I do not see why you need two tables, or even two columns Government/Commercial. You could just have a single Customer table with a classifier column for OrganizationType. For example:
CREATE TABLE Customer (
ID int
,Name varchar(50)
,OrganizationType char(1)
,Phone varchar(12)
)
For OrganizationType you could use: G=gov, B=business/commercial, C=charity. When querying the table use OrganizationType in ORDER BY and GROUP BY.
If there are some specific columns for gov and business clients, then keep all common columns in the Customer table and move specific columns in separate sub-type Government and Commercial tables as in this example. In the example, book and magazine are types of publication -- in your example government and commercial are types of customer.
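For the original two-column display, the single table can still be pivoted at query time. A sketch against the Customer table above, pairing rows by a ROW_NUMBER per organization type:
;WITH Numbered AS
(
    SELECT Name,
           OrganizationType,
           ROW_NUMBER() OVER (PARTITION BY OrganizationType ORDER BY Name) AS Row
    FROM Customer
    WHERE OrganizationType IN ('G', 'B')
)
SELECT g.Name AS Government,
       b.Name AS Commercial
FROM (SELECT Name, Row FROM Numbered WHERE OrganizationType = 'G') AS g
FULL OUTER JOIN (SELECT Name, Row FROM Numbered WHERE OrganizationType = 'B') AS b
    ON g.Row = b.Row
ORDER BY COALESCE(g.Row, b.Row);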
You need a governID which connects the Government table to the Commercial table. When inserting values into the Commercial table, you must also insert the governID of the government row you want the commercial row to belong to.
Government:
governID | NAME
1        | BIR

Commercial:
commID | com_govID | Name
1      | 1         | Netopia
2      | 1         | SM Mall
So if you query it:
SELECT
g.NAME AS Government,
c.Name AS Commercial
FROM Government AS g
INNER JOIN Commercial AS c ON c.com_govID = g.governID
SELECT
Government,
Commercial,
GovernmentPortfolioContentId,
CommercialPortfolioContentId
FROM dbo.PortfolioHistory
ORDER BY
CASE WHEN Government is null THEN 2 ELSE 1 END,
CASE WHEN Commercial is null THEN 2 ELSE 1 END,
Government,
Commercial
Aside: In your variable tables, the "Row" column should be declared as PRIMARY KEY - to gain the advantages of clustered indexing.
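For example, the first table variable from the question could be declared like this (a sketch; the column list is unchanged):
DECLARE @Government TABLE
(
Row int PRIMARY KEY CLUSTERED,
PortfolioHistoryId uniqueidentifier,
Government varchar(40),
GovernmentPortfolioContentId uniqueidentifier
)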
Generally speaking, rendering issues should be handled in the application layer. Do a straight SELECT, then process it into the format you want.
So:
SELECT DISTINCT client, client_type FROM clients table;
Then in your application layer (I'll use quick and dirty PHP):
$gov = array();
$com = array();
while($row = $result->fetch_assoc()) {
    if($row['client_type']=='gov') {
        $gov[] = $row['client'];
    } else {
        $com[] = $row['client'];
    }
}
$limit = (count($gov) > count($com)) ? count($gov) : count($com);
echo '<table><tr><th>Gov</th><th>Com</th></tr>';
for($i = 0; $i < $limit; $i++) {
    $g = isset($gov[$i]) ? $gov[$i] : '';
    $c = isset($com[$i]) ? $com[$i] : '';
    echo "<tr><td>{$g}</td><td>{$c}</td></tr>\n";
}
echo '</table>';