Save All Results to Excel - sql

I have run a query against a Sybase db from Eclipse. I need to eliminate duplicate entries, but the results have mixed types - INT and TEXT - and Sybase will not do DISTINCT on TEXT fields. When I Save All results and paste them into Excel, some of the TEXT field bleeds into the INT field columns, which makes Excel's Remove Duplicates tough to use.
I am thinking I might wrap my query in a view, add a temp table, select the distinct INT column values from the view, and then query the view again, this time including the TEXT values. Then when I export the data I would save it into Word instead. It would look like this:
CREATE VIEW stuff AS
SELECT id, text
FROM tableA, TableB
WHERE (various joins here...)

CREATE TABLE #id_values
(alt_id CHAR(8) NULL)

INSERT INTO #id_values
SELECT DISTINCT id
FROM stuff

SELECT a.id, a.text
FROM stuff a
WHERE EXISTS (SELECT 1 FROM #id_values b WHERE b.alt_id = a.id)
If there were a way to format the data better in Excel, I would not have to do all this manipulation on the db side. I have tried different formats in the Excel import dialog (tab-delimited, space-delimited) with the same end result.
Additional information: I converted the TEXT to VARCHAR, but I now need a new column which sometimes has up to 5 entries per id (ID -> TYPE is 1-to-many). The DISTINCT worked on the original list, but now I need to figure out how to show all the new column's values in one row with each id. The new column is CHAR(4).
Now my original select looks like this:
SELECT DISTINCT id, CONVERT(VARCHAR(8192), text), type_cd
FROM TableA, TableB
...etc
And I get multiple rows again, one for each type_cd attached to an id. I also realized I don't think I need the 'b.' alias in front of *alt_id*.
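One way I could flatten those rows - a sketch only, assuming I first stage the distinct id/type_cd pairs into a temp table #types (the name and the 5-column limit are my assumptions) - is a counting-subquery pivot:
-- #types(id, type_cd) is an assumed staging table holding the
-- DISTINCT id/type_cd pairs produced by the query above.
SELECT id,
       MAX(CASE WHEN rn = 1 THEN type_cd END) AS type_cd_1,
       MAX(CASE WHEN rn = 2 THEN type_cd END) AS type_cd_2,
       MAX(CASE WHEN rn = 3 THEN type_cd END) AS type_cd_3,
       MAX(CASE WHEN rn = 4 THEN type_cd END) AS type_cd_4,
       MAX(CASE WHEN rn = 5 THEN type_cd END) AS type_cd_5
FROM (SELECT a.id, a.type_cd,
             (SELECT COUNT(*) FROM #types b
              WHERE b.id = a.id AND b.type_cd <= a.type_cd) AS rn
      FROM #types a) ranked
GROUP BY id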
Also, regardless of how I format the query (TEXT or VARCHAR), Excel continues to bleed the text into the id columns. Maybe this is not a SQL problem but rather one with Excel, or maybe Eclipse.

You are limited in how much data you can paste into an Excel cell anyway, so convert your text to a varchar:
SELECT distinct id, cast(text as varchar(255)) as text
FROM tableA, TableB
WHERE (various joins here...)
I'm using 255 because that is the default limit on what Excel shows. You can have longer values in Excel cells, but this may be sufficient for your purposes. If not, just make the value bigger.
Also, as a comment, you should be using the proper syntax for joins, which uses the "on" clause (or "cross join" in place of a comma).
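For example, the query above with explicit join syntax (the ON condition is a placeholder - substitute your real keys):
SELECT distinct a.id, cast(a.text as varchar(255)) as text
FROM tableA a
INNER JOIN TableB b
    ON a.id = b.id   -- placeholder join condition; use your actual keys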

Related

Column changing numeric values to scientific notation by default

In SQL Server, I am trying to insert data selected from one table into another. The code reads as:
INSERT INTO TABLE2 (id, document_id)
SELECT id, document_id FROM TABLE1
These two tables are basically identical. The document_id field is nvarchar(50), since we will occasionally get values with a letter in them.
How can I get these to insert as numeric values instead of scientific notation?
Thank you!
I assume the columns in table1 are of some varchar variant and hold the numbers in scientific notation. You can try to convert them to real (and if necessary convert to some other numeric or varchar variant from there).
INSERT INTO table2 (id, document_id)
SELECT convert(real, id),
       convert(real, document_id)
FROM table1;
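If the document ids are long whole numbers, real's limited precision can itself mangle them; a variant sketch (it assumes the values are integers) goes through float, which parses scientific notation, and lands on a wide decimal instead:
-- Assumes whole-number values; float parses the scientific
-- notation, decimal(38, 0) keeps more digits than real would.
INSERT INTO table2 (id, document_id)
SELECT convert(decimal(38, 0), convert(float, id)),
       convert(decimal(38, 0), convert(float, document_id))
FROM table1;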
This seems to be an Excel issue, where Excel treats the document id as a number instead of as text. Excel's number formatting can potentially destroy information, as the same number can be represented in different ways. E.g. what was the original format of 500.2? Was it 0500.200, 000000000500.20 or something else? Also, Excel might even drop digits, e.g. "5023423423423450" is displayed as "5.02342E+15". There is no way to restore this information in SQL.
You must handle this in Excel, either by entering the document id with a leading apostrophe (') to tell Excel not to interpret it, or by formatting the document id fields as Text before entering them.

SQLite WHERE-Clause for every column?

Does SQLite offer a way to search every column of a table for a searchkey?
SELECT * FROM table WHERE id LIKE ...
selects all rows where ... was found in the column id. But instead of only searching in the column id, I want to search every column for the search string. I believe this does not work:
SELECT * FROM table WHERE * LIKE ...
Is that possible? Or what would be the next easiest way?
I use Python 3 to query the SQLite database. Should I instead go the route of searching through the returned rows after the query has executed?
A simple trick you can do is to concatenate the columns and search the result:
SELECT *
FROM table
WHERE (col1 || col2 || col3 || col4) LIKE '%something%'
This will select the record if any of these 4 columns contains the word "something". (Strictly speaking, it can also match when the string spans the boundary between two concatenated columns.)
No; you would have to list or concatenate every column in the query, or reorganize your database so that you have fewer columns.
SQLite has full-text search tables where you can search all columns at once, but such tables do not work efficiently with any other queries.
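A minimal sketch of such a full-text table (the names are illustrative, and it requires SQLite built with the FTS5 extension):
-- Names are illustrative; requires the FTS5 extension.
CREATE VIRTUAL TABLE docs_fts USING fts5(col1, col2, col3, col4);
INSERT INTO docs_fts (col1, col2, col3, col4)
    SELECT col1, col2, col3, col4 FROM docs;
SELECT * FROM docs_fts WHERE docs_fts MATCH 'something';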
I could not comment on #raging-bull's answer, so I had to write a new one. My problem was that I have columns with null values, and I got no results because the concatenated search string was null.
Using coalesce I could solve that problem: SQLite takes the column content, or an empty string ('') if the column is null, so there is always an actual search string available.
SELECT *
FROM table
WHERE (coalesce(col1,'') || coalesce(col2,'') || coalesce(col3,'') || coalesce(col4,'')) LIKE '%something%'
I'm not quite sure if I understood your question.
If you want the whole row returned when id=searchkey, then:
select * from table where id=searchkey;
If you want to have specific columns from the row with the correct searchkey:
select col1, col2, col3 from table where id=searchkey;
If you want to search multiple columns for the searchkey: first narrow down which columns it could be found in - you don't want to search the whole table! Then:
select * from table where col1=searchkey or col2=searchkey or col3=searchkey;

Find out if a value exists in a column with a large set of input values

What is the most efficient (and simplest) way to find out whether the cells of a specific column of a table contain one of a given set of values?
To give you some background, I have a list of 1000 ID numbers. They might or might not exist in the "FileName" column of a table "ProcessedFiles" as part of the filename.
Basically, I need to check which of these 1000 tasks have been processed (i.e. they exist in the table).
The thing that I came up with seems very inefficient:
SELECT * FROM ProcessedFiles
WHERE FileName LIKE '%54332423%'
OR FileName LIKE '%234432%'
OR FileName LIKE '%342342%'
...
etc
Thanks for the help!
You could create a temporary table and insert all the Ids in a column. Then you could cross join with the ProcessedFiles table and check for the id in the name with a like:
CREATE TABLE #ids (Id varchar(20))
-- populate #ids with the 1000 id numbers, then:

SELECT pf.*
FROM ProcessedFiles pf, #ids t
WHERE pf.FileName LIKE '%' + t.Id + '%'
I tested the above and it worked on SQL Server.

SQL Query: Modify records based on a secondary table

I have two tables in a PostgreSQL database.
The first table contains an ID and a text field of up to 200 characters; the second is a data definition table with one column that contains smileys or acronyms and a second column that translates them into plain readable English.
The number of records in table one is about 1200 and in table two about 300.
I wish to write a SQL statement which will convert any text speak in column one of table one into normal readable language based on the definitions in table two.
So for example, if the value in table one reads: Finally Finished :)
the transformed text would be something like: Finally Finished Smiles or smiling
where the definition is pulled from the second table.
Note the smiley could be anywhere in the text in column one and could be any one of the three hundred entries in table two.
Does anyone know if this is possible?
Yes. Do you want to do it entirely in SQL, or are you writing a brief bit of code to do this? I'm not entirely sure how to do it all in SQL, but I would consider something like the pseudocode below:
for each row in (SELECT textToTranslate FROM Table_1):
    oldText = row.textToTranslate
    modifiedText = oldText
    for each word in split(oldText, some delimiter):
        queryResult = SELECT readable FROM Table_2 WHERE pretranslate = word
        if queryResult is not null:
            modifiedText = modifiedText.replace(word, queryResult)
    UPDATE Table_1 SET translatedText = modifiedText WHERE textToTranslate = oldText
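One way to attempt it entirely in SQL in PostgreSQL - a sketch only, assuming tables named messages(id, txt) and definitions(symbol, meaning) (names invented for illustration) and that smileys are separated from the surrounding text by spaces - is to split each message into words, swap any word that has a definition, and stitch the words back together in order:
-- messages(id, txt) and definitions(symbol, meaning) are assumed names.
SELECT m.id,
       string_agg(coalesce(d.meaning, w.word), ' ' ORDER BY w.pos) AS translated
FROM messages m
CROSS JOIN LATERAL unnest(string_to_array(m.txt, ' '))
     WITH ORDINALITY AS w(word, pos)           -- one row per word, kept in order
LEFT JOIN definitions d ON d.symbol = w.word   -- match smileys/acronyms
GROUP BY m.id;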

Forcing a datatype in MS Access make table query

I have a query in MS Access which creates a table from two subqueries. For two of the columns being created, I'm dividing a column from the first subquery into a column from the second subquery.
The datatype of the first column is a double; the datatype of the second column is decimal, with a scale of 2, but I want the second column to be a double as well.
Is there a way to force the datatype when creating a table through a standard make-table Access query?
One way to do it is to explicitly create the table before putting anything into it.
Your current statement is probably like this:
SELECT Persons.LastName, Orders.OrderNo
INTO Persons_Order_Backup
FROM Persons
INNER JOIN Orders
ON Persons.P_Id=Orders.P_Id
WHERE FirstName = 'Alistair'
But you can also do this:
----Create NewTable
CREATE TABLE NewTable(FirstName VARCHAR(100), LastName VARCHAR(100), Total DOUBLE)
----INSERT INTO NewTable using SELECT
INSERT INTO NewTable (FirstName, LastName, Total)
SELECT p.FirstName, p.LastName, o.Total
FROM Persons p
INNER JOIN Orders o
ON p.P_Id = o.P_Id
WHERE p.FirstName = 'Alistair'
This way you have total control over the column types. You can always drop the table later if you need to recreate it.
You can use the cast-to-FLOAT function CDBL() but, somewhat bizarrely, the Access Database Engine cannot handle a NULL value there, so you must handle this yourself, e.g.
SELECT first_column,
IIF(second_column IS NULL, NULL, CDBL(second_column))
AS second_column_as_float
INTO Table666
FROM MyTest;
...but you're going to need ALTER TABLE to add your keys, constraints, etc. Better to simply CREATE TABLE first, then use INSERT INTO...SELECT to populate it.
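Putting that together with the example above - a sketch only; the TEXT(100) width is an assumption about first_column:
----Create the target table with the types you want (TEXT(100) is assumed)
CREATE TABLE Table666 (first_column TEXT(100), second_column_as_float DOUBLE)
----Populate it, converting and NULL-guarding as above
INSERT INTO Table666 (first_column, second_column_as_float)
SELECT first_column,
       IIF(second_column IS NULL, NULL, CDBL(second_column))
FROM MyTest;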
You can use CDbl around the columns.
An easy way to do this is to create an empty table with the correct field types and then run an append query; Access will automatically convert the data to the destination field types.
I had a similar situation, but I had a make-table query creating a field with NUMERIC datatype that I wanted to be short text.
What I did (and I got the idea from Stack) is to create the table with the field in question as Short Text, and at the same time build a delete query to scrub the records. I think it's funny that a DELETE query in Access doesn't delete the table, just the records in it - I guess you have to use a DROP TABLE statement for that, to purge a table...
Then I converted my make-table query to an append query, which I'd never done before... and I just added running the DELETE query to my process.
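In Access SQL terms, that scrub-then-append pattern is just the following (table and field names invented for illustration):
----Scrub the staging table (removes the rows, keeps the table)
DELETE FROM StagingTable;
----Re-populate it from the source query
INSERT INTO StagingTable (ID, SomeCode)
SELECT ID, SomeCode FROM SourceQuery;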
Thank you, Stack Overflow !
Steve
I add a '& ""' to the field I want to make sure are stored as text, and a ' *1 ' (as in multiplying the amount by 1) to the fields I want to store as numeric.
Seems to do the trick.
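For example (a sketch - the table and field names are made up):
----Table and field names are illustrative
SELECT SomeNumber & "" AS ForcedText, SomeText * 1 AS ForcedNumber
INTO NewTable
FROM SourceTable;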
To get an Access query to create a table with three numeric output fields from numeric input fields (it kept wanting to make the output fields text), I had to combine several of the above suggestions: pre-establish an empty output table with the output fields pre-defined as integer, double, and double, and in the append query itself multiply the numeric fields by one. It worked. Finally.