Need to populate table based on data in a csv file - sql

I have a database table called "COL" that contains over 836,000 items purchased online by individuals. There is a distinct set of 330 item descriptions that I read in from a .csv file into a data frame called "desc". Rather than deal with all 836,000 rows individually, I would like to assign each row a group number from 1 to 330. For this I've created a column in the table called "groups".
What I would like to do is read an item from desc and update the table with an update query, but I'm a bit lost on how to set it up. I've provided my attempt below; I'm not sure how to run an update query inside a for loop.
library(RODBC)

db <- "C:/Projects/stuff.accdb"
# assumes the CSV's column of item descriptions is named "description"
desc <- read.csv("C:/Projects/description.csv", header = TRUE)

col <- odbcConnectAccess2007(db)
# row i of desc becomes group number i
for (i in seq_len(nrow(desc))) {
  sqlQuery(col, paste0("UPDATE COL SET groups = ", i,
                       " WHERE description = '", desc$description[i], "'"))
}
close(col)
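For what it's worth, if the 330 descriptions were loaded into their own helper table first, the loop could collapse into a single set-based UPDATE, since Access SQL allows an inner join in an UPDATE. A minimal sketch, assuming a hypothetical table DESC_GROUPS with columns group_id and description:
UPDATE COL INNER JOIN DESC_GROUPS AS d
    ON COL.description = d.description
SET COL.groups = d.group_id;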


get ERROR "Internal tables cannot be used as work areas" inside of method

I am new to ABAP. I asked a similar, but different, question yesterday.
I copy a table (= table) into a local table (= localTable) and remove all duplicates from it; this works fine (the first three code lines).
Now I want to loop over this local table and move all matching data into a structure with INTO CORRESPONDING FIELDS OF, but unfortunately I always get the following error:
Internal tables cannot be used as work areas.
INFO: I'm working inside a method!
Here is the code I'm working with:
DATA localTable TYPE STANDARD TABLE OF table.

SELECT columnName FROM table INTO TABLE localTable.
DELETE ADJACENT DUPLICATES FROM localTable COMPARING columnName.

LOOP AT localTable ASSIGNING FIELD-SYMBOL(<fs_table>).
  SELECT * FROM anotherTable AS p
    WHERE p~CN1 = @localVariable
      AND p~CN2 = @<fs_table>-columnName
    INTO CORRESPONDING FIELDS OF @exportStructure. "<-- Here I always get my error
  ENDSELECT.
ENDLOOP.
First: I've read that I have to sort my internal table before using DELETE ADJACENT DUPLICATES FROM localTable COMPARING columnName, so I've added the following line in between:
SORT localTable BY columnName ASCENDING.
Second: Instead of INTO CORRESPONDING FIELDS OF TABLE I've used APPENDING CORRESPONDING FIELDS OF TABLE, because INTO overwrites the target each time, so in total I had only one line in my export structure.
APPENDING adds a new line every time the statement finds a match.

Import a single column dataset (CSV or TXT or XLSX) to act as a list in SQL WHERE IN clause

I have a dataset that I receive on a weekly basis; it is a single column of unique identifiers, currently gathered manually by our support staff. I am trying to use this dataset (a CSV file) in the WHERE clause of a SQL query.
To add it to my query today, I do some transformation to tweak the formatting and then paste the reformatted data directly into the WHERE IN part of the query. Ideally I would be able to import this list into the SQL query directly, bypassing the manual formatting and the swapping between programs.
I am just wondering if this is possible. I have tried my best to scour the internet and have had no luck finding any reference to this functionality.
Using WHERE IN makes this more complex than it needs to be. Store the IDs you want to filter on in a table called MyTableFilters, with a column holding the ID values, and join MyTable to MyTableFilters on ID. The join will cause MyTable to return rows only when the ID in MyTable is also in MyTableFilters:
SELECT * FROM MyTable A JOIN MyTableFilters F ON A.ID = F.ID
Since you don't really need to do any transformations or other manipulation of the data you want to ETL, you could also simply truncate and BULK INSERT to keep MyTableFilters up to date:
TRUNCATE TABLE dbo.MyTableFilters;

BULK INSERT dbo.MyTableFilters
FROM 'X:\MyFilterTableIDSourceFile.csv'
WITH
(
    FIRSTROW = 1,
    DATAFILETYPE = 'widechar', -- UTF-16
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,
    KEEPNULLS -- treat empty fields as NULLs
);
I'm guessing that you currently have something like the following:
SELECT *
FROM MyTable t
WHERE t.UniqueID in ('ID12','ID345','ID84')
My recommendation would be to create a table in which to store the IDs referenced in the WHERE clause. For the above, that table would look like this:
UniqueID
========
ID12
ID345
ID84
Supposing the table is called UniqueIDs, the original query then becomes:
SELECT *
FROM MyTable t
WHERE t.UniqueID in (SELECT u.UniqueID FROM UniqueIDs u)
The question you're asking is then how to populate the UniqueIDs table. You need some means to expose that table to your users. There are several ways you could go about that. A lazy but relatively effective solution would be a simple MS Access database with that table as a "linked" table. You may need to be careful about permissions.
Alternatively, assuming you're wedded to the CSV, set up an SSIS job which clears down the table and then imports that CSV into the UniqueIDs table.
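If SSIS feels heavier than the task warrants, the same clear-down-and-reload can be done in two statements, reusing the BULK INSERT approach from the first answer. A sketch, assuming the UniqueIDs table above and a hypothetical file path (scheduling the job, e.g. via SQL Server Agent, is left out):
TRUNCATE TABLE dbo.UniqueIDs;

BULK INSERT dbo.UniqueIDs
FROM 'X:\WeeklyIDs.csv' -- hypothetical path to the weekly file
WITH
(
    FIRSTROW = 1,
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);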

Updating a table column using LIKE in WHERE

I have a table (ENTITY) that needs to be updated based on an ID (FUNCCODE), but FUNCCODE is linked through two other tables (from JOINT and then to POSITION) and is independent of the table where the data lives (NEORSD). The only value I can match on is the position name shared between the NEORSD table and the POSITION table. When I place my LIKE condition in the WHERE clause I get an error back. If anyone can point me in the right direction it would be greatly appreciated!
Tables:
NEORSD: contains the range information and the position name (= Tag_No)
ENTITY: needs to be updated with the range information (holds FUNCCODE)
JOINT: holds FUNCCODE (named POSFUNCCODE) and the corresponding POSCODE
POSITION: contains POSCODE and the position name (= POSID)
UPDATE ENTITY
SET RANGE0 = (
    SELECT RANGE0
    FROM NEORSD_1199
    WHERE Tag_No LIKE '%PIT%')
WHERE FUNCCODE = (
    SELECT POSFUNCCODE
    FROM JOINT
    WHERE POSCODE = (
        SELECT POSCODE
        FROM POSITION
        WHERE POSID LIKE '%PIT%'))
If NEORSD_1199 has more than one row with a tag_no like '%PIT%', which NEORSD_1199.RANGE0 value should it use to update ENTITY.RANGE0?
This is what the db engine objects to in your SQL.
To better understand, read the SQL backwards:
First you're getting a list of every Position Code from the POSITION table where the Position ID is like '%PIT%'. That might be one code, or it might be one hundred.
Then you're getting every Position Function Code from the JOINT table where the Position Code is in the list you just gathered. Again, could be one, could be a hundred.
Then you're getting a list of all values of RANGE0 from the NEORSD_1199 table where Tag_No is like '%PIT%'. Again, this could be one value, or a list of one hundred.
Then you're getting every row from the ENTITY table where the Function Code is in the list of Position Function Codes you gathered from the JOINT table (step 2 above), and you're updating RANGE0 in each of those rows to the value captured in step 3.
The problem is that the 'value' returned in step 3 could be a list of values. If NEORSD_1199 has four rows where the tag number is like '%PIT%' (e.g. PIT01, PIT02, PIT03, APIT00), and each of those rows has a different RANGE0 (e.g. 1, 2, 3, 99), then which of those four values should the DB engine use to update RANGE0 in the rows of the ENTITY table?
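You have to make that choice yourself. A minimal sketch of one option, keeping the question's table and column names: collapse the inner subquery to a single value with an aggregate (MAX here is an arbitrary pick), and use IN so that multiple function codes are allowed:
UPDATE ENTITY
SET RANGE0 = (
    SELECT MAX(n.RANGE0) -- aggregate forces a single value; MAX is arbitrary
    FROM NEORSD_1199 n
    WHERE n.Tag_No LIKE '%PIT%')
WHERE FUNCCODE IN (
    SELECT j.POSFUNCCODE
    FROM JOINT j
    JOIN POSITION p ON p.POSCODE = j.POSCODE
    WHERE p.POSID LIKE '%PIT%');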
Thank you to @SQLCliff for the questions that helped find the solution. I created an ID column in my NEORSD table and a temporary table (a CTE) holding the link between FUNCCODE and the ranges in NEORSD, then updated ENTITY using a join. I can add a WHERE clause at the end of the CTE for filtering if needed; since this is a mass update I no longer require one. My brain just likes making things more complicated than they need to be XD
WITH t AS (
    SELECT f.funccode AS funccode, n.range0, n.range100
    FROM func AS f
    JOIN NEORSD_1199_With_Ranges_Updated AS n
      ON n.id = f.poscode OR n.id = f.devcode
    /* WHERE necessary ;P */
)
UPDATE e -- update the FROM-clause alias so the join below applies
SET range0 = t.range0,
    range100 = t.range100
FROM entity AS e
JOIN t ON e.funccode = t.funccode;

How to add lines from text file to sqlite db rows that already exist?

I have 12 columns with +/- 2000 rows in a SQLite DB.
Now I want to add a 13th column with the same number of rows.
If I import the text from a CSV file it is appended after the existing rows (so I end up with a 4000-row table).
How can I avoid adding it underneath the existing rows?
Do I need to create a script that runs through each row of the table and adds the text from the CSV file for each row?
If you have the code that imported the original data, and if the data has not changed in the meantime, you could just drop the table and reimport it.
Otherwise, you indeed have to create a script that looks up the corresponding record in the table and updates it.
You could also import the new data into a temporary table, and then copy the values over with a command like this:
UPDATE MyTable
SET NewColumn = (SELECT NewColumn
FROM TempTable
WHERE ID = MyTable.ID)
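End to end, the temporary-table route might look like this. A sketch only: it assumes the CSV carries an ID column matching MyTable's, and TempTable/NewColumn are placeholder names (.mode and .import are sqlite3 shell dot-commands, shown as comments):
-- In the sqlite3 shell:
--   CREATE TABLE TempTable (ID INTEGER, NewColumn TEXT);
--   .mode csv
--   .import new_column.csv TempTable
ALTER TABLE MyTable ADD COLUMN NewColumn TEXT;
UPDATE MyTable
SET NewColumn = (SELECT NewColumn
                 FROM TempTable
                 WHERE ID = MyTable.ID);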
I ended up using RazorSQL, a great program.
http://www.razorsql.com/

sql dump of data based on selection criteria

When extracting data from a table (schema and data) I can do this by right-clicking the database and going to Tasks -> Generate Scripts, and it gives me all the data from the table including the CREATE script, which is good.
This gives me all of the data in the table, though. Can this be changed to give me only some of it, e.g. only rows after a certain dtmTimeStamp?
Thanks,
I would recommend extracting your data into a separate table using a query and then running Generate Scripts on that table. Alternatively, you can extract the data separately into a flat file using the Export Data wizard (include your column headers and use comma separators with double-quote field delimiters).
To make a copy of your table:
SELECT Col1, Col2
INTO CloneTable
FROM MyTable
WHERE Col3 = @Condition
(Thanks to @MarkD for adding that)
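Applied to the timestamp filter from the question, that might look like the following (a sketch; the column list and cutoff value are assumptions):
SELECT *
INTO CloneTable
FROM MyTable
WHERE dtmTimeStamp >= '2015-01-01' -- keep only rows on/after the cutoff
Once Generate Scripts has been run against CloneTable, the clone can simply be dropped.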