I'm having trouble creating a table using RODBC's sqlSave (or, more accurately, writing data to the table it creates).
This is different from the existing sqlSave questions/answers, because:
1. The problems they were experiencing were different: I can create tables, whereas they could not.
2. I've already incorporated their solutions, such as closing and reopening the connection before running sqlSave, without success.
3. The error message is different, the only exception being a post that differed in the two ways above.
I'm using MS SQL Server 2008 and 64-bit R on a Windows RDP.
I have a simple data frame with only 1 column full of 3, 4, or 5-digit integers.
> head(df)
colname
1 564
2 4336
3 24810
4 26206
5 26433
6 26553
When I try to use sqlSave, no data is written to the table. Additionally, an error message makes it sound like the table can't be created though the table does in fact get created with 0 rows.
Based on a suggestion I found, I've tried closing and re-opening the RODBC connection right before running sqlSave. Even though I use append = TRUE, I've tried dropping the table before doing this but it doesn't affect anything.
> sqlSave(db3, df, table = "[Jason].[dbo].[df]", append = TRUE, rownames = FALSE)
Error in sqlSave(db3, df, table = "[Jason].[dbo].[df]", :
42S01 2714 [Microsoft][ODBC SQL Server Driver][SQL Server]There is already
an object named 'df' in the database.
[RODBC] ERROR: Could not SQLExecDirect 'CREATE TABLE [Jason].[dbo].[df]
("df" int)'
I've also tried using sqlUpdate() on the table once it's been created. It doesn't matter whether I create it in R or in SQL Server Management Studio; I get the error table not found on channel.
Finally, note that I have also tried this without append = TRUE and when creating a new table, as well as with and without the rownames option.
Mr.Flick from Freenode's #R had me check if I could read in the empty table using sqlQuery and indeed, I can.
Update
I've gotten a bit closer with the following steps:
I created an ODBC connection that goes directly to my database within SQL Server, instead of connecting to the default (master) database and then specifying the full path to the table in the table = or tablename = argument
Created the table in SQL Server Management Studio as follows
GO
CREATE TABLE [dbo].[testing123](
[Person_DIMKey] [int] NULL
) ON [PRIMARY]
GO
In R I used sqlUpdate with my new ODBC connection and no brackets around the tablename
Now sqlUpdate() sees the table, however it complains that it needs a unique column
Indicating that the only column in the table is the unique column with index = colname results in an error saying that the column does not exist
I dropped and recreated the table specifying a primary key,
GO
CREATE TABLE [dbo].[jive_BNR_Person_DIMKey](
[jive_BNR_Person_DIMKey] [int] NOT NULL PRIMARY KEY
) ON [PRIMARY]
GO
which generated both a primary key and an index (according to the GUI of SQL Server Management Studio) named PK__jive_BNR__2754EC2E30F848ED
I specified this index/key as the unique column in sqlUpdate() but I get the following error:
Error in sqlUpdate(db4, jive_BNR_Person_DIMKey, tablename = "jive_BNR_Person_DIMKey", :
index column(s) PK__jive_BNR__2754EC2E30F848ED not in database table
For the record, I was specifying the correct column name (not "colname") for index; thanks to MrFlick for requesting clarification.
After hours of working on this, I was finally able to get sqlSave to work while specifying the table name. Deep breath; where to start? Here is the list of things I did to get this to work:
Open the 32-bit ODBC Administrator and create a User DSN configured for your specific database. In my case, I am creating a global temp table, so I linked to tempdb. Use this connection name in your odbcConnect() call. Here is my code: myconn2 <- odbcConnect("SYSTEMDB").
Then I defined my data types with the following code: columnTypes <- list(Record = "VARCHAR(10)", Case_Number = "VARCHAR(15)", Claim_Type = "VARCHAR(15)", Block_Date = "datetime", Claim_Processed_Date = "datetime", Status ="VARCHAR(100)").
I then updated my data frame class types using as.character and as.Date to match the data types listed above.
I already created the table since I've been working on it for hours so I had to drop the table using sqlDrop(myconn2, "##R_Claims_Data").
I then ran: sqlSave(myconn2, MainClmDF2, tablename = "##R_Claims_Data", verbose=TRUE, rownames= FALSE, varTypes=columnTypes)
Then my head fell off because it worked! I really hope this helps someone going forward. Here are the links that helped me get to this point:
Table not found
sqlSave in R
RODBC
After re-reading the RODBC vignette, here's the simple solution that worked:
sqlDrop(db, "df", errors = FALSE)
sqlSave(db, df)
Done.
After experimenting with this a lot more over several days, it seems the problems stemmed from the use of the additional options, particularly table = or, equivalently, tablename =. Those should be valid options, but somehow they cause problems with my particular combination of RStudio (Windows, 64-bit, desktop version, current build), R (Windows, 64-bit, v3), and/or MS SQL Server 2008.
sqlSave(db, df) will also work without sqlDrop(db, "df") if the table has never existed, but as a best practice I'm writing try(sqlDrop(db, "df", errors = FALSE), silent = TRUE) before all sqlSave statements in my code.
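Putting that pattern together, a minimal sketch (assuming the same connection db and data frame df as above):

```r
# Drop the table if it exists, swallowing the error when it doesn't,
# then let sqlSave create and populate it with default options.
try(sqlDrop(db, "df", errors = FALSE), silent = TRUE)
sqlSave(db, df)
```

Note that sqlSave with no table/tablename argument names the table after the data frame, which is why dropping "df" first is enough here.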
We have had this same problem, which after a bit of testing we solved simply by not using square brackets in the schema and table name reference.
i.e. rather than writing
table = "[Jason].[dbo].[df]"
instead write
table = "Jason.dbo.df"
Appreciate this is now long past the original question, but just for anyone else who subsequently trips up on this problem, this is how we solved it. For reference, we found this out by writing a simple 1 item dataframe to a new table, which when inspected in SQL contained the square brackets in the table name.
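Applied to the call from the original question, that would look something like this (a sketch, not tested against the asker's server):

```r
# Same sqlSave call as in the question, but with a plain dotted
# database.schema.table name instead of bracketed parts
sqlSave(db3, df, tablename = "Jason.dbo.df", append = TRUE, rownames = FALSE)
```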
Here are a few rules of thumb:
If things aren't working out, manually specify the column types, just as @d84_n1nj4 suggested.
columnTypes <- list(Record = "VARCHAR(10)", Case_Number = "VARCHAR(15)", Claim_Type = "VARCHAR(15)", Block_Date = "datetime", Claim_Processed_Date = "datetime", Status ="VARCHAR(100)")
sqlSave(myconn2, MainClmDF2, tablename = "##R_Claims_Data", verbose=TRUE, rownames= FALSE, varTypes=columnTypes)
If #1 doesn't work, then continue to specify the columns, but specify them all as VARCHAR(255). Treat this as a temp or staging table and move the data over with sqlQuery in your next step, just as @danas.zuokas suggested. This should work, but even if it doesn't, it gets you closer to the metal and puts you in a better position to debug the problem with SQL Server Profiler if you need it. And yes, if you still have a problem, it's likely due to either a parsing error or a type conversion.
columnTypes <- list(Record = "VARCHAR(255)", Case_Number = "VARCHAR(255)", Claim_Type = "VARCHAR(255)", Block_Date = "VARCHAR(255)", Claim_Processed_Date = "VARCHAR(255)", Status ="VARCHAR(255)")
sqlSave(myconn2, MainClmDF2, tablename = "##R_Claims_Data", verbose=TRUE, rownames= FALSE, varTypes=columnTypes)
sqlQuery(channel, 'insert into real_table select * from R_Claims_Data')
Due to RODBC's implementation, and not to any inherent limitation in T-SQL, R's logical type (i.e. TRUE/FALSE) will not convert to T-SQL's BIT type (i.e. 1/0), so don't try this. Either convert the logical type to 1/0 in the R layer, or bring it down to the SQL layer as a VARCHAR(5) and convert it to a BIT there.
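For the R-layer conversion mentioned above, a minimal sketch (assuming a data frame df that may contain logical columns):

```r
# Convert every logical column to 0/1 integers before calling sqlSave,
# since RODBC will not map R's TRUE/FALSE to T-SQL's BIT for you
df[] <- lapply(df, function(x) if (is.logical(x)) as.integer(x) else x)
```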
In addition to some of the answers posted earlier, here's my workaround. NOTE: I use this as part of a small ETL process, and the destination table in the DB is dropped and recreated each time.
Basically you want to name your dataframe what you destination table is named:
RodbcTest <- read.xlsx('test.xlsx', sheet = 4, startRow = 1, colNames = TRUE, skipEmptyRows = TRUE)
Then make sure your connection string includes the target database (not just server):
conn <- odbcDriverConnect(paste("DRIVER={SQL Server};Server=localhost\\sqlexpress;Database=Charter;Trusted_Connection=TRUE"))
After that, I run a simple sqlQuery that conditionally drops the table if it exists:
sqlQuery(conn, "IF OBJECT_ID('Charter.dbo.RodbcTest') IS NOT NULL DROP TABLE Charter.dbo.RodbcTest;")
Then finally, run sqlSave without the tablename param, which will create the table and populate it with your dataframe:
sqlSave(conn, RodbcTest, safer = FALSE, fast = TRUE)
I've encountered the same problem. The way I found around it is to create an empty table using regular CREATE TABLE SQL syntax, and then append to it via sqlSave. For some reason, when I tried it your way, I could actually see the table name in the MSSQL database, even after R threw the error message you showed above, but it would be empty.
Related
Goal: be able to conduct SQL queries on a data frame in R.
Approach used: using dbWriteTable to write the table to the database that I would then be able to query on using SQL and join to other tables existing in the DB.
Issue: Seems to execute successfully, but table does not seem to actually exist in the db. Errors thrown when attempting to query table. Details below:
Data frame name: testing_df (a one-column data frame)
channel <- DBI::dbConnect(odbc::odbc(), "data_source_name", uid="user_name", pwd='password')
dbGetQuery(channel,"use role role_name;")
dbGetQuery(channel,"use warehouse warehouse_name;")
dbGetQuery(channel,"use schema schema_name;")
dbGetQuery(channel,"use database db_name;")
table_name = Id(database="database_name",schema="schema_name",table="table_testing")
dbWriteTable(conn = channel,
name = table_name,
value = testing_df,
overwrite=TRUE)
dbReadTable(channel,name=table_name)
dbExistsTable(channel,name=table_name)
dbReadTable provides output of data frame expected.
dbExistsTable provides the following output:
> dbExistsTable(channel,name=table_name)
[1] TRUE
Issue: The table cannot be located in the actual database UI, and when running the following in R:
desired_output <- dbGetQuery(channel,sprintf("select * from database_name.schema_name.table_testing;"))
desired_output
I get the following error:
SQL compilation error: Object 'table_testing' does not exist or not authorized.
I am able to check in the database and see that the table actually does not exist.
Question: Does anyone know if dbWriteTable is actually supposed to be writing the table to the database, or if I'm misunderstanding the purpose of dbWriteTable? Any better ways to approach this task?
After connecting to the Oracle DB, I created a SQL Developer table via Python. I then load my CSV into a data frame and convert the data types to VARCHAR for a faster load into the table (because the default str type takes an unreasonable amount of time). The data loads fast and it is all present, but when interrogating the data in SQL Developer I am forced to put '' around the column names for them to be recognised, and when trying to perform simple operations on the data, such as SELECT * FROM new_table ORDER BY 'CATEGORY' ASC, SQL cannot seem to sort my data at all. Does anyone have any suggestions? Below is a snippet of the Python code I have used:
os.chdir('C:\\Oracle\\instantclient_19_5')
dataload = Table('new_table', meta,
                 Column('Category', VARCHAR(80)),
                 Column('Code', VARCHAR(80)),
                 Column('Medium', VARCHAR(80)),
                 Column('Source', VARCHAR(80)),
                 Column('Start_date', VARCHAR(80)),
                 Column('End_date', VARCHAR(80)))
meta.create_all(engine)
df3 = pd.read_csv(fname)
dtyp = {c: types.VARCHAR(df3[c].str.len().max())
        for c in df3.columns[df3.dtypes == 'object'].tolist()}
df3.to_sql('new_table', engine, index=False, if_exists='replace', dtype=dtyp)
I have created an MS SQL 2012 database using Enterprise Architect and I am in the process of uploading data to this server using R and the DBI and odbc packages. Now I have a problem reading a table from my MS SQL 2012 server that I need to merge with another table that has a FK constraint to this table. What frustrates me is that I can download the empty table, but not the filled table that I appended to before.
Here is a picture of the EA database modelling:
There is a main table "unternehmen" with information about firms that leads to a secondary table "eigentumsverhaeltnisse" with further information, and one piece of information is linked to a meta table "eigentuemer" containing the labels for this information. (I do not strictly need this, but there are similar situations elsewhere.) The entire database (roughly 100 tables) was created using the DDL generation of Enterprise Architect. I could add more information on this if needed.
So I created the meta table manually and uploaded it to the server using this code:
con <- DBI::dbConnect(odbc::odbc(),
Driver = "SQL Server",
Server = "MY server",
Database = "EA_DB",
encoding = "latin1")
df_temp <- data.table(bezeichnung=c("Land", … , "Staat"))
DBI::dbWriteTable(con, "eigentuemer", df_temp, append=TRUE)
There is a bit more code to create the secondary table, which I then merge with the meta table after loading it from the SQL server to include the FK. This is then also uploaded using the same code as above.
df_temp <- code to create the other table
df_temp_sql <- DBI::dbReadTable(con, "eigentuemer")
df_temp<-merge(df_temp,df_temp_sql,by="bezeichnung")
some other code
DBI::dbWriteTable(con, "eigentumsverhaeltnisse", df_temp, append=TRUE)
Now I cannot reload the previously appended table, which is really frustrating; I was able to use the same command when the table was empty.
df_temp_sql <- DBI::dbReadTable(con, "eigentumsverhaeltnisse")
Error in result_fetch(res@ptr, n) :
nanodbc/nanodbc.cpp:2966: 07009: [Microsoft][ODBC SQL Server Driver]Ungültiger Deskriptorindex
I am assuming "Ungültiger Deskriptorindex" means invalid descriptor index.
The table exists
DBI::dbExistsTable(con, "eigentumsverhaeltnisse")
I have found similar questions, but I did not see a solution for me. I tried different variants:
df_temp_sql_4<-DBI::dbReadTable(con,DBI::SQL("eigentumsverhaeltnisse"))
df_temp_sql_5<-DBI::dbReadTable(con,"dbo.eigentumsverhaeltnisse")
df_temp_sql_6<-DBI::dbReadTable(con,DBI::SQL("dbo.eigentumsverhaeltnisse"))
I also tried dbGetQuery but get the same error:
Bin <- DBI::dbGetQuery(con, "SELECT [id], [konzernname], [eigentuemer_id], [eigentuemer_andere],
[weitere_geschaeftsfelder], [konzernteil] FROM [EA_DB].[dbo].[eigentumsverhaeltnisse]")
Error in result_fetch(res@ptr, n) :
nanodbc/nanodbc.cpp:2966: 07009: [Microsoft][ODBC SQL Server Driver]Ungültiger Deskriptorindex
Similar questions are:
Import Tables from SQL Server into R
dbReadTable error in R: invalid object name
dbReadTable error in R: invalid object name
Edit to answer comment: I do not quite understand the issue in the link. I do not have varchar(max) or varbinary(max) variables, do I? I have:
PK id:bigint
konzernname:ntext
FK eigentuemer_id:bigint
eigentuemer_andere:ntext
weitere_geschaeftsfelder:bit
konzernteil:bit
What is strange is that some DBI::dbGetQuery commands work when I only include a subset of the columns and some don't.
#Works
Bin <- DBI::dbGetQuery(con, "SELECT [id], [konzernname], [eigentuemer_andere] FROM [EA_DB].[dbo].[eigentumsverhaeltnisse]")
head(Bin)
#works as well
Bin <- DBI::dbGetQuery(con, "SELECT [id], [eigentuemer_id] FROM [EA_DB].[dbo].[eigentumsverhaeltnisse]")
head(Bin)
#Does not work
Bin <- DBI::dbGetQuery(con, "SELECT [id], [konzernname], [eigentuemer_andere], [eigentuemer_id] FROM [EA_DB].[dbo].[eigentumsverhaeltnisse]")
#works as well
Bin <- DBI::dbGetQuery(con, "SELECT [id], [weitere_geschaeftsfelder] FROM [EA_DB].[dbo].[eigentumsverhaeltnisse]")
head(Bin)
#Does not work
Bin <- DBI::dbGetQuery(con, "SELECT [id], [konzernname], [eigentuemer_andere], [weitere_geschaeftsfelder] FROM [EA_DB].[dbo].[eigentumsverhaeltnisse]")
Just to close this: it is related to the issue pointed out by @Hearkz:
https://github.com/r-dbi/odbc/issues/10
This also relates to ntext and text columns, so I need to adjust these columns to nvarchar(veryhighnumber) instead, and avoid nvarchar(MAX).
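For anyone hitting the same invalid descriptor index error, the workaround discussed in that odbc issue is to put the long text columns at the end of the SELECT list; a sketch using the column names from this question:

```r
# ntext / (n)varchar(max) columns moved to the end of the SELECT list,
# so the driver reads the fixed-width columns first
Bin <- DBI::dbGetQuery(con, "
  SELECT [id], [eigentuemer_id], [weitere_geschaeftsfelder], [konzernteil],
         [konzernname], [eigentuemer_andere]
  FROM [EA_DB].[dbo].[eigentumsverhaeltnisse]")
```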
I'm attempting to create a new table in a SQL database from a series of dataframes that I've made available to the global environment (dataframes created via a function in R).
I'm able to connect to the server:
#libraries
library(odbc)
library(dplyr)
library(DBI)
#set up a connection to the database
staging_database <- dbConnect(odbc::odbc(), "staging_db")
#Write dataframe to table in database (table does not exist yet in SQL!)
dbWriteTable(staging_database , 'test_database', `demographics_dataframe`, row.names = FALSE)
However, I'm getting the following error:
Error: <SQL> 'CREATE TABLE "test_database" (
"field1" varchar(255),
"field2" BIT,
"field3" BIT,
"field4" varchar(255),
"field5" varchar(255),
"field6" varchar(255),
"field7" varchar(255),
"field8" INT,
"field9" INT,
Very unhelpful error here. Is there something I'm missing? I've followed the documentation for dbWriteTable. Something I'm noticing, which I believe may be part of the problem, is that I can't view any existing tables within "staging_db".
dbListTables(staging_database) reveals a bunch of metadata, but no actual tables that exist (I can verify they exist by logging into Microsoft SQL Server).
I recently encountered the same error. In my case, there was no problem with special characters or unexpected symbols anywhere in the table, since I was able to write the table "piece by piece", each time writing only a subset of columns.
I am guessing it has something to do with the number of columns to be written; in a single go, I was able to write up to 173 columns.
Rather than thinking that this is the limit, I would say there is an internal limit on the length of the "CREATE TABLE" string hidden somewhere along the way.
The workaround that worked for me was to split the table into two sets of columns, with the ID in both, and join them later.
Edit
So the truth was, in my case, that two columns had the same name. The error indeed comes from the "CREATE TABLE" string: one cannot create a table with duplicated column names.
Even a year after asking the question, I hope this can help someone dealing with the same problem ;)
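A quick way to check for (and fix) duplicated column names before calling dbWriteTable, using the data frame from the question above:

```r
# Any names printed here will make the generated CREATE TABLE fail
names(demographics_dataframe)[duplicated(names(demographics_dataframe))]

# De-duplicate the names if needed, then retry dbWriteTable
names(demographics_dataframe) <- make.unique(names(demographics_dataframe))
```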
This question already has answers here:
In SQL Server, how do I generate a CREATE TABLE statement for a given table?
(16 answers)
Closed 5 years ago.
So I'm trying to copy table CarOrders into a new table, NewCarOrders, but I only want to copy the columns and dependencies, not the data. I don't know if it truly matters, but I'm using SQL Server Management Studio.
Here are two things that I tried, but neither worked.
SELECT *
INTO NewCarOrders
FROM CarOrders
WHERE 0 = 1
The issue with the above is that it copies all the columns but does not copy the dependencies/relationships.
I saw some posts on forums regarding this, but the suggested solution was using the SSMS wizard. I want to do it using SQL (or T-SQL).
I'm new to the SQL world, so I'm really sorry if this is a stupid or no-brainer question. Let me know if I am missing some important information and I'll update the question.
Try the query below and check if it works for you:
SELECT TOP 0 *
INTO NewCarOrders
FROM CarOrders
This will create the NewCarOrders table with the same structure as CarOrders and no rows in it.
SELECT * FROM NewCarOrders -- Returns zero rows
Note: this will not copy constraints; only the structure is copied.
For constraints, do as below:
In SSMS, right-click on the table CarOrders, then Script Table As > Create To > New Query Window.
Change the name in the generated script to NewCarOrders and execute.
Also change the constraint names in the generated script, or it will throw an error like "There is already an object named 'xyz' in the database".
You can use the LIKE syntax (note that this is MySQL syntax; SQL Server does not support CREATE TABLE ... LIKE):
CREATE TABLE NewCarOrders LIKE CarOrders;