Trying to change the column type from BLOB to ORDSYS.ORDImage with the following code:
alter table "POSTS"
modify ("IMAGE" "ORDSYS"."ORDIMAGE");
But it produces the following error:
ORA-22859: invalid modification of columns
The table and column names are definitely right.
A possible solution would be to create a new table via a CREATE TABLE AS SELECT statement, then drop the source table and rename the new one.
According to Oracle Technology Network, you can create an ORDImage from a BLOB with:
select ordsys.ordimage(ordsys.ordsource(IMAGE, null, null, null, null, 1),
null, null, null, null, null, null, null) from POSTS
(not tested)
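Combining that constructor with the CREATE TABLE AS SELECT idea gives a sketch like this (also untested; the ID column is a placeholder for whatever other columns POSTS actually has):
-- build a copy of the table with the BLOB wrapped into an ORDImage
CREATE TABLE POSTS_NEW AS
  SELECT "ID",
         ordsys.ordimage(ordsys.ordsource("IMAGE", null, null, null, null, 1),
                         null, null, null, null, null, null, null) AS "IMAGE"
  FROM "POSTS";
-- swap the tables
DROP TABLE "POSTS";
ALTER TABLE POSTS_NEW RENAME TO "POSTS";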
The solution I found was to drop the column and create a new one.
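In other words, something like this (note that dropping the column discards the existing BLOB data):
ALTER TABLE "POSTS" DROP COLUMN "IMAGE";
ALTER TABLE "POSTS" ADD ("IMAGE" "ORDSYS"."ORDIMAGE");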
I'm trying to rename a column named "photo_url". I tried simply changing the string in the schema to "test", killing the PostgreSQL service, and restarting it, but that doesn't seem to work; it still loads as "photo_url".
I'm not sure how to change the name; if anyone could help me it would be greatly appreciated.
This is my table. I'm using PostgreSQL, with pgweb to view my database, and I used dbdesigner to generate this schema:
CREATE TABLE "users" (
"user_id" serial NOT NULL,
"name" TEXT NOT NULL,
"instrument" TEXT NOT NULL,
"country" TEXT NOT NULL,
"state" TEXT NOT NULL,
"city" TEXT NOT NULL,
"about" TEXT NOT NULL,
"email" TEXT NOT NULL UNIQUE,
"hashed_password" TEXT NOT NULL,
"photo_url" TEXT NOT NULL,
"created_at" timestamptz NOT NULL default now(),
CONSTRAINT "users_pk" PRIMARY KEY ("user_id")
) WITH (
OIDS=FALSE
);
If you've already created the table, you can use this query to rename the column
ALTER TABLE users RENAME COLUMN photo_url TO test;
otherwise simply recreate your table with the new column name.
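Either way, you can verify the column names afterwards, for example:
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'users';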
More information on the ALTER TABLE command can be found in the PostgreSQL Docs.
I would like to enter a data frame into an existing table in a database using an R script, and I want the table in the database to have a sequential primary key. My problem is that RODBC doesn't seem to allow the primary key constraint.
Here's the SQL for creating the table I want:
CREATE TABLE [dbo].[results] (
[ID] INT IDENTITY (1, 1) NOT NULL,
[FirstName] VARCHAR (255) NULL,
[LastName] VARCHAR (255) NULL,
[Birthday] DATETIME NULL,
[CreateDate] DATETIME NULL,
CONSTRAINT [PK_dbo.results] PRIMARY KEY CLUSTERED ([ID] ASC)
);
And a test with some R code:
ConnectionString1="Driver=ODBC Driver 11 for SQL Server;Server=myserver; Database=TestDb; trusted_connection=yes"
ConnectionString2="Driver=ODBC Driver 11 for SQL Server;Server=notmyserver; Database=TestDb; trusted_connection=yes"
db1=odbcDriverConnect(ConnectionString1)
query="SELECT a.[firstname] as FirstName
, a.[lastname] as LastName
, Cast(a.[dob] as datetime) as Birthday
, cast(a.createDate as datetime) as CreateDate
FROM [dbo].[People] a"
results=NULL
results=sqlQuery(db1,query,stringsAsFactors=FALSE)
close(db1)
db2=odbcDriverConnect(ConnectionString2)
sqlSave(db2,
results,
append = TRUE,
varTypes=c(Birthday="datetime", CreateDate="datetime"),
colnames = FALSE,
rownames = FALSE, fast = FALSE)
close(db2)
The first part of the R code just pulls some test data into a data frame; it works fine and isn't part of my question here (I'm including it so you can see the format of the test data). When I run the sqlSave function I get an error message:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
However, if I remove the primary key from the database, everything works fine with this table:
CREATE TABLE [dbo].[results] (
[FirstName] VARCHAR (255) NULL,
[LastName] VARCHAR (255) NULL,
[Birthday] DATETIME NULL,
[CreateDate] DATETIME NULL
);
Clearly the primary key is the issue. Normally, with Entity Framework or similar (as I understand it), the primary key is generated by the database when you insert data.
I'd like a way to append data to a table with a primary key using only an R script. Is that possible? There could already be data in the table I'm adding to, so I don't really see a way to create keys in R before trying to append to the table.
The problem is line 361 in http://github.com/cran/RODBC/blob/master/R/sql.R: the data.frame and the DB table must have exactly the same number of columns, otherwise you get this error with this stack trace:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
3. `colnames<-`(`*tmp*`, value = c("ID", "FirstName", "LastName",
"Birthday", "CreateDate")) at sql.R#361
2. sqlwrite(channel, tablename, dat, verbose = verbose, fast = fast,
test = test, nastring = nastring) at sql.R#211
1. sqlSave(db2, results, append = TRUE, varTypes = c(Birthday = "datetime",
CreateDate = "datetime"), colnames = FALSE, rownames = FALSE,
fast = FALSE, verbose = TRUE)
If you add the ID column to your data.frame, you can no longer use the autoinc ID column, so this is no solution (or workaround).
A "simple" workaround to the "same columns" limitation of RODBC::sqlSave is:
Use sqlSave to save the new rows into a helper table with a different name
Send an insert into ... select from ... via RODBC::sqlQuery to append the new rows to your original table (the one with the autoinc ID column); see the sketch after this list
Delete the helper table again (drop table ...)
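Concretely, if sqlSave wrote the rows to a helper table named results_staging (a made-up name), the SQL sent via RODBC::sqlQuery could look like:
-- append the staged rows; the database assigns the IDENTITY values
INSERT INTO dbo.results (FirstName, LastName, Birthday, CreateDate)
SELECT FirstName, LastName, Birthday, CreateDate
FROM dbo.results_staging;
-- clean up the helper table
DROP TABLE dbo.results_staging;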
A better option would be to use the newer odbc package, which also offers better performance through bulk-like inserts instead of sending single insert statements the way RODBC does:
https://github.com/r-dbi/odbc
Look for the function dbWriteTable (which is an implementation of the DBI::dbWriteTable generic).
I'm having a problem with a table I created. When I try to run a query, a red line appears under my code ('excursionID' and 'excursions'), claiming 'Invalid column name 'excursionID'' and 'Invalid object name 'dbo.excursions'', even though I have already created the table!
Here is the query
SELECT
excursionID
FROM [dbo].[excursions]
Here is the query I used to create the table
USE [zachtravelagency]
CREATE TABLE excursions (
[excursionID] INTEGER NOT NULL IDENTITY (1,1) PRIMARY KEY,
[companyName] NVARCHAR (30) NOT NULL,
[location] NVARCHAR (30) NOT NULL,
[description] NVARCHAR (30) NOT NULL,
[date] DATE NOT NULL,
[totalCost] DECIMAL NOT NULL
);
I've tried dropping the table and creating it again.
For some reason all my other tables work, it's just this table that doesn't identify itself. I'm very new to SQL so thank you for your patience!
You used the [zachtravelagency] database to create the table, but you don't use that database in your query. SSMS defaults to the master database. Try:
SELECT
excursionID
FROM [zachtravelagency].[dbo].[excursions]
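Alternatively, switch the session to that database first:
USE [zachtravelagency];
GO
SELECT excursionID
FROM [dbo].[excursions];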
I am trying to create tables in an SQLite database with sqlite3.
The command $ sqlite3 mydb < mytables.sql produces the following error: Incomplete SQL: ??C.
mytables.sql is:
CREATE TABLE SizeCulture (
SizeCultureID INTEGER PRIMARY KEY ASC,
SizeID INTEGER NULL,
CultureID TEXT NULL,
Name TEXT NULL,
Description TEXT NULL,
Abbreviation TEXT NULL,
);
CREATE TABLE Size(
SizeID INTEGER PRIMARY KEY ASC ,
Creation TEXT NOT NULL,
Modification TEXT NOT NULL,
Deleted INTEGER NOT NULL,
);
/****** Object: Table [Ordering].[BarCode] Script Date: 11/09/2011 14:58:19 ******/
CREATE TABLE BarCode(
BarCodeID INTEGER PRIMARY KEY ASC NOT NULL,
BarCodeValue TEXT NOT NULL,
);
This was modified from a script generated by SQL Server, where some tables need to be replicated on an Android device.
The above is just a set of repeated create table statements. From what I understand, SQLite follows standard SQL (like MySQL or Postgres).
Though I can't test it at the moment, I think it's the trailing commas that are confusing it (for example, the comma at the end of Abbreviation TEXT NULL,). Try removing all those trailing commas.
Edit: To be clear, I'm talking about all of these commas:
Abbreviation TEXT NULL,
...
Deleted INTEGER NOT NULL,
...
BarCodeValue TEXT NOT NULL,
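For example, the first statement with its trailing comma removed:
CREATE TABLE SizeCulture (
  SizeCultureID INTEGER PRIMARY KEY ASC,
  SizeID INTEGER NULL,
  CultureID TEXT NULL,
  Name TEXT NULL,
  Description TEXT NULL,
  Abbreviation TEXT NULL
);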
I had the same problem, but for a different reason (so I'm commenting because Google led me here). It turns out you can also encounter this error if your file has a weird encoding (like UCS-2 instead of UTF-8).
I have some txt files containing tables with a mix of different record types, each with different value types and column definitions. I was thinking of importing them into a table and running a query to separate the different record types, since an identifier for the type is listed in the first column. Is there a way to change the value type of a column in a query? It will be a pain to treat all of them as text. If you have any other suggestions on how to solve this, please let me know as well.
Here is an example of the tables for two record types, provided by the website where I got the data:
create table dbo.PUBACC_A2
(
Record_Type char(2) null,
unique_system_identifier numeric(9,0) not null,
ULS_File_Number char(14) null,
EBF_Number varchar(30) null,
spectrum_manager_leasing char(1) null,
defacto_transfer_leasing char(1) null,
new_spectrum_leasing char(1) null,
spectrum_subleasing char(1) null,
xfer_control_lessee char(1) null,
revision_spectrum_lease char(1) null,
assignment_spectrum_lease char(1) null,
pfr_status char(1) null
)
go
create table dbo.PUBACC_AC
(
record_type char(2) null,
unique_system_identifier numeric(9,0) not null,
uls_file_number char(14) null,
ebf_number varchar(30) null,
call_sign char(10) null,
aircraft_count int null,
type_of_carrier char(1) null,
portable_indicator char(1) null,
fleet_indicator char(1) null,
n_number char(10) null
)
Yes, you can do what you want. In MS Access you can use any VBA function in a query, with expressions like:
IIF(FirstColumn="value1", CDate(SecondColumn), NULL) as DateValue,
IIF(FirstColumn="value2", CDec(SecondColumn), NULL) as DecimalValue,
IIF(FirstColumn="value3", CStr(SecondColumn), NULL) as StringValue
You can use all/any of the above in your SELECT.
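For instance, a sketch of a full query over a hypothetical imported table (ImportedData, FirstColumn, and SecondColumn are placeholder names):
SELECT FirstColumn,
       IIF(FirstColumn="value1", CDate(SecondColumn), NULL) AS DateValue,
       IIF(FirstColumn="value2", CDec(SecondColumn), NULL) AS DecimalValue,
       IIF(FirstColumn="value3", CStr(SecondColumn), NULL) AS StringValue
FROM ImportedData;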
EDIT:
From your comments it seems that you want to split them into different tables - importing as text should not be a problem in that case.
a)
After you import the data into the initial table, create the proper tables manually with the correct column types; then you can INSERT into them.
b)
You could even do a make-table query, but it might be faster to create the table manually. If you do a make-table query, be sure you have cast the data to the proper types in your select.
EDIT2:
As you updated the question showing the structure it becomes obvious that my suggestion above will not help directly.
If this is a one-time process, you can follow HLGEM's solution. Here are some more details.
1) Import into a table with two columns - RecordType char(2), Rest memo
2) Now you can split the data (make two queries that select based on RecordType) and re-export it (so you can use Access's import wizard)
3) Now you have two text files with the proper structure, which can be easily imported
I did this in my last job. You start with a staging table that has one column, or two columns if your identifier is always the same length.
Then, using the record identifier, you move the data to another set of staging tables, one for each type of record you have. These have a column for each field and can use the correct data types. Then you do any data cleaning you need, and finally insert into the real production table.
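For example, a sketch in SQL Server syntax to match the table definitions above; Staging(RecordType, Rest) is the two-column staging table from the other answer, and the SUBSTRING offsets are placeholders for wherever the field actually sits in the line:
-- move type-A2 rows into the typed staging table,
-- casting the text into the proper types as we go
INSERT INTO dbo.PUBACC_A2 (Record_Type, unique_system_identifier)
SELECT RecordType,
       CAST(SUBSTRING(Rest, 1, 9) AS numeric(9,0))
FROM dbo.Staging
WHERE RecordType = 'A2';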
If you have a column defined as text, because it has both alphas and numbers, you'll only be able to query it as if it were text. Once you've separated out the different "types" of data into their own tables, you should be able to change the schema definition. Please comment here if I'm misunderstanding what you're trying to do.