I am trying to export my database as a .dbf using a VBA script, but the .dbf requires the table to have specific column sizes.
When I leave the columns as they are in Access, I get an error saying
field will not fit in record
How can I set the size for each column separately? Preferably while generating the table, so I don't have to do it manually every time I generate a new table with queries.
And where do I set them? (In a query or in SQL?)
Thanks in advance!
Edit:
I have made sure that it's the field size that is giving me the error: I changed all the field size values manually by opening the table in Design View.
So now the second part of my question is becoming more crucial: whether or not it is possible to set the field size while generating the table.
Edit2:
I am currently using SQL in a query to create the table, as follows:
SELECT * INTO DB_Total
FROM Tags_AI_DB;
After the initial DB_Total is made, I use several INSERT INTO queries to add more rows:
INSERT INTO DB_TOTAL
SELECT a.*
FROM Tags_STS_ENA_DB AS a
LEFT JOIN DB_TOTAL AS b
ON a.NAME = b.NAME
WHERE b.NAME IS NULL;
If I set the column sizes in the DB_Total table while generating it with the SELECT INTO query, will they still have those sizes after using the INSERT INTO queries to add more rows?
Edit3:
I decided (after a few of your suggestions and some pointers from colleagues) that it would be better to first make my table and afterwards fill it with queries.
However, it seems like I have run into a dead end with Access. This is the code I am using:
CREATE TABLE DB_Total ("NAME" char(79),"TYPE" char(16), "UNIT" char(31),
"ADDR" char(254), "RAW_ZERO" char(11), "RAW_FULL" char(11), "ENG_ZERO" char(11),
"ENG_FULL" char(11), "ENG_UNIT" char(8), "FORMAT" char(11), "COMMENT" char(254),
"EDITCODE" char(8), "LINKED" char(1), "OID" char(10), "REF1" char(11), "REF2" char(11),
"DEADBAND" char(11), "CUSTOM" char(128), "TAGGENLINK" char(32), "CLUSTER" char(16),
"EQUIP" char(254), "ITEM" char(63), "HISTORIAN" char(6),
"CUSTOM1" char(254), "CUSTOM2" char(254), "CUSTOM3" char(254), "CUSTOM4" char(254),
"CUSTOM5" char(254), "CUSTOM6" char(254), "CUSTOM7" char(254), "CUSTOM8" char(254))
These are all the columns required for me to make a DBF file that is accepted by the application we are using it with.
You'll understand my sadness when this generated the following error:
Record is too large
Is there anything I can do to make this table work?
UPDATE
The maximum record size for Access 2007 is around 2 kB (someone will no doubt correct that value).
When you create a CHAR(255) column, it will use 255 bytes of space regardless of what is in the field.
By contrast, VARCHARs use almost no space (only enough to define them) until you put something in the field; they grow dynamically.
Changing the CHAR(x)s to VARCHAR(x)s will shrink the defined length of your records to within the permitted limit. Be aware that you may still run into trouble if a row you are trying to insert is larger than the 2 kB limit.
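For instance, a minimal sketch of that rewrite, showing only the first few columns from the CREATE TABLE in the question (extend the same pattern to the remaining columns):
CREATE TABLE DB_Total ("NAME" VARCHAR(79), "TYPE" VARCHAR(16), "UNIT" VARCHAR(31),
"ADDR" VARCHAR(254), "RAW_ZERO" VARCHAR(11), "RAW_FULL" VARCHAR(11))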
Previous
The way to specify column lengths when generating the table is to use a CREATE TABLE statement instead of a SELECT * INTO.
CREATE TABLE DB_Total
(
Column1Name NVARCHAR(255) --Use whatever datatype and length you need
,Column2Name NUMERIC(18,0) --Use whatever datatype and length you need
,...
) ;
INSERT INTO DB_Total
....
If you use a SELECT * INTO statement, SQL will use whatever field lengths and types it finds in the existing data.
It is also better practice to list the column names in your insert statement, so instead of
INSERT INTO DB_TOTAL
SELECT a.*
You should put:
INSERT INTO DB_Total
(
Column1Name
,Column2Name
,...
)
SELECT a.Column1Name
,a.Column2Name
,...
FROM ...
WHERE ... ;
In Edit2, you indicated your process starts with a "make table" (SELECT INTO) query which creates DB_Total and loads it with data from Tags_AI_DB. Then you run a series of "append" (INSERT) queries to add data from other tables.
Now your problem is that you need specific field size settings for DB_Total, but it is impossible to define those sizes with a "make table" query.
I think you should create DB_Total one time and set the field sizes as you wish. Do that manually with the table in Design View, or execute a CREATE TABLE statement if you prefer.
Then forget about the "make table" query and use only "append" queries to add the data.
If the issue is that this is a recurring operation and you want to discard previous data before importing the new, execute DELETE FROM DB_Total instead of DROP TABLE DB_Total. That will allow you to preserve the structure of the (now empty) DB_Total table so you needn't fiddle with setting the field sizes again.
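For example, a minimal sketch of the recurring run once DB_Total exists with the desired field sizes (table and column names taken from the question; in Access each statement would run as its own query, and listing column names explicitly, as suggested above, is still the safer option):
DELETE FROM DB_Total;
INSERT INTO DB_Total
SELECT a.*
FROM Tags_AI_DB AS a;
INSERT INTO DB_Total
SELECT a.*
FROM Tags_STS_ENA_DB AS a
LEFT JOIN DB_Total AS b ON a.NAME = b.NAME
WHERE b.NAME IS NULL;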
Seems to me the only potential issue then might be if the structure of the source tables changes. If that happens, revise the structure of DB_Total so that it's compatible again.
Related
I have a table with 10M+ rows and want to change the data type of one of the columns from nvarchar(254) to decimal(7,2). What is the most efficient and effective query to make this change?
I have tried using ALTER in order to make this change, but get an error in SSMS
Error converting data type nvarchar to numeric.
I have also tried using CAST, but this results in errors as well. Admittedly, I'm not a DBA so I have struggled to understand the following:
How to properly write a CAST query that does not yield errors
Whether the CAST and CONVERT functions change the design of the data at the database level (meaning in the Object Explorer, when I right-click the table and then click 'Design' I see the data type of the column has changed) or if the changes only last until the next query is run or the program is exited.
This table was initially created over a month ago as the result of a workflow that was run a few months ago; this workflow has since been scheduled to push new data to the table on an hourly cadence, so deleting the job/table and starting over is not an option.
SET STATISTICS TIME ON
ALTER TABLE Clone3
ALTER COLUMN Price decimal(7,2)
The ultimate goal is to store this data correctly so that arithmetic operations can be performed when it is ingested into other visualization programs (e.g., Tableau, Power BI, etc.). That said, the expected result here is for the data type to be changed to decimal(7,2), but the actual result is that it stays nvarchar(254).
UPDATE
After running SELECT Price FROM Clone3 WHERE TRY_CONVERT(decimal(7,2), Price) IS NULL, there are 239 records returned, all in scientific notation, for example -5.0000000000000003E-2.
FINAL UPDATE
I ran the following query to update the records that were causing the conversion error (these were negative numbers like '-0.05' being converted to scientific notation for some strange reason).
UPDATE Clone3
SET Price = CAST(Price AS Float)
WHERE TRY_CONVERT(decimal(7,2), Price) IS NULL
Because all of the records are now in a numeric data type, I can convert the entire dataset to decimal(7,2), using this query.
ALTER TABLE Clone3
ALTER COLUMN Price decimal(7,2)
I think I can call this solved, so many thanks to everyone for their responses, especially @Larnu for the code snippet that eventually helped me figure this out.
A value like 5.9999999999999998E-2 cannot be converted directly to decimal(7,2), although it can be converted to a float, which can then be converted to a decimal(7,2). E.g.:
select cast(cast('5.9999999999999998E-2' as float) as decimal(7,2))
While not the most efficient, or a general solution for this kind of thing, you could alter the table twice, eg:
use tempdb
drop table if exists t
create table t(s varchar(200))
insert into t(s) values ('5.9999999999999998E-2')
go
alter table t alter column s float
alter table t alter column s decimal(7,2)
go
select * from t
The most efficient way is probably to empty the table and reload it:
select *
into temp_t
from t;
truncate table t;
alter table t alter column price decimal(7, 2);
insert into t
select *
from temp_t;
There is more overhead to updating the records in place.
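If some of the stored strings are in scientific notation (as in the question's update), the re-insert can still fail on those values. A hedged extra step, borrowing the cast-via-float trick from above and assuming price is the only offending column, is to normalize them in the copy before reloading:
update temp_t
set price = cast(price as float)
where try_convert(decimal(7, 2), price) is null;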
I want to add another row to my existing table, and I'm a bit hesitant about whether I'm doing the right thing because it might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' in the table, which will be 'SKATING' in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding field. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason why I've lost a lot of the hair on my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's like someone from another planet joined the SQL standards committee and managed to get their own way of doing it into the final standard!
If your table does not have a primary key that gets generated automatically on every insert, then you have to manage duplicate avoidance yourself.
Start by writing a normal SELECT to see whether the record(s) you're going to add already exist. But as Robert implied, your table may not have a primary key, because it looks like a LOG table to me. So insert away!
If it does need every record to be unique, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
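If you go the auto-generated route, a minimal sketch might look like this (the new column and constraint names are just illustrative assumptions):
ALTER TABLE [Emp_table].[ACT].[LIST_EMP]
    ADD [LIST_EMP_ID] int IDENTITY(1,1) NOT NULL;
ALTER TABLE [Emp_table].[ACT].[LIST_EMP]
    ADD CONSTRAINT [PK_LIST_EMP] PRIMARY KEY ([LIST_EMP_ID]);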
Assuming the first five columns combined make a unique key, this SELECT will determine whether the data you're inserting already exists...
SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND [LINE_NO] = wsLineno
You will have to replace the wsXXX placeholders with literal values or have them DECLAREd earlier in your script.
If you ran this alone and received a value of 1 or more, then the data already exists in your table, at least for those first 5 columns. A true duplicate test would require you to test EVERY column in your table, but it should give you an idea.
In the INSERT, to do it all as one statement, you can do this ...
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
       ([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE (SELECT COUNT(*) FROM [Emp_table].[ACT].[LIST_EMP]
       WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND
             [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND
             [LINE_NO] = wsLineno) = 0
Just replace the wsXXX variables with the values you want to insert.
I hope that made sense.
I have a question about the ALTER TABLE command on a really large table (almost 30 million rows).
One of its columns is a varchar(255) and I would like to resize it to a varchar(40).
Basically, I would like to change my column by running the following command:
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE varchar(40);
I have no problem with the process taking a very long time, but it seems my table is no longer readable during the ALTER TABLE command.
Is there a smarter way? Maybe add a new column, copy values from the old column, drop the old column and finally rename the new one?
Note: I use PostgreSQL 9.0.
In PostgreSQL 9.1 there is an easier way
http://www.postgresql.org/message-id/162867790801110710g3c686010qcdd852e721e7a559#mail.gmail.com
CREATE TABLE foog(a varchar(10));
ALTER TABLE foog ALTER COLUMN a TYPE varchar(30);
postgres=# \d foog
Table "public.foog"
Column | Type | Modifiers
--------+-----------------------+-----------
a | character varying(30) |
There's a description of how to do this at Resize a column in a PostgreSQL table without changing data. You have to hack the database catalog data. The only way to do this officially is with ALTER TABLE, and as you've noted that change will lock and rewrite the entire table while it's running.
Make sure you read the Character Types section of the docs before changing this. There are all sorts of weird cases to be aware of here. The length check is done when values are stored into the rows. If you hack a lower limit in there, that will not reduce the size of existing values at all. You would be wise to do a scan over the whole table looking for rows where the length of the field is >40 characters after making the change. You'll need to figure out how to truncate those manually--so you're back to some locking, just on the oversize ones--because if someone tries to update anything on such a row, it's going to be rejected as too big at the point it goes to store the new version of the row. Hilarity ensues for the user.
VARCHAR is a terrible type that exists in PostgreSQL only to comply with its associated terrible part of the SQL standard. If you don't care about multi-database compatibility, consider storing your data as TEXT and adding a constraint to limit its length. Constraints you can change around without this table lock/rewrite problem, and they can do more integrity checking than just the weak length check.
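A minimal sketch of that TEXT-plus-constraint approach, using the table and column names from the question (the constraint name is just illustrative):
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE text;
ALTER TABLE mytable ADD CONSTRAINT mycolumn_max_length
    CHECK (char_length(mycolumn) <= 40);
On newer versions the varchar-to-text change is a catalog-only update; on 9.0 it may still rewrite the table, and adding the CHECK constraint scans the table to validate existing rows, but after that the limit can be changed by dropping and re-adding the constraint without a rewrite.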
Ok, I'm probably late to the party, BUT...
THERE'S NO NEED TO RESIZE THE COLUMN IN YOUR CASE!
Postgres, unlike some other databases, is smart enough to only use just enough space to fit the string (even using compression for longer strings), so even if your column is declared as VARCHAR(255) - if you store 40-character strings in the column, the space usage will be 40 bytes + 1 byte of overhead.
The storage requirement for a short string (up to 126 bytes) is 1 byte
plus the actual string, which includes the space padding in the case
of character. Longer strings have 4 bytes of overhead instead of 1.
Long strings are compressed by the system automatically, so the
physical requirement on disk might be less. Very long values are also
stored in background tables so that they do not interfere with rapid
access to shorter column values.
(http://www.postgresql.org/docs/9.0/interactive/datatype-character.html)
The size specification in VARCHAR is only used to check the size of the values which are inserted, it does not affect the disk layout. In fact, VARCHAR and TEXT fields are stored in the same way in Postgres.
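For example, the declared length of a VARCHAR only surfaces as a check at insert/update time (a tiny sketch with a throwaway table name):
CREATE TABLE demo_v (s varchar(5));
INSERT INTO demo_v VALUES ('too long here');
-- ERROR:  value too long for type character varying(5)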
I was facing the same problem trying to shorten a VARCHAR from 32 to 8 and getting ERROR: value too long for type character varying(8). I want to stay as close to standard SQL as possible, because I'm using a self-made JPA-like structure that we might have to switch to a different DBMS according to customers' choices (PostgreSQL being the default one). Hence, I don't want to use the trick of altering system tables.
I ended up using the USING clause of ALTER TABLE:
ALTER TABLE "MY_TABLE" ALTER COLUMN "MyColumn" TYPE varchar(8)
USING substr("MyColumn", 1, 8)
As @raylu noted, ALTER acquires an exclusive lock on the table, so all other operations will be delayed until it completes.
If you put the ALTER into a transaction, the table should not be locked:
BEGIN;
ALTER TABLE "public"."mytable" ALTER COLUMN "mycolumn" TYPE varchar(40);
COMMIT;
This worked blazingly fast for me: a few seconds on a table with more than 400k rows.
Adding a new column and replacing the old one with it worked for me on Redshift (which is PostgreSQL-based); refer to this link for more details: https://gist.github.com/mmasashi/7107430
BEGIN;
LOCK users;
ALTER TABLE users ADD COLUMN name_new varchar(512) DEFAULT NULL;
UPDATE users SET name_new = name;
ALTER TABLE users DROP name;
ALTER TABLE users RENAME name_new TO name;
END;
Here's the cache of the page described by Greg Smith. In case that dies as well, the catalog update looks like this:
UPDATE pg_attribute SET atttypmod = 35+4
WHERE attrelid = 'TABLE1'::regclass
AND attname = 'COL1';
Where your table is TABLE1, the column is COL1 and you want to set it to 35 characters (the +4 is needed for legacy purposes according to the link, possibly the overhead referred to by A.H. in the comments).
Try running the following ALTER TABLE:
ALTER TABLE public.users
ALTER COLUMN "password" TYPE varchar(300)
USING "password"::varchar;
I have found a very easy way to change the size: the @Size(min = 1, max = 50) annotation, which is part of javax.validation.constraints, i.e.
"import javax.validation.constraints.Size;"
@Size(min = 1, max = 50)
private String country;
When Hibernate executes this, you get the following in pgAdmin III:
CREATE TABLE address
(
.....
country character varying(50),
.....
)
OMG! What am I doing wrong?
declare @WTF TABLE (
OrderItemId int
)
SELECT TOP 20 OrderItemId as OrderItemId INTO [@WTF] FROM ac_OrderItems
SELECT * FROM [@WTF]
Problem A: This creates a PHYSICAL table called @WTF. WHY?? I thought this was in memory only?!
Problem B: The last line of code, if I do select * from @WTF... WITHOUT the [ ], it returns NOTHING. What is the significance of the [ ]?
I need serious help. I'm losing my MIND!
Thanks in advance.
What you experience is by design:
SELECT…INTO creates a new table in the default filegroup and inserts the resulting rows from the query into it.
The alternatives are to either:
Not defining the @WTF table variable at all, and relying on this behavior to create the table automatically
Use the existing code, but change the SELECT INTO into an INSERT:
INSERT INTO @WTF
(orderitemid)
SELECT TOP 20
oi.orderitemid
FROM ac_ORDERITEMS oi
Mind that when using TOP, you should define an ORDER BY clause to ensure rows are returned consistently.
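For instance (the ordering column here is only an assumption; use whatever ordering defines your "top 20"):
INSERT INTO @WTF
       (orderitemid)
SELECT TOP 20
       oi.orderitemid
FROM   ac_ORDERITEMS oi
ORDER  BY oi.orderitemid;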
Because Select INTO always creates a physical table. What you want to do is an Insert Into.
The SELECT INTO is creating a physical table named '@WTF', just as it's supposed to do.
The secondary answer is that the reason it seemed to only work with brackets [] is because of the @ sign.
select * from @WTF
is selecting from your empty table variable, whereas
select * from [@WTF]
is selecting from the new physical table that the SELECT INTO created and populated with data. The brackets are used to allow characters not normally allowed in a table or column name, so their use here means you are looking for a table named @WTF instead of a variable named WTF.
All table variables are "physical" tables.
Your belief that they are "memory only" is a myth. They reside in tempdb and are shown in the metadata views with system generated names such as #4BAC3F29. The structure of a table variable is identical to a #temp table.
You cannot use SELECT ... INTO with table variables, but you can with #temp tables. Your code just creates a new user table called @WTF in your user database, as indicated in the other answers.
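If you do want SELECT ... INTO, a sketch of the #temp-table route (no DECLARE needed, since SELECT ... INTO creates the table; the ORDER BY column is just an assumption):
SELECT TOP 20 OrderItemId
INTO   #WTF
FROM   ac_OrderItems
ORDER  BY OrderItemId;
SELECT * FROM #WTF;
DROP TABLE #WTF;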
We are performing a database migration to SQL Server, and to support a legacy app we have defined views on the SQL Server table which present data as the legacy app expects.
However, we're now having trouble with INSTEAD OF INSERT triggers defined on those views, when the fields may have default values.
I'll try to give an example.
A table in the database has 3 fields, a, b, and c. c is brand new, the legacy app doesn't know about it, so we also have a view with 2 fields, a and b.
When the legacy app tries to insert a value into its view, we use an INSTEAD OF INSERT trigger to lookup the value that should go in field c, something like this:
INSERT INTO realTable(a, b, c) SELECT Inserted.a, Inserted.b, Calculated.C FROM...
(The details of the lookup aren't relevant.)
This trigger works well, unless field b has a default value. This is because if the query
INSERT INTO legacyView(a) VALUES (123)
is executed, then in the trigger, Inserted.b is NULL, not b's default value. Now I have a problem, because I can't tell the difference between the above query, which would put the default value into b, and this:
INSERT INTO legacyView(a,b) VALUES (123, NULL)
Even if b was non-NULLABLE, I don't know how to write the INSERT query in the trigger such that if a value was provided for b, it's used in the trigger, but if not the default is used instead.
EDIT: added that I'd rather not duplicate the default values in the trigger. The default values are already in the database schema, I would hope that I could just use them directly.
Paul: I've solved this one, eventually. It's a bit of a dirty solution and might not be to everyone's taste, but I'm quite new to SQL Server and the like.
In the INSTEAD OF INSERT trigger:
Copy the Inserted virtual table's data structure to a temporary table:
SELECT * INTO aTempInserted FROM Inserted WHERE 1=2
Create a view to determine the default constraints for the view's underlying table (from system tables) and use them to build statements which will duplicate the constraints in the temporary table:
SELECT 'ALTER TABLE dbo.aTempInserted
ADD CONSTRAINT ' + dc.name + 'Temp' +
' DEFAULT(' + dc.definition + ')
FOR ' + c.name AS Cmd, OBJECT_NAME(c.object_id) AS Name
FROM sys.default_constraints AS dc
INNER JOIN sys.columns AS c
ON dc.parent_object_id = c.object_id
AND dc.parent_column_id = c.column_id
Use a cursor to iterate through the set retrieved and execute each statement. This leaves you with a temporary table with the same defaults as the table to be inserted into.
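A hedged sketch of that cursor step, reusing the query from step 2 and assuming the view's underlying table is dbo.realTable (filter to whichever table your view actually sits on):
DECLARE @cmd nvarchar(max);
DECLARE cmds CURSOR LOCAL FAST_FORWARD FOR
    SELECT 'ALTER TABLE dbo.aTempInserted ADD CONSTRAINT ' + dc.name + 'Temp'
         + ' DEFAULT(' + dc.definition + ') FOR ' + c.name
    FROM sys.default_constraints AS dc
    INNER JOIN sys.columns AS c
        ON dc.parent_object_id = c.object_id
       AND dc.parent_column_id = c.column_id
    WHERE dc.parent_object_id = OBJECT_ID('dbo.realTable');  -- assumed underlying table
OPEN cmds;
FETCH NEXT FROM cmds INTO @cmd;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @cmd;   -- create each default on the temporary table
    FETCH NEXT FROM cmds INTO @cmd;
END;
CLOSE cmds;
DEALLOCATE cmds;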
Insert default record into the temporary table (all fields are nullable as created from Inserted virtual table):
INSERT INTO aTempInserted DEFAULT VALUES
Copy the records from the Inserted virtual table into the view's underlying table (where they would have been inserted originally, had the trigger not prevented this), joining the temporary table to supply default values. This requires use of the COALESCE function so that only unsupplied values are defaulted:
INSERT INTO realTable([a], [b])
SELECT COALESCE(I.[a], T.[a]),
       COALESCE(I.[b], T.[b])
FROM Inserted AS I,
aTempInserted AS T
Drop the temporary table
Some ideas:
If the legacy application is specifying column lists for INSERTs, and naming columns rather than using SELECT *, then can't you just bind a default to column c and let the application use your original (modified) table?
If there was any way that you could make the legacy app use a different view or table for its INSERTs than for SELECT or DELETE, you could put the required defaults on that table and use a regular after-trigger to move the new columns over to the real table.
How about leaving the original table alone and adding your additional columns in a separate table which has a 1-1 relationship with the original? Then create a view that combines these two tables and put appropriate instead-of trigger(s) on this new view to handle all data operations split across the two tables. I realize this has performance implications, but it might be the only way around the problem. This would be an ideal case for a materialized view, which would slow down updates but make the result perform exactly like a table for reads. (Materialized views lend themselves best to inner joins and require no aggregation. They also put schema locks on the source tables.)
I've run into a similar problem where I couldn't tell the difference between intentionally NULL values and skipped columns in an instead-of UPDATE trigger on a view. I eventually made an instead-of INSERT trigger on the view to convert inserts to updates (if the key already existed it was an update, otherwise it was an insert). Though this won't help you directly, it might spur some ideas for you or others.
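For what it's worth, a heavily hedged sketch of that insert-to-update idea, reusing the example view/table/column names from the question and assuming a is the key:
CREATE TRIGGER trg_legacyView_insert ON legacyView
INSTEAD OF INSERT
AS
BEGIN
    -- rows whose key already exists become updates
    UPDATE r
    SET    r.b = i.b
    FROM   realTable AS r
    INNER JOIN Inserted AS i ON i.a = r.a;
    -- genuinely new rows are inserted (add the lookup for c here, as in the original trigger)
    INSERT INTO realTable (a, b)
    SELECT i.a, i.b
    FROM   Inserted AS i
    WHERE  NOT EXISTS (SELECT 1 FROM realTable AS r WHERE r.a = i.a);
END;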
What about using something like this???:
insert into realtable (a, b, c)
select inserted.a, isnull(inserted.b, DEFAULT), computedC
from inserted