SQLiteException: database disk image is malformed

I have a weird error with an SQLite database: you can download it here.
Every time I try to insert something into the table "CurrencyTransactions" it fails, because a new column called 7 appeared for no reason.
I tried to drop the table, but that fails too.
I ran PRAGMA integrity_check, but then I get this error.
Then I tried to export a .sql file and import it into a fresh new database, but:
1) If I import the structure only, it works fine and I don't have the 7 column anymore
2) If I then import the entries, it fails with this error:
It means something like: "Error in process #74: not an error"
Finally, I also tried this solution, but the new database it creates is empty.
What can I do? I really need to save the entries.

What I suggest is, in DB Browser for SQLite:
1. File/Export/Database to SQL file.
2. Select All (for all tables).
3. Other options are up to you, other than Export Everything.
4. Save the file.
5. Close the database.
6. Open a new database, e.g. nadekobotfix.db (could be the same name but a different location).
Note that steps 1-6 take a minute or so (just under 60k).
7. Do the hard work, according to the following:
You may need to remove/ignore the first and last lines (BEGIN TRANSACTION; and the subsequent COMMIT;)
You would probably not be able to run the generated SQL directly due to constraints (I tried this and it failed with constraint errors).
You need to copy sections from the file and run according to the hierarchy as imposed by the constraints (foreign keys). If you have CHECK constraints these may need to be considered. (no Triggers to worry about).
Running SELECT * FROM sqlite_master WHERE type = 'table' AND instr(sql,'CHECK'); returns nothing so there are no CHECK constraints.
Indexes could/should be left till last (as they are in the generated SQL).
A section would consist of a table's create statement along with the insert statements.
You may wish to create a spreadsheet of the tables (sections), marking them off when they have been done.
The following query could assist, as the tables marked NA could be done first:
SELECT CASE WHEN instr(sql,'FOREIGN KEY') THEN 'FK' ELSE 'NA' END AS fkey, name,sql
FROM sqlite_master
WHERE type = 'table' AND name NOT LIKE 'sqlite%' ORDER BY instr(sql,'FOREIGN KEY')
Alternatively, you could export individual tables from DB Browser for SQLite, marking them off when done.
You may wish to do an integrity_check at regular intervals.
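For what it's worth, the check itself is just a pragma you can run in the Execute SQL tab; a minimal sketch:
PRAGMA integrity_check; -- full check, returns the single row 'ok' when the copy is still clean
PRAGMA quick_check;     -- faster variant that skips some of the more expensive checks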
If this works (you might have to make adjustments to the SQL), you can then rename the old db and rename the new one (or move the old and copy the new, if using the same database name).
Note you may still have to determine how the corruption occurred.
You may wish to backup the database regularly.
You may wish to have a look at How To Corrupt An SQLite Database File
You may wish to heed :-
With few exceptions, analysis of a corrupt database does not normally
help to determine what went wrong. A better approach to avoiding
"danger", we have found, is to read and understand
https://www.sqlite.org/howtocorrupt.html
* in database main *
Page 10628: btreeInitPage() returns error code 11
This indicates that the page header is so badly corrupted that SQLite
cannot interpret this page at all. One possible reason: page 10628
has been zeroed. Can you look at a hex dump of that page? (Remember
that SQLite numbers pages beginning with 1, so the start of the page
is pgsz*10627 where pgsz is the page size.)
-- D. Richard Hipp
“btreeInitPage() returns error code 11”
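If you do want to follow the hex-dump suggestion, the offset arithmetic is simple; a sketch assuming a 4096-byte page size (check yours first):
PRAGMA page_size;    -- page size of the database, e.g. 4096
SELECT 4096 * 10627; -- byte offset at which page 10628 starts (pages are numbered from 1)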
Sample adjustment required
The Reminders table has a column called When, which is an SQL keyword (an inadvisable column name, IMO), so the generated SQL for the INSERT doesn't wrap the column name and you will get an error.
i.e. :-
CREATE TABLE IF NOT EXISTS `Reminders` (
`Id` INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
`ChannelId` INTEGER NOT NULL,
`IsPrivate` INTEGER NOT NULL,
`Message` TEXT,
`ServerId` INTEGER NOT NULL,
`UserId` INTEGER NOT NULL,
`When` TEXT NOT NULL,
`DateAdded` TEXT
);
INSERT INTO `Reminders` (Id,ChannelId,IsPrivate,Message,ServerId,UserId,When,DateAdded) VALUES (1270,367886754973351936,1,'Birthday Day',318127386367623170,367886754973351936,'2018-05-03 22:07:48.1860996','2018-03-18 22:07:48.186101'),
(1271,248278722656993281,1,'to remind Chanmi to remind Jayos to DeagleMomoka',318127386367623170,248278722656993281,'2018-05-05 22:08:58.4915565','2018-03-18 22:08:58.4915582'),
(1376,170240129414201344,1,'timely',318127386367623170,170240129414201344,'2018-03-29 09:00:29.4476776','2018-03-28 09:00:29.447679'),
(1377,373301201158144000,1,'timely',318127386367623170,373301201158144000,'2018-03-29 09:50:14.1631563','2018-03-28 09:50:14.1631577'),
(1378,248278722656993281,1,'timely',318127386367623170,248278722656993281,'2018-03-29 11:24:27.0250275','2018-03-28 11:24:27.025029'),
(1379,421433212716318721,1,'to timely',318127386367623170,421433212716318721,'2018-03-29 19:21:17.7465563','2018-03-28 19:21:17.7465584'),
(1380,346513954966863872,1,'t',318127386367623170,346513954966863872,'2018-03-29 19:42:23.4758798','2018-03-28 19:42:23.4758816'),
(1381,272735316002209792,1,'t!daily',318127386367623170,272735316002209792,'2018-03-29 21:01:47.5616218','2018-03-28 21:01:47.5616236'),
(1382,298272937243312132,1,'timely',318127386367623170,298272937243312132,'2018-03-29 23:18:02.8826873','2018-03-28 23:18:02.8826891'),
(1383,332340162774302720,1,'t',318127386367623170,332340162774302720,'2018-03-30 01:55:21.4704139','2018-03-29 01:55:21.4704156'),
(1384,367165474246754314,1,'tatyahaksodoeo',318127386367623170,367165474246754314,'2018-03-30 03:46:18.8805182','2018-03-29 03:46:18.8805196'),
(1385,290086674761908225,1,'timely',318127386367623170,290086674761908225,'2018-03-30 07:02:33.4115303','2018-03-29 07:02:33.4115321'),
(1386,168064128500367360,1,'timely',318127386367623170,168064128500367360,'2018-03-30 07:19:09.1915867','2018-03-29 07:19:09.1915885');
would have to be changed to wrap the offending keyword (square brackets, single or double quotes, or grave accents can be used to enclose it):-
.......INSERT INTO `Reminders` (Id,ChannelId,IsPrivate,Message,ServerId,UserId,[When],DateAdded) ......
Likewise table SelfAssignableRoles has the GROUP keyword as a column name.
Likewise table Permissionv2 and table StartupCommand have the INDEX keyword as a column name.
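As a sketch, any of the quoting styles SQLite accepts would do; using the first Reminders row from above:
-- square brackets around the keyword column
INSERT INTO `Reminders` (Id,ChannelId,IsPrivate,Message,ServerId,UserId,[When],DateAdded)
VALUES (1270,367886754973351936,1,'Birthday Day',318127386367623170,367886754973351936,'2018-05-03 22:07:48.1860996','2018-03-18 22:07:48.186101');
-- double quotes ("When") or grave accents (`When`) work just as well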
Potential Issue
As an exercise I've tried doing the above and have managed to get 67 out of the 71 tables (66 out of 70 of your tables as sqlite_sequence is automatically created).
However, there appears to be an issue between the Clubs table and the DiscordUser table: I believe that there is a circular reference between them. WaifuInfo and WaifuUpdates are reliant upon the DiscordUser table, and WaifuItem is reliant upon WaifuInfo, so none of the tables mentioned here have been successfully copied.
A word of warning: if you attempt to create Clubs and/or DiscordUser using the existing constraints, you may end up in a situation where one always has to exist.
e.g. if DiscordUser exists but Clubs doesn't then
DROP TABLE IF EXISTS `DiscordUser`;
results in :-
no such table: main.Clubs: DROP TABLE IF EXISTS `DiscordUser`;
If you then create a very basic Clubs table (no constraints) and retry the DROP, using:
CREATE TABLE IF NOT EXISTS `Clubs` (ID INTEGER PRIMARY KEY);
DROP TABLE IF EXISTS `DiscordUser`;
The result is good as per :-
Query executed successfully: DROP TABLE IF EXISTS `DiscordUser`; (took 1ms)
Now try to DROP Clubs using :-
--CREATE TABLE IF NOT EXISTS `Clubs` (ID INTEGER PRIMARY KEY);
--DROP TABLE IF EXISTS `DiscordUser`;
DROP TABLE IF EXISTS `Clubs`;
and you can't as DiscordUser doesn't exist as per :-
no such table: main.DiscordUser: DROP TABLE IF EXISTS `Clubs`;
I've tried closing the database in case it was a caching issue but the behaviour remains.
As such, I'd strongly suggest having a good look at the constraint usage and being sure the issues are corrected before trying to copy all of the tables (I guess there is a chance that this could be part of the cause of the corruption, though why/how is beyond me).
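To see exactly which foreign keys are involved before attempting the copy, the foreign_key_list pragma may help; a minimal sketch:
PRAGMA foreign_key_list(Clubs);       -- lists the FK(s) defined on Clubs (e.g. OwnerId -> DiscordUser)
PRAGMA foreign_key_list(DiscordUser); -- lists the FK(s) defined on DiscordUser (e.g. ClubId -> Clubs), confirming the circular reference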
P.S. The method I used for steps 1-6 was as described above. Then, for step 7:
1. Run the sqlite_master query from above, select all cells and copy, then drop the results into a spreadsheet (you could drop the sql column, as the create gets truncated unless you fiddle with the delimiters).
2. Open the exported file in your editor (I used Notepad++).
3. Open a new DB in DB Browser for SQLite (will refer to it as DBB from now on).
4. In DBB, in the Execute SQL tab, input PRAGMA integrity_check and run it to check.
5. Create a new tab (for the next SQL).
6. Switch to the spreadsheet and copy the first table name that isn't marked as done.
7. Switch to the editor and do a find on EXISTS copied_table_name.
8. Select the section (i.e. the CREATE statement along to and including the last row to be inserted; note this can be a pain for the larger tables, so it might be easier to create a separate export for those). Copy the selection to the clipboard.
9. Paste into the empty tab and run.
10. If OK, then:
- in DBB, click to create a new tab for the next table;
- switch to the spreadsheet and mark the table as DONE;
- go to 5.
11. If not OK, then:
- if you can fix the issue by altering the SQL (e.g. a column name needs enclosing/wrapping/quoting), fix the SQL and go to 9;
- if the issue is due to constraints, go to 5 but select the table causing the constraint.
OK, the issue with the DiscordUser/Clubs tables is that Clubs.OwnerId requires a DiscordUser, so clubs cannot be added without the relevant DiscordUser rows (Ids 1, 2, 7, 14 and 32). Some DiscordUsers are club members, so they require a club to exist.
What I have done is to load the DiscordUser rows for the club owners with their ClubId changed to NULL, load the Clubs, update the ClubIds of those DiscordUsers so they are members of the club that they were in before (i.e. undo the NULL), and then load the rest of the nearly 600 DiscordUser rows (excluding those already loaded).
Here's the SQL I used for that part (note that, except for the DiscordUser, Clubs and the 3 waifu tables, all other tables have been successfully created and loaded).
INSERT INTO `DiscordUser` (Id,AvatarId,Discriminator,UserId,DateAdded,Username,ClubId,LastLevelUp,NotifyOnLevelUp,LastXpGain,TotalXp,IsClubAdmin,CurrencyAmount) VALUES
-- ClubId was 6 changed to null
(1,'6d5212a0f5e862d57c8ffc6f254a2e85','1458',299779864045682689,'2017-10-07 18:02:04.8287878','Anubis',NULL,'2018-03-27 02:22:26.362966',0,'2017-11-17 01:19:14.0313957',7056,1,280),
-- Owns a club but not in a club
(2,'3b37e0f635706f81fdde2b6de9889283','9810',181200115539640321,'2017-10-07 18:04:39.767728','AnnaHime',NULL,'2018-01-02 02:27:38.8011863',0,'2017-11-16 01:29:49.0371488',429,0,360),
-- ClubId null was 3
(7,'612c67b6eb57d8806dcc92ed45b3a6d0','0396',177502331582021639,'2017-10-07 18:11:09.7830603','Tsuchimursu',NULL,'2018-03-28 17:45:53.7399883',0,'2017-11-17 15:53:59.084885',18156,1,4725),
-- ClubId null was 4
(14,'b2dd362171277337294de325bf92ad6a','3267',215597863441268737,'2017-10-07 18:45:54.8092675','LaLa☆Star',NULL,'2018-01-14 20:52:15.7531274',0,'2017-11-08 19:00:22.7778305',2061,1,286),
-- ClubId null was 5
(32,'667f4d802b977c4d4be974e35ae63c55','2593',251689019929395200,'2017-10-08 00:58:16.6089546','username',NULL,'2018-03-28 07:27:34.9348084',0,'2017-11-17 20:02:14.0283998',4704,1,1188),
-- ClubId was 2 changed to NULL
(91,'0adb399c9f2cd94370038e2452ab8c8d','6790',346513954966863872,'2017-10-13 05:48:51.7788964','mayoi',NULL,'2018-03-24 02:50:06.8970518',0,'2017-11-17 20:01:29.0692552',7635,1,515)
;
INSERT INTO `Clubs` (Id,DateAdded,Discrim,ImageUrl,MinimumLevelReq,Name,OwnerId,Xp,Description) VALUES
(2,'2017-11-14 07:39:57.5091592',1,'https://lh3.googleusercontent.com/_7WKFouxTx1fdFpnmmuykDAd5SoiiJOPzHdRmXKOmRRZhV5Ba4V_kZct5ooVjQ9BuzU=w300',5,'We ⤠waifus',91,40137,'Love your waifus short & tall, big & small, cute as dolls, we love ''em all!'),
(3,'2017-12-11 07:00:59.3762914',1,'',30,'Den of Faes',7,11607,NULL),
(4,'2017-12-11 07:03:59.093402',1,'',5,'Skeleton Enthusiasts',14,657,NULL),
(5,'2017-12-11 07:05:56.9111719',1,'',5,'Saki''s Juice',32,2610,NULL),
(6,'2017-12-22 04:46:24.7271709',1,'',5,'nap pile',1,24870,'For the sleeping beauties and the wandering insomniacs who enjoy a good night sleep.')
;
UPDATE `DiscordUser` SET ClubId = 6 WHERE Id=1;
UPDATE `DiscordUser` SET ClubId = 3 WHERE Id=7;
UPDATE `DiscordUser` SET ClubId = 4 WHERE Id=14;
UPDATE `DiscordUser` SET ClubId = 5 WHERE Id=32;
UPDATE `DiscordUser` SET ClubId = 2 WHERE Id=91;
-- LOAD Remaining DiscordUser rows (note incomplete)
INSERT INTO `DiscordUser` (Id,AvatarId,Discriminator,UserId,DateAdded,Username,ClubId,LastLevelUp,NotifyOnLevelUp,LastXpGain,TotalXp,IsClubAdmin,CurrencyAmount) VALUES
--(1,'6d5212a0f5e862d57c8ffc6f254a2e85','1458',299779864045682689,'2017-10-07 18:02:04.8287878','Anubis',6,'2018-03-27 02:22:26.362966',0,'2017-11-17 01:19:14.0313957',7056,1,280),
--(2,'3b37e0f635706f81fdde2b6de9889283','9810',181200115539640321,'2017-10-07 18:04:39.767728','AnnaHime',NULL,'2018-01-02 02:27:38.8011863',0,'2017-11-16 01:29:49.0371488',429,0,360),
(3,'a3cd92d397ad357834d0e6c9f10bfc59','0429',145356302347010048,'2017-10-07 18:04:49.786657','Rebel Lucy',NULL,'2018-03-26 12:55:21.1149964',0,'2017-11-17 22:21:24.0263741',6876,0,3600),
(4,'7225dccaab1c93896657a61e18595378','5286',84689434536050688,'2017-10-07 18:05:44.765554','scarletflame234',NULL,'2018-03-28 22:56:28.7427437',0,'2017-11-17 23:21:41.4446535',13368,0,288),
(5,'c1316bc0673f4a2709b3ce550ed54395','0760',303279191116480514,'2017-10-07 18:06:39.7664015','zachary',NULL,'2018-03-02 03:48:43.4817755',0,'2017-11-17 18:44:14.1082867',210,0,50),
(6,'2ed95eae7c3088c46b23e71578dacc42','8801',161369834314137601,'2017-10-07 18:07:04.7672808','Kou',NULL,'2018-03-07 06:24:32.3405246',0,'2017-11-17 23:20:00.0648699',2640,0,55),
--(7,'612c67b6eb57d8806dcc92ed45b3a6d0','0396',177502331582021639,'2017-10-07 18:11:09.7830603','Tsuchimursu',3,'2018-03-28 17:45:53.7399883',0,'2017-11-17 15:53:59.084885',18156,1,4725),
(8,'5b1d239935ab4dd6d3eee98954601d52','9859',179093512610906113,'2017-10-07 18:13:54.7939334','TheCorty',NULL,'2017-11-12 12:07:36.4752178',0,'2017-11-12 23:47:26.4744132',2460,0,205), ...........
NOTE: The SQL from -- LOAD Remaining DiscordUser rows onward will not work as-is; it is only intended to show how Ids 1, 2 and 7 have been commented out. Rows 14, 32 and 91 should also be commented out, as they have already been loaded, and the other close to 600 rows should be included.
Note that I've now also loaded the outstanding 3 waifu tables, so all data can be retrieved (assuming that none has been lost due to the corruption). PRAGMA integrity_check; returns OK.

Related

sybase sql anywhere 11 query results are incorrect

I have the following table, which has more than 13,000 rows.
CREATE TABLE "DBA"."mytable" (
"c1" numeric(8,0) NOT NULL,
"c2" numeric(2,0) NOT NULL,
"c3" numeric(4,0) NOT NULL,
"c4" numeric(8,0) NOT NULL,
PRIMARY KEY ("c1","c2","c3","c4")
)
The following query returns no rows when there are at least two rows that happen to meet the condition:
select * from mytable A where A.c1=229 and A.c3=1
More strangely, a slightly modified (but same) version of the query returns 2 rows as expected:
select * from mytable A where A.c1=229 and A.c3+1=2
Having suspected there is some physical corruption in the db, I have created a new db, created the table with the above code and loaded values from an unload file. The results are the same.
I know that a primary key over all four columns is not good design, but this shouldn't be an excuse for the DB to return wrong results.
By playing with the column creation order and dropping the PK the problem disappears and reappears.
Does anybody know a fix or a workaround for this problem?
The DB is on a Windows 8 64-bit system, the locale is Turkish.
Thanks
I remember, from long ago when I used Sybase, that a table's data would sometimes get silently corrupted.
The only solution we found was to re-create the table and insert the data again: copy the data to another table, then drop and recreate your table, and finally copy the data back.
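A hedged sketch of that copy-out / recreate / copy-back approach, reusing the table definition from the question (mytable_copy is a made-up name, and the syntax may need adjusting for your SQL Anywhere version):
SELECT * INTO "DBA"."mytable_copy" FROM "DBA"."mytable";  -- copy the data out
DROP TABLE "DBA"."mytable";
CREATE TABLE "DBA"."mytable" (
"c1" numeric(8,0) NOT NULL,
"c2" numeric(2,0) NOT NULL,
"c3" numeric(4,0) NOT NULL,
"c4" numeric(8,0) NOT NULL,
PRIMARY KEY ("c1","c2","c3","c4")
);
INSERT INTO "DBA"."mytable" SELECT * FROM "DBA"."mytable_copy";  -- copy the data back
DROP TABLE "DBA"."mytable_copy";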

How to ignore some rows while importing from a tab separated text file in PostgreSQL?

I have a 30 GB tab-separated text file which has more than 100 million rows. When I import this text file into a PostgreSQL table using the \copy command, some rows cause errors. How can I ignore those rows, and also keep a record of the ignored rows, while importing into PostgreSQL?
I connect to my machine by SSH, so I cannot use pgAdmin!
It's very hard to edit the text file before importing because so many different rows have different problems. If there were a way to check the rows one by one before importing and then run the \copy command for individual rows, that would be helpful.
Below is the code which generates the table:
CREATE TABLE Papers(
Paper_ID CHARACTER(8) PRIMARY KEY,
Original_paper_title TEXT,
Normalized_paper_title TEXT,
Paper_publish_year INTEGER,
Paper_publish_date DATE,
Paper_Document_Object_Identifier TEXT,
Original_venue_name TEXT,
Normalized_venue_name TEXT,
Journal_ID_mapped_to_venue_name CHARACTER(8),
Conference_ID_mapped_to_venue_name CHARACTER(8),
Paper_rank BIGINT,
FOREIGN KEY(Journal_ID_mapped_to_venue_name) REFERENCES Journals(Journal_ID),
FOREIGN KEY(Conference_ID_mapped_to_venue_name) REFERENCES Conferences(Conference_ID));
Don't load directly to your destination table but to a single column staging table.
create table Papers_stg (rec text);
Once you have all the data loaded you can then do verifications on the data using SQL.
Find records with wrong number of fields:
select rec
from Papers_stg
where cardinality(string_to_array(rec, E'\t')) <> 11
Create a table with all text fields
create table Papers_fields_text
as
select fields[1] as Paper_ID
,fields[2] as Original_paper_title
,fields[3] as Normalized_paper_title
,fields[4] as Paper_publish_year
,fields[5] as Paper_publish_date
,fields[6] as Paper_Document_Object_Identifier
,fields[7] as Original_venue_name
,fields[8] as Normalized_venue_name
,fields[9] as Journal_ID_mapped_to_venue_name
,fields[10] as Conference_ID_mapped_to_venue_name
,fields[11] as Paper_rank
from (select string_to_array(rec, E'\t') as fields
from Papers_stg
) t
where cardinality(fields) = 11
For fields conversion checks you might want to use the concept described here
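Once the checks pass, the final load from the all-text staging table into Papers could look roughly like this (a hedged sketch; the casts will still fail on any rows with bad values, so run the checks first):
INSERT INTO Papers
SELECT Paper_ID::character(8)
,Original_paper_title
,Normalized_paper_title
,Paper_publish_year::integer
,Paper_publish_date::date
,Paper_Document_Object_Identifier
,Original_venue_name
,Normalized_venue_name
,nullif(Journal_ID_mapped_to_venue_name,'')::character(8)
,nullif(Conference_ID_mapped_to_venue_name,'')::character(8)
,Paper_rank::bigint
FROM Papers_fields_text;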
Your only option is to use row-by-row processing. Write a shell script (for example) that loops through the input file and sends each row to \copy, checks the execution result, and then writes failed rows to some "err_input.txt".
More complicated logic can increase processing speed: use "portions" (batches) instead of row-by-row, and fall back to row-by-row logic on the failed segments.
Consider using pgloader
Check BATCHES AND RETRY BEHAVIOUR
You could use a BEFORE INSERT trigger and check your criteria. If the record fails the check, write a log (or an entry into a separate table) and return NULL. You could even correct some values, if possible and feasible.
Of course, if checking criteria requires other queries (like finding duplicate keys etc.), you might get a performance issue. But I'm not sure which kind of "different problems in different rows" you mean...
See also an answer on StackExchange Database Administrators, and the following example taken from Bartosz Dmytrak at the PostgreSQL forum:
CREATE OR REPLACE FUNCTION "myschema"."checkTriggerFunction" ()
RETURNS TRIGGER
AS
$BODY$
BEGIN
IF EXISTS (SELECT 1 FROM "myschema".mytable WHERE "MyKey" = NEW."MyKey")
THEN
RETURN NULL;
ELSE
RETURN NEW;
END IF;
END;
$BODY$
LANGUAGE plpgsql;
and trigger:
CREATE TRIGGER "checkTrigger"
BEFORE INSERT
ON "myschema".mytable
FOR EACH ROW
EXECUTE PROCEDURE "myschema"."checkTriggerFunction"();

Adding Row in existing table (SQL Server 2005)

I want to add another row to my existing table, and I'm a bit hesitant about whether I'm doing the right thing because it might skew the database. I have my script below and would like to hear your thoughts about it.
I want to add another row for 'Jane' in the table, which will be 'SKATING" in the ACT column.
Table: [Emp_table].[ACT].[LIST_EMP]
My script is:
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
VALUES
('REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE')
Will this do the trick?
Your statement looks ok. If the database has a problem with it (for example, due to a foreign key constraint violation), it will reject the statement.
If any of the fields in your table are numeric (and not varchar or char), just remove the quotes around the corresponding field. For example, if emp_cod and line_no are int, insert the following values instead:
('REG','EMP',45233,'2016-06-20 00:00:00:00',2,'SKATING','JANE')
Inserting records into a database has always been the most common reason why I've lost a lot of the hair on my head!
SQL is great when it comes to SELECTs or even UPDATEs, but when it comes to INSERTs it's like someone from another planet came into the SQL standards committee and managed to get their way of doing it implemented into the final SQL standard!
If your table does not have an automatic primary key that automatically gets generated on every insert, then you have to code it yourself to manage avoiding duplicates.
Start by writing a normal SELECT to see if the record(s) you're going to add don't already exist. But as Robert implied, your table may not have a primary key because it looks like a LOG table to me. So insert away!
If it does require a unique record every time, then I strongly suggest you create a primary key for the table, either an auto-generated one or a combination of your existing columns.
Assuming the first five combined columns make a unique key, this select will determine if your data you're inserting does not already exist...
SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND [EMP_COD] = wsEmpCod AND [DATE] = wsDate AND [LINE_NO] = wsLineno
You will have to replace the wsXXX placeholders with direct values or have them DECLAREd earlier in your script.
If you ran this alone and received a value of 1 or more, then the data already exists in your table, at least for those first 5 columns. A true duplicate test would require you to test EVERY column in your table, but it should give you an idea.
In the INSERT, to do it all as one statement, you can do this ...
INSERT INTO [Emp_table].[ACT].[LIST_EMP]
([ENTITY],[TYPE],[EMP_COD],[DATE],[LINE_NO],[ACT],[NAME])
SELECT 'REG','EMP','45233','2016-06-20 00:00:00:00','2','SKATING','JANE'
WHERE (SELECT COUNT(*) AS FoundRec FROM [Emp_table].[ACT].[LIST_EMP]
WHERE [ENTITY] = wsEntity AND [TYPE] = wsType AND
[EMP_COD] = wsEmpCod AND [DATE] = wsDate AND
[LINE_NO] = wsLineno) = 0
Just replace the wsXXX variables with the values you want to insert.
I hope that made sense.

Postgresql - change the size of a varchar column to lower length

I have a question about the ALTER TABLE command on a really large table (almost 30 million rows).
One of its columns is a varchar(255) and I would like to resize it to a varchar(40).
Basically, I would like to change my column by running the following command:
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE varchar(40);
I have no problem if the process is very long, but it seems my table is no longer readable during the ALTER TABLE command.
Is there a smarter way? Maybe add a new column, copy values from the old column, drop the old column and finally rename the new one?
Note: I use PostgreSQL 9.0.
In PostgreSQL 9.1 there is an easier way
http://www.postgresql.org/message-id/162867790801110710g3c686010qcdd852e721e7a559#mail.gmail.com
CREATE TABLE foog(a varchar(10));
ALTER TABLE foog ALTER COLUMN a TYPE varchar(30);
postgres=# \d foog
Table "public.foog"
Column | Type | Modifiers
--------+-----------------------+-----------
a | character varying(30) |
There's a description of how to do this at Resize a column in a PostgreSQL table without changing data. You have to hack the database catalog data. The only way to do this officially is with ALTER TABLE, and as you've noted that change will lock and rewrite the entire table while it's running.
Make sure you read the Character Types section of the docs before changing this. There are all sorts of weird cases to be aware of here. The length check is done when values are stored into the rows. If you hack a lower limit in there, that will not reduce the size of existing values at all. You would be wise to do a scan over the whole table looking for rows where the length of the field is >40 characters after making the change. You'll need to figure out how to truncate those manually, so you're back to taking some locks just on the oversize ones, because if someone tries to update anything on that row it's going to be rejected as too big at the point it goes to store the new version of the row. Hilarity ensues for the user.
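That scan is a one-liner; a sketch using the column and table names from the question:
SELECT count(*) FROM mytable WHERE char_length(mycolumn) > 40; -- rows that would now be over the limit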
VARCHAR is a terrible type that exists in PostgreSQL only to comply with its associated terrible part of the SQL standard. If you don't care about multi-database compatibility, consider storing your data as TEXT and adding a constraint to limit its length. Constraints you can change around without this table lock/rewrite problem, and they can do more integrity checking than just the weak length check.
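A minimal sketch of that TEXT-plus-constraint approach (the constraint name is made up):
ALTER TABLE mytable ALTER COLUMN mycolumn TYPE text;
ALTER TABLE mytable ADD CONSTRAINT mycolumn_max_40 CHECK (char_length(mycolumn) <= 40);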
Ok, I'm probably late to the party, BUT...
THERE'S NO NEED TO RESIZE THE COLUMN IN YOUR CASE!
Postgres, unlike some other databases, is smart enough to only use just enough space to fit the string (even using compression for longer strings), so even if your column is declared as VARCHAR(255) - if you store 40-character strings in the column, the space usage will be 40 bytes + 1 byte of overhead.
The storage requirement for a short string (up to 126 bytes) is 1 byte
plus the actual string, which includes the space padding in the case
of character. Longer strings have 4 bytes of overhead instead of 1.
Long strings are compressed by the system automatically, so the
physical requirement on disk might be less. Very long values are also
stored in background tables so that they do not interfere with rapid
access to shorter column values.
(http://www.postgresql.org/docs/9.0/interactive/datatype-character.html)
The size specification in VARCHAR is only used to check the size of the values which are inserted, it does not affect the disk layout. In fact, VARCHAR and TEXT fields are stored in the same way in Postgres.
I was facing the same problem trying to truncate a VARCHAR from 32 to 8 and getting the ERROR: value too long for type character varying(8). I want to stay as close to SQL as possible because I'm using a self-made JPA-like structure that we might have to switch to different DBMS according to customer's choices (PostgreSQL being the default one). Hence, I don't want to use the trick of altering System tables.
I ended up using the USING clause of ALTER TABLE:
ALTER TABLE "MY_TABLE" ALTER COLUMN "MyColumn" TYPE varchar(8)
USING substr("MyColumn", 1, 8)
As @raylu noted, ALTER acquires an exclusive lock on the table, so all other operations will be delayed until it completes.
If you put the ALTER into a transaction, the table should not be locked:
BEGIN;
ALTER TABLE "public"."mytable" ALTER COLUMN "mycolumn" TYPE varchar(40);
COMMIT;
This worked blazing fast for me: a few seconds on a table with more than 400k rows.
Adding a new column and replacing the old one with it worked for me on Redshift (PostgreSQL); refer to this link for more details: https://gist.github.com/mmasashi/7107430
BEGIN;
LOCK users;
ALTER TABLE users ADD COLUMN name_new varchar(512) DEFAULT NULL;
UPDATE users SET name_new = name;
ALTER TABLE users DROP name;
ALTER TABLE users RENAME name_new TO name;
END;
Here's the cache of the page described by Greg Smith. In case that dies as well, the catalog update looks like this:
UPDATE pg_attribute SET atttypmod = 35+4
WHERE attrelid = 'TABLE1'::regclass
AND attname = 'COL1';
Where your table is TABLE1, the column is COL1 and you want to set it to 35 characters (the +4 is needed for legacy purposes according to the link, possibly the overhead referred to by A.H. in the comments).
Try running the following ALTER TABLE:
ALTER TABLE public.users
ALTER COLUMN "password" TYPE varchar(300)
USING "password"::varchar;
I have found a very easy way to change the size: the @Size(min = 1, max = 50) annotation, which is part of javax.validation.constraints, i.e.
import javax.validation.constraints.Size;
@Size(min = 1, max = 50)
private String country;
When executing this in Hibernate, you get the following in pgAdmin III:
CREATE TABLE address
(
.....
country character varying(50),
.....
)

A trigger to find the sum of one field in a different table and error if it's over a certain value in oracle

I have two tables
moduleprogress which contains fields:
studentid
modulecode
moduleyear
modules which contains fields:
modulecode
credits
I need a trigger to run when the user is attempting to insert or update data in the moduleprogress table.
The trigger needs to:
look at the studentid that the user has input and look at all modules that they have taken in moduleyear "1".
take the modulecode the user input and look at the modules table and find the sum of the credits field for all these modules (each module is worth 10 or 20 credits).
if the value is above 120 (yearly credit limit) then it needs to error; if not, input is ok.
Does this make sense? Is this possible?
@a_horse_with_no_name
This looks like it will work, but I will only be using the database to input data manually, so it needs to error on input. I'm trying to get a trigger similar to this to solve the problem (the trigger below doesn't work); ignore that "UOS_" is before everything, it just helps me with my database and other functions.
CREATE OR REPLACE TRIGGER "UOS_TESTINGS"
BEFORE UPDATE OR INSERT ON UOS_MODULE_PROGRESS
REFERENCING NEW AS NEW OLD AS OLD
DECLARE
MODULECREDITS INTEGER;
BEGIN
SELECT
m.UOS_CREDITS,
mp.UOS_MODULE_YEAR,
SUM(m.UOS_CREDITS)
INTO MODULECREDITS
FROM UOS_MODULE_PROGRESS mp JOIN UOS_MODULES m
ON m.UOS_MODULE_CODE = mp.UOS_MODULE_CODE
WHERE mp.UOS_MODULE_YEAR = 1;
IF MODULECREDITS >= 120 THEN
RAISE_APPLICATION_ERROR(-20000, 'Students are only allowed to take upto 120 credits per year');
END IF;
END;
I get the error message :
8 23 PL/SQL: ORA-00947: not enough values
4 1 PL/SQL: SQL Statement ignored
I'm not sure I understand your description, but the way I understand it, this can be solved using a materialized view, which might give better transactional behaviour than the trigger:
CREATE MATERIALIZED VIEW LOG
ON moduleprogress WITH ROWID (modulecode, studentid, moduleyear)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW LOG
ON modules with rowid (modulecode, credits)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW mv_module_credits
REFRESH FAST ON COMMIT WITH ROWID
AS
SELECT pr.studentid,
SUM(m.credits) AS total_credits
FROM moduleprogress pr
JOIN modules m ON pr.modulecode = m.modulecode
WHERE pr.moduleyear = 1
GROUP BY pr.studentid;
ALTER TABLE mv_module_credits
ADD CONSTRAINT check_total_credits CHECK (total_credits <= 120)
But depending on the size of the table, this might be slower than a pure trigger-based solution.
The only drawback of this solution is that the error will be thrown at commit time, not when the insert happens (because the MV is only refreshed on commit, and the check constraint is evaluated then).
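A hedged sketch of what that deferred failure looks like in practice (the student id and module code are made-up values):
INSERT INTO moduleprogress (studentid, modulecode, moduleyear)
VALUES (1234, 'XYZ101', 1); -- accepted immediately, even if it pushes the year-1 total over 120 credits
COMMIT;                     -- the MV refreshes here, and ORA-02290 (check constraint violated) is raised if total_credits exceeds 120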