A few weeks ago my computer stopped working, and I lost a lot of important data stored in a Postgres DB (I should have backed it up, but I didn't... sigh).
I was able to access the hard drive and extract the program files for Postgres. How can I try to restore the DB?
Here is what I have tried so far:
Download Postgres (the same version as the lost DB) on a separate computer and swap out the program files with the ones I am trying to recover. I am on Windows, so I stop the service -> swap out all the files -> restart the service. Restarting the service is never successful.
Use pgAdmin -> create a new DB (making sure the path for the binary files is correct) -> restore -> here I get stuck figuring out which files are the correct ones.
Related
Due to a lightning strike on my house, my old computer was recently fried. But I bought a new one and, much to my delight, the C: SSD from the old machine was still working after I moved it into the new one, albeit now as the D: drive.
Now I'm getting ready to install PostgreSQL and would like to be able to access the old database that resides on the D: drive. I am stumped as to how to proceed.
There does not seem to be any way to tell a running PostgreSQL instance, "Hey, look over there on the D: drive, that's a database you can use." There is a CREATE DATABASE and a DROP DATABASE, but no "use this database". I should say I was running version 14 on the old machine and could certainly install that same version again on the new one before upgrading, if there were a way to add the old database to its catalogue.
There is no database dump/conversion utility that works without going through a running PostgreSQL server instance, so I see no way to convert the old data out of its proprietary format and reload it into the new PostgreSQL instance.
The only thought that occurs to me is to install a version as close to the old version 14 as possible, then CREATE a second database somewhere new (perhaps on the D: drive), then stop the PostgreSQL server instance, copy the old data over the top of the new data (with all subdirectories), then restart the server and hope for the best. Sounds like a 50-50 proposition at best.
Anyone else have any other thoughts/ideas?
So, just in case someone else has this problem and finds this question, here is what I found.
The installer for PostgreSQL has a prompt for which data directory to use. After making a backup copy of the data, I told it to use D:\Program Files\PostgreSQL\14\data, and it recognized that this was an existing PostgreSQL data directory and preserved all my tables.
As an experiment afterward, I copied the backup data back into the data directory (after stopping the DB), restarted the DB, and everything was fine after PostgreSQL complained a little about the log file locations. I would say this can work as long as you are running the same version of PostgreSQL that last worked with the database on your old computer.
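If you want to double-check that the new install really picked up the old cluster, you can ask the server where its data directory is and which databases it sees. A minimal sketch (Python with psycopg2; the connection details are placeholders for whatever you chose during the install):

import psycopg2  # pip install psycopg2-binary

# Placeholder connection details; use the port and superuser password you set in the installer.
conn = psycopg2.connect(host="localhost", port=5432, dbname="postgres",
                        user="postgres", password="your_password")
cur = conn.cursor()
cur.execute("SHOW data_directory;")
print("data_directory:", cur.fetchone()[0])   # should point at the old D:\...\14\data folder
cur.execute("SELECT datname FROM pg_database WHERE NOT datistemplate;")
print("databases:", [row[0] for row in cur.fetchall()])
conn.close()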
I have spent 2 days chasing this one round and round, and I have tried several solutions (detailed below).
Problem: when retrieving geographical data from a Microsoft SQL Server database I get the following error:
DBServer routine OpenDataSet has failed with error DataReader.GetFieldType(3) returned null.
From what I have read, this is typically because the project cannot load or access Microsoft.SqlServer.Types, so it can't interpret the returned data properly.
What I have tried:
Removing and re-adding the reference.
Setting the assembly to Copy Local.
Removing and reinstalling via NuGet (v14.0).
Referencing said assembly in the web.config.
Adding a utility class in Global.asax, then calling it on Application_Start to load the other dependent files:
LoadNativeAssembly(nativeBinaryPath, "msvcr120.dll")
LoadNativeAssembly(nativeBinaryPath, "SqlServerSpatial140.dll")
The error happens whether I am running locally (not such a key issue) or on an Azure VPS (SQL Server Web Edition).
The stored procedure I am calling to return the data works fine. (In fact, this code is a lift-and-shift project; the old VPS works fine if we fire it up, so it is most likely a configuration issue and all of the above is probably wasted effort. But the original developer is not contactable, nor are there any notes on how this was made to work.)
I'm trying to connect to and run queries on two large, locally-stored SQL databases with file extensions like so:
filename.sql.zstd.part
filename2.sql.zstd
My preference is to use the RMySQL package; however, I am finding it hard to find documentation on a) how to access locally stored SQL files, and b) how to deal with the .zstd extension.
This may be very basic but help is appreciated!
It seems like you are having trouble understanding the file extensions.
filename.sql.zstd.part
.part usually means you are downloading a file from the internet but the download isn't complete yet (i.e. the download is in progress or has been stopped).
So to get from filename.sql.zstd.part to filename.sql.zstd you need to complete the download.
.zstd means it is a compressed file (to save disk space). You need a decompression program to get from filename.sql.zstd to filename.sql.
The compression algorithm used is called Zstandard, so you need a decompressor specifically for this format. Look at https://facebook.github.io/zstd/ for such a program.
There was also once an R package for this, but it has been archived. You could still download an older version from the archive (https://cran.r-project.org/web/packages/zstdr/index.html).
Finally, filename.sql is not actually a database. An .sql file usually contains SQL statements for creating and modifying database structures. You would have to install a database server, e.g. MariaDB, and then import this .sql file to actually have the data in a database on your computer. Only then would you access that database from R.
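To illustrate the decompression step, here is a minimal sketch using Python's zstandard package (the file names are the ones from your question, and it assumes the .part download has been completed first):

import zstandard  # pip install zstandard

# Decompress filename.sql.zstd into filename.sql
dctx = zstandard.ZstdDecompressor()
with open("filename.sql.zstd", "rb") as compressed, open("filename.sql", "wb") as plain:
    dctx.copy_stream(compressed, plain)

After that, filename.sql can be imported into MariaDB/MySQL (for example with the mysql command-line client), and the resulting database queried from R via RMySQL.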
I'm in a rather odd situation. At my work, we have two MSSQL 2012 servers, one physically hosted here, one virtual. Through a long, frustrating set of circumstances, our migration plans fell apart and we now have both servers with different data on each. I have to take a database, let's call it cms1, from the physical server and move it to the virtual server. However, I have to also make sure the virtual server's copy of cms1 remains intact, then run a script to move the changed tables from one to the other.
What I've tried so far is:
Make a full backup of the physical server's copy into cms1.bak, then copy that .bak file over to the virtual server.
Rename the virtual server's version of the database with "alter database cms1 modify name = cms1_old". Good so far.
Take the newly renamed cms1_old DB offline, then restore from my .bak file. I get an error that the file for cms1 (NOT cms1_old) is in use.
I went to the actual location on disk and renamed the two files associated with cms1 to be cms1_old. I closed SSMS and re-opened it, and tried the restore again. I got the same error, that the file for cms1 (again, NOT the file for cms1_old) was in use.
(update) I have since discovered detaching databases and tried to use that. When re-attaching after renaming the files for cms1_old, though, SSMS says that the files cannot be found. I believe I've gotten every path correct, so I'm not sure why this is happening.
Was my mistake in not taking the cms1 database offline BEFORE renaming it? If so, is there a way to fix this, or should I start again? This cms1 database is a test, not the real thing, but I want to get the procedure nailed down before working on our live database. How would I move a copy of cms1 from the physical server to the virtual one, keeping the existing cms1 on the virtual server, so both can exist side by side while I move data from certain tables of one to the other? I really hope I'm making sense; I've been fighting with this for two hours straight. Thanks for any suggestions. I'm not too experienced in this sort of thing; I know SQL reasonably well, but dealing with physical DB files, backups, etc. is new to me.
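In case it helps to see it concretely, here is the shape of the restore I have been attempting, written as a Python/pyodbc sketch just to have it in scriptable form. The server name, backup path, and the WITH MOVE targets are placeholders/guesses on my part; I gather the real logical file names would have to come from RESTORE FILELISTONLY against the .bak file.

import pyodbc  # pip install pyodbc

# autocommit is required because RESTORE cannot run inside a user transaction
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=virtual-server;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Restore the physical server's backup as cms1, moving the data/log files to new
# physical file names so they do not collide with the files still used by cms1_old.
# The logical names ('cms1' and 'cms1_log') are guesses; check them with
# RESTORE FILELISTONLY FROM DISK = 'C:\Backups\cms1.bak'.
cur.execute(r"""
RESTORE DATABASE cms1
FROM DISK = 'C:\Backups\cms1.bak'
WITH MOVE 'cms1'     TO 'C:\SQLData\cms1_from_physical.mdf',
     MOVE 'cms1_log' TO 'C:\SQLData\cms1_from_physical_log.ldf'
""")
while cur.nextset():   # drain informational messages so the restore runs to completion
    pass
conn.close()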
I make a copy of an .mdb database (and its other partition) every night, and test it by opening it to see if it works.
By "make a copy" I mean:
I kick all the users out of the database who are connected via RDP (not automated...)
Rename both backend files...and then proceed to make a copy of the files. (automated...)
And by "see if it works" I mean:
Relink a frontend file (.mde) to both files (this is automated)
Open it (and its other partition) with a frontend (.mde) and workgroup security file (.mdw) on my local machine to see if it works. (This is not automated, and is the part I am focusing on here...)
There are only two tables in the other partition, so I run the part of the frontend file I know uses that partition to test if the backup is going to work.
Would connecting to the backup of the files and doing a query on some table in both partitions be enough to prove that the backup is good without actually looking at it with human eyes?
I have also automated the process of compacting the live database, but I don't feel safe automating this part until I have verified that the backups indeed work.
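For reference, the kind of automated check I have in mind looks something like this (a sketch in Python with pyodbc; the paths, table names, and workgroup credentials are placeholders for my real setup, and it assumes the Access ODBC driver is installed with matching bitness):

import pyodbc  # pip install pyodbc

# Placeholder backend copies and a known table in each partition
BACKENDS = {
    r"C:\Backups\backend_copy.mdb": "tblSomething",
    r"C:\Backups\backend_other_copy.mdb": "tblSomethingElse",
}
MDW = r"C:\Backups\security.mdw"  # workgroup security file

for path, table in BACKENDS.items():
    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        + f"DBQ={path};SystemDB={MDW};UID=someuser;PWD=somepassword;"
    )
    cur = conn.cursor()
    count = cur.execute(f"SELECT COUNT(*) FROM [{table}]").fetchone()[0]
    print(f"{path}: {table} has {count} rows")
    conn.close()

That would at least confirm each copy opens, the .mdw security works, and a table in each partition is readable, which is the step I am trying to automate.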
Also, before I get any posts along the lines of "Why are you still using Access?", let me just state that I don't get to make those decisions, and this database was here a long time before I got here.
(Please note: if you feel I have posted this on SO in error, please feel free to migrate it to DBA SE or Server Fault.)