Current LUKS header backup is different from previous one - QNAP

I've been running a QNAP NAS with LUKS encryption for several years without any faults. Years ago I created a LUKS header backup and stored it in a safe place. Its MD5 checksum still matches, so I assume the backup is still valid. Today I created a second LUKS header backup (only for testing purposes; the NAS is still working fine). Now these two LUKS headers are different, even though the keys were not modified. How can this happen? How can I verify which header is valid?
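One way to check, assuming the cryptsetup CLI is available (e.g. via SSH on the NAS or on a Linux box), is to dump both backups and test the passphrase against each one, using the backup file as a detached header. In the sketch below, header-old.img, header-new.img, and /dev/sdX3 are placeholders for the two backup files and the encrypted data partition; --test-passphrase only verifies the key and doesn't map or modify anything.

```sh
# Compare the metadata of the two backups (UUID, key slots, digests).
cryptsetup luksDump header-old.img
cryptsetup luksDump header-new.img

# Verify the passphrase against each backup without activating anything;
# the backup file is used as a detached header for the data partition.
cryptsetup open --test-passphrase --header header-old.img /dev/sdX3
cryptsetup open --test-passphrase --header header-new.img /dev/sdX3
```

If both tests succeed, both headers hold a valid key slot for your passphrase; a byte-level difference alone doesn't necessarily mean one backup is bad, since header metadata can change without the keys being touched.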

Related

MS Access Backend data corruption

I have an Access database that was designed and developed back in 1997-99. All user interaction is through forms and reports; there is no end-user access to the back-end tables. It has worked flawlessly for the last 19-plus years. Both front and back end are still in .mdb format, but the front ends, which are local to each workstation and are replaced with a clean copy upon every login, are running on Access 2016. There are at most 6 users in the database at one time; usually there are only 4.
Starting about 90 days ago, the back end would randomly corrupt when a request to write a record was made. The error message is "Database is in an unrecognized format". We have replaced old workstations and the server that hosts the back end; in addition, we replaced the switch that all computers connect to. On the workstations that were not replaced, Office has been re-installed and all updates applied. The corruption cannot be reproduced consistently; it happens in different forms and from different workstations at random. After a corruption we have to delete the lock file and compact and repair the back end, and it will work just fine until the next corruption. The data that was being written is there, so there has been no data loss.
The back-end data was rebuilt last year to remove the random primary key values that were created when the database was replicated over a dial-up modem back when it was first developed. The replication functionality was turned off approximately 17 years ago. The back end has been rebuilt again from scratch: each table was created in a new database, all the indexes and relationships were rebuilt, and the data from each table was exported to a text file and then imported into the new database.
There were no changes made to the front end in the three or four weeks before this issue started happening. Just to ensure that it was not something in the front end, it was rolled back to a version that was working fine in February of this year; unfortunately, that did not resolve the issue.
None of these steps has resolved the back-end corruption, and if anything, the corruption is happening more frequently. The only thing that works at this point is to have one user at a time in the database; as soon as a second user opens the front end, the back end corrupts within a few minutes.
Any thoughts or ideas would be greatly appreciated.
Thank you
Steve Brewer
Update:
This is a known bug introduced by one of the Office/Windows updates. See http://www.devhut.net/2018/06/13/access-bu...ognized-format/ for all the details and workaround/solution.

Copying database to cloned server and keeping both copies?

I'm in a rather odd situation. At my work we have two MSSQL 2012 servers, one physically hosted here and one virtual. Through a long, frustrating set of circumstances, our migration plans fell apart, and we now have both servers with different data on each. I have to take a database, let's call it cms1, from the physical server and move it to the virtual server. However, I also have to make sure the virtual server's copy of cms1 remains intact, then run a script to move the changed tables from one to the other.
What I've tried so far is:
Make a full backup of the physical server's copy into cms1.bak, then copy that .bak file over to the virtual server.
Rename the virtual server's version of the database with "alter database cms1 modify name = cms1_old". Good so far.
Take the newly renamed cms1_old db offline, then restore from my .bak file. I get an error that the file for cms1 (NOT cms1_old) is in use.
I went to the actual location on disk and renamed the two files associated with cms1 to cms1_old. I closed SSMS, re-opened it, and tried the restore again. I got the same error: the file for cms1 (again, NOT the file for cms1_old) was in use.
(Update) I have since discovered detaching databases and tried that. When re-attaching after renaming the files to cms1_old, though, SSMS says the files cannot be found. I believe I've gotten every path correct, so I'm not sure why this is happening.
Was my mistake not taking the cms1 database offline BEFORE renaming it? If so, is there a way to fix this, or should I start again? This cms1 database is a test, not the real thing, but I want to get the procedure nailed down before working on our live database. How would I move a copy of cms1 from physical to virtual while keeping cms1 on the virtual server, so both can exist side by side while I move data from certain tables of one to the other? I really hope I'm making sense; I've been fighting with this for two hours straight. Thanks for any suggestions. I'm not too experienced in this sort of thing; I know SQL reasonably well, but dealing with physical DB files, backups, etc. is new to me.
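For what it's worth, the usual way to get both copies side by side is to skip the rename/offline dance entirely and restore the backup under a new database name, relocating its physical files in the same statement so they don't collide with the existing cms1 files. A sketch in T-SQL; the paths are placeholders, and the logical names in the MOVE clauses must match what RESTORE FILELISTONLY reports for your particular backup:

```sql
-- Inspect the logical file names inside the backup first.
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\cms1.bak';

-- Restore the physical server's copy under a new name, moving its
-- files to paths that don't clash with the existing cms1 database.
-- 'cms1' and 'cms1_log' are placeholder logical names from FILELISTONLY.
RESTORE DATABASE cms1_from_physical
FROM DISK = N'C:\Backups\cms1.bak'
WITH MOVE 'cms1'     TO N'D:\Data\cms1_from_physical.mdf',
     MOVE 'cms1_log' TO N'D:\Data\cms1_from_physical_log.ldf',
     RECOVERY;
```

The virtual server's cms1 stays untouched, and both databases then exist side by side for the table-by-table comparison script.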

One of two Lotus Domino servers doesn't display changes I've made

We've got two servers, both 8.5.3FP6, and I'm using the same Lotus client. My problem is that most of the changes I've made don't show up on one of the servers. This is our main server, and the app has a replica on the other one, where everything is fine. For example, I've made changes in a view: in the selection formula I filter on two columns. Then I want to use this view in a Custom Control, but the first server displays it without the filter, while the second server displays the Custom Control with the filter.
I don't know what went wrong, because when I first set the filter it was displayed, but five minutes later, when I changed the filter key, nothing changed on the first server. The change did replicate to the other server, though, and there it is fine.
Look at:
1. The connection document on the server (replication/routing and schedule, source/destination server).
2. The rights on the database for the replicating servers.
3. The advanced replication options on the database; maybe some design elements are unmarked there. To rule out the schedule itself, see the console sketch below.
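One quick test for point 1 is to force a replication of just this application from the Domino console of the server holding the new design, then check the schedule. ServerB/Acme and app.nsf are placeholders for the other server's name and the application's file path:

```
replicate ServerB/Acme app.nsf
show schedule ServerB/Acme
```

If the forced replication brings the design over but the scheduled one never does, the connection document is the likely culprit; if it reports nothing to replicate, look again at points 2 and 3.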
All the best

Recognize changes in files

My app checks a website for some files using a simple NSURLConnection request. Now I want to detect whether one of the files has changed, without downloading the file and comparing it.
I thought about MD5 checksums, but how can I do this without wasting traffic by downloading the whole file?
Do you have any ideas for this?
Check the timestamp on the file. That should be easier than using MD5 checksums. I don't know how your app or server API is implemented, but the idea is pretty straightforward:
On the server, create an API that lets you query when a file was last modified (the OS on the server should already keep track of modification timestamps).
When you download the file on your client, also store the timestamp (i.e. when the server thinks the file was last modified).
When checking whether to update a file, first ask the server for the file's timestamp and compare it with the one stored in your client app: if the server's timestamp is more recent, download the new file; otherwise do nothing.
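If the files are served by a plain web server, you may not even need a custom API: most servers already send a Last-Modified header, so a HEAD request returns the timestamp without the file body. A minimal sketch using URLSession (NSURLConnection's modern replacement); the URL and the UserDefaults key are placeholders:

```swift
import Foundation

// HEAD request: fetch only the response headers, not the file body.
// The URL and storage key are placeholders; real code needs error handling.
func fetchLastModified(from url: URL, completion: @escaping (String?) -> Void) {
    var request = URLRequest(url: url)
    request.httpMethod = "HEAD"
    URLSession.shared.dataTask(with: request) { _, response, _ in
        let http = response as? HTTPURLResponse
        // value(forHTTPHeaderField:) is case-insensitive (iOS 13+).
        completion(http?.value(forHTTPHeaderField: "Last-Modified"))
    }.resume()
}

let fileURL = URL(string: "https://example.com/files/data.bin")!  // placeholder
fetchLastModified(from: fileURL) { serverStamp in
    // Compare with the stamp saved when the file was last downloaded.
    let savedStamp = UserDefaults.standard.string(forKey: "data.bin.lastModified")
    if serverStamp == nil || serverStamp != savedStamp {
        // File changed (or we can't tell): download it again and
        // save the new Last-Modified value alongside it.
    }
}
```

The same idea is built into HTTP itself: sending If-Modified-Since (or an ETag with If-None-Match, which is effectively the server-side checksum you were after) makes the server answer 304 Not Modified instead of resending the body.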

Backup application with single instance functionality

I'm currently working on an application that backs up the files on your machine to a server (hosted by the company itself), so that you can recover your data after an HDD crash. I have implemented a single-instance feature across users.
Single instance: a file that has already been uploaded to the server won't be uploaded again. Any later upload of the exact same file is not an actual upload, just some database changes that link it to the previously uploaded file.
The issue arises when the same file (one that has not been uploaded before) is uploaded simultaneously by more than one user. At the start, no existing instance of the file is detected (the database is updated only after a successful upload/backup), so all of the uploads run at once. What is the best way to implement single instance in this situation?
I am thinking of letting all the instances upload as is, so more than one copy of the file will reside on the server for a while. Whenever another backup of the same file is taken later, I will remove all the previous copies and link them to a single one. This avoids making users wait on duplicate uploads and is less complex, at the cost of some disk space, and probably only temporarily (until the next upload of the same file).
Thanks for your thoughts in advance.
Calculate the hash (signature) of the file before the upload and store it in the DB.
Then start uploading.
If an identical file is marked for upload while the first one is still in progress (you will know because you already saved the hash), hold the second upload until the first finishes successfully, and then just link it. If the first upload fails, you can go to the second source and upload from there.
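A minimal sketch of that claim-before-upload idea in Swift. The FileStore protocol and its methods are hypothetical stand-ins for your server database; the one hard requirement is that claimHash is atomic (e.g. an INSERT guarded by a unique constraint on the hash column), so that exactly one client wins the race:

```swift
import Foundation
import CryptoKit

// Possible outcomes of trying to register a file's hash.
enum ClaimResult { case claimed, uploadInProgress, alreadyStored }

// Hypothetical server-side store; claimHash must be an atomic
// "insert if absent" so concurrent clients can't both win.
protocol FileStore {
    func claimHash(_ hash: String) -> ClaimResult
    func markComplete(_ hash: String)               // pending -> complete
    func linkExisting(_ hash: String, toUser user: String)
}

func backUp(fileURL: URL, user: String, store: FileStore) throws {
    // 1. Hash the file contents before any bytes are sent.
    let digest = SHA256.hash(data: try Data(contentsOf: fileURL))
    let hash = digest.map { String(format: "%02x", $0) }.joined()

    // 2. Try to claim the hash; exactly one client wins the race.
    switch store.claimHash(hash) {
    case .claimed:
        try upload(fileURL)          // we won: do the real upload
        store.markComplete(hash)     // a real system must also release
                                     // stale claims if the winner crashes
    case .uploadInProgress:
        // Another client is uploading the same content: wait for it,
        // then link; if that upload fails, re-claim and upload from here.
        break
    case .alreadyStored:
        store.linkExisting(hash, toUser: user)  // dedup: just a DB link
    }
}

func upload(_ url: URL) throws { /* actual transfer omitted */ }
```

Compared with the upload-everything-and-clean-up-later idea, this trades a little waiting on the second client for never storing duplicate bytes at all.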