InfluxDB v0.11 restore hangs forever

I am doing a POC on InfluxDB 0.11. Below are the steps I perform to back up and restore.
#backup metadata
--> influxd backup /tmp/backup
[Timestamp] backing up metastore to /opt/backup/meta.00
[Timestamp] backup complete
#backing up a particular DB - mydb
--> influxd backup -database mydb /tmp/mydb/
[Timestamp] backing up db=mydb since 0001-01-01 00:00:00 +0000 UTC
[Timestamp] backing up metastore to /tmp/mydb/meta.00
[Timestamp] backing up db=mydb rp=default shard=12 to /tmp/mydb/mydb.default.00012.00 since 0001-01-01 00:00:00 +0000 UTC
[Timestamp] backup complete
Everything looks fine up to this point.
#remove db and stop the service
--> influx
--> drop database mydb
--> systemctl stop influxd
#restore metadata
--> influxd restore -metadir /var/lib/influxdb/meta /tmp/backup
Using metastore snapshot: /opt/backup/meta.00
It freezes at this point.
Please advise.
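For reference, the InfluxDB answer further down this page restores the metastore and the database data in two separate steps while the service is stopped. With the backup paths from this post and the default package data directories (an assumption), the full sequence would look roughly like this:
# service must be stopped before an offline restore
systemctl stop influxd
# restore the metastore, then the database shards (default package paths assumed)
influxd restore -metadir /var/lib/influxdb/meta /tmp/backup
influxd restore -database mydb -datadir /var/lib/influxdb/data /tmp/mydb
systemctl start influxd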

Related

Issue with backups failing on Bacula

I'm new to Bacula, and I've inherited an environment that was already set up and deployed. Recently one of our servers that we have always backed up crashed and was deemed no longer of any use, so I was tasked with removing it from the client list, which I did. Since I removed it, every morning I have jobs failing, and I can see from the email I receive that it's looking to copy an old job:
15-Jun 01:00 bacula-dir JobId 56332: Copying using JobId=55657 Job=server2-fd.2022-05-31_18.00.01_46
15-Jun 01:00 bacula-dir JobId 56332: Fatal error: Previous Job resource not found for "server2-fd".
15-Jun 01:00 bacula-dir JobId 56332: Error: Bacula bacula-dir 9.4.2 (04Feb19):
Build OS: x86_64-pc-linux-gnu redhat Enterprise release
Prev Backup JobId: 55657
Prev Backup Job: server2-fd.2022-05-31_18.00.01_46
New Backup JobId: 0
Current JobId: 56332
Current Job: CopyDiskToTape.2022-06-15_01.00.01_17
Backup Level: Incremental
I can't find any indication of server2 in any of my jobs and I'm not sure how to get rid of these errors. What am I missing here?
OK, I found a utility called dbcheck. It comes with Bacula and allowed me to check for orphaned client records.
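A rough invocation sketch (the config path is the usual packaged location and may differ on your install; dbcheck reads the catalog connection details from the Director's config):
# interactive mode: pick the orphaned Client/Job record checks from the menu
dbcheck -c /etc/bacula/bacula-dir.conf
# re-run with -f to actually fix what the checks report
dbcheck -c /etc/bacula/bacula-dir.conf -f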

pg_restore: [directory archiver] could not open input file. Error while trying to restore DB

I get an error when trying to restore a backup file (.nb3).
I made database backups in Navicat, one backup for each schema. They are .nb3 files.
I tried to restore my DB on a local server using pgAdmin. I got an error like the one in the title after choosing the backup file.
How do I restore the database on a local server?
.nb3 Navicat backup files can be restored using the Navicat software itself:
Add the database connection in Navicat
Double-click on the connection and go to Backup
Right-click in the empty backup window and select "Restore from..." from the menu
Select your .nb3 file
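If you instead need a backup that pgAdmin / pg_restore can read, you would have to take it with pg_dump rather than as a Navicat .nb3 file. A minimal sketch, with placeholder host, user, and database names:
# custom-format dump that pg_restore and pgAdmin understand
pg_dump -Fc -h localhost -U postgres -d mydb -f mydb.dump
# restore it into an existing local database
pg_restore -h localhost -U postgres -d mydb_local mydb.dump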

Cannot connect Impala-Kudu to Apache Kudu (without Cloudera Manager): Get TTransportException Error

I have successfully installed Kudu on Ubuntu (Trusty) as per the official Kudu documentation (see http://kudu.apache.org/docs/installation.html). The setup has one node running the master and a tablet server and another node running a tablet server only. I am having issues installing impala-kudu without Cloudera Manager on the node running the Kudu master. I have followed the CDH installation instructions on this page (see http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh5_install.html) until Step 3. I have avoided installing CDH with YARN and MRv1 as I don't need to run any MapReduce jobs and will not be using Hadoop. The impala-kudu and impala-kudu-shell packages installed without errors. When I launch the impala-shell it returns:
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, Could not connect to kudu_test:21000
***********************************************************************************
Welcome to the Impala shell. Copyright (c) 2015 Cloudera, Inc. All rights reserved.
(Impala Shell v2.7.0-cdh5-IMPALA_KUDU-cdh5 (48f1ad3) built on Thu Aug 18 12:15:44 PDT 2016)
Want to know what version of Impala you're connected to? Run the VERSION command to find out!
***********************************************************************************
[Not connected] >
I have tried to use the CONNECT option to connect to the kudu-master node without success. Both impala-kudu and kudu are running on the same machine. Are there additional configuration settings which need to be changed, or are Hadoop and YARN a strict requirement to make impala-kudu work?
After running ps -ef | grep -i impalad I can confirm the impala daemon is not running. After navigating to the impala logs at ~/var/log/impala I find a few error and warning files. Here is the output of impalad.ERROR:
Log file created at: 2016/09/13 13:26:24
Running on machine: kudu_test
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
E0913 13:26:24.084389 3021 logging.cc:118] stderr will be logged to this file.
E0913 13:26:25.406966 3021 impala-server.cc:249] Currently configured default filesystem: LocalFileSystem. fs.defaultFS (file:///) is not supported.
ERROR: block location tracking is not properly enabled because
- dfs.datanode.hdfs-blocks-metadata.enabled is not enabled.
- dfs.client.file-block-storage-locations.timeout.millis is too low. It should be at least 10 seconds.
E0913 13:26:25.406990 3021 impala-server.cc:252] Aborting Impala Server startup due to improper configuration. Impalad exiting.
Maybe I need to revisit HDFS and the Hive Metastore to ensure I have these services configured properly?
According to the log, impalad quits because the default filesystem is configured to be LocalFileSystem, which is not supported. You have to set a distributed filesystem, such as HDFS, as the default.
Although Kudu is a separate storage system and does not rely on HDFS, Impala still seems to require a non-local default FS even when used with Kudu. The Impala_Kudu documentation explicitly lists the following requirement:
Before installing Impala_Kudu, you must have already installed and configured services for HDFS (though it is not used by Kudu), the Hive Metastore (where Impala stores its metadata), and Kudu.
I can even imagine that HDFS may not really be needed for any reason other than to make Impala happy, but this is just speculation on my side. Update: I found IMPALA-1850, which confirms my suspicion that HDFS should not be needed for Impala any more, but it's not just a single check that has to be removed.
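If it helps, the properties named in the log are normally set in the Hadoop client configuration. A sketch, assuming a packaged install with configs under /etc/hadoop/conf and a NameNode reachable at namenode-host:8020 (the path and host/port are assumptions); each property goes inside the file's existing <configuration> element:
<!-- /etc/hadoop/conf/core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode-host:8020</value>
</property>
<!-- /etc/hadoop/conf/hdfs-site.xml -->
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout.millis</name>
  <value>10000</value>
</property>
Per the log, the timeout must be at least 10 seconds (10000 ms); restart the DataNodes and Impala after changing these.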

How to restore a database using influxd

I am using influxd and have created the backup using:
influxd backup -database grpcdb /opt/data
I can see that files are created under the /opt/data directory.
Now I want to restore the same data files with a different database name on the same machine:
influxd restore -database grpcdb1 /opt/data
but I get the error below:
restore: -datadir is required to restore
Here I am providing the same data path. Not sure what is missing.
I found a way to do that.
Important things:
Data can only be exported when the InfluxDB instance is running.
Data can only be imported when the InfluxDB instance is not running.
Export data:
sudo service influxdb start (or skip this step if the service is already running)
influxd backup -database grpcdb /opt/data
Import Data:
sudo service influxdb stop
influxd restore -metadir /var/lib/influxdb/meta /opt/data
influxd restore -database grpcdb -datadir /var/lib/influxdb/data /opt/data
sudo service influxdb start
You were missing -datadir /var/lib/influxdb/data
Don't forget to restore the metadata first, as Ammad wrote.
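One extra check worth doing between the restore commands and the final service start: make sure the restored files are owned by the user the service runs as (the packaged influxdb user/group and default data directory are assumptions), then confirm the database is back:
sudo chown -R influxdb:influxdb /var/lib/influxdb
# after the service is started again
influx -execute 'SHOW DATABASES'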

Drop DB but don't delete *.mdf / *.ldf

I am trying to automate a process of detaching and dropping a database (via a VBS objShell.Run). If I manually use SSMS to detach and drop, I can then copy the database files to another location... however, if I use:
sqlcmd -U sa -P MyPassword -S (local) -Q "ALTER DATABASE MyDB set single_user With rollback IMMEDIATE"
then
sqlcmd -U sa -P MyPassword -S (local) -Q "DROP DATABASE MyDB"
It detaches/drops and then deletes the files. How do I get the detach and drop without the delete?
The MSDN Documentation on DROP DATABASE has this to say about dropping the database without deleting the files (under General Remarks):
Dropping a database deletes the database from an instance of SQL Server and deletes the physical disk files used by the database. If the database or any one of its files is offline when it is dropped, the disk files are not deleted. These files can be deleted manually by using Windows Explorer. To remove a database from the current server without deleting the files from the file system, use sp_detach_db.
So, in order to remove the database from the server without having the files deleted, you can change the sqlcmd call to this:
sqlcmd -U sa -P MyPassword -S (local) -Q "EXEC sp_detach_db 'MyDB', 'true'"
DISCLAIMER: I have honestly never used sqlcmd before, but judging from the syntax of how it's used, I believe this should help with your problem.
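Putting it together with the single-user step from the question, the whole sequence would look something like this (same placeholder credentials as above; the .mdf/.ldf files stay on disk and can then be copied):
sqlcmd -U sa -P MyPassword -S (local) -Q "ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
sqlcmd -U sa -P MyPassword -S (local) -Q "EXEC sp_detach_db 'MyDB', 'true'"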
Use SET OFFLINE instead of SET SINGLE_USER
ALTER DATABASE [DonaldTrump] SET OFFLINE WITH ROLLBACK IMMEDIATE; DROP DATABASE [DonaldTrump];
Might it be best to detach the database rather than drop it?
If you drop the database, that implies deleting it.
Note, however, that this will leave your hard disk cluttered with database files you no longer want; in a couple of years' time your successor will be running out of space and wondering why the disk is full of MDF files they don't recognise.
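If the copied files ever need to go back into an instance, the counterpart to the detach is an attach. A sketch with hypothetical file paths:
sqlcmd -U sa -P MyPassword -S (local) -Q "CREATE DATABASE MyDB ON (FILENAME = 'D:\Copies\MyDB.mdf'), (FILENAME = 'D:\Copies\MyDB_log.ldf') FOR ATTACH"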