RavenDB restore from backup is slow

I've been testing our DR process for a new application and am finding that the RavenDB restore is taking an unexpected and unacceptable amount of time. I need to know if there is something wrong with my process or if there is a way to improve performance.
For the 70 MB database I am restoring, it is taking more than 8 hours.
After stopping the RavenDB Windows service, I'm using the following command, based on the RavenDB documentation at http://ravendb.net/docs/server/administration/backup-restore:
d:\RavenDB\Server>Raven.Server.exe -src "D:\Backups\RavenDB\2013-11-25_2330\MyDatabase\RavenDB.Backup" -dest "D:\RavenDB\Server\Database\Databases\" -restore
I get progress reporting like this:
Request #10,306: POST - 72 ms - <system> - 201 - /admin/backup
Request #10,307: GET - 21 ms - <system> - 200 - /docs/Raven/Backup/Status
Request #10,308: GET - 0 ms - <system> - 200 - /docs/Raven/Backup/Status
Request #10,309: POST - 1,150 ms - MyDatabase - 201 - /admin/backup
Request #10,310: GET - 32 ms - MyDatabase - 200 - /docs/Raven/Backup/Status
etc
But I have not yet had confirmation of a successful restore.

Related

How to use a SQL query to get the Progress OpenEdge database information, e.g. database version

How can I use a SQL query to get Progress OpenEdge database information, e.g. the database version?
In MS SQL Server we can use SELECT @@VERSION to get the database version information, but this doesn't work for a Progress OpenEdge database.
Thanks
You can get the version somewhat indirectly by looking at _dbStatus._dbStatus-shmVers and then mapping that value onto the values listed in this kbase:
https://knowledgebase.progress.com/articles/Article/P39456
(A leading "64" means 64-bit.)
For instance, a shared memory version of 6412371 means that you have 64-bit 10.2B00, and 13723 is 11.7.0, etc.
Obviously, new releases will result in new shared memory versions, so you may need to stay on top of the kbase. (A query sketch follows the list below.)
But as of today the list is:
OpenEdge 11 Shared Memory Versions:
11.0.0 - 13019
11.1.0 - 13053
11.2.0 - 13102
11.2.1 - 13103
11.3.0 - 13205
11.3.1 - 13215
11.3.2 - 13217
11.3.3 - 13221
11.4.0 - 13312
11.5.0 - 13506
11.5.1 - 13507
11.6.0 - 13614
11.6.1 - 13614
11.6.2 - 13615
11.6.3 - 13615
11.7.0 - 13723
11.7.1 - 13723
OpenEdge 10 Shared Memory Versions:
10.0A00 - 10004
10.0B00 - 10036
10.0B01 - 10036
10.0B02 - 10036
10.0B03 - 10040
10.0B04 - 10042
10.1A00 - 10127
10.1A01 - 10129
10.1B00 - 10171
10.1B02 - 10173
10.1B03 - 10174
10.1C00 - 10212
10.1C01 - 10213
10.1C02 - 10213
10.1C03 - 10213
10.1C04 - 10215
10.2A00 - 12003
10.2A01 - 12008
10.2A02 - 12008
10.2A03 - 12009
10.2B00 - 12371
10.2B01 - 12372
10.2B02 - 12372
10.2B03 - 12372
10.2B04 - 12382
10.2B05 - 12383
10.2B06 - 12384
10.2B07 - 12385
10.2B08 - 12403
Progress 9.1D to 9.1E Shared Memory Versions:
9.1D00 - 9118
9.1D01 - 9122
9.1D02 - 9124
9.1D03 - 9124
9.1D04 - 9125
9.1D05 - 9126
9.1D06 - 9127
9.1D07 - 9128
9.1D08 - 9129
9.1E00 - 9135
9.1E01 - 9136
9.1E02 - 9171
9.1E03 - 9200
9.1E04 - 9200
Older Shared Memory Versions:
9.0x - 9000 +
8.0x - 8001 +
7.4x - 7400 +
7.3B - 7331 +
7.3A - 7301 +
7.2x - 70xx
7.1x - 70xx
7.0x - 70xx
6.3x - 63xx
6.2x - 6xx
5.2x - 3
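As a concrete illustration, here is a minimal JDBC sketch that reads the shared memory version so it can be mapped onto the list above. It is only a sketch: it assumes the DataDirect OpenEdge JDBC driver that ships with OpenEdge is on the classpath, the host, SQL broker port, database name and credentials are placeholders, and the exact case/quoting of the VST column may need adjusting for your environment.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShmVersion {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details: adjust host, SQL broker port,
        // database name and credentials for your environment.
        String url = "jdbc:datadirect:openedge://dbhost:20931;databaseName=mydb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             // Read the shared memory version from the _dbStatus VST.
             ResultSet rs = st.executeQuery(
                     "SELECT \"_dbStatus-shmVers\" FROM PUB.\"_dbStatus\"")) {
            if (rs.next()) {
                long shmVers = rs.getLong(1);
                // Map this value onto the kbase list above,
                // e.g. 13723 -> 11.7.0/11.7.1, 6412371 -> 64-bit 10.2B00.
                System.out.println("Shared memory version: " + shmVers);
            }
        }
    }
}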
There is no direct way as there is in MS SQL Server. But, as a workaround, you can define a stored procedure/user-defined function (UDF) (if you are using 11.7) to get the version. Since procedures/UDFs are written in Java, you can write a small piece of code to get the version: for example, create a .p that uses the PROVERSION statement to return the version, and call that .p from the Java code in the procedure/UDF. You can then call that procedure/UDF from SQL.

Failure importing data from BigQuery to GCS

Dear support at Google,
We recently noticed that many of the GAP site import jobs that extract and upload data from Google BigQuery to Google Cloud Storage have been failing since April 4th. These jobs were running fine before April 4th. After investigating, we believe this is an issue on the BigQuery side, not in our job. The details of the error returned by the BigQuery API when uploading the data are shown below:
216769 [main] INFO  org.mortbay.log  - Dataset : 130288123
217495 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
227753 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
237995 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
Heart beat
248208 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds..
258413 [main] INFO  org.mortbay.log  - Job is PENDING waiting 10000 milliseconds...
268531 [main] INFO  org.mortbay.log  - Job is RUNNING waiting 10000 milliseconds...
Heart beat
278675 [main] INFO  org.mortbay.log  - An internal error has occurred
278675 [main] INFO  org.mortbay.log  - ErrorProto : null
 
As per the log, it is an internal error, with ErrorProto reported as null.
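For reference, the polling loop in our job looks roughly like the sketch below. It is simplified: it assumes the com.google.api.services.bigquery v2 Java client, the bigquery, projectId and jobId values come from our job configuration, and the logging framework shown here differs from the org.mortbay.log output in the log above.

import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.ErrorProto;
import com.google.api.services.bigquery.model.Job;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ExtractJobPoller {
    private static final Logger log = LoggerFactory.getLogger(ExtractJobPoller.class);

    // Waits for an extract job to finish and logs its error result, if any.
    public static void waitForJob(Bigquery bigquery, String projectId, String jobId)
            throws Exception {
        Job job = bigquery.jobs().get(projectId, jobId).execute();
        String state = job.getStatus().getState();
        while (!"DONE".equals(state)) {
            log.info("Job is " + state + " waiting 10000 milliseconds...");
            Thread.sleep(10000);
            job = bigquery.jobs().get(projectId, jobId).execute();
            state = job.getStatus().getState();
        }
        ErrorProto error = job.getStatus().getErrorResult();
        if (error != null) {
            log.info(error.getMessage());                  // "An internal error has occurred"
            log.info("ErrorProto : " + error.getReason()); // reason comes back null in the failing runs
        }
    }
}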
 
Our google account: ea.eadp#gmail.com
 
Our Google BigQuery projects:
Origin-BQ: origin-bq-1
Pulse-web: lithe-creek-712
The import failures are on the following datasets:
 
In Pulse-web, lithe-creek-712:
101983605
130288123
48135564
56570684
57740926
64736126
64951872
72220498
72845162
73148296
77517207
86821637
 
 
Please look into this and let us know if you have any updates.
Thank you very much; we look forward to hearing back from you.
 
Thanks

How to Configure the Web Connector from metrics.log Values

I am reviewing the ColdFusion Web Connector settings in workers.properties to hopefully address a sporadic response time issue.
I've been advised to inspect the output from the metrics.log file (CF Admin > Debugging & Logging > Debug Output Settings > Enable Metric Logging) and use this to inform the adjustments to the settings max_reuse_connections, connection_pool_size and connection_pool_timeout.
My question is: How do I interpret the metrics.log output to inform the choice of setting values? Is there any documentation that can guide me?
Examples from a 120-hour period:
95% of entries -
"Information","scheduler-2","06/16/14","08:09:04",,"Max threads: 150 Current thread count: 4 Current thread busy: 0 Max processing time: 83425 Request count: 9072 Error count: 72 Bytes received: 1649 Bytes sent: 22768583 Free memory: 124252584 Total memory: 1055326208 Active Sessions: 1396"
Occurred once -
"Information","scheduler-2","06/13/14","14:20:22",,"Max threads: 150 Current thread count: 10 Current thread busy: 5 Max processing time: 2338 Request count: 21 Error count: 4 Bytes received: 155 Bytes sent: 139798 Free memory: 114920208 Total memory: 1053097984 Active Sessions: 6899"
Environment:
3 x Windows 2008 R2 (hardware load balanced)
ColdFusion 10 (update 12)
Apache 2.2.21
Richard, I realize your question here is from 2014, and perhaps you have since resolved it, but I suspect your problem was that the port set in the CF admin (below the "metrics log" checkbox) was set to 8500, which is your internal web server (used by the CF admin only, typically, if at all). That's why the numbers are not changing. (And for those who don't enable the internal web server at installation of CF, or later, most values in the metrics log are null).
I address this problem in a blog post I happened to do just last week: http://www.carehart.org/blog/client/index.cfm/2016/3/2/cf_metrics_log_part1
Hope any of this helps.
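For reference, the three connector settings mentioned in the question live in the connector's workers.properties file. A purely illustrative sketch follows; the worker name, port and numbers are placeholders, not recommendations, and should be sized from the busy-thread counts once the metrics log is reporting real values:

# workers.properties for the ColdFusion 10 connector (illustrative values only)
worker.cfusion.type=ajp13
worker.cfusion.host=localhost
worker.cfusion.port=8012
worker.cfusion.max_reuse_connections=250
worker.cfusion.connection_pool_size=500
worker.cfusion.connection_pool_timeout=60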

Mono error when load testing

During load testing (using LoadUI) of a new .NET Web API running on Mono, hosted on a medium-sized Amazon server, I'm receiving the following results (in chronological order, over the course of about ten minutes):
5 connections per second for 60 seconds
No errors
50 connections per second for 60 seconds
No errors
100 connections per second for 60 seconds
Received 3 errors, appearing later during the run
2014-02-07 00:12:10Z Error HttpResponseExtensions Error occured while Processing Request: [IOException] Write failure Write failure|The socket has been shut down
2014-02-07 00:12:10Z Info HttpResponseExtensions Failed to write error to response: {0} Cannot be changed after headers are sent.
5 connections per second for 60 seconds
No errors
100 connections per second for 30 seconds
No errors
100 connections per second for 60 seconds
Received 1 error same as above, appearing later during the run
100 connections per second for 45 seconds
No errors
From some research, this error seems to be a standard one received when a client closes the connection. As it only occurs during the heavier load tests, I am wondering if the server instance is simply reaching the upper limit of what it can support. If not, are there any suggestions on hunting down the source of the errors?

PDI Error occured while trying to connect to the database

I got the following error while executing a PDI job.
I do have the MySQL driver in place (libext/JDBC). Can someone say what the reason for this failure might be?
Despite the error while connecting to the DB, the DB is up and I can access it from the command prompt.
Error occured while trying to connect to the database
Error connecting to database: (using class org.gjt.mm.mysql.Driver)
Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
ERROR 03-08 11:05:10,595 - stepname- Error initializing step [Update]
ERROR 03-08 11:05:10,595 - stepname - Step [Update.0] failed to initialize!
INFO 03-08 11:05:10,595 - stepname - Finished reading query, closing connection.
ERROR 03-08 11:05:10,596 - stepname - Unable to prepare for execution of the transformation
ERROR 03-08 11:05:10,596 - stepname - org.pentaho.di.core.exception.KettleException:
We failed to initialize at least one step. Execution can not begin!
Thanks
Is this a long-running query by any chance? Or, in the PDI world, it can be because your step kicks off at the start of the transformation, waits for something to do, and if nothing comes along before the net write timeout then you'll see this error.
If so, your problem is caused by a timeout that MySQL uses (net_write_timeout), which frequently needs increasing from its default.
See here:
http://wiki.pentaho.com/display/EAI/MySQL
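If the timeout is the culprit, here is a minimal JDBC sketch of the kind of session-level change involved. The host, database, credentials and the 600-second values are placeholders; in PDI you would typically achieve the same effect through the database connection's options or an SQL statement executed after connecting, as described on the wiki page above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RaiseMySqlTimeouts {
    public static void main(String[] args) throws Exception {
        // Placeholder URL: socketTimeout=0 disables the client-side socket timeout,
        // tcpKeepAlive helps keep idle connections from being dropped.
        String url = "jdbc:mysql://dbhost:3306/mydb?socketTimeout=0&tcpKeepAlive=true";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement()) {
            // Raise the server-side network timeouts for this session (in seconds)
            // so a step that sits idle waiting for rows does not get cut off.
            st.execute("SET SESSION net_write_timeout = 600");
            st.execute("SET SESSION net_read_timeout = 600");
            // ... run the long-running query / update here ...
        }
    }
}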