How can I change the index batch size of a RavenDB database from 512 to 1024?

As the title says, I am trying to change the index batch size of my RavenDB database. This sounds like it should be really simple, but I can't seem to work out how to do it. I have searched Google and looked over the RavenDB console.
My problem is that when I populate a RavenDB database on my local machine I only get a fraction of the documents, but when I populate a RavenDB database on a test server I seem to get all of the documents.
I was looking at the status page of the RavenDB console, and it turns out the database on the test server has an index count of 6 while the database on my local machine has 7. The test server has a document count of 63,864 while my machine has 28,512. The database on the test server has an index batch size of 1,024 while the database on my machine only has an index batch size of 512.
I'm not sure why there are differences, as I use the same code to generate the databases. I am still relatively new to RavenDB. Any advice will be much appreciated.
Cheers.

I'm not sure whether the problem is the number of results you get back when querying or the index batch size, but:
For the number of results when querying, the difference could be in the server configuration.
Check whether the server config file [YourRavenDbFolder]/Server/Raven.Server.exe.config specifies:
<appSettings>
  <add key="Raven/MaxPageSize" value="512" />
</appSettings>
This changes the maximum number of results you retrieve by default when querying.
If instead we are talking about the default number of items indexed in a single batch, the default values are:
64-bit: 128 * 1024
32-bit: 64 * 1024
You can change this in the Raven.Server.exe.config file:
<appSettings>
  <add key="Raven/MaxNumberOfItemsToIndexInSingleBatch" value="" />
</appSettings>
A larger batch size results in faster indexing but higher memory usage.
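Putting the two together, the relevant section of Raven.Server.exe.config might look like the sketch below; the 131072 value is simply the 64-bit default (128 * 1024) mentioned above, so substitute whatever batch size you actually want.
<appSettings>
  <!-- Maximum number of results returned per query page (default shown) -->
  <add key="Raven/MaxPageSize" value="512" />
  <!-- Maximum number of documents indexed in a single batch (64-bit default shown) -->
  <add key="Raven/MaxNumberOfItemsToIndexInSingleBatch" value="131072" />
</appSettings>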

Related

Using Powershell to copy large Oracle table to SQL Server - memory issue

We are trying to copy data from a large Oracle table (about 50M rows) to SQL Server, using PowerShell and SqlBulkCopy. The issue with this particular Oracle table is that it contains a CLOB field, and unlike other table loads, this one takes up more and more OS memory, eventually overwhelming SQL Server, which is located on the same server on which PowerShell is running. Oracle is external and the data is sent over the network. The maximum CLOB size is 6.4M bytes, whereas the average size is 2,000.
Here is a snippet of the code being used. It seems that the batch size has no bearing on what's happening:
# Assumes Oracle.ManagedDataAccess.dll is already loaded and that $SourceConnectionnectionstring,
# $queryStatment, $targetConnectionString and $destTable are defined earlier in the script.
$SourceConnection = New-Object Oracle.ManagedDataAccess.Client.OracleConnection($SourceConnectionnectionstring)
$SourceConnection.Open()

# Plain text command that streams rows via an OracleDataReader.
$SourceCmd = $SourceConnection.CreateCommand()
$SourceCmd.CommandType = "Text"
$SourceCmd.CommandText = $queryStatment

# Bulk copy into SQL Server, committing every 500 rows in its own internal transaction.
$bulkCopy = New-Object System.Data.SqlClient.SqlBulkCopy($targetConnectionString, [System.Data.SqlClient.SqlBulkCopyOptions]::UseInternalTransaction)
$bulkCopy.DestinationTableName = $destTable
$bulkCopy.BulkCopyTimeout = 0
$bulkCopy.BatchSize = 500

$SourceReader = $SourceCmd.ExecuteReader()
Start-Sleep -Seconds 2
$bulkCopy.WriteToServer($SourceReader)
We tried different batch sizes, smaller and larger, with the same result.
We tried EnableStreaming set to both true and false.
We tried using an internal transaction (as in the code sample above) and also the default options, while still specifying a batch size...
Is there anything else we can try to avoid the memory pressure?
Thank you in advance!
It turned out, after extensive research, that an obscure Oracle command property governs how CLOB data is sent, and that is what was saturating memory:
InitialLOBFetchSize
This property specifies the amount of data that the OracleDataReader initially fetches for LOB columns. It defaults to 0, which means "the entire CLOB".
I set it to 1M bytes, which is plenty, and the process no longer ate into memory.
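For reference, this is roughly where the property would be set in the snippet above (a sketch only; 1048576 is the 1M-byte value referred to, and the variable names match the question's code):
# Cap how much LOB data the OracleDataReader fetches per CLOB up front (in bytes).
$SourceCmd = $SourceConnection.CreateCommand()
$SourceCmd.CommandType = "Text"
$SourceCmd.CommandText = $queryStatment
$SourceCmd.InitialLOBFetchSize = 1048576

$SourceReader = $SourceCmd.ExecuteReader()
$bulkCopy.WriteToServer($SourceReader)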

Why will my SQL Transaction log file not auto-grow?

The Issue
I've been running a particularly large query that generates millions of records to be inserted into a table. Each time I run the query I get an error reporting that the transaction log file is full.
I've managed to get a test query to run with a reduced set of results by using SELECT INTO instead of INSERT INTO a pre-built table. This reduced set of results generated a 20 GB table of 838,978,560 rows.
When trying to INSERT into the pre-built table I've also tried it with and without a clustered index. Both failed.
Server Settings
The server is running SQL Server 2005 (Full not Express).
The database being used is set to SIMPLE recovery, and there is space available (around 100 GB) on the drive the log file sits on.
The transaction log file is set to grow in 250 MB increments, up to a maximum of 2,097,152 MB.
The log file appears to grow as expected until it gets to 4,729 MB.
When the issue first appeared the file grew to a lower value; however, I've since reduced the size of other log files on the same server, and this appears to allow this transaction log file to grow further by the same amount as the reduction in the other files.
I've now run out of ideas on how to solve this. If anyone has any suggestions or insight into what to do, it would be much appreciated.
First, you want to avoid auto-growth whenever possible; auto-growth events are HUGE performance killers. If you have 100 GB available, why not change the log file size to something like 20 GB (just temporarily, while you troubleshoot this)? My policy has always been to use 90%+ of the disk space allocated for a specific MDF/NDF/LDF file. There's no reason not to.
If you are using SIMPLE recovery, SQL Server is supposed to manage the task of returning unused space, but sometimes it does not do a great job. Before running your query, check the available free log space. You can do this by:
Right-clicking the database > Tasks > Shrink > Files.
Changing the file type to "Log".
This will help you understand how much unused space you have. You can set "Reorganize pages before releasing unused space > Shrink file to" to 0. Moving forward, you can also release unused space using CHECKPOINT; this may be something to include as a first step before your query runs.
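If you prefer T-SQL over the SSMS dialog, a rough equivalent sketch (YourDatabase_Log is a placeholder for your log file's logical name):
-- Show how full each database's transaction log currently is.
DBCC SQLPERF (LOGSPACE);

-- Under SIMPLE recovery, a checkpoint lets SQL Server mark inactive log space as reusable.
CHECKPOINT;

-- Shrink the log file back down if it has ballooned; 20000 MB matches the ~20 GB suggestion above.
DBCC SHRINKFILE (YourDatabase_Log, 20000);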

Deleting indexedDB database doesn't reduce Internet.edb size

I have an application that stores some images in indexedDB.
In IE11 I run into the following problem:
When I create a database, the size of the Internet.edb file (C:\Users\%USERNAME%\AppData\Local\Microsoft\Internet Explorer\Indexed DB) where the database gets stored obviously increases, but it does not shrink when I delete the database, either programmatically
var request = window.indexedDB.deleteDatabase("databaseName");
or manually through Tools -> Internet Options -> Settings (in the Browsing history section) -> Caches and databases tab, deleting the entry there.
As a result it reaches the maximum limit, and I get a QuotaExceededError or a "Not enough storage is available to complete this operation" error message when I try to store a few hundred MB of data. I have already set the limit to its maximum (1 GB).
Any idea why Internet.edb size does not reduce after deleting the database?
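For completeness, the programmatic delete is just the standard request with its event handlers attached; this minimal sketch (the handler bodies are only illustrative logging) at least confirms whether the delete completed or was blocked by another open connection:
var request = window.indexedDB.deleteDatabase("databaseName");
request.onsuccess = function () { console.log("Database deleted."); };
request.onerror = function () { console.log("Delete failed: " + request.error); };
request.onblocked = function () { console.log("Delete blocked; another connection still has the database open."); };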

Configuring HSQLDB for big storage

First, sorry for my approximate English.
I'm a little lost with using HSQLDB.
I need to save a large amount of data (3 GB+) in a local database, in a minimum of time.
So I did the following:
CREATE CACHED TABLE ...; to save the data in the .data file
SET FILES LOG FALSE; to avoid writing to the .log file and save time
SHUTDOWN COMPACT; to persist the records to local disk
I know there are other parameters I can set to increase the .data file size and speed up data access, such as:
hsqldb.cache_scale=
hsqldb.cache_size_scale=
SET FILES NIO SIZE xxxx
But I don't know how to set these for big storage.
Thanks for your help.
When you use SET FILES LOG FALSE, data changes are not saved until you execute SHUTDOWN or CHECKPOINT.
The other parameters can be left at their default values. If you want to use more memory and gain some speed, you can multiply the default values of the parameters by 2 or 4.
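Putting that together, a minimal bulk-load sequence along the lines described above might look like this (the table and column definitions are placeholders):
-- Rows in a CACHED table are stored in the .data file rather than held in memory.
CREATE CACHED TABLE big_table (id BIGINT PRIMARY KEY, payload VARCHAR(1000));

-- Skip the .log file during the bulk load to save time.
SET FILES LOG FALSE;

-- ... run the inserts here ...

-- Persist everything loaded so far without shutting down.
CHECKPOINT;

-- Or, when completely done, compact the .data file and close the database.
SHUTDOWN COMPACT;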

Moving data from one table to another in SQL Server 2005

I am moving around 10 million rows from one table to another in SQL Server 2005. The purpose of the data transfer is to take the old data offline.
After some time it throws an error: "The log file for database 'tempdb' is full."
My tempdb and templog are placed on a drive (other than the C drive) which has around 200 GB free. Also, my tempdb size is set to 25 GB.
As per my understanding, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Please let me know about other factors. I cannot experiment much as I am working on a production database, so please let me know whether these changes will have any other impact.
Thanks in advance.
You know the solution. It seems you are just moving part of the data to make your queries faster.
I agree with your solution:
As per my understanding, I will have to increase the size of tempdb from 25 GB to 50 GB and set the log file auto-growth option to "unrestricted file growth (MB)".
Go ahead.
My guess is that you're trying to move all of the data in a single batch. Can you break it up into smaller batches and commit fewer rows per insert? Also, as noted in the comments, you may be able to switch your destination database to the SIMPLE or BULK_LOGGED recovery model.
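As a rough illustration of the batching idea (the table and column names are placeholders; tune the batch size to what your log can absorb):
-- Move rows in chunks so each transaction, and therefore each log/tempdb hit, stays small.
DECLARE @batch INT;
SET @batch = 100000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.ArchiveTable (Id, Col1, Col2)
    SELECT TOP (@batch) s.Id, s.Col1, s.Col2
    FROM dbo.SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.ArchiveTable a WHERE a.Id = s.Id);

    IF @@ROWCOUNT = 0 BREAK;
END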
Why are you using the log file at all? Copy your data (the data and log files), then set the recovery model to SIMPLE and run the transfer again.
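For reference, switching the recovery model is a one-liner (the database name is a placeholder):
-- Under SIMPLE recovery, SQL Server reuses transaction log space automatically at checkpoints.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;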