RavenDB bulk insert with "Document Expiration"?

How can I do a bulk insert with document expiration?
The docs only say to add the "@expires" metadata, but the only way they show to do that is in single-document insert mode, and that method does not work with bulk insert.
https://ravendb.net/docs/article-page/5.4/csharp/studio/database/settings/document-expiration
https://ravendb.net/docs/article-page/5.4/csharp/server/extensions/expiration#setting-the-document-expiration-time

You can provide the metadata for the stored entity when you call Store on the bulk insert operation.
https://ravendb.net/docs/article-page/5.4/csharp/client-api/bulk-insert/how-to-work-with-bulk-insert-operation#methods
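For illustration, a minimal sketch (the Order class, the store setup, and the seven-day expiry are placeholders; the Expiration feature must also be enabled on the database as described in the first link above). The bulk insert's Store method has an overload that accepts a metadata dictionary, so you can set @expires per document:

using System;
using Raven.Client.Documents;
using Raven.Client.Json;

public class Order
{
    public string Id { get; set; }
    public string Customer { get; set; }
}

public static class ExpiringBulkInsert
{
    public static void Run(IDocumentStore store)
    {
        using (var bulkInsert = store.BulkInsert())
        {
            for (var i = 0; i < 100_000; i++)
            {
                // "@expires" expects a UTC timestamp in ISO 8601 format;
                // once it passes, the expiration task deletes the document.
                var metadata = new MetadataAsDictionary
                {
                    ["@expires"] = DateTime.UtcNow.AddDays(7).ToString("O")
                };

                bulkInsert.Store(new Order { Customer = "customers/" + i }, metadata);
            }
        }
    }
}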

Related

Checking triggers in a SQL file

I have a SQL file which contains triggers, some 30-40 triggers in that file.
Each trigger contains an insert statement into the update_delete table.
Sometimes it is
insert into update_delete(id,value,name) values (:old.id,:old.value,:old.name);
or
insert into update_delete(id,value,name)values(:old.id,:old.value,null);
or
insert into update_delete(id,name)values(:old.id,:old.name);
I want to write a script which scans all the triggers in the SQL file and checks whether the name field of the update_delete table is inserted with :old.name or with null.
Please suggest how I should proceed with this.
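One way to approach this, sketched in C# (it assumes the inserts follow the single-line patterns shown above; the simple comma split would need more work if a value ever contained a comma):

using System;
using System.IO;
using System.Text.RegularExpressions;

class TriggerScan
{
    static void Main(string[] args)
    {
        string sql = File.ReadAllText(args[0]);

        // Match each insert into update_delete, capturing the column list and the value list.
        var pattern = new Regex(
            @"insert\s+into\s+update_delete\s*\(([^)]*)\)\s*values\s*\(([^)]*)\)",
            RegexOptions.IgnoreCase);

        foreach (Match m in pattern.Matches(sql))
        {
            string[] columns = m.Groups[1].Value.Split(',');
            string[] values = m.Groups[2].Value.Split(',');

            // Find the position of the "name" column and report what is inserted into it.
            for (int i = 0; i < columns.Length; i++)
            {
                if (columns[i].Trim().Equals("name", StringComparison.OrdinalIgnoreCase))
                {
                    string inserted = i < values.Length ? values[i].Trim() : "(missing)";
                    Console.WriteLine($"{m.Value}\n  -> name column gets: {inserted}");
                }
            }
        }
    }
}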

FileTable and Foreign Key from another table

I am trying to use FileTable with Entity Framework (I know it is not supported directly), so I use custom SQL commands to insert and delete (no update) the data. My problem is that I have a table which refers to the FileTable with a foreign key to the stream_id of the FileTable. If I insert into the FileTable, how can I get the stream_id back?
I want to use SqlBulkCopy to insert lots of files; I can bulk insert into the FileTable, but SqlBulkCopy won't tell me the inserted stream_id values.
If I execute single insert statements with SELECT SCOPE_IDENTITY() or something similar, the performance becomes worse.
I want to insert about 5,000 files (2 MB to 20 MB each) into the FileTable and connect them with my own table via the foreign key. Is this bad practice, and should I instead use a simple path column and store the data directly in the filesystem? I thought FileTable does exactly this for me, because I need to secure the database and keep the files in sync even if I go back one hour or four days; I cannot back up the database and the filesystem at exactly the same time so that they are 100 percent synchronized.
SqlBulkCopy doesn't allow you to retrieve inserted identity values or any other values.
Solution 1
You can find a lot of code snippets on the web that insert into a temporary table using SqlBulkCopy, then copy from the temporary table to the destination table using the OUTPUT clause to get the stream_id values.
It's a few more steps, but the performance is still very good.
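To make Solution 1 concrete, a rough sketch (the #FilesStaging table, the FileTable name Documents, and the column list are placeholders for your actual schema; the connection is assumed to be open, and OUTPUT without INTO requires that the target table has no enabled triggers):

using System.Data;
using System.Data.SqlClient;

public static class FileTableBulkInsert
{
    // Returns one row per inserted file with its generated stream_id.
    public static DataTable BulkInsertFiles(SqlConnection connection, DataTable files)
    {
        using (var create = new SqlCommand(
            "CREATE TABLE #FilesStaging (name NVARCHAR(255), file_stream VARBINARY(MAX));",
            connection))
        {
            create.ExecuteNonQuery();
        }

        // Step 1: fast load into the staging table (no generated values needed here).
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "#FilesStaging";
            bulkCopy.WriteToServer(files);
        }

        // Step 2: move the rows into the FileTable and capture the stream_id values.
        var streamIds = new DataTable();
        using (var move = new SqlCommand(
            @"INSERT INTO Documents (name, file_stream)
              OUTPUT inserted.stream_id, inserted.name
              SELECT name, file_stream FROM #FilesStaging;",
            connection))
        using (var reader = move.ExecuteReader())
        {
            streamIds.Load(reader);
        }

        return streamIds;
    }
}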
Solution 2
Disclaimer: I'm the owner of the project Entity Framework Extensions
Disclaimer: I'm the owner of the project Bulk Operations
Neither library is free, but they make it easier to overcome the SqlBulkCopy limitation.
Both of them support outputting identity values.
// Easy to customize
var bulk = new BulkOperation<Customer>(connection);
bulk.BatchSize = 1000;
bulk.ColumnInputExpression = c => new { c.Name, c.FirstName };
bulk.ColumnOutputExpression = c => c.CustomerID;
bulk.ColumnPrimaryKeyExpression = c => c.Code;
bulk.BulkMerge(customers);
// Easy to use
var bulk = new BulkOperation(connection);
bulk.BulkInsert(dt);
bulk.BulkUpdate(dt);
bulk.BulkDelete(dt);
bulk.BulkMerge(dt);

HSQL database storage format

I am just starting to use the HSQL database and I'm unclear about the storage format of the data.
I made a simple test program that creates an entity via Hibernate. I used the file-based standalone in-process mode of HSQL.
I got this data file:
testdb.script
SET DATABASE UNIQUE NAME HSQLDB52B647B0B4
// SET DATABASE lines skipped
INSERT INTO PERSON VALUES(1,'Peter','UUU')
INSERT INTO PERSON VALUES(2,'Nasta','Kuzminova')
INSERT INTO PERSON VALUES(3,'Peter','Sagan')
INSERT INTO PERSON VALUES(4,'Nasta','Kuzminova')
INSERT INTO PERSON VALUES(5,'Peter','Sagan')
INSERT INTO PERSON VALUES(6,'Nasta','Kuzminova')
As I understand it, when I have a lot of data, will all of it be stored as such a SQL script, executed at every database startup, and kept in memory?
The INSERT statements in the .script file are for MEMORY tables of the database.
If you create a table with CREATE CACHED TABLE ... or you change a MEMORY table to CACHED with SET TABLE <name> TYPE CACHED, the data for the table will be stored in the .data file and there will be no INSERT statements in the .script file.

How to insert a large number of rows in Oracle?

Can anyone tell me how to insert a large number of rows in Oracle?
Using an insert statement we can insert data into a table one row at a time:
insert into example values(1,'name','address');
Suppose I want to insert 100,000 rows. Do I need to insert them one by one using the statement above, or is there another way to insert a large number of rows at a time? Can anyone advise me, with an example please?
Note: I am not asking about copying data from another table; just consider that we have an Excel sheet consisting of 100,000 rows, and we want to insert them into a particular table.
Thanks,
Sai.
If you are loading using individual insert statements from a script, using SQL*Plus, say, then one handy speed-up is to bunch sets of inserts into anonymous PL/SQL blocks ...
begin
insert into example values(1,'name','address');
insert into example values(1,'name','address');
insert into example values(1,'name','address');
...
end;
/
begin
insert into example values(1,'name','address');
insert into example values(1,'name','address');
insert into example values(1,'name','address');
...
end;
/
This reduces the client/server chatter enormously.
The original file can often be modified easily with Unix scripts or a macro in a decent text editor.
Not necessarily what you'd want to embed in a production process, but handy for the occasional job.
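If the script is post-processed from .NET rather than with Unix tools, here is a rough sketch of the same idea (input and output file names come from the command line; the batch size of 1,000 is arbitrary):

using System.IO;

class BatchInserts
{
    const int BatchSize = 1000;

    static void Main(string[] args)
    {
        // Wrap every 1,000 INSERT lines of the input script in an anonymous PL/SQL block.
        using (var output = new StreamWriter(args[1]))
        {
            int count = 0;
            foreach (var line in File.ReadLines(args[0]))
            {
                if (count == 0)
                    output.WriteLine("begin");
                output.WriteLine(line);
                if (++count == BatchSize)
                {
                    output.WriteLine("end;");
                    output.WriteLine("/");
                    count = 0;
                }
            }
            if (count > 0)
            {
                output.WriteLine("end;");
                output.WriteLine("/");
            }
        }
    }
}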
Use sqlldr with the direct path option.
I suspect you have it in a CSV file.
Create a directory object.
Create an external table. You can query an external table the same way as a regular table; the difference is that the data in the table comes from a file located in the directory object.
http://www.oracle-base.com/articles/9i/external-tables-9i.php

Dumping a table's content in sqlite3 to be imported into a new database

Is there an easy way of dumping a SQLite database table into a text string with insert statements to be imported into the same table of a different database?
In my specific example, I have a table called log_entries with various columns. At the end of every day, I'd like to create a string which can then be dumped into another database with a table of the same structure called archive (and empty the log_entries table).
I know about the attach command for creating new databases. I actually wish to add the data to an existing database rather than creating a new one every day.
Thanks!
ATTACH "%backup_file%" AS Backup;
INSERT INTO Backup.Archive SELECT * FROM log_entries;
DELETE FROM log_entries;
DETACH Backup;
All you need to do is replace %backup_file% with the path to your backup database. This approach assumes that your Archive table is already defined and that you are using the same database file to accumulate your archive.
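If you would rather run the same archive step from application code than from the sqlite3 shell, a rough sketch using Microsoft.Data.Sqlite (the file names are placeholders, and Backup.Archive must already exist with the same structure as log_entries):

using Microsoft.Data.Sqlite;

class ArchiveLogEntries
{
    static void Main()
    {
        // "log.db" and "backup.db" are placeholder file names.
        using (var connection = new SqliteConnection("Data Source=log.db"))
        {
            connection.Open();

            // Same statements as above, executed as one batch.
            var command = connection.CreateCommand();
            command.CommandText =
                "ATTACH 'backup.db' AS Backup;" +
                "INSERT INTO Backup.Archive SELECT * FROM log_entries;" +
                "DELETE FROM log_entries;" +
                "DETACH Backup;";
            command.ExecuteNonQuery();
        }
    }
}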
$ sqlite3 exclusion.sqlite '.dump exclusion'
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE exclusion (word string);
INSERT INTO "exclusion" VALUES('books');
INSERT INTO "exclusion" VALUES('rendezvousing');
INSERT INTO "exclusion" VALUES('motherlands');
INSERT INTO "exclusion" VALUES('excerpt');
...