Creating an off-heap in-memory ChronicleMap - chronicle-map

Testing out off-heap caching with ChronicleMap. Will this code create an off-heap in-memory map?
chronicleMap = ChronicleMapBuilder.of(String.class, ByteBuffer.class)
        .entries(aMaxSize)
        .averageKeySize(100)
        .averageValueSize(1000)
        .create();

Yes. Any instance of ChronicleMap is off-heap. It may be file-backed (a memory-mapped file) or not, but in either case it stores its data off heap.
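For illustration, here is a minimal sketch of both variants; it assumes ChronicleMap 3.x, the same key/value types as the question, and hypothetical names for the maxSize variable and the backing file:

// Purely in-memory map: data lives off heap in the process's own memory.
ChronicleMap<String, ByteBuffer> inMemory = ChronicleMapBuilder
        .of(String.class, ByteBuffer.class)
        .entries(maxSize)
        .averageKeySize(100)
        .averageValueSize(1000)
        .create();

// File-backed map: data is still off heap, but kept in a memory-mapped file,
// so it survives restarts and can be shared between processes.
ChronicleMap<String, ByteBuffer> persisted = ChronicleMapBuilder
        .of(String.class, ByteBuffer.class)
        .entries(maxSize)
        .averageKeySize(100)
        .averageValueSize(1000)
        .createPersistedTo(new File("subscribers.dat"));   // hypothetical file name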

Related

Azure Database cannot reduce the sizing

I cannot reduce the maximum size of my Azure SQL database from 750 GB to 500 GB.
The sizes shown in the Azure dashboard are:
Used space: 248.29 GB
Allocated space: 500.02 GB
Maximum storage size: 750 GB
The validation message when I try to reduce the size is:
The storage size of your database cannot be smaller than the currently allocated size. To reduce the database size, the database first needs to reclaim unused space by running DBCC SHRINKDATABASE (XXX_Database Name). This operation can impact performance while it is running and may take several hours to complete.
What should I do?
Best regards
To reduce the maximum database size, the new maximum must be no smaller than the currently allocated space. In your case you therefore need to reclaim the unused allocated space first, which you can do by running the following command:
-- Shrink database data space allocated.
DBCC SHRINKDATABASE (N'db1')
For more details, please refer to the documentation.
I got this error via the CLI after also disabling read scale; the solution was to remove --max-size 250GB from the command:
az sql db update -g groupname -s servername -n dbname --edition GeneralPurpose --capacity 1 --max-size 250GB --family Gen5 --compute-model Serverless --no-wait

Redis: List all data structures

I'm an absolute newbie with Redis.
I need to:
list all databases
list all data structures
I've connected to a Redis 4.0.11 server using redis-cli.
Redis is a key-value store, not a relational database. You can't query or inspect its structure the way you would with a database; you can only retrieve the value associated with the key that you pass.
Usually a key-value store like Redis is used alongside a database for high-performance storage and retrieval of values by key, when the performance of the database alone is not enough.

Apache Ignite and Apache Spark integration, Cache loading into Spark Context using IgniteRDD

If I create an IgniteRDD from a cache with 10M entries in my Spark job, will it load all 10M entries into my Spark context? Please find my code below for reference.
SparkConf conf = new SparkConf().setAppName("IgniteSparkIntgr").setMaster("local");
JavaSparkContext context = new JavaSparkContext(conf);
JavaIgniteContext<Integer, Subscriber> igniteCxt = new JavaIgniteContext<Integer,Subscriber>(context,"example-ignite.xml");
JavaIgniteRDD<Integer,Subscriber> cache = igniteCxt.fromCache("subscriberCache");
DataFrame query_res = cache.sql("select id, lastName, company from Subscriber where id between ? and ?", 12, 15);
DataFrame input = loadInput(context);
DataFrame joined_df = input.join(query_res,input.col("id").equalTo(query_res.col("ID")));
System.out.println(joined_df.count());
In the above code, subscriberCache has more than 10M entries. Will the 10M Subscriber objects be loaded into the JVM at any point in the above code, or is only the query output loaded?
FYI: Ignite is running in a separate JVM.
The cache.sql(...) method queries data that is already in the Ignite in-memory cache, so you have to load the data before running the query. You can use the IgniteRDD.saveValues(...) or IgniteRDD.savePairs(...) methods for this; each of them iterates through all partitions and loads all the data that currently exists in Spark into Ignite.
Note that any transformations or joins that you do with the resulting DataFrame are performed locally on the driver. You should avoid this as much as possible to get the best performance out of the Ignite SQL engine.
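As a rough sketch of that loading step (the loadSubscribers helper and the pair RDD are hypothetical; the rest reuses the setup and imports from the question):

JavaIgniteContext<Integer, Subscriber> igniteCxt =
        new JavaIgniteContext<>(context, "example-ignite.xml");
JavaIgniteRDD<Integer, Subscriber> cache = igniteCxt.fromCache("subscriberCache");

// Hypothetical pair RDD of subscribers built elsewhere in the Spark job.
JavaPairRDD<Integer, Subscriber> subscribers = loadSubscribers(context);

// savePairs(...) iterates the RDD's partitions and writes every entry into the
// Ignite cache; only data loaded into Ignite is visible to cache.sql(...).
cache.savePairs(subscribers);

// The query runs against the data now held in the Ignite cache.
DataFrame queryRes = cache.sql(
        "select id, lastName, company from Subscriber where id between ? and ?", 12, 15);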

Setting JVM memory parameter not helping as HSQLDB in-memory database increases in size

I set the JVM memory size (JRE parameter) to 1024 MB; by default it is 256 MB. I inserted data into HSQLDB tables (size ~220 MB) and I am getting an out-of-memory error on a Windows 7 machine. Even though I set the size to 1024 MB, I am still facing the out-of-memory error. Please let me know how to resolve this issue, as this database is about to move to the production site.
Any suggestion is greatly appreciated.
How do you know the size of the HSQLDB tables?
The size of the files that contain the database is not the same as the total size of the database's Java objects in memory. You can use CACHED tables for your largest tables to restrict the number of objects loaded into memory.
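As a rough JDBC sketch (the connection URL and table names are placeholders, and the java.sql imports are assumed):

// Placeholder file-based HSQLDB connection.
Connection conn = DriverManager.getConnection("jdbc:hsqldb:file:data/mydb", "SA", "");
Statement st = conn.createStatement();

// A CACHED table keeps only part of its rows in memory; the rest stays in the .data file.
st.execute("CREATE CACHED TABLE big_table (id BIGINT PRIMARY KEY, payload VARCHAR(1000))");

// An existing MEMORY table can also be converted in place.
st.execute("SET TABLE other_table TYPE CACHED");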

Configuring HSQLDB for big storage

First, sorry for my approximate English.
I'm a little lost with HSQLDB.
I need to save a large amount of data (3 GB+) in a local database, in as little time as possible.
So I did the following:
CREATE CACHED TABLE ...; to save the data in the .data file
SET FILES LOG FALSE; to avoid writing the data to the .log file and save time
SHUTDOWN COMPACT; to save the records to the local disk
I know there are other parameters to tune to increase the .data file size and the data access speed, such as:
hsqldb.cache_scale=
hsqldb.cache_size_scale=
SET FILES NIO SIZE xxxx
But I don't know how to set these for big storage.
Thanks for your help.
When you use SET FILES LOG FALSE, data changes are not saved until you execute SHUTDOWN or CHECKPOINT.
The other parameters can be left at their default values. If you want to use more memory and gain some speed, you can multiply the default values of those parameters by 2 or 4.
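For illustration, a rough JDBC sketch of the sequence described above (connection URL, table definition, row count, and batch size are placeholders; the java.sql imports are assumed):

Connection conn = DriverManager.getConnection("jdbc:hsqldb:file:data/bigdb", "SA", "");
Statement st = conn.createStatement();

// Skip the .log file during the bulk load; changes persist only at CHECKPOINT or SHUTDOWN.
st.execute("SET FILES LOG FALSE");

// CACHED table: rows are stored in the .data file rather than all held on the Java heap.
st.execute("CREATE CACHED TABLE big_table (id BIGINT PRIMARY KEY, payload VARCHAR(1000))");

// Bulk insert with a batched PreparedStatement (placeholder data).
PreparedStatement ps = conn.prepareStatement("INSERT INTO big_table VALUES (?, ?)");
for (long i = 0; i < 1_000_000; i++) {
    ps.setLong(1, i);
    ps.setString(2, "row " + i);
    ps.addBatch();
    if (i % 10_000 == 0) {
        ps.executeBatch();   // flush periodically to keep each batch small
    }
}
ps.executeBatch();

// Persist everything and compact the .data file when closing the database.
st.execute("SHUTDOWN COMPACT");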