Can't turn off ActiveMQ's temp storage

I want to test producer flow control, but the messages always go into the temp store, so I have no way to test it.
I have set tempUsage to 1 kb:
<tempUsage>
    <tempUsage limit="1 kb"/>
</tempUsage>
And when ActiveMQ starts, I can see a log line that seems fine:
Temporary Store limit is 0 mb, whilst the max journal file size for the temporary store is: 32 mb, the temp store will not accept any data when used.
But when I send one 50 MB message, it still goes into the temp store:
PListStore:[D:\apache-activemq-5.14.4\bin\..\data\localhost\tmp_storage] initialized
What can I do to turn off the temp storage?

Related

Spark - Failed to load collect frame - "RetryingBlockFetcher - Exception while beginning fetch"

We have a Scala Spark application that reads something like 70K records from the DB into a data frame; each record has 2 fields.
After reading the data from the DB, we do some minor mapping and load the result as a broadcast variable for later use.
Now, in the local environment, there is an exception, a timeout from the RetryingBlockFetcher, while running the following code:
dataframe.select("id", "mapping_id")
  .rdd.map(row => row.getString(0) -> row.getLong(1))
  .collectAsMap().toMap
The exception is:
2022-06-06 10:08:13.077 task-result-getter-2 ERROR org.apache.spark.network.shuffle.RetryingBlockFetcher Exception while beginning fetch of 1 outstanding blocks
java.io.IOException: Failed to connect to /1.1.1.1:62788
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:253)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:195)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$2.createAndStart(NettyBlockTransferService.scala:122)
In the local environment, I simply create the Spark session with a local "spark.master".
When I limit the maximum number of records to 20K, it works well.
Can you please help? Maybe I need to configure something in my local environment so that the original code works properly?
Update:
I tried changing a lot of Spark-related configuration in my local environment: memory, number of executors, timeout-related settings, and more, but nothing helped! I just got the timeout after more time...
I realized that the data frame I'm reading from the DB has 1 partition of 62K records; after repartitioning to 2 or more partitions, the process worked correctly and I managed to map and collect as needed (a sketch of the pattern follows below).
Any idea why this solves the issue? Is there a Spark configuration that can solve this instead of repartitioning?
Thanks!
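For illustration only, here is a minimal PySpark sketch of the repartition-before-collect pattern described in the update. The question's app is Scala, and "mapping_table" is an assumed stand-in for the DB read, so treat this as a pattern rather than the original code.
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("collect-mapping").getOrCreate()

mapping = (
    spark.table("mapping_table")                      # hypothetical source; the question reads from a DB
    .select("id", "mapping_id")
    .repartition(4)                                   # 2+ partitions avoided the fetch timeout for the asker
    .rdd
    .map(lambda row: (row["id"], row["mapping_id"]))
    .collectAsMap()                                   # returns a plain dict on the driver
)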

Error while uploading a huge .csv file to dynamodb through s3 bucket using lambda function

My function is:
import boto3
import csv

s3 = boto3.client('s3')
dynamodb = boto3.resource('dynamodb')

def lambda_handler(event, context):
    bucket = 'bucketname'
    file_name = 'filename.csv'
    obj = s3.get_object(Bucket=bucket, Key=file_name)
    rows = obj['Body'].read().decode('utf-8')  # decode so csv.reader gets text lines
    lines = rows.splitlines()
    # print(lines)
    reader = csv.reader(lines)
    parsed_csv = list(reader)
    num_rows = len(parsed_csv)
    table = dynamodb.Table('table_name')
    with table.batch_writer() as batch:
        for i in range(1, num_rows):
            Brand_Name = parsed_csv[i][0]
            Assigned_Brand_Name = parsed_csv[i][1]
            Brand_URL = parsed_csv[i][2]
            Generic_Name = parsed_csv[i][3]
            HSN_Code = parsed_csv[i][4]
            GST_Rate = parsed_csv[i][5]
            Price = parsed_csv[i][6]
            Dosage = parsed_csv[i][7]
            Package = parsed_csv[i][8]
            Size = parsed_csv[i][9]
            Size_Unit = parsed_csv[i][10]
            Administration_Form = parsed_csv[i][11]
            Company = parsed_csv[i][12]
            Uses = parsed_csv[i][13]
            Side_Effects = parsed_csv[i][14]
            How_to_use = parsed_csv[i][15]
            How_to_work = parsed_csv[i][16]
            FAQs_Downloaded = parsed_csv[i][17]
            Alternate_Brands = parsed_csv[i][18]
            Prescription_Required = parsed_csv[i][19]
            Interactions = parsed_csv[i][20]
            batch.put_item(Item={
                'Brand Name': Assigned_Brand_Name,
                'Brand URL': Brand_URL,
                'Generic Name': Generic_Name,
                'Price': Price,
                'Dosage': Dosage,
                'Company': Company,
                'Uses': Uses,
                'Side Effects': Side_Effects,
                'How to use': How_to_use,
                'How to work': How_to_work,
                'FAQs Downloaded?': FAQs_Downloaded,
                'Alternate Brands': Alternate_Brands,
                'Prescription Required': Prescription_Required,
                'Interactions': Interactions
            })
Response:
{
"errorMessage": "2020-10-14T11:40:56.792Z ecd63bdb-16bc-4813-afed-cbf3e1fa3625 Task timed out after 3.00 seconds"
}
You haven't specified how many rows there are in your CSV file. "Huge" is pretty subjective, so it is possible that your task is timing out due to throttling on the DynamoDB table.
If you are using provisioned capacity on the table you are loading into, make sure you have enough capacity allocated. If you're using on-demand capacity then this might be due to the on-demand partitioning that happens when the table needs to scale up.
Either way, you may want to add some error handling for situations like these, and add a delay when you get a timeout before retrying and resuming (a rough sketch follows below).
Something to keep in mind is that writes to DynamoDB always take at least 1 WCU, and the maximum capacity a single partition can have is 1000 WCU. So as your write throughput increases, the table may undergo multiple splits behind the scenes when you're in on-demand mode. For provisioned mode, you'll have to allocate enough capacity to begin with; otherwise you'll be limited to writing however many items per second your allocated write capacity allows.
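As a rough illustration of that retry-with-delay idea (not the poster's code): a single-item writer that backs off when DynamoDB reports throttling. The table name is taken from the question; note that boto3's batch_writer already resends unprocessed items, so this only sketches the backoff pattern for errors that surface as exceptions.
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('table_name')  # table name from the question

def put_with_backoff(item, max_retries=5):
    """Write one item, sleeping and retrying when DynamoDB throttles the request."""
    delay = 1
    for _ in range(max_retries):
        try:
            table.put_item(Item=item)
            return
        except ClientError as err:
            code = err.response['Error']['Code']
            if code not in ('ProvisionedThroughputExceededException', 'ThrottlingException'):
                raise  # not a throttling problem, so re-raise
            time.sleep(delay)  # simple exponential backoff before resuming
            delay *= 2
    raise RuntimeError('item still throttled after {} retries'.format(max_retries))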

Not able to save large spark dataframe as pickle

I have a large dataframe (a little more than 20 GB) and I am trying to save it as a pickle object to be used later in another process.
I have tried different configurations; below is the latest one.
executor_cores=4
executor_memory='20g'
driver_memory='40g'
deploy_mode='client'
max_executors_dynamic='spark.dynamicAllocation.maxExecutors=400'
num_executors_static=300
spark_driver_memoryOverhead='5g'
spark_executor_memoryOverhead='2g'
spark_driver_maxResultSize='8g'
spark_kryoserializer_buffer_max='1g'
Note: I cannot increase spark_driver_maxResultSize beyond 8 GB.
I have also tried saving the dataframe as HDFS files and then saving those as pickle, but I get the same error message as before.
My understanding is that when we use pandas' to_pickle it brings all the data onto the driver and then creates the pickle object. As the data size is more than spark.driver.maxResultSize, the code is failing. (The code worked earlier for 2 GB of data.)
Is there any workaround to solve this problem?
big_data_frame.toPandas().to_pickle('{}/result_file_01.pickle'.format(result_dir))
big_data_frame.write.save('{}/result_file_01.pickle'.format(result_dir), format='parquet', mode='append')
df_to_pickel=sqlContext.read.format('parquet').load(file_path)
df_to_pickel.toPandas().to_pickle('{}/scoring__{}.pickle'.format(afs_dir, rd.strftime('%Y%m%d')))
Error message
Py4JJavaError: An error occurred while calling o1638.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 955 tasks (4.0 GB) is bigger than spark.driver.maxResultSize (4.0 GB)
Saving as a pickle file is an RDD method in Spark, not a DataFrame method. To save your frame using pickle, run
big_data_frame.rdd.saveAsPickleFile(filename)
If you are working with big data, it is never a good idea to run either collect or toPandas in Spark, as they bring everything into the driver's memory and can crash the system. I would suggest using Parquet or another file format for saving your data, since the RDD API is in maintenance mode, which means Spark is not actively adding new features to it (a sketch of the Parquet route follows after the read example below).
To read the file, try
pickle_rdd = sc.pickleFile(filename).collect()
df = spark.createDataFrame(pickle_rdd)
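As a rough sketch of the Parquet route suggested above, reusing the question's big_data_frame and result_dir (and assuming pyarrow or fastparquet is installed for pandas): write the frame out from Spark without collecting it on the driver, then load it in the other process with plain pandas instead of unpickling.
# In the Spark job: write without pulling data onto the driver.
big_data_frame.write.mode('overwrite').parquet('{}/result_file_01.parquet'.format(result_dir))

# In the other process: plain pandas, no Spark needed.
import pandas as pd

result_df = pd.read_parquet('{}/result_file_01.parquet'.format(result_dir))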

Ignite Data streamer optimization

I am using below settings:
allowOverwrite: false
nodeParallelOperations: 1
autoFlushFrequency: 10
perNodeBufferSize: 5000000
My record size is around 2000 bytes. The "grid-data-loader-flusher" thread stats are as below:
Thread                          Count   Average         Longest       Duration
grid-data-loader-flusher-#100   38      4,737,793.579   30,427,862    180,036,156
What would be the best configurations for Data streamer?
Thanks
It's good to use parallel streaming mode for the data streamer. You can achieve this by collecting your key-value records in a Java Map and calling the streamer.addData() method in parallel over that map. Here is the snippet:
maptoStream.entrySet().parallelStream().forEach(streamer::addData);
Also, if you set allowOverwrite to false, you can't use a custom stream receiver to process your collection of records. In this case it will skip a record if it is already in the cache.
Regarding buffer size, you normally have to wait until the buffer is full before it is flushed automatically to the cache. Flush frequency comes to the rescue here: it does periodic flushing, so whichever condition is satisfied first (the buffer filling up or the flush frequency being reached) triggers a flush. I preferred calling a manual flush after the method call above.
I observed that the streamer works well with a much bigger collection when you call the streamer.addData() method over it in parallel.

Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106)

I get the following error while storing data to Aerospike (client.put). I have enough space on the drive.
Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106).
Here is my Aerospike server namespace configuration:
namespace test {
    replication-factor 1
    memory-size 1G
    default-ttl 30d # 30 days, use 0 to never expire/evict.
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 2G
        data-in-memory true # Store data in memory in addition to file.
    }
}
By default namespaces have a write-block-size of 1 MiB. This is also the maximum configurable size and will limit the max object size the application is able to write to Aerospike.
If you need to go beyond 1 MiB see Large Data Types as a possible solution.
UPDATE 2019/09/06
Since Aerospike 3.16, the write-block-size limit has been increased from 1 MiB to 8 MiB.
Yes, but unfortunately, Aerospike has deprecated LDT (https://www.aerospike.com/blog/aerospike-ldt/). They now recommend using Lists or Maps, but as stated in their post:
"the new implementation does not solve the problem of the 1MB Aerospike database row size limit. A future key feature of the product will be an enhanced implementation that transcends the 1MB limit for a number of types"
In other words, it is still an unsolved problem when storing your data on SSD or HDD. However, you can store larger records in in-memory namespaces.
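For completeness, here is a minimal sketch of one way to live with the write-block-size limit from the Python client (which is where the error in the question comes from): catch the record-too-big error and split the payload across several smaller records. The host, namespace, set, and bin names are assumptions for illustration, not anything Aerospike requires.
import aerospike
from aerospike import exception as ex

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()  # assumed host

CHUNK = 512 * 1024  # stay well under the default 1 MiB write-block-size

def put_possibly_large(namespace, set_name, user_key, payload):
    """Try a normal put; on AEROSPIKE_ERR_RECORD_TOO_BIG (code 13), chunk the payload."""
    try:
        client.put((namespace, set_name, user_key), {'data': payload})
    except ex.RecordTooBig:
        chunks = [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]
        for n, chunk in enumerate(chunks):
            client.put((namespace, set_name, '{}:{}'.format(user_key, n)), {'data': chunk})
        # Small index record so readers know how many chunks to fetch back.
        client.put((namespace, set_name, user_key), {'chunks': len(chunks)})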