I use Redis.
I want the DB to be persistent, but when I kill my process, I notice that the data doesn't come back.
For example, I have 100 keys and values, and my process runs with PID 26060. When I do:
kill -9 26060
and run redis-server again, all the keys are lost.
I checked the relevant definitions in redis.conf, but didn't find anything.
How can I make it persistent?
Regarding your test: with the default configuration you would need to wait at least 5 minutes before killing the process if you want the data to be snapshotted, because the closest matching save rule for 100 changed keys is save 300 10 (save after 300 seconds if at least 10 keys changed).
This is the default config for Redis (2.8 - 3.0):
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
Everything about persistence is explained in the documentation.
The file where the data will be saved is defined by the following configuration options:
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory: the DB is saved to / loaded from this directory by default.
# Note that you must specify a directory here, not a file name.
dir /var/lib/redis/
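If you want to verify this without waiting for a save point to trigger, you can force a snapshot or shut down cleanly instead of using kill -9. A quick sketch using standard redis-cli commands (adjust host/port options for your setup):
# Check where the running instance writes its dump file
redis-cli CONFIG GET dir
redis-cli CONFIG GET dbfilename
# Force an immediate background snapshot
redis-cli BGSAVE
# Or shut down cleanly; unlike kill -9, this saves the dataset before exiting
redis-cli SHUTDOWN SAVE
After either of these, restarting redis-server with the same configuration should load the keys back from dump.rdb.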
I am trying to get the JMeter HTML report for file transfers over the SFTP protocol.
I am using the SSH SFTP Protocol plugin and added a Simple Data Writer to that thread group.
I have created my own SFTP server using Apache MINA. The JMeter script hits the server I created and uploads the file.
Script Parameters:
Thread Group - 250
Ramp up period - 50
Loop Count - 1
After running the script in non-GUI mode as nohup sh jmeter.sh -n -t Singlepart_MultipleThread_RampUp.jmx -l Singlepart_MultipleThread_RampUp.jtl, I get a CSV file generated, which I convert into an HTML report with the command jmeter -g <csv> -o <destination_folder>.
The generated HTML report shows Latency vs Time and Latency vs Request as zero, and even the CSV report shows the latency column as zero.
Below is my user.properties file
user.properties
# Latencies Over Time graph definition
jmeter.reportgenerator.graph.latenciesOverTime.classname=org.apache.jmeter.report.processor.graph.impl.LatencyOverTimeGraphConsumer
jmeter.reportgenerator.graph.latenciesOverTime.title=Latencies Over Time
jmeter.reportgenerator.graph.latenciesOverTime.property.set_granularity=${jmeter.reportgenerator.overall_granularity}
# Latencies Vs Request graph definition
jmeter.reportgenerator.graph.latencyVsRequest.classname=org.apache.jmeter.report.processor.graph.impl.LatencyVSRequestGraphConsumer
jmeter.reportgenerator.graph.latencyVsRequest.title=Latencies Vs Request
jmeter.reportgenerator.graph.latencyVsRequest.exclude_controllers=true
jmeter.reportgenerator.graph.latencyVsRequest.property.set_granularity=${jmeter.reportgenerator.overall_granularity}
jmeter.properties
#---------------------------------------------------------------------------
# Results file configuration
#---------------------------------------------------------------------------
# This section helps determine how result data will be saved.
# The commented out values are the defaults.
# legitimate values: xml, csv, db. Only xml and csv are currently supported.
jmeter.save.saveservice.output_format=csv
# The below properties are true when field should be saved; false otherwise
#
# assertion_results_failure_message only affects CSV output
jmeter.save.saveservice.assertion_results_failure_message=true
#
# legitimate values: none, first, all
jmeter.save.saveservice.assertion_results=all
#
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
# response_data is not currently supported for CSV output
jmeter.save.saveservice.response_data=true
# Save ResponseData for failed samples
jmeter.save.saveservice.response_data.on_error=false
jmeter.save.saveservice.response_message=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
jmeter.save.saveservice.time=true
jmeter.save.saveservice.subresults=true
jmeter.save.saveservice.assertions=true
jmeter.save.saveservice.latency=true
# Only available with HttpClient4
#jmeter.save.saveservice.connect_time=true
jmeter.save.saveservice.samplerData=true
#jmeter.save.saveservice.responseHeaders=false
#jmeter.save.saveservice.requestHeaders=false
#jmeter.save.saveservice.encoding=false
jmeter.save.saveservice.bytes=true
# Only available with HttpClient4
jmeter.save.saveservice.sent_bytes=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.filename=false
jmeter.save.saveservice.hostname=false
jmeter.save.saveservice.thread_counts=true
jmeter.save.saveservice.sample_count=false
jmeter.save.saveservice.idle_time=true
# Timestamp format - this only affects CSV output files
# legitimate values: none, ms, or a format suitable for SimpleDateFormat
#jmeter.save.saveservice.timestamp_format=ms
#jmeter.save.saveservice.timestamp_format=yyyy/MM/dd HH:mm:ss.SSS
# For use with Comma-separated value (CSV) files or other formats
# where the fields' values are separated by specified delimiters.
# Default:
#jmeter.save.saveservice.default_delimiter=,
# For TAB, one can use:
#jmeter.save.saveservice.default_delimiter=\t
# Only applies to CSV format files:
# Print field names as first line in CSV
#jmeter.save.saveservice.print_field_names=true
# Optional list of JMeter variable names whose values are to be saved in the result data files.
# Use commas to separate the names. For example:
#sample_variables=SESSION_ID,REFERENCE
# N.B. The current implementation saves the values in XML as attributes,
# so the names must be valid XML names.
# By default JMeter sends the variable to all servers
# to ensure that the correct data is available at the client.
# Optional xml processing instruction for line 2 of the file:
# Example:
#jmeter.save.saveservice.xml_pi=<?xml-stylesheet type="text/xsl" href="../extras/jmeter-results-detail-report.xsl"?>
# Default value:
#jmeter.save.saveservice.xml_pi=
# Prefix used to identify filenames that are relative to the current base
#jmeter.save.saveservice.base_prefix=~/
# AutoFlush on each line written in XML or CSV output
# Setting this to true will result in less test results data loss in case of Crash
# but with impact on performances, particularly for intensive tests (low or no pauses)
# Since JMeter 2.10, this is false by default
#jmeter.save.saveservice.autoflush=false
So basically I am facing issues in two places:
How do I get the latency value?
When I provide a ramp-up value of 1, the script with Thread Group = 50 takes around 16 seconds to complete the upload, whereas if I give a ramp-up other than 1, such as 10, the script ends after exactly 10 seconds, irrespective of whether the file has been uploaded or not, and produces vague results in the HTML report as well.
Any idea how to solve this? Or do I need to do anything else in the script?
You cannot, as the plugin you're using doesn't call the SampleResult.setLatency() function anywhere;
theoretically it should be possible to request the functionality from the plugin developers.
Setting a 10-second ramp-up period for 50 virtual users means that JMeter starts with 1 virtual user and gradually increases the load to 50 over those 10 seconds. Make sure to have enough loops defined in the Thread Group, as you may run into a situation where the 1st user has already finished uploading the file and been terminated while the 2nd hasn't started yet, so you have a maximum concurrency of 1 user (this can be checked using the Active Threads Over Time listener). See JMeter Test Results: Why the Actual Users Number is Lower than Expected for a more detailed explanation if needed.
We have a portal for our customers that allows them to start new projects directly on our platform. The problem is that we cannot upload documents bigger than 10 MB.
Every time I try to upload a file bigger than 10 MB, I get a "The connection was reset" error. After some research it seems that I need to change the maximum upload size, but I don't know where to do it.
I'm on CentOS 6.4/RedHat with AOLserver.
Language: Tcl.
Does anyone have an idea how to do this?
EDIT
In the end I was able to solve the problem with the command ns_limits set default -maxupload 500000000.
In your config.tcl, add the following to the nssock module section:
set max_file_upload_mb 25
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
# ...
It is also advisable to constrain the upload time by setting:
set max_file_upload_min 5
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param recvwait [expr {$max_file_upload_min * 60}]
If running on top of nsopenssl, you will have to set those configuration values (maxinput, recvwait) in a different section.
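A sketch of what that might look like, mirroring the nssock block above (the exact section name depends on how the nsopenssl driver is registered in your config.tcl):
ns_section ns/server/${server}/module/nsopenssl
# ...
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
ns_param recvwait [expr {$max_file_upload_min * 60}]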
I see that you are running Project Open. As well as setting the maxinput value for AOLserver, as described by mrcalvin, you also need to set 2 parameters in the Site Map:
Attachments package: parameter "MaximumFileSize"
File Storage package: parameter "MaximumFileSize"
These should be set to values in bytes, but not larger than the maxinput value for AOLserver. See the Project Open documentation for more info.
In the case where you are running Project Open using a reverse proxy, check the documentation here for Pound and here for Nginx. Most likely you will need to set a larger file upload limit there too.
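For Nginx, for instance, the relevant directive is client_max_body_size (a minimal sketch; the 25m value is an assumption chosen to match the AOLserver example above):
# In the server or location block that proxies to AOLserver
client_max_body_size 25m;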
I want to use Redis purely as a cache. What options do I have to disable in redis.conf to ensure this? I read that by default Redis persists data (AOF and RDB files and perhaps more). Is that true even for keys which are set to expire?
Isn't it contradictory to persist data that is set to expire?
Redis stores all its data in RAM, but dumps it to persistent storage (HDD/SSD) from time to time. This procedure is called snapshotting.
You can configure the snapshotting frequency in your redis.conf file (see the SNAPSHOTTING section):
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
So, if you want to disable snapshotting completely, you should remove or comment out all save directives in the redis.conf file (or add save "" to clear any previously configured save points). As for keys with an expiry: snapshots store the remaining TTL along with each key, so they still expire correctly after a restart; persisting them is not contradictory.
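For pure-cache use, a minimal redis.conf sketch could look like this (the maxmemory value is an assumption; pick one that fits your machine):
# Disable RDB snapshotting entirely
save ""
# AOF persistence is off by default; make sure it stays off
appendonly no
# Optional: cap memory usage and evict keys like a cache would
maxmemory 256mb
maxmemory-policy allkeys-lru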
I have a rather large database (5 dbs of about a million keys each), and each key has the environment namespace in it. For example: "datamine::production::crosswalk==foobar"
I need to sync my development environment with this data copied from the production RDB snapshot.
So what I'm trying to do is batch-rename every key, changing the namespace from datamine::production to datamine::development. Is there a good way to achieve this?
What I've tried so far
A redis-cli KEYS "datamine::production*" command, piped into sed, then back into redis-cli. This takes forever and bombs for some reason on many keys (sporadically combining several on the same line). I'd prefer a better option.
A Perl search/replace on the .rdb file. My local redis-server flatly refuses to load the modified RDB.
The solution:
OK, here's the script I wrote to solve this problem. It requires the "redis" gem. Hopefully someone else finds this useful...
#!/usr/bin/env ruby
# A script to translate the current redis database into a namespace for another environment
# GWI's Redis keys are namespaced as "datamine::production", "datamine::development", etc.
# This script connects to redis and translates these key names in-place.
#
# This script does not use Rails, but needs the "redis" gem available
require 'benchmark'
require 'redis'
FROM_NAMESPACE = "production"
TO_NAMESPACE = "development"
NAMESPACE_PREFIX = "datamine::"
REDIS_SERVER = "localhost"
REDIS_PORT = "6379"
REDIS_DBS = [0,1,2,3,4,5]
redis = Redis.new(host: REDIS_SERVER, port: REDIS_PORT, timeout: 30)
REDIS_DBS.each do |redis_db|
redis.select(redis_db)
puts "Translating db ##{redis_db}..."
seconds = Benchmark.realtime do
dbsize = redis.dbsize.to_f
inc_threshold = (dbsize/100.0).round
i = 0
old_keys = redis.keys("#{NAMESPACE_PREFIX}#{FROM_NAMESPACE}*")
old_keys.each do |old_key|
new_key = old_key.gsub(FROM_NAMESPACE, TO_NAMESPACE)
redis.rename(old_key, new_key)
print "#{((i/dbsize)*100.0).round}% complete\r" if (i % inc_threshold == 0) # on whole # % only
i += 1
end
end
puts "\nDone. It took #{seconds} seconds"
end
I have a working solution:
EVAL "local old_prefix_len = string.len(ARGV[1])
local keys = redis.call('keys', ARGV[1] .. '*')
for i = 1, #keys do
local old_key = keys[i]
local new_key = ARGV[2] .. string.sub(old_key, old_prefix_len + 1)
redis.call('rename', old_key, new_key)
end" 0 "datamine::production::" "datamine::development::"
The last two parameters are the old prefix and the new prefix, respectively.
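If you'd rather stay on the command line, a non-blocking variant of the original redis-cli/sed attempt is possible with redis-cli's --scan option (a bash sketch, assuming the same prefixes as above; SCAN iterates the keyspace without blocking the server the way KEYS does, though it still issues one RENAME per key):
# Repeat with "-n <db>" on both redis-cli calls for each of the databases 0-5
redis-cli --scan --pattern 'datamine::production::*' | while read -r key; do
  redis-cli RENAME "$key" "${key/datamine::production::/datamine::development::}"
done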
My Redis RDB file keeps growing and growing in size until the DB becomes inoperable and connections are refused. I realise this is to do with some config setting; I'm using the default config file.
Is there any way I can prevent this? I'm not too concerned about constant backups.
This is in redis.conf:
# Note: you can disable saving at all commenting all the "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
The text above is from redis.conf. If you don't want the RDB file to be saved, comment out the three lines beginning with save, like this:
# Note: you can disable saving at all commenting all the "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
# save 900 1
# save 300 10
# save 60 10000
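If you'd rather not edit redis.conf at all, the same effect can be achieved at runtime (a sketch using standard redis-cli commands; CONFIG REWRITE is only needed if you want the change written back to redis.conf so it survives a restart):
# Clear all save points so no RDB snapshots are written
redis-cli CONFIG SET save ""
# Optionally persist the change back to redis.conf
redis-cli CONFIG REWRITE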