Rate limiting with Redis

I'm using ElastiCache Redis for rate limiting, with Redisson as the client. The relevant code is:
public CompletableFuture<List<Long>> incrementSingleKeys(
        List<String> keys, List<Long> increments, List<Long> ttls) {
    RBatch batch = redissonClient.createBatch(BatchOptions.defaults());
    for (int i = 0; i < keys.size(); i++) {
        batch.getAtomicLong(keys.get(i)).addAndGetAsync(increments.get(i));
    }
    return batch
        .executeAsync()
        .thenCompose(
            (counters) -> {
                List<String> keysToSet = Lists.newArrayList();
                List<Long> TTLsToSet = Lists.newArrayList();
                for (int i = 0; i < counters.size(); i++) {
                    // only set TTL for new keys; use equals(), since == on boxed Longs
                    // compares references and breaks for values above 127
                    if (increments.get(i).equals(counters.get(i))) {
                        keysToSet.add(keys.get(i));
                        TTLsToSet.add(ttls.get(i));
                    }
                }
                if (!keysToSet.isEmpty()) {
                    return setTTLs(keysToSet, TTLsToSet).thenApply((r) -> counters);
                } else {
                    return CompletableFuture.completedFuture(counters);
                }
            });
}
public CompletableFuture<List<Boolean>> setTTLs(List<String> keys, List<Long> TTLs) {
    CompletableFuture<List<Boolean>> future = new CompletableFuture<>();
    Stopwatch timer = Stopwatch.createStarted();
    RBatch batch = redissonClient.createBatch(BatchOptions.defaults());
    for (int i = 0; i < keys.size(); i++) {
        batch.getBucket(keys.get(i)).expireAsync(TTLs.get(i), TimeUnit.MILLISECONDS);
    }
    batch
        .executeAsync()
        .whenComplete(
            (list, ex) -> {
                if (ex != null) {
                    future.complete(Collections.nCopies(keys.size(), false)); // fail open
                } else {
                    future.complete(
                        list.stream()
                            .map(entry -> (entry instanceof Boolean ? (Boolean) entry : false))
                            .collect(Collectors.toList()));
                }
            });
    return future;
}
Basically, I set the TTL only for new keys. The issue is that sometimes the increment batch call succeeds but the setTTLs call times out, which leaves a permanent key and could lead to incorrect rate limiting. One workaround is to get and set the TTL on every increment, but that would hurt performance. Is there any other solution?
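For context, one commonly suggested direction (sketched here as an assumption, not part of the original post) is to make the increment and the expiry a single atomic server-side operation via a Lua script, so a timeout can never strand a counter without a TTL. A minimal sketch against Redisson's RScript API; the script body, method name, and wiring are illustrative:

// A minimal sketch, assuming Redisson's RScript API: INCRBY and PEXPIRE run
// atomically inside one Lua script, so the key can never exist without a TTL.
// Script text and names below are illustrative, not the poster's code.
private static final String INCR_WITH_TTL =
    "local c = redis.call('INCRBY', KEYS[1], ARGV[1]) " +
    "if c == tonumber(ARGV[1]) then " +            // first write: key is new
    "  redis.call('PEXPIRE', KEYS[1], ARGV[2]) " +
    "end " +
    "return c";

public CompletableFuture<Long> incrementWithTtl(String key, long increment, long ttlMillis) {
    RFuture<Long> result = redissonClient.getScript().evalAsync(
        RScript.Mode.READ_WRITE,
        INCR_WITH_TTL,
        RScript.ReturnType.INTEGER,
        Collections.singletonList(key),
        increment, ttlMillis);
    return result.toCompletableFuture();
}

Each key then costs one script call instead of an increment plus a conditional expire, and the TTL can no longer be lost; if the key list is long, the per-key script calls could presumably still be grouped through RBatch.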

Related

Bulk hash insert with multiple Hash Key value using StackExchange.Redis

I am using Redis server 4.0.9 and StackExchange.Redis (2.2.4).
I am trying to insert my CSV data into a Redis hash. The CSV contains ~10K records and the file size is 2 MB.
It takes around 17 seconds to push the 10K records.
Here are all my test results:
Simple key/value pairs, ~100K records (StringSetAsync): 8 seconds
Hash key with 2 field/value pairs, ~100K records (HashSetAsync): 14 seconds
Hash key from CSV, ~10K records: 17 seconds
I am using pipelining to process these records.
public class Redis_Test
{
    private static ConfigurationOptions _configurationOptions;

    public Redis_Test(string argStrEndPoints)
    {
        _configurationOptions = new ConfigurationOptions
        {
            EndPoints = { argStrEndPoints },
            AbortOnConnectFail = false,
            KeepAlive = 60
        };
    }

    public Redis_Test(string argStrEndPoints, string argStrPassword)
    {
        _configurationOptions = new ConfigurationOptions
        {
            EndPoints = { argStrEndPoints },
            Password = argStrPassword,
            AbortOnConnectFail = false,
            KeepAlive = 60
        };
    }

    private static IDatabase RedisCache
    {
        get
        {
            return Connection.GetDatabase();
        }
    }

    private static readonly Lazy<ConnectionMultiplexer> LazyConnection
        = new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(_configurationOptions));

    public static ConnectionMultiplexer Connection
    {
        get
        {
            return LazyConnection.Value;
        }
    }

    public void testpipe()
    {
        List<Task> redisTask = new List<Task>();
        int batch = 10000;
        for (int i = 0; i <= 100000; i++)
        {
            Task<bool> addAsyncTask = RedisCache.StringSetAsync("testKey:" + i.ToString(), "Value:" + i.ToString());
            redisTask.Add(addAsyncTask);
            if (i == batch)
            {
                Task[] TaskArray = redisTask.ToArray();
                Task.WaitAll(TaskArray);
                redisTask.Clear();
                batch = batch + 10000;
            }
        }
    }

    public void testpipehash()
    {
        List<Task> redisTask = new List<Task>();
        int batch = 10000;
        for (int i = 0; i <= 100000; i++)
        {
            HashEntry[] ab = new HashEntry[] {
                new HashEntry("hash1" + i.ToString(), "hashvalue1" + i.ToString()),
                new HashEntry("hash2" + i.ToString(), "hashvalue2" + i.ToString()) };
            Task addAsyncTask = RedisCache.HashSetAsync("testhashKey:" + i.ToString(), ab);
            redisTask.Add(addAsyncTask);
            if (i == batch)
            {
                Task[] TaskArray = redisTask.ToArray();
                Task.WaitAll(TaskArray);
                redisTask.Clear();
                batch = batch + 10000;
            }
        }
    }

    public int testpipehashCSV(DataTable dataTable)
    {
        List<Task> redisTask = new List<Task>();
        int batch = 1000;
        var watch = System.Diagnostics.Stopwatch.StartNew();
        int i = 0;
        foreach (DataRow dr in dataTable.Rows)
        {
            // converting row to dictionary (excluded from timing)
            watch.Stop();
            Dictionary<string, string> test = dr.Table.Columns
                .Cast<DataColumn>()
                .ToDictionary(c => c.ColumnName, c => dr[c].ToString());
            // converting dictionary to HashEntry[]
            var hashEntries = test.Select(
                pair => new HashEntry(pair.Key, pair.Value)).ToArray();
            watch.Start();
            Task addAsyncTask = RedisCache.HashSetAsync("testhashKeyCSV:" + i.ToString(), hashEntries);
            redisTask.Add(addAsyncTask);
            if (i == batch)
            {
                Task[] TaskArray = redisTask.ToArray();
                Task.WaitAll(TaskArray);
                redisTask.Clear();
                batch = batch + 1000;
            }
            i++;
        }
        Task.WaitAll(redisTask.ToArray()); // wait for the final partial batch
        watch.Stop();
        TimeSpan ts = watch.Elapsed;
        // ts.Seconds only returns the seconds component; TotalSeconds is the full elapsed time
        int totaltimeinsec = (int)ts.TotalSeconds;
        return totaltimeinsec;
    }
}
Any suggestions on where I can improve my CSV data load into the Redis hash?
Thanks.
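One direction that may help (a sketch, not from the original post) is grouping the writes through StackExchange.Redis's explicit IBatch API, which queues commands locally and flushes them to the socket as one contiguous block instead of relying on implicit pipelining of individual async calls. The method name and row shape below are illustrative; it assumes the same RedisCache property as above and a using for System.Linq:

// A minimal sketch using IBatch. CreateBatch() buffers commands client-side;
// Execute() writes the whole block to the socket at once, which can reduce
// per-command overhead for large loads.
public void LoadHashesBatched(List<KeyValuePair<string, HashEntry[]>> rows, int batchSize = 1000)
{
    for (int start = 0; start < rows.Count; start += batchSize)
    {
        IBatch batch = RedisCache.CreateBatch();
        var pending = new List<Task>();
        foreach (var row in rows.Skip(start).Take(batchSize))
        {
            // nothing is sent until Execute() is called
            pending.Add(batch.HashSetAsync(row.Key, row.Value));
        }
        batch.Execute();                 // flush the block to the server
        Task.WaitAll(pending.ToArray()); // then wait for the replies
    }
}

Converting the rows off the hot path usually matters as much as the Redis calls themselves, which is what the Stopwatch start/stop in testpipehashCSV is already trying to measure.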

Does SQL batch query execution involve multiple exchanges of data between server and client?

From what I have read in multiple sources online, batch query execution groups multiple statements together and executes them at once, thereby eliminating multiple back-and-forth communications.
Some sources that claim this are:
https://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm#:~:text=Batch%20Processing%20allows%20you%20to,communication%20overhead%2C%20thereby%20improving%20performance.
http://tutorials.jenkov.com/jdbc/batchupdate.html
https://www.baeldung.com/jdbc-batch-processing
All of these talk about a single network trip etc.; however, going through the source code of H2 or SQLite, it looks like statements are executed one by one, albeit with autocommit disabled.
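For reference, this is the standard JDBC batching pattern those tutorials describe (a generic sketch; the table, columns, and User type are made up for illustration):

// Generic JDBC batch usage, as described in the linked tutorials.
// Table/column names and the User type are illustrative only.
try (PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO users (id, name) VALUES (?, ?)")) {
    for (User u : users) {
        ps.setLong(1, u.getId());
        ps.setString(2, u.getName());
        ps.addBatch();                          // queue the parameter set client-side
    }
    int[] updateCounts = ps.executeBatch();     // hand the whole batch to the driver
}

Note that the JDBC API only defines this interface; whether executeBatch() becomes one round trip or a per-statement loop is up to the driver, and embedded engines like SQLite and H2 have no network hop to save in the first place.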
Eg: Sqlite
final synchronized int[] executeBatch(long stmt, int count, Object[] vals, boolean autoCommit) throws SQLException {
    if (count < 1) {
        throw new SQLException("count (" + count + ") < 1");
    }
    final int params = bind_parameter_count(stmt);
    int rc;
    int[] changes = new int[count];
    try {
        for (int i = 0; i < count; i++) {
            reset(stmt);
            for (int j = 0; j < params; j++) {
                rc = sqlbind(stmt, j, vals[(i * params) + j]);
                if (rc != SQLITE_OK) {
                    throwex(rc);
                }
            }
            rc = step(stmt);
            if (rc != SQLITE_DONE) {
                reset(stmt);
                if (rc == SQLITE_ROW) {
                    throw new BatchUpdateException("batch entry " + i + ": query returns results", changes);
                }
                throwex(rc);
            }
            changes[i] = changes();
        }
    }
    finally {
        ensureAutoCommit(autoCommit);
    }
    reset(stmt);
    return changes;
}
Eg: H2
public int[] executeBatch() throws SQLException {
    try {
        debugCodeCall("executeBatch");
        if (batchParameters == null) {
            // Empty batch is allowed, see JDK-4639504 and other issues
            batchParameters = Utils.newSmallArrayList();
        }
        batchIdentities = new MergedResult();
        int size = batchParameters.size();
        int[] result = new int[size];
        SQLException first = null;
        SQLException last = null;
        checkClosedForWrite();
        for (int i = 0; i < size; i++) {
            Value[] set = batchParameters.get(i);
            ArrayList<? extends ParameterInterface> parameters =
                    command.getParameters();
            for (int j = 0; j < set.length; j++) {
                Value value = set[j];
                ParameterInterface param = parameters.get(j);
                param.setValue(value, false);
            }
            try {
                result[i] = executeUpdateInternal();
                // Cannot use own implementation, it returns batch identities
                ResultSet rs = super.getGeneratedKeys();
                batchIdentities.add(((JdbcResultSet) rs).result);
            } catch (Exception re) {
                SQLException e = logAndConvert(re);
                if (last == null) {
                    first = last = e;
                } else {
                    last.setNextException(e);
                }
                result[i] = Statement.EXECUTE_FAILED;
            }
        }
        batchParameters = null;
        if (first != null) {
            throw new JdbcBatchUpdateException(first, result);
        }
        return result;
    } catch (Exception e) {
        throw logAndConvert(e);
    }
}
From the above code I see multiple calls to the database, each with its own result set. How does batch execution actually work?

Optimizing Transposing and Comparison of Concurrent List in Java 8

I have a Java 8 application that collects data from multiple threads using a BlockingQueue.
I need to perform comparisons of samples.
My real application is very large, so I implemented a mock application (GitHub) in Java 8.
I'm generating chunks of bytes (in effectively random order).
The bytes are stored in the ChunkDTO class.
I implemented a capturer that collects the ChunkDTOs into a List; the code is in the Capturer class.
Each ChunkDTO in the List is translated into a List of samples (TimePitchValue, to be exact), yielding a nested List (a List of Lists of TimePitchValue).
The nested List is then transposed in order to compare TimePitchValues that share the same time value.
Due to the enormous number of TimePitchValue instances, this consumes a huge amount of time in my application.
Here is some code (the complete working code is on GitHub, since it is too large for this site).
public class Generator {

    final static Logger LOGGER = Logger.getLogger("SampleComparator");

    public static void main(String[] args) {
        long previous = System.nanoTime();
        final int minBufferSize = 2048;
        int sampleRate = 8192;
        int numChannels = 1;
        int numBytesPerSample = 1;
        int samplesChunkPerSecond = sampleRate / minBufferSize;
        int minutes = 0;
        int seconds = 10;
        int time = 60 * minutes + seconds;
        int chunksBySecond = samplesChunkPerSecond * numBytesPerSample * numChannels;
        int pitchs = 32;
        boolean signed = false;
        boolean endianness = false;
        AudioFormat audioformat = new AudioFormat(sampleRate, 8 * numBytesPerSample, numChannels, signed, endianness);
        ControlDSP controlDSP = new ControlDSP(audioformat);
        BlockingQueue<ChunkDTO> generatorBlockingQueue = new LinkedBlockingQueue<>();
        Capturer capturer = new Capturer(controlDSP, pitchs, pitchs * time * chunksBySecond, generatorBlockingQueue);
        controlDSP.getListFuture().add(controlDSP.getExecutorService().submit(capturer));
        for (int i = 0; i < time * chunksBySecond; i++) {
            for (int p = 0; p < pitchs; p++) {
                ChunkDTO chunkDTO = new ChunkDTO(UtilClass.getArrayByte(minBufferSize), i, p);
                LOGGER.info(String.format("chunkDTO: %s", chunkDTO));
                try {
                    generatorBlockingQueue.put(chunkDTO);
                } catch (InterruptedException ex) {
                    LOGGER.info(ex.getMessage());
                }
            }
            try {
                Thread.sleep(1000 / chunksBySecond);
            } catch (Exception ex) {
            }
        }
        controlDSP.tryFinishThreads(Thread.currentThread());
        long current = System.nanoTime();
        long interval = TimeUnit.NANOSECONDS.toSeconds(current - previous);
        System.out.println("Seconds Interval: " + interval);
    }
}
Capturer Class
public class Capturer implements Callable<Void> {

    private final ControlDSP controlDSP;
    private final int pitchs;
    private final int totalChunks;
    private final BlockingQueue<ChunkDTO> capturerBlockingQueue;
    private final Counter intCounter;
    private final Map<Long, List<ChunkDTO>> mapIndexListChunkDTO = Collections.synchronizedMap(new HashMap<>());
    private volatile boolean isRunning = false;
    private final String threadName;
    private static final Logger LOGGER = Logger.getLogger("SampleComparator");

    public Capturer(ControlDSP controlDSP, int pitchs, int totalChunks, BlockingQueue<ChunkDTO> capturerBlockingQueue) {
        this.controlDSP = controlDSP;
        this.pitchs = pitchs;
        this.totalChunks = totalChunks;
        this.capturerBlockingQueue = capturerBlockingQueue;
        this.intCounter = new Counter();
        this.controlDSP.getListFuture().add(this.controlDSP.getExecutorService().submit(() -> {
            while (intCounter.getValue() < totalChunks) {
                try {
                    Thread.sleep(100);
                } catch (InterruptedException ex) {
                    LOGGER.log(Level.SEVERE, null, ex);
                }
            }
            capturerBlockingQueue.add(new ChunkDTOStopper());
        }));
        this.threadName = this.getClass().getSimpleName();
    }

    @Override
    public Void call() throws Exception {
        long quantity = 0;
        isRunning = true;
        while (isRunning) {
            try {
                ChunkDTO chunkDTO = capturerBlockingQueue.take();
                if (chunkDTO instanceof ChunkDTOStopper) {
                    break;
                }
                // Find or create the List (according to index) to add the incoming chunk
                long index = chunkDTO.getIndex();
                int sizeChunk = chunkDTO.getChunk().length;
                List<ChunkDTO> listChunkDTOWithIndex = getListChunkDTOByIndex(chunkDTO);
                // When the List (according to index) is completed, it is processed
                if (listChunkDTOWithIndex.size() == pitchs) {
                    mapIndexListChunkDTO.remove(index);
                    TransposerComparator transposerComparator = new TransposerComparator(controlDSP, controlDSP.getAudioformat(), index, sizeChunk, listChunkDTOWithIndex);
                    controlDSP.getListFuture().add(controlDSP.getExecutorService().submit(transposerComparator));
                }
                quantity++;
                intCounter.setValue(quantity);
                LOGGER.info(String.format("%s\tConsumes:%s\ttotal:%05d", threadName, chunkDTO, quantity));
            } catch (Exception ex) {
                LOGGER.log(Level.SEVERE, null, ex);
            }
        }
        LOGGER.info(String.format("%s\tReceived:%05d\tQty:%s\tPitchs:%s\tEND\n", threadName, quantity, quantity / pitchs, pitchs));
        return null;
    }

    private List<ChunkDTO> getListChunkDTOByIndex(ChunkDTO chunkDTO) {
        List<ChunkDTO> listChunkDTOWithIndex = mapIndexListChunkDTO.get(chunkDTO.getIndex());
        if (listChunkDTOWithIndex == null) {
            listChunkDTOWithIndex = new ArrayList<>();
            mapIndexListChunkDTO.put(chunkDTO.getIndex(), listChunkDTOWithIndex);
            listChunkDTOWithIndex = mapIndexListChunkDTO.get(chunkDTO.getIndex());
        }
        listChunkDTOWithIndex.add(chunkDTO);
        return listChunkDTOWithIndex;
    }
}
TransposerComparator class.
The optimization required is in this code, specifically in the transposedNestedList method.
public class TransposerComparator implements Callable<Void> {

    private final ControlDSP controlDSP;
    private final AudioFormat audioformat;
    private final long index;
    private final int sizeChunk;
    private final List<ChunkDTO> listChunkDTOWithIndex;
    private final String threadName;
    private static final Logger LOGGER = Logger.getLogger("SampleComparator");

    public TransposerComparator(ControlDSP controlDSP, AudioFormat audioformat, long index, int sizeChunk, List<ChunkDTO> listChunkDTOWithIndex) {
        this.controlDSP = controlDSP;
        this.audioformat = audioformat;
        this.index = index;
        this.sizeChunk = sizeChunk;
        this.listChunkDTOWithIndex = listChunkDTOWithIndex;
        this.threadName = this.getClass().getSimpleName() + "_" + String.format("%05d", index);
    }

    @Override
    public Void call() throws Exception {
        Thread.currentThread().setName(threadName);
        LOGGER.info(String.format("%s\tINI", threadName));
        try {
            int numBytesPerSample = audioformat.getSampleSizeInBits() / 8;
            int quantitySamples = sizeChunk / numBytesPerSample;
            long baseTime = quantitySamples * index;
            // Convert the List of chunk bytes to a nested List of TimePitchValue
            List<List<TimePitchValue>> nestedListTimePitchValue = listChunkDTOWithIndex
                .stream()
                .map(chunkDTO -> {
                    return IntStream
                        .range(0, quantitySamples)
                        .mapToObj(time -> {
                            int value = extractValue(chunkDTO.getChunk(), numBytesPerSample, time);
                            return new TimePitchValue(chunkDTO.getPitch(), baseTime + time, value);
                        }).collect(Collectors.toList());
                }).collect(Collectors.toList());
            List<List<TimePitchValue>> timeNestedListTimePitchValue = transposedNestedList(nestedListTimePitchValue);
        } catch (Exception ex) {
            ex.printStackTrace();
            LOGGER.log(Level.SEVERE, null, ex);
            throw ex;
        }
        return null;
    }

    private static int extractValue(byte[] bytesSamples, int numBytesPerSample, int time) {
        byte[] bytesSingleNumber = Arrays.copyOfRange(bytesSamples, time * numBytesPerSample, (time + 1) * numBytesPerSample);
        int value = numBytesPerSample == 2
                ? (UtilClass.Byte2IntLit(bytesSingleNumber[0], bytesSingleNumber[1]))
                : (UtilClass.byte2intSmpl(bytesSingleNumber[0]));
        return value;
    }

    private static List<List<TimePitchValue>> transposedNestedList(List<List<TimePitchValue>> nestedList) {
        List<List<TimePitchValue>> outNestedList = new ArrayList<>();
        nestedList.forEach(pitchList -> {
            pitchList.forEach(pitchValue -> {
                List<TimePitchValue> listTimePitchValueWithTime = listTimePitchValueWithTime(outNestedList, pitchValue.getTime());
                if (!outNestedList.contains(listTimePitchValueWithTime)) {
                    outNestedList.add(listTimePitchValueWithTime);
                }
                listTimePitchValueWithTime.add(pitchValue);
            });
        });
        outNestedList.forEach(pitchList -> {
            pitchList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
        });
        return outNestedList;
    }

    private static List<TimePitchValue> listTimePitchValueWithTime(List<List<TimePitchValue>> nestedList, long time) {
        List<TimePitchValue> listTimePitchValueWithTime = nestedList
            .stream()
            .filter(innerList -> innerList.stream()
                .anyMatch(timePitchValue -> timePitchValue.getTime() == time))
            .findAny()
            .orElseGet(ArrayList::new);
        return listTimePitchValueWithTime;
    }
}
I was testing:
With 5 seconds in the Generator class, and the line List<List<TimePitchValue>> timeNestedListTimePitchValue = transposedNestedList(nestedListTimePitchValue); in the TransposerComparator class: commented out, 7 seconds needed; uncommented, 211 seconds needed.
With 10 seconds in the Generator class, same line: commented out, 12 seconds needed; uncommented, 574 seconds needed.
I need to run the application for at least 60 minutes.
To reduce the consumed time, I see two ways:
The one I chose, being shorter, is to optimize the methods I am currently using.
The other should succeed but is longer: use GPGPU, though I don't know where to start implementing it yet.
QUESTIONS
This question is about the first way: what changes do you recommend in the code of the transposedNestedList method in order to improve its speed?
Is there a better alternative to this comparison?
outNestedList.forEach(pitchList -> {
    pitchList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
});
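For what it's worth, the quadratic cost comes from listTimePitchValueWithTime rescanning every inner list for every sample. A sketch of an alternative (my reading of the intent, not the poster's code) that groups by time in a single pass with a Map:

// A sketch of a linear-time transpose, assuming TimePitchValue.getTime()
// identifies the target row. Grouping through a map avoids re-scanning
// outNestedList for every sample, which is what makes the original O(n^2).
private static List<List<TimePitchValue>> transposedNestedList(List<List<TimePitchValue>> nestedList) {
    Map<Long, List<TimePitchValue>> byTime = new LinkedHashMap<>();
    for (List<TimePitchValue> pitchList : nestedList) {
        for (TimePitchValue pitchValue : pitchList) {
            // one map lookup per sample instead of a scan over all rows
            byTime.computeIfAbsent(pitchValue.getTime(), t -> new ArrayList<>()).add(pitchValue);
        }
    }
    List<List<TimePitchValue>> outNestedList = new ArrayList<>(byTime.values());
    for (List<TimePitchValue> timeList : outNestedList) {
        timeList.sort(Comparator.comparingInt(TimePitchValue::getValue).reversed());
    }
    return outNestedList;
}

The per-row sort is unavoidable if descending-by-value order is required, but the grouping step drops from quadratic to linear in the number of samples.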

How can I write code to receive a whole string from a device over RS232?

I want to know how to write code which receives a specific string, for example OK terminated by CR/LF; from that I only need the "OK" string.
Other strings arrive framed the same way.
I have written code in Keil C51 for the AT89S52 microcontroller which works, but I need more reliable code.
I'm using an interrupt for RX data from the RS232 serial port.
void _esp8266_getch() interrupt 4 // UART Rx
{
    if (TI) {
        TI = 0;
        xmit_bit = 0;
        return;
    }
    else
    {
        count = 0;
        do
        {
            while (RI == 0);
            rx_buff = SBUF;
            if (rx_buff == rx_data1)            // rx_data1 = 0x0D (CR)
            {
                RI = 0;
                while (RI == 0);
                rx_buff = SBUF;
                if (rx_buff == rx_data2)        // rx_data2 = 0x0A (LF)
                {
                    RI = 0;
                    data_in_buffer = 1;
                    if (loop_conti == 1)
                    {
                        if (rec_bit_flag == 1)
                        {
                            data_in_buffer = 0;
                            loop_conti = 0;
                        }
                    }
                }
            }
            else
            {
                if (data_in_buffer == 1)
                {
                    received[count] = rx_buff;  // my buffer in which the string is stored
                    rec_bit_flag = 1;
                    count++;
                    loop_conti = 1;
                    RI = 0;
                }
                else
                {
                    loop_conti = 0;
                    rec_bit_flag = 0;
                    RI = 0;
                }
            }
        }
        while (loop_conti == 1);
    }
    rx_buff = 0;
}
This one is just for reference; you will need to develop the logic further for your needs. Moreover, the design depends on what values are received, whether there is a specific pattern, and many other parameters. This is not tested code; I have tried to convey my idea of the design. With that disclaimer, here is the sample:
// Assuming you get "OK<CR><LF>", in which <CR><LF> indicates the end of the string stream
#define BUF_SIZE 32          // assumed size; pick whatever your longest message needs

int nCount = 0;
int received[2][BUF_SIZE];   // only 2 buffers used; you can use more than 2, depending on how
                             // fast you receive and how fast you process
int pingpong = 0;
bool bRecFlag = FALSE;
int nNofbytes = 0;

void _esp8266_getch() interrupt 4 // UART Rx
{
    if (TI) {
        TI = 0;
        xmit_bit = 0;
        return;
    }
    if (RI) // Rx interrupt
    {
        received[pingpong][nCount] = SBUF;
        RI = 0;
        if (nCount > 0)
        {
            // check if the end-of-stream value was received
            if ((received[pingpong][nCount - 1] == 0x0D) && (received[pingpong][nCount] == 0x0A))
            {
                bRecFlag = TRUE;
                pingpong = (pingpong == 0);  // swap buffers: 0 -> 1, 1 -> 0
                nNofbytes = nCount;
                nCount = 0;
                return;
            }
        }
        nCount++;
    }
    return;
}
int main()
{
    int buftouse;
    // other stuff
    while (1)
    {
        // other stuff
        if (bRecFlag) // string is completely received
        {
            buftouse = pingpong ? 0 : 1; // when pingpong is 1, buffer 0 holds the last complete string;
                                         // when pingpong is 0, buffer 1 holds the last complete string
            // copy to another buffer or act on received[buftouse][]
            bRecFlag = FALSE;
        }
        // other stuff
    }
}

Inserting 100 Records per Transaction

Is it possible to insert only 100 records at a time per TransactionScope?
I would like to do it that way to avoid timeouts in my application.
using (var scope = new TransactionScope())
{
    foreach (object x in list)
    {
        // doing InsertOnSubmit here
    }
    context.SubmitChanges();
    scope.Complete();
}
So I would like to insert only 100 or even 50 rows inside that transaction, just to avoid timeouts.
Something like this?
TransactionScope scope = null;
int i = 0;
const int batchSize = 100; // or even 50
try
{
    foreach (object x in list)
    {
        if (i % batchSize == 0)
        {
            if (scope != null)
            {
                // flush the pending inserts inside the current transaction,
                // then complete and dispose it (TransactionScope has no Commit;
                // Complete() marks it, Dispose() commits)
                context.SubmitChanges();
                scope.Complete();
                scope.Dispose();
            }
            scope = new TransactionScope();
        }
        // doing InsertOnSubmit here
        ++i;
    }
    if (scope != null)
    {
        context.SubmitChanges();
        scope.Complete();
    }
}
finally
{
    if (scope != null)
    {
        scope.Dispose();
    }
}
Sure. Just start a new transaction every 50-100 records. What is your exact problem?