Remove an entire hset from Redis (Jedis), having issues since it just won't remove

pipe.hset(uuid, "name", "Archie");
This is an example of how I am using the hset. There are about 10 other attributes (name, age, etc.).
I am trying to remove the entire hset, i.e. remove uuid so it is no longer a key (is "key" the right term?).
I have tried removing each field individually through a pipeline:
for (String s : profileData) {
    pipe.hdel("profile#" + uuid.toString(), s);
}
But firstly, this is O(n) in the number of fields, so it could be more efficient, and secondly it isn't actually working for me, as the keys are still present (I think this could be my own coding fault).
I've seen questions asking for a hdelall function and I know that one doesn't exist.
I also tried using:
pipe.del(uuid);
But this does nothing, so obviously I'm using it incorrectly. I assumed it would just delete the whole hset, but it doesn't. Must it be used to delete a single value instead? I'm unsure.
So my question boils down to: how can I efficiently remove an entire hset from Redis, using Jedis?
Thank you.

I'm not sure what your code looks like, but I did this quick test and it worked for me as expected.
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.Pipeline;
import java.time.Duration;
import java.util.Set;

public class TestRedisDelete {

    public static void main(String[] args) {
        TestRedisDelete redis = new TestRedisDelete();
        Pipeline p = redis.jedisPool.getResource().pipelined();
        p.hset("h1", "f", "v");
        p.hset("h2", "f", "v");
        p.hset("h3", "f", "v");
        // DEL removes the whole hash in one command, regardless of field count
        p.del("h1");
        // Pipelined commands are buffered client-side; nothing is sent to Redis
        // until sync() is called, so don't forget this step
        p.sync();
        Set<String> keys = redis.jedisPool.getResource().keys("*");
        System.out.println(keys);
    }

    final JedisPoolConfig poolConfig = buildPoolConfig();
    JedisPool jedisPool = new JedisPool(poolConfig, "127.0.0.1", 6379);

    private JedisPoolConfig buildPoolConfig() {
        final JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(10);
        poolConfig.setMaxIdle(10);
        poolConfig.setMinIdle(4);
        poolConfig.setTestOnBorrow(true);
        poolConfig.setTestOnReturn(true);
        poolConfig.setTestWhileIdle(true);
        poolConfig.setMinEvictableIdleTimeMillis(Duration.ofSeconds(60).toMillis());
        poolConfig.setTimeBetweenEvictionRunsMillis(Duration.ofSeconds(30).toMillis());
        poolConfig.setNumTestsPerEvictionRun(3);
        poolConfig.setBlockWhenExhausted(true);
        return poolConfig;
    }
}
Output: [h2, h3]
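A couple of things worth checking against the question's code: the hash is written with pipe.hset(uuid, "name", "Archie"), but the hdel loop targets "profile#" + uuid.toString() — two different keys, which would explain why the fields survived. And since all of these commands are pipelined, none of them run until pipe.sync() is called, which would also make pipe.del(uuid) look like it "does nothing".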

How about using the del method of Jedis?
jedis.del(uuid);
Check this link for more details
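To make that concrete, here is a minimal sketch, assuming the hash was stored under "profile#" + uuid as in the question's hdel loop (DeleteProfileHash and deleteProfile are made-up names for this example):

import redis.clients.jedis.Jedis;

public class DeleteProfileHash {
    // Sketch: delete the whole hash with one DEL instead of one HDEL per field.
    // Assumes the hash was stored under "profile#" + uuid, as in the question.
    public static void deleteProfile(String uuid) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            String key = "profile#" + uuid;
            jedis.del(key);                        // removes the hash and all its fields
            System.out.println(jedis.exists(key)); // prints false once the delete has run
        }
    }
}

And if DEL is ever not an option, note that HDEL accepts multiple fields in one command (Jedis exposes it as a varargs overload), so the question's loop collapses to a single call — something like pipe.hdel("profile#" + uuid.toString(), profileData.toArray(new String[0])), followed by pipe.sync().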

Related

Use Java8 Stream on JDBCTemplate Results from HIVE

I am using jdbcTemplate to query Hive and then writing the results to a .csv file. I basically just generate a list of objects and then stream the list to write each record to the file.
I would like to stream the results as they come back from Hive and write them to the file, instead of waiting to get the whole thing and then processing it. Can anyone point me in the right direction? Thanks!
private List<Avs> queryAvsData(String asSql) {
    List<Avs> llistAvs = new ArrayList<Avs>();
    List<Map<String, Object>> rows = hiveJdbcTemplate.queryForList(asSql);
    Iterator<Map<String, Object>> it = rows.iterator();
    while (it.hasNext()) {
        Map<String, Object> row = it.next();
        Avs laAvs = Avs.builder()
                .make((String) row.get("make"))
                .model((String) row.get("model"))
                .build();
        llistAvs.add(laAvs);
    }
    return llistAvs;
}
It doesn't look like there's a built-in solution, but you can do it. Basically, you wrap the existing functionality in an iterator, and use a spliterator to turn it into a stream. Here's a blog post on the subject:
The code implements Spring’s ResultSetExtractor interface, which is a Single Abstract Method (SAM) interface, allowing the use of a lambda expression to implement it.
The implementation wraps the SQL ResultSet in an iterator, constructs a stream using the Spliterators and StreamSupport utility classes, and applies that to a Function taking a stream of row sets and returning a generic result.
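Here is a minimal sketch of that pattern, assuming a plain JdbcTemplate — queryAsStream is a hypothetical helper name, and SQLException is simply rethrown unchecked:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Function;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class StreamingQuery {

    // Wraps the live ResultSet in an Iterator, exposes it as a Stream, and hands
    // that to the caller's function. The function must consume the stream before
    // this method returns, because the connection is closed afterwards.
    public static <T> T queryAsStream(JdbcTemplate template, String sql,
                                      Function<Stream<ResultSet>, T> consumer) {
        return template.query(sql, (ResultSetExtractor<T>) rs -> {
            Iterator<ResultSet> rows = new Iterator<ResultSet>() {
                private boolean advanced;
                private boolean hasRow;

                @Override
                public boolean hasNext() {
                    if (!advanced) {
                        try {
                            hasRow = rs.next();
                        } catch (SQLException e) {
                            throw new RuntimeException(e);
                        }
                        advanced = true;
                    }
                    return hasRow;
                }

                @Override
                public ResultSet next() {
                    if (!hasNext()) {
                        throw new NoSuchElementException();
                    }
                    advanced = false;
                    return rs; // same ResultSet instance, positioned on the next row
                }
            };
            Stream<ResultSet> stream = StreamSupport.stream(
                    Spliterators.spliteratorUnknownSize(rows, Spliterator.ORDERED), false);
            return consumer.apply(stream);
        });
    }
}

The CSV writing would then happen inside the function, e.g. queryAsStream(hiveJdbcTemplate, asSql, rows -> { rows.forEach(rs -> { /* map and write one row */ }); return null; }), so each row is handled as it arrives instead of being collected into a list first.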
It's possible to stream values from JdbcTemplate. The following example is a service based on Spring Boot 2.4.8.
Since I ran into problems (a connection leak) using queryForStream, I am putting demo code here to show that the stream must be closed after use.
import lombok.RequiredArgsConstructor;
import org.springframework.jdbc.core.SingleColumnRowMapper;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.stereotype.Service;

import java.util.Map;
import java.util.stream.Stream;

@Service
@RequiredArgsConstructor
public class DataCleaningService {

    private final NamedParameterJdbcTemplate jdbcTemplate;

    public void doSomeStreaming() {
        String nativeQuery = "SELECT string_value FROM my_table WHERE column = :valueToFilter";
        Map<String, Object> queryParameters = Map.of("valueToFilter", "my value");
        SingleColumnRowMapper<String> stringRowMapper = SingleColumnRowMapper.newInstance(String.class);
        // try-with-resources closes the stream, which returns the connection to the pool
        try (Stream<String> stringValueStream = jdbcTemplate.queryForStream(nativeQuery, queryParameters, stringRowMapper)) {
            stringValueStream.forEach(stringValue -> {
                // do the needed action with the value
                System.out.printf("My cool value: %s", stringValue);
            });
        }
    }
}

Use of RabbitTemplate.convertSendAndReceive with org.springframework.messaging.Message

I have successfully used the following to send an org.springframework.amqp.core.Message and receive a byte[]:
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
Message message = MessageBuilder.withBody(payload).setCorrelationIdString(id).build();
byte[] response = (byte[]) rabbitTemplate.convertSendAndReceive(message, m -> {
    m.getMessageProperties().setCorrelationIdString(id);
    return m;
});
This works fine if the queues are set up to handle the message correctly for Message<?>. But I have a series of queues that use the message type org.springframework.messaging.Message, specifically Message<String>.
Is there a way I can use rabbitTemplate.convertSendAndReceive to send an org.springframework.messaging.Message<String>, such that the following would work?
import org.springframework.messaging.Message;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
Message<String> message = MessageBuilder.withPayload(payload).setCorrelationId(id).build();
Object returnObject = rabbitTemplate.convertSendAndReceive(message);
I have looked at the MessageConverter but I am unsure if I can use that.
Alternatively, should I use org.springframework.messaging.core.GenericMessagingTemplate.convertSendAndReceive?
UPDATE.
I can make it work if I change what I have on the queues from
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Message<String> transform(Message<String> inMessage) {
to
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Message<String> transform(Message<?> inMessage) {
    GenericMessage<?> genericMessage = (GenericMessage<?>) inMessage.getPayload();
    String payload = (String) genericMessage.getPayload();
but I would rather not have to change the transformers to make this work as the code in question is for integration tests and existing code already works with what I already have.
END UPDATE
I think I have given enough information but please let me know if more details are required. Ideally, I am looking for a code example or to point me to the documentation that answers my question.
Use the RabbitMessagingTemplate (documentation here):
public Message<?> sendAndReceive(String exchange, String routingKey, Message<?> requestMessage)
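For the scenario in the question, a minimal sketch of that call, assuming an already-configured RabbitTemplate; the exchange name, routing key, and correlation header name are placeholders:

import org.springframework.amqp.rabbit.core.RabbitMessagingTemplate;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class SendAndReceiveExample {

    // Wraps the existing RabbitTemplate so org.springframework.messaging.Message
    // can be sent directly; conversion to/from the AMQP message happens internally.
    public static Message<?> roundTrip(RabbitTemplate rabbitTemplate, String payload, String id) {
        RabbitMessagingTemplate messagingTemplate = new RabbitMessagingTemplate(rabbitTemplate);
        Message<String> request = MessageBuilder.withPayload(payload)
                .setHeader("correlationId", id) // header name is an assumption; adjust as needed
                .build();
        // "my.exchange" and "my.routing.key" are placeholders for your setup
        return messagingTemplate.sendAndReceive("my.exchange", "my.routing.key", request);
    }
}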

Simple currency observer

I am trying to use cryptsy.com's API to get the current price of DOGE. This is my code.
package main;
import java.text.DecimalFormat;
import java.util.Date;
import java.util.concurrent.TimeUnit;
import main.Cryptsy.CryptsyException;
import main.Cryptsy.PublicMarket;
public class Main {
    public static void main(String[] args) throws CryptsyException, InterruptedException {
        Cryptsy cryptsy = new Cryptsy();
        while (true) {
            PublicMarket[] markets = cryptsy.getPublicMarketData();
            for (PublicMarket market : markets) {
                DecimalFormat df = new DecimalFormat("#.########");
                if (market.label.equals("DOGE/BTC"))
                    System.out.println(new Date() + " " + market.label + " " + df.format(market.lasttradeprice));
            }
            TimeUnit.SECONDS.sleep(30);
        }
    }
}
The problem is that the price gets updated too rarely (every 30 minutes or so), and only if I restart my program. Does anyone know how to get the current price? Also, there are connection errors sometimes.
Actually, the connection problems are normal with the Cryptsy API. It's slow and often disconnects without an answer; they are overcrowded all the time.
There is a new API location that should be faster and solve the connection issues, here:
http://pubapi.cryptsy.com/api.php?method=marketdatav2
And also, if you are only interested in one single currency, you can get the market data for only that currency. The whole answer from Cryptsy for all currencies is around 300 KB, so you would waste bandwidth if you polled it every minute or so.
For only one currency it will be like:
http://pubapi.cryptsy.com/api.php?method=singlemarketdata&marketid={MARKET ID}
where the market ID can be found inside the answer from the first URL. You just need the int ID of the market once; from then on you can always use the direct call.
Every detail is, by the way, available here:
https://www.cryptsy.com/pages/api
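To tie that back to the question's loop, here is a rough sketch of polling the single-market URL from Java; MARKET_ID is a placeholder to be looked up once in the marketdatav2 answer, and JSON parsing is left out:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SingleMarketPoller {

    // Placeholder: look the DOGE/BTC market id up once in the marketdatav2 answer.
    private static final String MARKET_ID = "<market id>";

    public static String fetchMarketJson() throws Exception {
        URL url = new URL("http://pubapi.cryptsy.com/api.php?method=singlemarketdata&marketid=" + MARKET_ID);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Fail fast instead of hanging on the often-overloaded API
        conn.setConnectTimeout(10000);
        conn.setReadTimeout(10000);
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        // JSON answer; the last trade price is inside the returned market data
        return body.toString();
    }
}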

Lucene 4.1: How to split words that contain "dots" when indexing?

I'm trying to figure out what I should do to index my keywords that contain ".".
ex: this.name
I want to index the terms this and name in my index.
I use the StandardAnalyzer. I tried extending WhitespaceTokenizer or TokenFilter, but I'm not sure if I'm going in the right direction.
If I use the StandardAnalyzer, I'll obtain "this.name" as a keyword, and that's not what I want, but the analyzer does the rest correctly for me.
You can put a CharFilter in front of StandardTokenizer that converts periods and underscores to spaces. MappingCharFilter will work.
Here's MappingCharFilter added to a stripped-down StandardAnalyzer (see the original 4.1 version here):
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.charfilter.MappingCharFilter;
import org.apache.lucene.analysis.charfilter.NormalizeCharMap;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopAnalyzer;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;
import org.apache.lucene.util.Version;

import java.io.IOException;
import java.io.Reader;

public final class MyAnalyzer extends StopwordAnalyzerBase {
    private int maxTokenLength = 255;

    public MyAnalyzer() {
        super(Version.LUCENE_41, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
    }

    @Override
    protected TokenStreamComponents createComponents(final String fieldName, final Reader reader) {
        final StandardTokenizer src = new StandardTokenizer(matchVersion, reader);
        src.setMaxTokenLength(maxTokenLength);
        TokenStream tok = new StandardFilter(matchVersion, src);
        tok = new LowerCaseFilter(matchVersion, tok);
        tok = new StopFilter(matchVersion, tok, stopwords);
        return new TokenStreamComponents(src, tok) {
            @Override
            protected void setReader(final Reader reader) throws IOException {
                src.setMaxTokenLength(MyAnalyzer.this.maxTokenLength);
                super.setReader(reader);
            }
        };
    }

    @Override
    protected Reader initReader(String fieldName, Reader reader) {
        NormalizeCharMap.Builder builder = new NormalizeCharMap.Builder();
        builder.add(".", " ");
        builder.add("_", " ");
        NormalizeCharMap normMap = builder.build();
        return new MappingCharFilter(normMap, reader);
    }
}
Here's a quick test to demonstrate it works:
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.BaseTokenStreamTestCase;

public class TestMyAnalyzer extends BaseTokenStreamTestCase {
    private Analyzer analyzer = new MyAnalyzer();

    public void testPeriods() throws Exception {
        BaseTokenStreamTestCase.assertAnalyzesTo(
                analyzer,
                "this.name; here.i.am; sentences ... end with periods.",
                new String[] { "name", "here", "i", "am", "sentences", "end", "periods" });
    }

    public void testUnderscores() throws Exception {
        BaseTokenStreamTestCase.assertAnalyzesTo(
                analyzer,
                "some_underscore_term _and____ stuff that is_not in it",
                new String[] { "some", "underscore", "term", "stuff" });
    }
}
If I understand you correctly, you need to use a tokenizer that removes dots -- that is, any name that contains a dot should be split at that point ("here.i.am" becomes "here" + "i" + "am").
You are getting caught by behavior documented here:
However, a dot that's not followed by whitespace is considered part of a token.
StandardTokenizer introduces some more complex parsing rules than you may be looking for. This rule, in particular, is intended to prevent tokenization of URLs, IPs, identifiers, etc. A simpler implementation might suit your needs, like LetterTokenizer (a sketch follows at the end of this answer).
If that doesn't really suit your needs (and it might well turn out to be throwing the baby out with the bathwater), then you may need to modify StandardTokenizer yourself, which is explicitly encouraged by the Lucene docs:
Many applications have specific tokenizer needs. If this tokenizer does not suit your application, please consider copying this source code directory to your project and maintaining your own grammar-based tokenizer.
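For reference, a minimal sketch of the LetterTokenizer alternative mentioned above, against the Lucene 4.1 API; it splits on anything that is not a letter, so "this.name" becomes "this" + "name", but digits are dropped too, which may be too aggressive:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.LetterTokenizer;
import org.apache.lucene.util.Version;

import java.io.Reader;

public final class LetterOnlyAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        // Splits on any non-letter character, so dots, underscores and digits
        // all act as separators.
        return new TokenStreamComponents(new LetterTokenizer(Version.LUCENE_41, reader));
    }
}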
Sebastien Dionne: I didn't understand how to split a word. Do I have to parse the document char by char?
Sebastien Dionne: I still want to know how to split a token into multiple parts, and index them all.
You may have to write a custom analyzer.
An Analyzer is a combination of a Tokenizer and possibly a chain of TokenFilter instances.
Tokenizer: takes in the input text passed by you, probably as a java.io.Reader. It just breaks the text down; it doesn't alter it.
TokenFilter: takes in the tokens emitted by the Tokenizer, adds / removes / alters tokens, and emits them one by one until all are finished. If it replaces a token with multiple tokens based on requirements, it buffers them all and emits them one by one to the indexer (a sketch follows below).
You may check the following resource; unfortunately, you may have to sign up for a trial membership.
By writing a custom analyzer, you can break down the text the way you want to. You may even use some existing components like LowerCaseFilter. Fortunately, it is achievable with Lucene to come up with an analyzer that serves your purpose if you couldn't find one built in or on the web.
" Writing Custom Filters: Lucene in Action 2"

How to write a string to Amazon S3 bucket?

How can I add a string as a file on Amazon S3? From whatever I searched, I got to know that we can upload a file to S3. What is the best way to upload data without creating a file?
There is an overload for the AmazonS3.putObject method that accepts the bucket string, a key string, and a string of text content. I hadn't seen mention of it on Stack Overflow, so I'm putting this here. It's going to be similar to @Jonik's answer, but without the additional dependency.
AmazonS3 s3client = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).build();
s3client.putObject(bucket, key, contents);
It doesn't look as nice, but here is how you can do it using Amazon's Java client, which is probably what JetS3t does behind the scenes anyway.
private boolean putArtistPage(AmazonS3 s3, String bucketName, String key, String webpage) {
    try {
        byte[] contentAsBytes = webpage.getBytes("UTF-8");
        ByteArrayInputStream contentsAsStream = new ByteArrayInputStream(contentAsBytes);
        ObjectMetadata md = new ObjectMetadata();
        md.setContentLength(contentAsBytes.length);
        s3.putObject(new PutObjectRequest(bucketName, key, contentsAsStream, md));
        return true;
    } catch (AmazonServiceException e) {
        log.log(Level.SEVERE, e.getMessage(), e);
        return false;
    } catch (Exception ex) {
        log.log(Level.SEVERE, ex.getMessage(), ex);
        return false;
    }
}
What is the best way to upload data without creating file?
If you meant without creating a file on S3, well, you can't really do that. On Amazon S3, the only way to store data is as files, or, using more accurate terminology, objects. An object can contain from zero bytes to 5 terabytes of data, and is stored in a bucket. Amazon's S3 homepage lays out the basic facts quite clearly. (For other data storing options on AWS, you might want to read e.g. about SimpleDB.)
If you meant without creating a local temporary file, then the answer depends on what library/tool you are using. (As RickMeasham suggested, please add more details!) With the s3cmd tool, for example, you can't skip creating a temp file, while with the JetS3t Java library uploading a String directly would be easy:
// (First init s3Service and testBucket)
S3Object stringObject = new S3Object("HelloWorld.txt", "Hello World!");
s3Service.putObject(testBucket, stringObject);
There is a simple way to do it with PHP: simply send the string as the body of the object, specifying the name of the new file in the key:
$s3->putObject(array(
    'Bucket'      => [Bucket name],
    'Key'         => [path/to/file.ext],
    'Body'        => [Your string goes here],
    'ContentType' => [specify mimetype if you want],
));
This will create a new file under the specified key, whose content is the given string.
If you're using java, check out https://ivan-site.com/2015/11/interact-with-s3-without-temp-files/
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.*;
import java.nio.charset.StandardCharsets;

class S3StreamJacksonTest {
    private static final String S3_BUCKET_NAME = "bucket";
    private static final String S3_KEY_NAME = "key";
    private static final String CONTENT_TYPE = "application/json";
    private static final AmazonS3 AMAZON_S3 = new AmazonS3Client();
    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
    private static final TestObject TEST_OBJECT = new TestObject("test", 123, 456L);

    public void testUploadWithStream() throws JsonProcessingException {
        String fileContentString = OBJECT_MAPPER.writeValueAsString(TEST_OBJECT);
        byte[] fileContentBytes = fileContentString.getBytes(StandardCharsets.UTF_8);
        InputStream fileInputStream = new ByteArrayInputStream(fileContentBytes);
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentType(CONTENT_TYPE);
        metadata.setContentLength(fileContentBytes.length);
        PutObjectRequest putObjectRequest = new PutObjectRequest(
                S3_BUCKET_NAME, S3_KEY_NAME, fileInputStream, metadata);
        AMAZON_S3.putObject(putObjectRequest);
    }
}
This works for me:
public static PutObjectResult WriteString(String bucket, String key, String stringToWrite, AmazonS3Client s3Client) {
    byte[] contentBytes = stringToWrite.getBytes(StandardCharsets.UTF_8);
    ObjectMetadata meta = new ObjectMetadata();
    meta.setContentMD5(new String(com.amazonaws.util.Base64.encode(DigestUtils.md5(stringToWrite))));
    // Content length must be the byte count, not the character count,
    // or uploads containing multi-byte characters will be truncated
    meta.setContentLength(contentBytes.length);
    InputStream stream = new ByteArrayInputStream(contentBytes);
    return s3Client.putObject(bucket, key, stream, meta);
}
The sample code at https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html works for me.
s3Client.putObject(bucketName, stringObjKeyName, "Uploaded String Object");
Looks like this was added around version 1.11.20 of the SDK, so make sure you are using that or a newer version.
https://javadoc.io/doc/com.amazonaws/aws-java-sdk-s3/1.11.20/com/amazonaws/services/s3/AmazonS3.html#putObject-java.lang.String-java.lang.String-java.lang.String-