I am storing a Map in a keyspace in Redis. I have stored multiple Maps under multiple keyspaces, as shown in the code below. Now I want to search the keyspaces using a wildcard. Is that possible, and if so, how?
package kafka;

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.ScanParams;

public class Test {
    public static void main(String[] args) {
        JedisCluster connection = JedisConnection.getInstance().getconnection();

        Map<String, String> ks = new HashMap<>();
        ks.put("one", "one");
        ks.put("two", "two");
        ks.put("four", "four");
        connection.hmset("parent_child1", ks);

        Map<String, String> ks1 = new HashMap<>();
        ks1.put("one", "one1");
        ks1.put("two", "two1");
        ks1.put("three", "three");
        connection.hmset("parent_child2", ks1);

        Map<String, String> ks2 = new HashMap<>();
        ks2.put("one", "one2");
        ks2.put("two", "two1");
        ks2.put("three", "three3");
        connection.hmset("parent_child3", ks2);

        Map<String, String> ks3 = new HashMap<>();
        ks3.put("one", "one3");
        ks3.put("two", "two3");
        ks3.put("three", "three3");
        connection.hmset("parent_child1", ks3);

        System.out.println(connection.hkeys("parent_child1"));
        // Output: [two, three, four, one]

        // My attempts below do not work: HSCAN iterates the fields of a single
        // hash at a given key, it does not match key names against a pattern.
        connection.hscan("parent_*", "0", new ScanParams().match("{parent_}"));
        connection.hscan("parent_*", "0");

        System.out.println(connection.hgetAll("parent_child1"));
        // Output: {two=two3, three=three3, four=four, one=one3}
    }
}
Now I want to search the keyspace using parent_* so that it gives me all keyspace names starting with parent_, i.e. parent_child1, parent_child2, parent_child3.
As mentioned in JedisCluster : Scan For Key, SCAN does not work across a cluster.
You have to iterate through all of your nodes and search for the keys on each one, like below:
Set<String> keys = new HashSet<>();
Map<String, JedisPool> nodes = connection.getClusterNodes();
for (String nodeKey : nodes.keySet()) {
    JedisPool jedisPool = nodes.get(nodeKey);
    Jedis jedis = jedisPool.getResource();
    keys.addAll(jedis.keys("parent_*")); // query the node directly; JedisCluster itself has no keys()
    jedis.close();
}
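Note that KEYS blocks each node while it runs, so on a large keyspace a cursor-based SCAN per node is safer. Here is a minimal sketch of that approach, reusing the same connection object (the cursor accessor on ScanResult is named slightly differently across Jedis versions):
Set<String> keys = new HashSet<>();
for (JedisPool pool : connection.getClusterNodes().values()) {
    try (Jedis jedis = pool.getResource()) {
        ScanParams params = new ScanParams().match("parent_*").count(100);
        String cursor = ScanParams.SCAN_POINTER_START; // "0"
        do {
            // SCAN returns a batch of matching key names plus the next cursor
            ScanResult<String> result = jedis.scan(cursor, params);
            keys.addAll(result.getResult());
            cursor = result.getCursor();
        } while (!ScanParams.SCAN_POINTER_START.equals(cursor));
    }
}
System.out.println(keys); // e.g. [parent_child1, parent_child2, parent_child3]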
I have Apache Ignite running in a cluster with 3 nodes and populated it with some random data, using a Long as the key.
IgniteCache<Long, String> cache = ignite.getOrCreateCache("myCache");

Map<Long, String> data = new HashMap<>();
data.put(1L, "Data for 1");
data.put(2L, "Data for 2");
cache.putAll(data);
For retrieval:
Set<Long> keys = new HashSet<>(Arrays.asList(1L, 2L));
Map<Long, String> data = cache.getAll(keys);
data.forEach((k, v) -> {
    System.out.println(k + " " + v);
});
This all works great, but when I change the key of the map to a POJO I am unable to retrieve the data:
IgniteCache<IdTimeStamp, String> cache = ignite.getOrCreateCache("myCache");

Map<IdTimeStamp, String> data = new HashMap<>();
data.put(new IdTimeStamp(1L, 1514759400000L), "Data for 1514759400000");
data.put(new IdTimeStamp(1L, 1514757600000L), "Data for 1514757600000L");
cache.putAll(data);
For retrieval:
Set<IdTimeStamp> keys = new HashSet<>();
keys.add(new IdTimeStamp(1L, 1514757600000L));
keys.add(new IdTimeStamp(1L, 1514759400000L));
Map<IdTimeStamp, String> data = cache.getAll(keys);
System.out.println(data.size());
data.forEach((k, v) -> {
    System.out.println(k + " " + v);
});
and the IdTimeStamp class:
public class IdTimeStamp {
    private Long id;
    private Long timestamp;

    public IdTimeStamp(Long id, Long timestamp) {
        this.id = id;
        this.timestamp = timestamp;
    }
}
Not working:
ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
IgniteClient client = Ignition.startClient(cfg);
ClientCache<IdTimeStamp, String> cache = client.cache("myCache");
Working:
public static IgniteCache<IdTimeStamp, String> getIgnite() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true);
    cfg.setPeerClassLoadingEnabled(false); // true ??

    // Setting up an IP finder to ensure the client can locate the servers.
    TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
    ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
    TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
    discoverySpi.setClientReconnectDisabled(true);
    discoverySpi.setIpFinder(ipFinder);
    cfg.setDiscoverySpi(discoverySpi);

    // Start the node.
    Ignite ignite = Ignition.start(cfg);

    // Get (or create) the IgniteCache and return it.
    IgniteCache<IdTimeStamp, String> cache = ignite.getOrCreateCache("myCache");
    return cache;
}
This looks like a known limitation when you are using different clients for data population and data retrieval. Take a look at this question to see whether configuring compactFooter=true solves the problem:
clientConfig.setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));
Otherwise, your code looks fine and should work as expected.
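Applied to the thin client from your "Not working" snippet, that would look roughly like the sketch below (only the binary configuration line is new; the rest is your code):
ClientConfiguration cfg = new ClientConfiguration()
        .setAddresses("127.0.0.1:10800")
        .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));
IgniteClient client = Ignition.startClient(cfg);
ClientCache<IdTimeStamp, String> cache = client.cache("myCache");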
Is it possible to inject a stream from a client node and intercept that same stream on the server nodes, to process it before it is inserted into the cache?
The reason for doing this is that the client node receives the stream from an external source, and it needs to be injected into a cache partitioned by AffinityKey across multiple server nodes. The stream needs to be intercepted on each node and processed with the lowest possible latency.
I could have used cache events to do this, but StreamVisitor is supposed to be faster.
The following is the sample I am trying to execute. It starts 2 nodes: one containing the streamer, the other containing the stream receiver:
public class StreamerNode {
    public static void main(String[] args) {
        ......
        Ignition.setClientMode(false);
        Ignite ignite = Ignition.start(igniteConfiguration);
        CacheConfiguration<SeqKey, String> myCfg = new CacheConfiguration<SeqKey, String>("myCache");
        ......
        IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache(myCfg);

        // Create an Ignite streamer for windowing data.
        IgniteDataStreamer<SeqKey, String> myStreamer = ignite.dataStreamer(myCache.getName());
        for (int i = 51; i <= 100; i++) {
            String paddedString = org.apache.commons.lang.StringUtils.leftPad(i + "", 7, "0");
            String word = "TEST_" + paddedString;
            SeqKey seqKey = new SeqKey("TEST", counter++);
            myStreamer.addData(seqKey, word);
        }
    }
}
public class VisitorNode {
    public static void main(String[] args) {
        ......
        Ignition.setClientMode(false);
        Ignite ignite = Ignition.start(igniteConfiguration);
        CacheConfiguration<SeqKey, String> myCfg = new CacheConfiguration<SeqKey, String>("myCache");
        ......
        IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache(myCfg);

        // Create an Ignite streamer for windowing data.
        IgniteDataStreamer<SeqKey, String> myStreamer = ignite.dataStreamer(myCache.getName());
        myStreamer.receiver(new StreamVisitor<SeqKey, String>() {
            int i = 1;

            @Override
            public void apply(IgniteCache<SeqKey, String> cache, Map.Entry<SeqKey, String> e) {
                String tradeGetData = e.getValue();
                System.out.println(nodeID + " : visitorNode ..count=" + i++ + " received key=" + e.getKey() + " : val=" + e.getValue());
                // Do some processing here before inserting into the cache.
                cache.put(e.getKey(), tradeGetData);
            }
        });
    }
}
Of course it can be executed on a different node. Usually addData() is executed on a client node, and the StreamReceiver runs on the server nodes; you don't have to do anything special to make that happen, as the sketch below shows.
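A minimal sketch of that split, reusing the SeqKey class and cache name from your sample (the uppercasing is just a stand-in for your real processing; the StreamVisitor examples in the Ignite docs also set allowOverwrite(true) when a custom receiver is used):
Ignition.setClientMode(true);
Ignite ignite = Ignition.start(igniteConfiguration);
IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache("myCache");
try (IgniteDataStreamer<SeqKey, String> streamer = ignite.dataStreamer(myCache.getName())) {
    streamer.allowOverwrite(true);
    // This closure is serialized and invoked on the server node that owns each key.
    streamer.receiver(StreamVisitor.from((cache, entry) -> {
        String processed = entry.getValue().toUpperCase(); // stand-in for real processing
        cache.put(entry.getKey(), processed);
    }));
    streamer.addData(new SeqKey("TEST", 1), "test_0000001");
}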
As for the rest of your post, could you elaborate with more details and perhaps samples? I could not understand the desired setup.
You can use continuous queries if you don't need to modify data, only act on it.
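For instance, a sketch of a continuous query over the same cache (the listener fires on the subscribing node as matching entries are created or updated):
ContinuousQuery<SeqKey, String> qry = new ContinuousQuery<>();
qry.setLocalListener(events -> {
    for (CacheEntryEvent<? extends SeqKey, ? extends String> e : events)
        System.out.println("key=" + e.getKey() + " val=" + e.getValue());
});
// The query stays active for as long as the cursor remains open.
QueryCursor<Cache.Entry<SeqKey, String>> cursor = myCache.query(qry);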
I am looking to decode the following JWT using Apache Commons Codec. How can we do that?
eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0Iiwicm9sZXMiOiJST0xFX0FETUlOIiwiaXNzIjoibXlzZWxmIiwiZXhwIjoxNDcxMDg2MzgxfQ.1EI2haSz9aMsHjFUXNVz2Z4mtC0nMdZo6bo3-x-aRpw
This should retrieve the header, body, and signature parts. What's the code?
Here you go:
import org.apache.commons.codec.binary.Base64;

@Test
public void testDecodeJWT() {
    String jwtToken = "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0Iiwicm9sZXMiOiJST0xFX0FETUlOIiwiaXNzIjoibXlzZWxmIiwiZXhwIjoxNDcxMDg2MzgxfQ.1EI2haSz9aMsHjFUXNVz2Z4mtC0nMdZo6bo3-x-aRpw";
    System.out.println("------------ Decode JWT ------------");
    String[] split_string = jwtToken.split("\\.");
    String base64EncodedHeader = split_string[0];
    String base64EncodedBody = split_string[1];
    String base64EncodedSignature = split_string[2];

    System.out.println("~~~~~~~~~ JWT Header ~~~~~~~");
    Base64 base64Url = new Base64(true); // true = URL-safe decoding
    String header = new String(base64Url.decode(base64EncodedHeader));
    System.out.println("JWT Header : " + header);

    System.out.println("~~~~~~~~~ JWT Body ~~~~~~~");
    String body = new String(base64Url.decode(base64EncodedBody));
    System.out.println("JWT Body : " + body);
}
The output is below:
------------ Decode JWT ------------
~~~~~~~~~ JWT Header ~~~~~~~
JWT Header : {"alg":"HS256"}
~~~~~~~~~ JWT Body ~~~~~~~
JWT Body : {"sub":"test","roles":"ROLE_ADMIN","iss":"myself","exp":1471086381}
Here is a way that needs no imports at all, using the built-in java.util.Base64:
java.util.Base64.Decoder decoder = java.util.Base64.getUrlDecoder();
String[] parts = jwtToken.split("\\."); // split out the "parts" (header, payload and signature)
String headerJson = new String(decoder.decode(parts[0]));
String payloadJson = new String(decoder.decode(parts[1]));
//String signatureJson = new String(decoder.decode(parts[2]));
Regardless of whether you use this alternative or the org.apache.commons.codec.binary.Base64 approach from SiKing's answer, you may also want to push those JSON fragments into POJOs.
The headers are "dynamic" (as in, you don't know all the header names beforehand), so you probably want to convert them to key-value pairs (a Map in Java):
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

public class JwtTokenHeaders {
    private final Map<String, Object> jsonMap;

    public JwtTokenHeaders(String jsonString) {
        ObjectMapper mapper = new ObjectMapper();
        try {
            // convert the JSON string to a Map
            this.jsonMap = mapper.readValue(jsonString,
                    new TypeReference<Map<String, Object>>() {
                    });
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }

    @Override
    public String toString() {
        return org.apache.commons.lang3.builder.ToStringBuilder.reflectionToString(this);
    }
}
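Usage would then look something like this, feeding in the headerJson decoded above (the exact reflectionToString output depends on the package and object hash):
JwtTokenHeaders headers = new JwtTokenHeaders(headerJson);
System.out.println(headers); // e.g. JwtTokenHeaders@1b6d3586[jsonMap={alg=HS256}]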
The payload (aka the body) is more well-defined, so you can map it to a POJO. You can take the JSON and create a matching POJO here:
http://pojo.sodhanalibrary.com/
After you use an online tool (or hand-craft the POJO yourself) to create something like MyPojo(.java), you'll end up with something like this:
//import com.fasterxml.jackson.databind.DeserializationFeature;
//import com.fasterxml.jackson.databind.ObjectMapper;
ObjectMapper mapper = new ObjectMapper();
mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
MyPojo tp = mapper.readValue(payloadJson, MyPojo.class);
If http://pojo.sodhanalibrary.com/ ceases to exist in the future, just search for "online json to pojo" and you'll probably find something similar.
I am using Lucene's Highlighter class to highlight fragments of matched search results and it works well. I would like to switch from searching with the StandardAnalyzer to the EnglishAnalyzer, which will perform stemming of terms.
The search results are good, but now the highlighter doesn't always find a match. Here's an example of what I'm looking at:
document field text 1: Everyone likes goats.
document field text 2: I have a goat that eats everything.
Using the EnglishAnalyzer and searching for "goat", both documents are matched, but the highlighter is only able to find a matched fragment from document 2. Is there a way to have the highlighter return data for both documents?
I understand that the stemmed tokens differ from the original text, but the matching terms are still there, so it seems reasonable for the highlighter to just highlight whatever token is present at that location.
If it helps, this is using Lucene 3.5.
I found a solution to this problem. I changed from the Highlighter class to the FastVectorHighlighter, and it looks like I'll pick up some speed improvements too (at the expense of storing term vector data). For the benefit of anyone coming across this question later, here's a unit test showing how it all works together:
package com.sample.index;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.en.EnglishAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.vectorhighlight.*;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import static junit.framework.Assert.assertEquals;
public class TestIndexStuff {
    public static final String FIELD_NORMAL = "normal";
    public static final String[] PRE_TAGS = new String[]{"["};
    public static final String[] POST_TAGS = new String[]{"]"};

    private IndexSearcher searcher;
    private Analyzer analyzer = new EnglishAnalyzer(Version.LUCENE_35);

    @Before
    public void init() throws IOException {
        RAMDirectory idx = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_35, analyzer);
        IndexWriter writer = new IndexWriter(idx, config);
        addDocs(writer);
        writer.close();
        searcher = new IndexSearcher(IndexReader.open(idx));
    }

    private void addDocs(IndexWriter writer) throws IOException {
        for (String text : new String[] {
                "Pretty much everyone likes goats.",
                "I have a goat that eats everything.",
                "goats goats goats goats goats"}) {
            Document doc = new Document();
            doc.add(new Field(FIELD_NORMAL, text, Field.Store.YES,
                    Field.Index.ANALYZED, Field.TermVector.WITH_POSITIONS_OFFSETS));
            writer.addDocument(doc);
        }
    }

    private FastVectorHighlighter makeHighlighter() {
        FragListBuilder fragListBuilder = new SimpleFragListBuilder(200);
        FragmentsBuilder fragmentBuilder = new SimpleFragmentsBuilder(PRE_TAGS, POST_TAGS);
        return new FastVectorHighlighter(true, true, fragListBuilder, fragmentBuilder);
    }

    @Test
    public void highlight() throws ParseException, IOException {
        Query query = new QueryParser(Version.LUCENE_35, FIELD_NORMAL, analyzer)
                .parse("goat");
        FastVectorHighlighter highlighter = makeHighlighter();
        FieldQuery fieldQuery = highlighter.getFieldQuery(query);
        TopDocs topDocs = searcher.search(query, 10);

        List<String> fragments = new ArrayList<String>();
        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            fragments.add(highlighter.getBestFragment(fieldQuery, searcher.getIndexReader(),
                    scoreDoc.doc, FIELD_NORMAL, 10000));
        }

        assertEquals(3, fragments.size());
        assertEquals("[goats] [goats] [goats] [goats] [goats]", fragments.get(0).trim());
        assertEquals("Pretty much everyone likes [goats].", fragments.get(1).trim());
        assertEquals("I have a [goat] that eats everything.", fragments.get(2).trim());
    }
}
}