I have Apache Ignite running in a cluster with 3 nodes, and I populated it with some random data using a Long as the key:
IgniteCache<Long, String> cache = ignite.getOrCreateCache("myCache");
Map<Long, String> data = new HashMap<>();
data.put(1L,"Data for 1");
data.put(2L,"Data for 2");
cache.putAll(data);
For retrieval:
Set<Long> keys = new HashSet<>(Arrays.asList(1L, 2L));
Map<Long,String> data = cache.getAll(keys);
data.forEach( (k,v) -> {
System.out.println(k+" "+v);
});
This all works great, but after changing the key of the map to a POJO I am unable to retrieve the data:
IgniteCache<IdTimeStamp, String> cache = ignite.getOrCreateCache("myCache");
Map<IdTimeStamp, String> data = new HashMap<>();
data.put(new IdTimeStamp(1L, 1514759400000L),"Data for 1514759400000");
data.put(new IdTimeStamp(1L, 1514757600000L),"Data for 1514757600000");
cache.putAll(data);
For retrieval:
Set<IdTimeStamp> keys = new HashSet<IdTimeStamp>();
keys.add(new IdTimeStamp(1L, 1514757600000L));
keys.add(new IdTimeStamp(1L, 1514759400000L));
Map<IdTimeStamp,String> data = cache.getAll(keys);
System.out.println(data.size());
data.forEach( (k,v) -> {
System.out.println(k+" "+v);
});
and the IdTimeStamp class:
public class IdTimeStamp {
private Long id;
private Long timestamp;
public IdTimeStamp(Long id, Long timestamp) {
this.id = id;
this.timestamp = timestamp;
}
}
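Note that the class as posted overrides neither equals() nor hashCode(). Ignite compares keys in their binary form, but implementing both is generally recommended for key classes; a minimal sketch, assuming key identity is defined by both fields:
import java.util.Objects;
public class IdTimeStamp {
private Long id;
private Long timestamp;
public IdTimeStamp(Long id, Long timestamp) {
this.id = id;
this.timestamp = timestamp;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof IdTimeStamp)) return false;
IdTimeStamp other = (IdTimeStamp) o;
return Objects.equals(id, other.id) && Objects.equals(timestamp, other.timestamp);
}
@Override
public int hashCode() {
return Objects.hash(id, timestamp);
}
}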
Not working:
ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
IgniteClient client = Ignition.startClient(cfg);
ClientCache<IdTimeStamp, String> cache = client.cache("myCache");
Working:
public static IgniteCache<IdTimeStamp, String> getIgnite() {
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(false); //true ??
// Setting up an IP Finder to ensure the client can locate the servers.
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setClientReconnectDisabled(true);
discoverySpi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(discoverySpi);
// Starting the node
Ignite ignite = Ignition.start(cfg);
// Create an IgniteCache and put some values in it.
IgniteCache<IdTimeStamp, String> cache = ignite.getOrCreateCache("myCache");
return cache;
}
This looks like a known limitation when you are using different clients for populating and for retrieving the records. Take a look at this question and check whether configuring compactFooter=true solves the problem:
clientConfig.setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));
Otherwise, your code looks fine and should work as expected.
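For reference, a minimal sketch of the thin-client setup with compact footers enabled (same cache name and address as in the question):
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;
ClientConfiguration clientConfig = new ClientConfiguration()
        .setAddresses("127.0.0.1:10800")
        .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));
try (IgniteClient client = Ignition.startClient(clientConfig)) {
    ClientCache<IdTimeStamp, String> cache = client.cache("myCache");
    // getAll() should now resolve the POJO keys written by the thick client.
}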
Related
public void generateHash(HashMap<String, String> valueMap, PayUHashGenerationListener hashGenerationListener) {
String hashName = valueMap.get(PayUCheckoutProConstants.CP_HASH_NAME);
String hashData = valueMap.get(PayUCheckoutProConstants.CP_HASH_STRING);
if (!TextUtils.isEmpty(hashName) && !TextUtils.isEmpty(hashData)) {
// Do not generate the hash locally; it must be calculated on the server side only. Here, hashString contains the hash created by your server.
String hash = hashString;
HashMap<String, String> dataMap = new HashMap<>();
dataMap.put(hashName, hash);
hashGenerationListener.onHashGenerated(dataMap);
}
}
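For context, a hedged sketch of what the server side typically does here (assuming PayU's documented scheme of a SHA-512 digest over the hash string concatenated with the merchant salt; verify against the PayU docs before relying on it):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
public static String computeHash(String hashData, String merchantSalt) throws Exception {
    // SHA-512 over hashString + salt, hex-encoded; the salt must stay on the server.
    MessageDigest md = MessageDigest.getInstance("SHA-512");
    byte[] digest = md.digest((hashData + merchantSalt).getBytes(StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    for (byte b : digest) {
        sb.append(String.format("%02x", b));
    }
    return sb.toString();
}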
I am doing some integration testing of my web API, which uses NancyFX endpoints. The xUnit test creates a test server for the integration test:
private readonly TestServer _server;
private readonly HttpClient _client;
public EventsModule_Int_Tester()
{
//Server setup
_server = new TestServer(new WebHostBuilder()
.UseStartup<Startup>());
_server.AllowSynchronousIO = true; // Needs to be overridden in .NET Core 3.1
_client = _server.CreateClient();
}
Inside a test method I tried the following:
[Fact]
public async Task EventTest()
{
// Arrange
HttpResponseMessage expectedResponse = new HttpResponseMessage(System.Net.HttpStatusCode.OK);
var data = _server.Services.GetService(typeof(GenijalnoContext)) as GenijalnoContext;
// Get some random data from the DbContext
Random r = new Random();
List<Resident> residents = data.Residents.ToList();
Resident random_resident = residents[r.Next(residents.Count)];
List<Apartment> apartments = data.Apartments.ToList();
Apartment random_apartment = apartments[r.Next(apartments.Count)];
EventModel model = new EventModel()
{
ResidentId = random_resident.Id,
ApartmentNumber = random_apartment.Id
};
// Doesn't work
IList<KeyValuePair<string, string>> nameValueCollection = new List<KeyValuePair<string, string>> {
{ new KeyValuePair<string, string>("ResidentId", model.ResidentId.ToString()) },
{ new KeyValuePair<string, string>("ApartmentNumber", model.ApartmentNumber.ToString())}
};
var result = await _client.PostAsync("/Events/ResidentEnter", new FormUrlEncodedContent(nameValueCollection));
// Also doesn't work
string json = JsonConvert.SerializeObject(model, Formatting.Indented);
var httpContent = new StringContent(json, Encoding.UTF8, "application/json");
var response = await _client.PostAsync("/Events/ResidentEnter", httpContent);
// PostAsJsonAsync also doesn't work
// Assert
Assert.Equal(expectedResponse.StatusCode, response.StatusCode); // xUnit convention: (expected, actual)
}
The NancyFX module does hit the endpoint and receives the request, but without the body.
What am I doing wrong? Note that the NancyFX endpoint has no issue transforming a Postman call into a valid model.
The NancyFX endpoint
Alright, I fixed it. For those curious: the issue was that the NancyFX body reader sometimes does not start reading the request body from the beginning; the stream's read position isn't always 0 (the start).
To fix this, create a custom bootstrapper and override the ApplicationStartup method to set up a before-request pipeline that resets the body position to 0.
Code below:
protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
{
base.ApplicationStartup(container, pipelines);
pipelines.BeforeRequest.AddItemToStartOfPipeline(ctx =>
{
ctx.Request.Body.Position = 0;
return null;
});
}
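Note that Nancy discovers a custom bootstrapper automatically by scanning the loaded assemblies, so no extra registration should be required unless you pass one explicitly through the NancyOptions when wiring up UseNancy.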
Following are the steps that I did:
I started Ignite in remote mode.
I created a cache and added some data (I also created the cache configuration).
I am running a text query.
My code looks like this:
TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);
String[] hosts = new String[]{"ip:48500"};
ipFinder.setAddresses(Arrays.asList(hosts));
// Alternatively, a multicast IP finder could be used:
// TcpDiscoveryMulticastIpFinder tcMp = new TcpDiscoveryMulticastIpFinder();
// tcMp.setAddresses(Arrays.asList("localhost")); // change your IP address here
// spi.setIpFinder(tcMp); // set the multicast IP finder on the SPI
spi.setIpFinder(ipFinder);
// Create a new Ignite configuration
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
@SuppressWarnings("rawtypes")
CacheConfiguration cacheConfig = cacheConfigure();
cfg.setCacheConfiguration(cacheConfig);
// Set the discovery SPI on the Ignite configuration
cfg.setDiscoverySpi(spi);
// Start Ignite
Ignite ignite = Ignition.getOrStart(cfg);
and my cache configuration is:
CacheConfiguration ccfg = new CacheConfiguration(DEFAULT_CACHE_NAME);
QueryEntity queryEntity = new QueryEntity();
queryEntity.setKeyType(Integer.class.getName());
queryEntity.setValueType(Account.class.getName());
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("accid", Integer.class.getName());
fields.put("attrbool", Boolean.class.getName());
fields.put("accbalance", BigDecimal.class.getName());
fields.put("acctype", String.class.getName());
fields.put("attrbyte", Byte.class.getName());
fields.put("accifsc", String.class.getName());
queryEntity.setFields(fields);
// Listing indexes.
Collection<QueryIndex> indexes = new ArrayList<>(3);
indexes.add(new QueryIndex("accid"));
indexes.add(new QueryIndex("accifsc"));
indexes.add(new QueryIndex("acctype"));
queryEntity.setIndexes(indexes);
ccfg.setQueryEntities(Arrays.asList(queryEntity));
and I am putting data into the cache:
for (int i = 0; i < 5; i++) {
Account account=new Account();
account.setAccid(1234+i);
account.setAttrbool(true);
account.setAccbalance(new BigDecimal(100000+i));
account.setAcctype("Demat");
account.setAttrbyte(new Byte("1"));
account.setAccifsc("Master Degree Pstgraduate");
cache.put(i, account);
}
and now I am running the text query:
TextQuery txt = new TextQuery(Account.class, "IFC");
try (@SuppressWarnings("unchecked")
QueryCursor<Entry<Integer, Account>> masters = cache.query(txt)) {
for (Entry<Integer, Account> e : masters)
System.out.println("results "+e.getValue().toString());
}
My data class is:
public class Account {
// primary key
@QueryTextField
private Integer accid;
@QueryTextField
private BigDecimal accbalance;
@QueryTextField @QuerySqlField
private String accifsc;
private BigInteger accnum;
private String accstr;
@QueryTextField
private String acctype;
@QueryTextField
private Boolean attrbool;
@QueryTextField
private Byte attrbyte;
// getter and setter
}
What am I doing wrong? There is no error in the log.
I changed the text query part of the code a bit and it worked for me:
TextQuery txt = new TextQuery(Account.class, "IFC");
try (@SuppressWarnings({ "unchecked", "rawtypes" })
QueryCursor masters = cache.query(txt)) {
@SuppressWarnings("unchecked")
List<CacheEntryImpl<Integer,Account>> accounts = masters.getAll();
Iterator<CacheEntryImpl<Integer, Account>> iterator = accounts.iterator();
while(iterator.hasNext()) {
System.out.println(iterator.next().getValue().getAccifsc());
}
}
Is it possible to do stream injection from a client node and intercept the same stream on the server node to process it before inserting into the cache?
The reason for doing this is that the client node receives the stream from an external source, and it needs to be injected into a partitioned cache based on an AffinityKey across multiple server nodes. The stream needs to be intercepted on each node and processed with the lowest latency.
I could've used cache events to do this but StreamVisitor is supposed to be faster.
Following is the sample that I am trying to execute. Start 2 nodes: one containing the streamer, the other containing the stream receiver:
public class StreamerNode {
public static void main(String[] args) {
......
Ignition.setClientMode(false);
Ignite ignite = Ignition.start(igniteConfiguration);
CacheConfiguration<SeqKey, String> myCfg = new CacheConfiguration<SeqKey, String>("myCache");
......
IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache(myCfg);
IgniteDataStreamer<SeqKey, String> myStreamer = ignite.dataStreamer(myCache.getName()); // Create Ignite Streamer for windowing data
for (int i = 51; i <= 100; i++) {
String paddedString = org.apache.commons.lang.StringUtils.leftPad(i+"", 7, "0") ;
String word = "TEST_" + paddedString;
SeqKey seqKey = new SeqKey("TEST", counter++ );
myStreamer.addData(seqKey, word) ;
}
}
}
public class VisitorNode {
public static void main(String[] args) {
......
Ignition.setClientMode(false);
Ignite ignite = Ignition.start(igniteConfiguration);
CacheConfiguration<SeqKey, String> myCfg = new CacheConfiguration<SeqKey, String>("myCache");
......
IgniteCache<SeqKey, String> myCache = ignite.getOrCreateCache(myCfg);
IgniteDataStreamer<SeqKey, String> myStreamer = ignite.dataStreamer(myCache.getName()); // Create Ignite Streamer for windowing data
myStreamer.receiver(new StreamVisitor<SeqKey, String>() {
int i=1 ;
@Override
public void apply(IgniteCache<SeqKey, String> cache, Map.Entry<SeqKey, String> e) {
String tradeGetData = e.getValue();
System.out.println(nodeID+" : visitorNode ..count="+ i++ + " received key="+e.getKey() + " : val="+ e.getValue());
//do some processing here before inserting in the cache ..
cache.put(e.getKey(), tradeGetData);
}
});
}
}
Of course it can be executed on a different node. Usually, addData() is called on the client node, and the StreamReceiver runs on the server nodes. You don't have to do anything special to make that happen.
As for the rest of your post, could you elaborate with more details and perhaps samples? I could not understand the desired setup.
You can use continuous queries if you don't need to modify data, only act on it.
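For illustration, a minimal sketch (reusing the question's SeqKey class and "myCache" name, and assuming ignite is a started client-mode instance) of attaching the visitor to the same streamer that adds the data; the receiver closure is serialized and invoked on the server node that owns each key:
try (IgniteDataStreamer<SeqKey, String> streamer = ignite.dataStreamer("myCache")) {
    // The visitor runs on the server node that owns the key, not on the client.
    streamer.receiver(StreamVisitor.from((cache, e) -> {
        // Process the entry here; StreamVisitor does not update the cache by
        // itself, so the put() is explicit.
        cache.put(e.getKey(), e.getValue());
    }));
    streamer.addData(new SeqKey("TEST", 1), "TEST_0000001");
}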
I am trying to use BinaryObjects to create the cache at runtime. For example, instead of writing a POJO class such as Employee and configuring it as a cache value type, I need to be able to dynamically configure the cache with the field names and field types for the particular cache.
Here is some sample code:
public class EmployeeQuery {
public static void main(String[] args) throws Exception {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
if (!ExamplesUtils.hasServerNodes(ignite))
return;
CacheConfiguration<Integer, BinaryObject> cfg = getbinaryCache("emplCache", 1);
ignite.destroyCache(cfg.getName());
try (IgniteCache<Integer, BinaryObject> emplCache = ignite.getOrCreateCache(cfg)) {
SqlFieldsQuery top5Qry = new SqlFieldsQuery("select * from Employee where salary > 500 limit 5", true);
while (true) {
QueryCursor<List<?>> top5qryResult = emplCache.query(top5Qry);
System.out.println(">>> Employees ");
List<List<?>> all = top5qryResult.getAll();
for (List<?> list : all) {
System.out.println("Top 5 query result : "+list.get(0) + " , "+ list.get(1) + " , " + list.get(2));
}
System.out.println("..... ");
Thread.sleep(5000);
}
}
finally {
ignite.destroyCache(cfg.getName());
}
}
}
private static QueryEntity createEmployeeQueryEntity() {
QueryEntity employeeEntity = new QueryEntity();
employeeEntity.setTableName("Employee");
employeeEntity.setValueType(BinaryObject.class.getName());
employeeEntity.setKeyType(Integer.class.getName());
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Integer.class.getName());
fields.put("firstName", String.class.getName());
fields.put("lastName", String.class.getName());
fields.put("salary", Float.class.getName());
fields.put("gender", String.class.getName());
employeeEntity.setFields(fields);
employeeEntity.setIndexes(Arrays.asList(
new QueryIndex("id"),
new QueryIndex("firstName"),
new QueryIndex("lastName"),
new QueryIndex("salary"),
new QueryIndex("gender")
));
return employeeEntity;
}
public static CacheConfiguration<Integer, BinaryObject> getbinaryCache(String cacheName, int duration) {
CacheConfiguration<Integer, BinaryObject> cfg = new CacheConfiguration<>(cacheName);
cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setName(cacheName);
cfg.setStoreKeepBinary(true);
cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cfg.setIndexedTypes(Integer.class, BinaryObject.class);
cfg.setExpiryPolicyFactory(FactoryBuilder.factoryOf(new CreatedExpiryPolicy(new Duration(SECONDS, duration))));
cfg.setQueryEntities(Arrays.asList(createEmployeeQueryEntity()));
return cfg;
}
}
I am trying to configure the cache with the employeeId (Integer) as the key and the whole employee record (BinaryObject) as the value. When I run the above class, I get the following exception:
Caused by: org.h2.jdbc.JdbcSQLException: Table "EMPLOYEE" not found; SQL statement:
select * from "emplCache".Employee where salary > 500 limit 5
What am I doing wrong here? Is there anything more needed other than this line:
employeeEntity.setTableName("Employee");
Next, I am trying to stream data into the cache. Is this the right way to do it?
public class CsvStreamer {
public static void main(String[] args) throws IOException {
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
if (!ExamplesUtils.hasServerNodes(ignite))
return;
CacheConfiguration<Integer, BinaryObject> cfg = EmployeeQuery.getbinaryCache("emplCache", 1);
try (IgniteDataStreamer<Integer, BinaryObject> stmr = ignite.dataStreamer(cfg.getName())) {
while (true) {
InputStream in = new FileInputStream(new File(args[0]));
try (LineNumberReader rdr = new LineNumberReader(new InputStreamReader(in))) {
int count =0;
for (String line = rdr.readLine(); line != null; line = rdr.readLine()) {
String[] words = line.split(",");
BinaryObject emp = getBinaryObject(words);
stmr.addData(new Integer(words[0]), emp);
System.out.println("Sent data "+count++ +" , sal : "+words[6]);
}
}
}
}
}
}
private static BinaryObject getBinaryObject(String[] rawData) {
BinaryObjectBuilder builder = Ignition.ignite().binary().builder("Employee");
builder.setField("id", new Integer(rawData[0]));
builder.setField("firstName", rawData[1]);
builder.setField("lastName", rawData[2]);
builder.setField("salary", new Float(rawData[6]));
builder.setField("gender", rawData[4]);
BinaryObject binaryObj = builder.build();
return binaryObj;
}
}
Note: I am running this in cluster mode. I run both EmployeeQuery and CsvStreamer from one machine, and I have Ignite running in server mode on two other machines. Ideally I want to avoid using a POJO class in my application and make things as dynamic and generic as possible.
You are getting this exception because you didn't configure the SQL schema. In your case (you don't want to create a POJO class, etc.) I recommend using the SQL-like DDL syntax that was added in Apache Ignite 2.0. I am sure the following example will help you with the configuration: https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryDdlExample.java
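For illustration, a hedged sketch of the DDL route (field names taken from the question; cache is any existing cache instance to run the statement through, and the WITH parameters are only an example):
cache.query(new SqlFieldsQuery(
    "CREATE TABLE IF NOT EXISTS Employee (" +
    " id INT PRIMARY KEY, firstName VARCHAR, lastName VARCHAR," +
    " salary FLOAT, gender VARCHAR) " +
    "WITH \"template=partitioned\"")).getAll();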