Apache Geode RegionExistsException - gemfire

In the Pivotal Native Client I've set up a method to read and write a Geode cache region as follows:
public void GeodePut(string region, string key, string value)
{
    CacheFactory cF = CacheFactory.CreateCacheFactory();
    Cache c = cF.Create();
    RegionFactory rF = c.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
    IRegion<string, string> r = rF.Create<string, string>(region);
    r[key] = value;
    c.Close();
}
When I call this multiple times I get a RegionExistsException. How do I get around that? Thanks

The solution is easy.
Add a try-catch block to catch the RegionExistsException, then in the catch block replace the 'create' call with a 'get'.
Change this: rF.Create
to this: rF.get
This works pretty well in Java; I would post the exact signature of the method you need, but I'm not using the .NET native client.
Hope it helps :)
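For the .NET native client, a rough, untested sketch of that approach might look like the following (using GetRegion as the existing-region lookup is an assumption based on the GemFire/Geode .NET API; the cache is also left open here, which ties in with the next answer):
public void GeodePut(string region, string key, string value)
{
    CacheFactory cF = CacheFactory.CreateCacheFactory();
    Cache c = cF.Create();

    IRegion<string, string> r;
    try
    {
        // First call: the region does not exist yet, so create it.
        r = c.CreateRegionFactory(RegionShortcut.CACHING_PROXY)
             .Create<string, string>(region);
    }
    catch (RegionExistsException)
    {
        // Subsequent calls: the region already exists, so look it up instead.
        r = c.GetRegion<string, string>(region);
    }

    r[key] = value;
}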

It's to do with the cache.Close() call. I no longer use cache.Close().

Related

move from imperative try-with-resources to reactive using, using()

I'm trying to move from imperative try-with-resources to reactive try-with-resources without success. I have the following piece of code I would like to move:
private final AmazonS3 amazonS3;
private final String bucket;

@Override
public Mono<String> getTemplate(String templateId) {
    return Mono.fromCallable(() -> {
        S3Object s3Object = amazonS3.getObject(bucket, templateId);
        try (s3Object) {
            return IOUtils.toString(s3Object.getObjectContent());
        }
    }).subscribeOn(Schedulers.boundedElastic());
}
I would like to rewrite it using the reactive try-with-resources construct.
My first try was using Flux.using:
Flux.using(amazonS3.getObject(bucket, templateId),
s3Object -> Flux.just(IOUtils.toString(s3Object.getObjectContent())),
S3Object::close);
The s3Object is not being treated as an S3Object, so getObjectContent doesn't exist.
Then I had a look at https://projectreactor.io/docs/core/release/reference/ and I guess that I might use Disposable; however, I'm not sure how to wrap S3Object with a disposable object.
Does anyone know how I can make it work?
Thanks
You can't achieve this with the approach you're taking. It's literally impossible to take a blocking API like the one you see here (AWS SDK v1) and somehow wrap it to make it reactive.
You can however use the AWS SDK v2 (you should be using this anyway for new development), which has an asynchronous S3 client (S3AsyncClient) that you can use to return a CompletableFuture<String>:
CompletableFuture<String> contents = s3AsyncClient
    .getObject(GetObjectRequest.builder().build(), new ByteArrayAsyncResponseTransformer<>())
    .thenApplyAsync(rb -> rb.asUtf8String());
You can then use Mono.fromFuture(contents) to obtain a Mono<String> from the above CompletableFuture.

Set value configuration.GetSection("").Value from header request

I need to set a value in my ASP.NET Core configuration from a header on every request.
I'm doing it like so:
public async Task Invoke(HttpContext context)
{
    var companyId = context.Request.Headers["companyid"].ToString().ToUpper();
    configuration.GetSection("CompanyId").Value = companyId;
    await next(context);
}
It works fine. But is this the proper way? In the case of multiple requests at the same time, is there a risk of mixing up the values? I've searched around but couldn't find an answer.
I'm using .NET 3.1.
As far as I know, the appsettings.json value is a global setting, and you shouldn't be modifying global state per request; this action is not thread safe. At some point you will face a race condition.
If you still want to use this code, I suggest adding a lock. Notice: this will make your Invoke method slower.
For details, refer to the code below:
private static Object _factLock = new Object();

lock (_factLock)
{
    Configuration.GetSection("CompanyId").Value = "";
}
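Put together with the middleware from the question, that might look roughly like the sketch below (using the _factLock field declared above). Note this is only a sketch: the lock serializes the write itself, but any code that reads "CompanyId" can still observe a value written by a different concurrent request.
public async Task Invoke(HttpContext context)
{
    var companyId = context.Request.Headers["companyid"].ToString().ToUpper();

    // Serialize writes to the shared configuration value.
    lock (_factLock)
    {
        configuration.GetSection("CompanyId").Value = companyId;
    }

    await next(context);
}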

Serialization error with Elasticsearch NEST/C#

I'm using NEST to index my objects and I'm running into a Newtonsoft error on serialization. One of my objects has a self referencing loop. Would there be a way for me to access the JsonSerializer and change how it handles self-references without having to modify the source code?
You can register custom converters on your client:
public void AddConverter(JsonConverter converter)
{
    this.IndexSerializationSettings.Converters.Add(converter);
    this.SerializationSettings.Converters.Add(converter);
}
This might be of help.
There is no direct way to alter the JsonSerializerSettings used in the client though.
There is a new API now; take a look at:
var cs2 = new ConnectionSettings(new Uri("http://localhost:9200"))
    .SetJsonSerializerSettingsModifier(settings => settings.TypeNameHandling = TypeNameHandling.None)
    .EnableTrace();
Thanks for adding the support!
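For the self-referencing loop in the original question, the same settings modifier could presumably be used to set Json.NET's ReferenceLoopHandling rather than TypeNameHandling. A sketch, not verified against any particular NEST version:
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .SetJsonSerializerSettingsModifier(s =>
    {
        // Skip members that would cause a self-referencing loop instead of throwing.
        s.ReferenceLoopHandling = ReferenceLoopHandling.Ignore;
    });
var client = new ElasticClient(settings);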

An interesting Restlet Attribute behavior

Using Restlet 2.1 for Java EE, I am discovering an interesting problem with its ability to handle attributes.
Suppose you have code like the following:
cmp.getDefaultHost().attach("/testpath/{attr}",SomeServerResource.class);
and on your browser you provide the following URL:
http://localhost:8100/testpath/command
then, of course, the attr attribute gets set to "command".
Unfortunately, suppose you want the attribute to be something like command/test, as in the following URL:
http://localhost:8100/testpath/command/test
or if you want to dynamically add things with different levels, like:
http://localhost:800/testpath/command/test/subsystems/network/security
in both cases the attr attribute is still set to "command"!
Is there some way in a restlet application to make an attribute that can retain the "slash", so that one can, for example, make the attr attribute be set to "command/test"? I would like to be able to just grab everything after testpath and have the entire string be the attribute.
Is this possible? Someone please advise.
For the same case I usually change the type of the variable:
Route route = cmp.getDefaultHost().attach("/testpath/{attr}", SomeServerResource.class);
route.getTemplate().getVariables().put("attr", new Variable(Variable.TYPE_URI_PATH));
You can do this by using URL encoding.
I made the following attachment in my router:
router.attach("/test/{cmd}", TestResource.class);
My test resource class looks like this, with a little help from Apache Commons Codec's URLCodec:
@Override
protected Representation get() {
    try {
        String raw = ResourceWrapper.get(this, "cmd");
        String decoded = new String(URLCodec.decodeUrl(raw.getBytes()));
        return ResourceWrapper.wrap(raw + " " + decoded);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}
Note my resource wrapper class is simply utility methods. The get returns the string of the url param, and the wrap returns a StringRepresentation.
Now if I do something like this:
http://127.0.0.1/test/haha/awesome
I get a 404.
Instead, I do this:
http://127.0.0.1/test/haha%2fawesome
I have URLEncoded the folder path. This results in my browser saying:
haha%2fawesome haha/awesome
The first is the raw string, the second is the result. I don't know if this is suitable for your needs as it's a simplistic example, but as long as you URLEncode your attribute, you can decode it on the other end.

Redis on Appharbor - Booksleeve GetString exception

I am trying to set up Redis on AppHarbor. I have followed their instructions and again I have an issue with the Booksleeve API. Here is the code I am using to make it work initially:
var connectionUri = new Uri(url);
using (var redis = new RedisConnection(connectionUri.Host, connectionUri.Port, password: connectionUri.UserInfo.Split(new[] { ':' }, 2)[1]))
{
    redis.Strings.Set(1, "greeting", "welcome to remember your stuff!");
    try
    {
        var task = redis.Strings.GetString(1, "greeting");
        redis.Wait(task);
        ViewBag.Message = task.Result;
    }
    catch (Exception)
    {
        // It throws an exception trying to wait for the task?
    }
}
However, the issue is that it sets the string correctly, but when trying to retrieve the same string from the key-value store, it throws a timeout exception waiting for the task to execute. This code works on my local Redis server connection, though.
Am I using the API in the wrong way? Or is this something related to AppHarbor?
Thanks
Like a SqlConnection, you need to call Open() (otherwise your messages are queued for delivery).
Unlike SqlConnection, you should not fire up a RedisConnection each time you need it - it is intended to be used as a shared, thread-safe, multiplexer - i.e. a single connection is held somewhere and used by lots and lots of unrelated callers. Unless of course you only need to do one thing!
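A rough sketch of that pattern with Booksleeve (the helper name and lazy-init details here are illustrative, not from the question's code): open the connection once, wait for Open() to complete, and reuse the same instance everywhere.
private static RedisConnection _redis;

private static RedisConnection GetConnection(string url)
{
    // Not shown: locking around this lazy initialization for full thread safety.
    if (_redis == null)
    {
        var connectionUri = new Uri(url);
        var conn = new RedisConnection(connectionUri.Host, connectionUri.Port,
            password: connectionUri.UserInfo.Split(new[] { ':' }, 2)[1]);

        // Without Open(), commands sit in the outbound queue and Wait() eventually times out.
        conn.Wait(conn.Open());
        _redis = conn;
    }
    return _redis;
}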