How to set TTL on List values in ServiceStack.Redis?

I have a List in ServiceStack.Redis that I want to set a TimeSpan on to expire it.
In other words, how do I call the following Redis command in ServiceStack.Redis?
EXPIRE ListId ttl
My desired method would look something like:
client.Lists(listId, timespan);
Is there any solution for my problem?

With the new Custom and RawCommand APIs on IRedisClient and IRedisNativeClient, you can now use the RedisClient to send your own custom commands, calling ad hoc Redis commands:
public interface IRedisClient
{
    ...
    RedisText Custom(params object[] cmdWithArgs);
}

public interface IRedisNativeClient
{
    ...
    RedisData RawCommand(params object[] cmdWithArgs);
    RedisData RawCommand(params byte[][] cmdWithBinaryArgs);
}
These Custom APIs take a flexible object[] of arguments which accepts any serializable value, e.g. byte[], string, int, as well as any user-defined complex types, which are transparently serialized as JSON and sent across the wire as UTF-8 bytes.
Redis.Custom("SET", "foo", 1);
Result:
client.Custom("EXPIRE", "list-id", "100");
See the ServiceStack GitHub repository for more details.
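Putting it together, here is a minimal sketch (assuming a local Redis instance; AddItemToList and ExpireEntryIn are the typed IRedisClient helpers, with ExpireEntryIn wrapping the EXPIRE command directly):
using System;
using ServiceStack.Redis;

class Program
{
    static void Main()
    {
        using (var client = new RedisClient("localhost", 6379))
        {
            // Populate the list.
            client.AddItemToList("list-id", "first");
            client.AddItemToList("list-id", "second");

            // Raw command: TTL is given in seconds.
            client.Custom("EXPIRE", "list-id", 100);

            // Equivalent typed helper on IRedisClient.
            client.ExpireEntryIn("list-id", TimeSpan.FromSeconds(100));
        }
    }
}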

How best to handle data fetching needed for FluentValidation

In the app I'm working on, I'm using Mediatr and its pipelines to handle database interaction, some minor business logic, validation, etc.
There are a few checks for things like access control I can handle in the pipeline, since I'm using a context object, as described at https://jimmybogard.com/sharing-context-in-mediatr-pipelines/, to go from ASP.NET identity to a custom context object with user information and claims.
One problem I'm having is that since this application is multi-tenant, I need to ensure that even if an object exists, it belongs to that tenant, and the only way to be sure of that is to grab the object from the database and check it. It seems to me the validation shouldn't have side effects, so I don't want to rely on that to populate the context object. But then that pushes a bunch of validation down into the Mediatr handlers as they check for object existence, and so on, leading to a lot of repeated code. I don't really want to query the database multiple times since some queries can be expensive.
Another issue with doing the more complicated validation in the actual request handlers is getting what are essentially validation errors back out. Currently, if one of these checks fail I throw a ValidationException, which is then caught by middleware and turned into a ProblemDetails that's returned to the API caller. This is basically exceptions as flow control, and a validation failure really isn't "exceptional" anyhow.
The thoughts I'm having on how to solve this are:
1. Somewhere in the pipeline, when I'm building the context, include attempting to fetch the objects needed from the database. Validation then fails if any of these are null. This seems like it would make testing harder, as well as needing to decorate the requests somehow (or use reflection) so the pipeline can know to attempt to load these objects.
2. Have the queries in the validator, but use some sort of cache-aware repository so when the same object is queried later, it's served from the cache, and not the database. The handlers would also use this cache-aware repository (currently the handlers interact directly with the EF Core DbContext to query). This then adds the issue of cache invalidation, which I'm going to have to handle at some point anyhow (quite a few items are seldom modified). For testing, a dummy cache object can be injected that doesn't actually cache anything.
3. Make all the responses from requests implement an interface (or extend an abstract class) that has validation info, general success flags, etc. This can either be returned through the API directly, or have some pipeline that transforms failures into ProblemDetails. This would add some boilerplate to every response and handler, but avoids exceptions as flow control, and the caching/reflection issues in the other options.
Assume for 1 and 2 that any sort of race conditions are not an issue. Objects don't change owners, and things are seldom actually deleted from the database for auditing/accounting purposes.
I know there's no true one size fits all for problems like this, but I would like to know if there's additional options I'm missing, or any long term maintainability issues anyone with a similar pipeline has encountered if they went with one of these listed options.
We use MediatR's IRequestPreProcessor for fetching data that we need both in the request handler and in the FluentValidation validators.
RequestPreProcessor:
public interface IProductByIdBinder
{
    int ProductId { get; }
    ProductEntity Product { set; }
}

public class ProductByIdBinder<T> : IRequestPreProcessor<T> where T : IProductByIdBinder
{
    private readonly IRepositoryReadAsync<ProductEntity> productRepository;

    public ProductByIdBinder(IRepositoryReadAsync<ProductEntity> productRepository)
    {
        this.productRepository = productRepository;
    }

    public async Task Process(T request, CancellationToken cancellationToken)
    {
        request.Product = await productRepository.GetAsync(request.ProductId);
    }
}
RequestHandler:
public class ProductDeleteCommand : IRequest, IProductByIdBinder
{
    public ProductDeleteCommand(int id)
    {
        ProductId = id;
    }

    public int ProductId { get; }
    public ProductEntity Product { get; set; }

    private class ProductDeleteCommandHandler : IRequestHandler<ProductDeleteCommand>
    {
        private readonly IRepositoryAsync<ProductEntity> productRepository;

        public ProductDeleteCommandHandler(IRepositoryAsync<ProductEntity> productRepository)
        {
            this.productRepository = productRepository;
        }

        public Task<Unit> Handle(ProductDeleteCommand request, CancellationToken cancellationToken)
        {
            productRepository.Delete(request.Product);
            return Unit.Task;
        }
    }
}
FluentValidation validator:
public class ProductDeleteCommandValidator : AbstractValidator<ProductDeleteCommand>
{
    public ProductDeleteCommandValidator()
    {
        RuleFor(cmd => cmd)
            .Must(cmd => cmd.Product != null)
            .WithMessage(cmd => $"The product with id {cmd.ProductId} doesn't exist.");
    }
}
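For reference, a rough sketch of how this might be wired up with Microsoft.Extensions.DependencyInjection (the registration calls depend on your MediatR version, and the closed registration below is one way to sidestep open-generic constraint handling in the container):
// Scan the assembly for handlers and validators.
services.AddMediatR(typeof(ProductDeleteCommand).Assembly);

// Closed registration per command; some container versions do not filter
// open-generic registrations by their type constraints.
services.AddTransient<IRequestPreProcessor<ProductDeleteCommand>>(sp =>
    new ProductByIdBinder<ProductDeleteCommand>(
        sp.GetRequiredService<IRepositoryReadAsync<ProductEntity>>()));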
I see nothing wrong with handling business-logic validation in the handler layer.
Moreover, I do not think it is right to throw exceptions for these failures; as you said, that is exceptions as flow control.
Introducing a cache seems like overkill for the use case, too. The most reasonable option is the third, IMHO.
Instead of implementing an interface, you can use the nifty OneOf library and have something like:
// Note: alias types must be fully qualified; Success and NotFound live in OneOf.Types,
// and your own ValidationResponse needs its namespace here as well.
using HandlerResponse = OneOf.OneOf<OneOf.Types.Success, OneOf.Types.NotFound, ValidationResponse>;
public class MediatorHandler : IRequestHandler<Command, HandlerResponse>
{
    public async Task<HandlerResponse> Handle(
        Command command,
        CancellationToken cancellationToken)
    {
        Resource resource = await _userRepository.GetResource(command.Id);

        if (resource is null)
            return new NotFound();

        if (!resource.IsValid)
            return new ValidationResponse(new ProblemDetails());

        return new Success();
    }
}
And then map it in your API layer like:
public async Task<IActionResult> PostAsync([FromBody] DummyRequest request)
{
    HandlerResponse response = await _mediator.Send(new Command(request.Id));

    return response.Match<IActionResult>(
        success => Created(),
        notFound => NotFound(),
        failed => new UnprocessableEntityObjectResult(failed.ProblemDetails));
}
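For completeness, ValidationResponse above is not part of the OneOf library; it is assumed here to be a small custom wrapper along these lines:
// Hypothetical wrapper carrying the details that the API layer unwraps.
public record ValidationResponse(ProblemDetails ProblemDetails);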

Sending DataStream in Flink using sockets; serialization issue

I want to send a stream of data from a VM to the host machine, and I am using the method writeToSocket() as shown below:
joinedStreamEventDataStream.writeToSocket("192.168.1.10", 6998) ;
Here joinedStreamEventDataStream is of type DataStream<Integer,Integer>.
Can someone please tell me how I should pass a serializer to the above method?
Thanks in advance.
It depends a little bit on how you would like to read the data from the socket. If you expect it to be the String representation of the data, then you could do it via:
joinedStreamEventDataStream.map(new MapFunction<Type, String>() {
    @Override
    public String map(Type value) throws Exception {
        return value.toString();
    }
}).writeToSocket(hostname, port, new SimpleStringSchema());
If you want to keep Flink's serialization format, then you can write:
joinedStreamEventDataStream.writeToSocket(
    hostname,
    port,
    new TypeInformationSerializationSchema<>(
        joinedStreamEventDataStream.getType(),
        env.getConfig()));
If you want to output it in your own serialization format, then you have to implement your own SerializationSchema as pointed out by Alex.
The writeToSocket() method takes three arguments: the socket host, the port, and an implementation of the SerializationSchema interface, which is used to serialize your data. So your implementation might look like this:
joinedStreamEventDataStream.writeToSocket(
    "192.168.1.10", // host name
    6998,           // port
    new SerializationSchema<Integer>() {
        @Override
        public byte[] serialize(Integer element) {
            return ByteBuffer.allocate(4).putInt(element).array();
        }
    }
);
This works if joinedStreamEventDataStream is of type DataStream<Integer>.

How to modify variables in an atomic way using REST API

Consider a process instance variable which currently has some value. I would like to update its value, for instance increment it by one, using the REST API of Activiti/Camunda. How would you do this?
The problem is that the REST API has services for setting and getting variable values, but composing these calls can easily lead to race conditions.
Also consider that my example uses an integer, while a variable could be a complex JSON object or an array!
This answer is for Camunda 7.3.0:
There is no out-of-the-box solution. You can do the following:
Extend the REST API with a custom resource that implements an endpoint for variable modification. Since the Camunda REST API uses JAX-RS, it is possible to add the Camunda REST resources to a custom JAX-RS application. See [1] for details.
In the custom resource endpoint, implement the read-modify-write cycle in one transaction using a custom command:
protected void readModifyWriteVariable(CommandExecutor commandExecutor, final String processInstanceId,
        final String variableName, final int valueToAdd) {

    try {
        commandExecutor.execute(new Command<Void>() {
            public Void execute(CommandContext commandContext) {
                Integer myCounter = (Integer) runtimeService().getVariable(processInstanceId, variableName);

                // do something with the variable
                myCounter += valueToAdd;

                // the update provokes an OptimisticLockingException when the command ends,
                // if the variable was updated in the meantime
                runtimeService().setVariable(processInstanceId, variableName, myCounter);
                return null;
            }
        });
    } catch (OptimisticLockingException e) {
        // try again
        readModifyWriteVariable(commandExecutor, processInstanceId, variableName, valueToAdd);
    }
}
See [2] for a detailed discussion.
[1] http://docs.camunda.org/manual/7.3/api-references/rest/#overview-embedding-the-api
[2] https://groups.google.com/d/msg/camunda-bpm-users/3STL8s9O2aI/Dcx6KtKNBgAJ

using log4net and IHttpModule with a WCF service

I have a website that contains a number of web pages and some WCF services.
I have a logging IHttpModule which subscribes to PreRequestHandlerExecute and sets a number of log4net MDC variables, such as:
MDC.Set("path", HttpContext.Current.Request.Path);
string ip = HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
if(string.IsNullOrWhiteSpace(ip))
ip = HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"];
MDC.Set("ip", ip);
This module works well for my aspx pages.
To enable the module to work with WCF I have set aspNetCompatibilityEnabled="true" in the web.config and RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed on the service.
But when the service method is called the MDC no longer contains any of the set values. I have confirmed they are being set by putting a logging method in the PreRequestHandlerExecute.
I think the MDC is losing the values because in the log I can see that the PreRequestHandlerExecute handler method and the service method calls are on separate threads.
The post "log4net using ThreadContext.Properties in wcf PerSession service" suggests using log4net.GlobalContext, but I think that solution would run into issues if two users hit the application at the same time, since GlobalContext is shared by all threads.
Is there a way to make this work?
Rather than taking the values from the HttpContext and storing them in one of log4net's context objects, why not log the values directly from the HttpContext? See my answer to the linked question for some techniques that might work for you.
Capture username with log4net
If you go to the bottom of my answer, you will find what might be the best solution. Write an HttpContext value provider object that you can put in log4net's GlobalDiagnosticContext.
For example, you might do something like this (untested):
public class HttpContextValueProvider
{
    private readonly string name;

    public HttpContextValueProvider(string name)
    {
        this.name = name.ToLower();
    }

    public override string ToString()
    {
        if (HttpContext.Current == null) return "";

        var context = HttpContext.Current;
        switch (name)
        {
            case "path":
                return context.Request.Path;
            case "user":
                if (context.User != null && context.User.Identity.IsAuthenticated)
                    return context.User.Identity.Name;
                return "";
            case "ip":
                string ip = context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
                if (string.IsNullOrWhiteSpace(ip))
                    ip = context.Request.ServerVariables["REMOTE_ADDR"];
                return ip;
            default:
                // Fall back to the HttpContext items dictionary.
                return Convert.ToString(context.Items[name]);
        }
    }
}
In the default clause I assume the name, if it is not a specific case that we want to handle, represents a value in the HttpContext.Current.Items dictionary. You could make it more generic by also adding the ability to access Request.ServerVariables and/or other HttpContext information.
You would use this object like so:
Somewhere in your program/web site/service, add some instances of the object to log4net's global dictionary. When log4net resolves the value from the dictionary, it will call ToString before logging the value.
GDC.Set("path", new HttpContextValueProvider("path"));
GDC.Set("ip", new HttpContextValueProvider("ip"));
Note that while you are using log4net's global dictionary, the objects you are putting in it are essentially wrappers around the HttpContext.Current object, so you will always get the information for the current request, even if you are handling simultaneous requests.
Good luck!

Pipeline support with BinaryJedis

I am using BinaryJedis to store and retrieve data, as I am dealing with raw data. With a Jedis pipeline I am able to save data in byte[] form in a Redis list. But when I try to retrieve this list data (one entry) using lindex, I can't find any interface for this; i.e., lindex takes byte[] as input but returns Response<String>:
public Response<String> lindex(byte[] key, int index) {
    client.lindex(key, index);
    return getResponse(BuilderFactory.STRING);
}
Why is there no interface that returns Response<byte[]>?