Does ServiceStack RedisClient support the Sort command?

My SORT command is
"SORT hot_ids by no_keys GET # GET msg:*->msg GET msg:*->count GET msg:*->comments"
It works fine in redis-cli, but it doesn't return data through RedisClient: the result is a byte[][] of the correct length, but every element of the array is null.
The response from redis is
...
$-1
$-1
...
The C# code is:
data = redis.Sort("hot_ids ", new SortOptions()
{
    GetPattern = "# GET msg:*->msg GET msg:*->count GET msg:*->comments",
    Skip = skip,
    Take = take,
    SortPattern = "not-key"
});

Redis Sort is used in IRedisClient.GetSortedItemsFromList, e.g. from RedisClientListTests.cs:
[Test]
public void Can_AddRangeToList_and_GetSortedItems()
{
    Redis.PrependRangeToList(ListId, storeMembers);
    var members = Redis.GetSortedItemsFromList(ListId,
        new SortOptions { SortAlpha = true, SortDesc = true, Skip = 1, Take = 2 });
    AssertAreEqual(members,
        storeMembers.OrderByDescending(s => s).Skip(1).Take(2).ToList());
}
You can use the MONITOR command in redis-cli to help diagnose and see what requests the ServiceStack Redis client is sending to redis-server.
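One thing MONITOR makes visible is argument boundaries: redis evaluates multiple GET clauses only when each pattern arrives as its own argument, so if the client sends the whole GetPattern string as a single GET argument, redis treats it as one pattern that matches nothing and returns a null bulk ($-1) for every element, which would look exactly like the all-null result described above. A known-good invocation to compare against (each GET pattern as a separate argument):

# Terminal 1: watch exactly what the client sends.
redis-cli MONITOR

# Terminal 2: a known-good SORT for comparison.
redis-cli SORT hot_ids BY no_keys GET '#' GET 'msg:*->msg' GET 'msg:*->count' GET 'msg:*->comments'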


What is the correct use of consumer groups in Spring Cloud Stream Data Flow and RabbitMQ?

A follow-up to this:
one SCDF source, 2 processors but only 1 processes each item
The two processors (del-1 and del-2) shown in the picture are receiving the same data within milliseconds of each other. I'm trying to arrange things so that del-2 never receives the same item as del-1 and vice versa. So obviously I've got something configured incorrectly, but I'm not sure where.
My processor has the following application.properties:
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=@project.artifactId@
info.app.description=@project.description@
info.app.version=@project.version@
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr=" + inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
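For reference, a @Transformer like the one above is conventionally paired with @EnableBinding(Processor.class) on the application class, so that the Processor.INPUT and Processor.OUTPUT channels exist at all; the consumer group itself comes purely from the properties file. A minimal sketch of that conventional setup (the class name is made up, and this is not necessarily what is missing here):

@SpringBootApplication
@EnableBinding(Processor.class)
public class SampleProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(SampleProcessorApplication.class, args);
    }

    // Same transformer as above: consumes from Processor.INPUT, emits to Processor.OUTPUT.
    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
    public Object transform(String inputStr) {
        return "I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    }
}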
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";

@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=[" + message + "]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about the "value = Source.OUTPUT". Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the 2 processor streams (del-1 and del-2) in SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
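As an aside: consumer groups give competing-consumer semantics across instances of the same app sharing one group, while two separately created streams bind two separate groups on :split and therefore each receive every message. Under that reading, a sketch of the single-stream, scaled alternative using SCDF's standard deployer count property (app name taken from the definitions above):

stream create del --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream deploy del --properties "deployer.processor-that-does-everything-sleeps5.count=2"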
All of this is running in Docker/K8s.
RabbitMQ comes from the bitnami/rabbitmq:3.7.2-r1 image and is configured with the following props:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit@localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?

Need to make a port detector

I am trying to make a service that can detect when a port becomes occupied.
netstat -lntu | grep tcp lists all occupied ports, so the difference between the output of one netstat execution and another taken after a port becomes occupied gives me the new port value.
It needs to be fully automated, so I made an infinite loop that continuously checks for differences between two consecutive netstat runs.
Here is my code in JS, and I am running it using node-daemonize2:
var shell = require("shelljs");
var InfiniteLoop = require("infinite-loop");
var diff = require("diff");

var il = new InfiniteLoop();
var oldStr = "";
var newStr = "";

// Take the initial snapshot of occupied ports.
shell.exec("netstat -lntu | grep tcp", { silent: true }, function (code, output, error) {
    oldStr = output;
});

il.add(function () {
    shell.exec("netstat -lntu | grep tcp", { silent: true }, function (code, output, error) {
        newStr = output;
        if (newStr.replace(oldStr, "") !== "") { // something changed between snapshots
            var dD = diff.diffChars(oldStr, newStr)[1];
            if (dD.added) {
                var value = dD.value.replace(/\n/g, "");
                var re = /[0-9][0-9][0-9][0-9]/; // note: only matches 4-digit ports
                value = re.exec(value);
                console.log(value[0]);
            }
        }
        oldStr = newStr;
    });
});

il.run();
Now this thing works perfectly fine, but it is resource-heavy. Is there a way to make it lighter, or a completely different, more efficient approach to this problem?
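One lighter-weight sketch (Linux-only, and assuming polling once a second is acceptable): skip spawning netstat and diffing strings entirely, and instead read /proc/net/tcp and /proc/net/tcp6 directly, then diff the set of listening ports. The 0A state code and the hex ADDR:PORT format are the standard layout of those files:

var fs = require('fs');

function listeningPorts() {
    var ports = new Set();
    ['/proc/net/tcp', '/proc/net/tcp6'].forEach(function (file) {
        var text;
        try { text = fs.readFileSync(file, 'utf8'); } catch (e) { return; }
        text.split('\n').slice(1).forEach(function (line) {
            var cols = line.trim().split(/\s+/);
            // Column 3 is the socket state; 0A means LISTEN.
            if (cols.length > 3 && cols[3] === '0A') {
                // local_address is "HEXADDR:HEXPORT"; the port is hex.
                ports.add(parseInt(cols[1].split(':')[1], 16));
            }
        });
    });
    return ports;
}

var known = listeningPorts();
setInterval(function () {
    var now = listeningPorts();
    now.forEach(function (p) {
        if (!known.has(p)) console.log('new listening port:', p);
    });
    known = now;
}, 1000); // one poll per second instead of a busy loop

This also removes the four-digit-only limitation of the regex above, since the port is parsed numerically.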

Apache Ignite: Persistent Storage

My understanding of Ignite Persistent Storage is that the data is not only kept in memory but also written to disk.
When the node is restarted, it should read the data from disk back into memory.
So I am using this example to test it out, but I updated it a little because I don't want to use XML.
This is my slightly updated code:
public class PersistentIgniteExpr {
    /** Organizations cache name. */
    private static final String ORG_CACHE = "CacheQueryExample_Organizations";

    /** */
    private static final boolean UPDATE = true;

    public void test(String nodeId) {
        // Apache Ignite node configuration.
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Ignite persistence configuration.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Enabling the persistence.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // Applying settings.
        cfg.setDataStorageConfiguration(storageCfg);

        List<String> addresses = new ArrayList<>();
        addresses.add("127.0.0.1:47500..47502");

        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        tcpDiscoverySpi.setIpFinder(new TcpDiscoveryMulticastIpFinder().setAddresses(addresses));
        cfg.setDiscoverySpi(tcpDiscoverySpi);

        try (Ignite ignite = Ignition.getOrStart(cfg.setIgniteInstanceName(nodeId))) {
            // Activate the cluster. Required when the persistent store is enabled because you might need
            // to wait while all the nodes that store a subset of data on disk join the cluster.
            ignite.active(true);

            CacheConfiguration<Long, Organization> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
            cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            cacheCfg.setBackups(1);
            cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
            cacheCfg.setIndexedTypes(Long.class, Organization.class);

            IgniteCache<Long, Organization> cache = ignite.getOrCreateCache(cacheCfg);

            if (UPDATE) {
                System.out.println("Populating the cache...");
                try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
                    streamer.allowOverwrite(true);
                    for (long i = 0; i < 100_000; i++) {
                        streamer.addData(i, new Organization(i, "organization-" + i));
                        if (i > 0 && i % 10_000 == 0)
                            System.out.println("Done: " + i);
                    }
                }
            }

            // Run SQL without explicitly calling loadCache().
            QueryCursor<List<?>> cur = cache.query(
                new SqlFieldsQuery("select id, name from Organization where name like ?")
                    .setArgs("organization-54321"));
            System.out.println("SQL Result: " + cur.getAll());

            // Run get() without explicitly calling loadCache().
            Organization org = cache.get(54321L);
            System.out.println("GET Result: " + org);
        }
    }
}
When I run it the first time, it works as intended.
After running it once, I assume the data has been written to disk, since the code enables persistent storage.
For the second run, I commented out this part:
if (UPDATE) {
    System.out.println("Populating the cache...");
    try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
        streamer.allowOverwrite(true);
        for (long i = 0; i < 100_000; i++) {
            streamer.addData(i, new Organization(i, "organization-" + i));
            if (i > 0 && i % 10_000 == 0)
                System.out.println("Done: " + i);
        }
    }
}
That is the part where the data is written. Now, when the SQL query is executed, it returns null. Does that mean the data was not written to disk?
Another question: I am not very clear about TcpDiscoverySpi. Can someone explain that as well?
Thanks in advance.
Do you have any exceptions at node startup?
Very probably, you don't have the IGNITE_HOME env variable configured, and the work directory for persistence is chosen differently each time you run a node.
You can either set the IGNITE_HOME env variable or add a line of code to set the work directory explicitly: cfg.setWorkDirectory("C:\\workDirectory");
TcpDiscoverySpi provides a way to discover remote nodes in a grid, so the starting node can join a cluster. It is better to use TcpDiscoveryVmIpFinder if you know the list of IPs. TcpDiscoveryMulticastIpFinder broadcasts UDP messages to the network to discover other nodes, and does not require a list of IPs at all.
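For illustration, a minimal sketch of the static-IP variant, reusing the address range from the question's code (standard Ignite API; the variable names are arbitrary):

TcpDiscoverySpi spi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
// Static address list: no UDP multicast involved, so discovery is deterministic.
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47502"));
spi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(spi);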
Please see https://apacheignite.readme.io/docs/cluster-config for more details.

How to pass the same parameter with different values

I am trying the following API call using Alamofire, but this API has multiple "to" fields. I tried to pass an array of "to" emails as parameters; it shows no error, but the mail is not sent to all of the emails. The API itself is correct; I tested it from the terminal. Any suggestions will be cordially welcomed.
http -a email:pass -f POST 'sampleUrl' from="email@email.com" to="ongkur.cse@gmail.com" to="emailgmail@email.com" subject="test_sub" bodyText="testing hello"
Here is my code:
class func sendMessage(message: MessageModel, delegate: RestAPIManagerDelegate?) {
    let urlString = "http://localhost:8080/app/user/messages"
    var parameters = [String: AnyObject]()
    parameters = [
        "from": message.messageFrom.emailAddress
    ]

    var array = [String]()
    for to in message.messageTO {
        array.append(to)
    }
    parameters["to"] = array

    for cc in message.messageCC {
        parameters["cc"] = cc.emailAddress
    }
    for bcc in message.messageBCC {
        parameters["bcc"] = bcc.emailAddress
    }

    parameters["subject"] = message.messageSubject
    parameters["bodyText"] = message.bodyText

    Alamofire.request(.POST, urlString, parameters: parameters)
        .authenticate(user: MessageManager.sharedInstance().primaryUserName, password: MessageManager.sharedInstance().primaryPassword)
        .validate(statusCode: 200..<201)
        .validate(contentType: ["application/json"])
        .responseJSON { (_, _, jsonData, error) in
            if (error != nil) {
                println("\n sendMessage attempt json response:")
                println(error!)
                delegate?.messageSent?(false)
                return
            }
            println("Server response during message sending:\n")
            let swiftyJSONData = JSON(jsonData!)
            println(swiftyJSONData)
            delegate?.messageSent?(true)
        }
}
First of all, if you created the API yourself, you should consider changing it to expect an array of 'to' receivers instead of the same parameter name repeated multiple times.
As back2dos states in this answer: https://stackoverflow.com/a/1898078/672989
Although POST may have multiple values for the same key, I'd be cautious using it, since some servers can't even handle that properly, which is probably why this isn't supported ... if you convert "duplicate" parameters to a list, the whole thing might start to choke if a parameter comes in only once and you suddenly wind up with a string or something ...
And I think he's right.
In this case I guess this is not possible with Alamofire, just as it is not possible with AFNetworking: https://github.com/AFNetworking/AFNetworking/issues/21
Alamofire probably stores its POST parameters in a Dictionary, which doesn't allow duplicate keys.
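If the server genuinely expects the same key repeated (as the httpie call above does), one workaround is to build the form body by hand, since dictionary-based parameters can't hold duplicate keys. A rough sketch against the Alamofire 1.x-era API used above; formEncode is a hypothetical helper, and it does not escape reserved characters like '&' or '=' inside values, so this is illustrative only:

func formEncode(value: String) -> String {
    // NOTE: hypothetical helper; does not escape '&' or '=' inside values.
    return value.stringByAddingPercentEscapesUsingEncoding(NSUTF8StringEncoding) ?? value
}

let request = NSMutableURLRequest(URL: NSURL(string: urlString)!)
request.HTTPMethod = "POST"
request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")

var body = "from=" + formEncode(message.messageFrom.emailAddress)
for to in message.messageTO {
    body += "&to=" + formEncode(to) // the "to" key repeated once per recipient
}
body += "&subject=" + formEncode(message.messageSubject)
body += "&bodyText=" + formEncode(message.bodyText)
request.HTTPBody = body.dataUsingEncoding(NSUTF8StringEncoding)

Alamofire.request(request)
    .authenticate(user: MessageManager.sharedInstance().primaryUserName,
                  password: MessageManager.sharedInstance().primaryPassword)
    .responseJSON { (_, _, jsonData, error) in
        // handle the response exactly as in the original responseJSON closure
    }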

Matching and verifying Express 3/Connect 2 session keys from a socket.io connection

I have a good start on a technique similar to this in Express 3:
http://notjustburritos.tumblr.com/post/22682186189/socket-io-and-express-3
The idea is to grab the session object from within a socket.io connection callback, storing sessions via connect-redis in this case.
So, in app.configure we have:
var db = require('connect-redis')(express)
....
app.configure(function(){
    ....
    app.use(express.cookieParser(SITE_SECRET));
    app.use(express.session({ store: new db }));
And in the app code there is:
var redis_client = require('redis').createClient()

io.set('authorization', function (data, accept) {
    if (!data.headers.cookie) {
        return accept('Session cookie required.', false)
    }
    data.cookie = require('cookie').parse(data.headers.cookie);
    /* verify the signature of the session cookie. */
    //data.cookie = require('cookie').parse(data.cookie, SITE_SECRET);
    data.sessionID = data.cookie['connect.sid']
    redis_client.get(data.sessionID, function (err, session) {
        if (err) {
            return accept('Error in session store.', false)
        } else if (!session) {
            return accept('Session not found.', false)
        }
        // success! we're authenticated with a known session.
        data.session = session
        return accept(null, true)
    })
})
The sessions are being saved to redis; the keys look like this:
redis 127.0.0.1:6379> KEYS *
1) "sess:lpeNPnHmQ2f442rE87Y6X28C"
2) "sess:qsWvzubzparNHNoPyNN/CdVw"
and the values are unencrypted JSON. So far so good.
The cookie header, however, contains something like
{ 'connect.sid': 's:lpeNPnHmQ2f442rE87Y6X28C.obCv2x2NT05ieqkmzHnE0VZKDNnqGkcxeQAEVoeoeiU' }
So now the SessionStore key and the connect.sid cookie don't match, because the signature part (after the .) is stripped from the SessionStore version.
The question is: is it safe to just truncate the cookie to the SID part (lpeNPnHmQ2f442rE87Y6X28C) and match based on that, or should the signature part be verified? If so, how?
Rather than hacking around with private methods and internals of Connect that were not meant to be used this way, this npm module does a good job of wrapping socket.on in a method that pulls in the session, and parses and verifies it:
https://github.com/functioncallback/session.socket.io
Just use the cookie-signature module, as recommended by the comment lines in Connect's utils.js.
var cookie = require('cookie-signature');

// assuming you already put the session id from the client in a var called "sid"
var sid = cookies['connect.sid'];
// strip the "s:" prefix, then verify the signature with your session secret
sid = cookie.unsign(sid.slice(2), yourSecret);

if (sid === false) { // unsign returns boolean false on a bad signature
    // cookie validation failure
    // uh oh. Handle this error
} else {
    sid = "sess:" + sid;
    // proceed to retrieve from store
}
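Putting the two answers together, the unsign step slots into the earlier 'authorization' callback roughly like this (a sketch; SITE_SECRET is the same secret passed to express.cookieParser above, and the stored session is plain JSON, so it is parsed before use):

var signature = require('cookie-signature');

io.set('authorization', function (data, accept) {
    if (!data.headers.cookie) return accept('Session cookie required.', false);
    var cookies = require('cookie').parse(data.headers.cookie);
    var raw = cookies['connect.sid'];                      // "s:<sid>.<signature>"
    var sid = signature.unsign(raw.slice(2), SITE_SECRET); // verify and strip the signature
    if (sid === false) return accept('Cookie signature invalid.', false);
    redis_client.get('sess:' + sid, function (err, session) {
        if (err || !session) return accept('Session not found.', false);
        data.session = JSON.parse(session); // the values in redis are unencrypted JSON
        return accept(null, true);
    });
});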