Redis UNSUBSCRIBE time complexity

The documentation says "O(N) where N is the number of clients already subscribed to a channel".
Does N mean the number of clients subscribed to the channel I want to unsubscribe from, or the number of clients subscribed to any channel?

It means the sum, over all the channels you unsubscribe from in the batch operation, of the number of clients subscribed to each of them. For example, unsubscribing in one call from a channel with 100 subscribers and another with 50 costs on the order of 150 list-node comparisons.
UNSUBSCRIBE [channel [channel ...]]

Good question. As @JeanJacquesGourdin said, it is the sum of the clients of all the channels you are going to unsubscribe from. I looked into the source code of Redis 5.0.5; here is what I found. Please correct me if I am wrong, thanks in advance.
There is a server-side hash table called server.pubsub_channels that maps every channel to the list of clients subscribed to it.
When you unsubscribe from a channel, the server removes your client from the list stored at server.pubsub_channels[your_channel], and iterating through that list to find the client is an O(N) operation: ln = listSearchKey(clients,c);
int pubsubUnsubscribeChannel(client *c, robj *channel, int notify) {
    dictEntry *de;
    list *clients;
    listNode *ln;
    int retval = 0;

    /* Remove the channel from the client -> channels hash table */
    incrRefCount(channel); /* channel may be just a pointer to the same object
                              we have in the hash tables. Protect it... */
    if (dictDelete(c->pubsub_channels,channel) == DICT_OK) {
        retval = 1;
        /* Remove the client from the channel -> clients list hash table */
        de = dictFind(server.pubsub_channels,channel);
        serverAssertWithInfo(c,NULL,de != NULL);
        clients = dictGetVal(de);
        ln = listSearchKey(clients,c); /** the iteration occurs here **/
        serverAssertWithInfo(c,NULL,ln != NULL);
        listDelNode(clients,ln);
        if (listLength(clients) == 0) {
            /* Free the list and associated hash entry at all if this was
             * the latest client, so that it will be possible to abuse
             * Redis PUBSUB creating millions of channels. */
            dictDelete(server.pubsub_channels,channel);
        }
    }
    /* Notify the client */
    if (notify) {
        addReply(c,shared.mbulkhdr[3]);
        ...
    }
    decrRefCount(channel); /* it is finally safe to release it */
    return retval;
}
listNode *listSearchKey(list *list, void *key)
{
    listIter iter;
    listNode *node;

    listRewind(list, &iter);
    while((node = listNext(&iter)) != NULL) {
        if (list->match) {
            if (list->match(node->value, key)) {
                return node;
            }
        } else {
            if (key == node->value) {
                return node;
            }
        }
    }
    return NULL;
}
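To make the batch cost concrete, here is a minimal C++ sketch (a toy model, not Redis code): it mirrors server.pubsub_channels with a map from channel name to a list of client IDs, and shows that unsubscribing from several channels costs the sum of the lengths of the scanned client lists. All names in it (PubSubChannels, unsubscribe, nodesVisited) are invented for illustration.

#include <cstddef>
#include <cstdio>
#include <list>
#include <string>
#include <unordered_map>

// Toy model of server.pubsub_channels: channel name -> subscribed client IDs.
using PubSubChannels = std::unordered_map<std::string, std::list<int>>;

// Unsubscribe `client` from each channel in `channels`.
// The linear scan of each channel's client list mirrors listSearchKey(),
// so the total work is the sum of the lengths of the scanned lists.
std::size_t unsubscribe(PubSubChannels& pubsub, int client,
                        const std::list<std::string>& channels) {
    std::size_t nodesVisited = 0;  // stand-in for the O(N) work
    for (const auto& name : channels) {
        auto it = pubsub.find(name);               // O(1) hash lookup
        if (it == pubsub.end()) continue;
        for (auto cit = it->second.begin(); cit != it->second.end(); ++cit) {
            ++nodesVisited;
            if (*cit == client) {                  // found the client node
                it->second.erase(cit);             // O(1) unlink, like listDelNode()
                break;
            }
        }
        if (it->second.empty()) pubsub.erase(it);  // drop the empty channel entry
    }
    return nodesVisited;
}

int main() {
    PubSubChannels pubsub = {{"news", {1, 2, 3}}, {"chat", {2, 3}}};
    // Client 3 unsubscribes from both channels: cost ~ 3 + 2 list nodes.
    std::printf("nodes visited: %zu\n", unsubscribe(pubsub, 3, {"news", "chat"}));
}

With the two example channels, client 3 visits three nodes in "news" and two in "chat", so the per-channel costs add up exactly as described above.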

Related

How to create MPI performance models using Hockney model parameters?

I understand that in the Hockney model the parameters α and β represent the latency and the per-byte transfer time (inverse bandwidth) of a point-to-point communication, with m representing the message size. For example:
T(m) = α + β · m
I have been trying to model some Open MPI algorithms using this technique and can't figure out the following algorithm for MPI_Scatter:
int
ompi_coll_base_scatter_intra_linear_nb(const void *sbuf, int scount,
                                       struct ompi_datatype_t *sdtype,
                                       void *rbuf, int rcount,
                                       struct ompi_datatype_t *rdtype,
                                       int root,
                                       struct ompi_communicator_t *comm,
                                       mca_coll_base_module_t *module,
                                       int max_reqs)
{
    int i, rank, size, err, line, nreqs;
    ptrdiff_t incr;
    char *ptmp;
    ompi_request_t **reqs = NULL, **preq;

    rank = ompi_comm_rank(comm);
    size = ompi_comm_size(comm);

    /* If not root, receive data. */
    if (rank != root) {
        err = MCA_PML_CALL(recv(rbuf, rcount, rdtype, root,
                                MCA_COLL_BASE_TAG_SCATTER,
                                comm, MPI_STATUS_IGNORE));
        if (MPI_SUCCESS != err) {
            line = __LINE__; goto err_hndl;
        }
        return MPI_SUCCESS;
    }

    if (max_reqs <= 1) {
        max_reqs = 0;
        nreqs = size - 1; /* no send for myself */
    } else {
        /* We use blocking MPI_Send (which does not need a request)
         * every max_reqs send operation (which is size/max_reqs at most),
         * therefore no need to allocate requests for these sends. */
        nreqs = size - (size / max_reqs);
    }

    reqs = ompi_coll_base_comm_get_reqs(module->base_data, nreqs);
    if (NULL == reqs) {
        err = OMPI_ERR_OUT_OF_RESOURCE;
        line = __LINE__; goto err_hndl;
    }

    err = ompi_datatype_type_extent(sdtype, &incr);
    if (OMPI_SUCCESS != err) {
        line = __LINE__; goto err_hndl;
    }
    incr *= scount;

    /* I am the root, loop sending data. */
    for (i = 0, ptmp = (char *)sbuf, preq = reqs; i < size; ++i, ptmp += incr) {
        /* simple optimization */
        if (i == rank) {
            if (MPI_IN_PLACE != rbuf) {
                err = ompi_datatype_sndrcv(ptmp, scount, sdtype, rbuf, rcount,
                                           rdtype);
            }
        } else {
            if (!max_reqs || (i % max_reqs)) {
                err = MCA_PML_CALL(isend(ptmp, scount, sdtype, i,
                                         MCA_COLL_BASE_TAG_SCATTER,
                                         MCA_PML_BASE_SEND_STANDARD,
                                         comm, preq++));
            } else {
                err = MCA_PML_CALL(send(ptmp, scount, sdtype, i,
                                        MCA_COLL_BASE_TAG_SCATTER,
                                        MCA_PML_BASE_SEND_STANDARD,
                                        comm));
            }
        }
        if (MPI_SUCCESS != err) {
            line = __LINE__; goto err_hndl;
        }
    }

    err = ompi_request_wait_all(preq - reqs, reqs, MPI_STATUSES_IGNORE);
    if (MPI_SUCCESS != err) {
        line = __LINE__; goto err_hndl;
    }
    return MPI_SUCCESS;

err_hndl:
    if (NULL != reqs) {
        /* find a real error code */
        if (MPI_ERR_IN_STATUS == err) {
            for (i = 0; i < nreqs; i++) {
                if (MPI_REQUEST_NULL == reqs[i]) continue;
                if (MPI_ERR_PENDING == reqs[i]->req_status.MPI_ERROR) continue;
                if (reqs[i]->req_status.MPI_ERROR != MPI_SUCCESS) {
                    err = reqs[i]->req_status.MPI_ERROR;
                    break;
                }
            }
        }
        ompi_coll_base_free_reqs(reqs, nreqs);
    }
    OPAL_OUTPUT((ompi_coll_base_framework.framework_output,
                 "%s:%4d\tError occurred %d, rank %2d", __FILE__, line, err, rank));
    (void)line; /* silence compiler warning */
    return err;
}
So far, from looking at the code, I understand that the model should be
T(NP, m) = (NP − 1) · (α + m · β)
with NP being the number of processes (since Scatter sends to every process apart from the root).
This does not account for the non-blocking sends issued with MPI_Isend (under the condition found in the code snippet). I am unsure how to account for both the non-blocking and the blocking sends using only the Hockney model.
Any help would be very much appreciated, as none of the papers I have read on the subject seem to explain the process well.
First of all, the source file mentions that this implementation is intended only for small numbers of processes: for larger numbers you probably want a tree-based algorithm. Next, the max_reqs parameter controls how many isend calls are issued before a blocking send call. In an ideal world, the running time of this algorithm would therefore be governed by the number of blocking sends; in practice, the non-blocking sends still have to be serialized onto the wire.
My best guess is that this algorithm is meant for the case where there are multiple network cards, or multiple ports per network card. If you can physically send 4 messages at a time, then this code sets up 3 non-blocking sends and 1 blocking send, and once the blocking send has gone through, your network ports are ready for the next batch of messages.
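One hedged way to fold that reading into the Hockney model, assuming the hardware really can push k = max_reqs messages concurrently (which is an assumption about the platform, not something the code guarantees), is to charge one α + m · β term per batch of k sends:
T(NP, m, k) ≈ ⌈(NP − 1) / k⌉ · (α + m · β)
With k = 1 (fully serialized sends) this reduces to the questioner's (NP − 1) · (α + m · β); with k = NP − 1 it collapses to a single α + m · β term. Measured times will usually sit somewhere between these two extremes, because the sends still share the injection bandwidth at the root.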

How to add functions for opening and closing bets per match in a smart contract?

Source code: https://github.com/laronlineworld/bettingMatch/blob/main/bettingMatch.sol
How do I open and close betting per match?
function bet(uint16 _matchSelected, uint16 _resultSelected) public payable {
    // Check if the player already exists
    // require(!checkIfPlayerExists(msg.sender));
    // Check if the value sent by the player is higher than the minimum value
    require(msg.value >= minimumBet);
    // Set the player information: amount of the bet, match and result selected
    playerInfo[msg.sender].amountBet = msg.value;
    playerInfo[msg.sender].matchSelected = _matchSelected;
    playerInfo[msg.sender].resultSelected = _resultSelected;
    // Add the address of the player to the players array
    players.push(msg.sender);
    // Finally, increment the stakes of the team selected with the player's bet
    if (_resultSelected == 1) {
        totalBetHome[_matchSelected] += msg.value;
    }
    else if (_resultSelected == 2) {
        totalBetAway[_matchSelected] += msg.value;
    }
    else {
        totalBetDraw[_matchSelected] += msg.value;
    }
}
This is the code for opening the betting
/* Function to enable betting */
function beginVotingPeriod() public onlyOwner returns(bool) {
    bettingActive = true;
    return true;
}
How about opening betting per match?
And also closing betting per match:
/* Function to close voting and handle payout. Can only be called by the owner. */
function closeVoting() public onlyOwner returns (bool) {
    // Close the betting period
    bettingActive = false;
    return true;
}
The linked code identifies a match as an index into these arrays: totalBetHome, totalBetAway and totalBetDraw.
You can add a mapping where the key is the match ID and the value is a flag signalling whether betting is enabled for that match.
// default value for each key is `false`
mapping (uint256 => bool) isBettingEnabled;

function enableBetting(uint256 _matchId) external onlyOwner {
    isBettingEnabled[_matchId] = true;
}

function disableBetting(uint256 _matchId) external onlyOwner {
    isBettingEnabled[_matchId] = false;
}
Then you can amend the bet() function, adding a condition that requires betting to be enabled for this match.
function bet(uint16 _matchSelected, uint16 _resultSelected) public payable {
    require(isBettingEnabled[_matchSelected] == true);
    // rest of your code
}
Note: Solidity v0.4.2, which the linked code uses, is a few years old and has several known security issues. The current version (as of August 2021) is 0.8.7; consider upgrading to the latest version.

Ethereum contract, running a function corrupts contract members

I made a simple contract that stores ether and can then send ether. The function that sends ether has a requirement that only the owner of the contract can send ether from it.
The contract mysteriously fails to send ether on every call after the first.
I created a function to retrieve the owner address stored in the contract, and it turns out that the first function call changes it to 0x000000000000000000000000000000000000000a.
Sending function:
function SendToAddress (uint8 amt, address adr) isOwner {
    /* Have we transferred over the maximum amount in
       the last 24 hours? */
    if ((now - dayStartTime) >= secondsInADay) {
        dayStartTime = now;
        curDayTransfer = 0;
    }
    if ((curDayTransfer + amt) < dayMaxTransfer) {
        adr.transfer (amt);
        walletBalance -= amt;
        curDayTransfer += amt;

        MoneyTransfer newTransfer;
        newTransfer.amount = amt;
        newTransfer.target = adr;
        newTransfer.timeStamp = now;

        if (transferHistory.length == 100) {
            // Shift all of the transactions in the history list forward
            // to make space for the transaction.
            for (uint8 i = 1; i < 100; i++) {
                transferHistory[i] = transferHistory[i-1];
            }
            transferHistory[0] = newTransfer;
        } else {
            transferHistory.push (newTransfer);
        }
    }
}
isOwner modifier:
modifier isOwner() {
    require(msg.sender == creatorAddress);
    _;
}
constructor:
constructor () public {
    creatorAddress = msg.sender;
}
I assume the compiler gives you a warning on the line MoneyTransfer newTransfer; about implicitly storing the data in storage. If you explicitly use MoneyTransfer storage newTransfer;, then you'll get a warning that you're using an uninitialized storage reference. That means whatever values you put in newTransfer will overwrite whatever's in the first few storage slots.
Use MoneyTransfer memory newTransfer; instead.

How can I get all process names in OS X programmatically, not just app processes?

I want to get a snapshot of the process info on an OS X system.
NSProcessInfo can only get info about the calling process.
The ps command could be one solution, but I'd like a C or Objective-C program.
Here's an example using libproc.h to iterate over all the processes on the system and determine how many of them belong to the effective user of the calling process. You can easily modify this for your needs.
- (NSUInteger)maxSystemProcs
{
    int32_t maxproc;
    size_t len = sizeof(maxproc);
    sysctlbyname("kern.maxproc", &maxproc, &len, NULL, 0);
    return (NSUInteger)maxproc;
}

- (NSUInteger)runningUserProcs
{
    NSUInteger maxSystemProcs = self.maxSystemProcs;
    pid_t * const pids = calloc(maxSystemProcs, sizeof(pid_t));
    NSAssert(pids, @"Memory allocation failure.");
    const int pidcount = proc_listallpids(pids, (int)(maxSystemProcs * sizeof(pid_t)));
    NSUInteger userPids = 0;
    uid_t uid = geteuid();
    for (int i = 0; i < pidcount; i++) {
        struct proc_bsdshortinfo bsdshortinfo;
        int writtenSize;
        writtenSize = proc_pidinfo(pids[i], PROC_PIDT_SHORTBSDINFO, 0, &bsdshortinfo, sizeof(bsdshortinfo));
        if (writtenSize != (int)sizeof(bsdshortinfo)) {
            continue;
        }
        if (bsdshortinfo.pbsi_uid == uid) {
            userPids++;
        }
    }
    free(pids);
    return (NSUInteger)userPids;
}
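Since the question asks for the process names themselves and the snippet above only counts processes, here is a minimal C++ sketch built on the same libproc API that prints the PID and name of every process returned by proc_listallpids(). It is a sketch rather than production code: error handling is minimal, and proc_name() returns 0 for processes the caller is not allowed to inspect, so those are silently skipped.

#include <libproc.h>     // proc_listallpids(), proc_name()
#include <sys/sysctl.h>  // sysctlbyname()
#include <sys/types.h>   // pid_t
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Upper bound on the number of processes the kernel allows.
    int32_t maxproc = 0;
    size_t len = sizeof(maxproc);
    sysctlbyname("kern.maxproc", &maxproc, &len, nullptr, 0);
    if (maxproc <= 0) return 1;

    // Fetch all PIDs currently on the system.
    std::vector<pid_t> pids(static_cast<size_t>(maxproc));
    const int count = proc_listallpids(pids.data(),
                                       static_cast<int>(pids.size() * sizeof(pid_t)));

    for (int i = 0; i < count; ++i) {
        char name[256] = {0};
        // proc_name() returns the number of bytes written, or 0 on failure
        // (e.g. insufficient privileges for that process).
        if (proc_name(pids[i], name, sizeof(name)) > 0) {
            std::printf("%6d  %s\n", pids[i], name);
        }
    }
    return 0;
}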

How to check forwarded packets in UDPBasicApp in OMNeT++

How can I modify UDPBasicApp to find duplicates among the messages received?
I made the changes below to UDPBasicApp.cc to add an extra step that checks received UDP data packets, but I see no effect in the .sca/.vec files and it does not even show the bubbles.
Where could the error be?
void UDPBasicApp::handleMessageWhenUp(cMessage *msg)
{
    if (msg->isSelfMessage()) {
        ASSERT(msg == selfMsg);
        switch (selfMsg->getKind()) {
            case START:
                processStart();
                break;
            case SEND:
                processSend();
                break;
            case STOP:
                processStop();
                break;
            default:
                throw cRuntimeError("Invalid kind %d in self message", (int)selfMsg->getKind());
        }
    }
    else if (msg->getKind() == UDP_I_DATA) {
        // process incoming packet
        //----------------------------------------------------- Added step
        //std::string currentMsg = "" + msg->getTreeId();
        std::string currentPacket = PK(msg)->getName();
        if (BF->CheckBloom(currentPacket) == 1) {
            numReplayed++;
            getParentModule()->bubble("Replayed!!");
            EV << "----------------------WSNode " << getParentModule()->getIndex() << ": REPLAYED! Dropping Packet\n";
            delete msg;
            return;
        }
        else {
            BF->AddToBloom(currentPacket);
            numLegit++;
            getParentModule()->bubble("Legit.");
            EV << "----------------------WSNode " << getParentModule()->getIndex() << ": OK. Pass.\n";
        }
        //-----------------------------------------------------------------------------
        processPacket(PK(msg));
    }
    else if (msg->getKind() == UDP_I_ERROR) {
        EV_WARN << "Ignoring UDP error report\n";
        delete msg;
    }
    else {
        throw cRuntimeError("Unrecognized message (%s)%s", msg->getClassName(), msg->getName());
    }

    if (hasGUI()) {
        char buf[40];
        sprintf(buf, "rcvd: %d pks\nsent: %d pks", numReceived, numSent);
        getDisplayString().setTagArg("t", 0, buf);
    }
}
Since I don't have enough context about the entities participating in your overall system, I will offer the following idea:
You can add a unique ID to each message of your application by adding the following line to your application's *.msg file:
int messageID = simulation.getUniqueNumber();
Now on the receiver side you can keep an std::map<int, int> myMap where you store <id, number of occurrences> pairs.
Each time you receive a message, you add its ID to the std::map and increment its number of occurrences:
if (this->myMap.count(myMessage->getMessageID()) == 0) /* check whether this ID exists in the map */
{
    this->myMap.insert(std::make_pair(myMessage->getMessageID(), 1)); /* add this ID to the map and set its counter to 1 */
}
else
{
    this->myMap.at(myMessage->getMessageID())++; /* the ID is already in the map, so increment its counter */
}
This will allow you to track whether the same message has been forwarded twice, simply by doing:
if (this->myMap.at(myMessage->getMessageID()) != 1) /* the counter is not 1, so the message has been "seen" more than once */
The tricky part for you is how to define whether a message has been seen twice (or more).
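To make the counting idea concrete, here is a small self-contained C++ sketch, independent of OMNeT++/INET. The DuplicateDetector class and its method names are invented for illustration and are not part of any framework.

#include <iostream>
#include <map>

// Counts how often each message ID has been seen; reports a duplicate
// whenever an ID shows up for the second (or later) time.
class DuplicateDetector {
public:
    bool seenBefore(long messageId) {
        // operator[] default-constructs the counter to 0 on first access.
        int &count = counts_[messageId];
        ++count;
        return count > 1;
    }

    int occurrences(long messageId) const {
        auto it = counts_.find(messageId);
        return it == counts_.end() ? 0 : it->second;
    }

private:
    std::map<long, int> counts_;  // message ID -> number of occurrences
};

int main() {
    DuplicateDetector detector;
    long ids[] = {42, 7, 42, 42};  // pretend these are received message IDs
    for (long id : ids) {
        if (detector.seenBefore(id))
            std::cout << "duplicate: " << id
                      << " (seen " << detector.occurrences(id) << " times)\n";
        else
            std::cout << "first sighting: " << id << '\n';
    }
}

Inside handleMessageWhenUp() you would call something like seenBefore(myMessage->getMessageID()) instead of the hard-coded IDs used in main(), and either drop the packet or just record the statistic when it returns true.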