Ejabberd module to send an acknowledgement message

I am using this Erlang code to send a message acknowledgement. While using it I am getting an error; the error log is given below.
My code:
-module(mod_ack).
-behaviour(gen_mod).

%% public methods for this module
-export([start/2, stop/1]).
-export([on_user_send_packet/3]).

-include("logger.hrl").
-include("ejabberd.hrl").
-include("jlib.hrl").

%% add and remove the hook on module start and stop
start(Host, _Opts) ->
    ?INFO_MSG("mod_echo_msg starting", []),
    ejabberd_hooks:add(user_send_packet, Host, ?MODULE, on_user_send_packet, 0),
    ok.

stop(Host) ->
    ?INFO_MSG("mod_echo_msg stopping", []),
    ejabberd_hooks:delete(user_send_packet, Host, ?MODULE, on_user_send_packet, 0),
    ok.

on_user_send_packet(From, To, Packet) ->
    return_message_reciept_to_sender(From, To, Packet),
    Packet.

return_message_reciept_to_sender(From, _To, Packet) ->
    IDS = xml:get_tag_attr_s("id", Packet),
    ReturnRecieptType = "serverreceipt",
    %% ?INFO_MSG("mod_echo_msg - MsgID: ~p To: ~p From: ~p", [IDS, _To, From]),
    send_message(From, From, ReturnRecieptType, IDS, "").

send_message(From, To, TypeStr, IDS, BodyStr) ->
    XmlBody = {xmlelement, "message",
               [{"type", TypeStr},
                {"from", jlib:jid_to_string(From)},
                {"to", jlib:jid_to_string(To)},
                {"id", IDS}],
               [{xmlelement, "body", [],
                 [{xmlcdata, BodyStr}]}]},
    ejabberd_router:route(From, To, XmlBody).
I have removed the other modules where I used the user_send_packet hook, but I am still getting the error. I have also updated the error log.
Error log:
2015-10-06 07:13:45.796 [error] <0.437.0>#ejabberd_hooks:run_fold1:371 {function_clause,[{xml,get_tag_attr_s,[<<"id">>,{jid,<<"xxxxxx">>,<<"xxxxxx">>,<<>>,<<"xxxxxx">>,
<<"xxxxxx">>,<<>>}],[{file,"src/xml.erl"},{line,210}]},{mod_ack,return_message_reciept_to_sender,3,
[{file,"src/mod_ack.erl"},{line,36}]},{mod_ack,on_user_send_packet,4,[{file,"src/mod_ack.erl"},{line,30}]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},{line,385}]},{ejabberd_hooks,run_fold1,4,[{file,"src/ejabberd_hooks.erl"},{line,368}]},{ejabberd_c2s,session_established2,2,[{file,"src/ejabberd_c2s.erl"},{line,1296}]},{p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}

You seem to have another module interfering (mod_send_receipt), or some other code registering on the hook. Your module is called mod_ack, and it is not the one generating the crash. The crash happens because you have a hook registered to run the function mod_send_receipt:on_user_send_packet, which is 'undef': it does not exist or is not exported.

The ejabberd docs define the user_send_packet hook as an arity-4 function:
user_send_packet(Packet, C2SState, From, To) -> Packet
You are registering an arity-3 function, so when ejabberd tries to call your on_user_send_packet function, it passes 4 arguments and gets the undef function exception.
In order for your callback function to actually be called, you will need to match its argument list to what ejabberd will be sending, i.e.:
on_user_send_packet(Packet, _C2SState, From, To)
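A sketch of the corrected callback (assuming an ejabberd version where user_send_packet is an arity-4 fold hook, as the quoted docs describe):

```erlang
%% Export the arity-4 callback instead of the arity-3 one
-export([on_user_send_packet/4]).

on_user_send_packet(Packet, _C2SState, From, To) ->
    return_message_reciept_to_sender(From, To, Packet),
    %% user_send_packet is a fold hook, so the (possibly
    %% modified) packet must be returned as the accumulator
    Packet.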


address(this).send(msg.value) returning false but ethers got transfered

Below is my function:
// Function
function deposit() payable external {
    // if(!wallet_address.send(msg.value)){
    //     revert("deposit fail");
    // }
    bool isErr = address(this).send(msg.value);
    console.log(isErr);
    emit Deposit(msg.sender, msg.value, address(this).balance);
}
I use Remix IDE with Solidity version 0.8.7. My question is: why does send() return false even though the ether got transferred? Does send() return false on success by default?
address(this).send(msg.value) effectively just creates an unnecessary internal transaction redirecting the value accepted by "this contract" back to "this contract".
This internal transaction fails because your contract implements neither the receive() nor the fallback() special function, which are needed to accept ETH sent to your contract from send(), transfer(), call() in some cases, and generally from any transaction (internal or top-level) that does not invoke a specific existing function. This does not fail the main transaction; send() just returns false.
TLDR: The send() call is redundant here and you can safely remove it. Your contract is able to accept ETH through the deposit() function even without it.
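For completeness: if you actually wanted address(this).send(msg.value) to succeed, the contract would need to accept plain ETH transfers, e.g. via the receive() special function. A minimal sketch (the empty body fits within send()'s 2300-gas stipend):

```solidity
// Accepts plain ETH transfers, so send()/transfer() to this
// contract return true instead of false.
receive() external payable {}
```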
send() is a low-level call, and it can fail after the transfer step. If you don't check the returned success flag, the compiler warns you that the call could fail while you carry on unaware that you ignored the failure.
So you should check the success variable to ensure the transfer succeeded:
require(success, "ETH_TRANSFER_FAILED");

TypeError: Type is not callable - on compile

I have created a game where I have an interface and a contract with a Game function
But when I compile it throws an exception:
TypeError: Type is not callable
myGameContract.depositPlayer(0, msg.value)();
It is clear that it refers to the fallback call appended after msg.value): the ().
But I don't know how to fix the interface.
interface Game {
    function depositPlayer(uint256 _pid, uint256 _amount) external payable;
}

function myGame(address _reciever) external payable {
    addressBoard = payable(_reciever);
    myGameContract = Game(addressBoard);
    myGameContract.depositPlayer(0, msg.value)();
}
In this case I need it to contain a fallback invocation:
();
More below: for further clarification, as the comments on the answer indicate, only call() triggers the fallback.
You can execute an external function without the empty parentheses.
Example that executes the external depositPlayer function:
myGameContract.depositPlayer(0, msg.value); // removed the `()`
You can execute the fallback function by sending an empty data field.
address(myGameContract).call("");
But the fallback function is executed when no suitable function (specified in the data field) is found. It's not executed after each function. So you can't execute both depositPlayer and fallback in the same call (except for executing depositPlayer from the fallback).
This is an easy error to make when you accidentally let an argument shadow a function. Here's an easy-to-understand place this error would pop up:
constructor(address owner, address[] memory defaultOperators)
    ERC777("Shipyard", "SHIP", defaultOperators)
{
    transferOwnership(owner);
    ico = address(new ShipICO(this));
    _mint(owner(), INITIAL_SUPPLY, "", "");
    _pause();
}
Note that the argument owner has the same name as the function called in the _mint(owner(), ...) line. Because the argument has the same name, you're now trying to call the argument owner, which is of type address, as if it were a function. (Note that in this case, owner() is a function inherited from the OZ Ownable contract, making it especially easy to miss.)
An easy, common convention is to add an underscore. In this case _owner may already be taken, so you can add an underscore to the end (owner_) for the argument.
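Applied to the constructor above, the renamed argument no longer shadows the inherited function (a sketch; same OZ ERC777/Ownable setup as in the snippet):

```solidity
constructor(address owner_, address[] memory defaultOperators)
    ERC777("Shipyard", "SHIP", defaultOperators)
{
    transferOwnership(owner_);              // the argument
    ico = address(new ShipICO(this));
    _mint(owner(), INITIAL_SUPPLY, "", ""); // the Ownable function
    _pause();
}
```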
As to the OP's question:
This means that myGameContract.depositPlayer is not a function. Go figure out why you think it is, and the compiler thinks it isn't.

Reactor framework confusion with Assembly time and subscription time (when to call subscribe)

I'm confused about assembly time and subscription time. I know Monos are lazy and do not get executed until they are subscribed to. Below is a method.
public Mono<UserbaseEntityResponse> getUserbaseDetailsForEntityId(String id) {
    GroupRequest request = ImmutableGroupRequest
        .builder()
        .cloudId(id)
        .build();
    Mono<List<GroupResponse>> response = ussClient.getGroups(request);
    List<UserbaseEntityResponse.GroupPrincipal> groups = new CopyOnWriteArrayList<>();
    response.flatMapIterable(elem -> elem)
        .toIterable().iterator().forEachRemaining(
            groupResponse -> {
                groupResponse.getResources().iterator().forEachRemaining(
                    resource -> {
                        groups.add(ImmutableGroupPrincipal
                            .builder()
                            .groupId(resource.getId())
                            .name(resource.getDisplayName())
                            .addAllUsers(convertMemebersToUsers(resource))
                            .build());
                    }
                );
            }
        );
    log.debug("Response Object - " + groups.toString());
    ImmutableUserbaseEntityResponse res = ImmutableUserbaseEntityResponse
        .builder()
        .userbaseId(id)
        .addAllGroups(groups)
        .build();
    Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
        .parallel()
        .runOn(Schedulers.parallel())
        .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
        .sequential();
    return Mono.just(res);
}
This gets executed without my calling subscribe: Mono<List<GroupResponse>> response = ussClient.getGroups(request); However, the code below will not get executed unless I call subscribe on it.
Flux<UserbaseEntityResponse.GroupPrincipal> f = Flux.fromIterable(res.getGroups())
    .parallel()
    .runOn(Schedulers.parallel())
    .doOnNext(groupPrincipal -> getResourcesForGroup((ImmutableGroupPrincipal) groupPrincipal, res.getUserbaseId()))
    .sequential();
Can I get some more input on assembly time vs subscription?
"Nothing happens until you subscribe" isn't quite true in all cases. There are three scenarios in which a publisher (Mono or Flux) will be executed:
You subscribe;
You block;
The publisher is "hot".
Note that the above scenarios all apply to an entire reactive chain - i.e. if I subscribe to a publisher, everything upstream (dependent on that publisher) also executes. That's why frameworks can, and should call subscribe when they need to, causing the reactive chain defined in a controller to execute.
In your case it's actually the second of these - you're blocking, which is essentially a "subscribe and wait for the result(s)". Usually the methods that block are clearly labelled, but again that's not always the case - in your case it's the toIterable() method on Flux doing the blocking:
Transform this Flux into a lazy Iterable blocking on Iterator.next() calls.
But ah, you say, I'm not calling Iterator.next() - what gives?!
Well, implicitly you are by calling forEachRemaining():
The default implementation behaves as if:
while (hasNext())
    action.accept(next());
...and as per the above rule, since ussClient.getGroups(request) is upstream of this blocking call, it gets executed.
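The laziness itself is easy to model outside Reactor. A toy sketch (plain Java, not Reactor's API): a "publisher" is just a deferred computation, assembly builds the chain without running anything, and the "subscribe" (here, Supplier.get()) is what finally executes the source:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Toy model of assembly time vs subscription time.
class LazyDemo {
    static final List<String> log = new ArrayList<>();

    // Assembly: build the chain of suppliers; nothing executes yet.
    static Supplier<Integer> assemble() {
        Supplier<Integer> source = () -> { log.add("source"); return 21; };
        // A "map" step just wraps the source, still without executing it
        Supplier<Integer> mapped = () -> source.get() * 2;
        log.add("assembled");
        return mapped;
    }

    public static void main(String[] args) {
        Supplier<Integer> chain = assemble();
        // Only "assembled" is logged so far - the source has not run
        System.out.println(log);      // [assembled]
        // "Subscription": get() pulls through the whole chain
        System.out.println(chain.get()); // 42
        System.out.println(log);      // [assembled, source]
    }
}
```

Reactor's operators are conceptually the same: flatMapIterable only decorates the upstream Mono at assembly time, and in the code above it is the blocking toIterable() iteration that plays the role of get().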

Always finish Mono.usingWhen on cancel signal

We are writing a reactive WebFlux application that basically acquires a resource, does some amount of work in a closure, and at the end unlocks the resource with either the initial version or the updated one from the closure.
Ex:
Mono<ProductLock> lock = service.lock();

Mono.usingWhen(lock,
    (ProductLock state) -> service.doLongOperationAsync(state),
    (ProductLock state) -> service.unlock(state));
ProductLock is a definition of locked product meaning that we can do operations via HTTP API including multiple microservices.
service.doLongOperationAsync() - calls a few HTTP APIs which ARE EXPECTED to ALWAYS be finished once started (failure - is normal, but if operation started - it needs to be finished, because HTTP call cannot be rolled back)
service.unlock() - operation MUST be called only after successful or failed execution of doLongOperationAsync.
In happy scenarios everything works as expected: on success product unlocked, on failure also unlocked.
The problems come when the client who calls our service (SoapUI, Postman, any real client) drops the connection or times out: a cancel signal is generated and propagates up to the code above.
At this point, anything within service.doLongOperationAsync is stopped and service.unlock is called on cancel asynchronously.
Question: how can we prevent this from happening?
The requirements are:
once doLongOperationAsync started - it must finish
service.unlock(state) - must be called ONLY after doLongOperationAsync, even on cancel.
Spring Boot repro
MRE:
@Bean
public RouterFunction<ServerResponse> route() {
    return RouterFunctions.route().GET("/work", request -> processRequest()
        .flatMap(res -> ServerResponse.ok().bodyValue(res))).build();
}

private Mono<String> processRequest() {
    // Need unlock to execute exactly after doWorkAsync in any case
    return Mono.usingWhen(lock(), this::doWorkAsync, this::unlock)
        .doOnNext((id) -> System.out.println("Request processed:" + id))
        .doOnCancel(() -> System.out.println("Request cancelled"));
}

private Mono<UUID> lock() {
    return Mono.defer(() -> Mono.just(UUID.randomUUID())
        .doOnNext(id -> System.out.println("Locked:" + id)));
}

// Need this to finish no matter what
private Mono<String> doWorkAsync(UUID lockID) {
    return Mono.just(lockID).map(UUID::toString)
        .doOnNext(id -> System.out.println("Start working on:" + lockID))
        .delayElement(Duration.ofSeconds(10))
        .doOnNext(id -> System.out.println("Finished work on:" + id))
        // Should never be called
        .doOnCancel(() -> System.out.println("Processing cancelled:" + lockID));
}

private Mono<Void> unlock(UUID lockID) {
    return Mono.fromRunnable(() -> System.out.println("Unlocking:" + lockID));
}
Apparently what does work for this use case is using Mono/Flux.create with a sink. This way the work is initiated when subscribed, but is still not cancelled even when cancellation is requested:
Mono.create(sink ->
    doWorkAsync().subscribe(sink::success, sink::error, null, sink.currentContext())
);
When combining this with usingWhen, the whole usingWhen chain should be placed inside create.

How to stop replyTo so that @SendTo works

I have Java code similar to this in a class called "MyService" to receive messages, process the passed object, and return a response, with the intention of having the returned response use the configured exchange and routing key, as specified with the @SendTo annotation:
@RabbitListener(containerFactory = "myContainerFactory", queues = RabbitConfig.MY_QUEUE_NAME)
@SendTo("#{T(com.acme.config.RabbitOutboundConfig).OUTBOUND_EXCHANGE_NAME + '/' + myService.getRoutingKey()}")
public OrderResponse handlePaidOrder(Order order) {
    // do processing on the input Order object here...
    OrderResponse orderResponse = new OrderResponse();
    // fill up response object here
    return orderResponse;
}

public String getRoutingKey() {
    String routingKey;
    // .. custom logic to build a routing key
    return routingKey;
}
This makes sense and works fine. The problem I am having is that I can't figure out how to stop the "reply_to" property from appearing in the message. I know that if my sender configures a RabbitTemplate by calling setReplyAddress, that will result in a reply_to property and a correlation_id in the message.
However, if I simply do not call setReplyAddress, I still get a reply_to property, one that looks like this:
reply_to: amq.rabbitmq.reply-to.g2dkAAxyYWJiaXRAd3NK and so forth
and with that reply_to in the message, @SendTo has no effect. The Spring AMQP docs and this post, Dynamic SendTo annotation, state:
The @SendTo is only used if there's no replyTo in the message.
Furthermore, when I don't call setReplyAddress on the RabbitTemplate, I don't get a correlation-id either. I'm pretty sure I am going to need that. So, my question is: how do I get my sender to generate a correlation-id but not a reply-to, so that my receiver can use the @SendTo annotation?
Thanks much in advance.
The correlationId is for the sender; it's not needed with direct reply-to since the channel is reserved; you could add a MessagePostProcessor on the sending side to add a correlationId.
The @SendTo is a fallback in case there is no reply_to header.
If you want to change that behavior, you can add an afterReceivePostProcessor to the listener container to remove the replyTo from the MessageProperties:
container.setAfterReceivePostProcessors(m -> {
    m.getMessageProperties().setReplyTo(null);
    return m;
});
Bear in mind, though, that if the sender set a replyTo, he is likely expecting a reply, so sending the reply someplace else is going to disappoint him and likely will cause some delay there until the reply times out.
If you mean you want to send an initial reply someplace else that does some more work and then finally replies to the originator, then you should save off the replyTo in another header, and reinstate it (or use an expression that references the saved-off header).