make-basic.publish with default value for exchange gives exception error: bad argument

I'm trying to follow the RabbitMQ hello world tutorial in Lisp Flavoured Erlang. The tutorial is for Elixir; with the help of the Erlang RabbitMQ client user guide I'm trying to translate the steps to LFE. To publish a message, I need a basic.publish record.
When I try:
(make-basic.publish routing_key #"hello")
(in a function in my lfe-file, which I call from the REPL).
This results in:
** exception error: bad argument
in min_make_record:not-working-queue-declare/0 (/home/.../min_make_record/_build/default/lib/amqp_client/include/amqp_client.hrl, line 26)
When I call make-basic.publish with the same arguments from the REPL, it returns the record as expected.
The error appears to be related to the default argument for exchange. The record is defined in rabbit_common/include/rabbit_framing.hrl as:
-record('basic.publish', {ticket = 0, exchange = <<"">>, routing_key = <<"">>, mandatory = false, immediate = false}).
The following do work:
make-basic.publish from the REPL
passing an argument for both exchange and routing_key:
(make-basic.publish exchange #"" routing_key #"hello")
Removing exchange and routing_key from the record (or a copy of it) and then calling make-basic.publish
calling (make-amqp_params_network) from amqp_client.hrl; this record also has binary strings as defaults:
-record(amqp_params_network, {username = <<"guest">>, password = <<"guest">>, virtual_host = <<"/">>, host = "localhost", port = undefined, channel_max = 2047, frame_max = 0, heartbeat = 10, connection_timeout = 60000, ssl_options = none, auth_mechanisms = [fun amqp_auth_mechanisms:plain/3, fun amqp_auth_mechanisms:amqplain/3], client_properties = [], socket_options = []}).
Is there a difference in the syntactic sugar LFE generates for direct includes and transitive includes?
Or does the dot in the record name cause problems?
I tried:
starting amqp_client (via .app.src): amqp_client must be started, otherwise there is another error.
including both amqp_client.hrl and rabbit_framing.hrl from my lfe-file: this results in lots of "record ... already defined" errors.
(Adding an include guard to rabbit_framing.hrl does not help.)

WebSphere wsadmin testConnection error message

I'm trying to write a script to test all DataSources of a WebSphere Cell/Node/Cluster. While this is possible from the Admin Console, a script is better for certain audiences.
So I found the following article from IBM, https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/txml_testconnection.html, which looks promising as it describes exactly what I need.
After having a basic script like:
ds_ids = AdminConfig.list("DataSource").splitlines()
for ds_id in ds_ids:
    AdminControl.testConnection(ds_id)
I experienced some undocumented behavior. Contrary to the article above, the testConnection function does not always return a String, but may also throw an exception.
So I simply use a try-except block:
import sys  # for sys.exc_info()

try:
    AdminControl.testConnection(ds_id)
except:  # it actually is a com.ibm.ws.scripting.ScriptingException
    exc_type, exc_value, exc_traceback = sys.exc_info()
Now, when I print exc_value, this is what one gets:
com.ibm.ws.scripting.ScriptingException: com.ibm.websphere.management.exception.AdminException: javax.management.MBeanException: Exception thrown in RequiredModelMBean while trying to invoke operation testConnection
Now this error message is always the same no matter what's wrong. I tested authentication errors, missing WebSphere Variables and missing driver classes.
While the Admin Console prints reasonable messages, the script keeps printing the same meaningless message.
The very weird thing is that as long as I don't catch the exception and the script just exits with the error, a descriptive error message is shown.
Accessing the Java exception's cause via exc_value.getCause() gives None.
I've also had a look at the DataSource MBeans, but as they only exist if the servers are started, I quickly gave up on them.
I hope someone knows how to access the error messages I see when not catching the Exception.
Thanks in advance.
After all the research and testing, AdminControl seems to be nothing more than a convenience facade over some of the commonly used MBeans.
So I tried issuing the Test Connection service directly (like in the Java example here: https://www.ibm.com/support/knowledgecenter/en/SSEQTP_8.5.5/com.ibm.websphere.base.doc/ae/cdat_testcon.html):
from com.ibm.ws.scripting import ScriptingException  # the wrapper exception wsadmin throws

ds_id = AdminConfig.list("DataSource").splitlines()[0]
# other queries may be 'process=server1' or 'process=dmgr'
ds_cfg_helpers = __wat.AdminControl.queryNames("WebSphere:process=nodeagent,type=DataSourceCfgHelper,*").splitlines()

try:
    # invoke MBean method directly
    warning_cnt = __wat.AdminControl.invoke(ds_cfg_helpers[0], "testConnection", ds_id)
    if warning_cnt == "0":
        print "success"
    else:
        print "%s warning(s)" % warning_cnt
except ScriptingException as exc:
    # get to the root of all evil ignoring exception wrappers
    exc_cause = exc
    while exc_cause.getCause():
        exc_cause = exc_cause.getCause()
    print exc_cause
This works the way I hoped for. The downside is that the code gets much more complicated if one needs to test DataSources that are defined on all kinds of scopes (Cell/Node/Cluster/Server/Application).
I don't need this so I left it out, but I still hope the example is useful to others too.
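For reference, a rough and untested sketch of that direction could query the DataSourceCfgHelper MBeans without the process filter and try each helper for each DataSource. This is my own extrapolation of the code above, not something from the IBM article, and it assumes that any running helper MBean can test any DataSource configuration:

from com.ibm.ws.scripting import ScriptingException

# query helper MBeans of all running processes instead of only the nodeagent
helpers = AdminControl.queryNames("WebSphere:type=DataSourceCfgHelper,*").splitlines()
for ds_id in AdminConfig.list("DataSource").splitlines():
    for helper in helpers:
        try:
            warning_cnt = AdminControl.invoke(helper, "testConnection", ds_id)
            print "%s -> %s warning(s)" % (ds_id, warning_cnt)
            break  # one helper that can reach this DataSource is enough
        except ScriptingException as exc:
            # walk down to the root cause, as in the example above
            exc_cause = exc
            while exc_cause.getCause():
                exc_cause = exc_cause.getCause()
            print "%s -> %s" % (ds_id, exc_cause)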

Full example of Message Broker in Lagom

I'm trying to implement a message broker setup with Lagom 1.2.2 and have run into a wall. The documentation has the following example for the service descriptor:
default Descriptor descriptor() {
    return named("helloservice").withCalls(...)
        // here we declare the topic(s) this service will publish to
        .publishing(
            topic("greetings", this::greetingsTopic)
        )
        ....;
}
And this example for the implementation:
public Topic<GreetingMessage> greetingsTopic() {
    return TopicProducer.singleStreamWithOffset(offset -> {
        return persistentEntityRegistry
            .eventStream(HelloEventTag.INSTANCE, offset)
            .map(this::convertEvent);
    });
}
However, there's no example of what the argument and return types of the convertEvent() function are, and this is where I'm drawing a blank. On the other end, the subscriber to the message broker seems to be consuming GreetingMessage objects, but when I create a convertEvent function that returns GreetingMessage objects, I get a compilation error:
Error:(61, 21) java: method map in class akka.stream.javadsl.Source<Out,Mat> cannot be applied to given types;
required: akka.japi.function.Function<akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset>,T>
found: this::convertEvent
reason: cannot infer type-variable(s) T
(argument mismatch; invalid method reference
incompatible types: akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset> cannot be converted to com.example.GreetingMessage)
Are there any more thorough examples of how to use this? I've already checked the Chirper sample app and it doesn't seem to have an example of this.
Thanks!
The error message you pasted tells you exactly what map expects:
required: akka.japi.function.Function<akka.japi.Pair<com.example.GreetingEvent,com.lightbend.lagom.javadsl.persistence.Offset>,T>
So, you need to pass a function that takes Pair<GreetingEvent, Offset>. What should the function return? Well, update it to take that, and then you'll get the next error, which once again will tell you what it was expecting you to return, and in this instance you'll find it's Pair<GreetingMessage, Offset>.
To explain what these types are: Lagom needs to track which events have been published to Kafka, so that when you restart a service, it doesn't start from the beginning of your event log and republish all the events from the beginning of time again. It does this by using offsets. So the event log produces pairs of events and offsets, and you need to transform these events into the messages that will be published to Kafka; when you return the transformed message to Lagom, it needs to be in a pair with the offset that you got from the event log, so that after publishing to Kafka, Lagom can persist the offset and use it as the starting point the next time the service is restarted.
A full example can be seen here: https://github.com/lagom/online-auction-java/blob/a32e696/bidding-impl/src/main/java/com/example/auction/bidding/impl/BiddingServiceImpl.java#L91

Ryu Controller Drop Packet

How do I send a flow entry to drop a packet using Ryu? I've learned from tutorials how to send a packet-out message:
I define the action:
actions = [ofp_parser.OFPActionOutput(ofp.OFPP_FLOOD)]
Then the entry itself:
out = ofp_parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port, actions=actions)
Send the message to the switch:
dp.send_msg(out)
I'm trying to find documentation on how to make this code drop the packet instead of flooding it, without success. I imagine I'll have to change actions in the first step and ofp_parser.OFPPacketOut in the second step. I need someone more experienced with Ryu and development in general to point me in the right direction. Thank you.
The default disposition of a packet in OpenFlow is to drop it. Therefore, if you have a flow rule that should drop the packets it matches, simply give it a CLEAR_ACTIONS instruction and no other instruction. With no actions and no instruction to go to another table, no further tables are processed and the packet is dropped.
Remember to keep your flow priorities in mind. If you have more than one flow rule that will match the packet, the one with the highest priority takes effect, so your "drop packet" rule could be hidden behind a higher-priority flow rule.
Here is some code I have that will drop all traffic matching a given EtherType, assuming that no higher-priority flow rule matches first. The function depends on a couple of instance variables, namely datapath, proto, and parser.
def dropEthType(self, match_eth_type=0x0800):
    parser = self.parser
    proto = self.proto
    match = parser.OFPMatch(eth_type=match_eth_type)
    instruction = [
        parser.OFPInstructionActions(proto.OFPIT_CLEAR_ACTIONS, [])
    ]
    msg = parser.OFPFlowMod(self.datapath,
                            table_id=OFDPA_FLOW_TABLE_ID_ACL_POLICY,
                            priority=1,
                            command=proto.OFPFC_ADD,
                            match=match,
                            instructions=instruction)
    self._log("dropEthType : %s" % str(msg))
    reply = api.send_msg(self.ryuapp, msg)
    if reply:
        raise Exception
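For completeness, here is a minimal, untested sketch of the same idea for a plain Ryu application, without the OFDPA-specific table id and helper objects used above. It assumes an OpenFlow 1.3 switch and takes the parser and ofproto objects straight from the datapath; the function name and defaults are illustrative:

def add_drop_flow(datapath, match_eth_type=0x0800, priority=1):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    match = parser.OFPMatch(eth_type=match_eth_type)
    # CLEAR_ACTIONS plus no goto-table instruction means matching packets are dropped
    instructions = [parser.OFPInstructionActions(ofproto.OFPIT_CLEAR_ACTIONS, [])]
    mod = parser.OFPFlowMod(datapath=datapath,
                            priority=priority,
                            command=ofproto.OFPFC_ADD,
                            match=match,
                            instructions=instructions)
    datapath.send_msg(mod)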

Understanding Erlang ODBC application

I'm connecting to a DB source with Erlang ODBC. My code looks like:
main() ->
    Sql = "SELECT 1",
    Connection = connect(),
    case odbc:sql_query(Connection, Sql) of
        {selected, Columns, Results} ->
            io:format("Success!~n Columns: ~p~n Results: ~p~n",
                      [Columns, Results]),
            ok;
        {error, Reason} ->
            {error, Reason}
    end.

connect() ->
    ConnectionString = "DSN=dsn_name;UID=uid;PWD=pqd",
    odbc:start(),
    {ok, Conn} = odbc:connect(ConnectionString, []),
    Conn.
It's OK now. But how can I handle errors, at least in my query? As I understand it, the error is contained in {error, Reason}, but how can I output it when something goes wrong? I tried adding an io:format call like in the first clause, but it doesn't work.
Second, I unfortunately can't find any reference that explains the syntax well, so I can't understand what ok means in this code (the first on line 8, the second on line 16). If I'm right, it just means that the connection is ok and this variable isn't assigned? But what does it mean on line 8?
ok in line 8 is the return value of the case statement when the call to odbc:sql_query(Connection, Sql) returns a result that can match the expression {selected, Columns, Results}. In this case it is useless since the function io:format(...) already returns ok.
The second ok, in {ok, Conn}, is a very common Erlang usage: the function returns a tuple {ok, Value} in case of success and {error, Reason} in case of failure. So you can match on the success case and extract the returned value with this single line: {ok, Conn} = odbc:connect(ConnectionString, []),
In this case the function connect() doesn't handle the error case, so this code has 4 different possible behaviors:
It can fail to connect to the database: the process will crash with a badmatch error at line 16.
It connects to the database but the query fails: the main function will return the value {error,Reason}.
It connects to the database and the query returns an answer that doesn't match the tuple {selected, Columns, Results}: the process will crash with a case_clause error at line 4.
It connects to the database and the query returns an answer that matches the tuple {selected, Columns, Results}: the function will print
Success!
Columns: Column
Results: Result
and return ok.
So I found something. {error, Reason} contains the connection errors, meaning for example that we specified a wrong DSN name. Regarding my idea of catching query errors, we can read this in the Erlang ODBC documentation:
Guards: All API-functions are guarded and if you pass an argument of the wrong type a runtime error will occur. All input parameters to internal functions are trusted to be correct. It is a good programming practise to only distrust input from truly external sources. You are not supposed to catch these errors, it will only make the code very messy and much more complex, which introduces more bugs and in the worst case also covers up the actual faults. Put your effort on testing instead, you should trust your own input.
This means we should be careful about what we write. That's not bad.

Network Steganography StegoSip Tool

I am trying to use the StegoSIP tool coupled with the Ekiga softphone. Ekiga finally works, but when I run StegoSIP, it gives me the warning that cb() takes exactly 3 arguments (2 given).
I found the function in the code, and my opinion is that StegoSIP does not recognize my conversation (the third argument). I checked the port and everything looks OK (SIP uses port 5060).
I understand that this question is about details, but I have wasted too much time trying to fix this and I am desperate.
StegoSIP: https://github.com/epinna/Stegosip
The problematic code:
def cb(self, i, nf_payload):
    """
    Callback function of packet processing.
    Get corresponding dissector and direction of packets with .getLoadedDissectorByMarker()
    and send to the correct dissector using checkPkt() and processPkt().
    """
    data = nf_payload.get_data()
    pkt = stegoIP(data)
    marker = nf_payload.get_nfmark()
    dissector, incoming = dissector_dict.dissd.getLoadedDissectorByMarker(marker)
    pkt.incoming = incoming
    if not dissector:
        nf_payload.set_verdict(nfqueue.NF_ACCEPT)
    else:
        dissector.checkPkt(pkt)
        if pkt.extracted_payload:
            dissector.processPkt(pkt, nf_payload)
    return 1
The output is:
TypeError: cb() takes exactly 3 parameters (2 given)
Callback failure!
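In case it helps with interpreting that error: the arity mismatch is consistent with the callback convention differing between python-nfqueue builds, where some versions invoke the callback as cb(queue_index, payload) and others as cb(payload) only. The following is a purely hypothetical, untested sketch of an adapter under that assumption; 'handler' stands for whatever StegoSIP object owns cb(), and the set_callback call mirrors the usual nfqueue registration:

def cb_adapter(nf_payload):
    # supply a dummy value for the unused first parameter 'i'
    return handler.cb(0, nf_payload)

# q.set_callback(cb_adapter)   # instead of q.set_callback(handler.cb)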