Network Steganography StegoSip Tool - udp

I am trying to use the StegoSip tool together with the Ekiga softphone. Ekiga finally works, but when I run StegoSip it gives me the warning that cb() takes exactly 3 arguments (2 given).
I found the function in the code, and my guess is that StegoSip does not recognize my conversation (the third argument). I checked the port and everything looks fine (SIP uses port 5060).
I understand this is a question about details, but I have wasted too much time trying to fix this and I am desperate.
StegoSIP https://github.com/epinna/Stegosip
The problematic code:
def cb(self, i, nf_payload):
    """
    Callback function of packet processing.
    Get corresponding dissector and direction of packets with .getLoadedDissectorByMarker()
    and send to the correct dissector using checkPkt() and processPkt().
    """
    data = nf_payload.get_data()
    pkt = stegoIP(data)
    marker = nf_payload.get_nfmark()
    dissector, incoming = dissector_dict.dissd.getLoadedDissectorByMarker(marker)
    pkt.incoming = incoming
    if not dissector:
        nf_payload.set_verdict(nfqueue.NF_ACCEPT)
    else:
        dissector.checkPkt(pkt)
        if pkt.extracted_payload:
            dissector.processPkt(pkt, nf_payload)
    return 1
The output is:
TypeError: cb() takes exactly 3 arguments (2 given)
Callback failure!
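The arity mismatch suggests the installed python-nfqueue binding invokes the callback with a different convention than the one StegoSip was written against: older releases pass a queue number plus the payload, while newer ones pass only the payload, so the bound method receives self plus one argument (2) instead of the expected 3. Below is a minimal, hedged sketch of a compatibility shim, not an official StegoSip fix, that accepts either convention:
def cb(self, first, *rest):
    # Sketch only: assumes the binding calls either cb(payload) or
    # cb(queue_num, payload). If two positional arguments arrive, the
    # payload is the second one; otherwise the single argument is the payload.
    nf_payload = rest[0] if rest else first
    data = nf_payload.get_data()
    # ... rest of the original body unchanged ...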

Sometimes the Google Geocoding API returns a 500 server error

Sometimes the Google Maps Geocoding API returns a 500 server error response for German postal codes and I cannot understand why.
I hope it is specific enough.
Any ideas?
https://maps.googleapis.com/maps/api/geocode/json?key={api_key}&address={postal_code}&language=de&region=de&components=country:DE&sensor=false
Since you specify that the problem is not a particular address but a seemingly "random" behavior, this may fall under behavior that is also documented for other well-known Google APIs.
As in those cases, the recommended strategy for the Geocoding API is exponential backoff, which basically means retrying the request after an increasing delay.
In case the above link goes down or changes, I'm quoting the article:
Exponential Backoff
In rare cases something may go wrong serving your request; you may receive a 4XX or 5XX HTTP response code, or the TCP connection may simply fail somewhere between your client and Google's server. Often it is worthwhile re-trying the request as the followup request may succeed when the original failed. However, it is important not to simply loop repeatedly making requests to Google's servers. This looping behavior can overload the network between your client and Google causing problems for many parties.
A better approach is to retry with increasing delays between attempts. Usually the delay is increased by a multiplicative factor with each attempt, an approach known as Exponential Backoff.
For example, consider an application that wishes to make this request to the Google Maps Time Zone API:
https://maps.googleapis.com/maps/api/timezone/json?location=39.6034810,-119.6822510&timestamp=1331161200&key=YOUR_API_KEY
The following Python example shows how to make the request with exponential backoff:
import json
import time
import urllib
import urllib2

def timezone(lat, lng, timestamp):
    # The maps_key defined below isn't a valid Google Maps API key.
    # You need to get your own API key.
    # See https://developers.google.com/maps/documentation/timezone/get-api-key
    maps_key = 'YOUR_KEY_HERE'
    timezone_base_url = 'https://maps.googleapis.com/maps/api/timezone/json'

    # This joins the parts of the URL together into one string.
    url = timezone_base_url + '?' + urllib.urlencode({
        'location': "%s,%s" % (lat, lng),
        'timestamp': timestamp,
        'key': maps_key,
    })

    current_delay = 0.1  # Set the initial retry delay to 100ms.
    max_delay = 3600     # Set the maximum retry delay to 1 hour.

    while True:
        try:
            # Get the API response.
            response = str(urllib2.urlopen(url).read())
        except IOError:
            pass  # Fall through to the retry loop.
        else:
            # If we didn't get an IOError then parse the result.
            result = json.loads(response.replace('\\n', ''))
            if result['status'] == 'OK':
                return result['timeZoneId']
            elif result['status'] != 'UNKNOWN_ERROR':
                # Many API errors cannot be fixed by a retry, e.g. INVALID_REQUEST or
                # ZERO_RESULTS. There is no point retrying these requests.
                raise Exception(result['error_message'])

        if current_delay > max_delay:
            raise Exception('Too many retry attempts.')
        print 'Waiting', current_delay, 'seconds before retrying.'
        time.sleep(current_delay)
        current_delay *= 2  # Increase the delay each time we retry.

tz = timezone(39.6034810, -119.6822510, 1331161200)
print 'Timezone:', tz
Of course this will not resolve the "false responses" you mention; I suspect that depends on data quality and does not happen randomly.
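For the Geocoding endpoint in your question, the same retry pattern could look like the sketch below (a minimal Python 3 illustration under the assumption that you only retry on network failures, 5xx responses and UNKNOWN_ERROR, exactly as in the quoted example; api_key and postal_code are placeholders you must supply):
import json
import time
import urllib.parse
import urllib.request

GEOCODE_URL = 'https://maps.googleapis.com/maps/api/geocode/json'

def geocode_de(postal_code, api_key, max_delay=60):
    url = GEOCODE_URL + '?' + urllib.parse.urlencode({
        'key': api_key,
        'address': postal_code,
        'language': 'de',
        'region': 'de',
        'components': 'country:DE',
    })
    delay = 0.1  # initial retry delay: 100 ms
    while True:
        try:
            with urllib.request.urlopen(url) as resp:
                result = json.load(resp)
        except IOError:
            pass  # network failures and 5xx responses land here; retry below
        else:
            if result['status'] == 'OK':
                return result['results']
            if result['status'] != 'UNKNOWN_ERROR':
                # e.g. INVALID_REQUEST or ZERO_RESULTS: retrying will not help
                raise RuntimeError(result.get('error_message', result['status']))
        if delay > max_delay:
            raise RuntimeError('Too many retry attempts.')
        time.sleep(delay)
        delay *= 2  # exponential backoff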

Ryu Controller Drop Packet

How do I send a flow entry to drop a packet using Ryu? I've learned from tutorials how to send a packet-out message:
I define the action:
actions = [ofp_parser.OFPActionOutput(ofp.OFPP_FLOOD)]
Then the entry itself:
out = ofp_parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port, actions=actions)
Send the message to the switch:
dp.send_msg(out)
I'm trying to find documentation on how to make this code drop the packet instead of flooding, without success. I imagine I'll have to change actions in the first step and ofp_parser.OFPPacketOut in the second step. I need someone more experienced with Ryu and OpenFlow development to point me in the right direction. Thank you.
The default disposition of a packet in OpenFlow is to drop it. Therefore, if you want a flow rule that drops the packet when it matches, simply install it with a CLEAR_ACTIONS instruction and no other instruction: no other tables will be processed, since there is no go-to-table instruction and no actions remain.
Remember to keep your flow priorities in mind. If more than one flow rule matches the packet, the one with the highest priority takes effect, so your "drop packet" rule could be hidden behind a higher-priority rule.
Here is some code that I have that will drop all traffic matching a given EtherType, assuming no higher-priority rule matches first. The function depends on a couple of instance variables, namely datapath, proto, and parser.
def dropEthType(self, match_eth_type=0x0800):
    parser = self.parser
    proto = self.proto
    match = parser.OFPMatch(eth_type=match_eth_type)
    instruction = [
        parser.OFPInstructionActions(proto.OFPIT_CLEAR_ACTIONS, [])
    ]
    msg = parser.OFPFlowMod(self.datapath,
                            table_id=OFDPA_FLOW_TABLE_ID_ACL_POLICY,
                            priority=1,
                            command=proto.OFPFC_ADD,
                            match=match,
                            instructions=instruction)
    self._log("dropEthType : %s" % str(msg))
    reply = api.send_msg(self.ryuapp, msg)
    if reply:
        raise Exception
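For the packet-out case in the original question, a common approach (sketched below, assuming dp, ofp_parser and msg are the same objects as in the question's snippet) is to send the OFPPacketOut with an empty actions list: with no output action, the buffered packet is simply discarded.
# Sketch: release the buffered packet with no actions, i.e. drop it
# instead of flooding it. dp, ofp_parser and msg are assumed to be the
# same objects used in the question.
actions = []
out = ofp_parser.OFPPacketOut(datapath=dp,
                              buffer_id=msg.buffer_id,
                              in_port=msg.in_port,
                              actions=actions)
dp.send_msg(out)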

Why does my code run 2 different ways with the exact same code?

I have started developing an application that monitors WebSocket messages from all clients connected to my WebSocket server; the server relays every message it receives to this monitoring application.
Problem
When I run my program (in Visual Studio I hit Start), it builds and starts up perfectly and performs most of its functionality the same way every time. However, one small portion of code frequently does not run the same way. Below is that snippet.
msg = "set name monitor"
SendMessage2(socket, msg, msg.Length)
msg = "set monitor 1"
SendMessage2(socket, msg, msg.Length)
Console.WriteLine("We are after our second SendMessage2 function")
I know that the two calls to SendMessage2 are always executed because Visual Studio's debug console will output the following:
We are at the end of the SendMessage2 Sub
We are at the end of the SendMessage2 Sub
We are after our second SendMessage2 function
I also know whether it executed correctly because my WebSocket server will output one of the two blocks below.
Output when app runs correctly
Client 4 connected
New thread created
Connection received. Parsing headers.
Message from socket #4: "set name monitor"
Message from socket #4: "set monitor 1"
Output when app runs incorrectly
Client 4 connected
New thread created
Connection received. Parsing headers.
Message from socket #4: "set name monitor"
Notice how the second output is missing the second message from the monitor application.
What have I tried
Using a string variable to call the functions
Calling the functions using static string arguments (not using the variable msg)
SyncLocking the functions separately
SyncLocking inside the SendMessage2 function
Reordering the functions (swapping the strings to change behavior)
TL;DR
Why is it that, even when I do not change my code, my program executes in two different ways? Am I doing something incorrectly when calling my SendMessage2 Sub?
I am all out of ideas. I am willing to try any recommendation to fix this problem.
All code can be found on GitHub here
So I figured it out.
It is actually not the VB application that is misbehaving, nor is it my server. While debugging I was looking at the number of bytes received by my server and I noticed the following:
Client 4 connected
New thread created
Connection received. Parsing headers.
bytes read: 25
Message from socket #4: "set name monitor"
bytes read: 22
Message from socket #4: "set monitor 1"
OK, great: we have 25 bytes from "set name monitor" and 22 bytes from "set monitor 1".
Client 4 connected
New thread created
Connection received. Parsing headers.
bytes read: 47
Message from socket #4: "set name monitor"
And boom. Both programs were doing their jobs, sending and reading the correct number of bytes every time. However, the VB application sends the two messages so quickly back to back that my server reads all 47 bytes in a single read instead of the separate 25 and 22 bytes.
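This is ordinary TCP behavior: TCP is a byte stream with no message boundaries, so two writes issued back to back can be returned by a single read on the other side. The short, self-contained Python sketch below (purely illustrative, unrelated to the actual VB/C code; the port number is arbitrary) reproduces the effect:
import socket
import threading
import time

def server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 5599))
    srv.listen(1)
    conn, _ = srv.accept()
    time.sleep(0.2)          # wait so both sends are likely queued before reading
    data = conn.recv(4096)   # often returns both "messages" in one call
    print("server read %d bytes: %r" % (len(data), data))
    conn.close()
    srv.close()

t = threading.Thread(target=server)
t.start()
time.sleep(0.1)

cli = socket.socket()
cli.connect(("127.0.0.1", 5599))
cli.sendall(b"set name monitor")  # two back-to-back sends...
cli.sendall(b"set monitor 1")     # ...frequently arrive as one recv()
cli.close()
t.join()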
Solution
I solved this problem by implementing a secondary buffer in my server to store any bytes that follow the first message when multiple messages are grouped together like this. Now I check whether my secondaryBuffer is empty before reading in new bytes.
Here is a portion of the code used to solve the problem
/* Byte check: look for a second message packed into the same read */
for (j = 0; j < bytes; j++) {
    if (j < 2)
        continue;  /* need two preceding bytes to compare against */
    if (readBuffer[j] == '\x81' && readBuffer[j-1] == '\x00' && readBuffer[j-2] == '\x00') {
        secondaryBytes = bytes - j;
        printf("Potential second message attached to this message\nCopying it to the secondary buffer.\n");
        memcpy(secondaryBuffer, readBuffer + j, secondaryBytes);
        break;
    } //END IF
} //END FOR LOOP

Erlang process event error

I'm basically following the tutorial on this site: Learn You Some Erlang: Designing a Concurrent Application. I tried to run the code below with the following commands and got an error on line 48. I turned off my firewall in case that was the problem, but no luck. I'm on Windows XP SP3.
9> c(event).
{ok,event}
10> f().
ok
11> event:start("Event",0).
=ERROR REPORT==== 9-Feb-2013::15:05:07 ===
Error in process <0.61.0> with exit value: {function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
<0.61.0>
12>
-module(event).
-export([start/2, start_link/2, cancel/1]).
-export([init/3, loop/1]).

-record(state, {server,
                name="",
                to_go=0}).

%%% Public interface
start(EventName, DateTime) ->
    spawn(?MODULE, init, [self(), EventName, DateTime]).

start_link(EventName, DateTime) ->
    spawn_link(?MODULE, init, [self(), EventName, DateTime]).

cancel(Pid) ->
    %% Monitor in case the process is already dead
    Ref = erlang:monitor(process, Pid),
    Pid ! {self(), Ref, cancel},
    receive
        {Ref, ok} ->
            erlang:demonitor(Ref, [flush]),
            ok;
        {'DOWN', Ref, process, Pid, _Reason} ->
            ok
    end.

%%% Event's innards
init(Server, EventName, DateTime) ->
    loop(#state{server=Server,
                name=EventName,
                to_go=time_to_go(DateTime)}).

%% Loop uses a list for times in order to go around the ~49 days limit
%% on timeouts.
loop(S = #state{server=Server, to_go=[T|Next]}) ->
    receive
        {Server, Ref, cancel} ->
            Server ! {Ref, ok}
    after T*1000 ->
        if Next =:= [] ->
               Server ! {done, S#state.name};
           Next =/= [] ->
               loop(S#state{to_go=Next})
        end
    end.

%%% private functions
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
    Now = calendar:local_time(),
    ToGo = calendar:datetime_to_gregorian_seconds(TimeOut) -
           calendar:datetime_to_gregorian_seconds(Now),
    Secs = if ToGo > 0  -> ToGo;
              ToGo =< 0 -> 0
           end,
    normalize(Secs).

%% Because Erlang is limited to about 49 days (49*24*60*60*1000) in
%% milliseconds, the following function is used
normalize(N) ->
    Limit = 49*24*60*60,
    [N rem Limit | lists:duplicate(N div Limit, Limit)].
It's running purely locally on your machine so the firewall will not affect it.
The problem is the second argument you gave when you started it: event:start("Event", 0).
The error reason:
{function_clause,[{event,time_to_go,[0],[{file,"event.erl"},{line,48}]},{event,init,3,[{file,"event.erl"},{line,31}]}]}
says that it is a function_clause error which means that there was no clause in the function definition which matched the arguments. It also tells you that it was the function event:time_to_go/1 on line 48 which failed and that it was called with the argument 0.
If you look at the function time_to_go/1 you will see that it expects its argument to be a tuple of 2 elements, where each element is a tuple of 3 elements:
time_to_go(TimeOut={{_,_,_}, {_,_,_}}) ->
The structure of this argument is {{Year,Month,Day},{Hour,Minute,Second}}. If you follow this argument backwards, you will see that time_to_go/1 is called from init/3, where the argument to time_to_go/1, DateTime, is the 3rd argument to init/3. Almost there now. Now init/3 is the function run by the process spawned in start/2 (and start_link/2), and the 3rd argument to init/3 is the second argument to start/2.
So when you call event:start("Event",0). it is this 0 which is passed into the time_to_go/1 call in the new process, and its format is wrong. You should be calling it with something like event:start("Event", {{2013,3,24},{17,53,12}}).
To add background to rvirding's answer: you get the error because, as far as I know, the example works up until the final code snippet. The normalize function is used first, which deals with the problem. Then, in the paragraph right after the example in the question above, the text says:
And it works! The last thing annoying with the event module is that we have to input the time left in seconds. It would be much better if we could use a standard format such as Erlang's datetime ({{Year, Month, Day}, {Hour, Minute, Second}}). Just add the following function that will calculate the difference between the current time on your computer and the delay you inserted:
The next snippet introduces the bit of code that takes only a date/time and changes it into the time left.
I could not easily link to all transitional versions of the file, which is why trying the linked file directly with the example doesn't work super easily in this case. If the code is followed step by step, snippet by snippet, everything should work fine. Sorry for the confusion.

Twisted IRC Bot connection lost repeatedly to localhost

I am trying to implement an IRC bot on a local server. The bot that I am using is identical to the one found on Eric Florenzano's blog. This is the simplified code (which should run):
import sys
import re
from twisted.internet import reactor
from twisted.words.protocols import irc
from twisted.internet import protocol

class MomBot(irc.IRCClient):
    def _get_nickname(self):
        return self.factory.nickname
    nickname = property(_get_nickname)

    def signedOn(self):
        print "attempting to sign on"
        self.join(self.factory.channel)
        print "Signed on as %s." % (self.nickname,)

    def joined(self, channel):
        print "attempting to join"
        print "Joined %s." % (channel,)

    def privmsg(self, user, channel, msg):
        if not user:
            return
        if self.nickname in msg:
            msg = re.compile(self.nickname + "[:,]* ?", re.I).sub('', msg)
            prefix = "%s: " % (user.split('!', 1)[0], )
        else:
            prefix = ''
        self.msg(self.factory.channel, prefix + "hello there")

class MomBotFactory(protocol.ClientFactory):
    protocol = MomBot

    def __init__(self, channel, nickname='YourMomDotCom', chain_length=3,
                 chattiness=1.0, max_words=10000):
        self.channel = channel
        self.nickname = nickname
        self.chain_length = chain_length
        self.chattiness = chattiness
        self.max_words = max_words

    def startedConnecting(self, connector):
        print "started connecting on {0}:{1}".format(
            str(connector.host), str(connector.port))

    def clientConnectionLost(self, connector, reason):
        print "Lost connection (%s), reconnecting." % (reason,)
        connector.connect()

    def clientConnectionFailed(self, connector, reason):
        print "Could not connect: %s" % (reason,)

if __name__ == "__main__":
    chan = sys.argv[1]
    reactor.connectTCP("localhost", 6667, MomBotFactory('#' + chan,
        'YourMomDotCom', 2, chattiness=0.05))
    reactor.run()
I added the startedConnecting method to the client factory, and it is reached and prints the proper host:port. The bot then disconnects, enters clientConnectionLost, and prints the error:
Lost connection ([Failure instance: Traceback (failure with no frames):
<class 'twisted.internet.error.ConnectionDone'>: Connection was closed cleanly.
]), reconnecting.
If working properly, it should join the appropriate channel, specified as the first argument on the command line (e.g. python module2.py botwar would join channel #botwar), and it should respond with "hello there" if anyone in the channel sends anything.
I have NGIRC running on the server, and it works if I connect from mIRC or any other IRC client.
I am unable to find a resolution as to why it is continually disconnecting. Any help on why would be greatly appreciated. Thank you!
One thing you may want to do is make sure you will see any error output produced by the server when your bot connects to it. My hunch is that the problem has something to do with authentication, or perhaps an unexpected difference in how ngirc handles one of the login/authentication commands used by IRCClient.
One approach that almost always applies is to capture a traffic log. Use a tool like tcpdump or wireshark.
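For example, assuming the bot and the IRC server both run on the same Linux machine (so the traffic is on the loopback interface; adjust the interface and file names to your setup):
tcpdump -i lo -w irc-traffic.pcap port 6667
You can then open the capture in Wireshark and look at the last commands exchanged before the server closes the connection.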
Another approach you can try is to enable logging inside the Twisted application itself. Use twisted.protocols.policies.TrafficLoggingFactory for this:
from twisted.protocols.policies import TrafficLoggingFactory
appFactory = MomBotFactory(...)
logFactory = TrafficLoggingFactory(appFactory, "irc-")
reactor.connectTCP(..., logFactory)
This will log output to files starting with "irc-" (a different file for each connection).
You can also hook directly into your protocol implementation, at any one of several levels. For example, to display any bytes received at all:
class MomBot(irc.IRCClient):
    def dataReceived(self, bytes):
        print "Got", repr(bytes)
        # Make sure to up-call - otherwise all of the IRC logic is disabled!
        return irc.IRCClient.dataReceived(self, bytes)
With one of those approaches in place, hopefully you'll see something like:
:irc.example.net 451 * :Connection not registered
which I think means... you need to authenticate? Even if you see something else, hopefully this will help you narrow in more closely on the precise cause of the connection being closed.
Also, you can use tcpdump or wireshark to capture the traffic log between ngirc and one of the working IRC clients (eg mIRC) and then compare the two logs. Whatever different commands mIRC is sending should make it clear what changes you need to make to your bot.