Pings do not work with manually set up flow rules (SDN)

I am currently playing around with ONOS and OpenFlow. I am using ONOS 2.0.0 and mininet-wifi. I have the following scenario: a wireless node moves between multiple access points, and I would like to set up flow rules for the current and the following access point. The topology looks like this:
The host with IP 10.0.0.1 moves between the access points. However, I cannot get pings to work between the two hosts. At the access points I have two rules forwarding everything from port 1 to port 2 and vice versa:
In the core switch, my manual flow rules look like this, for example:
What am I doing wrong here? What is the reason I cannot ping in this scenario? The rules installed by the reactive forwarding app do not look very different. One difference in the code is that I am using FlowRule objects while reactive forwarding uses ForwardingObjective objects, but I also tried that, without any difference.

The problem was that the ARP requests were not answered. I had to start the ProxyARP application of ONOS, which makes ONOS respond properly to received ARP requests. After that, the flow rules were used as expected for the ping packets.
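For reference, a minimal way to do that (assuming the stock application id, org.onosproject.proxyarp, which is what recent ONOS releases ship with) is from the ONOS CLI:
onos> app activate org.onosproject.proxyarp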


Can the DDS protocol be used to communicate between devices connected to different networks? How?

I am trying to implement a publish/subscribe hello world program for communication between two devices using Eclipse Cyclone DDS. I am able to do it when the devices are connected to the same network, but when the devices are on different networks there is no communication. As per my understanding, this is because of the default DDS domain, but how do I change it?
I followed https://github.com/eclipse-cyclonedds/cyclonedds
There is a mention of using an XML file, but I do not understand how to use it or where to put it.
Any suggestion would be of much help, thank you!
Cyclone DDS looks at the value of the CYCLONEDDS_URI environment variable to find its configuration file. What you can do is make an XML file somewhere on your computer and put its path in that environment variable. E.g., on Linux:
export CYCLONEDDS_URI=/path/to/cdds.xml
or on Windows (“cmd”, I don’t know how to do it in powershell):
set "CYCLONEDDS_URI=c:/path/to/cdds.xml"
Windows is a bit tricky with the quotes, this seems to work fine. Then, when you start your application, Cyclone DDS will read that file and apply the settings in it. Of course you also need to know what to put in it.
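(In PowerShell, for what it's worth, the equivalent should be setting the same variable:
$env:CYCLONEDDS_URI = "c:/path/to/cdds.xml"
with no special quoting tricks needed.)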
For that, it is useful to know a few things about the networks you are using. Within a single network it all works without any configuration, because UDP/IP multicast works semi-magically inside one network. If there are multiple networks, there is a router in between, and those routers are often configured not to route multicast traffic.
That means you basically have two options:
Configure the routers to route multicast traffic between the networks (especially the 239.255.0.1 address used by default by DDS). If that works, you’re all set, no need to configure anything in Cyclone DDS.
Disable the use of multicast and instead list the hostnames/IP addresses of the machines you want to communicate with in the configuration file. You still need a router willing to route traffic from the one network to the other, but that is usually not a problem with unicast packets. (If, for example, you can ping it or log in to it remotely, it's fine.)
For (2), something like:
<CycloneDDS>
  <Domain>
    <General>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Discovery>
      <ParticipantIndex>auto</ParticipantIndex>
      <Peers>
        <Peer Address="ip-of-node-1" />
        <Peer Address="ip-of-node-2" />
        <Peer Address="ip-of-node-3" />
      </Peers>
    </Discovery>
  </Domain>
</CycloneDDS>
should work (obviously with ip-of-node-1 etc. replaced with the correct addresses/hostnames). Setting “AllowMulticast” to false simply disables all use of multicast. If multicast doesn’t work reliably between all nodes, assuming that it does can give a broken system, so at this stage it is definitely easier to just not use it.
The “ParticipantIndex” has to do with the UDP port numbers it uses. With multicast, multiple processes on a single machine can all use the same UDP port number for receiving the discovery packets, and so there is this agreed-upon port number for discovery that makes everything work without any configuration (port number 7400 for domain id 0). That in turn allows it to use random port numbers for receiving unicast traffic.
With unicast, however, each process needs to have its own unique port number, and that in turn means the other processes need to know which port numbers to send the data to. Setting the “ParticipantIndex” to auto forces it to use predictable port numbers so that the processes can find each other.
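As a rough illustration of what “predictable” means here (assuming the standard DDSI port-mapping defaults of PB=7400, DG=250, PG=2, d1=10, which apply unless configured otherwise):
unicast discovery port = PB + DG * domainId + d1 + PG * participantIndex
                       = 7400 + 250 * 0 + 10 + 2 * i   (for domain id 0)
So participant index 0 listens on port 7410, index 1 on 7412, and so on; “auto” makes each process claim the first free index, and peers probe that same small range of ports.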

WireGuard with dynamic setup for IoT

At the moment we have multiple Raspberry Pis placed at different locations on different networks.
Our current solution for reaching them if something goes wrong is autossh with a jump host.
Recently I stumbled on WireGuard, which could be another, slimmer way to solve the calling-home problem.
The problem is that we would like the setup phase to be more dynamic: we don't want to do special configuration per node we have out there, we just want the nodes to call home with a key and then be part of the network.
Two questions:
Is WireGuard for us, or are there other problems here that I can't foresee?
Is there a way to set it up dynamically with one key and let the clients get random IPs?
WireGuard always needs a unique keypair per host, so it is not what you are looking for.
If you just want a phone-home option with IP connectivity, I would suggest an OpenVPN server and client. If you use a username/password configuration (not using client certificates), you can reuse the same config on multiple clients. OpenVPN will act as a DHCP server, handing out addresses to the clients.
A how-to:
https://openvpn.net/community-resources/how-to/
search for:
client-cert-not-required
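A server config fragment using that option roughly looks like the sketch below (the script path and address range are placeholders, and newer OpenVPN releases replace client-cert-not-required with "verify-client-cert none"):
port 1194
proto udp
dev tun
# hands out client addresses DHCP-style from this range
server 10.8.0.0 255.255.255.0
# accept username/password instead of per-client certificates
client-cert-not-required
auth-user-pass-verify /etc/openvpn/checkpass.sh via-file
username-as-common-name
# allow the same credentials to be used by many devices at once
duplicate-cn
Each device can then ship one identical client config containing auth-user-pass.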
The option that Maxim Sagaydachny suggests is also valid for command access; an alternative to Salt could be Puppet with mco/bolt.
Whichever option you choose, be sure that the daemon restarts when it crashes, the machine reboots, the connection fails, and so on.
For systemd services this would be an override containing:
[Service]
Restart=always
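A convenient way to create that override (the unit name is a placeholder):
sudo systemctl edit myvpn.service
This opens an editor, writes the snippet to /etc/systemd/system/myvpn.service.d/override.conf, and reloads systemd afterwards.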

NSURLSession - The request timed out

I'm posting data from my app to my server using NSURLSession when a button is pressed. I can successfully send the data to my server and insert it into a database the first two times, but any time after that, the request times out.
I've tried changing the session configuration (connections per host, timeoutInterval, etc.), the session configuration type, and the way the data is posted.
Has anyone seen this sort of behaviour before, and does anyone know how I can fix this issue?
Or is it a server issue? I thought my server was down initially: I couldn't connect to it, nor load certain pages. However, it was only down for me; after rebooting my modem, I could connect to the server again. I didn't have any issues connecting to phpMyAdmin.
If the problem was reproducible after a reboot of the router, then I would look into whether Apple's captive portal test servers were down at the time.
Otherwise, my suspicion is that it is a network problem rather than anything specific to your app.
It is quite possible that the pages you were loading successfully were coming from cache.
Because you said that rebooting your modem fixed the problem, that likely means that your modem stopped responding to either DHCP requests or DNS lookups (either for your domain or for one of the captive portal test domains).
It is also possible that you have a packet loss problem, and that it gets worse the longer your router has been up and running. This could cause some requests to complete and others to fail.
Occasionally, I've seen weird behavior vaguely similar to this when ICMP is getting blocked too aggressively.
I've also seen this when a stateful firewall loses its mind and forgets the state.
This can also be caused by keeping HTTP/HTTPS connections alive past the point at which the server gives up and drops the connection, if your firewall is blocking the packet that tells you that the connection was closed by the remote end.
But without a packet trace, there's no way to be certain. To get one:
If your network code is running on OS X, you can just do this with tcpdump on your network interface.
If you are doing this on iOS, you can do this by connecting your computer via wired Ethernet, enabling network sharing over Wi-Fi, pointing tcpdump at the bridge interface, and pointing your iPhone at that Wi-Fi network.
Either way, that will tell you if there are requests going out that never come back, and more importantly, what type of requests they are, and who is responsible for replying to them. Once you have that information, if the source of the problem isn't obvious, post a link to the packet trace and we'll add more suggestions.
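For example (the interface and host names here are placeholders; on recent macOS versions the Internet Sharing bridge typically shows up as bridge100):
sudo tcpdump -i en0 -w trace.pcap host my.server.example.com
sudo tcpdump -i bridge100 -w trace.pcap
The resulting trace.pcap file can then be inspected in Wireshark or shared for analysis.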

What is a proxy? What is it in Apache? Does it have many different meanings?

It has nothing to do with file descriptors. Is it some sort of connection between different protocols? Do more kinds exist? Reverse proxy? Direct proxy? Indirect proxy? Does a proxy sit at layer 3, layer 7, or some other layer of the OSI reference model? With NAT you have a layer-3 device, while the common proxy is layer-7, according to Wikipedia here. Wikipedia continues: "Because NAT operates at layer-3, it is less resource-intensive than the layer-7 proxy, but also less flexible". There are different kinds of ways of doing a proxy:
So now a very stupid and ignorant question: "What is a proxy in Apache?"
Other ignorant questions by which I try to understand proxies more deeply:
https://stackoverflow.com/questions/12397242/explain-apache-mod-proxy-module-is-it-overused-and-many-times-a-red-herring-w
Explain CouchDB's serving of websites: is CouchDB somehow bundled with Apache, and how does it work?
Apache is a layer-7 proxy (as far as OSI is concerned); it doesn't use network address translation or any type of packet mangling/rewriting. It receives a request and, based on some rules/configuration, makes a request on behalf of the client. Apache can act as a forward proxy and/or a reverse proxy. In your images above, Apache would be running on the blob that is red.
In the first image, Apache would be acting as a reverse proxy: it receives an HTTP request from the internet and proxies it to a specific place internally.
In the second image, Apache acts as a forward proxy: local users use it to request anything on the internet (within the rules/config).
In a reverse proxy, a request for a specific resource is received, e.g. http://my.homepage.com/, and Apache, knowing that the content is actually located internally at http://192.168.2.45/my.homepage/, proxies the request to the internal location.
In a forward proxy, a user on a LAN requests http://www.google.com/, and either the browser or the OS knows to send the request to a local proxy server (Apache, the red blob in the image), and Apache then makes the request to www.google.com on the user's behalf.
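As a configuration sketch of both modes (assuming mod_proxy and mod_proxy_http are loaded; the names and addresses are taken from the examples above):
# reverse proxy: requests for my.homepage.com are fetched from the internal host
<VirtualHost *:80>
    ServerName my.homepage.com
    ProxyPass / http://192.168.2.45/my.homepage/
    ProxyPassReverse / http://192.168.2.45/my.homepage/
</VirtualHost>
# forward proxy: LAN clients point their browsers at this server
ProxyRequests On
<Proxy *>
    # only allow local users to proxy through
    Order deny,allow
    Deny from all
    Allow from 192.168.2.0/24
</Proxy>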
There are different kinds of proxies! The key is a middleman: it is somehow in the middle of things A and B. I will now use the terminology of Tanenbaum (more here). He defines, for example, in the context of the Globus security model, two different proxies: the user proxy and the resource proxy. Then he defines the object proxy, which is an interface in distributed-object systems. Then he defines the web proxy, a somewhat ancient idea from the days when client-side web browsers lacked features such as FTP support.
Now, according to Jon Lin, reverse and forward proxies are similar to resource and user proxies, respectively. The object proxy and the web proxy are special kinds of implementations. I think each of them can actually be either a resource proxy or a user proxy. If you have an object proxy, it could be implemented in different ways: you could implement it so that the user gives rights to use it, hence a user proxy, or with more global activity where it has different methods by which it cooperates with the local environment from some global setting, hence a resource proxy.
Related: https://stackoverflow.com/questions/12398389/different-definitions-of-the-term-proxy/12398390#12398390

Error with DOJO when using IP

Strange error with a project using Dojo:
If I call http://localhost/project, everything works as expected.
If I call http://127.0.0.1/project, everything works as expected.
If I call http://192.168.2.1/project, I get the following error (ONLY in IE6!):
"Bundle not found, locale.."
Any ideas?
I am running Zend Server CE with PHP 5.2.
If I add 192.168.2.1 to "hosts", it works (Windows).
Sounds like Zend Server is performing some kind of virtual site support, using the site name as a partial domain.
I can't say 100% if or how it does this because I don't use Zend, but I can explain the principle using Apache as an example.
There are three ways in which a web site can be virtually hosted under a single web server application; this applies to most servers on the market today: Apache, IIS, nginx and many others.
It all boils down to one thing, giving one running server application instance the ability to host multiple individual websites.
The three methods of separating sites are as follows:
By IP address: If you have multiple IP addresses (usually, but not always, because you have multiple network interface cards), then you can tell your server application to listen on one IP for one site, another IP for another site, and so on. If you browse to one IP you'll get one site, and likewise the other site on the other IP.
By port number: If you're using only one IP address, you can bind to multiple port numbers. Port 80 is generally the default for web servers, but by browsing to an address and pinning the port number on the end (http://mysite.com:99) you'll force the browser to use that port. You can then have multiple websites listening on different ports and select them manually at browse time as required.
By host name header: This is by far the most common way of supporting multiple sites. All web servers that understand the HTTP/1.1 protocol have to obey a header field in the request that contains the host name; when a request comes in for e.g. http://mysite.com/, there will be an entry in the request header that looks like 'Host: mysite.com'. The web server can then use that to recognise which site is meant and select and serve the correct website, as in the sketch below.
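For example, host-name-based virtual hosting in Apache (2.2-era syntax, which matches the software in the question) looks roughly like this:
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName mysite.com
    DocumentRoot /var/www/mysite
</VirtualHost>
<VirtualHost *:80>
    ServerName othersite.com
    DocumentRoot /var/www/othersite
</VirtualHost>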
The problems start to arise, however, when you use IP addresses that generally cannot be resolved or have no DNS name, because the web server then doesn't know which hostname to tag the request to.
As an example in Apache, if you set up a virtual host and then try to browse to that server using just the IP address, you'll get the default server, which in many cases won't even be configured to respond correctly or display anything.
To compound this, going up to the web application layer, many frameworks also do their own checks on hostnames and other variables passed to them by the web server, and many make decisions on how to operate based on this information.
If you've gotten to the default web application by IP address, then there's a high chance that the framework may get confused at being presented with an IP address as a host name.
As the OP noted, in many cases you can add a name to your hosts file and use it as a poor man's DNS substitute. The file to modify can be found in the following locations:
c:\windows\system32\drivers\etc\ on Windows
/etc/ on Linux/Unix
The file is generally just called 'hosts' and is a plain text file. Adding a line like:
192.168.2.1 myserver
will tie http://myserver/ to http://192.168.2.1/
If you can, and you're doing a lot of web applications, it may be worth setting up your own DNS server; most Linux distros will let you install BIND, and I believe there is a version available for Windows too.
I'm not going to go into the pros and cons of private DNS servers here, as that's a whole other subject in itself, but if you're likely to be doing a lot of additions to your hosts file, then in the long run you'll find it a better option.