I wrote a Linux kernel module and a user-space application. They had been communicating fine via netlink. But I get errno 111 (connection refused) when I try to run the user-space application on an emulated node in CORE (Common Open Research Emulator). Could you help me find the cause? (According to CORE, an emulated node is a virtual machine that uses the same kernel as the local host.)
Thanks a lot!
The reason I got the "connection refused" error is that the user-land and kernel-land processes were not residing in the same network namespace. The kernel-land process was listening in the "root" namespace, while the user-land process was sending from another namespace.
CORE uses Linux virtualization: it creates separate process and network namespaces for each emulated node. An application running on a CORE node therefore has its own process-ID space and its own network stack, so the messages sent by the application were confined within the CORE node's own namespaces.
To enable kernel-land and user-land communication when CORE is used, the application must first switch to the kernel's network namespace, and only then create the netlink socket and send messages over it.
To switch to the kernel's network namespace, we first need to mount the host's /proc onto /proc_root (inside the node, /proc is remounted per namespace, so the root namespace's PID 1 is not visible through it). Then, in the application, add {fd = open("/proc_root/1/ns/net", O_RDONLY); setns(fd, 0);} before creating the netlink socket and sending messages to the kernel-land process.
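For illustration, here is a minimal C sketch of that sequence. The netlink protocol number and the payload are assumptions for the example (NETLINK_MYPROTO is a placeholder; use whatever number your kernel module registered):

    #define _GNU_SOURCE            /* for setns() */
    #include <fcntl.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/netlink.h>

    #define NETLINK_MYPROTO 31     /* assumed: must match the kernel module */

    int main(void)
    {
        /* Step 1: join the root network namespace via the mounted /proc_root. */
        int nsfd = open("/proc_root/1/ns/net", O_RDONLY);
        if (nsfd < 0 || setns(nsfd, 0) < 0) {
            perror("setns");
            return 1;
        }
        close(nsfd);

        /* Step 2: only now create the netlink socket and talk to the kernel. */
        int sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_MYPROTO);
        struct sockaddr_nl src = { .nl_family = AF_NETLINK, .nl_pid = getpid() };
        bind(sock, (struct sockaddr *)&src, sizeof(src));

        char buf[NLMSG_SPACE(64)] = { 0 };
        struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
        nlh->nlmsg_len = NLMSG_SPACE(64);
        nlh->nlmsg_pid = getpid();
        strcpy(NLMSG_DATA(nlh), "hello from the node");

        struct sockaddr_nl dst = { .nl_family = AF_NETLINK, .nl_pid = 0 }; /* 0 = kernel */
        sendto(sock, nlh, nlh->nlmsg_len, 0, (struct sockaddr *)&dst, sizeof(dst));
        close(sock);
        return 0;
    }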
My guess is that it happened because of a lack of Linux capabilities (CAP_NET_ADMIN). Did you check the capabilities of your user-space process and your VM process?
I have an embedded application running on a Xilinx ZynqMP SoC. The application running on the PS (processing system) memory-maps the PL (programmable logic, i.e. the FPGA fabric) of the SoC over an AXI bus via /dev/mem at some base physical address.
I would like to run this application in a KVM/QEMU VM running on the PS. This means I will need to somehow expose that memory window available via /dev/mem on the host to the guest VM.
Through some research I thought that virtio-mmio would be the method to do this. I made some attempts using virtio-mmio but hit a wall, so I asked a question: Memory map address space on host from KVM/QEMU guest using virtio-mmio
The response seems to indicate that virtio-mmio is not the method I should be using for this.
If that is the case, what is the method used for exposing a memory space available on the host to a guest VM? I do not need any sort of device driver/layer on top of this. I just need raw memory access.
I'm having no joy getting a replayed UDP multicast packet to be "seen" by a client program on a different machine.
Details:
I have two machines on my local (wired) network connected through one unmanaged switch. One machine (running tcpreplay) is running Ubuntu 20.04, the other machine is running Windows 10.
On the Windows machine I have a Python program I wrote that listens for UDP multicast packets on port 5110 (the port is dictated by the source of the UDP stream, which is a commercial program). When I run the commercial program, my Python code correctly consumes the incoming packets and all seems to be working fine. I still have a lot of work to do on the contents of those packets after they are received, but that isn't important for this issue.
So, moving forward, I decided it would be great to be able to work on the Python code without the commercial program always running in the background hogging resources. I figured that if I could capture a snippet of its UDP multicast traffic, I should be able to replay it at leisure without having to run that resource hog.
So, on the Windows machine, I captured a UDP multicast packet stream using Wireshark and saved to a pcap file which I then copied to the Ubuntu machine.
I then attempted to replay that pcap file (on the Ubuntu machine) as follows:
$ sudo tcpreplay -i enp5s0 single.pcap
To my disappointment, my Python program (on the Windows machine) did not receive the incoming packets.
Back on the Windows machine, I fired up Wireshark again and captured the "replayed" packet coming from the Ubuntu machine - so it appears the packet did make it out of my Ubuntu machine and into my Windows one. The contents of both the source packet (sent by tcpreplay) and the received packet (grabbed by Wireshark) appear identical - including the source and destination MAC addresses and the checksums. A diff on the byte contents of each packet yields no differences.
However, my Python program still stoically sits there waiting at:
data, address = sock.recvfrom(1024)
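For context, a listener like mine boils down to bind, join the group, then receive. The group join is the step that makes the IP stack accept the datagrams at all; Wireshark captures in promiscuous mode, so it sees the packets whether or not any socket has joined. Here is a minimal POSIX C sketch of that pattern (my real code is Python, and the group address is a placeholder):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5110);
        bind(sock, (struct sockaddr *)&addr, sizeof(addr));

        /* Join the multicast group; without this the IP stack silently
           drops the datagrams even though Wireshark still captures them. */
        struct ip_mreq mreq;
        mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1"); /* placeholder group */
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

        char buf[1024];
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, NULL, NULL);
        printf("received %zd bytes\n", n);
        close(sock);
        return 0;
    }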
Here on Stack Overflow I did find this thread, which seems to describe an identical problem; however, none of the solutions presented there helped (including changing the rp_filter parameter). I also saw mention of a Windows program, "Colasoft PacketPlayer", which I tried, running on the same machine as my Python client. This appears to have the same result (i.e. no joy). I had not initially tried that route because I was concerned about generating the packet on the same machine that is listening for it. (As an aside, I did also capture the replayed packet from Colasoft PacketPlayer, and it too appears identical to the source packet.)
At this point I'm out of ideas and am reaching out to the community for possible next steps.
I have a Storm cluster running and I want to monitor its performance. I followed this blog and was able to measure the number of tuples received by a bolt using codahale metrics and display it in Graphite.
My goal is to deploy a Storm cluster on a lightweight computer such as a BeagleBone, and for that I need to be able to monitor JVM parameters such as CPU, thread, and memory usage of each worker process.
I really like codahale metrics and would like to continue using it in my application. Can anyone direct me as to how I can measure JVM parameters separately for each worker using codahale metrics?
I would really appreciate it if someone posted an example of how to get jvm metrics using codahale metrics.
Thanks,
Palak
I found an excellent tutorial here. Works like a charm.
Using VisualVM and JMX we can see CPU usage, GC activity, class-loading information, heap size and used-heap statistics, and per-thread information for worker nodes, and we can do CPU and memory profiling, performance monitoring, and memory-leak hunting. You can also take heap dumps, thread dumps, and profiler snapshots.
STEPS for setup
STEP 1: Starting VisualVM
Java VisualVM is bundled with JDK version 6 update 7 or greater. Navigate to your JDK software's bin directory and double-click the Java VisualVM executable.
Alternatively, navigate to your JDK software's bin directory and type the following command at the command (shell) prompt: jvisualvm.
STEP 2: Adding MBean plugin
For JMX monitoring you need to add the MBeans plugin explicitly.
1. Choose Tools > Plugins from the main menu.
2. In the Downloaded tab of the Plugins dialog, click Add Plugins.
3. Select the MBeans plugin and install it.
After successfully adding the MBeans plugin you will see an MBeans tab in VisualVM, where you can monitor over JMX.
STEP 3: Local Monitoring
By default VisualVM will monitor all the Java applications running locally. No changes are needed if you're using Java 1.6 or above.
STEP 4: Remote Monitoring
To retrieve and display information on applications running on the remote host, the jstatd utility needs to be running on the remote host.
Steps to run jstatd
The jstatd tool is an RMI server application that monitors for the creation and termination of instrumented HotSpot Java virtual machines (JVMs) and provides an interface to allow remote monitoring tools to attach to JVMs.
1. Create a file named "jstatd.all.policy" with the content below:
grant codebase "file:${java.home}/../lib/tools.jar" { permission java.security.AllPermission; };
2. Copy the "jstatd.all.policy" file into the JDK bin directory (e.g. Java\jdk1.7.0_10\bin).
3. Navigate to your JDK software's bin directory and type the following command at the command prompt: jstatd -J-Djava.security.policy=jstatd.all.policy
4. Running jstatd requires admin privileges; only then can other users connect to it from a remote host.
It's a one-time activity. (Run it as a background process in CIT and SIT.)
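One caveat, from experience rather than the tutorial: jstatd only exposes the basic jvmstat counters. To use the MBeans tab against a remote Storm worker, the worker JVM itself usually has to start the JMX agent. In Storm that is typically done through worker.childopts in storm.yaml; a hedged sketch (the port number and settings are illustrative, and each worker on a node needs a distinct port):

    worker.childopts: "-Dcom.sun.management.jmxremote
                       -Dcom.sun.management.jmxremote.port=9998
                       -Dcom.sun.management.jmxremote.local.only=false
                       -Dcom.sun.management.jmxremote.authenticate=false
                       -Dcom.sun.management.jmxremote.ssl=false"

With something like that in place, VisualVM can add a JMX connection to host:9998 and the MBeans tab (plus the memory, thread, and CPU graphs) works for that worker.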
To add a remote host in VisualVM, right-click the Remote node in the Applications window, choose Add Remote Host, and type the host name or IP address in the Add Remote Host dialog box.
When Java VisualVM is connected to a remote host, a node for the remote host appears under the Remote node in the Applications window.
You can expand the remote host node to view the applications running on the remote host.
Use jvisualvm.exe from the JDK's bin directory and you can monitor Storm workers. jvisualvm can also point to a remote Storm topology. Download and add the MBeans plugin into jvisualvm.
How about running a Linux application on the Windows platform without any OS virtualization?
Let's say we have a piece of Linux software installed on a Windows machine which could run successfully on Windows with the approach described below:
A normal Windows application runs on Windows by getting a virtual address space, as on any operating system. The program loader loads the libraries the application requires from the physical drive into that virtual address space, and further libraries are loaded when the application needs them, through file-system APIs.
Now let's go a different way: instead of creating the virtual address space on the local system, we create the process address space on a different machine that is capable of running the application. In our case, we create the address space for the Linux application on a remote Linux machine instead of the local Windows machine. All file-system access can be intercepted on the remote machine and forwarded to the local Windows machine. In this way a Linux application located on the local Windows machine creates its process address space on a remote Linux machine while accessing the file system on the local Windows machine; all file-system APIs are remoted and routed back to the local machine. The Linux application's UI can be captured on the Linux machine and sent for display on the local Windows machine.
In this way, applications built for one platform could run on another without any need for OS virtualization. What is your opinion of this approach, and how feasible is it? Is there any big fault in the approach that makes it non-feasible?
That little word "API" that you have used there means translating the entire set of system calls of one operating system to another. The calls that go into creating a socket connection, making a directory, locking a file, and so on: EVERYTHING changes. You've discussed just memory here; the GUI has its own calls, and so do drivers and networks.
By the end of six years, that little million lines of code you would have written to achieve all this, when packaged and bundled, will be called, surprise, surprise: a hypervisor.
I'm working on a Communications Device Class (CDC) driver for an embedded device, a Full Speed implementation of USB 2.0. The COM port settings are 115200, 8-bit, no parity, 1 stop bit, no flow control. Our PC application (32-bit, Windows 7, .NET 2.0) communicates with the target device through a virtual COM port, which on the target device can connect to either a FTDI (USB-to-SCI bridge) chip or the integrated USB peripheral in the microcontroller, depending on which port is selected by the application.
Both virtual COM ports work without any problems using Realterm. However, while our desktop application works using the virtual COM port connected via the FTDI chip, it hangs when attempting to use the virtual COM connected via the microcontroller's integrated USB peripheral.
When connected via the virtual COM port using the integrated USB, the application consistently hangs on the second call to SerialPort.Write(...). Using Serial Monitor from HHD Software I can see that the data is transmitted on the first call to SerialPort.Write(...). However, that data is never received by the target device.
It's odd because the only time I have seen similar problems on previous projects was when the flow control settings on each side of the bus were mismatched.
Additional info...
Here is the data captured from various port monitoring tools while running our PC application connected to the target device via its integrated USB peripheral. Any insight would be appreciated.
Sysinternals Portmon
Advanced USB Port Monitor
Device Monitoring Studio - Request View
Device Monitoring Studio - Packet View
For those that are interested, I am using CodeWarrior 10.2 with the MCF51JM128 from Freescale.
Any ideas or suggestions would be appreciated. Thanks.
It is clear from the logs that you are making the classic mistake of not taking care of the hardware handshaking signals. That only ever works by accident; a terminal emulator like Realterm will never make that mistake.
You must set the DtrEnable property to true. That turns on the Data Terminal Ready signal. It is important because RS-232 is an unterminated bus so is subject to electrical noise when the cable is disconnected or the power turned off. DTR convinces the device that it is in fact connected to a powered device. This is emulated of course in your case but the driver or firmware will still typically implement the RS-232 behavior.
And the RtsEnable property is important, used to handshake with the device and prevent the receive buffer from overflowing when the app isn't emptying the buffer in a timely manner. You really should set the Handshake property to Handshake.RequestToSend, the most common way a device implements it. That then also takes care of turning RTS on. If you have to use Handshake.None for some reason then you have to turn it on yourself by setting RtsEnable to true.
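For what it's worth, these .NET properties map onto the Win32 DCB that the driver actually sees. A sketch in C of the equivalent setup (the COM port name is a placeholder, and error handling is minimal):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("open failed: %lu\n", GetLastError());
            return 1;
        }

        DCB dcb = { 0 };
        dcb.DCBlength = sizeof(DCB);
        GetCommState(h, &dcb);
        dcb.BaudRate    = CBR_115200;
        dcb.ByteSize    = 8;
        dcb.Parity      = NOPARITY;
        dcb.StopBits    = ONESTOPBIT;
        dcb.fDtrControl = DTR_CONTROL_ENABLE;    /* like DtrEnable = true        */
        dcb.fRtsControl = RTS_CONTROL_HANDSHAKE; /* like Handshake.RequestToSend */
        SetCommState(h, &dcb);

        /* ... read/write as usual ... */
        CloseHandle(h);
        return 0;
    }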
This ought to take care of the problem. If you still have trouble then use PortMon to spy on the way Realterm initializes the driver. Compare the commands you see against the commands that the SerialPort class sends. Make sure they are the same. In value, not in sequence.