SO FAR:
1- I have a native application that uses a libwebsockets server to communicate with a browser (the WebSockets client).
2- I see high CPU usage (Activity Monitor reports 100% on a 4-core Mac running Yosemite 10.10.4) while the app is connected to the WebSockets client, and the Time Profiler instrument shows that _poll() accounts for 75% of the CPU time.
3- So I configured libwebsockets to use libev, hoping that libev would use kqueue internally and reduce the CPU utilization [following the steps in the changelog: https://github.com/warmcat/libwebsockets/blob/e800db52bd0b42285b56d32a20f6d0d142571a89/changelog, under v1.3-chrome37-firefox30 -> User api additions].
BUT STILL: I see libwebsockets spending its time in _poll().
Could anyone please let me know if I missed anything? My end goal is to get libwebsockets running on kqueue internally and to check whether that reduces the CPU utilization.
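For reference, this is a minimal sketch of the setup I am aiming for. It assumes libwebsockets was built with its libev option enabled (e.g. -DLWS_WITH_LIBEV=1; the option name varies by version) and uses the current lws_* API names rather than the 1.3-era libwebsocket_* ones. It also forces libev onto its kqueue backend, because libev's backend autodetection avoids kqueue on macOS and quietly falls back to poll/select, which by itself would explain still seeing _poll():

```cpp
// Sketch only: lws served from a libev loop that is forced onto kqueue.
// Assumes an lws build with libev support; the attach call is named
// lws_ev_initloop() in the versions I have looked at - check your headers.
#include <cstring>
#include <ev.h>
#include <libwebsockets.h>

// Minimal placeholder protocol; a real server handles the callback reasons.
static int minimal_cb(struct lws *wsi, enum lws_callback_reasons reason,
                      void *user, void *in, size_t len) {
    return 0;
}

static struct lws_protocols protocols[] = {
    { "minimal", minimal_cb, 0, 0 },
    { nullptr, nullptr, 0, 0 }   // terminator
};

int main() {
    // Force the kqueue backend. Without this, libev on macOS typically
    // selects poll/select because it considers kqueue unreliable there.
    struct ev_loop *loop = ev_default_loop(EVBACKEND_KQUEUE);

    struct lws_context_creation_info info;
    std::memset(&info, 0, sizeof info);
    info.port = 8080;                         // placeholder port
    info.protocols = protocols;               // plug in your real protocol table
    info.options = LWS_SERVER_OPTION_LIBEV;   // hand the fds to libev

    struct lws_context *context = lws_create_context(&info);
    if (!context)
        return 1;

    lws_ev_initloop(context, loop, 0);        // attach lws to the ev loop
    ev_run(loop, 0);                          // no manual lws_service() polling

    lws_context_destroy(context);
    return 0;
}
```

At runtime, ev_backend(loop) tells you which backend libev actually picked; the LIBEV_FLAGS environment variable is another way to force kqueue, as long as the loop is not created with EVFLAG_NOENV.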
I'm not sure if this makes sense, so please comment if I need to provide more info:
My web server is used to upload files (it receives files as multipart/form-data and uploads them to another service). Using WebFlux, the controller declares the argument as @RequestPart(name = "payload") final Part payload, which wraps the headers and the content Flux.
Reactor / Netty uses DirectByteBuffers to accommodate the payload. If the request handler cannot get enough direct memory to handle the request, it fails with an OOM and returns a 500. So far this is normal / expected.
However, what is supposed to happen afterwards?
I'm running load tests by sending multiple requests at the same time (either lots of requests with small files, or fewer requests with bigger files). Once I get the first 500 due to an OOM, the system becomes unstable. Some requests go through, and others fail with an OOM (even requests with a very small payload can fail).
This behaviour leads me to believe that the allocated pooled buffers are not shared between IO channels. However, that seems weird: it would make the system very easy to DDoS.
From the tests I did, I get the same behaviour with unpooled DataBuffers, although for a different reason. I do see the memory being deallocated when running jcmd <PID> VM.native_memory, but it isn't released back to the OS according to the metrics and htop. For instance, the reserved memory shown by jcmd goes back down, but htop still reports the previous high amount, and the process eventually OOMs.
So, the question:
Is that totally expected, or am I missing a config value somewhere?
Setup:
Spring Boot 2.5.5 on openjdk11:jdk-11.0.10_9
Netty config:
-Dio.netty.allocator.type=pooled -Dio.netty.leakDetectionLevel=paranoid -Djdk.nio.maxCachedBufferSize=262144 -XX:MaxDirectMemorySize=1g -Dio.netty.maxDirectMemory=0
We are currently using ExoPlayer for one of our applications, which is very similar to the HQ Trivia app, and we use HLS as the streaming protocol.
Due to the nature of the game, we are trying to keep all viewers of this stream at the same latency, basically to keep them in sync.
We noticed that with the current backend configuration the latency is somewhere between 6 and 10 seconds. Based on this, we assumed it would be safe to “force” the player to play at a larger delay (15 seconds, further from the live edge), thereby achieving the same (constant) delay across all devices.
We're using the EXT-X-PROGRAM-DATE-TIME tag to get the server time of the currently playing content, and we also have a master clock with the current time (NTP). We constantly compare the two clocks to check the current latency. We pause the player until it reaches the desired delay, then resume playback.
The problem with this solution is that the latency can get worse (the delay accumulates) over time, and if it gets too big (steps over a specified threshold) we have no choice but to restart playback and redo the steps described above. Before restarting the player we also try to slightly increase the playback speed until it reaches the specified delay.
The ExoPlayer instance is set up with a DefaultLoadControl, DefaultRenderersFactory and DefaultTrackSelector, and the media source uses a DefaultDataSourceFactory.
The server-side configuration is as follows:
cupertinoChunkDurationTarget: 2000 (default: 10000)
cupertinoMaxChunkCount: 31 (default: 10)
cupertinoPlaylistChunkCount: 15 (default: 3)
My first question is whether this is even achievable with a protocol like HLS. Why is the player drifting away, accumulating more and more delay?
Is there a better setup for the ExoPlayer instance considering our specific use case?
Is there a better way to achieve a constant playback delay across all playing devices? How important are the server-side parameters in achieving such behaviour?
I would really appreciate any kind of help because I have reached a dead-end. :)
Thanks!
The only solution for this is provided by:
https://netinsight.net/product/sye/
Their solution includes frame-accurate sync with no drift and stateful ABR. This probably can't be done with HTTP-based protocols, hence their solution is built on UDP transport.
I have searched for a long time for a way, in Objective-C, to retrieve bytes in/out statistics for a macOS network interface, and haven't found anything.
I thought about using the command line and finding an OS statistic on sockets (maybe via sysctl) that I could use to do the math, but I haven't found one.
I tried to grep the output of nettop, but the way it renders in the CLI makes it impossible to grep.
I found two apps on the App Store that do exactly this, showing the bandwidth used in real time:
https://itunes.apple.com/us/app/network-scanner/id1103147103?l=en&mt=12
https://itunes.apple.com/us/app/network-inspector/id515794671?l=en&mt=12
I tried using Activity Monitor to sample those processes and understand which calls they make, without success.
Any idea how those two apps retrieve the number of bytes in/out of a network interface?
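My current guess (not confirmed) is that they read the per-interface counters the kernel already keeps, reachable through getifaddrs(): every AF_LINK entry carries a struct if_data with cumulative ifi_ibytes / ifi_obytes, so sampling twice and subtracting gives a real-time rate. A minimal sketch of that approach:

```cpp
// Sketch: per-interface byte counters on macOS via getifaddrs().
// Each AF_LINK entry's ifa_data points at a struct if_data whose
// ifi_ibytes / ifi_obytes fields are cumulative in/out byte counters.
#include <cstdio>
#include <ifaddrs.h>
#include <net/if.h>
#include <net/if_var.h>   // struct if_data (ifi_ibytes / ifi_obytes)
#include <sys/socket.h>

int main() {
    struct ifaddrs *addrs = nullptr;
    if (getifaddrs(&addrs) != 0)
        return 1;

    for (const struct ifaddrs *cur = addrs; cur != nullptr; cur = cur->ifa_next) {
        if (cur->ifa_addr == nullptr || cur->ifa_addr->sa_family != AF_LINK)
            continue;   // only the link-level entries carry the statistics
        const struct if_data *stats =
            static_cast<const struct if_data *>(cur->ifa_data);
        if (stats == nullptr)
            continue;
        std::printf("%-8s in: %u bytes  out: %u bytes\n",
                    cur->ifa_name, stats->ifi_ibytes, stats->ifi_obytes);
    }
    freeifaddrs(addrs);
    return 0;
}
```

If I remember correctly these counters are 32-bit and wrap, so for long-running monitoring you may prefer sysctl with NET_RT_IFLIST2, which exposes 64-bit counters (struct if_data64).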
I am using the UI Automation COM-to-.NET Adapter to read the contents of a target Google Chrome browser that plays Flash content on Windows 7. It works.
I managed to get the content and elements. Everything works fine for some time, but after a few hours the elements become inaccessible.
AutomationElement.FindAll() returns 0 children.
Is there any internal, undocumented timeout used by UI Automation?
According to the IUIAutomation2 interface documentation, there are 2 timeouts, but they are not accessible from the IUIAutomation interface.
IUIAutomation2 is supported only on Windows 8 (desktop apps only).
So I believe there is some timeout.
I made a workaround that restarts the searching and monitoring of elements from the root of the desktop tree, but the elements are still not available.
After some time (not sure how much) the elements are available again.
My requirement is to read the values all the time, as fast as possible, but this behaviour damages the whole architecture.
I read somewhere that there is a timeout of 3 minutes, but I'm not sure.
If there is a timeout, is it possible to change it?
Is it possible to restart something, or release/dispose of something?
I can't find anything on MSDN.
Does anybody have any idea what is happening and how to resolve it?
Thanks for this nicely put question. I have a similar issue with a rather different setup. I'm on Win7, using UIAutomationCore.dll directly from C# to test our application under development. After running my sequence of actions and event subscriptions and everything else, I intermittently observe that the UIA interface stops working (after about 8-10 minutes in my case, but I'm using the UIA interface heavily).
Many different things failed, including dispatching the COM interface and sleeping at various places. The funny revelation was that I managed to run AccEvent.exe (part of the SDK, like inspect.exe) during the test and saw that events stopped flowing to AccEvent too. So it wasn't my client's interface that stopped; it was rather the COM server (or whatever UIAutomationCore does) that stopped responding.
As a solution (which seems to work most of the time, or at least improve the situation a lot), I decided I should give the application under test some breathing room, since using UIA puts additional load on it. This could be smartly placed sleep points in your client, but instead of sleeping for a set time, I monitor the processor load of the application and wait until it settles down.
One of the intermittent errors I receive when the problem manifests itself is "... was unable to call any of the subscribers..", and my search turned up an MSDN page saying they have improved things in the CUIAutomation8 interface, but as this is Windows 8 specific, I haven't had the chance to try it yet.
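In case you (or someone on Windows 8+) gets to try it: the two timeouts mentioned in the question are exposed on IUIAutomation2, which you reach by creating the CUIAutomation8 coclass and querying for that interface. A rough, untested sketch from the C++/COM side (the 60-second values are arbitrary, and as far as I can tell both properties are in milliseconds; please check the IUIAutomation2 docs):

```cpp
// Sketch (Windows 8+ SDK): raising the UIA connection/transaction timeouts
// through IUIAutomation2. Assumes COM is already initialized on this thread.
// On Windows 7 the QueryInterface simply fails and you fall back to IUIAutomation.
#include <atlbase.h>        // CComPtr / CComQIPtr (ATL)
#include <uiautomation.h>   // IUIAutomation, IUIAutomation2

bool RaiseUiaTimeouts()
{
    CComPtr<IUIAutomation> automation;
    // CUIAutomation8 is the Windows 8+ coclass; CUIAutomation exists everywhere.
    if (FAILED(automation.CoCreateInstance(__uuidof(CUIAutomation8))))
        return false;

    CComQIPtr<IUIAutomation2> automation2(automation);
    if (!automation2)
        return false;   // e.g. running on Windows 7

    // Values are illustrative only; defaults are not documented, which is
    // presumably the "internal timeout" being observed.
    automation2->put_ConnectionTimeout(60000);
    automation2->put_TransactionTimeout(60000);
    return true;
}
```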
I should also add that I reduced the number of calls to UIA by incorporating more UI caching (FindAllBuildCache), since the less back-and-forth there is, the better it is for UIA. Thanks to Guy's answer to another question: UI Automation events stop being received after a while monitoring an application and then restart after some time
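For completeness, the cached-find pattern looks roughly like this on the COM side (a simplified sketch that caches only the Name property; the same shape applies through the COM-to-.NET adapter):

```cpp
// Sketch: fetch children with a cache request so their properties are read
// in one cross-process round trip instead of one call per property.
// Assumes COM is initialized and `root` is the element you are watching.
#include <atlbase.h>
#include <uiautomation.h>
#include <iostream>

void DumpChildNames(IUIAutomation *uia, IUIAutomationElement *root)
{
    CComPtr<IUIAutomationCacheRequest> cache;
    if (FAILED(uia->CreateCacheRequest(&cache)) || !cache)
        return;
    cache->AddProperty(UIA_NamePropertyId);          // cache only what you need

    CComPtr<IUIAutomationCondition> all;
    if (FAILED(uia->CreateTrueCondition(&all)) || !all)
        return;

    CComPtr<IUIAutomationElementArray> children;
    if (FAILED(root->FindAllBuildCache(TreeScope_Children, all, cache, &children))
            || !children)
        return;

    int count = 0;
    children->get_Length(&count);
    for (int i = 0; i < count; ++i) {
        CComPtr<IUIAutomationElement> child;
        if (FAILED(children->GetElement(i, &child)))
            continue;
        CComBSTR name;
        if (SUCCEEDED(child->get_CachedName(&name)) && name)   // no extra round trip
            std::wcout << static_cast<BSTR>(name) << L"\n";
    }
}
```

The fewer live property reads you do, the fewer cross-process calls hit the Chrome/Flash provider, which is exactly the back-and-forth reduction mentioned above.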
I have been working with the Reachability class for a while and have tried both the one from Apple's sample and the one from ddg. I wonder whether the Reachability class keeps sending / receiving data after the notifier is started.
As I'm developing an app which connects to different hosts quite often, I decided to write a singleton and attach the Reachability instances I need to it. The Reachability instances are created and start their notifiers once the app starts. I use the singleton approach because I want this class to be portable and reusable in other apps without much rewriting. I am not sure whether it is a good idea to implement it like this, but it has worked quite well.
However, someone reported that the battery of his device drains significantly faster after using the app, and someone else reported higher data usage. My app does not send / receive data in the background, so I started wondering whether this is related to Reachability.
I tried profiling the energy usage with Instruments and noticed that small amounts of data (a few hundred bytes on average) keep coming in via the network interfaces even when I leave my app idle. However, there is almost no data going out.
I know that Reachability uses some data when it is initialized (resolving DNS, etc.), but I am not sure whether it keeps using data after the notifier is started. Can anyone tell?
I am not familiar with low-level programming; it would be nice if someone could explain how Reachability works.
I use Reachability, and while I haven't monitored the connections, I have browsed the code, and I can't see any reason why it would keep sending (or receiving).
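For what it's worth, the notifier in both variants boils down to roughly the SCNetworkReachability setup below: the callback is driven by network-state changes that configd pushes locally, not by the class sending its own probes, though the host-name variant does cost DNS lookups when the flags get resolved. A minimal sketch (example.com is a placeholder host):

```cpp
// Sketch of what Reachability's notifier does underneath: register a callback
// with SCNetworkReachability and let the run loop deliver state changes.
// The monitoring itself generates no traffic; only the host-name variant
// triggers DNS lookups when the flags are (re)resolved.
#include <CoreFoundation/CoreFoundation.h>
#include <SystemConfiguration/SystemConfiguration.h>
#include <cstdio>

static void OnReachabilityChanged(SCNetworkReachabilityRef target,
                                  SCNetworkReachabilityFlags flags,
                                  void *info) {
    std::printf("reachability flags changed: 0x%x\n", flags);
}

int main() {
    SCNetworkReachabilityRef reachability =
        SCNetworkReachabilityCreateWithName(kCFAllocatorDefault, "example.com");
    if (!reachability)
        return 1;

    SCNetworkReachabilityContext context = {0, nullptr, nullptr, nullptr, nullptr};
    SCNetworkReachabilitySetCallback(reachability, OnReachabilityChanged, &context);
    SCNetworkReachabilityScheduleWithRunLoop(reachability, CFRunLoopGetCurrent(),
                                             kCFRunLoopDefaultMode);

    CFRunLoopRun();   // callbacks arrive on local state changes, not polling

    CFRelease(reachability);
    return 0;
}
```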
If you have an Ethernet connection to your Mac, it is quite easy to check. Share your Ethernet connection over Wi-Fi. Install Little Snitch; it will run in demo mode for three hours after every boot. Turn off the data connection on the test device and connect it to your Mac over Wi-Fi.
This will let you see any network access your test device is making.
If this isn't possible, you can also run your app in the simulator; the network side should behave the same, so you should be able to check there.
There are also a ton of other tools to track network activity, but I think Little Snitch is the easiest to use.