We are experiencing very slow processing when converting videos to m3u8 (HLS) using the ffmpeg library.
Note that the operating system used is Ubuntu Server and that the server has ample RAM and CPU; we noticed that the conversion process does not consume the available resources to any significant degree.
Average video size: 1 GB
Average video processing time: 2 - 3 hours
The framework used: ASP.NET Core 3.1
We need to get the processing time down to a maximum of 20 minutes. Is that possible?
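The question doesn't show how ffmpeg is invoked, but under-utilized cores often come down to the launch settings. Below is a minimal, hypothetical sketch of what the call might look like from ASP.NET Core; the paths, preset, thread count, and segment length are illustrative assumptions, not taken from the question:

    using System.Diagnostics;

    public static class HlsConverter
    {
        // Hypothetical helper: converts a video to an HLS (m3u8) playlist.
        // "-threads 0" lets ffmpeg choose a thread count to use all cores;
        // "-preset veryfast" trades compression ratio for encoding speed.
        public static void ConvertToHls(string inputPath, string outputDir)
        {
            var psi = new ProcessStartInfo
            {
                FileName = "ffmpeg",
                Arguments = $"-i \"{inputPath}\" -c:v libx264 -preset veryfast -threads 0 " +
                            "-c:a aac -f hls -hls_time 10 -hls_list_size 0 " +
                            $"\"{outputDir}/index.m3u8\"",
                UseShellExecute = false
            };

            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
            }
        }
    }

If a run with flags like these still leaves the CPU mostly idle, the bottleneck is more likely disk I/O or the source codec than the encode settings.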
This problem does not appear to be programming-related. I recommend the following test steps; the results should help you narrow the cause down.
1. Test on a local Ubuntu machine with a higher-end configuration, preferably one with a solid-state drive.
2. If the Ubuntu server you mentioned is a cloud server, consider upgrading it to a higher-performance tier for testing. To save money, it is best to test locally before deciding whether such an upgrade is needed.
3. If the first two options are impractical (for example, no suitable Ubuntu machine is available), you can test on a personal PC running Windows 10/11 with a solid-state drive: install the ffmpeg environment, close all other software, and keep only IIS, ffmpeg, and the other related services running during the test.
I recommend trying the third suggestion first, since it gives you test results for the question you actually care about. If a 1 GB video can be processed within 20 minutes there, then upgrading the Ubuntu server's configuration is worth considering.
My application can be fairly CPU-intensive, as can the server I launch from my application using NativeProcess.
The problem is that they both end up on the same core. On a quad-core machine, they both slow to a crawl because each is severely limited in its share of that one core's CPU time.
Is there any way to launch a native process on a different core, or in a way that won't result in such a shared, throttled bottleneck?
If you're already using NativeProcess, you could also set the CPU affinity in a platform-specific way.
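NativeProcess itself doesn't expose affinity, so this has to be done at the OS level. As an illustration of the general idea in .NET terms (the executable name is a placeholder; on Linux, the equivalent would be launching the child through taskset):

    using System;
    using System.Diagnostics;

    class AffinityDemo
    {
        static void Main()
        {
            // Launch the helper process ("server.exe" is a placeholder).
            var child = Process.Start("server.exe");

            // ProcessorAffinity is a bitmask: bit n set means core n is allowed.
            // Pin this process to core 0 and the child to core 1 so they
            // no longer compete for the same core.
            Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0b0001;
            child.ProcessorAffinity = (IntPtr)0b0010;
        }
    }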
I'm having a weird performance problem with the DotNetZip library.
In the application (which runs under ASP.NET) I'm reading a set of files from the database and packing them on the fly into a zip file for the user to download.
Everything works fine on my development laptop. A zip file of about 10 MB at the default compression level takes around 5 seconds to finish. However, on the dev server at the customer, the same set of files takes around 1-2 minutes to compress; I've occasionally seen even longer times, up to several minutes. CPU utilization is 100% while the zipping is running, but otherwise stays around 0%, so it's not due to overload.
What's even more interesting is that on the production server, it takes about 20 seconds to finish.
Where should I start looking?
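For context, the on-the-fly pattern described above typically looks something like this with DotNetZip (the entry names and content are placeholders; the question's actual code isn't shown):

    using System.Collections.Generic;
    using System.IO;
    using Ionic.Zip; // DotNetZip

    public static class ZipHelper
    {
        // Packs database blobs into a zip entirely in memory and
        // returns the bytes for the HTTP response.
        public static byte[] BuildZipInMemory(IDictionary<string, byte[]> files)
        {
            using (var zip = new ZipFile())
            using (var output = new MemoryStream())
            {
                foreach (var file in files)
                    zip.AddEntry(file.Key, file.Value); // entry name -> content

                zip.Save(output);
                return output.ToArray();
            }
        }
    }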
Some hardware specs:
My Laptop
Development environment running on a virtualbox with 2 cores and 4GB RAM dedicated.
Core i5 M540 2.5 GHz
8 GB RAM
Win7
Dev Server
According to properties dialog on My Computer (probably virtualized)
Intel Xeon 5160 3GHz
540MB RAM
Windows 2003 Server
Task Manager Reports Single Core
Production Server
According to properties dialog on My Computer (probably virtualized)
Xeon 5160 3GHz
512MB RAM
Windows 2003 Server
Task Manager Reports Dual Core
Update
The servers are running on a VMWare host. Found the VMWare icon hiding in the taskbar.
As Mitch said, the virus scanner is probably your best bet. That, combined with the dev server being a single-core machine and the production server a dual-core one (and probably without a virus scanner), may explain the delay. It would also be valuable to know what type of disks those machines have: if the production server and your laptop have SSDs while the dev server has a very old, low-RPM hard disk, for example, that would also explain a delay. Try to get a view of the I/O reads/writes on the zip folder for both the dev server and the production server; the SysInternals tools are good for this, and if a virus scanner or some other unexpected process is involved, you will probably see the difference there and find the culprit quickly.
UPDATE: since you commented that the zip is created in memory, I'll add that you can also use those tools to get a better understanding of what happens in memory. A delay of several minutes, where you'd expect roughly equal results because the dev and production servers are so much alike, makes me think of the page file. Check whether other processes on the dev server have claimed a lot of memory; if there isn't enough left for the zip operation, the dev server will start using the page file, which is very expensive.
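If you want a quick programmatic view of paging pressure while the zip runs, the standard Windows performance counters can be read from .NET. A sketch (the counter names are the stock English ones, not anything specific to this setup):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PagingMonitor
    {
        static void Main()
        {
            // "Pages/sec" counts hard page faults resolved from disk;
            // sustained high values during zipping point at page-file thrashing.
            using (var pagesPerSec = new PerformanceCounter("Memory", "Pages/sec"))
            using (var availableMb = new PerformanceCounter("Memory", "Available MBytes"))
            {
                for (int i = 0; i < 30; i++)
                {
                    Console.WriteLine("Pages/sec: {0:F0}, Available: {1:F0} MB",
                                      pagesPerSec.NextValue(), availableMb.NextValue());
                    Thread.Sleep(1000);
                }
            }
        }
    }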
The hardware seemed to be the problem here.
The customer's IT people have now upgraded the server hardware on which the virtualized dev server runs, and I now see compression times of about 6 seconds for the same package size and number of files as on my local computer.
The specs now found in the My Computer properties window:
AMD Phenom II X6 1100T
3.83 GHz
1.99 GB RAM
Is it possible to have virtual machines in the cloud, install Visual Studio there, and have developers use the 'cloud' for their day-to-day programming work? Would the cost be too high? Would the speed be too slow?
Where can I find statistics or numbers to convince people?
I like using remote virtual machines to run development servers, but I don't like using my IDE on a remote server. The latency is noticeable. If you're without an internet connection you can't work. My happy compromise is to have a dev server available (EC2) and sync it with my laptop via git.
It is completely possible to do this. Using a service like Rackspace, you can set up a fairly powerful Windows server for as little as $60 a month:
http://www.rackspacecloud.com/cloud_hosting_products/servers/pricing
In my experience, using Remote Desktop to log into a Rackspace Windows Cloud Server has been snappy and quick (though a lot of that depends on the strength of your internet connection). The process of standing up the server is lightning fast, backing it up is even easier, and it can easily be resized down the line if you need more storage/bandwidth.
These days I don't understand why a small to mid-sized organization would actually waste capital on server hardware.
Evan
I am planning to build a new development computer for both Windows & Linux platforms. On Windows, my development would be primarily in .NET/C#/IIS/MSSQL Server. On Linux—preferably Ubuntu—my development would be in Ruby and Python.
I am thinking of buying a laptop with Windows 7 pre-installed, 4 GB RAM, an Intel Core 2 Duo, and a 320 GB HD, then running two VMs for the Windows and Linux development with the host OS as my workstation. Of course, I would be running the databases and web servers on their respective platforms.
Is this a typical setup? My only concern is running two VMs side by side. Not sure if this configuration would be optimal. Alternative would be to do my Windows development on the host Windows 7 OS. What are your thoughts?
I really like using VMs for development, because it makes it really easy to maintain different configurations, make backups, test comms between machines, experiment, and so forth.
Linux VMs work pretty well. Windows in a VM on Windows, however, can be a resource hog. You probably want more than 4 GB on the laptop.
If you're not going to be switching between the two platforms frequently, I would recommend repartitioning your hard drive after you get your machine and installing Windows in one partition and Linux in the other. Doing things that way is usually simpler, in that you don't need the overhead of the VMs.
Sounds like you will get 15 minutes out of this laptop's battery, maybe 20.
Speaking from experience, you will prefer a desktop plus a 'more mobile' laptop. You can do this without spending more than you had budgeted (remember you can skip the monitor on the desktop), though that will probably get you slightly lower specs in exchange for the flexibility of a laptop you really can take with you. But I recommend spending slightly more than you would have on the single laptop; remember, you get two machines out of it.
You can network between them (e.g. use remote desktop programs from the laptop to connect to VMs on the desktop).
In my particular case, about 6 years ago I needed a new machine that could be used for on-location photography and had the power, screen size, disk space, etc. to run Photoshop and other tools (e.g. batch-processing ~900 largish images was one use case). I got a beefy laptop, which worked great for that, but the battery quickly died and never had much capacity in the first place. The system has always been more of a 'slightly easier-to-move desktop' than a laptop, and it sounds like you'd rather have a real laptop.
Here's the problem. I use around three different machines for development. My partner is using two. We have to go through the same freaking setup procedure on all five machines to get to work.
We're working with a PHP project here, so:
Install and configure PDT, a PHP debugger, and some version of XAMPP.
Then possibly install an SVN client and any other tools.
And again, on each of the five machines.
What if, instead, we did all of this once, in a virtual machine set up with the same stack and the same versions as the production server? Then each of us could grab a copy of the VM image, run it on each of the five machines, and do all of our development in that VM. Put Eclipse, Apache, MySQL, the works, all in that VM.
The only negative of this approach, and please correct me on the 'only' part, is performance. Is it really that big of an issue, though? The slowest machine of the five is a Samsung NC10 powered by a 1.6 GHz Intel Atom.
Do you think this is possible and practically usable? Or am I crazy?
I use a VM for development (running on my laptop) and have never had performance problems. Another approach you could take would be to image the drive in the state you want and use Acronis or Ghost to re-image each machine when you need to. It only takes about 5-10 minutes to restore an image on any modern PC.
I use a VM for all my 'work' as it keeps it away from my 'play'. This setup allows me to use the office VPN without exposing my whole machine to the office environment (which I trust about as much as the internets ;-)). Also, I don't have to worry about messing up my development environment by trying out games or other software. My work VM currently runs inside VirtualBox, but I have used VMware in the past. I have only noticed performance issues when using graphics-intensive programs like WebEx or the Terminal Server Client.
It can certainly be done. What turns me off is the size of the VM image, which would normally be several GBs. Having it on a network share means it can take longer to transfer than your current setup process takes. I guess an external hard drive would be the easiest way to move it around.
Performance wouldn't be an issue with any web development.
I have to ask why your current machines need to be "re-imaged" each time you sit down for work?
If you're using Windows you'll probably want to use SYSPREP on the master image so that the 'mini-setup' runs when you boot up the virtual machines for the first time.
Otherwise, from Windows' point of view, the machines have the exact same SID, hostname, and other identifiers; running multiple machines with the same SID on the same network can cause tons of headaches, even more so if you want them to communicate with each other.
I've run WebSphere for zSeries on a VMware virtual machine with no problem, and WebSphere is more resource-intensive than any PHP stack. I find that having a multi-core machine, or at least hyper-threading, makes it run a lot faster.
With VMware, disk operations are slower. For PHP development I doubt that would be a problem, but you'd definitely notice it if you were compiling a large C++ project. There is also Sun's VirtualBox, which is free, and the latest version is rather nice (though I haven't yet looked at how slow its disk operations are).
I am using that idea in practice. Virtual machines are generally great for development.
You can run multiple operating systems and multiple separate development environments.
You can preserve older development environments for later support.
They can be easily backed up; when a hard drive crashes, there's no need to start over from the beginning.
They can be copied from one developer to another, so not everyone has to repeat the tedious installations and configurations.
The downsides are:
Virtual machines are slower, so you need a more powerful computer than you would otherwise. I would recommend at least 4 GB of RAM, but preferably more like 16 GB, plus a fast multi-core processor and fast hard drives.
When copying Windows OS virtual machines, each copy in use should have its own product key; when you make a copy, it needs to be registered with a new product key.
Have you thought about a software configuration manager like Ansible, Chef, or Puppet? With such software, automating these tasks is very easy; it can even create a fresh VM and then configure it.