I've got a TCP server that serves multiple clients at the same time using multithreading. It works very well, but I have run into a memory-management issue.
Up to 1500 clients may connect to the server and stay connected for hours or days.
I am getting "out of memory" trouble because I must use a 32-bit operating system; converting to a 64-bit OS has to stay plan B.
What do you suggest I do?
A task-based asynchronous TCP server serving multiple clients?
Or keep going with multithreading and make the 1500 threads reusable?
Thanks.
Simply changing "new thread" to "new task" solved my problem. Thank you all.
Task.Run(Sub() listen(tcp_client))
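The trade-off behind that fix: each dedicated thread reserves about 1 MB of stack by default, which is what exhausts a 32-bit address space at 1500 clients, while tasks multiplex many connections over a few threads. A minimal sketch of the task-per-client shape in Python's asyncio, standing in for the .NET equivalent (the echo logic and port are illustrative only):

```python
import asyncio

async def handle_client(reader, writer):
    # One lightweight task per connection instead of one OS thread,
    # so 1500 clients cost kilobytes each rather than a 1 MB stack each.
    while True:
        data = await reader.read(4096)
        if not data:            # client disconnected
            break
        writer.write(data)      # echo back, standing in for real protocol work
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 9000)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # start the server (blocks until cancelled)
```

The scheduler suspends each task at the `await` points, so idle connections consume no thread at all.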
I am developing an application that calls a web service which deletes information from a database (the web service was developed by a third-party vendor). On the first run, approximately 100,000 records are deleted.
I have tested the routine a few times, and this message occasionally appears in Visual Studio:
"The CLR has been unable to transition from COM context 0x22c4f60 to COM context 0x22c51b0 for 60 seconds.
The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages.
This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time.
To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations."
I assume that the web service is taking more than sixty seconds to pass control back to the .NET Forms app. Please see the following quote from the message: "To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations". As this is a Windows Forms app, does this mean that I do not need to do anything to allow for this?
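The general cure, in any UI framework, is to keep a call that long off the STA/UI thread entirely (in .NET that would be something like BackgroundWorker or Task.Run) so the UI thread stays free to pump messages. A language-neutral sketch using Python's threading module, with a made-up call_web_service standing in for the third-party call:

```python
import queue
import threading
import time

results = queue.Queue()

def call_web_service():
    # Stand-in for the long-running third-party delete call (hypothetical).
    time.sleep(0.1)           # pretend this takes minutes in real life
    results.put("deleted 100000 records")

# "UI thread": start the call in the background instead of blocking on it.
worker = threading.Thread(target=call_web_service, daemon=True)
worker.start()

# The UI thread stays responsive; collect the result when it is ready.
worker.join()
print(results.get())
```

The key point is that the thread owning the message pump never performs the sixty-second wait itself; it only picks up the finished result.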
Sometimes this issue can also be caused by a malformed server name: for example, writing SERVER/SQLEXPRESS instead of SERVER\SQLEXPRESS produces the same error, which was the case for me.
If you are using a reader, check that you have not buried the problem in try/catch blocks; make an effort to resolve the underlying issue instead. A try/catch that swallows exceptions, especially inside a while loop, can manifest as a timeout.
I have a web service that intermittently uses up to 100% CPU on the server. We cannot replicate this on the development server; it only seems to happen in production. We thought it might be a load issue, but we have been testing on development with up to 10 people at a time, while in production only 2 people are using it in its beta state. I have scoured the code for any infinite loops and there are none.
After lots and lots of searching I found a page saying that string concatenation may be the issue. There were 5 places that used a StringWriter, which I have changed to StringBuilder as advised there. But the issue still occurs, and I am wondering if it may be the DataTable.WriteXml calls. Basically every method in the web service fills a data table and returns the data as XML, as this is the format the third party requires. The calls all look like:
Dim SB As New StringBuilder
Dim SW As New IO.StringWriter(SB)
dsSource.Tables("Test").WriteXml(SW, Data.XmlWriteMode.IgnoreSchema, True)
Is this inefficient? Would this cause 100% CPU usage? If it could be the cause, what alternative would be best?
I would love to run some monitoring on this, but the fact of the matter is it happens maybe once a week and only on the live server, so any time spent running performance diagnostics is time the service isn't running and our users can't get their work done. It is much quicker and simpler to kill the process and restart it.
Any ideas?
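On the string-concatenation theory: repeatedly appending to an immutable string re-copies the accumulated text each time, which is the quadratic behavior StringBuilder exists to avoid. A quick illustration using Python strings, with join playing the StringBuilder role (the row snippets are made up):

```python
parts = ["<row/>"] * 20000

def concat_naive():
    # Conceptually re-copies the accumulated string on every append,
    # O(n^2) total work; .NET's immutable String behaves this way.
    s = ""
    for p in parts:
        s = s + p
    return s

def concat_buffered():
    # join (like StringBuilder) sizes the result once and copies
    # each piece exactly once: O(n) total work.
    return "".join(parts)

assert concat_naive() == concat_buffered()
```

If the hot path really were concatenation, the CPU cost would grow with the square of the payload size, which fits an intermittent spike on unusually large result sets.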
Does this production web server only have one CPU? I ask because WriteXml, like any synchronous, single-threaded code, will use 100% of a single CPU. If the server has multiple CPUs and usage jumps to 100% and stays there a while, it is likely something else. Look at things like running too many threads, or cleaning up after yourself, i.e.:
SW.Flush()
SW.Close()
SW = Nothing
Or put it in a Using block:
Using SW As New IO.StringWriter(SB)
    ' Your code here
End Using
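The Using suggestion is the standard deterministic-disposal pattern: the writer is released as soon as the block exits, even if an exception is thrown. The same shape in Python, using a with block over io.StringIO and a made-up table_to_text helper for comparison:

```python
import io

def table_to_text(rows):
    # The with block guarantees the buffer is closed on exit,
    # exactly like VB's Using ... End Using.
    with io.StringIO() as sw:
        for row in rows:
            sw.write(f"<row id='{row}'/>")
        return sw.getvalue()

xml = table_to_text([1, 2, 3])
```

Without the block, a forgotten Close on an error path is precisely the sort of gradual resource build-up that shows as creeping memory or CPU on a long-lived server process.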
Can anyone tell me what is holding us back?
I have tried every different PHP script on the front end to send emails: Interspire, Oempro, PHPList, PHPknode. But we are only able to send 5 emails every 2 seconds.
We upgraded our server, and our hardware configuration is good. We have used Exim and even tried PMTA. Even so, our sending speed has not improved.
Our requirement is to send 200,000 to 300,000 emails a day, and we need to send them during peak hours, i.e. between 9am and 1pm. We are only able to send 15,000 emails in 6-7 hours.
I don't know what the problem is or why we cannot send emails quickly. Is it the PHP script, the MTA, or the server hardware configuration?
Can anyone please help me with this problem? Any help will be appreciated.
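The arithmetic alone shows the size of the gap; at the observed rate, no amount of MTA tuning will hit the target, so it is worth starting from the required rate. A quick check:

```python
# Observed: 5 emails every 2 seconds.
current_rate = 5 / 2                 # 2.5 emails/second
window = 4 * 3600                    # the 9am-1pm peak window, in seconds
per_window = current_rate * window   # the most the current setup can do in it
# Required: 200,000-300,000 emails inside that same window.
required_low = 200_000 / window      # ~13.9 emails/second
required_high = 300_000 / window     # ~20.8 emails/second
print(per_window, round(required_low, 1), round(required_high, 1))
```

That is roughly a five- to eight-fold gap between the current rate and the required one, which points at a serialized bottleneck rather than raw hardware capacity.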
I can tell you directly that Interspire Email Marketer is not especially high-performing. I was in a similar situation. We had a high-end server machine with SAS disks, 16 CPU cores and lots of RAM, and a highly tuned Postfix MTA and MySQL server (we spent a few days configuring those). The performance you are seeing matches our experience: the load in our case was entirely in the PHP script, not the database and not the MTA.
I suspect that the Interspire software is meant for very low-traffic newsletters (where receivers can be counted in the hundreds).
Interspire by default uses a single PHP process to work through the email queue, so it cannot make use of a multi-core machine. There is a paid multi-processing script, the MSH addon, which takes the IEM processing queue and distributes it across several processor cores for a massive speed-up. From the addon's website:
MSH is built around a "multi processing library": a multi-platform, shared-nothing multiprocessing framework inspired by Python's multiprocessing module (but very different at the API level). It uses "proc family functions" for process spawning and "soq" for IPC.
Disclaimer: I am one of the developers of MSH addon.
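MSH itself is proprietary, but the underlying idea (shard the send queue across worker processes, one per core) is exactly what the Python multiprocessing module it cites as inspiration provides. A minimal sketch with a fake send_email standing in for the real SMTP hand-off:

```python
from multiprocessing import Pool

def send_email(recipient):
    # Stand-in for the real per-message hand-off to the MTA (hypothetical).
    return f"queued:{recipient}"

if __name__ == "__main__":
    recipients = [f"user{i}@example.com" for i in range(1000)]
    # One worker process per core, so the queue drains in parallel
    # instead of through a single PHP-style process.
    with Pool(processes=4) as pool:
        results = pool.map(send_email, recipients, chunksize=100)
    assert len(results) == 1000
```

With four workers on a four-core box, the per-message PHP overhead is paid in parallel rather than serially, which is where the advertised speed-up comes from.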
After reading and searching about operating systems, processes and threads, I checked Wikipedia, which says:
A computer program is a passive collection of instructions; a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening up several instances of the same program often means more than one process is being executed.
Now, is it possible for a program to have more than one process? I am not counting running more than one instance of the same program: with a single instance of one program running, can that program have more than one process?
If yes, how? If no, why not?
I am a newbie at this, but very curious :)
Thanks for all your help.
Yes, fairly obviously: you can run two or more copies of most programs. I routinely have about 5 copies of vim running, and each of those is a separate process. As for how: the OS loads the executable file, creates a process, and then tells that process to start executing the file contents.
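What "each copy is a separate process" looks like from code: launching the same executable twice yields two distinct PIDs. A small sketch using the Python interpreter as the program being copied:

```python
import subprocess
import sys

# Launch two copies of the same program (the Python interpreter here).
# The OS loads the same executable twice, but each copy gets its own process.
procs = [
    subprocess.Popen(
        [sys.executable, "-c", "import os; print(os.getpid())"],
        stdout=subprocess.PIPE,
    )
    for _ in range(2)
]
pids = [int(p.communicate()[0]) for p in procs]
assert pids[0] != pids[1]   # same program, two distinct processes
```

The program file on disk is shared; the process, with its own address space and PID, is what the OS creates per launch.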
It is definitely possible, but a desktop application might not be a good example, and I think this is the source of your confusion.
Consider a web server instead (NginX or Apache). There is one master process and multiple worker processes at work. The master process "accepts" the work, so to speak, and delegates it to the workers. Both NginX and Apache can be configured to run any number of worker processes.
At our company we are in the business of delivering a SaaS product that helps businesses chat online with visitors to their websites. The back-end part of our system has multiple services communicating with each other to accomplish the task, and each service has multiple instances running.
My .NET WCF service calls an SSIS package using the Package.Execute() method.
After I call Execute, I call pkg.Dispose() and set app = null.
The memory usage keeps climbing, from 100 MB to 150 MB all the way up to almost 300 MB.
I am recycling the process for now, but I want to know the source of the problem.
Any ideas?
Update
The application that calls the WCF service is on another server so there is no issue there.
Are you closing your host? Are you just using a Using statement? What does your open/close code look like?
There are a number of ways to approach this quite common task (diagnosing memory leaks in w3wp worker processes). Tess has a great "getting started" post here:
http://blogs.msdn.com/tess/archive/2008/05/21/debugdiag-1-1-or-windbg-which-one-should-i-use-and-how-do-i-gather-memory-dumps.aspx
Oisin
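The dump-comparison technique in that post generalizes to any runtime: snapshot the heap at two points and diff to see which allocation sites grew. As a cross-language analogy only (the .NET tooling is DebugDiag/WinDbg as the post describes), the same idea in a few lines of Python with tracemalloc and a deliberately leaky handler:

```python
import tracemalloc

leak = []  # stands in for state a request handler forgets to release

def handle_request():
    leak.append(bytearray(10_000))  # ~10 KB retained per "request"

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(100):
    handle_request()
after = tracemalloc.take_snapshot()

# Diff the two snapshots: the allocation site with the biggest growth is
# the leak suspect, the same idea as comparing two memory dumps of w3wp.
grew = max(after.compare_to(before, "lineno"), key=lambda s: s.size_diff)
assert grew.size_diff > 500_000   # ~1 MB retained across 100 calls
```

Two dumps taken a few hundred requests apart localize the growth far faster than staring at total memory usage does.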
An increase in virtual memory is not necessarily a problem, and 300MB is not very much memory in any case. Does IIS recycle on its own? If not, then I suggest you leave it alone.
Are you running SSIS 2005 or 2008? I remember 2005 having a known memory-leak issue when called via the API.
HTH