I am using Indy 10 on C++Builder 6.0 Professional Edition.
My SMTP server imposes a limit on the number of connections in a certain time interval, so I need to send more than one email using the same connection. Is it possible? How can I do that?
I am already able to connect and send one email on each connection.
Thank you very much for any help.
You can call TIdSMTP.Send() multiple times between a single pair of Connect()/Disconnect() calls, adjusting the TIdMessage as needed for each Send() call.
IdSMTP1.Connect;
try
  // prepare TIdMessage as needed...
  IdSMTP1.Send(IdMessage1);
  // prepare TIdMessage as needed...
  IdSMTP1.Send(IdMessage1);
  // prepare TIdMessage as needed...
  IdSMTP1.Send(IdMessage1);
finally
  IdSMTP1.Disconnect;
end;
Good day
I'm trying to perform load testing with LoadRunner 11. Here's the issue:
I have a script that was generated automatically after recording my actions.
I need to capture the session ID. I do it with web_reg_save_param() as follows:
web_reg_save_param("S_ID",
    "LB=Set-Cookie: JSESSIONID=",
    "RB=; Path=/app/;",
    LAST);
web_add_cookie("S_ID; DOMAIN={host}");
I capture the ID from the response (Tree View):
D2B6F5B05A1366C395F8E86D8212F324
Comparing it with the Replay Log, I see:
"S_ID = 75C78912AE78D26BDBDE73EBD9ADB510".
Comparing the two IDs above with the ID in the next request, I see a third ID (Tree View):
80FE367101229FA34EB6429F4822E595
Why do I have 3 different IDs?
Let me know if I have to provide extra information.
You should use "Search=All" as in the code below, provided your left and right boundaries are correct:
web_reg_save_param("S_ID",
    "LB=Set-Cookie: JSESSIONID=",
    "RB=; Path=/app/;",
    "Search=All",
    LAST);
web_add_cookie("{S_ID}; DOMAIN={host}");
For details, refer to the HP manual for the web_reg_save_param function.
I do not see what the conflict or controversy is here. Yes, items related to state or session will definitely change from user to user and from one recording session to the next. They may even change from one request to the next. You may need to record several times to identify what changes, and to establish a pattern for when you need to collect data from a response and when you need to reuse that collected data in a subsequent request.
Take a listen to this podcast; it should help:
http://www.perfbytes.com/dynamic-data-correlation
While inserting multiple records into the DB, I find that at times the thread waits indefinitely at Socket>>waitForDataIfClosed:, where the readSemaphore is asked to wait. I am not well versed in sockets, so I would appreciate it if any Pharo gurus could take a look at it.
I have objects derived from PersistentObject, and its session will always return a single instance of GlorpSession. A collection of such objects is iteratively sent the messages bePersistent and commitUnitOfWork:
myPersistantObjectCollection do: [:each |
    each
        bePersistent;
        commitUnitOfWork].
Would appreciate any sort of comments.
Thanks.
1) What is the difference between defaultSelenium.shutDownSeleniumServer() and seleniumServer.stop()?
I observe that when I just use
defaultSelenium.stop();
seleniumServer.stop();
the browser closes but the server does not shut down. If that is the case, what is the use of seleniumServer.stop()?
2) Is this the right sequence of commands? If not, what is and why?
defaultSelenium.stop();
defaultSelenium.shutDownSeleniumServer();
seleniumServer.stop();
The answer to this question can be found in this thread.
I want to have a task that executes every 5 minutes, but waits for the last execution to finish and only then starts counting those 5 minutes. (This way I can also be sure that there is only one task running.) The easiest way I found is to open a manage.py shell in the Django application and run this:
from time import sleep

while True:
    result = task.delay()
    result.wait()
    sleep(5)
but for each task that I want to execute this way I have to run its own shell. Is there an easier way to do it? Maybe some kind of custom django-celery scheduler?
Wow it's amazing how no one understands this person's question. They are asking not about running tasks periodically, but how to ensure that Celery does not run two instances of the same task simultaneously. I don't think there's a way to do this with Celery directly, but what you can do is have one of the tasks acquire a lock right when it begins, and if it fails, to try again in a few seconds (using retry). The task would release the lock right before it returns; you can make the lock auto-expire after a few minutes if it ever crashes or times out.
For the lock you can probably just use your database or something like Redis.
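As a minimal sketch of that idea, assuming django-celery and a cache backend whose add is atomic (e.g. memcached or Redis), it could look something like this; the task name, lock key, and timeouts are all illustrative:

from celery.task import task
from django.core.cache import cache

LOCK_EXPIRE = 60 * 5  # auto-expire the lock in case the task crashes or hangs

@task(default_retry_delay=10, max_retries=None)
def my_exclusive_task():
    # cache.add is atomic: it stores the key only if it does not exist yet,
    # so at most one running instance can hold the lock
    if not cache.add("my_exclusive_task-lock", "locked", LOCK_EXPIRE):
        raise my_exclusive_task.retry()  # lock is held; try again in 10 seconds
    try:
        pass  # do the actual work here
    finally:
        cache.delete("my_exclusive_task-lock")  # release right before returning

With max_retries=None the task simply keeps retrying until the lock is free, which gives you the "wait for the last execution to finish" behavior.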
You may be interested in this simpler method that requires no changes to a celery conf.
from datetime import timedelta
from celery.decorators import periodic_task

@periodic_task(run_every=timedelta(minutes=5))
def my_task():
    pass  # insert fun-stuff here
All you need is to specify in the celery conf which task you want to run periodically and with which interval.
Example: Run the tasks.add task every 30 seconds
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    "runs-every-30-seconds": {
        "task": "tasks.add",
        "schedule": timedelta(seconds=30),
        "args": (16, 16)
    },
}
Remember that you have to run celery in beat mode with the -B option:
manage.py celeryd -B
You can also use crontab style schedules instead of a time interval; check this out:
http://ask.github.com/celery/userguide/periodic-tasks.html
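For example, the guide linked above shows crontab schedules like this one, which runs the same illustrative tasks.add task every Monday morning at 7:30:

from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    "every-monday-morning": {
        "task": "tasks.add",
        "schedule": crontab(hour=7, minute=30, day_of_week=1),
        "args": (16, 16),
    },
}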
If you are using django-celery, remember that you can also use the Django DB as the scheduler for periodic tasks; that way you can easily add new periodic tasks through the django-celery admin panel.
To do that, you need to set the celerybeat scheduler in settings.py like this:
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"
To expand on @MauroRocco's post, from http://docs.celeryproject.org/en/v2.2.4/userguide/periodic-tasks.html:
Using a timedelta for the schedule means the task will be executed 30 seconds after celerybeat starts, and then every 30 seconds after the last run. A crontab like schedule also exists, see the section on Crontab schedules.
So this will indeed achieve the goal you want.
Because celery.decorators is deprecated, you can use the periodic_task decorator like this:
from celery.task.base import periodic_task
from django.utils.timezone import timedelta

@periodic_task(run_every=timedelta(seconds=5))
def my_background_process():
    pass  # insert code here
Add that task to a separate queue, and then use a separate worker for that queue with the concurrency option set to 1.
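For example (both the task name tasks.my_task and the queue name single are placeholders), route the task to its own queue in settings.py:

CELERY_ROUTES = {"tasks.my_task": {"queue": "single"}}

and then start a dedicated worker that consumes only that queue with a single worker process:

manage.py celeryd -Q single --concurrency=1

Since that worker runs one process and nothing else consumes the queue, at most one instance of the task can run at a time.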