Hello, I want to debug some of my Bukkit plugins, but the problem is that I can sit at a breakpoint for at most one minute, because after that time the server auto-stops. How can I disable this?
Minecraft's server.properties defines how long a single tick may take before the server shuts itself down:
max-tick-time=60000
Increasing this value lets you pause a tick for a longer period of time.
The time is in milliseconds, so for a one-hour pause use 1*60*60*1000 = 3600000:
max-tick-time=3600000
If you wish to disable the feature entirely, use -1:
max-tick-time=-1
I have a test that was alerting because it was taking extra time for an asset to load. We changed from waitForNoRequest to a pause (at Catchpoint's suggestion). That did not seem to have the expected effect of waiting for things to load. We increased the pause from 3000 to 12000 and that helped to allow the page to load and stop the alert. We noticed some more alerts, so I tried to increase the pause to something like 45000 and it would not allow me to pause for that long.
So the main question here is: what functionality do these two commands each provide? What do I gain by pausing instead of waiting, if anything?
Here's the test, with data changed to protect company-specific info. Step 3 is where we had some failures and where we switched between pause and wait.
// Step - 1
open("https://website.com/")
waitForNoRequest("2000")
click("//*[@id=\"userid\"]")
type("//*[@id=\"userid\"]", "${username}")
setStepName("Step1-Login-")
// Step - 2
clickMouseAndWait("//*[@id=\"continue\"]")
waitForVisible("//*[@id=\"challenge-password\"]")
click("//*[@id=\"challenge-password\"]")
type("//*[@id=\"challenge-password\"]", "${password}")
setStepName("Step2-Login-creds")
// Step - 3
clickMouseAndWait("//*[@id=\"signIn\"]")
setStepName("Step3-dashboard")
waitForTitle("Dashboard")
waitForNoRequest("3000")
click("//*[@id=\"account-header-wrapper\"]")
waitForVisible("//*[@id=\"logout-link\"]")
click("//*[@id=\"logout-link\"]")
// Step - 4
clickAndWait("//*[text()=\"Sign Out\"]")
waitForTitle("Login - ")
verifyTextPresent("You have been logged out.")
setStepName("Step4-Logout")
Rachana here; I'm a member of the Technical Service Team at Catchpoint, and I'll be happy to answer your questions.
Please find below the differences between the pause and waitForNoRequest commands:
Pause
Purpose: This command pauses script execution for a specified amount of time, whether or not HTTP/s requests are downloading. The time value is given in milliseconds and can range from 100 to 30,000 ms.
Explanation: This command is used when the agent needs to wait for a set amount of time, regardless of how requests are loading, before proceeding to the next step or command. The wait time is the only parameter required for this action.
waitForNoRequest
Purpose: This command waits until there have been no HTTP/s requests downloading for a specified amount of time. The wait time parameter can range from 1,000 to 5,000 ms.
Explanation: The only parameter for this action is the wait time. The agent will wait for that specified amount of time before moving on to the next step/command, which in turn allows necessary requests more time to load after document complete.
For instance, when you add waitForNoRequest(5000), the agent initially waits 5,000 ms after document complete for any network activity. If there is network activity during that period, the agent waits another 5,000 ms after it ends, and the process repeats until no further request loads within the specified timeframe (5,000 ms).
A pause command with 12,000 ms gives the page exactly 12 seconds to load. After 12 seconds, script execution continues to the next command whether the page has loaded or not.
Since waitForNoRequest has a maximum value of 5,000 ms, the most you can do is tell the agent to wait for a 5-second gap in network activity. In your case the page had no network activity for 3 seconds, so the agent proceeded to the next action; the page had not finished loading and the script failed.
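If the page needs more than a 5-second quiet gap, one workaround is to chain the two commands, letting waitForNoRequest catch the common case and a pause add a fixed buffer on top. This is a sketch only, assuming pause takes the same quoted-milliseconds argument style as the commands in the script above (adjust both values to your page):
waitForNoRequest("5000")
pause("12000")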
I tried to increase the pause to something like 45000 and it would not allow me to pause for that long.
We allow a maximum pause time of 30 seconds, hence 45 seconds will not work.
Please reach out to our support team and we’ll be glad to connect you with our scripting SMEs and help you with any scripting needs you might have.
A Google Colab session expires after 12 hours at the longest. For this reason, I don't know whether it's worth starting to train my model now or waiting until the session expires and starting a brand-new session.
Is there a way to know how long my session has been active for, or, equivalently, how much time I have left on my session?
Thanks.
import time, psutil

# Seconds since the Colab VM booted, which approximates the session start
uptime = time.time() - psutil.boot_time()
remain = 12 * 60 * 60 - uptime  # seconds left of the 12-hour limit
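To print the estimate in a readable form (a small addition on top of the snippet above; 12 hours is the advertised maximum, and the actual limit can vary):
print(f"{remain / 3600:.1f} hours of the 12-hour limit remain")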
Menu -> Runtime -> View runtime logs
Look at the start time (it may be on the last page), then add 12 hours to get the expiry time.
I know that a constant delay can be set in
settings.py
DOWNLOAD_DELAY = 2
However, if I set the delay to 2 s it is not efficient enough, and if I set DOWNLOAD_DELAY = 0,
the crawler is able to crawl about 10 pages; after that, the target site returns something like "you are requesting too frequently".
What I want is to keep DOWNLOAD_DELAY at 0 and, once the "requesting too frequently" message is found in the HTML, change the delay to 2 s; after a while, switch it back to zero.
Is there any module that can do this, or any better idea to handle such a case?
Update:
I found that there is an extension called AutoThrottle,
but can it be customized with logic like this?
if (requesting too frequently) is found
increase the DOWNLOAD_DELAY
If, right after you get the anti-spider page, you can get a data page again within 2 seconds, then what you are asking for probably requires writing a downloader middleware
that checks for the anti-spider page, moves all scheduled requests to a renew-queue, and starts a looping call when the spider is idle to pull requests from the renew-queue (the looping interval is your hack for a new download delay). It then has to decide when the download delay is no longer necessary (this requires some testing), stop the looping, and reschedule all the requests in the renew-queue back to the Scrapy scheduler. You would need a Redis-backed queue in the case of a distributed crawl.
With the download delay set to 0, in my experience throughput can easily go above 1,000 items/min. If the anti-spider page pops up after only 10 responses, it is probably not worth the effort.
Instead, try to find out how fast your target server allows you to go: maybe 1.5 s, 1 s, 0.7 s, 0.5 s, etc. Then consider redesigning your product to take into account the throughput your crawler can actually achieve.
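If all you need is the simpler behavior from your update (raise the delay when the ban page appears, lower it again afterwards), a rough sketch of such a downloader middleware follows. It adjusts the per-domain slot delay the same way the AutoThrottle extension does internally, so it relies on Scrapy internals (crawler.engine.downloader.slots and the download_slot request meta key) that may change between versions; the ban text and the 2 s / 0.5 s values are assumptions taken from your question.
class BanAwareDelayMiddleware:
    """Sketch of a downloader middleware that raises the download delay
    when the anti-spider page is seen and lowers it again on clean responses."""

    BAN_TEXT = b"requesting too frequently"  # assumed marker text
    BAN_DELAY = 2.0   # seconds to wait after seeing the ban page
    STEP_DOWN = 0.5   # how much to reduce the delay per clean response

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def _slot(self, request):
        # The downloader keeps one slot per domain; this mirrors how
        # AutoThrottle looks slots up, but it is not a public API.
        key = request.meta.get("download_slot")
        return self.crawler.engine.downloader.slots.get(key)

    def process_response(self, request, response, spider):
        slot = self._slot(request)
        if self.BAN_TEXT in response.body:
            if slot is not None:
                slot.delay = self.BAN_DELAY
            # Returning a request here makes Scrapy retry the banned page.
            return request.replace(dont_filter=True)
        if slot is not None and slot.delay > 0:
            slot.delay = max(0.0, slot.delay - self.STEP_DOWN)
        return response
Enable it via DOWNLOADER_MIDDLEWARES in settings.py; the module path and priority are up to your project layout.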
You can use the AutoThrottle extension now. It is turned off by default; you can add these parameters to your project's settings.py file to enable it:
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 300
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = True
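As for the update's question about custom logic: AutoThrottle adjusts delays based on download latency, not page content, but it can be subclassed. The sketch below overrides its _adjust_delay hook, which is an internal method and may change between Scrapy versions; the ban text and 2 s floor are again assumptions from the question.
from scrapy.extensions.throttle import AutoThrottle

class BanAwareAutoThrottle(AutoThrottle):
    """Sketch: keep AutoThrottle's normal behavior, but also enforce a
    minimum delay whenever the anti-spider page is detected."""

    def _adjust_delay(self, slot, latency, response):
        super()._adjust_delay(slot, latency, response)  # normal latency-based logic
        if b"requesting too frequently" in response.body:
            slot.delay = max(slot.delay, 2.0)  # floor of 2 s after a ban page
Register it under EXTENSIONS in settings.py in place of the stock scrapy.extensions.throttle.AutoThrottle entry (disable the stock one by mapping it to None).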
Yes, you can use the time module to add a dynamic delay:
import time

for i in range(10):
    # ... operation 1 ...
    time.sleep(i)  # sleeps for i seconds, so the delay grows each iteration
    # ... operation 2 ...
Now you can see the delay between operation 1 and operation 2.
Note: time.sleep() takes its argument in seconds, so the variable i here is a number of seconds.
I have a process that tries to make an SSL connection after start up, but that fails if the clock has not yet been set (the dates don't match the effective dates on the certificates). Is it possible to configure upstart to only start the process after the internal clock is set?
The default setting for the clock is 2010-01-01, so perhaps something like date >= 2014 is sufficient (obviously not legit upstart syntax, but the concept holds).
The best I could figure out was to start up after NTP has started, but that doesn't necessarily mean the clock has been set as the network connection establishment may be delayed or not available for a while.
The simple solution is probably to just poll the date and wait 500ms or whatever before trying again if the date isn't sane yet.
Here's what I ended up doing:
start on started connman
stop on runlevel [016]

script
    YEAR=$(date +'%Y')
    until [ "$YEAR" -ge 2014 ]; do
        sleep 5
        YEAR=$(date +'%Y')
    done
    python access_point.py
end script
I wait until the connection manager is running and then I check the year every 5 seconds until the year is 2014 or greater.
Delayed Job is great, but I would like to change its polling interval to be more frequent (every 2 seconds) to meet a special need.
Is there a config option, or somewhere to hard-code it, that changes this?
With DJ 3.0 you can add this to the config/initializers/delayed_job_config.rb file:
Delayed::Worker.sleep_delay = 2
Try setting
Delayed::Worker.const_set("SLEEP", 2)
in your config/initializers/delayed_job_config.rb file.
Sure, just go to RAILS_ROOT/vendor/plugins/delayed_job/lib/delayed/worker.rb and look for the line
self.sleep_delay = 5
and change it to
self.sleep_delay = 2
or whatever you'd like.
On an earlier version of DJ I set this as low as 0.1 so that queued jobs get picked up for processing almost instantly, and it works just fine.