There is an API that I need to periodically poll in order to check the status message.
ticker := time.NewTicker(time.Second * tickerWaitTimeSeconds)
defer ticker.Stop()
for range ticker.C {
    res, err := r.client.Get(someURL)
    if err != nil {
        results <- err
        return
    }
    err = json.NewDecoder(res.Body).Decode(&response)
    res.Body.Close() // close the body on every iteration so connections aren't leaked
    if err != nil {
        results <- err
        return
    }
    status := response.Data.Attributes.CompleteStatus
    if status == "COMPLETED" {
        results <- nil
        return
    }
    if status == "ERROR" {
        results <- ErrFailedJob
        return
    }
}
It has worked reliably so far, but there is one possible catch, if I understand correctly how tickers work.
The constant tickerWaitTimeSeconds is currently set to 2 seconds. The value was chosen so that the request has enough time to succeed (a request doesn't take anywhere near 1 second, let alone 2) and so that we don't spam the API.
I suspect, however, that if for some reason a request takes longer than tickerWaitTimeSeconds, there might be more GET requests to the API than necessary.
Is my suspicion valid here? Perhaps there is something wrong with my understanding of what really happens and the Get call blocks the ticker, but I doubt that's the case.
Based on the ticker documentation:
The ticker will adjust the time interval or drop ticks to make up for slow receivers.
So the ticker drops ticks if Get takes longer than the tick interval. Since the Get call is not run in a separate goroutine, you can never have more than one request in flight at a time. What can happen, though, is that the next Get starts sooner than a full interval after the previous one: the ticker's channel buffers one tick, so a tick that fired while a slow Get was running is delivered as soon as the loop comes back around, and the next request may start almost immediately after the previous one completed. If you want each call to start at least 2 seconds after the completion of the previous call, reset the ticker at each iteration.
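For example, a minimal sketch of the reset approach, reusing the client, someURL, and results channel from the question (ticker.Reset requires Go 1.15+):

ticker := time.NewTicker(time.Second * tickerWaitTimeSeconds)
defer ticker.Stop()
for range ticker.C {
    res, err := r.client.Get(someURL)
    if err != nil {
        results <- err
        return
    }
    res.Body.Close() // decode and inspect the response here, as before
    // A tick may have fired and been buffered while the request was running;
    // drain it so the loop doesn't immediately run again on a stale tick.
    select {
    case <-ticker.C:
    default:
    }
    // Restart the interval, so the next request begins no sooner than
    // tickerWaitTimeSeconds after this one completed.
    ticker.Reset(time.Second * tickerWaitTimeSeconds)
}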
Why do you need a ticker? Wouldn't it be easier to use time.Sleep()?
Something like this:
package main

import (
    "fmt"
    "time"
)

// doSomethingUseful stands in for the real work, e.g. the API poll.
func doSomethingUseful() {
    fmt.Println("tick at", time.Now().Format("15:04:05"))
}

func ticktock(d time.Duration) {
    for {
        doSomethingUseful()
        time.Sleep(d)
    }
}

func main() {
    ticktock(2 * time.Second)
}
See it in action at https://go.dev/play/p/xtg6bqNdwuk
You might want to track the time doSomethingUseful() takes and deduct it from the sleep interval, so the period between iterations never exceeds the specified duration (though it might be shorter).
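A sketch of that variant, using the same hypothetical doSomethingUseful as above:

func ticktock(d time.Duration) {
    for {
        start := time.Now()
        doSomethingUseful()
        // Sleep only for the remainder of the interval; if the work took
        // longer than d, the argument is negative and Sleep returns immediately.
        time.Sleep(d - time.Since(start))
    }
}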
Related
I have to create per-customer backpressure, and implemented this by returning a Mono.justOrEmpty(authentication).delayElement(Duration.ofMillis(calculatedDelay));
This seems to work fine in my unit tests, BUT if I have a calculatedDelay of 0, I can't test it. This snippet fails with a java.lang.AssertionError: unexpected end during a no-event expectation:
@Test
public void testSomeDelay_8CPUs_Load7() throws IOException {
    when(cut.getLoad()).thenReturn("7");
    when(cut.getCpuCount()).thenReturn(8L);
    when(authentication.getName()).thenReturn("delayeduser");

    Duration duration = StepVerifier
        .withVirtualTime(() -> cut.get(Optional.of(authentication)))
        .expectSubscription()                // swallow the subscribe event
        .expectNoEvent(Duration.ofMillis(0)) // here is the culprit
        .expectNextCount(1)
        .verifyComplete();
}
I don't know how to check the cases where I expect no delay at all. The same happens, by the way, when I return a plain Mono.justOrEmpty(authentication) (without any delay). I can't seem to verify that I have created the correct non-delayed flow.
Yeah, this is a corner case that is hard to cover, especially when you know that foo.delayElement(0) actually just returns foo unmodified.
What you could do is test the delay differently, by appending an elapsed() operator:
Duration duration = StepVerifier
    .withVirtualTime(() -> cut.get(Optional.of(authentication))
        .elapsed()          // this will get the correct timing of 0 with virtual time
        .map(Tuple2::getT1) // we only care about the timing, T2 being the value
    )
    .expectNext(calculatedDelay)
    .verifyComplete();
I am creating a perpetual trivia dapp (for learning purposes) that has three stages, each lasting approximately 30 seconds. Example:
enum Stages {
    AcceptingEntryFees,
    RevealQuestion,
    Complete
}

modifier transitionToReveal(uint _playerCount) {
    _;
    if (stage == Stages.AcceptingEntryFees && now >= creationTime + 30 seconds && _playerCount > 0) {
        nextStage();
    }
}

modifier transitionToComplete() {
    _;
    if (stage == Stages.RevealQuestion && now >= creationTime + 60 seconds) {
        nextStage();
    }
}

modifier transitionToAcceptingFees() {
    _;
    if (stage == Stages.Complete && now >= creationTime + 90 seconds) {
        nextStage();
    }
}

function nextStage() internal {
    stage = Stages(uint(stage) + 1);
}
I'm struggling with how to make the stage increment once the time requirement has been met. It doesn't need to be exactly 30 seconds by any means.
Take the first transition (accepting fees).
function payEntryFee() external payable transitionToReveal(getPlayerCount()) atStage(Stages.AcceptingEntryFees) {
....
}
I currently have it set up so that people can pay to play until the 30 seconds are up. However, for the stage to increment, a transaction has to take place, so with this setup the first person to join after the 30 seconds are up incurs the gas cost and triggers the next stage. This is not ideal: what if another player doesn't show up for a while?
From my research, there is no way to trigger a method internally based on time, and triggering it from the front end would require gas; then who pays for it?
Can anyone think of an elegant solution to this? I would like the stage to increment roughly every 30 seconds without interrupting the game.
You would either need an external entity, like a game-master web application, which changes the state on its own timer by sending transactions, but that would cost you gas for every single transaction.
Or you could keep track of the state on the front end (a kind of syncing), and then whenever the player interacts with the Ethereum dapp to make a function call, do a fetchState() to get the new state and route the player to the correct game state.
For example, after the web app front end gets confirmation that the user has paid, it keeps track of the state itself and presents the user with the UI options for the predicted state of the dapp; then, when the user sends something like "submitTriviaAnswer", the dapp updates its state and verifies that the user can submit a trivia answer.
function submitTriviaAnswer(int responseID) public {
    fetchState(); // lazily bring the stage up to date before validating the call
    ...
}
I am drawing a chart using data pulled from bitfinex.com via a simple API query. As a result, I need to render a chart showing the historical BTCUSD data for the past two years.
Docs are available right here: https://bitfinex.readme.io/v2/reference#rest-public-candles
Everything works fine except for the limit on the amount of retrieved data.
This is my request:
https://api.bitfinex.com/v2/candles/trade:1h:tBTCUSD/hist?start=1514764800000&sort=1
The result can be seen here, or you can paste the request into a browser: https://docs.google.com/document/d/1sG11Ro0X21_UFgUtdqrlitcCchoSh30NzGCgAe6M0u0/edit?usp=sharing
The problem is that I receive candles for only 5 days, no matter what dates or parameters I use. I can get more candles if I add the limit parameter to the query string, but I still cannot get more than 1000-1100 candles. Beyond that I even get a 500 error from the server:
Server error: GET https://api.bitfinex.com/v2/candles/trade:1h:tBTCUSD/hist?limit=1100&start=1512086400000&end=1516233600000&sort=1 resulted in a 500 Internal Server Error response: ["error",10020,"limit: invalid"]. What is the valid limit supposed to be? There is no such information in the docs.
The author of this topic has the same question, but no solutions are given there, and the last answer does not change much: Bitfinex data api
How can I get the desired amount of data for the two-year period? I do not want to break my query into smaller pieces and go step by step; that would look ugly.
From the looks of it, the limit is set to 1000. If you need more than 1000 historical entries, you can parse the last timestamp of the response and issue another request until you reach the desired end time.
Keep in mind that you can only make 10-90 requests per minute, so it's smart to build in some kind of sleep of around 6 seconds between requests.
import json
import time

import requests

start = 1512086400000
end = 1516233600000
timestamp = start
last_timestamp = None
url = 'https://api.bitfinex.com/v2/trades/tBTCUSD/hist/'
historical_data = []

while timestamp <= end and timestamp != last_timestamp:
    print("Requesting " + str(timestamp))
    params = {'start': timestamp, 'limit': 1000, 'sort': 1}
    response = requests.get(url, params=params)
    trades = json.loads(response.content)
    if not trades:
        break
    historical_data.extend(trades)
    last_timestamp = timestamp
    # each trade entry is [ID, MTS, AMOUNT, PRICE]; continue from the last one's timestamp
    _, timestamp, _, _ = trades[-1]
    time.sleep(6)  # respect the rate limit mentioned above
We run multiple short queries in parallel and hit the limit of 10 API requests per second.
According to the docs, throttling might occur if we exceed the limit of 10 API requests per second per user per project.
We send a "start query job" and then call getQueryResults() with a timeoutMs of 60,000. However, we get a response after about 1 second; we look for jobComplete in the JSON response, and since it is not there, we have to call getQueryResults() again many times and hit the threshold, which causes an error, not a slowdown. Sample code is below.
Our questions are:
1. What is a "user"? Is it an App Engine user, or a user ID that we can put in the connection string or in the query itself?
2. Is it really per BigQuery API project?
3. What is the expected behavior? We got the error "Exceeded rate limits: too many user/method api request limit for this user_method", not the throttling behavior the docs describe, and our whole process fails.
4. As seen in the code below, why do we get the response after ~1 second and not according to our timeout? Are we doing something wrong?
Thanks a lot
Here is a code sample:
res = None
while res is None or 'jobComplete' not in res or not res['jobComplete']:
    try:
        res = self.service.jobs().getQueryResults(projectId=self.project_id,
                                                  jobId=jobId, timeoutMs=60000,
                                                  maxResults=maxResults).execute()
    except HTTPException:
        if independent:
            raise
Are you saying that even though you specify timeoutMs=60000, it is returning within 1 second but the job is not yet complete? If so, this is a bug.
The quota limits for getQueryResults are actually currently much higher than 10 requests per second. The reason the docs say only 10 is because we want to have the ability to throttle it down to that amount if someone is hitting us too hard. If you're currently seeing an error on this API, it is likely that you're calling it at a very high rate.
I'll try to reproduce the problem where we don't wait for the timeout ... if that is really what is happening it may be the root of your problems.
def query_results_long(self, jobId, maxResults, res=None):
    start_time = query_time = None
    while res is None or 'jobComplete' not in res or not res['jobComplete']:
        if start_time:
            logging.info('requested for query results ended after %s', query_time)
            time.sleep(2)
        start_time = datetime.now()
        res = self.service.jobs().getQueryResults(projectId=self.project_id,
                                                  jobId=jobId, timeoutMs=60000,
                                                  maxResults=maxResults).execute()
        query_time = datetime.now() - start_time
    return res
Then in the App Engine log I had this:
requested for query results ended after 0:00:04.959110
I am trying to search an LDAP server (Active Directory). When I parse the search results, the hasMoreElements method of NamingEnumeration takes around 15-20 seconds to execute when it returns false; that is not the case when it returns true. Is there a way to solve this issue?
Code:
SearchControls ctrl = new SearchControls();
ctrl.setSearchScope(SearchControls.SUBTREE_SCOPE);
String searchFilter = "(&(objectClass=user)(uid=abc))";
NamingEnumeration ne = dirContext.search("ldap://abc:389/dc=abc,dc=xy", searchFilter, ctrl);
if (ne != null) {
    while (ne.hasMoreElements()) {
        // parse results
    }
}
The NamingEnumeration does some cleanup when hasMoreElements() is called for the last time. It also checks for additional referrals if the context property Context.REFERRAL is set to "follow". In one case in our software this caused exactly the behaviour described: the last call to hasMoreElements() (or to hasMore(), or calling next() more often than allowed) took up to 40 seconds because referrals were searched in the LDAP context. The solution is to not set Context.REFERRAL to "follow".
AD has a default limit on the number of objects it returns from an LDAP query; I think it is in the 1000-object range.
If you hit 1001, you get 1000 returned and then an error, so I could see this being the case.
Count how many objects you get back in a test; I bet you hit 1000 and then fail.