I am implementing a module that supports HTTP 206 partial content requests.
After reading RFC 2616, I noticed that in a multi-range request, overlapping ranges such as "a-b, a-d" are not allowed.
My question is:
What happens with separate single-range requests whose byte ranges overlap?
Request #1: a-b
Request #2: a-d
Do I need to ignore bytes a-b in the second request?
OR
Do I need to overwrite the bytes?
Thanks
Overwrite the bytes.
Semantically, a response does not depend on any previous request, because HTTP is a stateless protocol; each range request is served independently, so just write the bytes from the second response over the region you already have.
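To illustrate (a minimal sketch only, not your module: the servlet and the in-memory content array are assumptions), each request is parsed and answered on its own, so an overlapping range in a later request is simply served again:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RangeServlet extends HttpServlet {

    private byte[] content; // the resource being served (hypothetical)

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Handles only "first-last" and "first-" forms; suffix and multi-range are omitted here.
        String range = req.getHeader("Range"); // e.g. "bytes=100-199"
        if (range == null || !range.startsWith("bytes=")) {
            resp.setStatus(200);
            resp.getOutputStream().write(content);
            return;
        }
        String[] bounds = range.substring("bytes=".length()).split("-", 2);
        long first = Long.parseLong(bounds[0]);
        long last = bounds[1].isEmpty() ? content.length - 1 : Long.parseLong(bounds[1]);
        if (first >= content.length) {
            resp.setStatus(416); // Range Not Satisfiable
            resp.setHeader("Content-Range", "bytes */" + content.length);
            return;
        }
        last = Math.min(last, content.length - 1);
        resp.setStatus(206); // Partial Content
        resp.setHeader("Content-Range", "bytes " + first + "-" + last + "/" + content.length);
        resp.getOutputStream().write(content, (int) first, (int) (last - first + 1));
    }
}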
I've had my S3 bucket logging to another bucket using the Server Access Log Format for a while. For the Operation: REST.GET.OBJECT, sometimes an HTTP Status: 206 Partial Content is returned because the whole file wasn't downloaded. But I can see in the logs that sometimes when HTTP Status: 206 is returned, the whole file was downloaded. I've removed some fields to make it simpler:
Operation: REST.GET.OBJECT
Request-URI: "GET [File] HTTP/1.1"
HTTP Status: 206
Error Code: -
Bytes Sent: 76431360
Object Size: 76431360
Total Time: 16276
Turn-Around Time: 190
What happened here? If the Bytes Sent are the same as the Object Size then how can the source report this as a Partial Content?
The 206 status has nothing to do with an incomplete file transfer. The server determines what status code to send before it starts sending the response body, so it would have to predict the future to know whether it will be able to send the whole file.
Instead, what the 206 status code actually means is that the following three things happened at once:
the client sent a Range header in its request;
the server decided to honour it and send exactly the bytes requested, not the whole file;
the server was actually able to do so — the range was valid and satisfiable.
In this case, the standard requires the server to reply with the 206 status code, not 200, regardless of whether the range happens to cover exactly the whole file or only a part of it.
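You can check this with any range-aware server: request the entire object as a range and you still get 206. A small sketch (the URL is a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;

public class RangeCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; any server that honours Range headers behaves the same way.
        URL url = new URL("https://example.com/somefile.bin");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Ask for the whole object as a range: first byte to the end.
        conn.setRequestProperty("Range", "bytes=0-");
        // A range-aware server replies 206 even though the range covers the entire file.
        System.out.println(conn.getResponseCode());               // typically 206
        System.out.println(conn.getHeaderField("Content-Range")); // e.g. "bytes 0-76431359/76431360"
        conn.disconnect();
    }
}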
Sometimes the Google Maps API returns a 500 server error response for German postal codes, and I cannot understand why.
I hope it is specific enough.
Any ideas?
https://maps.googleapis.com/maps/api/geocode/json?key={api_key}&address={postal_code}&language=de&region=de&components=country:DE&sensor=false
Since you specify that the problem is not a particular address but a seemingly "random" behavior, this may fall under a behavior documented for other well-known Google APIs.
As in those cases, the recommended strategy for the Geocoding API is exponential backoff, which basically means that you have to retry after an increasing delay.
In case the above link goes down or changes, I'm quoting the article:
Exponential Backoff
In rare cases something may go wrong serving your request; you may receive a 4XX or 5XX HTTP response code, or the TCP connection may simply fail somewhere between your client and Google's server. Often it is worthwhile re-trying the request as the followup request may succeed when the original failed. However, it is important not to simply loop repeatedly making requests to Google's servers. This looping behavior can overload the network between your client and Google causing problems for many parties.
A better approach is to retry with increasing delays between attempts. Usually the delay is increased by a multiplicative factor with each attempt, an approach known as Exponential Backoff.
For example, consider an application that wishes to make this request to the Google Maps Time Zone API:
https://maps.googleapis.com/maps/api/timezone/json?location=39.6034810,-119.6822510&timestamp=1331161200&key=YOUR_API_KEY
The following Python example shows how to make the request with exponential backoff:
import json
import time
import urllib
import urllib2

def timezone(lat, lng, timestamp):
    # The maps_key defined below isn't a valid Google Maps API key.
    # You need to get your own API key.
    # See https://developers.google.com/maps/documentation/timezone/get-api-key
    maps_key = 'YOUR_KEY_HERE'
    timezone_base_url = 'https://maps.googleapis.com/maps/api/timezone/json'

    # This joins the parts of the URL together into one string.
    url = timezone_base_url + '?' + urllib.urlencode({
        'location': "%s,%s" % (lat, lng),
        'timestamp': timestamp,
        'key': maps_key,
    })

    current_delay = 0.1  # Set the initial retry delay to 100ms.
    max_delay = 3600     # Set the maximum retry delay to 1 hour.

    while True:
        try:
            # Get the API response.
            response = str(urllib2.urlopen(url).read())
        except IOError:
            pass  # Fall through to the retry loop.
        else:
            # If we didn't get an IOError then parse the result.
            result = json.loads(response.replace('\\n', ''))
            if result['status'] == 'OK':
                return result['timeZoneId']
            elif result['status'] != 'UNKNOWN_ERROR':
                # Many API errors cannot be fixed by a retry, e.g. INVALID_REQUEST or
                # ZERO_RESULTS. There is no point retrying these requests.
                raise Exception(result['error_message'])

        if current_delay > max_delay:
            raise Exception('Too many retry attempts.')
        print 'Waiting', current_delay, 'seconds before retrying.'
        time.sleep(current_delay)
        current_delay *= 2  # Increase the delay each time we retry.

tz = timezone(39.6034810, -119.6822510, 1331161200)
print 'Timezone:', tz
Of course this will not resolve the "false responses" you mention; I suspect that depends on data quality and does not happen randomly.
We are using the BigQuery Java API to retrieve results for our analytics reporting frontend. We are trying to retrieve the results synchronously. A lot of times we get a Read timed out error, even before the query timeout specified in the parameters. Here's the stack trace for a sample failure:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at com.sun.net.ssl.internal.ssl.InputRecord.readFully(InputRecord.java:293)
at com.sun.net.ssl.internal.ssl.InputRecord.read(InputRecord.java:331)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:830)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:787)
at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:697)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:640)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1195)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:379)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:318)
at com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:36)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:94)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:965)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:410)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:343)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:460)
I am not able to retrieve the job id of the resulting job, as the error occurs before I can retrieve a JobReference object. The timeout specified in this case was 300 seconds; the query failed well before that. The query contains three JOINs and several GROUP EACH BY clauses. Can you suggest a possible way to debug this?
Adding the code snippet:
QueryRequest queryInfo = new QueryRequest().setQuery(sql)
        .setTimeoutMs(timeOutInSec * 1000);
// get project id
BQGameConnectionDetails details = Config
        .getBQConnectionDetails(gameId);
String projectId = details.getProjectId();
Bigquery.Jobs.Query queryRequest = getInstance(gameId).jobs()
        .query(projectId, queryInfo);
QueryResponse response = queryRequest.execute();
There are two timeouts involved. The first is the timeout on the HTTP request you send to BigQuery; the second is the BigQuery query timeout. It sounds like you've set the latter to a large value, but the former is likely the timeout that you're hitting. If the HTTP request times out before the BigQuery timeout, the connection will be closed and BigQuery won't have a chance to respond.
There are two options. The first is to increase the HTTP request timeout (which depends on the libraries you're using, but this page here may be helpful). The second is to decrease the BigQuery timeout. This means you'll have to use jobs.getQueryResults() to read the actual results, but it is a more robust approach because it doesn't matter how long the query takes: you can just call getQueryResults() in a loop. I would post a link to a good Java sample that does this, but I don't know that one exists, unfortunately.
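A hedged sketch of that second option follows (the short 10-second per-call timeout, the already-authorized bigquery client, and the one-second poll interval are my assumptions, not values from the question): keep the server-side wait per call well under the HTTP read timeout and poll getQueryResults() until the job completes.

import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.GetQueryResultsResponse;
import com.google.api.services.bigquery.model.JobReference;
import com.google.api.services.bigquery.model.QueryRequest;
import com.google.api.services.bigquery.model.QueryResponse;

public class SyncQueryWithPolling {

    // "bigquery" is an already-authorized client, like the one returned by getInstance(gameId).
    static GetQueryResultsResponse runQuery(Bigquery bigquery, String projectId, String sql)
            throws java.io.IOException, InterruptedException {
        // Keep the per-call server-side wait short so the HTTP read timeout is never hit.
        QueryRequest queryInfo = new QueryRequest().setQuery(sql).setTimeoutMs(10000L);
        QueryResponse response = bigquery.jobs().query(projectId, queryInfo).execute();

        // The job reference is returned even when the query has not finished yet.
        JobReference jobRef = response.getJobReference();

        while (true) {
            GetQueryResultsResponse results = bigquery.jobs()
                    .getQueryResults(projectId, jobRef.getJobId())
                    .setTimeoutMs(10000L)
                    .execute();
            if (Boolean.TRUE.equals(results.getJobComplete())) {
                return results;  // rows are available via results.getRows()
            }
            Thread.sleep(1000);  // brief pause before polling again
        }
    }
}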
I have an SNMP-enabled device that I want to monitor.
But this device responds with Request-ID 0 to every GET request. The snmp4j library
discards these received packets because it sends GET requests with a Request-ID other than 0. On receiving the response, it compares the sent Request-ID with the received Request-ID and, on finding a mismatch, discards the received packet and returns a null response.
If I set the Request-ID to 0 in the SNMP packet before sending the GET request, then the response packet can be processed.
For this, the snmp4j library provides the setRequestID(Integer32 value) method to set the desired Request-ID of an SNMP packet, but it cannot set the Request-ID to 0. When I set the value to 0, the library replaces it with some random Request-ID.
If anyone has a solution, please respond.
Thank you.
The request-id field is used to match the response to the request when it arrives back at the client. So, if the device you are querying returns every response with a request-id of 0 instead of the supplied value, then the client (snmp4j) is correctly discarding the response, because it is invalid: the request-id in the response must always match the request-id in the original request. The device has a buggy SNMP stack. If you change your code to force requests to always carry a request-id of 0, you are breaking functionality to accommodate a non-standard agent, and I would advise against it.
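For context, here is a minimal snmp4j GET sketch (the target address, community string, and OID are placeholders); the request-id is assigned and matched inside snmp.send(), which is why a response carrying a different request-id (such as 0) only shows up as getResponse() returning null:

import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.GenericAddress;
import org.snmp4j.smi.OID;
import org.snmp4j.smi.OctetString;
import org.snmp4j.smi.VariableBinding;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class SnmpGetExample {
    public static void main(String[] args) throws Exception {
        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();

        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));               // placeholder community
        target.setAddress(GenericAddress.parse("udp:192.0.2.1/161")); // placeholder device
        target.setVersion(SnmpConstants.version2c);
        target.setTimeout(3000);
        target.setRetries(1);

        PDU pdu = new PDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0")));   // sysDescr.0

        // snmp4j matches the request-id of the reply against the one it sent;
        // a reply carrying request-id 0 is discarded, so getResponse() comes back
        // null even though a packet did arrive.
        ResponseEvent event = snmp.send(pdu, target);
        PDU response = event.getResponse();
        System.out.println(response == null ? "timed out or discarded" : response.toString());
        snmp.close();
    }
}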
Following up on this GWT-RPC question (and answer #1) regarding field size checking, I would like to know the right way to check, pre-deserialization, for the max data size sent to the server -- something like: if request data size > X, then abort the request. Valuing simplicity, and based on the answer to the aforementioned question, I am inclined to believe checking the max overall request size would suffice; finer-grained checks (i.e., field-level checks) could be deferred to post-deserialization, but I am open to any best-practice suggestion.
Tech stack of interest: GWT-RPC client-server communication with Apache-Tomcat front-end web-server.
I suppose a first step would be to globally limit the size of any request (LimitRequestBody in httpd.conf and/or others?).
Are there finer-grained checks like something that can be set per RPC request? If so where, how? How much security value do finer grain checks bring over one global setting?
To frame the question more specifically with an example, let's suppose we have the two following RPC request signatures on the same servlet:
public void rpc1(A a, B b) throws MyException;
public void rpc2(C c, D d) throws MyException;
Suppose I approximately know the following max sizes:
a: 10 kB
b: 40 kB
c: 1 MB
d: 1 kB
Then I expect the following max sizes:
rpc1: 50 kB
rpc2: 1 MB
In the context of this example, my questions are:
Where/how to configure the max size of any request -- i.e., 1 MB in my above example? I believe it is LimitRequestBody in httpd.conf but not 100% sure whether it is the only parameter for this purpose.
If possible, where/how to configure max size per servlet -- i.e., max size of any rpc in my servlet is 1 MB?
If possible, where/how to configure/check max size per rpc request -- i.e., max rpc1 size is 50 kB and max rpc2 size is 1 MB?
If possible, where/how to configure/check max size per rpc request argument -- i.e., a is 10 kB, b is 40 kB, c is 1 MB, and d is 1 kB. I suspect it makes practical sense to do post-deserialization, doesn't it?
For practical purposes, based on cost/benefit, what level of pre-deserialization checking is generally recommended -- 1. global, 2. servlet, 3. rpc, 4. object-argument? Stated differently, what is roughly the cost/complexity on one hand, and the added value on the other, of each of the above pre-deserialization checks?
Thanks much in advance.
Based on what I have learned since I asked the question, my own answer and strategy until someone can show me better is:
The first line of defense is Apache's LimitRequestBody, set in httpd.conf (see the directive right after this list). It is the overall maximum for all rpc calls across all servlets.
The second line of defense is a servlet-level pre-deserialization check, done by overriding GWT's AbstractRemoteServiceServlet.readContent. For instance, one could do it as shown further below, I suppose. This was the heart of what I was fishing for in this question.
Then one can further check each rpc call argument post-deserialization. One could conveniently use JSR 303 validation on both the server and client side -- see the StackOverflow and GWT references regarding the client side; a sketch is included at the end of this answer.
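For that first step, the httpd.conf directive would look something like this (1 MB shown, matching the largest rpc in the example above):

# httpd.conf (or the relevant <VirtualHost>/<Directory> section)
LimitRequestBody 1048576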
Example on how to override AbstractRemoteServiceServlet.readContent:
#Override
protected String readContent(HttpServletRequest request) throws ServletException, IOException
{
final int contentLength = request.getContentLength();
// _maxRequestSize should be large enough to be applicable to all rpc calls within this servlet.
if (contentLength > _maxRequestSize)
throw new IOException("Request too large");
final String requestPayload = super.readContent(request);
return requestPayload;
}
See this question in case the max request size is > 2 GB.
From a security perspective, this strategy seems quite reasonable to me for controlling the size of the data users send to the server.
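For completeness, here is a hedged sketch of the post-deserialization, per-argument JSR 303 check mentioned in the third step (the DTO, the 40 kB character limit borrowed from argument b, and the validator bootstrap are illustrative assumptions, not an exact implementation):

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Size;

public class RpcArgumentValidation {

    // Hypothetical DTO standing in for argument "b" of rpc1.
    // @Size limits the character count, used here as a rough stand-in for the 40 kB cap.
    public static class B {
        @Size(max = 40 * 1024)
        private String payload;
    }

    // Server-side, post-deserialization check; the same annotations can also be
    // validated on the GWT client with a client-side JSR 303 implementation.
    public static void checkArgument(Object arg) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Set<ConstraintViolation<Object>> violations = validator.validate(arg);
        if (!violations.isEmpty()) {
            throw new IllegalArgumentException("Argument exceeds the allowed size");
        }
    }
}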