MapServer (WMS) OAPIF connection - disable schema establishment - GDAL

According to the Layer Schema section of the GDAL docs, the OAPIF driver does the following:
OGR needs a fixed schema per layer, but OGC API - Features Core doesn't impose a fixed schema. So the driver will retrieve the first page of features (10 features) and establish the schema from it.
Is there any way to prevent this behaviour?
How our system is set up:
WebClient ---> MapServer (WMS) ---> OGC Features API server
The web client fires a GetMap request to MapServer (v8) that looks like this:
http://our.mapserver.domain/cgi-bin/mapserv?map=/etc/mapserver/playground/mapFile.map&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap&BBOX=25.15879999999999939,55.32805299999999704,25.30794399999999911,55.46175600000000117&CRS=EPSG:4326&WIDTH=728&HEIGHT=811&LAYERS=SOME_LAYER&STYLES=&FORMAT=image/png&DPI=96&MAP_RESOLUTION=96&FORMAT_OPTIONS=dpi:96&TRANSPARENT=TRUE&datetime=2025-02-02T17:46:06Z
MapServer then contacts our "OGC API - Features"-enabled backend, and here's the problem.
Why does it fire the first request without a bbox and only then continue with pagination? The Wireshark capture gives a better overview: you can see how it automatically increments the offset.
Our goal is to get rid of this first request without a bbox and start immediately with page 1.
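For illustration, the sequence of requests hitting the backend looks roughly like this (the collection name and exact query parameters are placeholders following OGC API - Features conventions, not copied from our capture):
GET /collections/SOME_LAYER/items?limit=10                          <-- schema-establishment request, no bbox
GET /collections/SOME_LAYER/items?bbox=<map extent>&limit=100
GET /collections/SOME_LAYER/items?bbox=<map extent>&limit=100&offset=100
It is the first of these requests that we want to suppress.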
What we have tried so far is setting the IGNORE_SCHEMA flag to NO in the mapfile:
LAYER
  CONNECTIONTYPE OGR
  TYPE polygon
  CONNECTIONOPTIONS
    "PAGE_SIZE" "100"
    "IGNORE_SCHEMA" "NO"
  END
END
We hoped it would download the schema in a separate call so it would not be forced to fire this establishment query.
The schema was downloaded, but it did not prevent MapServer (or rather GDAL) from firing the unwanted initial request.

Related

Capture/Log WCF Binding/Serialization/Deserialization Error

I have a class which has a set of attributes.
I'm trying to call a web service from a custom billing device (based on proprietary HW/SW). The problem is that in some cases the application sends a required field (an integer in this case) as null. The web service just rejects that.
Is there any way to log such errors on the server, since returning false prompts the app to resend (which will fail again, as the value is still null)? The idea is to write the errors to a database (with device details and the actual error, the integer column being null in this case) so the application/web admin can get in touch with the device's user to take appropriate action.
For debugging, you can easily use Fiddler2 to capture any web traffic, including the full XML of a SOAP request/response (it even handles SSL easily, unlike Wireshark).
For logging... I wish I knew. Sorry.
Also, dupe of In C#, How to look at the actual SOAP request/response in C#

Conditionally returning a custom response object to the client in Web Api 2

I have a Web Api 2 service that will be deployed across 4 production servers. When a request doesn't pass validation, a custom response object is generated and returned to the client.
A rudimentary example:
if (!ModelState.IsValid)
{
    var responseObject = responseGenerator.GetResponseForInvalidModelState(ModelState);
    return Ok(responseObject);
}
Currently the responseGenerator is aware of what environment it is in and generates the response accordingly. For example, in development it'll return a lot of detail, but in production it'll only return a simple failure status.
How can I implement a "switch" that turns details on without requiring a round trip to the database each time?
Due to the nature of our environment, using a config file isn't realistic. I've considered using a flag in the database and then caching it at the application layer, but environmental constraints make refreshing the cache on all 4 servers very painful.
I ended up going with the parameter suggestion and then implementing a token system on the back end. If a Debug token is present in the request, the service validates it against the database. If it's a valid and active token, it returns the additional detail.
This allows us to control things from our end while keeping things simple for the vendors and only adds that extra round trip to the database during debugging.
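To illustrate (the parameter name and endpoint here are invented, not part of the original solution), a vendor debugging a call would simply add the token to the request:
POST /api/orders?debugToken=3f9c81d2 HTTP/1.1
Host: api.example.com
Content-Type: application/json
The service looks the token up in the database; if it is valid and active, the validation response carries the extra detail, otherwise only the simple failure status comes back.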

Display server-side validation error in form semantic-ui

The client side form validation rules in Semantic UI are nice, but we all know the client cannot be trusted, so naturally we need to validate on the server.
Does anyone know how to have server-side errors displayed like the "native" SUI validation errors? Users shouldn't see any difference depending on where the validation is done.
So far I've been combining the SUI form validation with the SUI "api" function, because the api function gives me an onFailure callback from the server, where I can then parse the server errors and add them with the "add errors" form command.
But it never worked perfectly.
With such a basic requirement, how would you create a form with both client- and server-side validation in SUI?
Kind of like in this post but without the Meteor, just plain HTML.
This SS question is also similar, but the responses are not quite there.
Update
First, client validation is run, and only if this succeeds, we call the server. This means we're in the onSuccess.
If there are server errors (validation MUST always be done on server, client can't be trusted), I think they can be parsed and added like this:
$form.form('add errors', formErrors).
(based on a discussion on semantic-ui forum on Gitter, March 9th 2016)
https://gitter.im/Semantic-Org/Semantic-UI
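For reference, here is a rough sketch of the wiring I mean (it assumes the server answers with JSON like { "success": false, "errors": [...] }; the endpoint and field names are placeholders):
$('.ui.form')
  .form({
    fields: {
      email: 'empty' // client-side rules as usual
    }
  })
  .api({
    url: '/register',      // placeholder endpoint
    method: 'POST',
    serializeForm: true,
    // called when the server replies with success: false
    onFailure: function (response) {
      var formErrors = (response && response.errors) || ['Server-side validation failed'];
      $('.ui.form').form('add errors', formErrors);
    }
  });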

Best way to cache RESTful API results of GET calls

I'm thinking about the best way to create a cache layer in front of, or as the first layer of, my RESTful API (written in Ruby) for GET requests.
Not every request can be cached, because even for some GET requests the API has to validate the requesting user/application. That means I need to configure which requests are cacheable and how long each cached answer is valid. For a few cases I need a very short expiration time, e.g. 15 s or below. And the API application should be able to expire cache entries even if the expiration date has not been reached yet.
I have already thought about many possible solutions; my two best ideas are:
a first layer in the API (even before the routing), with the cache logic written by myself (so I have all configuration options in my hands), and answers plus expiration dates stored in Memcached
a web server proxy (highly configurable), perhaps something like Squid, but I've never used a proxy for a case like this before and I'm absolutely not sure about it
I also thought about a cache solution like Varnish; I've used Varnish for "usual" web applications and it's impressive, but the configuration is kind of special. Still, I would use it if it's the fastest solution.
Another thought was to cache in the Solr index, which I'm already using in the data layer to avoid querying the database for most requests.
If someone has a hint or good sources to read about this topic, let me know.
Firstly, build your RESTful API to be RESTful. That means authenticated users can also get cached content: to keep all state in the URL, it needs to contain the auth details. Of course the hit rate will be lower here, but it is cacheable.
With a good deal of logged-in users it will be very beneficial to have some sort of model cache behind a full-page cache, as many models are still shared even if some aren't (in a good OOP structure).
Then, for a full-page cache, you are best off keeping all the requests away from the web server, and especially away from the dynamic processing in the next step (in your case Ruby). The fastest way to cache full pages from a normal web server is always a caching proxy in front of the web servers.
Varnish is in my opinion as good and easy as it gets, but some prefer Squid indeed.
memcached is a great option, and I see you mentioned this already as a possible option. Also Redis seems to be praised a lot as another option at this level.
At the application level, in terms of a more granular approach to caching on a file-by-file and/or module basis, local storage is always an option for common objects a user may request over and over again. It can be as simple as dropping response objects into the session so they can be reused instead of making another HTTP REST call, and coding appropriately.
People go back and forth debating Varnish vs. Squid, and both seem to have their pros and cons, so I can't comment on which one is better, but many people say Varnish with a tuned Apache server is great for dynamic websites.
Since REST is an HTTP thing, it could be that the best way of caching requests is to use HTTP caching.
Look into using ETags on your responses, checking the ETag in requests to reply with '304 Not Modified', and having Rack::Cache serve cached data if the ETags are the same. This works great for Cache-Control 'public' content.
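As a concrete illustration of that flow (the paths and header values are invented), a conditional GET exchange looks like this:
GET /articles/42 HTTP/1.1

HTTP/1.1 200 OK
ETag: "686897696a7c876b7e"
Cache-Control: public, max-age=15

...full response body...

and on the next request:

GET /articles/42 HTTP/1.1
If-None-Match: "686897696a7c876b7e"

HTTP/1.1 304 Not Modified
The 304 carries no body, so the client (or an intermediate cache such as Rack::Cache) reuses the representation it already has.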
Rack::Cache is best configured to use memcache for its storage needs.
I wrote a blog post last week about the interesting way that Rack::Cache uses ETags to detect and return cached content to new clients: http://blog.craz8.com/articles/2012/12/19/rack-cache-and-etags-for-even-faster-rails
Even if you're not using Rails, the Rack middleware tools are quite good for this stuff.
Redis Cache is the best option; check here.
It is open source: an advanced key-value cache and store.
I’ve used redis successfully this way in my REST view:
from django.conf import settings
import hashlib
import json
import logging
from redis import StrictRedis
from django.utils.encoding import force_bytes
from rest_framework.generics import ListAPIView
from rest_framework.permissions import IsAdminUser
from rest_framework.renderers import JSONRenderer
from rest_framework.response import Response

# Event / EventSerializer come from the app's own models and serializers
logger = logging.getLogger(__name__)

def get_redis():
    # get redis connection from RQ config in settings
    rc = settings.RQ_QUEUES['default']
    cache = StrictRedis(host=rc['HOST'], port=rc['PORT'], db=rc['DB'])
    return cache

class EventList(ListAPIView):
    queryset = Event.objects.all()
    serializer_class = EventSerializer
    renderer_classes = (JSONRenderer, )

    def get(self, request, format=None):
        if IsAdminUser not in self.permission_classes:  # don't cache requests from admins
            # make a key that represents the request results you want to cache
            # your requirements may vary
            key = get_key_from_request()
            # I find it useful to hash the key, when query parms are added
            # I also preface the event cache key with a string, so I can clear the cache
            # when events are changed
            key = "todaysevents" + hashlib.md5(force_bytes(key)).hexdigest()
            # I don't want any cache issues (such as not being able to connect to redis)
            # to affect my end users, so I protect this section
            try:
                cache = get_redis()
                data = cache.get(key)
                if not data:
                    # not cached, so perform standard REST functions for this view
                    queryset = self.filter_queryset(self.get_queryset())
                    serializer = self.get_serializer(queryset, many=True)
                    data = serializer.data
                    # cache the data as a string
                    cache.set(key, json.dumps(data))
                    # manage the expiration of the cache
                    expire = 60 * 60 * 2
                    cache.expire(key, expire)
                else:
                    # this is the place where you save all the time
                    # just return the cached data
                    data = json.loads(data)
                return Response(data)
            except Exception as e:
                logger.exception("Error accessing event cache\n %s" % (e))
        # for Admins or exceptions, BAU
        return super(EventList, self).get(request, format)
In my Event model updates, I clear any event caches. This is hardly ever performed (only admins create events, and not that often), so I always clear all event caches:
class Event(models.Model):
    ...

    def clear_cache(self):
        try:
            cache = get_redis()
            eventkey = "todaysevents"
            for key in cache.scan_iter("%s*" % eventkey):
                cache.delete(key)
        except Exception as e:
            pass

    def save(self, *args, **kwargs):
        self.clear_cache()
        return super(Event, self).save(*args, **kwargs)

How to design a RESTful API for a media analysis engine

I am new to the RESTful concept and have to design a simple API for a media analysis service I need to set up, which performs various tasks, e.g. face analysis, region detection, etc., on uploaded images and video.
An outline of my initial design is as follows:
Client POSTs a configuration XML file to http://manalysis.com/facerecognition. This creates a profile that can be used for multiple analysis sessions. Response XML includes a ProfileID to refer to this profile. Clients can skip this step to use the default config parameters
Client POSTs video data to be analyzed to http://manalysis.com/facerecognition (with ProfileID as a parameter, if it's set up). This creates an analysis session. Return XML has the SessionID.
Client can send a GET to http://manalysis.com/facerecognition/SessionID to receive the status of the session.
Am I on the right track? Specifically, I have the following questions:
Should I include facerecognition in the URL? Roy Fielding says that "a REST API must not define fixed resource names or hierarchies". Is this an instance of that mistake?
The analysis results can either be returned to the client in one large XML file or when each event is detected. How should I tell the analysis engine where to return the results?
Should I explicitly delete a profile when analysis is done, through a DELETE call?
Thanks,
C
You can fix the entry point URL:
GET /facerecognition
<FaceRecognitionService>
    <Profiles href="/facerecognition/profiles"/>
    <AnalysisRequests href="/facerecognition/analysisrequests"/>
</FaceRecognitionService>
Create a new profile by posting the XML profile to the URL in the href attribute of the Profiles element
POST /facerecognition/profiles
201 - Created
Location: /facerecognition/profile/33
Initiate the analysis by creating a new Analysis Request. I would avoid using the term session as it is too generic and has lots of negative associations in the REST world.
POST /facerecognition/analysisrequests?profileId=33
201 - Created
Location: /facerecognition/analysisrequest/2103
Check the status of the process
GET /facerecognition/analysisrequest/2103
<AnalysisRequest>
    <Status>Processing</Status>
    <Cancel Method="DELETE" href="/facerecognition/analysisrequest/2103" />
</AnalysisRequest>
When the processing has finished, the same GET could return:
<AnalysisRequest>
    <Status>Completed</Status>
    <Results href="/facerecognition/analysisrequest/2103/results" />
</AnalysisRequest>
The specific URLs that I have chosen are relatively arbitrary; you can use whatever is clearest to you.
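For completeness, the client would then follow the Results link with a plain GET; the shape of the results document below is only a placeholder, not something prescribed by the answer:
GET /facerecognition/analysisrequest/2103/results

<AnalysisResults>
    <Face id="1" x="120" y="85" width="64" height="64"/>
    <Face id="2" x="310" y="140" width="58" height="58"/>
</AnalysisResults>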