Cloudflare Cache on Specific Query String Value - cloudflare

Can Cloudflare's cache be set to ignore all but a specific query string parameter? I can find things in the docs that allude to this being possible, but I cannot tell whether it's part of the normal cache rules or whether it requires Cloudflare Workers.
Example -
?var1=xxxx&var2=xxxx&var3=xxxx&var4=xxxx
Can I cache ONLY on differences in, say, var3?
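For context, if it does turn out to need Workers, my rough understanding of that approach is sketched below in TypeScript. This is only a sketch, assuming the Workers Cache API (caches.default) and the ExecutionContext type from @cloudflare/workers-types, with var3 standing in for the one parameter that should matter:

// Sketch of a Worker that keys the cache on var3 only: other query params are
// dropped from the cache key but still forwarded to the origin on a miss.
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);

    // Normalized URL used only as the cache key: path + var3.
    const keyUrl = new URL(url.origin + url.pathname);
    const var3 = url.searchParams.get("var3");
    if (var3 !== null) {
      keyUrl.searchParams.set("var3", var3);
    }
    const cacheKey = new Request(keyUrl.toString(), request);

    const cache = caches.default;
    let response = await cache.match(cacheKey);
    if (!response) {
      // Miss: fetch the original (full) URL from the origin and store the
      // response under the normalized key.
      response = await fetch(request);
      response = new Response(response.body, response);
      ctx.waitUntil(cache.put(cacheKey, response.clone()));
    }
    return response;
  },
};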

Related

Need to extend URL length on IIS Express and I am getting a 400 Bad Request

I am trying to make a GET request against an API, and the URL contains a lot of GUIDs.
How can I raise (or remove) the URL length limit in IIS Express so that this query works?
Query example:
https://localhost:44329/api/user_collections/(7e9046ff-93ba-4f4f-a185-0330967e33b4,e88605f2-772f-493a-b9ee-1b561f45af57,e8796bef-cf53-47ca-85e7-1c683775c40f,2f795791-dfd7-469d-881b-217ad3aeb4f0,2f795791-dfd7-469d-881b-217ad3aeb4f0,e36c1106-e38d-40aa-816c-318e06bc9443,e36c1106-e38d-40aa-816c-318e06bc9443,dd07a986-a63a-4a8c-8f77-3944955ab5d4,46299c82-cbf8-4256-996b-4ac6ef975f09,46299c82-cbf8-4256-996b-4ac6ef975f09)
It turns out there's a registry setting called "UrlMaxSegmentLength" that controls the maximum length of any individual path segment. The default value is 260 characters. When our route value was longer than that, we'd get the 400 error.
The solution is to pass the value on the querystring rather than as a route value. Something to consider if you have route values that are going to be long. Of course, this isn’t a problem with routing, just that you can get into the situation pretty easily when you get into the habit of pushing parameters into the route. It might be nice in future versions of routing to have some sort of max length check when generating full route URLs so you can’t generate a path segment greater than, say, 256 characters without getting an exception.
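To make the querystring workaround concrete, here is a rough TypeScript sketch of the client-side request; the ids parameter name is hypothetical and the API would need to read the GUIDs from the query string instead of the route:

// Hypothetical: pass the GUID list as a query string parameter instead of one
// huge path segment, so no single segment exceeds UrlMaxSegmentLength.
const guids: string[] = [
  "7e9046ff-93ba-4f4f-a185-0330967e33b4",
  "e88605f2-772f-493a-b9ee-1b561f45af57",
  // ...
];

const url = new URL("https://localhost:44329/api/user_collections");
url.searchParams.set("ids", guids.join(",")); // ?ids=<comma-separated GUIDs>

const response = await fetch(url.toString());
const collections = await response.json();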

LDAP search for multiple complete DNs?

Assume I have an array of N DNs (distinguished names), e.g.:
cn=foo,dc=capmon,dc=lan
cn=bar,dc=capmon,dc=lan
cn=Fred Flintstone,ou=CapMon,dc=capmon,dc=lan
cn=Clark Kent,ou=yada,ou=whatnot,dc=capmon,dc=lan
They are not related and I cannot reduce/simplify the search. I have N complete DNs and want N records.
Can I write a single LDAP search that will return exactly N records, one for each DN? The assumption being that performance of both client and server will be better if I do it all in one search. Had it been SQL, it would be:
SELECT *
FROM dc=capmon,dc=lan
WHERE dn IN (
"cn=foo,dc=capmon,dc=lan",
"cn=bar,dc=capmon,dc=lan",
"cn=Fred Flintstone,ou=CapMon,dc=capmon,dc=lan",
"cn=Clark Kent,ou=yada,ou=whatnot,dc=capmon,dc=lan"
)
rather than doing individual LDAP searches in a for loop (which I do know how to do).
I tried against an MS Active Directory. There, all entries (seem to) have a distinguishedName attribute, and a search filter like this works (I added some newlines for readability):
(|
(distinguishedName=cn=ppolicy,dc=capmon,dc=lan)
(distinguishedName=cn=Users,dc=capmon,dc=lan)
<more ORed terms>
)
But this doesn't work:
(|
(dn=cn=ppolicy,dc=capmon,dc=lan)
(dn=cn=Users,dc=capmon,dc=lan)
<more ORed terms>
)
even though the returned records look like they contain dn attributes. :-(
An OpenLDAP server's records don't have distinguishedName attributes, and neither of the filters above works against it.
Can I do something that will work against most major LDAP servers?
It's not possible to "Read" several entries in a single operation.
You can do a single search operation that will match and return several entries, but you cannot search on the "DN" itself.
I've seen several applications that are trying to get several entries by using complex filters such as "(|(cn=foo)(cn=bar)(cn=Fred Flintstone))", but this may result in more entries, unless all CN values are unique. It's not really a good practice either, as there are limits in the number of elements you can have in the filter, and such requests are usually not optimized in term of I/O.
It will be faster to read each invidual entry, as LDAP servers are optimized for such operations. If you want to reduce the latency, you can issue multiple asynchronous search operations on the same connection.
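As a rough illustration of that last point, here is a TypeScript sketch that issues one base-scope read per DN concurrently over a single connection, using the ldapjs client as one example; the host is a placeholder and bind credentials are omitted:

import ldap from "ldapjs";

// Placeholder connection details; bind credentials omitted for brevity.
const client = ldap.createClient({ url: "ldap://ldap.capmon.lan" });

// "Read" one entry: a base-scope search on the DN with a match-all filter.
function readEntry(dn: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    client.search(dn, { scope: "base", filter: "(objectClass=*)" }, (err, res) => {
      if (err) return reject(err);
      let entry: unknown = null;
      res.on("searchEntry", (e) => { entry = e.pojo; }); // older ldapjs versions expose e.object instead
      res.on("error", reject);
      res.on("end", () => resolve(entry));
    });
  });
}

const dns = [
  "cn=foo,dc=capmon,dc=lan",
  "cn=bar,dc=capmon,dc=lan",
  "cn=Fred Flintstone,ou=CapMon,dc=capmon,dc=lan",
  "cn=Clark Kent,ou=yada,ou=whatnot,dc=capmon,dc=lan",
];

// All N reads are issued concurrently on the same connection, which is the
// "multiple asynchronous search operations" approach described above.
const entries = await Promise.all(dns.map(readEntry));
client.unbind();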

Apache Solr set domain priority

I crawled 3 domains with Nutch (domain01, domain02 and domain03).
I want to get all posts which contain a specific keyword (e.g. "champions league"), and then show the posts from domain02 first in the results, the posts from domain01 next, and the posts from domain03 last. Simply put, I want to sort them by domain priority.
Is there a way to set the priority of domains?
If you always have the same order of domains, then you can use either an index-time document-level boost or a query-time sort by domain (or a domainorder field) and then by score.
If the domain order depends on the query, you can use the QueryElevationComponent, though I think you then have to provide the full list of IDs for each elevation rule, and it may not support ordering.
You could also write your own Custom Function Query or component (similar to the Query Elevation one).
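As an illustration of the query-time sort option, here is a rough TypeScript sketch against Solr's select handler; it assumes you indexed an integer field such as domainorder (e.g. 1 for domain02, 2 for domain01, 3 for domain03), and the core name posts is made up:

// Hypothetical core name and domainorder field; adjust to your schema.
const params = new URLSearchParams({
  q: '"champions league"',
  // Lower domainorder first, then the best-matching documents within each domain.
  sort: "domainorder asc, score desc",
  fl: "id,title,domain,score",
  wt: "json",
});

const res = await fetch(`http://localhost:8983/solr/posts/select?${params}`);
const { response } = await res.json();
console.log(response.docs);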

How to invalidate cache when using ListGrid and DataSource

As per what I found reading other sites:
SmartGWT uses data caching to optimize client-server connections and reduce network traffic. In your example, let's say you have the following in your database:
one word
two words
one sentence
When you type word, the fetch returns:
one word
two words
These values are cached in your client.
When you add one to word, because this is a more restrictive search criterion, there is no need for a server fetch; the client filters the cached data and the result is:
one word
Is there a way of avoiding this and making the search always go to the server?
You can use the following properties of DataSource to turn caching OFF.
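// Disable client-side data caching so every fetch goes to the server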
dataSource.setCacheAllData(false);
dataSource.setAutoCacheAllData(false);
If you want to turn caching on, pass true to both function calls.
Manually calling invalidateCache() on the ListGrid component should re-run the fetch with the current criteria.

yii caching and couchbase

I need some help clarifying the concept.
$sql = 'SELECT * FROM tbl_post LIMIT 20';
$dependency = new CDbCacheDependency('SELECT MAX(update_time) FROM tbl_post');
$rows = Yii::app()->db->cache(1000, $dependency)->createCommand($sql)->queryAll();
1. if the cache contains an entry indexed by the SQL statement.
2. if the dependency has not changed (the maximum update_time value is the same as when the query result was saved in the cache).
I do not understand what the above explanation means, especially the second point regarding the maximum update_time. Please correct me if I am wrong.
There is an update_time column in the tbl_post table. Whenever a row is updated, its update_time is updated too. If a post is retrieved from the cache, will CDbCacheDependency first query the database for MAX(update_time)? What is the purpose of this, and how exactly does it work in keeping the cache updated?
Another question is regarding memcache. I understand that it is possible to cluster memcache servers. Say I have the configuration below.
1 memcache server in the US, 1 memcache server in Europe.
My Yii website makes use of this cluster of 2 nodes; memcache will split the caching between them.
1. User A retrieves a post from the database and caches it, say as (123, $model), on the US node.
2. User B wants to retrieve the same post, from Europe. Will looking up key 123 find the cached entry? Does it matter whether both users are in the US or in Europe?
Thanks!!
After the first run, the DB component puts its result into the cache, along with the result of the dependency query (the max update_time in your case).
Then, when you try to get the data again, the DB component executes the dependency query and compares the result with the cached one. If the dependency is unchanged (there are no newer posts), it gets the query results from the cache; otherwise it executes the query against the database.
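In other words, the cached entry stores both the query result and the dependency value, and only the cheap dependency query runs on every read. A minimal TypeScript sketch of that logic, illustrative only and not Yii's actual implementation:

interface CachedQuery<T> {
  data: T;
  dependencyValue: string; // e.g. MAX(update_time) at the moment of caching
  expiresAt: number;
}

const cache = new Map<string, CachedQuery<unknown>>();

async function cachedQuery<T>(
  sql: string,
  dependencySql: string,
  ttlMs: number,
  runQuery: (sql: string) => Promise<T>,
  runScalar: (sql: string) => Promise<string>,
): Promise<T> {
  const entry = cache.get(sql) as CachedQuery<T> | undefined;
  // The dependency query is cheap (a single MAX) and runs on every read.
  const currentDep = await runScalar(dependencySql);

  if (entry && entry.expiresAt > Date.now() && entry.dependencyValue === currentDep) {
    return entry.data; // nothing changed since the result was cached
  }

  // A post was inserted/updated (MAX(update_time) moved) or the entry expired:
  // run the real query and refresh the cache.
  const data = await runQuery(sql);
  cache.set(sql, { data, dependencyValue: currentDep, expiresAt: Date.now() + ttlMs });
  return data;
}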