Using log4javascript - is there a way to apply a Level to a group?

I am using log4javascript and want to be able to apply a Level to a group.
I currently have a PopUpAppender with a threshold of WARN. However, it is triggered whenever I create a group.
I am looking for a version of the group function that takes a Level parameter, like the time function does.
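For illustration, a minimal sketch of the setup described above (logger name and appender wiring are assumptions):

var log = log4javascript.getLogger("main");
var popUpAppender = new log4javascript.PopUpAppender();
popUpAppender.setThreshold(log4javascript.Level.WARN);
log.addAppender(popUpAppender);

// time() accepts a Level, so this respects the WARN threshold:
log.time("expensiveOperation", log4javascript.Level.DEBUG);
log.timeEnd("expensiveOperation");

// group() has no Level parameter, so calling it triggers the pop-up
// even though everything inside it is below the threshold:
log.group("myGroup");
log.debug("hidden by the WARN threshold");
log.groupEnd();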
Is this implemented?

No, groups don't have a level. I may look into it for log4javascript 2.0.

Related

SQL Lag() in SPARQL

I was wondering if there is a way to get the functionality of SQL LAG() in my SPARQL query. More specifically, I have a set of states like
<http://stateA> p:start "2014-05-23T10:20:13+05:30"^^xsd:dateTime .
<http://stateB> p:start "2014-06-23T10:20:13+05:30"^^xsd:dateTime .
<http://stateC> p:start "2014-07-23T10:20:13+05:30"^^xsd:dateTime .
And I want to find the duration of each state. The duration of each state, let's say state A, can be computed by subtracting the start of state A from the start of the next state (here, state B). So I need a way to find the next state with respect to each state.
These functions aren't part of standard SPARQL yet, but SEP-002, one of the proposed SPARQL 1.2 updates, includes more XSD functions, including xsd:dateTime - xsd:dateTime subtraction. This is likely implemented in many implementations by now, although possibly with some setup required.
I've implemented it in my Ruby implementation.
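Here is a rough sketch of how the LAG-like lookup itself can be written in standard SPARQL 1.1, using the p: prefix from the question; the final subtraction assumes the xsd:dateTime operator support described above:

SELECT ?state ((?nextStart - ?start) AS ?duration)
WHERE {
  ?state p:start ?start .
  {
    # For each state, find the earliest start among the later states.
    SELECT ?state (MIN(?laterStart) AS ?nextStart)
    WHERE {
      ?state p:start ?start .
      ?other p:start ?laterStart .
      FILTER (?laterStart > ?start)
    }
    GROUP BY ?state
  }
}

Note that the last state has no later start and so drops out of the results.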

Airflow: BigQueryOperator vs BigQuery Quotas and Limits

Is there any practical way to control quotas and limits in Airflow?
I'm especially interested in controlling BigQuery concurrency.
There are different levels of quotas in BigQuery. So, based on the Operator inputs, there should be a way to check whether the conditions are met and, if not, wait until they are.
It seems to call for a composition of Sensor-Operators, querying against a database like Redis, for example:
QuotaSensor(Project, Dataset, Table, Query) >> QuotaAddOperator(Project, Dataset, Table, Query)
QuotaAddOperator(Project, Dataset, Table, Query) >> BigQueryOperator(Project, Dataset, Table, Query)
BigQueryOperator(Project, Dataset, Table, Query) >> QuotaSubOperator(Project, Dataset, Table, Query)
The Sensor must check conditions like:
- Global running queries <= 300
- Project running queries <= 100
- etc.
Is there any lib that already does that for me? A plugin perhaps?
Or is there any other, easier solution?
Otherwise, following the Sensor-Operator approach: how can I encapsulate all of it under a single operator, to avoid repeating code? For example, a single QuotaBigQueryOperator.
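For illustration, a minimal sketch of the Sensor half of this idea (Airflow 2 import path; the Redis key scheme and the hard-coded limit are assumptions):

import redis
from airflow.sensors.base import BaseSensorOperator

PROJECT_LIMIT = 100  # "Project running queries <= 100"

class QuotaSensor(BaseSensorOperator):
    def __init__(self, project, **kwargs):
        super().__init__(**kwargs)
        self.project = project

    def poke(self, context):
        # Succeed (and unblock downstream tasks) only while the tracked
        # concurrency counter is below the project quota.
        r = redis.Redis()
        running = int(r.get("bq:running:%s" % self.project) or 0)
        return running < PROJECT_LIMIT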
Currently, it is only possible to get the Compute Engine quotas programmatically. However, there is an open feature request to get/set other project quotas via API. You can post there about the specific case you would like to have implemented, and follow it to track progress and ask for updates.
Meanwhile, as a workaround, you can try to use the PythonOperator. With it you can define your own custom code, implementing retries for the queries that get a quotaExceeded error (or whatever specific error you are getting). That way you wouldn't have to explicitly check the quota levels: you just run the queries and retry until they get executed. This is simplified code for the strategy I am thinking of:
for query in QUERIES_TO_RUN:
    while True:
        try:
            run(query)
        except quotaExceededException:
            continue  # jumps to the next cycle of the while loop, retrying the query
        break  # the query ran without a quota error; move on to the next one
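A slightly fuller sketch of the same idea wired into a PythonOperator (Airflow 2 import path; the Forbidden handling and fixed backoff are assumptions; quotaExceeded errors surface as HTTP 403 in the BigQuery client):

import time
from airflow.operators.python import PythonOperator
from google.api_core.exceptions import Forbidden
from google.cloud import bigquery

def run_queries_with_retry(**context):
    client = bigquery.Client()
    for query in QUERIES_TO_RUN:
        while True:
            try:
                client.query(query).result()  # wait for the job to finish
                break  # success: move on to the next query
            except Forbidden as exc:
                if "quotaExceeded" not in str(exc):
                    raise  # only retry on quota errors
                time.sleep(60)  # back off before retrying

run_all = PythonOperator(
    task_id="run_queries_with_retry",
    python_callable=run_queries_with_retry,
)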

Update the value in region ONLY IF value.status is 'XXX'

We are trying to use GemFire in our work. We have a region where we store each incoming request, which then goes through its lifecycle (for example, states A --> B --> C --> D).
But we also have a requirement to update the state to C only if the current state is B (state D is updated asynchronously by another process). We used to achieve this in Cassandra using the ONLY IF keyword, and are looking for something similar in GemFire. Obviously we cannot do read, check state, then update, because that sequence is not atomic.
Another option was to take a distributed lock and then perform the check-and-update as above, but that comes with a performance overhead.
We were also thinking of attaching a CacheWriter and checking the state in beforeUpdate(..), but came to know that what we get as a parameter to beforeUpdate is a copy of the value, not the real value.
Does anyone have an idea of how to achieve this in an atomic fashion?
What you are looking for is Region.replace(key, oldValue, newValue), which is an atomic operation.
UPDATE: I should also clarify that it is not currently possible to inspect individual properties of the mapped object value (e.g. someObject.state = XYZ) to decide whether to perform the update/replace. For that to work, you will need a properly implemented Object.equals() method on the value class.
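A minimal sketch of that compare-and-set usage (RequestValue and its accessors are hypothetical; the package name is Apache Geode's, GemFire's open-source core; replace() relies on the value class implementing equals()):

import org.apache.geode.cache.Region;

public class StateAdvancer {
    // Moves the entry to state C only if it is still exactly in state B.
    public boolean advanceBToC(Region<String, RequestValue> region, String key) {
        RequestValue current = region.get(key);
        if (current == null || !"B".equals(current.getState())) {
            return false;  // already moved on (e.g. to D), do nothing
        }
        RequestValue updated = current.withState("C");  // copy with the new state
        // Atomic compare-and-set: succeeds only if the mapped value
        // still equals() 'current' at the moment of the call.
        return region.replace(key, current, updated);
    }
}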

Getting maximum value of a field in Solr

I'd like to boost my query by the item's view count; I'd like to use something like view_count / max_view_count for this purpose, to be able to measure how the item's view count relates to the biggest view count in the index. I know how to boost the results with a function query, but how can I easily get the maximum view count? If anybody could provide an example it would be very helpful...
There aren't any aggregate functions in Solr in the way you might be thinking about them from SQL. The easiest way to do it is a two-step process:
Get the max value via an appropriate query with a sort
Use it with the max() function
So, something like:
q=*:*&sort=view_count desc&rows=1&fl=view_count
...to get an item with the max view_count, which you record somewhere, and then
q=whatever&bq=div(view_count, max(the_max_view_count, 1))
Note that the max() function isn't doing an aggregate max; it just takes the greater of the max view count you pass in and 1 (to avoid divide-by-zero errors).
If you have a multiValued field (which you can't sort on), you could also use the StatsComponent to get the max. Either way, you would probably want to do this once, not for every query (say, every night at midnight, once your data set settles down).
You can just add:
&stats=true&stats.field=view_count
You will get summary statistics on the specified field, including its maximum. See the StatsComponent documentation for more.
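For example, to fetch only the statistics and no documents:

q=*:*&rows=0&stats=true&stats.field=view_count

The maximum appears under the stats section of the response.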

Complex derived attributes in Django models

What I want to do is implement submission scoring for a site with users voting on the content, much like e.g. reddit (see the 'hot' function in http://code.reddit.com/browser/sql/functions.sql). Edit: Ultimately I want to be able to retrieve an arbitrarily filtered list, of arbitrary length, of submissions ranked according to their score.
My submission model currently keeps track of up and down vote totals. Currently, when a user votes, I create and save a related Vote object and then use F() expressions to update the Submission object's voting totals. The problem is that I want to update the submission's score at the same time, but F() expressions are limited to simple operations (they lack support for log(), date_part(), sign(), etc.).
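For reference, a rough sketch of the two-step flow just described (the Vote and Submission field names are assumptions):

from django.db.models import F

def cast_vote(submission, user, value):
    # Step 1: record the individual vote
    Vote.objects.create(submission=submission, user=user, value=value)
    # Step 2: update the denormalized totals without a read-modify-write race
    field = 'up_votes' if value > 0 else 'down_votes'
    Submission.objects.filter(pk=submission.pk).update(**{field: F(field) + 1})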
From my limited experience with Django I can see 5 options here:
1. extend F() somehow (haven't looked at the code yet) to support the missing SQL functions; this is my preferred option and seems to fit within the Django framework the best
2. define a scoring function (much like reddit's 'hot' function) in my database, and have Django use the value of that function for the value of the score field; as far as I can tell, this is not possible
3. wrap my two-step voting process in a suitably isolated transaction so that I can calculate the voting totals in Python and then update the Submission's voting totals without fear that another vote against the submission could be added/changed in the meantime; I'm hesitant to take this route because it seems overly complex - what is a "suitably isolated transaction" in this case anyway?
4. use raw SQL; I would prefer to avoid this entirely -- what's the point of an ORM if I have to revert to SQL for such a common use case as this! (Note that this is coming from somebody who loves sprocs, but is using Django for ease of development.)
5. (edit: added this after further discussion) compute the score using an extra select parameter containing a call to my function; this would work, but imposes unnecessary load on the DB (it would be forced to calculate the score for every submission ever made every time the query ran; caching could help, but it still seems like a bit of a lame workaround)
Before I embark on this mission to extend F() (which I'm not sure is even possible), am I about to reinvent the wheel? Is there a more standard way to do this? It seems like such a common use case and yet in an hour of searching I have yet to find a common solution...
EDIT: There is another option: set the default value of the field in the database script to an expression containing my function. This is not as flexible as #1, but is probably the quickest and cleanest approach (although my initial investigation into extending F() looks promising).
Why can't you just denormalize the score and reconstruct it from the Vote objects every once in a while?
If you can't do that, it is very easy to make a 'property' function that acts as an object attribute for scoring.
@property
def score(self):
    # ... calculate score from the related Vote objects ...
    return score
I've never used F() on a property like this, but it's Python, so I bet it works.
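For concreteness, here's a sketch of what that property's body might look like using the 'hot' formula from the reddit source linked in the question, computed from the denormalized totals rather than the Vote objects (the field names upvotes, downvotes, and created_at are assumptions, and reddit's epoch constant is replaced by the Unix epoch):

from datetime import datetime, timezone
from math import log10

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

@property
def score(self):
    s = self.upvotes - self.downvotes            # net vote score
    order = log10(max(abs(s), 1))                # log-scaled magnitude
    sign = 1 if s > 0 else (-1 if s < 0 else 0)  # direction of the score
    seconds = (self.created_at - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)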
If you are using django-voting (which I recommend), you can put #3 in the manager's record_vote function since that's how all vote transactions take place.