How do I set the project default Maximum Bytes Billed in BigQuery?

In the BigQuery UI, if I click Show Options there is a field called Maximum Bytes Billed - which is an awesome way to make sure you don't fat-finger an expensive query. When you don't enter a value in that field it says "project default". How do I set that project default? I can't find it anywhere.
Also, how does the project default work? Is it a hard cap that nobody can go over, or if someone enters a higher value in that field can they override the default? Does it apply to all queries from any source or only from the BigQuery UI?
Thanks.

Got official confirmation from Google that setting a project-wide default is not possible; Maximum Bytes Billed can only be set per query.
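For what it's worth, even without a project default, the per-query cap can be set programmatically, not just in the UI. Here is a minimal sketch using the google-cloud-bigquery Java client (a newer library than the UI discussed above; the query and the ~1 GB cap are just example values). If a query would bill more than the cap, BigQuery fails it up front instead of running it:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class MaxBytesBilledExample {
  public static void main(String[] args) throws Exception {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Cap billing for this one query at roughly 1 GB. If the query would
    // scan more than that, BigQuery refuses to run it rather than billing you.
    QueryJobConfiguration config = QueryJobConfiguration
        .newBuilder("SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus")
        .setUseLegacySql(false)
        .setMaximumBytesBilled(1_000_000_000L)
        .build();

    TableResult result = bigquery.query(config);
    result.iterateAll().forEach(row -> System.out.println(row.get(0).getStringValue()));
  }
}
```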

Related

In Ektron, Load Last Active Location

Question:
Anyone know what setting in my user profile will make Ektron always load the last active work area location? If not, is there a way to load a specific folder every time?
Already Tried:
"Set smart desktop as the start location in the workarea." doesn't seem to do anything.
Why:
I'm primarily a designer, so I'm usually just replacing files in the library and leaving the content area to the developers. It's kind of annoying that the content area always loads first because the folder structure looks the same and many times I actually navigate to the content folder instead of the library folder. This is a waste of time because Ektron is so slow. It would be really helpful to load the last active location in the area or at least the library files first.
Thanks!
The short answer is no. At least, not in any way that would be supported by Ektron or be upgrade-safe (upgrades would likely destroy changes made to include this functionality).
The long answer is that the Workarea source code is available, and a .NET developer who wanted to parse through it could probably figure it out. It would require adding a user property or cookie to store the last activity (at the desired level of specificity) and then updating any and all related code files to automatically take the user to the recorded location. It would be inadvisable due to the effort required and for the upgrade-related reasoning above.
THAT BEING SAID - Ektron's Workarea uses Frames and you may be able to bookmark one of those internal frames or else create your own set of frames in your own HTML that would load up the view that you want. Depends on whether it's important enough to you to put in the effort to figure those things out by inspecting enough of the client-side code.

-Denable-debug-rules=true not giving out statistics

I'm passing the flag -Denable-debug-rules=true, which according to the documentation (http://graphdb.ontotext.com/documentation/standard/rules-optimisations.html) should print statistics to the log at least every 5 minutes.
Unfortunately it isn't, and I need to figure out why inferencing is taking so long.
Help?
The specific file is http://purl.obolibrary.org/obo/pr.owl and I'm using the owl2-rl-optimized ruleset.
Version graphdb-ee-6.3.1
An exchange with GraphDB tech support clarified that the built-in rulesets cannot be monitored. To monitor one effectively, copy it into a new file and add that file as a custom ruleset, following http://graphdb.ontotext.com/documentation/enterprise/reasoning.html#operations-on-rulesets

Problems with BigQuery and Cloud SQL in same project

So, we have this one project which uses Cloud Storage and BigQuery as services. All has been well.
Then, I wanted to add Cloud SQL to this project to try it out. It asked for a unique Project ID, so I gave it one. (The Project ID is different from the Project Number.)
Ever since then, I've been having a difficult time accessing my BigQuery tables. When I go to the BigQuery web interface, the URL contains the Project ID instead of the original Project Number. It shows the list of datasets, but now shows the Project Number before each dataset name and the datasets are greyed out and inaccessible. If I manually change the URL to contain the Project Number instead of the Project ID, it appears to work although it shows the list of datasets in the left nav twice, one set greyed out and inaccessible and the other set seemingly accessible.
At the same time, some code that I've been successfully using in Apps Script to access BigQuery is now regularly failing with a generic "We're sorry, a server error occurred. Please wait a bit and try again." I'm not sure if this is related to the Project ID/Project Number confusion, or if it's just a red herring.
Since we actively use the Cloud Storage service of this project, I am trying to be cautious with further experimentation with this project. I'm not sure if I should delete the Cloud SQL service in this project to get it back to the way it was, or if this is a known issue with some back-end solution. Please advise.
After setting the project ID, there can be a delay before BigQuery picks up the change. It should happen within 15 minutes or so, but sometimes it takes longer.
If you send the project ID I can make sure it has been updated.
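If you want to verify propagation from your side, one option is to point an API client at the new Project ID and list its datasets. Here is a quick sketch with the Java client library, shown purely as an illustration; "your-project-id" is a placeholder:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;

public class CheckProjectId {
  public static void main(String[] args) {
    // Point the client at the new Project ID rather than the old Project Number.
    BigQuery bigquery = BigQueryOptions.newBuilder()
        .setProjectId("your-project-id")   // placeholder
        .build()
        .getService();

    // If the ID has propagated, the datasets list under it.
    for (Dataset dataset : bigquery.listDatasets().iterateAll()) {
      System.out.println(dataset.getDatasetId().getDataset());
    }
  }
}
```

An authorization or not-found error here would suggest the rename has not propagated yet.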

How to find a document's visitor count?

I need to count the number of visitors for a particular document.
I could do this by adding a field and incrementing its value.
But the problem is the following:
I have 10 replicas in different locations, replicated on a schedule, so replication conflicts occur because the visit counter edits the same document in different locations.
I would use an external solution for this. Just search for "visitor count" in your favorite search engine and choose a third party tool. You can then display the count on the page if that is important.
If you need to store the value in the database for some reason, perhaps you could store it as a new doc type that gets added each time (and cleaned up later) to avoid the replication issues.
Otherwise, if storing it isn't required, consider Google Analytics too.
I faced this problem too. I cannot say that it has an easy solution. Document locking is the only solution I found, but the visitor count is still not possible.
It is possible, but not by updating the document. Instead, have an AJAX call to an agent or form with parameters on the URL identifying the document being read. This call writes a document into a tracking DB with one or two views and then determines from those views how many reads you have had. The number of reads is the return value of the AJAX call.
This can be written in LotusScript, Java or @Formulas. I would try to do it 100% in @Formulas to make it as efficient as possible.
You can also add logic to exclude reads from the same user or same source IP address.
The tracking database then replicates using the same schedule as the other database.
Daily or hourly agents can run to create summary documents and delete the detail documents so that you do not exceed the limits for @DbLookup.
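To make the shape of this concrete, here is a rough sketch of such an agent in Java (one of the languages suggested above). All the specifics are assumptions for illustration: the readtrack.nsf tracking database, its ReadsByDoc view keyed on document UNID, and the unid= URL parameter are made-up names, and a real version would add the duplicate-read and summary logic described above.

```java
import java.io.PrintWriter;
import lotus.domino.*;

public class JavaAgent extends AgentBase {
  public void NotesMain() {
    try {
      Session session = getSession();
      AgentContext ac = session.getAgentContext();

      // For a web-invoked agent, the document context carries the CGI
      // variables; Query_String holds everything after the '?'.
      Document ctx = ac.getDocumentContext();
      String query = ctx.getItemValueString("Query_String");
      // No validation here: this sketch assumes the URL ends in "unid=<UNID>".
      String unid = query.substring(query.indexOf("unid=") + 5);

      // "readtrack.nsf" and its "ReadsByDoc" view (keyed on DocUNID)
      // are made-up names for this sketch.
      Database trackDb = session.getDatabase("", "readtrack.nsf");
      Document hit = trackDb.createDocument();
      hit.replaceItemValue("Form", "ReadEvent");
      hit.replaceItemValue("DocUNID", unid);
      hit.replaceItemValue("Reader", session.getEffectiveUserName());
      hit.save(true, false);

      // Count every recorded read of this document.
      View byDoc = trackDb.getView("ReadsByDoc");
      int count = byDoc.getAllEntriesByKey(unid, true).getCount();

      // Whatever the agent prints becomes the body of the AJAX response.
      PrintWriter out = getAgentOutput();
      out.println(count);
    } catch (NotesException e) {
      e.printStackTrace();
    }
  }
}
```

The browser-side part is just an asynchronous request to this agent's ?OpenAgent URL with the current document's UNID appended; the response body is the updated count.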
If you do not need near-real-time counts (and that is the best you can get with a replicated system like this), you could use the web logs that Domino generates: find the reads in the logs and build the counts in a document per server.
/Newbs
Back in the 90s, we had a client that needed to know that each person had read a document without them clicking to sign or anything.
The initial solution was to add each name to a text field on a separate tracking document. This hit the 32K field limit real fast. Then, one of my colleagues realized you could just have it create a document for each user to record that they'd read it.
Heck, you could have one database track all reads for all users of all documents, since one user can only open one document at a time. Each time they open a new document, either add that value to a field or create a field named after the document they've read on their own "reader tracker" document.
Or you could make that a mail-in database, so no worries about replication. Each time they open a document for which you want to track reads, it creates a tiny document that has only their name and what document they read, which gets mailed into the "read counter database". If you don't care who read it, you can have an agent that runs on a schedule to update the count and delete the mailed-in documents.
There really are a lot of ways to skin this cat.

How to enable a view in SharePoint 2010 when more than 8 lookup columns are present?

I have a SharePoint 2010 list which contains around 15 lookup columns. I have created a view in which all 15 are present. When I try to open that view I get the following message:
This view cannot be displayed because the number of lookup and workflow status columns it contains exceeds the threshold (8) enforced by the administrator.
Is there a way to remove or change this limitation? Thanks.
I've discovered that this limitation is not a limitation, but a setting - and it can be altered! Go to Central Administration and then browse to:
Application Management > Manage Web Application.
In the Web Application list, select the web application you need.
Then go to General Settings > Resource Throttling.
In the Resource Throttling window, scroll down to List View Lookup Threshold and change the value to the number that suits your needs.
Of course, increasing this value degrades performance, as there's more drilling through SQL tables, so be careful not to go too far. And one more thing: changing this value not only affects the list views, it also changes the behavior of the methods that work with list items. E.g. having this option set to 8 will result in a maximum of 8 lookup fields being returned for a list item when the GetListItems(query) method is called (Client Object Model). Increasing this number to, say, 15 would consequently increase the maximum number of returned lookup fields for a list item. Pretty neat!
You're right, Boris. Though, everyone, keep in mind that this setting cannot be changed in SharePoint Online (Office 365).
That's true, Boris, but keep in mind that increasing this threshold will impact performance heavily.
Please check this article from MSDN.