Error using metrics listed in Google's Metrics and Dimensions API reference

I am using this code to query the API:
function getResults(&$analytics, $profileId) {
    // Calls the Core Reporting API and queries for the number of sessions
    // for the last 30 days.
    return $analytics->data_ga->get(
        'ga:' . $profileId,
        '30daysAgo',
        'today',
        'ga:sessionCount,ga:sessionDurationBucket,ga:users,ga:percentNewSessions,ga:bounceRate,ga:pageviews');
}
I get this error upon executing the code:
Fatal error: Uncaught exception 'Google_Service_Exception' with
message 'Error calling GET
https://www.googleapis.com/analytics/v3/data/ga?ids=ga%3A114460017&start-date=30daysAgo&end-date=today&metrics=ga%3AsessionCount%2Cga%3AsessionDurationBucket%2Cga%3Ausers%2Cga%3ApercentNewSessions%2Cga%3AbounceRate%2Cga%3Apageviews:
(400) Unknown metric(s): ga:sessionCount, ga:sessionDurationBucket
Has anyone experienced this? I do not understand why it does not recognize those metrics when they are listed here:
https://developers.google.com/analytics/devguides/reporting/core/dimsmets#view=detail&group=user&jump=ga_sessioncount

If you look more closely at that documentation, you will see that session count is not a metric; it's a dimension. The reason is that you want to be able to do breakdowns of metrics by session count (e.g. "show avg. duration of sessions for users with 3 sessions"), and for that you need categorical data.
Even if you overlook the (not particularly distinctive) column heading in the table of contents (ga:sessionCount is in the "Dimensions" column), the fact that the data type is a string would be a dead giveaway. Metrics are always numbers. Dimensions are always strings, even if they sometimes look like numbers.
Same goes for ga:sessionDurationBucket.
Look at this example from the documentation to see how dimensions are passed into the query via an array that holds optional parameters:
private function queryCoreReportingApi() {
    $optParams = array(
        'dimensions' => 'ga:source,ga:keyword',
        'sort' => '-ga:sessions,ga:source',
        'filters' => 'ga:medium==organic',
        'max-results' => '25');

    return $service->data_ga->get(
        TABLE_ID,
        '2010-01-01',
        '2010-01-15',
        'ga:sessions',
        $optParams);
}
You'd need to construct a similar $optParams array:
$optParams = array(
    'dimensions' => 'ga:sessionCount,ga:sessionDurationBucket');
and pass it to your query:
return $analytics->data_ga->get(
    'ga:' . $profileId,
    '30daysAgo',
    'today',
    'ga:users,ga:percentNewSessions,ga:bounceRate,ga:pageviews',
    $optParams);
}
and remove the dimensions from the list of metrics.
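Putting those pieces together, the whole function could look roughly like this (a sketch only, assuming the same $analytics service object and profile ID as in your original code):
function getResults(&$analytics, $profileId) {
    // Dimensions go into the optional-parameters array...
    $optParams = array(
        'dimensions' => 'ga:sessionCount,ga:sessionDurationBucket');

    // ...and the metrics argument contains only real metrics.
    return $analytics->data_ga->get(
        'ga:' . $profileId,
        '30daysAgo',
        'today',
        'ga:users,ga:percentNewSessions,ga:bounceRate,ga:pageviews',
        $optParams);
}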
By the way, Google has a wonderful documentation page on the difference between dimensions and metrics and how they are used in the reports.

Related

BigQuery UDF Internal Error

We had a simple UDF in BigQuery that keeps returning this error:
Query Failed
Error: An internal error occurred and the request could not be completed.
The query was simply trying to use the UDF to compute a SHA256 hash:
SELECT
  input AS title,
  input_sha256 AS title_sha256
FROM
  SHA256(
    SELECT
      title AS input
    FROM
      [bigquery-public-data:hacker_news.stories]
    GROUP BY
      input
  )
LIMIT
  1000
The inline UDF is pasted below. However, I cannot post the full UDF as Stack Overflow complains there is too much code in the post. The full UDF can be seen in this gist.
function sha256(row, emit) {
  emit(
    {
      input: row.input,
      input_sha256: CryptoJS.SHA256(row.input).toString(CryptoJS.enc.Hex)
    }
  );
}

bigquery.defineFunction(
  'SHA256',    // Name of the function exported to SQL
  ['input'],   // Names of input columns
  [
    {'name': 'input', 'type': 'string'},
    {'name': 'input_sha256', 'type': 'string'}
  ],
  sha256       // Reference to JavaScript UDF
);
Not sure if it helps, but the Job-ID is
bigquery:bquijob_7fd3b51c_153c058dc7c
Looks like there is a similar issue at:
https://code.google.com/p/google-bigquery/issues/detail?id=478
Short answer - this is an issue related to memory allocation that I uncovered via my own testing and fixed today, but it will take a little while to flow out to production.
Slightly longer answer - we just rolled out a fix today for an issue where users were hitting "out of memory" errors when scaling their UDFs up to larger numbers of rows, even though the same UDF would succeed on smaller numbers of rows. The queries that were hitting that condition are now running fine on our internal / test trees. However, since public BigQuery hosts have much higher traffic loads, the JavaScript engine that executes the UDFs (V8) behaves somewhat differently in production than it does in internal trees. Specifically, there's a new memory allocation error that some of the previously OOMing jobs are now hitting that we couldn't observe until the queries ran on a fully-loaded tree.
It's a minor error with a quick fix, but we'd ideally let it flow through our regular testing and QA cycle. This should put the fix in production in about a week, assuming nothing else goes wrong with the candidate. Would that be acceptable for you?
I am re-using the answer box to provide the full query text. It works if you uncomment LIMIT 40.
SELECT input, input_sha256 FROM JS(
(
SELECT title AS input
FROM [bigquery-public-data:hacker_news.stories]
GROUP BY input
//LIMIT 40
),
input,
"[ {'name': 'input', 'type': 'string'}, {'name': 'input_sha256', 'type': 'string'} ] ",
"function(row, emit) {
var CryptoJS=CryptoJS||function(h,s){var f={},g=f.lib={},q=function(){},m=g.Base={extend:function(a){q.prototype=this;var c=new q;a&&c.mixIn(a);c.hasOwnProperty('init')||(c.init=function(){c.$super.init.apply(this,arguments)});c.init.prototype=c;c.$super=this;return c},create:function(){var a=this.extend();a.init.apply(a,arguments);return a},init:function(){},mixIn:function(a){for(var c in a)a.hasOwnProperty(c)&&(this[c]=a[c]);a.hasOwnProperty('toString')&&(this.toString=a.toString)},clone:function(){return this.init.prototype.extend(this)}}, r=g.WordArray=m.extend({init:function(a,c){a=this.words=a||[];this.sigBytes=c!=s?c:4*a.length},toString:function(a){return(a||k).stringify(this)},concat:function(a){var c=this.words,d=a.words,b=this.sigBytes;a=a.sigBytes;this.clamp();if(b%4)for(var e=0;e<a;e++)c[b+e>>>2]|=(d[e>>>2]>>>24-8*(e%4)&255)<<24-8*((b+e)%4);else if(65535<d.length)for(e=0;e<a;e+=4)c[b+e>>>2]=d[e>>>2];else c.push.apply(c,d);this.sigBytes+=a;return this},clamp:function(){var a=this.words,c=this.sigBytes;a[c>>>2]&=4294967295<< 32-8*(c%4);a.length=h.ceil(c/4)},clone:function(){var a=m.clone.call(this);a.words=this.words.slice(0);return a},random:function(a){for(var c=[],d=0;d<a;d+=4)c.push(4294967296*h.random()|0);return new r.init(c,a)}}),l=f.enc={},k=l.Hex={stringify:function(a){var c=a.words;a=a.sigBytes;for(var d=[],b=0;b<a;b++){var e=c[b>>>2]>>>24-8*(b%4)&255;d.push((e>>>4).toString(16));d.push((e&15).toString(16))}return d.join('')},parse:function(a){for(var c=a.length,d=[],b=0;b<c;b+=2)d[b>>>3]|=parseInt(a.substr(b, 2),16)<<24-4*(b%8);return new r.init(d,c/2)}},n=l.Latin1={stringify:function(a){var c=a.words;a=a.sigBytes;for(var d=[],b=0;b<a;b++)d.push(String.fromCharCode(c[b>>>2]>>>24-8*(b%4)&255));return d.join('')},parse:function(a){for(var c=a.length,d=[],b=0;b<c;b++)d[b>>>2]|=(a.charCodeAt(b)&255)<<24-8*(b%4);return new r.init(d,c)}},j=l.Utf8={stringify:function(a){try{return decodeURIComponent(escape(n.stringify(a)))}catch(c){throw Error('Malformed UTF-8 data');}},parse:function(a){return n.parse(unescape(encodeURIComponent(a)))}}, u=g.BufferedBlockAlgorithm=m.extend({reset:function(){this._data=new r.init;this._nDataBytes=0},_append:function(a){'string'==typeof a&&(a=j.parse(a));this._data.concat(a);this._nDataBytes+=a.sigBytes},_process:function(a){var c=this._data,d=c.words,b=c.sigBytes,e=this.blockSize,f=b/(4*e),f=a?h.ceil(f):h.max((f|0)-this._minBufferSize,0);a=f*e;b=h.min(4*a,b);if(a){for(var g=0;g<a;g+=e)this._doProcessBlock(d,g);g=d.splice(0,a);c.sigBytes-=b}return new r.init(g,b)},clone:function(){var a=m.clone.call(this); a._data=this._data.clone();return a},_minBufferSize:0});g.Hasher=u.extend({cfg:m.extend(),init:function(a){this.cfg=this.cfg.extend(a);this.reset()},reset:function(){u.reset.call(this);this._doReset()},update:function(a){this._append(a);this._process();return this},finalize:function(a){a&&this._append(a);return this._doFinalize()},blockSize:16,_createHelper:function(a){return function(c,d){return(new a.init(d)).finalize(c)}},_createHmacHelper:function(a){return function(c,d){return(new t.HMAC.init(a, d)).finalize(c)}}});var t=f.algo={};return f}(Math);
(function(h){for(var s=CryptoJS,f=s.lib,g=f.WordArray,q=f.Hasher,f=s.algo,m=[],r=[],l=function(a){return 4294967296*(a-(a|0))|0},k=2,n=0;64>n;){var j;a:{j=k;for(var u=h.sqrt(j),t=2;t<=u;t++)if(!(j%t)){j=!1;break a}j=!0}j&&(8>n&&(m[n]=l(h.pow(k,0.5))),r[n]=l(h.pow(k,1/3)),n++);k++}var a=[],f=f.SHA256=q.extend({_doReset:function(){this._hash=new g.init(m.slice(0))},_doProcessBlock:function(c,d){for(var b=this._hash.words,e=b[0],f=b[1],g=b[2],j=b[3],h=b[4],m=b[5],n=b[6],q=b[7],p=0;64>p;p++){if(16>p)a[p]= c[d+p]|0;else{var k=a[p-15],l=a[p-2];a[p]=((k<<25|k>>>7)^(k<<14|k>>>18)^k>>>3)+a[p-7]+((l<<15|l>>>17)^(l<<13|l>>>19)^l>>>10)+a[p-16]}k=q+((h<<26|h>>>6)^(h<<21|h>>>11)^(h<<7|h>>>25))+(h&m^~h&n)+r[p]+a[p];l=((e<<30|e>>>2)^(e<<19|e>>>13)^(e<<10|e>>>22))+(e&f^e&g^f&g);q=n;n=m;m=h;h=j+k|0;j=g;g=f;f=e;e=k+l|0}b[0]=b[0]+e|0;b[1]=b[1]+f|0;b[2]=b[2]+g|0;b[3]=b[3]+j|0;b[4]=b[4]+h|0;b[5]=b[5]+m|0;b[6]=b[6]+n|0;b[7]=b[7]+q|0},_doFinalize:function(){var a=this._data,d=a.words,b=8*this._nDataBytes,e=8*a.sigBytes; d[e>>>5]|=128<<24-e%32;d[(e+64>>>9<<4)+14]=h.floor(b/4294967296);d[(e+64>>>9<<4)+15]=b;a.sigBytes=4*d.length;this._process();return this._hash},clone:function(){var a=q.clone.call(this);a._hash=this._hash.clone();return a}});s.SHA256=q._createHelper(f);s.HmacSHA256=q._createHmacHelper(f)})(Math);
(function(){var h=CryptoJS,j=h.lib.WordArray;h.enc.Base64={stringify:function(b){var e=b.words,f=b.sigBytes,c=this._map;b.clamp();b=[];for(var a=0;a<f;a+=3)for(var d=(e[a>>>2]>>>24-8*(a%4)&255)<<16|(e[a+1>>>2]>>>24-8*((a+1)%4)&255)<<8|e[a+2>>>2]>>>24-8*((a+2)%4)&255,g=0;4>g&&a+0.75*g<f;g++)b.push(c.charAt(d>>>6*(3-g)&63));if(e=c.charAt(64))for(;b.length%4;)b.push(e);return b.join('')},parse:function(b){var e=b.length,f=this._map,c=f.charAt(64);c&&(c=b.indexOf(c),-1!=c&&(e=c));for(var c=[],a=0,d=0;d< e;d++)if(d%4){var g=f.indexOf(b.charAt(d-1))<<2*(d%4),h=f.indexOf(b.charAt(d))>>>6-2*(d%4);c[a>>>2]|=(g|h)<<24-8*(a%4);a++}return j.create(c,a)},_map:'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/='}})();
emit( { input: row.input, input_sha256: CryptoJS.SHA256(row.input).toString(CryptoJS.enc.Hex) } );
}"
)

How do I get the results of the Plucky Query inside my controller?

I'm missing something simple - I do not want to access the results of this query in a view.
Here is the query:
@adm = Admin.where({:id => {"$ne" => params[:id].to_s}, :email => params[:email]})
And of course when you inspect you get:
@adm is #<MongoMapper::Plugins::Querying::DecoratedPluckyQuery:0x007fb4be99acd0>
I understand (from asking the MM guys) why this is the case - they wished to delay the results of the actual query as long as possible, and only get a representation of the query object until we render (in a view!).
But what I'm trying to ascertain in my code is IF one of my params matches or doesn't match the result of my query in the controller so I can either return an error message or proceed.
Normally in a view I'm going to do:
@adm.id
to get the BSON ID out of it. When you try this on the decorated query, of course, it fails:
NoMethodError (undefined method `id' for #<MongoMapper::Plugins::Querying::DecoratedPluckyQuery:0x007fb4b9e9f118>)
This is because it's not actually a Ruby Object yet, it's still the query proxy.
Now I'm fundamentally missing something because I never read a "getting started with Ruby" guide - I just smashed my way in here and learned through brute-force. So, what method do I call to get the results of the Plucky Query?
The variable @adm is set to a query, as you've seen. So, to access the results, you'll need to trigger execution of the query. There are a variety of activation methods you can call, including all, first, and last. There's a little documentation here.
In this case, you could do something like:
adm_query = Admin.where({:id => {"$ne" => params[:id].to_s}, :email => params[:email]})
@adm_user = adm_query.first
That would return the first user. Then you can check for nil:
if @adm_user.nil?
  # do something if no results were found
end
You could also limit the query results:
adm_query = Admin.where( ... your query ...).limit(1)

Is getting the General ID the same as getting the FormattedID in Rally?

I am trying to get the ID under "General" from a feature item in Rally. This is my query:
body = { "find" => {"_ProjectHierarchy" => projectID, "_TypeHierarchy" => "PortfolioItem/Feature"
},
"fields" => ["FormattedID","Name","State","Release","_ItemHierarchy","_TypeHierarchy","Tags"],
"hydrate" => ["_ItemHierarchy","_TypeHierarchy","Tags"],
"fetch"=>true
}
I am not able to get any value for FormattedID. I tried using "_UnformattedID", but it pulls up an entirely different value than the FormattedID. Any help would be appreciated.
LBAPI does not have a FormattedID field. You are correct to use _UnformattedID; it is the FormattedID without the prefix. For example, this query:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/1111/artifact/snapshot/query.js?find={"_ProjectHierarchy":2222,"_TypeHierarchy":"PortfolioItem/Feature","State":"Developing",_ValidFrom: {$gte: "2013-06-01TZ",$lt: "2013-09-01TZ"}},sort:{_ValidFrom:-1}}&fields=["_UnformattedID","Name","State"]&hydrate=["State"]&compress=true&pagesize:200
returns _UnformattedID values that correspond to FormattedID.
I noticed you are using both fields and fetch. Per LBAPI's documentation, it uses fields rather than fetch; if you want to get all fields, use fields=true.
As for the missing custom fields, make sure that the custom field value was set within the dates of the query.
Compare these almost identical queries: the first query does not return a custom field, the second query does.
Query #1:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/1111/artifact/snapshot/query.js?find={"_ProjectHierarchy":2222,"_TypeHierarchy":"PortfolioItem/Feature","State":"Developing",_ValidFrom: {$gte: "2013-06-01TZ",$lt: "2013-09-01TZ"}}}&fields=["_UnformattedID","Name","State","c_PiCustomField"]&hydrate=["State","c_PiCustomField"]
Query #2:
https://rally1.rallydev.com/analytics/v2.0/service/rally/workspace/11111/artifact/snapshot/query.js?find={"_ProjectHierarchy":2222,"_TypeHierarchy":"PortfolioItem/Feature","State":"Developing",__At: "current"}&fields=["_UnformattedID","Name","State","c_PiCustomField"]&hydrate=["State","c_PiCustomField"]
The first query uses time period: _ValidFrom: {$gte: "2013-06-01TZ",$lt: "2013-09-01TZ"}
The second query uses __At: "current"
Let's say I just create a new custom field on PortfolioItem. It is not possible to create a custom field on PortfolioItem/Feature, so the field is created on PI, but both queries still use "_TypeHierarchy":"PortfolioItem/Feature".
After I created this custom field, called PiCustomField, I set a value of that field for a specific Feature, F4.
The first query does not have a single snapshot that includes that field, because the field did not exist in the time period we look back over. We can't change the past.
The second query returns this field for F4. It does not return it for other Features because all other Features do not have this field set.

dijit FilteringSelect with min length

I can't seem to find a way to require the FilteringSelect input to be of a certain length. I've tried this:
new dijit.form.FilteringSelect({
    'name': 'bla',
    'store': jsonRestStore,
    'searchAttr': "name",
    'pattern': '.{3,}',
    'regExp': '.{3,}'
});
but it doesn't change a thing. I want the FilteringSelect to only query the store if at least 3 characters have been entered. That can't be that exotic a requirement, can it? There are thousands of items behind that store, so querying it with just 1 or 2 characters is slow.
I did a bit more searching and found this post on the dojo mailing list. To summarize, there is no native support for this in the FilteringSelect, but it is extremely easy to implement, for example by mixing the following into a subclass:
// Subclass FilteringSelect with a custom minimum input character count
var MinKeyFilteringSelect = dojo.declare(dijit.form.FilteringSelect, {
    // minimum number of characters required to trigger a search
    minKeyCount: 3,

    // override the search method and check the input length first
    _startSearch: function (/*String*/ key) {
        if (!key || key.length < this.minKeyCount) {
            this.closeDropDown();
            return;
        }
        this.inherited(arguments);
    }
});
Also, in the API docs there is a searchDelay attribute, which could be helpful in minimizing the number of queries.
searchDelay
Delay in milliseconds between when user types something and we start searching based on that value

Amazon API -- Can I search category ALL, rather than DVD etc.?

I am trying to play with API code from Amazon -- I am a noob at this. I have created a product search using the simple lookup code and set the search field from a form submission, which works fine. However, I don't want to set a specific category like I currently do below (DVD, BABY, MUSIC); I wish to set it to ALL. Is this possible?
include("amazon_api_class.php");
$obj = new AmazonProductAPI(); -- I have edited this and added ALL as a category in here
try
{
$result = $obj->searchProducts($query,
AmazonProductAPI::BABY, -- I can change this to DVD or MUSIC and it works but if i set to ALL i get errors?
"TITLE"); - tryed changing this to KEYWORD doesnt work!
}
catch(Exception $e)
Any help would be nice.
Thanks,
Carl
OK, updated: I believe I have to use KEYWORD when using ALL, so I have added this in:
case "KEYWORD" : $parameters = array("Operation" => "ItemSearch",
"Title" => $search,
"SearchIndex" => $category,
"ResponseGroup" => "Small",
"MerchantId" => "All",
"Condition"=>"New",
'Keywords' => $searchTerm);
Warning: Invalid argument supplied for foreach() in /data/ADMINwhere2shoponline/www/include/amazon.php on line 23
Why do I still get this error?
Carl
Carl,
You should be able to use ALL as the search index, but you need to make sure that the ItemPage you are requesting is not more than 5, or it will return an error. All other categories allow up to 10 pages, but ALL is limited to 5.
Check that and see if that gets your problem resolved.
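For illustration, here is a rough sketch of what the KEYWORD case could look like when targeting the All index, based on the switch statement in your question (the wrapper class, $search, and the exact parameter set are assumptions, and MerchantId / Condition are left out for simplicity):
case "KEYWORD" :
    // Hypothetical sketch: with SearchIndex "All", pass the search term as
    // Keywords rather than Title, and keep the requested ItemPage at 5 or below.
    $parameters = array("Operation"     => "ItemSearch",
                        "SearchIndex"   => "All",
                        "Keywords"      => $search,
                        "ResponseGroup" => "Small",
                        "ItemPage"      => 1);  // "All" allows at most 5 result pages
    break;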