I have a React Native project where I am using react-native-sqlite-storage (npm i react-native-sqlite-storage). The issue is that I get no results when I try the query shown below. I need to be able to select all rows where the columns exactly match the given values, and the match needs to work even when a value could be NULL. With the code below I get no results; however, if I remove the midiGroup field from the query I do get results. I'm not sure how to fix this.
Basically I need the comparison to work even when a column holds NULL. In React Native the message?.midiGroup value is null.
tx.executeSql(
  "SELECT * FROM MidiMap WHERE midiType = ? AND midiNote = ? AND midiChannel = ? AND midiVelocity = ? AND midiGroup = ?;",
  [
    message?.midiType,     // <---- noteOn
    message?.midiNote,     // <---- 60
    message?.midiChannel,  // <---- 0
    message?.midiVelocity, // <---- 64
    message?.midiGroup,    // <---- for this query midiGroup is null
  ],
  (sqlTx, res) => {
    console.log(activeProfile, message?.midiType);
    let len = res.rows.length;
    // Check for items here
    if (len > 0) {
      for (let i = 0; i < len; i++) {
        let item = res.rows.item(i);
        // Trigger here
        console.log(item);
      }
    }
  },
  (error) => {
    console.log(`Error checking results for midi triggering: ${error.message}`);
  }
);
Use the operator IS to make comparisons against null.
Depending on your requirement, your query could be written as:
SELECT *
FROM MidiMap
WHERE midiType = ?
AND midiNote = ?
AND midiChannel = ?
AND midiVelocity = ?
AND midiGroup IS ?;
or:
SELECT *
FROM MidiMap
WHERE midiType = ?
AND midiNote = ?
AND midiChannel = ?
AND midiVelocity = ?
AND (midiGroup = ? OR midiGroup IS NULL);
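For reference, here is how the first form might be dropped back into the question's executeSql call (just a sketch, reusing the same message fields from the question; the only change is IS instead of = for midiGroup):
tx.executeSql(
  "SELECT * FROM MidiMap WHERE midiType = ? AND midiNote = ? AND midiChannel = ? AND midiVelocity = ? AND midiGroup IS ?;",
  [
    message?.midiType,
    message?.midiNote,
    message?.midiChannel,
    message?.midiVelocity,
    message?.midiGroup, // IS ? matches when both sides are NULL, and behaves like = otherwise
  ],
  (sqlTx, res) => {
    console.log(res.rows.length);
  },
  (error) => {
    console.log(error.message);
  }
);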
I use an sql over sqlJdbc query as a data provider for my CCC controls. I use a geospatial request in my query, which is why I cache my results (Cache=True); otherwise the request takes a long time.
It works fine. However, I have to use parameters in my query to filter the resulting rows:
SELECT ...
FROM ...
WHERE someField IN (${aoi_param})
Is there some way to cache the full set of rows and then apply the WHERE clause to the cached results, without rebuilding the cache for each set of values in ${aoi_param}?
What is the best practice?
So, I am not really sure that it is the best practice, but I solved my problem this way:
I included aoi_param in the Listeners and Parameters of my chart control.
Then I filtered the data set in the Post Fetch callback:
function f(data) {
    var _aoi_param = this.dashboard.getParameterValue('${p:aoi_param}');

    function isInArray(myValue, myArray) {
        var arrayLength = myArray.length;
        for (var i = 0; i < arrayLength; i++) {
            if (myValue == myArray[i]) return true;
        }
        return false;
    }

    function getFiltered(cdaData, filterArray) {
        var allCdaData = cdaData;
        cdaData = {
            metadata: allCdaData.metadata,
            resultset: allCdaData.resultset.filter(function (row) {
                // 2nd column is an AOI id in my dataset
                return isInArray(row[2], filterArray);
            })
        };
        return cdaData;
    }

    var dataFiltered = getFiltered(data, _aoi_param);
    return dataFiltered;
}
Finally, I excluded WHERE someField IN (${aoi_param}) from the query of my sql over sqlJdbc component.
I'm writing a UDF to process Google Analytics data, and getting the "UDF out of memory" error message when I try to process multiple rows. I downloaded the raw data and found the largest record and tried running my UDF query on that, with success. Some of the rows have up to 500 nested hits, and the size of the hit record (by far the largest component of each row of the raw GA data) does seem to have an effect on how many rows I can process before getting the error.
For example, the query
select
  user.ga_user_id,
  ga_session_id,
  ...
from
  temp_ga_processing(
    select
      fullVisitorId,
      visitNumber,
      ...
    from [79689075.ga_sessions_20160201] limit 100)
returns the error, but
from [79689075.ga_sessions_20160201] where totals.hits = 500 limit 1)
does not.
I was under the impression that any memory limitations were per-row? I've tried several techniques, such as setting row = null; before emit(return_dict); (where return_dict is the processed data) but to no avail.
The UDF itself doesn't do anything fancy; I'd paste it here but it's ~45 kB in length. It essentially does a bunch of things along the lines of:
function temp_ga_processing(row, emit) {
  var topic_id = -1;
  var hit_numbers = [];
  var first_page_load_hits = [];
  var return_dict = {};
  return_dict["user"] = {};
  return_dict["user"]["ga_user_id"] = row.fullVisitorId;
  return_dict["ga_session_id"] = row.fullVisitorId.concat("-".concat(row.visitNumber));
  for (var i = 0; i < row.hits.length; i++) {
    var hit_dict = {};
    hit_dict["page"] = {};
    hit_dict["time"] = row.hits[i].time;
    hit_dict["type"] = row.hits[i].type;
    hit_dict["page"]["engaged_10s"] = false;
    hit_dict["page"]["engaged_30s"] = false;
    hit_dict["page"]["engaged_60s"] = false;
    var add_hit = true;
    for (var j = 0; j < row.hits[i].customMetrics.length; j++) {
      if (row.hits[i].customDimensions[j] != null) {
        if (row.hits[i].customMetrics[j]["index"] == 3) {
          var metrics = {"video_play_time": row.hits[i].customMetrics[j]["value"]};
          hit_dict["metrics"] = metrics;
          metrics = null;
          row.hits[i].customDimensions[j] = null;
        }
      }
    }
    hit_dict["topic"] = {};
    hit_dict["doctor"] = {};
    hit_dict["doctor_location"] = {};
    hit_dict["content"] = {};
    if (row.hits[i].customDimensions != null) {
      for (var j = 0; j < row.hits[i].customDimensions.length; j++) {
        if (row.hits[i].customDimensions[j] != null) {
          if (row.hits[i].customDimensions[j]["index"] == 1) {
            hit_dict["topic"] = {"name": row.hits[i].customDimensions[j]["value"]};
            row.hits[i].customDimensions[j] = null;
            continue;
          }
          if (row.hits[i].customDimensions[j]["index"] == 3) {
            if (row.hits[i].customDimensions[j]["value"].search("doctor") > -1) {
              return_dict["logged_in_as_doctor"] = true;
            }
          }
          // and so on...
        }
      }
    }
    if (row.hits[i]["eventInfo"]["eventCategory"] == "page load time" && row.hits[i]["eventInfo"]["eventLabel"].search("OUTLIER") == -1) {
      var elre = /(?:onLoad|pl|page):(\d+)/.exec(row.hits[i]["eventInfo"]["eventLabel"]);
      if (elre != null) {
        if (parseInt(elre[0].split(":")[1]) <= 60000) {
          first_page_load_hits.push(parseFloat(row.hits[i].hitNumber));
          if (hit_dict["page"]["page_load"] == null) {
            hit_dict["page"]["page_load"] = {};
          }
          hit_dict["page"]["page_load"]["sample"] = 1;
          var page_load_time_re = /(?:onLoad|pl|page):(\d+)/.exec(row.hits[i]["eventInfo"]["eventLabel"]);
          if (page_load_time_re != null) {
            hit_dict["page"]["page_load"]["page_load_time"] = parseFloat(page_load_time_re[0].split(':')[1]) / 1000;
          }
        }
        // and so on...
      }
    }
  }
  row = null;
  emit(return_dict);
}
The job ID is realself-main:bquijob_4c30bd3d_152fbfcd7fd
Update Aug 2016: We have pushed out an update that will allow the JavaScript worker to use twice as much RAM. We will continue to monitor jobs that have failed with JS OOM to see if more increases are necessary; in the meantime, please let us know if you have further jobs failing with OOM. Thanks!
Update: this issue was related to limits we had on the size of the UDF code. It looks like V8's optimize+recompile pass of the UDF code generates a data segment that is bigger than our limits, but this was only happening when the UDF runs over a "sufficient" number of rows. I'm meeting with the V8 team this week to dig into the details further.
#Grayson - I was able to run your job over the entire 20160201 table successfully; the query takes 1-2 minutes to execute. Could you please verify that this works on your side?
We've gotten a few reports of similar issues that seem related to # rows processed. I'm sorry for the trouble; I'll be doing some profiling on our JavaScript runtime to try to find if and where memory is being leaked. Stay tuned for the analysis.
In the meantime, if you're able to isolate any specific rows that cause the error, that would also be very helpful.
A UDF will fail on anything but very small datasets if it has a lot of if/then levels, such as:
if () {
    if () {
        if () {
            // etc.
We had to track down and remove the deepest if/then statement.
But that is not enough. In addition, when you pass the data into the UDF, run a "GROUP EACH BY" on all the variables; for example, wrap the UDF call in a subselect and GROUP EACH BY its output columns. This will force BQ to send the output to multiple "workers". Otherwise it will also fail.
I've wasted 3 days of my life on this annoying bug. Argh.
I love the concept of parsing my logs in BigQuery, but I've got the same problem. I get:
Error: Resources exceeded during query execution.
The Job Id is bigquery-looker:bquijob_260be029_153dd96cfdb, if that at all helps.
I wrote a very basic parser that does a simple match and returns rows. It works just fine on a 10K-row data set, but I run out of resources when trying to run it against a 3M-row logfile.
Any suggestions for a workaround?
Here is the JavaScript code.
function parseLogRow(row, emit) {
  var r = (row.logrow ? row.logrow : "") +
          (typeof row.l2 !== "undefined" ? " " + row.l2 : "") +
          (row.l3 ? " " + row.l3 : "");
  var ts = null;
  var category = null;
  var user = null;
  var message = null;
  var db = null;
  var found = false;
  if (r) {
    var m = r.match(/^(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (\+|\-)\d\d\d\d) \[([^|]*)\|([^|]*)\|([^\]]*)\] :: (.*)/);
    if (m) {
      ts = new Date(m[1]) / 1000;
      category = m[3] || null;
      user = m[4] || null;
      db = m[5] || null;
      message = m[6] || null;
      found = true;
    } else {
      message = r;
      found = false;
    }
  }
  emit({
    ts: ts,
    category: category,
    user: user,
    db: db,
    message: message,
    found: found
  });
}
bigquery.defineFunction(
  'parseLogRow',              // Name of the function exported to SQL
  ['logrow', 'l2', 'l3'],     // Names of input columns
  [
    {'name': 'ts', 'type': 'timestamp'},    // Output schema
    {'name': 'category', 'type': 'string'},
    {'name': 'user', 'type': 'string'},
    {'name': 'db', 'type': 'string'},
    {'name': 'message', 'type': 'string'},
    {'name': 'found', 'type': 'boolean'}
  ],
  parseLogRow                 // Reference to the JavaScript UDF
);
So I have a standard service reference proxy class for MS CRM 2013 (i.e. right-click, Add Service Reference, etc.). I then ran into the limitation that CRM data calls are limited to 50 results, and I wanted to get the full list of results. I found two methods; the one that looks more correct doesn't seem to work. I was wondering why it doesn't, and/or whether there is something I'm doing incorrectly.
Basic setup and process
crmService = new CrmServiceReference.MyContext(new Uri(crmWebServicesUrl));
crmService.Credentials = System.Net.CredentialCache.DefaultCredentials;
var accountAnnotations = crmService.AccountSet
    .Where(a => a.AccountNumber == accountNumber)
    .Select(a => a.Account_Annotation)
    .FirstOrDefault();
Using Continuation (something I want to work, but looks like it doesn't)
while (accountAnnotations.Continuation != null)
{
    accountAnnotations.Load(crmService.Execute<Annotation>(accountAnnotations.Continuation.NextLinkUri));
}
Using that method, .Continuation is always null and accountAnnotations.Count is always 50 (but there are more than 50 records).
After struggling with .Continuation for a while, I've come up with the following alternative method (but it seems "not good"):
var accountAnnotationData = accountAnnotations.ToList();
var accountAnnotationFinal = accountAnnotations.ToList();
var index = 1;
while (accountAnnotationData.Count == 50)
{
    accountAnnotationData = (from a in crmService.AnnotationSet
                             where a.ObjectId.Id == accountAnnotationData.First().ObjectId.Id
                             select a).Skip(50 * index).ToList();
    accountAnnotationFinal = accountAnnotationFinal.Union(accountAnnotationData).ToList();
    index++;
}
So the second method seems to work, but for any number of reasons it doesn't seem like the best approach. Is there a reason .Continuation is always null? Is there some setup step I'm missing, or some nicer way to do this?
The way to get the records from CRM is to use paging. Here is an example with a query expression, but you can also use FetchXML if you want:
// Query using the paging cookie.
// Define the paging attributes.
// The number of records per page to retrieve.
int fetchCount = 3;
// Initialize the page number.
int pageNumber = 1;
// Initialize the number of records.
int recordCount = 0;
// Define the condition expression for retrieving records.
ConditionExpression pagecondition = new ConditionExpression();
pagecondition.AttributeName = "address1_stateorprovince";
pagecondition.Operator = ConditionOperator.Equal;
pagecondition.Values.Add("WA");
// Define the order expression to retrieve the records.
OrderExpression order = new OrderExpression();
order.AttributeName = "name";
order.OrderType = OrderType.Ascending;
// Create the query expression and add condition.
QueryExpression pagequery = new QueryExpression();
pagequery.EntityName = "account";
pagequery.Criteria.AddCondition(pagecondition);
pagequery.Orders.Add(order);
pagequery.ColumnSet.AddColumns("name", "address1_stateorprovince", "emailaddress1", "accountid");
// Assign the pageinfo properties to the query expression.
pagequery.PageInfo = new PagingInfo();
pagequery.PageInfo.Count = fetchCount;
pagequery.PageInfo.PageNumber = pageNumber;
// The current paging cookie. When retrieving the first page,
// pagingCookie should be null.
pagequery.PageInfo.PagingCookie = null;
Console.WriteLine("#\tAccount Name\t\t\tEmail Address");
while (true)
{
    // Retrieve the page.
    EntityCollection results = _serviceProxy.RetrieveMultiple(pagequery);
    if (results.Entities != null)
    {
        // Retrieve all records from the result set.
        foreach (Account acct in results.Entities)
        {
            Console.WriteLine("{0}.\t{1}\t\t{2}",
                ++recordCount,
                acct.Name,
                acct.EMailAddress1);
        }
    }
    // Check for more records, if it returns true.
    if (results.MoreRecords)
    {
        // Increment the page number to retrieve the next page.
        pagequery.PageInfo.PageNumber++;
        // Set the paging cookie to the paging cookie returned from current results.
        pagequery.PageInfo.PagingCookie = results.PagingCookie;
    }
    else
    {
        // If no more records are in the result nodes, exit the loop.
        break;
    }
}
I am finding one odd error, but I'm not sure whether it is my mistake or not. Below is my code for a PhoneGap storage application. I have to fetch records by one field in a SELECT query. If I try without the WHERE condition it works, but with the WHERE clause it does not. Does anyone have an idea about this?
function setDetailWordList()
{
    db.transaction(SetDetailWords);
}

function SetDetailWords(tx)
{
    alert("Query executing...");
    alert('SELECT * FROM DW_Words WHERE word = "' + word + '"');
    tx.executeSql('SELECT * FROM DW_Words WHERE word = "' + word + '"', [], SetDetailWordsSuccess);
}

function SetDetailWordsSuccess(tx, results)
{
    alert(results.rows.length);
    $('#words').html("").listview('refresh');
    for (var i = 0; i < results.rows.length; i++)
    {
        $('#words').append('<li><a><div>' + results.rows.item(i).explanation + '</div></a></li>');
    }
    $('#words').listview('refresh');
    //AddEvents();
}
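As a side note, a minimal sketch (not tested against this app) of the same SELECT using a ? placeholder, so the value of word (assuming it is in scope here) is bound by the SQLite layer instead of being quoted into the SQL string by hand:
function SetDetailWords(tx)
{
    // Bind `word` as a parameter (assumes `word` is defined in an enclosing scope)
    tx.executeSql(
        'SELECT * FROM DW_Words WHERE word = ?',
        [word],
        SetDetailWordsSuccess,
        function (tx, error) {
            alert('Query failed: ' + error.message);
        }
    );
}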
Hello, I have a little problem with my paginator. I would like to use a custom SQL request, but each time I click on a paginator link it loads my model without the custom request.
I enter information in a form that I send with the GET method:
$view->grid->js()->reload(array(
    "From" => $form->get("From"),
    "To"   => $form->get("To"),
    "SSID" => $form->get("SSID")
))->execute();
In my view I have:
$this->request=$this->api->db->dsql();
$this->grid=$this->add('Grid');
$this->grid->setModel('Systemevents',array('ID','ReceivedAt','Message'));
$this->grid->addPaginator(10);
if (isset($_GET["SSID"])) {
    $this->ssid = $_GET["SSID"];
}
if (isset($_GET["From"])) {
    $this->from = $_GET["From"];
}
if (isset($_GET["To"])) {
    $this->to = $_GET["To"];
}
$this->grid->dq
    ->where($this->requette->expr('Message like "%.% ' . $this->ssid . ' % src=%"'))
    ->where($this->requette->expr("ReceivedAt >= '" . $this->from . "' AND ReceivedAt <= '" . $this->to . "'"));
The problem is that the WHERE condition disappears when I change the page with the paginator.
I did not find a solution to my problem, so I did something different:
I added two buttons to my grid which allow me to change the limit of the SQL request.
The previous button is hidden if the limit is 0.
Now I have to find out how to count the number of rows (select count('ID') from 'SystemEvents' where ...) and store it in a variable.
Finally, the ultimate solution was to do:
if ((isset($_GET["SSID"])) || (isset($_GET["From"])) || (isset($_GET["To"]))) {
    // GET method from Recherche.php
    $this->ssid = ($_GET["SSID"] == "null" ? null : $_GET["SSID"]);
    $this->api->stickyGET("SSID"); // <===== the solution is here
    $this->from = ($_GET["From"] == "null" ? null : $_GET["From"]);
    $this->api->stickyGET("From"); // <===== the solution is here
    $this->to = ($_GET["To"] == "null" ? null : $_GET["To"]);
    $this->api->stickyGET("To"); // <===== the solution is here
}