AJAX-filled pulldown shows 5 identical options when there are 5 different options in the database

Scratching my head here. I've got a pulldown, and if I run the query in a SQL Server Management Studio query window I get 5 different values (these are sample points for a water system).
However, when the pulldown loads, it shows 5 options of the first value. Can someone see something I can't?
I narrowed it down to the code below: when I hovered over "results", the final step in my controller's code, it showed 5 items, all with the same value:
else if ((sampletype == "P") || (sampletype == "T") || (sampletype == "C") || (sampletype == "A"))
{
    var SamplePoints = (from c in _db.tblPWS_WSF_SPID_ISN_Lookup
                        where c.PWS == id && c.WSFStateCode.Substring(0, 1) == "S"
                        select c).ToList();
    if (SamplePoints.Any())
    {
        var listItemsBig = SamplePoints.Select(p => new SelectListItem
        {
            Selected = false,
            Text = p.WSFStateCode.ToString() + ":::" + p.SamplePointID.ToString(),
            Value = p.WSFStateCode.ToString()
        }).ToList();
        results = new JsonResult { Data = listItemsBig };
    }
}
return results;
}

I have had a similar problem in NHibernate; it was caused by how I defined my primary keys/foreign keys in the ORM, leading to a bad join and duplicate values.
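If you are on Entity Framework, the same failure mode exists: when the key configured for tblPWS_WSF_SPID_ISN_Lookup covers only part of the table's real key, identity resolution materializes every row that shares the mapped key value as the same entity, so you get N copies of the first row. A minimal sketch of the fix, assuming EF6 and a hypothetical composite key of PWS plus SamplePointID (the actual key columns here are an assumption):
    // Hypothetical mapping: declare the full composite key so EF stops
    // collapsing distinct rows into one tracked entity.
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        modelBuilder.Entity<tblPWS_WSF_SPID_ISN_Lookup>()
            .HasKey(c => new { c.PWS, c.SamplePointID });
    }
A quick way to test the theory is to add AsNoTracking() to the query; if the duplicates disappear when tracking (and with it identity resolution) is off, the key mapping is the culprit.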

ASP.Net Core SelectList not displaying selected item

Using ASP.Net Core 2.2 in VS 2017 (MVC):
I have a wizard-based screen that has a drop-down list of Organizations.
When I load a record from the database, I want to set the select list to the values in the DB.
Spoiler alert: the specified item is not selected when the page is initially displayed.
When the item is manually selected, and the ‘Next’ button is clicked and then the ‘Prev’ button is clicked, returning us to the first page, the item is selected just as I intend.
None of the methods below set the ‘Selected’ property of the selected item (“0403”) when loading the model in the controller for the first display.
The compare in Method 2 never evaluates to true, even though I can see that the compared values are equal while debugging.
Method 3 does not find the item, even though I can see that the compared values are equal while debugging.
Method 4 does not find the item, even though I can see that the compared values are equal while debugging.
This is all happening in the controller, so please do not suggest changing the name of the dropdown in the view.
**Org table**
OrgID   OrgName
0004    Org 4
0007    Org 7
0008    Org 8
0403    Org 403
This is my source query:
var orgsQuery = from s in _context.Orgs
                orderby s.OrgID
                select s;
These are the various ways I have tried building the select list in the controller:
1).
SelectList oList = new SelectList(orgsQuery.AsNoTracking(), "OrgID", "OrgName", selectedOrg);
2).
List<SelectListItem> oList = new List<SelectListItem>();
foreach (var item in orgsQuery)
{
    oList.Add(new SelectListItem()
    {
        Text = item.OrgName,
        Value = item.OrgID,
        Selected = (item.OrgID == (string)selectedOrg)
    });
}
3).
if (selectedOrg != null)
{
    oList = new SelectList(orgsQuery.AsNoTracking(), "OrgID", "OrgName",
        orgsQuery.First(s => s.OrgID == (string)selectedOrg));
}
else
{
    oList = new SelectList(orgsQuery.AsNoTracking(), "OrgID", "OrgName", selectedOrg);
}
4).
SelectList oList =
    new SelectList(_context.Org.OrderBy(r => r.OrgID),
        _context.Org.SingleOrDefault(s => s.OrgID == (string)selectedOrg));
Well, assuming that selectedOrg is numeric, it should be converted to a formatted string that follows the OrgID format:
var formattedSelectedOrg = String.Format("{0:0000}", selectedOrg);
Or
// if selectedOrg is nullable numeric type
var formattedSelectedOrg = selectedOrg?.ToString("0000");
// if selectedOrg is base numeric type
var formattedSelectedOrg = selectedOrg.ToString("0000");
// if selectedOrg is String type (nullable)
var formattedSelectedOrg = selectedOrg?.Trim().PadLeft(4, '0');
Or
int number;
var formattedSelectedOrg =
    String.Format("{0:0000}", Int32.TryParse(selectedOrg?.Trim(), out number) ? number : 0);
So it becomes comparable when applied to your code:
// assume selectedOrg is base numeric type
Selected = item.OrgID == selectedOrg.ToString("0000")
Or
// if selectedOrg is String type (nullable)
Selected = item.OrgID == selectedOrg?.Trim().PadLeft(4, '0')
Or
// if you prefer to use predefined formatted string
Selected = item.OrgID == formattedSelectedOrg
And
// assume selectedOrg is base numeric type
s => s.OrgID == selectedOrg.ToString("0000")
Or
// if selectedOrg is String type (nullable)
s => s.OrgID == selectedOrg?.Trim().PadLeft(4, '0')
Or
// if you prefer to use predefined formatted string
s => s.OrgID == formattedSelectedOrg
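Putting the pieces together, a minimal sketch of Method 2 with the formatting applied up front (assuming selectedOrg is a non-nullable int; if it is a string, swap in the PadLeft variant above):
    // Sketch only: format once, then compare the zero-padded strings.
    var formattedSelectedOrg = selectedOrg.ToString("0000");
    List<SelectListItem> oList = orgsQuery
        .Select(item => new SelectListItem
        {
            Text = item.OrgName,
            Value = item.OrgID,
            Selected = item.OrgID == formattedSelectedOrg
        })
        .ToList();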
Check this out:
Custom numeric format strings
String.PadLeft Method
Also see this related question:
Pad left with zeroes
Hope this helps. Enjoy your coding!

Merging many spreadsheets into report file exceeds maximum execution time

I am using the following script to add rows to a combined Google spreadsheet, looping over the student files, whenever a student's credits are less than x. The script was working well, but as data is added to the spreadsheets daily, the script now throws an "Exceeded maximum execution time" error (we have more than 2,000 files). As I am new to scripting, I don't know how to optimize the code.
Could someone help me optimize the code, or suggest any other solution, so that the execution time stays under 5 minutes? Every student's email has to be compared against the many emails already in the combined file. Please help!
function updated() {
  // Final file data (combined)
  var filecombined = SpreadsheetApp.openById("XXXXXXXXXX");
  var sheet2 = filecombined.getSheets();
  // Folder with all the files
  var parentFolder = DriveApp.getFolderById("YYYYYYYYYYYY");
  var files = parentFolder.getFiles();
  // Current date
  var fecha = new Date();
  // Loop over each file in the folder
  while (files.hasNext()) {
    var idarchivo = files.next().getId();
    var sps = SpreadsheetApp.openById(idarchivo);
    var sheet = sps.getSheetByName('STUDENT PROFILE');
    var data = sheet.getDataRange().getValues();
    var credits = data[5][1];
    // Flag; bandera: 1 (new row), bandera: 2 (update row)
    var bandera = 1;
    // Take data from final file (combined)
    var data2 = sheet2[0].getDataRange().getValues();
    // If credits are less than X: write
    if (credits < 120) {
      var email = data[2][1];
      var lastrow = filecombined.getLastRow();
      var u = 0;
      // Comparison loop by email; if found, update and exit the loop
      while (u < lastrow) {
        u = u + 1;
        if (email == data2[u - 1][1]) {
          sheet2[0].getRange(u, 3).setValue(credits);
          sheet2[0].getRange(u, 4).setValue(fecha);
          u = lastrow;
          bandera = 2;
        }
      }
      // If that email does not exist, write a new row
      if (bandera == 1) {
        var nombre = data[0][1];
        sheet2[0].getRange(lastrow + 1, 1).setValue(nombre);
        sheet2[0].getRange(lastrow + 1, 2).setValue(email);
        sheet2[0].getRange(lastrow + 1, 3).setValue(credits);
        sheet2[0].getRange(lastrow + 1, 4).setValue(fecha);
      }
    }
  }
  SpreadsheetApp.flush();
}
The questioner's code is taking more than 4-6 minutes to run and is getting an "Exceeded maximum execution time" error.
The following answer is based solely on the code provided by the questioner. We don't have any information about the 'filecombined' spreadsheet, its size, or its triggers. We are also in the dark about the various student spreadsheets, their size, etc., except that we know there are 2,000 of these files. We don't know how often this routine is run, nor how many students have credits < 120.
getValues and setValues statements are very costly, typically 0.2 seconds each. The questioner's code includes a variety of such statements; some are unavoidable, but others are not.
In looking at optimising this code, I made two major changes.
1) I moved line 27: var data2 = sheet2[0].getDataRange().getValues();
This line need only be executed once, so I relocated it to the top of the code, just after the various "filecombined" commands. As it stood, it was being executed once for every student spreadsheet; this alone may have contributed several minutes of execution time.
2) I converted certain setValue commands to an array, and then updated the "filecombined" spreadsheet from the array once only, at the end of processing. Depending on the number of students with low credits who are not already on the "filecombined" sheet, this may represent a substantial saving.
The code affected was lines 47 to 50:
line 47: sheet2[0].getRange(lastrow + 1, 1).setValue(nombre);
line 48: sheet2[0].getRange(lastrow + 1, 2).setValue(email);
line 49: sheet2[0].getRange(lastrow + 1, 3).setValue(credits);
line 50: sheet2[0].getRange(lastrow + 1, 4).setValue(fecha);
There are setValue commands also executed at lines 38 and 39 (when the student is already on the "filecombined" spreadsheet). We don't know how many such students there might be, so the cost of those commands may or may not be significant; until that is clear, and in the light of the other time savings, I chose to leave them as-is.
function updated() {
  // Final file data (combined)
  var filecombined = SpreadsheetApp.openById("XXXXXXXXXX");
  var sheet2 = filecombined.getSheets();
  // Take data from final file (combined) -- once only
  var data2 = sheet2[0].getDataRange().getValues();
  // Create some arrays
  var Newdataarray = [];
  var Masterarray = [];
  // Folder with all the files
  var parentFolder = DriveApp.getFolderById("YYYYYYYYYYYY");
  var files = parentFolder.getFiles();
  // Current date
  var fecha = new Date();
  // Loop over each file in the folder
  while (files.hasNext()) {
    var idarchivo = files.next().getId();
    var sps = SpreadsheetApp.openById(idarchivo);
    var sheet = sps.getSheetByName('STUDENT PROFILE');
    var data = sheet.getDataRange().getValues();
    var credits = data[5][1];
    // Flag; bandera: 1 (new row), bandera: 2 (update row)
    var bandera = 1;
    // If credits are less than X: write
    if (credits < 120) {
      var email = data[2][1];
      var lastrow = filecombined.getLastRow();
      var u = 0;
      // Comparison loop by email; if found, update and exit the loop
      while (u < lastrow) {
        u = u + 1;
        if (email == data2[u - 1][1]) {
          sheet2[0].getRange(u, 3).setValue(credits);
          sheet2[0].getRange(u, 4).setValue(fecha);
          u = lastrow;
          bandera = 2;
        }
      }
      // If that email does not exist, queue a new row
      if (bandera == 1) {
        var nombre = data[0][1];
        Newdataarray = [];
        Newdataarray.push(nombre);
        Newdataarray.push(email);
        Newdataarray.push(credits);
        Newdataarray.push(fecha);
        Masterarray.push(Newdataarray);
      }
    }
  }
  // Update the target sheet with the contents of the array;
  // these are all new rows, appended in a single write.
  // setValues must be chained on the Range; skip if there are no new rows.
  lastrow = filecombined.getLastRow();
  if (Masterarray.length > 0) {
    sheet2[0].getRange(lastrow + 1, 1, Masterarray.length, 4).setValues(Masterarray);
  }
  SpreadsheetApp.flush();
}
As I mentioned in my comment, the biggest issue you have is that you repeatedly search an array for a value, when you could use a much faster lookup function.
// Create an object that maps an email address to the (last) array
// index of that email in the `data2` array.
const knownEmails = data2.reduce(function (acc, row, index) {
  var email = row[1]; // Email is the 2nd element of the inner array (Column B on the spreadsheet)
  acc[email] = index;
  return acc;
}, {});
Then you can determine if an email existed in data2 by trying to obtain the value for it:
// Get this email's index in `data2`:
var index = knownEmails[email];
if (index === undefined) {
  // This is a new email we didn't know about before
  ...
} else {
  // This is an email we knew about already.
  var u = ++index; // Convert the array index into a worksheet row (assumes `data2` is from a range that started at Row 1)
  ...
}
To understand how we are constructing knownEmails from data2, you may find the documentation on Array#reduce helpful.
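For illustration, here is a sketch of how that lookup could replace the inner while-loop in the optimized function above (reusing the variable names from that code; the row arithmetic assumes data2 was read from a range starting at Row 1):
  // Sketch: one O(1) lookup per student instead of scanning every row.
  var index = knownEmails[email];
  if (index === undefined) {
    // New email: queue the row for a single batch append at the end.
    Masterarray.push([nombre, email, credits, fecha]);
  } else {
    // Known email: update credits (column 3) and date (column 4) in place.
    sheet2[0].getRange(index + 1, 3).setValue(credits);
    sheet2[0].getRange(index + 1, 4).setValue(fecha);
  }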

BigQuery UDF memory exceeded error on multiple rows but works fine on single row

I'm writing a UDF to process Google Analytics data, and getting the "UDF out of memory" error message when I try to process multiple rows. I downloaded the raw data and found the largest record and tried running my UDF query on that, with success. Some of the rows have up to 500 nested hits, and the size of the hit record (by far the largest component of each row of the raw GA data) does seem to have an effect on how many rows I can process before getting the error.
For example, the query
select
  user.ga_user_id,
  ga_session_id,
  ...
from
  temp_ga_processing(
    select
      fullVisitorId,
      visitNumber,
      ...
    from [79689075.ga_sessions_20160201] limit 100)
returns the error, but
from [79689075.ga_sessions_20160201] where totals.hits = 500 limit 1)
does not.
I was under the impression that any memory limitations were per-row? I've tried several techniques, such as setting row = null; before emit(return_dict); (where return_dict is the processed data), but to no avail.
The UDF itself doesn't do anything fancy; I'd paste it here, but it's ~45 kB long. It essentially does a bunch of things along the lines of:
function temp_ga_processing(row, emit) {
  topic_id = -1;
  hit_numbers = [];
  first_page_load_hits = [];
  return_dict = {};
  return_dict["user"] = {};
  return_dict["user"]["ga_user_id"] = row.fullVisitorId;
  return_dict["ga_session_id"] = row.fullVisitorId.concat("-".concat(row.visitNumber));
  for (i = 0; i < row.hits.length; i++) {
    hit_dict = {};
    hit_dict["page"] = {};
    hit_dict["time"] = row.hits[i].time;
    hit_dict["type"] = row.hits[i].type;
    hit_dict["page"]["engaged_10s"] = false;
    hit_dict["page"]["engaged_30s"] = false;
    hit_dict["page"]["engaged_60s"] = false;
    add_hit = true;
    for (j = 0; j < row.hits[i].customMetrics.length; j++) {
      if (row.hits[i].customDimensions[j] != null) {
        if (row.hits[i].customMetrics[j]["index"] == 3) {
          metrics = {"video_play_time": row.hits[i].customMetrics[j]["value"]};
          hit_dict["metrics"] = metrics;
          metrics = null;
          row.hits[i].customDimensions[j] = null;
        }
      }
    }
    hit_dict["topic"] = {};
    hit_dict["doctor"] = {};
    hit_dict["doctor_location"] = {};
    hit_dict["content"] = {};
    if (row.hits[i].customDimensions != null) {
      for (j = 0; j < row.hits[i].customDimensions.length; j++) {
        if (row.hits[i].customDimensions[j] != null) {
          if (row.hits[i].customDimensions[j]["index"] == 1) {
            hit_dict["topic"] = {"name": row.hits[i].customDimensions[j]["value"]};
            row.hits[i].customDimensions[j] = null;
            continue;
          }
          if (row.hits[i].customDimensions[j]["index"] == 3) {
            if (row.hits[i].customDimensions[j]["value"].search("doctor") > -1) {
              return_dict["logged_in_as_doctor"] = true;
            }
          }
          // and so on...
        }
      }
    }
    if (row.hits[i]["eventInfo"]["eventCategory"] == "page load time" && row.hits[i]["eventInfo"]["eventLabel"].search("OUTLIER") == -1) {
      elre = /(?:onLoad|pl|page):(\d+)/.exec(row.hits[i]["eventInfo"]["eventLabel"]);
      if (elre != null) {
        if (parseInt(elre[0].split(":")[1]) <= 60000) {
          first_page_load_hits.push(parseFloat(row.hits[i].hitNumber));
          if (hit_dict["page"]["page_load"] == null) {
            hit_dict["page"]["page_load"] = {};
          }
          hit_dict["page"]["page_load"]["sample"] = 1;
          page_load_time_re = /(?:onLoad|pl|page):(\d+)/.exec(row.hits[i]["eventInfo"]["eventLabel"]);
          if (page_load_time_re != null) {
            hit_dict["page"]["page_load"]["page_load_time"] = parseFloat(page_load_time_re[0].split(':')[1]) / 1000;
          }
        }
        // and so on...
      }
    }
  }
  row = null;
  emit(return_dict);
}
The job ID is realself-main:bquijob_4c30bd3d_152fbfcd7fd
Update, Aug 2016: We have pushed out an update that allows the JavaScript worker to use twice as much RAM. We will continue to monitor jobs that have failed with JS OOM to see if further increases are necessary; in the meantime, please let us know if you have more jobs failing with OOM. Thanks!
Update: this issue was related to limits we had on the size of the UDF code. It looks like V8's optimize+recompile pass of the UDF code generates a data segment that was bigger than our limits, but this was only happening when the UDF ran over a "sufficient" number of rows. I'm meeting with the V8 team this week to dig into the details further.
@Grayson - I was able to run your job over the entire 20160201 table successfully; the query takes 1-2 minutes to execute. Could you please verify that this works on your side?
We've gotten a few reports of similar issues that seem related to the number of rows processed. I'm sorry for the trouble; I'll be doing some profiling on our JavaScript runtime to try to find if and where memory is being leaked. Stay tuned for the analysis.
In the meantime, if you're able to isolate any specific rows that cause the error, that would also be very helpful.
A UDF will fail on anything but very small datasets if it has a lot of nested if/then levels, such as:
if () {
    if () {
        if () {
            // etc.
We had to track down and remove the deepest if/then statement.
But that is not enough. In addition, when you pass the data into the UDF, run a "GROUP EACH BY" on all the variables; this will force BQ to send the output to multiple "workers". Otherwise it will also fail.
I've wasted 3 days of my life on this annoying bug. Argh.
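One reading of that advice, sketched against the query shape from the question (the elided column lists are whatever your select actually produces; adjust to your schema):
  select
    user.ga_user_id,
    ga_session_id,
    ...
  from
    temp_ga_processing(
      select
        fullVisitorId,
        visitNumber,
        ...
      from [79689075.ga_sessions_20160201])
  group each by user.ga_user_id, ga_session_id, ...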
I love the concept of parsing my logs in BigQuery, but I've got the same problem; I get
Error: Resources exceeded during query execution.
The job ID is bigquery-looker:bquijob_260be029_153dd96cfdb, if that helps at all.
I wrote a very basic parser that does a simple match and returns rows. It works just fine on a 10K-row data set, but I run out of resources when trying to run it against a 3M-row logfile.
Any suggestions for a workaround?
Here is the JavaScript code.
function parseLogRow(row, emit) {
  r = (row.logrow ? row.logrow : "") + (typeof row.l2 !== "undefined" ? " " + row.l2 : "") + (row.l3 ? " " + row.l3 : "");
  ts = null;
  category = null;
  user = null;
  message = null;
  db = null;
  found = false;
  if (r) {
    m = r.match(/^(\d\d\d\d-\d\d-\d\d \d\d:\d\d:\d\d\.\d\d\d (\+|\-)\d\d\d\d) \[([^|]*)\|([^|]*)\|([^\]]*)\] :: (.*)/);
    if (m) {
      ts = new Date(m[1]) / 1000;
      category = m[3] || null;
      user = m[4] || null;
      db = m[5] || null;
      message = m[6] || null;
      found = true;
    } else {
      message = r;
      found = false;
    }
  }
  emit({
    ts: ts,
    category: category,
    user: user,
    db: db,
    message: message,
    found: found
  });
}
bigquery.defineFunction(
  'parseLogRow',          // Name of the function exported to SQL
  ['logrow', 'l2', 'l3'], // Names of input columns
  [
    {'name': 'ts', 'type': 'timestamp'}, // Output schema
    {'name': 'category', 'type': 'string'},
    {'name': 'user', 'type': 'string'},
    {'name': 'db', 'type': 'string'},
    {'name': 'message', 'type': 'string'},
    {'name': 'found', 'type': 'boolean'}
  ],
  parseLogRow // Reference to JavaScript UDF
);

AgileToolkit - Custom SQL request + Paginator

Hello, I have a little problem with my paginator. I would like to use a custom SQL request, but each time I click on a paginator link it loads my model without the custom request.
I enter information on a form that I send via GET:
$view->grid->js()->reload(array(
    "From" => $form->get("From"),
    "To"   => $form->get("To"),
    "SSID" => $form->get("SSID")
))->execute();
On my view I have :
$this->request = $this->api->db->dsql();
$this->grid = $this->add('Grid');
$this->grid->setModel('Systemevents', array('ID', 'ReceivedAt', 'Message'));
$this->grid->addPaginator(10);
if (isset($_GET["SSID"])) {
    $this->ssid = $_GET["SSID"];
}
if (isset($_GET["From"])) {
    $this->from = $_GET["From"];
}
if (isset($_GET["To"])) {
    $this->to = $_GET["To"];
}
$this->grid->dq->where($this->request->expr('Message like "%.% ' . $this->ssid . ' % src=%"'))
    ->where($this->request->expr("ReceivedAt >= '" . $this->from . "' AND ReceivedAt <= '" . $this->to . "'"));
The problem is that the where condition disappears when I change the page with the paginator.
I did not find a solution to my problem, so I did something different instead: I added two buttons to my grid which allow me to change the limit of the SQL request.
The previous button is hidden if the limit is 0.
Now I have to find out how to count the number of lines (select count('ID') from 'SystemEvents' where ...) and store it in a variable; a sketch of that is below.
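For that count, something along these lines might work with Agile Toolkit's dsql() builder (the table(), field(), where() and getOne() method names are my assumption from ATK's DSQL conventions; verify them against your version):
    // Hedged sketch: count the matching rows and store the result.
    $q = $this->api->db->dsql();
    $count = $q->table('SystemEvents')
        ->field('count(*)')
        ->where($q->expr("ReceivedAt >= '" . $this->from . "' AND ReceivedAt <= '" . $this->to . "'"))
        ->getOne();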
Finally, the ultimate solution was stickyGET, which makes a GET argument persist in every URL the page generates, so the paginator's AJAX reloads keep the filter parameters:
if ((isset($_GET["SSID"])) || (isset($_GET["From"])) || (isset($_GET["To"]))) {
    // GET method from Recherche.php
    $this->ssid = ($_GET["SSID"] == "null" ? null : $_GET["SSID"]);
    $this->api->stickyGET("SSID"); // <===== the solution is here
    $this->from = ($_GET["From"] == "null" ? null : $_GET["From"]);
    $this->api->stickyGET("From"); // <===== the solution is here
    $this->to = ($_GET["To"] == "null" ? null : $_GET["To"]);
    $this->api->stickyGET("To"); // <===== the solution is here
}

Refresh a dijit.form.Select

First, you have to know that I am developing my project with Struts (J2EE).
Here is my problem:
I have 2 dijit.form.Select widgets in my page, and both Selects are filled with the same list (returned by a Java class).
When I select an option in my 1st Select widget, I would like to update my 2nd Select widget and disable the option selected in the 1st (to prevent users from selecting the same item twice).
I managed to do this (my code is below), but my problem is that once I have opened my 2nd list, even once, it is never refreshed again. So I can play with my 1st Select for a long time and choose many other options; the only option ever disabled in my 2nd list is the first one I selected.
Here is my JS code:
function removeSelectedOption() {
    var list1 = dijit.byId("codeModif1");
    var list2 = dijit.byId("codeModif2");
    var list1SelectedOptionValue = list1.get("value");
    if (list1SelectedOptionValue != null) {
        list2.reset();
        for (var i = 0; i < myListSize; i++) {
            // If the value of the current option = my selected option from list1
            if (list2.getOptions(i).value == list1SelectedOptionValue) {
                list2.getOptions(i).disabled = true;
            } else {
                list2.getOptions(i).disabled = false;
            }
        }
    }
}
Thanks for your help
Regards
I think you have to reset() the Select after you've updated its options' properties. Something like:
function removeSelectedOption(value)
{
    var list2 = dijit.byId("codeModif2"),
        prev = list2.get('value');
    for (var i = 0; i < myListSize; i++)
    {
        var opt = myList[i];
        opt.disabled = opt.value === value;
        list2.updateOption(opt);
    }
    list2.reset();
    // Set selection again, unless it was the newly disabled one.
    if (prev !== value) list2.set('value', prev);
}
(I'm assuming you have a myList containing the possible options here, and the accompanying myListSize.)
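For completeness, a sketch of wiring this to the first Select (same widget id as in the question; the on("change", ...) event style assumes Dojo 1.7 or later):
    // Hedged sketch: refresh list2's options whenever list1 changes.
    dijit.byId("codeModif1").on("change", function (newValue) {
        removeSelectedOption(newValue);
    });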