How can I set JMockit expectations for a chain of calls using times?

I am testing a class instance called server and I am using partial mocking, like this:
new Expectations(server) {{
    server.readPortNumber(withInstanceOf(File.class));
    result = new FileNotFoundException();
    times = 300;
}};
This works fine for the first 300 calls. However, the 301st call should succeed, so I was expecting something like this to work:
new Expectations(server) {{
    server.readPortNumber(withInstanceOf(File.class));
    result = new FileNotFoundException();
    times = 300;
    result = 100;
    times = 1;
}};
But it doesn't: readPortNumber returns 100 on its first call, showing that the values were overridden.
How can I specify a chain of results using the times keyword?

I was able to find an answer using Delegate:
new Expectations(server) {{
    server.readPortNumber(withInstanceOf(File.class));
    result = new FileNotFoundException();
    times = 301;
    result = new Delegate() {
        int n_calls = 0;

        int delegate() throws FileNotFoundException {
            n_calls++;
            if (n_calls <= 300) {
                throw new FileNotFoundException();
            } else {
                return 100;
            }
        }
    };
}};
I'm not sure whether there is a better, less verbose solution than this.
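For what it's worth, a slightly shorter sketch drops the manual counter by using the optional mockit.Invocation parameter in the Delegate method; its getInvocationCount() reports the current call number, starting at 1. This assumes a JMockit version that supports the Invocation parameter, so treat it as an illustration rather than a verified drop-in:
// assumes: import mockit.Delegate; import mockit.Invocation;
new Expectations(server) {{
    server.readPortNumber(withInstanceOf(File.class));
    times = 301;
    result = new Delegate<Integer>() {
        // the delegate method name is arbitrary; the File parameter mirrors readPortNumber's signature
        int readPort(Invocation invocation, File file) throws FileNotFoundException {
            if (invocation.getInvocationCount() <= 300) {
                throw new FileNotFoundException(); // calls 1..300 fail
            }
            return 100; // call 301 succeeds
        }
    };
}};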

Related

How to escape the _ wildcard in SQL within Google Apps Script?

The function that runs a standard SQL query within the Apps Script throws an error when _ is used in the SQL. It is used in the condition filter to look for all names containing _x_. Backslashes break the Apps Script when used.
Within Google Apps Script: var sql1 = 'sql string';
Within sql: WHERE lower(name) like "%\_x_\%"
Update: I managed to find a workaround using REGEXP_CONTAINS(LOWER(name), r"(_x_)"), but I am still interested to know whether it works with the regular LIKE clause.
I reproduced your case using modified sample code from the documentation.
I queried a sample dataset using where like "%_", then wrote the results to a Google spreadsheet. The table I am querying in BigQuery is:
Row | id
1   | _id_1212
2   | id1212
The code I am using is below:
/**
* Runs a BigQuery query and logs the results in a spreadsheet.
*/
function runQuery() {
// Replace this value with the project ID listed in the Google
// Cloud Platform project.
var projectId = 'project_id';
//modified query
var request = {
query: 'SELECT * from `project_id.dataset.table` where id LIKE "%_id_%";', // it will also work for where id LIKE "%\_id\_%"
//configuring the query to use StandardSQL
useLegacySql: false
};
var queryResults = BigQuery.Jobs.query(request, projectId);
var jobId = queryResults.jobReference.jobId;
// Check on status of the Query Job.
var sleepTimeMs = 500;
while (!queryResults.jobComplete) {
Utilities.sleep(sleepTimeMs);
sleepTimeMs *= 2;
queryResults = BigQuery.Jobs.getQueryResults(projectId, jobId);
}
// Get all the rows of results.
var rows = queryResults.rows;
while (queryResults.pageToken) {
queryResults = BigQuery.Jobs.getQueryResults(projectId, jobId, {
pageToken: queryResults.pageToken
});
rows = rows.concat(queryResults.rows);
}
if (rows) {
var spreadsheet = SpreadsheetApp.create('BigQuery Results');
var sheet = spreadsheet.getActiveSheet();
// Append the headers.
var headers = queryResults.schema.fields.map(function(field) {
return field.name;
});
sheet.appendRow(headers);
// Append the results.
var data = new Array(rows.length);
for (var i = 0; i < rows.length; i++) {
var cols = rows[i].f;
data[i] = new Array(cols.length);
for (var j = 0; j < cols.length; j++) {
data[i][j] = cols[j].v;
}
}
sheet.getRange(2, 1, rows.length, headers.length).setValues(data);
Logger.log('Results spreadsheet created: %s',
spreadsheet.getUrl());
} else {
Logger.log('No rows returned.');
}
}
The output:
id
_id_1212
Both where id LIKE "%_id_%" and where id LIKE "%\_id\_%" work when I set the query to use Standard SQL (useLegacySql: false).
In addition, the error GoogleJsonResponseException: API call to bigquery.jobs.query failed with error: Syntax error: Illegal escape sequence: \_ is thrown when trying to escape the underscore with a double backslash, as in where id LIKE "%\\_id\\_%".

How to Error Handle a NullReferenceException

My website went down for a few days, so I am trying to add some error handling for when the MVC app doesn't have access to certain resources, so that if something becomes unavailable again the whole thing doesn't have to go down.
At the moment a controller is trying to access ViewBag.moreNewProducts, which isn't available.
public ActionResult Index(string search)
{
string[] newProductLines = this.getMoreNewProducts();
string[] newNews = this.getMoreNews();
string[] newPromotions = this.getMorePromotions();
string[] fewerProductLines = this.getLessNewProducts(newProductLines);
ViewBag.moreNewProducts = newProductLines;
ViewBag.moreNews = newNews;
ViewBag.morePromotions = newPromotions;
ViewBag.lessNewProducts = fewerProductLines;
bool disableShowMore = false;
This is where I run into an error: " foreach (string line in newProductLines)"
public string[] getLessNewProducts(string[] newProductLines)
{
int charCount = 0;
int arrayCount = 0;
string[] displayProductLines = new string[6];
bool continueWriting;
if (newProductLines == null)
{
foreach (string line in newProductLines)
{
continueWriting = false;
for (int i = 0; charCount < 250 && i < line.Length && arrayCount < 5; i++)
{
string index = newProductLines[arrayCount].Substring(i, 1);
displayProductLines[arrayCount] += index;
charCount++;
continueWriting = true;
}
if (continueWriting == true)
{
arrayCount++;
}
}
string[] LessNewProducts = new string[arrayCount];
for (int d = 0; d < arrayCount; d++)
{
LessNewProducts[d] = displayProductLines[d];
}
return LessNewProducts;
}
else
{
return null;
}
}
How do I get around using an if/else statement so the whole thing doesn't have to crash?
Two things.
Your if (newProductLines == null) statement has the wrong condition on it. I don't believe you want to enter that block if newProductLines is null. You can invert this condition to get the desired result (if (newProductLines != null)).
If you run into another situation later where you need to catch an error, you can always use a try-catch block to catch exceptions that you are expecting.
try
{
    //code that could cause the error here
}
catch (NullReferenceException nullRefExcep)
{
    //what you want it to do if the null reference exception occurs
}
if (newProductLines == null) should be replaced with if (newProductLines != null) so that the loop does not run when newProductLines is null. Basically, with the original condition you will always get the NullReferenceException unless you manage the exception with a try-catch block.
The real question to ask yourself is:
Why would newProductLines be null?
Presumably getMoreNewProducts() found a situation where it thought it would be appropriate to return a null value.
If this is happening because the system has an error that would make your page meaningless, then you may just want to change getMoreNewProducts() so that it throws an exception when that error state occurs. Typically it's safest and easiest to debug programs that fail as soon as they run into an unexpected situation.
If this is happening because there are no new products, then you should just return an empty collection, rather than null. All your code should work just fine after that, without the need for an if/else statement: it will return an empty array for LessNewProducts, which is probably correct.
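For illustration only, returning an empty collection instead of null could look like the following sketch (Java syntax, with invented names; it is not taken from the project above):
import java.util.List;

public class ProductCatalog {
    private final List<String> newProductLines; // may legitimately be empty, but never null

    public ProductCatalog(List<String> newProductLines) {
        this.newProductLines = newProductLines;
    }

    // Returns an empty array when there is nothing new, so callers can loop without null checks.
    public String[] getMoreNewProducts() {
        if (newProductLines == null || newProductLines.isEmpty()) {
            return new String[0];
        }
        return newProductLines.toArray(new String[0]);
    }
}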
However, let's assume that there's a situation that you're anticipating will occur from time to time, which will make it impossible for you to retrieve newProductLines at that time, but which you would like the system to handle gracefully otherwise. You could just use null to indicate that the value isn't there, but it's really hard to know which variables might be null and which never should be. It may be wiser to use an optional type to represent that getMoreNewProducts() might not return anything at all, so you can force any consuming code to recognize this possibility and figure out how to deal with it before the project will even compile:
public ActionResult Index(string search)
{
Maybe<string[]> newProductLines = this.getMoreNewProducts();
string[] newNews = this.getMoreNews();
string[] newPromotions = this.getMorePromotions();
Maybe<string[]> fewerProductLines = newProductLines.Select(this.getLessNewProducts);
Disclaimer: I am the author of the Maybe<> class referenced above.
Here are some additional improvements I'd suggest:
Don't use ViewBag. Instead, create a strongly-typed ViewModel so that you can catch errors in your code at compile-time more often:
var viewModel = new ReportModel {
newProductLines = this.getMoreNewProducts(),
newNews = this.getMoreNews(),
...
};
...
return View(viewModel);
Learn to use LINQ. It will simplify a lot of your very complicated code. For example, instead of:
string[] LessNewProducts = new string[arrayCount];
for (int d = 0; d < arrayCount; d++)
{
LessNewProducts[d] = displayProductLines[d];
}
return LessNewProducts;
... you can say:
string[] LessNewProducts = displayProductLines.Take(arrayCount).ToArray();
In fact, I think your entire getLessNewProducts() method can be replaced with this:
return newProductLines
    .Where(line => line.Length > 0)
    .Select(line => line.Substring(0, Math.Min(line.Length, 250)))
    .Take(5)
    .ToArray();

Google BigQuery returns only partial table data with C# application using .NET Client Library

I am trying to execute a query (a basic SELECT statement with 10 fields). My table contains more than 500k rows, but the C# application returns a response with only 4260 rows, whereas the Web UI returns all the records.
Why does my code return only partial data? What is the best way to select all the records and load them into a C# DataTable? A code snippet would be very helpful.
using Google.Apis.Auth.OAuth2;
using System.IO;
using System.Threading;
using Google.Apis.Bigquery.v2;
using Google.Apis.Bigquery.v2.Data;
using System.Data;
using Google.Apis.Services;
using System;
using System.Security.Cryptography.X509Certificates;
namespace GoogleBigQuery
{
public class Class1
{
private static void Main()
{
try
{
Console.WriteLine("Start Time: {0}", DateTime.Now.ToString());
String serviceAccountEmail = "SERVICE ACCOUNT EMAIL";
var certificate = new X509Certificate2(@"KeyFile.p12", "notasecret", X509KeyStorageFlags.Exportable);
ServiceAccountCredential credential = new ServiceAccountCredential(
new ServiceAccountCredential.Initializer(serviceAccountEmail)
{
Scopes = new[] { BigqueryService.Scope.Bigquery, BigqueryService.Scope.BigqueryInsertdata, BigqueryService.Scope.CloudPlatform, BigqueryService.Scope.DevstorageFullControl }
}.FromCertificate(certificate));
BigqueryService Service = new BigqueryService(new BaseClientService.Initializer()
{
HttpClientInitializer = credential,
ApplicationName = "PROJECT NAME"
});
string query = "SELECT * FROM [publicdata:samples.shakespeare]";
JobsResource j = Service.Jobs;
QueryRequest qr = new QueryRequest();
string ProjectID = "PROJECT ID";
qr.Query = query;
qr.MaxResults = Int32.MaxValue;
qr.TimeoutMs = Int32.MaxValue;
DataTable DT = new DataTable();
int i = 0;
QueryResponse response = j.Query(qr, ProjectID).Execute();
string pageToken = null;
if (response.JobComplete == true)
{
if (response != null)
{
int colCount = response.Schema.Fields.Count;
if (DT == null)
DT = new DataTable();
if (DT.Columns.Count == 0)
{
foreach (var Column in response.Schema.Fields)
{
DT.Columns.Add(Column.Name);
}
}
pageToken = response.PageToken;
if (response.Rows != null)
{
foreach (TableRow row in response.Rows)
{
DataRow dr = DT.NewRow();
for (i = 0; i < colCount; i++)
{
dr[i] = row.F[i].V;
}
DT.Rows.Add(dr);
}
}
Console.WriteLine("No of Records are Readed: {0} # {1}", DT.Rows.Count.ToString(), DateTime.Now.ToString());
while (true)
{
int StartIndexForQuery = DT.Rows.Count;
Google.Apis.Bigquery.v2.JobsResource.GetQueryResultsRequest SubQR = Service.Jobs.GetQueryResults(response.JobReference.ProjectId, response.JobReference.JobId);
SubQR.StartIndex = (ulong)StartIndexForQuery;
//SubQR.MaxResults = Int32.MaxValue;
GetQueryResultsResponse QueryResultResponse = SubQR.Execute();
if (QueryResultResponse != null)
{
if (QueryResultResponse.Rows != null)
{
foreach (TableRow row in QueryResultResponse.Rows)
{
DataRow dr = DT.NewRow();
for (i = 0; i < colCount; i++)
{
dr[i] = row.F[i].V;
}
DT.Rows.Add(dr);
}
}
Console.WriteLine("No of Records are Readed: {0} # {1}", DT.Rows.Count.ToString(), DateTime.Now.ToString());
if (null == QueryResultResponse.PageToken)
{
break;
}
}
else
{
break;
}
}
}
else
{
Console.WriteLine("Response is null");
}
}
int TotalCount = 0;
if (DT != null && DT.Rows.Count > 0)
{
TotalCount = DT.Rows.Count;
}
else
{
TotalCount = 0;
}
Console.WriteLine("End Time: {0}", DateTime.Now.ToString());
Console.WriteLine("No. of records readed from google bigquery service: " + TotalCount.ToString());
}
catch (Exception e)
{
Console.WriteLine("Error Occurred: " + e.Message);
}
Console.ReadLine();
}
}
}
This sample query gets its results from a public dataset. The table contains 164656 rows, but the response returns only 85000 rows the first time, so I query again to get the next set of results. (I do not know whether this is the only way to get all the results.)
This sample contains only 4 fields, and even then it does not return all rows. In my case the table contains more than 15 fields and I get a response of ~4000 rows out of ~10k, so I have to query again and again for the remaining results. Selecting 1000 rows takes up to 2 minutes with my approach, so I am looking for the best way to select all the records in a single response.
Answer from user Pentium10:
There is no way to run a query and select a large response in a single shot. You can either paginate the results, or if you can create a job to export to files, then use the files generated in your app. Exporting is free.
Step to run a large query and export results to files stored on GCS:
1) Set allowLargeResults to true in your job configuration. You must also specify a destination table with the allowLargeResults flag.
Example:
"configuration":
{
"query":
{
"allowLargeResults": true,
"query": "select uid from [project:dataset.table]"
"destinationTable": [project:dataset.table]
}
}
2) Now your data is in a destination table you set. You need to create a new job, and set the export property to be able to export the table to file(s). Exporting is free, but you need to have Google Cloud Storage activated to put the resulting files there.
3) In the end you download your large files from GCS.
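As a rough sketch of steps 2 and 3, an extract (export) job submitted through the generated API client might look like the following. The example uses the Java client (com.google.api.services.bigquery), which mirrors the Google.Apis.Bigquery.v2 classes used in the question; the project, dataset, table, and bucket names are placeholders:
import java.util.Collections;

import com.google.api.services.bigquery.Bigquery;
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobConfiguration;
import com.google.api.services.bigquery.model.JobConfigurationExtract;
import com.google.api.services.bigquery.model.TableReference;

public class ExportToGcs {
    // 'bigquery' is an already authorized client, built the same way as the BigqueryService in the question.
    static void exportTable(Bigquery bigquery, String projectId) throws java.io.IOException {
        TableReference source = new TableReference()
            .setProjectId(projectId)
            .setDatasetId("dataset")
            .setTableId("table");

        JobConfigurationExtract extract = new JobConfigurationExtract()
            .setSourceTable(source)
            // a wildcard URI lets BigQuery shard the output into multiple files
            .setDestinationUris(Collections.singletonList("gs://my-bucket/export-*.json"))
            .setDestinationFormat("NEWLINE_DELIMITED_JSON");

        Job job = new Job().setConfiguration(new JobConfiguration().setExtract(extract));
        bigquery.jobs().insert(projectId, job).execute();
        // Step 3: poll the job until it completes, then download the files from the GCS bucket.
    }
}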
It's my turn to design the solution for better results.
Hoping this might help someone. You can retrieve the next set of paginated results using PageToken; here is sample code showing how to use it. That said, I liked the idea of exporting for free. Here I write rows to a flat file, but you could add them to your DataTable. Obviously, it is a bad idea to keep a large DataTable in memory, though.
public void ExecuteSQL(BigqueryService bqservice, String ProjectID)
{
string sSql = "SELECT r.Dealname, r.poolnumber, r.loanid FROM [MBS_Dataset.tblRemitData] R left join each [MBS_Dataset.tblOrigData] o on R.Dealname = o.Dealname and R.Poolnumber = o.Poolnumber and R.LoanID = o.LoanID Order by o.Dealname, o.poolnumber, o.loanid limit 100000";
QueryRequest _r = new QueryRequest();
_r.Query = sSql;
QueryResponse _qr = bqservice.Jobs.Query(_r, ProjectID).Execute();
string pageToken = null;
if (_qr.JobComplete != true)
{
//job not finished yet! expecting more data
while (true)
{
var resultReq = bqservice.Jobs.GetQueryResults(_qr.JobReference.ProjectId, _qr.JobReference.JobId);
resultReq.PageToken = pageToken;
var result = resultReq.Execute();
if (result.JobComplete == true)
{
WriteRows(result.Rows, result.Schema.Fields);
pageToken = result.PageToken;
if (pageToken == null)
break;
}
}
}
else
{
List<string> _fieldNames = _qr.Schema.Fields.ToList().Select(x => x.Name).ToList();
WriteRows(_qr.Rows, _qr.Schema.Fields);
}
}
The Web UI automatically flattens the data, which means you see multiple rows for each nested field.
When you run the same query via the API, the result is not flattened, and you get fewer rows because the nested fields are returned as objects. For example, a row whose repeated field holds two values appears as two rows in the Web UI but as a single row with a nested object through the API. You should check whether this is the case for you.
The other point is that you do indeed need to paginate through the results; "Paging through list results" in the documentation explains this.
If you want to run only one job, then you should write your query output to a table, then export the table as JSON, and download the export from GCS.

Variable Assignment Issues

Good day everyone. I am working on a script and it works really well, but there is just this one error that I cannot figure out. What am I overlooking? I will post the error and code below.
Here is the error:
UnassignedReferenceException: The variable gameOverScore of Score has not been assigned.
You probably need to assign the gameOverScore variable of the Score script in the inspector.
Score.Start () (at Assets/2dspaceshooter/Scripts/Score.js:10)
Here is the script:
#pragma strict
var gameOverScore:GUIText;
var gameGUI:GameObject;
private var score:int = 0;
private var isGameOver = false;
function Start () {
gameOverScore.guiText.enabled = false;
guiText.text = "Score: " + score.ToString();
}
function addScore () {
if(!isGameOver){
score += 10;
guiText.text = "Score: " + score.ToString();
}
}
function doGameOver () {
isGameOver = true;
gameGUI.SetActive(false);
guiText.text = null;
gameOverScore.guiText.enabled = true;
gameOverScore.guiText.text = "Score: "+score;
}
Not that I really recognize your script as simple JavaScript, but it may be that you are confusing the object Score with the lower-case version, score.

JScript.NET private variables

I'm wondering about JScript.NET private variables. Please take a look at the following code:
import System;
import System.Windows.Forms;
import System.Drawing;
var jsPDF = function(){
var state = 0;
var beginPage = function(){
state = 2;
out('beginPage');
}
var out = function(text){
if(state == 2){
var st = 3;
}
MessageBox.Show(text + ' ' + state);
}
var addHeader = function(){
out('header');
}
return {
endDocument: function(){
state = 1;
addHeader();
out('endDocument');
},
beginDocument: function(){
beginPage();
}
}
}
var j = new jsPDF();
j.beginDocument();
j.endDocument();
Output:
beginPage 2
header 2
endDocument 2
If I run the same script in any browser, the output is:
beginPage 2
header 1
endDocument 1
Why is that?
Thanks,
Paul.
Just a guess, but it appears that JScript.NET doesn't support closures the same way as ECMAScript, so the state variable in endDocument() isn't referencing the private member of the outer function, but rather a local (undeclared) variable. Odd.
You don't have to use new when calling jsPDF here, since you're using a singleton pattern: jsPDF returns an object literal, so even without new you'll have access to the beginDocument and endDocument methods. To be perfectly honest, I don't know what the specification calls for when using new on a function that returns an object literal, so I'm not sure whether JScript.NET or the browser is getting it wrong. But for now, try either getting rid of the new before jsPDF() or changing your function to this:
var jsPDF = function(){
var state = 0;
var beginPage = function(){
state = 2;
out('beginPage');
};
var out = function(text){
if(state == 2){
var st = 3;
}
MessageBox.Show(text + ' ' + state);
};
var addHeader = function(){
out('header');
};
this.endDocument = function(){
state = 1;
addHeader();
out('endDocument');
};
this.beginDocument = function(){
beginPage();
};
}
That will allow you to use the new keyword and create more than one jsPDF object.
I've come across the same problem. In the following code, the closure bound to fun should contain only one variable called result. As the code stands, the result variable in the function with one parameter seems to be different from the result variable in the closure.
If in this function the line
result = [];
is removed, then the result in the line
return result;
refers to the result in the closure.
var fun = function() {
var result = [];
// recursive descent, collects property names of obj
// dummy parameter does nothing
var funAux = function(obj, pathToObj, dummy) {
if (typeof obj === "object") {
for (var propName in obj) {
if (obj.hasOwnProperty(propName)) {
funAux(obj[propName], pathToObj.concat(propName), dummy);
}
}
}
else {
// at leaf property, save path to leaf
result.push(pathToObj);
}
}
return function(obj) {
// remove line below and `result' 3 lines below is `result' in closure
result = []; // does not appear to be bound to `result' above
funAux(obj, [], "dummy");
return result; // if result 2 lines above is set, this result is a different variable from the one in the closure
};
}();