Rally C# task time spent - rally

I have a C# .NET application using the Rally 3.0.1 REST API. When I query tasks in my system I get 0.0 for TimeSpent even though I know they have time logged against them. Does anyone know how to get this value? Below is my code:
if (uTasks.Count > 0)
{
    Request taskRequest = new Request(resultChild["Tasks"]);
    QueryResult TaskQueryResult = restApi.Query(taskRequest);
    foreach (var items in TaskQueryResult.Results)
    //foreach (var items in uTasks)
    {
        DataRow dtrow2;
        dtrow2 = dt.NewRow();
        dtrow2["TaskID"] = items["FormattedID"];
        dtrow2["Task Name"] = items["Name"];
        if (items["Owner"] != null)
        {
            var owner = items["Owner"];
            String ownerref = owner["_ref"];
            var ownerFetch = restApi.GetByReference(ownerref, "Name");
            string strTemp = ownerFetch["_refObjectName"];
            dtrow2["Owner"] = strTemp.Replace(",", " ");
        }
        //else { dtrow2["Owner"] = ""; }
        dtrow2["Task-Est"] = items["Estimate"];
        dtrow2["Task-ToDo"] = items["ToDo"];
        dtrow2["Task-Spent"] = items["TimeSpent"];
        dtrow2["ObjectType"] = "T";
        dt.Rows.Add(dtrow2);
    }
}

It seems like that should work. You may want to make sure you're including the TimeSpent field in your fetch before issuing the request.
taskRequest.Fetch = new List<string>() { "TimeSpent" };
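Since the loop also reads FormattedID, Name, Owner, Estimate and ToDo, it may help to list every attribute you access in the fetch. A minimal sketch, assuming the same taskRequest object as in the question:

// Sketch: fetch all of the attributes the loop reads, not just TimeSpent.
taskRequest.Fetch = new List<string>()
{
    "FormattedID", "Name", "Owner", "Estimate", "ToDo", "TimeSpent"
};
// ...then issue restApi.Query(taskRequest) as before.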

Related

Convert EntityFramework to Raw SQL Queries in MVC

I am trying to make a CRUD calendar in my .NET app. My question is: how do I convert the Entity Framework code below into raw SQL queries?
[HttpPost]
public JsonResult SaveEvent(Event e)
{
    var status = false;
    using (MyDatabaseEntities dc = new MyDatabaseEntities())
    {
        if (e.EventID > 0)
        {
            //Update the event
            var v = dc.Events.Where(a => a.EventID == e.EventID).FirstOrDefault();
            if (v != null)
            {
                v.Subject = e.Subject;
                v.Start = e.Start;
                v.End = e.End;
                v.Description = e.Description;
                v.IsFullDay = e.IsFullDay;
                v.ThemeColor = e.ThemeColor;
            }
        }
        else
        {
            dc.Events.Add(e);
        }
        dc.SaveChanges();
        status = true;
    }
    return new JsonResult { Data = new { status = status } };
}
http://www.dotnetawesome.com/2017/07/curd-operation-on-fullcalendar-in-aspnet-mvc.html
Thanks guys
You can run a raw query in Entity Framework with the dc.Database.ExecuteSqlCommand() method, like below:
var status = false;
using (MyDatabaseEntities dc = new MyDatabaseEntities())
{
    if (e.EventID > 0)
    {
        dc.Database.ExecuteSqlCommand($@"
            UPDATE Events
            SET Subject = {e.Subject},
                Start = {e.Start},
                End = {e.End},
                Description = {e.Description},
                IsFullDay = {e.IsFullDay},
                ThemeColor = {e.ThemeColor}
            WHERE EventID = {e.EventID}
            IF @@ROWCOUNT = 0
                INSERT INTO Events (EventID, Subject, Start, End, Description, IsFullDay, ThemeColor)
                VALUES ({e.EventID}, {e.Subject}, ...)
        ");
        status = true;
    }
}
return new JsonResult { Data = new { status = status } };
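Note that plain string interpolation in ExecuteSqlCommand does not create SQL parameters, so string and date values would need quoting and you would be open to SQL injection. A safer variant is sketched below, assuming EF6, where the {0}-style placeholders are turned into parameters by Entity Framework; [End] is bracketed because End is a reserved word in T-SQL:

// Hedged sketch (EF6): {0}-style placeholders become SQL parameters,
// so values are escaped by the provider instead of being pasted into the SQL text.
dc.Database.ExecuteSqlCommand(
    @"UPDATE Events
      SET Subject = {0}, Start = {1}, [End] = {2}, Description = {3},
          IsFullDay = {4}, ThemeColor = {5}
      WHERE EventID = {6}",
    e.Subject, e.Start, e.End, e.Description, e.IsFullDay, e.ThemeColor, e.EventID);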

Problems with API Key

I'm having some difficulty accessing the ontologies of AgroPortal: it says my API key is not valid, but I created an account and was given an API key.
I'm trying to do the same thing I did with BioPortal, since the API is the same, but with BioPortal it works. My code is like this:
function getAgroPortalOntologies() {
  var searchString = "http://data.agroportal.lirmm.fr/ontologies?apikey=72574b5d-b741-42a4-b449-4c1b64dda19a&display_links=false&display_context=false";
  // we cache results and try to retrieve them on every new execution.
  var cache = CacheService.getPrivateCache();
  var text;
  if (cache.get("ontologies_fragments") == null) {
    text = UrlFetchApp.fetch(searchString).getContentText();
    splitResultAndCache(cache, "ontologies", text);
  } else {
    text = getCacheResultAndMerge(cache, "ontologies");
  }
  var doc = JSON.parse(text);
  var ontologies = doc;
  var ontologyDictionary = {};
  for (ontologyIndex in doc) {
    var ontology = doc[ontologyIndex];
    ontologyDictionary[ontology.acronym] = {"name": ontology.name, "uri": ontology["@id"]};
  }
  return sortOnKeys(ontologyDictionary);
}
var result2 = UrlFetchApp.fetch("http://data.agroportal.lirmm.fr/annotator", options).getContentText();
And what I did with BioPortal is very similar, I did this:
function getBioPortalOntologies() {
  var searchString = "http://data.bioontology.org/ontologies?apikey=df3b13de-1ff4-4396-a183-80cc845046cb&display_links=false&display_context=false";
  // we cache results and try to retrieve them on every new execution.
  var cache = CacheService.getPrivateCache();
  var text;
  if (cache.get("ontologies_fragments") == null) {
    text = UrlFetchApp.fetch(searchString).getContentText();
    splitResultAndCache(cache, "ontologies", text);
  } else {
    text = getCacheResultAndMerge(cache, "ontologies");
  }
  var doc = JSON.parse(text);
  var ontologies = doc;
  var ontologyDictionary = {};
  for (ontologyIndex in doc) {
    var ontology = doc[ontologyIndex];
    ontologyDictionary[ontology.acronym] = {"name": ontology.name, "uri": ontology["@id"]};
  }
  return sortOnKeys(ontologyDictionary);
}
var result = UrlFetchApp.fetch("http://data.bioontology.org/annotator", options).getContentText();
Can someone help me?
Thanks, my regards.

Google BigQuery returns only partial table data with C# application using .net Client Library

I am trying to execute a query (a basic select statement with 10 fields). My table contains more than 500k rows, but my C# application returns a response with only 4,260 rows, whereas the Web UI returns all the records.
Why does my code return only partial data, and what is the best way to select all the records and load them into a C# DataTable? A code snippet would be very helpful.
using Google.Apis.Auth.OAuth2;
using System.IO;
using System.Threading;
using Google.Apis.Bigquery.v2;
using Google.Apis.Bigquery.v2.Data;
using System.Data;
using Google.Apis.Services;
using System;
using System.Security.Cryptography.X509Certificates;
namespace GoogleBigQuery
{
public class Class1
{
private static void Main()
{
try
{
Console.WriteLine("Start Time: {0}", DateTime.Now.ToString());
String serviceAccountEmail = "SERVICE ACCOUNT EMAIL";
var certificate = new X509Certificate2(@"KeyFile.p12", "notasecret", X509KeyStorageFlags.Exportable);
ServiceAccountCredential credential = new ServiceAccountCredential(
new ServiceAccountCredential.Initializer(serviceAccountEmail)
{
Scopes = new[] { BigqueryService.Scope.Bigquery, BigqueryService.Scope.BigqueryInsertdata, BigqueryService.Scope.CloudPlatform, BigqueryService.Scope.DevstorageFullControl }
}.FromCertificate(certificate));
BigqueryService Service = new BigqueryService(new BaseClientService.Initializer()
{
HttpClientInitializer = credential,
ApplicationName = "PROJECT NAME"
});
string query = "SELECT * FROM [publicdata:samples.shakespeare]";
JobsResource j = Service.Jobs;
QueryRequest qr = new QueryRequest();
string ProjectID = "PROJECT ID";
qr.Query = query;
qr.MaxResults = Int32.MaxValue;
qr.TimeoutMs = Int32.MaxValue;
DataTable DT = new DataTable();
int i = 0;
QueryResponse response = j.Query(qr, ProjectID).Execute();
string pageToken = null;
if (response.JobComplete == true)
{
if (response != null)
{
int colCount = response.Schema.Fields.Count;
if (DT == null)
DT = new DataTable();
if (DT.Columns.Count == 0)
{
foreach (var Column in response.Schema.Fields)
{
DT.Columns.Add(Column.Name);
}
}
pageToken = response.PageToken;
if (response.Rows != null)
{
foreach (TableRow row in response.Rows)
{
DataRow dr = DT.NewRow();
for (i = 0; i < colCount; i++)
{
dr[i] = row.F[i].V;
}
DT.Rows.Add(dr);
}
}
Console.WriteLine("No of Records are Readed: {0} @ {1}", DT.Rows.Count.ToString(), DateTime.Now.ToString());
while (true)
{
int StartIndexForQuery = DT.Rows.Count;
Google.Apis.Bigquery.v2.JobsResource.GetQueryResultsRequest SubQR = Service.Jobs.GetQueryResults(response.JobReference.ProjectId, response.JobReference.JobId);
SubQR.StartIndex = (ulong)StartIndexForQuery;
//SubQR.MaxResults = Int32.MaxValue;
GetQueryResultsResponse QueryResultResponse = SubQR.Execute();
if (QueryResultResponse != null)
{
if (QueryResultResponse.Rows != null)
{
foreach (TableRow row in QueryResultResponse.Rows)
{
DataRow dr = DT.NewRow();
for (i = 0; i < colCount; i++)
{
dr[i] = row.F[i].V;
}
DT.Rows.Add(dr);
}
}
Console.WriteLine("No of Records are Readed: {0} @ {1}", DT.Rows.Count.ToString(), DateTime.Now.ToString());
if (null == QueryResultResponse.PageToken)
{
break;
}
}
else
{
break;
}
}
}
else
{
Console.WriteLine("Response is null");
}
}
int TotalCount = 0;
if (DT != null && DT.Rows.Count > 0)
{
TotalCount = DT.Rows.Count;
}
else
{
TotalCount = 0;
}
Console.WriteLine("End Time: {0}", DateTime.Now.ToString());
Console.WriteLine("No. of records readed from google bigquery service: " + TotalCount.ToString());
}
catch (Exception e)
{
Console.WriteLine("Error Occurred: " + e.Message);
}
Console.ReadLine();
}
}
}
In this sample query I get the results from a public dataset. The table contains 164,656 rows, but the response returns only 85,000 rows the first time, and I have to query again to get the next set of results. (I don't know whether this is the only way to get all the results.)
This sample contains only 4 fields and even then it does not return all rows. In my case the table contains more than 15 fields, and I get a response of ~4,000 rows out of ~10k, so I need to query again and again to get the remaining results; selecting 1,000 rows takes up to 2 minutes with my current approach. So I am looking for the best way to select all the records in a single response.
Answer from user @Pentium10:
There is no way to run a query and select a large response in a single shot. You can either paginate the results, or if you can create a job to export to files, then use the files generated in your app. Exporting is free.
Steps to run a large query and export the results to files stored on GCS:
1) Set allowLargeResults to true in your job configuration. You must also specify a destination table with the allowLargeResults flag.
Example:
"configuration":
{
  "query":
  {
    "allowLargeResults": true,
    "query": "select uid from [project:dataset.table]",
    "destinationTable": "[project:dataset.table]"
  }
}
2) Now your data is in the destination table you set. You need to create a new job and set the export property so you can export the table to file(s). Exporting is free, but you need to have Google Cloud Storage activated to put the resulting files there.
3) In the end you download your large files from GCS.
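As a rough illustration of those steps with the same .NET client library used in the question, the sketch below creates a query job that writes to a destination table with allowLargeResults, then an extract job that exports that table to GCS. The project, dataset, table and bucket names are placeholders, not values from the question:

// Hedged sketch: run the query into a destination table (allowLargeResults),
// then export that table to GCS as newline-delimited JSON.
var destTable = new TableReference
{
    ProjectId = "PROJECT ID",
    DatasetId = "my_dataset",      // assumption: an existing dataset you own
    TableId = "query_output"
};
var queryJob = new Job
{
    Configuration = new JobConfiguration
    {
        Query = new JobConfigurationQuery
        {
            Query = "SELECT * FROM [publicdata:samples.shakespeare]",
            AllowLargeResults = true,
            DestinationTable = destTable,
            WriteDisposition = "WRITE_TRUNCATE"
        }
    }
};
queryJob = Service.Jobs.Insert(queryJob, "PROJECT ID").Execute();
// ...poll Service.Jobs.Get(projectId, jobId) until Status.State == "DONE"...
var extractJob = new Job
{
    Configuration = new JobConfiguration
    {
        Extract = new JobConfigurationExtract
        {
            SourceTable = destTable,
            DestinationUris = new[] { "gs://my-bucket/query_output_*.json" },  // wildcard lets large tables split across files
            DestinationFormat = "NEWLINE_DELIMITED_JSON"
        }
    }
};
Service.Jobs.Insert(extractJob, "PROJECT ID").Execute();
// Finally, download the exported file(s) from the GCS bucket.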
Now it's my turn to design the solution for better results.
Hoping this might help someone: you can retrieve the next set of paginated results using PageToken. Here is sample code showing how to use PageToken (although I liked the idea of exporting for free). Here I write rows to a flat file, but you could add them to your DataTable; obviously it is a bad idea to keep a large DataTable in memory, though.
public void ExecuteSQL(BigqueryService bqservice, String ProjectID)
{
    string sSql = "SELECT r.Dealname, r.poolnumber, r.loanid FROM [MBS_Dataset.tblRemitData] R left join each [MBS_Dataset.tblOrigData] o on R.Dealname = o.Dealname and R.Poolnumber = o.Poolnumber and R.LoanID = o.LoanID Order by o.Dealname, o.poolnumber, o.loanid limit 100000";
    QueryRequest _r = new QueryRequest();
    _r.Query = sSql;
    QueryResponse _qr = bqservice.Jobs.Query(_r, ProjectID).Execute();
    string pageToken = null;
    if (_qr.JobComplete != true)
    {
        //job not finished yet! expecting more data
        while (true)
        {
            var resultReq = bqservice.Jobs.GetQueryResults(_qr.JobReference.ProjectId, _qr.JobReference.JobId);
            resultReq.PageToken = pageToken;
            var result = resultReq.Execute();
            if (result.JobComplete == true)
            {
                WriteRows(result.Rows, result.Schema.Fields);
                pageToken = result.PageToken;
                if (pageToken == null)
                    break;
            }
        }
    }
    else
    {
        List<string> _fieldNames = _qr.Schema.Fields.ToList().Select(x => x.Name).ToList();
        WriteRows(_qr.Rows, _qr.Schema.Fields);
    }
}
The Web UI automatically flattens the data. This means that you see multiple rows for each nested field.
When you run the same query via the API, it won't be flattened, and you get fewer rows, as the nested fields are returned as objects. You should check whether this is the case for you.
The other point is that you do indeed need to paginate through the results; Paging through list results explains this.
If you want to do only one job, then you should write your query output to a table, then export the table as JSON, and download the export from GCS.

Iterate through collection of List<object>

I have what is probably a simple problem, but I am stumped. I call a method from another assembly that returns a List<object>; this data is Excel spreadsheet data queried using LinqToExcel. Under the covers, that collection is actually a List<LinqToExcel.Cell>, which in LinqToExcel makes up a LinqToExcel.Row. I want to be able to bind this data to a Telerik ASP.NET MVC grid for viewing. Here's my controller code:
TypeOfServiceCodeListingDetailViewModel model = new TypeOfServiceCodeListingDetailViewModel();
model.Excel_Data = new List<LinqToExcel.Row>();
using (LinqToExcelReader reader = new LinqToExcelReader(fileName, true))
{
    previewData = reader.ReadRawDataByPage(5, 0);
    foreach (LinqToExcel.Row item in previewData)
    {
        model.Excel_Data.Add(item);
    }
    return View(new GridModel(model.Excel_Data));
}
And in my view:
@(Html.Telerik().Grid(Model.Excel_Data)
    .Name("Grid2")
    .HtmlAttributes(new { style = "width:400px;" })
    .DataBinding(dataBinding => dataBinding.Ajax().Select("GetExcelData", "TypeOfService"))
    .Columns(columns =>
    {
        columns.AutoGenerate(column =>
        {
            column.Width = "150px";
        });
    }))
Here's what my grid shows: headers like the below, with no data:
Capacity Count
Thanks for the help!
Here's the code that solved my problem. I'm sure there's a better approach.
using (LinqToExcelReader reader = new LinqToExcelReader(modelDetail.FileName, true))
{
    var previewData = reader.ReadRawDataByPage(5, 0);
    List<List<string>> masterList = new List<List<string>>();
    for (int x = 0; x < previewData.Count; x++)
    {
        List<string> list = new List<string>();
        foreach (var cell in (LinqToExcel.Row)previewData[x])
        {
            list.Add(cell);
        }
        masterList.Add(list);
    }
    var listTest = masterList;
    modelDetail.ExcelData = new List<ExcelData>();
    foreach (List<string> theList in masterList)
    {
        ExcelData xlsData = new ExcelData();
        xlsData.Column1 = theList[0];
        xlsData.Column2 = theList[1];
        xlsData.Column3 = theList[2];
        xlsData.Column4 = theList[3];
        xlsData.Column5 = theList[4];
        xlsData.Column6 = theList[5];
        xlsData.Column7 = theList[6];
        xlsData.Column8 = theList[7];
        xlsData.Column9 = theList[8];
        xlsData.Column10 = theList[9];
        modelDetail.ExcelData.Add(xlsData);
    }
}
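For what it's worth, the two copy loops above could probably be collapsed with LINQ. This is only a sketch, assuming previewData holds LinqToExcel.Row instances (which index into cells by position) and ExcelData is the model class from the snippet above:

// Hedged sketch: project each LinqToExcel.Row straight into the ExcelData model.
// Assumes every row has at least 10 cells, as the original loop does.
modelDetail.ExcelData = previewData
    .Cast<LinqToExcel.Row>()
    .Select(r => new ExcelData
    {
        Column1 = r[0].ToString(),
        Column2 = r[1].ToString(),
        Column3 = r[2].ToString(),
        Column4 = r[3].ToString(),
        Column5 = r[4].ToString(),
        Column6 = r[5].ToString(),
        Column7 = r[6].ToString(),
        Column8 = r[7].ToString(),
        Column9 = r[8].ToString(),
        Column10 = r[9].ToString()
    })
    .ToList();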

Why does this controller double the inserts when I try to archive the results of the Bing Search API?

I'm trying to archive my search results for a term by:
Using the Bing API in an async controller
Inserting the results into a database using Entity Framework
For whatever reason it is returning 50 results, but then 100 rows end up in the database.
My Controller Code:
public class DHWebServicesController : AsyncController
{
//
// GET: /WebService/
private DHContext context = new DHContext();
[HttpPost]
public void RunReportSetAsync(int id)
{
int iTotalCount = 1;
AsyncManager.OutstandingOperations.Increment(iTotalCount);
if (!context.DHSearchResults.Any(xx => xx.CityMarketComboRunID == id))
{
string strBingSearchUri = @ConfigurationManager.AppSettings["BingSearchURI"];
string strBingAccountKey = @ConfigurationManager.AppSettings["BingAccountKey"];
string strBingUserAccountKey = @ConfigurationManager.AppSettings["BingUserAccountKey"];
CityMarketComboRun cityMarketComboRun = context.CityMarketComboRuns.Include(xx => xx.CityMarketCombo).Include(xx => xx.CityMarketCombo.City).First(xx => xx.CityMarketComboRunID == id);
var bingContainer = new Bing.BingSearchContainer(new Uri(strBingSearchUri));
bingContainer.Credentials = new NetworkCredential(strBingUserAccountKey, strBingAccountKey);
// now we can build the query
Keyword keyword = context.Keywords.First();
var bingWebQuery = bingContainer.Web(keyword.Name, "en-US", "Moderate", cityMarketComboRun.CityMarketCombo.City.Latitude, cityMarketComboRun.CityMarketCombo.City.Longitude, null, null, null);
var bingWebResults = bingWebQuery.Execute();
context.Configuration.AutoDetectChangesEnabled = false;
int i = 1;
DHSearchResult dhSearchResult = new DHSearchResult();
List<DHSearchResult> lst = new List<DHSearchResult>();
var webResults = bingWebResults.ToList();
foreach (var result in webResults)
{
dhSearchResult = new DHSearchResult();
dhSearchResult.BingID = result.ID;
dhSearchResult.CityMarketComboRunID = id;
dhSearchResult.Description = result.Description;
dhSearchResult.DisplayUrl = result.DisplayUrl;
dhSearchResult.KeywordID = keyword.KeywordID;
dhSearchResult.Created = DateTime.Now;
dhSearchResult.Modified = DateTime.Now;
dhSearchResult.Title = result.Title;
dhSearchResult.Url = result.Url;
dhSearchResult.Ordinal = i;
lst.Add(dhSearchResult);
i++;
}
foreach (DHSearchResult c in lst)
{
context.DHSearchResults.Add(c);
context.SaveChanges();
}
AsyncManager.Parameters["message"] = "The total number of results was "+lst.Count+". And there are " + context.DHSearchResults.Count().ToString();
}
else
{
AsyncManager.Parameters["message"] = "You have already run this report";
}
AsyncManager.OutstandingOperations.Decrement(iTotalCount);
}
public string RunReportSetCompleted(string message)
{
string str = message;
return str;
}
}
Here is how I am calling it from my ASP.NET MVC 4 page.
@Ajax.ActionLink("Run Report", "GatherKeywordsFromBing", "DHWebServices",
    new { id = item.CityMarketComboRunID },
    new AjaxOptions { OnSuccess = "ShowNotifier();", UpdateTargetId = "TopNotifierMessage", HttpMethod = "POST", InsertionMode = InsertionMode.Replace, LoadingElementId = strCityMarketComboProgressID, LoadingElementDuration = 1000 },
    new { @class = "ViewLink" })
<span class="ProgressIndicator" id="@strCityMarketComboProgressID"><img src="@Url.Content("~/Content/img/SmallBall.gif")" alt="loading" /></span>
For whatever reason, all of the results are being inserted twice.
Try saving only once:
foreach (DHSearchResult c in lst)
{
    context.DHSearchResults.Add(c);
}
context.SaveChanges();
Also, there's nothing asynchronous in your code, so there's no point in using an asynchronous controller. Not only will it not improve anything, it might make things worse.
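As a small variation on the loop above (a sketch, assuming EF6 where DbSet.AddRange is available), the whole batch can be added in one call and saved once; since the controller switched AutoDetectChangesEnabled off, it is also worth turning it back on afterwards:

// Hedged sketch: add the whole batch, save once, and restore change tracking.
context.DHSearchResults.AddRange(lst);   // EF6: DbSet<T>.AddRange
context.SaveChanges();
context.Configuration.AutoDetectChangesEnabled = true;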