I have a SQL query that returns results in the correct order when I run it in SQL Server Management Studio.
I am using Datatables from Datatables.net and everything is working great apart from a sorting issue. I have some properties in the first column and I would like to order these like this:
1
1a
1b
2
3
4a
5
etc
However what comes back is something like this:
1
10
100
11
11a
I have looked through various posts but nothing seems to work. I believe this must be something I should trigger from the DataTables plugin, but I cannot find anything.
Could someone advise?
Your data contain numbers and characters, so they will be sorted as strings by default. You should write your own plugin for sorting your data type. Have a look here and here to see how to write a plugin and how to use it with your table.
Edit: got some time today to work with the datatable stuff. If you still need a solution, here you go:
//Sorting plug-in
jQuery.extend( jQuery.fn.dataTableExt.oSort, {
    //pre-processing
    "numchar-pre": function(str){
        var patt = /^([0-9]+)([a-zA-Z]*)$/; //match data like 1, 1a, 2b, 1ab, 100k etc. (the letters are optional)
        var matches = patt.exec($.trim(str));
        if (!matches) return 0; //unmatched data sorts first
        var number = parseInt(matches[1], 10); //extract the number part
        var suffix = matches[2].toLowerCase(); //extract the "character" part and make it case-insensitive
        var dec = 0;
        for (var i = 0; i < suffix.length; i++)
        {
            dec += (suffix.charCodeAt(i) - 96) * Math.pow(26, -(i + 1)); //treat the characters as a base-26 fraction
        }
        return number + dec; //combine the two parts
    },
    //sort ascending
    "numchar-asc": function(a, b){
        return a - b;
    },
    //sort descending
    "numchar-desc": function(a, b){
        return b - a;
    }
});
//Automatic type detection plug-in
jQuery.fn.dataTableExt.aTypes.unshift(
    function(sData)
    {
        var patt = /^([0-9]+)([a-zA-Z]*)$/; //letters optional, so plain numbers like 1 or 100 also match
        var trimmed = $.trim(sData);
        if (patt.test(trimmed))
        {
            return 'numchar';
        }
        return null;
    }
);
You can use the automatic type detection function to have the data type detected automatically, or you can set the data type for the column explicitly:
"aoColumns": [{"sType": "numchar"}]
I am fairly new to SSIS and need a little help getting started. I have several reports that come out of our mainframe. The reports are not in a columnar format. The date record is at the top then there might be some initial data then there might be a little more. So I need to read in each line look to see what the text reads and figure out if I need the data or move to the next row.
This is a VERY rough example of the report I want to import into a SQL table.
DATE: 01/08/2020 FACILITY NAME PAGE1
REVENUE USAGE FOR ACCOUNTING PERIOD 02
----TOTAL---- ----TOTAL---- ----OTHER---- ----INSURANCE---- ----INSURANCE2----
SERVICE CODE - 123456789 DESCRIPTION: WIDGETS
CURR 2,077
IP 0.0000 3 2,345 0.00
143
OP 0.0000 2 1,231 0.00
YTD 5
IP 0.0000
76
OP 0.0000
etc...
SERVICE CODE
After the SERVICE CODE line, the data starts to repeat as above. This is the basic idea of the report.
I want to get the Date then the Service Code, Description, Current IP Volume, Current IP Dollar, Current OP Volume, Current OP Dollar, YTD IP Volume, YTD IP Dollar, YTD OP Volume, YTD OP Dollar . . then repeat.
Just to clarify, I am not asking anyone to do this for me. I want to learn how to do this. I have looked at how to do this, but every example I have seen talks about doing it with a CSV, tab-delimited, or Excel file. I do not have that type of file, so I was asking what I need to look at. I currently use Monarch to format the file, but again, I want to learn more about SSIS and this is a perfect way to learn. Asking the vendor to redo the report is not an option, plus I want to learn how to do this. Thank you; I just wanted to get that out there.
Any help would be greatly appreciated.
Rodger
As stated in the comments, you could do this using a script task. The basic steps are:
Define a DataTable to store your data.
Use a StreamReader to read your report.
Process the lines using a combination of conditionals, string methods, and parsing to extract the relevant fields.
Write the DataTable to the database using SqlBulkCopy.
The following would go inside your Main method in your script task:
//Requires: using System; using System.Data; using System.Data.SqlClient; using System.IO;
//Define a table to store your data
var table = new DataTable
{
    Columns =
    {
        { "ServiceCode", typeof(string) },
        { "Description", typeof(string) },
        { "CurrentIPVolume", typeof(int) },
        { "CurrentIPDollar", typeof(decimal) },
        { "CurrentOPVolume", typeof(int) },
        { "CurrentOPDollar", typeof(decimal) },
        { "YTDIPVolume", typeof(int) },
        { "YTDIPDollar", typeof(decimal) },
        { "YTDOPVolume", typeof(int) },
        { "YTDOPDollar", typeof(decimal) }
    }
};
var filePath = @"Your File Path";
using (var reader = new StreamReader(filePath))
{
    string line = null;
    DataRow row = null;
    //As the YTD and CURR sections are identical, we will need a flag later to mark our position within the record
    bool ytdFlag = false;
    //Loop through every line in the file
    while ((line = reader.ReadLine()) != null)
    {
        //If the line is blank, move on to the next
        if (string.IsNullOrWhiteSpace(line))
            continue;
        //If the line starts with SERVICE CODE, then it marks the start of a new record
        if (line.StartsWith("SERVICE CODE"))
        {
            //If the current value for row is not null then this is
            //not the first record, so we need to add the previous
            //record to the table before continuing
            if (row != null)
            {
                table.Rows.Add(row);
                ytdFlag = false; //New record, reset YTD flag
            }
            row = table.NewRow();
            //Split the line now based on known values:
            var tokens = line.Split(new string[] { "SERVICE CODE - ", "DESCRIPTION: " }, StringSplitOptions.RemoveEmptyEntries);
            row[0] = tokens[0].Trim();
            row[1] = tokens[1].Trim();
        }
        if (line.StartsWith("CURR"))
        {
            //Process the row --> "CURR 2,077"
            //Not sure what 2,077 is, but this will parse it
            int i = 0;
            if (int.TryParse(line.Substring(4).Trim().Replace(",", ""), out i))
            {
                //Do something with your int
                Console.WriteLine(i);
            }
        }
        if (line.StartsWith(" IP"))
        {
            //Start after IP, then split the rest of the line into the 4 numbers
            var tokens = line.Substring(3).Split(new[] { " " }, StringSplitOptions.RemoveEmptyEntries);
            //If we have gone past the CURR section, then add to the YTD columns
            if (ytdFlag)
            {
                row[6] = int.Parse(tokens[1]);
                row[7] = decimal.Parse(tokens[2]);
            }
            //Otherwise we are still in the CURR section:
            else
            {
                row[2] = int.Parse(tokens[1]);
                row[3] = decimal.Parse(tokens[2]);
            }
        }
        if (line.StartsWith(" OP"))
        {
            //Start after OP, then split the rest of the line into the 4 numbers
            var tokens = line.Substring(3).Split(new[] { " " }, StringSplitOptions.RemoveEmptyEntries);
            //If we have gone past the CURR section, then add to the YTD columns
            if (ytdFlag)
            {
                row[8] = int.Parse(tokens[1]);
                row[9] = decimal.Parse(tokens[2]);
            }
            //Otherwise we are still in the CURR section:
            else
            {
                row[4] = int.Parse(tokens[1]);
                row[5] = decimal.Parse(tokens[2]);
            }
            //After we have processed an OP record, we must set the YTD flag to true.
            //It doesn't matter if it is the YTD OP record, since the flag will be reset
            //by the next line that starts with SERVICE CODE anyway
            ytdFlag = true;
        }
    }
    //Add the last record, since there is no further SERVICE CODE line to trigger it
    if (row != null)
        table.Rows.Add(row);
}
//Now that we have processed the file, we can write the data to a database
using (var sqlBulkCopy = new SqlBulkCopy("Your Connection String"))
{
    sqlBulkCopy.DestinationTableName = "dbo.YourTable";
    //If necessary add column mappings, but if your DataTable matches your database table
    //then this is not required
    sqlBulkCopy.WriteToServer(table);
}
This is a very quick example, far from the finished article, and I have done little or no testing, but it should give you the gist of how it could be done, and get you started on one possible solution.
It can definitely be cleaned up and refactored, but I have tried to make it as clear as possible what is going on, rather than trying to write the most efficient code ever. It should also (hopefully) demonstrate what a monumental pain this is to do, and how very minor report changes, like an extra space before "OP", will break the whole thing.
So again, I would re-iterate, if you can get the data in a standard flat file format, with one line per record, you should. I do however appreciate that sometimes these things are out of your control, and I have had to write incredibly ugly import routines like this in the past, so I feel your pain if you can't get the data in a consumable format.
So let's all assume that column B is filled with multiple, short statements. These statements may be used more than once, not at all, or just once throughout the column. I want to be able to read what's in each cell of column B and assign a category to it in column F using the Google Sheets script editor. I'll include some pseudo-code of how I would do something like this normally.
for (var i = 0; i < statements.length; i++) {
    if (statements[i] == 'Description One') {
        category[i] = 'Category One';
    }
    else if (statements[i] == 'Description Two') {
        category[i] = 'Category Two';
    }
    // and so on for all known categories....
}
How do I go about accessing a cell for a read and accessing a different cell for a write?
Thanks in advance for the help!
OK, so after a little more thought on the subject, I've arrived at a solution. It's super simple, albeit tedious:
function assignCategory(description) {
    if (description == 'Description One') {
        return 'Category One';
    }
    // and so on for all known categories
}
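For reference, here is a minimal sketch of how that function could be wired up to read column B and write column F; the header row and range sizes are assumptions about your sheet layout:
function categorizeColumnB() {
    var sheet = SpreadsheetApp.getActiveSheet();
    var lastRow = sheet.getLastRow();
    var statements = sheet.getRange(2, 2, lastRow - 1, 1).getValues(); //column B, skipping a header row
    for (var i = 0; i < statements.length; i++) {
        var category = assignCategory(statements[i][0]);
        if (category) sheet.getRange(i + 2, 6).setValue(category); //column F of the same row
    }
}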
Hopefully someone will see this and be helped anyway. If you think of a more efficient and easier-to-maintain way of doing this, by all means chime in.
Assuming a sheet such as this one, which has a header and six different columns (where B is the description, and F the category); you could use a dictionary to translate your values as follows:
// (description -> category) dictionary
var translations = {
    "cooking": "Cooking",
    "sports": "Sport",
    "leisure": "Leisure",
    "music": "Music",
    "others": "Other"
}

function assignCategories() {
    var dataRange = SpreadsheetApp.getActiveSheet().getDataRange();
    for (var i = 2; i <= dataRange.getNumRows(); i++) {
        var description = dataRange.getCell(i, 2).getValue();
        var category = translations[description];
        dataRange.getCell(i, 6).setValue(category);
    }
}
In case you need additional ruling (i.e. descriptions that contain cricket must be classified as sport), you could accomplish your desired results by implementing your own custom function and using string functions (such as indexOf) or regular expressions.
Using indexOf
// (description -> category) dictionary
var translations = {
    "cooking": "Cooking",
    "sports": "Sport",
    "leisure": "Leisure",
    "music": "Music",
    "others": "Other"
}

function assignCategories() {
    var dataRange = SpreadsheetApp.getActiveSheet().getDataRange();
    for (var i = 2; i <= dataRange.getNumRows(); i++) {
        var description = dataRange.getCell(i, 2).getValue();
        var category = assignCategory(description);
        if (category) dataRange.getCell(i, 6).setValue(category);
    }
}

function assignCategory(description) {
    description = description.toLowerCase();
    var keys = Object.keys(translations);
    for (var i = 0; i < keys.length; i++) {
        var currentKey = keys[i];
        if (description.indexOf(currentKey) > -1)
            return translations[currentKey];
    }
}
This version is a bit more sophisticated. It lowercases the 'description' of each row for a better comparison with your dictionary, and it uses indexOf to check whether the 'translation key' appears in the description rather than checking for an exact match.
You should be aware, however, that this method will be considerably slower, and that the script may time out (see GAS Quotas). You could implement a way to 'resume' your script so that you can re-run it and continue where it left off, in case this hinders your operations; a rough sketch follows.
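Here is a minimal sketch of one way to make the run resumable, storing the last processed row in the script properties between runs (the property key and the overall approach are illustrative, not part of the original answer):
var PROGRESS_KEY = 'lastProcessedRow'; //illustrative key name

function assignCategoriesResumable() {
    var props = PropertiesService.getScriptProperties();
    var startRow = Number(props.getProperty(PROGRESS_KEY)) || 2; //resume, or start after the header
    var dataRange = SpreadsheetApp.getActiveSheet().getDataRange();
    for (var i = startRow; i <= dataRange.getNumRows(); i++) {
        var description = dataRange.getCell(i, 2).getValue();
        var category = assignCategory(description);
        if (category) dataRange.getCell(i, 6).setValue(category);
        props.setProperty(PROGRESS_KEY, String(i + 1)); //remember where to resume if we time out
    }
    props.deleteProperty(PROGRESS_KEY); //finished: the next run starts from the top
}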
I am trying to add columnSummary to my table using Handsontable, but it seems that the function does not fire. The stretchH value gets set properly, but the table does not react to the columnSummary option:
this.$refs.hot.hotInstance.updateSettings({
    stretchH: 'all',
    columnSummary: [{
        destinationRow: 0,
        destinationColumn: 2,
        reversedRowCoords: true,
        type: 'custom',
        customFunction: function(endpoint) {
            console.log("TEST");
        }
    }]
}, false);
I have also tried with type:'sum' without any luck.
Thanks for all help and guidance!
columnSummary cannot be changed with updateSettings: GH #3597
You can set columnSummary settings at the initialization of Handsontable.
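For instance, here is a minimal initialization sketch; the container element and data are illustrative, and note that a 'custom' summary function is expected to return the value to display:
var hot = new Handsontable(document.getElementById('hot'), {
    data: [[1, 'a', 5], [2, 'b', 6]],
    stretchH: 'all',
    columnSummary: [{
        destinationRow: 0,
        destinationColumn: 2,
        reversedRowCoords: true,
        type: 'custom',
        customFunction: function(endpoint) {
            return 'TEST'; //whatever this returns is written to the destination cell
        }
    }]
});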
One workaround would be to manage your own column summary, since the built-in Handsontable one could give you some headaches. You could try to add one additional row to put your arithmetic in, but it is messy (it needs a fixed number of rows and does not work with filtering and sorting operations). Still, it could work well under some circumstances.
In my humble opinion, though, a column summary has to be fully functional. We then need to keep our summary row out of the table data. What comes to mind is to take the above-mentioned additional row and move it out of the table data "area", but that would force us to make the out-of-table row always look like it was still part of the table.
So I thought that instead of having a new line we could just have to add our column summary within column header:
Here is a working JSFiddle example.
Once the Handsontable table is rendered, we need to iterate through the columns and set our column summary right in the table cell HTML content:
for (var i = 0; i < tableConfig.columns.length; i++) {
    var columnHeader = document.querySelectorAll('.ht_clone_top th')[i];
    if (columnHeader) { // Just to be sure the column header exists
        var summaryColumnHeader = document.createElement('div');
        summaryColumnHeader.className = 'custom-column-summary';
        columnHeader.appendChild(summaryColumnHeader);
    }
}
Now that our placeholders are set, we have to update them with some arithmetic results:
var printedData = hotInstance.getData();
for (var i = 0; i < tableConfig.columns.length; i++) {
    var summaryColumnHeader = document.querySelectorAll('.ht_clone_top th')[i].querySelector('.custom-column-summary'); // Get back our column summary for each column
    if (summaryColumnHeader) {
        var res = 0;
        printedData.forEach(function(row) { res += row[i]; }); // Sum all the data stored under that column
        summaryColumnHeader.innerText = '= ' + res;
    }
}
This piece of code may be wrapped in a function and called whenever it is needed:
var hotInstance = new Handsontable(/* ... */);
setMySummaryHeaderCalc(); // When Handsontable table is printed
Handsontable.hooks.add('afterFilter', function(conditionsStack) { // When Handsontable table is filtered
setMySummaryHeaderCalc();
}, hotInstance);
Feel free to comment; I could improve my answer.
So I have a standard service reference proxy class for MS CRM 2013 (i.e. right-click, add reference, etc...). I then found the limitation that CRM data calls are limited to 50 results, and I wanted to get the full list of results. I found two methods; one looks more correct, but doesn't seem to work. I was wondering why it doesn't and/or if there is something I'm doing incorrectly.
Basic setup and process
crmService = new CrmServiceReference.MyContext(new Uri(crmWebServicesUrl));
crmService.Credentials = System.Net.CredentialCache.DefaultCredentials;
var accountAnnotations = crmService.AccountSet.Where(a => a.AccountNumber == accountNumber).Select(a => a.Account_Annotation).FirstOrDefault();
Using Continuation (something I want to work, but looks like it doesn't)
while (accountAnnotations.Continuation != null)
{
    accountAnnotations.Load(crmService.Execute<Annotation>(accountAnnotations.Continuation.NextLinkUri));
}
Using that method, .Continuation is always null and accountAnnotations.Count is always 50 (but there are more than 50 records).
After struggling with .Continuation for a while, I've come up with the following alternative method (but it seems "not good"):
var accountAnnotationData = accountAnnotations.ToList();
var accountAnnotationFinal = accountAnnotations.ToList();
var index = 1;
while (accountAnnotationData.Count == 50)
{
    accountAnnotationData = (from a in crmService.AnnotationSet
                             where a.ObjectId.Id == accountAnnotationData.First().ObjectId.Id
                             select a).Skip(50 * index).ToList();
    accountAnnotationFinal = accountAnnotationFinal.Union(accountAnnotationData).ToList();
    index++;
}
So the second method seems to work, but for any number of reasons it doesn't seem like the best. Is there a reason .Continuation is always null? Is there some setup step I'm missing or some nice way to do this?
The way to get all the records from CRM is to use paging. Here is an example with a query expression, but you can also use FetchXML if you want:
// Query using the paging cookie.
// Define the paging attributes.
// The number of records per page to retrieve.
int fetchCount = 3;
// Initialize the page number.
int pageNumber = 1;
// Initialize the number of records.
int recordCount = 0;
// Define the condition expression for retrieving records.
ConditionExpression pagecondition = new ConditionExpression();
pagecondition.AttributeName = "address1_stateorprovince";
pagecondition.Operator = ConditionOperator.Equal;
pagecondition.Values.Add("WA");
// Define the order expression to retrieve the records.
OrderExpression order = new OrderExpression();
order.AttributeName = "name";
order.OrderType = OrderType.Ascending;
// Create the query expression and add condition.
QueryExpression pagequery = new QueryExpression();
pagequery.EntityName = "account";
pagequery.Criteria.AddCondition(pagecondition);
pagequery.Orders.Add(order);
pagequery.ColumnSet.AddColumns("name", "address1_stateorprovince", "emailaddress1", "accountid");
// Assign the pageinfo properties to the query expression.
pagequery.PageInfo = new PagingInfo();
pagequery.PageInfo.Count = fetchCount;
pagequery.PageInfo.PageNumber = pageNumber;
// The current paging cookie. When retrieving the first page,
// pagingCookie should be null.
pagequery.PageInfo.PagingCookie = null;
Console.WriteLine("#\tAccount Name\t\t\tEmail Address");
while (true)
{
    // Retrieve the page.
    EntityCollection results = _serviceProxy.RetrieveMultiple(pagequery);
    if (results.Entities != null)
    {
        // Retrieve all records from the result set.
        foreach (Account acct in results.Entities)
        {
            Console.WriteLine("{0}.\t{1}\t\t{2}",
                ++recordCount,
                acct.Name,
                acct.EMailAddress1);
        }
    }
    // Check for more records, if it returns true.
    if (results.MoreRecords)
    {
        // Increment the page number to retrieve the next page.
        pagequery.PageInfo.PageNumber++;
        // Set the paging cookie to the paging cookie returned from current results.
        pagequery.PageInfo.PagingCookie = results.PagingCookie;
    }
    else
    {
        // If no more records are in the result nodes, exit the loop.
        break;
    }
}
I'm using Node.js 0.10.12 to perform queries against PostgreSQL 9.1.
I get the error invalid input syntax for integer: "{39}" (39 is an example number) when I try to perform an update query.
I cannot see what is going wrong. Any advice?
Here is my code (snippets) in the front-end
//this is global
var gid = 0;
//set websockets to search - works fine
var sd = new WebSocket("ws://localhost:0000");
sd.onmessage = function (evt)
{
    //get the data and parse it (there is more than one var in it); pass the id to gid
    var received_msg = evt.data;
    var packet = JSON.parse(received_msg);
    var tid = packet['tid'];
    gid = tid;
}
//when the user clicks the button, set websockets to send the id and other data, to perform the update query
var sa = new WebSocket("ws://localhost:0000");
sa.onopen = function(){
    sa.send(JSON.stringify({
        command: 'typesave',
        indi: gid,
        name: document.getElementById("typename").value
    }));
    sa.onmessage = function (evt) {
        alert("Saved");
        sa.close();
        gid = 0; //make gid 0 again, for re-use
    };
};
And the back -end (query)
var query = client.query("UPDATE type SET t_name=$1,t_color=$2 WHERE t_id = $3 ", [name, color, indi]);
query.on("row", function (row, result) {
    result.addRow(row);
});
query.on("end", function (result) {
    connection.send("o");
    client.end();
});
Why does this not work, and why does the number not get recognized?
Thanks in advance
As one would expect from the initial problem, your database driver is sending an integer array of one member into a field for an integer. PostgreSQL rightly rejects the data and returns an error. '{39}' in PostgreSQL terms is exactly equivalent to ARRAY[39] using an array constructor, and to [39] in JSON.
Now, obviously you can just change your query call to pull the first item out of the JSON array and send that instead of the whole array, but I would be worried about what happens if things change and you get multiple values. You may want to look at separating that logic out for this data structure.
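As a minimal sketch of that first option (the Array.isArray guard and variable name are illustrative, since the actual payload shape is not shown):
//Unwrap a one-element array such as [39] before binding the parameter;
//plain integers pass through unchanged
var id = Array.isArray(indi) ? indi[0] : indi;
var query = client.query("UPDATE type SET t_name=$1,t_color=$2 WHERE t_id = $3", [name, color, id]);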