DataTable change comma separator - Pentaho

Hi, I'm using Pentaho CE.
I'd like to know where I can change the properties of the DataTable underlying the TableComponent, such as the thousands separator, in order to show a point instead of the default comma.
This is the oLanguage sInfoThousands property.
OK, I'm using oLanguage in my table definition in the following way:
"oLanguage": {
"sSearch": "cerca",
"sInfoThousands": "."
},
But I have a problem: why is the sSearch property applied while sInfoThousands doesn't change anything?

The dataTable is accessible using the dataTable property on the table component object.
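For example, something along these lines from a dashboard function (a sketch only; how you reference the component and which hook you use depend on your dashboard setup):

postExecution: function() {
    var dt = this.dataTable; // the underlying DataTables object of this TableComponent
    // inspect or adjust it here
}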

Related

Error code: DelimitedTextMoreColumnsThanDefined Azure Data Factory

I am trying to copy data from a CSV file to a SQL table in Azure Data Factory.
These are my type properties for the CSV file:
"typeProperties": {
"location": {
"type": "AzureBlobStorageLocation",
"fileName": "2020-09-16-stations.csv",
"container": "container"
},
"columnDelimiter": ",",
"escapeChar": "\\",
"firstRowAsHeader": true,
"quoteChar": "\""
I receive the following error:
ErrorCode=DelimitedTextMoreColumnsThanDefined,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error found when processing 'Csv/Tsv Format Text' source '2020-09-16-stations.csv' with row number 2: found more columns than expected column count 11.,Source=Microsoft.DataTransfer.Common,'
This is row #2:
0e18d0d3-ed38-4e7f,Station2,Mainstreet33,,12207,Berlin,48.1807,11.4609,1970-01-01 01:00:00+01,"{""openingTimes"":[{""applicable_days"":96,""periods"":[{""startp"":""08:00"",""endp"":""20:00""}]},{""applicable_days"":31,""periods"":[{""startp"":""06:00"",""endp"":""20:00""}]}]}"
I think the last column, the JSON value, is causing the trouble in this case. When I preview the data it looks fine.
I thought the "quoteChar": "\"" setting would prevent exactly this kind of problem with the last column. I have no idea why I am getting this error when I run debug.
Try setting the escape character = " (a double quote). This should treat each pair of double quotes as a single literal quote and won't consider them a "quote char" within the string, so you will end up with a string that looks like this (and which the system knows is a single string, not something it has to split):
{"openingTimes":[{"applicable_days":96,"periods":[{"startp":"08:00","endp":"20:00"}]},
{"applicable_days":31,"periods":[{"startp":"06:00","endp":"20:00"}]}]}
This is because the value "{""openingTimes"":[{""applicable_days"":96,""periods"":[{""startp"":""08:00"",""endp"":""20:00""}]},{""applicable_days"":31,""periods"":[{""startp"":""06:00"",""endp"":""20:00""}]}]}" contains several commas while your columnDelimiter is ",", which leads to the value being split into several columns. So you need to change your columnDelimiter.

How to add column mapping in SqlBulkCopy in C# where column names contain white space

using (SqlBulkCopy sc = new SqlBulkCopy(conn))
{
    sc.DestinationTableName = destination;
    sc.ColumnMappings.Add("ID", "ID #");
    sc.ColumnMappings.Add("Amount", "Amount in USD");
    sc.WriteToServer(datatable);
}
I am getting an error that the columns do not match the given column mapping.
Thanks in advance.
Found the solution: you need to add square brackets around the destination column names, but only where the column name contains white space.
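Putting that into the snippet above, a minimal sketch (square brackets only around the destination names that contain spaces):

using (SqlBulkCopy sc = new SqlBulkCopy(conn))
{
    sc.DestinationTableName = destination;
    // Bracket only the destination column names containing white space
    sc.ColumnMappings.Add("ID", "[ID #]");
    sc.ColumnMappings.Add("Amount", "[Amount in USD]");
    sc.WriteToServer(datatable);
}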

Rally: Creating a multiple query based filter

I have looked at this to create a complex QueryFilter.
Here's my code:
QueryFilter complexFilter = new QueryFilter("c_TestReady","=","Yes").and(new QueryFilter("c_ExternalID","=",""));
The c_ExternalID field is a custom field in Rally of type String, and c_TestReady is another field with values Yes or No. The query should return all test cases that are TestReady and whose ExternalID field is empty.
I also tried:
1."c_ExternalID","=",null
2."c_ExternalID","contains",null
3."c_ExternalID","contains",""
None of these seem to work.
Try escaping the double quotes, as in new QueryFilter("c_ExternalID", "=", "\"\"")
This worked for me:
storyRequest.setQueryFilter((new QueryFilter("c_CustomString", "=", "\"\"")).and(new QueryFilter("c_CustomCheckBox", "=", "true")));

SharePoint 2010 client model. Set multiple values in taxonomy field

How can I set multiple values in a taxonomy field using the client model?
For a single value I use the following snippet:
var item = ctx
    .get_web()
    .get_lists()
    .getByTitle('News')
    .getItemById($("#newsId").val());
var newTag = "40;#term_title|cd1df680-fff6-4d37-a336-95a2fbc0719d";
item.set_item("NewsTag", newTag);
item.update();
ctx.executeQueryAsync(function () {
});
It works fine for a single value.
I have tried to use the newTag variable as an array, and I tried concatenating two {id};#{title}|{guid} strings with a ; separator, but it does not work.
Can anyone help with that?
I've found the right way to set multiple values. The delimiter is actually the combination of a semicolon and a hash, ";#", not just a semicolon ";".
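Based on the snippet above, a sketch for two values (the second WssId, term title, and GUID are placeholders):

var newTags = "40;#term_title|cd1df680-fff6-4d37-a336-95a2fbc0719d" +
    ";#41;#other_term|00000000-0000-0000-0000-000000000000";
item.set_item("NewsTag", newTags);
item.update();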

SQL Bulk import from CSV

I need to import a large CSV file into SQL Server. I'm using this:
BULK INSERT CSVTest
FROM 'c:\csvfile.txt'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO
The problem is that all my fields are surrounded by quotes (" "), so a row actually looks like:
"1","","2","","sometimes with comma , inside", ""
Can I somehow bulk import them and tell SQL to use the quotes as field delimiters?
Edit: The problem with using '","' as the delimiter, as in the suggested examples, is this:
What most examples do is import the data including the first " in the first column and the last " in the last, then go ahead and strip those out. Alas, my first (and last) columns are datetime and will not allow a "20080902 to be imported as datetime.
From what I've been reading around, I think FORMATFILE is the way to go, but the documentation (including MSDN) is terribly unhelpful.
Try FIELDTERMINATOR='","'
Here is a great link to help with the first and last quote... look at how he used SUBSTRING in the stored procedure:
http://www.sqlteam.com/article/using-bulk-insert-to-load-a-text-file
Another hack I sometimes use is to open the CSV in Excel, then write your SQL statement into a cell at the end of each row.
For example:
=CONCATENATE("insert into myTable (columnA,columnB) values ('",A1,"','",B1,"')")
A fill-down can populate this into every row for you. Then just copy and paste the output into a new query window.
It's old-school, but if you only need to do imports once in a while it saves you messing around with reading all the obscure documentation on the 'proper' way to do it.
Try OpenRowSet. This can be used to import Excel stuff, and Excel can open CSV files, so you only need to figure out the correct connection string:
Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\txtFilesFolder\;Extensions=asc,csv,tab,txt;
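A hedged sketch of what that could look like (assumes the Microsoft Text Driver is installed and ad hoc distributed queries are enabled; the folder and file names follow the examples above):

SELECT *
FROM OPENROWSET('MSDASQL',
    'Driver={Microsoft Text Driver (*.txt; *.csv)};Dbq=c:\txtFilesFolder\;',
    'SELECT * FROM csvfile.txt')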
I know this isn't a real solution but I use a dummy table for the import with nvarchar set for everything. Then I do an insert which strips out the " characters and does the conversions. It isn't pretty but it does the job.
I'd say use FileHelpers; it's an open-source library.
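A minimal sketch with FileHelpers, assuming a three-field file like the "Kelly" example below (the record class and field names are mine; the DelimitedRecord and FieldQuoted attributes take care of the surrounding quotes):

using FileHelpers;

[DelimitedRecord(",")]
public class CsvRow
{
    [FieldQuoted('"', QuoteMode.OptionalForBoth)]
    public string FirstName;
    [FieldQuoted('"', QuoteMode.OptionalForBoth)]
    public string LastName;
    [FieldQuoted('"', QuoteMode.OptionalForBoth)]
    public string Email;
}

// Reads the whole file into typed records
var engine = new FileHelperEngine<CsvRow>();
CsvRow[] rows = engine.ReadFile(@"c:\csvfile.txt");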
Do you need to do this programmatically, or is it a one-time shot?
Using the Enterprise Manager, right-click Import Data lets you select your delimiter.
You have to watch out with BCP/BULK INSERT because neither BCP nor BULK INSERT handles this well if the quoting is not consistent, even with format files (even XML format files don't offer the option), even with dummy ["] characters at the beginning and end and [","] as the separator. Technically, CSV files don't need ["] characters at all if there are no embedded [,] characters.
It is for this reason that comma-delimited files are sometimes referred to as comedy-limited files.
OpenRowSet will require Excel on the server and could be problematic in 64-bit environments - I know it's problematic using Excel via Jet in 64-bit.
SSIS is really your best bet if the file is likely to vary from your expectations in the future.
You can try this code, which is very sweet if you want; it will remove the unwanted quote characters from your data.
If, for example, your data is like this: "Kelly","Reynold","kelly#reynold.com"
bulk insert test1
from 'c:\1.txt' with (
    fieldterminator = '","'
    ,rowterminator = '\n')

update test1
set name = substring(name, 2, len(name))
where name like '"%'

update test1
set email = substring(email, 1, len(email) - 1)
where email like '%"'
First you need to import the CSV file into a DataTable.
Then you can insert the rows in bulk using SqlBulkCopy.
using System;
using System.Data;
using System.Data.SqlClient;

namespace SqlBulkInsertExample
{
    class Program
    {
        static void Main(string[] args)
        {
            DataTable prodSalesData = new DataTable("ProductSalesData");

            // Create Column 1: SaleDate
            DataColumn dateColumn = new DataColumn();
            dateColumn.DataType = Type.GetType("System.DateTime");
            dateColumn.ColumnName = "SaleDate";

            // Create Column 2: ProductName
            DataColumn productNameColumn = new DataColumn();
            productNameColumn.ColumnName = "ProductName";

            // Create Column 3: TotalSales
            DataColumn totalSalesColumn = new DataColumn();
            totalSalesColumn.DataType = Type.GetType("System.Int32");
            totalSalesColumn.ColumnName = "TotalSales";

            // Add the columns to the ProductSalesData DataTable
            prodSalesData.Columns.Add(dateColumn);
            prodSalesData.Columns.Add(productNameColumn);
            prodSalesData.Columns.Add(totalSalesColumn);

            // Let's populate the datatable with our stats.
            // You can add as many rows as you want here!

            // Create a new row
            DataRow dailyProductSalesRow = prodSalesData.NewRow();
            dailyProductSalesRow["SaleDate"] = DateTime.Now.Date;
            dailyProductSalesRow["ProductName"] = "Nike";
            dailyProductSalesRow["TotalSales"] = 10;

            // Add the row to the ProductSalesData DataTable
            prodSalesData.Rows.Add(dailyProductSalesRow);

            // Copy the DataTable to SQL Server using SqlBulkCopy
            using (SqlConnection dbConnection = new SqlConnection("Data Source=ProductHost;Initial Catalog=dbProduct;Integrated Security=SSPI;Connection Timeout=60;Min Pool Size=2;Max Pool Size=20;"))
            {
                dbConnection.Open();
                using (SqlBulkCopy s = new SqlBulkCopy(dbConnection))
                {
                    s.DestinationTableName = prodSalesData.TableName;
                    foreach (var column in prodSalesData.Columns)
                        s.ColumnMappings.Add(column.ToString(), column.ToString());
                    s.WriteToServer(prodSalesData);
                }
            }
        }
    }
}
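The answer above skips the first step; here is a minimal sketch of reading a quoted CSV into a DataTable with TextFieldParser (the file path is from the question, and I'm assuming a header row and all-string columns):

using System.Data;
using Microsoft.VisualBasic.FileIO; // add a reference to Microsoft.VisualBasic

static DataTable ReadCsvIntoDataTable(string path)
{
    var table = new DataTable();
    using (var parser = new TextFieldParser(path))
    {
        parser.TextFieldType = FieldType.Delimited;
        parser.SetDelimiters(",");
        parser.HasFieldsEnclosedInQuotes = true; // copes with "..." fields containing commas
        // Assumption: the first row holds the column names
        foreach (var header in parser.ReadFields())
            table.Columns.Add(header);
        while (!parser.EndOfData)
            table.Rows.Add(parser.ReadFields());
    }
    return table;
}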
This is an old question, so I'm writing this to help anyone who stumbles upon it.
SQL Server 2017 introduced the FIELDQUOTE parameter, which is intended for this exact use case.
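A sketch of the original statement using it (FORMAT = 'CSV', also new in SQL Server 2017, switches BULK INSERT to standard CSV quote handling):

BULK INSERT CSVTest
FROM 'c:\csvfile.txt'
WITH
(
    FORMAT = 'CSV',
    FIELDQUOTE = '"',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)
GO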
Yup, K Richard is right: FIELDTERMINATOR = '","'
See http://www.sqlteam.com/article/using-bulk-insert-to-load-a-text-file for more info.
You could also use DTS or SSIS.
Do you have control over the input format? | (pipe) and \t (tab) usually make for better field terminators.
If you figure out how to get the file parsed into a DataTable, I'd suggest the SqlBulkCopy class for inserting it into SQL Server.