LINQPad: how do you use Util.WriteCsv to write a DataTable to CSV using the beta version of LINQPad?

This was my first stab at it:
Util.WriteCsv<DataTable>(myDataTable, filePath);
The exception I get is that 'System.Data.DataTable' cannot be converted to 'System.Collections.Generic.IEnumerable'.

It turns out the latest LINQPad beta (v4.44.9) supports DataTables in Util.WriteCsv; see http://www.linqpad.net/beta.aspx.
In the meantime, if you need to generate a CSV file from a DataTable in older versions, you can follow the instructions below.
I ended up writing a custom procedure to do this, since the earlier beta (4.44.06) expects an IEnumerable, and even after I called .AsEnumerable() on myDataTable the output wasn't coming out right:
public static void ExportToCSV(DataTable table, string filePath)
{
    var sb = new StringBuilder();

    // Header row: column names separated by commas.
    foreach (DataColumn column in table.Columns)
    {
        sb.Append(column.ColumnName + ",");
    }
    sb.Append(Environment.NewLine);

    // Data rows. Note this strips commas from values rather than quoting them,
    // which is crude but keeps the output parseable.
    foreach (DataRow row in table.Rows)
    {
        for (int i = 0; i < table.Columns.Count; i++)
        {
            sb.Append(row[i].ToString().Replace(",", string.Empty) + ",");
        }
        sb.Append(Environment.NewLine);
    }

    System.IO.File.WriteAllText(filePath, sb.ToString());
    string.Format("wrote output to {0}", filePath).Dump();
}
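For comparison, with a beta that supports DataTables (per the link above) the call is direct; on older builds one workaround is to project the rows into an anonymous-typed sequence, which Util.WriteCsv already accepts. A minimal sketch; the column names Id and Name are made-up placeholders:
// Newer betas (v4.44.9+): pass the DataTable directly.
Util.WriteCsv(myDataTable, filePath);

// Older versions: project rows into an IEnumerable first.
// Requires System.Data.DataSetExtensions for AsEnumerable().
var rows = myDataTable.AsEnumerable()
    .Select(r => new { Id = r["Id"], Name = r["Name"] });  // hypothetical columns
Util.WriteCsv(rows, filePath);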

Related

SQL for json path returning invalid json format and incomplete json result [duplicate]

I am using Azure SQL (SQL Server 2016) and writing a query to give me output as a JSON object, adding FOR JSON PATH at the end of the query.
When I execute the procedure without FOR JSON PATH, I get 244 rows (the number of records in my table); but when I execute it with FOR JSON PATH I get a message saying 33 rows, and the JSON object I get is truncated.
I tested this with different kinds of queries, including a simple query selecting only 10 columns, but I always get fewer rows with FOR JSON PATH and a JSON object that is truncated at the end.
Here is my query
SELECT
[Id]
,[countryCode]
,[CountryName]
,[FIPS]
,[ISO1]
,[ISO2]
,[ISONo]
,[capital]
,[region]
,[currency]
,[currencyCode]
,[population]
,[timeZone]
,[timeZoneCode]
,[ISDCode]
,[currencySymbol]
FROM
[dbo].[countryDB]
The above query returns 244 rows.
And I use the following query to get the output as JSON:
SELECT
[Id]
,[countryCode]
,[CountryName]
,[FIPS]
,[ISO1]
,[ISO2]
,[ISONo]
,[capital]
,[region]
,[currency]
,[currencyCode]
,[population]
,[timeZone]
,[timeZoneCode]
,[ISDCode]
,[currencySymbol]
FROM
[dbo].[countryDB]
FOR JSON PATH
The above query returns 33 rows, and the output is:
[{"Id":1,"countryCode":"AD","CountryName":"Andorra","FIPS":"AN","ISO1":"AD","ISO2":"AND","ISONo":20,"capital":"Andorra la Vella","region":"Europe","currency":"Euro","currencyCode":"EUR","population":67627,"timeZone":2.00,"timeZoneCode":"DST","ISDCode":"+376"},{"Id":2,"countryCode":"AE","CountryName":"United Arab Emirates","FIPS":"AE","ISO1":"AE","ISO2":"ARE","ISONo":784,"capital":"Abu Dhabi","region":"Middle East","currency":"UAE Dirham","currencyCode":"AED","population":2407460,"timeZone":4.00,"timeZoneCode":"STD","ISDCode":"+971"},{"Id":3,"countryCode":"AF","CountryName":"Afghanistan","FIPS":"AF","ISO1":"AF","ISO2":"AFG","ISONo":4,"capital":"Kabul","region":"Asia","currency":"Afghani","currencyCode":"AFA","population":26813057,"timeZone":4.50,"timeZoneCode":"STD","ISDCode":"+93"},{"Id":4,"countryCode":"AG","CountryName":"Antigua and Barbuda","FIPS":"AC","ISO1":"AG","ISO2":"ATG","ISONo":28,"capital":"Saint Johns","region":"Central America and the Caribbean","currency":"East Caribbean Dollar","currencyCode":"205","population":66970,"timeZone":-4.00,"timeZoneCode":"STD","ISDCode":"+1"},{"Id":5,"countryCode":"AI","CountryName":"Anguilla","FIPS":"AV","ISO1":"AI","ISO2":"AIA","ISONo":660,"capital":"The Valley","region":"Central America and the Caribbean","currency":"East Caribbean Dollar","currencyCode":"205","population":12132,"timeZone":-4.00,"timeZoneCode":"STD","ISDCode":"+1"},{"Id":6,"countryCode":"AL","CountryName":"Albania","FIPS":"AL","ISO1":"AL","ISO2":"ALB","ISONo":8,"capital":"Tirana","region":"Europe","currency":"Lek","currencyCode":"ALL","population":3510484,"timeZone":2.00,"timeZoneCode":"DST","ISDCode":"+355"},{"Id":7,"countryCode":"AM","CountryName":"Armenia","FIPS":"AM","ISO1":"AM","ISO2":"ARM","ISONo":51,"capital":"Yerevan","region":"Commonwealth of Independent States","currency":"Armenian Dram","currencyCode":"AMD","population":3336100,"timeZone":5.00,"timeZoneCode":"DST","ISDCode":"+374"},{"Id":8,"countryCode":"AN","CountryName":"Netherlands Antilles","FIPS":"NT","ISO1":"AN","ISO2":
I am trying to get the output directly as JSON.
When FOR JSON queries are returned to the client, the JSON text is returned as a single-column result set. The JSON is broken into fixed-length strings and sent over multiple rows.
It's really hard to see this properly in SSMS, as SSMS concatenates the results for you in "Results to Grid", and truncates each row in "Results to Text".
Why? Dunno. My guess is that only .NET clients know how to efficiently read large streams from SQL Server, and 99% of the time users will still just buffer the whole object. Breaking the JSON over multiple rows gives clients a simple API to read the data incrementally. And in .NET the fact that the de facto standard JSON library is not in the BCL means that SqlClient can't really have a first-class JSON API.
Anyway, from C#, you can use something like this to read the results:
using System;
using System.Data.SqlClient;
using System.IO;

namespace ConsoleApp3
{
    // A TextReader over the single-column result set that FOR JSON returns,
    // so the JSON can be streamed to a parser without buffering it all.
    class SqlJSONReader : TextReader
    {
        SqlDataReader rdr;
        string currentLine = "";
        int currentPos = 0;

        public SqlJSONReader(SqlDataReader rdr)
        {
            this.rdr = rdr;
        }

        public override int Peek()
        {
            return GetChar(false);
        }

        public override int Read()
        {
            return GetChar(true);
        }

        public int GetChar(bool Advance)
        {
            // When the current chunk is exhausted, fetch the next row.
            while (currentLine.Length == currentPos)
            {
                if (!rdr.Read())
                {
                    return -1;
                }
                currentLine = rdr.GetString(0);
                currentPos = 0;
            }
            int rv = (int)currentLine[currentPos];
            if (Advance) currentPos += 1;
            return rv;
        }

        public override void Close()
        {
            rdr.Close();
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            using (var con = new SqlConnection("server=.;database=master;Integrated Security=true"))
            {
                con.Open();
                var sql = @"
select o.object_id as [obj.Id], replicate('n', 2000) as [obj.foo], c.name as [obj.col.name]
from sys.objects o
join sys.columns c
  on c.object_id = o.object_id
for json path;
";
                var cmd = new SqlCommand(sql, con);
                using (var rdr = cmd.ExecuteReader())
                using (var tr = new SqlJSONReader(rdr))
                using (var jr = new Newtonsoft.Json.JsonTextReader(tr))
                {
                    while (jr.Read())
                    {
                        Console.WriteLine($" {jr.TokenType} : {jr.Value}");
                    }
                }
            }
        }
    }
}
With thanks to @David Browne. I found I had to use 'print' instead of 'select':
declare @json varchar(max) = (SELECT * FROM dbo.AppSettings FOR JSON AUTO)
print @json
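Along the same lines, assigning the JSON to a varchar(max) variable and selecting that variable back makes the client receive one scalar value instead of chunked rows. A minimal sketch from C# (the connection string and the dbo.AppSettings table are assumptions carried over from the snippet above):
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "declare @json varchar(max) = (SELECT * FROM dbo.AppSettings FOR JSON AUTO); select @json;", con))
{
    con.Open();
    // The whole document arrives as a single string.
    var json = (string)cmd.ExecuteScalar();
}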
Separation of Concerns dictates returning a string and parsing the JSON separately. The snippet below has no dependency on JSON.net; a different JSON deserializer (e.g. the one built into RestSharp) can be used instead, and it does not require the SqlJSONReader class.
try {
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn)) {
        await conn.OpenAsync();
        logger.LogInformation("SQL:" + sql);
        var rdr = await cmd.ExecuteReaderAsync().ConfigureAwait(false);
        var result = "";
        var moreRows = rdr.HasRows;
        while (moreRows) {
            moreRows = await rdr.ReadAsync();
            if (moreRows) result += rdr.GetString(0);
        }
        return result;
    }
}
catch (Exception ex) {
    //logger.LogError($"Error accessing Db:{ex}");
    return null;
}
The result of a FOR JSON PATH query is one long string that is divided into multiple blocks (rows). For a statement like "i want to get result of for json path", it comes back as something like:
"i want"+
" to ge"+
"t resu"+
"lt of "+
"for js"+
"on path"
Each block is at most the maximum column size SQL Server uses for the result, so just collect them all into a list:
public IHttpActionResult GetAdvertises()
{
    var request = db.Database
        .SqlQuery<string>("SELECT ID, CITY_NAME, JSON_QUERY(ALBUM) AS ALBUM FOR JSON PATH")
        .ToList();
    // Write each chunk in order so the response contains the complete JSON.
    foreach (string req in request)
    {
        HttpContext.Current.Response.Write(req);
    }
    return Ok();
}
If your query returns more than 2033 characters, the result is split across multiple rows: each row contains 2033 characters of data and the next row contains the remainder. So you need to merge the rows to get the actual JSON, as in the code sample below.
dynamic jsonReturned = unitOfWork
    .Database
    .FetchProc<string>("storedProcedureGetSaleData", new { ProductId = productId });
if (Enumerable.Count(jsonReturned) == 0)
{
    return null;
}
dynamic combinedJson = "";
foreach (var resultJsonRow in jsonReturned)
{
    combinedJson += resultJsonRow;
}
return combinedJson;
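The same merge can be done with plain ADO.NET and a StringBuilder, which avoids repeated string concatenation. A sketch under the same assumptions as above (the stored procedure name and parameter are taken from the snippet; the connection string is hypothetical):
static string GetSaleDataJson(string connectionString, int productId)
{
    using (var con = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("storedProcedureGetSaleData", con))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@ProductId", productId);
        con.Open();
        var sb = new StringBuilder();
        using (var rdr = cmd.ExecuteReader())
        {
            // Each row holds one chunk (~2033 characters) of the JSON document;
            // append the chunks in order to rebuild the full string.
            while (rdr.Read())
                sb.Append(rdr.GetString(0));
        }
        return sb.Length == 0 ? null : sb.ToString();
    }
}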

Read file contents from VirtualFile - intellij plugin development

How can I read file contents from a VirtualFile? I am currently doing it this way:
try {
    BufferedReader br = new BufferedReader(new InputStreamReader(virtualFile.getInputStream()));
    String currentLine;
    StringBuilder stringBuilder = new StringBuilder();
    while ((currentLine = br.readLine()) != null) {
        stringBuilder.append(currentLine);
        stringBuilder.append("\n");
    }
} catch (IOException e1) {
    e1.printStackTrace();
    return 0;
}
However, I am getting a garbled string appended when I print the StringBuilder.
Some common ways of reading VirtualFile contents are:
file.contentsToByteArray()
LoadTextUtil.loadText(file)
FileDocumentManager.getInstance().getDocument(file).get*CharSequence()
You can use VfsUtil.loadText(virtualFile);
Also, to make sure the file is up to date, you can call virtualFile.refresh(false, false);

ă ş ţ chars aren't rendered by iTextSharp

I use iTextSharp in ASP.NET MVC to return a PDF like this:
public class Pdf : IPdf
{
    public FileStreamResult Make(string s)
    {
        using (var ms = new MemoryStream())
        {
            using (var document = new Document())
            {
                PdfWriter.GetInstance(document, ms);
                document.Open();
                using (var str = new StringReader(s))
                {
                    var htmlWorker = new HTMLWorker(document);
                    htmlWorker.Parse(str);
                }
                document.Close();
            }
            HttpContext.Current.Response.ContentType = "application/pdf";
            HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=MyPdfName.pdf");
            HttpContext.Current.Response.Buffer = true;
            HttpContext.Current.Response.Clear();
            HttpContext.Current.Response.OutputStream.Write(ms.GetBuffer(), 0, ms.GetBuffer().Length);
            HttpContext.Current.Response.OutputStream.Flush();
            HttpContext.Current.Response.End();
            return new FileStreamResult(HttpContext.Current.Response.OutputStream, "application/pdf");
        }
    }
}
The problem is that characters like ă, ţ, and ş are not rendered.
yetanothercoder is correct. That would do the trick... but there's another very similar question which I answered in a bit more detail:
iText + HTMLWorker - How to change default font?
Please try the following:
var unicodeFont = iTextSharp.text.pdf.BaseFont.CreateFont(iTextSharp.text.FontFactory.TIMES_ROMAN, iTextSharp.text.pdf.BaseFont.CP1250, iTextSharp.text.pdf.BaseFont.EMBEDDED);
acroFields.SetFieldProperty("txtContractorBirthPlace", "textfont", unicodeFont, null);
And it should do it for you.
You have to add the field property to every field you wish to have diacritics.
Baftă! (Good luck!)
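For the HTMLWorker path used in the question (rather than AcroFields), another option is to register a font that covers Central European diacritics and hand the parser a stylesheet. A sketch assuming iTextSharp 4/5-era APIs; the font path and alias are assumptions:
// Register a TTF under an alias (path and alias are made up; adjust for your server).
iTextSharp.text.FontFactory.Register(@"C:\Windows\Fonts\arial.ttf", "arial-cee");

var styles = new iTextSharp.text.html.simpleparser.StyleSheet();
styles.LoadTagStyle("body", "face", "arial-cee");
styles.LoadTagStyle("body", "encoding", iTextSharp.text.pdf.BaseFont.CP1250);

var htmlWorker = new HTMLWorker(document);
htmlWorker.SetStyleSheet(styles);
htmlWorker.Parse(str);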

Lucene: Wildcards are missing from index

I am building a search index that contains special names containing !, ?, &, +, and so on. I have to treat the following searches differently:
me & you
me + you
But whatever I do (I tried QueryParser escaping before indexing, escaped it manually, tried different indexers...), those characters do not show up when I check the search index with Luke (question marks and #-symbols and the like do show up).
The logic behind this is that I am doing partial searches for live suggestions (and the fields are not that large), so I split the text up into "m", "me", "+", "y", "yo", "you" and then index it. That way it is much faster than a wildcard query search (and the index size is not a big problem).
So what I need is to also have these special wildcard characters inserted into the index.
This is my code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using Lucene.Net.Analysis;
using Lucene.Net.Util;

namespace AnalyzerSpike
{
    public class CustomAnalyzer : Analyzer
    {
        public override TokenStream TokenStream(string fieldName, TextReader reader)
        {
            return new ASCIIFoldingFilter(new LowerCaseFilter(new CustomCharTokenizer(reader)));
        }
    }

    public class CustomCharTokenizer : CharTokenizer
    {
        public CustomCharTokenizer(TextReader input) : base(input)
        {
        }

        public CustomCharTokenizer(AttributeSource source, TextReader input) : base(source, input)
        {
        }

        public CustomCharTokenizer(AttributeFactory factory, TextReader input) : base(factory, input)
        {
        }

        // Treat everything except a space as part of a token, so symbols like + and & survive.
        protected override bool IsTokenChar(char c)
        {
            return c != ' ';
        }
    }
}
The code to create the index:
private void InitIndex(string path, Analyzer analyzer)
{
    var writer = new IndexWriter(path, analyzer, true);
    // Some multiline textbox that contains one item per line:
    var all = new List<string>(txtAllAvailable.Text.Replace("\r", "").Split('\n'));
    foreach (var item in all)
    {
        writer.AddDocument(GetDocument(item));
    }
    writer.Optimize();
    writer.Close();
}
private static Document GetDocument(string name)
{
    var doc = new Document();
    doc.Add(new Field(
        "name",
        DeNormalizeName(name),
        Field.Store.YES,
        Field.Index.ANALYZED));
    doc.Add(new Field(
        "raw_name",
        name,
        Field.Store.YES,
        Field.Index.NOT_ANALYZED));
    return doc;
}
(The code is with Lucene.Net version 2.9.x but is compatible with Lucene for Java.)
Thanks
I finally had the time to look into it again, and it was a silly mistake in my DeNormalizeName method: it filtered out single-character parts (as it did in the beginning), and thus it filtered out the plus sign when surrounded by spaces :-/
Thanks for your help though, Moleski!
// Requires: using System.Text.RegularExpressions;
private static string DeNormalizeName(string name)
{
    string answer = string.Empty;
    // Keep a words-only copy alongside the original so symbol tokens survive.
    var wordsOnly = Regex.Replace(name, "[^\\w0-9 ]+", string.Empty);
    var filterText = (name != wordsOnly) ? name + " " + wordsOnly : name;
    foreach (var subName in filterText.Split(' '))
    {
        if (subName.Length >= 1)
        {
            // Expand each token into all of its prefixes for live suggestions.
            for (var j = 1; j <= subName.Length; j++)
            {
                answer += subName.Substring(0, j) + " ";
            }
        }
    }
    return answer.TrimEnd();
}
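For illustration, the corrected method applied to one of the searches from the question; the expected output in the comment is worked out by hand from the code above:
// "me + you": the words-only copy is "me  you", so filterText becomes
// "me + you me  you", and every token expands into all of its prefixes.
Console.WriteLine(DeNormalizeName("me + you"));
// -> "m me + y yo you m me y yo you"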