INSERT INTO MSSQL from textfile contains NULL values on INTEGER - sql

I have problems bulk-inserting NULL values from a text file into MSSQL.
When I replace the NULL value with a number, it works with no problem.
Two columns are set to ALLOW NULLS:
PublicationCaption and PublicationNumber.
Here is an example of the text file:
1#DI#Dagens Industri#435#358#2016-10-19
2#DN#Dagens Nyheter#NULL#359#2016-10-19
I think there is some problem with the foreach loop in the code, where I need to add something to make this work.
Here is the code I'm using:
public static DataTable Publication()
{
DataTable dtPublication = new DataTable();
dtPublication.Columns.AddRange(new DataColumn[6] { new DataColumn("ID", System.Type.GetType("System.Int32")),
new DataColumn("PublicationCode", System.Type.GetType("System.String")),
new DataColumn("PublicationCaption",System.Type.GetType("System.String")),
new DataColumn("PublicationNumber", System.Type.GetType("System.Int32")),
new DataColumn("ProductNumber", System.Type.GetType("System.Int32")),
new DataColumn("CreatedDate", System.Type.GetType("System.DateTime")),
});
for (int i = 0; i < dtPublication.Columns.Count; i++)
{
dtPublication.Columns[i].AllowDBNull = true;
}
string txtData = File.ReadAllText(@"C:\Publication2.txt", System.Text.Encoding.Default);
foreach (string row in txtData.Split('\n'))
{
if (!string.IsNullOrEmpty(row))
{
dtPublication.Rows.Add();
int i = 0;
foreach (string cell in row.Split('#'))
{
dtPublication.Rows[dtPublication.Rows.Count - 1][i] = cell;
i++;
}
}
}
return dtPublication;
}
I'm getting "The input string had an incorrect format. Unable to store in the PublicationNumber column. Type Int32 is expected." when debugging.
Please, I need some advice or help to solve this problem.
Thanks for your time.

The DataTable doesn't know that the "NULL" string you are trying to insert actually means a null value. To fix this, replace the "NULL" string with DBNull.Value:
if (cell == "NULL")
dtPublication.Rows[dtPublication.Rows.Count - 1][i] = DBNull.Value;
else
dtPublication.Rows[dtPublication.Rows.Count - 1][i] = cell;

There is no feature that translates a string "NULL" to a nullable field in a DataTable. You have to implement it yourself:
object value = DBNull.Value;
if(!"NULL".Equals(cell, StringComparison.InvariantCultureIgnoreCase))
value = cell;
dtPublication.Rows[dtPublication.Rows.Count - 1][i] = value;
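Putting these answers together, here is a minimal sketch (not taken verbatim from the answers) of how the question's parsing loop might look with the NULL check applied; the Trim() call is an extra safeguard against trailing \r characters left over from splitting on '\n':
foreach (string row in txtData.Split('\n'))
{
    if (string.IsNullOrEmpty(row))
        continue;

    DataRow dataRow = dtPublication.NewRow();
    string[] cells = row.Split('#');
    for (int i = 0; i < cells.Length; i++)
    {
        string cell = cells[i].Trim();   // guards against a trailing \r from Windows line endings
        dataRow[i] = cell.Equals("NULL", StringComparison.OrdinalIgnoreCase)
            ? (object)DBNull.Value       // store a real null instead of the literal string "NULL"
            : cell;                      // the DataTable converts the string to the column type
    }
    dtPublication.Rows.Add(dataRow);
}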

Related

Is there a way I can write to CSV faster? [duplicate]

Could somebody please tell me why the following code is not working? The data is saved into the CSV file; however, the data is not separated. It all ends up within the first cell of each row.
StringBuilder sb = new StringBuilder();
foreach (DataColumn col in dt.Columns)
{
sb.Append(col.ColumnName + ',');
}
sb.Remove(sb.Length - 1, 1);
sb.Append(Environment.NewLine);
foreach (DataRow row in dt.Rows)
{
for (int i = 0; i < dt.Columns.Count; i++)
{
sb.Append(row[i].ToString() + ",");
}
sb.Append(Environment.NewLine);
}
File.WriteAllText("test.csv", sb.ToString());
Thanks.
The following shorter version opens fine in Excel; maybe your issue was the trailing comma.
.net = 3.5
StringBuilder sb = new StringBuilder();
string[] columnNames = dt.Columns.Cast<DataColumn>().
Select(column => column.ColumnName).
ToArray();
sb.AppendLine(string.Join(",", columnNames));
foreach (DataRow row in dt.Rows)
{
string[] fields = row.ItemArray.Select(field => field.ToString()).
ToArray();
sb.AppendLine(string.Join(",", fields));
}
File.WriteAllText("test.csv", sb.ToString());
.net >= 4.0
And as Tim pointed out, if you are on .net>=4, you can make it even shorter:
StringBuilder sb = new StringBuilder();
IEnumerable<string> columnNames = dt.Columns.Cast<DataColumn>().
Select(column => column.ColumnName);
sb.AppendLine(string.Join(",", columnNames));
foreach (DataRow row in dt.Rows)
{
IEnumerable<string> fields = row.ItemArray.Select(field => field.ToString());
sb.AppendLine(string.Join(",", fields));
}
File.WriteAllText("test.csv", sb.ToString());
As suggested by Christian, if you want to handle escaping of special characters in fields, replace the loop block with:
foreach (DataRow row in dt.Rows)
{
IEnumerable<string> fields = row.ItemArray.Select(field =>
string.Concat("\"", field.ToString().Replace("\"", "\"\""), "\""));
sb.AppendLine(string.Join(",", fields));
}
And one last suggestion: you could write the CSV content line by line instead of as one whole document, to avoid keeping a big document in memory.
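A minimal sketch of that line-by-line idea, assuming the same DataTable dt as above and the usual System.IO/System.Linq usings (field escaping is omitted for brevity):
using (var writer = new StreamWriter("test.csv"))
{
    writer.WriteLine(string.Join(",", dt.Columns.Cast<DataColumn>().Select(c => c.ColumnName)));
    foreach (DataRow row in dt.Rows)
    {
        // Each line is written straight to the file, so the full CSV never sits in memory.
        writer.WriteLine(string.Join(",", row.ItemArray.Select(field => field.ToString())));
    }
}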
I wrapped this up into an extension class, which allows you to call:
myDataTable.WriteToCsvFile("C:\\MyDataTable.csv");
on any DataTable.
public static class DataTableExtensions
{
public static void WriteToCsvFile(this DataTable dataTable, string filePath)
{
StringBuilder fileContent = new StringBuilder();
foreach (var col in dataTable.Columns)
{
fileContent.Append(col.ToString() + ",");
}
fileContent.Replace(",", System.Environment.NewLine, fileContent.Length - 1, 1);
foreach (DataRow dr in dataTable.Rows)
{
foreach (var column in dr.ItemArray)
{
fileContent.Append("\"" + column.ToString() + "\",");
}
fileContent.Replace(",", System.Environment.NewLine, fileContent.Length - 1, 1);
}
System.IO.File.WriteAllText(filePath, fileContent.ToString());
}
}
A new extension function based on Paul Grimshaw's answer. I cleaned it up and added the ability to handle unexpected data (empty data, embedded quotes, and commas in the headings).
It also returns a string, which is more flexible. It returns null if the table object does not contain any structure.
public static string ToCsv(this DataTable dataTable) {
StringBuilder sbData = new StringBuilder();
// Only return Null if there is no structure.
if (dataTable.Columns.Count == 0)
return null;
foreach (var col in dataTable.Columns) {
if (col == null)
sbData.Append(",");
else
sbData.Append("\"" + col.ToString().Replace("\"", "\"\"") + "\",");
}
sbData.Replace(",", System.Environment.NewLine, sbData.Length - 1, 1);
foreach (DataRow dr in dataTable.Rows) {
foreach (var column in dr.ItemArray) {
if (column == null)
sbData.Append(",");
else
sbData.Append("\"" + column.ToString().Replace("\"", "\"\"") + "\",");
}
sbData.Replace(",", System.Environment.NewLine, sbData.Length - 1, 1);
}
return sbData.ToString();
}
You call it as follows:
var csvData = dataTableOject.ToCsv();
If your calling code is referencing the System.Windows.Forms assembly, you may consider a radically different approach.
My strategy is to use the functions already provided by the framework to accomplish this in very few lines of code and without having to loop through columns and rows. What the code below does is programmatically create a DataGridView on the fly and set the DataGridView.DataSource to the DataTable. Next, I programmatically select all the cells (including the header) in the DataGridView and call DataGridView.GetClipboardContent(), placing the results into the Windows Clipboard. Then, I 'paste' the contents of the clipboard into a call to File.WriteAllText(), making sure to specify the formatting of the 'paste' as TextDataFormat.CommaSeparatedValue.
Here is the code:
public static void DataTableToCSV(DataTable Table, string Filename)
{
using(DataGridView dataGrid = new DataGridView())
{
// Save the current state of the clipboard so we can restore it after we are done
IDataObject objectSave = Clipboard.GetDataObject();
// Set the DataSource
dataGrid.DataSource = Table;
// Choose whether to write header. Use EnableWithoutHeaderText instead to omit header.
dataGrid.ClipboardCopyMode = DataGridViewClipboardCopyMode.EnableAlwaysIncludeHeaderText;
// Select all the cells
dataGrid.SelectAll();
// Copy (set clipboard)
Clipboard.SetDataObject(dataGrid.GetClipboardContent());
// Paste (get the clipboard and serialize it to a file)
File.WriteAllText(Filename,Clipboard.GetText(TextDataFormat.CommaSeparatedValue));
// Restore the current state of the clipboard so the effect is seamless
if(objectSave != null) // If we try to set the Clipboard to an object that is null, it will throw...
{
Clipboard.SetDataObject(objectSave);
}
}
}
Notice I also make sure to preserve the contents of the clipboard before I begin, and restore it once I'm done, so the user does not get a bunch of unexpected garbage the next time they try to paste. The main caveats to this approach are: 1) your class has to reference System.Windows.Forms, which may not be the case in a data abstraction layer, 2) your assembly will have to target the .NET 4.5 framework, as DataGridView does not exist in 4.0, and 3) the method will fail if the clipboard is being used by another process.
Anyway, this approach may not be right for your situation, but it is interesting nonetheless, and can be another tool in your toolbox.
I did this recently but included double quotes around my values.
For example, change these two lines:
sb.Append("\"" + col.ColumnName + "\",");
...
sb.Append("\"" + row[i].ToString() + "\",");
Try changing sb.Append(Environment.NewLine); to sb.AppendLine();.
StringBuilder sb = new StringBuilder();
foreach (DataColumn col in dt.Columns)
{
sb.Append(col.ColumnName + ',');
}
sb.Remove(sb.Length - 1, 1);
sb.AppendLine();
foreach (DataRow row in dt.Rows)
{
for (int i = 0; i < dt.Columns.Count; i++)
{
sb.Append(row[i].ToString() + ",");
}
sb.AppendLine();
}
File.WriteAllText("test.csv", sb.ToString());
4 lines of code:
public static string ToCSV(DataTable tbl)
{
StringBuilder strb = new StringBuilder();
//column headers
strb.AppendLine(string.Join(",", tbl.Columns.Cast<DataColumn>()
.Select(s => "\"" + s.ColumnName + "\"")));
//rows
tbl.AsEnumerable().Select(s => strb.AppendLine(
string.Join(",", s.ItemArray.Select(
i => "\"" + i.ToString() + "\"")))).ToList();
return strb.ToString();
}
Note that the ToList() at the end is important; I need something to force the expression to be evaluated. If I were code golfing, I could use Min() instead.
Also note that the result will have a newline at the end because of the last call to AppendLine(). You may not want this; you can simply call TrimEnd() to remove it.
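For example (an illustrative tweak, not part of the original answer), the method's final line could become:
return strb.ToString().TrimEnd();   // drops the trailing newline added by the last AppendLine()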
Try putting ; instead of ,
Hope it helps.
The error is the list separator.
Instead of writing sb.Append(something... + ','), you should use something like sb.Append(something... + System.Globalization.CultureInfo.CurrentCulture.TextInfo.ListSeparator);
You must use the list separator character configured in your operating system (as in the example above), or the list separator of the client machine where the file is going to be viewed. Another option would be to configure it in the app.config or web.config as a parameter of your application.
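For illustration, a rough sketch of that idea applied to the header loop from the question; everything except the separator lookup is unchanged:
// Use the list separator configured on the machine (";" in many European locales).
string sep = System.Globalization.CultureInfo.CurrentCulture.TextInfo.ListSeparator;
StringBuilder sb = new StringBuilder();
foreach (DataColumn col in dt.Columns)
{
    sb.Append(col.ColumnName + sep);
}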
To write to a file, I think the following method is the most efficient and straightforward: (You can add quotes if you want)
public static void WriteCsv(DataTable dt, string path)
{
using (var writer = new StreamWriter(path)) {
writer.WriteLine(string.Join(",", dt.Columns.Cast<DataColumn>().Select(dc => dc.ColumnName)));
foreach (DataRow row in dt.Rows) {
writer.WriteLine(string.Join(",", row.ItemArray));
}
}
}
A better implementation would be
var result = new StringBuilder();
for (int i = 0; i < table.Columns.Count; i++)
{
result.Append(table.Columns[i].ColumnName);
result.Append(i == table.Columns.Count - 1 ? "\n" : ",");
}
foreach (DataRow row in table.Rows)
{
for (int i = 0; i < table.Columns.Count; i++)
{
result.Append(row[i].ToString());
result.Append(i == table.Columns.Count - 1 ? "\n" : ",");
}
}
File.WriteAllText("test.csv", result.ToString());
To mimic Excel CSV:
public static string Convert(DataTable dt)
{
StringBuilder sb = new StringBuilder();
IEnumerable<string> columnNames = dt.Columns.Cast<DataColumn>().
Select(column => column.ColumnName);
sb.AppendLine(string.Join(",", columnNames));
foreach (DataRow row in dt.Rows)
{
IEnumerable<string> fields = row.ItemArray.Select(field =>
{
string s = field.ToString().Replace("\"", "\"\"");
if(s.Contains(','))
s = string.Concat("\"", s, "\"");
return s;
});
sb.AppendLine(string.Join(",", fields));
}
return sb.ToString().Trim();
}
Here is an enhancement to vc-74's post that handles commas the same way Excel does: Excel puts quotes around a value only if the value contains a comma.
public static string ToCsv(this DataTable inDataTable, bool inIncludeHeaders = true)
{
var builder = new StringBuilder();
var columnNames = inDataTable.Columns.Cast<DataColumn>().Select(column => column.ColumnName);
if (inIncludeHeaders)
builder.AppendLine(string.Join(",", columnNames));
foreach (DataRow row in inDataTable.Rows)
{
var fields = row.ItemArray.Select(field => field.ToString().WrapInQuotesIfContains(","));
builder.AppendLine(string.Join(",", fields));
}
return builder.ToString();
}
public static string WrapInQuotesIfContains(this string inString, string inSearchString)
{
if (inString.Contains(inSearchString))
return "\"" + inString+ "\"";
return inString;
}
Here is my solution, based on previous answers by Paul Grimshaw and Anthony VO.
I've submitted the code in a C# project on Github.
My main contribution is to eliminate explicitly creating and manipulating a StringBuilder and instead work only with IEnumerable. This avoids allocating a big buffer in memory.
public static class Util
{
public static string EscapeQuotes(this string self) {
return self?.Replace("\"", "\"\"") ?? "";
}
public static string Surround(this string self, string before, string after) {
return $"{before}{self}{after}";
}
public static string Quoted(this string self, string quotes = "\"") {
return self.Surround(quotes, quotes);
}
public static string QuotedCSVFieldIfNecessary(this string self)
{
return (self == null) ? "" : (self.Contains('"') || self.Contains('\r') || self.Contains('\n') || self.Contains(',')) ? self.Quoted() : self;
}
public static string ToCsvField(this string self) {
return self.EscapeQuotes().QuotedCSVFieldIfNecessary();
}
public static string ToCsvRow(this IEnumerable<string> self){
return string.Join(",", self.Select(ToCsvField));
}
public static IEnumerable<string> ToCsvRows(this DataTable self) {
yield return self.Columns.OfType<object>().Select(c => c.ToString()).ToCsvRow();
foreach (var dr in self.Rows.OfType<DataRow>())
yield return dr.ItemArray.Select(item => item.ToString()).ToCsvRow();
}
public static void ToCsvFile(this DataTable self, string path) {
File.WriteAllLines(path, self.ToCsvRows());
}
}
This approach combines nicely with converting IEnumerable to DataTable as asked here.
StringBuilder sb = new StringBuilder();
SaveFileDialog fileSave = new SaveFileDialog();
IEnumerable<string> columnNames = tbCifSil.Columns.Cast<DataColumn>().
Select(column => column.ColumnName);
sb.AppendLine(string.Join(",", columnNames));
foreach (DataRow row in tbCifSil.Rows)
{
IEnumerable<string> fields = row.ItemArray.Select(field =>string.Concat("\"", field.ToString().Replace("\"", "\"\""), "\""));
sb.AppendLine(string.Join(",", fields));
}
fileSave.ShowDialog();
File.WriteAllText(fileSave.FileName, sb.ToString());
public void ExportToCSV(DataTable dtDataTable, string strFilePath)
{
StreamWriter sw = new StreamWriter(strFilePath, false);
//headers
for (int i = 0; i < dtDataTable.Columns.Count; i++)
{
sw.Write(dtDataTable.Columns[i].ToString().Trim());
if (i < dtDataTable.Columns.Count - 1)
{
sw.Write(",");
}
}
sw.Write(sw.NewLine);
foreach (DataRow dr in dtDataTable.Rows)
{
for (int i = 0; i < dtDataTable.Columns.Count; i++)
{
if (!Convert.IsDBNull(dr[i]))
{
string value = dr[i].ToString().Trim();
if (value.Contains(','))
{
value = String.Format("\"{0}\"", value);
sw.Write(value);
}
else
{
sw.Write(dr[i].ToString().Trim());
}
}
if (i < dtDataTable.Columns.Count - 1)
{
sw.Write(",");
}
}
sw.Write(sw.NewLine);
}
sw.Close();
}
Possibly the easiest way is to use:
https://github.com/ukushu/DataExporter
especially if the cells of your DataTable contain \r\n characters or the separator symbol. Almost all of the other answers will not work with such cells.
All you need to do is write the following code:
Csv csv = new Csv("\t");//Needed delimiter
var columnNames = dt.Columns.Cast<DataColumn>().
Select(column => column.ColumnName).ToArray();
csv.AddRow(columnNames);
foreach (DataRow row in dt.Rows)
{
var fields = row.ItemArray.Select(field => field.ToString()).ToArray();
csv.AddRow(fields);
}
csv.Save();
Most existing answers can easily cause an OutOfMemoryException, so I decided to write my own answer.
DON'T DO THIS:
Using a DataSet + StringBuilder causes the data to occupy memory three times at once:
Load all data into the DataSet
Copy all data into the StringBuilder
Copy the data to a string using StringBuilder.ToString();
Instead you should write each row to a FileStream separately. There is no need to create the whole CSV in memory.
Even better, use a DataReader instead of a DataSet. That way you can read billions of records from the database one by one and write them to a file one by one.
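As an illustration of that streaming approach (not code from this thread), here is a minimal sketch using a SqlDataReader and a StreamWriter; the connection string and query are placeholders, and field quoting/escaping is omitted for brevity:
using System.Data.SqlClient;   // or Microsoft.Data.SqlClient on newer stacks
using System.IO;
using System.Linq;

public static void StreamQueryToCsv(string connectionString, string query, string path)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(query, connection))
    using (var writer = new StreamWriter(path))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            // Header row taken from the reader's schema.
            var names = Enumerable.Range(0, reader.FieldCount).Select(reader.GetName);
            writer.WriteLine(string.Join(",", names));

            // One record at a time: only a single row is ever held in memory.
            while (reader.Read())
            {
                var values = new object[reader.FieldCount];
                reader.GetValues(values);
                writer.WriteLine(string.Join(",", values));
            }
        }
    }
}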
If you don't mind using an external library for CSV, I can recommend the most popular CsvHelper, which has no dependencies.
using (var writer = new StreamWriter("test.csv"))
using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
foreach (DataColumn dc in dt.Columns)
{
csv.WriteField(dc.ColumnName);
}
csv.NextRecord();
foreach (DataRow dr in dt.Rows)
{
foreach (DataColumn dc in dt.Columns)
{
csv.WriteField(dr[dc]);
}
csv.NextRecord();
}
}
In case anyone else stumbles on this: I was using File.ReadAllText to get CSV data, then I modified it and wrote it back with File.WriteAllText. The \r\n CRLFs were fine, but the \t tabs were ignored when Excel opened the file. (All the solutions in this thread so far use a comma delimiter, but that doesn't matter.) Notepad showed the same format in the resulting file as in the source, and a diff even showed the files as identical. But I got a clue when I opened the file in Visual Studio with a binary editor: the source file was Unicode but the target was ASCII. To fix it, I passed System.Text.Encoding.Unicode as the encoding argument to both ReadAllText and WriteAllText, and from there Excel was able to open the updated file.
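A minimal sketch of that encoding fix; the path and the modification are placeholders:
string path = @"C:\data\export.csv";                                // placeholder path
string text = File.ReadAllText(path, System.Text.Encoding.Unicode); // read with an explicit encoding
text = text.Replace("OldValue", "NewValue");                        // whatever edit is needed
File.WriteAllText(path, text, System.Text.Encoding.Unicode);        // write it back with the same encoding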

Why has the last empty row been pulled as an additional test with null values in Selenium Apache POI

I have four rows in an Excel sheet; the first row is the heading and the remaining three rows contain values. I wrote the code so that it skips the header and reads only the rows that contain values. However, instead of fetching only three rows, it also fetches one additional row of null values, as shown below. Why is it fetching the null values; did I miss anything? Find the code and error message below.
Message
PASSED: testShipment("Mumbai", "New York", "18000", "10000", "20000")
PASSED: testShipment("Mumbai", "Cochin", "2000", "30000", "5000")
PASSED: testShipment("Cochin", "Farah", "16000", "18000", "19000")
FAILED: testShipment(null, null, null, null, null)
Code
int TotalCol = sh.getRow(0).getLastCellNum();
int Totalrows = sh.getLastRowNum()+1;
String[][] data = new String[Totalrows][TotalCol];
DataFormatter formatter = new DataFormatter(); // creating formatter using the default locale
for (int i = 1; i < Totalrows; i++) {
Row r = sh.getRow(i);
for (int j = 0; j < TotalCol; j++) {
Cell c = r.getCell(j);
try {
if (c.getCellType() == Cell.CELL_TYPE_STRING) {
String j_username = formatter.formatCellValue(c);
data[i][j] = j_username;
System.out.println("data[i][j]" + data[i][j]);
} else {
data[i][j] = String.valueOf(c.getNumericCellValue());
String j_username = formatter.formatCellValue(c);
data[i][j] = j_username;
System.out.println("data[i][j] numeric val" + data[i][j]);
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
Try the code below, which checks for a null cell:
for (int k = 1; k <= totalRows; k++) {
String testCaseID = sheet.getRow(k).getCell(0).getStringCellValue();
if (testCaseID.equalsIgnoreCase(tcID)) {
for (int l = 1; l < totalCols; l++) {
String testData_FieldName = sheet.getRow(0).getCell(l).getStringCellValue();
if (testData_FieldName.equalsIgnoreCase(header)) {
cell = sheet.getRow(k).getCell(l);
if (cell != null) {
switch (cell.getCellType()) {
case Cell.CELL_TYPE_NUMERIC:// numeric value in excel
result = cell.getNumericCellValue();
break;
case Cell.CELL_TYPE_STRING: // string value in excel
result = cell.getStringCellValue();
break;
case Cell.CELL_TYPE_BOOLEAN: // boolean value in excel
result = cell.getBooleanCellValue();
break;
case Cell.CELL_TYPE_BLANK: // blank value in excel
result = cell.getStringCellValue();
break;
case Cell.CELL_TYPE_ERROR: // Error value in excel
result = cell.getErrorCellValue() + "";
break;
default:
throw new CustomException("The cell data type is invalid");
}
}
}
}
k = totalRows + 1;
}
}
You need to change either the data array declaration or the Totalrows calculation. Currently you create a 4-row array but assign values to only 3 rows, so the 4th row is left holding null values.
String[][] data = new String[Totalrows][TotalCol];
In your string array you are not persisting the header value, only the data values. So please modify your code with one of the options below (I would suggest option 1).
Option 1:
Remove the +1 from Totalrows variable and add the equal condition in your first for loop
//Removed the +1
int Totalrows = sh.getLastRowNum();
String[][] data = new String[Totalrows][TotalCol];
DataFormatter formatter = new DataFormatter(); // creating formatter using the default locale
//Condition is modified as i <= Totalrows
for (int i = 1; i <= Totalrows; i++) {
Option 2:
Change the data[][] declaration part
int Totalrows = sh.getLastRowNum()+1;
String[][] data = new String[Totalrows-1][TotalCol];
Here is the code that works, thanks to everyone for helping on this!
int TotalCol = sh.getRow(0).getLastCellNum();
int Totalrows = sh.getLastRowNum()+1;
//Entering minus one(-1) during data declaration ignores the first row as first row is a header
String[][] data = new String[Totalrows-1][TotalCol];
DataFormatter formatter = new DataFormatter(); // creating formatter using the default locale
for (int i = 1; i <Totalrows; i++) {
Row r = sh.getRow(i);
for (int j = 0; j < TotalCol; j++) {
Cell c = r.getCell(j);
try {
if (c.getCellType() == Cell.CELL_TYPE_STRING) {
String j_username = formatter.formatCellValue(c);
//Using data[i-1] shifts the values up by one row, so the first data row after the header lands in data[0]; the header row was already excluded by the smaller array size declared above, so the actual table starts from the second sheet row.
data[i-1][j] = j_username;
System.out.println("data[i-1][j]" + data[i-1][j]);
} else {
data[i-1][j] = String.valueOf(c.getNumericCellValue());
String j_username = formatter.formatCellValue(c);
data[i-1][j] = j_username;
System.out.println("data[i-1][j] numeric val" + data[i-1][j]);
}
} catch (Exception e) {
e.printStackTrace();
}
}
"

ASP.Net MVC Passing array value to database table columns

I used the code from http://arranmaclean.wordpress.com/2010/07/20/net-mvc-upload-a-csv-file-to-database-with-bulk-upload/#comment-188 to upload, read, and insert the CSV file into the DB. My problem now is how I can pass the values from the CSV file to specific database table columns.
string Feedback = string.Empty;
string line = string.Empty;
string[] strArray;
DataTable dt = new DataTable();
DataRow row;
Regex r = new Regex(",(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))");
StreamReader sr = new StreamReader(fileName);
line = sr.ReadLine();
strArray = r.Split(line);
Array.ForEach(strArray, s => dt.Columns.Add(new DataColumn()));
while ((line = sr.ReadLine()) != null)
{
row = dt.NewRow();
row.ItemArray = r.Split(line);
dt.Rows.Add(row);
}
and ...
private static String ProcessBulkCopy(DataTable dt)
{
string Feedback = string.Empty;
string connString = ConfigurationManager.ConnectionStrings["DataBaseConnectionString"].ConnectionString;
using( SqlConnection conn = new SqlConnection(connString))
{
using (var copy = new SqlBulkCopy(conn))
{
conn.Open();
copy.DestinationTableName = "BulkImportDetails";
copy.BatchSize = dt.Rows.Count;
try
{
copy.WriteToServer(dt);
Feedback = "Upload complete";
}
catch (Exception ex)
{
Feedback = ex.Message;
}
}
}
return Feedback;
}
Below are my sample contents:
08/01/12,05:20:12 AM,243752,,South Lobby3,522557,IN
08/01/12,05:26:03 AM,188816,,North Lobby1,358711,IN
My DB table columns:
empno | date | time
I only need to insert the first three fields (e.g. 08/01/12, 05:20:12 AM, 243752) and then proceed to the next row and insert it the same way into the specified columns. My CSV file doesn't have headers. I saw code about passing the array values, but it requires headers. How can I pass the values even without a header in my CSV file? Please help me, guys. Thank you.
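The thread above does not include an answer, but as an illustration, here is a minimal sketch of how SqlBulkCopy column mappings could be added to the ProcessBulkCopy method for a header-less CSV: when explicit mappings are present, only the mapped source columns are copied. The destination column names (empno, date, time) come from the question; the field order is an assumption based on the sample rows:
using (SqlConnection conn = new SqlConnection(connString))
{
    using (var copy = new SqlBulkCopy(conn))
    {
        conn.Open();
        copy.DestinationTableName = "BulkImportDetails";

        // Map CSV field positions (no headers needed) to destination columns by ordinal.
        // Assumed order from the sample rows: date, time, empno; remaining fields are ignored.
        copy.ColumnMappings.Add(0, "date");
        copy.ColumnMappings.Add(1, "time");
        copy.ColumnMappings.Add(2, "empno");

        copy.WriteToServer(dt);   // dt is the DataTable built from the header-less CSV
    }
}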

Existing posts keep getting re-added upon deletion of the selected row in a JTable

I am trying to refresh the data of a JTable upon deletion of the selected row. Here is my code to set up the table:
private JTable getJTableManageReplies() {
jTableManageReplies.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
jTableManageReplies.getSelectionModel().addListSelectionListener(
new ListSelectionListener() {
@Override
public void valueChanged(ListSelectionEvent e) {
if (!e.getValueIsAdjusting()) {
int viewRow = jTableManageReplies.getSelectedRow();
// Get the first column data of the selectedrow
int replyID = Integer.parseInt(jTableManageReplies.getValueAt(
viewRow, 0).toString());
eForumRepliesAdmin reply = new eForumRepliesAdmin(replyID);
replyID = JOptionPane.showConfirmDialog(null, "Are you sure that you want to delete the selected reply? " , "Delete replies", JOptionPane.YES_NO_OPTION);
if(replyID == JOptionPane.YES_OPTION){
reply.deleteReply();
JOptionPane.showMessageDialog(null, "Reply has been deleted successfully.");
SetUpJTableManageReplies();
}
}
}
});
return jTableManageReplies;
}
public void SetUpJTableManageReplies() {
DefaultTableModel tableModel = (DefaultTableModel) jTableManageReplies
.getModel();
String[] data = new String[5];
db.setUp("IT Innovation Project");
String sql = "Select forumReplies.reply_ID,forumReplies.reply_topic,forumTopics.topic_title,forumReplies.reply_content,forumReplies.reply_by from forumReplies,forumTopics WHERE forumReplies.reply_topic = forumTopics.topic_id ";
ResultSet resultSet = null;
resultSet = db.readRequest(sql);
jTableManageReplies.repaint();
tableModel.getDataVector().removeAllElements();
try {
while (resultSet.next()) {
data[0] = resultSet.getString("reply_ID");
data[1] = resultSet.getString("reply_topic");
data[2] = resultSet.getString("topic_title");
data[3] = resultSet.getString("reply_content");
data[4] = resultSet.getString("reply_by");
tableModel.addRow(data);
}
resultSet.close();
} catch (Exception e) {
System.out.println(e);
}
}
And this is my sql statement :
public boolean deleteReply() {
boolean success = false;
DBController db = new DBController();
db.setUp("IT Innovation Project");
String sql = "DELETE FROM forumReplies where reply_ID = " + replyID
+ "";
if (db.updateRequest(sql) == 1)
success = true;
db.terminate();
return success;
}
I called repaint() to update the table with the newest data from the database, and it works; I mean the data after deletion of a certain row. However, the existing posts keep getting re-added. So I added the removeAllElements method to remove all the existing posts, because my SQL statement is select * from table. Then I get an error, an ArrayIndexOutOfBoundsException. Any guidance on how to fix this? Thanks in advance.
I called the repaint() to update the table data with the newest data
in database and it works.
There is no need to call the repaint method when data is changed. Data changes are handled by the table model (DefaultTableModel in this case), and the fireXXX methods are what need to be called whenever data changes; but since you are using DefaultTableModel, even those are not required (by default it calls these methods whenever there is a change).
I think the problem is in the valueChanged(..) method. You are reading the value of the selected row without checking whether the table has any rows. So add a guard:
int viewRow = jTableManageReplies.getSelectedRow();
// Get the first column data of the selected row
if (jTableManageReplies.getRowCount() > 0) {
int replyID = Integer.parseInt(jTableManageReplies.getValueAt(viewRow, 0).toString());
}

StyledDocument adding extra count to indexof for each line of file

I have a strange problem (at least it appears that way): when searching for a string in a textPane, I get an extra index for each line that is searched when I use the StyledDocument versus just getting the text from the textPane. I get the same text from the same pane; it's just that one comes from the plain text and the other from the styled document. Am I missing something here? I'll try to list as many of the differences between the two versions I am working with as I can.
The plain text version:
public int displayXMLFile(String path, int target){
InputStreamReader inputStream;
FileInputStream fileStream;
BufferedReader buffReader;
if(target == 1){
try{
File file = new File(path);
fileStream = new FileInputStream(file);
inputStream = new InputStreamReader(fileStream,"UTF-8");
buffReader = new BufferedReader(inputStream);
StringBuffer content = new StringBuffer("");
String line = "";
while((line = buffReader.readLine())!=null){
content.append(line+"\n");
}
buffReader.close();
xhw.txtDisplay_1.setText(content.toString());
}
catch(Exception e){
e.printStackTrace();
return -1;
}
}
return 0; // assumed return value for the success path; the posted snippet appears to have been trimmed
}
versus the StyledDocument version (without the styles applied):
protected void openFile(String path, StyledDocument sDoc, int target)
throws BadLocationException {
FileInputStream fileStream;
String file;
if(target == 1){
file = "Openning First File";
} else {
file = "Openning Second File";
}
try {
fileStream = new FileInputStream(path);
// Get the object of DataInputStream
//DataInputStream in = new DataInputStream(fileStream);
ProgressMonitorInputStream in = new ProgressMonitorInputStream(
xw.getContentPane(), file, fileStream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));
String strLine;
//Read File Line By Line
while ((strLine = br.readLine()) != null) {
sDoc.insertString(sDoc.getLength(), strLine + "\n", sDoc.getStyle("regular"));
xw.updateProgress(target);
}
//Close the input stream
in.close();
} catch (Exception e){//Catch exception if any
System.err.println("Error: " + e.getMessage());
}
}
This is how I search:
public int searchText(int sPos, int target) throws BadLocationException{
String search = xhw.textSearch.getText();
String contents;
JTextPane searchPane;
if(target == 1){
searchPane = xhw.txtDisplay_1;
} else {
searchPane = xhw.txtDisplay_2;
}
if(xhw.textSearch.getText().isEmpty()){
xhw.displayDialog("Nothing to search for");
highlight(searchPane, null, 0,0);
} else {
contents = searchPane.getText();
// Search for the desired string starting at cursor position
int newPos = contents.indexOf( search, sPos );
// cycle cursor to beginning of doc window
if (newPos == -1 && sPos > 0){
sPos = 0;
newPos = contents.indexOf( search, sPos );
}
if ( newPos >= 0 ) {
// Select occurrence if found
highlight(searchPane, contents, newPos, target);
sPos = newPos + search.length()+1;
} else {
xhw.displayDialog("\"" + search + "\"" + " was not found in File " + target);
}
}
return sPos;
}
The sample file:
<?xml version="1.0" encoding="UTF-8"?>
<AlternateDepartureRoutes>
<AlternateDepartureRoute>
<AdrName>BOIRR</AdrName>
<AdrRouteAlpha>..BROPH..</AdrRouteAlpha>
<TransitionFix>
<FixName>BROPH</FixName>
</TransitionFix>
</AlternateDepartureRoute>
<AlternateDepartureRoute>
</AlternateDepartureRoutes>
And my highlighter:
public void highlight(JTextPane tPane, String text, int position, int target) throws BadLocationException {
Highlighter highlighter = new DefaultHighlighter();
Highlighter.HighlightPainter painter = new DefaultHighlighter.DefaultHighlightPainter(Color.LIGHT_GRAY);
tPane.setHighlighter(highlighter);
String searchText = xhw.textSearch.getText();
String document = tPane.getText();
int startOfSString = document.indexOf(searchText,position);
if(startOfSString >= 0){
int endOfSString = startOfSString + searchText.length();
highlighter.addHighlight(startOfSString, endOfSString, painter);
tPane.setCaretPosition(endOfSString);
int caretPos = tPane.getCaretPosition();
javax.swing.text.Element root = tPane.getDocument().getDefaultRootElement();
int lineNum = root.getElementIndex(caretPos) +1;
if (target == 1){
xhw.txtLineNum1.setText(Integer.toString(lineNum));
} else if (target == 2){
xhw.txtLineNum2.setText(Integer.toString(lineNum));
} else {
xhw.txtLineNum1.setText(null);
xhw.txtLineNum2.setText(null);
}
} else {
highlighter.removeAllHighlights();
}
}
When I do a search for Alt with indexOf() I get 40 for the plain text (which is what it should return) and 41 when searching with the styled doc. And for each additional line that Alt appears on, I get an extra index (so the indexOf() call returns 2 more than needed on line 3). This happens for every additional line where it is found. Am I missing something obvious? (If I need to reduce this to a smaller single class to make it easier to check, I can do that later when I have some more time.)
Thanks in advance...
If you are on Windows, then the TextComponent text (searchPane.getText()) can contain carriage-return+newline characters (\r\n), but the TextComponent's Styled Document (sSearchPane.getText(0, sSearchPane.getLength())) contains only newline characters (\n). That's why your newPos is always larger than newPosS by the number of newlines at that point. To fix this, in your search function you can change:
contents = searchPane.getText();
to:
contents = searchPane.getText().replaceAll("\r\n","\n");
That way the search occurs with the same indices that the Styled Document is using.
OK, I have found a solution (basically). I approached this from the angle that I am getting text from the same text component in two different ways...
String search = xw.textSearch.getText();
String contents;
String contentsS;
JTextPane searchPane;
StyledDocument sSearchPane;
searchPane = xw.txtDisplay_left;
sSearchPane = xw.txtDisplay_left.getStyledDocument();
contents = searchPane.getText();
contentsS = sSearchPane.getText(0, sSearchPane.getLength());
// Search for the desired string starting at cursor position
int newPos = contents.indexOf( search, sPos );
int newPosS = contentsS.indexOf(search, sPos);
So when comparing the two variables "newPos" and "newPosS", newPos returned 1 more than newPosS for each line that the search string was found on. Looking at the sample file and searching for "Alt", the first instance is found on line 2: "newPos" returns 41 and "newPosS" returns 40 (which then highlights the correct text). The next occurrence (found on line 3): "newPos" returns 71 and "newPosS" returns 69. As you can see, the difference grows by one with each additional line the occurrence appears on. I would suspect that there is an extra character being added for each new line in the text from the textPane that is not present in the StyledDocument.
I'm sure there is a reasonable explanation, but I don't have it at this time.