SQLite with Android Studio, return all records selected by 2 arguments - sql

Let's assume that I have a table with columns such as:
ID  SSID  BSSID     RSSI
1   abcd  hs:hd:sd  -60
2   abcd  hs:hd:po  -68
There are about 5000 records with the same SSID, slightly different BSSIDs, and varying LEVEL values. My device scans the surrounding environment for WiFi networks, so I know their MAC addresses and RSSI levels. I pick the 3 with the highest RSSI values.
First, I would like to know if it is possible to search the database for all records with a LEVEL value equal to or close to 60, for instance 59, 58, or 61.
Secondly, is there a way to query the database to return all the records with the same MAC addresses and RSSI values as the 3 best scan results? If so, what would that query look like?
EDIT: Thanks for all the answers. What I'm trying to do now is to compare 3 scans with the records stored in the database using the getRequiredData function. I would like to pass 2 parameters to this function, MAC address and level, and find records with the same value for both parameters. The rawQuery seems to be fine and the code compiles, but the app crashes on the first scan. I can't find the cause of it: is my logic for getting these parameters wrong, or does it have something to do with the query?
public Cursor getRequiredData(String mac, int level){
    SQLiteDatabase db = this.getWritableDatabase();
    Cursor res = db.rawQuery("SELECT BSSID, RSSI FROM TABLE_NAME WHERE BSSID =? AND RSSI=?", new String[] {mac, level});
    return res;
}
The scan part:
class WifiReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        sb = new StringBuilder();
        Comparator<ScanResult> comparator = new Comparator<ScanResult>() {
            @Override
            public int compare(ScanResult o1, ScanResult o2) {
                return (o1.level > o2.level ? -1 : (o1.level == o2.level ? 0 : 1));
            }
        };
        lista = wifiManager.getScanResults();
        Collections.sort(lista, comparator);
        for (int i = 0; i < lista.size(); i++) {
            scanResult = wifiManager.getScanResults().get(i);
            sb.append(new Integer(i + 1).toString() + ". " + (lista.get(i)).SSID + " " + (lista.get(i)).BSSID + " " + (lista.get(i)).level + "\n");
            boolean isInserted = myDb.insertData(lista.get(i).SSID.toString(), lista.get(i).BSSID.toString(), lista.get(i).level);
            if (isInserted = true)
                Toast.makeText(MainActivity.this, "Data inserted", Toast.LENGTH_LONG).show();
            else
                Toast.makeText(MainActivity.this, "Data not inserted", Toast.LENGTH_LONG).show();
        }
        for (int i = 0; i < 4; i++) {
            scanResult = wifiManager.getScanResults().get(i);
            match = myDb.getRequiredData(lista.get(i).BSSID.toString(), lista.get(i).level);
        }
        Log.i("match values: ", DatabaseUtils.dumpCursorToString(match));
        txt.setText(sb);
        wifiManager.startScan();
    }
}
Here is what match contains:
2018-12-10 16:36:26.334 13347-13347/com.example.maciek.wifiscann I/match values:: >>>>> Dumping cursor android.database.sqlite.SQLiteCursor@e1a86d1
0 {
BSSID=f4:c5:ed:5c:s6:20
RSSI=-69
}
1 {
BSSID=f4:c5:ed:5c:s6:20
RSSI=-69
}
2 {
BSSID=f4:c5:ed:5c:s6:20
RSSI=-69
}
3 {
BSSID=f4:c5:ed:5c:s6:20
RSSI=-69
}
4 {
BSSID=f4:c5:ed:5c:s6:20
RSSI=-69
}
5 {
BSSID=f4:c5:ed:5c:s6:20
RSSI=-69
}
<<<<<

To get the 3 rows with the closest values to 60 in column LEVEL:
SELECT * FROM tablename ORDER BY ABS(LEVEL - 60), LEVEL LIMIT 3
For the 2nd part of your question, you should provide sample data from the table.
Edit:
From the sample data that you posted I don't see a column RSSI, but if it exists in the table then the SELECT statement is OK.
Change the 2nd parameter of rawQuery() to:
new String[] {mac, String.valueOf(level)}
because level is an int and the bind arguments must be strings.
In onReceive() you use myDb; I don't know how you initialize it.
If the app crashes, copy the part of the log that identifies the problem and post it.

First, I would like to know if it is possible to search the database for all records with a LEVEL value equal to or close to 60, for instance 59, 58, or 61.
SELECT * FROM your_table WHERE level BETWEEN 59 AND 61;
where your_table is the respective table name.
Note: if levels are negative (as per the example data), BETWEEN requires the lowest value first, so it would be BETWEEN -61 AND -59.
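An equivalent tolerance-based form avoids having to think about the bound order at all. A sketch written against the question's negative RSSI-style values:
-- Matches -61, -60 and -59; the same form works for positive targets
SELECT * FROM your_table WHERE ABS(level + 60) <= 1;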
Secondly, is there a way to query the database to return all the records with the same MAC addresses and RSSI values as the 3 best scan results? If so, what would that query look like?
SELECT * FROM your_table WHERE your_mac_address_column = 'the_mac_address_value' AND RSSI = 'the_rssi_value' ORDER BY LEVEL DESC LIMIT 3
Note: the above assumes that the MAC address is stored in a column (if not, it cannot be done unless the MAC address can be correlated to a column).
It also assumes the best LEVEL is the one closest to zero, so -1 is better than -60 (if not, use ASC instead of DESC).
Again, your_table, your_mac_address_column, the_mac_address_value and the_rssi_value would be replaced accordingly with actual values (note that strings should be in single quotes).
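For instance, plugging in the sample values from the cursor dump above and the table and column names from the question's rawQuery (a sketch; the numeric RSSI is not quoted):
SELECT * FROM TABLE_NAME WHERE BSSID = 'f4:c5:ed:5c:s6:20' AND RSSI = -69 ORDER BY RSSI DESC LIMIT 3;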

Related

Why does this Linq Query Return Different Results than SQL Equivalent?

I'm sure I'm missing something simple, but I have a Linq query here:
public static List<Guid> GetAudience()
{
    var createdOn = new DateTime(2018, 6, 30, 0, 0, 0);
    var x = new List<Guid>();
    try
    {
        var query = from acc in Account
                    where acc.num != null
                        && acc.StateCode.Equals(0)
                        && acc.CreatedOn < createdOn
                    select new
                    {
                        acc.Id
                    };
        foreach (var z in query)
        {
            if (z.Id != null)
            {
                x.Add(z.Id.Value);
            }
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
    }
    return x;
}
I wanted to verify the count in SQL because it would only take a couple seconds so:
select count(*)
from Account a
where a.num is not null
and a.statecode = 0
and a.createdon < '2018-06-30 00:00:00'
And now the SQL query is returning 9,329 whereas Linq is returning 10,928. Why are my counts so far off when the queries are doing the same thing (so I thought)? What simple thing am I missing?
Thanks in advance--
Your method is returning a list of records where the Id values are not null (plus the other criteria). The SQL query is returning a count of the number of records (plus the other criteria). Without the definition of your table, it's hard to know whether that is significant.
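To compare like for like, the SQL count would also need to exclude the null Ids that the loop filters out. A sketch, assuming the Account table exposes the same Id column the Linq query reads:
select count(*)
from Account a
where a.num is not null
and a.statecode = 0
and a.createdon < '2018-06-30 00:00:00'
and a.Id is not null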
Unrelated tip: it's not a good idea to catch and swallow exceptions like that - the caller of your method will have no idea that anything went wrong, so processing will continue; but it will be using incomplete data, potentially leading to other problems in your program later.

Hive combine column values based upon condition

I was wondering if it is possible to combine column values based upon a condition. Let me explain...
Let's say my data looks like this:
Id  name     offset
1   Jan      100
2   Janssen  104
3   Klaas    150
4   Jan      160
5   Janssen  164
And my output should be this:
Id  fullname     offsets
1   Jan Janssen  [ 100, 160 ]
I would like to combine the name values from two rows where the offsets of the two rows are no more than 1 character apart.
My question is whether this type of data manipulation is possible with Hive, and if so, could someone share some code and an explanation?
Please be gentle, but this little piece of code returns somewhat what I want...
ArrayList<String> persons = new ArrayList<String>();
// write your code here
String _previous = "";
// Sample output from entities.txt
// USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Berkowitz,PERSON,9,10660
// USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Marottoli,PERSON,9,10685
File file = new File("entities.txt");
try {
    //
    // Create a new Scanner object which will read the data
    // from the file passed in. To check if there are more
    // lines to read from it we check by calling the
    // scanner.hasNextLine() method. We then read lines one
    // by one till all lines are read.
    //
    Scanner scanner = new Scanner(file);
    while (scanner.hasNextLine()) {
        if (_previous == "" || _previous == null)
            _previous = scanner.nextLine();
        String _current = scanner.nextLine();
        // Compare the lines: check if their offsets differ by 1
        int x = Integer.parseInt(_previous.split(",")[3]) + Integer.parseInt(_previous.split(",")[4]);
        int y = Integer.parseInt(_current.split(",")[4]);
        if (y - x == 1) {
            persons.add(_previous.split(",")[1] + " " + _current.split(",")[1]);
            if (scanner.hasNextLine()) {
                _current = scanner.nextLine();
            }
        } else {
            persons.add(_previous.split(",")[1]);
        }
        _previous = _current;
    }
} catch (Exception e) {
    e.printStackTrace();
}
for (String person : persons) {
    System.out.println(person);
}
Running this piece on this sample data:
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Richard,PERSON,7,2732
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Marottoli,PERSON,9,2740
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Marottoli,PERSON,9,2756
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Marottoli,PERSON,9,3093
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Marottoli,PERSON,9,3195
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Berkowitz,PERSON,9,3220
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Berkowitz,PERSON,9,10660
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Marottoli,PERSON,9,10685
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Lea,PERSON,3,10858
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Lea,PERSON,3,11063
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Ken,PERSON,3,11186
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Marottoli,PERSON,9,11234
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Berkowitz,PERSON,9,17073
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Lea,PERSON,3,17095
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Stephanie,PERSON,9,17330
USER.A-GovDocs-f83c6ca3-9585-4c66-b9b0-f4c3bd57ccf4,Putt,PERSON,4,17340
Which produces this output
Richard Marottoli
Marottoli
Marottoli
Marottoli
Berkowitz
Berkowitz
Marottoli
Lea
Lea
Ken
Marottoli
Berkowitz
Lea
Stephanie Putt
Kind regards
Load the table using the create table statement below:
drop table if exists default.stack;
create external table default.stack
(junk string,
name string,
cat string,
len int,
off int
)
ROW FORMAT DELIMITED
FIELDS terminated by ','
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
location 'hdfs://nameservice1/....';
Use the query below to get your desired output.
select max(name), off from (
select CASE when b.name is not null then
concat(b.name," ",a.name)
else
a.name
end as name
,Case WHEN b.off1 is not null
then b.off1
else a.off
end as off
from default.stack a
left outer join (select name
,len+off+ 1 as off
,off as off1
from default.stack) b
on a.off = b.off ) a
group by off
order by off;
I have tested this; it generates your desired result.

How to group overlapping data in SQL

I have data in the following fashion:
Prog_Id  Low_latency  Max_Latency
a        1            4
a        -1           5
a        3            8
a        11           12
a        12           15
Now I wish to see the output as:
Prog_Id  Low_latency  Max_Latency
a        -1           8
a        11           15
Basically I wish to merge overlapping data. Can anyone help me with the code? I can use time in place of latency if there is a solution using the OVERLAPS clause.
Thanks
Rishabh
My initial answer was not always working. Now it looks like it is:
select distinct *
from (
select
t1.Prog_ID,
min(least(l, Low_latency)),
max(greatest(g, Max_Latency))
from yourtable t1 inner join (select
t1.Prog_ID,
least(t1.Low_latency, t2.Low_latency) l,
greatest(t1.Max_Latency, t2.Max_Latency) g
from
yourtable t1 inner join yourtable t2
on t1.Prog_ID=t2.Prog_ID
and t1.Low_latency<=t2.Max_Latency
and t1.Max_Latency>=t2.Low_Latency) t2
on t1.Prog_ID=t2.Prog_ID
and t1.Low_latency<=t2.g
and t1.Max_Latency>=t2.l
group by t1.Low_latency, t1.Max_latency) s
Please see here. It's MySql code but it can be converted for other DBMSs.
It depends on which database server (DBMS) you use, but there is no easy solution for this. One possibility would be stored procedures, but I would prefer to do this in a programming language (which language do you use?).
After testing around with other people's queries, I found no way to do it in pure SQL.
Here is something similar to map-reduce in Java:
public class YourData {
    Double Low_latency;
    Double Max_Latency;
    int Prog_Id;
    // add getters and setters here
    public boolean testOverlapping(YourData data) {
        if ((this.Low_latency <= data.Low_latency && data.Low_latency <= this.Max_Latency)
                || (this.Low_latency <= data.Max_Latency && data.Max_Latency <= this.Max_Latency)) {
            this.Low_latency = Math.min(this.Low_latency, data.Low_latency);
            this.Max_Latency = Math.max(this.Max_Latency, data.Max_Latency);
            return true;
        }
        return false;
    }
}
String sql = "SELECT t1.Prog_Id, t1.Low_latency, t1.Max_Latency FROM yourtable t1";
ArrayList<Integer> progIds = new ArrayList<Integer>();
ArrayList<ArrayList<YourData>> values = new ArrayList<ArrayList<YourData>>();
while (row = get sql rows) { // pseudocode: iterate over the result set
    int progIndex = progIds.indexOf(row.Prog_Id);
    if (progIndex == -1) {
        progIds.add(row.Prog_Id);
        progIndex = progIds.size() - 1;
        values.add(new ArrayList<YourData>());
    }
    values.get(progIndex).add(new YourData(row));
}
boolean foundOverlapping = false;
for (int progIndex = 0; progIndex < values.size(); progIndex++) {
    // Do map-reduce for each progIndex: merge pairs until nothing overlaps
    ArrayList<YourData> group = values.get(progIndex);
    do {
        foundOverlapping = false;
        for (int i = 0; i < group.size(); i++) {
            YourData cur = group.get(i);
            for (int x = group.size() - 1; x > i; x--) {
                if (cur.testOverlapping(group.get(x))) {
                    foundOverlapping = true;
                    group.remove(x);
                }
            }
        }
    } while (foundOverlapping == true);
}
Assuming you want to group in a -infinity...9, 10...19, 20...29 pattern for the lower latency, you would need something like
SELECT
Prog_Id,
MIN(Low_latency) AS Low_latency,
MAX(Max_Latency) AS Max_Latency
FROM
your_table_name
GROUP BY
Prog_Id,
IF(FLOOR(Low_latency/10)<0,0,FLOOR(Low_latency/10))
Obviously the last line will depend on the RDBMS used, but should be quite similar among most.
You might also want to add an ORDER BY clause.

Hibernate Criteria - Restricting Data Based on Field in One-to-Many Relationship

I need some hibernate/SQL help, please. I'm trying to generate a report against an accounting database. A commission order can have multiple account entries against it.
class CommissionOrderDAO {
    int id
    String purchaseOrder
    double bookedAmount
    Date customerInvoicedDate
    String state
    static hasMany = [accountEntries: AccountEntryDAO]
    SortedSet accountEntries
    static mapping = {
        version false
        cache usage: 'read-only'
        table 'commission_order'
        id column: 'id', type: 'integer'
        purchaseOrder column: 'externalId'
        bookedAmount column: 'bookedAmount'
        customerInvoicedDate column: 'customerInvoicedDate'
        state column: 'state'
        accountEntries sort: 'id', order: 'desc'
    }
    ...
}
class AccountEntryDAO implements Comparable<AccountEntryDAO> {
    int id
    Date eventDate
    CommissionOrderDAO commissionOrder
    String entryType
    String description
    double remainingPotentialCommission
    static belongsTo = [commissionOrder: CommissionOrderDAO]
    static mapping = {
        version false
        cache usage: 'read-only'
        table 'account_entry'
        id column: 'id', type: 'integer'
        eventDate column: 'eventDate'
        commissionOrder column: 'commissionOrder'
        entryType column: 'entryType'
        description column: 'description'
        remainingPotentialCommission formula: SQLFormulaUtils.AccountEntrySQL.REMAININGPOTENTIALCOMMISSION_FORMULA
    }
    ....
}
The criteria for the report is that the commissionOrder.state==open and the commissionOrder.customerInvoicedDate is not null. And the account entries in the report should be between the startDate and the endDate and with remainingPotentialCommission > 0.
I'm looking to display information on the CommissionOrder mainly (and to display account entries on that commission order between the dates), but when I use the following projection:
def results = accountEntryCriteria.list {
    projections {
        like("entryType", "comm%")
        ge("eventDate", beginDate)
        le("eventDate", endDate)
        gt("remainingPotentialCommission", 0.0099d)
        and {
            commissionOrder {
                eq("state", "open")
                isNotNull("customerInvoicedDate")
            }
        }
    }
    order("id", "asc")
}
I get the correct accountEntries with the proper commissionOrders, but I'm coming at it backwards: I have loads of accountEntries which can reference the same commissionOrder. But when I look at the commissionOrders that I've retrieved, each one has ALL of its accountEntries, not just the accountEntries between the dates.
I then loop through the results, get the commissionOrder from the accountEntriesList, and remove accountEntries on that commissionOrder after the end date to get the "snapshot" in time that I need.
def getCommissionOrderListByRemainingPotentialCommissionFromResults(results, endDate) {
    log.debug("begin getCommissionOrderListByRemainingPotentialCommissionFromResults")
    int count = 0;
    List<CommissionOrderDAO> commissionOrderList = new ArrayList<CommissionOrderDAO>()
    if (results) {
        CommissionOrderDAO[] commissionOrderArray = new CommissionOrderDAO[results?.size()];
        Set<CommissionOrderDAO> coDuplicateCheck = new TreeSet<CommissionOrderDAO>()
        for (ae in results) {
            if (!coDuplicateCheck.contains(ae?.commissionOrder?.purchaseOrder) && ae?.remainingPotentialCommission > 0.0099d) {
                CommissionOrderDAO co = ae?.commissionOrder
                CommissionOrderDAO culledCO = removeAccountEntriesPastDate(co, endDate)
                def lastAccountEntry = culledCO?.accountEntries?.last()
                if (lastAccountEntry?.remainingPotentialCommission > 0.0099d) {
                    commissionOrderArray[count++] = culledCO
                }
                coDuplicateCheck.add(ae?.commissionOrder?.purchaseOrder)
            }
        }
        log.debug("Count after clean is ${count}")
        if (count > 0) {
            commissionOrderList = Arrays.asList(ArrayUtils.subarray(commissionOrderArray, 0, count))
            log.debug("commissionOrderList size = ${commissionOrderList?.size()}")
        }
    }
    log.debug("end getCommissionOrderListByRemainingPotentialCommissionFromResults")
    return commissionOrderList
}
Please don't think I'm under the impression that this isn't a Charlie Foxtrot. The query itself doesn't take very long, but the cull process takes over 35 minutes. Right now, it's "manageable" because I only have to run the report once a month.
I need to let the database handle this processing (I think), but I couldn't figure out how to manipulate hibernate to get the results I want. How can I change my criteria?
Try to narrow down the bottleneck of that process. If you have a lot of data, then maybe this check could be expensive:
coDuplicateCheck.contains(ae?.commissionOrder?.purchaseOrder)
contains on a TreeSet has O(log n) complexity. You could use e.g. a HashMap to store the keys you want to check, and then look up "ae?.commissionOrder?.purchaseOrder" as a key in the map.
The second thought is that maybe when you're getting ae?.commissionOrder?.purchaseOrder it is always lazily loaded from the db. Try turning on query logging and check that you don't have dozens of queries inside this processing function.
Finally, and again, I would suggest narrowing down where the most expensive part and the time waste is.
This plugin may be helpful.

error, string or binary data would be truncated when trying to insert

I am running data.bat file with the following lines:
Rem This batch file will populate tables
cd\program files\Microsoft SQL Server\MSSQL
osql -U sa -P Password -d MyBusiness -i c:\data.sql
The contents of the data.sql file is:
insert Customers
(CustomerID, CompanyName, Phone)
Values('101','Southwinds','19126602729')
There are 8 more similar lines for adding records.
When I run this with start > run > cmd > c:\data.bat, I get this error message:
1>2>3>4>5>....<1 row affected>
Msg 8152, Level 16, State 4, Server SP1001, Line 1
string or binary data would be truncated.
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
<1 row affected>
Also, I am a newbie obviously, but what do Level #, and state # mean, and how do I look up error messages such as the one above: 8152?
From @gmmastros's answer
Whenever you see the message....
string or binary data would be truncated
Think to yourself... The field is NOT big enough to hold my data.
Check the table structure for the customers table. I think you'll find that the length of one or more fields is NOT big enough to hold the data you are trying to insert. For example, if the Phone field is a varchar(8) field, and you try to put 11 characters in to it, you will get this error.
I had this issue although the data length was shorter than the field length.
It turned out that the problem was having another log table (for audit trail), filled by a trigger on the main table, where the column size also had to be changed.
In one of the INSERT statements you are attempting to insert a string that is too long into a string (varchar or nvarchar) column.
If it's not obvious which INSERT is the offender by a mere look at the script, you could count the <1 row affected> lines that occur before the error message. The obtained number plus one gives you the statement number. In your case it seems to be the second INSERT that produces the error.
Just want to contribute with additional information: I had the same issue, and it was because the field wasn't big enough for the incoming data; this thread helped me to solve it (the top answer clarifies it all).
BUT it is very important to know what the possible reasons are that may cause it.
In my case I was creating the table with a field like this:
Select '' as Period, * Into #NewTable From Transactions
Therefore the field "Period" had a length of zero, causing the INSERT operations to fail. I changed it to 'XXXXXX', which is the length of the incoming data, and it then worked properly (because the field now had a length of 6).
I hope this helps anyone with the same issue :)
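In other words, the placeholder has to be as wide as the incoming data. A sketch of the corrected statement (note that in T-SQL the INTO clause comes before FROM):
Select 'XXXXXX' as Period, * Into #NewTable From Transactions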
Some of your data cannot fit into your database column because the column is too small. It is not easy to find what is wrong. If you use C# and Linq2Sql, you can list the fields which would be truncated:
First, create a helper class:
public class SqlTruncationExceptionWithDetails : ArgumentOutOfRangeException
{
    public SqlTruncationExceptionWithDetails(System.Data.SqlClient.SqlException inner, DataContext context)
        : base(inner.Message + " " + GetSqlTruncationExceptionWithDetailsString(context))
    {
    }

    /// <summary>
    /// Part of the code is from the following link
    /// http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
    /// </summary>
    /// <param name="context"></param>
    /// <returns></returns>
    static string GetSqlTruncationExceptionWithDetailsString(DataContext context)
    {
        StringBuilder sb = new StringBuilder();
        foreach (object update in context.GetChangeSet().Updates)
        {
            FindLongStrings(update, sb);
        }
        foreach (object insert in context.GetChangeSet().Inserts)
        {
            FindLongStrings(insert, sb);
        }
        return sb.ToString();
    }

    public static void FindLongStrings(object testObject, StringBuilder sb)
    {
        foreach (var propInfo in testObject.GetType().GetProperties())
        {
            foreach (System.Data.Linq.Mapping.ColumnAttribute attribute in propInfo.GetCustomAttributes(typeof(System.Data.Linq.Mapping.ColumnAttribute), true))
            {
                if (attribute.DbType.ToLower().Contains("varchar"))
                {
                    string dbType = attribute.DbType.ToLower();
                    int numberStartIndex = dbType.IndexOf("varchar(") + 8;
                    int numberEndIndex = dbType.IndexOf(")", numberStartIndex);
                    string lengthString = dbType.Substring(numberStartIndex, (numberEndIndex - numberStartIndex));
                    int maxLength = 0;
                    int.TryParse(lengthString, out maxLength);
                    string currentValue = (string)propInfo.GetValue(testObject, null);
                    if (!string.IsNullOrEmpty(currentValue) && maxLength != 0 && currentValue.Length > maxLength)
                    {
                        // string is too long
                        sb.AppendLine(testObject.GetType().Name + "." + propInfo.Name + " " + currentValue + " Max: " + maxLength);
                    }
                }
            }
        }
    }
}
Then prepare the wrapper for SubmitChanges:
public static class DataContextExtensions
{
    public static void SubmitChangesWithDetailException(this DataContext dataContext)
    {
        // http://stackoverflow.com/questions/3666954/string-or-binary-data-would-be-truncated-linq-exception-cant-find-which-fiel
        try
        {
            // this can fail on data truncation
            dataContext.SubmitChanges();
        }
        catch (SqlException sqlException) // when (sqlException.Message == "String or binary data would be truncated.")
        {
            if (sqlException.Message == "String or binary data would be truncated.") // only for EN Windows - if you are running a different Windows language, invoke the sqlException.getMessage on a thread with EN culture
                throw new SqlTruncationExceptionWithDetails(sqlException, dataContext);
            else
                throw;
        }
    }
}
Prepare global exception handler and log truncation details:
protected void Application_Error(object sender, EventArgs e)
{
    Exception ex = Server.GetLastError();
    string message = ex.Message;
    // TODO - log to file
}
Finally use the code:
Datamodel.SubmitChangesWithDetailException();
Another situation in which you can get this error is the following:
I had the same error, and the reason was that in an INSERT statement that received data from a UNION, the order of the columns was different from the original table. If you change the order in #table3 to a, b, c, you will fix the error, as shown after the code below.
select a, b, c into #table1
from #table0
insert into #table1
select a, b, c from #table2
union
select a, c, b from #table3
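For reference, a sketch of the corrected version with the #table3 branch reordered to match:
insert into #table1
select a, b, c from #table2
union
select a, b, c from #table3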
On SQL Server you can use SET ANSI_WARNINGS OFF, like this:
using (SqlConnection conn = new SqlConnection("Data Source=XRAYGOAT\\SQLEXPRESS;Initial Catalog='Healthy Care';Integrated Security=True"))
{
    conn.Open();
    using (var trans = conn.BeginTransaction())
    {
        try
        {
            using (var cmd = new SqlCommand("", conn, trans))
            {
                cmd.CommandText = "SET ANSI_WARNINGS OFF";
                cmd.ExecuteNonQuery();
                cmd.CommandText = "YOUR INSERT HERE";
                cmd.ExecuteNonQuery();
                cmd.Parameters.Clear();
                cmd.CommandText = "SET ANSI_WARNINGS ON";
                cmd.ExecuteNonQuery();
                trans.Commit();
            }
        }
        catch (Exception)
        {
            trans.Rollback();
        }
    }
    conn.Close();
}
I had the same issue. The length of my column was too short.
What you can do is either increase the length or shorten the text you want to put in the database.
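For example, to widen the column instead of trimming the data, a sketch using the Customers table from the question above, assuming Phone is the offending column ('19126602729' is 11 characters):
ALTER TABLE Customers ALTER COLUMN Phone varchar(11);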
Also had this problem occurring on the web application surface.
Eventually found out that the same error message was coming from the SQL UPDATE statement on a specific table.
Finally figured out that the column definition in the related history table(s) did not match the original table's column length for nvarchar types in some specific cases.
I had the same problem, even after increasing the size of the problematic columns in the table.
tl;dr: The length of the matching columns in corresponding Table Types may also need to be increased.
In my case, the error was coming from the Data Export service in Microsoft Dynamics CRM, which allows CRM data to be synced to an SQL Server DB or Azure SQL DB.
After a lengthy investigation, I concluded that the Data Export service must be using Table-Valued Parameters:
You can use table-valued parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored procedure or function, without creating a temporary table or many parameters.
As you can see in the documentation above, Table Types are used to create the data ingestion procedure:
CREATE TYPE LocationTableType AS TABLE (...);
CREATE PROCEDURE dbo.usp_InsertProductionLocation
@TVP LocationTableType READONLY
Unfortunately, there is no way to alter a Table Type, so it has to be dropped & recreated entirely. Since my table has over 300 fields (😱), I created a query to facilitate the creation of the corresponding Table Type based on the table's columns definition (just replace [table_name] with your table's name):
SELECT 'CREATE TYPE [table_name]Type AS TABLE (' + STRING_AGG(CAST(field AS VARCHAR(max)), ',' + CHAR(10)) + ');' AS create_type
FROM (
SELECT TOP 5000 COLUMN_NAME + ' ' + DATA_TYPE
+ IIF(CHARACTER_MAXIMUM_LENGTH IS NULL, '', CONCAT('(', IIF(CHARACTER_MAXIMUM_LENGTH = -1, 'max', CONCAT(CHARACTER_MAXIMUM_LENGTH,'')), ')'))
+ IIF(DATA_TYPE = 'decimal', CONCAT('(', NUMERIC_PRECISION, ',', NUMERIC_SCALE, ')'), '')
AS field
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '[table_name]'
ORDER BY ORDINAL_POSITION) AS T;
After updating the Table Type, the Data Export service started functioning properly once again! :)
When I tried to execute my stored procedure, I had the same problem because the size of the column where I needed to add data was shorter than the data I wanted to add.
You can increase the size of the column data type or reduce the length of your data.
A 2016/2017 update will show you the bad value and column.
A new trace flag will swap the old error for a new 2628 error and will print out the column and offending value. Trace flag 460 is available in the latest cumulative updates for 2016 and 2017:
https://support.microsoft.com/en-sg/help/4468101/optional-replacement-for-string-or-binary-data-would-be-truncated
Just make sure that after you've installed the CU you enable the trace flag, either globally/permanently on the server, or with DBCC TRACEON:
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15
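For instance, enabling it globally is a one-liner (a sketch; the -1 argument applies the flag to all sessions):
DBCC TRACEON (460, -1);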
Another situation in which this error may occur is in SQL Server Management Studio. If you have "text" or "ntext" fields in your table, no matter what kind of field you are updating (for example bit or integer), it seems that the Studio does not load entire "ntext" fields and also updates ALL fields instead of just the modified one.
To solve the problem, exclude "text" or "ntext" fields from the query in Management Studio.
This error comes only when one of your field values is longer than the field length specified in the SQL Server table structure.
To overcome this issue, you have to reduce the length of the field value, or increase the length of the database table field.
If someone is encountering this error in a C# application, I have created a simple way of finding offending fields by:
Getting the column width of all the columns of a table where we're trying to make this insert/ update. (I'm getting this info directly from the database.)
Comparing the column widths to the width of the values we're trying to insert/ update.
Assumptions/ Limitations:
The column names of the table in the database match the C# entity fields. For example, if you have a column such as SourceData in the database, you need to have your entity use the same column name:
public class SomeTable
{
    // Other fields
    public string SourceData { get; set; }
}
You're inserting/ updating 1 entity at a time. It'll be clearer in the demo code below. (If you're doing bulk inserts/ updates, you might want to either modify it or use some other solution.)
Step 1:
Get the column width of all the columns directly from the database:
// For this, I took help from Microsoft docs website:
// https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlconnection.getschema?view=netframework-4.7.2#System_Data_SqlClient_SqlConnection_GetSchema_System_String_System_String___
private static Dictionary<string, int> GetColumnSizesOfTableFromDatabase(string tableName, string connectionString)
{
    var columnSizes = new Dictionary<string, int>();
    using (var connection = new SqlConnection(connectionString))
    {
        // Connect to the database then retrieve the schema information.
        connection.Open();
        // You can specify the Catalog, Schema, Table Name, Column Name to get the specified column(s).
        // You can use four restrictions for Column, so you should create a 4 members array.
        String[] columnRestrictions = new String[4];
        // For the array, 0-member represents Catalog; 1-member represents Schema;
        // 2-member represents Table Name; 3-member represents Column Name.
        // Now we specify the Table_Name and Column_Name of the columns what we want to get schema information.
        columnRestrictions[2] = tableName;
        DataTable allColumnsSchemaTable = connection.GetSchema("Columns", columnRestrictions);
        foreach (DataRow row in allColumnsSchemaTable.Rows)
        {
            var columnName = row.Field<string>("COLUMN_NAME");
            //var dataType = row.Field<string>("DATA_TYPE");
            var characterMaxLength = row.Field<int?>("CHARACTER_MAXIMUM_LENGTH");
            // I'm only capturing columns whose Datatype is "varchar" or "char", i.e. their CHARACTER_MAXIMUM_LENGTH won't be null.
            if (characterMaxLength != null)
            {
                columnSizes.Add(columnName, characterMaxLength.Value);
            }
        }
        connection.Close();
    }
    return columnSizes;
}
Step 2:
Compare the column widths with the width of the values we're trying to insert/ update:
public static Dictionary<string, string> FindLongBinaryOrStringFields<T>(T entity, string connectionString)
{
    var tableName = typeof(T).Name;
    Dictionary<string, string> longFields = new Dictionary<string, string>();
    var objectProperties = GetProperties(entity);
    //var fieldNames = objectProperties.Select(p => p.Name).ToList();
    var actualDatabaseColumnSizes = GetColumnSizesOfTableFromDatabase(tableName, connectionString);
    foreach (var dbColumn in actualDatabaseColumnSizes)
    {
        var maxLengthOfThisColumn = dbColumn.Value;
        var currentValueOfThisField = objectProperties.Where(f => f.Name == dbColumn.Key).First()?.GetValue(entity, null)?.ToString();
        if (!string.IsNullOrEmpty(currentValueOfThisField) && currentValueOfThisField.Length > maxLengthOfThisColumn)
        {
            longFields.Add(dbColumn.Key, $"'{dbColumn.Key}' column cannot take the value of '{currentValueOfThisField}' because the max length it can take is {maxLengthOfThisColumn}.");
        }
    }
    return longFields;
}

public static List<PropertyInfo> GetProperties<T>(T entity)
{
    // The DeclaredOnly flag makes sure you only get properties of the object, not from the classes it derives from.
    var properties = entity.GetType()
        .GetProperties(System.Reflection.BindingFlags.Public
            | System.Reflection.BindingFlags.Instance
            | System.Reflection.BindingFlags.DeclaredOnly)
        .ToList();
    return properties;
}
Demo:
Let's say we're trying to insert someTableEntity of SomeTable class that is modeled in our app like so:
public class SomeTable
{
    [Key]
    public long TicketID { get; set; }
    public string SourceData { get; set; }
}
And it's inside our SomeDbContext like so:
public class SomeDbContext : DbContext
{
    public DbSet<SomeTable> SomeTables { get; set; }
}
This table in the Db has the SourceData field defined as varchar(16).
Now we'll try to insert value that is longer than 16 characters into this field and capture this information:
public void SaveSomeTableEntity()
{
    var connectionString = "server=SERVER_NAME;database=DB_NAME;User ID=SOME_ID;Password=SOME_PASSWORD;Connection Timeout=200";
    using (var context = new SomeDbContext(connectionString))
    {
        var someTableEntity = new SomeTable()
        {
            SourceData = "Blah-Blah-Blah-Blah-Blah-Blah"
        };
        context.SomeTables.Add(someTableEntity);
        try
        {
            context.SaveChanges();
        }
        catch (Exception ex)
        {
            if (ex.GetBaseException().Message == "String or binary data would be truncated.\r\nThe statement has been terminated.")
            {
                var badFieldsReport = "";
                List<string> badFields = new List<string>();
                // YOU GOT YOUR FIELDS RIGHT HERE:
                var longFields = FindLongBinaryOrStringFields(someTableEntity, connectionString);
                foreach (var longField in longFields)
                {
                    badFields.Add(longField.Key);
                    badFieldsReport += longField.Value + "\n";
                }
            }
            else
                throw;
        }
    }
}
The badFieldsReport will have this value:
'SourceData' column cannot take the value of
'Blah-Blah-Blah-Blah-Blah-Blah' because the max length it can take is
16.
Kevin Pope's comment under the accepted answer was what I needed.
The problem, in my case, was that I had triggers defined on my table that would insert update/insert transactions into an audit table, but the audit table had a data type mismatch: a column that was VARCHAR(MAX) in the original table was stored as VARCHAR(1) in the audit table. So my triggers failed whenever I inserted anything longer than VARCHAR(1) into the original table column, and I would get this error message.
I used a different tactic: some fields are allocated 8K in places where only about 50-100 characters are actually used.
declare @NVPN_list as table (
    nvpn varchar(50)
    ,nvpn_revision varchar(5)
    ,nvpn_iteration INT
    ,mpn_lifecycle varchar(30)
    ,mfr varchar(100)
    ,mpn varchar(50)
    ,mpn_revision varchar(5)
    ,mpn_iteration INT
    -- ...
)

INSERT INTO @NVPN_LIST
SELECT left(nvpn, 50) as nvpn
    ,left(nvpn_revision, 5) as nvpn_revision
    ,nvpn_iteration
    ,left(mpn_lifecycle, 30)
    ,left(mfr, 100)
    ,left(mpn, 50)
    ,left(mpn_revision, 5)
    ,mpn_iteration
    ,left(mfr_order_num, 50)
FROM [DASHBOARD].[dbo].[mpnAttributes] (NOLOCK) mpna
I wanted speed, since I have 1M total records, and load 28K of them.
This error may be due to the field size being smaller than the data you entered.
For e.g. if you have the data type nvarchar(7) and your value is 'aaaaddddf', then the error is shown as:
string or binary data would be truncated
You simply can't beat SQL Server on this.
You can insert into a new table like this:
select foo, bar
into tmp_new_table_to_dispose_later
from my_table
and compare the table definition with the real table you want to insert the data into.
Sometimes it's helpful, sometimes it's not.
If you try inserting into the final/real table from that temporary table, it may just work (because data conversion can work differently there than in SSMS, for example).
Another alternative is to insert the data in chunks: instead of inserting everything immediately, insert with TOP 1000 and repeat the process until you find a chunk with an error. At least you have better visibility into what's not fitting into the table.
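A rough sketch of that chunked approach, with hypothetical staging_table/target_table names and an id key:
-- Copy 1000 rows at a time until a chunk raises the truncation error
INSERT INTO target_table (id, foo, bar)
SELECT TOP (1000) s.id, s.foo, s.bar
FROM staging_table s
WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id)
ORDER BY s.id;
-- Repeat until 0 rows are affected; the failing batch narrows down the bad data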