I am new to SharePoint. I have a custom field type derived from SPFieldChoice, and my field allows users to select multiple values. I need to replace some old custom columns with the new column and copy the data from the old column to the new one. The old column also lets users select multiple values by ticking checkboxes. I have the following code to copy the data to the new field:
foreach (SPListItem item in list.Items)
{
    if (item[oldField.Title] == null)
    {
        item[newFld.Title] = string.Empty;
        item.Update();
    }
    else
    {
        // Multi-choice values are stored in the form ";#value;#value;#",
        // so the actual values sit at the odd indexes after splitting on ";#"
        string[] itemvalues = item[oldField.Title].ToString().Split(new string[] { ";#" }, StringSplitOptions.None);
        StringBuilder multiLookupValues = new StringBuilder();
        multiLookupValues.Append(";#");
        for (int cnt = 0; cnt < itemvalues.Length / 2; cnt++)
        {
            multiLookupValues.Append(itemvalues[(cnt * 2) + 1] + ";#");
        }
        item[newFld.Title] = multiLookupValues.ToString();
        item.SystemUpdate(false);
    }
}
This code works fine as long as the resulting StringBuilder content is shorter than 255 characters, but when the length exceeds 255 I get the following exception:
Invalid choice Value. A choice field contains invalid data. Please check the value and try again.
Is there any other way of copying data to an SPFieldChoice? How can I resolve this problem? Please help me.
Do the update in multiple passes so the string doesn't exceed the limit, i.e. append with value +=. However, if the problem is that the stored value can't be longer than 255 characters, you have to reconsider how you are modelling the choices. If the value exceeds that length and updating it in multiple passes doesn't work (and a Site Column will have the same limitation), you can do the next best thing:
1) Create a new list that will hold the choices
2) Change the destination field to be a lookup
3) Update accordingly for each item (picking up the ID from the lookup field)
There's no limit to this.
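To illustrate step 3, here is a minimal sketch, assuming the destination field has been recreated as a multi-value lookup against the new choices list; the list name and the FindOrCreateChoice helper are hypothetical:
SPList choicesList = web.Lists["Choices"]; // hypothetical list holding the former choice values
foreach (SPListItem item in list.Items)
{
    object oldValue = item[oldField.Title];
    if (oldValue == null) continue;

    // Same ";#" parsing as before: the values sit at the odd indexes
    string[] parts = oldValue.ToString().Split(new string[] { ";#" }, StringSplitOptions.None);
    SPFieldLookupValueCollection lookupValues = new SPFieldLookupValueCollection();
    for (int cnt = 0; cnt < parts.Length / 2; cnt++)
    {
        string text = parts[(cnt * 2) + 1];
        SPListItem choice = FindOrCreateChoice(choicesList, text); // hypothetical helper
        lookupValues.Add(new SPFieldLookupValue(choice.ID, text));
    }
    item[newFld.Title] = lookupValues;
    item.SystemUpdate(false);
}
A lookup field stores only the IDs of the referenced items, so the 255-character ceiling on the stored choice text no longer applies.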
David Sterling
david_sterling@sterling-consulting.com
www.sterling-consulting.com
We have an Azure storage table to which we want to add a new integer column (it is in fact an enum of 3 values converted to int). We want a row to be returned when:
It is an older row and the column does not exist
It is a new row and the column exists and does not match a particular value
When I just use a not-equal operator on the column, the old rows do not get returned. How can this be handled?
Update
Assuming a comparison always returns false for the non-existent column, I tried something like the code below (the value of the property will always be > 0 when it exists), which does not work either:
If the (Prop GreaterThanOrEqual -1) condition returns false, I assume the value is null.
If not, the actual comparison happens.
string propNullCondition = TableQuery.GenerateFilterConditionForInt(
    "Prop",
    QueryComparisons.GreaterThanOrEqual,
    -1);
propNullCondition = $"{TableOperators.Not}({propNullCondition})";
string propNotEqualValueCondition = TableQuery.CombineFilters(
    propNullCondition,
    TableOperators.Or,
    TableQuery.GenerateFilterConditionForInt(
        "Prop",
        QueryComparisons.NotEqual,
        XXXX));
Note: the table rows written so far do not have "Prop"; only new rows will have this column. The expectation is that the query should return all old rows, and new rows only when Prop != XXXX.
It seems that your code is correct; maybe there is a minor error somewhere. You can follow my code below, which works fine in my test:
Note: in the filter, the column name is case-sensitive.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("test1");
string propNullCondition = TableQuery.GenerateFilterConditionForInt(
    "prop1", // note: the column name is case-sensitive here
    QueryComparisons.GreaterThanOrEqual,
    -1);
propNullCondition = $"{TableOperators.Not}({propNullCondition})";
TableQuery<DynamicTableEntity> propNotEqualValueCondition = new TableQuery<DynamicTableEntity>()
    .Where(
        TableQuery.CombineFilters(
            propNullCondition,
            TableOperators.Or,
            TableQuery.GenerateFilterConditionForInt(
                "prop1", // note: the column name is case-sensitive here
                QueryComparisons.NotEqual,
                2)));
var query = table.ExecuteQuery(propNotEqualValueCondition);
foreach (var q in query)
{
    Console.WriteLine(q.PartitionKey);
}
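For reference, here is a sketch of how the test table above might be seeded (keys and values are my assumptions): one old row without prop1 and two new rows that carry it.
table.CreateIfNotExists();
// Hypothetical seed data: old rows lack prop1 entirely; new rows carry it
var oldRow = new DynamicTableEntity("pk", "old-1");
var newRowFiltered = new DynamicTableEntity("pk", "new-1");
newRowFiltered.Properties["prop1"] = new EntityProperty(2); // excluded by the filter
var newRowReturned = new DynamicTableEntity("pk", "new-2");
newRowReturned.Properties["prop1"] = new EntityProperty(5); // returned
table.Execute(TableOperation.Insert(oldRow));
table.Execute(TableOperation.Insert(newRowFiltered));
table.Execute(TableOperation.Insert(newRowReturned));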
The test result matched that expectation: all rows without prop1 were returned, plus the rows whose prop1 is not 2 (the console output and the source table in Azure were shown as screenshots).
Tired brain - perhaps you can help.
My table has two bit fields:
1) TestedByPCL and
2) TestedBySPC.
Both may = 1.
The user interface has two corresponding check boxes. In the code I convert the checks to int.
int TestedBySPC = SearchSPC ? 1 : 0;
int TestedByPCL = SearchPCL ? 1 : 0;
My WHERE clause looks something like this:
WHERE TestedByPCL = {TestedByPCL.ToString()} AND TestedBySPC = {TestedBySPC.ToString()}
The problem is that when only one checkbox is selected, I want to return rows having the corresponding field set to 1, or both fields set to 1.
As it stands, when both fields are set to 1, my WHERE clause requires both checkboxes to be checked instead of only one.
So, if one checkbox is ticked, return records with that field = 1, regardless of whether the other field = 1.
Second attempt (I think I've got it now):
WHERE ((TestedByPCL = {chkTestedByPCL.IsChecked} AND TestedBySPC = {chkTestedBySPC.IsChecked})
OR
(TestedByPCL = 1 AND TestedBySPC = 1 AND 1 IN ({chkTestedByPCL.IsChecked}, {chkTestedBySPC.IsChecked})))
Misunderstood the question.
Change the AND to an OR:
WHERE TestedByPCL = {chkTestedByPCL.IsChecked} OR TestedBySPC = {chkTestedBySPC.IsChecked}
Also:
SQL Server does not have a Boolean data type; its closest option is the bit data type.
The curly braces suggest you are building your WHERE clause with string interpolation. This might not be a big deal when you're handling checkboxes, but it's a security risk when handling free-text input, as it's an open door for SQL injection attacks. Better to use parameters whenever you can.
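For example, a minimal parameterized sketch of the OR query above (control, table, and connection names are assumptions):
// IsChecked is nullable on WPF checkboxes, hence the == true comparison
int testedByPcl = chkTestedByPCL.IsChecked == true ? 1 : 0;
int testedBySpc = chkTestedBySPC.IsChecked == true ? 1 : 0;
using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT * FROM TestResults WHERE TestedByPCL = @pcl OR TestedBySPC = @spc", con))
{
    cmd.Parameters.AddWithValue("@pcl", testedByPcl);
    cmd.Parameters.AddWithValue("@spc", testedBySpc);
    con.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process each matching row
        }
    }
}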
I am trying to fill down missing values using Pentaho PDI.
Input and desired output were shown as screenshots: per-account daily rows with gaps in the dates, and the same data with the gaps filled in.
So far I have found only "Filling data gaps in a stream in Pentaho Data Integration, is it possible?", but that fills the gaps with the last known value.
Potentially, I thought I could build on the above solution: I added the next amount to the Analytic Query step along with the next date. Then I added a flag in the clone step and filtered the original input rows into a Dummy step and the generated rows (from the calculator) into a calculator (for the moment). Then, potentially, I could dump that separate stream into a temp table in a database and run a SQL query to do the rolling subtraction. I am also investigating the JavaScript step.
I disregarded the Python and R Executor steps because in the end I will be running the job on an AWS VM, and I already foresee the pain I would go through with the installation.
What would be your suggestions? Is there a simple way to do interpolation?
Updated for the question
The method provided in your link does work in my testing (though I am using LAG instead of LEAD for your task). Here I am not looking to replicate that method, just to offer another option: using JavaScript to build the logic, which you might also extend to other applications.
In the testing below (done on PDI 8.0), the transformation has 5 steps:
Data Grid step to create testing data with three fields: date, account number and amount
Sort rows step to sort the rows by account number and date. This is required for the Analytic Query step; if your source data are already sorted, skip this step
Analytic Query step (configuration shown as a screenshot) to create two more fields: prev_date and prev_amount
Modified Java Script Value step: add the following code; nothing else needs to be configured in this step:
var days_diff = dateDiff(prev_date, date, "d");
if (days_diff > 0) {
    /* retrieve index for two fields: 'date', 'amount'
     * and modify their values accordingly
     */
    var idx_date = getInputRowMeta().indexOfValue("date");
    var idx_amount = getInputRowMeta().indexOfValue("amount");
    /* amount to increment by each row */
    var delta_amount = (amount - prev_amount) / days_diff;
    for (var i = 1; i < days_diff; i++) {
        newRow = createRowCopy(getOutputRowMeta().size());
        newRow[idx_date] = dateAdd(prev_date, "d", i);
        newRow[idx_amount] = prev_amount + delta_amount * i;
        putRow(newRow);
    }
}
Select values step to remove unwanted fields, i.e.: prev_date, prev_amount
Run the transformation, and the filled-in rows can be seen under the Preview data tab of the Modified Java Script Value step (output shown as a screenshot).
UPDATE:
Per your comments, you can do the following, assuming you have a new field account_type:
in the Analytic Query step, add a new field prev_account_type, similar to the two other prev_ fields, just with a different Subject: account_type
in the Modified Java Script Value step, retrieve the row index for account_type and modify the logic that computes delta_amount, so that when prev_account_type is not the same as the current account_type, delta_amount is ZERO; see the code below:
var days_diff = dateDiff(prev_date, date, "d");
if (days_diff > 0) {
    /* retrieve index for three fields: 'date', 'amount', 'account_type' */
    var idx_date = getInputRowMeta().indexOfValue("date");
    var idx_amount = getInputRowMeta().indexOfValue("amount");
    var idx_act_type = getInputRowMeta().indexOfValue("account_type");
    /* amount to increment by each row */
    var delta_amount = prev_account_type.equals(account_type) ? (amount - prev_amount) / days_diff : 0;
    /* copy the current row into newRow and modify fields accordingly */
    for (var i = 1; i < days_diff; i++) {
        newRow = createRowCopy(getOutputRowMeta().size());
        newRow[idx_date] = dateAdd(prev_date, "d", i);
        newRow[idx_amount] = prev_amount + delta_amount * i;
        newRow[idx_act_type] = prev_account_type;
        putRow(newRow);
    }
}
Note: invoking the JavaScript interpreter does carry some performance cost, so if that matters to you, stick to the method in the link you provided.
I am using a SQL query on one DB and importing the information into another DB. I want to change the value of a column if the date in column B is less than a certain value.
E.g. if column B is less than 01/01/2015 then column A = 0, otherwise leave A alone.
I have tried a few variations; my latest incarnation, which obviously doesn't work, is:
CASE
    WHEN ColB <" + Constants.StartOfYear.ToString("yyyy-MM-dd") + @"
    THEN ColA = 0
END
I use lots of other CASE statements and have already selected all my columns from the table.
If I understand you right, all you want is to update some values.
If that's your case, you can use UPDATE DML:
String sql =
    @"update MyTable
      set ColA = 0
      where ColB < @prm_ColB"; // '@' for MS SQL, ':' for Oracle, etc.
then assign a value to prm_ColB and execute it like this:
// Assuming that you're working with MS SQL
using (var con = new SqlConnection(YourConnectionString)) {
    con.Open();
    using (var q = new SqlCommand(sql, con)) {
        // Put actual parameter value here
        q.Parameters.AddWithValue("@prm_ColB", new DateTime(DateTime.Now.Year, 1, 1));
        q.ExecuteNonQuery();
    }
}
Give your RDBMS an actual DateTime value via the bind variable (@prm_ColB); do not try converting the date into a hard-coded string.
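If the goal is only to change the value when reading rather than to modify the table, the same parameterized comparison works inside a CASE expression; a minimal sketch using the names from above:
// Hypothetical read-only variant: zero out ColA on the fly instead of updating
String selectSql =
    @"select ColB,
             case when ColB < @prm_ColB then 0 else ColA end as ColA
      from MyTable";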
EDIT: My only pending issue is c) (True and False in the file, bit in the database; I can't change either the file or the database schema, as there are hundreds of terabytes I can't touch).
The system receives a file (hundreds of thousands of them, actually) with a certain format. The details are:
a) The first field is a uniqueidentifier (more on this later)
b) In the database, the table's first 4 columns are generated by the database (they are related to dates), meaning those 4 values are not found in the files (all the rest are, and in order, even though they are always represented as text or left empty)
c) Bit values are represented as False/True in the file.
So, the issue for a) is that in the text file I receive as input, the uniqueidentifier is wrapped in brackets. When I tried to generate the format file with the format nul option of the bcp command-line tool, it produced a SQLCHAR of 37 characters (which makes no sense to me, since it should be either 36 or 38).
Row separator is "+++\r\n", column separator is "©®©".
How would I go about generating the format files? I've been stuck on this for some time; I've never used bcp before, and the errors I get don't tell me much ("Unexpected EOF encountered in BCP data-file").
Am I supposed to specify all the columns in the format file, or just the ones I want to read from the files I receive?
Thanks!
NOTE: I can't provide the SQL schema since it belongs to the company I work for. But it's pretty much: smalldate, tinyint, tinyint, tinyint (these four are generated by the db), uniqueidentifier, chars, chars, more varchars, some bits, more varchars, some nvarchars. ALL values, except those generated by the db, accept null.
My current problem is with skipping the first 4 columns.
http://msdn.microsoft.com/en-us/library/ms179250(v=SQL.105).aspx
I followed that guide but somehow it's not working. Here are the changes (I'm just hard-coding different column names to keep the project private, even if that sounds stupid).
This is the format file generated with bcp (with format nul -c); note I put it in a pastebin because it's not that short:
http://pastebin.com/4UkpPp1n
The second one, which is supposed to do the same but skip the first 4 columns, is in the next pastebin:
http://pastebin.com/Lqj6XSbW
Yet it is not working. The error is "Error = [Microsoft][SQL Native Client]The number of fields provided for bcp operation is less than the number of columns on the server.", even though handling that mismatch was the whole point of the format file.
Any help will be greatly appreciated.
I'd create a new table with a CHAR(38) column for the GUID. Import your data into this staging table, then translate it with CAST(SUBSTRING(GUID, 2, 36) AS UNIQUEIDENTIFIER) to move the staging data into your permanent table. This approach also works well for dates in odd formats, numbers with currency symbols, or generally any kind of poorly formatted input.
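For instance, a minimal sketch of that translation, with hypothetical table and column names (GuidText is the CHAR(38) staging column holding the bracketed value):
// Hypothetical staging-to-permanent move, stripping the brackets from the GUID
string translateSql =
    @"insert into PermanentTable (Id, Payload)
      select cast(substring(GuidText, 2, 36) as uniqueidentifier), Payload
      from StagingTable";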
BCP format files are a little touchy, but fundamentally aren't too complicated. If that part continues to give you trouble, one option is to import the whole row as a single VARCHAR(1000) field, then split it up within SQL - if you're comfortable with SQL text processing that is.
Alternately, if you are familiar with some other programming language, like Perl or C#, you can create a script to pre-process your inputs into a more friendly form, like tab-delimited. If you're not familiar with some other programming language, I suggest you pick one and get started! SQL is a great language, but sometimes you need a different tool; it's not great for text processing.
If you're familiar with C#, here's my code to generate a format file. No one gets to make fun of my Whitesmiths indentation :P
private static string CreateFormatFile(string filePath, SqlConnection connection, string tableName, string[] sourceFields, string[] destFields, string fieldDelimiter, string fieldQuote)
    {
    string formatFilePath = filePath + ".fmt";
    StreamWriter formatFile = null;
    SqlDataReader data = null;
    try
        {
        // Load the metadata for the destination table, so we can look up fields' ordinal positions
        SqlCommand command = new SqlCommand("SELECT TOP 0 * FROM " + tableName, connection);
        data = command.ExecuteReader(CommandBehavior.SchemaOnly);
        DataTable schema = data.GetSchemaTable();
        Dictionary<string, Tuple<int, int>> metadataByField = new Dictionary<string, Tuple<int, int>>();
        foreach (DataRow row in schema.Rows)
            {
            string fieldName = (string)row["ColumnName"];
            int ordinal = (int)row["ColumnOrdinal"] + 1;
            int maxLength = (int)row["ColumnSize"];
            metadataByField.Add(fieldName, new Tuple<int, int>(ordinal, maxLength));
            }
        // Begin the file, including its header rows
        formatFile = File.CreateText(formatFilePath);
        formatFile.WriteLine("10.0");
        formatFile.WriteLine(sourceFields.Length);
        // Certain strings need to be escaped to use them in a format file
        string fieldQuoteEscaped = fieldQuote == "\"" ? "\\\"" : fieldQuote;
        string fieldDelimiterEscaped = fieldDelimiter == "\t" ? "\\t" : fieldDelimiter;
        // Write a row for each source field, defining its metadata and destination field
        for (int i = 1; i <= sourceFields.Length; i++)
            {
            // Each line contains (separated by tabs): the line number, the source type, the prefix length, the field length, the delimiter, the destination field number, the destination field name, and the collation set
            string prefixLen = i != 1 || fieldQuote == null ? "0" : fieldQuote.Length.ToString();
            string fieldLen;
            string delimiter = i < sourceFields.Length ? fieldQuoteEscaped + fieldDelimiterEscaped + fieldQuoteEscaped : fieldQuoteEscaped + @"\r\n";
            string destOrdinal;
            string destField = destFields[i - 1];
            string collation;
            if (destField == null)
                {
                // If a field is not being imported, use ordinal position zero and a placeholder name
                destOrdinal = "0";
                fieldLen = "32000";
                destField = "DUMMY";
                collation = "\"\"";
                }
            else
                {
                Tuple<int, int> metadata;
                if (metadataByField.TryGetValue(destField, out metadata) == false) throw new ApplicationException("Could not find field \"" + destField + "\" in table \"" + tableName + "\".");
                destOrdinal = metadata.Item1.ToString();
                fieldLen = metadata.Item2.ToString();
                collation = "SQL_Latin1_General_CP1_CI_AS";
                }
            string line = String.Join("\t", i, "SQLCHAR", prefixLen, fieldLen, '"' + delimiter + '"', destOrdinal, destField, collation);
            formatFile.WriteLine(line);
            }
        return formatFilePath;
        }
    finally
        {
        if (data != null) data.Close();
        if (formatFile != null) formatFile.Close();
        }
    }
There was some reason I didn't use a using block for the data reader at the time.
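For example, a hypothetical call for a file like the one described in the question; the field names are made up, and note this method writes a plain \r\n row terminator rather than the question's "+++\r\n":
// Hypothetical usage: the four db-generated columns never appear in the file,
// so they are simply absent from sourceFields; a null entry in destFields
// makes the method skip that source field (destination ordinal zero).
string fmtPath = CreateFormatFile(
    @"C:\data\input.txt",
    connection,
    "dbo.MyTable",
    new[] { "GuidText", "SomeChars", "Unused" }, // fields as they appear in the file
    new[] { "GuidText", "SomeChars", null },     // null = don't import this field
    "©®©",                                       // column separator from the question
    null);                                       // no quote character around fields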
It seems that BCP cannot interpret True and False as bit values. It's better to either go with SSIS or to first replace those strings in the text (it's not a good idea to create views or anything like that; it's more overhead).
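If a pre-processing pass is an option, here is a rough sketch; the path is made up, and matching on the "©®©" delimiter prefix is a heuristic that assumes True/False only ever appear as whole field values (for very large files you would stream line by line rather than read the whole file into memory):
// Hypothetical cleanup pass: rewrite True/False fields to 1/0 before running bcp
foreach (string path in Directory.EnumerateFiles(@"C:\data\incoming", "*.txt"))
{
    string text = File.ReadAllText(path);
    text = text.Replace("©®©True", "©®©1").Replace("©®©False", "©®©0");
    File.WriteAllText(path + ".clean", text);
}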