Lotus Notes Script - lotus-domino

I need a script that can copy the value from the InternetAddress field of each person document in names.nsf and append it to the UserName field. Can someone help with a working script?

This can literally be done with a one-line formula, using a simple assignment statement and the list append operator. It is really the most basic thing you can do in all of Lotus Notes programming. It doesn't even need to use any of the @functions.
The syntax for an assignment is:
FIELD fieldThatYouWantToSet := value;
The syntax for the list append operator is:
value1 : value2;
The only other thing you need to know is that value1 and value2 can simply be the names of existing items (a.k.a. fields), and the list of values in the second item will be appended to the list of values in the first item.
Then, all you will need to do is to put this one-liner into a formula agent that runs against all documents in the current view, and run the agent from the Actions menu while you are in the view.
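Putting the two pieces together with the item names from the question, the entire agent formula is just:
FIELD UserName := UserName : InternetAddress;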

Option 1:
There are a few ways to accomplish this. The simplest, as the earlier answer says, is a one-line field formula:
FIELD UserName := UserName : InternetAddress;
You could set and run that formula in an agent over the $Users view.
You could also use the agent's UnprocessedDocuments feature and set that up to run as an agent.
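A minimal LotusScript sketch of that approach (assumptions: the agent's target is set to the documents you want to process, and both items may be multi-valued):
Dim session As New NotesSession
Dim col As NotesDocumentCollection
Dim doc As NotesDocument
Set col = session.CurrentDatabase.UnprocessedDocuments
Set doc = col.GetFirstDocument
While Not (doc Is Nothing)
	' Append the InternetAddress value list to the UserName value list
	If doc.HasItem("InternetAddress") Then
		Call doc.ReplaceItemValue("UserName", ArrayAppend(doc.GetItemValue("UserName"), doc.GetItemValue("InternetAddress")))
		Call doc.Save(False, True)
	End If
	Set doc = col.GetNextDocument(doc)
Wend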
But below, under Option 2, I have written yet another way to set it up and run it as an agent over the $Users view.
Hope this helps.
Option 2:
At present I have no Domino system at hand to test this snippet; I wish there were a way to test it online before posting.
Logically, though, treat the snippet as pseudo-code for accomplishing your objective.
Above all, it has been more than a decade or two since I last programmed in Domino, because I moved on to RDBMS, DW, BI and now to Cloud... well, Cloud 9. :)
Here is a portion of LotusScript code to copy the value from the InternetAddress field of each person document in names.nsf and append it to the UserName field.
Dim db As New NotesDatabase( "", "names.nsf" ) ' "" = local; put your server name here if needed
Dim view As NotesView
Dim doc As NotesDocument
Dim newValue As Variant

Set view = db.GetView( "($Users)" )

' Loop through all documents in the ($Users) view:
' read InternetAddress and append it to UserName.
Set doc = view.GetFirstDocument
While Not (doc Is Nothing)
	If doc.HasItem("InternetAddress") Then
		' GetItemValue returns an array of values, so ArrayAppend joins the two lists
		newValue = ArrayAppend( doc.GetItemValue( "UserName" ), doc.GetItemValue( "InternetAddress" ) )
		Call doc.ReplaceItemValue( "UserName", newValue )
		Call doc.Save( False, True )
	End If
	Set doc = view.GetNextDocument(doc)
Wend

Related

npgsql ExecuteScalar() always returns nothing

I'm using npgsql as a nuget package in visual studio 2017 with visual basic.
Various commands work very well, but ExecuteScalar always returns Nothing although it should give a result.
The command looks like this:
Dim ser As Integer
Dim find = New NpgsqlCommand("SELECT serial from dbo.foreigncode WHERE code = '@code';", conn)
Dim fcode = New NpgsqlParameter("code", NpgsqlTypes.NpgsqlDbType.Varchar)
find.Parameters.Add(fcode)
find.Prepare()
fcode.Value = "XYZ"
ser = find.ExecuteScalar() ==> nothing
When the command string is copied as a value during debugging and pasted into the query tool of PGADMIN it delivers the correct result. The row is definitely there.
Different Commands executed with ExecuteNonQuery() work well, including ones performing UPDATE statements on the row in question.
When I look into the properties of the parameter fcode immediately before the ExecuteScalar it shows 'fcode.DataTypeName' caused an exception 'System.NotImplementedException'.
If I change my prepared statement to "SELECT @code" and set the parameter to an arbitrary value, just that value is returned. No access to the table takes place in that case, because the table name is not part of the SELECT. If I remove the WHERE clause and just select one column, I would also expect something to be returned, but again it is nothing.
Yes, there is a column named serial. It is of type bigint and cannot contain NULL.
A Query shows that there is no single row that contains NULL in any column.
Latest findings:
I queried a different table where the search column and the result column happen to have the same datatype. It works, so the syntax, parameter passing, Prepare etc. seem to work in principle.
The System.NotImplementedException in the DataTypeName property of the parameter occurs as well but it works anyway.
I rebuilt the index of the table in question. No change.
Still: when I copy/paste the CommandText and execute it in PGAdmin it shows the correct result.
Modifying the command to use plain text, without a parameter and without Prepare, still yields nothing. The plain-text CommandText was copy/pasted from PGAdmin, where it had been executed successfully before.
Very strange.
Reversing the search column and the result column also gives nothing as a result.
Please try these two alternatives and post back your results:
' Alternative 1: fetch the entire row and see what's returned
Dim dr = find.ExecuteReader()
While dr.Read()
	Console.WriteLine("{0}" & vbTab & "{1}", dr(0), dr(1))
End While
' Alternative 2: Check if "ExecuteScalar()" returns something other than an int
Dim result = find.ExecuteScalar()
... and (I just noticed Honeyboy Wilson's response!) ...
Fix your syntax:
' Try this first: remove the single quotes around "@code"!
Dim find = New NpgsqlCommand("SELECT serial from dbo.foreigncode WHERE code = @code;", conn)
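For completeness, here is a minimal corrected sketch under the same setup as the question (AddWithValue is used only to keep the example short; serial is bigint, so the scalar comes back as a 64-bit value):
Dim find = New NpgsqlCommand("SELECT serial FROM dbo.foreigncode WHERE code = @code;", conn)
find.Parameters.AddWithValue("code", NpgsqlTypes.NpgsqlDbType.Varchar, "XYZ")
' ExecuteScalar() returns the first column of the first row, or Nothing if no row matched
Dim ser = Convert.ToInt64(find.ExecuteScalar())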
Update 1
Please try this:
Dim find = New NpgsqlCommand("SELECT * from dbo.foreigncode;", conn)
Q: Does this return anything?
Dim dr = find.ExecuteReader()
While dr.Read()
	Console.WriteLine("{0}" & vbTab & "{1}", dr(0), dr(1))
End While
Q: Does this?
Dim result = find.ExecuteScalar()
Q: Do you happen to have a column named "serial"? What is its data type? Is it non-null for the row(s) with 'XYZ'?
Please update your original post with this information.
Update 2
You seem to be doing "everything right":
You've confirmed that you can connect,
You've confirmed that non-query updates to the same table work (with npgsql),
You've confirmed that the SQL queries themselves are valid (by copying/pasting the same SQL into PGAdmin and getting valid results).
As Shay Rojansky said, "System.NotImplementedException in the DataTypeName property" is a known issue stepping through the debugger. It has nothing to do with your problem: https://github.com/npgsql/npgsql/issues/2520
SUGGESTIONS (I'm grasping at straws):
Double-check "permissions" on your database and your table.
Consider installing a different version of npgsql.
Be sure your code is detecting any/all error returns and exceptions (it sounds like you're probably already doing this, but it never hurts to ask)
... and ...
Enable verbose logging, both client- and server-side:
https://www.npgsql.org/doc/logging.html
https://www.postgresql.org/docs/9.0/runtime-config-logging.html
... Finally ...
Q: Can you make ANY query, from ANY table, using ANY query method (ExecuteReader(), ExecuteScalar(), ... ANYTHING) from your npgsql/.Net client AT ALL?
I finally found it. It's often the small things that have a big impact.
When the value was assigned to the parameter, a substring index was incorrect.
Now it works perfectly.
Thanks to everybody who spent their time on this.

Return multiple records from subroutine and parse into datatable [Unidata][U2.NET]

I am working with Unidata, and ADO.NET using the U2 .NET Provider. This may be a shot in the dark as there are not many resources for Unidata and .NET these days.
Currently I can only return a single MV record, 153926þIþ, and parse it using MV_To_DataTable. I'd like to return multiple records, like 153926þIþÿ153926þIþÿ. Is there any built-in mechanism for doing this? I fear I will have to write an extension that best accommodates my case.
I retrieve a single record in a unidata subroutine this way:
SUBROUTINE GETITEMS(results)
EXECUTESQL "SELECT ID, STATUS, DESC FROM ITEMS TO GETITEM_LIST;"
DONE = 0
RECCNT = 0
LOOP
   RECCNT += 1
   READNEXTTUPLE REC FROM "GETITEM_LIST" ELSE DONE = 1
   results := REC
   IF RECCNT EQ 1 THEN EXIT
UNTIL DONE
REPEAT
CLEARSQL
RETURN
Simple subroutine that returns one record without any record marks. This works when I use the U2Parameter method called MV_To_DataTable to parse it into an existing datatable.
However, when I change the subroutine line
results := REC
to
results := REC : @RM
to append the record marks, and remove the limit of 1, MV_To_DataTable is no longer able to parse it correctly. In fact it throws System.IndexOutOfRangeException: Cannot find column 3.
VB.NET Code:
' ... Open database connection called U2Connection ...
Dim cmd = U2Connection.CreateCommand
cmd.CommandText = "CALL GETITEMS(?)"
cmd.CommandType = CommandType.StoredProcedure
cmd.Parameters.Clear()
cmd.Parameters.Add(New U2Parameter("@arg1", "") With {.Direction = ParameterDirection.InputOutput})
cmd.ExecuteNonQuery()
Dim tb As New DataTable
tb.Columns.Add("ID")
tb.Columns.Add("STATUS")
tb.Columns.Add("DESC")
cmd.Parameters.Item(0).MV_To_DataTable(tb) ' Error happens here
' System.IndexOutOfRangeException: Cannot find column 3.
It appears the method does not separate records. I could be interpreting this incorrectly.
UPDATE 2/9/2019:
I went ahead and wrote my own extension method to support my return format with the record marks. It populates a DataTable with the records, allowing me to continue as I normally would.
You are kind of straddling the Multivalue/System.Data divide here. If you have not already done so, I would suggest looking into the U2 Toolkit for .NET, which I believe is generally available if you are current on maintenance. It comes with some samples of how to do things like this in C# and VB, as well as some Entity Framework stuff.
But as to what is going on here: trying to put a U2Type.DynArray into a System.DataTable is kind of tricky, as the DynArray represents a record state, which could contain multiple rows from multiple tables within a DataSet. Since @RM is the terminator for a record, appending it effectively turns the DynArray into an array of DynArrays, and you can't pass that as a single parameter.
To fix this with minimum refactoring, you can still use MV_To_DataTable, but note that it expects your data to be tabular and in a single record. These examples assume a newline is an attribute mark (@FM/@AM).
Here are the contents of what you are returning:
Row1Column1
Row1Column2
Row1Column3:@RM
Row2Column1
Row2Column2
Row2Column3:@RM
Row3Column1
Row3Column2
Row3Column3:@RM
And here is what MV_To_DataTable expects:
Row1Column1:@VM:Row1Column2:@VM:Row1Column3
Row2Column1:@VM:Row2Column2:@VM:Row2Column3
Row3Column1:@VM:Row3Column2:@VM:Row3Column3
If you adjust your U2 sub to output that, it should work.
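A hedged sketch of that adjustment, assuming READNEXTTUPLE already hands back the selected columns @VM-delimited (as the single-record output in the question suggests): build one attribute per row and separate rows with @FM instead of appending @RM.
SUBROUTINE GETITEMS(results)
* Sketch only: rows separated by @FM, columns left @VM-delimited,
* which is the shape MV_To_DataTable expects
EXECUTESQL "SELECT ID, STATUS, DESC FROM ITEMS TO GETITEM_LIST;"
DONE = 0
results = ''
LOOP
   READNEXTTUPLE REC FROM "GETITEM_LIST" ELSE DONE = 1
UNTIL DONE DO
   IF results = '' THEN
      results = REC
   END ELSE
      results := @FM : REC
   END
REPEAT
CLEARSQL
RETURN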
Additionally, you could try issuing your SQL command directly from .NET, but that becomes perilous for other reasons depending on your data.
Good Luck!

Zoho Creator making a custom function for a report

Trying to wrap my head around Zoho Creator, it's not as simple as they make it out to be for building apps… I have an inventory database, and I have four fields that I use to fill a field called Inventory Number (Inv_Num1):
First Name (First_Name)
Last Name (Last_Name)
Year (Year)
Number (Number)
I have a custom function script that I call through a custom action in the form report. What I am trying to do is upload a CSV file with 900 entries. Of course, not all of those have the values (first/last/number), so I need to bulk edit all of them. However, when I do the bulk edit, the Inv_Num1 field is not updated with the new values. I use the custom action to populate the Inv_Num1 field from the other four fields.
Here is my script:
void onetime.UpdateInv()
{
for each Inventory_Record in Management
{
FN = Inventory_Record.First_Name.subString(0,1);
LN = Inventory_Record.Last_Name.subString(0,1);
YR = Inventory_Record.Year.subString(2,4);
NO = Inventory_Record.Number;
outputstr = FN + LN + YR + NO;
Inventory_Record.Inv_Num1 = outputstr;
}
}
I get this error back when I try to run this function
Error.
Error in executing UpdateInv workflow.
Error in executing For Each Record task.
Error in executing Set Variable task. Unable to update template variable FN.
Error evaluating STRING expression :
Even though there is a First Name, for example, it still thinks there is none. This only happens on the fields I changed with bulk edit. If I do each one by hand, the custom action works, but of course then Inv_Num1 has already been updated through my on-success edit functions, which makes the whole thing moot.
This may be one year late and you might have found the solution already, but just to highlight: the error you were facing was simply due to a null value in First Name.
Put a null check on each field and you are good to go.
You can also generate the inventory number at bulk-upload time by adding the same null checks to the code (just the part inside the loop) and placing it on Add > On Submit; a sketch follows below.
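A minimal sketch of the loop with null checks (assumptions: the same field names as the question, and empty values are treated like nulls; adjust the null-check idiom to whatever your Deluge version prefers):
void onetime.UpdateInv()
{
	for each Inventory_Record in Management
	{
		FN = "";
		LN = "";
		YR = "";
		NO = "";
		if(Inventory_Record.First_Name != null && Inventory_Record.First_Name != "")
		{
			FN = Inventory_Record.First_Name.subString(0,1);
		}
		if(Inventory_Record.Last_Name != null && Inventory_Record.Last_Name != "")
		{
			LN = Inventory_Record.Last_Name.subString(0,1);
		}
		if(Inventory_Record.Year != null && Inventory_Record.Year != "")
		{
			YR = Inventory_Record.Year.subString(2,4);
		}
		if(Inventory_Record.Number != null)
		{
			// convert to text before concatenating (see the answer below about mixing strings and numbers with "+")
			NO = Inventory_Record.Number.toString();
		}
		Inventory_Record.Inv_Num1 = FN + LN + YR + NO;
	}
}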
The better option would be a formula field: put the formula below in that field and your inventory number will be auto-generated. You can rename the formula field to Inv Number or whatever you want.
Since you are using subString directly on the Year field, I am assuming the year field is a string; otherwise you would have to use Year.toString().subString(2,4), and instead of if(Year=="","",...) you would have to put if(Year==null,null,...).
So here's the formula:
if(First_Name=="","",First_Name.subString(0,1)) + if(Last_Name=="","",Last_Name.subString(0,1)) + if(Year=="","",Year.subString(2,4)) + Number
Let me know your response if you implement this.
Without knowing the data type it's difficult to fix, but on the assumption that Inventory_Record.Number is a numeric item, you are adding a string to a number.
The "+" operator is used for string concatenation, but it also adds two numbers together: think "a" + "b" = "ab" for strings, but 1 + 2 = 3 for numbers.
All good, but when you do "a" + 2 the system doesn't know whether to add or concatenate, so it gives an error.

os.execute variables

I have multiple Steam accounts that I want to launch via a single Lua script with the options I specify. I've got pretty much everything sorted, except for launching with the code provided. I have no idea how to "pass" the variable with this format.
function Steam(n, opt1, opt2, opt3)
os.execute[["start C:\Program" "Files\Sandboxie\Start.exe /box:Steam2 D:\Steam\steam.exe -login username password -opt1 -opt2 -opt3"]]
end
I have my usernames and sandboxes set up so that only the number needs changing (fenriros2, fenriros3, Steam2, Steam3, etc.) with the same password.
Basically, I want this:
Steam(3, -tf, -exit, -textmode)
to do:
os.execute[["start C:\Program" "Files\Sandboxie\Start.exe /box:Steam3 D:\Steam\steam.exe -login fenriros3 password -applaunch 440 -textmode"]]
I'll use -exit to close the lua window after it's done.
I realize my code isn't exactly efficient, but that's a worry for a later time. Right now I just need to get it working.
Any help is greatly appreciated, and I apologize if I missed something obvious, I'm still fairly new at Lua.
First, the obvious part: the [[ ]] just delimit a string, so all you need to do is build that string in a variable and substitute the parts that change.
function Steam(n, opt1, opt2, opt3)
	-- Set up the execute string with placeholders for the parameters.
	local strExecute = [["start C:\Program" "Files\Sandboxie\Start.exe /box:Steam{n} D:\Steam\steam.exe -login fenriros{n} password -{opt1} -{opt2} -{opt3}"]]
	-- Use gsub to replace the placeholders.
	-- You could just concatenate the string, but I find it easier to work this way.
	strExecute = strExecute:gsub('{n}', n)
	strExecute = strExecute:gsub('{opt1}', opt1:gsub('%%', '%%%%'))
	strExecute = strExecute:gsub('{opt2}', opt2:gsub('%%', '%%%%'))
	strExecute = strExecute:gsub('{opt3}', opt3:gsub('%%', '%%%%'))
	os.execute(strExecute)
end
Steam(1,'r1','r2','r3')
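An equivalent sketch using string.format instead of gsub, in case you prefer building the command in one step (same assumptions about paths and account naming as above; the option switches are passed with their leading dashes):
function Steam(n, opt1, opt2, opt3)
	-- %d fills in the sandbox/account number, %s the three option switches
	local cmd = string.format(
		[["start C:\Program" "Files\Sandboxie\Start.exe /box:Steam%d D:\Steam\steam.exe -login fenriros%d password %s %s %s"]],
		n, n, opt1, opt2, opt3)
	os.execute(cmd)
end

Steam(3, "-tf", "-exit", "-textmode")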

MOSS 2007: What is the source of "Directories"?

I'm trying to generate a new SharePoint list item directly using SQL Server. What's stopping me is the damn tp_DirName column. I have no idea how to create this value.
Just for instance, I have selected all tasks from AllUserData, and these are possible values for the column: 'MySite/Lists/Task', 'Lists/Task' and even 'MySite/Lists/List2'.
MySite is the FullUrl value from the Webs table, so I can obtain it. But what about 'Lists/Task' and '/Lists/List2'? Where are they stored?
To put it outside the SQL context: what is the object that has an attribute like '/Lists/List2'? Where can I set it up in the GUI?
Just an FYI: it is VERY unsupported to write directly to SharePoint's SQL tables. You should really write something that uses the SharePoint Object Model. Writing to the SharePoint database directly means Microsoft will not support the environment.
I've discovered that the [AllDocs] table, in contrast to its title, contains information about "directories" that can be used to generate tp_DirName. At least, I've found "List2" and "Task" entries in the [AllDocs].[tp_Leaf] column.
So the solution looks like this: concatenate the following two components to get tp_DirName:
[Webs].[FullUrl] for the web containing the list that contains the item;
[AllDocs].[tp_Leaf] for the list containing the item.
Concatenate the following two components to get tp_Leaf for an item:
(item count in the list) + 1
'_.000'
Well, my previous answer was not very useful, though it had a key to the magic. Now I have a really useful one.
Whatever they say, M$ is quite liberal toward MOSS DB hackers. At least they provide the following documents:
http://msdn.microsoft.com/en-us/library/dd304112(PROT.13).aspx
http://msdn.microsoft.com/en-us/library/dd358577(v=PROT.13).aspx
Read them? Then you know that all folders are listed in the [AllDocs] table with '1' in the 'Type' column.
Now, let's look at the 'tp_RootFolder' column in AllLists. It looks like a folder id, doesn't it? So just SELECT the single row from [AllDocs] where Id = tp_RootFolder and Type = 1. Then concatenate DirName + LeafName, and you will know what the 'tp_DirName' value for a newly generated item in the list should be. That looks like a rock-solid solution.
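A hedged T-SQL sketch of that lookup (column names are taken from the discussion above; the list-id column in AllLists and the '/' separator are assumptions, so verify them against your schema):
DECLARE @listId uniqueidentifier;
SET @listId = '...my list id...';

-- Root folder row for the list: its DirName + LeafName form the tp_DirName for new items
SELECT d.DirName + '/' + d.LeafName AS tp_DirName
FROM [dbo].[AllLists] AS l
JOIN [dbo].[AllDocs] AS d
  ON d.Id = l.tp_RootFolder AND d.Type = 1
WHERE l.tp_ID = @listId;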
Now about tp_LeafName for new items. Earlier I wrote that the answer is (item count in the list) + 1 + '_.000', which corresponds to the following query:
DECLARE @itemscount int;
SELECT @itemscount = COUNT(*) FROM [dbo].[AllUserData] WHERE [tp_ListId] = '...my list id...';
INSERT INTO [AllUserData] (tp_LeafName, ...) VALUES(CAST(@itemscount + 1 AS NVARCHAR(255)) + '_.000', ...)
That said, I'm not sure it always works. For items, yes; but for docs... I'll look into the question further. Leave a comment if you want to read a report.
Hehe, there is a stored procedure named proc_AddListItem. I was almost right. MS people do the same, but instead of (count + 1) they use just... tp_ID :)
Anyway, now I know THE SINGLE RIGHT answer: I have to call proc_AddListItem.
UPDATE: Don't forget to present the data from the [AllUserData] table as a new item in [AllDocs] (just insert id and leafname, see how SP does it itself).