GetGroups() is very slow - vb.net

I am using the GetGroups() function to read all of the groups of a user in Active Directory.
I'm not sure if I'm doing something wrong, but it is very, very slow: each time it arrives at this point, it takes several seconds. I also access the rest of Active Directory using the AccountManagement classes, and everything else executes instantly.
Here's the code:
For y As Integer = 0 To AccountCount - 1
    Dim UserGroupArray As PrincipalSearchResult(Of Principal) = UserResult(y).GetGroups()
    UserInfoGroup(y) = New String(UserGroupArray.Count - 1) {}
    For i As Integer = 0 To UserGroupArray.Count - 1
        UserInfoGroup(y)(i) = UserGroupArray(i).ToString()
    Next
Next
Later on...:
AccountChecker_Listview.Groups.Add(New ListViewGroup(Items(y, 0), HorizontalAlignment.Left))
For i As Integer = 0 To UserInfoGroup(y).Count - 1
    AccountChecker_Listview.Items.Add(UserInfoGroup(y)(i)).Group = AccountChecker_Listview.Groups(y)
Next
Items(,) contains my normal Active Directory data to display; Items(y, 0) contains the username.
y runs over the user accounts in AD. I also have some other code for the other information in this loop, but it's not the issue here.
Does anyone know how to make this go faster, or is there another solution?

I'd recommend trying to find out where the time is spent. One option is to use a profiler, either the one built into Visual Studio or a third-party profiler such as Redgate's ANTS Profiler or the YourKit .NET Profiler.
Another is to measure the time taken using the System.Diagnostics.Stopwatch class and use the results to guide your optimization efforts. For example, time the function that retrieves data from Active Directory, and separately time the function that populates the view, to narrow down where the bottleneck is.
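A minimal sketch of that measurement, where GetAdGroups and PopulateListView are hypothetical wrappers around the two loops shown in the question:
' Requires Imports System.Diagnostics.
Dim sw As Stopwatch = Stopwatch.StartNew()
GetAdGroups()         ' hypothetical wrapper around the GetGroups() loop
Debug.WriteLine("AD lookup: " & sw.ElapsedMilliseconds & " ms")

sw.Restart()
PopulateListView()    ' hypothetical wrapper around the ListView loop
Debug.WriteLine("ListView fill: " & sw.ElapsedMilliseconds & " ms")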
If the bottleneck is in the Active Directory lookup, you may want to consider running the operation asynchronously so that the window is not blocked and the view populates as new data is retrieved. If it's in the ListView, you may want to consider inserting the data in a batch operation.
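If it does turn out to be the ListView, one batch-style rewrite of the population loop (a sketch against the question's variables, not tested against the asker's project) uses the control's BeginUpdate/EndUpdate and Items.AddRange so the control lays out once rather than once per item:
' Build the items up front, then add them in a single call while
' the control suspends drawing.
Dim newItems As New List(Of ListViewItem)
For i As Integer = 0 To UserInfoGroup(y).Length - 1
    newItems.Add(New ListViewItem(UserInfoGroup(y)(i)) With {
        .Group = AccountChecker_Listview.Groups(y)})
Next

AccountChecker_Listview.BeginUpdate()
AccountChecker_Listview.Items.AddRange(newItems.ToArray())
AccountChecker_Listview.EndUpdate()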

Related

synchronization between 2 applications polling a SQL table

I have 2 instances of a VB.NET application, each running on its own dedicated server. The application runs a While True loop with a 5-second sleep when idle (idle meaning the table doesn't contain any ProcessQuery to be treated). On each iteration, the application queries a table in the SQL database to see whether there is anything it could process.
The problem is that I sometimes encounter the situation where both instances "take" the same ProcessQuery.
I'm using Entity Framework 6. I have looked into EntityState, but I don't think it does exactly what I'm trying to accomplish.
I was wondering what my solution would be to have the instances run perfectly in parallel. It's not impossible that at some point I'll have 12 instances running on 12 machines.
Thanks!
Dim conn As New Info_IndusEntities()
Dim DemandeWilma As WilmaDemandes = conn.WilmaDemandes.
    Where(Function(x) x.Site = "LONDON" AndAlso x.Statut = "toProcess").
    OrderBy(Function(x) x.RequestDate).
    FirstOrDefault()
If Not IsNothing(DemandeWilma) Then
    DemandeWilma.Statut = Statuts.EnTraitement.ToString()
    DemandeWilma.ServerName = Environment.MachineName
    DemandeWilma.ProcessDate = DateTime.Now
    conn.SaveChanges()
    Return DemandeWilma
End If
UPDATE (21/06/19)
I found an interesting article on optimistic concurrency. I started by adding a RowVersion (rowversion) column to my table, then refreshed my model and enabled the Concurrency Check property of the RowVersion column in my ORM. When I tested the update, here's the EF6 log:
UPDATE [dbo].[WilmaDemandes] SET [Statut] = @0, [ServerName] = @1,
[DateDebut] = @2 WHERE (([ID] = @3) AND ([RowVersion] = @4))
SELECT [RowVersion] FROM [dbo].[WilmaDemandes] WHERE @@ROWCOUNT > 0 AND [ID] = @3

-- @0: 'EnTraitement' (Type = String, Size = 20)
-- @1: 'TRB5995' (Type = String, Size = 20)
-- @2: '2019-06-25 7:31:01 AM' (Type = DateTime2)
-- @3: '124373' (Type = Int32)
-- @4: 'System.Byte[]' (Type = Binary, Size = 8)
-- Executing at 2019-06-25 7:31:24 AM -04:00
-- Completed in 95 ms with result: SqlDataReader

Closed connection at 2019-06-25 7:31:24 AM -04:00

Exception thrown: 'System.Data.Entity.Infrastructure.DbUpdateConcurrencyException' in EntityFramework.dll
UPDATED (25/06/19)
The problems, as explained in this post, start when you are using DB-First instead of Code-First: your property gets overwritten silently as soon as you update the model. Back then, some people coded a console-app workaround that they run on pre-build. I'm not sure I'm quite ready to take that as the final solution.
There's also an interesting tutorial on how to test optimistic concurrency and on ways to resolve such an exception.
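Putting the pieces together, here's a minimal sketch (my reconstruction based on the model above, not tested code) of claiming a row and treating the concurrency exception as "another instance got there first":
Function TryClaimNextQuery() As WilmaDemandes
    Using conn As New Info_IndusEntities()
        Dim demande = conn.WilmaDemandes.
            Where(Function(x) x.Site = "LONDON" AndAlso x.Statut = "toProcess").
            OrderBy(Function(x) x.RequestDate).
            FirstOrDefault()
        If demande Is Nothing Then Return Nothing ' Nothing to process.

        demande.Statut = Statuts.EnTraitement.ToString()
        demande.ServerName = Environment.MachineName
        demande.ProcessDate = DateTime.Now
        Try
            ' Succeeds only if RowVersion is unchanged since the read.
            conn.SaveChanges()
            Return demande
        Catch ex As System.Data.Entity.Infrastructure.DbUpdateConcurrencyException
            Return Nothing ' Another instance claimed it first; poll again.
        End Try
    End Using
End Function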
Add an "owner" column to your queue table
Your application updates one record (TOP 1) and sets the owner value to their identifier (WHERE Owner IS NULL)
Now your application goes back and reads their owned rows and processes them
It's a simple pattern and it works great. If any processes happen to take ownership 'simultaneously', only one will actually get the reservation.
I'm not very good at LINQ, so here's a brute-force method, multiline for clarity:
' First try reserving a row.
conn.Database.ExecuteSqlCommand(
    "WITH UpdateTop1 AS " &
    "(SELECT TOP 1 * FROM WilmaDemandes " &
    "WHERE Owner IS NULL AND Site = 'LONDON' " &
    "ORDER BY RequestDate) " &
    "UPDATE UpdateTop1 SET Owner = 'ThisApplication'")

' See if we got one.
Dim DemandeWilma As WilmaDemandes =
    conn.WilmaDemandes.
        Where(Function(x) x.Owner = "ThisApplication").
        FirstOrDefault()

' If we got a row, process it. Otherwise idle and repeat.
There's also no reason that you must reserve just one row. You could reserve all the free rows and work your way through them; meanwhile, other processes will pick up any subsequently arriving rows.
Personally, I would refactor your status column: make it NULL for new records that are ready to be processed, and otherwise store the worker ID that has reserved the row.
It also helps to add things like timestamp columns to record when the row was reserved, and so on.

Exceeded max configured index size while indexing document while running LS Agent

In our project we have a LotusScript agent which is supposed to delete and re-create the full-text (FT) index. We haven't figured out yet why updating the index cannot be accomplished automatically (although in our database options this is set to update the index daily), so we decided to do roughly the same thing with our own very simple agent. The agent looks like this and runs nightly:
Option Public
Option Declare

Dim s As NotesSession
Dim ndb As NotesDatabase

Sub Initialize
    Set s = New NotesSession
    Set ndb = s.CurrentDatabase
    Print("BEFORE REMOVING INDEXES")
    Call ndb.RemoveFTIndex()
    Print("INDEXES HAVE BEEN REMOVED SUCCESSFULLY")
    Call ndb.CreateFTIndex(FTINDEX_ALL_BREAKS, True)
    Print("INDEXES HAVE BEEN CREATED SUCCESSFULLY")
End Sub
In most cases it works very well, but sometimes, when somebody creates a document which exceeds 12 MB (we really don't know how this is possible), the agent fails to create the index, and the previous index has already been deleted at that point.
The error message is:
31.05.2018 03:01:25 Full Text Error (FTG): Exceeded max configured index
size while indexing document NT000BD992 in database index *path to FT file*.ft
My question is how to avoid this problem. We've already raised the default 6 MB limit to 12 MB with the console command SET CONFIG FTG_INDEX_LIMIT=12582912. Can we raise it even more? And in general, how should we solve the problem? Thanks in advance.
Using FTG_INDEX_LIMIT is one option to avoid this error, yes. But it will impact server performance in two ways: FT index updates will take more time and use more memory.
In theory there is no maximum for this limit, but since the update processes eat memory from the common heap, raising it too far can lead to out-of-memory overheaping and a server crash.
You can try to exclude attachments from the index. I don't think anyone can put more than 1 MB of text in one document, but users can attach big text files, and that will produce the error you're writing about.
P.S. And yeah, I agree with Scott: why do you need such an agent anyway? The standard FT indexing usually works fine.

multi threading - add more threads and continue the operation

OK, here is my code:
For i = 0 To 10
    Dim tTemp As Threading.Thread = New Threading.Thread(AddressOf dwnld)
    tTemp.IsBackground = True
    'tTemp.Start(geturl)
    lThreads.Add(tTemp)
    'MsgBox(lThreads.Item(i).ThreadState)
Next
I create a list of 10 threads, assign each one a function and its properties, and add them to the list.
'While ListBox2.Items.Count > 0
For i = 0 To lThreads.Count - 1
    If (lThreads.Item(i).ThreadState = 12) Then
        If (ListBox2.Items.Count > 0) Then
            lThreads.Item(i).Start(geturl)
            If (i = lThreads.Count - 1) Then
                i = 0
            End If
        Else
            Exit For
        End If
        'MsgBox(lThreads.Item(i).ThreadState)
    ElseIf (lThreads.Item(i).ThreadState = 16) Then
        lThreads.RemoveAt(i)
        Dim tTemp As Threading.Thread = New Threading.Thread(AddressOf dwnld)
        tTemp.IsBackground = True
        lThreads.Add(tTemp)
        If (i = lThreads.Count - 1) Then
            i = 0
        End If
    End If
Next
What's happening is that I see the threads stop after the dwnld function is completed. So I first check the state (12 means Background Or Unstarted). In case 12 I start the thread, and in case 16 (Stopped) I remove that particular thread and add a fresh one, just like the 10 I added above.
There is also a check for when the counter i reaches the last index: restart the whole loop by assigning i = 0.
The program downloads some web pages; the URLs come from ListBox2. geturl passes a URL on and removes it from the list, so when the listbox is empty the For loop exits.
But the above code runs only 11 times and does not restart. I tried using a label and GoTo, but it simply hangs.
Can anyone tell me what to do?
What I want is to maintain 10 threads that keep downloading the web pages and, when the list is empty, exit the function.
Trying to manually manage your own custom pool of threads is probably the wrong approach here. Use ThreadPool.QueueUserWorkItem or, preferably, the newer Task class. The thread pooling is managed for you, which greatly simplifies the code. Completely scrap this code and start over using one of the techniques I just mentioned. If you run into problems implementing either of them, post a more specific question.
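To illustrate, here is a minimal Task-based sketch of the same job; RunDownloads and Download are hypothetical stand-ins for the asker's dwnld routine and ListBox2-driven loop, not code from the question:
Imports System.Collections.Concurrent
Imports System.Collections.Generic
Imports System.Threading.Tasks

Module DownloadSketch
    Sub RunDownloads(urls As IEnumerable(Of String))
        ' Snapshot the URLs into a thread-safe queue.
        Dim pending As New ConcurrentQueue(Of String)(urls)
        Dim workers As New List(Of Task)

        ' Ten workers; each keeps pulling URLs until the queue is
        ' empty, so there is no thread list to babysit or restart.
        For i As Integer = 1 To 10
            workers.Add(Task.Run(
                Sub()
                    Dim url As String = Nothing
                    While pending.TryDequeue(url)
                        Download(url)
                    End While
                End Sub))
        Next

        Task.WaitAll(workers.ToArray())
    End Sub

    Sub Download(url As String)
        ' Placeholder for the real page download (e.g. WebClient).
    End Sub
End Module
With this shape, "when the list is empty, exit" falls out naturally: Task.WaitAll returns once the queue has been drained.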
Micro-management of threads is, well, just a really bad idea. The moment I see anyone trying to maintain a list of threads that are continually created, terminated and destroyed, I just know they are doomed. I have seen experienced professionals try it; it's fun looking on, waiting for the inevitable spectacular failure after months of trying to fix the unfixable.
Thread pools are, typically, nothing of the sort. They are usually a pool of tasks (task-class instances on a producer-consumer queue) that several threads feed off as and when they are free to do work. The worker threads manage themselves, fetching a new task when they have finished the old one; there is no need for any higher-level micro-management.
Listen to @Brian: forget managing lists of threads, checking their state and all that gunge. It'll just make you ill. Go with ThreadPool.QueueUserWorkItem or Tasks.

Linq to SQL Transaction Insert then Select really, really slow

I'm developing a piece of a system that basically migrates data from one set of tables to another. Everything works fine, but I've decided to employ transactions instead of just failing on operations that are partially completed. (That is, if some exception occurs, I want to roll back instead of leaving partially migrated data.)
I have a service (in the 3-tier-architecture sense, not a web service) which begins a transaction in the data access layer. The data context is shared within the data access class, which contains many methods. Those methods use various LINQ-to-SQL techniques to update, insert and delete. All the LINQ-to-SQL selects are within CompiledQueries.
The BeginTransaction method starts a transaction like this:
Public Sub BeginTransaction() Implements ITransactionalQueriesBase.BeginTransaction
    Me.Context.Connection.Open()
    Me.Context.Transaction = Context.Connection.BeginTransaction()
    IsInTransaction = True
End Sub
Basically, I have written a test which starts a transaction, inserts into a table, and then attempts to retrieve the value that was just inserted, all during the transaction. I did this because I wanted to assert that the insert method actually tries to insert. Later in the test I roll back, then check that the newly inserted value was not actually committed to the table. The test looks something like this:
<TestMethod()>
Public Sub FacilityService_Can_Rollback_A_Transaction()
    faciService.BeginTransaction()
    Dim devApp = UnitTestHelper.CreateDevelopmentApplication(devService.GetDevelopmentType("NEWFACI").ID, 1, 1, 1, 1)
    Dim devInsertRes = devService.InsertDevelopmentApplication(devApp)
    Assert.IsTrue(devInsertRes.ReturnValue > 0)
    For Each dir1 In devInsertRes.Messages
        Assert.Fail(dir1)
    Next
    Dim migrationResult = faciService.ProcessNewFacilityDevelopment(devInsertRes.ReturnValue)
    Assert.IsTrue(migrationResult.ReturnValue.InsertResult)
    Dim faciRetrieval1 = faciService.GetFacilityByID(migrationResult.ReturnValue.FacilityID)
    Assert.IsNotNull(faciRetrieval1.ReturnValue)
    faciService.Rollback()
    Dim faciRetrieval2 = faciService.GetFacilityByID(migrationResult.ReturnValue.FacilityID)
    Assert.IsNull(faciRetrieval2.ReturnValue)
End Sub
So, to my problem...
When the test gets to the faciRetrieval1 step, it sits there for 30 to 60 seconds before moving on, and I'm not sure why. If I run the same queries in a transaction within SSMS, they complete instantly. Does anyone have any ideas? The database is SQL Server 2008 SP1 (R2?).
I figured out that if one data context has an open transaction, any other data context of the same type appears unable to select the rows involved; the query blocks until the transaction completes.
I ended up fixing it by using the same context for every select, update and delete while a transaction was in progress.
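The shape of that fix, as a rough sketch with hypothetical names (the point is simply that every query method receives the same DataContext while the transaction is open):
Imports System.Data.Linq

Public Class TransactionalService
    Private ReadOnly _context As DataContext

    ' One context is created by the caller and shared everywhere.
    Public Sub New(context As DataContext)
        _context = context
    End Sub

    Public Sub BeginTransaction()
        _context.Connection.Open()
        _context.Transaction = _context.Connection.BeginTransaction()
    End Sub

    Public Sub Commit()
        _context.Transaction.Commit()
    End Sub

    Public Sub Rollback()
        _context.Transaction.Rollback()
    End Sub

    ' Every select/insert/update helper goes through _context rather
    ' than creating its own DataContext, so reads enlist in the same
    ' transaction as the writes instead of blocking on their locks.
End Class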

SELECT through oledbcommand in vb.net not picking up recent changes

I'm using the following code to work out the next unique order number in an Access database. serverDB is a System.Data.OleDb.OleDbConnection.
Dim command As New OleDb.OleDbCommand("", serverDB)
command.CommandText = "SELECT MAX(ORDERNO) FROM WORKORDR"
iOrder = command.ExecuteScalar()
NewOrderNo = (iOrder + 1)
If I subsequently create a WORKORDR (using a different DB connection), the code will not pick up the new "next order number".
e.g.
iFoo = NewOrderNo
CreateNewWorkOrderWithNumber(iFoo)
iFoo2 = NewOrderNo
will return the same value to both iFoo and iFoo2.
If I close and then reopen serverDB as part of the NewOrderNo function, then it works: iFoo and iFoo2 will be correct.
Is there any way to force a System.Data.OleDb.OleDbConnection to refresh the database in this situation without closing and reopening the connection?
For example, is there anything equivalent to serverDB.Refresh or serverDB.FlushCache?
How I create the order:
I wondered if this could be caused by not updating my transactions after creating the order. I'm using an XSD for the order creation, and the code I use to create the record is:
Sub CreateNewWorkOrderWithNumber(ByVal iNewOrder As Integer)
    Dim OrderDS As New CNC
    Dim OrderAdapter As New CNCTableAdapters.WORKORDRTableAdapter
    Dim NewWorkOrder As CNC.WORKORDRRow = OrderDS.WORKORDR.NewWORKORDRRow
    NewWorkOrder.ORDERNO = iNewOrder
    NewWorkOrder.name = "lots of fields filled in here."
    OrderDS.WORKORDR.AddWORKORDRRow(NewWorkOrder)
    OrderAdapter.Update(NewWorkOrder)
    OrderDS.AcceptChanges()
End Sub
From MSDN:
Microsoft Jet has a read-cache that is updated every PageTimeout milliseconds (default is 5000 ms = 5 seconds). It also has a lazy-write mechanism that operates on a separate thread from main processing and thus writes changes to disk asynchronously. These two mechanisms help boost performance, but in certain situations that require high concurrency, they may create problems.
If you possibly can, just use one connection.
Back in VB6 you could force the connection to refresh itself using ADO. I don't know whether that's possible with VB.NET; my Google-fu seems to be weak today.
You can change the PageTimeout value in the registry, but that will affect all programs on the computer that use the Jet engine (i.e. any programmatic use of Access databases).
I always throw away a connection object after I've used it. Thanks to connection pooling, getting a new connection is cheap.
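For instance, a minimal sketch of that throwaway pattern applied to the order-number query above (the connection string parameter is whatever the app already uses):
Function NewOrderNo(connectionString As String) As Integer
    ' A fresh connection carries no stale Jet read-cache, and
    ' pooling (where the provider supports it) keeps Open() cheap.
    Using conn As New OleDb.OleDbConnection(connectionString)
        conn.Open()
        Using command As New OleDb.OleDbCommand("SELECT MAX(ORDERNO) FROM WORKORDR", conn)
            Return CInt(command.ExecuteScalar()) + 1
        End Using
    End Using
End Function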