Unable to shake off hardcoded delay from my script - vba

I've written a script in VBA, in combination with Selenium, to parse all the company names available on a webpage. The webpage lazy-loads its content, so only 20 more links become visible with each scroll: after two scrolls 40 links are visible, and so on, up to the 1000 links available on the page. My script below can reach the bottom of the page, handling all the scrolling, and fetch all the names on the page.
However, it is necessary to wait a certain amount of time after each scroll for the webpage to update its content. This is where I've used a hardcoded delay, but hardcoding a delay is very unreliable and sometimes makes the browser quit before the whole operation completes.
How can I change this .Wait 6000 portion into an explicit wait instead of a hardcoded one?
This is what I've written so far:
Sub Getlinks()
    Dim driver As New ChromeDriver, prevlen&, curlen&
    Dim posts As Object, post As Object
    With driver
        .get "http://fortune.com/fortune500/list/"
        prevlen = .FindElementsByClass("company-title").Count
        Do
            prevlen = curlen
            .ExecuteScript ("window.scrollTo(0, document.body.scrollHeight);")
            .Wait 6000 ''I like to kick out this hardcoded delay and use explicit wait in place
            Set posts = .FindElementsByClass("company-title")
            curlen = posts.Count
            If prevlen = curlen Then Exit Do
        Loop
        For Each post In posts
            R = R + 1: Cells(R, 1) = post.Text
        Next post
    End With
End Sub

Here is a completely different approach that doesn't require using a browser; instead it submits a series of web requests. With this approach, waiting for a page to load isn't a concern.
Typically, a lazy-loading page submits a new request to load the next batch of data as you scroll. If you monitor the web traffic you can spot the requests made and emulate them, which I have done below.
The result should be a list of company names, in ascending rank order, on the first sheet of the workbook.
Things you'll need:
Add References to:
Microsoft Scripting Runtime
Microsoft XML v6.0
Add the VBA-JSON code to your project. You can find that here
Edit
Changed the code to keep pulling data from the site until there are no more items in the list. Thanks @Qharr for pointing this out.
Code
Public Sub SubmitRequest()
    Const baseURL As String = "http://fortune.com/api/v2/list/2358051/expand/item/ranking/asc/"
    Dim Url As String
    Dim startingNumber As Long
    Dim j As Long
    Dim getRequest As MSXML2.XMLHTTP60
    Dim Json As Object
    Dim Companies As Object
    Dim Company As Variant
    Dim CompanyArray As Variant

    'Create an array to hold each company
    ReDim CompanyArray(0 To 50000)

    'Create a new XMLHTTP object so we can place a GET request
    Set getRequest = New MSXML2.XMLHTTP60

    'The API seems to only support returning 100 records at a time,
    'so do it in batches of 100
    Do
        'Build the URL; the format is something like
        '0/100, where 0 is the starting position and 100 is the ending position
        Url = baseURL & startingNumber & "/" & startingNumber + 100
        With getRequest
            .Open "GET", Url
            .send
            'The response is a JSON object; for this code to work
            'you'll need this code https://github.com/VBA-tools/VBA-JSON
            'What is returned is a dictionary
            Set Json = JsonConverter.ParseJson(.responseText)
            Set Companies = Json("list-items")
            'Keep checking in batches of 100 until there are no more
            If Companies.Count = 0 Then Exit Do
            'Iterate the dictionary and return the title (which is the name)
            For Each Company In Companies
                CompanyArray(j) = Company("title")
                j = j + 1
            Next
        End With
        startingNumber = startingNumber + 100
    Loop
    ReDim Preserve CompanyArray(j - 1)
    'Dump the data to the first sheet
    ThisWorkbook.Sheets(1).Range("A1:A" & j) = WorksheetFunction.Transpose(CompanyArray)
End Sub

There you go:
Sub Getlinks()
    Dim driver As New ChromeDriver
    Dim pcount As Long, R As Long
    Dim posts As Object, post As Object
    With driver
        .get "http://fortune.com/fortune500/list/"
        Do
            .ExecuteScript ("window.scrollTo(0, document.body.scrollHeight);")
            Set posts = .FindElementsByClass("company-title")
            pcount = posts.Count
        Loop Until pcount = 1000
        For Each post In posts
            R = R + 1: Cells(R, 1) = post.Text
        Next post
    End With
End Sub
Or even better, print as you go:
Sub Getlinksasyougo()
    Dim driver As New ChromeDriver
    Dim pcount As Long, R As Long, i As Long
    Dim posts As Object, post As Object
    With driver
        .get "http://fortune.com/fortune500/list/"
        i = 1
        Do
            .ExecuteScript ("window.scrollTo(0, document.body.scrollHeight);")
            Set posts = .FindElementsByClass("company-title")
            pcount = posts.Count
            If i <> pcount Then
                For R = i To pcount - 1
                    Cells(R, 1) = posts(R + 1).Text
                Next R
                i = pcount
            End If
        Loop Until pcount = 1000
    End With
End Sub

Here's a way to approach it using the "look for the spinner element" method discussed in one of the comments, which avoids having to specify the number of elements you expect the page to load. The class name of the spinner actually changes depending on whether or not it's visible, which makes it pretty easy to wait for the spinner to become visible and then disappear again before getting the page elements.
This method still involves some waiting; by default, it waits 1/10th of a second after each attempt to find the spinner, either until the spinner is found or until some maximum number of attempts is reached. But that's much faster than waiting 6 seconds every time.
Also, unrelated: don't write values to cells one at a time, it's really slow. It's much faster to write them to an array first and then write the entire array at once.
Sub getLinks()
    Dim bot As New ChromeDriver
    bot.Get "http://fortune.com/fortune500/list/"
    Dim posts As WebElements
    Dim numPosts As Long
    Dim finishedScrolling As Boolean
    finishedScrolling = False
    Do Until finishedScrolling
        'Set beginning post count and scroll down
        Dim startPosts As Long
        startPosts = numPosts
        bot.ExecuteScript "window.scrollTo(0, document.body.scrollHeight);"
        'Wait for spinner to become visible, then wait for up to 5 seconds for it to hide again
        Call waitForElements(bot, "div[class^='F500-spinner ']", 50)
        Call waitForElements(bot, "div[class^='F500-spinner hide']", 50)
        'See if any new posts have loaded
        Set posts = bot.FindElementsByClass("company-title")
        numPosts = posts.Count
        If numPosts = startPosts Then
            finishedScrolling = True
        End If
    Loop
    'Write text to results array
    Dim post As WebElement
    ReDim resultsArr(1 To posts.Count, 1 To 1) As String
    Dim i As Long
    i = 1
    For Each post In posts
        resultsArr(i, 1) = post.Text
        i = i + 1
    Next
    'Write array to sheet
    With ActiveSheet
        .Range(.Cells(1, 1), .Cells(UBound(resultsArr, 1), 1)).Value = resultsArr
    End With
End Sub

Sub waitForElements(bot As WebDriver, css As String, maxAttempts As Long, Optional waitTimeMS As Long = 100)
    'Use a CSS selector string to wait for element(s) to appear on a page, or give up after maxAttempts
    'By default, bot waits 0.1 second after each attempt
    Dim i As Long
    Dim foundElem As Boolean
    foundElem = False
    Do Until foundElem
        i = i + 1
        If bot.FindElementsByCss(css).Count > 0 Then
            foundElem = True
        ElseIf i = maxAttempts Then
            foundElem = True
        Else
            bot.Wait waitTimeMS
        End If
    Loop
End Sub

Define a timeout (a specified period of time that is allowed to elapse) to get rid of the hardcoded per-scroll delay. The timeout value itself still needs to be hardcoded.
The differences between this and your original code are:
The loop itself runs over and over (it doesn't wait 6 s on each iteration) and checks for new content until new content is found or the timeout is reached.
If the lazy loading takes more time than expected, for instance when loading items 21 to 50, the loop "waits" and keeps trying to get new content for up to the maximum time defined in the timeout.
Downside: on the last iteration, when all content is loaded, the loop will take as many seconds as the timeout is set to.
Code:
Sub Getlinks()
    Dim driver As New ChromeDriver, prevlen&, curlen&
    Dim posts As Object, post As Object
    Dim timeout As Integer, startTime As Double
    timeout = 10 ' set the timeout to 10 seconds
    With driver
        .get "http://fortune.com/fortune500/list/"
        prevlen = .FindElementsByClass("company-title").Count
        startTime = Timer ' set the initial starting time
        Do
            .ExecuteScript ("window.scrollTo(0, document.body.scrollHeight);")
            Set posts = .FindElementsByClass("company-title")
            curlen = posts.Count
            If curlen > prevlen Then
                startTime = Timer ' reset start time if new elements found
                prevlen = curlen ' set new prevlen
            End If
        Loop While Round(Timer - startTime, 2) <= timeout ' check if timeout is reached
        For Each post In posts
            R = R + 1: Cells(R, 1) = post.Text
        Next post
    End With
End Sub

I don't know if this will help, as it's still a 'hardcoded' solution, but you could try a delay function rather than the Wait method and see if that helps with the browser quitting early.
Function Delay(Seconds As Single)
    Dim StopTime As Single: StopTime = Timer + Seconds
    Do While Timer < StopTime
        DoEvents
    Loop
End Function

I think you are almost there.
Although I don't think you can avoid waiting, the workaround is to check for new posts a number of times as you scroll down, each time with a shorter wait.
The example below checks for new posts 5 times, each with a 2-second wait, so it waits a total of 10 seconds before declaring the end of the page. Adjust these two parameters to suit.
Sub Getlinks()
    Dim driver As New ChromeDriver, prevlen&, curlen&
    Dim posts As Object, post As Object
    ' Counter for the number of times there are NO NEW POSTS
    Dim NoIncreaseCount As Integer
    Const MaxNoIncreaseCount As Integer = 5
    Const WaitTime As Integer = 2000 ' 2 seconds wait time each scroll down
    With driver
        .get "http://fortune.com/fortune500/list/"
        prevlen = .FindElementsByClass("company-title").Count
        NoIncreaseCount = 0
        Do Until NoIncreaseCount = MaxNoIncreaseCount
            .ExecuteScript ("window.scrollTo(0, document.body.scrollHeight);")
            .Wait WaitTime
            Set posts = .FindElementsByClass("company-title")
            curlen = posts.Count
            If prevlen < curlen Then
                ' There are new posts
                prevlen = curlen
                NoIncreaseCount = 0
            Else
                ' No new posts
                NoIncreaseCount = NoIncreaseCount + 1
            End If
        Loop
        For Each post In posts
            R = R + 1: Cells(R, 1) = post.Text
        Next post
    End With
End Sub


Failed to find any tables on the website - VBA MSXML2

I'm trying to fetch some data from a website using the MSXML2 library. There are no errors; however, the list of elements within the <table> tag is 0.
When I use IE it works, but it's much slower, and sometimes the website doesn't even load.
Edit: I've noticed that the website shows a "Loading" page before it is fully loaded, and I think that might be the issue.
Here is the code:
Sub Test()
    Dim Data As Variant
    Dim Tables As Object
    Dim Website As Object
    Dim x As Long, y As Long
    Dim oRow As Object, oCell As Object
    Dim FirstCol As Integer, LastCol As Integer, FirstRow As Integer, LastRow As Integer
    Set Website = CreateObject("htmlFile")
    y = 0: x = 0 'x - row, y - column
    With CreateObject("MSXML2.serverXMLHTTP")
        .Open "GET", "www.example.com", False
        .send
        Website.body.innerHTML = .responseText
    End With
    Set Tables = Website.getElementsByTagName("table")
    'When I go to debug mode and check, there are 0 tables. However, on the website, there are many tables within the <table> tag
End Sub

Programmatically sort pages in a Visio document using VBA

Does anyone know a method to sort Visio pages alphabetically using VBA?
I looked to see if a method such as vzdVisioDocument.Pages.Sort exists, but found nothing in documentation or through internet searches.
Do I need to write my own sorting function using the Application.ActiveDocument.Pages.ItemU("Page Name").Index property? That seems to be the method suggested by recording a macro of the action.
So that wasn't as painful as expected. With vzdVisioDocument as an already defined Visio.Document:
' Make a collection of titles to iterate through
Dim colPageTitles As Collection
Set colPageTitles = New Collection
Dim intPageCounter As Integer
For intPageCounter = 1 To vzdVisioDocument.Pages.Count
    colPageTitles.Add vzdVisioDocument.Pages.Item(intPageCounter).Name
Next intPageCounter

' For each title in the collection, iterate through pages and find the appropriate new index
Dim intPageIndex As Integer
Dim varPageTitle As Variant
For Each varPageTitle In colPageTitles
    For intPageIndex = 1 To vzdVisioDocument.Pages.Count
        ' Check to see if the title comes before the index's current page title
        If StrComp(varPageTitle, vzdVisioDocument.Pages.Item(intPageIndex).Name) < 0 Then
            ' If so, set the new page index
            vzdVisioDocument.Pages.ItemU(varPageTitle).Index = intPageIndex
            Exit For
        End If
    Next intPageIndex
Next varPageTitle

' Clean up
Set colPageTitles = Nothing
I mentioned this in another comment, but when I made some test pages, this kept shuffling the pages around when I ran it; because of the way it is implemented, I don't believe the Exit For should be in there.
I also switched the comparison to a case-insensitive StrComp (vbTextCompare) and swapped the order of the For loops, as a matter of personal preference.
Sub PageSort()
    Dim titlesColl As Collection
    Set titlesColl = New Collection
    Dim i As Long
    For i = 1 To ActiveDocument.Pages.Count
        titlesColl.Add ActiveDocument.Pages.Item(i).Name
    Next i
    Dim title As Variant
    For i = 1 To ActiveDocument.Pages.Count
        For Each title In titlesColl
            If StrComp(ActiveDocument.Pages.Item(i).Name, title, vbTextCompare) < 0 Then
                ActiveDocument.Pages.Item(title).Index = i
            End If
        Next title
    Next i
    Set titlesColl = Nothing
End Sub
Private Sub reorderPages()
    Dim PageNameU() As String
    Dim isBackgroundPage As Boolean
    Dim vsoPage As Visio.Page
    Dim vsoCellObj As Visio.Cell

    '// Get all pages
    Dim i As Integer
    For Each vsoPage In ActiveDocument.Pages
        i = i + 1
        ReDim Preserve PageNameU(i)
        PageNameU(i) = vsoPage.NameU
    Next vsoPage

    For i = 1 To UBound(PageNameU)
        Set vsoPage = ActiveDocument.Pages.ItemU(PageNameU(i))
        Set vsoCellObj = vsoPage.PageSheet.Cells("UIVisibility")
        isBackgroundPage = vsoPage.Background
        '// Make it a foreground page so the page index can be set
        If isBackgroundPage = True Then
            vsoCellObj.FormulaU = visUIVNormal
            vsoPage.Background = False
        End If
        vsoPage.Index = NumNonAppSysPages + i
        '// Set it back to a background page
        If isBackgroundPage = True Then
            vsoCellObj.FormulaU = visUIVHidden
            vsoPage.Background = True
        End If
    Next i
End Sub

Excel VBA Scrape amazon for inventory

I am looking to scrape Amazon inventory. Here's the link I used:
https://www.amazon.com/Stratford-Pharmaceuticals-Omega-Fatty-Strength/dp/B006JCU54Y/ref=sr_1_2?s=pet-supplies&ie=UTF8&qid=1518816130&sr=1-2&keywords=stratford
There is a part with the title "Compare with similar items" from which I need to extract prices (I have already done that) and also the inventory quantity.
The second part is not directly obtainable. Manually, I have to click "Add to Cart", then from the next page click "Cart", then on the next page open the Quantity drop-down, select 10+, manually type any large number, say 999, and click "Update".
There will be an alert message containing the remaining inventory, like this:
(This seller has only 35 of these available. To see if more are available from another seller,) >> so the desired number here is 35
Here's the Excel file and snapshots that illustrate the manual steps.
I used IE, but if it is possible to use XMLHTTP that would of course be great.
Here's the code I've devised so far:
Sub Test()
    Dim ws As Worksheet
    Dim ie As Object
    Dim allLnks As Object
    Dim lnk As Object
    Dim r As Long
    Dim liElem As Object
    Dim prElem As Object
    Dim crtElem As Object
    Dim elem As Object
    Dim cnt As Integer
    Dim inputElem As Object
    Dim inputEle As Object
    Set ws = ThisWorkbook.Worksheets("Sheet2")
    Set ie = CreateObject("InternetExplorer.Application")
    With ie
        .Visible = True
        .navigate ("https://www.amazon.com/Stratford-Pharmaceuticals-Omega-Fatty-Strength/dp/B006JCU54Y/ref=sr_1_2?s=pet-supplies&ie=UTF8&qid=1518816130&sr=1-2&keywords=stratford")
        Do: DoEvents: Loop Until .readyState = 4
        ws.Range("B2").Value = Format(Now(), "dd/mm/yyyy - hh:mm:ss")
        Set liElem = .document.getElementById("detail-bullets").getElementsByTagName("table")(0).getElementsByTagName("ul")(0)
        For Each elem In liElem.getElementsByTagName("li")
            If InStr(elem.innerText, "ASIN") > 0 Then ws.Range("B1").Value = Replace(elem.innerText, "ASIN: ", "")
            If InStr(elem.innerText, "Rank:") > 0 Then ws.Range("B3").Value = MyUDF(elem.innerText, "Rank: ", "(")
            If InStr(elem.innerText, "Review:") > 0 Then ws.Range("B4").Value = Replace(Split(Trim(Split(elem.innerText, "Review: ")(1)), vbLf)(1), Chr(13), "")
        Next elem
        Set prElem = .document.getElementById("comparison_price_row")
        For Each elem In prElem.getElementsByTagName("td")
            cnt = cnt + 1
            ws.Range("A" & cnt + 4).Value = "Seller " & cnt
            ws.Range("B" & cnt + 4).Value = elem.getElementsByClassName("a-offscreen")(0).innerText
        Next elem
        cnt = 0
        Set crtElem = .document.getElementById("HLCXComparisonTable").getElementsByClassName("a-button-inner")
        For Each elem In crtElem
            .navigate elem.getElementsByTagName("a")(0).href
            Do: DoEvents: Loop Until .readyState = 4
            .navigate .document.getElementsByClassName("a-button-inner")(0).getElementsByTagName("a")(0).href
            Do: DoEvents: Loop Until .readyState = 4
            cnt = cnt + 1
            ws.Range("C" & cnt + 4).Value = Replace(Split(Split(MyUDF(.document.getElementsByClassName("a-row a-spacing-base sc-action-quantity sc-action-quantity-right")(0).innerHTML, "maxlength=", "quantity="), "autocomplete")(0), "=")(1), """", "")
        Next elem
        Stop
        '.Quit
    End With
End Sub

Function MyUDF(s As String, b As String, a As String) As String
    Dim arr() As String
    Dim r As String
    arr = Split(s, b)
    If UBound(arr) > 0 Then
        r = arr(1)
        arr = Split(r, a)
        If UBound(arr) > 0 Then
            r = arr(0)
        End If
    End If
    MyUDF = Trim(r)
End Function
Here are snapshots that may help
CSS selector to get stock info
Taking the cart view page (generated by your code) as an example, you can use a CSS selector to target the text regarding stock levels:
.sc-product-availability
The . is the selector for className.
VBA
You can use the .document.querySelectorAll method to retrieve a nodeList of the matching items (2 in the example):
Dim nodeList As Object
Set nodeList = .document.querySelectorAll(".sc-product-availability")
You would then loop over its Length to retrieve the items (not tested, but this is the general method):
Dim i As Long
For i = 0 To nodeList.Length - 1
    Debug.Print nodeList.Item(i).innerText
Next i
Hopefully that is useful to you.
Give it a try. It should fetch the number you are after. I used XMLHTTP and Selenium in combination to make the script run a little faster. I could not use an XMLHTTP request in my second step, as the links were generated by JavaScript.
Upon running the below script you can find out how many of these items the seller has. Even if the seller has no such items, the script will not break, as I've already handled that case.
There it is:
Sub GetInfo()
    Const base As String = "https://www.amazon.com"
    Const mainurl As String = "https://www.amazon.com/Stratford-Pharmaceuticals-Omega-Fatty-Strength/dp/B006JCU54Y/ref=sr_1_2?s=pet-supplies&ie=UTF8&qid=1518816130&sr=1-2&keywords=stratford"
    Dim Http As New XMLHTTP60, Htmldoc As New HTMLDocument, itext As Object
    Dim driver As New ChromeDriver, idic As New Scripting.Dictionary
    Dim post As Object, oinput As Object, posts As Object, elem As Object
    Dim idrop As Object, oclick As Object, I&, key As Variant

    With Http
        .Open "GET", mainurl, False
        .send
        Htmldoc.body.innerHTML = .responseText
    End With

    With Htmldoc.querySelectorAll("[id^='comparison_add_to_cart_'].a-button-text")
        For I = 0 To .Length - 1
            idic(base & Replace(.item(I).getAttribute("href"), "about:", "")) = 1
        Next I
    End With

    For Each key In idic.keys
        driver.get key
        Set post = driver.FindElementByCss("input[value='addToCart']", Raise:=False, timeout:=10000)
        If Not post Is Nothing Then
            post.Click
        End If
        Set posts = driver.FindElementById("hlb-view-cart-announce", timeout:=10000)
        posts.Click
        Set elem = driver.FindElementByCss("span#a-autoid-0-announce", timeout:=10000)
        elem.Click
        Set idrop = driver.FindElementById("dropdown1_9", timeout:=10000)
        idrop.Click
        Set oinput = driver.FindElementByCss("input[name='quantityBox']", timeout:=10000)
        oinput.SendKeys "100"
        Set oclick = driver.FindElementByCss("#a-autoid-1", timeout:=10000)
        oclick.Click
        Set itext = driver.FindElementByCss(".sc-quantity-update-message span.a-size-base", Raise:=False, timeout:=5000)
        If Not itext Is Nothing Then
            R = R + 1: Cells(R, 1) = itext.Text
        Else
            R = R + 1: Cells(R, 1) = "Sorry dear nothing found"
        End If
    Next key
End Sub
References to add:
Selenium Type Library
Microsoft HTML Object Library
Microsoft XML, v6.0
Microsoft Scripting Runtime
You may get output like the below. You can then use a regex to parse out the number 48:
This seller has only 48 of these available. To see if more are available from another seller, go to the product detail page.
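As a sketch of that parsing step, you could use the late-bound VBScript.RegExp object (no extra reference needed) to pull the count out of the message; the message text below is taken from the output above, and the Sub name is just illustrative:
```vba
Sub ParseAvailableCount()
    Dim re As Object, matches As Object
    Dim msg As String
    msg = "This seller has only 48 of these available. To see if more are available from another seller, go to the product detail page."
    Set re = CreateObject("VBScript.RegExp")
    ' capture the digits between "has only" and "of these available"
    re.Pattern = "has only (\d+) of these available"
    If re.Test(msg) Then
        Set matches = re.Execute(msg)
        Debug.Print matches(0).SubMatches(0) ' the captured count, here 48
    End If
End Sub
```
If the message wording varies, loosening the pattern to something like "(\d+) of these available" would still capture the number.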

VBA is it possible to get location of keyword on IE webpage

I want to use VBA to open a webpage for me (the webpage is made up of HTML with cells of data), find some keywords, and email out the keywords plus a certain number of rows of data above and below them. To do this, I need to be able to find the location of the keywords (e.g. row 3, column 2, or line 4, characters 4-10, etc.). Are there any commands in the Internet Explorer library that will allow me to do this? So far I have code for one keyword only, which will go to the keyword and select/highlight it. Now I need to find out how to grab a certain number of rows above/below it and send them out.
Also, a side question: if you know a good way to modify my current code into a nested loop that scans through the whole webpage for multiple keywords, that would be very helpful!
Sub subFindScrollIE()
    Dim boolFound As Boolean
    Dim ie As InternetExplorer
    Set ie = New InternetExplorer
    ie.Navigate "my URL HERE"
    strTemp = "KEYWORD1"
    Do Until ie.ReadyState = READYSTATE_COMPLETE
        'DoEvents
    Loop
    ie.Visible = True
    Set txt = ie.Document.body.createTextRange()
    boolFound = txt.findText(strTemp)
    txt.moveStart "character", -1
    txt.findText strTemp
    txt.Select
    txt.ScrollIntoView
    Set ie = Nothing
End Sub
You could continue with the approach you have started if you use regular expressions to locate the text you are after along with the surrounding text.
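For instance, here is a hypothetical sketch using the late-bound VBScript.RegExp object to capture a keyword together with some surrounding characters; the sample HTML string, the keyword, and the 60-character context window are all assumptions you would replace with your page's content (normally from ie.Document.body.innerHTML) and tune:
```vba
Sub FindKeywordContext()
    ' html would normally come from ie.Document.body.innerHTML
    Dim html As String
    html = "<tr><td>row above</td></tr><tr><td>KEYWORD1 data</td></tr><tr><td>row below</td></tr>"
    Dim re As Object, matches As Object, m As Object
    Set re = CreateObject("VBScript.RegExp")
    re.Global = True
    re.IgnoreCase = True
    ' capture up to 60 characters on either side of the keyword
    re.Pattern = ".{0,60}KEYWORD1.{0,60}"
    Set matches = re.Execute(html)
    For Each m In matches
        Debug.Print m.Value ' keyword plus surrounding context
    Next m
End Sub
```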
Personally, though, I would favor using HTML objects to find what you're after. Here is some example code that iterates through generic tables:
Sub subFindScrollIE()
    Dim strTemp() As Variant, output() As String, txt As String
    Dim tr As HTMLTableRow, r As Integer, i As Integer
    Dim tRows As IHTMLElementCollection
    Dim xlR As Byte, c As Byte
    Dim ie As InternetExplorerMedium
    Set ie = New InternetExplorerMedium
    ie.Visible = True
    ie.Navigate "E:\Dev\table.htm"
    strTemp = Array("abc", "mno", "vwx", "efg")
    Do Until (ie.ReadyState = 4 Or ie.ReadyState = 3): Loop
    Set tRows = ie.Document.body.getElementsByTagName("tr")
    xlR = 2
    ' loop through rows
    For r = 0 To tRows.Length - 1
        Set tr = tRows(r)
        ' loop through search text
        For i = 0 To UBound(strTemp)
            ' search row for string
            txt = LCase(tr.innerHTML)
            If (InStr(txt, LCase(strTemp(i))) > 0) Then
                ' search string found; split table data into array
                txt = tr.innerHTML
                txt = Replace(txt, "</td><td>", "~")
                txt = Replace(txt, "<td>", "")
                txt = Replace(txt, "</td>", "")
                output = Split(txt, "~")
                ' populate cells from array
                For c = 0 To UBound(output)
                    Sheet1.Cells(xlR, c + 2) = output(c)
                Next c
                xlR = xlR + 2
            End If
        Next i
    Next r
    ie.Quit
    Set ie = Nothing
End Sub

Collection was modified; enumeration operation may not execute. VB threading

Here is my code:
Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
    For Each kthread As Thread In _threads
        If kthread.Name = "123" Then
            _threads.Remove(kthread)
            kthread.Abort()
            killedthreads += 1 'an integer
        End If
    Next
End Sub
I added the killedthreads integer at the end as a check; VB executes the whole function fine, but at the last line it always throws the error given in the title.
I'm not sure why. If killedthreads += 1 is not there, the error moves to kthread.Abort().
I had the same problem in C# with a different app earlier this year.
Edit:
Public Sub KillThread(kThread As Thread)
    For i As Integer = (_threads.Count - 1) To 0 Step -1
        If _threads.Item(i).Name = kThread.Name Then
            _threads.Item(i).Abort()
            _threads.RemoveAt(i)
        End If
    Next
End Sub
I wrote this code as Eminem suggested. KillThread is called from a running thread when something is not right or when it has finished all its work. But my problem is that only the first thread that calls it gets aborted and removed from the list; the others seem to get stuck once the first thread is aborted.
I create threads using:
Public Sub multiThreader(int As Integer, link As String)
    Dim tCount As Integer = _threads.Count
    If tCount >= Form1.ListView1.Items.Count Then
    Else
        Dim dy As Integer = DateTime.Now.Day
        Dim mo As Integer = DateTime.Now.Month
        Dim fileNum As String = dy.ToString() + "-" + mo.ToString() + "_" + int.ToString
        botThread = New Thread(Sub() MainThread(fileNum, link, botThread, int.ToString()))
        botThread.IsBackground = True
        botThread.Name = String.Format("AutoBotThread{0}", fileNum)
        _threads.Add(botThread)
        botThread.Start()
    End If
End Sub
and _threads is declared publicly as Public _threads As New List(Of Thread).
MainThread is a Public Sub that runs functions, gets a return value, and calls KillThread under certain conditions.
The problem is that you remove an item from an enumeration before you have finished iterating through it.
It's like trying to iterate from 0 to list.Count when the count changes from one iteration to another. As Bjørn-Roger Kringsjå said, you should do something like this:
For i As Integer = (_threads.Count - 1) To 0 Step -1
    If _threads.Item(i).Name = "123" Then
        _threads.Item(i).Abort()
        _threads.RemoveAt(i)
        killedthreads += 1 'an integer
    End If
Next
By using Step -1 you make sure that an "Index was out of range" error will not occur, and that your operations execute on the right items in the right order.