Errors after "forever server.js" - towerjs

I'm using Tower.js for the first time and following the README.md instructions. When I try to start the server with the "forever server.js" command, this is the result:
$ forever server.js
info: socket.io started
Tower development server listening on port 3000
TypeError: Object Mac OS X 10.8.2 has no method 'match'
at Object.Tower.MiddlewareAgent [as handle] (…)/node_modules/tower/lib/tower-middleware/server/agent.js:13:21)
at next (…)/node_modules/tower/node_modules/connect/lib/proto.js:199:15)
at Object.handle (…)/app/config/server/bootstrap.coffee:23:14)
at next (…)/node_modules/tower/node_modules/connect/lib/proto.js:199:15)
at Object.methodOverride [as handle] (…)/node_modules/tower/node_modules/connect/lib/middleware/methodOverride.js:37:5)
at next (…)/node_modules/tower/node_modules/connect/lib/proto.js:199:15)
at multipart (…)/node_modules/tower/node_modules/connect/lib/middleware/multipart.js:64:37)
at module.exports (…)/node_modules/tower/node_modules/connect/lib/middleware/bodyParser.js:57:9)
at urlencoded (…)/node_modules/tower/node_modules/connect/lib/middleware/urlencoded.js:51:37)
at module.exports (…)/node_modules/tower/node_modules/connect/lib/middleware/bodyParser.js:55:7)
127.0.0.1 - - [Sat, 12 Jan 2013 18:10:11 GMT] "GET / HTTP/1.1" 500 1718 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/536.26.17 (KHTML, like Gecko) Version/6.0.2 Safari/536.26.17"
Thank you.

An issue for this was already opened two days ago: https://github.com/viatropos/tower/issues/375.
It seems fixed, and pull request #376 was merged five hours ago.
The next build should be good.
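In the meantime, a minimal sketch of how to pick up the fix once it is published, assuming tower is a regular dependency in the app's package.json (the fix landing on npm is an assumption, not confirmed by the issue thread):
# pull the latest published tower build into node_modules, then start again
npm update tower
forever server.js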

Related

Failed to use VS Code Remote-SSH, but using ssh directly works

Problem
I reinstalled my server's operating system. Before that, I could use Remote-SSH normally. Now I can't use Remote-SSH to connect to my server anymore, but I can still connect to the server with plain ssh.
I suppose it manages to log into the system, but something breaks after that.
The error log is below:
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-77-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Tue 14 Sep 2021 09:56:58 PM CST
System load: 0.07 Processes: 117
Usage of /: 6.5% of 59.00GB Users logged in: 1
Memory usage: 10% IPv4 address for eth0: 10.0.12.2
Swap usage: 0%
* Super-optimized for small spaces - read how we shrank the memory
footprint of MicroK8s to make it the smallest full K8s around.
https://ubuntu.com/blog/microk8s-memory-optimisation
ready: 6425958cce28
Linux 5.4.0-77-generic #86-Ubuntu SMP Thu Jun 17 02:35:03 UTC 2021
6425958cce28: running
bash: line 1: _exitcode: command not found
bash: line 2: syntax error near unexpected token `elif'
bash: line 2: ` elif [[ $ALLOW_CLIENT_DOWNLOAD == "1" ]]; then'
-sh: 4: function: not found
-sh: 69: [[: not found
-sh: 90: [[: not found
-sh: 155: Syntax error: "(" unexpected (expecting "then")
Transferred: sent 17180, received 4016 bytes, in 0.5 seconds
Bytes per second: sent 35433.6, received 8283.0
local-server-1> ssh child died, shutting down
[21:56:58.587] Failed to parse remote port from server output
[21:56:58.588] Resolver error: Error:
at Function.Create (/Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:64659)
at Object.t.handleInstallOutput (/Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:63302)
at Object.e [as tryInstallWithLocalServer] (/Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:387573)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async /Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:294473
at async Object.t.withShowDetailsEvent (/Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:406463)
at async /Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:386112
at async E (/Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:382710)
at async Object.t.resolveWithLocalServer (/Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:385728)
at async Object.t.resolve (/Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:1:295870)
at async /Users/luther/.vscode/extensions/ms-vscode-remote.remote-ssh-0.65.7/out/extension.js:127:110656
[21:56:58.592] ------
Tried
I tried deleting the known_hosts file on the host and reinstalling the Remote-SSH extension, but neither worked.
I am pretty new to Remote-SSH, so a more detailed solution would be appreciated.
Thanks :)
I downgraded Remote-SSH, then changed my default shell to zsh and upgraded Remote-SSH again. It started installing the '.vscode-server' directory again and magically it worked.
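The "-sh: 69: [[: not found" lines in the log above suggest the remote login shell was a plain POSIX sh that couldn't parse the server install script, which would fit the shell change fixing it. A minimal sketch of the server-side steps, assuming zsh is installed on the server and the server files live in the default ~/.vscode-server location:
# run on the remote server
chsh -s "$(which zsh)"    # change the login shell to zsh (log out and back in afterwards)
rm -rf ~/.vscode-server   # remove the old server files so Remote-SSH reinstalls them on the next connect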

Selenium::WebDriver::Error Chrome Crashed on M1 chip

I've spent several days trying to solve this issue I'm encountering with the following code:
caps = Selenium::WebDriver::Remote::Capabilities.chrome(
  "chromeOptions" => {
    :args => ['--user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36 LegalMonsterNoBlock"']
  }
)
driver = Selenium::WebDriver.for :remote, url: selenium_host, :desired_capabilities => caps
driver.get(url)
I'm trying to run a test that calls this method. The test starts fine: it opens Chrome and runs, but whenever I reach the part of my application that calls the method above, it fails with the following error:
Minitest::UnexpectedError: Selenium::WebDriver::Error::UnknownError: unknown error: Chrome failed to start: crashed
(chrome not reachable)
(The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:25:53'
System info: host: '7a6aaccda364', ip: '172.17.0.2', os.name: 'Linux', os.arch: 'amd64', os.version: '4.19.121-linuxkit', java.version: '1.8.0_232'
Driver info: driver.version: unknown
remote stacktrace: #0 0x0040004b6479 <unknown>
My setup:
Macbook with Apple M1, running Big Sur 11.2.2
ruby version 2.7.2
ChromeDriver 89.0.4389.23 (for m1 chip)
Chrome version 89.0.4389.72 (Official Build) (arm64)
gem selenium-webdriver version 3.142.3
Running a docker selenium/standalone-chrome-debug:3.141.59-zinc
I have tried several things already:
Adding --headless, --no-sandbox options to args: args => ['--headless', '--no-sandbox' ...
Installing chromedriver and chrome via brew instead of downloading binary
Reinstalling chrome and chromedriver
Explicitly specifying paths to both chrome and chromedriver (Selenium::WebDriver::Chrome.path = '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome' and Selenium::WebDriver::Chrome.driver_path = "/path/to/chrome_driver_binary/chromedriver")
Is anyone else experiencing such issues?
It turned out to be my Docker image, which does not support the arm64 architecture, i.e. this step:
Running a docker selenium/standalone-chrome-debug:3.141.59-zinc.
There was no issue when I disabled the part of the tests that used the Docker container. I imagine this isn't possible for everyone, but let's hope there'll be a Selenium image that supports the arm64 architecture soon.
See Selenium issue here.
For me, using a Docker image compatible with the arm64 architecture solved the issue. It was easy to set up after understanding the problem, using an image from https://hub.docker.com/u/seleniarm
I just ran the command
docker run -d -p 4444:4444 seleniarm/standalone-chromium
This set it up where I needed it and worked just fine.
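A quick sanity check I would do, assuming the container maps port 4444 on localhost as in the command above and that selenium_host in the Ruby snippet points at http://localhost:4444/wd/hub:
docker ps                                    # the seleniarm/standalone-chromium container should be listed as running
curl -s http://localhost:4444/wd/hub/status  # the grid status endpoint should report it is ready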
I had the same error when I was trying to run in Docker using the command below.
docker run -d -p 4444:4444 selenium/standalone-chrome
My solution was mentioned in the GitHub answer for the issue posted above; the link to that answer, so we can give proper credit, is:
https://github.com/SeleniumHQ/docker-selenium/issues/1076#issuecomment-788343926
Hope this sheds some light for others.
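In case the link goes stale: as I understand that comment, the fix is the same image swap shown in the answer above, i.e. running the arm64 seleniarm image instead of selenium/standalone-chrome:
docker run -d -p 4444:4444 seleniarm/standalone-chromium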

Chrome "Browser not supported" is shown when opening a URL using Selenium WebDriver, whereas the same URL works when opened directly in Chrome

I try to open an application using WebDriver code but get a "Browser not supported" message, whereas I can open the same URL directly.
I compared the browser version pages (about:version) of both windows (13 June 2019, 17:22):
Normal chrome window:
Google Chrome 74.0.3729.169 (Official Build) (64-bit) (cohort: Stable)
Revision 78e4f8db3ce38f6c26cf56eed7ae9b331fc67ada-refs/branch-heads/3729#{#1013}
OS Windows 10 OS Build 17134.706 JavaScript V8 7.4.288.28
Flash 32.0.0.207 C:\Users\xxx\AppData\Local\Google\Chrome\User
Data\PepperFlash\32.0.0.207\pepflashplayer.dll User Agent Mozilla/5.0
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/74.0.3729.169 Safari/537.36 Command Line "C:\Program Files
(x86)\Google\Chrome\Application\chrome.exe" --flag-switches-begin
--flag-switches-end Executable Path C:\Program Files (x86)\Google\Chrome\Application\chrome.exe Profile
Path C:\Users\xxxx\AppData\Local\Google\Chrome\User Data\Default
Variations d01ab0d3-ca7d8d80
Webdriver window:
Google Chrome 74.0.3729.169 (Official Build) (64-bit) (cohort: Stable)
Revision 78e4f8db3ce38f6c26cf56eed7ae9b331fc67ada-refs/branch-heads/3729#{#1013}
OS Windows 10 OS Build 17134.706 JavaScript V8 7.4.288.28
Flash 24.0.0.189 internal-not-yet-present User Agent Mozilla/5.0
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/74.0.3729.169 Safari/537.36 Command Line "C:\Program Files
(x86)\Google\Chrome\Application\chrome.exe"
--disable-background-networking --disable-client-side-phishing-detection --disable-default-apps --disable-hang-monitor --disable-popup-blocking --disable-prompt-on-repost --disable-sync --disable-web-resources --enable-automation --enable-blink-features=ShadowDOMV0 --enable-logging --force-fieldtrials=SiteIsolationExtensions/Control --ignore-certificate-errors --load-extension="C:\Users\xxx\AppData\Local\Temp\scoped_dir21628_16208\internal"
--log-level=0 --no-first-run --password-store=basic --remote-debugging-port=0 --start-maximized --test-type=webdriver --use-mock-keychain --user-data-dir="C:\Users\xxx\AppData\Local\Temp\scoped_dir21628_14652" --flag-switches-begin --flag-switches-end data:, Executable Path C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
Profile
Path C:\Users\xxx\AppData\Local\Temp\scoped_dir21628_14652\Default
After Modification:
Google Chrome 74.0.3729.169 (Official Build) (64-bit) (cohort: Stable)
Revision 78e4f8db3ce38f6c26cf56eed7ae9b331fc67ada-refs/branch-heads/3729#{#1013}
OS Windows 10 OS Build 17134.706 JavaScript V8 7.4.288.28
Flash 32.0.0.207 C:\Users\xxx\AppData\Local\Google\Chrome\User
Data\PepperFlash\32.0.0.207\pepflashplayer.dll User Agent Mozilla/5.0
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/74.0.3729.169 Safari/537.36 Command Line "C:\Program Files
(x86)\Google\Chrome\Application\chrome.exe"
--disable-background-networking --disable-bundled-ppapi-flash --disable-client-side-phishing-detection --disable-default-apps --disable-hang-monitor --disable-popup-blocking --disable-prompt-on-repost --disable-sync --disable-web-resources --enable-automation --enable-blink-features=ShadowDOMV0 --enable-logging --force-fieldtrials=SiteIsolationExtensions/Control --ignore-certificate-errors --load-extension="C:\Users\xxx\AppData\Local\Temp\scoped_dir40376_20418\internal"
--log-level=0 --no-first-run --password-store=basic --ppapi-flash-path="C:\Users\xxx\AppData\Local\Google\Chrome\User Data\PepperFlash\32.0.0.207\pepflashplayer.dll"
--ppapi-flash-version=32.0.0.207 --remote-debugging-port=0 --start-maximized --test-type=webdriver --use-mock-keychain --user-data-dir="C:\Users\xxx\AppData\Local\Temp\scoped_dir40376_6930" --flag-switches-begin --flag-switches-end data:, Executable Path C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
Profile
Path C:\Users\xxx\AppData\Local\Temp\scoped_dir40376_6930\Default
After modifying the code I was getting the proper Flash version, but the issue still remains the same.
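For anyone comparing the command lines above: the difference between the "Webdriver window" and "After Modification" launches is these extra switches (paths copied from the question), which were evidently passed through the WebDriver Chrome options:
--disable-bundled-ppapi-flash
--ppapi-flash-path="C:\Users\xxx\AppData\Local\Google\Chrome\User Data\PepperFlash\32.0.0.207\pepflashplayer.dll"
--ppapi-flash-version=32.0.0.207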

Selenium 'dies' when visiting some websites using Behat / Mink

I'm trying to create a custom scenario in Behat / Mink using the JavaScript capabilities of Selenium, but I've hit a peculiar snag. I've stripped everything back to the bare bones to lay the problem out as simply as possible, but in summary: when calling visit() in Selenium, some websites run fine but others (including my own) return the error "Error communicating with the remote browser. It may have died." in Behat, which terminates the scenario.
In detail:
My behat.yml file looks like this:
default:
  paths:
    features: features
    bootstrap: %behat.paths.features%/bootstrap
  extensions:
    Behat\MinkExtension\Extension:
      base_url: http://www.foo.bar
      goutte: ~
      selenium2:
        browser: 'firefox'
In my FeatureContext.php file I have the following custom function:
public function iAmLoggedIn()
{
    $session = $this->getSession();
    $session->visit("http://www.foo.bar");
}
Now when I run the scenario in Behat that uses the custom function I get the following error:
PHP Fatal error: Uncaught exception 'WebDriver\Exception\UnknownError' with message 'Error communicating with the remote browser. It may have died.
Build info: version: '2.44.0', revision: '76d78cf', time: '2014-10-23 20:02:37'
Driver info: driver.version: EventFiringWebDriver' in /var/www/behat/vendor/instaclick/php-webdriver/lib/WebDriver/Exception.php:157
Stack trace:
#0 /var/www/behat/vendor/instaclick/php-webdriver/lib/WebDriver/AbstractWebDriver.php(140): WebDriver\Exception::factory(13, 'Error communica...')
#1 /var/www/behat/vendor/instaclick/php-webdriver/lib/WebDriver/Session.php(151): WebDriver\AbstractWebDriver->curl('DELETE', '')
#2 /var/www/behat/vendor/behat/mink-selenium2-driver/src/Behat/Mink/Driver/Selenium2Driver.php(292): WebDriver\Session->close()
#3 /var/www/behat/vendor/behat/mink/src/Behat/Mink/Session.php(70): Behat\Mink\Driver\Selenium2Dri in /var/www/behat/vendor/behat/mink-selenium2-driver/src/Behat/Mink/Driver/Selenium2Driver.php on line 294
But here's the oddity: thinking that this may have been an issue with my website (www.foo.bar), I tried a different website by editing the line in the function:
$session->visit("http://www.bbc.co.uk");
This time no error and the scenario continues as normal. Good old BBC. To make sure, I also tried Google:
$session->visit("http://www.google.com");
But this time I get exactly the same error: 'Error communicating with the remote browser. It may have died.'. Weird. So I tried a number of other websites; some work fine, others return this error. There doesn't seem to be any obvious similarity between the sites that kill Selenium and those that don't. So what is Selenium saying?
Output from a site that returns the 'It may have died' error:
11:41:41.428 INFO - Executing: [new session: Capabilities [{platform=ANY, browserVersion=8, browserName=firefox, deviceType=tablet, selenium-version=2.31.0, name=Behat test, browser=firefox, deviceOrientation=portrait, max-duration=300, version=8}]])
11:41:41.430 INFO - Creating a new session for Capabilities [{platform=ANY, browserVersion=8, browserName=firefox, deviceType=tablet, selenium-version=2.31.0, name=Behat test, browser=firefox, deviceOrientation=portrait, max-duration=300, version=8}]
11:41:44.024 INFO - Done: [new session: Capabilities [{platform=ANY, browserVersion=8, browserName=firefox, deviceType=tablet, selenium-version=2.31.0, name=Behat test, browser=firefox, deviceOrientation=portrait, max-duration=300, version=8}]]
11:41:44.031 INFO - Executing: [get: http://www.foo.bar//])
11:41:50.478 INFO - Executing: [delete all cookies])
11:41:50.494 INFO - Executing: [delete session: cee7cfa5-bc53-4804-a9a4-f6b52b0f48df])
Output from a site that does not return the error:
11:19:19.930 INFO - Executing: [new session: Capabilities [{platform=ANY, browserVersion=8, browserName=firefox, deviceType=tablet, selenium-version=2.31.0, name=Behat test, browser=firefox, deviceOrientation=portrait, max-duration=300, version=8}]])
11:19:19.936 INFO - Creating a new session for Capabilities [{platform=ANY, browserVersion=8, browserName=firefox, deviceType=tablet, selenium-version=2.31.0, name=Behat test, browser=firefox, deviceOrientation=portrait, max-duration=300, version=8}]
11:19:24.607 INFO - Done: [new session: Capabilities [{platform=ANY, browserVersion=8, browserName=firefox, deviceType=tablet, selenium-version=2.31.0, name=Behat test, browser=firefox, deviceOrientation=portrait, max-duration=300, version=8}]]
11:19:24.614 INFO - Executing: [get: http://www.bbc.co.uk/])
11:19:43.454 INFO - Done: [get: http://www.bbc.co.uk/]
11:19:43.463 INFO - Executing: [delete all cookies])
11:19:46.263 INFO - Done: [delete all cookies]
11:19:49.935 INFO - Executing: [delete all cookies])
11:19:49.955 INFO - Done: [delete all cookies]
11:19:50.389 INFO - Executing: [delete session: a092aa77-ad26-4f6f-8fc1-f290b688d7fa])
11:19:50.488 INFO - Done: [delete session: a092aa77-ad26-4f6f-8fc1-f290b688d7fa]
No clue there apart from the fact that Selenium completes the 'get' of bbc.co.uk but not of foo.bar. So what about the access logs for foo.bar? They seem normal:
10.179.?.? - - [06/Jan/2015:10:52:57 +0000] "GET / HTTP/1.1" 401 486 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0"
10.179.?.? - tester [06/Jan/2015:10:52:57 +0000] "GET / HTTP/1.1" 200 33141 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0"
10.179.?.? - tester [06/Jan/2015:10:53:00 +0000] "GET /css/page_specific_css/index.css HTTP/1.1" 200 10234 "http://www.foo.bar/" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0"
10.179..?.? - tester [06/Jan/2015:10:53:00 +0000] "GET /library/jquery-tools.min.js HTTP/1.1" 200 5920 "http://www.foo.bar/" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0"
10.179.?.? is the IP of the CI server, so it's communicating with the server fine and retrieving all the assets. I'm not sure if this is a Selenium issue or a Behat / Mink issue, but I'm at a loss. There doesn't seem to be any logical reason why some sites work and others don't. Any help would be greatly appreciated.
This issue occurs for me when my Selenium lib and browser version are not compatible. In this case it is best to update both your browser and Selenium lib to their latest versions.
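A minimal sketch of what that update might look like for this stack, assuming Composer manages the Mink/WebDriver packages and Firefox comes from the distro (the exact packages to bump are an assumption, not something the answer specifies):
composer update behat/mink-selenium2-driver instaclick/php-webdriver   # newer WebDriver bindings
sudo apt-get install --only-upgrade firefox                            # newer browser on the CI box
# then run a matching, newer selenium-server-standalone jar for the grid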

Scrapyd can't find the project name

I am getting an error when I try to run an existing scrapy project on scrapyd.
I have a working scrapy project (url_finder) and a working spider in that project used for test purpose (test_ip_spider_1x) that simply downloads whatismyip.com.
I successfully installed scrapyd (with apt-get) and now I would like to run the spider on scrapyd, so I execute:
curl http://localhost:6800/schedule.json -d project=url_finder -d spider=test_ip_spider_1x
This returns:
{"status": "error", "message": "'url_finder'"}
This seems to suggest that there is a problem with the project. However, when I execute: scrapy crawl test_ip_spider_1x
Everything runs fine.
When I check the scrapyd log in the web interface, this is what I get:
2014-04-01 11:40:22-0400 [HTTPChannel,0,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:40:21 +0000] "POST /schedule.json HTTP/1.1" 200 47 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
2014-04-01 11:40:58-0400 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:40:57 +0000] "GET / HTTP/1.1" 200 747 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36"
2014-04-01 11:41:01-0400 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:41:00 +0000] "GET /logs/ HTTP/1.1" 200 1203 "http://localhost:6800/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36"
2014-04-01 11:41:03-0400 [HTTPChannel,1,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:41:02 +0000] "GET /logs/scrapyd.log HTTP/1.1" 200 36938 "http://localhost:6800/logs/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36"
2014-04-01 11:42:02-0400 [HTTPChannel,2,127.0.0.1] Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/twisted/web/http.py", line 1730, in allContentReceived
req.requestReceived(command, path, version)
File "/usr/local/lib/python2.7/dist-packages/twisted/web/http.py", line 826, in requestReceived
self.process()
File "/usr/local/lib/python2.7/dist-packages/twisted/web/server.py", line 189, in process
self.render(resrc)
File "/usr/local/lib/python2.7/dist-packages/twisted/web/server.py", line 238, in render
body = resrc.render(self)
--- <exception caught here> ---
File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 18, in render
return JsonResource.render(self, txrequest)
File "/usr/local/lib/python2.7/dist-packages/scrapy/utils/txweb.py", line 10, in render
r = resource.Resource.render(self, txrequest)
File "/usr/local/lib/python2.7/dist-packages/twisted/web/resource.py", line 250, in render
return m(request)
File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 37, in render_POST
self.root.scheduler.schedule(project, spider, **args)
File "/usr/lib/pymodules/python2.7/scrapyd/scheduler.py", line 15, in schedule
q = self.queues[project]
exceptions.KeyError: 'url_finder'
2014-04-01 11:42:02-0400 [HTTPChannel,2,127.0.0.1] 127.0.0.1 - - [01/Apr/2014:15:42:01 +0000] "POST /schedule.json HTTP/1.1" 200 47 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"
Any ideas?
In order to run a project on scrapyd you must deploy it first. This wasn't well explained in the online documentation (especially for first-time users). Here is one solution that worked for me:
Install scrapyd-deploy: if you have Ubuntu or similar, you can run:
apt-get install scrapyd-deploy
In your scrapy project folder, edit scrapy.cfg and uncomment the line
url = http://localhost:6800/
This is your deploy target -- scrapy will deploy projects at this location.
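For reference, a sketch of what scrapy.cfg might look like after this step (the module name under [settings] is just what scrapy startproject would have generated for the project in the question):
[settings]
default = url_finder.settings

[deploy]
url = http://localhost:6800/
#project = url_finder   # optional; the -p flag used below names the project instead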
Next, check to make sure scrapyd can see the deploy target:
scrapyd-deploy -l
This should output something similar to:
default http://localhost:6800/
Next you can deploy the project (url_finder):
scrapyd-deploy default -p url_finder
And finally run the spider:
curl http://localhost:6800/schedule.json -d project=url_finder -d spider=test_ip_spider_1x
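If the deploy succeeded, the schedule call should now return an "ok" status with a job id instead of the KeyError, something along the lines of {"status": "ok", "jobid": "..."} (job id elided here).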