On Ubuntu,
[guy@guy-laptop tmp]$ sudo /etc/init.d/tor status
tor is running
[guy@guy-laptop tmp]$ sudo /etc/init.d/polipo start
Starting polipo: /usr/bin/polipo already running -- doing nothing
polipo.
also:
$ python -c 'import urllib; print urllib.getproxies()'
{'ftp': 'ftp://127.0.0.1:8118/', 'all': 'socks://127.0.0.1:8118/',
'http': 'http://127.0.0.1:8118/', 'https': 'https://127.0.0.1:8118/',
'no': 'localhost,127.0.0.0/8,*.local'}
When running scrapy I get:
ERROR: Error downloading https://registration.example.com/login.fcc:
[Failure instance: Traceback (failure with no frames): : [('SSL routines', 'SSL23_READ', 'ssl handshake failure')]]
Meanwhile, Firefox manages to fetch the same page correctly through the proxy.
Any help will be appreciated,
Thanks,
Guy
Have you tried running the script through proxychains? To install on Ubuntu:
:~$ sudo apt-get install proxychains
Then configure the /etc/proxychains.conf file to work with Tor (SOCKS4/5).
# defaults set to "tor"
socks4 127.0.0.1 9050
Then you can run anything through Tor:
:~$ proxychains scriptwhatever.py target
Once you know Tor is working correctly, I recommend enabling quiet mode in the proxychains.conf file.
# Quiet mode (no output from library).
quiet_mode
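If you want to confirm that traffic really leaves through Tor before pointing the script at it, one quick check (just a sketch; check.torproject.org is Tor's own test endpoint) is:
proxychains curl https://check.torproject.org/api/ip
# should report something like {"IsTor":true, ...} when the circuit is working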
Related
I'm working on legacy code that requires an outdated Firefox to run functional tests. After downloading Firefox 41 and unpacking it to /opt, I can launch it successfully from the console via
$ /opt/firefox/firefox
The problem is that I cannot run it from IntelliJ IDEA 2022.2.3. I added the proper VM options to the test run config:
-Dwebdriver.firefox.bin=/opt/firefox/firefox
-Dgeb.driver=firefox
However, it throws an exception:
org.openqa.selenium.firefox.NotConnectedException: Unable to connect to host 127.0.0.1 on port 7055 after 45000 ms. Firefox console output:
XPCOMGlueLoad error for file /opt/firefox/libxul.so:
libdbus-glib-1.so.2: cannot open shared object file: No such file or directory
Couldn't load XPCOM.
at org.openqa.selenium.firefox.internal.NewProfileExtensionConnection.start(NewProfileExtensionConnection.java:122)
at org.openqa.selenium.firefox.FirefoxDriver.startClient(FirefoxDriver.java:276)
at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:116)
at org.openqa.selenium.firefox.FirefoxDriver.<init>(FirefoxDriver.java:221)
I already tried:
sudo apt install --reinstall libdbus-glib-1-2 #reinstalling
sudo apt install libdbus-glib-1-2:i386 #installing 386 version
sudo /sbin/ldconfig -v #handling shared libs according to https://itsfoss.com/solve-open-shared-object-file-quick-tip/
Unfortunately, nothing works.
How can I handle this? A similar config seems to work on a similar desktop running the older Ubuntu 20.04 LTS.
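A quick way to see which shared libraries the bundled Firefox still cannot resolve (just a diagnostic sketch using the standard ldd tool, assuming the /opt/firefox path from above):
ldd /opt/firefox/libxul.so | grep "not found"
# lists every dependency of libxul.so that the dynamic loader cannot find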
I started learning Ansible and want to use the ansible-galaxy search nginx command, but I'm getting:
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/api': <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)>
I tried ansible-galaxy --ignore-certs search nginx and ansible-galaxy -c search nginx, but now I'm getting ansible-galaxy: error: unrecognized arguments: --ignore-certs for both.
OS:
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
Ansible version:
ansible 2.9.5
config file = /home/maciej/projects/priv/ansible_nauka/packt_course/ansible.cfg
configured module search path = ['/home/maciej/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/maciej/.local/lib/python3.6/site-packages/ansible
executable location = /home/maciej/.local/bin/ansible
python version = 3.6.9 (default, Jul 17 2020, 12:50:27) [GCC 8.4.0]
I had the same issue, but on Mac OS X.
The underlying problem is that your Python environment is not finding/making use of the default root certificates that are installed on your OS. These root certs are required to connect securely (via TLS) with Ansible Galaxy.
For Mac OS X I was able to solve this based on this answer:
How to make Python use CA certificates from Mac OS TrustStore?
i.e. by running the script to install the certs, shipped with the installation:
cd /Applications/Python\ 3.7/
./Install\ Certificates.command
For Ubuntu / Debian:
Update: As pointed out by Maciej in the accepted answer, certs can be regenerated and added to the environment:
sudo update-ca-certificates --fresh
export SSL_CERT_DIR=/etc/ssl/certs
P.S.: I would not suggest using --ignore-certs; it skips certificate verification for the TLS connection, making it insecure (allowing man-in-the-middle attacks).
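To double-check which certificate paths the Python running Ansible actually picks up after the fix, a small sketch with the standard ssl module:
python3 -c "import ssl; print(ssl.get_default_verify_paths())"
# with SSL_CERT_DIR exported, the capath entry should point at /etc/ssl/certs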
Worked for me:
ansible-galaxy search --ignore-certs postgresql
I came back to this issue... life is the best motivator. What helped me is:
sudo update-ca-certificates --fresh
export SSL_CERT_DIR=/etc/ssl/certs
For RHEL/CentOS
You may want to check the crypto policy; if the policy is set to FUTURE, temporarily set it to DEFAULT:
sudo update-crypto-policies --set=DEFAULT
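If you want to see which policy is currently active before switching, the same tool can report it:
sudo update-crypto-policies --show
# prints the current policy name, e.g. DEFAULT or FUTURE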
bahrathkumaraju@Bahrathkumarajus-MacBook-Pro vault_ansible % ansible-galaxy collection install community.hashi_vault --ignore-certs
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://galaxy.ansible.com/download/community-hashi_vault-3.0.0.tar.gz to /Users/bahrathkumaraju/.ansible/tmp/ansible-local-91443c5vh69v3/tmp76qmz32a/community-hashi_vault-3.0.0-635b3qde
Installing 'community.hashi_vault:3.0.0' to '/Users/bahrathkumaraju/.ansible/collections/ansible_collections/community/hashi_vault'
community.hashi_vault:3.0.0 was installed successfully
bahrathkumaraju@Bahrathkumarajus-MacBook-Pro vault_ansible %
In case someone else is looking at this: the arguments are order-dependent. On RHEL 8 with a cntlm proxy:
declare -x https_proxy='127.0.0.1:3128'
declare -x http_proxy='127.0.0.1:3128'
# this works through a proxy
ansible-galaxy collection install ovirt.ovirt --ignore-certs
# this does not
ansible-galaxy --ignore-certs collection install ovirt.ovirt
# and this does not
ansible-galaxy collection --ignore-certs install ovirt.ovirt
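In other words, --ignore-certs has to come after the subcommand it belongs to. As a sanity check (assuming your ansible-galaxy version documents the flag there), the subcommand's own help should list it:
ansible-galaxy collection install --help | grep ignore-certs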
I used brew to install polipo through the macOS terminal. It seems to install successfully, but I cannot find the config file to edit it. Can anyone help me figure out the reason?
The config file will not be created automatically. You need to get the sample config file. Run this command in Terminal:
curl -o ~/.polipo https://raw.githubusercontent.com/jech/polipo/master/config.sample
and for forbidden URLs:
curl -o ~/.polipo-forbidden https://raw.githubusercontent.com/jech/polipo/master/forbidden.sample
Then restart polipo to ensure that it will use the config file:
launchctl unload /usr/local/opt/polipo/homebrew.mxcl.polipo.plist
launchctl load /usr/local/opt/polipo/homebrew.mxcl.polipo.plist
If it produces a Service is disabled error, try this command to restart polipo:
brew services restart polipo
Now open this address in your browser:
http://127.0.0.1:8123/polipo/config
You should see this line at the top:
configFile /Users/YourUserName/.polipo Configuration file.
If so, you need to modify ~/.polipo to configure your polipo instance.
There is another way, which is not recommended. You can make your config file at /usr/local/etc/polipo/config and then create a soft link to /etc/polipo/config with these commands:
mkdir /usr/local/etc/polipo/
curl -o /usr/local/etc/polipo/config https://raw.githubusercontent.com/jech/polipo/master/config.sample
sudo ln -sfv /usr/local/etc/polipo/config /etc/polipo/config
Then restart polipo and ensure that your config file location is correct. You can modify the config file at /usr/local/etc/polipo/config.
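If you prefer the terminal to the browser, the same config page can be fetched with curl (just a convenience sketch):
curl -s http://127.0.0.1:8123/polipo/config | grep configFile
# shows which configuration file the running polipo instance actually loaded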
I also used brew to install polipo on macOS and hit the same problem.
In fact, you need to create the config file yourself. Its path is ~/.polipo.
After you start the polipo service (brew services start polipo),
open the link: http://127.0.0.1:8123/polipo/config
This is my config:
socksParentProxy = "127.0.0.1:1086"
socksProxyType = socks5
proxyAddress = "::0" # both IPv4 and IPv6
proxyPort = 8123
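To verify that polipo really relays traffic through the SOCKS parent, a quick test from the terminal (a sketch; swap in any site reachable through your parent proxy):
curl -x http://127.0.0.1:8123 -I https://www.example.com
# a normal HTTP status line back means polipo is accepting connections and tunnelling them upstream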
If polipo does not work:
I wanted to install polipo too, but even with the configuration files provided here, I was getting:
Error: polipo has been disabled because it is not supported upstream!
(MacBook Pro, M1 chip. My SOCKS proxy was generated with ssh -D 8000 -C -N myuser@statichost because of a static IP requirement.)
So I found out that you can also use an npm package to convert a SOCKS proxy to an HTTP proxy: https://www.npmjs.com/package/http-proxy-to-socks
# install hpts:
npm install -g http-proxy-to-socks
# launch http proxy:
hpts -s 127.0.0.1:8000 -p 8001
# here my socks5 proxy is at 127.0.0.1:8000 and the http proxy is now on port 8001
npm does not support SOCKS5 proxies, for example, so I used hpts to get an HTTP proxy. Afterwards, I told npm to use that proxy with:
npm config set proxy http://127.0.0.1:8001
npm config set https-proxy http://127.0.0.1:8001
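To confirm npm really uses the new HTTP proxy, you can read the settings back (a usage sketch under the same port assumptions as above):
npm config get proxy
npm config get https-proxy
# both should print http://127.0.0.1:8001, and installs should now show up in the hpts console output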
I am trying to update my GitLab installation from 7.7.2.
When I run the following command, nothing downloads.
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
And I get this error:
  0* Unknown SSL protocol error in connection to packages.gitlab.com:443
  0     0    0     0    0     0      0      0 --:--:--  0:02:00 --:--:--     0
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to packages.gitlab.com:443
curl is unable to connect to packagecloud.io over TLS when running:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/config_file.list?os=Ubuntu&dist=trusty&name=git.curuba2.fr&source=script
This is usually due to one of two things:
1.) Missing CA root certificates (make sure the ca-certificates package is installed)
2.) An old version of libssl. Try upgrading libssl on your system to a more recent version
My Ubuntu Trusty install is up to date, I have ca-certificates installed, and I also ran update-ca-certificates.
No idea what's wrong. I need to migrate my server. I installed GitLab properly on the new one, but I'm failing to update the old one...
[EDIT]
I also tried with -k with no luck...
I ran into the same problem trying to install the runner via a non-HTTPS proxy.
I tried using -x [proxy] --insecure in the command, but it still failed.
I decided to look at the script itself and realised the issue is with the curl calls inside the script.
I updated the calls I could find in a local copy of script.deb.sh to include -x [proxy] --insecure, then executed that with sudo ./script.deb.sh and it worked.
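For reference, that edit can be done in one pass with sed on the local copy (a rough sketch; http://your.proxy:3128 is only a placeholder for your actual proxy address):
sed -i 's|curl |curl -x http://your.proxy:3128 --insecure |g' script.deb.sh
sudo ./script.deb.sh
# prepends the proxy and --insecure flags to every curl invocation in the script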
That's more a workaround than an answer.
I finally downgraded my future server to 7.7.2, restored my backup there, and upgraded back to 7.12.0.
Here are the commands I ran on the future server:
sudo gitlab-ctl stop unicorn
sudo gitlab-ctl stop sidekiq
wget https://downloads-packages.s3.amazonaws.com/ubuntu-14.04/gitlab_7.7.2-omnibus.5.4.2.ci-1_amd64.deb
sudo dpkg -r gitlab-ce
sudo dpkg -i git*.deb
sudo gitlab-ctl reconfigure
cd /var/opt/gitlab/backups/ # This is where backups should be located
sudo gitlab-rake gitlab:backup:restore BACKUP=1435537802
sudo gitlab-ctl start unicorn
sudo gitlab-ctl start sidekiq
sudo gitlab-ctl status
sudo apt-get update
sudo apt-get install gitlab-ce
I am trying to set up OpenStack on Ubuntu 12.04 using devstack. Now, the error I am getting is:
Setting up rabbitmq-server (2.7.1-0ubuntu4) ...
Starting rabbitmq-server: FAILED - check /var/log/rabbitmq/startup_{log, _err}
rabbitmq-server.
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
dpkg: error processing rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
++ err_trap
++ local r=100
++ set +o xtrace
stack.sh failed
Any idea why I am getting this error?
I had this issue twice, when either the hostname or the IP address in the hosts file didn't match.
Therefore, check that you provide the correct IP address and hostname in the /etc/hosts file.
Run sudo cat /etc/hostname to see your hostname
Output:
yoursite
Run sudo nano /etc/hosts
File contains:
127.0.0.1 yoursite
As you can see from cat /etc/hostname, the hostname is the same as in /etc/hosts.
Run sudo rabbitmq-server start to start the rabbitmq-server
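A quick way to confirm the name resolves the way RabbitMQ will see it (a small sanity check using the standard getent tool):
getent hosts $(hostname -s)
# should print 127.0.0.1 (or your real IP) followed by the hostname; no output means the hosts entry is still wrong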
Try deleting the folder /var/lib/rabbitmq and re-running ./stack.sh
If that doesn't work either, run the following after stack.sh fails:
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/log/rabbitmq
service rabbitmq-server restart
and check the status of rabbitmq using "rabbitmqctl status"
A similar thing happened to me. RabbitMQ depends on being able to resolve a hostname; run this:
echo "127.0.0.1 $(hostname -s)" | sudo tee -a /etc/hosts
This approach works for me.
First, edit the hosts file:
sudo vim /etc/hosts
and set
127.0.0.1 <hostname>
Then enable the management plugin (opening the firewall for its port if needed) and restart the server:
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
In a clean environment this will not happen. You must have run devstack several times, and one of the runs failed without being cleaned up.
Run ps -ef | grep rabbitmq and kill all rabbitmq processes; then it should be fine to run ./stack.sh.
It is highly recommended to run ./unstack.sh && ./clean.sh before ./stack.sh.
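For example, the leftover processes can usually be cleared like this (a sketch; pkill -f matches on the full command line, so double-check with ps first):
ps -ef | grep [r]abbitmq
sudo pkill -f rabbitmq
# then re-run ./unstack.sh && ./clean.sh && ./stack.sh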
Just to be sure, take a look at your local network interfaces:
ip addr
If there's no lo interface, you should enable it:
ifconfig lo up
Then restart the server again and see if it works now:
systemctl start rabbitmq-server
I had the same problem even though my /etc/hosts and DNS were OK. I suspect that the SysV init script was started too early, when the network was not ready yet. I rewrote the startup script as a systemd unit on CentOS 7.8, and it seems to work well now.
[Unit]
Description=RabbitMQ
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
RuntimeDirectory=rabbitmq
PrivateTmp=true
Restart=on-failure
RestartSec=10
WorkingDirectory=/opt/data/rabbitmq/
User=rabbitmq
Group=rabbitmq
ExecStart=/opt/app/rabbitmq/default/sbin/rabbitmq-server
ExecStop=/opt/app/rabbitmq/default/sbin/rabbitmqctl stop
ExecStop=/bin/sh -c "while ps -p $MAINPID >/dev/null 2>&1; do sleep 1; done"
StandardOutput=journal
StandardError=inherit
[Install]
WantedBy=multi-user.target
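To actually use a unit like this, it would typically be saved as /etc/systemd/system/rabbitmq-server.service (the name is whatever you call the file) and enabled with standard systemd commands:
sudo systemctl daemon-reload
sudo systemctl enable --now rabbitmq-server
sudo systemctl status rabbitmq-server
# enable --now starts the service immediately and on every boot, after network-online.target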