Mounting 500k files in Google Drive on Colab won't work - google-colaboratory

Unfortunately, I created lots of files via Colab in my Google Drive. Now it seems it's impossible to mount it on Colab anymore.
I guess the process gets killed because of a timeout.
Any suggestions?
Thanks
TIMEOUT: <pexpect.popen_spawn.PopenSpawn object at 0x7ff3dbe17390>
searcher: searcher_re:
0: re.compile('google.colab.drive MOUNTED')
1: re.compile('root#e84b0df49b86-09bc9916abe94fe5875728ad91a75e71: ')
2: re.compile('(Go to this URL in a browser: https://.*)$')
3: re.compile('Drive File Stream encountered a problem and has stopped')
4: re.compile('drive EXITED')
5: re.compile('Authorization failed')
6: re.compile('The domain policy has disabled Drive File Stream')
<pexpect.popen_spawn.PopenSpawn object at 0x7ff3dbe17390>
searcher: searcher_re:
0: re.compile('google.colab.drive MOUNTED')
1: re.compile('root#e84b0df49b86-09bc9916abe94fe5875728ad91a75e71: ')
2: re.compile('(Go to this URL in a browser: https://.*)$')
3: re.compile('Drive File Stream encountered a problem and has stopped')
4: re.compile('drive EXITED')
5: re.compile('Authorization failed')
6: re.compile('The domain policy has disabled Drive File Stream')
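If the failure really is the mount call timing out while Drive File Stream indexes that many files, one thing worth trying is giving the mount more time. This is only a sketch: the timeout_ms argument is assumed to be supported by the google.colab client in your runtime, and the value here is an arbitrary example.
from google.colab import drive

# Give Drive File Stream more time to come up before pexpect gives up.
# timeout_ms is assumed to be available in the installed google.colab client;
# 600000 ms (10 minutes) is an arbitrary example value.
drive.mount('/content/drive', timeout_ms=600000)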

Related

SFTPOperator not able to authenticate with a host that requires both password and public key authentication

Airflow version: 2.0.0
When I use the sftp command to manually connect to the host from any airflow worker everything works fine. Here is the error log from when I try to use the operator which under the hood uses the paramiko library to transfer files:
{ssh.py:202} WARNING - No Host Key Verification. This wont protect against Man-In-The-Middle attacks
{transport.py:1819} INFO - Connected (version 2.0, client 1.91)
{transport.py:1819} INFO - Auth banner: b'MOMENTUM SYSTEMS - SSH Server\nAuthentication Methods Supported:\nPUBLICKEY, PASSWORD'
{transport.py:1819} INFO - Authentication continues...
{transport.py:1819} INFO - Disconnect (code 2): unexpected service request
{taskinstance.py:1396} ERROR - Authentication failed.
Traceback (most recent call last):
File "/home/centos/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1086, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/centos/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1260, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/centos/.local/lib/python3.7/site-packages/airflow/models/taskinstance.py", line 1300, in _execute_task
result = task_copy.execute(context=context)
File "/home/centos/airflow-dags/utils/operators/s3_to_sftp.py", line 76, in execute
sftp_client = ssh_hook.get_conn().open_sftp()
File "/home/centos/.local/lib/python3.7/site-packages/airflow/providers/ssh/hooks/ssh.py", line 225, in get_conn
client.connect(**connect_kwargs)
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/client.py", line 446, in connect
passphrase,
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/client.py", line 764, in _auth
raise saved_exception
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/client.py", line 751, in _auth
self._transport.auth_password(username, password)
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/transport.py", line 1509, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "/home/centos/.local/lib/python3.7/site-packages/paramiko/auth_handler.py", line 236, in wait_for_response
raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
The Airflow connection that I use has the password and no additional options in extra.
The answer provided to the linked question worked for my use case:
Multi-factor authentication (password and key) with Paramiko
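For reference, a minimal sketch of what password-plus-key authentication can look like with Paramiko's Transport API. The host, port, username, password, and key path are placeholders, and it assumes the server accepts partial authentication in this order.
import paramiko

# Hypothetical connection details; replace with the real ones.
host, port = "sftp.example.com", 22
username, password = "user", "secret"
key = paramiko.RSAKey.from_private_key_file("/path/to/id_rsa")

transport = paramiko.Transport((host, port))
transport.start_client()

# First factor: public key. A server that requires both factors typically
# reports partial success here and keeps the session open.
transport.auth_publickey(username, key)

# Second factor: password.
transport.auth_password(username, password)

sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir("."))
transport.close()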

fail to create a connection with nessus server

I am trying to connect to the Nessus server with the command below in Python, but it fails with an error message. Can you tell me what the cause might be? I have checked my network connection and it is fine.
requests.post('https://164.99.175.30:8834/' + '/session', data={'username': 'admin', 'password': 'micro#123'}, verify=False)
Error message:
Traceback (most recent call last):
File "nessus.py", line 425, in <module>
login()
File "nessus.py", line 111, in login
res = requests.post(url + '/session',data={'username':username,'password':password},verify=verify)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 119, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='localhost', port=8834): Max retries exceeded with url: /session (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f46f2d6d410>: Failed to establish a new connection: [Errno 111] Connection refused',))
The Nessus API is deprecated as of version 7.x; this is the best source I could find.
EDIT: I have found a better source directly from Tenable.
What has been removed from Nessus 7:
There is a restriction in scan API capabilities.
The ability to manage scans via the API and CLI has been removed in v7. All Nessus Pro scanning operations must be done through the user interface.
So the current capability of the Nessus API is as follows:
The ability to run scans or reports and to create new objects has been removed.
The read features remain: you can still pull scan data, so GET /scans/<scan id> works again, which helps with some integration processes.
https://community.tenable.com/s/article/The-differences-between-Nessus-6-and-Nessus-7
This applies only to Nessus Pro versions.
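As an illustration of what the read side still allows, here is a hedged sketch of pulling scan data with requests. The base URL, credentials, and scan id are placeholders, and it assumes the token-based POST /session login and the X-Cookie header used by the Nessus REST API in the 6.x/7.x line.
import requests

# Placeholder connection details.
base = 'https://164.99.175.30:8834'
creds = {'username': 'admin', 'password': '...'}

# Log in and grab the session token (verify=False only because the
# scanner typically uses a self-signed certificate).
resp = requests.post(base + '/session', data=creds, verify=False)
resp.raise_for_status()
token = resp.json()['token']

# Read-only call: fetch details of an existing scan by id (42 is hypothetical).
headers = {'X-Cookie': 'token=' + token}
scan = requests.get(base + '/scans/42', headers=headers, verify=False)
print(scan.json())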

ncclient: connecting to a NETCONF server

I want to use the Python library ncclient 0.6.6 with Python 2.7.15 to connect to a NETCONF server (netopeer2) and read out the running config.
I tried to follow the example from the manual, running this code in the console:
from ncclient import manager

with manager.connect(host="*the IP address*", port=*the port*, timeout=None, username="*user*", password="*pwd*") as m:
    c = m.get_config(source='running').data_xml
    with open("%s.xml" % host, 'w') as f:
        f.write(c)
As written in the manual, I tried to disable public-key authentication by setting allow_agent and look_for_keys to False. Unfortunately, this does not work properly, because I get the error message:
File "<stdin>", line 1, in <module>
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/manager.py", line 177, in connect
return connect_ssh(*args, **kwds)
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/manager.py", line 143, in connect_ssh
session.connect(*args, **kwds)
File "/home/sisc/.local/lib/python2.7/site-packages/ncclient/transport/ssh.py", line 481, in connect
raise SSHUnknownHostError(known_hosts_lookup, fingerprint)
ncclient.transport.errors.SSHUnknownHostError: Unknown host key [e3:8d:35:a9:43:f9:3c:8a:f4:d3:88:5b:a9:36:93:59] for [[192.168.56.2]:1831]
I do not get why it still complains about the unknown host key, even though I explicitly disabled public-key authentication.
The netopeer NETCONF server is definitely running, since I get a "Hello" message as soon as I SSH into it from the terminal.
Did I miss something?
m = manager.connect(host="172.17.0.2", port=830, username="netconf", password="netconf", hostkey_verify=False)
did the trick: hostkey_verify has to be False. Note that allow_agent and look_for_keys only control how the client authenticates itself; the SSHUnknownHostError comes from verification of the server's host key, which is exactly what hostkey_verify=False turns off.
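Putting that together with the original snippet, a minimal sketch (host, port, and credentials are the placeholders from the answer above, and the output file name is arbitrary):
from ncclient import manager

# hostkey_verify=False skips the known-hosts check that raised
# SSHUnknownHostError; address and credentials are placeholders.
with manager.connect(host="172.17.0.2", port=830,
                     username="netconf", password="netconf",
                     hostkey_verify=False) as m:
    config = m.get_config(source='running').data_xml
    with open("running.xml", 'w') as f:
        f.write(config)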

Synchronise and read Gmail offline, using isync OR offlineimap

My goal is to sync my emails from a Gmail account and index them so I can search and read them within Emacs. The latter is not yet relevant, as I cannot get the emails to sync to my laptop.
I am running Mavericks and so working in the Mac Terminal.
I have followed the SO accepted answer and the other answer in the same thread, trying to use offlineimap, as well as a second method in this (more promising) tutorial on using isync (and so mbsync). Both ways end up using mu and its Emacs interface, mu4e.
The certificates are not being read/interpreted correctly. I do not know why, as I do not understand the error messages. Here is the one from offlineimap:
OfflineIMAP 6.5.7
Licensed under the GNU GPL v2 or any later version (with an OpenSSL exception)
Account sync Gmail:
*** Processing account Gmail
Establishing connection to imap.gmail.com:993
PLAIN authentication failed: [ALERT] Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure)
LOGIN authentication failed: [ALERT] Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure)
ERROR: All authentication types failed:
PLAIN: [ALERT] Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure)
LOGIN: [ALERT] Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure)
*** Finished account 'Gmail' in 0:01
ERROR: Exceptions occurred during the run!
ERROR: All authentication types failed:
PLAIN: [ALERT] Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure)
LOGIN: [ALERT] Please log in via your web browser: https://support.google.com/mail/accounts/answer/78754 (Failure)
Traceback:
  File "/usr/local/Cellar/offline-imap/6.5.7/libexec/offlineimap/accounts.py", line 263, in syncrunner
    self.__sync()
  File "/usr/local/Cellar/offline-imap/6.5.7/libexec/offlineimap/accounts.py", line 326, in __sync
    remoterepos.getfolders()
  File "/usr/local/Cellar/offline-imap/6.5.7/libexec/offlineimap/repository/IMAP.py", line 351, in getfolders
    imapobj = self.imapserver.acquireconnection()
  File "/usr/local/Cellar/offline-imap/6.5.7/libexec/offlineimap/imapserver.py", line 451, in acquireconnection
    self.__authn_helper(imapobj)
  File "/usr/local/Cellar/offline-imap/6.5.7/libexec/offlineimap/imapserver.py", line 366, in __authn_helper
    "failed:\n\t%s"% msg, OfflineImapError.ERROR.REPO)
Here is the one from mbsync:
C: 0/3 B: 0/2 M: +0/0 *0/0 #0/0 S: +0/0 *0/0 #0/0
Error while loading certificate file '/usr/local/etc/openssl/certs/Equifax.crt': error:00000000:lib(0):func(0):reason(0)
C: 3/3 B: 0/2 M: +0/0 *0/0 #0/0 S: +0/0 *0/0 #0/0
I have installed everything with Homebrew and am using the folders as per the tutorials. The problem is coming from the certificates, but I don't know what exactly is wrong. I have the setting within Gmail to allow IMAP and have also allowed connections from less secure apps.
How might I deal with the certificates differently?
For the offlineimap error, Google is complaining that you aren't using OAuth2. I got past the same offlineimap issue following the explanation here: https://github.com/OfflineIMAP/offlineimap/issues/228
You need to configure your .offlineimaprc to use OAuth2 instead of specifying a username/password. Here are the template and instructions on how to generate the tokens:
https://github.com/OfflineIMAP/offlineimap/blob/master/offlineimap.conf#L764
Here are the important settings:
auth_mechanisms = GSSAPI, CRAM-MD5, XOAUTH2, PLAIN, LOGIN
oauth2_client_secret = ...
oauth2_client_id = ...
oauth2_refresh_token = ...
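In context, those settings go in the remote repository section of ~/.offlineimaprc, roughly like the sketch below; the repository name and account address are placeholders, and the client id, secret, and refresh token are the values you generate following the instructions linked above.
[Repository Gmail-Remote]
type = Gmail
remoteuser = you@gmail.com
auth_mechanisms = GSSAPI, CRAM-MD5, XOAUTH2, PLAIN, LOGIN
oauth2_client_id = <your client id>
oauth2_client_secret = <your client secret>
oauth2_refresh_token = <your refresh token>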

Ssh client.py not working, showing connection error

My config file:
Host server
User new_user
HostName 10.0.1.193
Port 55555
LocalForward 3000 10.0.1.193:6000
IdentityFile ~/.ssh/server
Client.py
import xmlrpclib
s = xmlrpclib.ServerProxy('http://localhost:3000')
print s.pow(2,3) # Returns 2**3 = 8
print s.add(2,3) # Returns 5
print s.div(5,2) # Returns 5//2 = 2
# Print list of available methods
print s.system.listMethods()
Server.py
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler

# Restrict to a particular path.
class RequestHandler(SimpleXMLRPCRequestHandler):
    rpc_paths = ('/RPC2',)

# Create server
server = SimpleXMLRPCServer(("localhost", 6000),
                            requestHandler=RequestHandler)
server.register_introspection_functions()

# Register pow() function; this will use the value of
# pow.__name__ as the name, which is just 'pow'.
server.register_function(pow)

# Register a function under a different name
def adder_function(x, y):
    return x + y
server.register_function(adder_function, 'add')

# Register an instance; all the methods of the instance are
# published as XML-RPC methods (in this case, just 'div').
class MyFuncs:
    def div(self, x, y):
        return x // y
server.register_instance(MyFuncs())

# Run the server's main loop
server.serve_forever()
My server.py is running fine, but when I run my client.py, it gives the following error:
Traceback (most recent call last):
File "client.py", line 4, in <module>
print s.pow(2,3) # Returns 2**3 = 8
File "/usr/lib/python2.7/xmlrpclib.py", line 1224, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.7/xmlrpclib.py", line 1578, in __request
verbose=self.__verbose
File "/usr/lib/python2.7/xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/lib/python2.7/xmlrpclib.py", line 1292, in single_request
self.send_content(h, request_body)
File "/usr/lib/python2.7/xmlrpclib.py", line 1439, in send_content
connection.endheaders(request_body)
File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 776, in send
self.connect()
File "/usr/lib/python2.7/httplib.py", line 757, in connect
self.timeout, self.source_address)
File "/usr/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused
I have checked that my ssh is working and I can ssh into the remote server with the given configuration, i.e.
ssh server
works fine. Can anyone explain what might be going wrong?
Your server runs and perhaps it does not complain, but this does not mean it "runs correctly"; more pointedly, it doesn't mean the server is in the working state that the client expects.
The above is somewhat cryptic for a reason: something unknown has gone wrong, and even though you don't know yet what's broken, you want to start testing things you know should work and verify that they are in fact working. This is a useful debugging skill even if the error is meaningless to you.
In this case, the client error message is "connection refused", meaning "refused [at the server]".
Try this:
on your "client" PC in a Terminal/DOS window, run:
telnet [your server ip] [your server port]
You should expect the same error: a connection refused. Perhaps the server is not actually opening the port, or perhaps it opened the port but you cannot reach it from another host because of a firewall on the server.
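If telnet isn't available, a quick equivalent check can be done from Python; the host and port below are placeholders for whatever address the client actually uses (e.g. the forwarded localhost:3000 or the server's own host and port).
import socket

# Replace with the address/port you want to probe.
host, port = 'localhost', 3000

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((host, port))
    print('port is open')
except socket.error as e:
    print('connection failed: %s' % e)
finally:
    s.close()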
Also, running both client and server code on the same host can sometimes reveal more clues (it should work, but if it doesn't, then there may be more than one problem).