Cuckoo Sandbox: API error after installation

I'm investigating the possibility of using Cuckoo Sandbox as a malware detonator in series with Cortex.
I've (seemingly) installed all of the dependencies, enabled reporting and Elasticsearch in the config files, and started the web server with the command below without issues.
sudo cuckoo web runserver [ip redacted]:[port]
I am able to connect to my web instance without errors on the browser side, but in stdout I get the following:
2018-07-06 05:32:19,152 [django.request] ERROR: Internal Server Error: /cuckoo/api/status
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/cuckoo/web/utils.py", line 55, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/views/decorators/http.py", line 45, in inner
return func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/cuckoo/web/controllers/cuckoo/api.py", line 45, in status
temp_file = Files.temp_put("")
File "/usr/local/lib/python2.7/dist-packages/cuckoo/common/files.py", line 97, in temp_put
prefix="upload_", dir=path or temppath()
File "/usr/lib/python2.7/tempfile.py", line 314, in mkstemp
return _mkstemp_inner(dir, prefix, suffix, flags)
File "/usr/lib/python2.7/tempfile.py", line 244, in _mkstemp_inner
fd = _os.open(file, flags, 0600)
OSError: [Errno 2] No such file or directory: '/tmp/cuckoo-tmp-root/upload_IUQt4r'
[06/Jul/2018 05:32:19] "POST /analysis/api/tasks/recent/ HTTP/1.1" 200 13
[06/Jul/2018 05:32:19] "GET /cuckoo/api/status HTTP/1.1" 500 12976
In addition to this error, I can neither upload a file nor submit a URL; both result in exactly the same traceback.
Does anyone with experience setting up Cuckoo have a hint? I'm not sure whether this is a dependency issue or a configuration issue after installation.
Thanks in advance!

Had the same problem. Mine was due to the fact that my virtual environment's root did not include the default "/tmp/" folder that Cuckoo tries to use as its default temp-file path in "files.py". Yours could be related to the directory structure under "~" changing when you sudo to run the server.
Either way, the fix was to update the "tmppath" setting in "cuckoo.conf" from blank to an explicit directory with no permission issues (e.g. "/tmp/").
Once I updated this, the error stopped and my Cuckoo API ran properly.
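For reference, a minimal sketch of the change (in my install the setting sits under the [cuckoo] section of cuckoo.conf; your layout may differ, and "/tmp" is just one example of a directory the server user can write to):
[cuckoo]
tmppath = /tmp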

Related

TensorFlow TFX pipeline on a Windows machine fails when trying to create a folder with a Linux-like folder naming structure

I am trying to run the simple TFX pipeline on a Windows 10 machine. I am using the code as given on the TensorFlow website (https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple). While trying to run the pipeline, it throws the error below. The folder name uses a mix of '\' and '/' while TFX is trying to create the folder (a small demonstration follows the traceback). I am not sure how to solve this issue, as it is happening inside TensorFlow's internal code.
ERROR:absl:Failed to make stateful working dir: pipelines\penguin-simple\CsvExampleGen.system\stateful_working_dir\2021-06-24T20:11:37.715669
Traceback (most recent call last):
File "G:\Anaconda3\lib\site-packages\tfx\orchestration\portable\outputs_utils.py", line 211, in get_stateful_working_directory
fileio.makedirs(stateful_working_dir)
File "G:\Anaconda3\lib\site-packages\tfx\dsl\io\fileio.py", line 83, in makedirs
_get_filesystem(path).makedirs(path)
File "G:\Anaconda3\lib\site-packages\tfx\dsl\io\plugins\tensorflow_gfile.py", line 76, in makedirs
tf.io.gfile.makedirs(path)
File "G:\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 483, in recursive_create_dir_v2
_pywrap_file_io.RecursivelyCreateDir(compat.path_to_bytes(path))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to create a directory: pipelines\penguin-simple\CsvExampleGen.system\stateful_working_dir/2021-06-24T20:11:37.715669; Invalid argument
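To illustrate where the mixed separators appear to come from (assuming PIPELINE_ROOT is built with os.path.join, as in the tutorial notebook):
import os
# On Windows, os.path.join uses the native '\' separator for the pipeline root...
PIPELINE_ROOT = os.path.join('pipelines', 'penguin-simple')
print(PIPELINE_ROOT)  # pipelines\penguin-simple
# ...while TFX internals append sub-paths with '/', producing the mixed path in the error.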

`tensorflow_io.bigquery` returns `Empty update [Op:IO>BigQueryReadSession]` error

I'm following this end-to-end example for the BigQuery TensorFlow reader on my Mac laptop, but when I run the following line
read_session = tensorflow_io_bigquery_client.read_session(...)
I get the following error
E0504 17:14:36.436042000 4592741888 ssl_utils.cc:463] load_file: {"created":"#1620173676.435988000","description":"Failed to load file","file":"external/com_github_grpc_grpc/src/core/lib/iomgr/load_file.cc","file_line":71,"filename":"/usr/share/grpc/roots.pem","referenced_errors":[{"created":"#1620173676.435986000","description":"No such file or directory","errno":2,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/load_file.cc","file_line":45,"os_error":"No such file or directory","syscall":"fopen"}]}
E0504 17:14:36.436075000 4592741888 ssl_security_connector.cc:420] Could not get default pem root certs.
E0504 17:14:36.436086000 4592741888 secure_channel_create.cc:132] Failed to create secure subchannel for secure name 'bigquerystorage.googleapis.com'
E0504 17:14:36.436098000 4592741888 secure_channel_create.cc:50] Failed to create channel args during subchannel creation.
E0504 17:14:36.436142000 4592741888 ssl_security_connector.cc:420] Could not get default pem root certs.
E0504 17:14:36.436152000 4592741888 secure_channel_create.cc:132] Failed to create secure subchannel for secure name 'bigquerystorage.googleapis.com'
E0504 17:14:36.436161000 4592741888 secure_channel_create.cc:50] Failed to create channel args during subchannel creation.
Traceback (most recent call last):
File "<stdin>", line 32, in <module>
File "/Users/someuser/.virtualenvs/py36-tf2/lib/python3.6/site-packages/tensorflow_io/bigquery/python/ops/bigquery_api.py", line 134, in read_session
row_restriction=row_restriction)
File "<string>", line 1093, in io_big_query_read_session
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnavailableError: Error reading from Cloud BigQuery: Empty update [Op:IO>BigQueryReadSession]
On the same machine if I create a container and install the same dependencies with the same versions, the example code runs with no problems. Any ideas what this error means, what may be causing it, and how to fix it?
Behind the scenes, the BigQuery client uses gRPC to stream the data.
On Windows and macOS, gRPC requires an environment variable to find the root of trust for SSL; in the container you built, that root certificate bundle is probably already present at the expected path.
You can install the certificate manually to fix the issue.
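A minimal sketch of the manual fix, assuming gRPC's standard GRPC_DEFAULT_SSL_ROOTS_FILE_PATH override and a CA bundle fetched from the gRPC repository (the destination path is just an example; placing the file at the /usr/share/grpc/roots.pem path from the error log should work too):
# fetch the root certificates, then tell gRPC where they are before starting Python
curl -o /usr/local/etc/roots.pem https://raw.githubusercontent.com/grpc/grpc/master/etc/roots.pem
export GRPC_DEFAULT_SSL_ROOTS_FILE_PATH=/usr/local/etc/roots.pem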

Deploy Scrapy project to remote Scrapyd service error

I tried to deploy a test Scrapy project to the remote Scrapyd server. I got the following error message on the client side.
curl http://IP:6800/addversion.json -d project=test_project -d spider=quotes
{"status": "error", "message": "'version'", "node_name": "serverName"}
Error message on the server side:
2018-11-13T12:22:22+0000 [_GenericHTTPChannelProtocol,0,IP Address] Unhandled Error
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/twisted/web/http.py", line 2190, in allContentReceived
req.requestReceived(command, path, version)
File "/usr/lib64/python2.7/site-packages/twisted/web/http.py", line 917, in requestReceived
self.process()
File "/usr/lib64/python2.7/site-packages/twisted/web/server.py", line 199, in process
self.render(resrc)
File "/usr/lib64/python2.7/site-packages/twisted/web/server.py", line 259, in render
body = resrc.render(self)
--- <exception caught here> ---
File "/usr/lib/python2.7/site-packages/scrapyd/webservice.py", line 21, in render
return JsonResource.render(self, txrequest).encode('utf-8')
File "/usr/lib/python2.7/site-packages/scrapyd/utils.py", line 20, in render
r = resource.Resource.render(self, txrequest)
File "/usr/lib64/python2.7/site-packages/twisted/web/resource.py", line 250, in render
return m(request)
File "/usr/lib/python2.7/site-packages/scrapyd/webservice.py", line 83, in render_POST
version = txrequest.args[b'version'][0].decode('utf-8')
exceptions.KeyError: 'version'
I checked both the client and the server; the Scrapy version is 1.5.1 and the Python version is 2.7.* on both.
The sample curl command you showed is not supposed to work. According to the documentation, you'll also need:
A version argument, whose absence is the likely cause of the issue you're seeing now (hence the KeyError: 'version' in the traceback).
An egg argument containing the actual project code; otherwise scrapyd has nothing to receive when you pass in only the project name and spider name. A sketch of a complete call follows.
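For reference, a complete call per the scrapyd documentation looks like this (the version string and egg filename are placeholders; the egg itself is typically built with scrapyd-deploy or python setup.py bdist_egg):
curl http://IP:6800/addversion.json -F project=test_project -F version=r1 -F egg=@test_project.egg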

UnsupportedMethod error when deploying a Scrapy project on EC2

I was trying to deploy my Scrapy code to AWS using scrapyd, but I ran into an issue I could not figure out; it has been two days. I saw similar problems on the web, but did not find any helpful solution to fix this issue.
2016-02-15 08:41:20+0000 [HTTPChannel,1,xx.xxx.x.xxx] Unhandled Error
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/web/http.py", line 1730, in allContentReceived
req.requestReceived(command, path, version)
File "/usr/lib/python2.7/dist-packages/twisted/web/http.py", line 826, in requestReceived
self.process()
File "/usr/lib/python2.7/dist-packages/twisted/web/server.py", line 189, in process
self.render(resrc)
File "/usr/lib/python2.7/dist-packages/twisted/web/server.py", line 238, in render
body = resrc.render(self)
--- <exception caught here> ---
File "/usr/local/lib/python2.7/dist-packages/scrapyd/webservice.py", line 17, in render
return JsonResource.render(self, txrequest)
File "/usr/local/lib/python2.7/dist-packages/scrapyd/utils.py", line 19, in render
r = resource.Resource.render(self, txrequest)
File "/usr/lib/python2.7/dist-packages/twisted/web/resource.py", line 249, in render
raise UnsupportedMethod(allowedMethods)
twisted.web.error.UnsupportedMethod: ['HEAD', 'object', 'POST']
I have tried running the Scrapy code alone on both my MacBook and the EC2 server, and it works in both cases. It just fails when I use my MacBook to schedule a job on EC2.
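For reference, a sketch of the kind of scheduling call involved, per the scrapyd docs (project and spider names are placeholders); note that schedule.json only accepts POST, which matches the allowed-methods list in the traceback:
curl http://EC2-IP:6800/schedule.json -d project=myproject -d spider=myspider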
These are the steps I followed to set things up.

Upgrade Plone 3.3.6 to Plone 4.0.7 file error

I tried to migrate Plone 3.3.6 to a newer Plone 4.0.7 version (and then to 4.3.x), but I ran into multiple errors:
Full traceback
2013-10-07 13:51:33 INFO ProgressHandler Process started (1842 objects to go)
2013-10-07 13:51:33 ERROR plone.app.upgrade Upgrade aborted. Error:
Traceback (most recent call last):
File "/Users/iie/Projects/plone4.0/rwa/eggs/Plone-4.0.7-py2.6.egg/Products/CMFPlone/MigrationTool.py", line 175, in upgrade
step['step'].doStep(setup)
File "/Users/iie/Projects/plone4.0/rwa/eggs/Products.GenericSetup-1.6.3-py2.6.egg/Products/GenericSetup/upgrade.py", line 142, in doStep
self.handler(tool)
File "/Users/iie/Projects/plone4.0/rwa/eggs/plone.app.upgrade-1.0.7-py2.6.egg/plone/app/upgrade/v40/betas.py", line 117, in updateIconMetadata
obj = brain.getObject()
File "/Users/iie/Projects/plone4.0/rwa/eggs/Zope2-2.12.18-py2.6-macosx-10.7-x86_64.egg/Products/ZCatalog/CatalogBrains.py", line 92, in getObject
target = parent.restrictedTraverse(path[-1])
File "/Users/iie/Projects/plone4.0/rwa/eggs/Zope2-2.12.18-py2.6-macosx-10.7-x86_64.egg/OFS/Traversable.py", line 310, in restrictedTraverse
return self.unrestrictedTraverse(path, default, restricted=True)
File "/Users/iie/Projects/plone4.0/rwa/eggs/Zope2-2.12.18-py2.6-macosx-10.7-x86_64.egg/OFS/Traversable.py", line 278, in unrestrictedTraverse
raise e
AttributeError: pa_20120810.pdf
If I delete "pa_20120810.pdf", another file throws an error, and so on ...
I hope you understand me and that someone can help me.
Thanks
Something to try: before the migration, use collective.catalogcleanup to remove broken references from your catalog. It's easy to use: add it to your buildout, restart the site, and go to /@@collective-catalogcleanup?dry_run=false in your browser.
As collective.catalogcleanup's documentation states:
The goal is to get rid of outdated brains that could otherwise cause problems, for example during an upgrade to Plone 4.
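A minimal buildout sketch for adding it, assuming your Zope instance part is named [instance] (adjust to your buildout's actual part name):
[instance]
eggs +=
    collective.catalogcleanup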