Cuckoo Sandbox error "You're not running the Cuckoo Agent as Administrator" during dynamic analysis

I'm doing dynamic analysis with Cuckoo Sandbox, but I've hit a problem. I'm submitting our malware files to Cuckoo, but some files don't produce any report. An example from Cuckoo's logs is below. Please help!
"2023-02-14 20:50:37,792 [modules.auxiliary.dumptls] WARNING: You're not running the Cuckoo Agent as Administrator. Doing so will improve your analysis results!\n",
"Traceback (most recent call last):\n",
" File \"C:/tmpqcushl/analyzer.py\", line 808, in <module>\n",
" success = analyzer.run()\n",
" File \"C:/tmpqcushl/analyzer.py\", line 657, in run\n",
" pids = self.package.start(self.target)\n",
" File \"C:\\tmpqcushl\\modules\\packages\\exe.py\", line 23, in start\n",
" return self.execute(path, args=shlex.split(args))\n",
" File \"C:\\tmpqcushl\\lib\\common\\abstracts.py\", line 166, in execute\n",
" \"Unable to execute the initial process, analysis aborted.\"\n",
"CuckooPackageError: Unable to execute the initial process, analysis aborted.\n",
"\n",

Debugging: "Message: no such element: Unable to locate element:"

I am learning Python, so please bear with me.
I adapted LinkedIn-Easy-Apply-Bot from: https://github.com/voidbydefault/LinkedIn-Easy-Apply-Bot
The bot works perfectly fine on my test account, but when I change the email ID/password to my real account (with everything else being the same), I start getting these errors:
Traceback (most recent call last):
  File "/home/me/Documents/LinkedIn-Easy-Apply-Bot/linkedineasyapply.py", line 124, in apply_jobs
    job_results = self.browser.find_element_by_class_name("jobs-search-results")
  File "/home/me/PycharmProjects/Better-LinkedIn-EasyApply-Bot/venv/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 766, in find_element_by_class_name
    return self.find_element(by=By.CLASS_NAME, value=name)
  File "/home/me/PycharmProjects/Better-LinkedIn-EasyApply-Bot/venv/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 1251, in find_element
    return self.execute(Command.FIND_ELEMENT, {
  File "/home/me/PycharmProjects/Better-LinkedIn-EasyApply-Bot/venv/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 430, in execute
    self.error_handler.check_response(response)
  File "/home/me/PycharmProjects/Better-LinkedIn-EasyApply-Bot/venv/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 247, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".jobs-search-results"}
  (Session info: chrome=102.0.5005.61)
I have tried deleting chromedriver to ensure version conflicts do not exist, I have tried adding time.sleep(5) at line 124 (above), and I also tried driver.implicitly_wait(10). Unfortunately, the error persists with my real account.
Note that there are no issues with my real account when it is used manually. I am able to apply to all sorts of jobs, whether EasyApply or otherwise, and the bot works 100% fine on my test account, so the elements the code is looking for do exist.
Please help in fixing the problem.
Thanks.
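For what it's worth, a common next step beyond time.sleep() and implicitly_wait() is an explicit wait, which polls for a specific condition instead of pausing for a fixed time. A minimal sketch, assuming the same .jobs-search-results class name from the traceback (the starting URL is hypothetical):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.linkedin.com/jobs/")  # hypothetical starting page

# Poll for up to 30 seconds for the results pane to appear,
# instead of sleeping for a fixed interval.
wait = WebDriverWait(driver, 30)
job_results = wait.until(
    EC.presence_of_element_located((By.CLASS_NAME, "jobs-search-results"))
)

If the element never appears on the real account, the page structure itself likely differs (a different layout variant or a checkpoint page, for example), which no amount of waiting will fix.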

How to find rejected files due to errors in the Apache Beam Java SDK

I have N files of the same type to process, and I will be giving a wildcard input pattern (C:\\users\\*\\*).
Now, how do I find the file name and the record that were rejected while uploading to BigQuery, in Java?
I guess BQ writes to the temp location path that you pass to your pipeline, not to the local disk [honestly not sure about this].
In my case, with Python, I used to pass a GCS bucket as the temp location, and when an error occurred, the command-line logs usually showed the name of the log file that contains the rejected records.
I then used the gsutil cp command to copy it to my local computer and read it.
BigQuery I/O (Java and Python SDKs) supports the dead-letter pattern: https://beam.apache.org/documentation/patterns/bigqueryio/.
Java
result
    .getFailedInsertsWithErr()
    .apply(
        MapElements.into(TypeDescriptors.strings())
            .via(
                x -> {
                    System.out.println(" The table was " + x.getTable());
                    System.out.println(" The row was " + x.getRow());
                    System.out.println(" The error was " + x.getError());
                    return "";
                }));
Python
errors = (
    result['FailedRows']
    | 'PrintErrors' >>
    beam.FlatMap(lambda err: print("Error Found {}".format(err))))
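To make the Python side of the dead-letter pattern concrete, here is a minimal self-contained sketch; the table name and input rows are hypothetical, and it assumes streaming inserts into an existing table. Note that BigQuery I/O reports the rejected row and error, not the source file name, so if you need the file, attach it to each element when reading the input:

import apache_beam as beam
from apache_beam.io.gcp.bigquery import WriteToBigQuery, BigQueryWriteFn
from apache_beam.io.gcp.bigquery_tools import RetryStrategy

with beam.Pipeline() as p:
    rows = p | beam.Create([{'id': 1, 'name': 'ok'}])  # hypothetical input

    result = rows | WriteToBigQuery(
        'my-project:my_dataset.my_table',  # hypothetical table
        method=WriteToBigQuery.Method.STREAMING_INSERTS,
        create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,  # table assumed to exist
        insert_retry_strategy=RetryStrategy.RETRY_NEVER,  # surface failures instead of retrying
    )

    # Each rejected element arrives as a (table, row) tuple on the
    # 'FailedRows' output (BigQueryWriteFn.FAILED_ROWS is that string).
    _ = (result[BigQueryWriteFn.FAILED_ROWS]
         | beam.Map(lambda failed: print("Rejected: {}".format(failed))))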

SQLite3 database is locked in Azure

I have a Flask server running on Azure App Service with SQLite3 as the database. I am unable to update the SQLite3 database; it keeps reporting that the database is locked.
2018-11-09T13:21:53.854367947Z [2018-11-09 13:21:53,835] ERROR in app: Exception on /borrow [POST]
2018-11-09T13:21:53.854407246Z Traceback (most recent call last):
2018-11-09T13:21:53.854413046Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
2018-11-09T13:21:53.854417846Z response = self.full_dispatch_request()
2018-11-09T13:21:53.854422246Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
2018-11-09T13:21:53.854427146Z rv = self.handle_user_exception(e)
2018-11-09T13:21:53.854431646Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception
2018-11-09T13:21:53.854436146Z reraise(exc_type, exc_value, tb)
2018-11-09T13:21:53.854440346Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
2018-11-09T13:21:53.854444746Z raise value
2018-11-09T13:21:53.854448846Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
2018-11-09T13:21:53.854453246Z rv = self.dispatch_request()
2018-11-09T13:21:53.854457546Z File "/home/site/wwwroot/antenv/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
2018-11-09T13:21:53.854461846Z return self.view_functions[rule.endpoint](**req.view_args)
2018-11-09T13:21:53.854466046Z File "/home/site/wwwroot/application.py", line 282, in borrow
2018-11-09T13:21:53.854480146Z cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?",(bookid,))
2018-11-09T13:21:53.854963942Z sqlite3.OperationalError: database is locked
Here is the route -
@app.route('/borrow', methods=["POST"])
def borrow():
    # import pdb; pdb.set_trace()
    body = request.get_json()
    user_id = body["userid"]
    bookid = body["bookid"]
    conn = sqlite3.connect("database.db")
    cursor = conn.cursor()
    date = datetime.now()
    expiry_date = date + timedelta(days=30)
    cursor.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?", (bookid,))
    # conn.commit()
    cursor.execute("INSERT INTO borrowed (issuedate,returndate,memberid,bookid) VALUES (?,?,?,?)", ("xxx", "xxx", user_id, bookid))
    conn.commit()
    cursor.close()
    conn.close()
    return json.dumps({"status": 200, "conn": "working with dates update"})
I tried checking the database integrity using a PRAGMA; there was no integrity loss, so I don't know what might be causing the error. Any help is appreciated :)
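Before blaming the platform, it may be worth ruling out simple lock contention in the route itself. A minimal reworked sketch of the handler, assuming the same hypothetical schema as above, that opens the connection with a busy timeout and commits both statements as one transaction:

import json
import sqlite3

from flask import Flask, request

app = Flask(__name__)

@app.route('/borrow', methods=["POST"])
def borrow():
    body = request.get_json()
    user_id = body["userid"]
    bookid = body["bookid"]
    # timeout=15 makes SQLite wait up to 15 seconds for a lock
    # instead of failing immediately with "database is locked".
    conn = sqlite3.connect("database.db", timeout=15)
    try:
        # "with conn:" wraps both statements in one transaction and
        # commits on success (or rolls back on an exception).
        with conn:
            conn.execute("UPDATE books SET stock = stock - 1 WHERE bookid = ?", (bookid,))
            conn.execute("INSERT INTO borrowed (issuedate,returndate,memberid,bookid) VALUES (?,?,?,?)",
                         ("xxx", "xxx", user_id, bookid))
    finally:
        conn.close()
    return json.dumps({"status": 200})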
I use Azure App Service on Docker on Linux and have the same issue. If you are using Azure App Service on Windows, the problem is different from mine.
The problem is that /home is mounted as a CIFS filesystem, which cannot handle SQLite3's locking.
My workaround is to copy the db.sqlite3 file to some directory other than /home, properly set the permissions and ownership of the db.sqlite3 file and its directory, and then let my project read/write it there. However, this workaround is pretty awkward, and I don't recommend it.
Presumably this solution is not safe for production workloads, but at least I got it working by executing the following command:
sqlite3 <database-file> 'PRAGMA journal_mode=wal;'
After running the above command, my database stored on an Azure File share works inside a container Web App.
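The same journal-mode switch can also be done from Python. A minimal sketch; since WAL mode is recorded in the database file itself, this only needs to run once, not on every connection:

import sqlite3

conn = sqlite3.connect("database.db")
conn.execute("PRAGMA journal_mode=wal;")  # persisted in the database file
conn.close()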
I got it working by setting the Azure mount options to the following configuration:
dir_mode=0777,file_mode=0777,uid=0,gid=0,mfsymlinks,nobrl,cache=strict
The key part of the solution is the nobrl flag (no byte-range locks).
Here is a StorageClass example for Kubernetes:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureclass
provisioner: kubernetes.io/azure-file
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - nobrl
  - cache=strict
parameters:
  skuName: Standard_LRS
This answer appears toward the top of a typical Google search for this issue, so I thought I'd add a couple of additional tips:
For those running JavaScript and using Sequelize as the interface to your SQLite DB, running
await sequelize.query('PRAGMA journal_mode=WAL;')
prior to creating your database will allow you to read/write the DB file in an Azure web app running under a Linux service plan. I have a separate script that creates the database via a call to sequelize.sync(). I'm storing the DB file in a separate directory under /home within the Linux container's file system. It seems to run fine, and my workload is expected to be very light. Note that you don't need to set the journal mode again when your app starts and connects to the database; that mode is recorded in the file itself (this wasn't obvious from the SQLite docs).

pyhs2/Hive: "No files matching path" even though the file exists

Using the hive or beeline client, I have no problem executing this statement:
hive -e "LOAD DATA LOCAL INPATH '/tmp/tmpBKe_Mc' INTO TABLE unit_test_hs2"
The data from the file is loaded successfully into hive.
However, when using pyhs2 from the same machine, the file is not found:
import pyhs2

conn_str = {'authMechanism': 'NOSASL', 'host': 'azus'}
conn = pyhs2.connect(**conn_str)  # pass the settings as keyword arguments
with conn.cursor() as cur:
    cur.execute("LOAD DATA LOCAL INPATH '/tmp/tmpBKe_Mc' INTO TABLE unit_test_hs2")
Throws exception:
Traceback (most recent call last):
  File "data_access/hs2.py", line 38, in write
    cur.execute("LOAD DATA LOCAL INPATH '%s' INTO TABLE %s" % (csv_file.name, table_name))
  File "/edge/1/anaconda/lib/python2.7/site-packages/pyhs2/cursor.py", line 63, in execute
    raise Pyhs2Exception(res.status.errorCode, res.status.errorMessage)
pyhs2.error.Pyhs2Exception: "Error while compiling statement: FAILED: SemanticException Line 1:23 Invalid path ''/tmp/tmpBKe_Mc'': No files matching path file:/tmp/tmpBKe_Mc"
I've seen similar questions posted about this problem, and the usual answer is that the query is running on a different server that doesn't have the local file '/tmp/tmpBKe_Mc' stored on it. However, if that is the case, why would running the command directly from the CLI work but using pyhs2 not work?
(Secondary question: how can I show which server is trying to handle the query? I've tried cur.execute("set"), which returns all configuration parameters but when grepping for "host" the returned parameters don't seem to contain a real hostname.)
Thanks!
This happens because pyhs2 submits the statement to HiveServer2, which looks for the file on the cluster side rather than on your client machine.
The solution is to save your source file to a suitable HDFS location instead of the local /tmp and load it from there.
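A minimal sketch of that fix, assuming the hdfs command-line tool is available on the client and that /user/me is a writable HDFS directory (the destination path is hypothetical):

import subprocess
import pyhs2

local_path = '/tmp/tmpBKe_Mc'
hdfs_path = '/user/me/tmpBKe_Mc'  # hypothetical HDFS destination

# Copy the file into HDFS so every node in the cluster can see it.
subprocess.check_call(['hdfs', 'dfs', '-put', '-f', local_path, hdfs_path])

conn = pyhs2.connect(host='azus', authMechanism='NOSASL')
with conn.cursor() as cur:
    # No LOCAL keyword: the path is now resolved against HDFS.
    cur.execute("LOAD DATA INPATH '%s' INTO TABLE unit_test_hs2" % hdfs_path)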

Python 2.7.3 process.Popen() failures

I'm currently using Python 2.7.3 on a 64-bit Windows 7 machine (also developing on 32-bit Linux, Ubuntu 12.04) and am having odd difficulty getting Python to communicate with the command prompt/terminal. My code looks like this:
import subprocess, logging, platform, ctypes

class someClass(object):
    def runTerminalCommand(self):
        try:
            terminalOrCmdLineCommand = self.toString()
            process = subprocess.Popen(terminalOrCmdLineCommand, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            output = process.stdout.readlines()
            if len(output) < 1:
                logging.exception("{} error : {}".format(self.cmd, process.stderr.readlines()))
                raise ConfigError("cmd issue : {}".format(self.cmd))
            return output
        except ValueError as err:
            raise err
        except Exception as e:
            logging.exception("Unexpected error : " + e.message)
            raise ConfigError("unexpected error")
Now, I know that the value returned by self.toString() will process correctly if I enter it manually, so I'm limiting this to an issue with how I'm sending it to the command line via subprocess. I've read the documentation and found that subprocess.check_call() doesn't return anything if it encounters an error, so I'm using .Popen().
The exception I get is:
[date & time] ERROR: Unexpected error :
Traceback (most recent call last):
  File "C:\[...]"
    raise ConfigError("cmd issue : {}".format(self.cmd))
ConfigError: cmd issue : [the list self.cmd printed out]
What I am TRYING to do is run a command and read its output back. But I seem to be unable to automate the call I want to run. :(
Any thoughts? (please let me know if there are any needed details I left out)
Much appreciated, in advance.
The docs say:
Use communicate() rather than .stdin.write, .stdout.read or .stderr.read to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.
You could use .communicate() as follows:

from subprocess import Popen, PIPE

p = Popen(cmd, stdout=PIPE, stderr=PIPE)
stdout_data, stderr_data = p.communicate()
if p.returncode != 0:
    raise RuntimeError("%r failed, status code %s stdout %r stderr %r" % (
        cmd, p.returncode, stdout_data, stderr_data))
output_lines = stdout_data.splitlines()  # you could also use `keepends=True`
See other methods to get subprocess output in Python.
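As a usage note: Python 2.7 also provides subprocess.check_output(), which raises CalledProcessError on a non-zero exit status and returns the command's stdout otherwise. A minimal sketch with a hypothetical command:

from subprocess import check_output, CalledProcessError, STDOUT

cmd = "dir"  # hypothetical command; shell=True is needed for shell built-ins on Windows
try:
    # stderr=STDOUT folds error output into the returned string.
    output = check_output(cmd, stderr=STDOUT, shell=True)
except CalledProcessError as e:
    print("command failed with status %d: %r" % (e.returncode, e.output))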