WMIC throwing "Description = Exception occurred" when trying to retrieve users in table format - wmic

I'm trying to extract a list of users (name and description fields) into a text file. This used to work fine until recently.
This is a Windows 8.1 64bit machine which is being used as server and has a few thousand user accounts.
The command I'm running is:
wmic USERACCOUNT GET Name,FullName
Recently it started throwing an error (with no other output before or after):
ERROR:
Description = Exception occurred.
It used to output the information in a table format which I used to pipe into a text file.
After searching the internet I found a suggestion to try and use the list format instead of the default table format, so I tried:
wmic USERACCOUNT GET Name,FullName /format:list
This seems to work but the output is all wrong in the list format.
Does anyone know why it suddenly could start throwing an error in the table format and how to get it working in the table format again?
UPDATE: When I use the /trace on switch with the table format, I get the following (at the end of a long list of successes):
SUCCESS: CoCreateInstance(CLSID_FreeThreadedDOMDocument, NULL, CLSCTX_INPROC_SERVER, IID_IXMLDOMDocument2, -)
Line: 206 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: IXMLDOMDocument::loadXML(-, -)
Line: 237 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: CoCreateInstance(CLSID_XSLTemplate, NULL, CLSCTX_SERVER, IID_IXSLTemplate, -)
Line: 3246 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: CoCreateInstance(CLSID_FreeThreadedDOMDocument, NULL, CLSCTX_SERVER,IID_IXMLDOMDocument2, -)
Line: 3269 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: IXSLDOMDocument2::put_async(VARIANT_FALSE)
Line: 3281 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: IXSLDOMDocument2::load(L"C:\Windows\system32\wbem\\texttable.xsl", -)
Line: 3296 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: IXSTemplate::putref_stylesheet(-)
Line: 3310 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: IXSTemplate::createProcessor(-)
Line: 3322 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
SUCCESS: IXSProcessor::put_input(-)
Line: 3359 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
FAIL: IXSProcessor::tranform(-)
Line: 3400 File: admin\wmi\wbem\tools\wmic\formatengine.cpp
ERROR:
Code = 0x80020009
Description = Exception occurred.
Facility = Dispatch
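One way to route around texttable.xsl (the transform that fails in the trace above) is to ask wmic for CSV output instead and format the table yourself; whether the CSV transform succeeds where the table one fails on this machine is an assumption. A minimal sketch that parses text shaped like wmic's CSV output (the sample rows below are fabricated for illustration):

```python
import csv
import io

# Hypothetical capture of:  wmic useraccount get Name,FullName /format:csv
# wmic prepends a Node column and may reorder the requested columns;
# real redirected wmic output is often UTF-16LE with a leading blank line,
# so decoding and a skip may be needed before parsing.
sample = """Node,FullName,Name
SERVER01,Alice Example,alice
SERVER01,Bob Example,bob
"""

users = list(csv.DictReader(io.StringIO(sample)))
for row in users:
    print(row["Name"], "-", row["FullName"])
```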

Related

How to resolve an error when importing a CSV file into PostgreSQL

I want to import a CSV file into PostgreSQL.
Shown below is the syntax used to import a CSV file into an existing table, salary_supervisor, in PostgreSQL:
COPY salary_supervisor
FROM 'X:\XX\XXX\XXX\XXX\XXX\XXX\supervisor_salaries.csv' /* File path */
WITH (FORMAT CSV, HEADER);
However, it produces the error shown below:
ERROR: could not open file "X:\XX\XXX\XXX\XXX\XXX\XXX\supervisor_salaries.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Given the hint in that error, I then used the following syntax to import the CSV file:
\copy (Select * From salary_supervisor)
FROM 'X:\XX\XXX\XXX\XXX\XXX\XXX\supervisor_salaries.csv' /* File path */
WITH (FORMAT CSV, HEADER);
However, that also produced a syntax error:
ERROR: syntax error at or near "\"
LINE 1: \copy (Select * From salary_supervisor)
^
SQL state: 42601
Character: 1
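The HINT points at the fix: a server-side COPY makes the PostgreSQL server process open the file, which fails with "Permission denied" when the file is only readable by the client user, while psql's \copy streams the file from the client side. Note that \copy is a psql meta-command, not SQL, so it only works inside psql (pasting it into a GUI query tool yields exactly the syntax error shown), it must be written on one line, and for an import it takes the bare table name rather than a (SELECT ...) wrapper. A sketch of the client-side form, keeping the asker's placeholder path:

```
\copy salary_supervisor FROM 'X:\XX\XXX\XXX\XXX\XXX\XXX\supervisor_salaries.csv' WITH (FORMAT CSV, HEADER)
```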

PostgreSQL 12: passing initdb command-line arguments via Python 3 code, how to proceed?

Using Python 3.8.1, I unzipped the PostgreSQL 12 distribution into a specific folder under the C: root; the code then creates two other folders, data and log. In the code I write the following line to set up a first database:
subP = subprocess.run([link_initdb, "-U NewDataBase", "-A AlphaBeta", "-E utf8", "-D C:\\Database\\pgsql\\data"], stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding="utf-8")
But I get the following error message:
subP.stderr: initdb: error: could not create directory " C:": Invalid argument
I don't know what I'm doing wrong!
Each element of the argument list is passed to initdb as a single argv token, so "-D C:\\Database\\pgsql\\data" arrives as one argument whose value begins with a space (hence the " C:" in the error). There must be no space between the option and its value within a single list element.
To solve it, pass each option and its value as separate list items, or use the --option=value form:
subP = subprocess.run(
    [
        link_initdb,
        "-U", "postgres",
        "-A", "password",
        "-E", "utf8",
        "--pgdata=C:\\Database\\pgsql\\data",
    ],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    encoding="utf-8",
)
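The underlying argv behavior can be demonstrated portably with a small probe that echoes its own arguments (the probe script is fabricated for illustration):

```python
import subprocess
import sys

# Each element of the argument list becomes exactly one argv token:
# "-U postgres" (one string) reaches the program as a single argument
# containing a space, while "-U", "postgres" arrive as two tokens.
echo_argv = "import sys; print(sys.argv[1:])"
result = subprocess.run(
    [sys.executable, "-c", echo_argv, "-U postgres", "-U", "postgres"],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # → ['-U postgres', '-U', 'postgres']
```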

Upload job fails on the same file that was uploaded successfully before

I'm running a regular upload job to load CSV into BigQuery. The job runs every hour. According to the recent failure log, it says:
Error: [REASON] invalid [MESSAGE] Invalid argument: service.geotab.com [LOCATION] File: 0 / Offset:268436098 / Line:218637 / Field:2
Error: [REASON] invalid [MESSAGE] Too many errors encountered. Limit is: 0. [LOCATION]
I went to line 218638 (the original csv has a header line, so I assume 218638 is the actual failed line; let me know if I'm wrong) but it seems all right. I checked the corresponding table in BigQuery, and it has that line too, which means I actually uploaded this line successfully before.
Then why does it cause failures recently?
project id: red-road-574
Job ID: Job_Upload-7EDCB180-2A2E-492B-9143-BEFFB36E5BB5
This indicates that there was a problem with the data in your file, where it didn't match the schema.
The error message says it occurred at File: 0 / Offset:268436098 / Line:218637 / Field:2. This means the first file (it looks like you just had one), and then the chunk of the file starting at 268436098 bytes from the beginning of the file, then the 218637th line from that file offset.
The reason for the offset portion is that bigquery processes large files in parallel in multiple workers. Each file worker starts at an offset from the beginning of the file. The offset that we include is the offset that the worker started from.
From the rest of the error message, it looks like the string service.geotab.com showed up in the second field, but the second field was a number, and service.geotab.com isn't a valid number. Perhaps there was a stray newline?
You can see what the lines looked like around the error by doing:
cat <yourfile> | tail -c +268436098 | tail -n +218636 | head -3
This will print out three lines... the one before the error (since I used -n +218636 instead of +218637), the one that had the error, and the next line as well.
Note that if this is just one line in the file that has a problem, you may be able to work around the issue by specifying maxBadRecords.
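The byte-then-line slicing above is easiest to sanity-check on a toy file (contents fabricated):

```shell
# five 5-byte lines; byte 6 is the first byte of the "bbbb" line
printf 'aaaa\nbbbb\ncccc\ndddd\neeee\n' > demo.txt
# start at byte offset 6, then print three lines starting at line 2 of that stream
tail -c +6 demo.txt | tail -n +2 | head -3    # prints cccc, dddd, eeee
```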

File parsing using Hive query issue

I have a complex text file to parse and load for analysis.
I started off with a simple Hive query to parse a text file and load it as a table in HDFS.
I am using beeswax to run this query.
name_area.txt
arun:salem
anand:vnr
Cheeli:guntur
Hive Query
CREATE TABLE test(
name STRING,
area STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES ("input.regex" = "^(.*):(.*)$","output.format.string" = "%1$s %2$s")
LOCATION '/user/name_area.txt';
The file is copied to HDFS.
When I execute the query, I get the following exception:
NoReverseMatch at /beeswax/execute/6
Reverse for 'execute_parameterized_query' with arguments '(6,)' and keyword arguments '{}' not found.
Request Method: POST
Request URL: http://192.168.58.128:8000/beeswax/execute/6
Django Version: 1.2.3
Exception Type: NoReverseMatch
Exception Value:
Reverse for 'execute_parameterized_query' with arguments '(6,)' and keyword arguments '{}' not found.
Exception Location: /usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/urlresolvers.py in reverse, line 297
Python Executable: /usr/bin/python2.6
Python Version: 2.6.6
Python Path: ['', '/usr/lib/hue/build/env/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg', ... (dozens of Hue/Django egg and app paths) ..., '/usr/lib/hue/apps/proxy/src/proxy/../../gen-py']
Server time: Fri, 24 Apr 2015 07:37:07 -0700
Appreciate your help on this.
Your CREATE TABLE statement does not seem right. The LOCATION should be the directory containing the input file, not the file name itself.
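A sketch of a corrected statement under that reading (the EXTERNAL keyword and the directory path are assumptions for illustration; the regex, serde properties, and table layout are from the question):

```sql
CREATE EXTERNAL TABLE test(
  name STRING,
  area STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES ("input.regex" = "^(.*):(.*)$",
                      "output.format.string" = "%1$s %2$s")
LOCATION '/user/';  -- directory holding name_area.txt, not the file itself
```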

Unexpected error while loading data

I am getting an "Unexpected" error. I tried a few times, and I still could not load the data. Is there any other way to load data?
gs://log_data/r_mini_raw_20120510.txt.gz to 567402616005:myv.may10c
Errors:
Unexpected. Please try again.
Job ID: job_4bde60f1c13743ddabd3be2de9d6b511
Start Time: 1:48pm, 12 May 2012
End Time: 1:51pm, 12 May 2012
Destination Table: 567402616005:myvserv.may10c
Source URI: gs://log_data/r_mini_raw_20120510.txt.gz
Delimiter: ^
Max Bad Records: 30000
Schema:
zoneid: STRING
creativeid: STRING
ip: STRING
UPDATE:
I am using the file that can be found here:
http://saraswaticlasses.net/bad.csv.zip
bq load -F '^' --max_bad_record=30000 mycompany.abc bad.csv id:STRING,ceid:STRING,ip:STRING,cb:STRING,country:STRING,telco_name:STRING,date_time:STRING,secondary:STRING,mn:STRING,sf:STRING,uuid:STRING,ua:STRING,brand:STRING,model:STRING,os:STRING,osversion:STRING,sh:STRING,sw:STRING,proxy:STRING,ah:STRING,callback:STRING
I am getting an error "BigQuery error in load operation: Unexpected. Please try again."
The same file works from Ubuntu while it does not work from CentOS 5.4 (Final)
Does the OS encoding need to be checked?
The file you uploaded has an unterminated quote. Can you delete that line and try again? I've filed an internal bigquery bug to be able to handle this case more gracefully.
$grep '"' bad.csv
3000^0^1.202.218.8^2f1f1491^CN^others^2012-05-02 20:35:00^^^^^"Mozilla/5.0^generic web browser^^^^^^^^
When I run a load from my workstation (Ubuntu), I get a warning about the line in question. Note that if you were using a larger file, you would not see this warning, instead you'd just get a failure.
$bq show --format=prettyjson -j job_e1d8636e225a4d5f81becf84019e7484
...
"status": {
"errors": [
{
"location": "Line:29057 / Field:12",
"message": "Missing close double quote (\") character: field starts with: <Mozilla/>",
"reason": "invalid"
}
]
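For large files where grep output is unwieldy, the same unterminated-quote check can be scripted by counting double quotes per line (a sketch that assumes one record per physical line; the sample rows are fabricated):

```python
def find_unbalanced_quote_lines(lines):
    """Return 1-based indices of lines with an odd number of double-quote characters."""
    return [i for i, line in enumerate(lines, start=1) if line.count('"') % 2 == 1]

rows = [
    '3000^0^1.202.218.8^2f1f1491^CN^others',        # no quotes: fine
    '3000^0^1.202.218.8^2f1f1491^CN^"Mozilla/5.0',  # unterminated quote
    '3000^0^1.202.218.8^2f1f1491^CN^"quoted"',      # balanced pair: fine
]
print(find_unbalanced_quote_lines(rows))  # → [2]
```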
My suspicion is that you have rows or fields in your input data that exceed the 64 KB limit. Perhaps re-check the formatting of your data, check that it is gzipped properly, and if all else fails, try importing uncompressed data. (One possibility is that the entire compressed file is being interpreted as a single row/field that exceeds the aforementioned limit.)
To answer your original question, there are a few other ways to import data: you could upload directly from your local machine using the command-line tool or the web UI, or you could use the raw API. However, all of these mechanisms (including the Google Storage import that you used) funnel through the same CSV parser, so it's possible that they'll all fail in the same way.