When using the osqueryi interactive shell for osquery I'm running into an issue where a WARNING is displayed even though logging is supposed to be disabled. Is this a bug?
Docs explain the following:
--logger_min_status
The minimum level for status log recording. Use the following values: INFO = 0, WARNING = 1, ERROR = 2. To disable all status messages use 3+.
--logger_min_stderr
The minimum level for status logs written to stderr. Use the following values: INFO = 0, WARNING = 1, ERROR = 2. To disable all status messages use 3+.
What I have: (results truncated for brevity)
# osqueryi --json --logger_min_status=3 --logger_min_stderr=3 'select * from block_devices'
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[{"block_size":"512","label":"","model":"VBOX HARDDISK","name":"/dev/sda","parent":"","size":"83886080","type":"","uuid":"","vendor":"ATA"},...]
What I expect:
# osqueryi --json --logger_min_status=3 --logger_min_stderr=3 'select * from block_devices'
[{"block_size":"512","label":"","model":"VBOX HARDDISK","name":"/dev/sda","parent":"","size":"83886080","type":"","uuid":"","vendor":"ATA"},...]
This logging seems to be coming from the LVM library, so it is likely not controllable by osquery. I couldn't find the exact log line in the LVM2 source.
I believe it is the populatePVChildren function that calls the LVM routine that performs the logging.
Your interpretation of the documentation around debugging looks correct.
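Since the message and the query results go to different streams, a pragmatic workaround (assuming the LVM warning is written to stderr, as status messages usually are) is to discard stderr at the shell:
# osqueryi --json --logger_min_status=3 --logger_min_stderr=3 'select * from block_devices' 2>/dev/null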
I had the following situation: I'm in a live user mode debugging session and I wanted to show the win32k!_W32Process structure. Unfortunately, win32k is a kernel mode SYS file, so the symbols are not available in the user mode session.
I know that I can always load a DLL, EXE or SYS as a dump file and then inspect the symbols. Usually I would do that via File/Open Crash Dump.
This time, I wanted to show the participants of a debugging workshop that it's possible to debug multiple systems at the same time, so I opened the Win32K.sys via WinDbg's command prompt:
0:003> |
. 0 id: 10fc attach name: [...]\NetHeaps.exe
0:003> .opendump C:\Windows\winsxs\[...]\win32k.sys
Loading Dump File [C:\Windows\winsxs\[...]\win32k.sys]
Opened 'C:\Windows\winsxs\[...]\win32k.sys'
||0:0:003>
As we can now see, we have 2 systems and I'm currently on the live debugging system:
||0:0:003> ||
. 0 Live user mode: <Local>
1 Image file: C:\Windows\winsxs\[...]\win32k.sys
I thought I could switch to the other system now, but that does not work:
||0:0:003> ||1s
^ Illegal debuggee error in '||1s'
I would not have worried too much, but it can't find the symbols of win32k in this case:
||0:0:003> .reload
Reloading current modules
...........................
||0:0:003> dt win32k!_W32Process
Symbol win32k!_W32Process not found.
The problem is not in the || command, it's in the .opendump command.
The help says:
After you use the .opendump command, you must use the g (Go) command to finish loading the dump file.
Be aware that this will also run your live process. Therefore, freeze the threads first (~*f) and unfreeze later (~*u).
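Putting the help text together with the system-switching syntax, the whole sequence looks roughly like this (a sketch; the system numbers depend on your session):
~*f     $$ freeze all threads of the live process first
g       $$ required after .opendump; would otherwise run the live process
||1s    $$ once the dump is loaded, switch to it
$$ ...inspect the dump...
||0s    $$ switch back to the live system when done
~*u     $$ unfreeze the threads again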
After that you can switch the system and display the type:
||1:1:004> ||
0 Live user mode: <Local>
. 1 Image file: C:\Windows\winsxs\[...]\win32k.sys
||1:1:004> dt _W32Process
win32k!_W32PROCESS
+0x000 Process : Ptr64 _EPROCESS
+0x008 RefCount : Uint4B
+0x00c W32PF_Flags : Uint4B
[...]
I have this program that listens to the microphone and tries to recognize what was said:
#!/usr/bin/env python3
import speech_recognition

recognizer = speech_recognition.Recognizer()

class Recognition:
    def __init__(self):
        self.recognizer = speech_recognition.Recognizer()

    def listen(self):
        with speech_recognition.Microphone() as source:
            self.recognizer.adjust_for_ambient_noise(source)
            self.audio = self.recognizer.listen(source)

    def recognize_sphinx(self):
        decoder = self.recognizer.recognize_sphinx(self.audio, show_all=True)
        for best, i in zip(decoder.nbest(), range(10)):
            return best.hypstr

r = Recognition()
r.listen()
print(r.recognize_sphinx())
Although the code works, it does display the following warnings along with the output:
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2266:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for 4294967295, skipping unlock
JackShmReadWritePtr::~JackShmReadWritePtr - Init not done for 4294967295, skipping unlock
potato
The problem is that I want a script to execute the code above and read its output ("potato"), but the returned result is mixed in with all those errors. I tried the solution I saw here, but it only suppresses the ALSA warnings. Messing with the ALSA/PulseAudio configs would probably solve the problem, but I don't want to do that on every machine that runs the code.
I also tried redirecting stdout/stderr to null, but that didn't suppress anything.
I have a workaround in mind: when the script runs this program, take only the last line of output. But I want that only as a last resort.
The warning/status messages go to the standard error output (stderr), while the output of your print() call goes to the standard output (stdout).
It should be trivial to separate the two.
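For example, a calling script can keep stdout and discard stderr; a minimal sketch, where recognize.py is a placeholder for your program's filename:
import subprocess

# Capture only stdout; the ALSA/JACK noise on stderr goes to /dev/null.
result = subprocess.run(
    ["python3", "recognize.py"],
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
)
print(result.stdout.decode().strip())  # e.g. "potato"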
If you want to actually suppress the warning/status messages, have a look at my answer to the SO question you mentioned, or at the more verbose answer I've linked to from there.
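One approach is to redirect at the file-descriptor level, which also silences C libraries such as ALSA and JACK that write to stderr directly and therefore ignore sys.stderr. A sketch of that general technique (not necessarily the linked answer's exact code):
import os
from contextlib import contextmanager

@contextmanager
def suppress_stderr():
    # Point file descriptor 2 at /dev/null so writes from C code vanish too.
    devnull = os.open(os.devnull, os.O_WRONLY)
    saved = os.dup(2)
    os.dup2(devnull, 2)
    try:
        yield
    finally:
        os.dup2(saved, 2)  # restore the original stderr
        os.close(devnull)
        os.close(saved)

# Usage:
# with suppress_stderr():
#     r.listen()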
I'm using Hue for Pig scripts on Amazon EMR. I am using the %declare and %default statements as described in the documentation.
I have some %default and %declare statements, and it looks like they are not preprocessed within Hue. Therefore, although the parameters are defined in my script, the editor keeps popping up a parameter configuration window. If I leave the parameter blank, the job fails with an error.
Sample Script
%declare OUTPUT_FOLDER 'testingOutput01';
ts = LOAD 's3://testbucket1/input/testdata-00000.gz' USING PigStorage('\t');
STORE ts INTO 's3://testbucket1/$OUTPUT_FOLDER' USING PigStorage('\t');
Upon execution, it shows the pop-up window asking for values for OUTPUT_FOLDER. If I leave it blank it fails with the following error:
2015-06-23 20:15:54,908 [main] ERROR org.apache.pig.Main - ERROR 2997:
Encountered IOException. org.apache.pig.tools.parameters.ParseException:
Encountered "<EOF>" at line 1, column 12.
Was expecting one of:
<IDENTIFIER> ...
<OTHER> ...
<LITERAL> ...
<SHELLCMD> ...
Is that the expected behavior? Is this a known issue or am I missing something?
Configuration details:
AMI version: 3.7.0
Hadoop distribution: Amazon 2.4.0
Applications: Hive 0.13.1, Pig 0.12.0, Impala 1.2.4, Hue
The same behavior is seen with %default instead of %declare.
If you need any clarifications then please do comment on this question. I will update it as needed.
Hue does not yet support the %declare and %default statements. It will be fixed with: https://issues.cloudera.org/browse/HUE-2508
The current temporary workaround is to put any value in the popup.
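If you run the script outside Hue, you can also pass the value explicitly on the Pig command line, for example (the script filename is a placeholder):
pig -param OUTPUT_FOLDER=testingOutput01 script.pig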
I wrote a script that renders scenes, and I want to see its output on the console. I am using print, but it does not work. What should I use to print something?
I run the script with:
blender -b -P render.py
I want render.py to output a string like:
print('#' * 80)
It's a trivial question, but print does not work, and I don't know how to make progress without debug messages.
Use the logging module to set up your custom logger. You can add a console handler to log to the console and/or a file handler to log to a file. Consolidated, with the missing imports and logger creation filled in (log_file is a placeholder for your own path):
import logging
import os
import sys

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(message)s')

# Console handler: writes log records to stdout.
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(formatter)

# File handler: writes log records to a file.
log_file = 'render.log'  # placeholder path
file_handler = logging.FileHandler(log_file)
file_handler.setFormatter(formatter)

# Add the handlers to the logger.
logger.addHandler(console_handler)
logger.addHandler(file_handler)
The handlers can have different log levels, which you can set via the script or an environment variable:
log_level = 'DEBUG'  # default level
if 'LOG_LEVEL' in os.environ:
    log_level = os.environ['LOG_LEVEL']
console_handler.setLevel(log_level)
file_handler.setLevel('INFO')
Read through:
https://docs.python.org/3/howto/logging.html
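With that in place, you can control the console verbosity per run via the LOG_LEVEL variable read above, for example:
LOG_LEVEL=DEBUG blender -b -P render.py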
I set CELERY_RESULT_BACKEND = "amqp" in celeryconfig.py
but I get:
>>> from tasks import add
>>> result = add.delay(3,5)
>>> result.ready()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 105, in ready
return self.state in self.backend.READY_STATES
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 184, in state
return self.backend.get_status(self.task_id)
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 414, in _is_disabled
raise NotImplementedError("No result backend configured. "
NotImplementedError: No result backend configured. Please see the documentation for more information.
I just went through this, so I can shed some light on it. One might think, with all the great documentation out there, that some of this would have been a bit more obvious.
I'll assume you have RabbitMQ up and running (it needs to be running) and that you have django-celery installed.
Once you have that, all you need to do is include this single line in your settings.py file:
BROKER_URL = "amqp://guest:guest@localhost:5672//"
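For context, a minimal settings.py fragment might look like this (a sketch; it assumes the default RabbitMQ guest account and that 'djcelery' is added to INSTALLED_APPS):
import djcelery
djcelery.setup_loader()

BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_RESULT_BACKEND = "amqp"  # the backend the question tries to configure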
Then you need to run syncdb and start things up using:
python manage.py celeryd -E -B --loglevel=info
The -E flag states that you want events captured, and -B states that you want celerybeat running. The former enables you to actually see something in the admin window, and the latter allows you to schedule tasks. Finally, you need to ensure that you actually capture the events and the status, so in another terminal run:
./manage.py celerycam
And then you're finally able to see the working example provided in the docs, again assuming you created the tasks.py it describes:
>>> result = add.delay(4, 4)
>>> result.ready() # returns True if the task has finished processing.
False
>>> result.result # task is not ready, so no return value yet.
None
>>> result.get() # Waits until the task is done and returns the retval.
8
>>> result.result # direct access to result, doesn't re-raise errors.
8
>>> result.successful() # returns True if the task didn't end in failure.
True
Then you are also able to view the status in the admin panel.
I hope this helps! I would add one more thing that helped me: watching the RabbitMQ log file was key, as it helped me confirm that django-celery was actually talking to RabbitMQ.
Are you running django celery?
If so, you need to start a Python shell in the context of Django (or whatever the technical term is).
Type:
python manage.py shell
And try your commands from that shell
I tried everything to get Celery v3.1.25 working with Django 1.8, and nothing worked.
Finally, the line below helped me:
app = Celery('documents', backend="celery.backends.amqp:AMQPBackend")
Setting backend="celery.backends.amqp:AMQPBackend" fixed my error.
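For reference, a minimal module putting that argument in context might look like this (a sketch; the app name, broker URL, and task are placeholders):
from celery import Celery

app = Celery('documents',
             broker='amqp://guest:guest@localhost:5672//',
             backend='celery.backends.amqp:AMQPBackend')

@app.task
def add(x, y):
    return x + y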