Does redis-py automatically use evalsha for registered scripts?

When I register a Lua script to a redis client:
script = redis_client.register_script(lua_string)
and then run the script with the default client:
script(keys, args)
does this automatically use evalsha internally or does it send the whole script to the server every time?

Yes. Here's the (abridged) source code:
class Script(object):
    def __call__(self, keys=[], args=[], client=None):
        if client is None:
            client = self.registered_client
        args = tuple(keys) + tuple(args)
        if isinstance(client, BasePipeline):
            # Make sure the pipeline can register the script before executing.
            client.scripts.add(self)
        return client.evalsha(self.sha, len(keys), *args)
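So a registered script is executed with EVALSHA; only the SHA1 digest crosses the wire, not the Lua source. In current redis-py versions, if the server replies NOSCRIPT (for instance after a server restart), the Script object reloads the source with SCRIPT LOAD and retries the EVALSHA. A minimal sketch of the round trip, assuming a Redis server on localhost (the key name mykey is just an example):
import redis

r = redis.Redis(host='localhost', port=6379)

# register_script returns a Script object; the Lua source is hashed so
# that later calls can use EVALSHA instead of resending the whole script.
script = r.register_script("return redis.call('GET', KEYS[1])")

r.set('mykey', 'hello')

# Sends EVALSHA <sha> 1 mykey; falls back to SCRIPT LOAD + retry on NOSCRIPT.
print(script(keys=['mykey'], args=[]))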


Prefect not finding .env file

While running a prefect flow from PyCharm everything works fine, but when I start it from Prefect Server, the flow doesn't find the .env file with my credentials and fails with my own assertion error from this code:
import os
import dotenv

class MyDotenv:
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        dotenv_file = ".\\04_keep_local\\.env"
        assert os.path.isfile(dotenv_file), "\n-> Couldn't locate .env file!"
        dotenv.load_dotenv(dotenv_file)
I've used these commands on my virtual environment (venv) to start the server and the agent:
prefect backend server
prefect server start
prefect agent local start
Any ideas?
Did you perhaps start your LocalAgent in a directory that doesn't contain your .env file?
The LocalAgent runs a flow as a subprocess of itself, meaning the flow runs in the directory from which you executed prefect agent local start.
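If moving the agent isn't convenient, another option is to resolve the .env path relative to the flow's source file instead of the current working directory. A minimal sketch, keeping the 04_keep_local folder name from the question:
import os
import dotenv

# Resolve .env relative to this file, not the agent's working directory,
# so the lookup works no matter where the LocalAgent was started.
base_dir = os.path.dirname(os.path.abspath(__file__))
dotenv_file = os.path.join(base_dir, "04_keep_local", ".env")
assert os.path.isfile(dotenv_file), "\n-> Couldn't locate .env file!"
dotenv.load_dotenv(dotenv_file)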

Can we single-step QEMU using libvirt?

I am developing peripheral hardware and want to use QEMU to test it.
The plan is to run the device driver in QEMU and use libvirt (or something else?) to interface the VM with a Python-based simulation model of the peripheral.
I am aware that QEMU can be single-stepped via GDB, but I am looking for a Python approach to do the following:
Wait for a write to a specific memory location.
Suspend QEMU.
Run some background task on the host.
Run QEMU for N cycles.
Write to a memory location.
Continue.
Is this possible with libvirt or any other toolkit?
I needed to do something similar, and came across two approaches:
Run Python in GDB, using a python script of the commands
Use a Python API to GDB like pygdbmi
The latter ended up being more flexible, so I'll explain those steps here.
Configure qemu with debugging information:
./configure --enable-debug
Build qemu and invoke it halted, with debug hooks:
make
sudo make install
qemu-system-x86_64 -S -s
Now, use a Python script to attach to and interact with qemu via pygdbmi (instructions here):
from pygdbmi.gdbcontroller import GdbController
from pprint import pprint
# Start gdb process
gdbmi = GdbController()
print(gdbmi.get_subprocess_cmd()) # print actual command run as subprocess
response = gdbmi.write('target remote localhost:1234')  # attach to QEMU GDB socket
pprint(response)
response = gdbmi.write('-break-insert main') # machine interface (MI) commands start with a '-'
response = gdbmi.write('break main') # normal gdb commands work too, but the return value is slightly different
response = gdbmi.write('-exec-run')
response = gdbmi.write('run')
response = gdbmi.write('-exec-next', timeout_sec=0.1) # the wait time can be modified from the default of 1 second
response = gdbmi.write('next')
response = gdbmi.write('next', raise_error_on_timeout=False)
response = gdbmi.write('next', raise_error_on_timeout=True, timeout_sec=0.01)
response = gdbmi.write('-exec-continue')
response = gdbmi.send_signal_to_gdb('SIGKILL') # name of signal is okay
response = gdbmi.send_signal_to_gdb(2) # value of signal is okay too
response = gdbmi.interrupt_gdb() # sends SIGINT to gdb
response = gdbmi.write('si 20') # step 20 instructions
response = gdbmi.write('continue')
response = gdbmi.exit()
If you have trouble with kernel symbols, you might also need to issue a command 'file myKernel' to load the symbol table from that file, assuming it was compiled with debugging information.
For reference, the '-s' option makes QEMU listen for a GDB connection on localhost:1234 (it is shorthand for -gdb tcp::1234). So the first command you issue must direct gdb to look there:
gdbmi.write('target remote localhost:1234')
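To approximate the sequence in the question (wait for a write, suspend, run host code, step N instructions, write back, continue), a hardware watchpoint works. A rough sketch, where 0x40000000 stands in for a hypothetical peripheral register address and run_host_task for your Python simulation model:
from pygdbmi.gdbcontroller import GdbController

def run_host_task():
    # Placeholder for the Python simulation model of the peripheral.
    print('peripheral model step')

gdbmi = GdbController()
gdbmi.write('target remote localhost:1234')   # attach to QEMU's -s socket

# Stop whenever the guest writes this (hypothetical) register address.
gdbmi.write('watch *(int*)0x40000000')

for _ in range(10):  # handle ten writes, then detach
    # Resume the guest; gdb suspends it again when the watchpoint fires.
    response = gdbmi.write('-exec-continue', timeout_sec=5,
                           raise_error_on_timeout=False)
    if any(r.get('message') == 'stopped' for r in response):
        run_host_task()                         # background task on the host
        gdbmi.write('si 20')                    # run the guest for 20 instructions
        gdbmi.write('set {int}0x40000000 = 1')  # write a value into guest memory
gdbmi.exit()
Note that GDB steps by instruction, not by clock cycle, so "N cycles" becomes "N instructions" here.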

Is there a way to run command lines asynchronously in Azure-DevOps Build Pipeline?

I'm setting up an Azure-DevOps pipeline in which I want to include automated tests via the Newman CLI.
Imagine a pipeline like this:
Build Project
Copy build to test folder
Run the application => (API-Server)
Run Newman
Kill API Server Process
On Success Copy Build to another folder.
My problem is that my server application is in a waiting state after its initialization.
The next task in my build pipeline won't start.
Is there a way to run multiple command lines asynchronously in Azure-DevOps?
Starting the process via "start" won't work since it throws me an
ERROR: Input redirection is not supported, exiting the process immediately.
start "%TESTDIR%\foo\bar.exe"
timeout 10
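If Python is available on the build agent, one way to sidestep the start redirection error is to launch the server from a short script step, fully detached from the console, so the pipeline task can finish while the server keeps running. A rough sketch (the server path is hypothetical; Windows, Python 3.7+ for subprocess.DETACHED_PROCESS):
import subprocess

# Hypothetical path to the API server built earlier in the pipeline.
server_exe = r"C:\agent\_work\test\foo\bar.exe"

# Detach the child from this console so the step can exit while the
# server keeps running; redirect the handles so nothing blocks on I/O.
subprocess.Popen(
    [server_exe],
    creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP,
    stdin=subprocess.DEVNULL,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print("server launched; pipeline can continue")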

Convert .docx to .odt using libreoffice5.0 in Python

command = "libreoffice5.0 --headless --convert-to odt /data/Format/000001535edbaf8f27a9c331003600c900520045/test.docx --outdir /data/Format/000001535edbaf8f27a9c331003600c900520045"
When we run this command in a terminal, it gives this output:
/data/Format/000001535edbaf8f27a9c331003600c900520045/test.odt
But whenever I try it from an Apache request with os.system(command), it starts the process but never returns anything; the process keeps running in the background continuously.
Have you thought about using
subprocess.call(["ls", "-l"])
"The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several older modules and functions:"
os.system
os.spawn*
os.popen*
popen2.*
commands.*
Ref: Python 2.7.x subprocess module
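Applied to the conversion command from the question, a minimal sketch might look like this (same paths as above; subprocess.call blocks until LibreOffice exits and returns its exit code, which makes a hang easier to diagnose than with os.system):
import subprocess

# Same conversion as the os.system command, but as an argument list;
# call() returns the exit code once LibreOffice finishes.
ret = subprocess.call([
    "libreoffice5.0", "--headless",
    "--convert-to", "odt",
    "/data/Format/000001535edbaf8f27a9c331003600c900520045/test.docx",
    "--outdir", "/data/Format/000001535edbaf8f27a9c331003600c900520045",
])
print("exit code:", ret)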

How do I run a script on VxWorks Tornado Shell?

I am trying to run a script on VxWorks Shell, which will load a module.
I use a Perl script to telnet into the system, log in, and get access to the shell.
I am able to run the basic commands like 'i', 'time', 'ls' 'pwd' and 'h' and so on.
But I would like to run a script, say 'test.o'.
If I do <C:\Path\subfolder\test.o, the script file WILL run from the Tornado shell.
But I have connected via Telnet using Perl.
So I connect this way:
use Net::Telnet;
my $username = "username";
my $password = "password";
my $t = new Net::Telnet(Timeout => 10, Errmode => 'die');
$t->open('10.42.177.123');
$t->login($username, $password);  # Logs in as expected.
my @lines = $t->cmd('i');  # To test
print @lines;  # This works
@lines = $t->cmd('<C:\\Path\\Subfolder\\test.o');  # This is not working for me. HELP!
print @lines;  # Prints the error below
I get an error saying:
Unknown directory: /C:\Path\Subfolder
can't open input 'C:\Path\Subfolder\test.o
errno = 0x1f5
How do I run my script file if it resides in a particular folder on the host PC?
I am able to run the script manually from the Tornado shell window, where the prompt looks like ->, so it is a working script. And as I have said, I am able to run and print the basic VxWorks shell commands ("built-in functions").
Any help? [My OS is Win7]
Thanks!
This issue is now resolved. There were two problems. One was that Tornado, another VxWorks client, was logged into the system at the same time that my Perl script was sending commands over Telnet; even though the VxWorks OS on the embedded system runs a Telnet daemon, it didn't like having two clients (Tornado and my script's Telnet session) connected at once.
As for the error above, it didn't work because of a syntax error. I should have used
$t->cmd('<\\Path\\subfolder\\test.o');
There is no need to give C:.