Is there a way to clear Redis shell (Redis-cli) from the output of the previous commands?
Basically I need exactly the same action as in this question I answered, but for Redis instead of MongoDB.
P.S. I tried clc, cls, clear, and CTRL + L, but as you can guess, with no results.
On terminals supported by Linenoise (used in redis-cli), both clear and CTRL-L work fine; they do over my ssh connection. Linenoise implements clear screen in the following way:
void linenoiseClearScreen(void) {
    if (write(STDIN_FILENO,"\x1b[H\x1b[2J",7) <= 0) {
        /* nothing to do, just to avoid warning. */
    }
}
So I guess this sequence does not work on your terminal ... or perhaps, you are using a very old version of redis-cli?
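If you want to check whether your terminal honours that escape sequence at all, you can send it by hand from a shell, independently of redis-cli:
printf '\033[H\033[2J'
If that clears the screen, the terminal is fine and the problem is on the redis-cli side; if not, the terminal (or terminal emulator) is what is ignoring the sequence.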
I can't figure out how to get unbuffered input.
I tried:
method get-selection() {
    getc();
}
I also tried the Term::ReadKey module:
use Term::ReadKey;
method get-selection() {
    read-key();
}
But I still have to hit enter before I can capture the input. Couldn't find anything in the docs that might help.
I'm on macOS.
https://docs.raku.org/type/IO::Handle#routine_getc states:
Using getc to get a single keypress from a terminal will only work properly if you've set the terminal to "unbuffered".
For macOS, a Google search gets me to:
https://apple.stackexchange.com/questions/193138/to-install-unbuffer-in-osx
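One way to get that from Raku itself is to shell out to stty and put the terminal into raw (unbuffered) mode around the read. This is only a rough sketch, assuming a POSIX terminal with stty available (the Term::termios module is a more structured alternative):
my $saved = qx{stty -g}.chomp;   # remember the current terminal settings
run 'stty', 'raw', '-echo';      # turn off line buffering and echo
my $key = $*IN.getc;             # now returns after a single keypress
run 'stty', $saved;              # restore the terminal before printing
say "You pressed: $key";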
TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each | $result | {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
  "msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples of how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully ran a plan using it; therefore, I conclude my system is set up correctly and it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the Package{} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split out the package{} code from the profile manifests and place it in tasks / plans, allowing packages to be updated out-of-hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions. I guess it would be fairly simple to interrogate the Chocolatey repos with code to pull the latest version number (see the example after this list), but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions to the versions defined in Hiera, meaning Hiera still needs to be kept up-to-date.
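For reference, querying the latest available version of a package in a Chocolatey repository can be done with choco itself (hypothetical package name; --limit-output gives machine-readable name|version pairs):
choco search notepadplusplus --exact --limit-output
# example output: notepadplusplus|<latest-version>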
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite all my manifests as scripts.
I am confused by the Puppet documentation as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet say they don't recommend using Bolt, yet their examples seem to cite Bolt commands.
There are no errors in the node's report. apply_prep() reports that it executed successfully (albeit taking far longer than the puppetlabs_reboot module), but apply() results in a failure and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, it appears their plan uses a bunch of tasks and that they don't use apply() to run their reboot{} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
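For reference, the class being applied has to declare matching parameters. The real base_windows::testpp manifest isn't shown here, but presumably it looks something like this (a hypothetical sketch, not the actual code):
class base_windows::testpp (
  String           $filename,
  Optional[String] $contents = undef,
) {
  file { $filename:
    ensure  => file,
    content => $contents,
  }
}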
I found this out using the logs.
Turn on debug level logging in /etc/puppetlabs/puppetserver/logback.xml (root level="debug")
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log
For some unknown reason (nothing even shows up in the Redis log), this piece of code gets stuck forever... Please help..
use v6;
use Redis;
my $redis = Redis.new("127.0.0.1:6379");
$redis.auth("xxxxxxxxx");
$redis.set("key", "value");
say $redis.get("key");
say $redis.info();
$redis.quit();
I wonder if the issue is that the Redis library is a bit old and there have been a few changes to the runtime in the intervening time.
Have you tried Redis::Async? It seems more up to date.
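I haven't verified Redis::Async's exact interface, but assuming it mirrors the Redis module's constructor and set/get/auth calls (treat that as an assumption, not documentation), the switch might be as small as this:
use v6;
use Redis::Async;

# Assumed to accept the same host:port string and method names as the Redis module
my $redis = Redis::Async.new("127.0.0.1:6379");
$redis.auth("xxxxxxxxx");
$redis.set("key", "value");
say $redis.get("key");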
I started to deal with the ffmpeg API (not the command line) to build a movie editor, and I'm trying to find a good tutorial on how to extract keyframes from a video, but I didn't find one.
Has someone done this before and can post the code here?
Does someone have a good tutorial on the ffmpeg API?
Thank you!
In your demuxing loop, check for the AV_PKT_FLAG_KEY flag in AVPacket::flags after calling av_read_frame() with your AVFormatContext and confirming the read packet is from the correct stream of the input. Example:
AVPacket packet;
if (av_read_frame(pFormatCtx, &packet) < 0) {
    break; /* end of file or read error */
}
if (videoStream /* e.g. 0 or 1 */ == packet.stream_index) {
    if (packet.flags & AV_PKT_FLAG_KEY) {
        /* this packet starts a keyframe - do something with it */
    }
}
av_packet_unref(&packet); /* release the packet before the next av_read_frame() */
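For completeness, here is a rough sketch (error handling and cleanup trimmed) of how the pFormatCtx and videoStream used above might be obtained; "input.mp4" is just a placeholder path:
#include <libavformat/avformat.h>

AVFormatContext *pFormatCtx = NULL;
int videoStream = -1;

av_register_all();                /* no longer required on FFmpeg 4.0+ */
if (avformat_open_input(&pFormatCtx, "input.mp4", NULL, NULL) < 0)
    return -1;                    /* could not open the input */
if (avformat_find_stream_info(pFormatCtx, NULL) < 0)
    return -1;                    /* could not read stream information */
/* index of the "best" video stream in the input */
videoStream = av_find_best_stream(pFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);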
Note that, in my experience, you sometimes need to decode up to 2 keyframes before the desired frame in order to produce a good picture.
See the doc/examples directory in the ffmpeg distribution for some API usage examples, e.g. demuxing_decoding.c. You can also reference ffmpeg.c (the source of the famous CLI) if you are brave and/or have a good IDE.
I have a lot of trouble with the combination of Symfony2 and Doctrine2. I have to deal with huge datasets (around 2-3 million writes and reads) and have to put in a lot of additional effort to avoid running out of memory.
I figured out 2 main points that "leak" memory (they are actually not really leaking, but allocating a lot):
The EntityManager's entity storage (I don't know the real name of this one) - it seems to keep all processed entities, and you have to clear this storage regularly with
$entityManager->clear()
The Doctrine query cache - it caches all used queries, and the only configuration I found lets you decide what kind of cache you want to use. I found neither a global disable nor a useful per-query flag to disable it.
So usually I disable it for every query object with:
$qb = $repository->createQueryBuilder($a);
$query = $qb->getQuery();
$query->useQueryCache(false);
$query->execute();
So... that's all I've figured out so far.
My questions are:
Is there an easy way to exclude some objects from the EntityManager's storage?
Is there a way to configure query cache usage on the EntityManager itself?
Can I configure this caching behaviour somewhere in the Symfony/Doctrine configuration?
Would be very cool if someone has some nice tips for me... otherwise this may help some rookie.
cya
As stated in the Doctrine Configuration Reference, by default logging of the SQL connection is set to the value of kernel.debug, so if you have instantiated AppKernel with debug set to true, the SQL commands get stored in memory for each iteration.
You should either instantiate AppKernel with debug set to false, set logging to false in your config YAML, or set the SQLLogger manually to null before using the EntityManager:
$em->getConnection()->getConfiguration()->setSQLLogger(null);
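For the first of those options, the debug flag is simply the second constructor argument of the kernel (standard Symfony 2 signature; the environment name here is only an example):
$kernel = new AppKernel('prod', false); // false = debug disabled, so SQL commands are not kept in memory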
Try running your command with --no-debug. In debug mode the profiler retains information about every single query in memory.
1. Turn off logging and profiling in app/config/config.yml
doctrine:
    dbal:
        driver: ...
        ...
        logging: false
        profiling: false
or in code
$this->entityManager->getConnection()->getConfiguration()->setSQLLogger(null);
2. Force the garbage collector. If the CPU is busy, the garbage collector may not get a chance to run, and you can soon find yourself out of memory.
First enable manual garbage collection management by calling gc_enable() anywhere in the code. Then call gc_collect_cycles() to force the garbage collector to run.
Example
public function execute(InputInterface $input, OutputInterface $output)
{
    gc_enable();

    // I'm initing $this->entityManager in __construct using DependencyInjection
    $customers = $this->entityManager->getRepository(Customer::class)->findAll();

    $counter = 0;
    foreach ($customers as $customer) {
        // process customer - some logic here, $this->em->persist and so on
        if (++$counter % 100 == 0) {
            $this->entityManager->flush(); // save unsaved changes
            $this->entityManager->clear(); // clear doctrine managed entities
            gc_collect_cycles(); // PHP garbage collect
            // Note that $this->entityManager->clear() detaches all managed entities;
            // maybe you still need some of them - reinit them here
        }
    }

    // don't forget to flush in the end
    $this->entityManager->flush();
    $this->entityManager->clear();
    gc_collect_cycles();
}
If your table is very large, don't use findAll. Use an iterator instead - http://doctrine-orm.readthedocs.org/projects/doctrine-orm/en/latest/reference/batch-processing.html#iterating-results (see the sketch after this list).
Set SQL logger to null
$em->getConnection()->getConfiguration()->setSQLLogger(null);
Manually call function gc_collect_cycles() after $em->clear()
$em->clear();
gc_collect_cycles();
Don't forget to set zend.enable_gc to 1, or manually call gc_enable() before using gc_collect_cycles().
Add the --no-debug option if you run the command from the console.
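As referenced above, here is a rough sketch of iterating instead of using findAll (assuming Doctrine ORM 2.x, where Query::iterate() is available; Customer is the entity from the earlier example):
$query = $this->entityManager->createQuery(
    'SELECT c FROM ' . Customer::class . ' c'
);
foreach ($query->iterate() as $row) {
    $customer = $row[0]; // iterate() wraps each result in an array
    // ... process $customer ...
    $this->entityManager->detach($customer); // drop it from the identity map
}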
Got some "funny" news from the Doctrine developers themselves at Symfony Live in Berlin - they say that for large batches you should not use an ORM... it is just not efficient to build stuff like that in OOP.
.. yeah.. maybe they are right xD
As per the standard Doctrine2 documentation, you'll need to manually clear or detach entities.
In addition to that, when profiling is enabled (as it is in the default dev environment), the DoctrineBundle in Symfony2 configures several loggers that use quite a bit of memory. You can disable logging completely, but it is not required.
An interesting side effect is that the loggers affect both Doctrine ORM and DBAL. One of the loggers will result in additional memory usage for any service that uses the default logger service. Disabling all of these would be ideal in commands, since the profiler isn't used there yet.
Here is what you can do to disable the memory-intense loggers while keeping profiling enabled in other parts of Symfony2:
$c = $this->getContainer();

/*
 * The default dbalLogger is configured to keep "stopwatch" events for every query executed;
 * the only way to disable this, as of Symfony 2.3 / Doctrine Bundle 1.2, is to reinstantiate the class
 */
$dbalLoggerClass = $c->getParameter('doctrine.dbal.logger.class');
$dbalLogger = new $dbalLoggerClass($c->get('logger'));
$c->set('doctrine.dbal.logger', $dbalLogger);

// sometimes you need to configure doctrine to use the new logger manually, like this
$doctrineConfiguration = $c->get('doctrine')->getManager()->getConnection()->getConfiguration();
$doctrineConfiguration->setSQLLogger($dbalLogger);

/*
 * If profiling is enabled, this service will store every query in an array;
 * fortunately, this is configurable with a property "enabled"
 */
if($c->has('doctrine.dbal.logger.profiling.default'))
{
    $c->get('doctrine.dbal.logger.profiling.default')->enabled = false;
}

/*
 * When profiling is enabled, the Monolog bundle configures a DebugHandler that
 * will store every log message in memory.
 *
 * As of Monolog 1.6, to remove/disable this logger we have to pop all the handlers
 * and then push them back on (in the correct order)
 */
$logger = $c->get('logger'); // the default Monolog logger service
$handlers = array();
try
{
    while($handler = $logger->popHandler())
    {
        if($handler instanceof \Symfony\Bridge\Monolog\Handler\DebugHandler)
        {
            continue;
        }
        array_unshift($handlers, $handler);
    }
}
catch(\LogicException $e)
{
    /*
     * As of Monolog 1.6, there is no way to know if there's a handler
     * available to pop off except for the \LogicException that's thrown.
     */
    if($e->getMessage() != 'You tried to pop from an empty handler stack.')
    {
        /*
         * this probably doesn't matter, and will probably break in the future;
         * it is here for the sake of people not knowing what they're doing,
         * so that an unknown exception is not silently discarded.
         */
        // remove at your own risk
        throw $e;
    }
}

// push the handlers back on
foreach($handlers as $handler)
{
    $logger->pushHandler($handler);
}
Try disabling any Doctrine caches that exist. (If you're not using APC or another backend as a cache, then memory is used.)
Remove Query Cache
$qb = $repository->createQueryBuilder($a);
$query = $qb->getQuery();
$query->useQueryCache(false);
$query->useResultCache(false);
$query->execute();
There's no way to globally disable it.
Also, this is an alternative to clear that might help (from here):
$connection = $em->getCurrentConnection();
$tables = $connection->getTables();
foreach ( $tables as $table ) {
$table->clear();
}
I just posted a bunch of tips for using Symfony console commands with Doctrine for batch processing here.