I use guard for test automation, and it sends notifications to tmux when test runs complete.
However, some of my tests take fairly long to run, and if the tmux pane guard runs in is hidden, I have no clear way of knowing whether a test run has completed. This is especially true if two runs in a row finish with the same status.
Does guard support a different notification that shows tests are currently running?
If so, what's an example configuration if, say, I wanted the tmux session title to turn white while tests are running and then red/green/yellow when they complete?
If not, where should I look in the guard source code if I wanted to develop that feature and submit a pull request?
Check out all the tmux options here:
https://github.com/guard/guard/blob/45ac8e1013767e1d84fcc590418f9a8469b0d3b2/lib/guard/notifiers/tmux.rb#L24-L38
There's a display_on_all_clients option, which should flash the notification in any other tmux clients you have open.
There's also a color_location option (see the tmux man page for possible values).
Here are some example settings you can place in your ~/.guard.rb file:
notification(:tmux, {
  timeout: 0.5,
  display_message: true,
  display_title: true,
  default_message_color: 'black',
  display_on_all_clients: true,
  success: 'colour150',
  failure: 'colour174',
  pending: 'colour179',
  color_location: %w[status-left-bg pane-active-border-fg pane-border-fg],
}) if ENV['TMUX']
I had this issue today and solved it by creating ~/.guard.rb and adding:
# ~/.guard.rb (or Guardfile)
notification :tmux,
  display_message: true,
  timeout: 5,                        # in seconds
  default_message_format: '%s >> %s',
  # the first %s will show the title, the second the message
  # Alternatively you can also configure *success_message_format*,
  # *pending_message_format*, *failed_message_format*
  line_separator: ' > ',             # since we are single line we need a separator
  color_location: 'status-left-bg',  # which tmux element will change color
  # Other options:
  default_message_color: 'black',
  success: 'colour150',
  failure: 'colour174',
  pending: 'colour179',
  # Notify on all tmux clients
  display_on_all_clients: false
  # color_location also accepts several elements at once, e.g.:
  # color_location: %w[status-left-bg pane-active-border-fg pane-border-fg]
I made a game that can be controlled with voice commands. To convert the voice commands into text I used the IBM Cloud Speech to Text service. Everything is done except that it is showing me the BAD LENGTH ERROR, as you can see in the image.
This is the code for speech to text:
###############################################
#### Initialize queue to store the recordings #
###############################################
from queue import Queue

from ibm_watson import SpeechToTextV1
from ibm_watson.websocket import AudioSource
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

CHUNK = 1024
# Note: it will discard data if the websocket client can't consume fast enough,
# so increase the max size as needed
BUF_MAX_SIZE = CHUNK * 10
# Buffer to store audio
q = Queue(maxsize=int(round(BUF_MAX_SIZE / CHUNK)))
# Create an instance of AudioSource
audio_source = AudioSource(q, True, True)
###############################################
#### Prepare Speech to Text Service ###########
###############################################
# initialize speech to text service
authenticator = IAMAuthenticator('i3gkxvESZRUHnt0_Iv2PtMQaHd2roF1YgvTTIzq0tbop')
speech_to_text = SpeechToTextV1(authenticator=authenticator)
speech_to_text.set_service_url(
    "https://api.eu-gb.speech-to-text.watson.cloud.ibm.com/instances/54f44656-b15c-4a16-8dac-c5b782482f93")
actions = []
I got that error solved by just uninstalling all the packages and reinstalling the required one.
It ran successfully after just that, and even if you receive that error again in the future, try this process again; it should work.
Apart from this, I was not able to find any other solution.
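The post doesn't say which packages were involved; assuming it was the ibm-watson SDK used in the code above, the reinstall would look something like this:
# assumption: ibm-watson (and its websocket-client dependency) are the packages to refresh
pip uninstall -y ibm-watson websocket-client
pip install ibm-watson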
websocket.create_connection has an enable_multithread option, which ensures multithreading is handled correctly. Enabling it may fix the problem.
Source
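For context, here is a minimal sketch of that option with the websocket-client library; this only applies if you open the websocket connection yourself rather than letting the SDK manage it, and the URL below is a placeholder, not taken from the original post:
from websocket import create_connection  # from the websocket-client package

# enable_multithread makes the connection safe to use from multiple threads
ws = create_connection(
    "wss://api.eu-gb.speech-to-text.watson.cloud.ibm.com/v1/recognize",  # placeholder URL
    enable_multithread=True,
)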
I am writing code to start, stop, undeploy and deploy my application on WebLogic.
My components need to be deployed on a few managed servers.
When I do new deployments manually I can start and stop the servers in parallel, by ticking multiple boxes and selecting start and stop from the drop-down. See below.
But when trying from WLST, I could only do that one server at a time.
ex:
start(name='ServerX',type='Server',block='true')
start(name='ServerY',type='Server',block='true')
shutdown(name='ServerX',entityType='Server',ignoreSessions='true',timeOut=600,force='true',block='true')
shutdown(name='ServerY',entityType='Server',ignoreSessions='true',timeOut=600,force='true',block='true')
Is there a way I can start/stop multiple servers in one command?
Instead of directly starting and stopping servers, you create tasks, then wait for them to complete.
e.g.
from java.lang import Thread

tasks = []
for server in cmo.getServerLifeCycleRuntimes():
    # to start up all servers (skipping the admin server and anything already running)
    if server.getName() != 'AdminServer' and server.getState() != 'RUNNING':
        tasks.append(server.start())
    # or, to shut them down instead:
    # if server.getName() != 'AdminServer' and server.getState() != 'SHUTDOWN':
    #     tasks.append(server.shutdown())

# wait for all tasks to complete
while len(tasks) > 0:
    tasks = [task for task in tasks if task.getStatus() == 'TASK IN PROGRESS']
    Thread.sleep(5000)
I know this is an old post, but today I was reading the book "Advanced WebLogic Server Automation" by Martin Heinzl, and on page 282 I found this:
def startCluster(clustername):
    try:
        start(clustername, 'Cluster')
    except Exception, e:
        print 'Error while starting cluster', e
        dumpStack()
I tried it and it started managed servers in parallel.
Just keep in mind the AdminServer must be started first and your script must connect to the AdminServer before trying it.
Perhaps this will not be useful for you, as it requires the servers to be in a cluster, but I wanted to share it :)
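For completeness, here is a minimal sketch of the connect-then-start sequence described above; the credentials, admin URL, and cluster name are placeholders, not values from the original post:
# connect to the (already running) AdminServer first -- placeholders below
connect('weblogic', 'welcome1', 't3://adminhost:7001')

# then start the whole cluster; its managed servers come up in parallel
startCluster('myCluster')

disconnect()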
I have the following code
# -*- coding: utf-8 -*-
# 好
##########################################
import time
from twisted.internet import reactor, threads
from twisted.web.server import Site, NOT_DONE_YET
from twisted.web.resource import Resource
##########################################
class Website(Resource):
    def getChild(self, name, request):
        return self

    def render(self, request):
        if request.path == "/sleep":
            duration = 3
            if 'duration' in request.args:
                duration = int(request.args['duration'][0])
            message = 'no message'
            if 'message' in request.args:
                message = request.args['message'][0]
            #-------------------------------------
            def deferred_activity():
                print 'starting to wait', message
                time.sleep(duration)
                request.setHeader('Content-Type', 'text/plain; charset=UTF-8')
                request.write(message)
                print 'finished', message
                request.finish()
            #-------------------------------------
            def responseFailed(err, deferred):
                print err.getErrorMessage()
                deferred.cancel()
            #-------------------------------------
            def deferredFailed(err, deferred):
                pass  # print err.getErrorMessage()
            #-------------------------------------
            deferred = threads.deferToThread(deferred_activity)
            deferred.addErrback(deferredFailed, deferred)  # will get called indirectly by responseFailed
            request.notifyFinish().addErrback(responseFailed, deferred)  # to handle client disconnects
            #-------------------------------------
            return NOT_DONE_YET
        else:
            return 'nothing at ' + request.path
##########################################
reactor.listenTCP(321, Site(Website()))
print 'starting to serve'
reactor.run()
##########################################
# http://localhost:321/sleep?duration=3&message=test1
# http://localhost:321/sleep?duration=3&message=test2
##########################################
My issue is the following:
When I open two tabs in the browser, point one at http://localhost:321/sleep?duration=3&message=test1 and the other at http://localhost:321/sleep?duration=3&message=test2 (the messages differ), and reload the first tab and then the second one as quickly as possible, they finish almost at the same time: the first tab about 3 seconds after hitting F5, the second about half a second after the first.
This is expected, as each request got deferred into a thread, and they are sleeping in parallel.
But when I change the URL of the second tab to be the same as that of the first tab, i.e. to http://localhost:321/sleep?duration=3&message=test1, then all of this becomes blocking. If I press F5 on the first tab and then, as quickly as possible, F5 on the second one, the second tab finishes about 3 seconds after the first one. They don't get executed in parallel.
As long as the entire URI is the same in both tabs, the server starts to block. This is the same in Firefox as well as in Chrome. But when I open one tab in Chrome and the other in Firefox at the same time, it is non-blocking again.
So it may not necessarily be related to Twisted, but perhaps to some connection reuse or something like that.
Does anyone know what is happening here and how I can solve this issue?
Coincidentally, someone asked a related question over at the Tornado section. As you suspected, this is not an "issue" in Twisted but rather a "feature" of web browsers :). Tornado's FAQ page has a small section dedicated to this issue. The proposed solution is appending an arbitrary query string.
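For example, based on the URLs above, appending a throwaway parameter (the name `_` here is arbitrary, not anything Twisted or the browser requires) makes the browser treat the two requests as distinct, so it no longer holds the second one back:
# http://localhost:321/sleep?duration=3&message=test1&_=1
# http://localhost:321/sleep?duration=3&message=test1&_=2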
Quote of the day:
One dev's bug is another dev's undocumented feature!
Just starting with noflo, I'm baffled as to why I'm not able to get a simple flow working. I started today, installing noflo and the core components following the example pages, and the canonical "Hello World" example
Read(filesystem/ReadFile) OUT -> IN Display(core/Output)
'package.json' -> IN Read
works... so far so good. Then I wanted to change it slightly, adding "noflo-rss" to the mix, and changed the example to
Read(rss/FetchFeed) OUT -> IN Display(core/Output)
'http://xkcd.com/rss.xml' -> IN Read
Running it like this:
$ /node_modules/.bin/noflo-nodejs --graph graphs/rss.fbp --batch --register=false --debug
...but no cigar -- there is no output; it just sits there.
If I stick a console.log into the source code of FetchFeed.coffee
parser.on 'readable', ->
  while item = @read()
    console.log 'ITEM', item  ## Hack the code here
    out.send item
then I do see the output and the content of the RSS feed.
Question: Why does out.send in rss/FetchFeed not feed the data to the core/Output for it to print? What dark magic makes the first example work, but not the second?
When running with --batch, the process will exit once the network has stopped, which is determined by there no longer being any open connections between nodes.
The problem is that rss/FetchFeed does not hold a connection open on its outport, so the connection count drops to zero and the process exits before the feed data is sent.
One workaround is to run without --batch. Another one I just submitted as a pull request (needs review).
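If you go the first route, that just means dropping the --batch flag from the command shown in the question, so the runtime keeps the process alive while it waits for the feed:
$ /node_modules/.bin/noflo-nodejs --graph graphs/rss.fbp --register=false --debug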
I'm adding a backend for Celery results, and I'm having an issue where, when I send tasks, some are accepted while others aren't.
Tasks that are and aren't executed both show this log output:
[2014-06-09 15:50:59,091: INFO/MainProcess] Received task: tasks.multithread_device_listing[e3ae6d12-ad4b-4114-9383-5802c91541f2]
Ones that ARE executed then show this output:
[2014-06-09 15:50:59,093: DEBUG/MainProcess] Task accepted: tasks.multithread_device_listing[e3ae6d12-ad4b-4114-9383-5802c91541f2] pid:2810
While tasks that AREN'T executed never arrive at the above line.
How I send tasks:
from celery import group
from time import sleep

signatures = []

signature = some_method_with_task_decorator.subtask()
signatures.append(signature)

signature = some_other_method_with_task_decorator.subtask()
signatures.append(signature)

job = group(signatures)
result = job.apply_async()

while not result.ready():
    sleep(60)
My celery config, as reported by celery, is:
software -> celery:3.1.11 (Cipater) kombu:3.0.18 py:2.7.5
billiard:3.3.0.17 py-amqp:1.4.5
platform -> system:Darwin arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:amqp results:amqp://username:pass@localhost:5672/automated_reports
CELERY_QUEUES:
(<unbound Queue automated_reports -> <unbound Exchange default(direct)> -> automated_reports>,)
CELERY_DEFAULT_ROUTING_KEY: '********'
CELERY_INCLUDE:
('celery.app.builtins',
'automated_reports.queue.tasks',
'automated_reports.queue.subtasks')
CELERY_IMPORTS:
('automated_reports.queue.tasks', 'automated_reports.queue.subtasks')
CELERY_RESULT_PERSISTENT: True
CELERY_ROUTES: {
'automated_reports.queue.tasks.run_device_info_report': { 'queue': 'automated_reports'},
'uploader.queue.subtasks.multithread_device_listing': { 'queue': 'automated_reports'},
'uploader.queue.subtasks.multithread_individual_device': { 'queue': 'automated_reports'},
'uploader.queue.tasks.multithread_device_listing': { 'queue': 'automated_reports'},
'uploader.queue.tasks.multithread_individual_device': { 'queue': 'automated_reports'}}
CELERY_DEFAULT_QUEUE: 'automated_reports'
BROKER_URL: 'amqp://username:********@localhost:5672/automated_reports'
CELERY_RESULT_BACKEND: 'amqp://username:pass@localhost:5672/automated_reports'
My startup command is:
~/Documents/Development/automated_reports/bin/celery worker --loglevel=DEBUG --autoreload -A automated_reports.queue.tasks -Q automated_reports -B --schedule=~/Documents/Development/automated_reports/log/celerybeat --autoscale=10,3
Also, when I stop celery, it pulls tasks out of my queue that were never accepted. Then when I restart it, it accepts and executes them.
Any help with this behavior is much appreciated. I'm certain it has something to do with my backend configuration, but I'm having difficulty isolating the issue or its fix. Thanks!
I found the answer to this.
I noticed that the 'inqueue' seemed to be properly receiving tasks in some cases, but not others. When I searched the Celery docs, I found this note: http://celery.readthedocs.org/en/latest/whatsnew-3.1.html?highlight=inqueue#caveats
I was executing the subtasks from within a long-running task, so this sounded very much like the behavior I was seeing. Also, I'm on the version mentioned, whereas on previous versions I hadn't had this problem with the same config.
I added the -Ofair parameter when starting the worker, and it immediately resolved the issue.
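Concretely, with the startup command from the question, that just means appending -Ofair:
~/Documents/Development/automated_reports/bin/celery worker --loglevel=DEBUG --autoreload -A automated_reports.queue.tasks -Q automated_reports -B --schedule=~/Documents/Development/automated_reports/log/celerybeat --autoscale=10,3 -Ofair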