I am attempting to use Python to start multiple COM processes so that I can process several files asynchronously (using concurrent.futures), but I can only manage to start one process at a time.
Here's an easy way to see the problem using Excel:
import win32com.client
# start first instance
exl1 = win32com.client.Dispatch("Excel.Application")
# start second instance
exl2 = win32com.client.Dispatch("Excel.Application")
The second Excel process doesn't start up (I only see the process id of the first instance). Is there any way to accomplish this?
Found the answer (from here: https://stackoverflow.com/a/517975/4755456). Use the DispatchEx method instead:
import win32com.client
# start first instance
exl1 = win32com.client.DispatchEx("Excel.Application")
# start second instance
exl2 = win32com.client.DispatchEx("Excel.Application")
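To tie this back to the concurrent.futures goal in the question, here is a rough sketch (not from the linked answer) of giving each worker process its own Excel instance via DispatchEx; process_file, the file paths, and the explicit pythoncom.CoInitialize() call are illustrative assumptions on my part:

import concurrent.futures

def process_file(path):
    # imported and initialized inside the worker so each process sets up COM itself
    import pythoncom
    import win32com.client
    pythoncom.CoInitialize()
    try:
        excel = win32com.client.DispatchEx("Excel.Application")
        wb = excel.Workbooks.Open(path)
        # ... do the real per-file work here ...
        wb.Close(False)
        excel.Quit()
    finally:
        pythoncom.CoUninitialize()

if __name__ == "__main__":
    files = [r"C:\data\a.xlsx", r"C:\data\b.xlsx"]
    with concurrent.futures.ProcessPoolExecutor() as pool:
        list(pool.map(process_file, files))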
I'm writing a small Flask application and am having it connect to Rserve using pyRserve. I want every session to initiate and then maintain its own Rserve connection.
Something like this:
session['my_connection'] = pyRserve.connect()
doesn't work because the connection object is not JSON serializable. On the other hand, something like this:
flask.g.my_connection = pyRserve.connect()
doesn't work because it does not persist between requests. To add to the difficulty, it doesn't seem as though pyRserve provides any identifier for a connection, so I can't store a connection ID in the session and use that to retrieve the right connection before each request.
Is there a way to accomplish having a unique connection per session?
The following applies to any global Python data that you don't want to recreate for each request, not just rserve, and not just data that is unique to each user.
We need some common location to create an rserve connection for each user. The simplest way to do this is to run a multiprocessing.Manager as a separate process.
import atexit
from multiprocessing import Lock
from multiprocessing.managers import BaseManager

import pyRserve

connections = {}
lock = Lock()

def get_connection(user_id):
    with lock:
        if user_id not in connections:
            connections[user_id] = pyRserve.connect()
        return connections[user_id]

@atexit.register
def close_connections():
    for connection in connections.values():
        connection.close()

manager = BaseManager(('', 37844), b'password')
manager.register('get_connection', get_connection)
server = manager.get_server()
server.serve_forever()
Run it before starting your application, so that the manager will be available:
python rserve_manager.py
We can access this manager from the app during requests using a simple function. This assumes you've got a value for "user_id" in the session (which is what Flask-Login would do, for example). This ends up making the rserve connection unique per user, not per session.
from multiprocessing.managers import BaseManager
from flask import g, session

def get_rserve():
    if not hasattr(g, 'rserve'):
        manager = BaseManager(('', 37844), b'password')
        manager.register('get_connection')
        manager.connect()
        g.rserve = manager.get_connection(session['user_id'])
    return g.rserve
Access it inside a view:
result = get_rserve().eval('3 + 5')
This should get you started, although there's plenty that can be improved, such as not hard-coding the address and password, and not throwing away the connections to the manager. This was written with Python 3, but should work with Python 2.
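On the hard-coded address and password, one option is to read them from the Flask config instead; a minimal sketch, where RSERVE_MANAGER_ADDRESS and RSERVE_MANAGER_AUTHKEY are key names I've made up for illustration:

from multiprocessing.managers import BaseManager
from flask import current_app, g, session

def get_rserve():
    if not hasattr(g, 'rserve'):
        manager = BaseManager(
            current_app.config['RSERVE_MANAGER_ADDRESS'],  # e.g. ('', 37844)
            current_app.config['RSERVE_MANAGER_AUTHKEY'],  # e.g. b'password'
        )
        manager.register('get_connection')
        manager.connect()
        g.rserve = manager.get_connection(session['user_id'])
    return g.rserve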
If you look at the TensorBoard dashboard for the cifar10 demo, it shows data for multiple runs. I am having trouble finding a good example showing how to set up the graph to output data in this fashion. I am currently doing something similar, but it seems to be combining data from separate runs, and whenever a new run starts I see this warning on the console:
WARNING:root:Found more than one graph event per run.Overwritting the graph with the newest event
The solution turned out to be simple (and probably a bit obvious), but I'll answer anyway. The writer is instantiated like this:
writer = tf.train.SummaryWriter(FLAGS.log_dir, sess.graph_def)
The events for the current run are written to the specified directory. Instead of having a fixed value for the logdir parameter, just set a variable that gets updated for each run and use that as the name of a sub-directory inside the log directory:
writer = tf.train.SummaryWriter('%s/%s' % (FLAGS.log_dir, run_var), sess.graph_def)
Then just point TensorBoard at the root log_dir location via the --logdir parameter when starting it.
As mentioned in the documentation, you can specify multiple log directories when running TensorBoard. Alternatively, you can create multiple run subfolders in the log directory to visualize different plots in the same graph.
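For reference, the commands might look like this (paths are placeholders; the comma-separated name:path form is the run-labelling syntax supported by older TensorBoard versions):

tensorboard --logdir=/path/to/log_dir
tensorboard --logdir=run1:/path/to/log_dir/run1,run2:/path/to/log_dir/run2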
I'm creating a web-based interface for a number of different command-line executables, and am using CherryPy behind Apache (using mod_rewrite). I'm very new to this and am having difficulty getting things configured properly. On my development machine everything works reasonably well, but after installing the code on a second machine nothing works properly.
The basic workflow for the applications is: 1. upload a dataset, 2. process the data (using python with some calls to executables using subprocess.call), 3. display the results on the web page.
After uploading and processing one dataset, every time I attempt to process a second dataset the system stops responding. I'm not seeing any output in the terminal from the CherryPy process, nor anything in the site log indicating that an error occurred.
I'm starting cherrypy with the following conf file:
[global]
environment: 'production'
log.error_file: 'logs/site.log'
log.screen: True
tools.sessions.on: True
tools.session.storage_type: "file"
tools.session.storage_path: "sessions/"
tools.sessions.timeout: 60
tools.auth.on: True
tools.caching.on: False
server.socket_host: '0.0.0.0'
server.max_request_body_size: 0
server.socket_timeout: 60
server.thread_pool: 20
server.socket_queue_size: 10
engine.autoreload.on: True
My __init__.py file:
import cherrypy
import os
import string
from os.path import exists, join
from os import pathsep
from string import split
from mako.template import Template
from mako.lookup import TemplateLookup
from auth import AuthController, require, member_of, name_is
from twopoint import TwoPoint
current_dir = os.path.dirname(os.path.abspath(__file__))
lookup = TemplateLookup(directories=[current_dir + '/templates'])
def findInSubdirectory(filename, subdirectory=''):
    if subdirectory:
        path = subdirectory
    else:
        path = os.getcwd()
    for root, dirs, names in os.walk(path):
        if filename in names:
            return os.path.join(root, filename)
    return None
class Root:
    @cherrypy.expose
    @require()
    def index(self):
        tmpl = lookup.get_template("main.html")
        return tmpl.render(usr=WebUtils.getUserName(), source="")
if __name__ == '__main__':
    conf_path = os.path.dirname(os.path.abspath(__file__))
    conf_path = os.path.join(conf_path, "prod.conf")
    cherrypy.config.update(conf_path)
    cherrypy.config.update({'server.socket_host': '127.0.0.1',
                            'server.socket_port': 8080})

    def nocache():
        cherrypy.response.headers['Cache-Control'] = 'no-cache,no-store,must-revalidate'
        cherrypy.response.headers['Pragma'] = 'no-cache'
        cherrypy.response.headers['Expires'] = '0'

    cherrypy.tools.nocache = cherrypy.Tool('before_finalize', nocache)
    cherrypy.config.update({'tools.nocache.on': 'True'})

    cherrypy.tree.mount(Root(), '/')
    cherrypy.tree.mount(TwoPoint(), '/twopoint')
    cherrypy.engine.start()
    cherrypy.engine.block()
For one example where this occurs, I've got the following JavaScript function that calls my Python code:
function compTwoPoint(dataset, orig){
    // call python code to generate images
    $.post("/twopoint/compTwoPoint/"+dataset,
        function(result){
            res = jQuery.parseJSON(result);
            if(res.success==true){
                showTwoPoint(res.path, orig);
            }
            else{
                alert(res.exception);
                $('#display_loading').html("");
            }
        });
}
This calls the Python code:
def twopoint(in_matrix):
    """proprietary code, can't share"""

def twopoint_file(in_file_name, out_file_name):
    k = imread(in_file_name)
    figure()
    imshow(twopoint(k))
    colorbar()
    savefig(out_file_name, bbox_inches="tight")
    close()

class TwoPoint:
    @cherrypy.expose
    def compTwoPoint(self, dataset):
        try:
            fnames = WebUtils.dataFileNames(dataset)
            twopoint_file(fnames['filepath'],
                          os.path.join(fnames['savebase'], "twopt.png"))
            return encoder.iterencode({"success": True})
These functions work together to give the expected result. The problem is that after processing one input file, I am unable to process a second file. I don't seem to get a response from the server.
On the machine where things are working, I'm running Python 2.7.6 and CherryPy 3.2.3. On the second machine, I have Python 2.7.7 and CherryPy 3.3.0. While this may explain the difference in behavior, I'd like to find a way to make my code portable enough to overcome the difference in versions (going from older to newer).
I'm not sure what the problem is, or even what to search for. I would appreciate any guidance or help you can offer.
(edit: Digging a bit more, I discovered something is happening with matplotlib. If I put print statements before and after the figure() command in twopoint_file, only the first one prints. Calling this function directly from a Python interpreter (removing CherryPy from the equation) gives the following error:
can't invoke "event" command: application has been destroyed while executing "event generate $w{{ThemeChanged}}"
procedure "ttk::ThemeChanged" line 6 invoked from within "ttk::ThemeChanged"
end edit)
I don't understand what this error means, and haven't had much luck searching.
Old question, but I ran into the same problem and fixed it by changing the backend in Matplotlib:
import matplotlib
matplotlib.use("qt4agg")
I'm new to Twisted and have one question: how can I organize a persistent connection in Twisted? I have a queue and check it every second; if it contains anything, I send it to the client. I can't find anything better than calling dataReceived every second.
Here is the code of Protocol implementation:
class SyncProtocol(protocol.Protocol):
    # ... some code here

    def dataReceived(self, data):
        if self.orders_queue.has_new_orders():
            for order in self.orders_queue:
                self.transport.write(str(order))
        reactor.callLater(1, self.dataReceived, data)  # 1 second delay
It works the way I need, but I'm sure it is a very bad solution. How can I do this differently (flexibly and correctly)? Thanks.
P.S. - the main idea and algorithm:
1. The client connects to the server and waits.
2. The server checks for updates and pushes data to the client if anything changes.
3. The client does some operations and then waits for more data.
Without knowing how the snippet you provided links into your internet.XXXServer or reactor.listenXXX (or XXXXEndpoint) calls, it's hard to make heads or tails of it, but...
First off, in normal use, a Twisted protocol.Protocol's dataReceived is only called by the framework itself. It is linked to a client or server connection, directly or via a factory, and is called automatically as data comes in on the given connection. (The vast majority of Twisted protocols and interfaces, if not all, are interrupt-based, not polling/callLater; that's part of what makes Twisted so CPU efficient.)
So if the code you've shown is actually linked into Twisted via a Server or listen or Endpoint to your clients, then I think you will find very bad things happen if your clients ever send data (because Twisted will call dataReceived for that, which, among other problems, would add extra reactor.callLater callbacks, and all sorts of chaos would ensue).
If instead the code isn't linked into the Twisted connection framework, then you're attempting to reuse Twisted classes in a space they aren't designed for (I guess this seems unlikely, because I don't know how non-connection code would learn of a transport unless you're setting it manually).
The way I've been building models like this is to make a completely separate class for the polling-based I/O, but after I instantiate it, I push my client-list (server) factory into the polling instance (something like mypollingthing.servfact = myserverfactory), thereby giving my polling logic a way to call into the clients' .write (or, more likely, a def I built to abstract things to the correct level for my polling logic).
I tend to take the examples in Krondo's Twisted Introduction as one of the canonical examples of how to do Twisted (other than the Twisted Matrix docs themselves), and the example in part 6, under "Client 3.0", PoetryClientFactory has an __init__ that sets a callback in the factory.
If I try to blend that with the Twisted Matrix chat example and a few other things, I get:
(You'll want to change sendToAll to whatever your self.orders_queue.has_new_orders() is about)
#!/usr/bin/python
from twisted.internet import task
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory

class PollingIOThingy(object):
    def __init__(self):
        self.sendingcallback = None  # Note I'm pushing sendToAll into here in main
        self.iotries = 0

    def pollingtry(self):
        self.iotries += 1
        print "Polling runs: " + str(self.iotries)
        if self.sendingcallback:
            self.sendingcallback("Polling runs: " + str(self.iotries) + "\n")

class MyClientConnections(Protocol):
    def connectionMade(self):
        print "Got new client!"
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        print "Lost a client!"
        self.factory.clients.remove(self)

class MyServerFactory(ServerFactory):
    protocol = MyClientConnections

    def __init__(self):
        self.clients = []

    def sendToAll(self, message):
        for c in self.clients:
            c.transport.write(message)

def main():
    client_connection_factory = MyServerFactory()
    polling_stuff = PollingIOThingy()

    # the following line is what this example is all about:
    polling_stuff.sendingcallback = client_connection_factory.sendToAll
    # push the client connections' send def into my polling class

    # if you want to run something every second (instead of 1 second after
    # the end of your last code run, which could vary) do:
    l = task.LoopingCall(polling_stuff.pollingtry)
    l.start(1.0)
    # from: https://twistedmatrix.com/documents/12.3.0/core/howto/time.html

    reactor.listenTCP(5000, client_connection_factory)
    reactor.run()

if __name__ == '__main__':
    main()
To be fair, it might be better to inform PollingIOThingy of the callback by passing it as an arg to its __init__ (that is what is shown in Krondo's docs). For some reason I tend to miss connections like this when I read code and find class-cheating easier to see, but that may just be my personal brain damage.
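A minimal sketch of that __init__-injection variant (just a rearrangement of the example above, not taken verbatim from Krondo's text):

class PollingIOThingy(object):
    def __init__(self, sendingcallback=None):
        # take the send callback up front instead of patching it on afterwards
        self.sendingcallback = sendingcallback
        self.iotries = 0

and in main():

polling_stuff = PollingIOThingy(client_connection_factory.sendToAll)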
I'm trying to organize a large number of CloudWatch alarms for maintainability, and the web console grays out the name field on an edit. Is there another method (preferably something scriptable) for updating the name of CloudWatch alarms? I would prefer a solution that does not require any programming beyond simple executable scripts.
Here's a script we use to do this for the time being:
import sys

import boto

def rename_alarm(alarm_name, new_alarm_name):
    conn = boto.connect_cloudwatch()

    def get_alarm():
        alarms = conn.describe_alarms(alarm_names=[alarm_name])
        if not alarms:
            raise Exception("Alarm '%s' not found" % alarm_name)
        return alarms[0]

    alarm = get_alarm()

    # work around boto comparison serialization issue
    # https://github.com/boto/boto/issues/1311
    alarm.comparison = alarm._cmp_map.get(alarm.comparison)

    alarm.name = new_alarm_name
    conn.update_alarm(alarm)

    # update actually creates a new alarm because the name has changed, so
    # we have to manually delete the old one
    get_alarm().delete()

if __name__ == '__main__':
    alarm_name, new_alarm_name = sys.argv[1:3]
    rename_alarm(alarm_name, new_alarm_name)
It assumes you're either on an EC2 instance with an IAM role that allows these calls, or you've got a ~/.boto file with your credentials. It's easy enough to add yours manually.
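If you need to create the ~/.boto file by hand, it's an INI file along these lines (the values are placeholders):

[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY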
Unfortunately it looks like this is not currently possible.
I looked around for the same solution, but it seems neither the console nor the CloudWatch API provides that feature.
Note: we can, however, copy the existing alarm with the same parameters and save it under a new name.