We are using Blender to open a .blend model, apply some Python code via the Python console, and create a .dae file.
Now we need to implement this functionality on a Linux server to serve .dae files to a browser on request.
That means we (conceptually) need to trigger Blender from the console, pass the .blend file and the Python script as arguments, and make Blender output the .dae file.
We are not Blender experts, so maybe you can tell us a) whether this is possible without starting the Blender GUI and doing it manually, and b) what options we have to achieve that functionality.
Blender is quite flexible. You can run it on a server without the GUI (in background mode) and also execute a Python script within Blender to manipulate the scene (e.g. export a .dae file):
./blender --background --python yourExportDAEScript.py
More command-line options are available in the manual.
yourExportDAEScript.py could manipulate the model and finally do something like:
bpy.ops.wm.collada_export(filepath="/DAE/EXPORT/PATH/file.dae")
More details are available in the Blender Python API documentation.
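For completeness, a minimal sketch of what yourExportDAEScript.py could contain (the paths are placeholders; alternatively, the .blend file can be passed on the command line before --python so Blender opens it before running the script):
import bpy

# Open the .blend model (skip this if the .blend file is passed on the command line)
bpy.ops.wm.open_mainfile(filepath="/path/to/model.blend")

# ... manipulate the scene here via bpy.data / bpy.ops as needed ...

# Export the scene to Collada
bpy.ops.wm.collada_export(filepath="/DAE/EXPORT/PATH/file.dae")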
Related
I am using a remote computer in order to run my program on its GPU. My program contains some code with TensorFlow functions, and for easier debugging with PyCharm I would like to connect via SSH with a remote interpreter to the computer with the GPU. This part can be done easily since PyCharm has this option, so I can connect there. However, TensorFlow is not loaded automatically, so I get an import error.
Note that in our institution we run module load cuda/10.0 and module load tensorflow/1.14.0 each time the computer is loaded. Now this is the tricky part: opening a remote terminal creates another session which is not related to the remote interpreter session, so it does not affect the remote interpreter's modules.
I know that module load generally configures environment variables; however, I am not sure how I can export those environment variables to the environment variables that PyCharm configures before a run.
Any help would be appreciated. Thanks in advance.
The workaround was relatively simple after all: first, I installed the EnvFile plugin, as explained here: https://stackoverflow.com/a/42708476/13236698
Then I created a .env file with a quick Python script: it extracted all environment variables and their values from os.environ, wrote them to a file in the format <env_variable>=<variable_value>, and saved the file with a .env extension. Then I loaded it into PyCharm, and voilà, all TensorFlow modules loaded fine.
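A minimal sketch of such a script (the output filename remote.env is just an example); run it in a session where the module load commands have already been executed, so that os.environ contains the variables you want to capture:
import os

# Dump every environment variable of the current session into a .env file
# in the <env_variable>=<variable_value> format expected by the EnvFile plugin.
with open("remote.env", "w") as f:
    for name, value in os.environ.items():
        f.write("{}={}\n".format(name, value))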
I automatically load a particular network when I start Cytoscape using a command script (-S flag). I'd like to also load a style file (.xml file) and apply it to the network. That is, the equivalent of:
File->Import->Styles from File...
Styles->Style Drop Down->Select new loaded style
Can this be done via any of the automation mechanisms?
The Cytoscape Automation wiki has instructions on how to find documentation on all the available commands:
https://github.com/cytoscape/cytoscape-automation/wiki/Trying-Automation
The following commands should help you perform your task:
vizmap/load file
vizmap/apply
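In a command script (the file passed with -S), the corresponding lines could look roughly like this; this is only a sketch, and the exact argument names (file=, styles=) as well as the style name should be checked against the command documentation linked above:
vizmap load file file="/path/to/styles.xml"
vizmap apply styles="My Style"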
I am trying to perform an optimization task using Isight, which requires running a Python script through Abaqus. However, when using the Abaqus Application component, some parameters cannot be selected. I am therefore using the Data Exchanger with an OS Command, but what is the command to launch a script through Abaqus using the OS Command component?
In the Windows Command line, I would normally type:
abaqus cae script=scriptname.py
However, with this line the Isight log reports that the "system cannot find the log specified".
In addition, in the OS Command window, under "find a program", when I search in the SIMULIA folder, only Isight is present and not CAE. I am using the same Isight and Abaqus version.
Does anyone have any advice regarding this problem?
Thank you in advance!
Try going into the 'OS Command' component and changing Type to 'Windows Batch'.
I've been using Flowhub.io to do my development on a Node.js device. Now that the GUI-based design is done, I'm ready to take it offline and run the code via the command line. How do I do this? I have the JSON file corresponding to the graph I created online, but I'm not sure how to use the noflo-nodejs module.
Could someone help me by showing me an example of how to load a graph using the noflo module, please? Thanks!
If you want to run an existing graph, you can use the --graph option.
noflo-nodejs --graph graphs/MyMainGraph.json
If you also want the process to exit when the network stops, you can pass --batch.
PS: I added this to the noflo-nodejs README.
What is the application used by Canopy when running a Python file?
This application opens in a new window when using matplotlib. See screenshot below.
Is it possible to use this application directly without canopy?
Matplotlib opens and displays a figure that has been rendered by the selected backend when you call show(). You can find out what backend is in use with:
matplotlib.get_backend()
and set the backend by updating the matplotlibrc file or with:
matplotlib.use('PS')
matplotlib.use() only has an effect if it is called before pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time.
Running the same Python program with the same backend in an environment other than Canopy will display the same figure.
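As an illustration, a minimal standalone script (assuming a Qt backend such as Qt4Agg is installed in the environment where you run it) would be:
import matplotlib
matplotlib.use('Qt4Agg')            # must be called before importing matplotlib.pyplot
import matplotlib.pyplot as plt

print(matplotlib.get_backend())     # confirm which backend is active
plt.plot([0, 1, 2], [0, 1, 4])
plt.show()                          # opens the figure window rendered by the backend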
The application that displays the plot is Python (in particular Canopy User Python), using the Matplotlib library with a Qt backend. To run this from outside Canopy:
1) Ensure that Canopy User Python is your default Python; the simplest way is to open a "Canopy Command Prompt Window" from the Start Menu, or see https://support.enthought.com/entries/23646538-Make-Canopy-User-Python-be-your-default-Python.
2) Run the following commands:
set ets_toolkit=qt4
python my_scripty.py