Python logging application - python-logging

When I create a Python application and want it to produce logs, do I have to create a logger module for every module, or should I create a single logger module? Is there a definitive logging guide ("masterbook") for Python?
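For illustration, here is a minimal sketch of the usual standard-library pattern: every module asks for its own named logger with logging.getLogger(__name__) and attaches no handlers itself, while the application entry point configures logging once. The module names (mylib.py, main.py) and the format string are placeholder assumptions, not part of the question.

# mylib.py -- any module in the application (hypothetical name)
import logging

logger = logging.getLogger(__name__)  # one named logger per module, no handlers attached here

def do_work():
    logger.info("doing work")

# main.py -- application entry point, configures logging exactly once
import logging
import mylib

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(levelname)s: %(message)s")
mylib.do_work()

With this layout there is no need for a separate logger module per module; the single basicConfig call at startup applies to every named logger in the application.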

Related

How to configure an Azure Functions project in IntelliJ/PyCharm run/debug configurations on Mac

How can I configure an Azure Functions project in the IntelliJ/PyCharm run/debug configurations on Mac? I've tried to set it up on my own but it doesn't work.
I would like to replace the shell command func start with a run configuration.
The image below is from PyCharm.
UPDATE
I've added the path to the Azure CLI and imported my app settings.
I'm trying to configure the run configuration, but it asks me to select a module and there is no module in the dropdown (see pic).
UPDATE 2:
Here is what the azure-tools-for-intellij team answered me:
They don't support running pure Python functions yet.
I think you shouldn't use a raw shell script with IntelliJ/PyCharm; instead, use the Azure Toolkit for IntelliJ and run/debug your functions as described in this guide.
Also, once you install the Azure Toolkit for IntelliJ, you will be able to create a run/debug configuration from a predefined Azure Functions template.
Just an example:
I found a way to debug in IntelliJ IDEA/PyCharm.
Add these lines at the top of your file/module:
import pydevd_pycharm
pydevd_pycharm.settrace('127.0.0.1', port=9091, stdoutToServer=True, stderrToServer=True)
Set up a Python Debug Server run/debug configuration with that host and port.
Run your Azure Functions as usual (func host start) and press the debug button.
The way I do it in PyCharm is to define a shell script configuration (Edit Configurations > Shell Script) and set:
Execute: Script text
Script text: func start
Working directory: the project directory where host.json etc. is located
Environment variables: leave empty
Execute in the terminal: checked
Run this configuration, which will start the test server in the terminal. Then go to Run > Attach to Process... and select the process (usually it's the one without any path after the number).

nifi pyspark - "no module named boto3"

I'm trying to run a PySpark job I created that downloads and uploads data from S3 using the boto3 library. The job runs fine in PyCharm, but when I try to run it in NiFi using this template https://github.com/Teradata/kylo/blob/master/samples/templates/nifi-1.0/template-starter-pyspark.xml, the ExecutePySpark processor errors with "No module named boto3".
I made sure boto3 is installed in the conda environment that is active.
Any ideas? I'm sure I'm missing something obvious.
Here is a picture of the NiFi Spark processor.
Thanks,
tim
The Python environment that PySpark runs in is configured via the PYSPARK_PYTHON environment variable.
Go to Spark installation directory
Go to conf
Edit spark-env.sh
Add this line: export PYSPARK_PYTHON=PATH_TO_YOUR_CONDA_ENV/bin/python (i.e. the python executable inside your conda environment)
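If editing spark-env.sh is not convenient (for example when the job is kicked off from NiFi), the interpreter can also be pointed at the conda environment from the driver script itself. The snippet below is a minimal sketch and not part of the original answer: the conda path is a hypothetical placeholder, it assumes the same path exists on the worker nodes, and the map call is only a sanity check that the executors can import boto3.

import os
from pyspark import SparkConf, SparkContext

# Hypothetical path -- point this at the python executable inside the conda env that has boto3
os.environ["PYSPARK_PYTHON"] = "/opt/conda/envs/myenv/bin/python"

# PYSPARK_PYTHON must be set before the SparkContext is created
sc = SparkContext(conf=SparkConf().setAppName("s3-boto3-check"))

def probe(_):
    import boto3  # raises ImportError on the executor if the wrong interpreter is used
    return boto3.__version__

print(sc.parallelize([0]).map(probe).collect())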

Backand platform: how to debug server platform code locally

The documentation mentions that there is a way to develop an entire project locally and then upload it to the Back& platform for production. How can this be done?
Is there sample code somewhere?
Thanks
The server side in Back& can be built with cloud JavaScript or with Node.js.
Using the Backand CLI and Node.js, you can develop any project in Node.js and upload it.
The steps are simple:
Create a new action with "Server side node.js code"
Run the action.init command (copy-paste it from the action page)
Use the backand shell project to build the Node.js project
Run the action.deploy command to upload and run the code on the server
To debug Node.js locally:
First make sure you have an action defined on the Back& site under the section objects/[objectName]/action.
Example:
project name: MyApp
with an object called Users
with the action "files" (for uploading image files)
So, in a local terminal:
1. Go to the project's root folder.
2. Type backand login and follow the instructions (email & password).
3. Type backand action init and follow the instructions:
project name: MyApp,
object name: Users,
action name: files
To be continued...

Apache Synapse custom Mediator to execute system command

I created a custom mediator in Apache Synapse to invoke system commands such as java -jar, executing a shell script, etc. But I've had no luck with this. I tried this code:
Process process = runtime.exec("touch /opt/FILE.txt");
and
Process p = runtime.exec(new String[]{"java","-jar","/opt/MyMediator.jar"});
Both of them were runnable when I created the jar and ran it manually, but when I deploy it to the Synapse lib directory, it won't work. Can anybody tell me how to do this properly?
Thanks,

WSO2 Hive analyser script result set

I am using WSO2 ESB 4.5.1 and WSO2 BAM 2.0.0. In my Hive script I am attempting to get a single value and assign it to a variable so I can later use it in SQL statements. I can use a variable via hiveconf, but I'm not sure how to assign a single value from the result set to it.
any ideas?
Thanks.
You can extend AbstractHiveAnalyzer and write your own class which executes the query and sets the hive conf value, similar to this summarizer. As you can see there, the execute() method should be implemented, and it will be called by BAM. In it you can run your preferred query and assign the hive conf with setProperty("your_hive_conf", yourResult.string());.
You can build your Java application as a plain '.jar' file or as an OSGi bundle. If you packaged it as just a '.jar' file, place the jar in $BAM_HOME/repository/components/lib. If you packaged it as an OSGi bundle, place the file in the $BAM_HOME/repository/components/dropins folder. Then restart the BAM server.
Finally, in the Hive script that you add in BAM, include your extended class as 'class your.package.name.HiveAnalyzerImpl;', so that BAM runs the execute() method you implemented and your hive conf gets set. The value you set can then be used in the Hive script as ${hiveconf:your_hive_conf}.
Hope this helps.