I am typing this code in Google Colab:
def mytimer():
    print("Python Program\n")

my_timer = threading.Timer(0.8, mytimer)
my_timer.start()
print("Bye\n")
In some cases I get the output as
Bye
At other times, I get
Bye
Python Program
Why is this difference occurring? Should I add any other line of code, or is there anything I should be careful about?
You are missing a few things here:
import threading

def mytimer():
    print("Python Program\n")

my_timer = threading.Timer(0.2, mytimer)  # change it to 0.8 and "Python Program" will not print
my_timer.start()
print("Bye\n")
threading.Timer - as per the docs, it calls a function after a specified number of seconds, on a separate thread. The main thread therefore prints "Bye" immediately. With a delay of 0.2 the timer usually fires while the cell is still capturing output, so both lines appear; with 0.8 the cell often finishes first and "Python Program" does not show up. In other words, the difference is a race between the timer and the end of the cell.
You can refer to the threading.Timer documentation for details.
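If you want the two lines to appear in a deterministic order, one option (a minimal sketch) is to wait for the timer before the final print; Timer is a Thread subclass, so it supports join():

import threading

def mytimer():
    print("Python Program\n")

my_timer = threading.Timer(0.8, mytimer)
my_timer.start()
my_timer.join()  # block until the timer fires and mytimer finishes
print("Bye\n")

With the join() in place, "Python Program" always prints before "Bye", in Colab or in a plain script.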
I would expect that when a function is vectorized by np.vectorize, the total number of calls equals the input length. For example, if the input is a scalar, the pre-vectorized function should run only once. In a way, I expect behaviour similar to map(func, input_array).
However, running the example below shows that the vectorized function unnecessarily runs func multiple times when the input is just a scalar.
Does anyone know if I am using the method wrong? I have also opened a GitHub issue.
import numpy as np
import logging

logging.basicConfig(level=logging.DEBUG)

def func(x):
    logging.debug("Computation started")
    return x

func_v1 = np.vectorize(func)
func_v2 = np.vectorize(func_v1)

func_v1(1)  # logging shows func ran twice
func_v2(1)  # logging shows func ran four times
According to the numpy documentation, the additional run happens when otypes is not provided: np.vectorize calls the function once with the first input just to determine the output type.
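A sketch of the workaround: passing otypes up front removes the need for that trial call, so func runs exactly once for a scalar input:

import numpy as np
import logging

logging.basicConfig(level=logging.DEBUG)

def func(x):
    logging.debug("Computation started")
    return x

# With otypes given, np.vectorize skips the extra call it would
# otherwise make to discover the output dtype.
func_v = np.vectorize(func, otypes=[int])
func_v(1)  # logging shows "Computation started" once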
I am new to PyMC3 and Bayesian inference methods. I have a simple script that tries to infer the value of a decay constant (= 1) from artificial data generated using a truncated exponential distribution:
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import pymc3 as pm
import arviz as az

T = stats.truncexpon(b=10.)
t = T.rvs(1000)

# Bayesian inference
with pm.Model() as model:
    # Define priors
    lam = pm.Gamma('$\lambda$', alpha=1, beta=1)
    # Define likelihood
    time = pm.Exponential('time', lam=lam, observed=t)
    # Inference
    trace = pm.sample(20, start={'lam': 10.},
                      step=pm.Metropolis(), chains=1, cores=1,
                      progressbar=True)

az.plot_trace(trace)
plt.show()
This code produces a trace like below
I am really confused as to why the starting value of 10. is not accepted by the sampler. The trace above should start at 10. I am using Python 3.7 to run the code.
Thank you.
A few things are going on:
- When the sampler first starts, it has a tuning phase; samples drawn during this phase are discarded by default, but this can be controlled with the discard_tuned_samples argument.
- The keys in the start argument dictionary need to correspond to the name given to the RandomVariable ('$\lambda$'), not the Python variable (lam).
Incorporating those two, one can try
trace = pm.sample(20, start={'$\lambda$': 10.},
                  step=pm.Metropolis(), chains=1, cores=1,
                  discard_tuned_samples=False)
However, there is another wrinkle: the starting value isn't guaranteed to be emitted as a draw; it only shows up if the first proposal is rejected, which is down to chance.
Rigging the game by setting a random seed, though, we can get a glimpse:
trace = pm.sample(20, start={'$\lambda$': 10.},
                  step=pm.Metropolis(), chains=1, cores=1,
                  discard_tuned_samples=False, random_seed=1)
...
trace.get_values(varname='$\lambda$')[:10]
# array([10.        ,  5.42397358,  3.19841997,  1.09383329,  1.09383329,
#         1.09383329,  1.09383329,  1.09383329,  1.09383329,  1.09383329])
1) This is the code snippet that I am trying to run. The main idea is that I want the ipywidgets to stay interactive while the data is constantly being fetched from the data source and updated at a certain interval using a while loop. The objective is to plot the principal components interactively, changing the number of principal components (PCs) to consider via the @interact function.
It runs perfectly fine without the while loop, that is, when we do not auto-update the dataset. But when I include the while loop, the widgets are no longer interactive (that is, changing the number of PCs does nothing).
My feeling is that the while True loop blocks the ipywidget interaction from happening.
2) I also looked into threading, but I am unsure how the function wrapped with functools.partial (select_data) can be called using threading.Thread.
Any sort of help would be appreciated. Thanks.
def data_import_date(start_date, end_date):
    end_date1 = end_date.strftime('%Y-%m-%dT%H:%M:%S')
    start_date = pd.Timestamp(start_date)
    end_date = pd.Timestamp(end_date1)
    button = widgets.Button(description='Pull Data')
    button.on_click(functools.partial(select_data, rs_=[start_date, end_date]))
    vbox = widgets.VBox([button])
    display(vbox, out)

def select_data(b, rs_):
    # clear_output()
    start_date = rs_[0]
    end_date1 = rs_[1]
    print("Data pulling started")
    with out:
        clear_output()
        seeq_login()
        [item1, item2] = query_seeq_for_data()
        i = 0
        while True:
            end_date = end_date1.strftime('%Y-%m-%dT%H:%M:%S')
            print("Start & End date: ", start_date, end_date)
            if i == 0:
                [X_data, Y_data] = pull_data(item1, item2, start_date, end_date)
            else:
                [X_data_live, Y_data_live] = pull_data(item1, item2, start_date, end_date)
                X_data = X_data.append(X_data_live)
                Y_data = Y_data.append(Y_data_live)
            print("Data pulling completed.\nNow you're ready for your analysis")
            clear_output()
            [X_train, X_test, Y_train, Y_test] = train_test(X_data, Y_data)
            [Xp_train, components, explained_variance_ratio, _, _] = apply_PCA(X_train, X_test)
            plot_PC_variance(X_data, explained_variance_ratio)
            plot_PC(X_data, components)
            time.sleep(20)
            start_date = end_date
            end_date1 = end_date1 + datetime.timedelta(days=1)
            i += 1

a = interact(data_import_date,
             start_date=widgets.DatePicker(value=pd.to_datetime(start_date)),
             end_date=widgets.DatePicker(value=pd.to_datetime(end_date)))

def plot_PC(X_data, components):
    Np_comp = (1, len(X_data.columns), 1)

    @interact
    def principal_components(PC1=Np_comp, PC2=Np_comp):
        fig, ax = plt.subplots(1, 1, figsize=(10, 10))
        plt.figure(5)
        print(PC1, PC2)
        ax.set_xlabel("Principal Component {}".format(PC1), fontsize=14)
        ax.set_ylabel("Principal Component {}".format(PC2), fontsize=14)
        ax.set_title("Principal components {0} & {1}".format(PC1, PC2), fontsize=20)
        ax.scatter(components[:, PC1], components[:, PC2])
Here is a fairly rudimentary example that shows how you can run a loop whilst polling for widget changes, though it's not really my area of expertise.
Drag the slider whilst the values are printing and you should see the printed value change to reflect the current slider position.
import threading
import time

from IPython.display import display
import ipywidgets as widgets

float_val = widgets.FloatSlider()
display(float_val)

def work():
    total = 10
    for i in range(total):
        time.sleep(1)
        print(float_val.value)

thread = threading.Thread(target=work)
thread.start()
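Applying the same idea to the question's select_data: the long-running loop can be moved onto a daemon thread launched from the button callback, so the kernel stays free to service widget events. This is only a sketch; the data_loop body below is a hypothetical stand-in for the original pull/plot loop.

import threading
import functools
import time

import ipywidgets as widgets
from IPython.display import display

def data_loop(start_date, end_date):
    # Hypothetical stand-in for the original while-True body:
    # pull new data, redraw the plots, then sleep.
    while True:
        print("polling from", start_date, "to", end_date)
        time.sleep(20)

def select_data(b, rs_):
    # Launch the loop on a daemon thread so this callback returns
    # immediately and the widgets remain interactive.
    threading.Thread(target=data_loop, args=(rs_[0], rs_[1]), daemon=True).start()

button = widgets.Button(description='Pull Data')
button.on_click(functools.partial(select_data, rs_=['2020-01-01', '2020-01-02']))
display(button)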
I am using Python 2.7 on Windows 7 and I am currently trying to learn parallel processing.
I downloaded the multiprocessing 2.6.2.1 Python package and installed it using pip.
When I try to run the following very simple code, the program seems to get stuck; even after one hour it hasn't finished executing, despite the code being super simple.
What am I missing? Thank you very much.
from multiprocessing import Pool

def f(x):
    return x*x

array = [1, 2, 3, 4, 5]
p = Pool()
result = p.map(f, array)
p.close()
p.join()
print result
The issue here is the way multiprocessing works on Windows: think of it as Python opening a new instance and importing all the modules all over again. You'll want to use the if __name__ == '__main__' convention. The following works fine:
import multiprocessing

def f(x):
    return x * x

def main():
    p = multiprocessing.Pool(multiprocessing.cpu_count())
    result = p.imap(f, xrange(1, 6))
    print list(result)

if __name__ == '__main__':
    main()
I have changed a few other parts of the code too so you can see other ways to achieve the same thing, but ultimately you only need to stop the code from executing over and over as Python re-imports the module you are running.
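For comparison, the original snippet also works once the guard is added; a minimal sketch (still Python 2, like the rest of this thread, keeping p.map):

from multiprocessing import Pool

def f(x):
    return x * x

if __name__ == '__main__':
    array = [1, 2, 3, 4, 5]
    p = Pool()
    result = p.map(f, array)
    p.close()
    p.join()
    print result

On Windows, each worker process re-imports this module, and the guard keeps the Pool creation from running again in the children.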
I wrote a script that calls functions from QIIME to build a bunch of plots, among other things. Everything runs fine to completion, but matplotlib always emits the following warning for every plot it creates (super annoying):
/usr/local/lib/python2.7/dist-packages/matplotlib/pyplot.py:412: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (matplotlib.pyplot.figure) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam figure.max_num_figures).
max_open_warning, RuntimeWarning)
I found this page, which seems to explain how to fix the problem, but after I follow the directions, nothing changes:
import matplotlib as mpl
mpl.rcParams['figure.max_open_warning'] = 0
I asked matplotlib directly from Python which matplotlibrc file it reads, went into that file, and manually changed the 20 to 0. Still no change. In case the documentation was incorrect, I also changed it to 1000, and I am still getting the same warning messages.
I understand that this could be a problem for people running on machines with limited memory, but that isn't a problem in my case. How can I make this warning go away permanently?
Try setting it this way:
import matplotlib as mpl
mpl.rcParams.update({'figure.max_open_warning': 0})
Not sure exactly why this works, but it mirrors the way I have changed the font size in the past and seems to fix the warnings for me.
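For instance, the font-size change referred to above would follow the same pattern (the value 14 here is just an example):

import matplotlib as mpl

# Same rcParams.update pattern, applied to the font size
mpl.rcParams.update({'font.size': 14})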
Another way I just tried, and it worked:
import matplotlib as mpl
mpl.rc('figure', max_open_warning=0)
When using Seaborn, you can do it like this:
import seaborn as sns
sns.set_theme(rc={'figure.max_open_warning': 0})
Check out this article, which basically says to call plt.close(fig1) after you're done with fig1. That way you don't have too many figures floating around in memory.
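A minimal sketch of that pattern, closing each figure once it has been saved (the filenames are made up):

import matplotlib.pyplot as plt

for i in range(30):
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, i])
    fig.savefig("plot_{}.png".format(i))
    plt.close(fig)  # release the figure so pyplot stops tracking it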
In Matplotlib, figure.max_open_warning is a configuration parameter that determines the maximum number of figures that can be open before a warning is issued. By default its value is 20, which means that if you open more than 20 figures in a single Matplotlib session, you will see a warning message. You can change the value of this parameter via the matplotlib.rcParams dictionary. For example:
import matplotlib.pyplot as plt
plt.rcParams['figure.max_open_warning'] = 50
This will set the value of figure.max_open_warning to 50, so that you will see a warning message if you open more than 50 figures in a single Matplotlib session.