I am struggling with a weird error. I'm new to Julia so maybe I don't understand something.
Consider the code:
using Plots
xyz = 1
anim = @animate for (j, iterated_variable) in enumerate(1:10)
    xyz = xyz
    plot(1, 1)
end
This yields the error "UndefVarError: xyz not defined", while
xyz = 1
anim = @animate for (j, iterated_variable) in enumerate(1:10)
    print(xyz)
    plot(1, 1)
end
will run and (oddly enough) print exactly:
111111
1111
where the digits 1, 2, 7, 8, 9, and 10 are printed in monospace and the others in the regular font.
Removing the @animate macro makes the code do what you expect it to do:
xyz = 1
for (j, iterated_variable) in enumerate(1:10)
    print(xyz)
    plot(1, 1)
end
and it will output
1111111111.
This error is quite frustrating, I must admit, especially since I really am starting to like Julia. Any idea what is happening?
Julia 1.6.3, VSCodium 1.66.1, Julia language support v1.6.17, notebooks
(edited the question, there was a code mistake)
If by notebooks you mean Pluto.jl notebooks and you run the first snippet all in one cell, moving "using Plots" to a different cell should fix the issue.
I have a spooky issue.
I have an AWS Lambda function which receives 3 values, puts them together, and sends them off to Slack with a simple POST request.
import json

def handler(event, context):
    result = []
    try:
        event_body = json.loads(event["body"])
        rows = event_body["data"]
        for row in rows:
            x, y, z = row[1:]
            result.append(f"| {x:<30} | {y:<20} | {z.strip()}|\n")
        send_slack_message(slack_url, '\n'.join(result))
This should work, right?
Except in Slack the rows look like this, where the output is obviously not aligned.
Hmm... I must have done something wrong.
Except... if I copy and paste it from Slack to here, it is perfectly spaced/aligned/justified:
XR45898 114 (140.1%)
DATABASE_BYTES(GB) 0 (860.6%)
Evidently Slack is misinterpreting something, because Stack Overflow is able to interpret the markdown perfectly. Why is this, and how do I fix it?
EDIT:
I've also tried "x = x.ljust(40)" instead of the ":<40" formatting, but that produces the exact same result.
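That is expected: for plain strings, the "<" format spec and str.ljust build exactly the same space-padded text, so any difference must come from how Slack renders the message, not from how the strings are built. A minimal check, using a value from the pasted output:

x = "XR45898"
padded_format = f"{x:<30}"  # left-align pads with spaces to width 30
padded_ljust = x.ljust(30)  # str.ljust does exactly the same padding
assert padded_format == padded_ljust

Since the two strings are identical, swapping one method for the other cannot change anything; space-based alignment like this only lines up when the text is displayed in a monospace font.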
I know this topic has been discussed a lot, and I am so sorry that I still can't find the solution, even though the difference between a view and a copy is easy to understand (in other languages).
import pandas_datareader as pdr

def hole_aktienkurse_und_berechne_hist_einstandspreis(index, start_date, end_date):
    df_history = pdr.get_data_yahoo(symbols=index, start=start_date, end=end_date)
    df_history['HistEK'] = df_history['Adj Close']
    df_only_trd_index = df_group_trade.loc[index].copy()
    for i_hst, r_hst in df_history.iterrows():
        df_bis = df_only_trd_index[(df_only_trd_index['DateClose'] <= i_hst) & (df_only_trd_index['OpenPos'] == 0)].copy()
        # here comes the part that causes the trouble:
        df_history.loc[i_hst]['HistEK'] = df_history.loc[i_hst]['Adj Close'] - df_bis['Total'].sum() / 100.0
    return df_history
I think I tried nearly everything, but I don't get it. Python is not easy when it comes to this topic.
When you have to specify both row index and column in .loc, you have to put them together in a single call; otherwise the annoying warning about views appears.
df_history.loc[i_hst, 'HistEK'] = df_history.loc[i_hst, 'Adj Close'] - df_bis['Total'].sum()/100.0
Look at the examples here.
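To make the difference concrete, here is a minimal sketch (with a made-up two-row frame) of chained indexing versus a single .loc call:

import pandas as pd

df = pd.DataFrame({'Adj Close': [100.0, 101.0], 'HistEK': [0.0, 0.0]})

# Chained indexing: df.loc[0] builds an intermediate object first, so the
# assignment may land on a temporary copy and never reach df
# (this is the pattern behind SettingWithCopyWarning).
df.loc[0]['HistEK'] = 99.0

# Row and column together in a single .loc call: writes directly into df.
df.loc[0, 'HistEK'] = 99.0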
I was wondering today how to find a specific value on a plot and draw the right line to go with it. I used to do that in an old chart library, and I suspect this functionality exists, but I don't know how to find it.
The result should look like this: https://miro.medium.com/max/1070/1*Ckhi9soE9Lx2lIf9tPVLMQ.png
To provide some context, I'm doing a PCA over my data, and I would like to point out some thresholds at 97.5, 99 and 99.5% of cumulative explained variance.
Have a great day!
EDIT: As solved by ImportanceOfBeingErnest, here is the code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

whole_pca = PCA().fit(np.array(inputs['Scale'].tolist()))
cumul = np.cumsum(np.round(whole_pca.explained_variance_ratio_, decimals=3) * 100)
# index of the first component at which each threshold is crossed
over_95 = np.argmax(cumul > 95)
over_99 = np.argmax(cumul > 99)
over_995 = np.argmax(cumul > 99.5)
plt.plot(cumul)
# L-shaped guide lines: horizontal from the y-axis, then vertical down to the x-axis
plt.plot([0, over_95, over_95], [95, 95, 0])
plt.plot([0, over_99, over_99], [99, 99, 0])
plt.plot([0, over_995, over_995], [99.5, 99.5, 0])
plt.xlim(left=0)
plt.ylim(bottom=80)
plt.ylabel('% Variance Explained')
plt.xlabel('# of Features')
plt.title('PCA Analysis')
Resulting in the desired plot.
Thank you!
I'm following a tutorial on NLP but have encountered a KeyError when trying to group my raw data into good and bad reviews. Here is the tutorial link: https://towardsdatascience.com/detecting-bad-customer-reviews-with-nlp-d8b36134dc7e
#reviews.csv
I am so angry about the service
Nothing was wrong, all good
The bedroom was dirty
The food was great
#nlp.py
import pandas as pd

# read data
reviews_df = pd.read_csv("reviews.csv")
# append the positive and negative text reviews
reviews_df["review"] = reviews_df["Negative_Review"] + reviews_df["Positive_Review"]
reviews_df.columns
I'm seeing the following error:
File "pandas\_libs\hashtable_class_helper.pxi", line 1500, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'Negative_Review'
Why is this happening?
You're getting this error because your data is not structured the way the code expects.
When you do df['reviews'] = df['Positive_reviews'] + df['Negative_reviews'], you're actually concatenating the values of the Positive_reviews column with the Negative_reviews column (which does not currently exist) into the reviews column (which also does not exist).
Your CSV is nothing more than a plain-text file with one text in each row. Also, since you're working with text, remember to enclose every string in quotation marks ("), otherwise your commas will create fake columns.
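A minimal sketch of the comma problem, reading from an in-memory string instead of a file:

from io import StringIO
import pandas as pd

# unquoted comma: pandas splits the review into two fields
print(pd.read_csv(StringIO('Nothing was wrong, all good\n'), header=None).shape)  # (1, 2)
# quoted: the whole review stays in a single field
print(pd.read_csv(StringIO('"Nothing was wrong, all good"\n'), header=None).shape)  # (1, 1)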
With your approach, it seems that you'll still tag all your reviews manually (usually, if you're working with machine learning, you'd do this outside the code and load the labelled data into your machine learning script).
In order for your code to work, you want to do the following:
import pandas as pd

df = pd.read_csv('TestFileFolder/57886076.csv', names=['text'])
# fill with placeholder values
df['Positive_review'] = 0
df['Negative_review'] = 1
df.head()
Result:
                              text  Positive_review  Negative_review
0  I am so angry about the service                0                1
1      Nothing was wrong, all good                0                1
2            The bedroom was dirty                0                1
3               The food was great                0                1
However, I would recommend having a single column (is_review_positive) set to true or false. You can easily encode it later on.
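For instance, a minimal sketch of that single-column approach (the label values here are made up by hand):

import pandas as pd

df = pd.read_csv('reviews.csv', names=['text'])
# hand-made labels; in practice you would tag the reviews yourself
df['is_review_positive'] = [False, True, False, True]
# encode the boolean as 0/1 later, when a model needs integers
df['label'] = df['is_review_positive'].astype(int)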
I know other people have asked similar questions in the past, but I am still stuck on how to solve the problem and was hoping someone could offer some help. Using PsychoPy, I would like to present different images: specifically 16 emotional trials, 16 neutral trials, and 16 face trials. I would like to pseudo-randomize the loop such that there would not be more than 2 consecutive emotional trials. I created the experiment in Builder but compiled a script after reading through previous posts on pseudo-randomization.
I have read the previous posts that suggest creating randomized excel files and using those, but considering how many trials I have, I think that would be too many and was hoping for some help with coding. I have tried to implement and tweak some of the code that has been posted for my experiment, but to no avail.
Does anyone have any advice for my situation?
Thank you,
Rae
Here's an approach that will always converge very quickly, given that you have 16 of each type and only reject runs of more than two emotion trials. @brittUWaterloo's suggestion to generate trials offline is very good; this is what I do myself, typically. (I like to have a small number of random orders, do them forward for some subjects and backwards for others, and prescreen them to make sure there are no weird or unintended juxtapositions.) But the algorithm below is certainly safe enough to do within an experiment if you prefer.
This first example assumes that you can represent a given trial using a string, such as 'e' for an emotion trial, 'n' for neutral, 'f' for face. This would work with 'emo', 'neut', 'face' as well, not just single letters; just change 'eee' to 'emoemoemo' in the code:
import random

trials = ['e'] * 16 + ['n'] * 16 + ['f'] * 16
# keep reshuffling until there is no run of three emotion trials in a row
while 'eee' in ''.join(trials):
    random.shuffle(trials)
print(trials)
Here's a more general way of doing it, where the trial codes are not restricted to be strings (although they are strings here for illustration):
import random

def run_of_3(trials, obj):
    # detect if there's a run of at least 3 objects 'obj'
    for i in range(2, len(trials)):
        if trials[i-2:i+1] == [obj] * 3:
            return True
    return False

tr = ['e'] * 16 + ['n'] * 16 + ['f'] * 16
while run_of_3(tr, 'e'):
    random.shuffle(tr)
print(tr)
Edit: To create a PsychoPy-style conditions file from the trial list, just write the values into a file like this:
with open('emo_neu_face.csv', 'w') as f:
    f.write('stim\n')       # this is a 'header' row
    f.write('\n'.join(tr))  # these are the values
Then you can use that as a conditions file in a Builder loop in the regular way. You could also open this in Excel, and so on.
This is not quite right, but hopefully it will give you some ideas. I think you could occasionally get caught in an infinite cycle in the elif branch if the last three items ended up the same, but you could add some sort of counter there. In any case, this shows a strategy you could adapt. Rather than put this in the experimental code, I would generate the trial sequence separately at the command line, and then save a successful output as a list in the experimental code to show to all participants, so you know things won't crash during an actual run.
import random as r

# making some dummy data
abc = ['f'] * 10 + ['e'] * 10 + ['d'] * 10

def f(l1, l2):
    # just looking at the output to see how it works; can delete
    print("l1 = " + str(l1))
    print(l2)
    if not l2:
        # checks if the second list is empty; if so, we are done
        out = list(l1)
    elif l1[-1] == l1[-2] and l1[-1] == l2[0]:
        # shuffling changes the list in place, so copy it before using it
        r.shuffle(l2)
        t = list(l2)
        f(l1, t)
    else:
        print("i am here")
        l1.append(l2.pop(0))
        f(l1, l2)
    return l1
You would then run it with something like newlist = f(abc[0:2], abc[2:-1]).
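The counter mentioned above could look like this; f_counted and max_tries are my own names, and the cutoff of 100 reshuffles is arbitrary:

import random as r

def f_counted(l1, l2, tries=0, max_tries=100):
    # same idea as f(), but give up instead of reshuffling forever
    if not l2:
        return l1
    if l1[-1] == l1[-2] and l1[-1] == l2[0]:
        if tries >= max_tries:
            raise RuntimeError("gave up: every remaining trial would extend the run")
        r.shuffle(l2)
        return f_counted(l1, list(l2), tries + 1, max_tries)
    l1.append(l2.pop(0))
    return f_counted(l1, l2)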