Debugging Kaggle AI code for detecting narcissistic and dark triad personality types (UnboundLocalError: local variable referenced before assignment) - error-handling

I am testing an AI notebook on the Kaggle site (its link), and I get the error you can see below, so I would appreciate some guidance on fixing it:
The error content is:
UnboundLocalError: local variable 'log' referenced before assignment
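For context, this error usually means that log is only assigned on some code path (for example inside an if branch or a try block) before it is read, or that an assignment later in the function makes Python treat log as a local name, so the earlier read fails. A minimal sketch of the pattern and one common fix (hypothetical code, not the actual Kaggle notebook):

def build_features(tweets, verbose=False):
    if verbose:
        log = []                   # 'log' is only created on this path
    for t in tweets:
        log.append(t.lower())      # UnboundLocalError when verbose is False
    return log

def build_features_fixed(tweets, verbose=False):
    log = []                       # always assign before use
    for t in tweets:
        log.append(t.lower())
    return log

If log is instead meant to be a module-level variable, the same error appears when the function assigns to log somewhere without declaring global log.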
This code could be used in a bigger project to detect the dark triad in tweets and other English content (like this project):
Paper: Studying the Dark Triad of Personality through Twitter Behavior, October 2016, DOI: 10.1145/2983323.2983822
Thanks.

Related

Adding globally defined objects for Intellisense and linting to Monaco Editor in javascript/typescript

I have a working Monaco editor in a React app that correctly underlines JavaScript errors.
I am trying to register an object called data that I have defined as a JSON schema, so that:
Typing the letter 'd' in the code editor would prompt intellisense to suggest data
Intellisense would correctly suggest all the properties of the data object, and the properties of all of its child properties according to the JSON schema
The editor does not show an undefined variable error when data is used.
I have tried to use Monarch to register data as a keyword, but I am only managing to handle the coloring and error handling, not the property suggestions.
I also looked at registerDocumentSymbolProvider, registerCompletionItemProvider and registerDeclarationProvider (I think the last one will help with intellisense but not with getting rid of the undefined error), but I don't understand what the purpose of each is, and the documentation only describes their usage with sentences like "registers a document symbol provider", which I haven't so far found very helpful.
Would appreciate any help with this. I am excited to use the more advanced features of this great tool but the steep learning curve and my inability to find information has left me pretty disheartened so far!

Tensorflow2 Object detection working like classification

After I built my custom object detector with ssd_mobilenet_v2_fpnlite_320x320, my results look like classification, not object detection.
Even after I changed classification -> detection in the ssd_mobilenet_v2_fpnlite_320x320 config file, it still gives me the results shown in the pictures I uploaded. I have no idea what is wrong with my object detection.
Also, sometimes after training, the class name and percentage do not appear on the detected image. For example, the 4th picture I uploaded does not show the class name and percentage.
What is really weird is that when I use 'inference graph/saved_model' to detect an image, it works like the first 4 pictures, but when I use ssd_mobile..tpu/saved_model, it works fine as in the 5th picture.
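Without the uploaded pictures it is hard to say more, but a minimal sketch of how one can check what an exported saved_model actually returns (assuming the standard TF2 Object Detection API export layout; the model path and the test image are placeholders, not the asker's actual files):

import numpy as np
import tensorflow as tf

# Load the exported detection model (hypothetical path)
detect_fn = tf.saved_model.load('exported_model/saved_model')

# Replace with a real HxWx3 uint8 test image
image = np.zeros((320, 320, 3), dtype=np.uint8)
input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]
detections = detect_fn(input_tensor)

# A detection model should return per-box outputs, not a single label
print(detections['detection_boxes'].shape)     # (1, N, 4) normalized box corners
print(detections['detection_classes'][0][:5])  # class ids for the top boxes
print(detections['detection_scores'][0][:5])   # confidence scores for the top boxes

If the exported model only yields a single class/score per image here, the export or config is still set up for classification rather than detection.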

ArcGIS for JavaScript 4.21 Emoji in textSymbol

Hello, I'm using ArcGIS for JavaScript 4.21.
I was able to create a Graphic with a polygon. I want to set a title on the polygon, so I have created another object above it. That object has type "text"; it is a SimpleText object. It works fine with plain text made of letters, but if someone uses an emoji in the text it throws this exception:
[esri.views.2d.engine.webgl.TextureManager] k {name: 'mapview-invalid-resource', details: undefined, message: "Couldn't find font josefin-slab-regular. Falling back to Arial Unicode MS Regular"}
I think it is because of Unicode, but in the previous version, 3.32, I was able to use emojis. I can't find a solution to deal with it, so I want to ask if anyone has encountered this problem. Thank you.
Here is my example in CodePen.
The problem is on row 144. If you change text: "👩" to text: "Helo" it works.
Tomáš, I don't think that will work.
In 4x, the supported fonts to use with TextSymbol on graphics are limited to the list at https://developers.arcgis.com/javascript/latest/labeling/#fonts-for-featurelayer-csvlayer-and-streamlayer.
In 3x, it worked quite differently, so the font support for TextSymbol depended on that specific machine's general browser/OS font support.

In gym, how should we implement the environment's render method so that Monitor's produced videos are not black?

How are we supposed to implement the environment's render method in gym, so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, in which circumstances would those videos be black?
To give more context, I was trying to use the gym's wrapper Monitor. This wrapper writes (every once in a while, how often exactly?) to a folder some .json files and an .mp4 file, which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method. In my specific case, I am using a simple custom environment (i.e. a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (because I am also printing it with matplotlib's imshow). So, maybe Monitor doesn't produce those videos from the render method's return value. So, how exactly does Monitor produce those videos?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question depends also on the type of environment, but I would like to have some guidance)
This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}
    ...
You can change the desired framerate of course, I don't know if every framerate will work though.
Then I changed my render method. Depending on the input parameter mode, if it is rgb_array it returns a three-dimensional NumPy array, which is just a 'numpyed' PIL.Image() (np.asarray(im), with im being a PIL.Image()).
If mode is human, just print the image or do something to show your environment in the way you like it.
As an example, my code is
# module-level imports used by render
import numpy as np
import matplotlib.pyplot as plt

def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix.
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so I can't help with those other ways.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link that helped me for that.
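For completeness, this is roughly how I wrap the environment with the Monitor (a sketch assuming gym 0.18.0, where the wrapper lives in gym.wrappers; by default it records episodes on a cubic schedule, and video_callable lets you override that):

import gym
from gym.wrappers import Monitor

env = gym.make('CartPole-v1')  # or your own custom env

# Record a video every 10th episode instead of the default schedule
env = Monitor(env, './videos',
              video_callable=lambda episode_id: episode_id % 10 == 0,
              force=True)

obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()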
As a final side note, video saving with a gym Monitor wrapper does not work because of a mis-indentation (as of today, 30/12/20, gym version 0.18.0); if you want to solve it, do as this guy did.
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)

Adding logs to a WellSectionWindow

How would one add logs to a well-section window programmatically? For the following well logs within my Petrel input tree, and using the code below, only the "Sonic" log is displayed in the WellSectionWindow:
Well
->WellLogs
- Density
- Sonic
- Gamma ray
// Cast the incoming object to a Borehole and show it in a new well section window
Borehole borehole = arguments.object as Borehole;
WellSectionWindow wsw = WellSectionWindow.CreateWindow();
wsw.ShowObject(borehole);
Within Petrel (2013.1), I can navigate to the Log element -> (right-click) -> "Add to template" -> "Vertical" -> "In new track". I would like to know whether something similar can be achieved using the Ocean APIs, and to be pointed towards the relevant documentation. Also, I'd like to know why the "Sonic" log was displayed within the WellSectionWindow in Petrel and how it got prioritized over the Density or Gamma ray logs.
The WellLogVersion of a WellLog corresponds to the global well log in the input tree.
If you want to display the log, you can call wsw.ShowObject(wellLogVersion) and it will be displayed.
If you want to control the order of the logs being displayed, you'll need to deal with the format template nodes of the well section templates. The details can be found in the Ocean dev guide, Volume 9, Chapter 3.