Can knowledge graphs deal with a sentence including a preposition or adjective? (WordNet)

I am wondering how suitable knowledge graphs such as the Google Knowledge Graph, WordNet, YAGO, or Freebase are for representing facts that include a preposition or adjective.
For example, "Obama has a daughter" can clearly be represented by node-and-link relations: "Obama" and "daughter" are nodes, and "has a" is a link.
However, I cannot find a way to represent a sentence with a preposition or adjective, despite googling several keyword combinations.
Suppose you have the fact "Obama has a white dog in the White House"; it seems impossible to represent with a graph structure. Obama's dog is white, but not all dogs are white. Also, Obama's dog is kept in the White House, but not all dogs are.
My first question is whether a knowledge graph can represent this kind of fact. My second question is how a knowledge graph can do this, if the answer to the first is yes.

You'd represent this as a series of facts. For example:
barackObama owns fido
fido isA dog
fido livesIn theWhiteHouse
fido hasFurColour white
i.e. you have a specific node in your graph which represents the specific object, and then assert further facts about that object. Similarly, while you could assert a single fact "barackObama hasA daughter", you'd probably assert a number of facts linking the two nodes "barackObama" and "maliaObama".
As with everything else, there is no one "right" representation of your data - it varies depending on the problem you're trying to solve.
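To make this concrete, here is a minimal sketch (in TypeScript, reusing the illustrative identifiers above; the Triple type and the query are mine, not part of any particular knowledge-graph API) of storing those statements as subject-predicate-object triples and answering a small query over them:

type Triple = { subject: string; predicate: string; object: string };

const graph: Triple[] = [
  { subject: "barackObama", predicate: "owns",         object: "fido" },
  { subject: "fido",        predicate: "isA",          object: "dog" },
  { subject: "fido",        predicate: "livesIn",      object: "theWhiteHouse" },
  { subject: "fido",        predicate: "hasFurColour", object: "white" },
];

// "Which things owned by barackObama are white?" -- a join of two triple patterns.
const owned = graph
  .filter(t => t.subject === "barackObama" && t.predicate === "owns")
  .map(t => t.object);

const whiteAndOwned = graph
  .filter(t => owned.includes(t.subject) && t.predicate === "hasFurColour" && t.object === "white")
  .map(t => t.subject);

console.log(whiteAndOwned); // ["fido"]

Real systems (RDF stores, Freebase/Wikidata-style property graphs) follow the same idea, only with globally unique identifiers and a query language such as SPARQL instead of array filters.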

Related

Is there an API to get a full citation (such as a BibTeX or JSON citation) from an arbitrary URL?

Say I have a URL like https://www.science.org/doi/10.1126/science.abb4363, how can I get the full citation as:
@article{
doi:10.1126/science.abb4363,
author = {Sergio Almécija and Ashley S. Hammond and Nathan E. Thompson and Kelsey D. Pugh and Salvador Moyà-Solà and David M. Alba },
title = {Fossil apes and human evolution},
journal = {Science},
volume = {372},
number = {6542},
pages = {eabb4363},
year = {2021},
doi = {10.1126/science.abb4363},
URL = {https://www.science.org/doi/abs/10.1126/science.abb4363},
eprint = {https://www.science.org/doi/pdf/10.1126/science.abb4363},
abstract = {There has been much focus on the evolution of primates and especially where and how humans diverged in this process. It has often been suggested that the last common ancestor between humans and other apes, especially our closest relative, the chimpanzee, was ape- or chimp-like. Almécija et al. review this area and conclude that the morphology of fossil apes was varied and that it is likely that the last shared ape ancestor had its own set of traits, different from those of modern humans and modern apes, both of which have been undergoing separate suites of selection pressures. Science, this issue p. eabb4363 A Review describes the unique and varied morphologies in fossil and modern apes, including humans. Humans diverged from apes (chimpanzees, specifically) toward the end of the Miocene ~9.3 million to 6.5 million years ago. Understanding the origins of the human lineage (hominins) requires reconstructing the morphology, behavior, and environment of the chimpanzee-human last common ancestor. Modern hominoids (that is, humans and apes) share multiple features (for example, an orthograde body plan facilitating upright positional behaviors). However, the fossil record indicates that living hominoids constitute narrow representatives of an ancient radiation of more widely distributed, diverse species, none of which exhibit the entire suite of locomotor adaptations present in the extant relatives. Hence, some modern ape similarities might have evolved in parallel in response to similar selection pressures. Current evidence suggests that hominins originated in Africa from Miocene ape ancestors unlike any living species.}}
I was able to download the citation by visiting the link manually, but are there any programmatic APIs to convert a URL (even a Wikipedia URL, say) into a formal citation? If not, I am not sure what the recommended approach is for getting these efficiently.
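Not part of the original post, but one commonly used programmatic route for URLs that embed a DOI is DOI content negotiation: requesting the DOI from doi.org with an Accept: application/x-bibtex header. A hedged sketch in TypeScript (Node 18+ for the built-in fetch; the naive DOI regex and the function name are illustrative):

// Extract a DOI from the URL and ask doi.org for a BibTeX record via content negotiation.
async function bibtexFromUrl(url: string): Promise<string> {
  const doi = url.match(/10\.\d{4,9}\/[^\s?#]+/)?.[0]; // naive DOI pattern, illustrative only
  if (!doi) throw new Error("No DOI found in URL");
  const res = await fetch(`https://doi.org/${doi}`, {
    headers: { Accept: "application/x-bibtex" },
  });
  if (!res.ok) throw new Error(`Content negotiation failed: ${res.status}`);
  return res.text();
}

// Example:
bibtexFromUrl("https://www.science.org/doi/10.1126/science.abb4363").then(console.log);

This does not help for arbitrary pages such as Wikipedia articles, which have no DOI; those need a different, site-specific approach.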

single person detection in latest bodypix

When I try the bokeh segmentation effect using body-pix#1.0.0, it detects/segments the person (A) in front of the camera. If another person (B) is standing behind, away from A, B is blurred out. If person B comes very close to the contour of A, then person B is also detected. This is the preferred behaviour.
Now when I try body-pix#2.0.0, both person A and person B are detected even though I am using the segmentPerson API. Please note that person B is standing far away from person A, yet both are detected. The advantage I see with 2.0 is that the contour of the detected person is much more accurate and smoother than in 1.0, which had a gap in the contour where the bokeh effect was missing. In 2.0 the contour is more accurate, but multiple people are getting detected. Is there any parameter I could tweak to restrict this to single-person detection and keep the smoother contour?
Thanks
For those who want to know the answer, the source is: https://github.com/tensorflow/tfjs/issues/2547
If you want BodyPix 2.0 to restrict the effect to just a subset of people (e.g. only the largest person), a quick way would be to use BodyPix 2.0's Multi-Person Segmentation API: https://github.com/tensorflow/tfjs-models/tree/master/body-pix#multi-person-segmentation.
This method returns an array of PersonSegmentation objects. In your case it will be an array of two PersonSegmentation objects: one for Person A and one for Person B.
You could then remove certain people (in your case Person B) from that array and pass the resulting array (with only one element: Person A) to drawBokehEffect: https://github.com/tensorflow/tfjs-models/tree/master/body-pix#bodypixdrawbokeheffect.
To automate this process for other cases (3 or more people):
Each PersonSegmentation object has a .pose field that contains the 2D coordinates (in image pixel space) of the person's 17 keypoints. They can be used to compute the smallest bounding box for each person. The bounding box area can then be used as a criterion to remove small people from the image.
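Putting those steps together, a sketch of the filtering might look like this (TypeScript; based on the BodyPix 2.0 README linked above - segmentMultiPerson and drawBokehEffect are the documented calls, while poseBoxArea and the blur amounts are illustrative choices of mine):

import * as bodyPix from '@tensorflow-models/body-pix';

// Smallest bounding box around the 17 pose keypoints, used as a rough size measure.
function poseBoxArea(person: { pose: { keypoints: { position: { x: number; y: number } }[] } }): number {
  const xs = person.pose.keypoints.map(k => k.position.x);
  const ys = person.pose.keypoints.map(k => k.position.y);
  return (Math.max(...xs) - Math.min(...xs)) * (Math.max(...ys) - Math.min(...ys));
}

async function bokehLargestPersonOnly(canvas: HTMLCanvasElement, video: HTMLVideoElement) {
  const net = await bodyPix.load();

  // One PersonSegmentation per detected person (Person A, Person B, ...).
  const people = await net.segmentMultiPerson(video);
  if (people.length === 0) return;

  // Keep only the largest person; everyone else is treated as background and blurred.
  const largest = people.reduce((a, b) => (poseBoxArea(a) >= poseBoxArea(b) ? a : b));

  const backgroundBlurAmount = 9; // illustrative values
  const edgeBlurAmount = 3;
  bodyPix.drawBokehEffect(canvas, video, [largest], backgroundBlurAmount, edgeBlurAmount);
}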

Is there a way to create a Traits class to parametrise Envelope_diagram_2 where the X monotone curves can be segments, rays or conic curves?

I am using the Envelope_3 package of CGAL-4.9.1 and I need to compute an upper envelope where the resulting envelope diagram (Envelope_diagram_2<EnvTraits>) could have edges of three different types:
segments
rays
parabolic arcs (conic arcs)
The three provided models of Envelope_Traits_3 are not enough for this.
I therefore need to create my own EnvTraits (which have to be a model of the concept Envelope_Traits_3).
For now, I have made something like the already provided Env_sphere_traits_3<ConicTraits> model, with which I have at my disposal both parabolic arcs and segments (I just use straight arcs).
The problem arises because I also need to be able to use Rays. How could I do this? Is there a Traits class that I can extend (just like I'm doing right now with Arr_conic_traits_2) that provides X_monotone_curve_2s that can be of the three types that I need?
I found the Arr_polycurve_traits_2 class, hoping that it would allow subcurves of different types to be stored, but it actually only allows storing polycurves whose pieces are all of the same kind (linear, Bezier, conic, circular, ...).
What you need is a model of the EnvelopeTraits_3 concept and of the ArrangementOpenBoundaryTraits_2 concept. Among all the traits classes provided by the "2D Arrangements" package, only instances of the templates Arr_linear_traits_2, Arr_rational_function_traits_2, and Arr_algebraic_segment_traits_2 are models of the latter concept.
I suggest that you develop something like Env_your_object_traits_3<AlgebraicTraits_2>, where the template parameter AlgebraicTraits_2 can be substituted with an instance of Arr_algebraic_segment_traits_2.
Efi

Can I apply the Liskov substitution principle here?

I have two data models which are represented by the following classes:
1) ImagesSet - an object that owns 2DImages; each 2DImage has its own position (origin (3DPoint), x-, y-axes (3DVector), and dimensions along the x and y axes (in pixels)), but all share the same pixel size (in mm, for example) and the same angle between the x and y axes (90 degrees).
This object has the following methods (in pseudocode):
AddImage(2DImage);
RemoveImage(ImageIndex);
number GetNumberOfImages();
2DImage Get2DImage(ImageIndex);
2) 3DImage - an object that is similar to the first but with the following restriction:
it can only store 2D images with the same x-, y-axes and the same dimensions along the x and y axes.
Is it correct in this case to derive 3DImage from ImagesSet?
From my point of view 3DImage "is a" ImagesSet (but with small restrictions)
Could I apply here Liskov substitution principle?
In this case, if we try to add an image with different x-, y-axes, the AddImage method will either throw an exception or return an error.
Thanks in advance,
Sergey
I agree with maxim1000 that LSP will be violated because derived class adds restrictions that are not present in the base class. If you take a close look at your description you will notice that the question can be turned upside-down: Can ImageSet derive from 3DImage?
Your situation is somewhat similar to the Ellipse-Circle problem. Which one derives from the other? Is a circle an ellipse with a constraint, or is an ellipse a circle with an additional radius? The point is that both are wrong. If you constrain an ellipse to have equal radii, then a client which attempts to set different values will receive an error.
Otherwise, if we say that an ellipse is just a less constrained circle, we get a more subtle mistake. Suppose that shapes may not breach the boundaries of the screen. Now suppose that a circle is replaced with an ellipse. Depending on which coordinate was tested, the shape might break out of the screen area without any change to the client code. That is exactly the violation of LSP.
The conclusion is: circle and ellipse are separate classes, and 3DImage and ImagesSet are separate classes.
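To make the substitution failure concrete, here is a tiny illustrative sketch (TypeScript; not from the original discussion):

// The base class promises that the two radii can be set independently.
class Ellipse {
  constructor(public rx: number, public ry: number) {}
  setRx(rx: number): void { this.rx = rx; }
  setRy(ry: number): void { this.ry = ry; }
}

// To keep its invariant (equal radii), Circle has to change what the setters mean.
class Circle extends Ellipse {
  setRx(r: number): void { this.rx = r; this.ry = r; }
  setRy(r: number): void { this.rx = r; this.ry = r; }
}

// Client code written against Ellipse: shrink the width to fit the screen,
// leaving the height untouched.
function fitToWidth(shape: Ellipse, maxRx: number): void {
  shape.setRx(Math.min(shape.rx, maxRx));
  // If shape is actually a Circle, ry has silently changed as well -- the behaviour
  // the client relied on no longer holds, which is the LSP violation described above.
}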
Maybe it's just me, but whenever I hear "derive or not derive", my first reaction is "not derive" :)
Two reasons in this case:
LSP is violated exactly because of those "small restrictions". As long as AddImage in your base class allows adding an image with any orientation, 3DImage is not an ImagesSet. There will be no way for algorithms to state that they need this feature (and comments are not a good place for that :) ), so you'll have to rely on run-time checks. It's still possible to program this way, but it will be one more overhead for developers.
Whenever you create an abstraction, it's important to understand exactly why it's created. With derivation you implicitly create an abstraction - the interface of 3DImage. Instead, it's better to create this abstraction explicitly: create an interface class, list in it the methods useful for algorithms that can work on both data structures, and make both ImagesSet and 3DImage implement that interface, possibly adding some other methods.
P.S.
And likely AddImage will become one of those added methods - different in ImagesSet and 3DImage, but that depends...
Dear maxim1000 and sysexpand,
Thanks for the answers. I agree with you. It is clear now that LSP is violated and in this case I can't derive 3DImage from ImagesSet.
I need to redesign the solution in the following way:
2DImage will contain:
2DDimensions
PixelSize (in mm)
PixelData
2DImageOrientated will be derived from 2DImage and will contain new data:
3DPoint origin,
3DVector x-,y-axes
I will create pure interface IImagesSet:
number GetNumberOfImages()
RemoveImage(ImageIndex)
2DImageOrientated Get2DImage()
ImagesSet will be derived from IImagesSet and will contain the following:
vector<2DImageOrientated>
Add2DImage(2DImageOrientated)
number GetNumberOfImages()
RemoveImage(ImageIndex)
2DImageOrientated Get2DImage()
3DImage will also be derived from IImagesSet and will contain the following:
vector<2DImageOrientated>
Add2DImage(2DImage)
SetOrigin(3DPoint)
SetXAxis(3DVector)
SetYAxis(3DVector)
number GetNumberOfImages()
RemoveImage(ImageIndex)
2DImageOrientated Get2DImage()
In this case I think LSP is not violated.
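For illustration, the redesign above could be sketched as follows (TypeScript; the names starting with a digit in the post - 2DImage, 2DImageOrientated, 3DImage - are renamed here because identifiers cannot start with a digit, and the member types are placeholders):

type Point3D = { x: number; y: number; z: number };
type Vector3D = Point3D;

class Image2D {
  // dimensions, pixel size (in mm), pixel data ... omitted in this sketch
}

class Image2DOrientated extends Image2D {
  constructor(public origin: Point3D, public xAxis: Vector3D, public yAxis: Vector3D) { super(); }
}

// The explicit abstraction: only the operations both containers can honestly support.
interface IImagesSet {
  getNumberOfImages(): number;
  removeImage(index: number): void;
  get2DImage(index: number): Image2DOrientated;
}

class ImagesSet implements IImagesSet {
  private images: Image2DOrientated[] = [];
  add2DImage(image: Image2DOrientated): void { this.images.push(image); } // any orientation allowed
  getNumberOfImages(): number { return this.images.length; }
  removeImage(index: number): void { this.images.splice(index, 1); }
  get2DImage(index: number): Image2DOrientated { return this.images[index]; }
}

class Volume3DImage implements IImagesSet {
  private images: Image2DOrientated[] = [];
  constructor(private origin: Point3D, private xAxis: Vector3D, private yAxis: Vector3D) {}
  setOrigin(origin: Point3D): void { this.origin = origin; }
  setXAxis(xAxis: Vector3D): void { this.xAxis = xAxis; }
  setYAxis(yAxis: Vector3D): void { this.yAxis = yAxis; }
  // Takes a plain Image2D and applies the volume's shared orientation, so the "same axes"
  // restriction is satisfied by construction rather than rejected at run time
  // (copying the pixel data into the new slice is omitted here).
  add2DImage(image: Image2D): void {
    this.images.push(new Image2DOrientated(this.origin, this.xAxis, this.yAxis));
  }
  getNumberOfImages(): number { return this.images.length; }
  removeImage(index: number): void { this.images.splice(index, 1); }
  get2DImage(index: number): Image2DOrientated { return this.images[index]; }
}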

text based RPG command interpreter

I was just playing a text-based RPG and I got to wondering: how exactly were the command interpreters implemented, and is there a better way to implement something similar now? It would be easy enough to make a ton of if statements, but that seems cumbersome, especially considering that for the most part "pick up the gold" is the same as "pick up gold", which has the same effect as "take gold". I'm sure this is a really in-depth question; I'd just like to know the general idea of how interpreters like that were implemented. Or if there's an open source game with a decent and representative interpreter, that would be perfect.
Answers can be language independent, but try to keep it to something reasonable, not Prolog or GolfScript or something. I'm not sure exactly what to tag this as.
The usual name for this sort of game is text adventure or interactive fiction, if it is single player, or MUD if it is multiplayer.
There are several special purpose programming languages for writing interactive fiction, such as Inform 6, Inform 7 (an entirely new language that compiles down to Inform 6), TADS, Hugo, and more.
Here's an example of a game in Inform 7 that has a room and an object in the room; you can pick up, drop, and otherwise manipulate the object:
"Example Game" by Brian Campbell
The Alley is a room. "You are in a small, dark alley." A bronze key is in the
Alley. "A bronze key lies on the ground."
Produces when played:
Example Game
An Interactive Fiction by Brian Campbell
Release 1 / Serial number 100823 / Inform 7 build 6E59 (I6/v6.31 lib 6/12N) SD
Alley
You are in a small, dark alley.
A bronze key lies on the ground.
>take key
Taken.
>drop key
Dropped.
>take the key
Taken.
>drop key
Dropped.
>pick up the bronze key
Taken.
>put down the bronze key
Dropped.
>
For the multiplayer games, which tend to have simpler parsers than interactive fiction engines, you can check out a list of MUD servers.
If you would like to write your own parser, you can start by simply checking your input against regular expressions. For instance, in Ruby (as you didn't specify a language):
case input
when /(?:take|pick +up)(?: +(?:the|a))? +(.*)/
take_command(lookup_name($1))
when /(?:drop|put +down)(?: +(?:the|a))? +(.*)/
drop_command(lookup_name($1))
end
You may discover that this becomes cumbersome after a while. You could simplify it somewhat using some shorthands to avoid repetition:
OPT_ART = "(?: +(?:the|a))?" # shorthand for an optional article
case input
when /(?:take|pick +up)#{OPT_ART} +(.*)/
take_command(lookup_name($1))
when /(?:drop|put +down)#{OPT_ART} +(.*)/
drop_command(lookup_name($1))
end
This may start to get slow if you have a lot of commands, since it checks the input against each command in sequence. You also may find that it still becomes hard to read, and involves some repetition that is difficult to extract into shorthands.
At that point, you might want to look into lexers and parsers, a topic much too big for me to do justice to in a reply here. There are many lexer and parser generators that, given a description of a language, will produce a lexer or parser capable of parsing that language; check out the linked articles for some starting points.
As an example of how a parser generator would work, I'll give an example in Treetop, a Ruby based parser generator:
grammar Adventure
rule command
take / drop
end
rule take
('take' / 'pick' space 'up') article? space object {
def command
:take
end
}
end
rule drop
('drop' / 'put' space 'down') article? space object {
def command
:drop
end
}
end
rule space
' '+
end
rule article
space ('a' / 'the')
end
rule object
[a-zA-Z0-9 ]+
end
end
Which can be used as follows:
require 'treetop'
Treetop.load 'adventure.tt'
parser = AdventureParser.new
tree = parser.parse('take the key')
tree.command # => :take
tree.object.text_value # => "key"
If by 'text based RPG' you are referring to Interactive Fiction, there are specific programming languages for this. My favorite (the only one I know ;P) is Inform: http://en.wikipedia.org/wiki/Inform
The rec.arts.int-fiction FAQ has further information: http://www.plover.net/~textfire/raiffaq/FAQ.htm