Discrepancy between Kepler GL coordinates and Google Maps - deck.gl

I'm trying to create a map layer with a point in https://kepler.gl/demo, but the coordinates I'm using render differently in Google Maps and in Kepler.gl.
This is the GeoJSON that I am loading in kepler.gl:
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [42.2812989, -8.7366615]
},
"properties": {
"name": "foo"
}
}
This puts a point in the middle of the sea... but if I put those coordinates in Google Maps the point is in Galicia (Spain), which is the 'real' location.
Maybe there is something that I'm not taking into account?

You need to swap the coordinates: GeoJSON expects longitude first, then latitude, so the point should be "coordinates": [-8.7366615, 42.2812989]. Then you will see your point in its real location.
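If you have many points saved in the same [latitude, longitude] order, a quick one-off fix can be scripted. This is only a sketch (file names are placeholders), not part of the original answer:
import json

# Swap every Point's coordinates into GeoJSON's [longitude, latitude] order
# before loading the file into kepler.gl. File names are placeholders.
with open("points.geojson") as f:
    data = json.load(f)

features = data["features"] if data.get("type") == "FeatureCollection" else [data]
for feature in features:
    geometry = feature.get("geometry", {})
    if geometry.get("type") == "Point":
        lat, lng = geometry["coordinates"]
        geometry["coordinates"] = [lng, lat]

with open("points_fixed.geojson", "w") as f:
    json.dump(data, f)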

Related

Accessing a Word(.docx) file's content with Microsoft Graph REST API?

Is there a way to obtain the content of a Word document stored in the cloud through the Microsoft Graph API without having to download the file locally?
The goal is to build an app that analyzes a Word document's inner content and produces some interesting data from it. However, after searching through Microsoft's Dev Center, Graph Explorer, and their API documentation repository, I can't find any API endpoint that can serve me that data.
I can find some endpoints that deal with manipulating Excel's contents, but not one that deals with Word. Does Microsoft Graph not support retrieving a Word document's content?
EDIT: For example, I know I can read the contents of a "message" and even apply a search on it through query parameters, as demonstrated by one of Microsoft's samples. But I can't seem to find how to do this with Word documents.
Well, it's possible to download the content of the document.
See: Download the contents of a DriveItem.
For example:
GET /v1.0/me/drive/root:/some-folder/document.docx:/content
But you'll get the entire .docx, with embedded images and all. I don't know if this is what you are looking for.
As an example, see the helix-word2md project that fetches a docx and converts it to markdown.
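As a rough Python sketch of that download (not from the original answer; it assumes you already have a valid OAuth access token and reuses the example path above):
import requests

# Assumption: ACCESS_TOKEN is a valid Microsoft Graph OAuth token for the signed-in user.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
url = "https://graph.microsoft.com/v1.0/me/drive/root:/some-folder/document.docx:/content"

resp = requests.get(url, headers={"Authorization": "Bearer " + ACCESS_TOKEN})
resp.raise_for_status()

# The body is the raw .docx (a ZIP container); save it, or hand it to a library
# such as python-docx if you want to inspect paragraphs and tables.
with open("document.docx", "wb") as f:
    f.write(resp.content)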
I'm afraid you can't directly access Word content. What you can do is use the webUrl property of a DriveItem to open the document in the associated Word Online, or in the native Word application if it is installed.
You can use the requests below to get a specific item or all items:
GET /users/{userId}/drive/items/{itemId}
GET /me/drive/root/children
This is an example of the result:
{
"#microsoft.graph.downloadUrl": "",
"createdDateTime": "2018-08-10T01:43:00Z",
"eTag": "\"{00000000-3E94-4161-9B82-0000000},2\"",
"id": "00000000IOJA4ONFB6MFAZXARX7L7RU4NV",
"lastModifiedDateTime": "2018-08-10T01:43:00Z",
"name": "daily check.docx",
"webUrl": "https://xxxxxxx",
"cTag": "\"c:{00000000-3E94-4161-9B82-37FAFF1A71B5},2\"",
"size": 26330,
"createdBy": {
"user": {
"email": "000000.onmicrosoft.com",
"id": "000000-93dc-41b7-b89b-760c4128455a",
"displayName": "Chris"
}
},
"lastModifiedBy": {
"user": {
"email": "0000#0000.onmicrosoft.com",
"id": "00000000-93dc-41b7-b89b-00000000",
"displayName": "Chris"
}
},
"parentReference": {
"driveId":
"b!000000000gdQMtns72t31yqWMhnFCjmCqO3tR5ypOf17NKl2USqo1bNqhOzrZ",
"driveType": "business",
"id": "00000VN6Y2GOVW7725BZO354PWSELRRZ",
"path": "/drive/root:"
},
"file": {
"mimeType": "application/vnd.openxmlformats-
officedocument.wordprocessingml.document",
"hashes": {
"quickXorHash": "OSOK7r2hIVSeY1+FjaCnlOxn2p8="
}
},
"fileSystemInfo": {
"createdDateTime": "2018-08-10T01:43:00Z",
"lastModifiedDateTime": "2018-08-10T01:43:00Z"
}
}
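As a small Python sketch of that approach (not part of the original answer; it assumes a valid Graph access token and simply prints each Word document's webUrl from the drive root):
import requests

# Assumption: ACCESS_TOKEN is a valid Microsoft Graph OAuth token.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
headers = {"Authorization": "Bearer " + ACCESS_TOKEN}

# List the items in the drive root and print each Word document's webUrl,
# which can be opened in Word Online (or the desktop app, if installed).
resp = requests.get("https://graph.microsoft.com/v1.0/me/drive/root/children", headers=headers)
resp.raise_for_status()

for item in resp.json().get("value", []):
    if item.get("name", "").endswith(".docx"):
        print(item["name"], item["webUrl"])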

How do I change the background image on an Alexa Show skill card?

I'm new to programming Alexa skills, especially with the Echo Show. I am trying to change the background image of the skill card from the default dark grey to something else. I know there has to be a way to do this because when I say, "Alexa, tell me a joke." that skill's background is red. And when I say, "Alexa, tell me about LeBron James." Alexa changes the background to LeBron James and the text auto scrolls. Any help on this would be great.
You can indeed change the background of an Alexa Show skill. Unfortunately, at this time Amazon does not offer much styling functionality beyond that for Show skills.
The display interface reference is the documentation you should read. It will give you an understanding of how all your calls and responses are sent and received as JSON objects. In order to change the background you must choose one of the few template options available and add the backgroundImage key and value to your JSON response structure.
For example, check out the following response structure, which you would send back from your AWS Lambda function. It renders BodyTemplate2, which displays an image on one side of the screen and text on the other. (This was taken from the display interface reference.) Look at the "backgroundImage" key and its value.
{
"type": "Display.RenderTemplate",
"template": {
"type": "BodyTemplate2",
"token": "A2079",
"backButton": "VISIBLE",
"backgroundImage": {
"contentDescription": "Textured grey background",
"sources": [
{
"url": "https://www.example.com/background-image1.png"
}
]
},
"title": "My Favorite Car",
"image": {
"contentDescription": "My favorite car",
"sources": [
{
"url": "https://www.example.com/my-favorite-car.png"
}
]
},
"textContent": {
"primaryText": {
"text": "See my favorite car",
"type": "PlainText"
},
"secondaryText": {
"text": "Custom-painted",
"type": "PlainText"
},
"tertiaryText": {
"text": "By me!",
"type": "PlainText"
}
}
}
}
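For orientation only (this is not from the original answer): if your Lambda function is written in Python without the ASK SDK, the directive above sits inside the full skill response roughly like this; the speech text and URLs are placeholders.
def lambda_handler(event, context):
    # Minimal raw Alexa skill response containing a Display.RenderTemplate
    # directive that sets backgroundImage. Text and image URLs are placeholders.
    directive = {
        "type": "Display.RenderTemplate",
        "template": {
            "type": "BodyTemplate2",
            "token": "A2079",
            "backButton": "VISIBLE",
            "backgroundImage": {
                "contentDescription": "Textured grey background",
                "sources": [{"url": "https://www.example.com/background-image1.png"}],
            },
            "title": "My Favorite Car",
            "textContent": {
                "primaryText": {"type": "PlainText", "text": "See my favorite car"}
            },
        },
    }
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": "Here is my favorite car."},
            "directives": [directive],
            "shouldEndSession": True,
        },
    }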

Map is shown without buildings

I'm trying to render 3D building extrusions in a React Native app using mapbox/react-native-mapbox-gl from GitHub.
I created a custom map in Mapbox Studio, added the following layer to the style JSON, and uploaded the map to Mapbox Studio:
{
"id": "buildings",
"type": "fill-extrusion",
"source": "composite",
"source-layer": "building",
"minzoom": 15,
"filter": [
"all",
[
"==",
"extrude",
"true"
],
[
">",
"height",
1
]
],
"paint": {
"fill-extrusion-color": "hsl(206, 7%, 61%)",
"fill-extrusion-height": {
"type": "identity",
"property": "height"
},
"fill-extrusion-base": {
"type": "identity",
"property": "min_height"
},
"fill-extrusion-opacity": 1,
"fill-extrusion-translate-anchor": "viewport"
}
}
The buildings are rendered as expected in Mapbox Studio, but when I go back to my React Native app the map is shown without the buildings.
Do you have any idea how to display 3D buildings with the react-native-mapbox-gl SDK?
Thanks.
Screenshot of mapbox studio
Screenshot of the React-Native Map
Compatibility fix for this issue:
Add a new layer for your custom map in Mapbox Studio
Source: Mapbox Streets V7 (building)
Type: Fill extrusion
Edit style
Click on Height and then Edit JSON (in the bottom of the panel)
Then, paste the following code:
{
"type": "identity",
"property": "height"
}
Publish your style and use it as the Mapbox style for your map in React Native.

How to grab geo-location data from Gramfeed?

I'm kind of new to programming, and I want to grab the geo-locations for academic research, with the purpose of visualizing data.
Is there any simple way to do this, or a simple tutorial on how to do it? I need to extract the geo-locations from the map to a CSV/JSON/XLS file.
The readme here (https://github.com/Instagram/python-instagram) is a tutorial.
For example to authenticate with the API use:
from instagram.client import InstagramAPI
access_token = "YOUR_ACCESS_TOKEN"
client_secret = "YOUR_CLIENT_SECRET"
api = InstagramAPI(access_token=access_token, client_secret=client_secret)
Then you could get location information with these three queries:
api.location(location_id)
api.location_recent_media(count, max_id, location_id)
api.location_search(q, count, lat, lng, foursquare_id, foursquare_v2_id)
The docs for the location endpoint of this API are at https://www.instagram.com/developer/endpoints/locations/.
Essentially the above commands could send a request like:
https://api.instagram.com/v1/locations/search?lat=48.858844&lng=2.294351&access_token=ACCESS-TOKEN
The Instagram API response would be:
{
"data": [{
"id": "788029",
"latitude": 48.858844300000001,
"longitude": 2.2943506,
"name": "Eiffel Tower, Paris"
},
{
"id": "545331",
"latitude": 48.858334059662262,
"longitude": 2.2943401336669909,
"name": "Restaurant 58 Tour Eiffel"
},
{
"id": "421930",
"latitude": 48.858325999999998,
"longitude": 2.294505,
"name": "American Library in Paris"
}]
}
Python can certainly export this data into one of the file types you mentioned. Note that there are also some really nice plotting capabilities; take a look at http://matplotlib.org/basemap/users/examples.html. The benefit of the wrapper is that you can directly interact with the response data; it behaves like a Python dict object.
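As a rough sketch of the export step (not the asker's or answerer's code; it just calls the REST URL shown above with a placeholder access token and writes the documented fields to CSV):
import csv
import requests

# Assumption: ACCESS_TOKEN is a valid Instagram API access token.
ACCESS_TOKEN = "ACCESS-TOKEN"
url = "https://api.instagram.com/v1/locations/search"
params = {"lat": 48.858844, "lng": 2.294351, "access_token": ACCESS_TOKEN}

locations = requests.get(url, params=params).json()["data"]

# Write the fields shown in the sample response above to a CSV file.
fields = ["id", "name", "latitude", "longitude"]
with open("locations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for loc in locations:
        writer.writerow({k: loc[k] for k in fields})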

XBMC Database Path

I am working with XBMC. I have installed XBMC on my system (Windows 7, 32-bit) and it is working fine. I have developed an application to control XBMC remotely from an iPad, but I am unable to retrieve the music or video files from XBMC. By searching the XBMC forums, I found that we can write an SQL query to get them out, but I can't make out where the database is located on my system. Can someone help me find it?
Regards,
Sushma.
The database itself
By default the location of the database is that described on the wiki page XBMC databases
but the actual location can be changed by the user, or a different database technology can be used entirely.
The settings that would affect this are located in advancedsettings.xml.
But in general it is advised by the XBMC developers to never access the database directly.
JSONRPC
To help with interacting with the database, XBMC supports JSONRPC queries. The one downside of these is that XBMC needs to be running at the time to respond to them; the major advantage is that XBMC will find the database for you and expose access to it through a common interface.
JSONRPC support was first added to XBMC in "Dharma" (v10), became really useful in "Eden" (v11), and will support almost everything possible in "Frodo" (v12). Information about the use of JSONRPC can be found in the wiki.
An example
In this example I'm assuming that you are targeting "Eden", the current stable release of XBMC. Also, I have formatted the following with new lines; these are not required and are not present in the response from XBMC.
Request
If you were to use JSONRPC the request you would need to send would look something like:
{
"jsonrpc": "2.0",
"method": "VideoLibrary.GetMovies",
"params": {
"properties": [
"title",
"year",
"file"
],
"limits": {
"start": 0,
"end": 2
}
},
"id": 1
}
Note: If you wanted different information about each movie you could use other properties listed here.
Note: You probably want to omit the "limits" part to get all movies.
Response
The response to this would be something like:
{
"id": 1,
"jsonrpc": "2.0",
"result": {
"limits": {
"end": 2,
"start": 0,
"total": 47
},
"movies": [
{
"label": "Label for movie",
"movieid": 1,
"title": "Title of movie",
"year": 2012
},
{
"label": "Label for another movie",
"movieid": 2,
"title": "Title of another movie",
"year": 2010
},
{
"label": "Label for a third movie",
"movieid": 3,
"title": "Title of a third movie",
"year": 2012
}
]
}
}
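If you want to try the call from code rather than from the iPad app, a minimal Python sketch might look like the following. It assumes XBMC's web server is enabled and listening on localhost:8080 (adjust host, port, and any credentials to your setup); this is not part of the original answer.
import requests

# Assumption: XBMC's web server (Settings > Services) is enabled on localhost:8080.
XBMC_URL = "http://localhost:8080/jsonrpc"

payload = {
    "jsonrpc": "2.0",
    "method": "VideoLibrary.GetMovies",
    "params": {
        "properties": ["title", "year"],
        "limits": {"start": 0, "end": 2},
    },
    "id": 1,
}

# POST the JSONRPC request and print the movies from the result.
resp = requests.post(XBMC_URL, json=payload)
for movie in resp.json()["result"]["movies"]:
    print(movie["movieid"], movie["title"], movie["year"])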
What to do now?
You have a choice at this point; you can either:
Add "file" to the list of properties; this will return the "file" property, the location of the video file.
Use JSONRPC to tell XBMC to play a movie.
The second option is best when you don't want to play the file locally (on the iPad) but instead on XBMC.
Playing a movie on XBMC via JSONRPC
This is quite simple, use the "movieid" you received earlier in the following request:
{
"jsonrpc": "2.0",
"method": "Player.Open",
"params": {
"item": {
"movieid": 2
}
},
"id": 1
}
Lastly I would note that there are equivalent commands for TV episodes as shown for movies.