I am trying to load a local tile file layer in pydeck (0.5) in a Jupyter notebook.
I am using the following code:
import pydeck as pdk

data = 'https://localhost:/home/user/myfolder/tiles/{z}/{x}/{y}.png'
layer = pdk.Layer(
    'TileLayer',  # the `type` positional argument
    data=data
)

# Set the viewport location
view_state = pdk.ViewState(
    longitude=50,
    latitude=50,
    zoom=0,
    min_zoom=0,
    max_zoom=5,
    pitch=40.5,
    bearing=-27.36)

# Combine it all and render a viewport
r = pdk.Deck(layers=[layer], initial_view_state=view_state)
r.to_html('TileLayer-example.html')
This just creates a blank view window in Jupyter where it should show an image.
If I launch the TileLayer-example.html file independently in a browser I also get a blank window; however, the browser (Firefox) console output is as follows:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://localhost:/home/user/myfolder/tiles/0/0/0.png. (Reason: CORS request did not succeed).
TypeError: NetworkError when attempting to fetch resource. tile-layer.js:18:56
value tile-layer.js:18
value tile-layer.js:136
t tile-2d-header.js:90
u runtime.js:45
_invoke runtime.js:271
e runtime.js:97
Babel 2
r
u
I am not sure if the problem is CORS related (calling a local file) or if my data path is just plain wrong. I have tried many variants on the path, but all seem to produce the same console warnings.
Any suggestions for simple, safe ways around the CORS problem, or for accessing local tile files in pydeck, would be much appreciated.
For other folk currently looking to implement TileLayer in pydeck: it appears TileLayer is not yet documented or officially supported in pydeck and can only be achieved with a custom layer; see this GitHub issue and this suggested workaround with a custom layer.
With respect to the CORS issue of trying to display a local file resource in the browser when called from a URL: if you are looking for a quick fix for a test/debug session, the suggestions here help. For Firefox specifically they suggest the following:
go to about:config
search for privacy.file_unique_origin
set it to false.
Note this is a security risk so reset it to true after debugging.
The only workable solution currently seems to be to put the tile files in the working directory from which your pydeck project is launched.
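If you would rather not touch browser settings, another workaround (my own suggestion, not something pydeck documents) is to serve the tile folder from a small local HTTP server that adds a permissive CORS header, and point the layer at that server instead. A minimal sketch, assuming it is started from the folder that contains tiles/:

# cors_tile_server.py - run from the folder containing tiles/
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # allow any origin to fetch the tiles (for local debugging only)
        self.send_header('Access-Control-Allow-Origin', '*')
        super().end_headers()

if __name__ == '__main__':
    HTTPServer(('localhost', 8000), CORSRequestHandler).serve_forever()

With that running, the layer's data would be 'http://localhost:8000/tiles/{z}/{x}/{y}.png', which is also a well-formed tile URL template (the original https://localhost:/home/... form mixes a URL scheme with a filesystem path).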
Related
I have customized the layout of the sales order template (added a footer and header, formatted content, etc.). For this I created a new module and installed it. When I use this module locally (macOS), I receive the sales order document as a PDF in the way I set it up. When using this module on a server, however, the changes do not apply at all. I can see that the module is installed and loaded, and if I switch the PDF report to HTML I can see that the layout is set.
There are no 404 errors in the log file, so I am somewhat stuck.
As far as I understand, the PDF file is a rendered HTML file, so I am obviously missing some information here.
So my question is: where can I check which layout is used to generate the PDF file?
Thanks for any help on that!
After more searching I finally found the solution, which I want to share:
It is important to understand that if Odoo is publicly running on any port other than 8069 (port forwarding etc.), this issue will always occur.
Generating the PDF will try to find the needed assets on the public port, which will not lead where needed.
The solution is so easy, but somehow I wasn't able to find it easily:
All you need to do is set a proper URL for report generation.
Go to Settings -> Parameters -> System Parameters
and add:
key: report.url
value: http://localhost:8069
localhost is the correct domain here; do not change it, so that the machine calls the report URL on itself.
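The same parameter can also be set programmatically from the Odoo shell, if you prefer that over the UI. A sketch, assuming a standard installation (env is the environment the shell pre-loads; the database name is a placeholder):

# run inside: odoo shell -d <your-database>
env['ir.config_parameter'].sudo().set_param('report.url', 'http://localhost:8069')
env.cr.commit()  # the shell does not commit automatically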
Can you check your report layout in Settings -> Technical Settings -> Actions -> Reports, then search for your report and check its QWeb views?
I also faced the same problem and struggled for a long period. After reading #patrick.tresp's explanation I understood the reason.
In my case, I made the Odoo port (8069) the default port, which made the base URL the plain domain (domain.com) without the port number. However, the report URL does not detect the port, hence the problem occurs.
When I explicitly defined the report URL, the problem got sorted.
i.e., go to
Settings -> Parameters -> System Parameters
and add:
key: report.url
value: http://domain.ext:8069
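To double-check which values are actually in effect, the same model can be queried. Again a sketch, assuming the Odoo shell:

# run inside: odoo shell -d <your-database>
print(env['ir.config_parameter'].sudo().get_param('report.url'))
print(env['ir.config_parameter'].sudo().get_param('web.base.url'))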
I have a small Pyramid application that by default used the waitress web server when I set it up. However, I am now trying to switch to CherryPy, since it works much better with SSE (server-sent events).
But for uncaught exceptions I got a 500 error page with content under waitress, while with CherryPy the pages are just blank (though the status is correctly 500).
The only thing I did to switch was to change the line:
use = egg:waitress#main
to
use = egg:pyramid#cherrypy
The CherryPy documentation says I can set a custom error message for an unanticipated error. I tried that out but saw no effect at all; the function is never called. I even tried to add a breakpoint in CherryPy's internal error response, but it was not hit either.
I suspect something else is wrong, though, since I assume CherryPy should show "something" by default on a 500 page?
I have attempted to reproduce the issue using the starter scaffold that comes with Pyramid, and made the following modification to the existing views.py that it comes with:
from pyramid.view import view_config
from pyramid.httpexceptions import HTTPInternalServerError

@view_config(route_name='home', renderer='templates/mytemplate.pt')
def my_view(request):
    raise HTTPInternalServerError()
On both CherryPy and waitress this returns a page with the HTTPInternalServerError() on it, including its text.
Changing the raise to:
raise ValueError('test')
This, however, only shows something on the page if pyramid_debugtoolbar is enabled and the user accessing the URL is allowed to see it (this is controlled by the hosts setting for pyramid_debugtoolbar).
CherryPy does not supply any text of its own. Unfortunately I don't see a way to use the _cp_config method of enabling custom error messages, as there is no way to set it up on the HTTP server when using the CherrypyWSGIServer, which is what the Pyramid cherrypy entrypoint used by pserve runs.
What you can do is set up a default exception view in Pyramid, such as the following:
@view_config(context=Exception)
def exception_view(request):
    request.response.status = 500
    request.response.text = u'Something went very wrong. Sorry!'
    return request.response
You can of course customise this exception view however you'd like. If this exception view itself raises, however, you will be at the mercy of CherryPy, which will serve you a blank page.
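Note that view_config decorators only take effect once the configurator scans your package. A minimal wiring sketch, assuming the exception view above lives in a module that gets scanned:

from pyramid.config import Configurator

def main(global_config, **settings):
    config = Configurator(settings=settings)
    config.add_route('home', '/')
    config.scan()  # picks up @view_config, including the exception view
    return config.make_wsgi_app()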
I have a canvas that was created by a user (me) dragging an image onto a dropzone. As part of saving the image, I call toDataURL.
var theCanvas = document.getElementById("idCardCanvas");
var urlImage = theCanvas.toDataURL();
This code has worked for months when it was running from one domain, littlecardeditor.com.
I am now in the process of adding this functionality to Radio3, at http://radio3.io/.
Both domains are Amazon S3 buckets, both have the same CORS configuration (clutching at straws here, I don't see why CORS enters into it, since the images come off my hard disk, not from any domain).
I dragged the images from the Desktop, based on advice from another thread here.
None of it makes a difference. It works when I use it on littlecardeditor.com and fails with the "tainted canvas" error when called on radio3.io.
Not sure what else to look at. Any clues would be much appreciated! :-)
I'm using chrome://favicon/ in my Google Chrome extension to get the favicon for RSS feeds. What I do is get the base path of the linked page and append it to chrome://favicon/http://<domainpath>.
It works really unreliably. A lot of the time it reports the standard "no favicon" icon, even when the page really has a favicon. There is almost no documentation regarding the chrome://favicon mechanism, so it's difficult to understand how it actually works. Is it just a cache of links that have been visited? Is it possible to detect whether there was an icon or not?
From some simple testing, it's just a cache of favicons for pages you have visited. So if I subscribe to dribbble.com's RSS feed, it won't show a favicon in my extension, and if I visit chrome://favicon/http://dribbble.com/ it won't return the right icon. If I then open dribbble.com in another tab, it shows its icon in the tab, and when I reload the chrome://favicon/http://dribbble.com/ tab, it returns the correct favicon. If I then open my extension's popup it still shows the standard icon, but if I restart Chrome it gets the correct icon everywhere.
Now, that's just from some basic research, and it doesn't get me any closer to a solution. So my questions are: is chrome://favicon/ a correct use case for what I'm doing? Is there any documentation for it? And what is its intended behavior?
I've seen this problem as well and it's really obnoxious.
From what I can tell, Chrome populates the chrome://favicon/ cache after you visit a URL (omitting the #hash part of the URL if any). It appears to usually populate this cache sometime after a page is completely loaded. If you try to access chrome://favicon/http://yoururl.com before the associated page is completely loaded you will often get back the default 'globe icon'. Subsequently refreshing the page you're displaying the icon(s) on will then fix them.
So, if you can, refreshing the page you're displaying the icons on just prior to showing it to the user may serve as a fix.
In my use case, I am actually opening tabs which I want to obtain the favicons from. So far the most reliable approach I have found to obtain them looks roughly like this:
chrome.webNavigation.onCompleted.addListener(onCompleted);

function onCompleted(details)
{
    if (details.frameId > 0)
    {
        // we don't care about activity occurring within a subframe of a tab
        return;
    }

    chrome.tabs.get(details.tabId, function(tab) {
        var url = tab.url ? tab.url.replace(/#.*$/, '') : ''; // drop #hash
        var favicon;
        var delay;
        if (tab.favIconUrl && tab.favIconUrl != ''
            && tab.favIconUrl.indexOf('chrome://favicon/') == -1) {
            // favicon appears to be a normal url
            favicon = tab.favIconUrl;
            delay = 0;
        }
        else {
            // couldn't obtain favicon as a normal url, try chrome://favicon/url
            favicon = 'chrome://favicon/' + url;
            delay = 100; // larger values will probably be more reliable
        }
        setTimeout(function() {
            // set favicon wherever it needs to be set here
            console.log('delay', delay, 'tabId', tab.id, 'favicon', favicon);
        }, delay);
    });
}
This approach returns the correct favicon about 95% of the time for new URLs, using delay=100. Increasing the delay if you can accept it will increase the reliability (I'm using 1500ms for my use case and it misses <1% of the time on new URLs; this reliability worsens when many tabs are being opened simultaneously). Obviously this is a pretty imprecise way of making it work but it is the best method I've figured out so far.
Another possible approach is to instead pull favicons from http://www.google.com/s2/favicons?domain=somedomain.com. I don't like this approach very much as it requires accessing the external network, relies on a service that has no guarantee of being up, and is itself somewhat unreliable; I have seen it inconsistently return the "globe" icon for a www.domain.com URL yet return the proper icon for just domain.com.
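For what it's worth, that service is a plain HTTP endpoint, so its behavior can be probed outside the extension too. A quick sketch in Python (the domain is just an example):

import urllib.request

# fetch the favicon bytes for a domain via Google's s2 service
url = 'http://www.google.com/s2/favicons?domain=stackoverflow.com'
icon = urllib.request.urlopen(url).read()
print(len(icon), 'bytes')  # the generic globe fallback comes back the same way, as a small image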
Hope this helps in some way.
As of October 2020, it appears Chrome extensions using manifest version 3 are no longer able to access chrome://favicon/* URLs. I haven't found the 'dedicated API' the message below refers to.
Manifest v3 and higher extensions will not have access to the chrome://favicon host; instead, we'll provide a dedicated API permission and different URL. This results in being able to tighten our permissions around the chrome:-scheme.
In order to use chrome://favicon/some-site in an extension, manifest.json needs to be updated:
"permissions": ["chrome://favicon/"],
"content_security_policy": "img-src chrome://favicon;"
Tested on Version 63.0.3239.132 (Official Build) (64-bit).
The chrome://favicon URL is deprecated in favor of the new favicon API in Manifest V3.
// manifest.json
{
  "permissions": ["favicon"]
}

// utils.js
function getFaviconUrl(url) {
  return `chrome-extension://${chrome.runtime.id}/_favicon/?pageUrl=${encodeURIComponent(url)}&size=32`;
}
Source: https://groups.google.com/a/chromium.org/g/chromium-extensions/c/qS1rVpQVl8o/m/qmg1M13wBAAJ
I inspected the website icon on the Chrome history page and found this simpler method.
You can get the favicon URL with:
favIconURL = "chrome://favicon/size/16#1x/" + tab.url;
Don't forget to add "permissions" and "content_security_policy" to Chrome. (https://stackoverflow.com/a/48304708/9586876)
In the latest version of Chrome when tested, Version 78.0.3904.87 (Official Build) (64-bit), adding just img-src chrome://favicon; as the content_security_policy will still show two warnings:
'content_security_policy': CSP directive 'script-src' must be specified (either explicitly, or implicitly via 'default-src') and must whitelist only secure resources.
And:
'content_security_policy': CSP directive 'object-src' must be specified (either explicitly, or implicitly via 'default-src') and must whitelist only secure resources.
To get rid of them use:
"permissions": ["chrome://favicon/"],
"content_security_policy": "script-src 'self'; object-src 'self'; img-src chrome://favicon;"
Now you can use chrome://favicon/http://example.com without getting any errors or warnings.
I'm currently experiencing problems with static content - most noticeably jQuery datepicker images, but also other static files - which results in images/static content being loaded many times; I can clearly see it in the IE6 status bar (not to mention slow rendering).
The problem and possible solutions seem to be described here: http://www.explainth.at/en/tricks/flickfix.shtml. However, I use IIS6, not Apache, and these are static files that I don't want to feed through PHP or ASP.
How do I make IE6 cache static images properly? How do I add a custom response header for specific files/folders?
Hm, let me re-phrase it. I'm not sure it is caused by the bugs above. Actually, I tried appcmd to apply cacheControlMode etc., and it doesn't seem to work. As far as I remember, IE6 also does not cache responses to XMLHttpRequest calls? So, the biggest problems that I need to solve are:
in the jQuery calendar, moving the mouse over the image buttons (prev/next) causes them to be reloaded/refreshed
in a jQuery dialog, each dialog('open') causes images from the theme (like the header background) to be re-loaded/refreshed
etc.
This link probably gives a better explanation: http://ajaxian.com/archives/internet-explorer-and-ajax-image-caching-woes
How do I solve this - that is, without feeding images through ASP.NET to set up headers?
Thanks everybody for listening; the trick with appcmd seems to work ;-) The problem was that I used the jQuery theme from googleapis... which obviously was not affected by appcmd ;-) Moving the theme to a local folder did the trick. These are the commands:
\Windows\system32\inetsrv\appcmd.exe set config "Default Web Site/images" -section:system.webServer/staticContent -clientCache.cacheControlMode:UseMaxAge
\Windows\system32\inetsrv\appcmd.exe set config "Default Web Site/images" -section:system.webServer/staticContent -clientCache.cacheControlMaxAge:"01:00:00"
from http://forums.iis.net/t/1067723.aspx
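To confirm the header is actually being served after running those commands, one can request an image and inspect the response. A sketch (the image path is just an example):

import urllib.request

# cacheControlMaxAge of 01:00:00 should surface as Cache-Control: max-age=3600
resp = urllib.request.urlopen('http://localhost/images/example.png')
print(resp.headers.get('Cache-Control'))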