GoodData: How to make the Dashboard Macros work in Firefox? - gooddata

I can embed content from my app in a GoodData dashboard using "Web content". Dashboard macros are a way to have that content customized depending on the dashboard it appears in - some references:
http://developer.gooddata.com/article/how-to-use-dashboard-macros
http://developer.gooddata.com/article/dashboard-macro-reference
I'm trying to embed content on a link like this:
http://myserver.com/apps/my_app#%CURRENT_DASHBOARD_URI%/%CURRENT_DASHBOARD_TAB_URI%
It works fine in Chrome, but it does weird things in Firefox. It seems the macros don't work - %CURRENT_DASHBOARD_URI% isn't being replaced with a string like %2Fgdc%2Fmd%2FGoodSalesDemo%2Fobj%2F1952 as the docs suggest.

In fact the macros do work in Firefox; the issue is just the way Firefox decodes URLs.
Chrome doesn't decode the dashboard URI string for you, so you get:
http://myserver.com/apps/my_app#%2Fgdc%2Fmd%2FGoodSalesDemo%2Fobj%2F1952/85f6945b672d
Firefox does the decoding for you, so you get:
http://myserver.com/apps/my_app#/gdc/md/GoodSalesDemo/obj/1952/85f6945b672d
Therefore a slash isn't a good character to separate %CURRENT_DASHBOARD_URI% and %CURRENT_DASHBOARD_TAB_URI% in your app.
Also, when parsing the parameters out of the URL, make sure the value is decoded - e.g. with the decodeURIComponent function in JavaScript. Decoding won't hurt the already-decoded string in Firefox, and it will decode the string in Chrome.
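A minimal sketch of that parsing, assuming the embed URL uses `|` rather than `/` as the separator between the two macros (the separator choice and parameter order are assumptions for illustration, not part of the macro spec):

```javascript
// Parse the two macro values out of location.hash, for an embed URL like:
//   http://myserver.com/apps/my_app#%CURRENT_DASHBOARD_URI%|%CURRENT_DASHBOARD_TAB_URI%
function parseDashboardParams(hash) {
  const [dashboardUri, tabUri] = hash.replace(/^#/, '').split('|');
  // decodeURIComponent is a no-op on an already-decoded string (Firefox)
  // and decodes the percent-encoded one (Chrome), so both browsers
  // end up with identical values.
  return {
    dashboardUri: decodeURIComponent(dashboardUri),
    tabUri: decodeURIComponent(tabUri || ''),
  };
}
```

Either the Chrome-style hash (`#%2Fgdc%2Fmd%2F...|85f6945b672d`) or the Firefox-style one (`#/gdc/md/...|85f6945b672d`) yields the same decoded pair.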

Related

TinyMCE 5 - large images pasted via Safari do not render correctly

We are running TinyMCE version 5.4.1 with various options including:
paste_data_images: true
powerpaste_allow_local_image: true
When we drag & drop (or paste) smaller images (400px x 400px), everything seems to work fine. The Base64 encoding is saved to the database and the image renders in all browsers: Chrome, Firefox and Safari.
However, when we paste a larger image (1920px x 1081px), the image is only saved and rendered correctly in Chrome and Firefox. In Safari the Base64 encoding is saved with all lowercase characters, so it doesn't render when we attempt to view it. Has anyone else experienced this?
I have searched here as well as on the TinyMCE website but don't see anything mentioning this behavior. We will eventually attempt to move away from this Base64 implementation as it's no longer recommended but it's what we have for the time being so I'm just trying to address this issue.
When the page loads, its elements can do so in parallel. But when the browser sees a base64 image, it blocks the page from loading until the image is rendered. Thus, inserting large images into the page as base64 is certainly not good practice - it may slow down page loads and worsen the UX.
To fix this problem, and perhaps several other issues, using the automatic_uploads option is highly recommended. It uploads pasted images to the server instead of converting them to base64. The TinyMCE docs include an example of a PHP upload handler that stores the images and returns their URLs to TinyMCE.
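A minimal init sketch along those lines (the selector and the /upload.php endpoint are assumptions - point images_upload_url at your own server-side handler):

```javascript
tinymce.init({
  selector: 'textarea#editor',      // hypothetical selector
  plugins: 'paste image',
  paste_data_images: true,
  // Upload pasted/dropped images to the server instead of
  // embedding them in the document as base64.
  automatic_uploads: true,
  images_upload_url: '/upload.php', // hypothetical endpoint
});
```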
Concerning the issue with Safari, some minimal reproducible example would be very useful.
I should also mention that PowerPaste is a premium feature that will not work with the open-source version of TinyMCE. If you are using the paid version of TinyMCE, you can create a support ticket.

How can I get the REAL source code in the browser?

I'm trying to write a test for a simple API, which always fails because of a strange browser behaviour.
The response coming from the API is just some plain text:
foo-bar-123
I can see exactly that in the browser window and also as response in the network tab.
Okay so far, but when I look at the Inspector, I see something like this:
<html><head></head><body>foo-bar-123</body></html>
If I control the browser with selenium, the result of webdriver.page_source is the same.
For reasons I don't understand, the browser adds some HTML tags to the content.
Is this some strange kind of "feature"? Can this be switched off?
I don't think it's a bug because both Firefox and Chrome are showing this behaviour.
I just want to get the real content without any fancy stuff the browser thinks I need.
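Browsers always parse a response into a DOM, so even a text/plain body gets wrapped in an `<html><body>…</body></html>` shell before the Inspector or page_source sees it. If you have to stay inside Selenium, one hedged workaround (assuming the wrapper stays that simple) is to strip the shell back off:

```javascript
// Sketch: recover the raw text/plain payload from page_source, which the
// browser has wrapped in <html><head></head><body>…</body></html>.
function extractPlainBody(pageSource) {
  const match = pageSource.match(/<body[^>]*>([\s\S]*?)<\/body>/i);
  return match ? match[1].trim() : pageSource.trim();
}
```

Fetching the URL directly with an HTTP client instead of driving a browser avoids the wrapping entirely.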

l20n: Why does IE fail to render HTML tags that translation string contains?

L20n is really helpful for implementing the localization requirements in our web application project; it works perfectly in Chrome and Firefox and almost gets us there in Internet Explorer 11.
We are using HTML (which is supported) in the translation strings; they are formatted like this example:
"About <strong> a </strong>"
It works beautifully in Chrome and Firefox:
Translation result in Chrome
Unfortunately when I switched to Internet Explorer 11, getting this lovely sight on the same part of the page:
Translation result in IE 11
We're not doing anything weird or super special, it's pretty basic implementation.
Question is - has anyone encountered this issue while working with l20n and if so - is there anything that can be done to get Internet Explorer to render tags in translation strings?
After help from #Sampson (cheers!), who pointed out that a) it was an issue in IE11/Edge mode, not Edge, and b) IE11 does not support some HTML5 elements such as <template>, I dug around a little before admitting complete defeat and found the following on the l20n GitHub pages:
l20n compatibility
Using babel polyfill
Using the patched webcomponents HTMLTemplateElement polyfill (link provided on the page)
I added babel and the modified polyfill js file, and after a quick deploy and a nervous refresh, the solution appeared to work. We're testing it as we speak to make sure it doesn't cause issues with the app, but all looks good so far.
I know this is a workaround, but although our IE traffic is small, I was not able to give up on those users who would like to use what I am currently working on in Irish and coincidentally end up using IE11.
Due to the lack of support for the <template> element in Internet Explorer 11, the following fails:
// https://github.com/l20n/builds/blob/0d58a55afa6ae5aa868b8002fae5ee0e2124e35d/l20n.js#L94
var templateSupported = 'content' in document.createElement('template');
It's worth noting that the l20n.js team doesn't consider IE to be supported.
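That feature test can be factored out and used to decide whether to load a polyfill before l20n runs - a sketch, with the polyfill path being an assumed placeholder:

```javascript
// Mirrors l20n's own check: a real <template> element exposes a `content`
// DocumentFragment, while IE11's generic element does not.
function templateSupported(doc) {
  return 'content' in doc.createElement('template');
}

// Hypothetical usage in the page, before l20n.js loads:
// if (!templateSupported(document)) {
//   loadScript('/polyfills/html-template-element.js'); // assumed path
// }
```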

Import.io > Extractor: page never loads, so I cannot extract data

Import.io works pretty well, but there is one website I would like to extract data from. When I start the extractor and enter the URL http://restaurant.michelin.fr/restaurants/france/75000-paris/restaurants-michelin/page-4/, it loads. Then I press the ON button, but the page won't load; nothing is displayed - just a blank page that looks like it's still loading. What can I do in that case? I've also tried the crawler, with the same result. I restarted the program and the computer, but it's always the same issue. Thanks a lot.
The import.io desktop app's browser uses Firefox 24. A few websites aren't compatible with that browser, and this appears to be what is happening in this case.
It does however work in Magic! https://magic.import.io/
Once you have published the Magic API, you can then use the tools in MyData such as Bulk and Chain to add more URLs.
I have just tried to save a Magic API and it worked a treat. The only disadvantage here is that you won't be able to edit the columns until after you have extracted the data.

Selenium output url

I need to test the output of a page without making it pop up - just by going to the URL.
If you have a PDF or Word export link, how would you test its contents?
The answer is to step outside of Selenium.
To test a URL without making a window pop up, consider downloading the contents of the URL without Selenium. I'm assuming you are using Java, so look into downloading the contents of a URL in Java. It also depends on what you are testing; this works if you just want to check whether a URL is broken, for example.
To check the contents of a PDF or Word file, you also can't use Selenium. You will need to find a Java library to parse PDF and Word files, and then use it to read the content.
As you can see, both require a bit of research on your part. The key is not getting hung up on Selenium and making use of the rest of the programming language.
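The download step can be sketched as follows - in Node-flavoured JavaScript rather than the Java the answer assumes, since the idea is the same: fetch the export link's bytes directly, with no browser window, then hand them to whatever PDF/Word parsing library you choose (requires Node 18+ for the built-in fetch):

```javascript
// Download a URL's raw bytes without opening a browser window.
async function downloadBody(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  return Buffer.from(await res.arrayBuffer());
}
```

A simple "is this link broken?" check is just whether downloadBody resolves or throws; content checks then run on the returned bytes.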