How do I emulate WebGL stuff with OSMesa on AWS Lambda? - selenium

I would like to take a screenshot of a web site that uses WebGL. I don't need a GPU to render the site; software emulation is enough for me.
To begin with, I tried headless Chrome. It can take screenshots of ordinary web sites, but it does not work for WebGL canvases.
I think one possibility is using OSMesa or something similar to emulate OpenGL.
I have run out of strategies for overcoming this. Is this actually possible to do?
If yes, please tell me how to do that. If no, I would like to know why.
Thanks.

Yes it is possible!
You need the right combination of:
headless-chromium binary
libosmesa.so binary (in the same directory)
launching Chrome headless with the right flags, such as (see the link below for full details): ['--use-gl=osmesa', '--enable-webgl', '--ignore-gpu-blacklist', '--homedir=/tmp', '--single-process', '--data-path=/tmp/data-path', '--disk-cache-dir=/tmp/cache-dir'] (a minimal launch sketch follows this list)
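For concreteness, here is a minimal Node.js/TypeScript sketch of the launch step, assuming the headless-chromium and libosmesa.so binaries were unpacked to /opt in the Lambda package (the paths and the debugging port are my illustrative assumptions, not part of the original answer):

    import { spawn } from "child_process";

    // Launch the packaged binary with the flags listed above.
    // Assumption: headless-chromium and libosmesa.so sit together in /opt.
    const chrome = spawn("/opt/headless-chromium", [
      "--use-gl=osmesa",              // software GL via the bundled libosmesa.so
      "--enable-webgl",
      "--ignore-gpu-blacklist",
      "--headless",
      "--remote-debugging-port=9222", // so a DevTools protocol client can connect
      "--homedir=/tmp",
      "--single-process",
      "--data-path=/tmp/data-path",
      "--disk-cache-dir=/tmp/cache-dir",
    ], { env: { ...process.env, HOME: "/tmp" } });

    // Surface Chrome's own errors in the Lambda log.
    chrome.stderr.on("data", (chunk) => console.error(String(chunk)));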
This thread on the serverless-chrome GitHub project discusses the issue and provides some binaries, which I have used to capture screenshots of WebGL content on AWS Lambda using Page.captureScreenshot().
https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-416494572
(See the comment by @apalchys on 28th August.)
This particular example uses SwiftShader which seems to be the preferred option going forward.
Note, however, that I was unable to create PDFs with Page.printToPDF() using this version: WebGL content just appears blank/white. I was able to get Page.printToPDF() working with an earlier version that uses OSMesa; see https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-371199530
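And here is a minimal sketch of the capture step over the DevTools protocol, using the chrome-remote-interface npm package (the client library is my choice; Page.captureScreenshot() and Page.printToPDF() are the protocol methods the answer refers to):

    import CDP from "chrome-remote-interface";
    import { writeFileSync } from "fs";

    async function captureWebGL(url: string): Promise<void> {
      const client = await CDP({ port: 9222 }); // connect to the Chrome launched above
      const { Page } = client;
      try {
        await Page.enable();
        await Page.navigate({ url });
        await Page.loadEventFired(); // crude; WebGL scenes may need an extra settle delay
        const shot = await Page.captureScreenshot({ format: "png" });
        writeFileSync("/tmp/screenshot.png", Buffer.from(shot.data, "base64"));
        // const pdf = await Page.printToPDF({}); // WebGL comes out blank with the
        // SwiftShader build, as noted above; the older OSMesa build worked.
      } finally {
        await client.close();
      }
    }

    captureWebGL("https://get.webgl.org/").catch(console.error);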

Related

Flashing target with GHS probe using command line

We're using the Green Hills MULTI IDE and a Green Hills debug probe to program and debug our target system (a ColdFire-based, bare-metal system). Currently I flash the target using the IDE debugger GUI, but I would prefer to use a command line interface to do it.
The documentation is fairly sketchy and only gives a very simple example. As far as I can tell, I should be able to use grun with gflash to do this, but I'm having a hard time figuring out which GUI fields map to which grun options. Does anyone have any experience with this?
Basically I need to be able to specify the following (the fields in the IDE's flash programming dialog):
Flash device (this one I've got figured out I think)
Base address
Image file (we use raw images)
Offset in flash
Alternate RAM base
Alternate flash utility
Possibly also alternate MBS script
Any tips, tricks, or pointers to better documentation than the standard GHS docs would be much appreciated!
Is the screenshot below, from the debugger command reference, of any help? You can use it to download your source onto the hardware. I can share more details if this helps. Or you can share your solution if you have already found it.
I used mpadmin for this purpose.
mpadmin -update <IP-addr_of_your-probe | -usb> firmware.frm

Is there any good tutorial or reference for writing code with Magma?

Currently I am trying to use Magma to do matrix operations on the GPU; however, I have found very little documentation for it. The only things I can refer to are its test programs and the online generated documentation (here), which is not convenient to use, and the user guide seems outdated.
If you look here, getri and potri are supported (in LAPACK naming, getri computes the inverse from an LU factorization, and potri from a Cholesky factorization).

Where can I browse the source code for libc online (like Doxygen)?

Sometimes I want to look up the implementation of a function in the standard library. I've downloaded the source code, but it's quite messy.
Just grepping is not really suitable because of the many hits.
Does anyone know of a Doxygen-style web page that has this documentation?
The same goes for the Linux kernel.
Thanks
You should check whether your distribution is using the vanilla GLIBC or the EGLIBC fork (Debian and Ubuntu had switched to EGLIBC; EDIT: they switched back around 2014).
Anyway, the repository browser for GLIBC is at http://sourceware.org/git/?p=glibc.git
http://code.woboq.org/userspace/glibc/, posted by @guruz below, is a good alternative.
The source is a bit complicated by the presence of multiple versions of the same files.
How about this for libc documentation? And perhaps this for the kernel? There is also Google Code search; here is an example search.
More on Google Code Search: you can enter search queries like package:linux-2.6 malloc to find any references to malloc in the linux-2.6 kernel.
Edit: Google Code search is now shut down. But you can access the git repo at http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git and it has search as well.
You can try http://code.woboq.org/userspace/glibc/
It has nice navigation/highlighting similar to an IDE.
To help navigate the source to glibc, perhaps try something like ctags or cscope?
Note: I get dumber every time I look at the glibc source, so please be careful! :)
If you are using GNU C (glibc), the functions (beyond the GNU extensions) follow the POSIX standard as far as their arguments, failure modes, and return values. If you want to peek under the hood at the static internals, you'll have to look at the code.
Every push (that I can remember) to try to adopt something like Doxygen for glibc was rejected for the following reasons:
Redundant: POSIX already documents almost everything that's exposed, as do the man and info pages.
Too much initial work
More work for maintainers
As far as the kernel goes, Linux does use a system very similar to Doxygen called Kerneldoc.
You can also get actual Doxygen-generated docs from http://fossies.org/dox/glibc.

How to locally test cross-domain builds?

Using the Dojo Toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
As it appears, there are three possible options (each, with their own drawbacks):
Using local (non xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the js synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it also pulls the code in via XHR, parses the dojo.require[s] inside it, and pulls those in too. This (using the loader_debug), again, is not what the loader_xd does. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment which I'm running the code in (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page. This process is not suitable for development).
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The second option (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method that was created for testing/debugging do so?
peller has described the situation. If you wanted to just generate .xd.js file for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping in the future for Dojo (maybe Dojo 2.x timeframe) we can switch to a loader that just uses script tags with a module format that has a function wrapper around the module, something that is coded by the developer. This would allow the same module format to work in the local and xd cases.
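For illustration, such a function-wrapped module might look like the following sketch (this is essentially the AMD define() format that Dojo later adopted in 1.7; the module names here are made up):

    // Ambient declaration for the loader's global define(), so the sketch is self-contained.
    declare function define(name: string, deps: string[], factory: (...mods: any[]) => any): void;

    // A module is just a named function wrapper; the loader can fetch the file
    // with a plain <script> tag, locally or cross-domain alike, no XHR + eval.
    define("my/greeter", ["dojo/dom"], function (dom) {
      return {
        greet: function (id: string) {
          dom.byId(id).textContent = "hello";
        }
      };
    });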
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: most browsers, until recently, could not do anything intelligent with code brought in through eval. Even today, Firefox reports exceptions in the console as appearing at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience; it permitted special metadata that Dojo's loader injects between the XHR and the eval to determine a file path for the source. WebKit/Safari have recently implemented this as well. I believe debugAtAllCosts pre-dates the XD loader.
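That special metadata is the sourceURL comment convention that Firebug pioneered (written //@ sourceURL= at the time, //# sourceURL= today); a tiny illustration:

    // Without the hint, errors inside the eval'd code point at the eval call
    // site; with it, the debugger attributes them to a named pseudo-file.
    var source = "function hello() { console.log('hi'); }";
    eval(source + "\n//# sourceURL=my/module.js");
    // Exceptions and breakpoints inside hello() now report my/module.js
    // instead of the anonymous eval buffer in bootstrap.js.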

Is there a script that turns a Pharo core image into something more useful, one that includes OmniBrowser?

I cannot use the most recent dev Pharo release because of some strange issues with the compiler built into Pharo. So I was wondering: is there a quick way to install all the nifty extras that the core image misses, compared to the dev image?
Every non-core Pharo image comes with the script that was used to build it. Just edit that file and drag-and-drop it onto a new core image.
You could also tell me what you don't like in the Pharo images so that I can enhance them.
There is also the script I published on the Pharo wiki that I use to build my images:
http://code.google.com/p/pharo/wiki/ImageBuildScripts
Of course it is very specific to my preferences and needs, but you can take it as an example and adapt it to your own needs.
CommandShell works with Pharo 9.10.10. You will hit several errors as you try to load the package, because Pharo lacks MVC, but you can simply proceed past the first bunch and abandon the last one (which tries to actually open a CommandShell window in Morphic). At that point you'll have a class called PipeableOSProcess that can be used very easily to grab output. For example:
(PipeableOSProcess command: 'ls /bin') output
will return the contents of your bin directory as a string.
Ok, OB itself can be easily downloaded using ScriptLoader loadSuperOB.
Damien adds (from a comment below):
The problem with that approach is that nobody really maintains it. Moreover, you miss some configuration steps that enhance the use of OB (for example, you won't have the OB-based browsers if you ask for the senders of a message from a workspace).