Official bzip2 site is offline - bzip2

Recently I found out that the official bzip2 page (http://www.bzip.org/index.html) has changed. I had a script that downloaded the source code from bzip.org and compiled it. Now the source code and everything else are no longer on that site.
I found that there is a possible copy of that code at https://github.com/enthought/bzip2-1.0.6.
But I am unsure whether this is the real bzip2 code, without any harmful changes. Is there another official bzip.org site, or another site where I can download the official source code?

You may still access the source through the Internet Archive's Wayback Machine:
https://web.archive.org/web/20180624184806/http://www.bzip.org/downloads.html
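Wayback Machine snapshot URLs follow a fixed pattern (timestamp plus original URL), so other pages of the old site can be reached the same way. A minimal sketch, using the timestamp from the snapshot linked above:

```python
# Wayback Machine snapshot URLs have the form
# "https://web.archive.org/web/<timestamp>/<original URL>".
# The timestamp below is the one from the archived downloads page linked above.
def wayback_url(timestamp: str, original: str) -> str:
    return f"https://web.archive.org/web/{timestamp}/{original}"

print(wayback_url("20180624184806", "http://www.bzip.org/downloads.html"))
# -> https://web.archive.org/web/20180624184806/http://www.bzip.org/downloads.html
```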

Related

Remove requests to any GAFAM (google api here) from Nuxt

Is there an easy way to remove every query to the Google Fonts API ("fonts.gstatic.com") in Nuxt.js? I would rather serve the font files myself.
So far I have tried removing every mention of fonts.gstatic.com from .nuxt/components/index.js, but the build command resets my modifications, so nothing changed.
My configuration is quite simple: I initialized an app with @nuxt/content-theme-docs.
Since the concern is aimed at GAFAM (avoiding the use of Google Fonts), the solution is to fork the package from the Nuxt team and strip out the related module.
Here is where to find it: https://github.com/nuxt/content/search?q=fonts
This Nuxt module aims at fast, pain-free, easy-to-set-up documentation, which is probably why the Nuxt team used such a package (it is still the go-to way of using Google Fonts as of today).
You can follow this answer if you want to use a module on build time: https://stackoverflow.com/a/68166329/8816585
Otherwise, you can use this website to host your fonts locally (link them from your CSS file and you should be fine): https://google-webfonts-helper.herokuapp.com/fonts
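That helper site hands you the font files together with @font-face rules along these lines. This is an illustrative sketch (the family name and file paths depend on what you actually download); the point is that the src URLs point at your own static directory instead of fonts.gstatic.com:

```css
/* Self-hosted font, replacing the fonts.gstatic.com request.
   Family name and file paths here are illustrative. */
@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  font-display: swap;
  src: local('Roboto'),
       url('/fonts/roboto-v20-latin-regular.woff2') format('woff2'),
       url('/fonts/roboto-v20-latin-regular.woff') format('woff');
}
```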

404 in RESPONSIVE FileManager's Java Large File Uploader

With SocialEngine's RESPONSIVE FileManager-based file uploader, itself a plugin component for the rich-text editor, we have a problem whereby, when a user clicks "JAVA Upload (big size files)", the uploader frame loads a 404 error.
In the error log, the following line is the only indication I have for this problem:
/filemanager/dialog.php?type=4&descending=false&crossdomain=1&lang=en&akey=key
So it's not immediately obvious what framework or plugin Responsive FileManager expects to encounter but can't actually find, or, for that matter, where it's looking for it. (I have RTFM, but there is nothing about configuring the Java uploader in the manual. I have also tried reading the dialog.php source code, but I couldn't find anything particularly useful there.)
It may perhaps be looking for the file wjhk.jupload.jar in the filemanager/uploader/ directory. But I'm not sure why it can't find that file, or why it gets an error when it attempts to do so.
Surely I am not the only person to have this problem?
SocialEngine doesn't come with a Java uploader at all, and it's largely advised against using Java for file uploading on the web.
This sounds like the work of a 3rd-party plugin (which might be misconfigured?). Check all your plugin files and make sure they were all uploaded to your server. It's also possible that your host disallows .jar files, as they tend to be vectors for abuse, so it is worth contacting them.
Finally, contact the original developer of the plugin with your issue.

Where can I download the Slang sources to SqueakVM?

Is there an archive somewhere that hosts the Slang sources for SqueakVM as a zip of .st sources? I want to just download them rather than attempt to coerce VMMaker to load into a running ST image and file them out myself. I'm not trying to avoid this out of laziness, but because finding a set of instructions that actually works on a modern ST like Pharo, amongst 30 years of archived discussions and dead links, is apparently beyond me.
I have the Blue Book, but it's pre-Squeak, and not OCR'd either, so I'd have to type it all in by hand.
To clarify: I don't want to run VMMaker. I don't want to build a new SqueakVM. I just want to be able to open the Slang sources for SqueakVM (not Cog) in a text editor and read them.
All Slang source, bar recent releases of the Pharo fork, is in the VMMaker package on http://source.squeak.org/VMMaker (project page: http://source.squeak.org/VMMaker.html). Source packages are zip archives with the extension .mcz, e.g. http://source.squeak.org/VMMaker/VMMaker.oscog-eem.839.mcz:
Archive:  package-cache/VMMaker.oscog-eem.839.mcz
  Length     Date    Time   Name
 --------    ----    ----   ----
       16  07-30-14 19:41   package
   724852  07-30-14 19:41   version
  7186510  07-30-14 19:41   snapshot/source.st
  7562064  07-30-14 19:41   snapshot.bin
 --------                   -------
 15473442                   4 files
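Since an .mcz package is an ordinary zip archive, the full Smalltalk source can be pulled out of the snapshot/source.st member with any zip tool. A minimal Python sketch (it builds a tiny stand-in archive in memory so it is self-contained; with a real download you would open the .mcz file directly):

```python
import io
import zipfile

# Build a tiny stand-in .mcz in memory. A real VMMaker .mcz has the same
# layout: a "package" member plus the full source in "snapshot/source.st".
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as mcz:
    mcz.writestr("package", "VMMaker")
    mcz.writestr("snapshot/source.st", "Object subclass: #VMMaker")

# Extract the Smalltalk source member, exactly as you would from a download:
# zipfile.ZipFile("VMMaker.oscog-eem.839.mcz") instead of ZipFile(buf).
with zipfile.ZipFile(buf) as mcz:
    source = mcz.read("snapshot/source.st").decode("utf-8")

print(source)
```

The command-line equivalent is simply `unzip -p VMMaker.oscog-eem.839.mcz snapshot/source.st > source.st`.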
However, as Nicolas Cellier says, accessing the source through Monticello is easier; for that you'll need to build a Squeak or Pharo image containing the loaded VMMaker and supporting packages. You'll also be able to run the VM simulator therein to really explore it properly.
Scripts to build VMMaker images for the Cog branch are in the following svn directory, a part of the Cog svn source tree which contains generated sources from Slang, platform support code and build directories, in addition to the image directory:
http://www.squeakvm.org/svn/squeak/branches/Cog/image/
BuildSqueakTrunkVMMakerImage.st
The Cog VM is the fast JIT VM for Squeak. I am its principal author. My web site for Cog is http://www.mirandabanda.org/cogblog/. The site contains several blog posts that describe the VM, download directories for VM builds, an overview of the project, etc.
HTH
I can't help you with the sources directly, but I can give you a recipe on how to build the PharoVM for OS X (10.9). To get the Slang sources you probably don't even need to build the VM fully; you only need to do what the README on GitHub says to generate the sources (which is equivalent to step 1 in what follows).
1. Follow the instructions on https://github.com/pharo-project/pharo-vm
2. In the image, uncomment the debug line in PharoVMBuilder>>buildMacOSX32
3. In the image, remove all occurrences of the compiler flag -fno-cse-follow-jumps
4. In the build files, change the OS version number (two places) in the first 10 lines of build/CMakeLists.txt
5. Change the line //#import <OpenGL/CGLMacro.h> to #import <OpenGL/GL.h> in platforms/iOS/vm/OSX/sqSqueakOSXOpenGLView.m
6. Follow the instructions on GitHub
That's how I build my VM, anyway. These instructions worked back in March 2014 and should still be valid at the time of writing.
Update
The above does not work anymore. I've written a script for building the PharoVM on 10.9, which can be found in my GitHub repo. Feel free to use, copy, or modify it to your heart's content.
The question is: which VMMaker? The classical interpreter VM, or Cog (the JIT)?
If it's from Pharo, it will be Cog, and the answer from Max is perfect; I have nothing to add.
If it's from Squeak, I don't recommend using a .st file; Monticello is really superior.
For the classical VM, you might try one of the answers at How to load VMMaker in Squeak?
For Cog, which is constantly changing, there is no longer a prebuilt image, but if you check out the svn source from http://squeakvm.org/svn/squeak/branches/Cog you'll find startup scripts for loading all the necessary packages into a Squeak image; see the end of the README or the image/Build*.st files.
Somehow, a recipe for building an image is better than a prebuilt image, so it's progress.
By far the easiest thing would be to load VMMaker into an image and read the code in the browser(s). That is, after all, the tool intended for reading Smalltalk code. Slang is just Smalltalk with some restrictions and fudges to make it simple (hah! have you read how the codegen classes do the transform?) to output C code and thus build the VM with 'normal' tools.
If you're trying to learn about the vm you could potentially gain some help from http://www.rowledge.org/resources/tim's-Home-page/Squeak/OE-Tour.pdf which is generally considered the standard work on the Squeak vm for now.

symfony1.4-like symfony2 installation

symfony 1.x followed a good standard: the whole framework lives somewhere outside the project and is available to any project. Today I started to read the Symfony2 documentation and downloaded the 'with vendors' 2.0.1 package presented on the download page. After opening the package I was a bit surprised by what I saw. But after looking around the package I found that the only folder I needed was the 'vendors' one, so I copied its contents to my '...\lib\vendor\symfony2' folder (next to '...\lib\vendor\symfony' and '...\lib\vendor\ext'). I added it to the include path and proceeded with the documentation. And then I hit a problem: the listed command 'php app/console generate:bundle --namespace=Acme/HelloBundle --format=yml' simply produced questions. Did they forget to explain how to generate a project (structure, preconfiguration, command-line files, etc.)? And what about the '.bat' and '.sh' files?
Symfony2 is very, very different from symfony 1, especially when it comes to the directory structure. You can't simply drop in the vendors dir and expect it to work. This page explains how to set up a new Symfony2 project.
I recommend you forget symfony 1... and think of Symfony2 as something completely different from S1.
Installation is really simple, and you will need some advanced PHP knowledge just to understand how it works. But if you have worked with S1, I expect you will not have many problems :)

How to keep synchronized, per-version documentation?

I am working on a small toy project that is getting more and more releases. Until now, the documentation was just a set of pages in the WordPress blog I set up for the project. However, as time passes, new releases come out, and I should update the online documentation to match the most recent release.
Unfortunately, if I do so, the docs for the previous releases will "disappear" as my doc pages are updated to the most recent version, so I decided to include the documentation in the release package and to keep the most recent documentation available online as a web page as well.
A trivial idea would be to wget the current docs from the WordPress pages, save them into the svn, and therefore into the release package, repeating the procedure at every new release. Unfortunately, the HTML I get must be hacked by hand to fix the links (or I would have to hack WordPress to use BASE so that the HTML code is easily relocatable, something I don't want to do).
How should I handle the requirements of having at the same time:
user-browsable documentation for the proper version included in the downloadable package
most recent documentation available online (and properly styled with my web theme)
keep synchronized between the svn and the actual online contents (in wordpress, or something else that fits nicely with my wordpress setup)
easy to use
Thanks
Edit: started a bounty to see if I can lure more answers. I think this is a quite important issue, and it would be nice to have multiple hints and opinions for future readers.
I would check your pages into SVN, and then have your web server update from its local SVN working copy when you're ready to release. Put everything into SVN: WordPress, CSS, HTML, etc.
wget can convert all the links in the document for you. See the --convert-links option:
http://www.gnu.org/software/wget/manual/html_node/Advanced-Usage.html
Using this in conjunction with the other methods could yield a solution.
I think there are two problems to be solved here
how and where to keep the documentation aligned with the code
where to publish the documentation
For 1, I think it's best to:
keep the documentation in a repository (SVN, or git, or whatever you already use for the code) as a set of files instead of in a database, as it is easier to keep a history of changes (and possibly to stay on par with the code releases)
use an approach where the documentation is generated from a set of source files (you'd keep the sources in the repository), from which the HTML files for the distribution package or for publishing on the web are generated. The two could differ, as on the web you'd need to keep some version information (in the URL) that you don't need when packaging a single release.
To do 2, there are several tools that can generate a static site. One of them is Jekyll; it's in Ruby and looks quite complete and customizable.
Assuming that you use a tool like Jekyll and keep the files and sources in SVN, you might set up your repo this way:
repo/
  tags/
    rel1.0/
      source/
      documentation/
    rel2.0/
      source/
      documentation/
    rel3.0/
      source/
      documentation/
  trunk/
    source/
    documentation/
That is:
you keep the current documentation beside the source in the trunk
when you do a release, you create a tag for the release
you configure your documentation generator to generate documentation from each of the repo/tags/<release>/documentation directories, such that the documentation for each release is put in a documentation_site/<release>/ directory
So to publish the documentation (point 2 above):
you copy the contents of the documentation_site directory onto the server, putting it in the same base dir as your WordPress install or linking it from there, such that each release's docs can be accessed as http://yoursite/project/docs/relXX/
you create a link to the current release documentation such that it can always be reached as http://yoursite/project/docs/current
The trick here is to always publish the documentation under a proper release identifier (in the URL, on the filesystem) and use a link (or a redirect) to make sure that the "current documentation" on the web server points to the current release.
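Maintaining that "current" link can be a one-liner in your release procedure. A sketch in Python (the directory and release names here are illustrative; on a real server you would point it at your published docs root):

```python
import os

# Illustrative layout: two already-published release directories.
os.makedirs("docs/rel1.0", exist_ok=True)
os.makedirs("docs/rel2.0", exist_ok=True)

def update_current_link(docs_root: str, release: str) -> None:
    """Point docs_root/current at docs_root/<release>, replacing any old link."""
    link = os.path.join(docs_root, "current")
    if os.path.islink(link):
        os.remove(link)
    # Relative symlink target, so the whole docs tree stays relocatable.
    os.symlink(release, link)

# After publishing rel2.0, repoint "current" at it.
update_current_link("docs", "rel2.0")
print(os.readlink(os.path.join("docs", "current")))  # -> rel2.0
```

On a web server without symlink support, an HTTP redirect from /docs/current to /docs/relXX/ achieves the same thing.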
I have seen some programs use Help & Manual. But I am a Mac user, and I have no experience with it to know whether it's any good. I'm looking for a solution myself for the Mac.
For my own projects, if that were a need, I would create a subdirectory for the documentation and have all the files refer to each other relative to that known base. For example,
index.html          -- refers to images/example.jpg
README
-- subdirs....
images/example.jpg
section/index.html  -- links back to '../index.html',
                    -- refers to ../images/example.jpg
If the docs are included in the SVN/tarball download, then they are readable as-is. If they are generated from some original files, they would be pre-generated for the downloadable version.
Archived versions of the documentation can be unpacked/generated and placed into named directories (e.g. docs/v1.05/).
A simple PHP script can then be written to get the list of subdirectories of the /docs/ directory on the local disk and display it, highlighting the most recent, for example.
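The answer above describes this as a PHP script; the same idea sketched in Python for illustration (the directory and version names are made up, and note that real version strings may need smarter sorting than plain reverse-lexicographic order):

```python
import os

# Illustrative layout: a docs directory with one subdir per published version.
for v in ("v1.03", "v1.04", "v1.05"):
    os.makedirs(os.path.join("public_docs", v), exist_ok=True)

def list_doc_versions(docs_dir: str) -> list[str]:
    """Return the version subdirectories, newest (highest-sorting) first."""
    versions = [d for d in os.listdir(docs_dir)
                if os.path.isdir(os.path.join(docs_dir, d))]
    return sorted(versions, reverse=True)

versions = list_doc_versions("public_docs")
for i, v in enumerate(versions):
    marker = "  <-- most recent" if i == 0 else ""
    print(f"public_docs/{v}/{marker}")
```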