How to include static data files in an Arch Linux AUR package?

I have a small script that uses some static text files as a data source. I want to make an Arch Linux AUR package for this script. I plan to install the script into /usr/bin/ and the static text files somewhere local, such as ~/.data_files.
I have several static files: data1.txt, data2.txt, data3.txt. Basically, I need the package manager to install the script into /usr/bin/, create the ~/.data_files directory, and copy the static files there.
How should I configure my PKGBUILD in such case?
Here is my current version:
# Maintainer: john doe
pkgname=myscript
pkgver=1.0
pkgrel=1
pkgdesc="test script"
arch=(any)
url="https://github.com/me/myscript"
license=('MIT')
depends=('file')
source=('https://raw.githubusercontent.com/me/myscript/master/myscript')
md5sums=('1fa410f1647700a6da3ab0ebyc52465d')
package() {
    install -D -m 755 myscript "${pkgdir}/usr/bin/myscript"
}

Let me quote one of the most active moderators of the Arch Linux forums, who said here:
Do not touch the user's home directory in a PKGBUILD, and especially do not delete things, because weird bugs can do bad things.
Now, as an AUR maintainer, I would suggest putting your static files in the folder /usr/share/${pkgname}/, as is also suggested in the Arch packaging standards.
Here's my suggestion (open to edits, suggestions, and advice):
# Maintainer: john doe <john at doe dot com>
pkgname=myscript
pkgver=1.0
pkgrel=1
pkgdesc="test script"
arch=(any)
url="https://github.com/me/myscript"
license=('MIT')
depends=('file')
source=('https://raw.githubusercontent.com/me/myscript/master/myscript'
        'data1.txt'
        'data2.txt'
        'data3.txt')
sha256sums=('77eff738ea7fdeee5f5707cafdf34f74e3bf8df3b8b656a08a8740a45a7e22c45a7e60c31b13c71f5ee04aff9c82ac43abb39c37b2ea6b02a6454e262f336f73'
            'sha256Ofdata1.txt'
            'sha256Ofdata2.txt'
            'sha256Ofdata3.txt')
package() {
    install -Dm755 myscript "${pkgdir}/usr/bin/myscript"
    install -Dm644 data1.txt "${pkgdir}/usr/share/${pkgname}/data1.txt"
    install -Dm644 data2.txt "${pkgdir}/usr/share/${pkgname}/data2.txt"
    install -Dm644 data3.txt "${pkgdir}/usr/share/${pkgname}/data3.txt"
}
Because of MD5's known vulnerabilities I used SHA-256, but you can choose another sha* variant for the integrity check.
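With this layout the script itself should read its data from the system-wide location rather than ~/.data_files. A minimal sketch, assuming the script is a shell script (the DATA_DIR variable is just illustrative):

#!/bin/bash
# Data files are installed by the package under /usr/share/myscript/.
DATA_DIR="/usr/share/myscript"
# Example: read the first data source.
cat "${DATA_DIR}/data1.txt"

If a user needs a writable copy, the script can copy these defaults into ~/.data_files on first run; that keeps the PKGBUILD itself away from $HOME, per the quote above.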

Related

Install mozroot-certdata package on a read only root file system

I have an established Yocto build which I'm now trying to switch over to a read-only root file system (e.g. EXTRA_IMAGE_FEATURES += "read-only-rootfs").
However, I'm running into an issue with a recipe in the meta-mono layer: mozroot-certdata. I see the culprit is the pkg_postinst script (http://git.yoctoproject.org/cgit/cgit.cgi/meta-mono/tree/recipes-mono/mozroot-certdata/mozroot-certdata_1.0.0.bb), which needs to modify the root file system on first boot, and which the build system correctly flags as impossible with a read-only root file system:
ERROR: The following packages could not be configured offline and rootfs is read-only: ['mozroot-certdata']
My question is: is there a way to get these mozroot certs installed and configured with mono during the build process, such that the root file system does not need to be modified at boot/run time?
Well, I had a brief look at this late this summer, as I'm also using a read-only rootfs. The problem is that mozroots.exe is hardcoded to write into /usr/share/.mono/certs and does not respect your sysroot. You could probably hack mozroots.exe to actually write the imported files into the sysroot, though my time limit didn't allow me to try this (and I have never looked into mono at all...).
My solution was instead to do the import at every boot. (It could also be done only once, but then the issue of updates comes up.) To achieve this, I made a bind mount on the directory where mozroots.exe wants to write the certdata.
Details of my solution
Add a file volatile-binds.bbappend with the following contents:
VOLATILE_BINDS += "\
    /tmp/mono-certs /usr/share/.mono/certs \n\
"
That will make a bind mount from /tmp/mono-certs to /usr/share/.mono/certs, thus you'll be able to import the certs.
Then I added a service file and a mozroot-certdata_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
DEPENDS += "mono-native"
SRC_URI += "file://mozroot-certdata.service"
inherit systemd
SYSTEMD_SERVICE_${PN} = "mozroot-certdata.service"
do_install_append() {
    mkdir -p ${D}${datadir}/.mono/certs
    mkdir -p ${D}${systemd_system_unitdir}
    install -m 440 ${WORKDIR}/mozroot-certdata.service ${D}${systemd_system_unitdir}/mozroot-certdata.service
}
FILES_${PN} += "${datadir}"
# Empty the post-installation script, as we import the certs at boot instead.
pkg_postinst_${PN} () {
    # mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file $D/${sysconfdir}/ssl/certdata.txt
}
The service file mozroot-certdata.service:
[Unit]
Description=Import certificates to Mono
After=tmp-mono-certs.service
[Service]
Type=oneshot
ExecStart=/usr/bin/mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
[Install]
WantedBy=multi-user.target
is there a way to get these mozroot certs installed and configured with mono during the build process
Yes, but it requires the mozroots binary to be executable at rootfs creation time. See Post-Installation Scripts in the documentation.
The 'else' branch in pkg_postinst is what gets executed at that time, and if it succeeds, the delayed postinst is not needed (and you shouldn't get a build error). A mono-native recipe already exists, so you should be able to depend on that and fix the else branch of the pkg_postinst function so it finds native mono and mozroots.exe and writes to the correct place under $D.
As Anders mentioned, this alone is not enough if you care about package-based upgrades.
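For reference, a rough sketch of that shape, following the usual Yocto on-target/offline postinst pattern (the mozroots invocation is taken from the commented-out line in the recipe above; that mozroots can actually be made to write under $D is, as Anders noted, an unverified assumption):

pkg_postinst_${PN} () {
    if [ x"$D" = "x" ]; then
        # On target: plain runtime import.
        mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
    else
        # Offline, at rootfs creation time: runs with native mono from
        # mono-native; fail hard rather than deferring to first boot,
        # which a read-only rootfs cannot do.
        mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file $D/etc/ssl/certdata.txt || exit 1
    fi
}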

Create local debian repository

My goal is to demonstrate creating a local debian repository with controlled versions of tools used (e.g. compiler versions) to make a build system more predictable.
I've tried to follow this example: http://linuxconfig.org/easy-way-to-create-a-debian-package-and-local-package-repository
but when I get to the apt-get update stage, I always get a 404 not found on the repository I've added.
The apache2 server is running, I can view the default page installed at http://localhost/html/index.html.
I am trying this with the file fortune-mod_1%3a1.99.1-7_amd64.deb installed to /var/www/debs. I create the Packages.gz file as the tutorial suggests:
dpkg-scanpackages debs /dev/null | gzip -9c > debs/Packages.gz
I also add a new file: /etc/apt/sources.list.d/myppa.list with this line:
deb http://localhost debs/
I restart the apache2 service just in case:
sudo service apache2 restart
but running:
sudo apt-get update
still produces this error:
W: Failed to fetch http://localhost/debs/Packages 404 Not Found
Is there something basic I'm missing? Ultimately, I'd like to get this working over a LAN, but first have to get it working on a single machine.
EDIT: I'm doing this on Ubuntu 14.04.
tl;dr: use aptly.
It's the easiest apt repository management tool I've found, and it comes with a neat tutorial showing how to create, populate, and publish your own apt repository.
References:
https://www.aptly.info/
https://www.aptly.info/tutorial/repo/
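For a flavor of the workflow from that tutorial (the repo name and distribution here are illustrative, and -skip-signing sidesteps the GPG setup that publishing otherwise expects):

aptly repo create -distribution=trusty -component=main myrepo
aptly repo add myrepo fortune-mod_1%3a1.99.1-7_amd64.deb
aptly publish repo -skip-signing myrepo
aptly serve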
I ended up solving the problem. It was an issue with the default document root being different in the tutorial than on my system. All I did was move my debs folder into html (the document root turns out to be /var/www/html, not just /var/www, on my install). That did the trick.
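In concrete terms, the fix amounted to something like this (paths as in the question; the index is regenerated from the document root so it matches the debs/ path in the sources list):

sudo mv /var/www/debs /var/www/html/debs
cd /var/www/html
dpkg-scanpackages debs /dev/null | gzip -9c > debs/Packages.gz
sudo apt-get update

With the folder under the real document root, http://localhost/debs/Packages resolves and the 404 disappears.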

Using Sonarqube with Xcode

I am following this article to integrate SonarQube with Xcode and analyse Objective-C code. Though the setup is functional and I get no errors/warnings after running the shell script, no violations are shown in the dashboard. All I get to see are basic metrics like the number of lines of code, number of files, etc.
Has anyone tried this who can guide me further?
In addition to the article you specified above, I have a few additions. You can follow the steps below.
Prerequisites:
Sonar
Sonar-runner
SonarQube Objective-C plugin (Licensed)
XCTool
OCLint (violations) and gcovr (code coverage)
MySQL and JDK
Installation Steps:
Download and install the MySQL dmg, and then start the MySQL server from System Preferences or via the command line (after a restart it has to be the command line).
To start - sudo /usr/local/mysql/support-files/mysql.server start
To restart - sudo /usr/local/mysql/support-files/mysql.server restart
To stop - sudo /usr/local/mysql/support-files/mysql.server stop
Download and install latest JDK version.
Go to the terminal and enter the following commands to install the prerequisites. (Homebrew is the package manager for macOS; to install Homebrew, enter the command:
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)")
Sonar - brew install sonar
Sonar-runner - brew install sonar-runner
XCTool - brew install xctool
OCLint - brew install oclint, or for version 0.8.1 (updated):
brew install https://gist.githubusercontent.com/TonyAnhTran/e1522b93853c5a456b74/raw/157549c7a77261e906fb88bc5606afd8bd727a73/oclint.rb
gcovr - brew install gcovr
Configuration:
- Set the Sonar environment path:
export SONAR_HOME=/usr/local/Cellar/sonar-runner/2.4/libexec
export SONAR=$SONAR_HOME/bin
export PATH=$SONAR:$PATH
Finally, the command echo $SONAR_HOME should return the path /usr/local/Cellar/sonar-runner/2.4/libexec.
- Set up the MySQL DB:
export PATH=${PATH}:/usr/local/mysql/bin
mysql -u root
CREATE DATABASE sonar_firstdb;
CREATE USER 'sonar'@'localhost' IDENTIFIED BY 'sonar';
GRANT ALL PRIVILEGES ON sonar_firstdb.* TO 'sonar'@'localhost';
FLUSH PRIVILEGES;
exit
- Set Sonar configuration settings:
vi /usr/local/Cellar/sonar/5.1.2/libexec/conf/sonar.properties
You can leave most options commented out; just set the credentials and the MySQL connection, and make sure you enter the correct database name, e.g.:
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
vi /usr/local/Cellar/sonar-runner/2.4/libexec/conf/sonar-runner.properties
Again, leave most options commented out except the credentials and the MySQL connection, making sure the database name is correct, e.g.:
sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar_firstdb?useUnicode=true&characterEncoding=utf8
Start Sonar using the command:
sonar start
This will launch Sonar, so navigate to http://localhost:9000 in your browser of choice. Log in (admin/admin) and have a look around.
Now you have to install the Objective-C or Swift plugin.
Go to Settings -> System -> Update Center -> Available Plugins and install the required plugin.
You have to restart Sonar to complete the installation once the plugin is added, and add the license key once the plugin is installed.
Through the terminal, go to the root directory of the project you want Sonar to inspect, and create a project-specific properties file with the following command:
vi sonar-project.properties
Add the following project-specific properties, editing the values to match your project:
# Required configuration
sonar.projectKey=com.payoda.wordsudoku
sonar.projectName=DragDrop
sonar.projectVersion=1.0
sonar.language=objc

# Project description
sonar.projectDescription=Sample description

# Path to source directories
sonar.sources=~/path to your project

# Path to test directories (comment if no tests)
#sonar.tests=testSrcDir

# Xcode project configuration (.xcodeproj or .xcworkspace)
# -> If you have a project: configure only sonar.objectivec.project
# -> If you have a workspace: configure sonar.objectivec.workspace and sonar.objectivec.project,
#    and use the latter to specify which project(s) to include in the analysis (comma-separated list)
sonar.objectivec.project=DragDrop.xcodeproj
#sonar.objectivec.workspace=myApplication.xcworkspace

# Scheme to build your application
sonar.objectivec.appScheme=DragDrop

# Scheme to build and run your tests (comment the following line if you don't have any tests)
#sonar.objectivec.testScheme=myApplicationTests

##########################
# Optional configuration

# Encoding of the source code
sonar.sourceEncoding=UTF-8

# JUnit report generated by run-sonar.sh is stored in sonar-reports/TEST-report.xml
# Change it only if you generate the file on your own
# The XML files have to be prefixed by TEST- otherwise they are not processed
#sonar.junit.reportsPath=sonar-reports/

# Cobertura report generated by run-sonar.sh is stored in sonar-reports/coverage.xml
# Change it only if you generate the file on your own
#sonar.objectivec.coverage.reportPattern=sonar-reports/coverage*.xml

# OCLint report generated by run-sonar.sh is stored in sonar-reports/oclint.xml
# Change it only if you generate the file on your own
#sonar.objectivec.oclint.report=sonar-reports/oclint.xml

# Paths to exclude from the coverage report (tests, 3rd-party libraries, etc.)
#sonar.objectivec.excludedPathsFromCoverage=pattern1,pattern2
sonar.objectivec.excludedPathsFromCoverage=.*Tests.*

# Project SCM settings
#sonar.scm.enabled=true
#sonar.scm.url=scm:git:https://...
Save the file; you can reuse it for other projects.
In the project root directory, run the command sonar-runner.
You should try it with an older version of SonarQube (< 4.0 usually works).

How can I install matplotlib for my AWS Elastic Beanstalk application?

I'm having a hell of a time deploying matplotlib on AWS Elastic Beanstalk. I gather that my issue comes from some dependencies and the way that EB deploys packages installed with pip, and I have attempted to follow the instructions here on SO for resolving the issue.
I first tried deploying incrementally, as suggested in the linked answer, by adding pieces of the matplotlib package stack to my requirements.txt file in stages. But this takes forever (for each stage) and is prone to failure and timing out (which seems to leave build directories behind that stall subsequent package installations).
So the simple solution mentioned off-handedly at the end of the answer appeals to me: just eb ssh, activate the virtualenv with
source /opt/python/run/venv/bin/activate
and pip install packages manually. But I can't get this to work either. First, I'm often confronted with left-behind build directories (as mentioned above):
pip can't proceed with requirement 'xxxx' due to a pre-existing build directory.
location: /opt/python/run/venv/build/xxxx
This is likely due to a previous installation that failed.
pip is being responsible and not assuming it can delete this.
Please delete it and try again.
But even after removing these, I consistently get
Exception:
Traceback (most recent call last):
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/basecommand.py", line 122, in main
status = self.run(options, args)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/commands/install.py", line 278, in run
requirement_set.prepare_files(finder, force_root_egg_info=self.bundle, bundle=self.bundle)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1197, in prepare_files
do_download,
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/req.py", line 1375, in unpack_url
self.session,
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/download.py", line 582, in unpack_http_url
unpack_file(temp_location, location, content_type, link)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 625, in unpack_file
untar_file(filename, location)
File "/opt/python/run/venv/lib/python2.7/site-packages/pip/util.py", line 533, in untar_file
os.makedirs(location)
File "/opt/python/run/venv/lib64/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: '/opt/python/run/venv/build/xxxx'
in response to pip install xxxx (and sudo pip fails with sudo: pip: command not found).
What can I do to get this working on AWS EB? In particular, what do I need to do to get the simple SSH+pip approach working, or is there some other, better (simpler!) approach I should try?
FWIW, I have a .ebextensions/software.config with
packages:
  yum:
    gcc-c++: []
    gcc-gfortran: []
    python-devel: []
    atlas-sse3-devel: []
    lapack-devel: []
    libpng-devel: []
    freetype-devel: []
    zlib-devel: []
and a requirements.txt that ends with
pytz==2014.10
pyparsing==2.0.3
python-dateutil==2.4.0
nose==1.3.4
six>=1.8.0
mock==1.0.1
numpy==1.9.1
matplotlib==1.4.2
After about 4 hours, I've gotten as far as numpy (as reported by pip list in the EB virtualenv).
And (in case it matters) the user who is SSHing is in a group with the policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "autoscaling:*",
        "cloudwatch:*",
        "s3:*",
        "sns:*",
        "cloudformation:*",
        "rds:*",
        "sqs:*",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
I have used many approaches to build and deploy numpy/scipy/matplotlib, on Windows as well as Linux systems. I have used system-provided package managers (aptitude, rpm), 3rd-party package managers (pypm), Python package managers (easy_install, pip), source releases, used different build environments/tools (GCC, but also Intel MKL, OpenMP). While doing so, I have run into many many quite annoying situations, but have also learned a lot about the pros and cons of each approach.
I have no experience with Elastic Beanstalk (EB), but I do have experience with EC2. I see that you can SSH into an instance and poke around. So, what I suggest further below is based on:
- the above-stated experiences,
- the more or less obvious boundary conditions regarding Beanstalk,
- your application scenario, described in another question here on SO, and
- the fact that you just want to get things running, quickly.
My suggestion: start off with not building these things yourself. Do not use pip. If possible, try to use the package manager of the Linux distribution in place and let it handle the installation of everything required for you, with a single command (e.g. sudo apt-get install python-matplotlib).
Disadvantages:
possibly old package versions, depending on the Linux distro in use
non-optimized builds (e.g. not built against e.g. Intel MKL or not leveraging OpenMP features or not using special instruction sets)
Advantages:
it quickly downloads, because packages are most likely cached near your machine
it quickly installs (these packages are pre-built, no compilation involved)
it just works
So, I hope you can just use aptitude or rpm or whatever on these machines and inherit the great work that the distribution package maintainers do for you, behind the scenes.
Once you are confident in your application and identified some bottleneck or issue, you might have reason to use a newer version of numpy/matplotlib/... or you might have reason to have a faster version of these, by creating an optimized build.
Edit: EB-related details of outlined approach
In the meantime we have learned that EB by default runs Amazon Linux, which is based on Red Hat Enterprise Linux. Accordingly, it uses yum as its package manager, and packages are in RPM format.
Amazon provides documentation about available packages. In Amazon Linux 2014.09, these packages are available: http://aws.amazon.com/de/amazon-linux-ami/2014.09-packages/
In this list we find
numpy-1.7.2
python-matplotlib-0.99.1.2
This version of matplotlib is very old; according to the changelog it is from September 2009: "2009-09-21 Tagged for release 0.99.1".
I did not anticipate it being so old, but still, it might be sufficient for your needs. So we proceed with our plan (though I'd understand if that's a blocker).
Now, we have learned that system Python and EB Python are isolated from each other. That does not mean that EB Python cannot access system Python's site packages; we just need to tell it so. A simple and clean method is to set up a proper directory structure with the packages that should be accessible to EB Python, and to communicate this directory to EB Python via sys.path.
Clearly, we need to customize the bootstrapping phase of EB containers. The available tools are documented here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Obviously, we want to make use of the packages approach, and tell EB to install the numpy and python-matplotlib packages via yum. So the corresponding config file section should contain:
packages:
  yum:
    numpy: []
    python-matplotlib: []
Explicitly mentioning numpy might not be necessary, it likely is a dependency of python-matplotlib.
Also, we need to make use of the commands section:
You can use the commands key to execute commands on the EC2 instance. The commands are processed in alphabetical order by name, and they run before the application and web server are set up and the application version file is extracted.
The following three commands create the above-mentioned directory and set up symbolic links to the numpy/mpl installation paths (these paths hopefully are available at the moment these commands are executed):
commands:
  00-create-dir:
    command: "mkdir -p /opt/py26-selected-site-packages"
  01-link-numpy:
    command: "ln -s /usr/lib64/python2.6/site-packages/numpy /opt/py26-selected-site-packages/numpy"
  02-link-mpl:
    command: "ln -s /usr/lib64/python2.6/site-packages/matplotlib /opt/py26-selected-site-packages/matplotlib"
Two uncertainties: first, the AWS docs do not clarify whether packages are processed before commands are executed. You have to try; if it does not work, use container_commands. Secondly, it is just an educated guess that /usr/lib64/python2.6/site-packages/matplotlib is available after installing python-matplotlib. It should be installed to this place, but it may end up somewhere else. This needs to be tested. Numpy should end up where specified, as inferred from this article.
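One quick way to test that second guess on a running instance (via eb ssh) is to ask rpm where the package actually put its files and to check the symlink targets:

rpm -ql python-matplotlib | head
ls /usr/lib64/python2.6/site-packages/numpy /opt/py26-selected-site-packages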
[UPDATE FROM SEB]
AWS documentation says "The cfn-init helper script processes these configuration sections in the following order: packages, groups, users, sources, files, commands, and then services."
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
So, your approach is safe
[/UPDATE]
The crucial step, as pointed out in the comments to this answer, is to tell your Python app where to look for packages. Direct modification of sys.path before attempting the import is a reliable way to take control of this. The following code adds our special directory to the set of directories in which Python looks for packages, and then attempts to import matplotlib:
import sys
sys.path.append("/opt/py26-selected-site-packages")
from matplotlib import pyplot
The order in sys.path defines priorities, so in case there is any other matplotlib or numpy package available in one of the other directories, it might be a better idea to
sys.path.insert(0, "/opt/py26-selected-site-packages")
However, this should not be necessary if our whole approach was well thought-through.
To add to Jan-Philip's answer:
AWS Elastic Beanstalk uses the Amazon Linux distribution (except for .NET environments). Amazon Linux uses the yum package manager, and matplotlib is available in Amazon's software repository:
[ec2-user@ip-1-1-1-174 ~]$ yum list | grep matplot
python-matplotlib.x86_64    0.99.1.2-1.6.amzn1    amzn-main
If this version is the one you need for your application, I would try to simply modify your .ebextensions/software.config file and add the package to its yum section:
packages:
  yum:
    python-matplotlib: []
    python-devel: []
    atlas-sse3-devel: []
    lapack-devel: []
    libpng-devel: []
    freetype-devel: []
    zlib-devel: []
A last note about AWS Elastic Beanstalk and SSH.
While Amazon gives you the possibility to SSH into your Elastic Beanstalk instances, you should use this only for debugging purposes, to understand why your app failed or is not installing as expected.
Other than that, your deployment must be 100% automatic. When Elastic Beanstalk (Auto Scaling, to be precise) scales your infrastructure out (adds more instances) or in (terminates instances) depending on your application workload, all your manual configuration will be lost.
Best practice is not to install SSH keys on your production environment at all; it further reduces the attack surface.
I might be a bit late to this question, but as AWS and a lot of cloud service providers are moving to Docker, and taking into consideration that you haven't specified the platform, I have a fast solution to your question:
Use the generic Docker platform.
I created some images with Python, NumPy, SciPy and matplotlib preinstalled, so you can pull them and start using them directly with one line of code.
Python 2.7 (this one also has the numpy and matplotlib versions you were specifying):
sudo docker pull chuseuiti/pynuscimat2.7
Python 3.4
sudo docker pull chuseuiti/pynusci
However you can create your own image or modify existing images.
In case you want to automate your instances, you can pass a Dockerfile to AWS with the definition of your image.
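A minimal Dockerfile for that could look roughly like this, building on the image above (the application.py entry point and the port are hypothetical placeholders, not something these images prescribe):

# Sketch: extend the pre-built scientific-Python image mentioned above.
FROM chuseuiti/pynuscimat2.7
# Copy your application code into the image.
COPY . /app
WORKDIR /app
# Port and entry point are placeholders; adapt them to your app.
EXPOSE 8080
CMD ["python", "application.py"]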
Tip, in case you don't know about Docker:
You need to log in before being able to pull:
sudo docker login
After pulling the image, you can create and work in a container based on it with the following command:
sudo docker run -i -t chuseuiti/pynuscimat2.7 bash
PS: at least on the free tier, AWS is always complaining about running out of time with SciPy and matplotlib; they take too much time to install, which is why I use this option.

How do you compile a Pharo VM without an image?

I have already cloned the VM and installed all dependencies for my platform. Now I am a bit confused, because a couple of guides suggest that a Pharo image should be started to generate the C sources translated from Slang.
"Unix"
PharoVMBuilder buildUnix32.
"OSX"
PharoVMBuilder buildMacOSX32.
"Windows"
PharoVMBuilder buildWin32.
But how do you generate a VM when you cannot start a VM on your platform? This sounds like a chicken-and-egg problem.
Does this mean it is not possible to build a VM if you cannot start an image on that platform?
If you download pre-generated sources from the CI server as suggested by Esteban, you don't need the pharo-vm sources cloned from any repository. Just uncompress them into a new folder and build from there.
Assuming you have your new sources in c:\phs, open directories.cmake and change the hardcoded paths as follows:
set(topDir "c:/phs/")
set(buildDir "c:/phs/build")
set(thirdpartyDir "${buildDir}/thirdparty")
set(platformsDir "c:/phs/platforms")
set(srcDir "c:/phs/src")
set(srcPluginsDir "${srcDir}/plugins")
set(srcVMDir "${srcDir}/vm")
set(platformName "win32")
set(targetPlatform ${platformsDir}/${platformName})
set(crossDir "${platformsDir}/Cross")
set(platformVMDir "${targetPlatform}/vm")
set(outputDir "c:/phs/results")
As you cannot start a VM, I suppose you need to change at least the compilation flags used to generate the sources on the CI server. They are in c:\phs\build\CMakeLists.txt, especially the following flags:
-march=... (your processor architecture; search for "Safe Cflags")
Remove -g0, which suppresses debug information
Remove -O2 (optimizations)
Remove -DNDEBUG
Change -DDEBUGVM=0 to -DDEBUGVM=1
and finally start the build script:
cd /c/phs/build
bash build.sh
You need to pre-generate the sources elsewhere, or take pre-generated sources from another place.
Let's assume you want to compile for a kind of Unix; you can download pre-generated sources from here:
https://ci.inria.fr/pharo/view/3.0-VM/job/PharoSVM/Architecture=32,Slave=vm-builder-linux/lastSuccessfulBuild/artifact/sources.tar.gz (for a stack vm)
https://ci.inria.fr/pharo/view/3.0-VM/job/PharoVM/Architecture=32,Slave=vm-builder-linux/lastSuccessfulBuild/artifact/sources.tar.gz (for a cog vm)