I have already cloned the VM and installed all dependencies for my platform. Now I am a bit confused, because a couple of guides suggest that a Pharo image should be started to generate the C sources translated from Slang.
"Unix"
PharoVMBuilder buildUnix32.
"OSX"
PharoVMBuilder buildMacOSX32.
"Windows"
PharoVMBuilder buildWin32.
But how do you generate a VM when you cannot start a VM on your platform? This sounds like a chicken-and-egg problem.
Does this mean it is not possible to build a VM if you cannot start an image on that platform?
If you download pre-generated sources from the CI server as suggested by Esteban, you don't need the pharo-vm sources cloned from any repository. Just uncompress them into a new folder and build from there.
Assuming you have your new sources in c:\phs, open directories.cmake and adjust the hardcoded paths as follows:
set(topDir "c:/phs/")
set(buildDir "c:/phs/build")
set(thirdpartyDir "${buildDir}/thirdparty")
set(platformsDir "c:/phs/platforms")
set(srcDir "c:/phs/src")
set(srcPluginsDir "${srcDir}/plugins")
set(srcVMDir "${srcDir}/vm")
set(platformName "win32")
set(targetPlatform ${platformsDir}/${platformName})
set(crossDir "${platformsDir}/Cross")
set(platformVMDir "${targetPlatform}/vm")
set(outputDir "c:/phs/results")
Since you could not start a VM, I suppose you need to change at least the compilation flags that were used to generate the sources on the CI server. They are in c:\phs\build\CMakeLists.txt, especially the following flags (an illustrative before/after sketch follows the list):
Set -march=... to your processor architecture (search for "safe Cflags" for your CPU)
Remove -g0, which suppresses debug information
Remove -O2 (optimizations)
Remove -DNDEBUG
Modify -DDEBUGVM=0 to -DDEBUGVM=1
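To make that concrete, here is an illustrative before/after of such a flags line; the actual variable names and the -march value in your CMakeLists.txt will differ:
# before (CI defaults, roughly): -march=... -g0 -O2 -DNDEBUG -DDEBUGVM=0
# after (debuggable build):      -march=... -g -DDEBUGVM=1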
and finally start the build script
cd /c/phs/build
bash build.sh
You need to pre-generate the sources elsewhere, or take pre-generated sources from another place.
Let's assume you want to compile for some flavor of Unix; you can download pre-generated sources from here:
https://ci.inria.fr/pharo/view/3.0-VM/job/PharoSVM/Architecture=32,Slave=vm-builder-linux/lastSuccessfulBuild/artifact/sources.tar.gz (for a stack vm)
https://ci.inria.fr/pharo/view/3.0-VM/job/PharoVM/Architecture=32,Slave=vm-builder-linux/lastSuccessfulBuild/artifact/sources.tar.gz (for a cog vm)
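For example, on a Unix machine you could fetch and unpack the Cog sources roughly like this (the name of the extracted folder may differ):
wget -O sources.tar.gz "https://ci.inria.fr/pharo/view/3.0-VM/job/PharoVM/Architecture=32,Slave=vm-builder-linux/lastSuccessfulBuild/artifact/sources.tar.gz"
tar xzf sources.tar.gz   # unpacks the pre-generated C sources
# then adjust directories.cmake and run build.sh as described above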
I have a new, compiled version of VASP that performs magnetic constraints of the local moment orientations on the fly, which I'd like to test and use with pyiron.
Could you please provide guidance and the steps to follow in order to add this version of VASP to pyiron as an additional executable?
Thank you,
Eduardo
You need to add corresponding run script(s) to the pyiron resources.
In your .pyiron config file, the resource paths are given as RESOURCE_PATHS = /comma/separated/list, /of/paths/to/the/resources.
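For illustration, the relevant part of such a config file might look like this (the paths are placeholders; keep your existing entries as they are):
[DEFAULT]
RESOURCE_PATHS = /shared/pyiron/resources, /home/user/my_pyiron_resources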
In the resources you need to add the run scripts to the directory vasp/bin/ using the naming convention run_code_version[_mpie].sh. The actual run script is, of course, dependent on the cluster and the libraries used to build VASP, and might look similar to:
run_vasp_version.sh (single core version):
module load intel/...
srun -n 1 /path/to/the/new/executable/version/vasp_std
run_vasp_version_mpie.sh (mpi version):
module load intel/... impi/...
srun -n $1 /path/to/the/new/executable/version/vasp_std
Please have a look at the other VASP run scripts that you have used from the shared resources.
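As a sketch, adding the scripts under a hypothetical version tag 5.4.4_constr would look like this; the tag is just an example, and the chosen name should become the version string you can select for the job in pyiron:
mkdir -p /home/user/my_pyiron_resources/vasp/bin
cp run_vasp_5.4.4_constr.sh      /home/user/my_pyiron_resources/vasp/bin/   # serial version
cp run_vasp_5.4.4_constr_mpie.sh /home/user/my_pyiron_resources/vasp/bin/   # MPI version
chmod +x /home/user/my_pyiron_resources/vasp/bin/run_vasp_5.4.4_constr*.sh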
(For MPIE colleagues: detailed examples of the setup/run scripts for VASP on cmti can be found in this private repo.)
How do I build and run the NS3 code from the GitHub repository linked below?
https://github.com/mkheirkhah/mptcp
It already has the ns-3 installation steps with MPTCP.
https://github.com/mkheirkhah/mptcp
These are the installation steps; follow them and you will get it working.
We have tested this code on Mac (with llvm-gcc42 and python 2.7.3-11) and several Linux distributions (e.g. Red Hat with gcc 4.4.7 or Ubuntu 16.04 with gcc 5.4.0).
Clone the MPTCP repository
git clone https://github.com/mkheirkhah/mptcp.git
Configure and build
CXXFLAGS="-Wall" ./waf configure build
Run a simulation
./waf --run "mptcp"
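If you want to see what the simulation is doing, ns-3's standard NS_LOG mechanism should work here as well; the available log component names depend on this fork (TcpSocketBase is one that exists in stock ns-3), so treat this as a sketch:
NS_LOG="TcpSocketBase=level_all|prefix_func" ./waf --run "mptcp"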
https://github.com/Kashif-Nadeem/ns-3-dev-git is the more recent fork of https://github.com/teto/ns-3-dev-git/wiki, which itself started as mkheirkhah's fork.
It should work with the latest ns-3. Compared to mkheirkhah's approach (I haven't checked whether it is still valid), it tries to reuse the TCP socket code so that it can be driven by TCP socket applications. You can read more details at https://www.researchgate.net/publication/313623789_An_Implementation_of_Multipath_TCP_in_ns3
I've loaded Magritte and Seaside from the configuration browser into Pharo 4, but I don't see that the package Magritte-Seaside was loaded.
How do I load this package?
I highly recommend loading Stephan's QCMagritte package, which includes the correct directives to load Seaside 3 with a Zinc adaptor, so you can start a web server without loading anything else:
From MinGW command line:
$ wget -O- http://get.pharo.org/40+vm | bash
$ ./pharo-vm/Pharo.exe Pharo.image config \
"http://smalltalkhub.com/mc/Pharo/MetaRepoForPharo40" \
"ConfigurationOfQCMagritte" --printVersion --install=stable --group=All
Create an adaptor and start a web server on port 8080 with the Seaside Control Panel, then point your browser to http://localhost:8080/browse to see the registered applications.
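If you prefer doing that step in code rather than in the Seaside Control Panel, evaluating something like this in a workspace should be enough once the Zinc adaptor is loaded (a minimal sketch):
ZnZincServerAdaptor startOn: 8080.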
The configuration browser only loads default groups for the configurations it loads. In the ConfigurationOfSeaside and ConfigurationOfMagritte you'll find many more groups.
In the ConfigurationOfQCMagritte I use 'Seaside' from Magritte and #('JQueryUI' 'JQuery-JSON') from Seaside. If you don't mind the extra packages, you could just load QCMagritte from the configuration browser.
To just add the missing packages, you could load the latest Magritte-Seaside and Magritte-Pharo-Seaside packages from the Magritte3 smalltalkhub repository with the Monticello Browser.
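A scripted alternative to the Monticello Browser is a Gofer expression along these lines (the URL is the usual SmalltalkHub location of the Magritte3 repository; double-check it if the load fails):
Gofer new
    url: 'http://smalltalkhub.com/mc/Magritte/Magritte3/main';
    package: 'Magritte-Seaside';
    package: 'Magritte-Pharo-Seaside';
    load.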
A pre-loaded QCMagritte image is available from http://ci.inria.fr/pharo-contribution/job/QCMagritte
I saw the group Seaside defined as a Metacello group in the configuration's baseline for 3.3 (which is used by 3.5, the current version). So I was able to load the package by evaluating:
(ConfigurationOfMagritte3 project version: #stable) load: 'Seaside'.
As mentioned in Orafce Install.orafunc:
..install Orafce functions in the database, either run the orafce.sql script using the pgAdmin SQL tool..
I tried running orafce--3.0.sql in the pgAdmin SQL editor. This gives me the error:
ERROR: could not access file "MODULE_PATHNAME": No such file or directory.
What do you mean by module path?
Installed program:
strawberry perl with DBD::Oracle
postgresql 9.3
pgAdmin III
Not fully installed:
ora2pg
I tried installing ora2pg...with a problem.
H:\PostgreSQL\ora2pg-12.1>perl makefile.pl
Unparsable version '' for prerequisite DBD::Oracle at makefile.pl line 553
Generating a dmake-style Makefile
Writing Makefile for Ora2Pg
Writing MYMETA.yml and MYMETA.json
Done...
H:\PostgreSQL\ora2pg-12.1>dmake && dmake install
"Installing default configuration file (ora2pg_dist.conf) to C:\ora2pg"
Appending installation info to C:\strawberry\perl\lib/perllocal.pod
dmake: Warning: -- Target [install] was made but the time stamp has not been updated.
Suggested Solution:
I downloaded a copy of orafce from okbob's GitHub.
I unzipped the file to the folder D:/Postgresql/orafce-master.
I copied only the following files
orafce--unpackaged--3.0.6.sql
orafce--3.0.6.sql
orafce.control
to folder C:\Program Files\PostgreSQL\9.3\share\extension
Then I tried running this command in the pgAdmin III SQL tool:
CREATE EXTENSION orafce;
I received this warning and error:
[WARNING ] CREATE EXTENSION orafce
ERROR: syntax error in file "C:/Program Files/PostgreSQL/9.3/share/extension/orafce.control" line 1, near end of line
I checked the content of orafce.control. It has this configuration:
# intarray extension
comment = 'Functions and operators that emulate a subset of functions and packages from the Oracle RDBMS'
default_version = '3.0.6'
module_pathname = '$libdir/orafunc'
relocatable = false
I can't get past this wall. What seems to be the problem?
So, you are working with raw source files. You have to compile these files first; only then can you use them. This is relatively simple on Unix-like platforms, where a C compiler is usually available, and pretty hard on MS Windows, where you have to install a C compiler first.
I am afraid we lost the pgFoundry archive, where orafce was available precompiled and packaged.
Almost all Linux distributions support orafce directly, and you can install it from their repositories without compiling.
See http://wiki.postgresql.org/wiki/Building_and_Installing_PostgreSQL_Extension_Modules
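For completeness, a minimal sketch of building it from source on a Linux box, assuming a C compiler and your distribution's PostgreSQL server development package are installed (exact package names and Makefile options depend on the orafce version):
cd orafce-master
make USE_PGXS=1            # some versions build with plain "make"; pg_config must be on PATH
sudo make USE_PGXS=1 install
# then, connected to the target database:
# CREATE EXTENSION orafce;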
$libdir is a symbol that stands for the PostgreSQL extension library directory. It can differ between platforms, and it is replaced by the actual value at build/installation time. MODULE_PATHNAME has a similar meaning: at build time it is replaced by the actual path to the library containing the compiled code.
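If you are curious what $libdir resolves to on a given installation, pg_config (shipped with PostgreSQL) will tell you:
pg_config --pkglibdir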
I am sorry, we don't provide compiled files, mainly due to the high risk on MS Windows. We don't have the manpower or the tools to maintain Windows builds safely. At the moment, you can:
try to contact someone who uses orafce on Windows and has a backup of the orafce installers,
try to compile this extension yourself (Microsoft Visual Studio Express edition is free and downloadable from the internet), or
migrate the database server to Linux; almost all database maintenance and usage is simpler and more robust there (no viruses or antivirus software, and lower resource demands). Linux is the primary platform for Oracle, too.
Some tutorials:
http://blog.2ndquadrant.com/compiling-postgresql-extensions-visual-studio-windows/
http://www.scribd.com/doc/40725510/Build-PostgreSQL-C-Functions-on-Windows
How do I symbolicate an iOS crash report after uploading it to a server in a Linux environment, where iOS development tools and scripts are not available? I know Apple uses atos and some other tools to map the hex addresses to symbols with the help of the .dSYM file.
I can upload the .dSYM file along with the crash report to the server. I looked at QuincyKit, but they do the symbolication locally. Others like HockeyApp and Crittercism do it remotely.
Please recommend possible ways to do this on the server.
It is possible. You can take a look at https://github.com/facebook/atosl
I got it working under Linux (Ubuntu Server). However, it takes some time to get it up and running.
Installing atosl
First, you need to install libdwarf-dev, dwarfdump, binutils-dev and libiberty-dev.
E.g. on Ubuntu:
$ sudo apt-get install libdwarf-dev dwarfdump binutils-dev libiberty-dev
Download or clone the atosl repo from GitHub:
$ git clone https://github.com/facebook/atosl.git
CD to the atosl dir
$ cd atosl
Create a local config file config.mk.local which contains a flag with the location of your binutils apps (on Ubuntu, by default that's /usr/bin). If you're not sure, you can find out by executing cat /var/lib/dpkg/info/binutils.list | less and copying the path of the file objdump. E.g. if the entry is /usr/bin/objdump, your path is /usr/bin.
So in the end, your config.mk.local should look like this:
LDFLAGS += -L/usr/bin
Compile it:
$ make
Now you can start using it:
$ ./atosl --help
Symbolicating example
To show how atosl is used, I'll provide a simple example.
Now let's take a look at a line from the crash log:
13 ErrorApp 0x000ea294 0xe3000 + 29332
To symbolicate this, we will need the load address, and the runtime address.
In this example the runtime address is 0x000ea294, and the load address is 0xe3000.
Now we have everything we need:
$ ./atosl -o [YOUR_dSYM_FILE] -l [LOAD_ADDRESS] [RUNTIME_ADDRESS]
In this example:
$ ./atosl -o ErrorApp.app.dSYM/Contents/Resources/DWARF/ErrorApp -l 0xe3000 0x000ea294
Which returns the symbolicated line:
main (in ErrorApp) (main.m:16)
FYI
Your vmaddr, which is usually 0x00001000, can be found by looking at the segname __TEXT Mach-O load command of your binary (see the sketch after the calculation below for one way to read it). In my example it happens to be different, namely 0x00004000.
To find the address, we need to do some math.
The address is found by the following formula:
address = vmaddr + ( runtime_address - load_address )
In this example our address is:
0x00004000 + ( 0x000ea294 - 0xe3000 ) = 0xB294
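One way to read the vmaddr is with otool; note that otool is an Apple tool, so this step has to be done on a Mac (or with a Mach-O-aware objdump). The binary path below just follows the example app above:
otool -l ErrorApp.app/ErrorApp | grep -A4 "segname __TEXT"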
I haven't played around with this that much yet, but for now it seems to give me the results I needed. Maybe it will work for you too.
You need to implement your own Linux-compatible versions of atos, otool and dwarfdump (at least the functionality needed for symbolication). The Apple tools are not open source and only run on Mac OS X.
None of the services provides a solution that can be used by third parties on non-OS X systems. So your only option, besides implementing the required functionality on your Linux system, is to do it on a Mac as QuincyKit does (see https://github.com/TheRealKerni/QuincyKit/wiki/Remote-symbolication) or to use a third-party service.
Note: I am the creator of QuincyKit and Co-Founder of HockeyApp.