I have included taskLib.h and I am calling taskRtpLock()
Default Build Spec: SIMLINUXdiab
Active Build Spec: SIMLINUXdiab
Error: identifier "taskRtpLock" is undefined
There are two files called taskLib.h: one is located in $(TGT_DIR)/h/ and the other in $(TGT_DIR)/usr/h. The first is for kernel tasks and the second is for Real Time Processes (RTPs). I guess that for some reason the first one is being included in your case.
I'm not entirely sure of the actual cause, though, as I have no sources at hand right now to check it. In any case, make sure you have added the RTP components to your project with the following commands:
$ cd /path/to/WindRiver/
$ wrenv -p vxworks-6.5
$ cd path/to/your/project
$ vxprj bundle add projectFile.wpj BUNDLE_RTP_DEVELOP
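To double-check which header actually declares the symbol, a quick grep helps (a sketch; substitute your real target directory for $TGT_DIR, typically $WIND_BASE/target):
$ grep -n taskRtpLock $TGT_DIR/usr/h/taskLib.h   # RTP header
$ grep -n taskRtpLock $TGT_DIR/h/taskLib.h       # kernel header
If only the first command matches, the kernel header is shadowing the RTP one on your include path.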
Related
I have an established Yocto build which I'm now trying to switch over to having a read-only root file system (e.g. EXTRA_IMAGE_FEATURES += "read-only-rootfs").
However, I'm running into an issue with a recipe in the meta-mono layer: mozroot-certdata. I see the culprit is the pkg_postinst script (http://git.yoctoproject.org/cgit/cgit.cgi/meta-mono/tree/recipes-mono/mozroot-certdata/mozroot-certdata_1.0.0.bb), which needs to modify the root file system on first boot, and which the build system is correctly flagging as impossible with a read-only root file system:
ERROR: The following packages could not be configured offline and rootfs is read-only: ['mozroot-certdata']
My question is: is there a way to get these mozroot certs installed and configured with mono during the build process, such that the root file system does not need to be modified at boot/run time?
Well, I had a brief look at this late this summer, as I'm also using a read-only rootfs. The problem is that mozroots.exe is hardcoded to write into /usr/share/.mono/certs and does not respect your sysroot. You could probably hack mozroots.exe to actually write the imported files into the sysroot, though my time limit didn't allow me to try this (and I've never looked into mono at all...).
My solution was instead to do the import at every boot. (It could also be done only once, but then the issue of updates comes along.) To achieve this I made a bind mount on the directory where mozroots.exe wants to write the certdata.
Details of my solution
Add a file volatile-binds.bbappend with the following contents:
VOLATILE_BINDS += "\
/tmp/mono-certs /usr/share/.mono/certs \n\
"
That will make a bind mount from /tmp/mono-certs to /usr/share/.mono/certs, thus you'll be able to import the certs.
Then I added a service file and a mozroot-certdata_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
DEPENDS += "mono-native"
SRC_URI += "file://mozroot-certdata.service \
"
inherit systemd
SYSTEMD_SERVICE_${PN} = "mozroot-certdata.service"
do_install_append() {
    mkdir -p ${D}${datadir}/.mono/certs
    mkdir -p ${D}${systemd_system_unitdir}
    install -m 440 ${WORKDIR}/mozroot-certdata.service ${D}${systemd_system_unitdir}/mozroot-certdata.service
}
FILES_${PN} += "${datadir}"
# Empty the postinstallation script, as the certs are imported at boot by the service instead.
pkg_postinst_${PN} () {
    # mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file $D/${sysconfdir}/ssl/certdata.txt
    :   # no-op so the script stays valid with everything else commented out
}
The service file mozroot-certdata.service:
[Unit]
Description=Import certificates to Mono
After=tmp-mono-certs.service
[Service]
Type=oneshot
ExecStart=/usr/bin/mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file /etc/ssl/certdata.txt
[Install]
WantedBy=multi-user.target
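Once the image boots, you can sanity-check that the import ran on the target (the unit name and mount point are the ones defined above):
systemctl status mozroot-certdata.service
ls /usr/share/.mono/certs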
is there a way to get these mozroot certs installed and configured with mono during the build process
Yes, but it requires the mozroots binary to be executable at rootfs creation time. See Post-Installation Scripts in the documentation.
The 'else' branch in pkg_postinst is what gets executed at that time, and if it succeeds, the delayed postinst is not needed (and you shouldn't get a build error). A mono-native recipe already exists, so you should be able to depend on that and fix the else branch in the pkg_postinst function so that it finds native mono and mozroots.exe and writes to the correct place under $D.
As Anders mentioned this alone is not enough if you care about package-based upgrades.
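A rough sketch of what such a postinst could look like (untested; the exact mozroots.exe location and cert paths are assumptions you would need to verify against the real recipe):
pkg_postinst_${PN} () {
    if [ -n "$D" ]; then
        # Offline case at rootfs creation: $D points at the image rootfs
        # being assembled. Per Anders' caveat, stock mozroots.exe writes to
        # a hardcoded /usr/share/.mono/certs, so it would need patching to
        # write under $D instead.
        mono $D/usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file $D${sysconfdir}/ssl/certdata.txt
    else
        # On-target first-boot case: fails on a read-only rootfs,
        # which is what triggers the build error.
        mono /usr/lib/mono/4.5/mozroots.exe --import --machine --ask-remove --file ${sysconfdir}/ssl/certdata.txt
    fi
}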
I'm trying to publish an .apk into my Application Center through console. I've followed this note but it doesn't work in my environment:
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/7.1/moving-production/distributing-mobile-applications-with-application-center/#cmdLineTools
If I type:
./acdeploytool.sh /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk
I get this error message:
FWLAC0803E: Unable to connect:
Connection refused
Perhaps the server or context is wrongly specified.
File:/home/myUser/Downloads/HelloWorldMyHelloAndroid.apk
And if I try another way using this java command:
java com.ibm.appcenter.Upload -f http://localhost:9080 -c applicationcenter -u demo -p demo /home/myUser/Downloads/HelloWorldMyHelloAndroid.apk
I get this one:
Error: Could not find or load main class com.ibm.appcenter.Upload
I don't get any errors when I do this 'publish' operation directly in Application Center or through MobileFirst Studio.
Miguel, whether you use the script or the Java command, you need to specify the arguments to use. Please try the following:
./acdeploytool.sh -s http://localhost:9080 -c applicationcenter -u demo -p demo /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk
I tried a similar command in my environment and was able to successfully deploy the apk to Application Center. If the command still does not work, make sure that the host/port that you are using are correct, and that the username and password are valid.
For the Java command that you executed, I see a few problems. First, the -cp argument needs to be specified in order to add the applicationcenterdeploytool.jar and json4j.jar files to the classpath. Next, the command shows "-f", but it should be "-s" to specify the server. Lastly, the path that was specified for the .apk is different from what you specified in the first command: myUser vs. miguel. So make sure that the correct path is used. If there are any further questions, let me know. Thanks.
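Putting those fixes together, the corrected invocation would look something like this (a sketch; it assumes both jars sit in the current directory, so adjust the -cp paths as needed, and use ; as the classpath separator on Windows):
java -cp applicationcenterdeploytool.jar:json4j.jar com.ibm.appcenter.Upload -s http://localhost:9080 -c applicationcenter -u demo -p demo /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk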
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
I am aware of the ENV instruction, but I do not want these variables to be environment variables.
Is there a way to declare variables at Dockerfile scope?
You can use ARG - see https://docs.docker.com/engine/reference/builder/#arg
The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag. If a user specifies a build argument that was not defined in the Dockerfile, the build outputs an error.
This can be useful with COPY at build time (e.g. copying tag-specific content, like specific folders).
For example:
ARG MODEL_TO_COPY
COPY application ./application
COPY $MODEL_TO_COPY ./application/$MODEL_TO_COPY
While building the container:
docker build --build-arg MODEL_TO_COPY=model_name -t <container>:<model_name specific tag> .
To answer your question:
In my Dockerfile, I would like to define variables that I can use later in the Dockerfile.
You can define a variable with:
ARG myvalue=3
Spaces around the equal character are not allowed.
And use it later with:
RUN echo $myvalue > /test
To my knowledge, only ENV allows that, as mentioned in "Environment replacement"
Environment variables (declared with the ENV statement) can also be used in certain instructions as variables to be interpreted by the Dockerfile.
They have to be environment variables in order to be redeclared in each new container created for each line of the Dockerfile by docker build.
In other words, those variables aren't interpreted directly in a Dockerfile, but in a container created for a Dockerfile line, hence the use of environment variables.
These days, I use both ARG (docker 1.10+, with docker build --build-arg var=value) and ENV.
Using ARG alone means your variable is visible at build time, not at runtime.
My Dockerfile usually has:
ARG var
ENV var=${var}
In your case, ARG is enough: I use it typically for setting the http_proxy variable, which docker build needs to access the internet at build time.
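For example (the proxy URL here is just a placeholder):
ARG http_proxy
ENV http_proxy=${http_proxy}
and then at build time:
docker build --build-arg http_proxy=http://proxy.example.com:3128 .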
Christopher King adds in the comments:
Watch out!
The ARG variable is only in scope for the "stage that it is used" and needs to be redeclared for each stage.
He points to Dockerfile / scope:
An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile not from the argument’s use on the command-line or elsewhere.
For example, consider this Dockerfile:
FROM busybox
USER ${user:-some_user}
ARG user
USER $user
# ...
A user builds this file by calling:
docker build --build-arg user=what_user .
The USER at line 2 evaluates to some_user as the user variable is defined on the subsequent line 3.
The USER at line 4 evaluates to what_user as user is defined and the what_user value was passed on the command line.
Prior to its definition by an ARG instruction, any use of a variable results in an empty string.
An ARG instruction goes out of scope at the end of the build stage where it was defined.
To use an arg in multiple stages, each stage must include the ARG instruction.
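A minimal two-stage sketch of that scoping rule (stage and variable names are arbitrary):
FROM busybox AS builder
ARG user
RUN echo "builder stage sees: $user"

FROM busybox
# ARG must be repeated here, otherwise $user would be empty in this stage.
ARG user
RUN echo "final stage sees: $user"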
If the variable is re-used within the same RUN instruction, one could simply set a shell variable. I really like how they approached this with the official Ruby Dockerfile.
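For example, a shell variable scoped to a single RUN instruction (a sketch of the idea, not taken from the Ruby Dockerfile itself):
RUN myvar="some value" \
 && echo "$myvar" > /tmp/out \
 && cat /tmp/out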
You can use ARG variable=defaultValue, and at build time you can override this value using --build-arg variable=value. To use these variables in the Dockerfile, refer to them as $variable in a RUN command.
Note: these variables are available to shell commands like RUN echo $variable, but they don't persist in the image.
Late to the party, but if you don't want to expose environment variables, I guess it's easier to do something like this:
RUN echo 1 > /tmp/__var_1
RUN echo `cat /tmp/__var_1`
RUN rm -f /tmp/__var_1
I ended up doing it because we host private npm packages in aws codeartifact:
RUN aws codeartifact get-authorization-token --output text > /tmp/codeartifact.token
RUN npm config set //company-123456.d.codeartifact.us-east-2.amazonaws.com/npm/internal/:_authToken=`cat /tmp/codeartifact.token`
RUN rm -f /tmp/codeartifact.token
And here ARG cannot work, and I don't want to use ENV, because I don't want to expose this token to anything else. (Note that the token file still exists in the intermediate layers; removing it in a later RUN doesn't scrub it from the image history.)
I've posted this issue on the google groups area for mapnik already, so I'm just going to c/p from there:
I've been troubleshooting this issue for a couple days, now. Bear with me for context:
Starting from a freshly-untarred source of mapnik 2.2.0 on RHEL 6.5:
First off, running "./configure" (which just calls scons/scons.py) works great. It finishes correctly; then I run make and make install, and I get exactly what I want built and installed.
My goal has been to create a mapnik 2.2.0 RPM for internal use. Please do not suggest using an "official" mapnik RPM instead (or any other already-built mapnik RPM), as my entire purpose here is to build mapnik from source and create my own RPM.
That being said, when my RPM gets to its %build phase and runs ./configure, it freezes while scons checks for freetype-config. I've let it sit for hours, and nothing happens.
After I looked at the generated config.log file, it would appear that configure is failing on an "awk" command that goes something like this:
awk '{print $\(NF-1\)}'
The command is performed on the string that results from the command:
/usr/bin/freetype-config --libs --cflags
I'm wondering, first of all, how to fix this, and second of all, why this works manually but fails when the rpmbuild process gets to the ./configure command. I've made sure that the same user (myself) is running both ./configure commands on the exact same files.
My third question would be: why does the configure step freeze on this issue, instead of throwing some sort of error and stopping?
Note that the problem with the "awk" command is the backslashes--the error is:
awk: {print $\(NF-1\)}
^ backslash not last character on line
When I remove the backslashes, the command succeeds. Whether or not removing the backslashes is the correct thing to do in the long run, I do not know.
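For reference, the unescaped pipeline that works is (effectively what configure is trying to run):
/usr/bin/freetype-config --libs --cflags | awk '{print $(NF-1)}'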
It would seem this is some sort of python string-parsing / encoding issue, but I'm unsure. What really confuses me is why this works manually and fails when the rpmbuild tries to do it.
My only thought is that there's some difference between the rpmbuild environment and my shell that I configured on. What it is, I do not know.
Thank you for any assistance you can provide. If I can provide any more context, please let me know.
UPDATE: an svn command runs after the awk failure, but is missing whatever number gets passed after the -r option. I'm wondering if this could be causing the hang...thoughts?
I have been given the unpleasant task of installing a Rails 3 app I have written on Windows Server 2008 (definitely not my choice - was promised a linux server but I.T. pulled the rug out at the last minute so please don't suggest a change in environment as a solution).
I followed the instructions on this blog post (with a few minor modifications) and now actually have my app up and running under Windows/IIS (proxying mongrel) after a great deal of frustration. The only thing remaining is to get mongrel running as a service.
Unfortunately the mongrel gem has not been kept up-to-date for Rails 3 and while I can get the app running under mongrel at the command line I am unable to use mongrel_service to get the app running as a service.
The solution to this appears to be to use the service_wrapper project on GitHub, which has been mentioned in this previous question. The project is not yet complete but is apparently functional, though it comes without documentation or binaries. I have looked through the source code and don't really understand what it is or how it works, so I was wondering if someone could point me in the right direction (or, even better, walk me through how) to get this installed.
So close, yet still so far.....
Alright, I have this worked out (with a little help from luislavena himself - thanks).
Download service_wrapper-0.1.0-win32.zip from https://github.com/luislavena/service_wrapper/downloads and extract service_wrapper.exe from bin/. I extracted it to C:\service_wrapper.
Next, set up a configuration file. I used the hello example and modified it for my app, then placed it in the C:\service_wrapper directory.
; Service section, it will be the only section read by service_wrapper
[service]
; Provide full path to executable to avoid issues when executable path was not
; added to system PATH.
executable = C:\Ruby192\bin\ruby.exe
; Provide here the arguments you will pass to the executable from the command line
arguments = C:\railsapp\script\rails s -e production
; Which directory will be used when invoking executable.
; Provide a full path to the directory (not to a file)
directory = C:\railsapp
; Optionally specify a logfile where both STDOUT and STDERR of executable will
; be redirected.
; Please note that full path is also required.
logfile = C:\railsapp\log\service_wrapper.log
Now just create the service with
sc create railsapp binPath= "C:\service_wrapper\service_wrapper.exe C:\service_wrapper\service_wrapper.conf" start= auto
(watch for the spaces after binPath= and start=. It won't work without them)
Then start it with
net start railsapp
And you're home and hosed!
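If you later need to stop or remove the service, the standard Windows service commands apply:
net stop railsapp
sc delete railsapp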
I ought to contribute, since this article helped me. For a config using bundle exec, use the following:
Note that I am setting up RubyCAS; it's a great open-source CAS authentication mechanism!
; Service section, it will be the only section read by service_wrapper
[service]
; Provide full path to executable to avoid issues when executable path was not
; added to system PATH.
executable = C:\Ruby\bin\ruby.exe
; Provide here the arguments you will pass to the executable from the command line
arguments = D:\rubycas-server bundle exec rackup -s mongrel -p 11011
; Which directory will be used when invoking executable.
; Provide a full path to the directory (not to a file)
directory = D:\rubycas-server
; Optionally specify a logfile where both STDOUT and STDERR of executable will
; be redirected.
; Please note that full path is also required.
logfile = D:\rubycas-server\log\service_wrapper.log
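From there, create and start the service the same way as in the answer above (the config file name and location here are my assumption; point it at wherever you saved the file):
sc create rubycas binPath= "C:\service_wrapper\service_wrapper.exe C:\service_wrapper\rubycas.conf" start= auto
net start rubycas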