I just started learning Rust and wanted a real project to work on. Since my Raspberry Pi only came with a non-PWM fan, I wanted to write a small CPU temperature monitor/fan control program for the Pi.
Now I want to run some unit tests and am having problems with it. First, some information:
I am working on macOS
I installed the cross-compilation tools and am using the target armv7-unknown-linux-musleabihf
(the gnueabihf target didn't work for me, even though it is mentioned in multiple tutorials)
I use a Raspberry Pi 4 B
I only use one dependency: rppal = "0.13.1"
Compiling for the Raspberry Pi target works like a charm, and I can run the program on the Pi.
But when I try to run the tests on macOS, the rppal dependency fails to compile.
Now I was wondering whether there is a way to run the tests while compiling only what is actually needed.
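One approach that may work here (a sketch, not tested against your setup): declare rppal as a target-specific dependency in Cargo.toml, so that host builds, including `cargo test` on macOS, never try to compile it:

```toml
# Cargo.toml sketch: rppal is only pulled in when building for Linux,
# so `cargo test` on the macOS host skips it entirely.
[target.'cfg(target_os = "linux")'.dependencies]
rppal = "0.13.1"
```

Any module that uses rppal then needs a matching `#[cfg(target_os = "linux")]` attribute, and the temperature/threshold logic you want to unit-test goes in a platform-independent module. The `[target.'cfg(...)'.dependencies]` key is standard Cargo syntax for platform-specific dependencies.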
Why is compiling and running Kotlin extremely slow (at least on my machine)? I have the latest version of the Kotlin compiler installed.
Specifically the command:
kotlinc main.kt -include-runtime -d main.jar && java -jar main.jar
It's so slow that printing "hello world" takes up to 9 seconds.
I initially thought it was slow by default (I was using the dDcoder app), but now that I've tried the online playground and Sololearn, it's much faster.
My PC runs Windows 10 with a Core i5 and 4 GB of RAM.
This is a common complaint among Kotlin users, especially when compiling a project for the first time. Unfortunately, there's not much you can do about it; your PC spec is sufficient to build and run a Kotlin project effectively.
My advice for offline compiling: use IntelliJ IDEA. This IDE has the most efficient support for Kotlin, as it is produced and maintained by JetBrains.
If you already use IntelliJ IDEA, the project will take less time to compile after the first build.
I had the same complaint when I moved to Kotlin from Java. Java's compile times are faster, and at present there's nothing you can do about the difference.
Maybe I'm biased, but I think it is unusual to compile Kotlin code this way, so this path is not that well optimized. Usually we use either IntelliJ or Gradle (or Maven), not kotlinc directly. Gradle can cache compiled code, run a background daemon so it doesn't have to re-initialize everything on every build, and so on.
If we create a Gradle project, even one consisting of several submodules (some of them multiplatform), then running it repeatedly with minor changes takes less than a second each time, even when the changes are spread across several submodules and Gradle has to rebuild all of them.
Having said that, compilation time is a known problem in Kotlin right now. I believe optimizations are near the top of the Kotlin team's priority list, so hopefully we'll see improvements in the near future. For now, though, it remains a problem for some people.
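To illustrate the Gradle route: a minimal build script sketch (the plugin version and main-class name below are placeholders; adjust them to your project). With this in place, `./gradlew run` starts a background daemon on the first invocation, so subsequent compile-and-run cycles avoid the JVM and compiler startup cost that makes plain `kotlinc` feel so slow:

```kotlin
// build.gradle.kts -- minimal sketch; version and class name are examples
plugins {
    kotlin("jvm") version "1.9.22" // any recent Kotlin version works
    application
}

repositories {
    mavenCentral()
}

application {
    // kotlinc compiles main.kt's top-level main() into a class named MainKt
    mainClass.set("MainKt")
}
```

The first `./gradlew run` still pays the startup cost; later runs reuse the warm daemon and only recompile files that changed.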
So I saw the project http://mxe.cc/ and tried it; it makes compiling things for Windows very easy. I tried to hack it a little to build binaries for Linux instead, because if it compiles for another system so easily, how hard can compiling for the host be? About 90% of the packages seem to build out of the box, but there are some errors, so I cannot complete the build. My question is: how should I configure MXE to build for the Linux host? I know this is not supported, but I don't think it should be that hard, because everything is built from source anyway, and the downloaded sources need next to no modifications (in a Windows build, that is).
For people who might ask why I don't want to use shared libraries: I basically want to have two options:
a dpkg package for the user, with dependencies specified (the Linux way)
a single standalone static executable
Any suggestions? Or maybe there's a whole other guide on how to build things from scratch on Linux (without a lot of manual work, the way MXE avoids it)?
We have a storage controller that is used as a target; it runs MIPS and has some additional hardware that goes with it. Development is done with the Green Hills compiler. Can we use Valgrind to perform analysis on the code base?
If your target is not running Linux or an OS with a POSIX API, it is unlikely that you will be able to build and run Valgrind natively. One possibility is to build your embedded code on Linux in a suitable test harness and execute the tests there.
I wonder whether it is possible to build an Objective-C project (no UI, no simulator needed, hosted on GitHub) on Travis.
The current Travis docs don't seem to contain any information about building Objective-C projects.
But Building a C Project says that:
Travis VMs are 32 bit and currently provide
gcc 4.6
clang 3.1
core GNU build toolchain (autotools, make), cmake, scons
and that the default test script is run like
./configure && make && make test
So, to rephrase the subject question:
Do any workarounds exist to make the SenTestingKit test suite my project currently uses behave like a C test suite, so that it can be treated as such by the Travis VM?
NOTE 1: Here is a list of resources that seem related to the question:
Compiling Objective-C without a GUI. It claims that both gcc and clang compile Objective-C 2.0, and that sounds very promising!
NOTE 2: I suspect it is possible to use a testing tool other than SenTestingKit:
such a tool should be easy to run without a GUI and without xcodebuild etc. I've opened a related question for that: Is there any non-Xcode-based command line testing tool for Objective-C? I'm even thinking about writing a simple one just for my project.
A few days ago, Travis announced that they now also support Objective-C projects. See Building an Objective-C Project for more info.
Update
It is even possible to make Travis launch the simulator so that you can run application tests and UI-related stuff. It required a little extra work, but it's pretty straightforward. I wrote about it here.
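For reference, the whole setup can now be as small as a .travis.yml along these lines (the workspace and scheme names are placeholders; substitute your project's):

```yaml
# .travis.yml sketch -- the "MyApp" names are made up
language: objective-c
script: xcodebuild -workspace MyApp.xcworkspace -scheme MyApp -sdk iphonesimulator test
```

The `language: objective-c` key is what selects Travis's OS X build environment with Xcode preinstalled.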
I have a Cocoa application that uses an installer. I want to be able to run code coverage over the code (after it has been installed).
This is not the usual unit-test scenario where a single binary runs a suite of tests. Rather, the tests in question interact with the UI and the app's back end while it is running, so ideally I want to start the application knowing that gcov is profiling it and then run tests against it.
Any ideas?
Update
Thanks to mustISignUp. To clarify why I asked the question:
The mistake I made was thinking that the object, .gcno, and .gcda files had to be installed alongside the binaries (thus making the installer difficult). As it happens, the original location of the files is hard-wired into the code along with the instrumentation.
The solution I went with was zipping up the code on the build machine and putting it on disk on the test machine; lcov (or just gcov) can be run from there. Alternatively, the .gcda files will be created on disk and must be copied back to the machine containing the source code.
Either way, the source code doesn't have to be present at install and run time, but if you want your results back lcov-style, the coverage counter files must be reconciled with the source code.
The app needs to be compiled with the correct GCC flags, which insert the profiling instructions into the code. It isn't something you can turn on and off at runtime; your code is modified at compile time to output the coverage info.
So if you compiled the app with the correct flags, it will emit coverage data; if you didn't, it won't (and you certainly wouldn't want it to for an app you were going to distribute).