Is it possible to customize the Solution Platform Name when creating Visual Studio solutions using CMake?
(Note: All of the below was tested with CMake 3.12.0 and CMake 3.15.0-rc1)
Based on tests with some open-source projects (CEF etc.) and also a simplified demo project (https://github.com/cognitivewaves/CMake-VisualStudio-Example), it seems CMake creates solution files using the following pattern (for 32-bit):
...
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Win32 = Debug|Win32
Release|Win32 = Release|Win32
...
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Debug|Win32.ActiveCfg = Debug|Win32
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Debug|Win32.Build.0 = Debug|Win32
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Release|Win32.ActiveCfg = Release|Win32
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Release|Win32.Build.0 = Release|Win32
...
EndGlobalSection
...
EndGlobal
...
Generated using:
cmake .. -G "Visual Studio 14" -Tv140
However, I would like to create the solutions using 'x86' instead of 'Win32' for the Solution Platform name to match the default formats created by VS 2015 (Update 3) and VS 2019 for example.
Like so:
...
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|x86 = Debug|x86
Release|x86 = Release|x86
...
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Debug|x86.ActiveCfg = Debug|Win32
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Debug|x86.Build.0 = Debug|Win32
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Release|x86.ActiveCfg = Release|Win32
{01234567-89AB-CDEF-0123-456789ABCDEF0}.Release|x86.Build.0 = Release|Win32
...
EndGlobalSection
...
EndGlobal
...
Specifying 'x86' at the command line
cmake .. -G "Visual Studio 14 x86" -Tv140
seems to be unsupported (Error: Could not create name generator ...).
Likewise for using the host variable:
cmake .. -G "Visual Studio 14" -Tv140,host=x86
(error with CMake 3.12.0: contains invalid field 'host=x86'; no error with CMake 3.15.0-rc1, but it doesn't seem to have an effect either)
or via:
cmake .. -G "Visual Studio 14" -Tv140 -DCMAKE_VS_PLATFORM_TOOLSET_HOST_ARCHITECTURE=x86
(no error, but it doesn't seem to have any effect either)
Is there another argument I am missing / overlooked in the documentation?
Thank you for taking the time to read this and with best regards,
-T
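If no generator option turns up, one workaround is to post-process the generated .sln after CMake runs, renaming the solution-level platform while leaving the project-level mappings on Win32 (exactly the target layout shown above). A minimal sketch in Python; the function name and substitution rules are my own, and you would need to extend it for other platforms:

```python
import re

def rename_solution_platform(sln_text, old="Win32", new="x86"):
    """Rename the solution-level platform in a .sln, keeping the
    project-level (right-hand side) configurations on the old name."""
    out_lines = []
    in_solution_cfgs = False
    in_project_cfgs = False
    for line in sln_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("GlobalSection(SolutionConfigurationPlatforms)"):
            in_solution_cfgs = True
        elif stripped.startswith("GlobalSection(ProjectConfigurationPlatforms)"):
            in_project_cfgs = True
        elif stripped == "EndGlobalSection":
            in_solution_cfgs = in_project_cfgs = False
        elif in_solution_cfgs:
            # Debug|Win32 = Debug|Win32  ->  Debug|x86 = Debug|x86
            line = line.replace("|" + old, "|" + new)
        elif in_project_cfgs:
            # {GUID}.Debug|Win32.ActiveCfg = Debug|Win32
            #   ->  {GUID}.Debug|x86.ActiveCfg = Debug|Win32
            # Only the left-hand occurrence (followed by '.') is renamed.
            line = re.sub(r"\|%s(?=\.)" % re.escape(old), "|" + new, line)
        out_lines.append(line)
    return "\n".join(out_lines)
```

Running this over the .sln as a post-build step would produce the x86 solution platform while the projects still build as Win32.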
I wrote my own cudaMalloc wrapper as follows, which I plan to apply in tensorflow serving (GPU) to trace the cudaMalloc calls via the LD_PRELOAD mechanism (with proper modification, it could also be used to limit the GPU usage of each tf serving container).
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

typedef cudaError_t (*cu_malloc)(void **, size_t);

/* cudaMalloc wrapper function */
cudaError_t cudaMalloc(void **devPtr, size_t size)
{
    cu_malloc real_cu_malloc = NULL;
    char *error;

    dlerror();  /* clear any stale error before the lookup */
    real_cu_malloc = (cu_malloc)dlsym(RTLD_NEXT, "cudaMalloc");
    if ((error = dlerror()) != NULL) {
        fputs(error, stderr);
        exit(1);
    }
    cudaError_t res = real_cu_malloc(devPtr, size);
    printf("cudaMalloc(%zu) = %p\n", size, (void *)devPtr);
    return res;
}
I compile the above code into a dynamic lib file using the following command:
nvcc --compiler-options "-DRUNTIME -shared -fpic" --cudart=shared -o libmycudaMalloc.so mycudaMalloc.cu -ldl
When applied to a vector_add program compiled with the command nvcc -g --cudart=shared -o vector_add_dynamic vector_add.cu, it works well:
root@ubuntu:~# LD_PRELOAD=./libmycudaMalloc.so ./vector_add_dynamic
cudaMalloc(800000) = 0x7ffe22ce1580
cudaMalloc(800000) = 0x7ffe22ce1588
cudaMalloc(800000) = 0x7ffe22ce1590
But when I apply it to tensorflow serving using the following command, the cudaMalloc calls are not routed to the dynamic lib I wrote.
root@ubuntu:~# LD_PRELOAD=/root/libmycudaMalloc.so ./tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=resnet --model_base_path=/models/resnet
So here are my questions:
Is it because tensorflow-serving is built in a fully static manner, so that tf-serving refers to libcudart_static.a instead of libcudart.so?
If so, how could I build tf-serving to enable dynamic linking?
Is it because tensorflow-serving is built in a fully static manner, so that tf-serving refers to libcudart_static.a instead of libcudart.so?
It probably isn't built fully-static. You can see whether it is or not by running:
readelf -d tensorflow_model_server | grep NEEDED
But it probably is linked with libcudart_static.a. You can see whether it is or not with:
readelf -Ws tensorflow_model_server | grep ' cudaMalloc$'
If you see an unresolved (U) symbol (as you would for the vector_add_dynamic binary), then LD_PRELOAD should work. But you'll probably see a defined (T or t) symbol instead.
If so, how could I build tf-serving to enable dynamic linking?
Sure: it's open-source. All you have to do is figure out how to build it, then how to build it without libcudart_static.a, and then figure out what (if anything) breaks when you do so.
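The two readelf checks above can be wrapped in a small helper. A sketch in Python, assuming the usual readelf -Ws row layout (Num, Value, Size, Type, Bind, Vis, Ndx, Name); symbol_status and classify are hypothetical names:

```python
import subprocess

def symbol_status(binary, name="cudaMalloc"):
    """Run readelf -Ws on `binary` and report how `name` appears."""
    out = subprocess.run(["readelf", "-Ws", binary],
                         capture_output=True, text=True, check=True).stdout
    return classify(out, name)

def classify(readelf_output, name):
    """Classify a symbol as 'undefined' (Ndx column is UND, so the
    dynamic linker resolves it and LD_PRELOAD can interpose),
    'defined' (bound statically at link time), or 'absent'."""
    for line in readelf_output.splitlines():
        cols = line.split()
        # readelf -Ws rows end with: ... Ndx Name
        if len(cols) >= 8 and cols[-1] == name:
            return "undefined" if cols[-2] == "UND" else "defined"
    return "absent"
```

If classify reports "undefined", LD_PRELOAD can still interpose on cudaMalloc; "defined" means the call was already satisfied by libcudart_static.a and the preload will be ignored.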
I maintain my user.lua per project folder. Is there anything in place that lets me exclude ZeroBrane's own env path when I check a module require statement with "Evaluate in Console"?
The reason for this: I want to make sure that everything works within the plugin engine itself.
This is what a check for a missing module looks like.
lualibs and bin are ZeroBrane-specific, if I see it right.
Output
local toast = require("toast")
[string " local toast = require("toast")"]:1: module 'toast' not found:
no field package.preload['toast']
no file 'lualibs/toast.lua'
no file 'lualibs/toast/toast.lua'
no file 'lualibs/toast/init.lua'
no file './toast.lua'
no file '/usr/local/share/luajit-2.0.4/toast.lua'
no file '/usr/local/share/lua/5.1/toast.lua'
no file '/usr/local/share/lua/5.1/toast/init.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/toast.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/toast/init.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Modules/toast.lua'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Modules/toast/init.lua'
no file 'bin/clibs/libtoast.dylib'
no file 'bin/clibs/toast.dylib'
no file './toast.so'
no file '/usr/local/lib/lua/5.1/toast.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
no file '/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua/Internals/libtoast_64.so'
Here is my user.lua file at this time
--[[--
Use this file to specify **User** preferences.
Review [examples](+/Applications/ZeroBraneStudio.app/Contents/ZeroBraneStudio/cfg/user-sample.lua) or check [online documentation](http://studio.zerobrane.com/documentation.html) for details.
--]]--
--https://studio.zerobrane.com/doc-general-preferences#debugger
-- to automatically open files requested during debugging
editor.autoactivate = true
--enable verbose output
--debugger.verbose=true
--[[--
specify how print results should be redirected in the application being debugged (v0.39+). Use 'c' for ‘copying’ (appears in the application output and the Output panel), 'r' for ‘redirecting’ (only appears in the Output panel), or 'd' for ‘default’ (only appears in the application output). This is mostly useful for remote debugging to specify how the output should be redirected.
--]]--
debugger.redirect="c"
-- to force execution to continue immediately after starting debugging;
-- set to `false` to disable (the interpreter will stop on the first line or
-- when debugging starts); some interpreters may use `true` or `false`
-- by default, but can be still reconfigured with this setting.
debugger.runonstart = true
-- FlyWithLua.ini version 2.7.6 build 2018-10-24
-- Where to search for modules.
-- use this to evaluate your project folder , select the print function / right Mousebutton --> Evaluate in Console
--print(ide.filetree.projdir)
ZBSProjDir = "/Volumes/SSD2go PKT/X-Plane 11 stable/Resources/plugins/FlyWithLua"
INTERNALS_DIRECTORY = ZBSProjDir .. "/Internals/"
MODULES_DIRECTORY = ZBSProjDir .. "/Modules/"
package.path = table.concat({
    package.path,
    INTERNALS_DIRECTORY .. "?.lua",
    INTERNALS_DIRECTORY .. "?/init.lua",
    MODULES_DIRECTORY .. "?.lua",
    MODULES_DIRECTORY .. "?/init.lua",
}, ";")
package.cpath = table.concat({
    package.cpath,
    INTERNALS_DIRECTORY .. "?.ext",
    MODULES_DIRECTORY .. "?.ext",
}, ";")
-- Produce a correct name pattern for binary modules for OS and architecture.
-- This resolves clash between OS X and Linux binary modules by requiring "lib"
-- prefix for Linux ones.
local library_pattern = "?_64."
if SYSTEM == "IBM" then
    library_pattern = library_pattern .. "dll"
elseif SYSTEM == "APL" then
    library_pattern = library_pattern .. "so"
else
    library_pattern = "lib" .. library_pattern .. "so"
end
package.cpath = package.cpath:gsub("?.ext", library_pattern)
Version --> ZeroBrane Studio (1.90; MobDebug 0.706)
Greetings Lars
You should get the desired effect if the toast module file is located in your project directory. When a command is executed in the Console, the current directory is set to the project directory, so even though the lualibs folder from the IDE may be in the path, it should make no difference (unless you copied the module into lualibs).
ubuntu@ip-172-31-32-122:~/src/tensorflow$ bazel-bin/tensorflow/examples/image_retraining/retrain --help
usage: retrain.py [-h] [--image_dir IMAGE_DIR] [--output_graph OUTPUT_GRAPH]
[--output_labels OUTPUT_LABELS]
[--summaries_dir SUMMARIES_DIR]
[--how_many_training_steps HOW_MANY_TRAINING_STEPS]
[--learning_rate LEARNING_RATE]
[--testing_percentage TESTING_PERCENTAGE]
[--validation_percentage VALIDATION_PERCENTAGE]
[--eval_step_interval EVAL_STEP_INTERVAL]
[--train_batch_size TRAIN_BATCH_SIZE]
[--test_batch_size TEST_BATCH_SIZE]
[--validation_batch_size VALIDATION_BATCH_SIZE]
[--print_misclassified_test_images] [--model_dir MODEL_DIR]
[--bottleneck_dir BOTTLENECK_DIR]
[--final_tensor_name FINAL_TENSOR_NAME] [--flip_left_right]
[--random_crop RANDOM_CROP] [--random_scale RANDOM_SCALE]
[--random_brightness RANDOM_BRIGHTNESS]
Looks like there is no --architecture option available. If you pass it, the script just ignores it without complaining. This means Inception is the only option for retraining. Is that intended here?
Here is how I built the script:
/usr/local/bin/bazel build tensorflow/examples/image_retraining:retrain
Bazel version:
ubuntu@ip-172-31-32-122:~/src/tensorflow$ /usr/local/bin/bazel version
............................
Build label: 0.5.2
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Tue Jun 27 13:27:03 2017 (1498570023)
Build timestamp: 1498570023
Build timestamp as int: 1498570023
I am not sure whether this doubt still exists; however, if it does: in the newer TensorFlow version (1.7.0), the team introduced "--tfhub_module" as an argument, which is now used to control the architecture you want to use for retraining your model. With this argument, many new architectures like NASNet and PNASNet have been introduced. You can find the original code here.
I am trying to build and simulate a project that references INET (the INET Wireless Tutorial, actually); however, I get the error below that says "Error: Cannot load library '../inet/src//libINET.dll': A dynamic link library (DLL) initialization routine failed". My project WireTryLiveson references the INET project. I wonder if I need to set the path environment variables differently? I tried the following:
Open "RSVPPacket.msg", modify it, then change it back and save it (which was suggested by a Google group post when I searched for "libinet.dll")
Build INET
Neither seemed to work. The INET example projects simulate just fine.
What should I change so I can load libINET.dll?
You can see libINET.dll here in the Project Explorer on the left.
Here are my environment variables.
Here's what the OMNET++ Console displays with my error:
Starting...
$ cd C:/inetModule/WireTryLiveson
$ WireTryLiveson.exe -m -n .;../inet/src;../inet/examples;../inet/tutorials;../inet/showcases --image-path=../inet/images -l ../inet/src/INET omnetpp.ini
OMNeT++ Discrete Event Simulation (C) 1992-2017 Andras Varga, OpenSim Ltd.
Version: 5.1.1, build: 170508-adbabd0, edition: Academic Public License -- NOT FOR COMMERCIAL USE
See the license for distribution terms and warranty disclaimer
<!> Error: Cannot load library '../inet/src//libINET.dll': A dynamic link library (DLL) initialization routine failed
End.
Simulation terminated with exit code: -1073741819
Working directory: C:/inetModule/WireTryLiveson
Command line: WireTryLiveson.exe -m -n .;../inet/src;../inet/examples;../inet/tutorials;../inet/showcases --image-path=../inet/images -l ../inet/src/INET omnetpp.ini
Environment variables:
PATH=;C:/inetModule/inet/src;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\mingw64\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin;;C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/ide/jre/bin/server;C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/ide/jre/bin;C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/ide/jre/lib/amd64;.;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\mingw64\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\local\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin;C:\Windows\System32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin\site_perl;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin\vendor_perl;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\tools\win64\usr\bin\core_perl;C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1;
OMNETPP_ROOT=C:/omnetpp-5.1.1-src-windows/omnetpp-5.1.1/
OMNETPP_IMAGE_PATH=C:\omnetpp-5.1.1-src-windows\omnetpp-5.1.1\images
Here's my omnetpp.ini:
[General]
network = WirelessTry
sim-time-limit = 25s
*.host*.networkLayer.arpType = "GlobalARP"
*.hostA.numUdpApps = 1
*.hostA.udpApp[0].typename = "UDPBasicApp"
*.hostA.udpApp[0].destAddresses = "hostB"
*.hostA.udpApp[0].destPort = 5000
*.hostA.udpApp[0].messageLength = 1000B
*.hostA.udpApp[0].sendInterval = exponential(12ms)
*.hostA.udpApp[0].packetName = "UDPData"
*.hostB.numUdpApps = 1
*.hostB.udpApp[0].typename = "UDPSink"
*.hostB.udpApp[0].localPort = 5000
*.host*.wlan[0].typename = "IdealWirelessNic"
*.host*.wlan[0].mac.useAck = false
*.host*.wlan[0].mac.fullDuplex = false
*.host*.wlan[0].radio.transmitter.communicationRange = 500m
*.host*.wlan[0].radio.receiver.ignoreInterference = true
*.host*.**.bitrate = 1Mbps
Here's my file wirelessTry.ned:
import inet.common.figures.DelegateSignalConfigurator;
import inet.networklayer.configurator.ipv4.IPv4NetworkConfigurator;
import inet.node.inet.INetworkNode;
import inet.physicallayer.contract.packetlevel.IRadioMedium;
import inet.visualizer.integrated.IntegratedCanvasVisualizer;
network WirelessTry
{
    parameters:
        string hostType = default("WirelessHost");
        string mediumType = default("IdealRadioMedium");
        @display("bgb=650,500;bgg=100,1,grey95");
        @figure[title](type=label; pos=0,-1; anchor=sw; color=darkblue);
        @figure[rcvdPkText](type=indicatorText; pos=420,20; anchor=w; font=,20; textFormat="packets received: %g"; initialValue=0);
        @statistic[rcvdPk](source=hostB_rcvdPk; record=figure(count); targetFigure=rcvdPkText);
        @signal[hostB_rcvdPk];
        @delegatesignal[rcvdPk](source=hostB.udpApp[0].rcvdPk; target=hostB_rcvdPk);
    submodules:
        visualizer: IntegratedCanvasVisualizer {
            @display("p=580,125");
        }
        configurator: IPv4NetworkConfigurator {
            @display("p=580,200");
        }
        radioMedium: <mediumType> like IRadioMedium {
            @display("p=580,275");
        }
        figureHelper: DelegateSignalConfigurator {
            @display("p=580,350");
        }
        hostA: <hostType> like INetworkNode {
            @display("p=50,325");
        }
        hostB: <hostType> like INetworkNode {
            @display("p=450,325");
        }
}
First of all, go to INET properties | OMNeT++ | Makemake | select src | Options... | Compile tab | More >> and make sure that "Export include path for other projects" and "Force compiling object files for use in DLLs" are set. In the Target tab, set "Export this shared/static library for other projects". Then rebuild INET.
If it doesn't help, then try the following workaround:
In properties of your project (e.g. WireTryLiveson):
select inet in Project References
go to OMNeT++ | Makemake | select src | Options... | Target and select "Shared library (.dll, .so or .dylib)"
go to OMNeT++ | Makemake | select src | Options... | Compile and select "Add include paths exported from referenced projects"
In Run | Run Configurations... of your project select opp_run and ensure that Working dir actually indicates the directory which contains omnetpp.ini.
In Ruby (Rake) you can document your tasks in the following way:
# rakefile
desc "cleans ./temp"
task :clean do
  p :cleaning
end

desc "compile the source"
task :compile => :clean do
  p :compiling
end
$ rake -T # Display the tasks with descriptions, then exit.
rake clean # cleans ./temp
rake compile # compile the source
Is this possible with FAKE?
The same is implemented in FAKE, as I found out when reading the source
// build.fsx
Description "remove temp/"
Target "Clean" (fun _ ->
    CleanDirs [buildDir; deployDir]
)
// ...and so on
The dependency graph is shown with .../fake.exe --listTargets or -lt:
Available targets:
   - Clean - remove temp/
      Depends on: []
   - Build
      Depends on: ["Clean"]
   - Deploy
      Depends on: ["Test"]
   - Test
      Depends on: ["Build"]