When I performed a release build of Chromium on Windows according to the following procedure, the output directory came to 48 GB.
https://chromium.googlesource.com/chromium/src/+/main/docs/windows_build_instructions.md
However, the files installed by the Chrome installer total only about 700 MB.
Is there a build option that outputs only the minimum files required for browsing?
The build options currently in use are:
gn gen out/Default --args="is_debug=false enable_nacl=false symbol_level=0 blink_symbol_level=0 is_component_build=true"
autoninja -C out/Default chrome
You are using the is_component_build flag, which builds many shared libraries to avoid long link times. If you just remove that flag, it will default to false, which should minimize the number of files output during compilation.
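For example, your original commands with that flag simply dropped (everything else unchanged):
gn gen out/Default --args="is_debug=false enable_nacl=false symbol_level=0 blink_symbol_level=0"
autoninja -C out/Default chrome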
Your build folder also contains intermediate objects, debug symbols, and other build artifacts, which adds many gigabytes. Even a highly optimized release build is generally larger than 20 GB, and a debug build is around 80-100 GB.
So you should not compare the build folder's size with the installation folder's size. Also, mini_installer will greatly reduce the size of your Chromium fork when distributing it, so you should consider making an installer for your Chromium fork for distribution to end users.
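A sketch, assuming the same output directory as above; mini_installer is a standard target of the Windows Chromium build:
autoninja -C out/Default mini_installer
# produces out/Default/mini_installer.exe, a compact self-contained installer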
We are working on a system that uses Cloud Run, with a stack of Spring + Gradle plus Mongo.
The system is containerized and runs on Cloud Run in GCP. However, GCP has a hard limit of 2 GB on container size, which we are currently trying to fit into.
Upon deeper investigation, I found that the Gradle wrapper we use downloads at least 170 MB more than we need.
This includes the following:
It contains documentation, which is not needed when running a build via the wrapper.
It does not delete the zip file after extracting it.
Together this amounts to 270 MB, which is quite big for us. What I want to know is: is there any out-of-the-box wrapper configuration that will help me avoid downloading these extra files onto our system?
It seems you used the Gradle distribution type "all", which includes source code and the Gradle documentation (e.g., for IDE support -- source).
Since you run the Gradle wrapper in the cloud, you probably do not require IDE support: use distribution type "bin". In recent versions of Gradle (version 7), this is the default, but you can still be explicit to make sure:
# gradle wrapper --gradle-version 7.0.2 --distribution-type bin
The size difference is about 200 MB:
# du -hs ~/.gradle/wrapper/dists/gradle-7.0.2-{all,bin}
438M ~/.gradle/wrapper/dists/gradle-7.0.2-all
229M ~/.gradle/wrapper/dists/gradle-7.0.2-bin
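For reference, the wrapper records the chosen distribution in gradle/wrapper/gradle-wrapper.properties; after the command above it should contain a -bin URL along these lines (version as an example):
distributionUrl=https\://services.gradle.org/distributions/gradle-7.0.2-bin.zip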
Gradle still keeps the zip file, so you will have to delete that manually.
This is what we went with.
Although we used the Gradle wrapper, we also made sure to install the relevant Gradle distribution upfront on the machine, so that running the gradlew command does not download it again.
When you download the Gradle distribution while building the image, you are free to delete the file from the filesystem in the same image-building step, which frees up some space.
As a next step, you can squash the image to remove all unnecessary layers (see: How can I remove layers from my docker image?).
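A minimal sketch of that idea, assuming a Spring Boot project with the wrapper checked in (the base image names, the bootJar task, and the jar path are assumptions, not from the original setup). Gradle and all its caches stay in the build stage; only the application jar reaches the final image:
# build stage: wrapper downloads Gradle, builds the jar, caches are discarded with the stage
FROM eclipse-temurin:17-jdk AS build
WORKDIR /app
COPY . .
RUN ./gradlew --no-daemon bootJar && rm -rf /root/.gradle

# runtime stage: only the jar, no Gradle distribution or leftover zip
FROM eclipse-temurin:17-jre
COPY --from=build /app/build/libs/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]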
I use CMake to install a Linux driver. To get the kernel version I use CMAKE_SYSTEM_VERSION, which should yield the output of uname -r.
However, after I installed a new kernel, I tried to reinstall the driver with CMake, only to notice that it was installing to /lib/modules/<previous kernel>/... rather than to the directory for the current kernel. uname -r gives the correct result.
I use the following line in my CMakeLists.txt:
install(PROGRAMS myDevice.ko DESTINATION "/lib/modules/${CMAKE_SYSTEM_VERSION}/kernel/drivers/myDevice" COMPONENT driver)
I could not find CMAKE_SYSTEM_VERSION in the CMakeCache.txt, and just rerunning cmake .. did not do the trick either. I had to regenerate the entire build folder.
I would like to know if there is a better way, since the build folder also contains applications that should not need rebuilding just because there is a new kernel.
Some CMake variables are fixed from the first configuration onward, and CMAKE_SYSTEM_VERSION is one of them. CMAKE_C_COMPILER is another example of such "fixed" variables: changing it invalidates the cache completely.
So, you need to make different builds for different kernels.
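For example, with a reasonably recent CMake (3.15+ for cmake --install), one build tree per kernel, named after uname -r (paths illustrative; the driver component matches the install() rule below):
cmake -S . -B "build-$(uname -r)"
cmake --build "build-$(uname -r)"
cmake --install "build-$(uname -r)" --component driver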
You may separate the configuration and building of the user-space applications from the kernel modules, so that rebuilding a kernel module doesn't force rebuilding the applications.
I have used exactly that approach in my projects: setting one option (introduced with the option() command) builds only the applications, setting another builds only the kernel modules. By default, neither option is set and both kernel-space and user-space components are built.
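A minimal CMakeLists.txt sketch of that scheme; the option and directory names are illustrative, not from the original project:
# Hypothetical top-level CMakeLists.txt
option(ONLY_APPS    "Configure only the user-space applications" OFF)
option(ONLY_MODULES "Configure only the kernel modules" OFF)

if(NOT ONLY_MODULES)
  add_subdirectory(apps)     # user-space applications; survive kernel updates
endif()

if(NOT ONLY_APPS)
  add_subdirectory(driver)   # install() rules using CMAKE_SYSTEM_VERSION live here
endif()
With neither option set, both parts are configured; the driver part gets a fresh build folder for each new kernel, while the applications' build tree stays untouched.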
I'm using Buildroot to create a Linux system for Raspberry Pi. I want to use the initramfs to enable the system to self-patch. The procedure roughly runs as follows:
Raspi boots, kernel loads initramfs
The initramfs-system (which contains busybox, zsync etc.) connects to a central server and checks if there are boot-file updates available (e.g. a new kernel)
If not, it checks if there is a system update available and downloads that if needed
The downloaded (squashfs) system image is mounted and executed via switch_root
My problem is that I need to compile a secondary busybox (and some more packages) for the initramfs, which do not belong in the main system. I have currently solved this by manually tinkering with the package files so they install into target/initramfs, moving this folder out with pre-build and back in again with post-build, but this seems rather hacky. Additionally, different package types require different kinds of changes. Is there a better solution to this problem? If one could, for example, manually override the target directory for each package, this problem would be rather easy to solve.
Create two separate buildroot configurations.
One configuration will have the kernel and the initramfs.
The other configuration only has the squashfs rootfs.
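A rough outline using out-of-tree builds, so both configurations can share one Buildroot source checkout (directory names are illustrative):
# configure and build the kernel + initramfs image (busybox, zsync, ...)
make O=$PWD/../build-initramfs menuconfig
make O=$PWD/../build-initramfs
# configure and build the squashfs root filesystem
make O=$PWD/../build-rootfs menuconfig
make O=$PWD/../build-rootfs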
Creating a partial rootfs from a configuration is very tricky, because you have to be sure that you don't miss any shared libraries or other auxiliary files needed by some program.
Note that to speed up the build, you can use ccache and/or use an external toolchain. See the manual.
I made changes to IntelliJ Community Edition (CE). I can compile and run those changes from within the IntelliJ editor. That launches a second instance of IntelliJ CE running from classes containing my changes. What I want to do is just run those changes without having to first load the source, compile, and run from within IntelliJ.
NetBeans made this easy by just producing an executable as a result of the build. With IntelliJ, it's not at all clear what has to be done. I have tried the following:
Using the Run configuration IntelliJ itself uses to run the altered classes: this includes setting the working directory, main class, VM options, and classpath. This doesn't work, for reasons unknown to me.
On someone's suggestion, running dist.gant in build. This blows up with many unhelpful errors (NoClassDefFound errors, which indicate some confusion on IntelliJ's part about classpaths somewhere).
Running WinLauncher.exe under bin gives the error message that it can't find the VM options file (although it's in bin, and for good measure also under bin/win with the other files that are co-located with the vmoptions file in the directory structure for IntelliJ proper).
All this is harder than it should be. The solution would be to provide an executable as a result of the build and place it in a predictable location.
Has anyone ever actually DONE what I am trying to do: make changes to the community source, then use the resulting editor not as a project you're working on in IntelliJ, but as the IntelliJ editor you're working through?
FOLLOW UP
User60561 had the correct answer. Just to mop up the details: in artifacts, there is a compressed file (win.zip for Windows, mac.zip for Mac, etc.). In order to run your snapshot, you have to unzip this archive (after which it will have the same name, minus the zip extension), then go into the folder "bin". There you'll see two executables: idea.exe and idea64.exe, for the 32- and 64-bit versions respectively. Clicking on one of these runs your snapshot.
Adjusting the contents of the files idea.exe.vmoptions and idea64.exe.vmoptions lets you set the VM parameters to suit yourself; typically, people might want to give the VM more memory through the -Xmx value.
It seems straightforward:
To build the distribution archive of IntelliJ IDEA Community Edition, execute build.xml Ant build script in the root directory of the source code. The results of the build execution can be found at out/artifacts.
https://github.com/JetBrains/intellij-community#building
So download Ant and run it from the root directory of the source checkout. Launch Ant from the command line to make sure everything is working correctly.
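In practice that amounts to (assuming Ant is on your PATH):
cd intellij-community
ant
# the distribution archives end up in out/artifacts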
Initially execute getPlugins.bat/sh, then:
Use update.bat/sh according to its instructions
Or
Click on: Main Menu | Build | IntelliJ IDEA CE build
Copy the contents of intellij-community\out\deploy (the lib and plugins folders) into an existing IJ installation (sometimes it is better to delete the existing folders if they contain older dependencies, or when the installation was of the Ultimate version).
We bundle the Java 6 JRE with our application installer so that it can run on any machine, but this makes the application a bit heavier, so we are planning to reduce the size of the JRE. If anyone has done this sort of task, can you please provide guidance on how to move forward?
Look at the README file in the JRE directory. The 'Optional Files and Directories' section lists a number of files that can be removed from the Oracle/Sun JRE if you are packaging it with your application.
I use an Ant buildfile to copy the JRE from the system install location to the package directory when creating an installation. Put the list of files you want excluded in a separate file and use the 'excludesfile' attribute to load this list:
<copy todir="${deployed_jre_dir}">
<fileset dir="${system_jre_dir}" excludesfile="jre_excludes.properties"/>
</copy>
Sample jre_excludes.properties file:
# per the README from the JRE, these files are for the browser plugin and are not needed otherwise
#bin/javaw.exe
bin/javaws.exe
bin/javacpl.exe
bin/jucheck.exe
bin/jusched.exe
bin/wsdetect.dll
bin/NPJPI*.dll
bin/NPJava*
bin/NPOJI610.dll
bin/RegUtils.dll
bin/axbridge.dll
bin/deploy.dll
bin/jpicom.dll
bin/javacpl.cpl
bin/jpiexp.dll
bin/jpinscp.dll
bin/jpioji.dll
bin/jpishare.dll
lib/deploy.jar
lib/plugin.jar
lib/javaws.jar
lib/javaws/messages*
lib/javaws/miniSplash.jpg
bin/new_plugin**
bin/jureg*
bin/ssv*
bin/jqs*
bin/jp2*
lib/deploy/**/*
# if you do not need any RMI stuff
# wildcard to catch .exe files on Windows
# note rmi.dll is not excluded, as it is needed by jconsole; add rmi.dll if you do not need jconsole
bin/jbroker*
bin/java-rmi*
bin/rmid*
bin/rmiregistry*
bin/tnameserv*
bin/orbd*
bin/servertool*
# do not include QuickTime
# this will be in the jre dir for machines that have QT installed
lib/ext/QTJava.zip
Some updated info: since Java 8 there is an official Oracle tool called jrecreate for creating small embedded JRE packages.
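A sketch of its usage, assuming the Oracle Java SE Embedded (eJDK) distribution it ships with; the paths and the profile are examples:
./ejdk1.8.0/bin/jrecreate.sh --dest ./small-jre --profile compact2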
For my Java 8 Update 144 desktop application I exclude the two big JavaFX files:
bin/jfxwebkit.dll // ~34 MB unpacked
lib/ext/jfxrt.jar // ~17 MB unpacked
The zipped JRE is 49 MB instead of 66 MB.
For me this is an acceptable tradeoff between reduced size and added build complexity (and potential bugs).
You're trying to reduce a standard JRE's size? Don't do that. You can choose to bundle an alternative JRE which might be smaller. A list can be found on this Wikipedia page. As always, beware of compatibility issues and test your application thoroughly.
Another, safer, way is to simply require an installed JRE on the target machine.
You can use the jlink tool to post-process a JDK and create a smaller runtime image that keeps only a specified set of JPMS modules and strips debug information. This is common practice nowadays, as containerized environments are used more and more.
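A sketch of a typical invocation (the module list here is an assumption; the jdeps tool can report which modules your application actually needs):
jlink --add-modules java.base,java.logging \
      --strip-debug --no-header-files --no-man-pages --compress=2 \
      --output custom-runtime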