Copying read-only sections into RAM - executable

In Linux for example, when the loader loads an application or library into RAM, doesn't this make everything writable then?
For example, the .text area of an executable file should be read-only. Will it no longer be read-only after it is loaded into RAM? This is what I don't understand.

When an application or library is loaded, the loader does not simply make a writable copy of the whole file in RAM. It maps each segment with the page protections recorded in the executable: .text is mapped readable and executable but not writable, while data segments are mapped writable. Physical RAM has no read-only attribute of its own; the MMU enforces these per-page protections, so a write into the loaded .text region triggers a fault (a segmentation fault on Linux) rather than modifying the code. The original file on disk stays read-only either way.
The .text area is read-only because it contains the machine code for the program. Once the program is loaded into RAM, that machine code is executed directly from RAM, so it doesn't need to be read from the disk again; and because the pages stay read-only, the kernel can share a single copy of them among every process running the same binary.
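A quick way to see this for yourself is a small sketch like the following (plain C on Linux; it just scans its own /proc/self/maps for the mapping that contains main()). The permissions column of the printed line reads r-xp, i.e. read and execute but no write:
#include <stdio.h>

/* Print the /proc/self/maps line that contains this program's own main(),
 * so the permissions column (r-xp) of the .text mapping is visible. */
int main(void)
{
        unsigned long target = (unsigned long)&main;
        unsigned long start, end;
        char line[512];
        FILE *f = fopen("/proc/self/maps", "r");

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f)) {
                if (sscanf(line, "%lx-%lx", &start, &end) == 2 &&
                    target >= start && target < end) {
                        fputs(line, stdout);  /* permissions column shows r-xp */
                        break;
                }
        }
        fclose(f);
        return 0;
}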

Accessing absolute file paths from linux kernel driver in the context of application calling from a chroot

Linux Driver question.
I have an application effectively calling into my kernel module.
The kernel module has to read files from a specific absolute path, during the call from the application.
This all works fine under normal conditions.
The problem occurs when the application is being run from a chroot.
At that point, running in the context of the chrooted application, my driver no longer has access to the absolute path of the file it must read.
The driver is using filp_open() to open the file, which fails when the application is running from a chroot.
Is there a way for me to specify the root for my file opens without disturbing the application's chroot, or causing races with the application accessing other files within the chroot?
The Linux version is CentOS 7.1, kernel 3.10.0-229-el7.x86_64.
Any info greatly appreciated.
This took a lot of crawling around through the kernel code, but I figured out how to do it.
First I needed to use get_fs_root(init_task.fs, &realrootpath).
This gets the real root path, not the chroot path.
Then I needed to look up the file name using filename_lookup(), setting the nameidata root to my real root path and passing the LOOKUP_ROOT flag so the lookup starts from the real root rather than the chroot.
Finally I had to use dentry_open() to open the file using the path I had looked up.
At this point I could access and read a file outside the current task's chroot environment.
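A condensed sketch of those steps (kernel 3.10-era APIs as in the question; filename_lookup() is static in many trees, so this version uses the exported vfs_path_lookup() helper, which applies LOOKUP_ROOT internally, to the same effect; error handling trimmed):
#include <linux/cred.h>
#include <linux/err.h>
#include <linux/fcntl.h>
#include <linux/file.h>
#include <linux/fs.h>
#include <linux/fs_struct.h>
#include <linux/namei.h>
#include <linux/path.h>
#include <linux/sched.h>

/* Open "name" relative to the real (init) root instead of the caller's chroot. */
static struct file *open_from_real_root(const char *name)
{
        struct path real_root, p;
        struct file *filp;
        int err;

        /* Root of the init task, i.e. the real "/" rather than the chroot. */
        get_fs_root(init_task.fs, &real_root);

        /* Resolve the path starting from that root. */
        err = vfs_path_lookup(real_root.dentry, real_root.mnt,
                              name, LOOKUP_FOLLOW, &p);
        path_put(&real_root);
        if (err)
                return ERR_PTR(err);

        /* Open by path, without re-resolving the name inside the chroot. */
        filp = dentry_open(&p, O_RDONLY, current_cred());
        path_put(&p);
        return filp;            /* caller fput()s the file when done */
}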

Custom Open Virtual Format (OVF)

As everyone knows, OVF (Open Virtualization Format) is a format for exporting virtual appliances; it helps in many ways and is reliable. I got to know about OVF from the wiki page Open Virtualization Format. Hypervisors like the VMware bare-metal hypervisor, VirtualBox, and Hyper-V provide their own tools for converting a VM to OVF/OVA format; I learned this from the helpful links below for VMware, Hyper-V, and VirtualBox.
But how can I build a custom OVF if I have only the VHD, VHDX, VDI, or VMDK files of some virtual machine?
Is there any difference between a plain VMDK and the VMDK from an exported OVF?
Is there any programmatic approach with which I can do this easily?
Thanks
A VMware OVF package consists of sparse disks. I did it the simple way with the help of VirtualBox: VirtualBox gives you a command-line option for disk conversion, so you can get your disk into the target format and then create the package. The package consists of an .OVF file and an .MF file along with the disks, all in one folder.
The .MF file contains the SHA1 checksum of every file in the package.
The .OVF file contains the deployment configuration, i.e. controller, disks, RAM, CPU, etc.
There is no need to study everything: just export some VM in OVF format, then use that .OVF as a reference, make the changes you want, and update the checksums in the .MF file.
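For illustration only (the file names are placeholders and the digests are truncated; real SHA1 digests are 40 hex digits), the VirtualBox disk conversion mentioned above and a couple of typical .MF lines look like this:
VBoxManage clonehd MyVM.vdi MyVM-disk1.vmdk --format VMDK
SHA1(MyVM.ovf)= 2c9d1f0b6a...
SHA1(MyVM-disk1.vmdk)= 7e3a9c4d5f...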
How can I do the custom OVF?
A VMware OVF file is just an XML file. It contains information about resources such as the disk file (.vmdk), the CD/DVD image (.iso), memory, vCPUs, network adapters, virtual disk size, and host configuration parameters.
For reference, you can export an OVF file from any VM that is already created/running on the host.
Is there any difference between a plain VMDK and the VMDK from an exported OVF?
We export the VMDK from the host, not from the OVF (the .ovf is just a descriptor file). I think the exported VMDK and the original VMDK are the same, because the exported VMDK can also be used to bring up a VM on the host.
Is there any programmatic approach with which I can do this easily?
You can update the OVF file using any programming language, but I prefer Python and a suitable XML library.
I prefer to use an .OVA instead of an .OVF file.
Basically, an .OVA is a tar of the .VMDK, the .OVF, the .MF (a checksum file covering all files in the .OVA tar, optional), an .iso (optional), etc.
If you use the .OVF file to bring up instances, you need to keep all the referenced files (.VMDK, .iso, etc.) in the same directory with it; otherwise there is a chance of files going missing or being placed in a different directory.
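As an illustration of that layout (the file names are placeholders), an .OVA can be inspected or built with plain tar; note that the OVF descriptor is expected to be the first entry in the archive:
tar -tvf MyVM.ova
tar -cvf MyVM.ova MyVM.ovf MyVM.mf MyVM-disk1.vmdk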

How is an executable file run on an O/S?

Just a conceptual question. A program file is compiled and linked with the required libraries into an exe file. Now this file, I understand, is machine code, which the processor can understand. What I am curious about is how the OS plays a role. I would have thought the OS actually interprets the exe file? I mean, if I write a program in assembly, I could modify memory blocks anywhere, so does the OS protect against this?
Yes the OS (specifically the loader) parses the executable file format. On Windows this is a PE (Portable Executable). On Linux, this is typically an ELF (Executable and Linkable Format) file. Generally, the OS doesn't concern itself with the actual code, however.
The loader determines which chunks of the program on disk go where in memory. Then it allocates those virtual address ranges, and copies the relevant parts of the file into place. Then it does any relocations required, and finally jumps to the entry point (also specified in the file format.)
The thing to remember is that almost all modern OSes protect processes from one another by means of virtual memory. That means that every process runs isolated in its own virtual address space. So if Notepad writes to address 0x700000, it's not going to affect a variable that Word has at 0x700000. Because of the way virtual addresses work, those two addresses actually map to totally different addresses in physical RAM.
On x86 platforms, this security is provided by the Protected Mode and Paging features of the processor.
The key is that it is the hardware that prevents you from doing anything "bad".
Peering Inside the PE: A Tour of the Win32 Portable Executable File Format
Microsoft PE and COFF Specification
ELF Specification
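To make the "entry point (also specified in the file format)" part concrete, here is a hedged sketch in C (64-bit ELF only, minimal error handling) that reads an ELF header using the standard elf.h definitions and prints the address the loader jumps to once mapping and relocation are done:
#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Read the ELF header of the file given on the command line and print the
 * entry point the loader transfers control to. */
int main(int argc, char **argv)
{
        Elf64_Ehdr eh;
        FILE *f;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
                return 1;
        }
        f = fopen(argv[1], "rb");
        if (!f) {
                perror("fopen");
                return 1;
        }
        if (fread(&eh, sizeof(eh), 1, f) != 1 ||
            memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
                fprintf(stderr, "not an ELF file\n");
                fclose(f);
                return 1;
        }
        printf("type %u, machine %u, entry point 0x%llx\n",
               (unsigned)eh.e_type, (unsigned)eh.e_machine,
               (unsigned long long)eh.e_entry);
        fclose(f);
        return 0;
}
Running it against /bin/ls, for example, prints the e_type, e_machine, and e_entry fields straight out of the header.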

/tmp files filling up with surefire files

When Jenkins invokes a Maven build, /tmp fills with hundreds of files like surefire839014140451157473tmp. How can I explicitly redirect them to another directory during the build? For a Clover build it fills with hundreds of grover53460334580.jar files. Any idea how to overcome this?
Also, does anybody know the exact steps to create a ramdisk so I could redirect the surefire files into it? Would it save write time to the hard drive?
Thanks
Many programs respect the TMPDIR (and sometimes TMP) environment variables. Maybe Jenkins uses APIs that respect them? Try:
TMPDIR=/path/to/bigger/filesystem jenkins
when launching Jenkins. (Or however you start it -- does it run as a daemon and have a shell-script to launch it?)
There might be some performance benefit to using a RAM-based filesystem -- ext3, ext4, and similar journaled filesystems will order writes to disk, and even a quick create-and-unlink sequence (open(path, O_CREAT) followed by unlink(path)) will probably require both on-disk journal updates and directory updates. (Homework: test this.) A RAM-based filesystem won't perform the journaling, and might or might not write anything to disk (depending upon which one you pick).
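If you want to try that homework, here is a minimal sketch (the directory argument, file name, and iteration count are arbitrary choices) that times repeated create+unlink cycles so you can compare a disk-backed /tmp against a tmpfs mount:
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Time N create+unlink cycles in the directory given on the command line. */
int main(int argc, char **argv)
{
        const char *dir = argc > 1 ? argv[1] : "/tmp";
        char path[4096];
        struct timespec t0, t1;
        int i, n = 10000;

        snprintf(path, sizeof(path), "%s/scratch-%d", dir, (int)getpid());
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < n; i++) {
                int fd = open(path, O_CREAT | O_WRONLY, 0600);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                close(fd);
                unlink(path);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("%d create+unlink cycles in %.3f s\n", n,
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
}
Run it once against /tmp and once against a tmpfs mount point and compare the timings.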
There are two main choices: ramfs is a very simple window into the kernel's caching mechanism. There is no disk-based backing for your files at all, and no memory limits. You can fill all your memory with one of these very quickly, and suffer very horrible consequences. (Almost no programs handle out-of-disk well, and the OOM-killer can't free up any of this memory.) See the Linux kernel file Documentation/filesystems/ramfs-rootfs-initramfs.txt.
tmpfs is a slight modification of ramfs -- you can specify an upper limit on the space it can allocate (-o size) and the page cache can swap the data to the swap partitions or swap files -- which is an excellent bonus, as your memory might be significantly better used elsewhere, such as keeping your compiler, linker, source files, and object files in core. See the Linux kernel file Documentation/filesystems/tmpfs.txt.
Adding this line to your /etc/fstab will change /tmp globally:
tmpfs /tmp tmpfs defaults 0 0
(The default is to allow up to half your RAM to be used on the filesystem. Change the defaults if you need to.)
If you want to mount a tmpfs somewhere else, you can; maybe combine that with the TMPDIR environment variable from above, or learn about the new shared-subtree features in Documentation/filesystems/sharedsubtree.txt (made easier via pam_namespace) to make it visible only to your Jenkins and its child processes.
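For example, a one-off tmpfs mount for experiments might look like this (the mount point and size are placeholders):
mount -t tmpfs -o size=1g tmpfs /var/tmp/jenkins-tmp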

CF - Config file on device gets read-only attribute

I have added a config file (myapp.exe.config) that is deployed to the device after installation. The problem is that this file gets the read-only attribute. I have tried adding some code to the setup project in the "codeINSTALL_EXIT" function. On the emulator it works ... it removes the read-only attribute, but when installing on the phone the attribute stays.
SetFileAttributes(szPathConfig, FILE_ATTRIBUTE_NORMAL)
Any ideas?
It's not completely clear from your question how the file is getting deployed (though I think from a CAB only). Things to check/know:
If you install via CAB, but then deploy from Studio (i.e. Debug), the file will get overwritten, and the file Studio pushes may well be read-only, especially if your SCC mechanism locks local files that aren't checked out (like VSS does).
When you build a CAB file, the file attributes get inherited from the source, meaning that if the file is read-only on the PC when you create the CAB, it will be read-only coming out of the CAB. One would think that the EXIT stage of the installer would be late enough to alter the attributes, but I've never tested it. Following your current path, you might check the attributes before setting them, and also check whether the SetFileAttributes call actually succeeds. Personally I'd just make sure all files were read/write enabled before building the CAB, to avoid the whole problem in the first place.
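Along those lines, a minimal sketch of that check inside the install-exit handler (szPathConfig is the full device path from your question; how you log the failure is up to you):
/* Check the current attributes, clear only the read-only bit, and verify the call. */
DWORD attrs = GetFileAttributes(szPathConfig);
if (attrs != INVALID_FILE_ATTRIBUTES && (attrs & FILE_ATTRIBUTE_READONLY))
{
    DWORD newAttrs = attrs & ~FILE_ATTRIBUTE_READONLY;
    if (newAttrs == 0)
        newAttrs = FILE_ATTRIBUTE_NORMAL;   /* 0 is not a valid attribute set */
    if (!SetFileAttributes(szPathConfig, newAttrs))
    {
        DWORD err = GetLastError();         /* log this to see why the call failed */
    }
}
Checking the return value tells you whether the Set call itself is failing on the device or whether something re-applies the attribute afterwards (such as the Studio deploy described above).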