Is there any way to make cloud-init get the datasource from the hard disk itself? - datasource

I am implementing cloud-init support in our product, and I was assigned to do it without using a CD-ROM (in fact, only from the hard disk itself). Is there any way to do that?

Sure. You can use the NoCloud datasource, which looks in /var/lib/cloud/seed without any special configuration: place your meta-data and user-data files under /var/lib/cloud/seed/nocloud/ on the disk. See the seed example in the cloud-init sources.


Is there a way to write to a file while it is being used by another process (a process in ring 0)?

Recently, I've been trying to write to a .PAK file while it is being used by another process running in ring 0. This has been a problem for quite a while and I haven't had much success. I am able to use any programming language necessary to accomplish this, but C#/VB.NET is preferred. I originally wanted to use a find-and-replace system when editing, but I will just choose an offset to write to instead.
No, I can't just terminate the process then edit; the process must be running. Yes, I obviously know the process with the file handle attached.
No, I can't just run as admin because the process is established in ring 0/the kernel.
I've tried multiple methods including setting the process speed temporarily to 0 to edit then revert, and changing the FileShare and other parameters, none with any success.
One approach I have been told about a lot, and which I have no experience with, is creating a "kernel driver". I'm not sure how to go about this and I can't find much info online, so if you think that is the best method, please tell me how to get started. Any help is appreciated!
Always work on a temporary file (a copy of your original file). If you need to process a file in your code, create a temp copy and use and process that copy instead. That way there is no conflict with the other process that holds the original open.
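For illustration, here is a minimal Win32 C sketch of that temp-copy approach (the question prefers C#, where System.IO.File.Copy and Path.GetTempFileName do the same job; the C:\game\data.pak path below is purely hypothetical). Note that the copy only succeeds if the owning process opened the file with at least read sharing:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char tmpDir[MAX_PATH], tmpFile[MAX_PATH];

    /* Build a unique temp file name, e.g. %TEMP%\pakXXXX.tmp */
    GetTempPathA(MAX_PATH, tmpDir);
    GetTempFileNameA(tmpDir, "pak", 0, tmpFile);

    /* Copy the in-use file; this fails with a sharing violation if the
       owning process opened it without FILE_SHARE_READ. */
    if (!CopyFileA("C:\\game\\data.pak", tmpFile, FALSE)) {
        fprintf(stderr, "copy failed: error %lu\n", GetLastError());
        return 1;
    }

    /* Edit tmpFile freely here -- no other process is holding it open. */
    printf("working copy created at %s\n", tmpFile);
    return 0;
}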

How to change Linux kernel process priority programmatically?

I found the 'renice' utility, which changes the nice value of a process.
But I want to know how to change the priority in kernel code.
Is it okay to just change the priority value in the process's sched_entity?
If you want to change the niceness of the process programmatically, I would advise against setting these values in the kernel struct directly. Instead, you can utilize several POSIX functions such as setpriority or pthread_setschedparam.
The default scheduling policy on Linux is SCHED_OTHER, which schedules based on the niceness level, so by default these two functions achieve the same thing.
If you have access to the task_struct, in order to achieve this directly, you just need to set static_prio in task_struct.
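As a concrete illustration of the user-space route, here is a minimal C sketch using setpriority() (the target PID and nice value are made up for the example; lowering another process's nice value requires root or CAP_SYS_NICE):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    pid_t pid = 1234;     /* hypothetical target PID; 0 means the calling process */
    int nice_value = 10;  /* range is -20 (highest priority) to 19 (lowest) */

    if (setpriority(PRIO_PROCESS, pid, nice_value) == -1) {
        fprintf(stderr, "setpriority failed: %s\n", strerror(errno));
        return 1;
    }

    /* getpriority() can legitimately return -1, so clear errno first
       if you want to distinguish that from an error. */
    errno = 0;
    printf("new niceness: %d\n", getpriority(PRIO_PROCESS, pid));
    return 0;
}

pthread_setschedparam works analogously per thread when you also want to switch the scheduling policy (for example to SCHED_FIFO or SCHED_RR).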

Is there a way to get "Dominator Tree"-like functionality from a running JVM?

Recently, I've been digging into JVM heap dumps using Eclipse MAT. I like it, but the feature I seem to use the most is the Dominator Tree (Eclipse's docs have an example screenshot of it).
Anyway, I find that a lot of the time I get the most value out of just looking at the first few entries of that table. The turnaround time for getting this is:
Create Heap Dump (jcmd <pid> GC.heap_dump)
Download/Pull heapdump to a location (MAT isn't installed on our servers)
Run Eclipse MAT's ParseHeapDump.sh tool to build the various trees
Open MAT, click Dominator Tree icon.
Analyze
Is there a way to get this equivalent information off of a running JVM programmatically? I'd like to run some kind of gather_dominators.sh <pid> script on a host and get the Top X Objects from a JVM, but I don't know where to start.
If by "running JVM" you mean getting the info without doing a stop-the-world heap dump, then the obvious answer is: to do that without a full scan, the data needs to be collected throughout the system's lifetime, by tapping the creation/release of each object and maintaining the statistics. You can achieve this with instrumentation or with ready-made agents (JOL, JAMM, etc.). Note that many GCs already do similar work to collect (and print) statistics. IIRC, newer JVMs even keep track of such info within the class-metadata area, so getting the statistics is instant.
https://github.com/google/allocation-instrumenter (google-allocation-instrumenter)
http://blog.javabenchmark.org/2013/07/compute-java-object-memory-footprint-at.html (with JAMM)
https://github.com/jbellis/jamm (JAMM source)
In Java, what is the best way to determine the size of an object? (JOL etc. here)
http://www.javaworld.com/article/2074458/core-java/estimating-java-object-sizes-with-instrumentation.html (short DIY guide)
https://www.youtube.com/results?search_query=Understanding+Java+GC (webcast on how the GC traverses objects for similar purposes)
On the other hand, if you're fine with grabbing a heap dump (which should be fine on any production system with proper node redundancy in place, designed to handle unavoidable JVM stop-the-world GC pauses), then jhat, the MAT API, YourKit and JOL are probably your best friends:
Programmatically analyze java heap dump file
How to analyse the heap dump using jmap in java
It is important to note that the current heap-dump format loses the info about the actual sizes of objects, so all tools (MAT etc.) are just trying to guess it properly:
http://shipilev.net/blog/2014/heapdump-is-a-lie/ (What Heap Dumps Are Lying To You About, by Aleksey Shipilёv)
HTH :)

MBXMapKit using local .mbtiles database file

I would like to use MapKit (on OS X) to display custom map tiles from a .mbtiles (SQLite) database of the sort exported from TileMill.
MBXMapKit looks great, and is almost what I'm looking for. I could see how, with very little modification, MBXMapKit could be tweaked to point to a local .mbtiles database file.
Is there any way to use the MBXMapKit framework to accomplish this without tweaking? I did read the docs, and couldn't find a straightforward answer. I did find a private method on MBXOfflineMapDatabase called -initWithContentsOfFile: which sounds promising and looks like it does what I need -- is there anything to watch out for if I expose and use that method?
An alternate option is to subclass MKTileOverlay and use -loadTileAtPath:result:, which is easy to do, but that also requires managing the connection to the SQLite file, etc.
Have a look at this for the latest on MBTiles support:
https://github.com/mapbox/mbxmapkit/issues/3
It'll probably be coming in the next release. This should be distinct and separate from both the normal performance cache (NSURLCache) and the (also SQLite-backed) offline databases, which are meant for individual tile downloads placed into a cache one by one.
It took me quite some time to work this out, but here is the link that got me on the right path.
https://github.com/mapbox/mbxmapkit/pull/110/commits/8b9fbf3fd56ae804a38c737305f128fd43a8225d
For some reason the method _mbtilesOverlay = [[MBXMBTilesOverlay alloc] initWithMBTilesURL:mbtilesURL]; cannot be used with the latest version of MBXMapKit. I just replaced the .m and .h files with the files from the link, and used MBXViewController.m as a guide to get the map view to show the tile overlay.

How to write in kernel mode to some process's virtual memory

I want to use my Linux kernel module to write to another process's memory (I would like to do it in kernel mode and avoid the ptrace interface).
I have to use functions (like do_mmap(), do_munmap(), sys_mprotect(), etc.) which affect the current process's memory instead of the process I'd like them to affect.
So I figured I need to find a way to do a context switch to the process I want, in order to make that process the current one. I tried to copy the implementation of schedule() with a minor change:
I replaced the line:
next = pick_next_task(rq);
with:
next = myNext;
My problem is that schedule() requires so many structs and functions which I can't include that I would have to re-implement them. It seems pretty bad to do such a thing. Do you have any suggestions?
I want to avoid modifying the existing kernel, so I won't have to force users to modify their operating system and restart it in order to use my program (which is why I use a module).
By the way, I use the "2.6.38-11-generic" version of Linux.
Use the get_user_pages() function to get the pages of the target process (more precisely, its mm_struct)
Map the page(s) that you need via kmap() or kmap_atomic() (depending on the context)
Write/read at the address returned by the mapping (staying within a single page).
Destroy the mapping via kunmap() or kunmap_atomic()
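Putting those four steps together, here is a rough sketch of what the module code might look like on the 2.6-era kernels the question targets (the get_user_pages() signature and the mmap_sem locking have changed in later kernels, and write_to_task() is just a made-up helper name, so treat this as a starting point rather than drop-in code):

#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/errno.h>

/* Write len bytes (len must stay within a single page) into the target
 * task's address space at addr, without switching to that task. */
static int write_to_task(struct task_struct *task, unsigned long addr,
                         const void *buf, size_t len)
{
    struct mm_struct *mm;
    struct page *page;
    void *vaddr;
    int ret;

    mm = get_task_mm(task);              /* take a reference on the target mm */
    if (!mm)
        return -EINVAL;

    down_read(&mm->mmap_sem);
    /* 2.6.38-style call: pin one page, write=1, force=0 */
    ret = get_user_pages(task, mm, addr & PAGE_MASK, 1, 1, 0, &page, NULL);
    up_read(&mm->mmap_sem);
    if (ret <= 0) {
        mmput(mm);
        return -EFAULT;
    }

    vaddr = kmap(page);                              /* map the page into kernel space */
    memcpy(vaddr + (addr & ~PAGE_MASK), buf, len);   /* the actual write */
    kunmap(page);

    set_page_dirty_lock(page);                       /* tell the MM the page changed */
    put_page(page);                                  /* drop the pin from get_user_pages() */
    mmput(mm);
    return 0;
}

This is essentially what the kernel's own access_process_vm() helper does internally.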