I'm trying to use PowerCLI to return whether or not a datastore on an ESX host has Storage I/O Control enabled.
I've been using Get-View -ViewType Datastore -Filter @{"Name"="DS_NAME"}, however I can't seem to dig out the piece of information I require from the result.
Any PowerCLI pros out there that can help out?
If the result of your Get-View command is in $vol then you are looking for $vol.IormConfiguration.Enabled.
The Datastore objects returned by Get-Datastore also include a boolean property called StorageIOControlEnabled. Get-VIObjectByVIView will convert the results of your Get-View into those Datastore objects.
Maybe datastores with no VMFS volumes behave differently, not sure.
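If it helps, here is a minimal sketch of both routes (assuming an active Connect-VIServer session and a datastore named DS_NAME):

    # Simplest route: Get-Datastore exposes the flag directly
    (Get-Datastore -Name "DS_NAME").StorageIOControlEnabled

    # Get-View route: the flag lives under IormConfiguration
    $vol = Get-View -ViewType Datastore -Filter @{"Name"="DS_NAME"}
    $vol.IormConfiguration.Enabled

    # Or convert the view back into a Datastore object
    (Get-VIObjectByVIView $vol).StorageIOControlEnabled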
Across many different Objective-C iOS projects I have frequently come across the issue of keeping data accessible after I initially receive it.
For example, currently I am reading from the stackoverflow API. I do this with a session and get a dictionary back (my JSON response).
But outside the scope of the session, the dictionary is unavailable! I can't copy the contents to a different dictionary that I've defined globally, or anything. It's like it disappears outside of the session.
So I am wondering, what's the best way to save this data that I want to use? From what I've been reading it seems like NSUserDefaults or maybe creating a plist file, although admittedly I've been having trouble with both options. If there is a method that is best for this then I can concentrate on that.
Thank you!
It depends on how persistent you want the data to be.
If you save this dictionary into a global variable, it is stored in the part of the device's RAM that is reserved for the running app. When the app stops running (it gets killed by the OS or removed by the user) or the device reboots, this memory is lost.
If you save this dictionary to the device's flash storage (its file system), it will live past restarts and reboots.
Usually people combine the approaches: when you get the data from the network, you keep it in a global variable and also save it to the file system. After an app restart you try to load the data from the file system. The reason for not using the file system all the time is that it is much slower than RAM access. I guess I'm describing caching.
Note that you can implement caching manually (using plain data or text files, NSUserDefaults, Core Data or other libraries), but you can also use the built-in HTTP cache, NSURLCache. If you create a session with NSURLSession.sharedSession, it will use the default NSURLCache and respect the caching policy dictated by the server side.
For more control and full offline support I'd recommend implementing caching manually. See this about reading and writing plists and writeToFile:atomically:.
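As a rough sketch of the manual route (assuming the response only contains plist-friendly types such as strings, numbers, arrays and dictionaries, and using a hypothetical cache.plist file name and jsonDictionary variable):

    // Build a path inside the app's Documents directory (cache.plist is an arbitrary name)
    NSString *docs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).firstObject;
    NSString *path = [docs stringByAppendingPathComponent:@"cache.plist"];

    // Save the dictionary you got back in the session's completion handler
    [jsonDictionary writeToFile:path atomically:YES];

    // Later (e.g. after a restart), load it back; this returns nil if nothing was saved yet
    NSDictionary *cached = [NSDictionary dictionaryWithContentsOfFile:path];

Note that writeToFile:atomically: will return NO if the dictionary contains anything that isn't a plist type, for example the NSNull values that NSJSONSerialization can produce for JSON nulls.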
I have just started with Gremlin. I have successfully built and stored a graph on gremlin-server using a Python script with the gremlin_python package.
I was curious to know how the graph data is stored on disk, but could not find it. (I did find that the Titan graph DB stores it in Cassandra/HBase, but I'm not using Titan, just gremlin-server.)
TinkerGraph is an in-memory graph database, so it does not store anything to the file system and is non-transactional in nature. You can, however, configure it to write its contents to a specified location and format on close by setting these configuration properties:
gremlin.tinkergraph.graphLocation
gremlin.tinkergraph.graphFormat
Those settings and how to use them are described in the TinkerPop reference documentation.
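For example, in the properties file that Gremlin Server uses to create the graph (conf/tinkergraph-empty.properties in the standard distribution), something along these lines would write the graph to a GraphSON file when the graph is closed and reload it on startup (the path is just an example):

    gremlin.graph=org.apache.tinkerpop.gremlin.tinkergraph.structure.TinkerGraph
    gremlin.tinkergraph.graphFormat=graphson
    gremlin.tinkergraph.graphLocation=/tmp/my-graph.json

Keep in mind the file is only written on a clean close, so a crash or hard kill of the server will still lose whatever was only held in memory.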
I have a CKAN datastore with a column named "recvTime" of type timestamp (i.e. using "timestamp" as type at datastore_create time, as shown in this link). Example value for this column is "2014-06-12T16:08:39.542000".
I have a large number of records in the datastore (thousands) and I would like to delete the rows before a given date in "recvTime". My first thought was to do it through the REST API with the datastore_delete operation using a range filter, but that is not possible, as described in the following Q&A.
Is there any other way of solving the issue, please?
Given that I have access to the host where the CKAN server is running, I wonder if this could be achieved by executing a regular SQL statement on the PostgreSQL engine where the datastore is persisted. However, I haven't found information about manipulating the underlying CKAN data model in the CKAN documentation, so I don't know if this is a good idea or if it is risky...
Any workaround or information pointer is highly welcome. Thanks!
You could definitely do this directly on the underlying database if you were willing to dig in there (the structure is pretty simple with tables named after the corresponding resource id). You could even turn this into an API of your own using an extension (though you'd want to be careful about permissions).
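For example, assuming the resource id of your datastore table is the hypothetical <resource_id> placeholder (table and column names are quoted because they are case sensitive), something like this run against the datastore database would do it; it's worth testing the WHERE clause with a SELECT first and taking a backup before deleting:

    DELETE FROM "<resource_id>"
    WHERE "recvTime" < '2014-06-12 00:00:00';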
You might also be interested in the new support (master only atm) for extending the DataStore API via a plugin in an extension - see https://github.com/ckan/ckan/pull/1725
I have a Cacti instance that polls many servers. I have a different analytics platform, and I need to get the data from Cacti into this platform. Has anybody done something like this? Is it possible to retrieve Cacti data remotely via web service calls or anything?
You could use rrdtool dump for this. Find where Cacti stores the RRD files, usually somewhere like /var/lib/cacti/rra or /usr/share/cacti/rra.
For each graph there should be a graph_name.rrd. Use the rrdtool dump command to convert these into XML files, which can then be parsed and sent to your other program:
rrdtool dump graph_name.rrd
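For example, a rough shell one-liner to convert everything in the rra directory (adjust the path to wherever your install keeps the files):

    for f in /var/lib/cacti/rra/*.rrd; do rrdtool dump "$f" > "${f%.rrd}.xml"; done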
Please verify that the correct data source is created and that there are no mistakes when creating the graph template. You can also use the debug function at the top of the graph, which tells you whether it found the RRD database or not.
I am interested in creating a routine that would query the currently running Caché processes and then write this information to a file. How could this be done in Caché 2008.2?
PERFMON might be what you're looking for. That's an app with its own UI, but you can call its functions directly too, as an API.
Check the Caché docs for the "Caché Monitoring Guide". That will give you links to the PERFMON docs, as well as docs for other system monitoring tools.
You might find something useful in the Class Reference, under packages %SYSTEM, %SYS, and %Monitor.
For some process info you might need to shell out to the OS. In that case, check into the $ZF function. That will let you invoke OS-level commands from within Caché.
Oh, and you might want to consider saving the process data within the Caché database, rather than dumping it out to a file. That is, create a Persistent Class with Properties corresponding to each process attribute that you want to capture, then write code to create, populate, and save instances of that class, taking the data from PERFMON or whatever other source you choose.
If you do that you can use Caché SQL to generate whatever kind of report you need. (Caché will automatically generate a SQL table corresponding to your Persistent Class.) Caché supports ODBC, so you can use an external tool like Crystal Reports or Access for that part.
Obviously that will be more work than just echoing data to a file, but some kind of structure will be needed if you're going to do anything interesting with the information.
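A very rough sketch of what such a persistent class might look like (the class name, properties, and the way you fill them in are all placeholders; the values would come from PERFMON or whatever other source you pick):

    Class Sample.ProcessSnapshot Extends %Persistent
    {

    Property Pid As %String;

    Property ProcessNamespace As %String;

    Property Routine As %String;

    Property CapturedAt As %TimeStamp;

    /// Create and save one snapshot; the caller supplies the values,
    /// e.g. parsed from PERFMON output or an OS command run via $ZF.
    ClassMethod Capture(pid As %String, ns As %String, routine As %String) As %Status
    {
        Set snap = ..%New()
        Set snap.Pid = pid
        Set snap.ProcessNamespace = ns
        Set snap.Routine = routine
        Set snap.CapturedAt = $ZDATETIME($HOROLOG, 3)
        Quit snap.%Save()
    }

    }

Caché projects that class as the SQL table Sample.ProcessSnapshot automatically, which is what makes the reporting side straightforward.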