Exporting namespaced data from Vault and getting an error during migration

I need to migrate data from one Vault instance to another. The data is stored in KV engines that live within child namespaces under a non-root namespace. I want to preserve the namespace paths and the KV secrets, but I'm unsure if that's possible.
I tried the following just to list the secrets, and it doesn't seem to work:
vault kv list secret/ -namespace=productname/envname/application-namespace/service-name
productname is the parent namespace and the rest are child namespaces which my application creates dynamically.
Under the child namespace service-name there is a KV engine called "secret" which holds a lot of secrets.
Worst case, I can recreate all the child namespaces of each product manually, but I will still need to export and import the contents of the "secret" KV engine.
However, when I run that command I get:
Too many arguments (expected 1, got 2)
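The error itself comes from argument ordering: Vault's CLI stops parsing flags at the first positional argument, so a -namespace flag placed after the path is read as a second argument. A sketch of the same command with the flag moved first (namespace path copied from the question, not verified against a live cluster):

```shell
# Flags must come before positional arguments in the Vault CLI:
vault kv list -namespace=productname/envname/application-namespace/service-name secret/

# Equivalently, the namespace can be supplied via an environment variable:
export VAULT_NAMESPACE=productname/envname/application-namespace/service-name
vault kv list secret/
```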

Related

Get blobs from azure storage excluding subfolders

Trying to get blobs from an Azure storage container using CloudBlobContainer.ListBlobs, but I want to get only the blobs in a specific folder (using a prefix), not those in the sub-folders under the main folder.
For example, let's say I have this:
Folder
  Sub-Folder
    image2.jpg
  image1.jpg
If I use Folder as the prefix, I want to get image1.jpg AND exclude image2.jpg (anything under sub-folders).
From the reference: if you call the CloudBlobContainer.listBlobs() method, it will by default return a list of BlobItems containing the blobs and directories immediately under the container. That is the default behavior with v8.
Even if you want to match a pattern, as mentioned by Gaurav Mantri (reference):
There's limited support for server-side searching in Azure Blob Storage. The only thing you can filter on is the blob prefix, i.e. you can instruct the Azure Storage service to return only blobs whose names start with certain characters.
But if you want to fetch blobs based on file name, the example below might give an idea:
var container = blobClient.GetContainerReference(containerName);
foreach (var file in container.ListBlobs(prefix: "Folder/filename.xml", useFlatBlobListing: true))
{ … }
In your case, try it with Folder/filename.jpg.
Note:
useFlatBlobListing: Setting this value to true will ensure that only blobs are returned (including those inside any sub-folders of that directory) and not directories and blobs.
So, if you only want blobs, you have to set the UseFlatBlobListing property option to true.
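What the question actually asks for (immediate children only) is what a delimiter-based, non-flat listing gives you. The filtering logic can be sketched in plain JavaScript; this is not the Azure SDK, and the blob names are made up for illustration:

```javascript
// Not the Azure SDK: a plain sketch of what delimiter-based (non-flat)
// listing does, versus the flat listing shown in the answer.
function listImmediateBlobs(blobNames, prefix, delimiter = '/') {
  return blobNames.filter(name => {
    if (!name.startsWith(prefix)) return false;
    const rest = name.slice(prefix.length);
    // a delimiter in the remainder means the blob sits in a sub-folder
    return !rest.includes(delimiter);
  });
}

const names = ['Folder/image1.jpg', 'Folder/Sub-Folder/image2.jpg'];
// A flat listing (useFlatBlobListing: true) returns both names;
// the hierarchical view keeps only the immediate child:
const immediate = listImmediateBlobs(names, 'Folder/');
// immediate → ['Folder/image1.jpg']
```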
Other references:
Ref 1, ref 2

How to regularly update a Mongoose DB in Express.js without a request

I want to retrieve 'Content' instances sorted by a ranking factor, which changes with likes, dislikes and time. The first solution I came up with was adding a 'ranking factor' virtual field on the 'Content' model, but Mongoose didn't allow me to sort the instances by the virtual field when retrieving them. So the only solution I have now is adding a 'ranking factor' field on the model and updating all Content instances regularly using Node's setTimeout function. Where in the Express.js structure should I add the setTimeout call?
I considered putting the setTimeout call in the app.js file, but it felt somehow wrong. I'm planning to add code like the below:
setTimeout(() => Content.findAndUpdate(*, { rankingFactor: calculateRankingFactorWithTime(currentDate, this.likes, this.dislikes) }), 2000)
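As a sketch of the interval-based approach in plain Node (all function and field names here are hypothetical, and this is not the actual Mongoose API; a real update would go through something like Content.updateMany() or per-document save()):

```javascript
// A time-decayed ranking: the score shrinks as content ages.
// calculateRankingFactor and refreshRankings are illustrative names.
function calculateRankingFactor(createdAt, likes, dislikes, now = Date.now()) {
  const ageHours = (now - createdAt) / 3600000;
  return (likes - dislikes) / (1 + ageHours);
}

// Recompute the field for every item in one pass.
function refreshRankings(contents, now = Date.now()) {
  for (const c of contents) {
    c.rankingFactor = calculateRankingFactor(c.createdAt, c.likes, c.dislikes, now);
  }
  return contents;
}

// In app.js, run the refresh on a timer instead of per-request, e.g.:
// setInterval(() => Content.find().then(docs =>
//   refreshRankings(docs).forEach(d => d.save())), 60 * 1000);
```

Note that setInterval, rather than a one-shot setTimeout, is what gives the "regularly" part; for anything production-like, a scheduled job (cron, a queue worker, etc.) is the more usual home for this than app.js.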

Accessing resources of a dynamically loaded module

I can't find a way to correctly get access to resources of an installed distribution. For example, when a module is loaded dynamically:
require ::($module);
One way to get hold of its %?RESOURCES is to ask the module to provide a sub that returns the hash:
sub resources { %?RESOURCES }
But that adds extra boilerplate code.
Another way is to deep scan $*REPO and fetch module's distribution meta.
Are there any better options to achieve this task?
One way is to use $*REPO (as you already mention) along with the Distribution object that CompUnit::Repository provides as an interface to the META6 data and its mapping to a given data store / file system.
my $spec = CompUnit::DependencySpecification.new(:short-name<Zef>);
my $dist = $*REPO.resolve($spec).distribution;
say $dist.content("resources/$_").open.slurp for $dist.meta<resources>.list;
Note this only works for installed distributions at the moment, but it would also work for not-yet-installed distributions (like -Ilib) with https://github.com/rakudo/rakudo/pull/1812

Dgrid collection data not accessible after filtering on collection

So, I am using a collection in my dgrid, and the store is of type [Memory, Trackable]. I am using store filtering (as given here). When I filter the store data, the returned collection object does not have any data attribute, so I am unable to access the data from the collection. The changes are reflected in the dgrid when I change the collection, but I need to access the data from the collection to do other things.
Here is my code:
var filterObj = new this.store.Filter();
var tagFilter = filterObj.in('tagList', selectedTags);
var newCollection = this.store.filter(tagFilter);
this.grid.set('collection', newCollection);
I am unable to retrieve data from newCollection as well as from this.grid.collection. Am I doing something wrong here?
The fetch (and fetchSync in the case of Memory) APIs are the correct public APIs to use (and that is why dgrid still has no trouble querying the collection).
data is an implementation detail, and you shouldn't really be trying to access a store/collection's data via the data property. That is ordinarily present on the root store when it gets mixed in via constructor arguments.
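For illustration only, the difference between the private data property and the public fetch API can be mimicked with a tiny in-memory stand-in (this is not the real dstore implementation, just the shape of it):

```javascript
// Minimal stand-in for a dstore-style collection: filter() returns a new
// collection view without a `data` property; fetchSync() is the public
// way to materialize the results.
function makeCollection(items) {
  return {
    filter(predicate) {
      return makeCollection(items.filter(predicate));
    },
    fetchSync() {
      return items.slice();
    }
  };
}

const store = makeCollection([
  { id: 1, tagList: ['a'] },
  { id: 2, tagList: ['b'] }
]);

const filtered = store.filter(item => item.tagList.includes('a'));
// filtered.data is undefined, but fetchSync() returns the matching items:
const results = filtered.fetchSync();
// results → [{ id: 1, tagList: ['a'] }]
```

In the question's code the equivalent would be newCollection.fetchSync(); since the store is a Memory store, the synchronous variant is available.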

Issue in passing hash values across objects in Ruby on Rails

I am running into problems passing a hash (serialized as a hash) to a newly created object. The values get converted to yaml format.
Consider the following:
Model ComputerUser:
...
serialize :preferences
# in the database I see the following: "{0=>{color:red format:html}}"
...
@computer_user.registrations.build(:user_pref => :preferences).save
@computer_user.user_pref
# the above statement spills out the data in YAML format, and that is how it gets persisted in the db.
Now, if I do the following from rails console, I don't see the same issue, i.e. the hash is stored as a hash and not converted to the yaml format. I see the following when I inspect the value of the column in the new object:
=>{0=>{color:red format:html}}
Please note that I have used serialize for the attributes in the source as well as the target. Things seem to work from the console but not from the controller! Any ideas what is going on? Why does the issue occur only in the web application and not in the console?
The issue was that I was not using instance variables for the assignment.
If you have "serialize :attribute, Hash" declared in both classes, this is how you need to build the child object from the parent:
@computer_user.registrations.build(:user_prefs => @computer_user.preferences).save
That works like a charm.
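The difference between the broken and working versions can be reproduced in plain Ruby, with no Rails involved (variable names mirror the question; the hash contents are illustrative): passing the bare symbol serializes the symbol itself, while passing the instance's hash serializes the actual data.

```ruby
require "yaml"

# Sketch of the bug: the symbol :preferences versus the hash it names.
preferences = { 0 => { "color" => "red", "format" => "html" } }

wrong = :preferences   # what build(:user_pref => :preferences) hands over
right = preferences    # what build(:user_pref => @computer_user.preferences) hands over

puts YAML.dump(wrong)  # the symbol, not the data
puts YAML.dump(right)  # the full hash, as Rails' serialize would store it
```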