I am implementing an Express web service using Couchbase as my database. To get all documents stored in a bucket, I created a view using the web console.
My question is whether there is a way to do the same thing without creating a view or using N1QL.
I was looking at the Couchbase Server REST API, but I didn't find a way.
Thank you
You could design your schema around a predictable key pattern, specifically one that would allow a bulk get of a range of documents (see the sketch below).
Beyond that, there is no way without a view or N1QL.
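For example, if document keys follow a sequential pattern such as user::1 through user::N, you can rebuild the keys and fetch them in parallel without ever asking the server to enumerate them. Here is a minimal sketch with the Couchbase Node SDK; the bucket name, key prefix, and the user::count counter document are assumptions for illustration, not anything from your setup:

    import { connect } from 'couchbase';

    // Fetch user::1 .. user::N by rebuilding the keys from a known pattern.
    // Assumes a counter document ("user::count") tracks the highest id.
    async function getAllUsers(): Promise<any[]> {
      const cluster = await connect('couchbase://localhost', {
        username: 'Administrator',
        password: 'password',
      });
      const collection = cluster.bucket('my-bucket').defaultCollection();

      const count = (await collection.get('user::count')).content as number;
      const keys = Array.from({ length: count }, (_, i) => `user::${i + 1}`);

      // "Bulk get": issue all key-value reads in parallel and wait for them.
      const results = await Promise.all(keys.map((k) => collection.get(k)));
      return results.map((r) => r.content);
    }

The point of the key pattern is that the keys are computable, so you never need a view or N1QL to discover them.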
In Couchbase 3.0 and higher, you can also use DCP to stream all documents from a bucket. Currently the DCP protocol is only implemented in Java; you can see an example here: http://github.com/branor/couchbase-dcp-consumer
Note that there is a problem in the 1.1.0+ versions of the Couchbase core-io library, so you need to use version 1.1.0-dp (developer preview) to open a stream. DCP support in the SDK is still experimental, so I wouldn't use it in production yet.
Create a document that will hold the keys of all your documents.
While inserting a key-value pair into Couchbase, also append the key to that document.
E.g.:
<Key1, Value1>
<Key2, Value2>
.
.
.
<Keyx, Valuex>
<All_Keys, <Key1, Key2, Key3...Keyx>>
To get all the documents, just do a client.get("All_Keys") and then a client.getBulk() operation on the returned keys.
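A sketch of that pattern follows (client.getBulk() above is from the Java SDK; with the Node SDK you can fan out individual gets instead — the bucket name and credentials are placeholders):

    import { connect } from 'couchbase';

    async function getAllDocuments(): Promise<any[]> {
      const cluster = await connect('couchbase://localhost', {
        username: 'Administrator',
        password: 'password',
      });
      const collection = cluster.bucket('my-bucket').defaultCollection();

      // 1) Fetch the index document that lists every key in the bucket.
      const allKeys = (await collection.get('All_Keys')).content as string[];

      // 2) Fetch every listed document in parallel (the bulk get).
      const results = await Promise.all(allKeys.map((k) => collection.get(k)));
      return results.map((r) => r.content);
    }

Keep in mind that every insert now costs two writes (the document itself plus the update to All_Keys), and the index document becomes a contention point if many clients write concurrently.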
The earlier NuGet package 'Microsoft.Bot.Builder.Azure' had AzureTableStorage, AzureBlobStorage, and CosmosDbStorage, but the latest version has only AzureBlobStorage and CosmosDbStorage.
What if I need to use table storage as well? The Microsoft.Bot.Builder.IStorage abstraction doesn't allow table storage, only blob and Cosmos DB storage.
Is table storage not supported for IStorage, or am I missing something?
I have also tried upgrading all NuGet packages and targeting .NET Core 2.2.
The provider for table storage was removed before the SDK was released due to limitations that the team didn't have time to work around. That said, as you can see, there is an IStorage abstraction that would allow you to write your own implementation on top of Azure Table Storage if that's something you think you need.
Honestly, I don't know if I see much of a point in it. If you don't need the ability to perform ad-hoc queries over the data then blob storage is the cheapest, fastest option. If you do want to perform ad-hoc queries over the data, then table storage was never going to help you anyway due to it only having partition/row key indexability, so you'd need to go to something more powerful like CosmosDB which can index on all of the data.
FWIW, if you wanted to resurrect the AzureTableStorage implementation, you can always grab the last version that existed before it was removed from the SDK here.
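To give an idea of the shape such an implementation takes, here is a minimal sketch written in TypeScript against the JS SDK's equivalent Storage interface, backed by the @azure/data-tables client. The single-partition layout, the "data" property, and the class itself are my assumptions rather than SDK code, and it skips the eTag-based concurrency checks a production implementation would need:

    import { Storage, StoreItems } from 'botbuilder';
    import { TableClient } from '@azure/data-tables';

    // Hypothetical table-backed storage: each state key becomes a row key
    // under one fixed partition, with the value JSON-serialized into "data".
    export class TableStorage implements Storage {
      private client: TableClient;
      private partition = 'botstate';

      constructor(connectionString: string, tableName: string) {
        this.client = TableClient.fromConnectionString(connectionString, tableName);
      }

      // Table Storage forbids characters like '/' in row keys; encode them away.
      private rowKey(key: string): string {
        return encodeURIComponent(key);
      }

      async read(keys: string[]): Promise<StoreItems> {
        const items: StoreItems = {};
        for (const key of keys) {
          try {
            const entity = await this.client.getEntity<{ data: string }>(
              this.partition, this.rowKey(key));
            items[key] = JSON.parse(entity.data);
          } catch {
            // Missing keys are simply omitted from the result.
          }
        }
        return items;
      }

      async write(changes: StoreItems): Promise<void> {
        for (const key of Object.keys(changes)) {
          await this.client.upsertEntity({
            partitionKey: this.partition,
            rowKey: this.rowKey(key),
            data: JSON.stringify(changes[key]),
          });
        }
      }

      async delete(keys: string[]): Promise<void> {
        for (const key of keys) {
          await this.client.deleteEntity(this.partition, this.rowKey(key));
        }
      }
    }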
Let me explain in more detail:
First: I'm running Endeca 3.1, so "Endeca Server" here refers to 3.0's Data Domain.
I'm required to use an Endeca Server currently present on Endeca (a downloaded demo VM). All the info on it, including groups, attributes, and data, must be merged into our Endeca Server. (It could also be the other way around: I could merge my Endeca Server into this one.)
So far, I've tried to do the following:
1) Clone the Endeca Server
2) Use the putCollection sconfig operation to create a collection on it with the same name as the one on mine.
3) Load configurations using the LoadCollection & LoadAttributes graphs from the OEID POC Template 3.1, pointing to the new collection in the Configuration.xls file.
This is where I encounter an issue: the LoadAttributes graph gets a timeout message from the server's web service, and then the config WSDL becomes inaccessible for a while. I can't get beyond this point.
I've been able to load data into the collection, but I need to load the attributes first.
Thanks in advance for your replies.
Regards
There are a few techniques.
Have you tried exporting the data domain and then importing it?
You can use the endeca-cmd tools to export to a file, and then import from that file. This would enable you to add two datastores into one server.
If you want to combine two datastores, then that is a different question.
The simplest approach in 3.1, if the data collections are small: extract them as CSV (via a data table), convert to XLS, and add them via self-provisioning into separate collections within a single data store. If you are running in the VM, this is potentially the easiest approach.
This can also be done using Integrator.
You don't need to load the attributes unless you are using multi-value types. You can call against the conversation web service to extract data and then load it using 'bulk-load'. I would not worry too much about creating the attributes unless this becomes essential due to their type or complexity. If you cannot call against the conversation web service, then again extract as CSV and load using Integrator.
I have a CKAN datastore with a column named "recvTime" of type timestamp (i.e. using "timestamp" as type at datastore_create time, as shown in this link). Example value for this column is "2014-06-12T16:08:39.542000".
I have a large number of records in the datastore (thousands), and I would like to delete the rows before a given date in "recvTime". My first thought was to do it using the REST API with the datastore_delete operation using a range filter, but that is not possible, as described in the following Q&A.
Is there any other way of solving the issue, please?
Given that I have access to the host where the CKAN server is running, I wonder if this could be achieved by executing a regular SQL statement on the PostgreSQL engine where the datastore is persisted. However, I haven't found information about manipulating CKAN's underlying data model in the CKAN documentation, so I don't know if this is a good idea or if it is risky...
Any workaround or information pointer is highly welcome. Thanks!
You could definitely do this directly on the underlying database if you were willing to dig in there (the structure is pretty simple with tables named after the corresponding resource id). You could even turn this into an API of your own using an extension (though you'd want to be careful about permissions).
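For illustration, assuming direct access to the datastore database, the deletion itself is a single statement per resource. Here is a sketch using the node-postgres client, where the connection string, resource id, and cutoff are all placeholders:

    import { Client } from 'pg';

    // Deletes old rows straight from CKAN's datastore database.
    // The table name is the resource id (a UUID) -- a placeholder here.
    async function deleteRowsBefore(cutoff: string): Promise<void> {
      const client = new Client({
        connectionString: 'postgres://ckan:pass@localhost/datastore_default',
      });
      await client.connect();
      try {
        // Identifiers cannot be bind parameters, so the resource id is part
        // of the statement; the timestamp goes through a parameter.
        await client.query(
          'DELETE FROM "00000000-dead-beef-0000-resourceid00" WHERE "recvTime" < $1',
          [cutoff]
        );
      } finally {
        await client.end();
      }
    }

    // Example: drop everything received before June 2014.
    deleteRowsBefore('2014-06-01T00:00:00').catch(console.error);

Since this bypasses CKAN's API layer entirely, take a backup of the database before trying it.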
You might also be interested in the new support (master only atm) for extending the DataStore API via a plugin in an extension - see https://github.com/ckan/ckan/pull/1725
I have a scenario in which multiple lines of text are to be appended to an existing text file... Is it possible to do this using jclouds? (That would be ideal for me, as jclouds supports a lot of cloud providers.)
Even if this is not doable using jclouds, do the native APIs of Amazon S3/Rackspace Cloud Files/Azure Storage support appending content to existing blobs?
If this is doable, then kindly point me to good working examples which show the same...
This is not possible in the underlying blob stores I know of.
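The usual workaround is read-modify-write: download the blob, append locally, and upload the result, overwriting the original. A sketch with recent versions of the AWS SDK for JavaScript v3 (the bucket and key are placeholders, and note this is not atomic, so concurrent writers can lose updates):

    import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';

    const s3 = new S3Client({ region: 'us-east-1' });

    // S3 objects are immutable, so "appending" means fetching the current
    // body, concatenating, and overwriting the whole object.
    async function appendToObject(bucket: string, key: string, extra: string): Promise<void> {
      const existing = await s3.send(new GetObjectCommand({ Bucket: bucket, Key: key }));
      const current = await existing.Body!.transformToString();
      await s3.send(new PutObjectCommand({
        Bucket: bucket,
        Key: key,
        Body: current + extra,
      }));
    }

    appendToObject('my-example-bucket', 'log.txt', 'another line of text\n').catch(console.error);

The same pattern works through jclouds or any of the other providers' native APIs, since they all expose get and put operations.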
What is the process to access data from a SQL data source and have it fill in a list box control so that the user may select one of the values?
I have been given the name of the database and server, the login ID and password.
Code samples would really be appreciated as I have never done any SQL coding.
The latest Extension Library on OpenNTF (extlib.openntf.org) has a whole bunch of Relational Database extensions.
You'll need to get the JDBC drivers for whatever SQL server you're going to be accessing, and then take a look at the ExtLib demo application to see how to create the JDBC connector from your application. Once the connector is in place, you can then use the new controls in ExtLib to easily create a view pane, etc.
You will also need more than the SQL server, username, and password: you'll need to find out the different tables that you'll be accessing so that you can reference them from your XPages application.
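For reference, the ExtLib convention (as I recall it; the element names may differ between releases) is a connection file under WebContent/WEB-INF/jdbc in your NSF, e.g. mydb.jdbc, which the controls then reference by its file name:

    <jdbc>
      <driver>com.mysql.jdbc.Driver</driver>
      <url>jdbc:mysql://myserver:3306/mydb</url>
      <user>dbuser</user>
      <password>dbpassword</password>
    </jdbc>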
I've created a video showing JDBC access from XPages: http://www.youtube.com/watch?v=p6oRCsTsVqc
Wait for the book about the ExtLib that will be released soon. I know Jeremy Hodge wrote the JDBC chapter, so you might be able to get some info from him.
From an answer I gave earlier: you might want to check out the blog post announcing the JDBC support. It has an excellent video explanation and a link to a slide deck.
Also, take a look at XPages101 lesson 61. It's paid-for content, but well worth it if you're serious about XPages development.
If you want to combine Upgrade Pack 1 (UP1) with the Extension Library JDBC parts, then make sure to use the Extension Library that matches exactly the UP1 version. This is version 853-20111215 of the Extension Library. Then you can use the update site method to only deploy the experimental parts of the Extension Library (com.ibm.xsp.extlibx.feature_8.5.3.20111215-0914.jar).
For newer releases of the Extension Library things might (will) have changed, so that UP1 and the Extension Library cannot work together.
When UP2 is released, you need to remove the Extension Library package and deploy UP2. At that point in time UP2 might contain the JDBC support.
Roy,
As the previous posters said, the Extension Library stuff will make it a little more "drag and drop", but you can use a regular JDBC connection to get the data you want. It's pretty simple, but a lot more code than using Domino as a backend. You might want to look at this John Mackey blog post about doing a very similar thing: http://www.jmackey.net/groupwareinc/johnblog/johnblog.nsf/d6plinks/GROC-7G9GT4
Keep in mind that you need the actual Extension Library for this. The Upgrade Pack does not contain the JDBC stuff.
Edit:
Keep in mind that if you don't need "live" data access, and the information you want is fairly static, you could always just use a LotusScript agent to pull the data down into Notes documents. Run that once a day or whatever. No fancy XPages stuff needed. That's fairly common coding practice, with examples available.
Then just have the list box pull from the documents you brought down.