Cannot create an index, i.e. /{db}/_index not working on CouchDB 2.0.0

I spent hours figuring out why I cannot use the Mango query features. In Fauxton I can neither add a Mango index nor run a Mango query. For instance, in Node.js:
var PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));
var db = new PouchDB('http://localhost:5986/books');
db.createIndex({ index: { fields: ['nom'] } })
  .then(console.log)
  .catch(console.log);
=> { error: 'bad_request',
     reason: 'Referer header required.',
     name: 'bad_request',
     status: 400,
     message: 'Referer header required.' }
Any clue is welcome! Thanks.

It looks like this plugin can only perform the search operation on a local PouchDB database; it does not translate it into a query against a remote CouchDB.
You probably want to set up the local db like this:
var db = new PouchDB('books') (instead of the URL) and then set up replication for your documents as described in the PouchDB docs. Your index will not be synced, however.
An advantage of this is that you can always query your database even if the CouchDB server goes down. A minimal sketch of that setup follows.
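For illustration only, assuming the remote database is served on CouchDB's standard port 5984 (the 5986 in the question is the node-local port) and reusing the 'nom' field from the question; the sync options shown are one common choice:
var PouchDB = require('pouchdb');
PouchDB.plugin(require('pouchdb-find'));

// Local database: createIndex and find run against this copy.
var db = new PouchDB('books');

// Keep the local copy in sync with the remote CouchDB
// (live, retrying replication in both directions).
db.sync('http://localhost:5984/books', { live: true, retry: true });

db.createIndex({ index: { fields: ['nom'] } })
  .then(function () {
    // The index now exists locally, so this query also works offline.
    return db.find({ selector: { nom: { $gte: null } } });
  })
  .then(console.log)
  .catch(console.log);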

Related

expo-sqlite using existing local database

I am using Expo and React Native to build a truth-or-dare app. I want to store hundreds of truth or dare questions to feed to the user. I figured SQLite would be most efficient for this (and would allow offline usage). I created the db using the DB Browser for SQLite tool and created a single table named "Prompts" with several rows.
Here's the code I use for opening the database and performing a transaction:
import * as SQLite from "expo-sqlite";
import database from "../assets/db/TruthOrDareDB.db";

const db = SQLite.openDatabase(database);
console.log(db);
db.transaction((tx) => {
  console.log("transaction test");
  tx.executeSql(
    `
    SELECT *
    FROM Prompts;`,
    [],
    (_, result) => console.log("executeSql"),
    (transaction, error) => console.log(error)
  );
});
The openDatabase call returns a WebSQLDatabase object. I receive the "transaction test" log in the console, but I get neither the "executeSql" log nor an error. I would expect to get at least one of them; why am I not?
And as far as design goes, do you agree that SQLite is the best choice for my goal?

How to use select_object_content via rusoto / rust?

The following code tries to select some data from a file stored on S3:
let client = S3Client::new(Region::default());
let source = ... object providing bucket and key ...;
let r = SelectObjectContentRequest {
    bucket: source.bucket,
    key: source.key,
    expression: "select id from S3Object[*].id".to_string(),
    expression_type: "SQL".to_string(),
    input_serialization: InputSerialization {
        json: Some(JSONInput { type_: Some("LINES".to_string()) }),
        ..Default::default()
    },
    output_serialization: OutputSerialization {
        json: Some(JSONOutput { record_delimiter: Some("\n".to_string()) }),
        ..Default::default()
    },
    ..Default::default()
};
It causes the following error:
The specified method is not allowed against this resource. POST
The example is a 1:1 port of a working Python/boto3 example, so I'm quite sure it should work. I found this issue, which is a few months old, and its status is not clear to me. How do I get this working with Rust?
Unfortunately S3 Select still doesn't work as of the latest rusoto_s3 0.40.0. The issue you linked has all the answers; the problem is twofold.
First, the S3 Select request rusoto currently sends out has a bogus query string. It should be /ObjectName?select&select-type=2, but rusoto encodes it as /ObjectName?select%26select-type=2. That's the error you saw.
To verify, run your project like so:
$ RUST_LOG=rusoto,hyper=debug cargo run
You will see logs from rusoto and hyper. Sure enough, it emits an incorrect URI. One can even dig into the code responsible:
let mut params = Params::new();
params.put("select&select-type", "2");
request.set_params(params);
It is supposed to be:
let mut params = Params::new();
params.put("select-type", "2");
params.put("select", "");
request.set_params(params);
Although the fix seems trivial, remember that this is glue code generated from AWS botocore service manifests, not written by hand, so incorporating the fix is not as simple as it looks.
Second, the bigger problem: the AWS S3 Select response uses a customized binary format, and rusoto simply doesn't have a deserializer for it yet.

How to get raw SQL from Sequelize migrations

I have a bunch of Sequelize migration files. They all look like:
module.exports = {
  up: ...,   // up migration
  down: ..., // down migration
};
Is there a programmatic way to get the SQL queries from those files? Using the Node ecosystem is fine; the only requirement is that it happens automatically.
Why do I want to do this?
I want to generate SQL migrations from the JavaScript files so I can put them into the entrypoint of my Postgres base image for local development, and I don't want to add Node.js and Sequelize to an image that otherwise depends only on the official Postgres base image from Docker Hub.
If you already have a database with the right schema, all you need is the schema. You can use the pg_dump command to get it:
pg_dump.exe -U username -d databasename -s > myschema.sql
You can then import this schema:
psql -d database_name -h localhost -U postgres < myschema.sql
I know you're asking how to get this programmatically, but just exposing the raw SQL is valuable. I was able to get the raw SQL (sorting this out led me to this question) by adding the logging key to the options object.
This is my migration:
await queryInterface.addIndex(
  constants.EVENTS_TABLE_NAME,
  ['created_at'],
  { using: 'brin', concurrently: true, logging: console.log }
);
and the output from the migration:
== 20220311183756-create-brin-index-on-created-at: migrating =======
Executing (default): CREATE INDEX CONCURRENTLY "events_created_at" ON "events" USING brin ("created_at")
== 20220311183756-create-brin-index-on-created-at: migrated (0.019s)
Here is an example from their docs:
await sequelize.query('SELECT 1', {
  // A function (or false) for logging your queries
  // Will get called for every SQL query that gets sent
  // to the server.
  logging: console.log,

  // If plain is true, then sequelize will only return the first
  // record of the result set. In case of false it will return all records.
  plain: false,

  // Set this to true if you don't have a model definition for your query.
  raw: false,

  // The type of query you are executing. The query type affects how results are formatted before they are passed back.
  type: QueryTypes.SELECT
});
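If you truly need the SQL programmatically, a rough sketch of the same idea is below. It is untested and makes assumptions: the connection string and output file name are placeholders, the migration file name is taken from the log above, and it needs a live database because Sequelize only produces the SQL while executing it:
const { Sequelize } = require('sequelize');
const fs = require('fs');

const collected = [];
const sequelize = new Sequelize('postgres://localhost/scratchdb', {
  // Sequelize hands every executed statement to `logging`,
  // prefixed with something like "Executing (default): ", so strip that.
  logging: (msg) => collected.push(msg.replace(/^Executing \([^)]+\): /, '')),
});

// Hypothetical path; point this at one of your migration files.
const migration = require('./migrations/20220311183756-create-brin-index-on-created-at.js');

migration
  .up(sequelize.getQueryInterface(), Sequelize)
  .then(() => fs.writeFileSync('migration.sql', collected.join(';\n') + ';\n'))
  .finally(() => sequelize.close());
Running each migration's up function against a scratch database this way and writing out the collected statements gives you SQL files you can ship in the Postgres image entrypoint.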

ArcGIS offline map layer changes synchronization

In my WPF application I’m trying to use the off-line map functionality. My feature service is configured for data sync, and I’m able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);
_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += ((object sender, EventArgs e) =>
{
    UpdateProgressBar();
});
_generateGdbJob.Start();
After the initial synchronization I’m able to work with the map in off-line mode without problems. This includes operations like adding new geometries or editing existing polygons in the local DB.
However, when I try to synchronize the changes back to the server, I get no results.
To perform data synchronization with the local database I use the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
    GeodatabaseSyncDirection = SyncDirection.Bidirectional,
    RollbackOnFailure = false
};
Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());
foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
    long id = table.ServiceLayerId;
    SyncLayerOption option = new SyncLayerOption(id);
    option.SyncDirection = SyncDirection.Bidirectional;
    parameters.LayerOptions.Add(option);
}
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well. The synchronization ends with the status “Succeeded”. [Screenshot of the messages logged by the SyncGeodatabaseJob not reproduced here.]
However, when I open the edited feature layer from the server in the web map client, I cannot find any of my local changes. In the server database I can also see that no new records were created during synchronization.
The interesting thing is that when I open the “Replica” data in the web client I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The “Last Sync Date” is equal to the replica “Creation Date”. The replica log in ArcMap, however, shows different information. [Screenshot of the ArcMap replica log not reproduced here.]
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or maybe some configuration is missing on the server? It looks like the data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that happen automatically?).
I’m new to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning was enabled on the server database, and the offline, versioned changes were not reconciled to the server.
After running a reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the off-line changes became visible to other system users.
The code looks OK at a quick glance, so I would assume something is going on in the setup.
What do you get back from the sync operation after it has completed? Note that you can just use await syncJob.GetResultsAsync() to start the job and wait for its results.
How is the feature service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.

The name 'NodaTimeField' does not exist in the current context error during index installation on RavenDB

I am using NodaTime's LocalDate in a RavenDB index.
Here is an example of the index:
public class TaskIndex : AbstractIndexCreationTask<ScheduleTask>
{
    public TaskIndex()
    {
        Map = tasks => from task in tasks
                       select new
                       {
                           task.Name,
                           PlannedStartDate = task.PlannedStartDate.AsLocalDate().Resolve(),
                           PlannedDueDate = task.PlannedDueDate.AsLocalDate().Resolve()
                       };
        Index(x => x.Name, FieldIndexing.Analyzed);
        Store(x => x.Name, FieldStorage.Yes);
        TermVector(x => x.Name, FieldTermVector.WithPositionsAndOffsets);
    }
}
I installed the RavenDB-NodaTime bundle as described here.
Here is the piece of code I use to install the index:
var assembly = AppDomain.CurrentDomain.Load(new AssemblyName
{
    Name = "cs.Scheduling"
});
var catalog = new AssemblyCatalog(assembly);
var provider = new CompositionContainer(catalog);
var commands = documentStore.DatabaseCommands.ForDatabase(dbName);
IndexCreation.CreateIndexes(provider, commands, documentStore.Conventions);
documentStore is configured with a default database, but then I use it to install the index into a different (tenant) database whose name comes in dbName.
During the installation of the index I get an exception: The name 'NodaTimeField' does not exist in the current context.
I have one default database, which is completely different from the database I am trying to install the index for. So the case is basically similar to the one described here, but I am using the standalone version of the RavenDB server.
I tried to find out how to do what is suggested there but was not able to:
embeddableDocumentStore.ServerIfEmbedded.Options.DatabaseLandlord.SetupTenantConfiguration += configuration =>
{
    configuration.Catalog.Catalogs.Add(new TypeCatalog(typeof(DeleteOnConflict)));
    configuration.Catalog.Catalogs.Add(new TypeCatalog(typeof(PutOnConflict)));
};
The version of RavenDB I am using is 2.5.2956; RavenDB.Client.NodaTime is 2.5.10.
I hope you can help. Thanks.
In my case it was a very silly mistake. When I installed the RavenDB server some time ago, I installed it into a non-default destination. Later, some RavenDB updates were installed into the default destination (i.e. \Program Files (x86)\RavenDB). And when I installed the RavenDB-NodaTime bundle, I put it into that incorrect destination (\Program Files (x86)\RavenDB).
After detecting this issue and properly configuring the RavenDB server in the correct destination, the error described in the heading went away.
Hope this answer can help somebody else.
P.S. Later there was a deserialization error while reading data from the db (RavenDB did not know how to deserialize a date from a string in "yyyy-MM-dd" format into a LocalDate object), which I fixed by calling store.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb); after the store.Initialize(); call, as Steven suggested in his answer.
I believe the answer is that your tenant database does not have the bundle "activated". Your database document (under Settings in Raven 3) should have something like:
"Raven/ActiveBundles": "Encryption;Compression;NodaTime"
Also, you must call
store.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb);
I call this after store.Initialize(). Once you do both of these things, you may have to fix existing data by re-saving your documents (I'm not sure if there is another way). New data will be properly stored in a format like '2016-2-3', which should make your index return data.