Getting ElasticSearch Oracle SQL River Working

I am new to ElasticSearch and am trying to set up an instance on my local machine that will connect to an Oracle SQL database on another server. Currently, I'm using ES 1.5.2 and have downloaded the ojdbc6.jar Oracle JDBC driver. I'm also on a Windows machine.
I can run elasticsearch.bat and successfully start a Test_Node, but I'm having trouble getting the moving pieces to align. From several questions and answers, as well as some of the ES documentation, I have pieced together the following commands.
My command to build the river:
curl -XPUT "localhost:9200/_river/JDBC_River/_meta" -d #config.json
My #config.json doc:
{"type": "jdbc",
"jdbc": "{
\"strategy\": \"oneshot\",
\"url\": \"jdbc:oracle:thin:#server.database:1521/Hello.WORLD\",
\"user\":\"user\",
\"password\":\"password\",
\"sql\":\"select id as \"_id\", count(*) from table group by stuff\",
\"gridfs\": false
}",
"index": "{
\"index\": \"CHANNEL_DATA\",
\"type\": \"MDM_data\"
}"
}
The command I'm using to check that the river exists:
curl -XGET "localhost:9200/JDBC_River/_search?pretty"
This command receives this output:
{
"error" : "IndexMissingException[[JDBC_River] missing]",
"status" : 404
}
The commands I have pieced together to view the data and to delete the river are, respectively:
curl -XGET "localhost:9200/JDBC_River"
curl -XDELETE "localhost:9200/_river/JDBC_River/"
Can anyone help me figure out why my river reports as created but then doesn't seem to exist? My hunch is that it's a driver issue and that the river isn't actually reaching the Oracle database, but I have no idea what syntax to use, as most of the documentation I've found for the Oracle JDBC river is outdated.
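For comparison, here is a sketch of a river definition that avoids two likely problems, assuming the JDBC river plugin's 1.x syntax: the jdbc and index sections are passed as real JSON objects rather than quoted strings containing escaped JSON, and the Oracle thin URL uses @ before the host. The index name is lowercased, since ES rejects uppercase index names. The host, service, credentials, and SQL are the placeholders from the question.

```json
{
  "type": "jdbc",
  "jdbc": {
    "strategy": "oneshot",
    "url": "jdbc:oracle:thin:@server.database:1521/Hello.WORLD",
    "user": "user",
    "password": "password",
    "sql": "select id as \"_id\", count(*) from table group by stuff"
  },
  "index": {
    "index": "channel_data",
    "type": "MDM_data"
  }
}
```

Two other things worth checking: curl reads a request body from a file with `@`, not `#` (`curl -XPUT "localhost:9200/_river/JDBC_River/_meta" -d @config.json`), and the river's metadata lives under the `_river` index, so the existence check would be `curl -XGET "localhost:9200/_river/JDBC_River/_meta"`. The `IndexMissingException` comes from searching a `JDBC_River` index that was never created.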

Related

Sorting artifacts using AQL and cleaning up old artifacts

I'm trying to sort the list of artifacts from JFrog Artifactory but am getting "The requested URL returned error: 400 Bad Request". The JFrog documentation (https://www.jfrog.com/confluence/display/JFROG/Artifactory+Comparison+Matrix) says sorting won't work for the open source edition. After getting the list of artifacts, I need to delete the old artifacts from a subfolder in the Artifactory repo. I tried the CLI and AQL, but nothing worked.
Our repo URL looks like this:
http://domainname/artifactory/repo/folder/subfolder/test1.zip
Like test1.zip, we have many artifacts (let's say 50) in that subfolder. I'm looking for help on this; can anyone please advise? Thanks.
While sorting is not supported in OSS versions, if you would like to delete artifacts older than a certain time period, you can use Relative Time Operators, parse the output, and use a script to delete those artifacts.
You can also specify a specific date. There are several Comparison Operators that you can use.
You can use the below AQL for reference:
curl -uadmin:password -XPOST "http://localhost:8082/artifactory/api/search/aql" -d 'items.find({"repo": "repo", "path": "folder/subfolder", "created": {"$before": "2minutes"}})' -H "Content-Type: text/plain"
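To automate the delete step, one option is to turn the AQL search response into one artifact URL per line and feed those back to curl. This is only a sketch: the base URL and credentials are the ones from the example above, and it assumes the standard AQL response shape, a `results` array of objects with `repo`, `path`, and `name` fields.

```shell
#!/bin/sh
# Convert an AQL search response (read from stdin) into one artifact URL
# per line, using python3 for the JSON parsing.
BASE="http://localhost:8082/artifactory"

aql_to_urls() {
  python3 -c '
import json, sys
for item in json.load(sys.stdin).get("results", []):
    print("{}/{}/{}/{}".format(sys.argv[1], item["repo"], item["path"], item["name"]))
' "$BASE"
}

# Example with a canned response; in practice, pipe the output of the AQL
# search curl above into aql_to_urls instead.
echo '{"results":[{"repo":"repo","path":"folder/subfolder","name":"test1.zip"}]}' | aql_to_urls
```

Each printed URL can then be deleted with `curl -uadmin:password -XDELETE "$url"`, e.g. in a `while read` loop over the output.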

Graphdb restore from backup in curl

I'm writing a script to automatically setup a repository starting from a clean GraphDB running in a Docker container.
I have a config.ttl file containing repository configuration, the namespace and a dump in a file init.nq
I have successfully created the repository using the config.ttl and updated the namespace, but I cannot work out how to load the init.nq file.
This operation is extremely simple from the web interface: Import -> RDF -> Upload, but I'm not able to work out how to perform it using curl. I suppose that the correct API should be
post /repositories/{repositoryID}/statements
but the dump is too large (~44 MB) to pass as inline text.
This should work; the -T flag uploads init.nq as the request body instead of passing it inline, so the file size is not a problem:
curl -X POST -H "Content-Type:application/n-quads" -T init.nq 'http://localhost:7200/repositories/test/statements'

Schemaless configuration not writing to index

I am somewhat new to Solr and have been trying to follow the example of using the Schemaless configuration. I start up Solr with the following command:
bin/solr start -e schemaless
And Solr does start up. I am trying to post an XML document to the schemaless index using curl as follows:
curl "http://localhost:8983/solr/gettingstarted/update?commit=true&wt=xml" -H "Content-type:application/xml" -d "xml text goes here"
However, when I run the curl command to view the fields that should have been added to the index (curl http://localhost:8983/solr/gettingstarted/schema/fields), I only see the defaults that existed when Solr first started.
Is there anything I am missing when starting Solr?
Thanks for your help in advance.
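One thing to check is the shape of the XML body itself: Solr's XML update handler expects an `<add>` envelope around `<doc>` elements, and an arbitrary XML document posted as-is will be rejected rather than indexed, which would leave the schema untouched. A minimal valid body (the field names here are made up) would be:

```xml
<add>
  <doc>
    <field name="id">1</field>
    <field name="title">my first document</field>
  </doc>
</add>
```

Posting that with the same curl command (and commit=true) and then re-querying /schema/fields should show the newly guessed fields.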

"use dfs" does not work in later versions of Drill on the Drill web page

When using the web page served by Drill at localhost:8047/query (by default), running the following commands fails:
use dfs.mydfs;
and then:
show files;
Then I receive this error:
org.apache.drill.common.exceptions.UserRemoteException: VALIDATION ERROR: SHOW FILES is supported in workspace type schema only. Schema [] is not a workspace schema. [Error Id: 872e6708-0aaa-480e-af32-9aaf6f84de2b on 172.28.128.1:31010]
If I enter the same commands in the terminal, however, they work correctly.
I've also found that this affects 1.6 and above; the behaviour is not seen on 1.5 and below.
This command works in both the web and command line/terminal versions:
show files in dfs.workspace;
I have configured multiple types of dfs and have tried both OS X and Windows 10; the issue is the same on both.
I looked through the Drill JIRA to see whether this is registered as a bug, and I briefly looked through the release notes as well.
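As a workaround on 1.6+, the pattern that you observed working can be leaned on directly: fully qualify the workspace in the SHOW FILES statement instead of relying on a prior USE. A sketch, assuming mydfs is a workspace defined under the dfs storage plugin (root is Drill's default workspace):

```sql
SHOW FILES IN dfs.mydfs;
-- or, against the default workspace:
SHOW FILES IN dfs.root;
```

This sidesteps the web UI's loss of the USE context between statements, since each statement carries its own workspace qualification.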

DB2 generate DDL error

Earlier this morning I was trying to generate DDL for my DB2 database, but it kept giving the following message:
The DB2 Administration Server encountered a script error. Script
error code "1".
Explanation:
A script error was encountered while the DB2 Administration server was
executing the script. The script exited with error code "".
User response:
Verify that the script is correct.
If you continue to receive this error message after attempting the
suggested response, refer to the DB2 Administration Server's First
Failure Data Capture Log for additional information or contact IBM
Support.
Can anyone help me with this?
My user is ROOT and the schema is SQLJ.
The command is db2look -d OBDB -z SQLJ -u ROOT -e -l -x -c ;
The error code is SQL22220.
I once had this problem too. You may be trying to generate DDL for more than 30 tables at a time; reduce the number of tables and see if that works. It worked for me, though the error can have other causes as well. Please let me know.
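If the table count is indeed the trigger, splitting the export into batches can be scripted. The sketch below only prints the db2look commands (one per batch) rather than running them; the database, schema, and user names are taken from the question, the table names are hypothetical, and it relies on db2look's -t flag taking an explicit table list and -o writing to an output file.

```shell
#!/bin/sh
# Emit one db2look command per batch of tables, so each invocation stays
# under the suspected per-run table limit.
batch_db2look() {
  # $1 = batch size, remaining args = table names
  size=$1; shift
  i=0; n=1; batch=""
  for t in "$@"; do
    batch="$batch $t"
    i=$((i + 1))
    if [ "$i" -eq "$size" ]; then
      echo "db2look -d OBDB -z SQLJ -u ROOT -e -t$batch -o batch$n.sql"
      batch=""; i=0; n=$((n + 1))
    fi
  done
  # Flush any partial final batch.
  if [ -n "$batch" ]; then
    echo "db2look -d OBDB -z SQLJ -u ROOT -e -t$batch -o batch$n.sql"
  fi
}

# Example: three (hypothetical) tables in batches of two.
batch_db2look 2 T1 T2 T3
```

Piping the output through `sh` (or pasting the printed commands) would then run the batched exports, producing batch1.sql, batch2.sql, and so on.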