Schemaless configuration not writing to index - indexing

I am somewhat new to Solr and have been trying to follow the example of using the Schemaless configuration. I start up Solr with the following command:
bin/solr start -e schemaless
Solr does start up. I am trying to post an XML document to the schemaless index using the following curl command:
curl "http://localhost:8983/solr/gettingstarted/update?commit=true&wt=xml" -H "Content-type:application/xml" -d "xml text goes here"
However, when I run the curl command that should list the newly added fields, curl http://localhost:8983/solr/gettingstarted/schema/fields , I only see the default fields that existed when Solr first started.
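For reference, the request body has to be in Solr's XML update format (an add/doc envelope) for schemaless field guessing to kick in; a minimal sketch, with made-up field values:

```shell
# Write a document in Solr's XML update format (field values are made up):
cat > doc.xml <<'EOF'
<add>
  <doc>
    <field name="id">doc1</field>
    <field name="title">Getting Started with Schemaless</field>
  </doc>
</add>
EOF

# Post it, then ask the Schema API which fields now exist:
curl "http://localhost:8983/solr/gettingstarted/update?commit=true" \
     -H "Content-Type: application/xml" --data-binary @doc.xml
curl "http://localhost:8983/solr/gettingstarted/schema/fields"
```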
Is there anything I am missing when starting Solr?
Thanks for your help in advance.

Related

What does Zap Proxy "HTTP Parameter Override" scan do?

When I run a baseline scan on a target I get the following result:
docker run -t owasp/zap2docker-stable zap-baseline.py -d -t https://mytarget.com
Result:
WARN-NEW: HTTP Parameter Override [10026] x 3
What does this result mean? What is this scan about?
Interesting timing, this was just being discussed on the issue tracker the other day: https://github.com/zaproxy/zaproxy/issues/4454
The thread that started it all: http://lists.owasp.org/pipermail/owasp-leaders/2012-July/007521.html
Basically it has to do with forms that don't have actions, or that propagate GET params into form actions (mainly impacting JSP/Servlet).
Edit: Of course you could also use the -r report.html (or any of the reporting options) to get full details vs. just the summary.
-r report_html file to write the full ZAP HTML report
-w report_md file to write the full ZAP Wiki (Markdown) report
-x report_xml file to write the full ZAP XML report
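When running under Docker, the baseline script writes its report inside the container, so you also need to mount a local directory at /zap/wrk to keep the file afterwards. A sketch, reusing the target URL from the question:

```shell
# Create a local output directory and mount it at /zap/wrk (the working
# directory the baseline script writes reports into):
mkdir -p zap-out
docker run -t -v "$(pwd)/zap-out:/zap/wrk:rw" owasp/zap2docker-stable \
  zap-baseline.py -d -t https://mytarget.com -r report.html
# ./zap-out/report.html then contains the full findings, including the
# details behind the HTTP Parameter Override alerts.
```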

Graphdb restore from backup in curl

I'm writing a script to automatically set up a repository, starting from a clean GraphDB running in a Docker container.
I have a config.ttl file containing the repository configuration, the namespace, and a dump in a file init.nq.
I have successfully created the repository using the config.ttl and updated the namespace, but I cannot understand how to load the init.nq file.
This operation is extremely simple from the web interface (Import -> RDF -> Upload), but I'm not able to work out how to perform it using curl. I suppose that the correct API should be
post /repositories/{repositoryID}/statements
but the dump is too large to pass as plain text (~44 MB).
This should work:
curl -X POST -H "Content-Type:application/n-quads" -T init.nq 'http://localhost:7200/repositories/test/statements'
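The key point is that -T makes curl stream the upload straight from disk, so the 44 MB dump never has to be inlined on the command line. An end-to-end sketch with a throwaway one-quad file (repository name "test" as in the answer):

```shell
# A throwaway one-quad file just to illustrate; the real init.nq is
# uploaded the same way:
cat > sample.nq <<'EOF'
<http://example.org/s> <http://example.org/p> "o" <http://example.org/g> .
EOF

# -T streams the file from disk, so its size is not a problem:
curl -X POST -H "Content-Type: application/n-quads" \
     -T sample.nq "http://localhost:7200/repositories/test/statements"
```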

Return value msfconsole

I am trying to script a brute-force SSH attack with msfconsole -x <command> from a .sh script, and I need to check whether the command succeeded, so I need a return value.
Looking at the docs I wasn't able to find any information about this topic.
You can use -o to create a file with the result of the brute force and then you can read the file using Bash.
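A sketch of the Bash side of that suggestion. The log content below is simulated, and the "Success:" pattern is an assumption based on the typical output of the ssh_login module; check your actual -o output file for the exact wording.

```shell
# Simulated msfconsole output, as if written via the -o flag
# (hypothetical host and credentials):
cat > msf_out.log <<'EOF'
[+] 192.168.1.10:22 - Success: 'root:toor'
EOF

# Derive a return value for the surrounding .sh script:
if grep -q "Success:" msf_out.log; then
  echo "brute force succeeded"
else
  echo "no valid credentials found"
fi
```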

Apache solr indexed data not visible even after commit

I am not able to find the indexed data of the files when searching.
Below are the steps I am following:
OS: Windows 7
Solr version: solr 5.2.1
Steps:
cd ..\solr-5.2.1\bin
solr start -e techproducts
Check that Solr is up and running and the core is visible. Search for the indexed files: a total of 32 files exist and are shown in the search result.
cd ..\solr-5.2.1\example\exampledocs
curl "http://localhost:8983/solr/techproducts/update/extract?literal.id=33&commit=true" -F "myfile=@example/exampledocs/test.pdf"
Result: it reports successfully indexed and committed, but the search still returns only 32 files.
Tried another way of indexing: java -Durl=http://localhost:8983/solr/techproducts/update/extract -Dparams=literal.id=33 -jar post.jar test.pdf. It indexed and committed successfully.
Opened the Solr GUI and searched again: now 33 files are returned, but I cannot see any information from the test.pdf file; neither metadata nor content is visible.
Restarted Solr.
Now the data of the file is visible.
Surprisingly, an immediate commit worked for me just a day earlier with the same configuration. Once I deleted the whole index, I started facing this problem, and not only with this Solr instance but with other Solr instances on the same machine.
Below is the autocommit config from solrconfig.xml:
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
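One thing worth noting about this snippet: hard commits have openSearcher set to false, and autoSoftCommit is disabled (maxTime of -1), so auto-committed documents only become visible once something reopens the searcher, which matches the data appearing only after a restart. A hedged sketch of enabling soft commits (the 3000 ms value is purely illustrative):

```xml
<!-- Sketch: a positive maxTime makes Solr perform periodic soft commits,
     which open a new searcher and make recently indexed documents visible
     without a restart. -->
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:3000}</maxTime>
</autoSoftCommit>
```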

How to stop the running solr server?

I want to stop a Solr server that was started with the command below:
$ solr start -e dih
My intention with the above command was to launch the example DIH application bundled with the package.
Now I want to stop that server, but when I try, I get the error below:
$ solr stop -p 8984
ERROR: Solr home directory D:\Softwares\solr-5.0.0\ must contain
solr.xml
I am new to Solr.
I had set SOLR_HOME to solr\bin, but as per the documentation, it says:
The Solr Home directory typically contains the following...
solr.xml *
So I set SOLR_HOME to solr\server\solr, where I have solr.xml.
After changing my home directory to that path, solr stop works.
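Putting the fix together as a sketch (paths taken from the question; adjust for your install). Point SOLR_HOME at the directory that actually contains solr.xml, then stop by port:

```shell
# SOLR_HOME must be the directory holding solr.xml, not bin:
export SOLR_HOME="D:/Softwares/solr-5.0.0/server/solr"
echo "stopping Solr with SOLR_HOME=$SOLR_HOME"
solr stop -p 8984     # stop the instance on port 8984
# solr stop -all      # alternatively, stop every running Solr instance
```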