I'm trying to set up a SPARQL endpoint for an organisation as part of an open-source project using Apache Jena Fuseki. It will be hosted on a public server soon, and I've already uploaded the open data into it.
I want users to be able to query the dataset directly without any authentication, but I'd like to put any adding or changing of data behind some sort of auth (even basic auth would do for now; the main concern is simply that other people should not be able to corrupt the endpoint).
Right now I've identified three ways users could do this:
Through the admin UI in a browser (internally this issues a POST request to {fusekihostedURL}/dataset/update?=xxxx)
Through a POST request from the command line to the publicly hosted dataset
Through the SPARQL Graph Store protocol (a POST request to {fusekiURL}/dataset/data)
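For illustration, the second and third routes are ordinary SPARQL 1.1 protocol calls, roughly like this (the host name is a placeholder):

    # SPARQL Update sent directly to the update endpoint
    curl -X POST 'https://example.org/dataset/update' \
         --data-urlencode 'update=INSERT DATA { <urn:x> <urn:p> <urn:o> }'

    # Graph Store Protocol upload to the default graph
    curl -X POST -H 'Content-Type: text/turtle' \
         --data-binary @new-data.ttl \
         'https://example.org/dataset/data?default'

These are the kinds of requests I want rejected (or protected) while leaving /dataset/query open.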
I can use a shiro.ini with basic auth (username and password), but that blocks the Fuseki landing page itself with a username/password pop-up, which I don't want, because querying should stay open to the public.
Apart from sitting behind a reverse proxy (I'm running Fuseki as a WAR file on Tomcat, so blocking that port would mean blocking all the other applications), is there anything else that could be done?
Any help would be greatly appreciated
If you write a configuration file for your dataset, you can simply not provide endpoints for SPARQL Update and Graph Store Protocol write operations (which also disables dataset update).
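A minimal sketch of such a read-only service configuration (the service name, storage type and location are placeholders; adjust to your setup):

    @prefix fuseki: <http://jena.apache.org/fuseki#> .
    @prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .

    # Query and read-only graph store access only;
    # no update, upload, or read-write graph store endpoints are declared.
    <#service> rdf:type fuseki:Service ;
        fuseki:name                  "dataset" ;
        fuseki:serviceQuery          "query" ;
        fuseki:serviceReadGraphStore "data" ;
        fuseki:dataset               <#tdb_dataset> .

    <#tdb_dataset> rdf:type tdb:DatasetTDB ;
        tdb:location "/path/to/DB" .

With no fuseki:serviceUpdate, fuseki:serviceUpload or fuseki:serviceReadWriteGraphStore present, the three update routes you listed simply do not exist on the public service.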
You can also do this in shiro.ini; it takes editing the shiro.ini file to put in more sophisticated rules than the default. Not providing the services at all is the most secure option.
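If you do go the shiro.ini route instead, a rough sketch (the paths and the user entry are illustrative; check them against the default shiro.ini shipped with your Fuseki version) would protect the admin area and update endpoints with basic auth while leaving everything else anonymous:

    [users]
    # username = password
    admin = choose-a-strong-password

    [urls]
    # protect the admin/management area and the write endpoints
    /$/**           = authcBasic,user[admin]
    /dataset/update = authcBasic,user[admin]
    /dataset/data   = authcBasic,user[admin]
    # everything else (query, landing page) stays open
    /**             = anon

Shiro matches [urls] entries top-down, so the specific protected paths must come before the catch-all /** = anon line.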
I use Nextcloud as a normal user to store and share files.
I decided to use it as a backend for a web application I am developing so that I can store the files in Nextcloud while the frontend is done by me.
I spent some hours on the API docs
https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/index.html
and, with some disappointment, unless I have made a mistake, I realized that the only API that can be used from outside Nextcloud is the WebDAV API.
This is a minimalistic API that allows basic things such as downloading a file by passing its full path, as with this GET request (authenticated with basic auth, passing username and password in the headers):
GET https://nextcloud.example.com/remote.php/dav/files/username/FolderOne/SubFolderTwo/HelloWorld.txt
This will download the file located in /FolderOne/SubFolderTwo/HelloWorld.txt
With a PUT request to the same URL, it is possible to create or overwrite the file by passing the file content in the raw request body.
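For illustration, with curl this looks roughly like the following (host, credentials and path are placeholders, reusing the example above):

    # download the file
    curl -u username:password \
      -o HelloWorld.txt \
      https://nextcloud.example.com/remote.php/dav/files/username/FolderOne/SubFolderTwo/HelloWorld.txt

    # upload / overwrite the file with local content
    curl -u username:password \
      -T HelloWorld.txt \
      https://nextcloud.example.com/remote.php/dav/files/username/FolderOne/SubFolderTwo/HelloWorld.txt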
This is very effective but minimalistic.
I was expecting to have a full REST API to access more properties and perform complex operations.
Could you please tell me if I missed some important information?
There is the OCS API but it works only from inside Nextcloud.
Thanks.
A full REST API is available - https://docs.nextcloud.com/server/22/developer_manual/client_apis/OCS/ocs-api-overview.html
Create a Share - https://docs.nextcloud.com/server/latest/developer_manual/client_apis/OCS/ocs-share-api.html
The OwnCloud documentation also offers more examples
https://doc.owncloud.com/server/10.8/developer_manual/core/apis/ocs-share-api.html
You can register an app ID and use that to log in, or pass a username and password in the Authorization header.
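As an illustration of the Share API linked above (host, path and credentials are placeholders), creating a public link share from outside Nextcloud looks roughly like this:

    curl -u username:password \
      -H "OCS-APIRequest: true" \
      -X POST \
      "https://nextcloud.example.com/ocs/v2.php/apps/files_sharing/api/v1/shares" \
      -d "path=/FolderOne/SubFolderTwo/HelloWorld.txt" \
      -d "shareType=3"

Here shareType=3 requests a public link; the response comes back as OCS XML by default (append ?format=json if you prefer JSON).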
My current setup is like this. The entire project was built using the official docs here - https://identityserver4.readthedocs.io/en/latest/
API Server
Auth Server with local login, google login and github login
Console-based C# client
JS-based client
MVC-based client
(all of it, as described in the official docs)
Locally, all of them work beautifully. I am able to log in, access API endpoints, log out, redirect; the whole thing works smoothly.
I have deployed all 5 of them to five different Azure web apps. They all have the standard xyz.azurewebsites.net domains ready to use. Now, I have run into some problems.
The console C# client is able to talk to the deployed auth server, collect a token using a local account on the auth server, and make calls to the deployed API server. Based on this, I assume that both the API server and the auth server are working hand in hand, as they should.
Problem #1 - the JS client keeps saying
'The login is blocked because of CORS Missing Allow Origin '
Problem #2 - the MVC client loads the auth server, and then the auth server gives me this error.
Sorry, there was an error : unauthorized_client
Request Id: 80005c0f-0000-eb00-b63f-84710c7967bb
Note: I have set the CORS policy on the auth server for both these clients, under the client definition, as follows. I am not too concerned about keeping the auth server open, so I don't mind if any and every domain can call the auth server.
AllowedCorsOrigins = { "*.*" },
Also note: I have set the URLs in the code before deployment. All localhost:port lines have been replaced correctly with the corresponding published URLs.
So, what am I missing out here?
Update 1
I was able to solve the CORS issue. I have posted an answer on another question:
Not able to enable CORS for identity server 4 in asp.net core
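For anyone who lands here, one common way to open up CORS in IdentityServer4 is to register its CORS policy service as allow-all in Startup.ConfigureServices (a sketch, not necessarily the exact fix in my linked answer; only do this if you genuinely don't care which origins can call the auth server):

    // using IdentityServer4.Services;
    // using Microsoft.Extensions.DependencyInjection;
    // using Microsoft.Extensions.Logging;

    // in Startup.ConfigureServices on the auth server
    services.AddSingleton<ICorsPolicyService>(container =>
    {
        var logger = container.GetRequiredService<ILogger<DefaultCorsPolicyService>>();
        // AllowAll bypasses per-client AllowedCorsOrigins checks
        return new DefaultCorsPolicyService(logger) { AllowAll = true };
    });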
Update 2
So now both the JS client and the MVC client are giving identical errors.
Sorry, there was an error : unauthorized_client
Request Id: 80005c0f-0000-eb00-b63f-84710c7967bb
Update 3
I have opened an issue which has log details.
https://github.com/IdentityServer/IdentityServer4/issues/4691
I am not sure if this counts as an answer, but I am posting it on my own question as it might help others. Also, this is only a guess at this point.
I found out that the redirect URIs were permanently stored in the database I had set up with EF migrations. That means the local in-memory redirects were being overridden by the database-stored values anyway. I believe this is the issue.
I also realized that the console app works fine because it does not depend on redirect URLs, whereas the JS and MVC based clients don't work because they do depend on redirect URLs.
At this point, the best thing to do (if you used EF migrations to store your auth server configuration in a database) would be to start over and switch to in-memory configuration only. Alternatively, you can try to update the database to match your deployment requirements.
Ultimately, I believe that unless it is absolutely necessary, you should keep the auth server config (like redirects and CORS settings) in memory, since those values are small and rarely change.
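For example, an in-memory client entry for the deployed JS client would look roughly like this (client id, scopes and the azurewebsites.net host are placeholders for whatever you actually deployed):

    // using IdentityServer4.Models;
    // in Config.cs (or wherever the in-memory clients are defined) on the auth server
    new Client
    {
        ClientId = "js",
        AllowedGrantTypes = GrantTypes.Code,
        RequirePkce = true,
        RequireClientSecret = false,

        // these must match the deployed URLs exactly, not localhost
        RedirectUris           = { "https://myjsclient.azurewebsites.net/callback.html" },
        PostLogoutRedirectUris = { "https://myjsclient.azurewebsites.net/index.html" },
        AllowedCorsOrigins     = { "https://myjsclient.azurewebsites.net" },

        AllowedScopes = { "openid", "profile", "api1" }
    }

The point being: if the EF-migrated database also holds client rows, those win over this in-memory list, so the two must not disagree.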
I used DC/OS 1.8.8 and deployed ArangoDB 3.1.3 on it. After setting the jwt-key to make authentication work, I ran into the following problems.
1. When I open the web site http://master.mesos/service/arangodb3, I need to fill in a user and password; however, the password for "root" is not null. I wonder what the username and password are, and how I can get a valid one.
2. When I use the coordinator address instead of master.mesos, I can enter the web UI and change root's password. The strange thing is that the password I set does not work on the web site that uses master.mesos as the UI. Also, the CLUSTER panel doesn't display properly ("No data" in total).
Both ArangoDB and DC/OS use the standard HTTP Authorization header. Under the hood, ArangoDB authentication works fine; however, the reverse proxy that DC/OS uses does not forward any authentication headers down to ArangoDB. That is why you can't log in to ArangoDB through DC/OS, whereas it works from inside the Mesos cluster.
This is the reason why you have to manually enable authentication via the extra args and there is no simple checkbox for it :( It is simply not supported right now and not really usable.
I am trying to use JMeter to test our web application. We originally used LoadComplete, but because LoadComplete cannot run in non-GUI mode, we were not able to use the full capacity of our test server (8 CPUs and 8 GB of RAM). That is why I am moving towards JMeter (https://blazemeter.com/blog/5-ways-launch-jmeter-test-without-using-jmeter-gui).
The test includes logging in, choosing a specific app, doing a simple task through that app and then ending the recording. The failing HTTP requests print "Failed Access" in their Response Data in the View Results Tree.
I used the HTTP(S) Test Script Recorder to record each HTTP request. My JMeter project is failing on a few different HTTP requests, including an oauthtoken GET request that contains jsessionid="item" and several GET resourceLastAccessedTime requests. These requests seem to be involved in authenticating each user after logging in. I tried to follow BlazeMeter's guidance on how to use JMeter for login authentication (https://docs.blazemeter.com/customer/portal/articles/1743663-how-to-use-jmeter-for-login-authentication-), but the Response Data for each of these requests in the View Results Tree says "Access denied".
One of the steps is to "copy and paste" the parameters from the POST request sent at login into these requests. I can add these parameters to the requests under "Send Parameters with the Request", but our POST request only has two parameters (the login name and the password). Is there somewhere else to look for these parameters?
I have tried a lot of different combinations but am still unsuccessful (meaning: I moved the Regular Expression Extractor to a few different HTTP requests and varied which HTTP requests I added those parameters to, without success).
Do you know of a URL that could be helpful for this?
Don't blindly trust the Test Script Recorder! It doesn't apply any logic while recording your requests; it just records the requests passing through the proxy as they are. If your requests use parameters that can't be treated as constants, the best approach is to build the script manually.
Be patient and spend a few hours (only once!) learning how to construct test scenarios (even complex ones) manually in the JMeter GUI. It will save you a lot of debugging time.
It seems (just a guess) that your test plan doesn't contain an HTTP Cookie Manager. Based on what you wrote, it sounds like after logging in to the server (by sending a POST with login and password) it sets some cookies via the Set-Cookie HTTP header. These cookies should then be included in every subsequent request as proof that you successfully logged in before (the most common pattern for simple web applications). So if you get "Access denied", it means you didn't include the appropriate cookies in the test request. Use an HTTP Cookie Manager for that.
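In the GUI this is just Test Plan → Add → Config Element → HTTP Cookie Manager. For reference, the element in the saved .jmx file looks roughly like this (exact attributes may differ a little between JMeter versions):

    <CookieManager guiclass="CookiePanel" testclass="CookieManager"
                   testname="HTTP Cookie Manager" enabled="true">
      <collectionProp name="CookieManager.cookies"/>
      <!-- clear cookies between iterations so each virtual user logs in fresh -->
      <boolProp name="CookieManager.clearEachIteration">true</boolProp>
    </CookieManager>

Placed at the Test Plan (or Thread Group) level, it automatically stores the Set-Cookie headers from the login response and attaches them to every following request.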
Feel free to ping me in case you need any assistance.
The JMeter help manual is all you need to learn how each element works.
P.S.: JMeter can also generate distributed load from multiple slave servers, in both GUI and CLI modes. So if you need to stress your server, JMeter is a good choice.
And welcome to the JMeter users family! Good luck.
I have two domains pointing to the same server. When I log in to the application using domainOne.com, the session is maintained for that domain; if I then access the application from the other domain, domainTwo.com, the session is not there.
I want the same session values on both domains.
I have a PHP application (a Yii framework application), and the requirement is that a third-party application wants part of my application's content, which needs to be authenticated. I authenticate the content using SSO (single sign-on with JWT) and pointed their domain to my content (the part that needs to be shared). This way, I am able to log in using their (the third party's) domain, but when I access the same section using my original domain, the session is not there (no session when using my domain).
What I want is: when I log in using their domain and then access the content from my domain, it should show me as a logged-in user.
Conditions -
domainTwo.com/someContent - Logged in using this
domainOne.com/someContent - Session is not here
and vice versa
P.S. someContent is on my server only.
Please can anyone help?
Thanks in advance..!
Edit - the requirement is that the client doesn't want iframes, so please suggest methods which don't use iframes.