OPA bundle with AWS S3 configuration

I am new to the OPA server.
I am working on using OPA as an authorizer for multiple policies. The use case is that I upload multiple policies to an S3 bucket, and each policy then has its own path, such as /opa/example/allow or /opa/example1/approve.
Incoming requests should hit one of these paths, along with their input data, to check whether they are allowed by that specific policy.
I am a little confused by the configuration after going through the OPA docs. Can someone guide me here? Below is the config I am using; whenever I hit the OPA server it returns an empty response. I took it from a blog, so I am not sure whether it is correct.
config.yaml
bundle-s3:
  url: https://opa-testing1.s3.us-west-2.amazonaws.com/
  credentials:
    s3_signing:
      environment_credentials: {}
Inside this bucket I have a bundle with the following Rego file:
package opa.examples

import input.user
import input.method

default allow = false

allow { user = "alice" }

allow {
    user = "bob"
    method = "GET"
}
I am running the OPA server with: opa run --server -c config.yaml
The request I am making is: localhost:8181/v1/data/opa/example/allow
Can someone help me achieve this use case? Any blogs or examples would be very useful.
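For comparison, the S3 bundle configuration documented by OPA nests the service under a top-level services key and declares the bundle under bundles. A minimal sketch is shown below; the bucket URL is the one from the question, while the service name, bundle name and bundle.tar.gz path are placeholders to adapt:

# config.yaml - sketch of the documented OPA bundle layout (names and the archive path are assumptions)
services:
  bundle-s3:
    url: https://opa-testing1.s3.us-west-2.amazonaws.com
    credentials:
      s3_signing:
        environment_credentials: {}

bundles:
  authz:
    service: bundle-s3
    resource: bundle.tar.gz
    polling:
      min_delay_seconds: 10
      max_delay_seconds: 60

Note also that the Rego file declares package opa.examples, so once the bundle loads, the corresponding document path would be /v1/data/opa/examples/allow (plural), not /v1/data/opa/example/allow.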

Related

How do I need to configure MongooseIM to allow registering new users? Getting error: Can't register user at node: not_allowed

I am currently trying to add chat functionality with MongooseIM to an app that already comes with users/accounts.
The idea was to add a MongooseIM chat server and register all existing (and future) users with their user ID in MongooseIM.
Setup
I am working with the MongooseIM Docker container and have set up a Docker Compose file that loads custom configuration.
In the custom configuration, I have added the admin REST API and can do requests like listing all registered users or the available commands.
Problem
Whenever I try to register a new user through the API, I get the response:
Can't register user at node: not_allowed and a 500 status code.
Trying to register a user through mongooseimctl returns Error: account_unprivileged.
What I tried
I have been reading through the documentation and Google results for about 6 hours now.
Testing with the standard Docker container (and no extra configuration) works from the command line, but I could not test the API because I do not know how to access it in that setup (or whether it is enabled at all). Maybe someone has a hint on this for me?
One idea was that the action really is not allowed, but the /commands route of the admin interface contains the register_user action in its results, so I think it's enabled/allowed:
%{
  "action" => "create",
  "category" => "users",
  "desc" => "Register a user",
  "name" => "register_user"
},
Using the default Docker container, trying to register a user for a non-existent domain also results in "not_allowed", so this could be a configuration problem. I have a host name configured in my mongooseim.toml config file:
[general]
loglevel = "warning"
hosts = ["myhost"]
default_server_domain = "myhost"
language = "en"
I am quite positive I am missing some configuration/setup somewhere and would appreciate any hints/help.
Edit 1
I added dummy authorization (== no authorization) to the config file:
[auth]
methods = ["dummy"]
Now, I no longer get a "not_allowed" error.
Instead, the response always states the user already exists, while requesting the user list always returns an empty list.
I tried sending messages between made-up user JIDs; I get no errors, but also no messages are returned for any users.
"dummy" method is for testing only. It makes any user to exist (check ejabberd_auth_dummy.erl code, it is really just a dummy without any implementation).
You should use internal or rdbms auth_methods instead.
rdbms method would need an rdbms connection configured.
internal method is used to store users in Mnesia (but Mnesia backends are not recommended because RDBMS just works more reliably and efficiently).
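As a rough sketch of what the rdbms setup could look like in mongooseim.toml (the pool settings, driver and credentials below are assumptions to adapt; check the MongooseIM docs for your version):

[auth]
  methods = ["rdbms"]

# outgoing connection pool used by the rdbms auth backend
# (driver, host and credentials are placeholders)
[outgoing_pools.rdbms.default]
  workers = 5

  [outgoing_pools.rdbms.default.connection]
    driver = "pgsql"
    host = "localhost"
    database = "mongooseim"
    username = "mongooseim"
    password = "secret"

With internal instead, a single [auth] section with methods = ["internal"] is enough, since Mnesia needs no external connection.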

AWS CDK CloudFront to S3 redirection

I'm using AWS CDK to set up S3 and CloudFront static website hosting. All works well until I want to redirect "http[s]://www.mydomain.com" to "https://mydomain.com". I do not want to make the S3 buckets public, but rather provide bucket permission for the CloudFront "Origin Access Identity". The relevant snippet of my CDK code is as follows:
const wwwbucket = new s3.Bucket(this, "www." + domainName, {
  websiteRedirect: {
    hostName: domainName,
    protocol: s3.RedirectProtocol.HTTPS,
  },
  blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
})

const oaiWWW = new cloudfront.OriginAccessIdentity(this, 'CloudFront-OriginAccessIdentity-WWW', {
  comment: 'Allows CloudFront to access the bucket'
})

wwwbucket.grantRead(oaiWWW)

const cloudFrontRedirect = new cloudfront.CloudFrontWebDistribution(this, 'https://www.' + domainName + '.com redirect', {
  aliasConfiguration: {
    acmCertRef: certificateArn,
    names: [ "www." + domainName ],
    sslMethod: cloudfront.SSLMethod.SNI,
    securityPolicy: cloudfront.SecurityPolicyProtocol.TLS_V1_1_2016,
  },
  defaultRootObject: "",
  originConfigs: [
    // {
    //   customOriginSource: {
    //     domainName: wwwbucket.bucketWebsiteDomainName
    //   },
    //   behaviors: [ { isDefaultBehavior: true } ],
    // },
    {
      s3OriginSource: {
        s3BucketSource: wwwbucket,
        originAccessIdentity: oaiWWW
      },
      behaviors: [ { isDefaultBehavior: true } ],
    }
  ]
});
Unfortunately, rather than redirecting, browsing to www.mydomain.com shows an S3 XML bucket listing in the browser. I can fix the problem manually by using the AWS console to edit CloudFront's "Origin Domain Name" within "origin settings" from:
bucketname.s3.eu-west-2.amazonaws.com
to:
bucketname.s3-website.eu-west-2.amazonaws.com
Then all works as expected. I have tried changing my CDK script to use a customOriginSource rather than s3OriginSource (commented-out code above), which results in the correct address in CloudFront's "Origin Domain Name", but then the CloudFront distribution does not have an "Origin Access Identity" and so can't access the S3 bucket.
Does anyone know a way to achieve the redirect without having to make the redirect bucket public or edit the "Origin Domain Name" manually via the AWS console?
I thought I'd found an answer using a CDK escape hatch. After creating the CloudFront distribution for my redirect, I modified the CloudFormation JSON behind the CDK class as follows (in TypeScript):
type ChangeDomainName = {
  origins: {
    domainName: string
  }[]
}

const cfnCloudFrontRedirect = cloudFrontRedirect.node.defaultChild as cloudfront.CfnDistribution
var distributionConfig = cfnCloudFrontRedirect.distributionConfig as cloudfront.CfnDistribution.DistributionConfigProperty & ChangeDomainName
distributionConfig.origins[0].domainName = wwwbucket.bucketWebsiteDomainName
cfnCloudFrontRedirect.distributionConfig = distributionConfig
Unfortunately, although this appeared to generate the CloudFormation template I was aiming for (checked using cdk synthesize), deploying it (cdk deploy) produced the following error from CloudFormation:
UPDATE_FAILED | AWS::CloudFront::Distribution |
The parameter Origin DomainName does not refer to a valid S3 bucket.
It appears that even though it's possible to set a website endpoint of the form ${bucketname}.s3-website.${region}.amazonaws.com manually in the "Origin Domain Name" field within the CloudFront console, this isn't possible using CloudFormation. This leads me to conclude either:
This is a bug with CloudFormation.
It's a bug in the console, in that the console shouldn't allow this setup.
However, although manually modifying "Origin Domain Name" in the console currently works, I don't know if this is a "legal" configuration; it could change in the future, in which case my setup might stop working. The current solutions are:
Make the redirect bucket public, in which case the customOriginSource will work.
Rather than using a redirect bucket, use a Lambda to perform the redirect and deploy it within CloudFront using Lambda@Edge (a sketch of such a handler follows at the end of this answer).
I would prefer not to make my redirect bucket public, as it results in warnings from security checking tools. The option of deploying Lambda@Edge with the CDK outside of us-east-1 currently looks painful, so for the moment I'll continue manually editing "Origin Domain Name" in the console.
For reference, the AWS documentation appears to imply that the API prohibits this use even though the console permits it; see: Using Amazon S3 Buckets Configured as Website Endpoints for Your Origin. See also:
Key differences between a website endpoint and a REST API endpoint
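If you go the Lambda@Edge route mentioned above, a minimal viewer-request handler might look roughly like the sketch below (untested; it assumes a Node.js runtime and simply 301-redirects www.<domain> requests to the bare domain):

// Hypothetical Lambda@Edge viewer-request handler (sketch, not the questioner's code)
'use strict';
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const host = request.headers.host[0].value;
  if (host.startsWith('www.')) {
    return {
      status: '301',
      statusDescription: 'Moved Permanently',
      headers: {
        location: [{ key: 'Location', value: 'https://' + host.slice(4) + request.uri }],
      },
    };
  }
  return request; // non-www requests pass through to the origin
};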

S3 alternate endpoints for presigned operations

I am using the MinIO client to access S3. The S3 storage I am using has two endpoints: one (say EP1) accessible from a private network and the other (say EP2) from the internet. My application creates a presigned URL for downloading an S3 object using EP1, since it cannot access EP2. This URL is used by another application which is not on this private network and hence has access only to EP2. The URL is (obviously) not working when used by the application outside the network, since it has EP1 in it.
I have gone through the MinIO documentation but did not find anything that helps me specify alternate endpoints.
So my questions are:
Is there anything I have missed from MinIO that can help me?
Is there any S3 feature which allows generating a presigned URL for an object with EP2 in it?
Or is this not solvable without changing the current network layout?
You can use minio-js to manage this. Here is an example that you can use:
var Minio = require('minio')

var s3Client = new Minio.Client({
  endPoint: "EP2",
  port: 9000,
  useSSL: false,
  accessKey: "minio",
  secretKey: "minio123",
  region: "us-east-1"
})

var presignedUrl = s3Client.presignedPutObject('my-bucketname', 'my-objectname', 1000, function(e, presignedUrl) {
  if (e) return console.log(e)
  console.log(presignedUrl)
})
This will not contact the server at all. The only thing here is that you need to know the region the bucket belongs to. If you have not set any location in MinIO, then you can use us-east-1 by default.
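Since the question is about downloading an object, the corresponding call for a GET URL is presignedGetObject; a minimal sketch under the same assumptions (the endpoint, credentials, bucket and object names above are placeholders):

// presigned download URL, valid for 1000 seconds, signed against EP2
s3Client.presignedGetObject('my-bucketname', 'my-objectname', 1000, function(e, presignedUrl) {
  if (e) return console.log(e)
  console.log(presignedUrl)
})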

Serve git-lfs files from express' public folder

I'm using node.js (express) on Heroku, where the slug size is limited to 300MB.
In order to keep my slug small, I'd like to use git-lfs to track my express' public folder.
In that way all my assets (images, videos...) are uploaded to an LFS store (say AWS S3) and git-lfs leaves a pointer file (with probably the S3 URL in it?).
I'd like Express to redirect to the remote S3 file when serving files from the public folder.
My problem is I don't know how to retrieve the URL from the pointer file's content...
app.use('/public/:pointerfile', function (req, res, next) {
  var file = req.params.pointerfile;
  fs.readFile('public/' + file, function (er, data) {
    if (er) return next(er);
    var url = retrieveUrl(data); // <-- HELP ME HERE with the retrieveUrl function
    res.redirect(url);
  });
});
Also, won't it be too expensive to make Express read and parse potentially all the public/* files? Maybe I could cache the URL once parsed?
Actually, the pointer file doesn't contain any URL information (as can be seen in the link you provided, or here) - it just keeps the oid (Object ID) of the blob, which is just its sha256.
You can, however, achieve what you're looking for using the oid and the LFS API, which allows you to download specific oids using the batch request.
You can tell which endpoint is used to store your blobs from .git/config, which can accept non-default lfsurl tags such as:
[remote "origin"]
    url = https://...
    fetch = +refs/heads/*:refs/remotes/origin/*
    lfsurl = "https://..."
or a separate
[lfs]
    url = "https://..."
If there's no lfsurl tag then you're using GitHub's endpoint (which may in turn redirect to S3):
Git remote: https://git-server.com/user/repo.git
Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs
Git remote: git@git-server.com:user/repo.git
Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs
But you should work against it and not S3 directly, as GitHub's redirect response will probably contain some authentication information as well.
Check the batch response doc to see the response structure - you will basically need to parse the relevant parts and make your own call to retrieve the blobs (which is what git lfs would've done in your stead during checkout).
A typical response (taken from the doc I referenced) would look something like:
{
  "_links": {
    "download": {
      "href": "https://storage-server.com/OID",
      "header": {
        "Authorization": "Basic ..."
      }
    }
  }
}
So you would GET https://storage-server.com/OID with whatever headers were returned from the batch response - the last step will be to rename the blob that was returned (its name will typically be just the oid, as git lfs uses checksum-based storage) - the pointer file has the original resource's name, so just rename the blob to that.
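As a rough sketch of how the retrieveUrl step from the question could be filled in (untested; the repository URL is a placeholder, the question's synchronous retrieveUrl(data) becomes callback-based here because it makes an HTTP request, and the parsing assumes the current batch API shape objects[0].actions.download - older servers answer with the _links form quoted above):

var https = require('https');

// Parse "oid sha256:<hash>" and "size <bytes>" out of a pointer file's content
function parsePointer(data) {
  var text = data.toString('utf8');
  var oid = /oid sha256:([0-9a-f]+)/.exec(text)[1];
  var size = parseInt(/size (\d+)/.exec(text)[1], 10);
  return { oid: oid, size: size };
}

// Ask the LFS batch endpoint for a download URL for the blob
function retrieveUrl(data, cb) {
  var pointer = parsePointer(data);
  var body = JSON.stringify({
    operation: 'download',
    objects: [{ oid: pointer.oid, size: pointer.size }]
  });
  var req = https.request('https://git-server.com/user/repo.git/info/lfs/objects/batch', {
    method: 'POST',
    headers: {
      'Accept': 'application/vnd.git-lfs+json',
      'Content-Type': 'application/vnd.git-lfs+json'
      // private repos will also need an Authorization header here
    }
  }, function (res) {
    var raw = '';
    res.on('data', function (chunk) { raw += chunk; });
    res.on('end', function () {
      var action = JSON.parse(raw).objects[0].actions.download;
      cb(null, action.href, action.header); // redirect to href, or proxy using these headers
    });
  });
  req.on('error', cb);
  req.end(body);
}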
I've finally made a middleware for this: express-lfs with a demo here: https://expresslfs.herokuapp.com
There you can download a 400 MB file as a proof.
See usage here: https://github.com/goodenough/express-lfs#usage
PS: Thanks to @fundeldman for the good advice in his answer ;)

Run BitTorrent Sync in API mode

I want to use the API of BitTorrent Sync. For this I first have to run it in API mode.
I was checking the "Enabling the API" section in the following link:
http://www.bittorrent.com/sync/developers/api
But I am unable to run it.
Can anybody please share some experience with it? I am new to it.
Here is what I execute in the command prompt:
C:\Program Files (x86)\BitTorrent Sync>btsync.exe /config D:\config.api
Any help would be greatly appreciated.
It was my mistake. This is the right way to run it:
BTSync.exe /config D:\config.api
The problem was with the config file. Here is the way it should be:
{
  // path to the folder where Sync will store its internal data;
  // the folder must exist on disk
  "storage_path" : "c:/Users/Folder1/btsync",
  // whether to run Sync with its GUI
  "use_gui" : true,
  "webui" : {
    // IP address and port used to access the HTTP API
    "listen" : "0.0.0.0:9090",
    // login and password for HTTP basic authentication;
    // authentication is optional, but it's recommended to use some
    // secret values unique for each Sync installation
    "login" : "api",
    "password" : "secret",
    // replace xxx with the API key received from BitTorrent
    "api_key" : "xxx"
  }
}
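Once Sync is running with this config, a quick way to check that the API answers is a request like the one below (get_folders is one of the documented Sync API methods; the login, password and port match the config above):

curl -u api:secret "http://localhost:9090/api?method=get_folders"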