I am trying to integrate s3fs into Pydio so I can use my own storage servers (so not Amazon).
Accessing an s3fs mount as a local filesystem from Pydio malfunctions: a bunch of commands like ls don't work on it, so I have to use the aws-sdk to interface with it from Pydio.
The problem is that the Amazon SDK only lets me select Amazon's own servers through a region drop-down list. To complicate things, I also need to use a proxy to reach my own S3 storage.
Did anyone manage to implement this?
Using just the Amazon SDK, how would this look in PHP?
What I tried:
<?php
require_once("/usr/share/pydio/plugins/access.s3/aS3StreamWrapper/lib/wrapper/aS3StreamWrapper.class.php");
use Aws\S3\S3Client;
if (!in_array("s3", stream_get_wrappers())) {
    $wrapper = new aS3StreamWrapper();
    $wrapper->register(array(
        'protocol' => 's3',
        'http' => array(
            'proxy' => 'proxy://10.0.0.1:80',
            'request_fulluri' => true,
        ),
        'acl' => AmazonS3::ACL_OWNER_FULL_CONTROL,
        'key' => "<key>",
        'secretKey' => "<secret>",
        'region' => "s3.myprivatecloud.lan"
    ));
}
?>
Thanks
If this is still a pending question: FYI, in the latest versions (v6 beta 2) we've changed the access.s3 plugin to use the latest version of the aws-sdk, and we've also added some parameters to easily point this plugin at alternative S3-compatible storages.
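For reference, here is a minimal sketch of how the plain AWS SDK for PHP (v3) can be pointed at an S3-compatible endpoint behind a proxy, which is essentially what the question asks for. The endpoint, region, credentials, and proxy address below are placeholders, not values tested against Pydio:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Sketch only: all values below are placeholders.
$s3 = new S3Client(array(
    'version'                 => 'latest',
    'region'                  => 'us-east-1', // required by the SDK, usually ignored by private stores
    'endpoint'                => 'https://s3.myprivatecloud.lan',
    'use_path_style_endpoint' => true,
    'credentials'             => array('key' => '<key>', 'secret' => '<secret>'),
    'http'                    => array('proxy' => 'http://10.0.0.1:80'),
));

// Optionally expose it as an s3:// stream wrapper, as the original attempt does.
$s3->registerStreamWrapper();

// Quick sanity check that the endpoint is reachable through the proxy.
foreach ($s3->listBuckets()['Buckets'] as $bucket) {
    echo $bucket['Name'], PHP_EOL;
}
?>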
-c
Related
I'm trying to set up Datomic Cloud using the IntelliJ IDE. I'm following Datomic's Client API tutorial but am stuck initializing the client.
The spec for the API client is here, and the tutorial is here, under the step Using Datomic Cloud.
The tutorial says to initialize a client like so:
(require '[datomic.client.api :as d])
(def cfg {:server-type :ion
          :region "<your AWS Region>" ;; e.g. us-east-1
          :system "<system name>"
          :creds-profile "<your_aws_profile_if_not_using_the_default>"
          :endpoint "<your endpoint>"})
They say to include an AWS profile if not using the default. I am using the default as far as I know--I'm not part of an org in AWS.
This is the (partially redacted) code from my tutorial.core namespace, where I'm trying to init Datomic:
(ns tutorial.core
(:gen-class))
(require '[datomic.client.api :as d])
(def cfg {:server-type :cloud
          :region "us-east-2"
          :system "roam"
          :endpoint "https://API_ID.execute-api.us-east-2.amazonaws.com"})
(def client (d/client cfg))
(d/create-database client {:db-name "blocks"})
(d/connect client {:db-name "blocks"})
However, I'm getting an error from Clojure: Forbidden to read keyfile at s3://URL/roam/datomic/access/admin/.keys. Make sure that your endpoint is correct, and that your ambient AWS credentials allow you to GetObject on the keyfile.
Do I need some sort of credential? Could anything else be causing this error? I got the endpoint URL from the ClientApiGatewayEndpoint in my CloudFormation Datomic stack.
Please let me know if I should provide more info! Thanks.
I tried the solution mentioned here and it didn't work; I can't find an answered question about this anywhere online.
When you initialize the client from your computer, the datomic library is trying to reach S3 to read some configuration file using whatever AWS credentials you passed. You mention that you are using the default profile, so most likely you have a ~/.aws/credentials file with a [default] entry and an access and secret key.
The error:
Forbidden to read keyfile at s3://URL/roam/datomic/access/admin/.keys.
Make sure that your endpoint is correct, and that your ambient AWS
credentials allow you to GetObject on the keyfile.
means that the datomic library can't read the file from S3. There are many reasons why this could be the case. Try running the following with the AWS CLI:
aws s3 cp s3://URL/roam/datomic/access/admin/.keys .
I'm assuming that it will fail, which would mean your default profile doesn't have the necessary permissions to read this resource from S3, hence the error you get from datomic. To fix it you would need to add the necessary permission to your AWS IAM user or role (GetObject on the keyfile, as suggested by the error message).
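If it does fail, a policy statement along the following lines, attached to your IAM user, would grant the missing permission; the bucket name is a placeholder, take it from the s3:// URL in your error message:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<your-datomic-bucket>/roam/datomic/access/*"
    }
  ]
}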
One thing you could try is creating a profile. In your ~/.aws/credentials:
[datomic]
aws_access_key_id = your-access-key
aws_secret_access_key = your-secret-key
region = us-east-2
Then change your cfg map to:
(def cfg {:server-type :cloud
          :region "us-east-2"
          :system "roam"
          :creds-profile "datomic"
          :endpoint "https://API_ID.execute-api.us-east-2.amazonaws.com"})
When I create a datasource, a service restart is required to make it work, regardless of the method used to create it (standalone.xml, JBoss CLI, JBoss Administration Console). Attached is the procedure I have written for my team (exported from our Wiki space). The datasource gets created successfully, but when I test the connection, I get this:
From JBoss Administration Console
Unknown error
Unexpected HTTP response: 500
Request
{
"address" => [
("subsystem" => "datasources"),
("data-source" => "dsMyApp")
],
"operation" => "test-connection-in-pool"
}
Response
Internal Server Error
{
"outcome" => "failed",
"failure-description" => "JBAS010440: failed to invoke operation: JBAS010442: failed to match pool. Check JndiName: java:/dsMyApp",
"rolled-back" => true,
"response-headers" => {"process-state" => "reload-required"}
}
From JBoss CLI
JBAS010440: failed to invoke operation: JBAS010442: failed to match pool. Check JndiName: java:/dsMyApp
If I restart the JBoss server, the datasource works fine (server, port, username and password are all correct).
Any thoughts?
Thank you
The quick answer: YES, restarting performs a reload, which then activates the datasource.
I suggest doing a reload with jboss-cli (it's the quickest way).
I've created all my datasources with jboss-cli and I always need to perform this action to get them working. After the reload, the datasource connection can be tested.
/opt/wildfly/bin/jboss-cli.sh --connect --controller=192.168.119.116:9990 --commands="reload --host=master"
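For reference, a complete sequence in standalone mode might look like the sketch below; the driver, connection URL, and credentials are placeholders modelled on the dsMyApp datasource from the question (for a managed domain, keep the --host argument as in the command above):

# start the CLI and connect to the running server
/opt/wildfly/bin/jboss-cli.sh --connect --controller=192.168.119.116:9990

# inside the CLI: create the datasource (driver, URL and credentials are placeholders)
data-source add --name=dsMyApp --jndi-name=java:/dsMyApp --driver-name=postgresql --connection-url=jdbc:postgresql://dbhost:5432/mydb --user-name=appuser --password=secret

# reload so the new datasource is bound, then test it
reload
/subsystem=datasources/data-source=dsMyApp:test-connection-in-pool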
Hope it helps
I'm using logstash to read a CSV file and post the information to my ActiveMQ broker using the STOMP protocol.
Everything is working great; I only want to add persistence to those messages, but I don't know how to tell logstash to do so.
The ActiveMQ site says I need to tell my STOMP producer to add the "persistent:true" parameter, but I can't find any documentation about this on the logstash site.
Anyone knows anything about this?
Thanks in advance,
http://activemq.apache.org/stomp.html
Well, persistence cannot be set on logstash stomp output.
If this is very important to you, it should be a simple fix in the source.
You can find the file here:
And this line:
@client.send(event.sprintf(@destination), event.to_json)
should be something like this:
@client.send(event.sprintf(@destination), event.to_json, :persistent => true)
You would have to build it and install the plugin yourself. My Ruby skills are limited, so I have no idea how to do that. Maybe consider adding that as a config param and contributing it with a pull request?
Now you can use the headers attribute to send persistent messages:
stomp {
    host => "localhost"
    port => 61612
    destination => "my_queue"
    headers => {
        "persistent" => true
    }
}
Source:
https://github.com/logstash-plugins/logstash-output-stomp/issues/7
I have the following situation:
We have a webapp built with Zend Framework 1.10 that is available under www.domain.com/webapp
On the server filesystem, the webapp is actually deployed in /srv/www/webapp.
Now, for reasons I can't detail too much, the project manager has requested, now that the app is finished, that each client literally receives his own URL.
So we would have:
www.domain.com/webapp/client1
www.domain.com/webapp/client2
Normally, what comes after webapp/ would be the controllers, actions and so forth from Zend.
Hence the question: is there a quick way in Apache to create these virtual subdirectories (client1, client2, as in the example)?
I guess it should be possible with URL rewriting?
Thanks in advance
Rather than creating virtual directories, this can be solved by creating a specific route with Zend_Controller_Router_Route. Assuming the controller is User and the action that receives the name is view, then:
$route = new Zend_Controller_Router_Route(
    'webapp/:username',
    array(
        'controller' => 'user',
        'action'     => 'view',
        'username'   => 'defaultuser'
    )
);
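The route then still has to be registered with the front controller's router. Here is a minimal sketch, assuming a standard Zend_Application Bootstrap; the method name _initRoutes and the route name 'client-home' are just illustrative conventions:

<?php
// application/Bootstrap.php (sketch)
class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
{
    protected function _initRoutes()
    {
        $this->bootstrap('FrontController');
        $front  = $this->getResource('FrontController');
        $router = $front->getRouter();

        $route = new Zend_Controller_Router_Route(
            'webapp/:username',
            array(
                'controller' => 'user',
                'action'     => 'view',
                'username'   => 'defaultuser'
            )
        );

        // www.domain.com/webapp/client1 then dispatches to UserController::viewAction()
        // with 'username' => 'client1' available via $this->getRequest()->getParam('username').
        $router->addRoute('client-home', $route);
    }
}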
I want my Rails 3.1 app to scale up to 1 web dyno at 8am, then down to 0 web dynos at 5pm.
BUT, I do not want to sign up for a paid service, and I cannot count on my home computer being connected to the internet.
It seems like the Heroku Scheduler should make this trivial. Any quick solutions or links?
The answer is 'yes', you can do this from Scheduler, and it's trivial once you know how:
Add a heroku config var with your app name: heroku config:add APP_NAME=blah
Add gem 'heroku' to your Gemfile
In order to verify, manually scale up/down your app: heroku ps:scale web=2
Add a rake task to lib/tasks/scheduler.rake:
desc "Scale up dynos"
task :spin_up => :environment do
heroku = Heroku::Client.new('USERNAME', 'PASSWORD')
heroku.ps_scale(ENV['APP_NAME'], :type=>'web', :qty=>2)
end
# Add a similar task to Spin down
Add the Scheduler addon: heroku addons:add scheduler:standard
Use the Scheduler web interface to add "rake spin_up" at whatever time you like
Add a rake spin_down task (sketched below) and schedule it for whenever.
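For completeness, the spin-down counterpart is just the mirror image of the task above, scaling web dynos back to zero (same USERNAME/PASSWORD placeholders as in spin_up):

desc "Scale down dynos"
task :spin_down => :environment do
  heroku = Heroku::Client.new('USERNAME', 'PASSWORD')
  heroku.ps_scale(ENV['APP_NAME'], :type => 'web', :qty => 0)
end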
Notes:
Step 1 is needed because I couldn't find any other way to be certain of the app name (I use 'staging' and 'production' environments for my apps).
Step 3 is required because otherwise the ruby command errors out as it requires that you first agree (via Yes/No response) that you will be charged money as a result of this action.
In step 4, I couldn't find any docs about how to do this with an API key via the heroku gem, so it looks like user/pass is required.
Hope this helps someone else!
Just implemented this approach (good answer above, @dnszero); thought I would update the answer with Heroku's new API.
Add your app name as a heroku config variable
require 'heroku-api'
desc "Scale UP dynos"
task :spin_up => :environment do
  heroku = Heroku::API.new(:api_key => 'YOUR_ACCOUNT_API_KEY')
  heroku.post_ps_scale(ENV['APP_NAME'], 'web', 2)
end
This is with heroku (2.31.2), heroku-api (0.3.5)
You can scale your web process to zero by
heroku ps:scale web=0
or back to 1 via
heroku ps:scale web=1
You'd then have to have one task set to run at 8:00 that scales it up and another that runs at 17:00 that scales it down. Heroku may require you to verify your account (i.e. enter credit card details) to use the Heroku Scheduler; you'd also need the Heroku gem inside your app, along with your Heroku credentials, so it can turn your app on or off.
But like Neil says, you get 750 free dyno hours a month, which can't roll over into the next month, so why not just leave it running all the time?
See also this complete gist, which also deals with the right command to use from the Heroku scheduler: https://gist.github.com/maggix/8676595
So I decided to implement this in 2017 and saw that the Heroku gem used by the accepted answer has been deprecated in favor of the 'platform-api' gem. I just thought I'd post what worked for me, since I haven't seen any other posts with a more up-to-date answer.
Here is my rake file that scales my web dynos to a count of 2. I used the 'httparty' gem to make a PATCH request with the appropriate headers to the Platform API, as per their docs, in the "Formation" section.
require 'platform-api'
require 'httparty'
desc "Scale UP dynos"
task :scale_up => :environment do
  headers = {
    "Authorization" => "Bearer #{ENV['HEROKU_API_KEY']}",
    "Content-Type" => "application/json",
    "Accept" => "application/vnd.heroku+json; version=3"
  }
  params = {
    :quantity => 2,
    :size => "standard-1X"
  }
  response = HTTParty.patch("https://api.heroku.com/apps/#{ENV['APP_NAME']}/formation/web", body: params.to_json, headers: headers)
  puts response
end
As an update to @Ren's answer, Heroku's Platform API gem makes this really easy.
heroku = PlatformAPI.connect_oauth(<HEROKU PLATFORM API KEY>)
heroku.formation.batch_update(<HEROKU APP NAME>, {"updates" =>
[{"process" => <PROCESS NAME>,
"quantity" => <QUANTITY AS INTEGER>,
"size" => <DYNO SIZE>}]
})
If you're running on the cedar stack, you won't be able to scale to zero web dynos without changing the Procfile and deploying.
Also, why bother if you get one free dyno a month (750 dyno hours, a little over a month in fact)?