I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via SSH). I am following the example located here, but I am getting authentication failed errors when the client (Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example tries to create a LoginCredentials object
// Files.toString here is Guava's com.google.common.io.Files
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the SSH client
responses = compute.runScriptOnNodesMatching(
      inGroup(groupName),              // predicate used to select nodes
      exec(command),                   // what you actually intend to run
      overrideLoginCredentials(login)  // use my local user & ssh key
            .runAsRoot(false)          // don't attempt to run as root (sudo)
            .wrapInInitScript(false));
Some login information is injected into the instance with the following statements
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', so I altered its code to
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the key pair as follows, but this implies that I need to have the SSH key stored in my AWS console, which is currently not the case, and it also breaks the functionality of running a script (via SSH), since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder
      .asType("t2.micro")
      .withKeyName(keypair)
      .withSecurityGroup(securityGroup)
      .withUserData(script.getBytes());
Any suggestions?
Note: While I am currently testing this on AWS, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
Where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described it.
I think the issue lies in the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run script method is used to authenticate against the user that was created by the previous statement.
The issue here is that AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it will fall back to the catch block and return an auto-generated SSH key pair. That means AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, hence the authentication failure.
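One quick way to verify this is to stop relying on the defaults and pin the admin user and key material explicitly. A minimal sketch, assuming the builder methods of the jclouds scriptbuilder AdminAccess API (double-check them against your version) and placeholder Windows key paths:
// Sketch only: provision the node with the exact key pair you will later
// pass in LoginCredentials. The key file locations are assumptions.
String publicKey = Files.toString(new File("C:/Users/me/.ssh/id_rsa.pub"), UTF_8);
String privateKey = Files.toString(new File("C:/Users/me/.ssh/id_rsa"), UTF_8);
Statement bootInstructions = AdminAccess.builder()
      .adminUsername("myuser")
      .adminPublicKey(publicKey)
      .adminPrivateKey(privateKey)
      .build();
With matching keys on both sides, the LoginCredentials passed to runScriptOnNodesMatching should authenticate against the user created at boot.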
Since the AdminAccess entity is immutable, the best and cleanest approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix-specific bits to accommodate the SSH setup on your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it in the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
   @Override
   protected void configure() {
      bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
   }
};
// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
      .credentials("api-key", "api-secret")
      .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
      .buildView(ComputeServiceContext.class);
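For completeness, here is a rough sketch of what YourCustomWindowsImpl could look like. The method shapes mirror the Default class in jclouds 1.9 (double-check them against your version); the key paths are placeholder assumptions, and the password generator and crypt function below are simplistic stand-ins for the ones in the Default class:
// Hedged sketch of a Windows-friendly AdminAccessConfiguration.
// Supplier, Function, Suppliers, Functions, ImmutableMap and Files are Guava types.
@Singleton
public class YourCustomWindowsImpl implements AdminAccessConfiguration {
   public Supplier<String> defaultAdminUsername() {
      // Fixed user name instead of the Unix-flavoured System.getProperty("user.name")
      return Suppliers.ofInstance("myuser");
   }
   public Supplier<Map<String, String>> defaultAdminSshKeys() {
      return new Supplier<Map<String, String>>() {
         public Map<String, String> get() {
            try {
               // Read the key pair from an explicit Windows location
               return ImmutableMap.of(
                     "public", Files.toString(new File("C:/Users/me/.ssh/id_rsa.pub"), UTF_8),
                     "private", Files.toString(new File("C:/Users/me/.ssh/id_rsa"), UTF_8));
            } catch (IOException e) {
               // Fail fast rather than silently generating a random key pair
               throw new RuntimeException(e);
            }
         }
      };
   }
   public Supplier<String> passwordGenerator() {
      return Suppliers.ofInstance(UUID.randomUUID().toString()); // placeholder
   }
   public Function<String, String> cryptFunction() {
      return Functions.<String>identity(); // placeholder: copy the Default class's crypt function
   }
}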
I have a celery app which has to be pinged by another app. This other app uses json to serialize celery task parameters, but my app has a custom serialization protocol. When the other app tries to ping my app (app.control.ping), it throws the following error:
"Celery ping failed: Refusing to deserialize untrusted content of type application/x-stjson (application/x-stjson)"
My whole codebase relies on this custom encoding, so I was wondering if there is a way to configure json serialization only for this ping, while continuing to use the custom encoding for the other tasks.
These are the relevant celery settings:
accept_content = [CUSTOM_CELERY_SERIALIZATION, "json"]
result_accept_content = [CUSTOM_CELERY_SERIALIZATION, "json"]
result_serializer = CUSTOM_CELERY_SERIALIZATION
task_serializer = CUSTOM_CELERY_SERIALIZATION
event_serializer = CUSTOM_CELERY_SERIALIZATION
Changing any of the last 3 to [CUSTOM_CELERY_SERIALIZATION, "json"] causes the app to crash, so that's not an option.
Specs: celery==5.1.2
python: 3.8
OS: Linux docker container
Any help would be much appreciated.
Changing any of the last 3 to [CUSTOM_CELERY_SERIALIZATION, "json"] causes the app to crash, so that's not an option.
That's because result_serializer, task_serializer, and event_serializer don't accept a list but just a single str value, unlike e.g. accept_content.
A list is possible for e.g. accept_content because, if there are 2 items, we can check whether the type of an incoming request is one of those 2 items. It isn't possible for e.g. result_serializer, because if there were 2 items, which one should be chosen for the result of task-A? (thus the need for a single value)
This means that if you set result_serializer = 'json', it will have a global effect: all results of all tasks (the returned values of the tasks, which can be retrieved by calling e.g. response.get()) would be serialized/deserialized using the json serializer. So it might work for the ping, but it might not for tasks whose results can't be directly serialized/deserialized to/from JSON and really need the custom stjson serializer.
Currently, with Celery==5.1.2, it seems that a task-specific result_serializer setting isn't possible, so we can't have a single task encoded as 'json' rather than 'stjson' without setting it globally for all tasks. I assume the same applies to ping.
Open request to add result_serializer option for tasks
A short discussion in another question
Not the best solution, but a workaround: instead of fixing this on your app's side, you may opt to just add support for serializing/deserializing content of type 'application/x-stjson' in the other app.
other_app/celery.py
import ast

from celery import Celery
from kombu.serialization import register

# This is just a possible implementation. Replace with the actual
# serializer/deserializer for stjson in your app.
def stjson_encoder(obj):
    return str(obj)

def stjson_decoder(obj):
    obj = ast.literal_eval(obj)
    return obj

register(
    'stjson',
    stjson_encoder,
    stjson_decoder,
    content_type='application/x-stjson',
    content_encoding='utf-8',
)

app = Celery('other_app')
app.conf.update(
    accept_content=['json', 'stjson'],
)
Your app continues to accept and respond in the stjson format, but now the other app is configured to be able to parse that format as well.
I'm trying to add ssh-keys to my Google Cloud project at the project level with terraform:
resource "google_compute_project_metadata_item" "oslogin" {
project = "${google_project_services.myproject.project}"
key = "enable-oslogin"
value = "false"
}
resource "google_compute_project_metadata_item" "block-project-ssh-keys" {
project = "${google_project_services.myproject.project}"
key = "block-project-ssh-keys"
value = "false"
}
resource "google_compute_project_metadata_item" "ssh-keys" {
key = "ssh-keys"
value = "user:ssh-rsa myverylongpublickeythatireplacewithtexthereforobviousreasons user#computer.local"
depends_on = [
"google_project_services.myproject",
]
}
I tried all combinations of the two metadata flags oslogin and block-project-ssh-keys, which always get set without issues. But the SSH keys never appear in GCP's web GUI, let alone the authorized_keys file. I even tried adding the depends_on, to make sure the project exists before adding the keys, but that didn't help either.
Yet, Terraform says:
google_compute_project_metadata_item.ssh-keys: Creation complete after 8s (ID: ssh-keys)
Adding the exact same key manually in the web GUI works fine. At this point I believe I have tried everything and read all the first-page Google results for 'terraform gcp add ssh key' and similar queries... I'm at my wits' end.
The issue was that the ssh key was being added to a different project.
I started with Google's tutorial on GCP/Terraform. It first creates a generic project with the gcloud tool, then proceeds to create accounts using that generic project. This is necessary because you need a user to run Terraform against the API. A new project for these users is then created with Terraform each time you apply. The generic project created with gcloud is not touched after the initial creation.
If you omit the "project" parameter from the google_compute_project_metadata_item.ssh-keys resource, it used the generic project and added the ssh keys there - at least in my case.
Solution: explicitly add the project parameter to the metadata resource item (e.g. project = "${google_project_services.myproject.project}", just like in the other two resources) to make sure the keys are added to the right project.
I need to read the properties which are stated in one of my .yaml files (e.g. banner.yaml). These properties should be read in a Java class so that they can be accessed and the operations can be performed accordingly.
This is my label.yaml file:
/content/documents/administration/labels:
  jcr:primaryType: hippostd:folder
  jcr:mixinTypes: ['mix:referenceable']
  jcr:uuid: 7ec0e757-373b-465a-9886-d072bb813f58
  hippostd:foldertype: [new-resource-bundle, new-untranslated-folder]
  /global:
    jcr:primaryType: hippo:handle
    jcr:mixinTypes: ['hippo:named', 'mix:referenceable']
    jcr:uuid: 31e4796a-4025-48a5-9a6e-c31ba1fb387e
    hippo:name: Global
How should I access the hippo:name property, so that it returns Global as the value, in a Java class?
Any help will be appreciated.
Create a class which extends BaseHstComponent, which allows you to make use of HST Content Beans.
Create a Session object; for this you need valid credentials for your repository.
Session session = repository.login("admin", "admin".toCharArray());
Now, create an object of javax.jcr.Node; for this you require the relPath to your content.
In your case it will be /content/documents/administration/labels/global:
Node node = session.getRootNode().getNode("content/documents/administration/labels/global");
Now, by using the getProperty method you can access the property:
node.getProperty("hippo:name");
You can refer to the link https://www.onehippo.org/library/concepts/content-repository/jcr-interface.html
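Putting those steps together, a minimal sketch (the repository address and the admin credentials are placeholder assumptions; adjust them to your environment):
import javax.jcr.Node;
import javax.jcr.Session;
import org.hippoecm.repository.HippoRepository;
import org.hippoecm.repository.HippoRepositoryFactory;

// Minimal sketch: read hippo:name from the bootstrapped repository content.
// The rmi address and admin/admin credentials are placeholders.
HippoRepository repository = HippoRepositoryFactory.getHippoRepository("rmi://localhost:1099/hipporepository");
Session session = repository.login("admin", "admin".toCharArray());
try {
   Node node = session.getRootNode().getNode("content/documents/administration/labels/global");
   System.out.println(node.getProperty("hippo:name").getString()); // prints "Global"
} finally {
   session.logout();
}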
You can't read a yaml file from within the application. The yaml file is bootstrapped into the repository. The data you show represents a resource bundle, and you can access it programmatically using the utility class ResourceBundleUtils#getBundle.
Or use the corresponding resource bundle tag in a template. Then you can use the keys as normal.
I strongly suggest you follow our tutorials before continuing.
more details here:
https://www.onehippo.org/library/concepts/translations/hst-2-dynamic-resource-bundles-support.html
I am performing a query using the BigQuery Java API with the following code:
try (FileInputStream input = new FileInputStream(serviceAccountKeyFile)) {
   GoogleCredentials credentials = GoogleCredentials.fromStream(input);
   BigQuery bigQuery = BigQueryOptions.newBuilder()
         .setCredentials(credentials)
         .build()
         .getService();
   QueryRequest request = QueryRequest.of("SELECT * FROM foo.Bar");
   QueryResponse response = bigQuery.query(request);
   // Handle the response ...
}
Notice that I am using a specific service account whose key file is given by serviceAccountKeyFile.
I was expecting that the API would pick up the project_id from the key file. But it is actually picking up the project_id from the default key file referenced by the GOOGLE_APPLICATION_CREDENTIALS environment variable.
This seems like a bug to me. Is there a way to work around it by setting the default project explicitly?
Yeah, that doesn't sound right at all. It does sound like a bug. I always just export the GOOGLE_APPLICATION_CREDENTIALS environment variable in our applications.
Anyway, you can try explicitly setting the project id to see if it works:
BigQuery bigQuery = BigQueryOptions.newBuilder()
      .setCredentials(credentials)
      .setProjectId("project-id") // <-- try setting it here
      .build()
      .getService();
I don't believe the project is coming from GOOGLE_APPLICATION_CREDENTIALS. I suspect that the project being picked up is the gcloud default project set by gcloud init or gcloud config set project.
From my testing, BigQuery doesn't use the project where the service account was created. I think the key is used only for authorization, and you always have to set a target project. There are a number of ways:
.setProjectId(<target-project>) in the builder
Define GOOGLE_CLOUD_PROJECT
gcloud config set project <target-project>
The query job will then be created in target-project. Of course, your service key should have access to target-project, which may or may not be the same project where your key was created. That is, you can run a query on projects other than the one where your key was created, as long as your key has permission to do so.
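If you want the target project to follow the key file, as the question expected, one option is to read the project id from the credentials themselves. A sketch, assuming a JSON service account key and the ServiceAccountCredentials class from google-auth-library:
import java.io.FileInputStream;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;

// Sketch: use the project_id embedded in the key file as the target project,
// so jobs run in the project the service account belongs to.
try (FileInputStream input = new FileInputStream(serviceAccountKeyFile)) {
   ServiceAccountCredentials credentials = ServiceAccountCredentials.fromStream(input);
   BigQuery bigQuery = BigQueryOptions.newBuilder()
         .setCredentials(credentials)
         .setProjectId(credentials.getProjectId()) // project_id from the JSON key
         .build()
         .getService();
}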
I want to create and configure an SSLContext object and then make mysql.jdbc.Driver use it for establishing a secure connection. Is there a better approach than a custom jdbc.Driver?
You can create a custom com.mysql.jdbc.SocketFactory class that creates SSLSockets using an SSLSocketFactory coming from this SSLContext. Then you can pass that class name to the MySQL JDBC connector using the socketFactory property (see the table in the documentation).
This class needs to have a constructor with no parameters, and its Socket connect(String host, int portNumber, Properties props) method should get the JDBC properties via its props parameter (if you need them).
Note that you should not only check the validity of your certificate, but also check that the host name matches. If you're using Java 7, this can be done like this before returning the SSLSocket you've just created:
SSLParameters sslParams = new SSLParameters();
sslParams.setEndpointIdentificationAlgorithm("HTTPS");
sslSocket.setSSLParameters(sslParams);
(The host name matching rules for HTTPS should be sufficiently sensible for most protocols, including MySQL.)
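To make that concrete, here is a rough sketch of such a factory for Connector/J 5.x. The class name is made up, the SSLContext setup is a stub, and the interface shape should be double-checked against your driver version:
import java.io.IOException;
import java.net.Socket;
import java.net.SocketException;
import java.security.NoSuchAlgorithmException;
import java.util.Properties;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;

// Hedged sketch of a custom socket factory; not the driver's own implementation.
public class MySslSocketFactory implements com.mysql.jdbc.SocketFactory {
   private Socket socket;

   public Socket connect(String host, int portNumber, Properties props) throws SocketException, IOException {
      SSLContext sslContext;
      try {
         sslContext = SSLContext.getDefault(); // replace with your configured SSLContext
      } catch (NoSuchAlgorithmException e) {
         throw new IOException(e);
      }
      SSLSocket sslSocket = (SSLSocket) sslContext.getSocketFactory().createSocket(host, portNumber);
      SSLParameters sslParams = new SSLParameters();
      sslParams.setEndpointIdentificationAlgorithm("HTTPS"); // verify the host name (Java 7+)
      sslSocket.setSSLParameters(sslParams);
      socket = sslSocket;
      return socket;
   }

   public Socket beforeHandshake() throws SocketException, IOException {
      return socket;
   }

   public Socket afterHandshake() throws SocketException, IOException {
      return socket;
   }
}
You would then reference it via the socketFactory connection property, e.g. jdbc:mysql://host/db?socketFactory=MySslSocketFactory (use the fully qualified class name, and make sure the class is on the classpath).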