Default project id in BigQuery Java API - google-bigquery

I am performing a query using the BigQuery Java API with the following code:
try (FileInputStream input = new FileInputStream(serviceAccountKeyFile)) {
    GoogleCredentials credentials = GoogleCredentials.fromStream(input);
    BigQuery bigQuery = BigQueryOptions.newBuilder()
        .setCredentials(credentials)
        .build()
        .getService();
    QueryRequest request = QueryRequest.of("SELECT * FROM foo.Bar");
    QueryResponse response = bigQuery.query(request);
    // Handle the response ...
}
Notice that I am using a specific service account whose key file is given by serviceAccountKeyFile.
I expected the API to pick up the project_id from the key file. But it is actually picking up the project_id from the default key file referenced by the GOOGLE_APPLICATION_CREDENTIALS environment variable.
This seems like a bug to me. Is there a way to work around it by setting the default project explicitly?

Yeah, that doesn't sound right at all. It does sound like a bug. I always just export the GOOGLE_APPLICATION_CREDENTIALS environment variable in our applications.
Anyway, try explicitly setting the project id to see if it works:
BigQuery bigQuery = BigQueryOptions.newBuilder()
    .setCredentials(credentials)
    .setProjectId("project-id") // <-- try setting it here
    .build()
    .getService();

I don't believe the project is coming from GOOGLE_APPLICATION_CREDENTIALS. I suspect that the project being picked up is the gcloud default project set by gcloud init or gcloud config set project.
From my testing, BigQuery doesn't default to the project in which the service account was created. The key is used only for authorization; you always have to set a target project. There are a number of ways to do so:
.setProjectId(<target-project>) in the builder
Define GOOGLE_CLOUD_PROJECT
gcloud config set project <target-project>
The query job will then be created in target-project. Of course, your service account needs access to target-project, which may or may not be the project where the key was created. That is, you can run queries on projects other than the one where your key was created, as long as the key has permission to do so.
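If what you want is for the job to run in the key file's own project, here is a sketch (untested) of one way to do it: ServiceAccountCredentials parses the JSON key, so its project_id can be read back and passed to the builder explicitly.

import java.io.FileInputStream;

import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;

try (FileInputStream input = new FileInputStream(serviceAccountKeyFile)) {
    // ServiceAccountCredentials (unlike the GoogleCredentials base class)
    // exposes the project_id parsed from the JSON key file.
    ServiceAccountCredentials credentials = ServiceAccountCredentials.fromStream(input);
    BigQuery bigQuery = BigQueryOptions.newBuilder()
        .setCredentials(credentials)
        .setProjectId(credentials.getProjectId()) // target the key file's own project
        .build()
        .getService();
    // ... run the query as before ...
}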

Related

Terraform/GCP: ssh-keys not being added to metadata

I'm trying to add ssh-keys to my Google Cloud project at the project level with Terraform:
resource "google_compute_project_metadata_item" "oslogin" {
project = "${google_project_services.myproject.project}"
key = "enable-oslogin"
value = "false"
}
resource "google_compute_project_metadata_item" "block-project-ssh-keys" {
project = "${google_project_services.myproject.project}"
key = "block-project-ssh-keys"
value = "false"
}
resource "google_compute_project_metadata_item" "ssh-keys" {
key = "ssh-keys"
value = "user:ssh-rsa myverylongpublickeythatireplacewithtexthereforobviousreasons user#computer.local"
depends_on = [
"google_project_services.myproject",
]
}
I tried all combinations of the two metadata flags oslogin and block-project-ssh-keys, which always get set without issues. But the ssh keys never appear in GCP's web GUI, let alone the authorized_keys file. I even added the depends_on to make sure the project exists before the keys are added, but that didn't help either.
Yet, Terraform says:
google_compute_project_metadata_item.ssh-keys: Creation complete after 8s (ID: ssh-keys)
Adding the exact same key manually in the web GUI works fine. At this point I believe I have tried everything and read all the first-page Google results for 'terraform gcp add ssh key' and similar queries ... I'm at my wit's end.
The issue was that the ssh key was being added to a different project.
I started with Google's tutorial on GCP/Terraform. It first creates a generic project with the gcloud tool, then creates the service accounts Terraform needs under that generic project (this is necessary because Terraform needs a user to run against the API). Each time you apply, Terraform then creates a new project for those users. The generic project created with gcloud is not touched after the initial creation.
If you omit the "project" parameter from the google_compute_project_metadata_item.ssh-keys resource, Terraform uses the generic project and adds the ssh keys there - at least in my case.
Solution: explicitly add the project parameter to the metadata resource item to make sure the keys are added to the right project.
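For illustration, a corrected version of the ssh-keys resource with the project set explicitly, reusing the same project reference as the other two resources in the question:

resource "google_compute_project_metadata_item" "ssh-keys" {
  project = "${google_project_services.myproject.project}"
  key     = "ssh-keys"
  value   = "user:ssh-rsa myverylongpublickeythatireplacewithtexthereforobviousreasons user@computer.local"
}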

Nexus 3 Repository Manager Create (Or Run Pre-generated) Task Without Using User Interface

This question arose when I was trying to reboot my Nexus3 container on a weekly schedule and connect it to an S3 bucket I have. I have the container set up to connect to the S3 bucket just fine (it creates a new [A-Z,0-9]-metrics.properties file each time), but the previous artifacts are not found when looking through the UI.
I used the Repair - Reconcile component database from blob store task from the UI settings and it works great!
But... all the previous steps are done automatically through scripts and I would like the same for the final step of Reconciling the blob store.
Connecting to the S3 blob store is done following examples from nexus-book-examples, as below:
Map<String, String> config = new HashMap<>()
config.put("bucket", "nexus-artifact-storage")
blobStore.createS3BlobStore('nexus-artifact-storage', config)
AWS credentials are provided during the docker run step so the above is all that is needed for the blob store set up. It is called by a modified version of provision.sh, which is a script from the nexus-book-examples git page.
Is there a way to either:
Create a task with a groovy script? or,
Reference one of the task types and run the task that way with a POST?
Depending on the specific version of repository manager that you are using, there may be REST endpoints for listing and running scheduled tasks. This was introduced in 3.6.0 according to this ticket: https://issues.sonatype.org/browse/NEXUS-11935. For more information about the REST integration in 3.x, check out the following: https://help.sonatype.com/display/NXRM3/Tasks+API
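For example, a quick sketch with curl (assuming a recent 3.x where the Tasks API is available under v1, the default admin credentials, and the default port; the task id comes from the list response):

# list existing scheduled tasks
curl -u admin:admin123 'http://localhost:8081/service/rest/v1/tasks'
# run a task by its id
curl -u admin:admin123 -X POST 'http://localhost:8081/service/rest/v1/tasks/<task-id>/run'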
For creating a scheduled task, you will have to add some groovy code. Perhaps the following would be a good start:
import org.sonatype.nexus.scheduling.TaskConfiguration
import org.sonatype.nexus.scheduling.TaskInfo
import org.sonatype.nexus.scheduling.TaskScheduler
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
class TaskXO
{
    String typeId
    Boolean enabled
    String name
    String alertEmail
    Map<String, String> properties
}
TaskXO task = new JsonSlurper().parseText(args)
TaskScheduler scheduler = container.lookup(TaskScheduler.class.name)
TaskConfiguration config = scheduler.createTaskConfigurationInstance(task.typeId)
config.enabled = task.enabled
config.name = task.name
config.alertEmail = task.alertEmail
task.properties?.each { key, value -> config.setString(key, value) }
TaskInfo taskInfo = scheduler.scheduleTask(config, scheduler.scheduleFactory.manual())
JsonOutput.toJson(taskInfo)
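The script expects args to be a JSON document matching TaskXO. For illustration only, a payload for the reconcile task might look like the following; the typeId and property names here are assumptions, so confirm the exact values for your version by inspecting an existing task via the Tasks API above or in the UI:

{
    "typeId": "blobstore.rebuildComponentDB",
    "enabled": true,
    "name": "reconcile-s3-blobstore",
    "alertEmail": "admin@example.com",
    "properties": {
        "blobstoreName": "nexus-artifact-storage"
    }
}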

How do I generate the variation file for all assets

I'm new to Akeneo, and I discovered profile configuration for assets.
So I imported my YML in order to add asset transformations, and now, on the CLI, I can't find a command that generates the variation files for all assets. I saw the command to do it asset by asset and channel by channel, but I need to do it for all of them.
Do you know how I can manage that? I already tried pim:asset:generate-missing-variation-files, but that didn't change anything.
There is no built-in command to do that; however, you could develop a very simple command to achieve it.
You can use the pimee_product_asset.finder.asset service and call retrieveVariationsNotGenerated() to retrieve every variation that is not yet generated, then use the pimee_product_asset.variation_file_generator service to generate each one with generate().
Untested code, but it would look like this:
$finder = $this->get('pimee_product_asset.finder.asset');
$generator = $this->get('pimee_product_asset.variation_file_generator');
$variations = $finder->retrieveVariationsNotGenerated();
foreach ($variations as $variation) {
    $generator->generate($variation);
}
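If you want this as a proper console command, here is a minimal sketch (untested; the class and command names are placeholders, and it assumes the Symfony ContainerAwareCommand base class used by Akeneo commands of that generation):

<?php

use Symfony\Bundle\FrameworkBundle\Command\ContainerAwareCommand;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class GenerateAllVariationFilesCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        // Placeholder command name; pick whatever fits your project.
        $this->setName('pim:asset:generate-all-variation-files');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $finder = $this->getContainer()->get('pimee_product_asset.finder.asset');
        $generator = $this->getContainer()->get('pimee_product_asset.variation_file_generator');

        foreach ($finder->retrieveVariationsNotGenerated() as $variation) {
            $generator->generate($variation);
        }

        $output->writeln('Done.');
    }
}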

Setting user credentials on aws instance using jclouds

I am trying to create an aws instance using jclouds 1.9.0 and then run a script on it (via ssh). I am following the example located here, but I am getting authentication failed errors when the client (java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example tries to create a LoginCredentials object
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the ssh client
responses = compute.runScriptOnNodesMatching(
inGroup(groupName), // predicate used to select nodes
exec(command), // what you actually intend to run
overrideLoginCredentials(login) // use my local user & ssh key
.runAsRoot(false) // don't attempt to run as root (sudo)
.wrapInInitScript(false));
Some login information is injected into the instance with the following commands
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', so I altered its code to
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the keypair as follows, but this implies that I need the ssh key stored in my AWS console, which is not currently the case, and it also breaks the functionality of running a script (via ssh), since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder
    .asType("t2.micro")
    .withKeyName(keypair)
    .withSecurityGroup(securityGroup)
    .withUserData(script.getBytes());
Any suggestions?
Note: While I am currently testing this on AWS, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions:
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
Here MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described.
I think the issue lies in the fact that the jclouds code runs on a Windows machine while jclouds makes some Unix assumptions by default.
There are two different things here: first, the AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run script method is used to authenticate against the user that has been created with the previous statement.
The issue here is that AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it will fall back to the catch block and return an auto-generated SSH key pair. That means AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, hence the authentication failure.
Since the AdminAccess entity is immutable, the better and cleaner approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix specific bits to accommodate the SSH setup in your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
    @Override
    protected void configure() {
        bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
    }
};
// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
    .credentials("api-key", "api-secret")
    .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
    .buildView(ComputeServiceContext.class);

Application name is not set. Call Builder#setApplicationName. error

Application: Connecting to BigQuery using BigQuery APIs for Java
Environment: Eclipse, Windows 7
My application was running fine until last night. I've made no changes (except for restarting my computer) and my code is suddenly giving me this error:
Application name is not set. Call Builder#setApplicationName.
Thankfully I had a tar'd version of my workspace from last night. I ran a folder compare and found the local_db.bin file was different. I deleted the existing local_db.bin file and tried to run the program again. And it worked fine!
Any idea why this might have happened?
Hopefully this will help anyone else who stumbles upon this issue.
Try this to set your application name
Drive service = new Drive.Builder(httpTransport, jsonFactory, null)
    .setHttpRequestInitializer(credential)
    .setApplicationName("Your app name")
    .build();
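Since this question is about BigQuery rather than Drive, the equivalent fix on the BigQuery client builder would be something like this sketch (httpTransport, jsonFactory and credential are assumed to be set up as in the Drive example above):

Bigquery bigquery = new Bigquery.Builder(httpTransport, jsonFactory, credential)
    .setApplicationName("Your app name") // any non-empty name silences the error
    .build();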
If you are working with only Firebase Dynamic Links, without an Android or iOS app, try this:
builder.setApplicationName(firebaseUtil.getApplicationName());
FirebaseUtil here is a custom class; add your API keys and application name to it.
FirebaseDynamicLinks.Builder builder = new FirebaseDynamicLinks.Builder(
        GoogleNetHttpTransport.newTrustedTransport(), JacksonFactory.getDefaultInstance(), null);
// initialize with api key
FirebaseDynamicLinksRequestInitializer firebaseDynamicLinksRequestInitializer =
        new FirebaseDynamicLinksRequestInitializer(firebaseUtil.getFirebaseApiKey());
builder.setFirebaseDynamicLinksRequestInitializer(firebaseDynamicLinksRequestInitializer);
builder.setApplicationName(firebaseUtil.getApplicationName());
// build dynamic links
FirebaseDynamicLinks firebasedynamiclinks = builder.build();
// create Firebase Dynamic Links request
CreateShortDynamicLinkRequest createShortLinkRequest = new CreateShortDynamicLinkRequest();
createShortLinkRequest.setLongDynamicLink(firebaseUtil.getFirebaseUrlPrefix() + "?link=" + urlToShorten);
Suffix suffix = new Suffix();
suffix.setOption(firebaseUtil.getShortSuffixOption());
createShortLinkRequest.setSuffix(suffix);
// request short url
FirebaseDynamicLinks.ShortLinks.Create request = firebasedynamiclinks.shortLinks()
        .create(createShortLinkRequest);
CreateShortDynamicLinkResponse createShortDynamicLinkResponse = request.execute();