Nexus 3 Repository Manager Create (Or Run Pre-generated) Task Without Using User Interface - api

This question arose when I was trying to reboot my Nexus3 container on a weekly schedule and connect to an S3 bucket I have. I have my container set up to connect to the S3 bucket just fine (it creates a new [A-Z,0-9]-metrics.properties file each time), but the previous artifacts are not found when looking through the UI.
I used the "Repair - Reconcile component database from blob store" task from the UI settings and it works great!
But... all the previous steps are done automatically through scripts, and I would like the same for the final step of reconciling the blob store.
Connecting to the S3 blob store is done with reference to the examples from nexus-book-examples, as below:
Map<String, String> config = new HashMap<>()
config.put("bucket", "nexus-artifact-storage")
blobStore.createS3BlobStore('nexus-artifact-storage', config)
AWS credentials are provided during the docker run step, so the above is all that is needed for the blob store setup. It is called by a modified version of provision.sh, a script from the nexus-book-examples git page.
Is there a way to either:
Create a task with a groovy script? or,
Reference one of the task types and run the task that way with a POST?

Depending on the specific version of repository manager that you are using, there may be REST endpoints for listing and running scheduled tasks. This was introduced in 3.6.0 according to this ticket: https://issues.sonatype.org/browse/NEXUS-11935. For more information about the REST integration in 3.x, check out the following: https://help.sonatype.com/display/NXRM3/Tasks+API
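If the Tasks API is available on your version, the POST option boils down to two calls: list the tasks to find the id of the reconcile task, then trigger a run. A minimal Java sketch follows; the host, credentials, and the /service/rest/v1/tasks path are assumptions, and older 3.x releases may expose the endpoints under a different base path.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class NexusTaskRunner {
    public static void main(String[] args) throws Exception {
        // Assumed values: adjust the host, credentials and (on older 3.x versions) the base path.
        String base = "http://localhost:8081/service/rest/v1/tasks";
        String auth = "Basic " + Base64.getEncoder().encodeToString("admin:admin123".getBytes());
        HttpClient client = HttpClient.newHttpClient();

        // List scheduled tasks; the JSON response contains each task's id, name and type.
        HttpRequest list = HttpRequest.newBuilder(URI.create(base))
                .header("Authorization", auth)
                .GET()
                .build();
        System.out.println(client.send(list, HttpResponse.BodyHandlers.ofString()).body());

        // Run a task by id, e.g. the reconcile task you created in the UI.
        String taskId = "<id-from-the-list-response>"; // placeholder
        HttpRequest run = HttpRequest.newBuilder(URI.create(base + "/" + taskId + "/run"))
                .header("Authorization", auth)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        System.out.println(client.send(run, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}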
To create a scheduled task, you will have to add some Groovy code. Perhaps the following would be a good start:
import org.sonatype.nexus.scheduling.TaskConfiguration
import org.sonatype.nexus.scheduling.TaskInfo
import org.sonatype.nexus.scheduling.TaskScheduler
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// Shape of the JSON payload passed to the script as `args`
class TaskXO {
    String typeId
    Boolean enabled
    String name
    String alertEmail
    Map<String, String> properties
}

TaskXO task = new JsonSlurper().parseText(args)

// Look up the scheduler and build a task configuration of the requested type
TaskScheduler scheduler = container.lookup(TaskScheduler.class.name)
TaskConfiguration config = scheduler.createTaskConfigurationInstance(task.typeId)
config.enabled = task.enabled
config.name = task.name
config.alertEmail = task.alertEmail
task.properties?.each { key, value -> config.setString(key, value) }

// Schedule the task with a manual schedule (run on demand)
TaskInfo taskInfo = scheduler.scheduleTask(config, scheduler.scheduleFactory.manual())
JsonOutput.toJson(taskInfo)
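Since the rest of your provisioning already goes through scripts, the script above can be executed the same way provision.sh runs its scripts: by POSTing to the Script API once the script has been uploaded. A rough Java sketch follows; the host, credentials, script name, task typeId and property names are all placeholders you would need to fill in for your version, and older 3.x releases serve the Script API under /service/siesta/rest/v1/script.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class RunCreateTaskScript {
    public static void main(String[] args) throws Exception {
        // Assumes the Groovy script above was uploaded under the name "createTask".
        String url = "http://localhost:8081/service/rest/v1/script/createTask/run";
        String auth = "Basic " + Base64.getEncoder().encodeToString("admin:admin123".getBytes());

        // The request body becomes the script's `args`; the typeId and the property names
        // are placeholders -- look up the real values for the task type you want to schedule.
        String taskJson = "{\"typeId\":\"<task-typeId>\",\"enabled\":true,"
                + "\"name\":\"reconcile-s3-blobstore\",\"alertEmail\":\"\","
                + "\"properties\":{\"<property-name>\":\"<property-value>\"}}";

        HttpRequest run = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", auth)
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString(taskJson))
                .build();

        HttpClient client = HttpClient.newHttpClient();
        System.out.println(client.send(run, HttpResponse.BodyHandlers.ofString()).body());
    }
}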

Related

How to work with exported Stack Driver logs from Google Cloud Projects into BigQuery

I have created an "export" from my Stackdriver Logging page in my Google Cloud project. I configured the export to go to a BigQuery dataset.
When I go to BigQuery, I see the dataset.
There are no tables in my dataset, since Stackdriver export created the BigQuery dataset for me.
How do I see the data that was exported? Since there are no tables I cannot perform a "select * from X". I could create a table but I don't know what columns to add nor do I know how to tell Stackdriver logging to write to that table.
I must be missing a step.
Google has a short one-minute video on exporting to BigQuery, but it stops exactly at the point where I am in the process.
When a new Stackdriver export is defined, it starts exporting newly written log records to the target sink (BigQuery in this case). As per the documentation found here:
https://cloud.google.com/logging/docs/export/
it states:
Since exporting happens for new log entries only, you cannot export
log entries that Logging received before your sink was created.
If one wants to export existing logs to a file, one can use gcloud (or the API) as described here:
https://cloud.google.com/logging/docs/reference/tools/gcloud-logging#reading_log_entries
The output of this "dump" of existing log records can then be used in whatever manner you see fit. For example, it could be imported into a BigQuery table.
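If you would rather stay in code than shell out to gcloud, the same read can be done with the Cloud Logging Java client. A small sketch, assuming application default credentials and a placeholder project/log name in the filter:
import com.google.api.gax.paging.Page;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.Logging.EntryListOption;
import com.google.cloud.logging.LoggingOptions;

public class DumpExistingLogs {
    public static void main(String[] args) throws Exception {
        // Uses application default credentials; the filter is just an example.
        try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
            Page<LogEntry> entries = logging.listLogEntries(
                    EntryListOption.filter("logName=\"projects/my-project/logs/my-log\""));
            for (LogEntry entry : entries.iterateAll()) {
                System.out.println(entry); // could be transformed and loaded into a BigQuery table
            }
        }
    }
}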
To export logs from Stackdriver into BigQuery, you have to create a logging sink, either in code or through the GCP Logging UI.
When creating the sink, add a filter.
https://cloud.google.com/logging/docs/export/configure_export_v2
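For the code route, here is a minimal sketch of creating a BigQuery sink with the Java client. The sink name, project, dataset and filter are placeholders, and the target dataset needs to grant write access to the sink's writer identity.
import com.google.cloud.logging.Logging;
import com.google.cloud.logging.LoggingOptions;
import com.google.cloud.logging.Sink;
import com.google.cloud.logging.SinkInfo;
import com.google.cloud.logging.SinkInfo.Destination.DatasetDestination;

public class CreateBigQuerySink {
    public static void main(String[] args) throws Exception {
        try (Logging logging = LoggingOptions.getDefaultInstance().getService()) {
            // Route matching entries to a BigQuery dataset; all names are placeholders.
            SinkInfo sinkInfo = SinkInfo.newBuilder("my-bq-sink",
                            DatasetDestination.of("my-project", "my_dataset"))
                    .setFilter("severity>=INFO")
                    .build();
            Sink sink = logging.create(sinkInfo);
            System.out.println("Created sink: " + sink.getName());
        }
    }
}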
Then write logs to Stackdriver using code:
import java.util.Collections;
import java.util.List;
import java.util.Map;

import com.google.cloud.MonitoredResource;
import com.google.cloud.logging.LogEntry;
import com.google.cloud.logging.Payload;
import com.google.cloud.logging.Severity;

// `logging` is an initialized com.google.cloud.logging.Logging client, and limitMap()
// is a helper (not shown) that splits large payloads into several smaller maps.
public static void writeLog(Severity severity, String logName, Map<String, String> jsonMap) {
    List<Map<String, String>> maps = limitMap(jsonMap);
    for (Map<String, String> map : maps) {
        LogEntry logEntry = LogEntry.newBuilder(Payload.JsonPayload.of(map))
                .setSeverity(severity)
                .setLogName(logName)
                .setResource(monitoredResource)
                .build();
        logging.write(Collections.singleton(logEntry));
    }
}

private static MonitoredResource monitoredResource =
        MonitoredResource.newBuilder("global")
                .addLabel("project_id", logging.getOptions().getProjectId())
                .build();
https://cloud.google.com/bigquery/docs/writing-results

ArcGis Offline map layer changes synchronization

In my WPF application I'm trying to use off-line map functionality. Right now my feature service is configured for data sync, and I'm able to create a data replica on the server and download a local copy of the geodatabase.
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
Envelope extent = new Envelope(xmin, ymin, xmax, ymax, new SpatialReference(wkidStart));
GenerateGeodatabaseParameters generateParams = await _gdbSyncTask.CreateDefaultGenerateGeodatabaseParametersAsync(extent);
_generateGdbJob = _gdbSyncTask.GenerateGeodatabase(generateParams, _gdbPath);
_generateGdbJob.JobChanged += GenerateGdbJobChanged;
_generateGdbJob.ProgressChanged += ((object sender, EventArgs e) =>
{
    UpdateProgressBar();
});
_generateGdbJob.Start();
After the initial synchronization, I'm able to successfully work with the map in off-line mode. This includes operations like adding new geometries or editing existing polygons inside the local DB.
However, when I try to synchronize the changes back to the server, I get no results.
To perform data synchronization with the local database, I'm using the following code:
SyncGeodatabaseParameters parameters = new SyncGeodatabaseParameters()
{
    GeodatabaseSyncDirection = SyncDirection.Bidirectional,
    RollbackOnFailure = false
};
Geodatabase gdb = await Geodatabase.OpenAsync(this.GetGdbPath());
foreach (GeodatabaseFeatureTable table in gdb.GeodatabaseFeatureTables)
{
    long id = table.ServiceLayerId;
    SyncLayerOption option = new SyncLayerOption(id);
    option.SyncDirection = SyncDirection.Bidirectional;
    parameters.LayerOptions.Add(option);
}
_gdbSyncTask = await GeodatabaseSyncTask.CreateAsync(_featureServiceUri);
SyncGeodatabaseJob job = _gdbSyncTask.SyncGeodatabase(parameters, gdb);
job.JobChanged += SyncJob_JobChanged;
job.ProgressChanged += SyncJob_ProgressChanged;
job.Start();
Everything goes well. The synchronization ends with status "Succeeded", and the messages logged by the SyncGeodatabaseJob show no errors.
However, when I open the edited feature layer from the server in the map web client, I cannot find any of my local changes. In the server database I can also see that no new records were created during the synchronization.
The interesting thing is that when I open the "Replica" data in the web client, I can see the following information:
Replica Server Gen: 2
Creation Date: 2018/02/07 10:49:54 UTC
Last Sync Date: 2018/02/07 10:49:54 UTC
The "Last Sync Date" is equal to the replica "Creation Date". The replica log in ArcMap, however, shows different information.
Can anyone tell me how I should interpret the situation described above? Am I missing some steps in my code? Or maybe some configuration is missing on the server? It looks like the data modifications are successfully pushed back to the replica on the server, but after that the replica is not synchronized with the server database (should that happen automatically?).
I'm new to ArcGIS development, so any help will be appreciated.
Thanks for all the answers. It turned out that versioning is enabled on the server database, and the offline, versioned edits were not being reconciled to the server.
After running the reconcile/post script (http://desktop.arcgis.com/en/arcmap/10.3/manage-data/geodatabases/automate-reconcile-post-after-sync.htm), the off-line changes became visible to other system users.
The code looks OK at a glance, so I would assume that something is off in the setup.
What do you get back from the sync operation after it has completed? Note that you can just use await syncJob.GetResultsAsync() to start the job and await the results.
How is the feature service set up on the server? Please refer to https://enterprise.arcgis.com/en/server/latest/publish-services/linux/prepare-data-for-offline-use.htm for the different ways to set these things up.

Default project id in BigQuery Java API

I am performing a query using the BigQuery Java API with the following code:
try (FileInputStream input = new FileInputStream(serviceAccountKeyFile)) {
    GoogleCredentials credentials = GoogleCredentials.fromStream(input);
    BigQuery bigQuery = BigQueryOptions.newBuilder()
            .setCredentials(credentials)
            .build()
            .getService();
    QueryRequest request = QueryRequest.of("SELECT * FROM foo.Bar");
    QueryResponse response = bigQuery.query(request);
    // Handle the response ...
}
Notice that I am using a specific service account whose key file is given by serviceAccountKeyFile.
I was expecting that the API would pick up the project_id from the key file. But it is actually picking up the project_id from the default key file referenced by the GOOGLE_APPLICATION_CREDENTIALS environment variable.
This seems like a bug to me. Is there a way to work around it by setting the default project explicitly?
Yeah, that doesn't sound right at all; it does sound like a bug. I always just export the GOOGLE_APPLICATION_CREDENTIALS environment variable in our applications.
Anyway, try explicitly setting the project id to see if it works:
BigQuery bigQuery = BigQueryOptions.newBuilder()
        .setCredentials(credentials)
        .setProjectId("project-id") // <-- try setting it here
        .build()
        .getService();
I don't believe the project is coming from GOOGLE_APPLICATION_CREDENTIALS. I suspect that the project being picked up is the gcloud default project set by gcloud init or gcloud config set project.
From my testing, BigQuery doesn't default to the project in which the service account was created. I think the key is used only for authorization, and you always have to set a target project. There are a number of ways to do that:
.setProjectId(<target-project>) in the builder
Define GOOGLE_CLOUD_PROJECT
gcloud config set project <target-project>
The query job will then be created in target-project. Of course, your service key should have access to target-project, which may or may not be the same project where your key was created. That is, you can run a query on projects other than the project where your key was created, as long as your key has permission to do so.
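For example, here is a minimal sketch that combines the service account key with an explicit target project, using the newer QueryJobConfiguration API; the key path, project id, and table name are placeholders.
import java.io.FileInputStream;

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class TargetProjectQuery {
    public static void main(String[] args) throws Exception {
        try (FileInputStream input = new FileInputStream("/path/to/key.json")) {
            BigQuery bigQuery = BigQueryOptions.newBuilder()
                    .setCredentials(GoogleCredentials.fromStream(input))
                    .setProjectId("target-project") // query jobs are created in this project
                    .build()
                    .getService();

            TableResult result = bigQuery.query(
                    QueryJobConfiguration.newBuilder("SELECT * FROM foo.Bar").build());
            result.iterateAll().forEach(row -> System.out.println(row));
        }
    }
}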

Setting user credentials on aws instance using jclouds

I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via SSH). I am following the example located here, but I am getting authentication failed errors when the client (a Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example creates a LoginCredentials object:
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the SSH client:
responses = compute.runScriptOnNodesMatching(
    inGroup(groupName),              // predicate used to select nodes
    exec(command),                   // what you actually intend to run
    overrideLoginCredentials(login)  // use my local user & ssh key
        .runAsRoot(false)            // don't attempt to run as root (sudo)
        .wrapInInitScript(false));
Some login information is injected into the instance with the following commands:
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials "fails", and thus I altered its code to:
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the keypair as follows, but this implies that I need to have the SSH key stored in my AWS console, which is not currently the case, and it also breaks the functionality of running a script (via SSH), since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder.asType("t2.micro").withKeyName(keypair).withSecurityGroup(securityGroup).withUserData(script.getBytes());
Any suggestions??
Note: While I am currently testing this on aws, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions:
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
Where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described it.
I think the issue lies in the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, the AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run script method is used to authenticate against the user that has been created with the previous statement.
The issue here is that AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it falls back to the catch block and returns an auto-generated SSH key pair. That means AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, hence the authentication failure.
Since the AdminAccess entity is immutable, the better and cleaner approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix-specific bits to accommodate the SSH setup on your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
    @Override protected void configure() {
        bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
    }
};

// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
    .credentials("api-key", "api-secret")
    .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
    .buildView(ComputeServiceContext.class);

repository created via RepositoryManager not behaving the same as repo created via workbench

I create a Sesame native Java store using the following code:
Create a native Java store:
// create a configuration for the SAIL stack
boolean persist = true;
String indexes = "spoc,posc,cspo";
SailImplConfig backendConfig = new NativeStoreConfig(indexes);
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
// create a configuration for the repository implementation
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
repConfig.setTitle(repositoryId);
manager.addRepositoryConfig(repConfig);
Repository repository = manager.getRepository(repositoryId);
Create an in-memory store:
// create a configuration for the SAIL stack
boolean persist = true;
SailImplConfig backendConfig = new MemoryStoreConfig(persist);
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
// create a configuration for the repository implementation
RepositoryImplConfig repositoryTypeSpec = new SailRepositoryConfig(backendConfig);
RepositoryConfig repConfig = new RepositoryConfig(repositoryId, repositoryTypeSpec);
repConfig.setTitle(repositoryId);
manager.addRepositoryConfig(repConfig);
Repository repository = manager.getRepository(repositoryId);
When I store data in this repository and query it back, the results are not the same as the results returned from a repository created using the Workbench. I get duplicate/multiple entries in my result set.
The behavior is the same for the in-memory store.
I also observed that my triples belong to a blank context, which is not the case in a repository created via the Workbench.
What is wrong with my code above?
There is nothing wrong with your code, as far as I can see. If the store as created from the Workbench behaves differently, this most likely means that it's configured with a different SAIL stack.
The most likely candidate for the difference is this bit:
// stack an inferencer config on top of our backend-config
backendConfig = new ForwardChainingRDFSInferencerConfig(backendConfig);
You have configured your repository with a reasoner on top here. If the repository created via the workbench does not use a reasoner, you will get different results on identical queries (including, sometimes, apparent duplicate results).
If you consider this a problem, you can fix this in two ways. One is (of course) to simply not create your repository with a reasoner on top. The other is to disable reasoning for specific queries. In the Workbench, you can do this by disabling the "Include inferred statements" checkbox in the query screen. Programmatically, you can do this by using Query.setIncludeInferred(false) on your prepared query object. See the javadoc for details.
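For example, here is a minimal sketch of running a SPARQL query with inference disabled via Query.setIncludeInferred(false); the repository reference and the query itself are placeholders.
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQuery;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;

public class QueryWithoutInference {
    static void dump(Repository repository) throws Exception {
        RepositoryConnection conn = repository.getConnection();
        try {
            TupleQuery query = conn.prepareTupleQuery(QueryLanguage.SPARQL,
                    "SELECT ?s ?p ?o WHERE { ?s ?p ?o }");
            query.setIncludeInferred(false); // only explicitly asserted statements
            TupleQueryResult result = query.evaluate();
            try {
                while (result.hasNext()) {
                    System.out.println(result.next());
                }
            } finally {
                result.close();
            }
        } finally {
            conn.close();
        }
    }
}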